When considering moving excess or experimental HPC applications to a cloud environment, there will always be obstacles. Were that not the case, the cost effectiveness of cloud-based HPC would rule the high performance landscape.
Jonathan Stuart Ward and Adam Barker of the University of St. Andrews produced an intriguing report on the state of cloud computing, paying a significant amount of attention to the problems facing cloud computing.
The researchers split the problems into two categories: technological and legal. The latter has added gravity today in light of recent leaks about the data mining activities of the United States National Security Agency, although those specific circumstances will not be discussed here. However, according to the report, an incident in 2010 (Wikileaks) laid the foundation for an environment where such infringement could happen.
However, the technological concerns are more relevant to those seeking to outsource HPC applications to the cloud. Virtualization, according to the report, is a key to running high performance applications in a cloud setting. That should be neither surprising nor interesting, as cloud computing is sometimes referred to as ‘computing in a virtualized environment.’
However, it is an important distinction to consider. As the report noted, “virtualizing a computer system reduces its management overhead and allows it to be moved between physical hosts and to be quickly instantiated or terminated.”
Because results computed in a public cloud must somehow be sent back to the user, and preferably quickly, virtualization is understandably important. The preferred infrastructure to virtualize into a cloud environment is the Intel x86 architecture, already used in many on-site HPC installations, and that affinity presents problems for cloud computing.
“The x86 architecture was not conceived as a platform for virtualization. The mechanisms which allow x86 based virtualization either require a heavily modified guest OS or utilise an additional instruction set provided by modern CPUs which handles the intercepting and redirecting traps and interrupts at the hardware level.” It is of course possible to virtualize such an architecture, but it will result in what the researchers call a performance penalty. That penalty has been significantly reduced over the last few years, but is still present and can manifest itself in I/O performance, sometimes in extreme ways.
“IO performance in certain scenarios,” the researchers note, “suffers an 88% slowdown compared to the equivalent physical machine.” One of the main principles behind computing in the cloud is the optimization of resources. Virtual machines (VMs) trade away some performance to keep servers fully utilized, which is not necessarily ideal.
A further issue Ward and Barker raise with computing in the cloud is the lack of interoperability among major cloud service providers like Amazon, Google, Rackspace, and Microsoft. They related it to mainframe computing, which was dominated by IBM in the 1970s. “Increased interoperability is essential in order to avoid the market shakeout the mainframe industry encountered in the 1970s. This is a significant concern for the future of cloud computing.”
Scaling up is another issue presented by the researchers, but one they feel is at least somewhat adequately addressed by the development of NoSQL. “It is NoSQL which has been a driving force behind cloud computing. The unstructured and highly scalable properties of many common NoSQL databases allows for large volumes of users to make use of single database installation to store many different types of information.” It is this notion that carries the storage capacity for HPC applications in things like Azure and S3.
Of course, it is difficult to discuss the complications of computing in the cloud without addressing security and what the report refers to as trust issues. The report, which was coincidentally published last week, seems prescient considering the NSA PRISM leaks that have been brought to light over the last week or so.
The researchers here delved into how the Wikileaks incident in 2010 laid the groundwork. “Without a comprehensive legal framework in place it is impossible to conclusively argue what parties cannot access or otherwise interfere with cloud based operations. This issue is problematic for organisations such as Wikileaks which are not well received by world governments. Unfavorable organisations can be effectively barred from operating on the cloud by any organisations able to exert influence against the provider.” Determining jurisdiction in these circumstances is hazy. The Amazon datacenter in question over the Wikileaks scandal was based in Europe. However, Amazon is based in the United States, potentially subjecting it to US government pressure if necessary.
“Worse still is the possibility that governments can compel cloud providers to provide access to client’s services or data,” the researchers argued. “This is a major problem for cloud computing and if this issue remains unanswered, [one] could potentially see cloud providers relinquishing user and company data to world governments based on a legal mandate.”
The security issue is not a new one. Companies with sensitive data take measures to ensure the security of their cloud-housed data, such as adding additional vendor-supplied security layers or participating in a sort of ‘virtual private cloud.’
In this case, it seems unlikely that the NSA would mine experimental financial data to find terrorism patterns. However, as the report noted, a potentially dangerous precedent could be set by these actions. Will this break the trust of companies looking to keep their potentially critical and sensitive data in a cloud service? It is unclear, but this report at least indicates that could happen.
From I/O bottleneck issues to scalability to security and trust issues, the complications of cloud HPC are significant. However, things like NoSQL (for scale) and better virtualization tools and workload managers are being built to mitigate those issues.
The basic premise of this document is simple: to explain why distributed transactional databases are the Holy Grail of database management systems (DBMS).
The promise of these systems is to provide on-demand capacity, continuous availability and geographically distributed operations. However, the traditional architectures for distributing a database require substantial trade-offs in terms of overall effort, cost, time to deployment and ongoing administration. Despite those trade-offs, these offerings have dominated the industry for decades, forcing compromises from start to finish – from initial application development through ongoing maintenance and administration.
The three traditional architectures, and NuoDB's new alternative, are:
- Shared-Disk Databases
- Shared-Nothing Databases
- Synchronous Commit (Replication) Databases
- New DDC Architecture Offers Comprehensive Solution
Two German computer scientists have proved that it’s possible to access and recover data from an encrypted Android smartphone by performing a set of simple and easily replicable steps that start with putting the phone in a freezer.
They tested the attack on Samsung Galaxy Nexus devices, which they kept “on ice” for an hour beforehand.
Since version 4.0 of the Android platform, the device’s storage can be encrypted so that it is not accessible without entering the required PIN. When the device is switched off, the data contained in its RAM chips does not instantly disappear, but fades over time (the so-called “remanence effect”).
The researchers’ theory was that when the switching off and rebooting of the device is performed at sub-zero temperatures, the fading of the data will be slowed down enough to allow them to access it from the phone’s memory.
After pulling the device out of the freezer, they rebooted it, unlocked its bootloader, and booted up their FROST (Forensic Recovery of Scrambled Telephones) data recovery tool, which allowed them to recover sensitive information such as emails, photos, contacts, calendar entries, WiFi credentials, and even the disk encryption key.
“If a bootloader is already unlocked before we gain access to a device, we can break disk encryption. The keys that we recover from RAM then allow us to decrypt the user partition. However, if a bootloader is locked, we need to unlock it first in order to boot FROST and the unlocking procedure wipes the user partition (but preserves RAM contents),” they shared.
“Since bootloaders of Galaxy Nexus devices are locked by default, and since we conjecture that most people do not unlock them, disk encryption can mostly not be broken in real cases. Nevertheless, in addition we integrated a brute force option that breaks disk encryption for short PINs.”
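To see why a short PIN offers so little protection once an attacker can test candidates offline, consider the following sketch. It is purely illustrative: the salt, iteration count and key-derivation parameters here are placeholders rather than Android's actual scheme, and FROST's real brute-force module works against the platform's specific key-derivation format.

```python
import hashlib

def derive_key(pin: str, salt: bytes) -> bytes:
    # Placeholder KDF parameters; Android's real scheme differs.
    return hashlib.pbkdf2_hmac("sha1", pin.encode(), salt, 2000)

salt = b"\x00" * 16                      # assumed known to the attacker
target_key = derive_key("4831", salt)    # stands in for the recovered verification material

# A 4-digit PIN gives only 10,000 possibilities, so exhausting them is nearly instant.
for candidate in range(10_000):
    pin = f"{candidate:04d}"
    if derive_key(pin, salt) == target_key:
        print("PIN recovered:", pin)
        break
```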
“We believe that our study about Android’s encryption is important for two reasons: First, it reveals a significant security gap that users should be aware of. Since smartphones are switched off only seldom, the severity of this gap is more concerning than on PCs. Second, we provide the recovery utility FROST which allows law enforcement to recover data from encrypted smartphones comfortably,” they concluded.
QKD – Quantum key distribution is the magic part of quantum cryptography. Every other part of this new cryptography mechanism remains the same as in standard cryptography techniques currently used.
By using quantum particles, which behave under the rules of quantum mechanics, keys can be generated and distributed to the receiving side in a completely safe way. The quantum mechanics principle that describes the base rule protecting the exchange of keys is Heisenberg’s Uncertainty Principle.
Heisenberg’s Uncertainty Principle states that it is impossible to measure both the speed and the current position of a quantum particle at the same time. It furthermore states that the state of an observed particle will change if and when it is measured. This seemingly negative axiom, that a measurement cannot be made without perturbing the system, is used in a positive way by quantum key distribution.
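A toy simulation makes the idea concrete. In BB84-style key distribution (used here only as a stand-in for QKD in general), an eavesdropper who measures each particle must guess the preparation basis; wrong guesses disturb the state, so roughly a quarter of the bits the two parties later compare will disagree, revealing the interception. The sketch below is a classical simulation of that statistic, not real quantum mechanics:

```python
import random

def error_rate(n: int = 100_000, eavesdrop: bool = True) -> float:
    errors = kept = 0
    for _ in range(n):
        bit = random.randint(0, 1)
        alice_basis = random.choice("+x")
        value, basis = bit, alice_basis
        if eavesdrop:
            eve_basis = random.choice("+x")
            if eve_basis != basis:
                value = random.randint(0, 1)   # measuring in the wrong basis randomizes the outcome
            basis = eve_basis                  # Eve re-sends in the basis she used
        bob_basis = random.choice("+x")
        if bob_basis != basis:
            value = random.randint(0, 1)
        if bob_basis == alice_basis:           # sifting: keep only matching-basis positions
            kept += 1
            errors += (value != bit)
    return errors / kept

print(error_rate(eavesdrop=False))  # ~0.0
print(error_rate(eavesdrop=True))   # ~0.25, the fingerprint of an interception
```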
Statistics sometimes get a bad rap, as being somehow divorced from the real world of complex events and relationships. But in several notable cases, statistics helped provide a useful view of seemingly diverse and sporadic events.
Back in the 1980s, for example, Jack Maple, a cop in New York City's subways, got tired of responding to crimes after the fact, and decided to put together information that would predict where crimes would occur. He had no computers or fancy analytic technology -- just crayons and butcher paper -- but Maple's analysis of crime statistics, superimposed on maps of the subway, revolutionized police work, and CompStat, as the approach is now called, has enlisted the help of computerized analytical tools and has spread to police departments around the world.
The South Carolina Office of Research and Statistics (ORS) is also breaking new ground in the use of statistical data. ORS crunches the numbers to help analyze a broad spectrum of social services programs -- from health to justice, education and corrections -- to provide a sort of "information dashboard" for some 20 state agencies and private health-care providers, in order to help the state assess the effectiveness of various programs and focus social services money and attention where it will make the biggest difference in the lives of those being served.
"One of the projects we did," said Pete Bailey, health and demographics section chief, "was to look at what happened to children that aged out of the juvenile justice system, what proportion of them were incarcerated later, and so on. The Department of Juvenile Justice itself didn't have any data on adult arrests or incarceration, but we do, because we receive that from state law enforcement. So with permission, we conducted a study."
Bailey says ORS is also doing a study for the Department of Education. "Unfortunately, in South Carolina we did not always have the ability to track a kid from year to year." Bailey said the study will tie educational data to Medicaid system data for low-income children, and to the social services and juvenile justice systems. "And what that means is that ... you would be able to do analysis to see how Medicaid children are doing in school versus food stamp children, foster care, or protective services cases that weren't removed from the home. You could look at the impact of all of those. And the next step we're going to is ... to be able to look individually at each of those kids with a tracking number and -- without knowing who they are -- look at their history in terms of how did they get where they were in the educational system, in health, or with social services, or with law enforcement ... What caused their blocks and their breaks and their successes?
"That's an awesome capability. Government has a responsibility to use all the information that's sitting in every computer they can get their hands on, to better understand and evaluate why our programs work or don't work and how to come up with better outcomes. If we do that, you add to government a volume way of work per employee and who gets the best outcomes. We can use that to improve those that aren't doing so well."
Connecting human services data to elected officials responsible for funding programs is one of the possibilities, said Bailey. "Tell them the problems people have in their districts ... How many people are on food stamps or in foster care? How are kids doing in school?" continued Bailey. "And once you do this, when they are elected, you can evaluate every year how things have gone. It sort of feels like democracy."
So if this is such a great idea, why isn't every state doing it? David Patterson, deputy chief of health and demographics, said that while South Carolina probably has the largest and most comprehensive state-level warehouse in existence, other states are also looking at doing something similar. For example, they have had some contact with Arizona, Arkansas and Maryland in that regard. But as every government agency knows, sharing data, protecting privacy and knocking down stovepipes to get an enterprise view is not always easy. There are technological as well as human barriers. So how did ORS build its data-sharing system, and what advice do they have for others?
Workable Data Sharing
ORS collects data from over 20 state agencies said Patterson, "and we don't release anything without prior approval of the originating source. We apply algorithms to the data that allow us to maintain entity relationships at the person level across all these data."
"We're like a data Switzerland," he said. "We develop a lot of applications for customers, everything we do is customer-driven and requires their approval. The [memo of understanding] process resulted in some statutes to clarify and extend ORS' authority."
When it comes to sharing data, said Bailey, there are some wrong ways to go about it. "A lot of states tend to try to have one agency grab another agency's data, and people might be willing to share their data, but they're not willing to have you put your data in their computer, because we live in a world where whoever has the most data or the biggest computer is thought to be the winner ... And if you put your data in my computer, you're likely to get up the next morning and see in the paper that I've done an analysis showing you did some stupid stuff."
Instead, ORS has an agreement with data-providing agencies so that those agencies have full control of their data. "They allow us to run through the unique IDs," said Bailey, "and build a tracking number so we can link data across all of these systems without using the identifiers. Once we have the tracking number, that's what's linked to the statistic, so we can link a massive amount of data from some 27 blocks of agencies and God only knows how many programs, and [in this way] they love to share data and do research together, because that's where the great answers are. So you see, that's different from saying 'I want you to give me your data'"
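A small sketch shows the shape of that linkage technique. The field names, key and matching rule below are hypothetical (ORS's actual algorithms are not described in the article); the point is simply that a keyed, one-way tracking number lets records be joined across agencies without analysts ever handling names or ID numbers:

```python
import hashlib
import hmac

SECRET_KEY = b"held-only-by-the-neutral-data-steward"   # never given to analysts

def tracking_number(person_id: str, dob: str) -> str:
    """Derive a stable pseudonym from identifiers without exposing them."""
    return hmac.new(SECRET_KEY, f"{person_id}|{dob}".encode(), hashlib.sha256).hexdigest()[:16]

# Each agency pseudonymizes its own records before sharing...
medicaid = {tracking_number("111-22-3333", "2004-05-01"): {"er_visits": 2}}
education = {tracking_number("111-22-3333", "2004-05-01"): {"grade_level": 8}}

# ...so the research office can link them without ever seeing an identifier.
linked = {tid: {**medicaid.get(tid, {}), **education.get(tid, {})}
          for tid in medicaid.keys() | education.keys()}
print(linked)
```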
Patterson said that preceding the agreements "is the issue of privacy protection, consensus on the mission, and transparency on what we do. If we weren't neutral, then none of these other things would happen."
ORS is not competitive for budget with any of these agencies, and it has the trust of the private sector, which enables the collection of data on hospitalizations, emergency room visits, outpatient surgeries, even home health and free clinic visits, etc., and that in turn can be linked with state agency data.
So ORS crosses the boundaries between private, public, not for profit, health, social services, criminal justice and education systems -- without making anybody unhappy or upsetting the balance of power between organizations.
However, said Bailey, each state agency has a "federal godfather," and if those federal agencies don't get along well at the national level, it can make it difficult at the state level.
Connecting social services data to geographical location gives it additional import. "Panorama has helped us to build the mapping application," said Randy Rambo, IT/DBA manager of health and demographics, "using ESRI mapping to drill down to any type of grouping that you want to have mapped -- legislative boundaries, census tracts, virtual neighborhoods, etc."
The enthusiasm for data use at this level is evident as ORS staff suggested combining data -- such as people moving in or out of a community, crime data, kids' progress in schools, emergency room injuries or violence -- in such a way as to help isolate the actual causes of community decay or other problems that may develop.
"We have massive data sitting in government computers that represent pieces of the puzzle," said Bailey. "If you put it together we can better understand our children, and our parents and us as humans, so that we could better make a substantial difference between the world we have versus the world we could have. And the world we could have is awesome." | <urn:uuid:cd1f3005-ba96-4b49-a29f-c86f868606eb> | CC-MAIN-2017-04 | http://www.govtech.com/health/South-Carolina-Builds-Enterprise-Social-Services.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00204-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.968644 | 1,621 | 2.578125 | 3 |
Editor’s Picks: What We Like
A Cord-Free Future
Commonplace electronics may go completely cordless in the next few years. According to a CNN article, wireless electricity isn’t too far off. One way that wireless electricity could function is by converting power from an electrical socket into a magnetic field that is then sent through the air at a specific frequency — a technology called magnetically coupled resonance.
“Five years from now, this [technology] will seem completely normal,” Eric Giler, CEO of WiTricity, said in the article. Giler’s company, which was developed from a Massachusetts Institute of Technology research team, is one of several groups working on the concept. (Check out his demonstration of the technology at TED.com.)
Wireless electricity, in whatever form it may appear, could have huge societal and environmental implications. Eliminating the need for power cords could lead the way for the adoption of electric cars, while decreasing the production of disposable batteries could benefit the environment, the CNN article stated. Additionally, wireless electricity could simply make recharging laptops, cell phones and MP3 players much more convenient.
The Smart Phone War
This fall, The Wall Street Journal reported that Dell will build a mobile phone for AT&T using the Android platform, likely available in early 2010. This is exciting news on the mobile device front, as it has the potential to further diversify the booming smart phone market.
The phone is Dell’s first foray into cell phone development. Dell’s phone will be based on Android, a mobile operating system running on the Linux kernel. Originally begun as a small startup, Android was acquired by Google and then folded into the Open Handset Alliance. It’s been available as open source software since last year.
Its key features are high adaptability to third-party applications and libraries, a high rate of compatibility with various networks and support of a range of media formats and hardware. It’s billed as easily maintained and able to support open-air download of applications, without the use of a PC. Further, it supports a high level of functionality in touch-screen usage.
Application of Android was already seen in the mobile phone market in Google’s HTC Dream — often referred to as the gPhone — but to date that has seen sales of roughly 1 million, compared to BlackBerry’s estimated 28.5 million and iPhone’s more than 21 million. We’ll see whether Dell’s input will make for a sleeker, smarter design that captures the public’s imagination and outdoes Google’s entrance into the mobile phone market.
eBay to the Rescue
One of our editors has a cell phone that’s been on its last legs for a while, but recently it developed a new quirk. If it’s touched with damp fingers, it dials itself until it dries out. (It really likes the number seven.) Plus, while it’s happily dialing that one number — without stopping — our editor can’t access text messages or dial out. While she can accept incoming calls, whoever is calling hears the number seven beeping over and over. Needless to say, it makes holding a conversation a little tough.
She looked into buying a new phone, but she didn’t want to have to renew or extend her contract. So, after sending out numerous feelers and asking around, she discovered — dum, dum, dum! — eBay sells phones sans contracts.
Now all she’s got to do is perfect the fine art of bidding online.
With the kids on-board for the “Can we learn how to code together?” project, we rushed head-long into the Scratch language. The project, which was developed in 2003 at the MIT Media Lab, is designed for 8- to 16-year-olds, as well as their parents. It introduces a bunch of programming concepts and logic in a fun and easy-to-learn manner. It’s free to use, and you can also then sign up for free with a username/password.
This is where our first moment of drama arrived with the kids. They don’t have a lot of experience yet in trying to come up with usernames, especially ones that ask you not to use their real names. My son, of course, wants us to have a username that includes the word “fart”, while my younger daughter just wants a reference to either Elsa or Anna from Frozen. I suggested “TeamShaw” as a username, but my son pointed out that we didn’t want to have our last name involved. In the end, we picked favorite characters from movies, and we ended up with a combination of Woody from Toy Story and Star-Lord from Guardians of the Galaxy.
The Scratch site has a neat video on its home page that shows a bunch of the different projects that its users have created, so I played that for the kids to get them motivated and excited, to see that it was more than just a paint program or animation tool. I think the biggest problem in trying to get the kids started with programming is that they have the 50,000-foot idea (such as “I want to invent a game where you can fight your friends”), but haven’t yet figured out the basic parts just yet.
Fortunately, after the video was done we could jump right into the on-site tutorial, which lets you create a program with the tools and sort of shows you the interface. Scratch uses a series of visual blocks to represent commands, and when you click and drag them into your “Scripts” area, you can connect several blocks together to start creating your program, which is then “acted out” on the interface’s “Stage”. On the stage is where the program’s “Sprites” appear, and each sprite that you create can include multiple scripts running on them, depending on whether you need the end user to perform input (click a flag, click a spot on the stage, etc.). You can also have multiple sprites doing things at the same time, or reacting to each other, etc. Other cool features include the ability to modify the sprites with color (through a simple Paint-like interface), as well as add sounds or photos through the computer’s microphone and webcam. It was this area that the kids really enjoyed -- instead of creating a soundtrack to their initial game with the provided background music, we recorded them singing their own song, which we were able to play in a loop.
After we got through the Scratch tutorial provided on the site, we jumped into the first project in our DK Publishing book, called “Escape the Dragon!”. It’s a simple program where your initial sprite (always a cat, the unofficial mascot of the language) is being chased by a dragon, and the user controls the direction of the cat by moving their mouse around. As you get deeper into the project, you can direct the cat by moving a third Sprite around (in the book, it was a donut).
We decided to go off the board a bit and change the sprites from a dragon chasing a cat to a mouse chasing a Mom. We replaced the donut with a bowl of cheese puffs (closest thing to cheese for our mouse). The flexibility of letting kids choose their own sprite designs or come up with their own is very cool -- it reminds me of several different kid-themed painting and rubber stamping apps on the iPad. The combinations may not make sense to the logic-driven parent, but kids are just fine with a mouse chasing a wizard on the moon, for example.
The instructions on creating the different scripts were explained well within the book, and after about 60-90 minutes we had our finished “game” that we could start playing (basically, the game’s object is to see how long you can run around the screen before the mouse catches you.) At this point it was time for the kids to go to bed.
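For readers curious how little logic such a game actually needs, here is roughly the same chase idea expressed in Python rather than Scratch blocks. It is a console simulation with made-up names and numbers, not the actual project, but it uses the same ingredients the book teaches: a loop, coordinates, a "move toward" step and an if/then test.

```python
import random

def move_toward(pos, target, speed):
    x, y = pos
    tx, ty = target
    x += speed if tx > x else -speed if tx < x else 0
    y += speed if ty > y else -speed if ty < y else 0
    return (x, y)

player, chaser = (0, 0), (20, 20)
for tick in range(1, 101):
    pointer = (random.randint(-25, 25), random.randint(-25, 25))  # stand-in for the mouse pointer
    player = move_toward(player, pointer, 2)                      # the Mom sprite follows the pointer
    chaser = move_toward(chaser, player, 1)                       # the mouse sprite chases the Mom
    if abs(player[0] - chaser[0]) <= 1 and abs(player[1] - chaser[1]) <= 1:
        print("Caught after", tick, "ticks!")
        break
else:
    print("You survived all 100 ticks!")
```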
My oldest daughter (age 8.5) seems to understand the logic within the scripts, but I think for the next project we’re going to switch positions and I’m going to have her doing the click-and-drag portions. For this one, she was reading along in the book and then watching me build the script. My son is more interested in the visual part of the stages, sprites and colors, rather than the logic, such as the “if/then” statements or figuring out how to move. My youngest is just happy to be a part of the team.
I also downloaded the “ScratchJr” app for the iPad, which is even more basic since it’s geared to the 5- to 7-year-old crowd. The app has the same ideas in terms of a stage, Sprites, sounds, colors and backgrounds, but the scripts are created by moving visual icon blocks instead of descriptions via words. It helps that we already knew the basics of Scratch to know what icon we needed to start the program (the Green Flag), but it feels like we can get more complex with Scratch once we get a lot of the basics down.
Next up: more script work and examples with the Scratch language through the book, and we’re going to see if we can build a simple dice-based game (or even figure out if we can duplicate a Rock, Scissors, Paper program).
Extreme weather points to importance of BPM software for climate research
Researching weather is an incredibly challenging process that requires scientists to travel outside the office and maintain access to all of their data and applications. As extreme weather conditions become more common, supporting this type of research could get much more difficult. Business process management software could resolve many of the operational challenges facing researchers in this field, streamlining processes and making it easier for them to not only gather the right information, but also use it as effectively as possible.
Considering the rise of extreme weather
Casually looking at weather conditions around the world makes it seem like many regions are facing severe weather. Serious research echoes this idea, as scientists recognize the rise of extreme weather. Omar Baddour, chief of the data management applications division at the World Meteorological Organization, recently told The New York Times that while some extreme weather is always expected, harsh weather conditions and severe storms are becoming more common throughout the world.
“Each year we have extreme weather, but it’s unusual to have so many extreme events around the world at once,” Baddour told the news source. “The heat wave in Australia; the flooding in the UK, and most recently the flooding and extensive snowstorm in the Middle East – it’s already a big year in terms of extreme weather calamity.”
The report explained that weather conditions go beyond being unusual; they are also extreme. For example, Russia often sees extremely cold temperatures. However, parts of the country are experiencing sustained temperatures of negative 50 degrees Fahrenheit. These types of extremes can be problematic, especially as some parts of Russia are experiencing traffic light malfunctions because it is so cold.
Using BPM to improve weather research
It is not uncommon for researchers to travel for field work when major storms or unusual weather conditions emerge. As extreme weather becomes more common, this practice could increase, especially as meteorologists and other scientists work to figure out the extent to which harsh weather indicates climate change. These research processes involve traveling throughout the world with specialized equipment that can gather large quantities of data. Making sure this information can be used effectively is a challenging process that involves a combination of integration and cutting-edge IT functionality. BPM software can ease this processes by making back-office IT systems and data center functionality more social, allowing for seamless process integration and automation.
The discovery of a vulnerability in popular open source web application framework Django has recently demonstrated that using a long password is not always the best thing to do.
As explained by web developer James Bennett, Django uses the PBKDF2 algorithm to hash user passwords, making it extremely difficult for brute-force attacks to be executed successfully.
“Unfortunately, this complexity can also be used as an attack vector. Django does not impose any maximum on the length of the plaintext password, meaning that an attacker can simply submit arbitrarily large — and guaranteed-to-fail — passwords, forcing a server running Django to perform the resulting expensive hash computation in an attempt to check the password. A password one megabyte in size, for example, will require roughly one minute of computation to check when using the PBKDF2 hasher,” Bennet explained in a blog post.
“This allows for denial-of-service attacks through repeated submission of large passwords, tying up server resources in the expensive computation of the corresponding hashes.”
The existence of the flaw was disclosed on the public django-developers mailing list, and has left the core team scrambling to fix it as soon as possible. Fortunately, it took only a day, and they did it by limiting passwords to 4096 bytes.
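The shape of that fix is easy to sketch. The code below is not Django's actual implementation (the constant name and parameters are made up for illustration); it just shows the idea of rejecting oversized submissions before any expensive key stretching is performed:

```python
import hashlib
import hmac

MAX_PASSWORD_BYTES = 4096   # illustrative constant mirroring the cap the Django team chose

def check_password(candidate: str, salt: bytes, stored: bytes, iterations: int = 100_000) -> bool:
    raw = candidate.encode("utf-8")
    if len(raw) > MAX_PASSWORD_BYTES:
        return False                                   # refuse before doing any expensive hashing
    digest = hashlib.pbkdf2_hmac("sha256", raw, salt, iterations)
    return hmac.compare_digest(digest, stored)         # constant-time comparison
```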
The newly released Django 1.4.8, Django 1.5.4, and Django 1.6 beta 4 contain the fix and all users are advised to upgrade to one of these versions immediately.
Bennett also made sure to ask that all future potential security issues always be reported via email to firstname.lastname@example.org, rather than through public channels.
For many stakeholders, M2M and cellular connectivity are used interchangeably, and for good reason: Most device designs include a myriad of options to transmit and receive data by cellular networks, and satellite gets relegated to “afterthought” status. But to stop there is to overlook certain advantages, and sometimes necessities, of satellite communications. The fact is that satellite connectivity can sometimes be the right choice for M2M in certain geographies.
How much do you know about the role of satellite-based M2M communications? Here are a few facts to consider:
Did you know?
- Only eight percent of the world’s surface area can be covered by cellular signals. The other 92 percent require satellite connectivity to enable M2M or IoT communication.
- High-volume data is possible with satellite connectivity. Satellite has developed a bad reputation in this area to be sure, but there are many options for high-bandwidth satellite communication. For example GPRS-like capabilities for large data packets are now available through BGAN-M2M, allowing high volumes of data transmission at a relatively low cost per MB.
- Satellite is an excellent option for monitoring fixed assets, such as oil rigs and pipelines. It provides reliable connectivity for reporting requirements that go beyond exception-based notifications, where you need to know more than just something’s wrong, but rather what exactly is wrong.
- Combining cellular and satellite technologies in the same device provides greater peace-of-mind. Even if it only comes into play once a year, users know their device connections will remain live even during cellular network overloads and unexpected outages.
- “Short burst” satellite is often the de facto choice for monitoring and tracking cargo on the open seas, where cellular simply doesn’t reach. These low-use apps can save the day simply by providing alerts that say, “I’m here” or, “Something’s askew.”
The energy grid is old, and that’s why a new federal agency is pushing high-risk, high-reward research in an effort to turn innovative ideas into infrastructure that powers the nation.
Known as the Advanced Research Projects Agency-Energy, or ARPA-E, the agency within the U.S. Department of Energy follows the model of the better-known defense research agency, DARPA. In 2007, President George W. Bush codified the agency’s creation, and in 2009, President Barack Obama allocated funding that made its operation today possible. Over the past several years, the agency has funneled hundreds of millions of dollars into dozens of research projects focusing on three key areas of energy grid technology: software, hardware and energy storage.
One of the most outstanding limitations of today’s energy grid is that it was conceived and developed in an era that had not envisioned a future powered by the sun, the wind and data. In an ARPA-E promotional video, Michael Aziz, Harvard Professor of Materials and Energy Technologies, explains the value of new types of energy storage: “The biggest obstacle to us getting a large fraction of our electricity from wind and sunshine is their intermittence. So if we could mass produce a battery that safely and cost-effectively stores massive amounts of electrical energy, we could solve this problem.”
ARPA-E awarded researchers at Harvard University $4.3 million to further their research on flow batteries, a type of storage device that could hold up to 10 times more energy by volume than traditional storage devices. Today’s electric grid needs such batteries if it’s to support wind and solar power, which can sometimes go days without supplying power. Today, solar power accounts for about one-tenth of a percent of the electricity produced in the country, said Harvard Professor Roy Gordon. Parts of California and Hawaii are able to flood the grid with solar power, Gordon noted, but overall the amount of energy storage available is still inadequate.
For new software platforms that power the grid, ARPA-E is funding big data projects. The agency awarded AutoGrid -- whose technology analyzes the data generated by smart meters, building management systems, voltage regulators, thermostats and other equipment -- $3.4 million, and the technology is now being deployed around the nation, in such places as Oklahoma, California, Texas and the Pacific Northwest. The electricity industry is one of the last industries to take advantage of the big data wave, and there’s a lot of potential in doing so, said Sandra Kwak, director of marketing for AutoGrid.
“I would say we’ve barely scratched the surface of the total amount of value that can be recouped from smart grid services,” Kwak said. “If you look at the smart grid in terms of layers, the first layer was smart meters, the second layer was collection of data from those meters, so data management, but now that we’ve collected the data, what do you do with the data? The layer on top of that -- that hasn’t rolled out to the mainstream yet -- is the analytics layer that actually tells you what to do with that data.”
AutoGrid fills that gap, she explained. “Ultimately, what these big data processing engines will do is allow utilities to utilize their existing infrastructure instead of building new power plants,” she said, noting that in the past, utilities had maybe 12 points of data per year – one power meter reading per month – but SmartGrid allows utilities to make decisions based on 3,000 points of data.
“With real-time information, AutoGrid is bringing their applications to market to assist utilities in fully utilizing their assets they have on the grid,” she said. “We actually have the ability to balance supply and demand of power in real time, and we have a complete inventory of every single asset that’s on the grid. […] We can send out text messages, app identifications, phone calls to end users of electricity, and also utilize existing communication channels and protocols and existing hardware in the field. We can send out network communications and ask thermostats to turn down by a couple of degrees, or shift power in between rooftop units. One of the programs that we’re running in Austin Energy involves electric vehicle owners so we can send that price signal to EV owners and tell them it’s a good time or a bad time to charge their car.”
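As a toy illustration of the kind of rule such a platform might apply (the thresholds, messages and function below are invented; AutoGrid's real dispatch logic, protocols and price feeds are not described in the article):

```python
def demand_response_actions(price_per_kwh: float,
                            cheap: float = 0.10,
                            expensive: float = 0.30) -> list[str]:
    actions = []
    if price_per_kwh <= cheap:
        actions.append("notify EV owners: good time to charge")
    elif price_per_kwh >= expensive:
        actions.append("notify EV owners: delay charging")
        actions.append("ask enrolled thermostats to relax setpoints by 2 degrees")
    return actions

print(demand_response_actions(0.34))   # peak pricing triggers both load-shedding actions
```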
ARPA-E is also driving research in hardware to provide data and controls for platforms like the ones offered by AutoGrid. Smart Wire Grid was awarded just under $4 million by ARPA-E to continue development of a wireless device that clamps onto power lines to control electricity flow.
Their device is the only one of its kind being used today, said Anuj Kapadia, senior engineer at Smart Wire Grid. The devices are now being used by the utilities of Southern Company, headquartered in Atlanta, Ga., and Tennessee Valley Authority, which serves most of Tennessee, and parts of Alabama, Mississippi and Kentucky.
“The technology is a power flow controller, so it controls power. In layman’s terms, you can compare that to a tap on a pipeline. You open the tap, it flows, when you close the tap, you can block the power,” Kapadia explained. “If you look at the power grid now, it’s meshed, so power flows from one point to another, but there’s no way to control it. And now because of all this complexity coming to the grid, a device like ours is very, very useful to actually control and make the grid flexible.”
The clamp-on device can be controlled by the utility wirelessly, using a protocol of the company’s choice, as the device is equipped with three different antennas. Having wireless connectivity is obviously useful, but it also presents a security risk, Kapadia said, which is why they’ve spent a lot of time making the system secure.
Having the ability to control the flow of power is immensely useful, and will continue to become more useful as the grid evolves, Kapadia said. “All these renewables are coming in; we have solar, we have wind, we have electric vehicles coming in, so there is a lot of different generations coming in from all different directions, so we need a flexible grid," he said. "It is not like before where the power flows from line A to line B and that’s about it. It’s going to be vastly used. It’s just a matter of time.”
Editor's Note: This story was edited on July 21 by request of Smart Wire Grid.
Your daughter is about to turn 16 and wants to get her driver's license, but she's never been behind the wheel. What do you do? My parents took me to a vacant parking lot (far away from other cars or structures of any kind) and taught me how to safely operate the car. And they started with the basics: buckling my seatbelt.
This thorough and meticulous manner of teaching needs to be applied to teaching children and teens safe Internet practices. Just like oncoming traffic, there are many dangerous things out there on the net.
Parents may be hesitant to talk to their children about technology because they think they don't understand how it works. Sure, some parents might not be able to speak "texting" like their kids, or know what a wiki is. But how many of us can look under the hood and point to the carburetor? Or fix the timing belt? Just because we don't know how a car runs doesn't mean we can't teach someone how to safely operate it.
My parents taught me to drive first through example. Child passengers learn that red means stop and green means go long before they get behind the wheel. Kids need to be taught the basics: don't give out personal information, stay on safe sites, don't respond to e-mails from people you don't know. For older kids and teens, talking about the basics might be nothing more then a refresher ("Duh, I know that already, dad"). But just like my folks told me to buckle that seatbelt, the more you hear it the more reflexive the action becomes.
My cousin has a nine-year-old daughter who is quite tech savvy. But my cousin lets her daughter know that only certain Web sites are allowed, and only when the laptop is in the living room. My cousin is making sure that her daughter is safe but still learning how to use the technology that is very much a part of today's culture. As she gets older and understands better how to protect herself, the rules might be changed. She'll have her own computer and might be allowed to have a social networking page.
Some experts will point to technology to keep children safe while online -- software such as filters and parental controls. The problem with this mentality is some parents walk away after installing this software. When buying a new car most people insist it come with airbags, but we really hope they are never needed. Think of filters and parental controls like this -- they are wonderful safety features, but are nothing more than Internet airbags, hopefully never needed.
If your teen passes her driving test, and you trust her, she's usually allowed to take the car to the mall to meet her friends. But you give her a curfew and forbid her from taking friends in the car, and you remind her to drive carefully ("I know, mom!"). The same idea goes for Internet use. You have taught her what and what not to do online, so now you let her IM her friends. But you continue to talk to her about safety and regularly ask her to show you her MySpace page. Let her see you are not trying to spy on her or invade her privacy.
Unfortunately parents now have one more thing to worry about when it comes to their children. It used to just be saying no to drugs, or buckling that seatbelt. Now it is avoiding cyber predators and identity theft. With open dialogue and good examples we can keep our kids safe when they take the keys.
Good Links for Parents
NetSmartz Workshop: http://www.netsmartz.org/
National Cyber Security Alliance: http://www.staysafeonline.info/
Internet Keep Safe Coalition: http://www.ikeepsafe.org/
OnGuard Online: http://onguardonline.gov/index.html
A Network File System is any software and network protocols that support the sharing of files by multiple users over a network. On Unix systems, this is usually implemented using the NFS protocol, which relies on UDP/IP. On Windows NT systems, this is usually implemented with the SMB protocol, which in turn can be implemented over IPX, NetBEUI or TCP/IP. On NetWare systems, this is usually implemented with the NCPFS protocol, which in turn relies on IPX.
An announcement from MIT discusses research that proposes to replace the traditional communication bus on processors with an on-chip network. The report explains why such an arrangement is much better for multicore, and especially manycore, architectures:
Today, a typical chip might have six or eight cores, all communicating with each other over a single bundle of wires, called a bus. With a bus, however, only one pair of cores can talk at a time, which would be a serious limitation in chips with hundreds or even thousands of cores.
Li-Shiuan Peh, an associate professor of electrical engineering and computer science at MIT, delivered more dismal news about the scalability of the bus architecture. Her research shows that this architecture only scales to around 8 cores, pointing to many 10-core chips that utilize a second bus. She explains that the loss of efficiency is related to the fact that buses consume a lot of power, because they have to drive data across long wires to lots of cores at the same time.
Last summer, Peh and her colleagues presented a paper at the Design Automation Conference in which they discussed the efficiency of an on-chip network and demonstrated the performance using a test processor. Instead of using an all-to-all connection, each core only connects to its nearest neighbors using on-chip routers, thereby reducing power requirements and increasing the scalability of the architecture.
The downside is that data from each core has to pass through each subsequent core router along the way to its final destination. Also, if two packets of data show up at a particular router at the same time, one packet has to be saved while the other one is being processed.
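Dimension-ordered ("XY") routing is one common way such a mesh is traversed, and a few lines of code make the hop-by-hop behavior concrete. This sketch is illustrative only; the routing and buffering schemes in Peh's test chip or Tilera's products may differ:

```python
def xy_route(src, dst):
    """Route a packet on a 2D mesh: move along X until the column matches,
    then along Y. Returns every router the packet passes through."""
    x, y = src
    path = [(x, y)]
    while x != dst[0]:
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

# On a 4x4 mesh, a packet from core (0, 0) to core (3, 2) crosses five links,
# while unrelated core pairs can be talking over other links at the same time --
# unlike a bus, where only one pair may communicate at once.
print(xy_route((0, 0), (3, 2)))
```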
Despite such challenges, some manufacturers have already hopped off the bus. San Jose-based chipmaker Tilera, for example, employs an on-chip network in their manycore architecture. They currently offer 32 and 64-core processors and look to scale beyond 100 cores in the near future.
Intel also seems to be in on the trend. The company’s research lab has produced an experimental, 48-core processor named the “Single-chip Cloud Computer” (SCC). Although that’s just a plaything for researchers, the commercialization of Intel’s manycore MIC architecture, along with the recent acquisition of QLogic InfiniBand, could mean an on-chip network will be showing up on an x86 processor in the not-too-distant future.
Peh’s research suggests the bus architecture may be on its way out as processors delve into double-digit core territory. If research like that spurs chip vendors to design and build viable on-chip networks, it could usher in a new era of highly scalable processors.
Science and Maths - Quiz Questions with Answers
Try this quiz (questions and answers) and see if you are good at maths and science. This science and maths quiz will test your skills with a range of interesting questions related to numbers, geometry, arithmetic, general math knowledge and more. Answers are given at the end of this quiz.
Science and Maths Quiz Questions
- In Physics, what is the name given to the 'Study of Motion'?
- (a) Which gas has the smell of rotten eggs?
(b) What poisonous gas is emitted with the exhaust from a car's engine?
- (a) What is the most common of all metals in the earth's crust?
(b) What are the two gases commonly used to fill scientific balloons?
- When litmus paper turns red, does this indicate the presence of an acid or an alkali?
- Match these inventors with their inventions and discoveries:
(a) Benjamin Franklin - Law of Gravitation
(b) J. B. Priestley - Radium
(c) C. V. Raman - Lightning conductor
(d) Madame Curie - Laughing gas
(e) Isaac Newton - Crystal dynamics
- What is the name for the smallest portion of an element?
- Is glass a solid or a liquid?
- (a) What acid is present in vinegar?
(b) Sodium Chloride is the chemist's name for household substance commonly used. What is it?
- (a) What gas is expelled by (i) plants, (ii) animals?
(b) What percentage of the Earth's water is drinkable?
- (a) What was the first radio-active element to be discovered?
(b) Which acid is used in a car battery?
(c) What is the most important constituent of moth balls?
- (a) In which branch of mathematics are 'numbers represented by letters'?
(b) Which Greek mathematician is regarded as the 'Founder of Geometry'?
- (a) A line which touches a curve, but does not cut it, is known as what?
(b) If a line approaches a curve, but never touches it, what is it called?
- (a) What important mathematical notation did India give to the world?
(b) Which mathematician was responsible for the introduction of logarithms?
- (a) What is an abacus, where did it come from?
(b) Which ancient civilization is said to have developed the decimal system, hundreds of years before it was used in Europe?
- (a) What is a figure bounded by more than four straight lines called?
(b) What is the geometrical shape of cells in a hive called?
- What significance is there when you add the number of letters in the names of the playing cards in a deck?
- How many equal angles are there in an Isosceles Triangle?
- What number cannot be represented in Roman numerals?
- What number, when squared, is one-third of its cube?
- What does the letter 'C' mean in Roman numerals?
Science and Maths Quiz Answers
- (a) Hydrogen Sulphide (H2S)
(b) Carbon monoxide
- (a) Aluminum
(b) Helium and hydrogen. Today helium is used much more than hydrogen, because it is not inflammable.
- (a) Benjamin Franklin - Lightning conductor
(b) J. B. Priestley - Laughing gas
(c) Madame Curie - Radium
(d) Isaac Newton - Law of Gravitation
(e) C. V. Raman - Crystal dynamics
- An atom
- It is a super-cooled liquid
- (a) Acetic acid
- (a) (i) Oxygen (ii) Carbon dioxide
(b) One per cent
- (a) Uranium
(b) Sulphuric acid
(c) Naphthalene, a white crystalline cyclic hydrocarbon
- (a) Algebra
- (a) A tangent
(b) An asymptote
- (a) The Zero
(b) John Napier
- (a) The abacus is a calculating frame with balls sliding on wires. It originated in Egypt in 2000 B.C.
(b) The Incas.
- (a) Polygon
- When you add up the number of letters in the names of the playing cards - ace (3), two (3), three (5), four (4), five (4), six (3), seven (5), eight (5), nine (4), ten (3), jack (4), queen (5), king (4) - the total comes to 52, the exact number of cards in a deck.
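One of the arithmetic questions above ("What number, when squared, is one-third of its cube?") also rewards a quick derivation; assuming the intended number is non-zero:

```latex
x^2 = \tfrac{1}{3}x^3 \;\Longrightarrow\; 3x^2 = x^3 \;\Longrightarrow\; x = 3,
\qquad\text{check: } 3^2 = 9 = \tfrac{27}{3} = \tfrac{1}{3}\cdot 3^3 .
```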
The Internet has become an integral part of all businesses, with some companies employing remote workers. Many business owners and managers also check their email or connect to office systems while on the road, maybe even connecting to the Internet on the many open or public Wi-Fi networks available. While these connections are useful, they can pose a security risk to many businesses.
If you or your employees work outside of the office, and rely on, or frequently connect to public Wi-Fi connections, there are three security dangers you should be aware of.
The number of businesses offering free Wi-Fi to customers, especially coffee shops and restaurants, is growing. Some hackers have actually taken to setting up networks with names that are the same as a location or business in hopes that people will connect to it, believing it is an open network.
The issue is that they may have attached data monitors that collect data – including passwords and other private information going into and out of the network. Some have even gone so far as to set up a portal site that one must navigate to in order to log in and use the service – similar to what you see when you use most public Wi-Fi connections. Only these sites are loaded with malware which can be installed onto your system once you log in.
In order to avoid this, it is a good idea to look at the name of the network you are actually connecting to and check whether there is more than one with a similar name, or if there are any spelling mistakes. If you are unsure, the best approach is to check the name of the network at the business which is providing this connection.
Both major operating systems – OS X and Windows – have public folders that automatically share whatever is placed in them with other users on the same network. Some business users put important files in these folders while at the office in order to give colleagues access to them.
The problem with this is when you connect to a public Wi-Fi connection. Other people on that network may also be able to see those files. If you didn’t take the important files out of the folder, they could potentially steal the data contained within. Hackers know this, and may sit on the networks looking for other computers with shared files.
In order to avoid this, you should ensure that you aren’t sharing files stored in public folders on your computer. Try using other ways to share documents like a cloud storage provider.
A man-in-the-middle attack is a form of hacking where the attacker positions themselves on the network and listens to, or captures, data as it is transmitted. In practice, someone capturing traffic in this way could gain access to anything sent over the connection, including private files, passwords and more.
If you or an employee connects to the office remotely over a public network, one way to minimize the chances of data being intercepted is to use a VPN. A VPN sets up an encrypted tunnel between the computer and the office network, making it difficult for anyone outside that network to view the data transmitted over the connection.
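As a rough illustration of why encryption matters on an open network, the sketch below uses Python's standard ssl module to open a certificate-verified connection. It is a simplified example rather than a full VPN setup, and the host name is a placeholder; the point is that an eavesdropper on the Wi-Fi network cannot read the traffic, and an impostor server without a valid certificate causes the handshake to fail:

```python
import socket
import ssl

HOST, PORT = "remote.example-office.com", 443   # placeholder server name

context = ssl.create_default_context()          # verifies the server certificate

with socket.create_connection((HOST, PORT), timeout=10) as raw_sock:
    # The TLS handshake happens here; a man-in-the-middle presenting a forged
    # certificate raises ssl.SSLCertVerificationError instead of connecting.
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())
        print("Server certificate subject:", tls_sock.getpeercert()["subject"])
```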
On top of this, it is a good idea to avoid entering passwords or other important information like bank account and ID numbers while connected to public networks.
If you are looking for ways to keep your data secure while out of the office, get in touch with us today to see how we can help. | <urn:uuid:ab243fc8-c2ed-49d8-babf-9d26e5cec4bb> | CC-MAIN-2017-04 | https://www.apex.com/three-public-wi-fi-security-issues/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00187-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960116 | 705 | 2.515625 | 3 |
New research being presented tomorrow at RAID 2014 demonstrates that just two signals can automatically and effectively detect hundreds of malicious pages within 150,000 real-world samples with relatively high precision and accuracy: 1) content obfuscation and 2) fake certification seals. The UCSB research paper by Jacopo Corbetta, Luca Invernizzi, Christopher Kruegel and myself entitled “Eyes of a Human, Eyes of a Program: Leveraging Different Views of the Web for Analysis and Detection” dissects these two common techniques used by malicious websites -- particularly rogue online pharmacies -- to mislead web visitors and evade security scanners.
Malicious web developers exploit the discrepancies between what programs and what humans see in a page to elude automated detection while masquerading as legitimate web sites for their criminal or unethical purposes. For example, there are many malicious websites disguised as legitimate online pharmacies that are in fact peddling counterfeit goods, selling illegal or controlled substances, stealing personal information and/or distributing malware. In fact, Lastline’s director of research Christian Kreibich co-authored a fascinating paper in 2012 that looks inside the economics of pharmaceutical affiliate programs and uncovers botnets, malware, bullet-proof hosting and more.
To test our hypotheses, we built a “maliciousness detector” using just these two signals:
Content obfuscation: this technique is used by web authors to hide web content from scanning programs, which might recognize patterns that are associated with malicious intent. Some forms of content obfuscation are common on benign websites, such as email and web addresses, so we ignored those.
Certification seals: these are small images bearing the brand of a certification provider of some sort -- including security vendors, payment systems providers, government administrations, NGOs and professional associations. When used without permission, these seals serve to deceive humans into believing the malicious site owner is certified by a reputable organization and therefore trustworthy. When fake, seals generally do not redirect to the actual certification program.
Six example counterfeit seals found on rogue online pharmacy websites
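To make the idea concrete, here is a minimal sketch of a two-signal check. This is our illustration only, not the detector built for the paper; the seal file names, certifier domains, and the obfuscation patterns are simplified placeholders:

```python
import re

# Hypothetical examples: seal images that are commonly abused, and the domain a
# genuine seal should link back to.
KNOWN_SEALS = {
    "verisign_seal.gif": "verisign.com",
    "bbb_accredited.png": "bbb.org",
}

def has_content_obfuscation(html: str) -> bool:
    # Very rough test: script calls that assemble page text at load time.
    return bool(re.search(r"fromCharCode\(|unescape\(|document\.write\(", html))

def has_fake_seal(html: str) -> bool:
    # A known seal image with no link back to the certifier's site is suspicious.
    for image_name, certifier_domain in KNOWN_SEALS.items():
        if image_name in html and certifier_domain not in html:
            return True
    return False

def looks_malicious(html: str) -> bool:
    return has_content_obfuscation(html) or has_fake_seal(html)
```

A real detector would obviously need page rendering, whitelisting of benign obfuscation (such as protected email addresses), and a much larger seal catalogue, but the two signals themselves reduce to checks of roughly this shape.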
Ultimately, we’ve determined that content obfuscation and the use of fake seals are both very strong signals for malicious intent. Of the 149,700 pages studied, we found that benign pages rarely exhibit these behaviors. We also uncovered hundreds of malicious pages that traditional malware detectors would have missed, including 400 rogue pharmacy websites displaying fake seals like those above.
While this is by no means a comprehensive way to detect all malicious web pages, we believe this research can contribute to the ever-growing toolshed of cyber-security defenses against Internet fraud. And all of us can learn from this to treat certification seals on otherwise unknown webpages with a healthy dose of suspicion. | <urn:uuid:5724422e-cc7a-41a6-a537-7e36bfce5d34> | CC-MAIN-2017-04 | http://labs.lastline.com/rogue-online-pharmacies-use-fake-security-seals-and-content-obfuscation | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00305-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.903048 | 548 | 2.546875 | 3 |
NEWPORT BEACH, CA--(Marketwired - May 12, 2014) - Have you ever wondered what the depths of the ocean truly sound like to the multitudes of marine life who call the ocean their home? As part of ExplorOcean's monthly lecture series, on May 29, 2014 at 4 p.m. and 7 p.m. Dr. Ana Širović will discuss the significant increase in background noise levels across the world's oceans and the important ramifications it may have on various marine organisms.
Širović is a marine bioacoustician at the Scripps Institution of Oceanography in La Jolla, Calif. A native of Croatia, she earned a B.A. in creative studies with a biology emphasis from the University of California Santa Barbara and a Ph.D. in oceanography at the Scripps Institution of Oceanography. As a researcher at the school's marine physical laboratory, Širović's expertise focuses on marine bioacoustics, especially the use of acoustic methods and technologies to promote a better understanding of endangered marine species. With the increase in human use of the ocean, background noise levels have risen dramatically and this increase might have important effects on whales, fish, and other marine animals that rely on sounds to navigate, find food, and search for mates, among other biological needs. Širović has logged more than 300 days of sea time, predominantly in the Southern Ocean.
Tom Pollack, ExplorOcean's CEO, notes: "We're proud to welcome Dr. Ana Širović to our ExplorOcean lecture series. To have such high-level speakers as Dr. Širović contribute to our monthly lecture series enriches our programs and our commitment to ocean literacy. Her conversation on the rise in ocean background noise levels is a timely subject and will certainly resonate with our members concerned about the future ocean environment."
The lecture is hosted by ExplorOcean at 600 East Bay Ave., Newport Beach, Calif. It is free to members and $15 per talk for non-members. To register, or for more information, visit http://explorocean.org/education/lecture-series/.
ABOUT EXPLOROCEAN: ExplorOcean, America's premier ocean literacy center, offers a world-class ocean literacy platform and cultural destination where visitors can immerse themselves in interactive exhibitions devised to develop the curious explorer within. ExplorOcean's high quality programs which are grounded in the seven principles of ocean literacy and in STEM content include single day camps, multi-day camps, classes, monthly lectures and an impressive underwater robotics program developed by the director of education, Dr. Wendy Marshall. Headquartered on the Balboa Peninsula between the sparkling Pacific Ocean and the bustling Newport Harbor, the center's nearly two-acre location is the perfect place for people of all ages to learn about the seven seas. For more information about ExplorOcean, please visit www.ExplorOcean.org. | <urn:uuid:9134adba-167b-4040-9c9e-ac63d99a579a> | CC-MAIN-2017-04 | http://www.marketwired.com/press-release/explorocean-presents-can-you-hear-me-now-coping-with-an-increasingly-noisy-ocean-1909071.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00242-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91344 | 615 | 2.921875 | 3 |
Goebel M.E.,National Oceanic and Atmospheric Administration |
Perryman W.L.,National Oceanic and Atmospheric Administration |
Hinke J.T.,National Oceanic and Atmospheric Administration |
Krause D.J.,National Oceanic and Atmospheric Administration |
And 3 more authors.
Polar Biology | Year: 2015
Quantifying the distribution and abundance of predators is integral to many ecological studies, but can be difficult in remote settings such as Antarctica. Recent advances in the development of unmanned aerial systems (UAS), particularly vertical takeoff and landing (VTOL) aircraft, have provided a new tool for studying the distribution and abundance of predator populations. We detail our experience and testing in selecting a VTOL platform for use in remote, windy, perennially overcast settings, where acquiring cloud-free high-resolution satellite images is often impractical. We present results from the first use of VTOLs for estimating abundance, colony area, and density of krill-dependent predators in Antarctica, based upon 65 missions flown in 2010/2011 (n = 28) and 2012/2013 (n = 37). We address concerns over UAS sound affecting wildlife by comparing VTOL-generated noise to ambient and penguin-generated sound. We also report on the utility of VTOLs for missions other than abundance and distribution, namely to estimate size of individual leopard seals. Several characteristics of small, battery-powered VTOLs make them particularly useful in wildlife applications: (1) portability, (2) stability in flight, (3) limited launch area requirements, (4) safety, and (5) limited sound when compared to fixed-wing and internal combustion engine aircraft. We conclude that of the numerous UAS available, electric VTOLs are among the most promising for ecological applications. © 2015, The Author(s). Source
Taylor J.K.D.,New England Aquarium |
Kenney R.D.,University of Rhode Island |
LeRoi D.J.,Aerial Imaging Solutions |
Kraus S.D.,New England Aquarium
Marine Technology Society Journal | Year: 2014
Marine aerial surveys are designed to maximize the potential for detecting target species. Collecting data on different taxa from the same platform is economically advantageous but normally comes at the cost of compromising optimal taxon-specific scanning patterns and survey parameters, in particular altitude. Here, we describe simultaneous visual and photographic sampling methods as a proof of concept for detecting large whales and turtles from a single aircraft, despite very different sighting cues. Data were collected for fishing gear, fish, sharks, turtles, seals, dolphins, and whales using two observers and automated vertical photography. The photographic method documented an area directly beneath the aircraft that would otherwise have been obscured from observers. Preliminary density estimates were calculated for five species for which there were sufficient sample sizes from both methods after an initial year of data collection. The photographic method yielded significantly higher mean density estimates for loggerhead turtles, ocean sunfish, and blue sharks (p < 0.01), despite sampling a substantially smaller area than visual scanning (less than 11%). Density estimates from these two methods were not significantly different for leatherback turtles or basking sharks (p > 0.05), two of the largest species included in the analysis, which are relatively easy to detect by both methods. Although postflight manual processing of photographic data was extensive, this sampling method comes at no additional in-flight effort and obtains highquality digital documentation of sightings on the trackline. Future directions for this project include automating photographic sighting detections, expanding the area covered by photography, and performing morphometricmeasurement assessments. Source
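As a back-of-the-envelope illustration of what such density comparisons involve (our numbers, not the authors' data), a strip-survey density estimate is simply the count divided by the area covered, so a narrow photographic strip can still yield a higher density than a wide visual strip:

```python
# Illustrative values only -- not taken from the paper.
track_length_km = 500.0
visual_strip_width_km = 1.8     # area scanned by the human observers
photo_strip_width_km = 0.15     # much narrower strip imaged by the vertical camera

visual_count = 45               # e.g. loggerhead turtles logged by observers
photo_count = 11                # turtles detected in the photographs

visual_density = visual_count / (track_length_km * visual_strip_width_km)
photo_density = photo_count / (track_length_km * photo_strip_width_km)

print(f"Visual estimate:       {visual_density:.3f} animals per square km")
print(f"Photographic estimate: {photo_density:.3f} animals per square km")
# With these made-up numbers the photographic estimate (0.147) exceeds the
# visual one (0.050), mirroring the pattern reported for species that
# observers tend to miss directly beneath the aircraft.
```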
Agency: Department of Commerce | Branch: | Program: SBIR | Phase: Phase II | Award Amount: 350.00K | Year: 2010
NOAA uses large-format aerial film cameras to collect data for monitoring marine mammal populations protected under the Marine Mammal Protection Act and the U.S. Endangered Species Act. Historically, the major users of these cameras have been the military and government agencies. As these users move to newer technology, manufacturers are ending the production of aerial films and camera parts. Consequently, NOAA requires a digital camera system that will deliver high resolution aerial imagery equivalent to the imagery they currently gather. In Phase I, Aerial Imaging Solutions designed a multiple digital camera, forward motion compensated mount and control system to fill NOAA’s sampling needs. Additionally, we produced a prototype of the design. The prototype was approved for flight by NOAA’s Aircraft Operations Center and test flown by both the National Marine Mammal Laboratory Steller Sea Lion group and the Southwest Fisheries Science Center Photogrammetry group. The results exceeded all expectations, confirming the feasibility of replacing NOAA’s aerial film cameras with the proposed system. For Phase II, we propose to deliver two commercial-quality FMC mount systems to NOAA researchers.
Agency: Department of Commerce | Branch: | Program: SBIR | Phase: Phase I | Award Amount: 75.00K | Year: 2009
NOAA uses large-format aerial film camera systems to collect data for the monitoring of marine mammal populations protected under the Marine Mammal Protection Act and the U.S. Endangered Species Act. Historically, the major users of these cameras and film have been the military and government mapping agencies. As these users move to newer technology, manufacturers are ending, or severely cutting back, the production of aerial films and camera repair parts. Consequently, NOAA requires a digital camera system that will deliver high resolution aerial imagery equivalent to the imagery they currently gather. Aerial Imaging Solutions proposes to design a multiple digital camera, forward motion compensated, stabilized mount and control system to fill NOAA’s sampling needs. The Phase I goal is to develop a prototype that will match or exceed the image quality and coverage of the venerable film camera systems NOAA now uses.
Aerial Imaging Solutions | Date: 2015-12-14
Unmanned aerial vehicles (UAVs). | <urn:uuid:693a3e1f-d6c3-499b-b284-36571e96365a> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/aerial-imaging-solutions-832152/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00242-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.904397 | 1,227 | 2.703125 | 3 |
Solar energy is one of the fastest growing industries in the U.S., and it’s having a transformative impact on energy consumption and production — while helping to drive economic recovery. In 2010, the solar industry alone accounted for 93,000 Americans holding jobs in the U.S., according to The Solar Foundation, and all 50 states have seen growth.
Given that solar power’s key ingredient is the sun, the nation’s southwestern deserts make an ideal setting in which to build solar facilities. Arizona, which is about 42 percent desert and boasts sunshine nearly year-round, has become a hotbed for solar projects that are both operational and under construction.
“Arizona considers itself to be a potential superstar in the solar energy business,” said Dennis Godfrey, a spokesman for the U.S. Department of Interior’s Bureau of Land Management (BLM).
In Pinal County, Ariz., a 20-megawatt (or 20 million-watt) photovoltaic plant, called Copper Crossing Solar Ranch, was completed in September 2011 by Portland, Ore.-based energy company Iberdrola Renewables and the Salt River Project (SRP), a public power utility that mainly serves Maricopa and Pinal counties. Copper Crossing is Iberdrola’s first solar energy project in the U.S.; the company is better known for producing wind energy.
The solar plant sits on 144 acres in Florence, the county seat of Pinal County, and can provide clean energy for up to 3,700 residential homes within the community. Construction on Copper Crossing began in late 2010, and at the height of construction, nearly 200 workers were employed. The plant is estimated to reduce greenhouse gas emissions by 525,000 metric tons over the next 25 years, according to San Jose, Calif.-based SunPower Corp., which manufactured more than 66,000 polycrystalline photovoltaic solar panels for the project. The panels are mounted on single-axis tracking arms, so as the sun moves across the sky, the panels also move, creating more efficiency.
Although Copper Crossing is complete, other desert-region solar projects being built on federal land aren’t as easy to execute. These projects must meet specific criteria, said the BLM’s Godfrey, and each must have an environmental impact statement outlining the details of the proposed project. Given the hurdles to meet those requirements, completion can sometimes stall.
The BLM is currently involved in three active solar projects in Arizona that are awaiting approval to be built.
The partnership between the SRP and Iberdrola was essential to the success of Pinal County’s solar project, said Debbie Kimberly, manager of customer programs and marketing for the SRP. Because the SRP is a political subdivision of Arizona, it can neither take advantage of nor pass on incentives or tax credits for solar energy to school districts and residents. By partnering with Iberdrola, those benefits, in the form of competitive prices for the solar energy, can be shared with customers.
The SRP opened eight of the plant’s 20 megawatts to school districts that wanted to have a portion of their power supply come from solar energy, and created subscription agreements for school districts to pay a fixed price for the solar energy over 10 years, Kimberly said. Eleven school districts and more than 100 schools have subscribed and will pay a flat rate of 9.9 cents per kilowatt-hour (the amount of energy used by a 1,000-watt load running for one hour).
Pinal County Supervisor Bryan Martyn said that while shifting to alternative energy creates long-term benefits for the community, citizens don’t immediately jump on board.
“Most citizens don’t appreciate a whole lot of change. They’re happy to keep things the way they are,” Martyn said. “But at the same time, $4 per gallon gas really motivates people to look at alternative energy sources, and we as Americans have to figure out how we power our country using resources that are inside our own borders.”
Residential customers are allotted a separate portion of the plant’s 20 megawatts, according to Kimberly. As of November 2011, 215 residential customers had subscribed to more than 500 kilowatts of the plant. Unlike the school districts, residential customers aren’t obligated to commit to a 10-year contract. Instead, they subscribe to five-year contracts.
“Residential customers didn’t want a pay-a-cent-per-kilowatt-hour charge; they wanted to pay for a block of solar energy,” Kimberly said, explaining that they pay $24.15 for 1 kilowatt block of energy.
Going forward, local and federal officials hope that BLM-related projects will stimulate job growth in the Arizona desert region with smarter buildings being the result. Completed projects like Copper Crossing in Pinal County will continue producing cleaner energy and providing incentives to those who subscribe to solar energy contracts.
Martyn said all of the key players in the Pinal County project were vital to completing it on time and under budget. However, power production wasn’t the only goal in mind for the future.
“The largest benefit from a project like this is not in power production. It’s more in education and exposure,” Martyn said. “Showing that we are trying to move forward relative to renewable energy sources and we’re utilizing the one resource we have in Arizona — the abundant resource we have in Arizona — which is the sun.” | <urn:uuid:f758b2e5-e6db-461a-b40d-427087aa5944> | CC-MAIN-2017-04 | http://www.govtech.com/technology/Desert-Regions-Reap-the-Benefits-of-Solar-Energy.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00206-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947709 | 1,155 | 2.84375 | 3 |
Scientists at Oak Ridge National Laboratory (ORNL) are using DOE supercomputers and a neutron scanning technique to develop more efficient methods of extracting gas and oil from shale. Using the systems at the Department of Energy’s National Energy Research Supercomputing Center (NERSC) – supercomputing resources include Hopper and Edison – the researchers are studying the structure of gas and oil deposits to understand how traits like pore size can affect accessibility to natural gas.
The research is considered an important step toward establishing more efficient extraction techniques, cleaner coal-based energy production and improved carbon storage and sequestration technologies. This is the viewpoint of Yuri Melnichenko, an instrument scientist at ORNL’s High Flux Isotope Reactor.
Melnichenko is part of a team from ORNL’s Materials Science and Technology Division that is analyzing two-dimensional images of shale, looking for important clues. They’ve developed a technique that relies on small-angle neutron scattering which, when combined with electron microscopy and theory, can be used to examine the effect of pore size on how accessible the trapped gas is. The promising research is documented in a recent paper in the Journal of Materials Chemistry A.
Using the High Flux Isotope Reactor’s General Purpose SANS instrument, the scientists discovered much higher local structural order than previously thought to exist in nanoporous carbons. This sets the stage for scientists to develop improved modeling methods based on local structure of carbon atoms.
“We have recently developed efficient approaches to predict the effect of pore size on adsorption,” states team member and co-author James Morris. “However, these predictions need verification, and the recent small-angle neutron experiments are ideal for this. The experiments also beg for further calculations, so there is much to be done.”
The knowledge gleaned from this experiment also has important implications for the development of novel nanoporous materials custom-designed for energy storage. This would be a tremendous boon for the capture and sequestration of problematic greenhouse gases. Other potential applications include hydrogen storage, membrane gas separation, environmental remediation and catalysis. | <urn:uuid:1706d16d-15d7-4d09-a066-737e74a8df9f> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/12/04/nersc-supercomputer-boosts-energy-research/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00114-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.915435 | 437 | 3.40625 | 3 |
“Over the years, phishing attacks have changed, as with most things, and have been segmented into different groups of variants.” –Me
If there is one thing you can say about cybercriminals, it’s that they are adaptive. As I mentioned last week, phishing attacks have evolved from just fake web pages and official looking emails to fake web pages and official looking posts and messages through social networks and online games. In fact, social networking itself has been the catalyst that spawned a whole new breed of phishing attacks and the worst part is, it’s easier now.
In this post, we will expand on our Phishing 101 series by discussing the more modern and more unique side to phishing, namely ‘Spear Phishing’ and phishing on the social media highway.
Spear Phishing Emails
Spear Phishing refers to the attempt to phish a specific user or group of users by using subjects that appeal directly to the target, or by referencing unique identifying details that give the message a sense of legitimacy. These kinds of attacks are usually part of a broader targeted campaign whose end goal is access to specific systems or information. While I could not obtain any spear phishing attacks used in the wild, I have created an example for you.
In the example, I have received an email from my favorite fictional game ‘StoneCraft’. The email references my ‘StoneCraft’ user name of ‘ILIKETURTLES’ and requests that I change my password on my account because of a security compromise on their end. They provide me a link to my account settings as well:
This email looks official enough: the ‘FROM’ address is listed as ‘firstname.lastname@example.org’, and the sincerely block says that Bob Blah, the Security Administrator from StoneCraft Inc., is the one who sent the e-mail. If I did some background research I would discover that Bob Blah is, in fact, the listed security admin for the game. However, once I put my mouse pointer over the link, my email client informs me that the web address is not the same as the link text. If I had clicked that link, it would have taken me to a login page where I would enter my credentials:
After which my credentials would be stolen and my account could be logged into by someone other than myself.
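The hover check described above is easy to automate. The sketch below (a hypothetical example, not a Malwarebytes tool; the trusted domain is a placeholder for the game's real one) extracts every link from an HTML e-mail body and flags those whose actual destination is not the expected domain, no matter what the visible text claims:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

TRUSTED_DOMAIN = "stonecraft-game.example"   # placeholder for the legitimate domain

class LinkCollector(HTMLParser):
    """Collect (href, visible text) pairs from an HTML e-mail body."""
    def __init__(self):
        super().__init__()
        self._current_href = None
        self.links = []                      # list of (href, text) tuples

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current_href = dict(attrs).get("href", "")

    def handle_data(self, data):
        if self._current_href is not None:
            self.links.append((self._current_href, data.strip()))
            self._current_href = None

def suspicious_links(html_body: str):
    parser = LinkCollector()
    parser.feed(html_body)
    flagged = []
    for href, text in parser.links:
        host = urlparse(href).hostname or ""
        if not host.endswith(TRUSTED_DOMAIN):    # destination is not where it claims to be
            flagged.append((text, href))
    return flagged
```

Run against the example message, the account-settings link would be flagged because its href points somewhere other than the game's own domain.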
This is of course just an example of a Spear Phishing attack, and many of them are not as harmless as this one: the information gathered may be more than just the login credentials for an online game; it can include credit card information, financial institution credentials, or even the usernames and passwords for secure networks. Spear phishing is often the first offensive step toward obtaining restricted access and information.
Security Tip: While not always the most effective way of spotting a phishing attempt, type the name of the sender into a search engine with the term “Phishing” after it and see if anything pops up; someone else may already have been hit by the same attack and mentioned it on the internet.
Phishing in Messages / Social Networking
Phishing does not only refer to sending fake e-mails but also to sending any kind of fake message, be it through social networking or through chat. The end goal is the same: using social engineering tactics to fool a user into clicking a link and giving up personal information.
Runescape is a very popular online browser role playing video game; it appeals to a great number of gamers and has approximately 10 million active accounts per month. However, as with most popular things, there is always someone trying to exploit people for monetary gain. There is an underground market for people selling Runescape accounts or items and a commonly seen phishing attack revolves around stealing these things from legitimate users.
This is how the usual RuneScape phish happens:
- Hacker A decides to steal some accounts, so he sets up a fake RuneScape login page
- Hacker A goes onto RuneScape and begins announcing free items or assistance to anyone who asks
- User B decides to take Hacker A up on his offer and asks what he needs to do
- Hacker A directs User B to a login page by posting a URL into the chat
- User B navigates to this site where he is presented with a login page, he enters his credentials but nothing happens
- At the same time, the fake login page has transmitted User B’s login credentials to Hacker A, who now has the ability to log in, steal items from, or sell User B’s account.
Runescape suffers from other types of phishing attacks as well, including through their Forum and of course through emails. For more information about Runescape phishing, check out their website:
Security Tip: If you come into contact with someone trying to give you a link through chat to a specific part of a legitimate website, ask them how to navigate to that section rather than clicking the link. That way you know that the page you end up on is legitimate.
Facebook and other social networking sites are the perfect place for phishing attacks since they allow popular topics and posts to be spread to multiple users. Add to this the willingness of people to become friends with, or subscribe to, random people based on how much they enjoy the person's posts. The examples I will show you are of real-world posts by fake users attempting to get two things:
- A broader base of users to spread their Spam or phishing attacks to
- People to click on their spam or phishing attacks
The first image shows a spam or phishing attempt which would lead users, under the guise of being given free merchandise, to a website to provide personal information. Once this information is stolen, it now belongs to the attacker. The second image shows the kind of tactics used by these attackers to spread their messages to a greater group of people by posting things like “Hit ‘Like’ if you remember This Phones ;D”. If you notice, at the time this screenshot was taken, the post had already received over 19k likes and been shared 50 times. In addition, the comments attached to this post advertise assistance in gaining more friend requests and even a fake account which backs up the previous comments assurances.
The above post, with all the likes it has received, would be reposted to the Facebook walls of thousands of users by popularity and sharing between friends. It would be so easy for one of these posts to lead users to a “Facebook Login” page where they would reveal their user login credentials and possibly other sensitive personal information.
Don’t think that Facebook is alone in this type of attack; any social networking forum has the same problems, including Google+ and especially Twitter. Here is an example:
A new type of botnet emerges that has the capability to take over the Twitter account of the infected user. In doing so, it propagates by posting links to malicious web pages that download the same bot implant on other systems. Let’s say that the malware spreading web page only infects every 10th visitor with malware and does nothing for the rest, to avoid detection by malicious link detection services. The bot implant itself changes and morphs on a regular basis to avoid detection and eventually has infected 25,000 systems. At this point, a single post is made by one of the bots mentioning some interesting topic and linking to a site which uses the same method of user infection. All of the other 24,999 bots favorite or re-tweet this one post and it gets so much attention that legitimate users who did not detect any malicious intent begin to spread it as well. Soon enough you have a huge infection which spread from one single infected system.
Admittedly, this is a ‘worst-case scenario’, but the trusting nature of the average Facebook, Twitter or YouTube user makes it possible for cybercriminals to exploit them with a higher success rate than that of traditional phishing attempts.
Security Tip: It’s fun to agree with things and show your approval for posts and comments, but you can do your part to keep other users safe by not clicking ‘Like’ on, or sharing, the post of someone you do not know or trust. Even if you are the only one who doesn’t share it, that could be thousands of potential victims you just saved.
How can you protect yourself?
To reinforce what I advised last week:
Phishing attacks can be made to fail simply by keeping computer security practices in mind whenever you check your email, read Facebook posts or play your favorite online game. Here is a list of a few of the most important tactics for keeping your information safe:
- Don’t open e-mails from senders you are not familiar with.
- Don’t ever click on a link inside of an e-mail unless you know exactly where it is going.
- To layer that protection, if you get an e-mail from a source you are unsure of, navigate to the provided link manually by entering the legitimate website address into your browser.
- Look out for the digital certificate of a website
- If you are asked to provide sensitive information, be sure to check and make sure that the URL of the page starts with ‘HTTPS’ instead of just ‘HTTP’
- This is important not only for securing yourself against phishing attacks but also because plain ‘HTTP’ traffic can be intercepted by hackers watching your network connection.
- If you suspect the legitimacy of an e-mail, take some of its text or names used in it and type it into your search engine to see if any known phishing attacks exist using the same methods.
- Obtain ‘Password Manager’ tools, which can auto-fill login information for you. If you navigate to a page you have visited before, the fields should be filled in; if they are not, you may be on a phishing page.
This second part of the series went over the dangers of Spear Phishing and phishing in social media; while not seen as often as the classic phishing email method, they are nonetheless on the rise. However, most social networking sites are taking a proactive stance against these types of attacks by developing and implementing new types of security applications which stop malicious posts before they get out of control. Historically, though, as the technology becomes more secure and the normal methods of attack are phased out, new and more dangerous ones emerge, and they always will. The best protection against becoming another victim of these types of attacks is to be suspicious of every post, tweet, video and link you come across and to make sure you know with a high degree of certainty that you can trust the source. | <urn:uuid:93491957-283e-42e7-a78b-480cdde92e98> | CC-MAIN-2017-04 | https://blog.malwarebytes.com/101/2012/07/phishing-101-part-2/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00170-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954804 | 2,215 | 2.5625 | 3 |
"Sobig.c" - Spam Technology In the Hands of Virus Creators?
04 Jun 2003
As has been reported by Kaspersky Lab (http://www.kaspersky.com/news.html?id=978914), a new modification of the "Sobig" network worm has been spreading across the Internet. Company experts have conducted a detailed analysis of the situation and now suspect that in order to achieve the maximum effect, the virus' creators may likely have used spamming technology to mass mail the "Sobig" worm.
Network worms differ from other malicious programs with their ability to automatically propagate (deliver infected messages, attack P2P networks, local area networks etc.). The situation with "Sobig.c" represents the first time where these functions were fortified by mass mailing technology. As such, the use of this technology would explain how the "Sobig" worm family instantly jumped to first place in May's list of the most widespread virus programs.
Under this assumption it is possible to state a few facts: Firstly, the spreading methods used by the "Sobig" worm itself are not effective enough to cause such a large number of infections in such a short period of time. Secondly, the overwhelming majority of the infected messages being sent out do not use the address email@example.com as stipulated in the worm's code, but rather other falsified addresses. Finally, detailed analysis of the IP-addresses at the source of "Sobig.c" mailings confirms the high probability of the use of spamming technology.
It is doubtful that spammers decided to expand their business to include the anonymous mailing of infected messages. It is likewise doubtful that virus creators paid for professional spamming services, which would have cost up to several thousand US dollars; for even the most obsessed virus writer this amount would almost surely be prohibitive. On the other hand, it should be noted that the computer underground has perfected the art of covering its tracks, masterfully using anonymity and the extraterritoriality of the worldwide Web to hide its illegal activities.
"It is possible that virus writers actually decided to quench their irrational thirst to destroy with the help of spamming technology", commented Eugene Kaspersky, Head of Anti-Virus Research at Kaspersky Lab. The consequences of this symbiosis are hard to overestimate. Using "spamesque" mass mailings can tremendously increase the speed at which worms spread and the geographic territory they cover. This technological integration could provoke global flood-attacks on the Internet (such as happened with 'Slammer') that could lower the network's productivity and even result in its decomposition into disconnected segments.
"It is possible to simply blame the evil geniuses who thought up this method of network attack. On the other hand one should look at the situation objectively; naturally in the environment of complete chaos and total anonymity that reigns over the Internet, certain people are not able to resist the temptation to commit cyber hooliganism", injected Eugene Kaspersky. According to Kaspersky Lab' research, the overriding factor motivating the overwhelming majority of virus creators to practice their craft is impunity. If they would be confident in the eventuality of being punished for committing unlawful acts, by far the majority of virus creators would simply cease to commit their crimes. This reality once more confirms the urgency to establish additional Internet security measures or to create a parallel, protected network to be used exclusively for business communications.
More detailed information about the "Sobig" family of network worms can be found in the Kaspersky Virus Encyclopedia by clicking on the following links:
Sobig.a, Sobig.b (aka Palyh), Sobig.c | <urn:uuid:5239a1de-f8db-4bd2-86e9-cb8401412358> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2003/_Sobig_c_Spam_Technology_In_the_Hands_of_Virus_Creators_ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00380-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942508 | 769 | 2.65625 | 3 |
by Douglas Comer, Purdue University
Traditional packet-processing systems use an approach known as demultiplexing to handle incoming packets (refer to for details). When a packet arrives, protocol software uses the contents of a Type Field in a protocol header to decide how to process the payload in the packet. For example, the Type field in a frame is used to select a Layer 3 module to handle the frame, as Figure 1 illustrates.
Demultiplexing is repeated at each level of the protocol stack. For example, IPv6 uses the Next Header field to select the correct transport layer protocol module, as Figure 2 illustrates.
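In code, traditional demultiplexing amounts to a dispatch table keyed on the type value. The sketch below is schematic rather than production protocol-stack code; the handler names are placeholders, and the two type codes shown are the standard EtherType values for IPv4 and ARP:

```python
def handle_ipv4(payload: bytes) -> None: ...   # Layer 3 module (placeholder)
def handle_arp(payload: bytes) -> None: ...    # ARP module (placeholder)

# Layer 2 demultiplexing: the frame's Type field selects the next module.
ETHERTYPE_HANDLERS = {
    0x0800: handle_ipv4,   # IPv4
    0x0806: handle_arp,    # ARP
}

def demux_frame(frame: bytes) -> None:
    ethertype = int.from_bytes(frame[12:14], "big")   # Type field of an Ethernet frame
    handler = ETHERTYPE_HANDLERS.get(ethertype)
    if handler is not None:
        handler(frame[14:])        # hand the payload up one layer
    # unknown types are simply dropped
```

Each layer repeats the same pattern with its own selector field, which is exactly why a transit datagram never reaches the transport layer's table.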
Modern, high-speed network systems take an entirely different view of packet processing. In place of demultiplexing, they use a technique known as classification. Instead of assuming that a packet proceeds through a protocol stack one layer at a time, they allow processing to cross layers. (In addition to being used by companies such as Cisco and Juniper, classification has been used in Linux and with network processors by companies such as Intel and Netronome.)
Packet classification is especially pertinent to three key network technologies. First, Ethernet switches use classification instead of demultiplexing when they choose how to forward packets. Second, a router that sends incoming packets over Multiprotocol Label Switching (MPLS) tunnels uses classification to choose the appropriate tunnel. Third, classification provides the basis for Software-Defined Networking (SDN) and the OpenFlow protocol.
Motivation for Classification
To understand the motivation for classification, consider a network system that has protocol software arranged in a traditional layered stack. Packet processing relies on demultiplexing at each layer of the protocol stack. When a frame arrives, protocol software looks at the Type field to learn about the contents of the frame payload. If the frame carries an IP datagram, the payload is sent to the IP protocol module for processing. IP uses the destination address to select a next-hop address. If the datagram is in transit (that is, passing through the router on its way to a destination), IP forwards the datagram by sending it back out one of the interfaces. A datagram reaches TCP only if the datagram is destined for the router itself. TCP then uses the protocol port numbers in the TCP segment to further demultiplex the incoming datagram among multiple application programs.
To understand why traditional layering does not solve all problems, consider MPLS processing. In particular, consider a router at the border between a traditional internet and an MPLS core. Such a router must accept packets that arrive from the traditional internet and choose an MPLS path over which to send the packet. Why is layering pertinent to path selection? In many cases, network managers use transport layer protocol port numbers when choosing a path. For example, suppose a manager wants to send all web traffic down a specific MPLS path. All the web traffic will use TCP port 80, meaning that the selection must examine TCP port numbers.
Unfortunately, in a traditional demultiplexing scheme, a datagram does not reach the transport layer unless the datagram is destined for the local network system. Therefore, protocol software must be reorganized to handle MPLS path selection. We can summarize:
A traditional protocol stack is insufficient for the task of MPLS path selection because path selection often involves transport layer information and a traditional stack will not send transit datagrams to the transport layer.
Classification Instead of Demultiplexing
How should protocol software be structured to handle tasks such as MPLS path selection? The answer lies in the use of classification. A classification system differs from conventional demultiplexing in two ways: it can examine fields from several protocol layers at once, and it treats the packet syntactically, checking values at known byte positions rather than parsing each header in turn.
To understand classification, imagine a packet that has been received at a router and placed in memory. Encapsulation means that the packet will have a set of contiguous protocol headers at the beginning. For example, Figure 3 illustrates the headers in a TCP packet (for example, a request sent to a web server) that has arrived over an Ethernet.
Given a packet in memory, how can we quickly determine whether the packet is destined to the web? A simplistic approach simply looks at one field in the headers: the TCP destination port number. However, it could be that the packet is not a TCP packet at all. Maybe the frame is carrying Address Resolution Protocol (ARP) data instead of IP. Or maybe the frame does indeed contain an IP datagram, but instead of TCP the transport layer protocol is the User Datagram Protocol (UDP). To make certain that it is destined for the web, software needs to verify each of the headers: the frame contains an IP datagram, the IP datagram contains a TCP segment, and the TCP segment is destined for the web.
Instead of parsing protocol headers, think of the packet as an array of octets in memory. Consider IPv4 as an example. To be an IPv4 datagram, the Ethernet Type field (located in array positions 12 and 13) must contain 0x0800. The IPv4 Protocol field, located at position 23, must contain 6 (the protocol number for TCP). The Destination Port field in the TCP header must contain 80. To know the exact position of the TCP header, we must know the size of the IP header. Therefore, we check the header length octet of the IPv4 header. If the octet contains 0x45, the TCP destination port number will be found in array positions 36 and 37.
As another example, consider classifying Voice over IP (VoIP) traffic that uses the Real-Time Transport Protocol (RTP). Because RTP is not assigned a specific UDP port, vendors use a heuristic to determine whether a given packet carries RTP traffic: check the Ethernet and IP headers to verify that the packet carries UDP, and then examine the octets at a known offset in the RTP packet to verify that the value matches the value used by a known codec.
Observe that all the checks described in the preceding paragraphs require only array lookup. That is, the lookup mechanism treats the packet as an array of octets and merely checks to verify that location X contains value Y, location Z contains value W, and so on—the mechanism does not need to understand any of the protocol headers or the meaning of values. Furthermore, observe that the lookup scheme crosses multiple layers of the protocol stack.
We use the term classifier to describe a mechanism that uses the lookup approach described previously, and we say that the result is a packet classification. In practice, a classification mechanism usually takes a list of classification rules and applies them until a match is found. For example, a manager might specify three rules: send all web traffic to MPLS path 1, send all FTP traffic to MPLS path 2, and send all VPN traffic to MPLS path 3.
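A classifier is then just an ordered list of such rules, each pairing a set of (offset, value) tests with an action. The rule contents and path names below are placeholders rather than any vendor's actual configuration syntax, and the byte offsets again assume an untagged frame with a 20-byte IP header:

```python
# Each rule: a list of (offset, expected_bytes) tests plus the action to take.
RULES = [
    # Web traffic (IPv4, TCP, destination port 80) -> MPLS path 1
    ([(12, b"\x08\x00"), (23, b"\x06"), (36, b"\x00\x50")], "mpls-path-1"),
    # FTP control traffic (IPv4, TCP, destination port 21) -> MPLS path 2
    ([(12, b"\x08\x00"), (23, b"\x06"), (36, b"\x00\x15")], "mpls-path-2"),
]

def classify(packet: bytes, default: str = "local-stack") -> str:
    for tests, action in RULES:
        if all(packet[offset:offset + len(value)] == value for offset, value in tests):
            return action          # rules are applied in order; first match wins
    return default                 # no match: hand the packet to normal demultiplexing
```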
Layering When Classification Is Used
If classification crosses protocol layers, how does it relate to traditional layering diagrams? We can think of classification as an extra layer that has been squeezed between Layer 2 and Layer 3. When a packet arrives, the packet passes from a Layer 2 module to the classification module. All packets proceed to the classifier; no demultiplexing occurs before classification. If any of the classification rules matches the packet, the classification layer follows the rule. Otherwise, the packet proceeds up the traditional protocol stack. For example, Figure 4 illustrates layering when classification is used to send some packets across MPLS paths.
Interestingly, a classification layer can subsume all demultiplexing. That is, instead of classifying packets only for MPLS paths, the classifier can be configured with additional rules that check the Type field in a frame for IPv4, IPv6, ARP, Reverse ARP (RARP), and so on.
Classification Hardware and Network Switches
The text in the previous section describes a classification mechanism that is implemented in software—an extra layer is added to a software protocol stack that classifies frames after they arrive at a router. Classification can also be implemented in hardware. In particular, Ethernet switches and other packet-processing hardware devices contain classification hardware that allows packet classification and forwarding to proceed at high speed. The next sections explain hardware classification mechanisms.
We think of network devices, such as switches, as being divided into broad categories by the level of protocol headers they examine and the consequent level of functions they provide:
A Layer 2 Switch examines the Media Access Control (MAC) source address in each incoming frame to learn the MAC address of the computer that is attached to each port. When a switch learns the MAC addresses of all the attached computers, the switch can use the destination MAC address in each frame to make a forwarding decision. If the frame is unicast, the switch sends only one copy of the frame on the port to which the specified computer is attached. For a frame destined to the broadcast or a multicast address, the switch delivers a copy of the frame to all ports.
A VLAN Switch adds one level of virtualization by permitting a manager to assign each port to a specific VLAN. Internally, VLAN switches extend forwarding in a minor way: instead of sending broadcasts and multicasts to all ports on the switch, a VLAN switch consults the VLAN configuration and sends them only to ports on the same VLAN as the source.
A Layer 3 Switch acts like a combination of a VLAN switch and a router. Instead of using only the Ethernet header when forwarding a frame, the switch can look at fields in the IP header. In particular, the switch watches the source IP address in incoming packets to learn the IP address of the computer attached to each switch port. The switch can then use the IP destination address in a packet to forward the packet to its correct destination.
A Layer 4 Device extends the examination of a packet to the transport layer. That is, the device can include the TCP or UDP Source and Destination Port fields when making a forwarding decision.
Switching Decisions and VLAN Tags
All types of switching hardware described previously use classification. That is, switches operate on packets as if a packet is merely an array of octets, and individual fields in the packet are specified by giving offsets in the array. Thus, instead of demultiplexing packets, a switch treats a packet syntactically by applying a set of classification rules similar to the rules described previously.
Surprisingly, even VLAN processing is handled in a syntactic manner. Instead of merely keeping VLAN information in a separate data structure that holds meta information, the switch inserts an extra field in an incoming packet and places the VLAN number of the packet in the extra field. Because it is just another field, the classifier can reference the VLAN number just like any other header field.
We use the term VLAN Tag to refer to the extra field inserted in a packet. The tag contains the VLAN number that the manager assigned to the port over which the frame arrived. For Ethernet, IEEE standard 802.1Q specifies placing the VLAN Tag field after the MAC Source Address field. Figure 5 illustrates the format.
A VLAN tag is used only internally—after the switch has selected an output port and is ready to transmit the frame, the tag is removed. Thus, when computers send and receive frames, the frames do not contain a VLAN tag.
An exception can be made to the rule: a manager can configure one or more ports on a switch to leave VLAN tags in frames when sending the frame. The purpose is to allow two or more switches to be configured to operate as a single, large switch. That is, the switches can share a set of VLANs—a manager can configure each VLAN to include ports on one or both of the switches.
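The tag operation itself is a small byte-level splice. The following sketch inserts an IEEE 802.1Q tag (the TPID 0x8100 followed by a 16-bit TCI that carries the VLAN number) immediately after the source MAC address and strips it again before transmission; priority bits are left at zero for simplicity:

```python
TPID_8021Q = b"\x81\x00"       # EtherType value that announces a VLAN tag

def add_vlan_tag(frame: bytes, vlan_id: int) -> bytes:
    """Insert an 802.1Q tag after the destination and source MAC addresses."""
    tci = vlan_id & 0x0FFF                      # 12-bit VLAN identifier, priority 0
    return frame[:12] + TPID_8021Q + tci.to_bytes(2, "big") + frame[12:]

def strip_vlan_tag(frame: bytes) -> bytes:
    """Remove the tag before the frame leaves an ordinary (untagged) port."""
    if frame[12:14] == TPID_8021Q:
        return frame[:12] + frame[16:]
    return frame

def vlan_of(frame: bytes):
    """Return the VLAN number carried in the tag, or None for untagged frames."""
    if frame[12:14] == TPID_8021Q:
        return int.from_bytes(frame[14:16], "big") & 0x0FFF
    return None
```

Because the tag sits at a fixed offset, the classifier can test the VLAN number exactly like any other header field, which is the point made above.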
We can think of hardware in a switch as being divided into three main components: a classifier, a set of units that perform actions, and a management component that controls the overall operation. Figure 6 illustrates the overall organization and the flow of packets.
As black arrows in the figure indicate, the classifier provides the high-speed data path that packets follow. When a packet arrives, the classifier uses the rules that have been configured to choose an action. The management module usually consists of a general-purpose processor that runs management software. A network administrator can interact with the management module to configure the switch, in which case the management module can create or modify the set of rules the classifier follows.
A network system, such as a switch, must be able to handle two types of traffic: transit traffic and traffic destined for the switch itself. For example, to provide management or routing functions, a switch may have a local TCP/IP protocol stack and packets destined for the switch must be passed to the local stack. Therefore, one of the actions a classifier takes may be "pass packet to the local stack for Demultiplexing".
High-Speed Classification and TCAM
Modern switches can allow each interface to operate at 10 Gbps. At 10 Gbps, a frame takes only 1.2 microseconds to arrive, and a switch usually has many interfaces. A conventional processor cannot handle classification at such speeds, so a question arises: how can a hardware classifier achieve high speed? The answer lies in a hardware technology known as Ternary Content Addressable Memory (TCAM).
TCAM uses parallelism to achieve high speed—instead of testing one field of a packet at a given time, TCAM checks all fields simultaneously. Furthermore, TCAM performs multiple checks at the same time. To understand how TCAM works, think of a packet as a string of bits. We imagine TCAM hardware as having two parts: one part holds the bits from a packet and the other part is an array of values that will be compared to the packet. Entries in the array are known as slots. Figure 7 illustrates the idea.
In the figure, each slot contains two parts. The first part consists of hardware that compares the bits from the packet to the pattern stored in the slot. The second part stores a value that specifies an action to be taken if the pattern matches the packet. If a match occurs, the slot hardware passes the action to the component that checks all the results and announces an answer.
One of the most important details concerns the way TCAM handles multiple matches. In essence, the output circuitry selects one match and ignores the others. That is, if multiple slots each pass an action to the output circuit, the circuit accepts only one and passes the action as the output of the classification. For example, the hardware may choose the lowest slot that matches. In any case, the action that the TCAM announces corresponds to the action from one of the matching slots.
The figure indicates that a slot holds a pattern rather than an exact value. Instead of merely comparing each bit in the pattern to the corresponding bit in the packet, the hardware performs a pattern match. The adjective ternary is used because each bit position in a pattern can have three possible values: a one, a zero, or a "don't care". When a slot compares its pattern to the packet, the hardware checks only the one and zero bits in the pattern—the hardware ignores pattern bits that contain "don't care". Thus, a pattern can specify exact values for some fields in a packet header and omit other fields.
To understand TCAM pattern matching, consider a pattern that identifies IP packets. Identifying such packets is easy because an Ethernet frame that carries an IPv4 datagram will have the value 0x0800 in the Ethernet Type field. Furthermore, the Type field occupies a fixed position in the frame: bits 96 through 111. Thus, we can create a pattern that starts with 96 "don't care" bits (to cover the Ethernet destination and source MAC addresses) followed by 16 bits with the binary value 0000100000000000 (the binary equivalent of 0x0800) to cover the Type field. All remaining bit positions in the pattern will be "don't care". Figure 8 illustrates the pattern and example packets.
Although a TCAM hardware slot has one position for each bit, the figure does not display individual bits. Instead, each box corresponds to one octet, and the value in a box is a hexadecimal value that corresponds to 8 bits. We use hexadecimal simply because binary strings are too long to fit into a figure comfortably.
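In software, a TCAM slot can be modeled as a (pattern, mask, action) triple, where a mask bit of 1 means "compare this bit" and 0 means "don't care". The sketch below checks slots sequentially, whereas real TCAM hardware checks them all in parallel, and it resolves multiple matches by taking the lowest-numbered slot, as described above:

```python
class TcamSlot:
    def __init__(self, pattern: bytes, mask: bytes, action: str):
        assert len(pattern) == len(mask)
        self.pattern, self.mask, self.action = pattern, mask, action

    def matches(self, packet: bytes) -> bool:
        if len(packet) < len(self.pattern):
            return False
        # Compare only the bit positions whose mask bits are 1 ("don't care" elsewhere).
        return all((pkt & m) == (pat & m)
                   for pkt, pat, m in zip(packet, self.pattern, self.mask))

def tcam_lookup(slots, packet: bytes):
    """Return the action of the lowest-numbered matching slot (hardware does this in parallel)."""
    for slot in slots:                  # slot 0 has the highest priority
        if slot.matches(packet):
            return slot.action
    return None

# Example slot: match any IPv4 frame -- 0x0800 at bytes 12-13, everything else "don't care".
ipv4_slot = TcamSlot(pattern=bytes(12) + b"\x08\x00",
                     mask=bytes(12) + b"\xff\xff",
                     action="send to IP module")
```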
The Size of a TCAM
A question arises: how large is a TCAM? The question can be divided into two important aspects: the width of each slot (how many bits of a packet a pattern can cover) and the number of slots the TCAM provides.
A switch can also use patterns to control broadcasting. When a manager configures a VLAN, the switch can add an entry for the VLAN broadcast. For example, if a manager configures VLAN 9, an entry can be added in which the destination address bits are all 1s (that is, the Ethernet broadcast address) and the VLAN tag is 9. The action associated with the entry is "broadcast on VLAN 9".
A Layer 3 switch can learn the IP source address of computers attached to the switch, and can use TCAM to store an entry for each IP address. Similarly, it is possible to create entries that match Layer 4 protocol port numbers (for example, to direct all web traffic to a specific output). SDN technologies allow a manager to place patterns in the classifier to establish paths through a network and direct traffic along the paths. Because such classification rules cross multiple layers of the protocol stack, the potential number of items stored in a TCAM can be large.
TCAM seems like an ideal mechanism because it is both extremely fast and versatile. However, TCAM has two significant drawbacks: cost and heat. The cost is high because TCAM has parallel hardware for each slot and the overall system is designed to operate at high speed. In addition, because it operates in parallel, TCAM consumes much more energy than conventional memory (and generates more heat). Therefore, designers minimize the amount of TCAM to keep costs and power consumption low. A typical switch has 32,000 entries.
Classification-Enabled Generalized Forwarding
Perhaps the most significant advantage of a classification mechanism arises from the generalizations it enables. Because classification examines arbitrary fields in a packet before any demultiplexing occurs, cross-layer combinations are possible. For example, classification can specify that all packets from a given MAC address should be forwarded to a specific output port regardless of the packet contents. In addition, classification can make forwarding decisions depend on combinations of source and destination. An Internet Service Provider (ISP) can choose to forward all packets with IP source address X that are destined for web server W along one path while forwarding packets with IP source address Y that are destined to the same web server along another path.
ISPs need the generality that classification offers to handle traffic engineering tasks that a conventional protocol stack cannot easily support. In particular, classification allows an ISP to offer tiered services in which the path a packet follows depends on a combination of the type of traffic and how much the customer pays.
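As a small illustration of such cross-layer rules, the sketch below expresses the ISP example as classifier entries that combine source and destination addresses to pick a path. The field names, addresses, and path labels are invented for the example and are not taken from the text.

```python
# Illustrative classifier rules that combine source and destination addresses.
# Field names, addresses, and path labels are invented for the example.

RULES = [
    # (source IP, destination IP) -> outgoing path
    {"src": "192.0.2.10", "dst": "203.0.113.80", "path": "path-1 (premium tier)"},
    {"src": "192.0.2.99", "dst": "203.0.113.80", "path": "path-2 (standard tier)"},
]

def forward(packet):
    """Pick a path from the first rule whose fields all match the packet."""
    for rule in RULES:
        if packet["src"] == rule["src"] and packet["dst"] == rule["dst"]:
            return rule["path"]
    return "default path"

print(forward({"src": "192.0.2.10", "dst": "203.0.113.80"}))  # path-1 (premium tier)
print(forward({"src": "192.0.2.99", "dst": "203.0.113.80"}))  # path-2 (standard tier)
```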
Classification is a fundamental performance optimization that allows a packet-processing system to cross layers of the protocol stack without demultiplexing. A classifier treats each packet as an array of bits and checks the contents of fields at specific locations in the array.
Classification offers high-speed forwarding for network systems such as Ethernet switches and routers that send packets across MPLS tunnels. To achieve the highest speed, classification can be implemented in hardware; a hardware technology known as TCAM is especially useful because it employs parallelism to perform classification at extremely high speed.
The generalized forwarding capabilities that classification provides allow ISPs to perform traffic engineering. When making a forwarding decision, a classification mechanism can use the source of a packet as well as the destination (for example, to choose a path based on the tier of service to which a customer subscribes).
Material in this article has been taken with permission from Douglas E. Comer, Internetworking With TCP/IP Volume 1: Principles, Protocols, and Architecture, Sixth edition, 2013. | <urn:uuid:73275990-6b73-498f-94d3-3cccd609fb18> | CC-MAIN-2017-04 | http://www.cisco.com/c/en/us/about/press/internet-protocol-journal/back-issues/table-contents-58/154-packet.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00196-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.894321 | 4,127 | 4 | 4 |
SQL Server is Microsoft's relational database management system. It is used to store data and to retrieve it when needed, whether for an individual user or for many users across a larger network. Microsoft SQL Server offers data warehousing options, data quality and integration services, management tools that are simple to implement, and robust tools for development.
On the more technical side, Microsoft SQL Server supports the T-SQL dialect as well as ANSI SQL as its query languages. Disaster recovery is one of the product's most prominent features, along with in-memory performance, scalability, and corporate business intelligence capabilities.
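For readers who want a sense of what issuing a T-SQL query looks like in practice, here is a minimal sketch that connects from Python over ODBC. It assumes the third-party pyodbc package and Microsoft's ODBC driver are installed; the server, database, credentials, and table are placeholders, not details from the product description.

```python
# Minimal example of running a T-SQL query against SQL Server from Python.
# Assumes the pyodbc package and Microsoft's ODBC driver are installed;
# connection details and the table name are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=example-server;DATABASE=ExampleDb;"
    "UID=example_user;PWD=example_password"
)

cursor = conn.cursor()
# TOP is T-SQL's way of limiting a result set (ANSI SQL would use FETCH FIRST).
cursor.execute("SELECT TOP 5 CustomerId, CustomerName FROM dbo.Customers")
for row in cursor.fetchall():
    print(row.CustomerId, row.CustomerName)

conn.close()
```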
Also known as
Microsoft SQL Server, MSSQL, MS SQL
Microsoft SQL Server is used by businesses in every industry, including Great Western Bank, Aviva, the Volvo Car Corporation, BMW, Samsung, Principality Building Society, Wellmark Blue Cross and Blue Shield, and the Catholic District School Board of Eastern Ontario. | <urn:uuid:be89f282-dd9b-47cc-8751-36b57260d58d> | CC-MAIN-2017-04 | https://www.itcentralstation.com/products/sql-server | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00280-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.910513 | 197 | 2.59375 | 3 |
Tech Glossary – E to F
ECC (Error Correction Code)
Stands for “Error Correction Code.” ECC is used to verify data transmissions by locating and correcting transmission errors. It is commonly used by RAM chips that include forward error correction (FEC), which ensures all the data being sent to and from the RAM is transmitted correctly.
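As a toy illustration of how an error-correcting code can locate and repair a single flipped bit, here is a Hamming(7,4) sketch. Real ECC memory uses wider codes implemented in hardware; the function names and the example data are invented for this illustration.

```python
# Toy Hamming(7,4) code: 4 data bits are protected by 3 parity bits,
# which is enough to locate and correct any single-bit error.

def encode(d):
    """d is a list of 4 data bits; returns a 7-bit codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4        # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4        # parity over positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4        # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def correct(c):
    """Recompute the parity checks; the syndrome is the 1-based index
    of the flipped bit (0 means the codeword is consistent)."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 * 1 + s2 * 2 + s4 * 4
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the offending bit back
    return c, syndrome

codeword = encode([1, 0, 1, 1])
codeword[5] ^= 1                      # simulate a single-bit memory/transmission error
fixed, position = correct(codeword)
print(position, fixed)                # reports the error position and the repaired word
```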
EIDE (Enhanced Integrated Drive Electronics)
EIDE is an improved version of the IDE drive interface that provides faster data transfer rates than the original standard. While the original IDE drive controllers supported transfer rates of 8.3 MB/s, EIDE can transfer data at up to 16.6 MB/s, twice as fast.
Ethernet is the most common type of connection computers use in a local area network (LAN). An Ethernet port looks much like a regular phone jack, but it is slightly wider. This port can be used to connect your computer to another computer, a local network, or an external DSL or cable modem.
This high-speed interface has become a popular standard for connecting peripherals. Created by Apple Computer in the mid-1990s, FireWire, also known as IEEE 1394, can be used to connect devices such as digital video cameras, hard drives, audio interfaces, and MP3 players, such as the Apple iPod, to your computer. A standard FireWire connection can transfer data at 400 Mbps, roughly 30 times faster than USB 1.1.
Flash drives have many names — jump drives, thumb drives, pen drives, and USB keychain drives. Regardless of what you call them, they all refer to the same thing, which is a small data storage device that uses flash memory and has a built-in USB connection. | <urn:uuid:3702114d-1e61-4111-91d2-bebf3e2cd995> | CC-MAIN-2017-04 | http://icomputerdenver.com/tech-glossary/tech-glossary-e-f/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00306-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937018 | 361 | 3.609375 | 4 |
Materials science, also called materials engineering, is on the cusp of a new era, emboldened by advances in computational power and quantum mechanics. For some time now, manufacturers have used supercomputers to design better airplanes, cars and other equipment, but now scientists are using similar techniques to develop new materials from scratch.
A recent article in Scientific American authored by Gerbrand Ceder, a professor of materials science and engineering at the Massachusetts Institute of Technology, and Kristin Persson, a staff scientist at Lawrence Berkeley National Laboratory, shines a light on the important discipline of computer-driven materials design. Thanks to the powerful combination of supercomputing and advanced mathematics, it’s now possible to build new materials atom by atom.
The method is referred to as high-throughput computational materials design, and it’s responsible for a host of sophisticated developments – improved batteries, solar cells, fuel cells, computer chips, and many other technologies.
Before these digital prototyping tools were invented, designing new materials required a lot of grunt work. Breakthroughs occurred only after much trial and error and guesswork. The new process is remarkably more streamlined and efficient, allowing researchers to virtually test thousands of materials in a very short amount of time.
Going back to the late 1800s, an inventor like Thomas Edison was guided mainly by intuition and arduous trial and error. Testing materials one at a time, it took Edison 14 months to develop and patent a bulb using a filament made of carbonized cotton thread. Several years later, another American inventor discovered a better material, tungsten filament, which is still used in incandescent lightbulbs to this day.
Even the Sony lithium-ion battery, announced in 1991 – hailed as a huge advance – was the result of decades of research performed by thousands of researchers.
But thanks to high-throughput computing, materials science is headed for even bigger things.
“Materials science is on the verge of a revolution,” write the authors of the Scientific American piece. “We can now use a century of progress in physics and computing to move beyond the Edisonian process. The exponential growth of computer-processing power, combined with work done in the 1960s and 1970s by Walter Kohn and the late John Pople, who developed simplified but accurate solutions to the equations of quantum mechanics, has made it possible to design new materials from scratch using supercomputers and first-principle physics.”
Materials are made up of chemical compounds. Some, like battery electrodes, are composites of several compounds; others, like graphene, are much simpler, consisting of only one element, carbon. High-throughput computational materials design uses powerful supercomputers to virtually analyze hundreds or thousands of chemical compounds at a time, looking for specific properties.
A material’s properties – such as density, hardness, shininess, electronic conductivity, and so forth – are determined by the quantum characteristics of the underlying atoms. What high-throughput materials design does is virtually build new materials based on thousands of quantum-mechanical calculations. Virtual atoms become the building blocks of virtual crystal structures. The supercomputer creates hundreds or thousands of these virtual compounds and then it assesses a range of properties, such as shape, size, conductivity, reflectivity, and so on. The computer is asked to screen for a set of desirable properties, and return the most promising prospects. At each step of the way, researchers can further refine their results.
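The workflow described above can be caricatured in a few lines of code: generate virtual candidates, evaluate properties for each, and keep the ones that pass the screen. The sketch below is schematic only; the "properties" are random stand-ins rather than quantum-mechanical calculations, and every name in it is invented.

```python
# Schematic high-throughput screening loop: build virtual candidates,
# evaluate properties for each, then filter on the desired criteria.
# The property values here are random placeholders, not physics.
import random

def predicted_properties(compound):
    """Stand-in for the expensive quantum-mechanical calculation."""
    random.seed(compound)                       # repeatable per compound
    return {
        "band_gap_eV": random.uniform(0.0, 4.0),
        "stability":   random.uniform(0.0, 1.0),
    }

candidates = [f"compound-{i:04d}" for i in range(1000)]

screened = []
for compound in candidates:
    props = predicted_properties(compound)
    # Keep only candidates in the target band-gap window that look stable.
    if 1.1 <= props["band_gap_eV"] <= 1.7 and props["stability"] > 0.8:
        screened.append((compound, props))

print(f"{len(screened)} promising candidates out of {len(candidates)}")
```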
The article asserts that a golden age of materials design is unfolding. Earlier innovations such as chip-grade silicon and fiber-optic glass are integral to the modern era, and many more potential breakthroughs – in areas such as clean-energy, lightweight metal alloys, and even the future of supercomputing itself (post-silicon era anyone?) – are just waiting for the right material to be invented. | <urn:uuid:e5ef1477-be7f-4ea4-8654-798a944f3513> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/12/05/supercomputing-raises-materials-science-new-heights/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00306-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940629 | 796 | 4.4375 | 4 |
Looking to safeguard the nation's computer systems, Germany plans to establish a computer emergency response center, Otto Schily, the country's interior minister, said at a press conference. Schily is concerned that German businesses don't do enough to maintain the security of their computer systems and that home users underestimate the threat posed by viruses, phishing and other forms of cybercrime.
The national computer emergency response center is similar to US-CERT, the United States Computer Emergency Readiness Team, with its goals of preventing and responding to cyberthreats and fostering the adoption of common security practices. The organization is part of Germany's Federal Office for Security in Information Technology, known by the initials of its German name, BSI, Computerworld reported.
The German interior minister's announcement is the latest in a series of moves by governments to establish centers for fighting cyberthreats. In addition to the United States, Australia, the United Kingdom, Italy, Japan and Hong Kong have launched their own computer emergency response centers. The European Union had also been exploring a center that would be responsible for containing computer security incidents across its member states, until talks broke down last April over disagreements concerning information sharing. | <urn:uuid:1d34b0dd-6197-4d6d-8109-a48acdfc8836> | CC-MAIN-2017-04 | http://www.govtech.com/security/Germany-Launches-Computer-Emergency-Response-Center.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00033-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960432 | 249 | 2.59375 | 3 |
Use a label to display text that identifies a component.
Best practice: Implementing labels
- Use clear, concise labels.
- Try to avoid displaying truncated text. The meaning might be unclear to users if the most important text does not appear. First, try to reduce the size of the text. If you reduce the size but you cannot read the text easily, try wrapping the text onto two lines instead. If you cannot wrap the text, consider using an abbreviation. Otherwise, use an ellipsis (...) to indicate that the text is truncated and provide a tooltip.
- Group and order labels logically (for example, group related items together or include the most common items first). Avoid ordering values alphabetically; alphabetical order is language-specific. | <urn:uuid:1b43b92d-0733-4750-b74a-c3bd434e143e> | CC-MAIN-2017-04 | https://developer.blackberry.com/design/bb7/labels_6_1_1650365_11.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00335-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.736984 | 157 | 3.640625 | 4 |
‘We’ve not built the brain, or any brain, but rather a computer inspired by the brain,’ says IBM.
IBM scientists have unveiled a brain-like computer chip that can be scaled to use one million programmable neurons, 256 million programmable synapses and 46 billion synaptic operations per second, all at the size of a postage stamp.
It operates on the equivalent power of a hearing-aid battery, and IBM says the chip could transform science, technology, business, and society when applied in multi-sensory applications.
Dr. Dharmendra S. Modha, an IBM chief scientist, said: "IBM has broken new ground in the field of brain-inspired computers, in terms of a radically new architecture, unprecedented scale, unparalleled power/area/speed efficiency, boundless scalability, and innovative design techniques. We foresee new generations of information technology systems – that complement today’s von Neumann machines – powered by an evolving ecosystem of systems, software, and services."
The chip has taken almost ten years to research and manufacture, and it has received funding from The Defense Advanced Research Projects Agency (DARPA).
Modha said: "These brain-inspired chips could transform mobility, via sensory and intelligent applications that can fit in the palm of your hand but without the need for Wi-Fi. This achievement underscores IBM’s leadership role at pivotal transformational moments in the history of computing via long-term investment in organic innovation."
IBM envisions the chip being used in varied kinds of situations, taking in sensory data, analysing and integrating real-time information in a context-dependent way, and dealing with the ambiguity found in complex, real-world environments.
‘Not a brain (yet)’
Modha is quick to note that IBM has "not built the brain, or any brain," but rather a computer "inspired by the brain," which for the first time can process sensory data in parallel, much like the human brain itself.
IBM’s first-generation chip was only capable of storing 256 neurons. With its one million, scientists hope the new chip will lead to computers nearly as powerful as the human brain by the year 2020. | <urn:uuid:413d23b7-9ed2-4465-ad57-a4ad5bf49460> | CC-MAIN-2017-04 | http://www.cbronline.com/news/enterprise-it/ibms-human-brain-like-chip-is-the-size-of-a-postage-stamp-4338822 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00151-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937729 | 461 | 3.46875 | 3 |
Encryption is an important safeguard to protect sensitive data that’s stored and processed through the cloud. Encryption protects outgoing data so it’s not vulnerable to being read once it’s outside your network. It also satisfies compliance and regulatory standards like HIPAA and PCI DSS and is an essential tool for protecting information used with popular SaaS applications like Salesforce.com.
Even with a highly-secure data center, the protection of important information is a shared responsibility between your service provider and your IT team. Get started by implementing these four encryption best practices for a cloud environment:
1. Clearly Outline Your Business and Security Goals.
Before you choose any encryption products or design a strategy, understand your organization’s business and security objectives. This includes internal and external data governance policies, such as data privacy and residency, and compliance mandates relevant to your business such as HIPAA, PCI, or Gramm-Leach-Bliley (GLB).
You should also have a plan for how to manage your data encryption keys – a critical lynch pin for ensuring data stays protected. In almost all cases, experts recommend you centrally manage encryption keys outside the cloud and ensure no one but you has access to them.
2. Encrypt data before it goes to the cloud.
Since your IT team doesn’t have direct control over data that’s sent through the cloud, it is important to encrypt important information before it leaves your servers. There are many applications that allow you to do this and give you control over the encryption keys.
Also define exactly which types of data need encryption. There are very few organizations that need to encrypt all cloud-based data. Carefully evaluate what information is high-risk and truly requires it.
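As an illustration of encrypting data before it leaves your servers, the sketch below uses the third-party Python cryptography package (an assumption for the example, not a tool named in this article) to encrypt records locally; only the ciphertext would then be uploaded, and the key stays under your own management. File names and the sample data are placeholders.

```python
# Encrypt data locally before it is sent to a cloud provider.
# Requires the third-party "cryptography" package; names are placeholders.
from cryptography.fernet import Fernet

# Generate a key once and keep it in your own key-management system,
# never alongside the data and never with the cloud provider.
key = Fernet.generate_key()
fernet = Fernet(key)

plaintext = b"name,ssn\nJane Doe,000-00-0000\n"   # stand-in for sensitive records

ciphertext = fernet.encrypt(plaintext)
with open("customer-records.csv.enc", "wb") as f:
    f.write(ciphertext)            # upload this encrypted file, not the original

# Later, after downloading the ciphertext, the same key recovers the data.
assert fernet.decrypt(ciphertext) == plaintext
```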
3. Ensure your provider supports the FIPS standard relevant to your organization.
If you’re a government agency or supporting contractor, you likely need a data center service provider that’s Federal Information Processing Standard (FIPS) compliant.
FIPS is a set of standards that approves cryptographic ciphers for hashing, signature, key exchange, and encryption purposes. There are four levels of FIPS security; each level specifies a tighter degree of protection. Ask your data center service provider which level of FIPS they provide and review their documentation to be sure it meets your needs.
4. Don’t neglect mobile device encryption.
With the proliferation of data on mobile devices, and Bring Your Own Device (BYOD) being adopted by many organizations, it’s tough to know exactly where your data is and what’s happening to it. It only takes one lost or stolen laptop with unencrypted data, and cyber-thieves have easy access to sensitive information. A recent example occurred at MD Anderson Cancer Center when a laptop was stolen from a doctor’s home with 30,000 highly sensitive patient records.
By encrypting data on mobile devices, you prevent these types of embarrassing issues from occurring. Require all employees to use full disk encryption for desktop and notebook computers and especially removable USB drives – even if sensitive information isn’t normally stored on them. Since all mobile devices let users enter data and receive emails, it’s important to encrypt those devices even if your corporate policy specifically prohibits employees from putting sensitive information on them. Much better to be safe than sorry in these cases.
Encryption is an important safeguard for your valuable business data. You also need a reliable data center service provider that understands these complex security issues. Learn more about secure cloud hosting. | <urn:uuid:3d106a0c-74d7-4f6f-9621-84b976863def> | CC-MAIN-2017-04 | https://www.greenhousedata.com/blog/4-encryption-best-practices-for-the-cloud | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00547-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.919653 | 729 | 2.515625 | 3 |
The new generation of automobiles is equipped not only with comfort, style, speed, power, function automation (X-by-wire) and fuel efficiency (think green fuel), but also with increased safety. For the longest time, innovations accommodated the drivers and passengers inside a car. Now, research and innovation in sensors, cameras, radars, and other value-added components are equipping cars to improve safety for pedestrians as well. A large number of vehicular accidents involve pedestrian casualties and deaths. With technology that can sense the presence of pedestrians in a car's path, earlier available only in high-priced cars such as the Mercedes S-Class, it is obvious that automation is coming to automobiles, and in a big way.
The North America market for pedestrian detection systems in cars was worth US$ XX.XX mn in 2014, and is expected to reach US$ XX.XX mn by 2020, at a CAGR of X.XX%. North America, like Europe, represents one of the largest markets for automotive sensors because of a high standard of living and disposable incomes that make cars with such technologies affordable.
The technology automatically scans the roadway ahead for pedestrians and issues notifications and warnings to the driver; if a safe distance is not maintained, the vehicle is designed to brake automatically. The radars and cameras are typically fitted where pedestrians are easiest to spot, for example on the windshield at the front and the bumper at the rear, and remain operational across a range of speeds, including speeds above 50 miles per hour.
An increasing number of automobile manufacturers, including Ford and Toyota, are integrating pedestrian detection sensors to enhance their product offerings. While most manufacturers integrate sensors and cameras in new cars, other sensor-based companies, such as Mobileye, offer add-on sensors that can be used in cars without pre-installed pedestrian detection systems. Some of the major manufacturers mentioned in the report are BMW, Mercedes, Audi, Volvo and Nissan.
Drivers of this market include stringent government regulations mandating reduced vehicular pollution and the use of enhanced road-safety technologies, advances in research into sensors, cameras and other automotive technologies, the establishment and scaling of sensor manufacturing across the world, and high disposable incomes that support demand for cars with such luxury features.
The high cost of systems and associated components, which makes it impractical to integrate pedestrian detection sensors into mass-market cars, along with research that is still ongoing or incomplete, and a few other factors, contributes to the bottlenecks in this market.
Who should be interested in this report? | <urn:uuid:ec978b6d-a3e5-4eed-b495-4daa6671eda4> | CC-MAIN-2017-04 | https://www.mordorintelligence.com/industry-reports/north-america-pedestrian-detection-systems-market-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00391-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937357 | 550 | 2.609375 | 3 |
Secure storage of data has always been essential for any organisation, of whatever size. In the past this involved accurate filing of paper records, and then keeping the physical archive secure – whether it was simply locking a filing cabinet, or guarding an entire building. Modern business technology may have virtualised much of this function, but the principle remains the same: preserving an accurate record of business activity, and ensuring that it is readily accessible to those who require it.
What has changed, however, is the regulatory environment within which many organisations now operate. Corporate governance legislation demands that certain information is retained securely, particularly when it relates to the financial management of the company and the manner in which it interacts with customers. Furthermore companies are required to manage their operational risks effectively through business continuity, which also relies on essential information being securely stored. As a result of recent high-profile cases of infringements, the regulators have become more vigilant, focusing on preventing any breaches, rather than post facto investigations. As a result secure storage and the protection of stored data has zoomed up the corporate agenda, and organisations need an effective policy for managing it.
There are three elements to any policy: people, processes and technology. It is tempting to focus almost exclusively on the IT, at the expense of everything else, and it is easy to see why. There are numerous technologies available for securing storage that operate at several levels. The data that is being stored can itself be secured through the use of encryption; digital certificates and watermarks; file splitting; or even highly locked down pdfs that prevent records being tampered with once they have been created and saved. In addition, the storage systems themselves can be protected. A new generation of wide area and caching systems can be used in conjunction with encryption technologies to preserve data when at rest, in transit or at presentation. Record management systems and storage-specific WORM (Write Once Read Many) products are also available to enhance archiving and storage security.
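One of the simplest software techniques in that toolbox, a keyed digest that makes any later modification of a stored record detectable, can be sketched in a few lines with Python's standard library. The record content and key handling below are placeholders for illustration, not a complete archiving product.

```python
# Tamper-evidence for an archived record: store an HMAC alongside the record,
# and recompute it on retrieval. Any change to the record (or the digest)
# will cause verification to fail. Key handling here is a placeholder.
import hmac
import hashlib

ARCHIVE_KEY = b"keep-this-in-a-key-management-system"

def seal(record: bytes) -> str:
    """Return a hex digest to be stored with the record when it is archived."""
    return hmac.new(ARCHIVE_KEY, record, hashlib.sha256).hexdigest()

def verify(record: bytes, stored_digest: str) -> bool:
    """True only if the record is byte-for-byte what was originally sealed."""
    return hmac.compare_digest(seal(record), stored_digest)

record = b"2005-09-08: customer complaint #4411 resolved by J. Smith"
digest = seal(record)

print(verify(record, digest))                       # True
print(verify(record + b" (edited)", digest))        # False: tampering detected
```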
But, no matter how intelligent and sophisticated the technology, it is still subject to the whims of users. It’s much harder to change human behaviour than it is to install systems. Ignoring the other two elements of the policy – the people and the processes – will inevitably compromise the capability of the technology to protect stored documents, databases and other information. Any policy must therefore take into account the way that employees currently work and should not constrict their ability to carry out their day to day tasks by introducing overly complicated procedures, and unnecessary red tape. People will simply find the easiest route to carrying out their job: and if that means bypassing the security policy then that is what the majority will do. If major behavioural changes are required, then these need to be carefully planned and gradually introduced.
Consider this scenario: a busy senior executive gives his PA his password to check his email, and with it all his access privileges to stored data. It’s not an uncommon event, but it does present a potential security risk. Even if a policy forbids this, the chances are it will still happen, simply because it is the most convenient way for the senior executive to fulfil his role.
When it comes to writing the policy and considering the procedures required, the business needs to answer several questions. First of all: what gets stored? Clearly it is impractical to store everything – indeed it runs the risk of breaching either the Data Protection or the Human Rights Acts. So choices need to be made. Organisations also need to ask themselves where the information will be held. If only the essential documents are stored, the implication is that they will need to be retrieved at some point. Accessing them in the future is going to be much more time consuming and inefficient if their whereabouts isn't planned and recorded – not knowing where corporate knowledge is held is just as dangerous as not having good data security policies.
Which leads to the next question: what happens to the data once it has been stored? Who is going to look at it? And, equally important, who is not? Security is all about maintaining the confidentiality, integrity, and availability of information and proving non-repudiation. All the security technology in the world comes to nothing if there is no way of controlling who can access the archives. And, with the increased need for reliable audit trails in mind, the enterprise also needs to prove who has, and who has not, been viewing saved records and, indeed, who has made copies.
Organisations need to address this issue from two angles: classifying the information, and identifying the user. Document management and identity management technologies are therefore two of the most crucial elements for any storage security policy. Most businesses underestimate how much data they produce: technology, especially email, has enabled unprecedented levels of duplication and filing anarchy. Unless a company has been exceptionally meticulous in its IT use there is usually little or no knowledge of what information has been created. Document management procedures will identify which records, files, and data need to be secured, and how long they need to be saved for.
Identifying and classifying the information involved is the first step to ensuring that only authorised personnel have access to it. The next is to allocate access privileges to individuals, based on who they are and the role they fulfil. User authentication, based on comprehensive identity management, therefore plays an essential role in keeping storage secure and will be able to provide the three As of any security measures: authentication, authorisation and audit. Furthermore, by making it easier to integrate data storage with desktop access, identity management assists the organisation to fulfil the first criteria of its security policy: making it user-friendly.
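To show how information classification and user identity come together, here is a minimal role-based access check. The roles, classifications, and records are invented for the illustration, and a real identity-management system would also handle authentication and keep a much richer audit trail.

```python
# Minimal role-based access control: documents carry a classification,
# users carry roles, and a role grants access to specific classifications.
# All names and rules are invented for the illustration.

ROLE_PERMISSIONS = {
    "finance-manager": {"public", "internal", "financial"},
    "staff":           {"public", "internal"},
    "auditor":         {"public", "internal", "financial", "hr"},
}

def can_access(user_roles, document_classification):
    """Grant access if any of the user's roles covers the classification."""
    return any(document_classification in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)

audit_log = []

def request(user, roles, doc_name, classification):
    allowed = can_access(roles, classification)
    audit_log.append((user, doc_name, allowed))     # audit trail of every request
    return allowed

print(request("alice", ["finance-manager"], "2005-accounts.xls", "financial"))  # True
print(request("bob",   ["staff"],           "2005-accounts.xls", "financial"))  # False
```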
The final consideration for the storage policy is that it must be communicated to the user group. There’s no point in having a carefully drafted plan of action if no one knows about it. Education is essential, and is the responsibility of not just the IT or risk management team, but also business managers and HR. But with everyone involved, and an effective programme of communication in place an appropriate policy for secure storage will ensure that investments made in data encryption and the like will be maximised, and that an organisation need not fear a visit from the regulators.
Electronic data is now essential for modern business and information management, and security, policies form the instruction set by which it will be used. This in turn forms one of the key foundations for best practice business operations. | <urn:uuid:3cfe2850-d01e-4e3d-b811-b4d46210557c> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2005/09/08/popular-policies-keeping-storage-secure/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00509-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953813 | 1,293 | 2.703125 | 3 |
In September 2006, people started to get sick from spinach contaminated with E. coli bacteria. As more people were hospitalized, investigators narrowed the list of suspects but couldn't pinpoint the source of the contamination.
Incomplete electronic data, unreliable verbal reports and long trudges through too many paper files hindered progress. The FBI served search warrants. Even the Bioterrorism Act, which was enacted after 9/11 in part to tighten monitoring of the American food supply, didn't work as envisioned.
| <urn:uuid:161215aa-b5f1-4b60-b398-8878dc692f2f> | CC-MAIN-2017-04 | http://www.cio.com/article/2384859/supply-chain-management/how-outdated-tech-in-the-supply-chain-threatens-your-safety.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00171-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960077 | 134 | 2.625 | 3 |
Mixed Bag of Systems
By Doug Bartholomew | Posted 2008-03-11
In the face of pushback from hospitals and physicians, the CDC has revamped its ambitious BioSense network, designed to provide early warning of a potential flu outbreak. Now the agency is offering grants to promote the sharing of data among state health departments, while building new systems to alert physicians in the event of a pandemic.
Mixed Bag of Systems
Because BioSense is used only on a limited basis around the country, the CDC continues to rely on a mixed bag of different systems—some completed, some not—to uncover a major pandemic in the making. Chief among these is the Influenza Sentinel Provider Surveillance System, which depends on some 2,200 volunteer physicians to collect information from patients who exhibit flu-like symptoms. The CDC also uses the World Health Organization’s FluNet system, a database that epidemiologists and other researchers can query to learn about flu-related activity in other countries.
Laboratory data can provide yet another indicator of unusual flu activity. “We are looking to recognize cases early on by using laboratory data and to report that data automatically to public health authorities and the CDC at the same time,” the CDC’s Dr. Lenert says. The CDC depends on the Laboratory Response Network, which connects it with state health department laboratories and other laboratories that have special training to perform influenza research.
Pandemics are monitored using a system called the Health Alert Network. “We use this system to communicate to physicians and health departments about how to report cases, what to look for and other information about specific cases,” says Dr. Steve Redd, a CDC epidemiologist.
Another system, FluFinder, was begun in 2004 during a shortage of flu vaccine. The system allows health officials to locate vaccine supplies.
With all these systems—and others—in the works, the CDC must consolidate its information systems in order to provide more timely data to its own staff and to health care professionals in the field. | <urn:uuid:ed406c7c-6c31-48e5-befa-99973ab4a427> | CC-MAIN-2017-04 | http://www.baselinemag.com/c/a/IT-Management/CDC-Issues-Pandemic-Systems-Plan/2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00473-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.9385 | 419 | 2.671875 | 3 |
Conflict, put quite simply, is a disagreement between two or more parties that creates a feeling of discomfort resulting in a change in behaviour by at least one of the parties. Sometimes the change is quite visible, as in the case where arguments break out; other times it is quite subtle and involves changes in tone of voice and/or body language.

Conflict in the workplace is a distraction at best and a disruption at worst. Managers must be aware of conflict among their team members and be prepared to provide the tools to help the team resolve it. And, when team members can't resolve conflict, managers must be prepared to step in.

This one-day conflict management workshop helps build effective conflict resolution skills in people who manage or influence others.
Benefits for the Individual
- Understand what conflict is and where it comes from
- Learn and apply the tools to manage or resolve conflict in the workplace
- Learn how to turn conflict into an opportunity for the team
- Understand your conflict management style
- Demonstrate ways to modify your style to suit the circumstance
- Deal more effectively with difficult people
Benefits for the Organization
- Increased collaboration and teamwork as conflicts are managed early
- A faster return-to-productivity once conflicts are managed or resolved
- Improved coaching skills for managers of people
- Fewer issues that result in conflict | <urn:uuid:aaed65c3-8d82-4c97-b140-9975b0145ff9> | CC-MAIN-2017-04 | https://www.globalknowledge.com/ca-en/course/116498/conflict-resolution/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00197-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946568 | 288 | 3.21875 | 3 |
Although it might sound improbable, researchers from Stanford University believe there’s evidence that California can be powered by all renewables by the year 2050.
The road map, published in the academic journal Energy, relies upon a mix of capital investment and “modest” efficiency measures.
According to a post from the Stanford Woods Institute for the Environment, utilizing solar power plants, wind turbines and ocean wave devices, geothermal, hydrogen fuel-cell cars and other sources would spur a net increase of 220,000 jobs in California, reduce energy demand by 44 percent, and save more than $100 billion annually in health costs related to air pollution.
“If implemented, this plan will eliminate air pollution mortality and global warming emissions from California, stabilize prices and create jobs – there is little downside,” said Mark Z. Jacobson, the study’s lead author and a Stanford professor of civil and environmental engineering. He is also the director of Stanford’s Atmosphere/Energy Program and a senior fellow with the Stanford Woods Institute for the Environment and the Precourt Institute for Energy.
Under the plan, 55.5 percent of the state’s energy would come from solar, 35 percent from wind and the remainder from a combination of hydroelectric, geothermal, tidal and wave energy.
Here’s one scenario, according to Stanford, for fulfilling all of California’s energy needs by 2050 (a rough tally of the combined capacity appears after the list):
- 25,000 onshore 5-megawatt wind turbines
- 1,200 100-megawatt concentrated solar plants
- 15 million 5-kilowatt residential rooftop photovoltaic systems
- 72 100-megawatt geothermal plants
- 5,000 0.75-megawatt wave devices
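The quick tally below simply multiplies out the figures listed above; it is an illustrative calculation of nameplate capacity only (capacity factors, storage, and transmission are ignored), not a result from the Stanford study itself.

```python
# Rough nameplate-capacity tally for the 2050 scenario listed above.
# These are simple multiplications of the article's figures; capacity factors
# and actual energy delivered are ignored, so this is illustration only.

scenario_mw = {
    "onshore wind turbines":        25_000 * 5,        # 5 MW each
    "concentrated solar plants":     1_200 * 100,      # 100 MW each
    "residential rooftop PV":   15_000_000 * 0.005,    # 5 kW each
    "geothermal plants":                72 * 100,      # 100 MW each
    "wave devices":                  5_000 * 0.75,     # 0.75 MW each
}

for source, mw in scenario_mw.items():
    print(f"{source:30s} {mw / 1000:8.1f} GW")

total_gw = sum(scenario_mw.values()) / 1000
print(f"{'total nameplate capacity':30s} {total_gw:8.1f} GW")
```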
This story was originally posted by TechWire. | <urn:uuid:580dfaaf-5991-422c-9cb7-5e130c8f0188> | CC-MAIN-2017-04 | http://www.govtech.com/state/GT-Stanford-Researchers-Plan-for-Powering-California-Entirely-with-Wind-Water-and-Sun.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00253-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.897198 | 369 | 3.3125 | 3 |
Three key principles of BlackBerry application design
Active people rely on their BlackBerry devices to keep them informed and up to date. Make sure that status, notifications, new information, and frequently used actions and content are easily accessible.
- Keep important information, such as status, visible so that users don't have to look for it.
- Keep menus short and put infrequently used items in options.
- Balance the information density on each screen.
- Don't place frequently used items at the bottom of a list or out of view.
Make users confident in the information that they receive when they use your application. When users know the status of information, such as when a message is sent, they can feel confident that the application is doing what they want.
- Keep all information and options for a required task visible.
- Provide clear, concise information that helps users perform tasks.
- Give users the freedom to explore by allowing them to undo and redo actions.
- Provide feedback when the application performs what users request.
- Don't allow dead ends. Users should always have a route forward or an alternate way of interacting with the application.
- Design workflows to help users avoid errors. Provide confirmations for critical tasks.
- Help users recognize, diagnose, and recover from errors by suggesting a solution.
- Create Help that is easy to search, focused on user tasks, and lists concrete steps.
Create applications that users are willing to use and try right away. Clean and organized layouts, appealing aesthetics, minimalist design, and reduced complexity make applications more approachable. Since a wide range of people use BlackBerry devices, design your application to cater to both experienced and inexperienced users.
Make screens, layouts, and information easy to understand so that users can learn the application and get started right away. Use real world concepts and metaphors to make your application easier to understand and learn. Handle complexity using progressive disclosure so that users are not overwhelmed. Making the application look great and easy to understand gets people using it.
- Create a simple design that allows users to find what they want quickly and easily.
- Reduce the number of steps that users need to take to achieve their goals.
- Communicate clearly using concise, unambiguous labels and commands.
- Place the most frequently used tasks on the screen. Include additional tasks in the menu or on subsequent screens.
- Present information as users need it.
Aesthetics and minimalist design
- Avoid visual clutter. Limit the use of color and use consistent geometry. Chunk large amounts of information by grouping similar information.
- Use animations and graphics to enhance user understanding and support the metaphors in your application.
- Include accessibility requirements to support users with visual, hearing, or motor disabilities or impairments.
- Make sure that your application scales properly when users change settings like font size. | <urn:uuid:6cde3e4c-4e0c-4db9-bb36-3ac7ed93e70a> | CC-MAIN-2017-04 | https://developer.blackberry.com/design/bb7/three_key_principles_of_bb_application_design_1211051_11.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00399-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.897808 | 582 | 2.5625 | 3 |
Believe it or not, someday your desk lamp might also send high-definition video to your TV.
Last month, engineering professor Harald Haas of the University of Edinburgh in Scotland demonstrated a cutting-edge technology that allows an LED light bulb — the kind found in millions of U.S. homes — to transmit high-speed data streams.
Speaking during the TEDGlobal conference, Haas said that the world is already running out of radio spectrum used to send wireless communications, and the world’s 1.4 million cell towers are expensive to maintain and operate. Sending that data through visible light, which is a much larger band of spectrum, could satiate world demand for connectivity.
Watch Haas present his ideas on LED light bulbs and high-speed data transmission in this video. | <urn:uuid:3487ab35-a611-4161-832b-f2863914b741> | CC-MAIN-2017-04 | http://www.govtech.com/technology/Watch-LED-Light-Bulbs-Transmit-High-Def-Video--VIDEO.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00392-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931433 | 160 | 2.921875 | 3 |
Google announced last year a philanthropic effort it called Project 10 to the 100. (10 to the 100 calculated out is a googol.) Now it’s time to vote for the idea you’d most like to see established.
The project has been scaled back a bit, from the quite ambitious plan that was originally announced to the more generalized one now in place, but Google still hopes to improve the world by dedicating $10 million toward up to five of the projects. Along with this change in ambitions, the voting phase and announcement of the submitted ideas were delayed from January 27th to September 24th.
Project 10^100 accepted idea submissions until Oct. 20th, 2008. Ideas were to be in these categories:
- Community: How can we help connect people, build communities and protect unique cultures?
- Opportunity: How can we help people better provide for themselves and their families?
- Energy: How can we help move the world toward safe, clean, inexpensive energy?
- Environment: How can we help promote a cleaner and more sustainable global ecosystem?
- Health: How can we help individuals lead longer, healthier lives?
- Education: How can we help more people get more access to better education?
- Shelter: How can we help ensure that everyone has a safe place to live?
- Everything else: Sometimes the best ideas don’t fit into any category at all.
with this criteria used to determine a “good” idea:
- Reach: How many people would this idea affect?
- Depth: How deeply are people impacted? How urgent is the need?
- Attainability: Can this idea be implemented within a year or two?
- Efficiency: How simple and cost-effective is your idea?
- Longevity: How long will the idea’s impact last?
154,000 ideas were submitted, and Google has lumped all the ideas into more general themes. Now Google wants you to vote on which of these themes you want to see enacted the most. What would make the most difference? What would improve your neighborhood, community, nation, or the world?
You can vote on the following ideas:
- Drive innovation in public transport
- Make educational content available online for free
- Build real-time, user-reported news service
- Create more efficient landmine removal programs
- Build better banking tools for everyone
- Collect and organize the world’s urban data
- Work toward socially conscious tax policies
- Encourage positive media depictions of engineers and scientists
For more information about Project 10^100, go straight to the source and watch the YouTube video Google produced to introduce the project.
The deadline for voting is October 8th, 2009.
Go Vote for the idea you’d most like to see accomplished. | <urn:uuid:18cb3ed1-8920-4ddd-abe1-34d0f87050e7> | CC-MAIN-2017-04 | https://www.404techsupport.com/2009/09/googles-project-10-to-the-100-time-to-vote/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00474-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944032 | 577 | 2.5625 | 3 |
Joined: 20 Jul 2008 Posts: 19 Location: Schenactady, US
We know that VARCHAR is a variable char and CHAR is a fixed char. If I declare CHAR(50) then 50 bytes will be allocated and used.
If I declare VARCHAR(50) and give the length as 10, then only 10 bytes will be allocated and used. | <urn:uuid:a763daac-cd9e-46c5-9900-fd2057c630d1> | CC-MAIN-2017-04 | http://ibmmainframes.com/about34609.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00134-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.866794 | 79 | 2.625 | 3 |
By Sam Grobart
June 13, 2013
The age of wearable computing is upon us: There are now wristbands from Nike (NKE), clip-on devices from Fitbit, and eyewear from Google (GOOG). We’ve come this far because we’ve been able to shrink computing power from something the size of a room to a box that sat atop a desk, to a smaller box that fits in the palm of our hand, to now an even smaller box we can wear on our bodies. But they’re still boxes, more or less: Rigid devices that stick out because they don’t conform to the human shape.
A startup in Cambridge, Mass., called MC10 aims to change that. The 70-person company is developing a manufacturing technology that will allow digital circuits to be embedded in fabric or flexible plastic. MC10’s approach means we will no longer “wear” technology like jewelry but have it sit unobtrusively on our skin or inside our bodies. “By embedding technology in bendable, stretchable materials, you can start to think about entirely new form factors for electronics,” says Benjamin Schlatka, a co-founder of MC10.
The BioStamp is MC10’s first flexible computing prototype. It’s a collection of sensors that can be applied to the skin like a Band-Aid or, because it’s even thinner than that, a temporary tattoo. The sensors within collect data such as body temperature, heart rate, brain activity, and exposure to ultraviolet radiation. Using near field communication—a wireless technology that allows devices to share data (think E-ZPass)—the BioStamp can upload its information to a nearby smartphone for analysis.
Besides being unobtrusive, a device such as the BioStamp can be worn constantly (each lasts about two weeks), which changes the nature of medical diagnosis. Until now, understanding what’s happening inside a body only happens when that body is being actively examined. Implantable sensors can provide full-time monitoring. “You want it to be happening in the background, without thinking about it,” says MC10 Chief Executive Officer David Icke, who worked in the chip and cleantech industries before joining MC10 just over four years ago. “The idea behind continuous pickup of information is you get access to health care when you need it.”
This kind of constant monitoring fuels sci-fi visions of the future, when an ambulance may pull up next to you because the implanted sensors in your body are picking up the earliest indications of a heart attack. The BioStamp is expected to cost less than $10 per unit, and MC10 aims to have a commercial product in the next five years.
MC10 is developing another device that will be available sooner. The Checklight measures velocity and impact to help diagnose concussions in sports. Although not flexible, it’s quite small (about the size of a camera’s memory card) and can be tucked into a skullcap and worn under any type of helmet. Checklight was developed with Reebok (ADS), who will begin marketing it later this year. “A lot of the products we try to create are transparent in their use but apparent in their effectiveness,” says Paul Litchfield, Reebok’s vice president for advanced products. “If you take these hard, plastic pieces and make them work organically with the human body, the sky’s the limit as to what they can do.”
As it has with Reebok, MC10 plans to license its technology to third parties that have the scale and expertise to bring products to market. “We think of ourselves as a latter-day Intel (INTC),” says Icke. “We want to power the next generation of wearable electronics, no matter where they come from.”
Another version of the technology in the BioStamp is used in a catheter being developed with Medtronic (MDT), a maker of medical devices that’s an investor in MC10. The catheter can be inserted through a vein in the leg and run up into a patient’s heart, inflated like a balloon to expose its sensor-laden surface, and then used to collect electrical data about the heart’s rhythm, which can be useful to electrophysiologists when diagnosing rare occurrences of tachycardia. Tests on humans are expected to start within a year. “Today’s catheters don’t have the kind of electronics that we take for granted in many of our consumer devices,” says Schlatka. “By adding that intelligence, doctors can make better decisions about how they are performing the procedure.”
The applications go beyond health care. At AllThingsD.com’s D11 tech conference last month, Regina Dugan, senior vice president for advanced technology and products at Motorola Mobility (GOOG), demonstrated how MC10’s BioStamp could be used to verify a person’s identity to a computer or mobile device. Users now rely on key chain fobs or credit-card-size displays that authenticate a user’s access. But wearing a flexible microprocessor that contains an encrypted code could put that function directly on your skin. “Electronics are boxy and rigid,” says Dugan. “Humans are curvy and soft.”
The bottom line: Startup MC10 miniaturizes medical diagnostic devices and has enlisted big-name partners in the medical and sports world.
Grobart is a senior writer for Bloomberg Businessweek. Follow him on Twitter @samgrobart. | <urn:uuid:eb4c7ecc-935d-401d-bd5e-5d21153cfe75> | CC-MAIN-2017-04 | http://www.northbridge.com/mc10s-biostamp-new-frontier-medical-diagnostics | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00042-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93792 | 1,200 | 2.875 | 3 |
An introduction to digital signal processing, software radio, and the powerful tools that enable the growing array of SDR projects within the hacker community, this course takes a unique "software radio for hackers" approach, building on the participants' knowledge of computer programming and introducing them to the forefront of digital radio technology. Participants will learn how to transmit, receive, and analyze radio signals and will be prepared to use this knowledge in the research of wireless communication security. Each student will receive a HackRF One software defined radio transceiver, a $300 value.
- Introduction to Software Defined Radio
- Exercise: Finding a Signal
- Complex vs. Real Signals
- Exercise: Working with Complex Signals (part 1)
- Exercise: Working with Complex Signals (part 2)
- Aliasing and Sampling Theory
- Exercise: Transmission and Simulation
- Exercise: Digital Filters
- Exercise: Replay
- Exercise: Modulation Identification
- Reverse Engineering
- Exercise: Reverse Engineering
- Decoding Digital Signals
- Exercise: Decoding
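As a small taste of the kind of complex-baseband signal handling the exercises above involve, the snippet below generates a complex tone, frequency-shifts it, and applies a crude low-pass FIR filter. It is not course material and assumes only the numpy package; the sample rate and frequencies are arbitrary choices for the illustration.

```python
# A small taste of complex-baseband signal handling with numpy
# (illustrative only; not taken from the course materials).
import numpy as np

sample_rate = 1_000_000                      # 1 MS/s, a plausible SDR sample rate
t = np.arange(0, 0.001, 1 / sample_rate)     # 1 ms of samples

# A complex exponential at +100 kHz: unlike a real sine wave, its spectrum
# has energy only at +100 kHz, which is why SDRs work with I/Q samples.
tone = np.exp(2j * np.pi * 100_000 * t)

# Mix (frequency-shift) the tone down by 100 kHz to center it at 0 Hz.
shifted = tone * np.exp(-2j * np.pi * 100_000 * t)

# Crude low-pass FIR filter: a moving average over 64 samples.
taps = np.ones(64) / 64
filtered = np.convolve(shifted, taps, mode="same")

# Away from the filter edges, the shifted tone is a pure DC level of 1+0j.
print(np.allclose(filtered[64:-64], 1.0))
```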
Anyone with an interest in investigating the physical layer of real world digital radio communication systems.
A background in software development and an interest in security are helpful but not required.
HackRF SDR peripheral, exercise workbook, USB flash drive. | <urn:uuid:6425e4ab-896f-4fa1-840e-2f6032502123> | CC-MAIN-2017-04 | https://www.blackhat.com/asia-17/training/software-defined-radio.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00042-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.836271 | 265 | 2.9375 | 3 |
“This is a really exciting time for the field of cosmology. We are now ready to collect, simulate and analyze the next level of precision data…there’s more to high performance computing science than we have yet accomplished,” noted Astronomer and Nobel Laureate Saul Perlmutter in his Supercomputing ’13 keynote address.
In keeping with Perlmutter’s astute observation, astronomers at the University of Texas have made a series of remarkable discoveries using some of the most powerful supercomputers in existence. Massive numerical simulations carried out on Stampede, Lonestar and Ranger (now retired) reveal important details about the early Universe from the Big Bang through the first few hundred million years.
The simulations shed light on how the first galaxies formed, and more specifically, how metals in the stellar nurseries shaped the characteristics of the stars in the first galaxies. Lead researcher Milos Milosavljevic, Associate Professor of Astronomy at The University of Texas at Austin, reported the results of this study in the January 2014 edition of the Monthly Notices of the Royal Astronomical Society.
The powerful computational tools provided realistic models of supernova blasts, helping explain the range of metallicity that exists throughout the galaxies. “The universe formed at first with just hydrogen and helium,” Milosavljevic said. “But then the very first stars cooked metals and after those stars exploded, the metals were dispersed into ambient space.”
As the ejected metals fell back into the gravitational fields of the dark matter haloes, they formed the second generation of stars. But the metals dispersed by the early supernovae blasts did not distribute in a uniform pattern.
This incomplete mixing explains the variation in metal distribution among early stars; some were metal-rich, others metal-poor.
Another important consideration is the way that the heavier elements emerged from the originating blast. Earlier research assumed the process occurred as a neat spherical blast wave, but the new model suggests that the ejection of metals from a supernova was much more chaotic, with shrapnel shooting in every direction.
Milosavljevic maintains that accurately representing this explosion is “very important for understanding where metals ultimately go.”
In astronomical terms, time translates to distance. In order to see the early universe, astronomers have to peer into the deepest recesses of space, and this takes extremely powerful telescopes. Astronomers are hopeful that we will be able to observe some of these early galaxies with the James Webb Space Telescope (JWST), set to launch in 2018.
One important question surrounding the JWST project is whether to focus on one particular spot or instead employ a mosaic approach to survey a larger area. The lessons learned in this study will be used to guide this strategy. | <urn:uuid:449d51c8-a6d3-4c97-931e-fff1820d5076> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/02/07/heavy-metal-shapes-early-cosmos/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00254-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926787 | 575 | 3.515625 | 4 |
Nov 95
Level of Govt: Local
Function: Land Use Planning
Problem/situation: There was no way for most database users in the city of Raleigh and Wake County, N.C., to access multiple levels of land-use information for analysis and mapping.
Solution: City and county GIS departments initiated a joint project and developed a Multi-access Parcel System.
Jurisdiction: Raleigh, N.C., Wake County, N.C., Lee County, Fla.
Vendors: IBM, Graphic Data Systems Inc.
Contact: Charles Friddle, Wake County geographic information systems director, 919/856-6375. Colleen Sharpe, geographic information systems director, Raleigh, 919/890-3636.
By Bill McGarigle, Contributing Writer

Until recently, there was no simple way for most database users in the city of Raleigh and Wake County, N.C., to access multiple levels of land-use information for analysis and mapping. Wake County GIS Director Charles Friddle explained that "departments needing information from the assessor's file could access attribute data via the IBM mainframe, using their PCs and Graphic Data Systems Inc. (GDS) software, but were not able to link that data with the graphics. To bring up a map described by the data required a separate search from a GIS workstation. All that took time." In response, city and county GIS departments initiated a joint project with system supplier GDS to find a solution. The result was the Multi-access Parcel System (MAPS) - a software program that Friddle said "gives us a quantum leap in system accessibility." Administrators and staff alike agree that MAPS has significantly cut the time needed to research records, eliminated much data-storage redundancy, and provided real-time data communication between municipal and county agencies.

BACKGROUND
For Raleigh and Wake County, the capability for rapid, simplified access to land-use information was a much-needed tool. Already home to 500,000 people and growing by 3 percent annually, the county is attracting increasing numbers of high-tech industries in electronics and biotechnology. Keeping up with the combined urban and industrial growth requires city and county agencies to track approximately 14,000 new construction starts a year; research the county's 180,000-plus parcels and identify each one over 10 acres to determine suitability for commercial and industrial development; provide maps in response to requests for sites with specific geographic criteria; and scramble to find locations for new schools. In addition, subdivisions, zoning changes, and development of new parcels currently produce between 6,500 and 10,000 land transactions annually, all requiring timely appraisal. Part of the difficulty in accessing information, explained Raleigh GIS Manager Colleen Sharpe, also stemmed from agencies using different software applications. "The one used by the county assessor to access the mainframe looked completely different from ours over here. When our employees went over to Wake County, they needed to know how to use the computer one way. When their people came over here, they were looking at another computer and a different application." Although the county and the city had been developing applications, adding information to the databases, and regularly upgrading the system since 1989, the GIS departments determined that the rate of growth and land-use changes in the county required larger numbers of staff to have a faster, simpler method of accessing various databases from desktop PCs. The two departments discussed the problems collectively and began looking at ideas other state and local agencies were using. Through GDS, they learned that Lee County, Fla., had a comparable IBM mainframe, was using the same GIS software, and had the same needs. Friddle pointed out that Lee County had also been working on a solution. "They had developed the connection between the IBM mainframe and the digital processor that was running the GIS system. We were able to take what they did and modify it for our needs. It took a lot of work. There were similarities, but there were also many differences. It gave us a place to start."
The two departments subsequently proposed a joint project to develop their own program, in concert with GDS. The MAPS project, as it was known, was funded by the county and the city of Raleigh. "Both contributed staff resources," Sharpe said. "I worked on it, one of their programmer analysts worked on it, then we had our GDS software-applications person working on it. The project was one of our best examples of cooperation." Despite hardware problems along the way, MAPS was up and running by the summer of 1995, nine months after the project began. "Seamless" is how Sharpe described the MAPS application. "The user doesn't know or need to know what database to access to get the information. It's very user friendly - just follow the menu, point and click. The program was intended to have a Windows look, require a minimal amount of typing and only two to four hours instruction. We developed it pretty much as planned." Friddle stresses that MAPS doesn't allow an accessed database to be altered - it enables users to produce their own maps, do analyses, and make copies from their own PCs. "A lot of municipalities have a GIS system with PCs, workstations, or X-terminals accessing one digital processor. But what we have developed here is a network of PCs, X-terminals and workstations that enable users to access a wide range of databases on various platforms - whether it's an IBM mainframe, the county's digital processor, or the city of Raleigh's digital processor," Friddle said. "With our system, you don't have to know where the data is; you just ask for what you want, and the system tells you where the data is. You don't have two or three terminals sitting on your desk. You can do everything from one device." "MAPS lets users with only one piece of information access the entire database," Sharpe added. "If you want to know where someone lives, and you have only the person's name, type it in, and a map comes up on the screen. If, at the same time, you want ownership information, you can get that also. If you have only a map and want to know who lives in a certain parcel, you can access the person's name and the associated ownership information - all through MAPS." Assistant County Manager Wally Hill uses MAPS to access a variety of real estate and tax-accounting data on the mainframe. "The program is seamless for people like me, who don't have or need GIS training. I don't have to know where the information comes from - it's all accessible. Not only can I see a map of the site I'm interested in, but I can pull up all the existing information on it, using just this one program." Hill believes a version of MAPS may eventually provide citizens with a view-only access to public records. ELUDAS A valuable spin-off from the MAPS project is the Existing Land-Use Derivation Assignment System (ELUDAS) - a program that translates the county assessor's codes into attributes other agencies can use to plot land-use maps. The program was written by Scott Ramage, a student who interned for a year with the county planning department. "ELUDAS is a single-focus program," Friddle explained. "It is used to identify existing land use throughout the county. MAPS, on the other hand, allows users to view, query, analyze and report a wide variety of information - including the land-use information generated by ELUDAS." 
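To illustrate the kind of single-query, attribute-to-map linking the two directors describe, here is a toy sketch in Python (the parcel numbers, field names and records are invented for illustration only - this is not the MAPS or GDS software):

```python
# Toy illustration of linking assessor attribute records to parcel geometry,
# in the spirit of the MAPS queries described above. The schema and records
# are invented; they are not the GDS/MAPS data model.

parcels = {
    "0792-01-1234": {"owner": "J. Smith", "acres": 12.4, "centroid": (35.78, -78.64)},
    "0792-01-5678": {"owner": "A. Jones", "acres": 0.3,  "centroid": (35.79, -78.66)},
}

def find_parcels_by_owner(name: str) -> dict:
    """Given only a person's name, return parcel IDs, ownership data and map locations."""
    return {pid: rec for pid, rec in parcels.items() if rec["owner"] == name}

def owner_of_parcel(pid: str):
    """Given only a parcel ID (e.g. picked from a map), return the ownership record."""
    return parcels.get(pid)

print(find_parcels_by_owner("J. Smith"))   # name -> parcels + locations for mapping
print(owner_of_parcel("0792-01-5678"))     # map selection -> ownership information
```

The value MAPS added was precisely this kind of two-way lookup - from attribute data to the map and back - without the user knowing which database holds what.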
Friddle added: "In the past, determining existing land use meant the planning department had to send someone out in the field with colored pencils to color maps, then graphically enter that information into the GIS. Now, the ELUDAS program enables planners to access the assessor's files for all that information without ever leaving their desks." EXPANSION VS. COST As the system is presently configured, Raleigh and Wake County agencies have real-time access to each other's databases via a T1 telecommunication line. At the time of this report, nearby towns of Cary and Fuquay-Varina can only download information from county agencies but expect to be in the data-sharing loop before the end of the year. According to Mike Jennings, a Wake County planning director, many small towns throughout the county find the hardware connections needed to access the system much too expensive at this time. However, the county provides maps and data on disk to all municipalities, without charge. OTHER BENEFITS Sharpe sees the MAPS project as an example of the benefits that can come from close cooperation between municipal and county governments. "We save our taxpayers money by not duplicating data; we use each other's data, free of charge, and reduce the time needed to respond to requests for information and other services." OUTLOOK Hill believes MAPS holds much promise for being available to a wide range of non-technical personnel, including himself. "I use it most of the time, except in instances that require expertise in GIS to conduct an intelligent data search, which MAPS doesn't do. Then, I have to call our GIS folks and ask for help. I don't have the knowledge of the software to do that myself." Hill concedes that, although MAPS saves time and cuts down redundancy in data storage, "right now, the real dilemma is how do you make GIS easy enough to use so that you don't always have to have your staff right there to help you do things." When asked about a version of MAPS for use in libraries, Jennings pointed to recent budget cuts. "Right now there's no money to provide either the hardware or a simplified query program that enables citizens to access public information. One thing we are seriously pursuing, however, is creating regional service centers throughout the county. The assessor wants to put a terminal in each of these so that people in outlying towns won't have to come downtown to get property information. But we're not talking about a lot of expansion," Jennings cautioned, "not after the county board of commissioners cut budgets by 20 percent last year." Tight budgets notwithstanding, the GIS program is not static, Friddle stressed. "MAPS and ELUDAS are only the most recent developments." | <urn:uuid:711e311e-f36c-4848-8845-fb719181cc65> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Merging-CityCounty-GIS-Efforts.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00162-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958281 | 2,127 | 2.53125 | 3 |
Imagine if you could see as well at night as the wilderness predators lurking outside your campfire hoping to eat you.
Some scientists at an independent organization called Science for the Masses tested a molecule that appears to enhance night vision when applied via eye drops.
The molecule, called Ce6, is no stranger to researchers. As The Independent writes, Ce6 is "a natural molecule that can be created from algae and other green plants" that also "is found in some deep sea fish, forms the basis of some cancer therapies and has been previously prescribed intravenously for night blindness."
The California-based "citizen science" organization decided to take the research to the next level by putting drops containing 50 microlitres of Ce6 into the eye of a biochemical researcher acting as a test subject. Here's how they describe it:
After 2 hours of adjustment, the subject and 4 controls were taken to a darkened area and subjected to testing. Three forms of subjective testing were performed. These consisted of symbol recognition by distance, symbol recognition on varying background colors at a static distance, and the ability to identify moving subjects in a varied background at varied distances. ...
The Ce6 subject consistently recognized symbols that did not seem to be visible to the controls. The Ce6 subject identified the distant figures 100% of the time, with the controls showing a 33% identification rate.
It's. Like. A. Superpower!
Or maybe not. Here's Gabriel Licina, the "Ce6 subject," on his amazing temporary night vision:
Sorry everyone, but despite how thrilled people are about this, the effects are way more subtle than the media would lead you to believe. This is just enhanced night vision. Dark becomes dim, it's not all Riddick up in here.
The next step was to moisten the eyes of biochemical researcher and willing guinea pig Gabriel Licina with 50 microlitres of Ce6.
This story, "These eye drops can give you better night vision, but don't expect to go all Riddick" was originally published by Fritterati. | <urn:uuid:5342ffc2-88b4-4d86-8cd2-5e858a11029a> | CC-MAIN-2017-04 | http://www.itnews.com/article/2908352/these-eye-drops-can-give-you-better-night-vision-but-dont-expect-to-go-all-riddick.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00282-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958306 | 435 | 2.6875 | 3 |
Cloud computing is a type of computing that relies on shared computing resources rather than local servers or personal devices to handle applications. The aim of cloud computing technology is to apply supercomputing or high-performance computing power - usually used by defense and scientific institutions to perform tens of trillions of computations per second - to consumer-oriented applications. Such applications include business financial software like QuickBooks accounting software, tax software, customer relationship management (CRM) software, and other business utility software. Cloud computing technology is even used to power massive computer games.
Cloud computing comprises a network of large groups of servers with specialized connections that spread data-processing jobs across them. This shared information technology infrastructure contains large pools of systems that are linked together. Both the infrastructure connecting the computer systems and the software that makes cloud computing work must be robust. Virtualization techniques are often deployed to extract the maximum computing power from that shared hardware.
Cloud computing offers numerous capabilities to its users. Elasticity is arguably the single most important attribute of cloud computing technology. You might start running your application on just a single server, but in no time cloud computing enables you to scale it to run on hundreds of servers. Once the traffic and usage of your application decrease, you can scale back down to tens of servers. All of this happens almost instantly, and the best thing is that neither your application nor your customers even notice. This dynamic capability to scale up and scale down on the fly is called elasticity. Elasticity creates an illusion of infinity: though nothing in this world is infinite, your application can obtain as many resources as it demands.
Elasticity is one of the biggest selling points of cloud computing. In traditional web hosting, when you want to add another server to your web application, your host has to provision it manually; adding servers and configuring the network topology introduces a time lag that your business cannot afford. Most cloud computing vendors offer an intuitive way to manipulate your server configuration and topology. Elasticity is achieved through virtualization: scaling up means attaching more server virtual machines (VMs) to an application, and scaling down means detaching VMs from it.
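As a rough sketch of how an elasticity policy can decide when to add or remove VMs, consider the following Python fragment (the target utilization, load figures and VM limits are invented, and the scale actions are simply printed rather than calling any real provider's API):

```python
# Minimal auto-scaling sketch illustrating the elasticity idea described above.
# Thresholds and load figures are hypothetical; no real cloud API is called.

def desired_vm_count(current_vms: int, avg_load_per_vm: float,
                     target_load: float = 0.6,
                     min_vms: int = 1, max_vms: int = 100) -> int:
    """Return how many VMs the application should run, given observed load."""
    total_load = current_vms * avg_load_per_vm        # total work, in "VM units"
    needed = max(min_vms, round(total_load / target_load))
    return min(needed, max_vms)

def reconcile(current_vms: int, avg_load_per_vm: float) -> int:
    """Scale the pool up or down and return the new VM count."""
    target = desired_vm_count(current_vms, avg_load_per_vm)
    if target > current_vms:
        print(f"scale up: +{target - current_vms} VMs")
    elif target < current_vms:
        print(f"scale down: -{current_vms - target} VMs")
    return target

if __name__ == "__main__":
    vms = 1
    for load in (0.9, 0.95, 0.7, 0.2, 0.1):   # observed average load per VM
        vms = reconcile(vms, load)
        print(f"running {vms} VM(s)")
```

The point of the sketch is the feedback loop: observed load drives the desired VM count, and the pool is reconciled toward that number on every pass.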
Cloud data centers typically run thousands of powerful servers that offer a lot of storage and computing power. You never know which physical server is responsible for running your code and your application. In most cases, the application you deploy may be powered by more than one server running within the same data center. You cannot assume that the same physical server will run the next instance of your application. Servers are treated as a commodity resource for hosting the virtual machines. There is no affinity between a virtual machine and a physical server. Each server in the cloud data center is optimally utilized as and when needed. | <urn:uuid:74252627-32c8-4ade-bae1-919d543df141> | CC-MAIN-2017-04 | http://www.myrealdata.com/blog/173_cloud-computing-is-flexible-computing | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00035-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926739 | 583 | 3.3125 | 3 |
As HDMI technology and HDMI products have matured, they have become widely used in everyday life. The growing demand for transmitting high-performance video signals remains a challenge, however, because HDMI data is corrupted when the transmission distance is too long. As a result, many terminal devices - outdoor big screens and long-distance video conferencing systems, for example - cannot perform well because they are limited by transmission distance. To solve this problem, engineers turned to fiber cables, which offer high bandwidth and long reach. Consequently, many fiber-based video transmission products have come onto the market. Two of the main products are the video fiber converter and the video fiber extender. This article describes some of the differences between them.
What is a Video Fiber Converter?
A video fiber converter is an Ethernet media conversion unit that carries electrical signals over twisted pair for short distances and optical signals over long distances. The unit is generally a compact mini box, also known as a photoelectric converter or fiber media converter. A video fiber converter is typically used in network environments that Ethernet cable cannot cover, where fiber cables must be used to extend the transmission distance. It is usually deployed in the access layer of a fiber broadband MAN (Metropolitan Area Network), and it plays an important role in connecting last-kilometer fiber optic lines to the wider network.
What is a Video Fiber Extender?
A video fiber extender is a device used for long-distance video data transmission. It serializes the electrical signal using a SerDes (Serializer/Deserializer); a SerDes converts a parallel data source to one or more serial data lanes and vice versa. Video fiber extenders are generally used in pairs: a transmitter (Tx) and a receiver (Rx). The transmitter sends the optical signal over the fiber, while the receiver restores the optical signal to an electrical signal. The main purpose of a video fiber extender is to extend the signal transmission distance - put simply, its core function is converting electrical signals to optical signals and back again. Because video fiber extenders are usually built with an HDMI, SDI, VGA or DVI interface, a unit with an HDMI interface is usually just called an HDMI video extender.
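To make the SerDes idea concrete, here is a minimal Python sketch of serializing parallel bytes into a serial bit stream and recovering them again (an illustration of the principle only - real SerDes hardware also performs line coding, clock recovery and equalization, which are omitted):

```python
# Minimal illustration of the SerDes principle: parallel bytes are serialized
# into a single stream of bits and then deserialized back into bytes.

def serialize(data: bytes) -> list:
    """Convert parallel bytes into a serial stream of bits (MSB first)."""
    bits = []
    for byte in data:
        for position in range(7, -1, -1):
            bits.append((byte >> position) & 1)
    return bits

def deserialize(bits: list) -> bytes:
    """Reassemble the serial bit stream into parallel bytes."""
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

frame = b"HDMI"                        # stand-in for one chunk of video data
stream = serialize(frame)              # what the transmitter (Tx) puts on the fiber
assert deserialize(stream) == frame    # what the receiver (Rx) recovers
```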
Comparison of Video Fiber Converter and Video Fiber Extender
Common point: both devices perform photoelectric (electrical-to-optical) conversion.
The video fiber converter performs only photoelectric conversion; it does not re-encode or otherwise process the data. It is usually used for Ethernet, runs the 802.3 protocol, and serves only point-to-point connections.
The video fiber extender is much more complex than the converter. In addition to photoelectric conversion, it also multiplexes and demultiplexes the data or signal. Video fiber extenders are mainly used in video transmission applications that demand timeliness, such as high-security monitoring, distance learning and video conferencing. To meet the needs of multi-service applications, an extender can also transmit and control switching values, voice and Ethernet signals at the same time. Compared with the converter, its functions are more powerful and more varied.
Even though there are real differences between them, their general appearance and function are very similar. Because both belong to the video-over-fiber transmission family, many manufacturers are in the habit of calling both of them video fiber converters. It is therefore worth confirming a product's features and working principle before buying either a converter or an extender.
Get More Information from Fiberstore
Fiberstore offers a full range of video fiber transmission products with high performance at reasonable prices. All optical and electrical interfaces on our devices comply with international standards and are suitable for a variety of operating environments. To meet customers' needs, we also offer customization. To learn more, please visit our website or contact the Fiberstore team! | <urn:uuid:de21cbb0-cc46-40be-ba8b-dd9a866d288e> | CC-MAIN-2017-04 | http://www.fs.com/blog/video-fiber-converter-vs-video-fiber-extender.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00063-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.911394 | 852 | 2.953125 | 3 |
HTML5, the fifth major revision of the Hypertext Markup Language, is the new World Wide Web (WWW) standard for creating mobile browser content. HTML5 appears to be the ideal solution for mobile developers - if not now, then in the future on mobile devices (and possibly on computers too).
HTML5 is a major step forward for mobile commerce. It not only makes the web platform more user-friendly for creating web apps, but is also common enough to use for optimized mobile web applications and low-powered devices such as smartphones and tablets. With HTML5, apps no longer need to be built separately for each mobile operating system, as most platforms support the new standard. For this reason and many others, HTML5 is becoming very popular among developers. Google and Apple, as well as Adobe and Microsoft, are fully committed to supporting and using HTML5's quickly emerging web specification; but the new free platform is putting pressure on developers who have yet to decide on a move toward HTML5, as some do not see it as a complete, unified, multi-platform content enabler.
Supported on major mobile devices, HTML5 is compatible with HTML4 and XHTML1 documents. It uses the same standard syntax and displays the same standard behavior as previous versions, but HTML5 features “new open standards created in the mobile era” to provide enhanced functionality for mobile devices. It has advanced web application features that are available for most mobile browsers and devices.
HTML5-driven web apps are sure to attract more developers in the future. As of today, HTML5 enables web designers to build rich web applications and to use cross-platform development tools for audio, animation, and other visual effects, along with open application programming interfaces (APIs), all without relying on third-party browser plug-ins (like Flash).
Today, the smartphone market is dominated by Google's Android and Apple's iOS, both of which support web standards including HTML5. These two companies have relied on HTML5 as their preferred standard for creating mobile browser content.
In summary, HTML5 may be the right solution for creating and deploying in-browser content across mobile platforms, as it improves usability by adding support for video, animation, and interactivity in web pages. | <urn:uuid:79dedec7-5a67-4e98-8cca-75f9314cf815> | CC-MAIN-2017-04 | http://www.html5report.com/topics/html5/articles/270945-google-apple-embrace-html5-mobile-devices.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00457-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920632 | 476 | 2.515625 | 3 |
In the world of information technology, the term "server" can refer either to hardware (a computer) or to software. In hardware terms, a computer can be used as a server when it runs a specially designed program; that system can then provide services to other computers, to their programs and to their users. A server is often a central part of a network, carrying out particular operations at the request of clients. The processed data can then be delivered to other computers over either a local network or the Internet. | <urn:uuid:144cae7a-5b53-4359-b44d-61b4272127d9> | CC-MAIN-2017-04 | https://howdoesinternetwork.com/tag/print-server | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00393-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.904233 | 118 | 3.203125 | 3 |
In 2015, social engineering took center stage as hackers shifted from automated exploits to encouraging people to do their dirty work instead - infecting systems, stealing sensitive credentials, and transferring funds.
Research conducted by Proofpoint suggests that social engineering has become the attack technique most actively used by hackers.
Proofpoint found that 99.7 percent of attachment documents and 98 percent of URLs found in malicious emails require human interaction to infect the target. The report also found that Tuesday mornings between 9 and 10 AM were the most popular time for criminals to send out phishing campaigns, and that most social media spam generally hits individuals in the afternoon.
In 2015, cyber criminals began targeting organizations in the UK and Europe with Microsoft Office macros, a technique that first appeared in the late 90s. The report also highlights how popular ransomware was in exploit kit campaigns last year and how this trend seems to be continuing in 2016.
Overall, the report suggests that 2015 was the year attackers learned that people make the best exploits and focused heavily on social engineering tactics that lured people in and tricked them into opening an attachment, downloading an application, or handing over highly sensitive credentials.
Continuing into 2016, hackers are expected to rely on the same threat framework to conduct attacks - Actor, Vector, Hosts, Payload, and Command and Control Channel. Because curiosity often gets the best of people, hackers will continue to rely on gullibility, using individuals as unwitting pawns in schemes to attack organizations with malware, harvest key credentials, and have money wired directly to the attackers.
The best approach is to accept that human beings are fallible and will make mistakes and to recognize that checks and balances are going to be essential. Best practice-based security standards require the use of file integrity monitoring, audit log analysis and vulnerability scanning to head off problems.
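As a rough illustration of what the file integrity monitoring part of that involves, the following Python sketch hashes a baseline of monitored files and reports any later change (the monitored paths are placeholders, and commercial FIM tools track far more than file hashes - permissions, registry values, scheduled scans and so on):

```python
# Minimal file integrity monitoring sketch: hash a set of monitored files,
# store a baseline, and report any file whose contents have changed.
# The monitored paths are placeholders, not a recommended policy.

import hashlib
import json
from pathlib import Path

MONITORED = [Path("/etc/passwd"), Path("/etc/hosts")]   # example paths only
BASELINE_FILE = Path("fim_baseline.json")

def file_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_baseline() -> None:
    baseline = {str(p): file_hash(p) for p in MONITORED if p.exists()}
    BASELINE_FILE.write_text(json.dumps(baseline, indent=2))

def check_integrity() -> list:
    """Return the monitored files that no longer match the stored baseline."""
    baseline = json.loads(BASELINE_FILE.read_text())
    changed = []
    for path_str, known_hash in baseline.items():
        path = Path(path_str)
        if not path.exists() or file_hash(path) != known_hash:
            changed.append(path_str)
    return changed

if __name__ == "__main__":
    if not BASELINE_FILE.exists():
        build_baseline()
    for modified in check_integrity():
        print(f"ALERT: {modified} has been modified or removed")
```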
File Integrity Monitoring (FIM) is advocated as an essential security defense by all leading authorities on security best practices, such as NIST and the PCI Security Standards Council; it will ensure that a secure, hardened build standard is maintained at all times and, if there are any changes in underlying core file systems (such as when an unwittingly phished employee introduces malware), these will be reported in real time. | <urn:uuid:1d02480d-5de7-4931-80c0-d6a5d9ef707e> | CC-MAIN-2017-04 | https://www.newnettechnologies.com/humans-the-perfect-exploit-in-a-hackers-scheme.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00199-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958657 | 459 | 2.53125 | 3 |
The shift toward electronic voting technology has triggered debate over whether the technology is secure, and if not, how states and counties can afford to plug the security holes.
Driven by the Help America Vote Act of 2002 (HAVA), which funds the replacement of lever and punch card devices with new voting technology, many jurisdictions are replacing old voting machines with new electronic systems. These systems, called direct recording electronic (DRE) voting machines, meet HAVA's technology requirement that voters with disabilities be able to vote independently. They're also considered easier to use -- for voters and precinct workers -- than the old equipment.
Surveys from early implementations showed voters were confident that DREs recorded their ballots accurately. Since then, however, a multitude of computer scientists and activists have contested paperless DREs.
Opponents fear that DREs are susceptible to bugs and tampering, saying the current technology fails to provide a paper audit trail so voters can verify their ballots have been recorded as intended. The security of at least one DRE manufacturer's programming code has come under fire from university researchers.
Security concerns prompted some states to require that DREs be modified to print a paper record that lets citizens verify their votes. In Congress, several bills would create a national voter-verified paper audit trail (VVPAT) requirement. One bill proposed by New Jersey Rep. Rush Holt in May 2003 -- the Voter Confidence and Increased Accessibility Act -- while slow to warm up, has gained sponsorship from more than 100 representatives, including some Republicans. In early February, California Sen. Barbara Boxer introduced a bill similar to Holt's -- the Secure and Verifiable Electronic Voting Act -- that would also fund states for the addition of a printer to DREs.
But many county elections administrators see no need for a VVPAT and are concerned that such a requirement will add to the cost and complexity of elections administration during a historic fiscal crisis.
Voting Security or Voter Confidence?
At first, the notion that a hacker or computer malfunction could throw an election seemed only for the paranoid. It was more commonly referred to as an issue of "voter confidence." If many voters who currently show up and vote are part of this paranoid bunch who don't trust the machines, then maybe even fewer voters would appear at the polls in the future.
But recent scandals involving voting machine vendors caused decision-makers in states such as Maryland and Ohio to re-examine their DRE security. Maryland and Ohio ordered independent investigations after researchers at Johns Hopkins University questioned the security of programming code used by DRE manufacturer Diebold.
Reports by both states recommended that election officials and vendors make security enhancements, which Diebold said have since been made. The California Secretary of State's Office considered decertification of some Diebold systems when it found the vendor installed uncertified updates in several California counties. Several incidents where hackers infiltrated DRE vendor networks have also fueled doubts about the companies' abilities to truly secure the machines.
In 2001 and 2002, voter surveys showed that confidence in DRE accuracy hovered around 90 percent. In Georgia, where the entire state switched to DREs, a poll of 800 random respondents in December 2002 showed 93 percent were at least somewhat confident their votes were counted accurately, compared with 76 percent in 2001 -- before DRE implementation. In a few Washington counties, where electronic voting machines were piloted in late 2001, of 814 participants polled, 94 percent were confident their votes were recorded correctly.
Recent negative publicity, along with actions of decision-makers themselves, may be swaying public opinion from that confidence, said Dan Seligson, editor of electionline.org, a nonpartisan organization that provides information on election reform.
"That was a year and a half ago, before the movement and backlash against these machines really got to be a national thing," he said. "In California, I would say you would not get the same numbers."
A more recent poll in Georgia showed a slight dip in voter confidence from the previous year. In November 2003, only 80 percent were at least somewhat confident the machines produced accurate results in the 2002 election.
As 2003 ended, a few states, including California, Washington and Nevada, mandated or introduced legislation requiring a VVPAT. As 2004 began, several more followed suit.
Although deadlines prescribed by California Secretary of State Kevin Shelley and officials in other states are approaching, the current machines will have to suffice in the meantime. In California, at least 10 counties will use paperless DREs for the 2004 presidential election.
"They said that by 2006, you have to have a voter-verified paper audit trail," said Seligson. "You have to have a piece of paper with every vote by 2006, but this year they're not going to have it. The secretary of state already demonstrated that he doesn't trust these machines without a paper trail, yet I'm voting for a race as important as president on that machine."
In a position paper outlining his security directives, Shelley said he allowed nearly two and a half years for the state to be completely VVPAT-enabled because at the time of his mandate, there were no VVPAT-enabled systems certified. The lengthy procurement process and the need for poll worker training on new equipment were also cited as reasons for the long transition period. "I do not believe that expediting the implementation schedule is feasible," he wrote. "A transition period is necessary in order to assure the fair and efficient conduct of elections in California."
To soothe voters' interim concerns, Shelley required implementation of operational procedures to minimize security risks. This would include a requirement that state testers conduct parallel monitoring of DREs, in which testers set aside certain DREs for testing on Election Day -- a random selection from each model in California -- and input predetermined votes to be sure the final tally matches votes entered. The testing process, which was to be in place for California's March 2004 primaries, is videotaped to ensure accuracy.
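The comparison step at the heart of such a parallel test is simple to express; the Python sketch below illustrates it (the candidate names and vote counts are invented, and a real test also involves scripted ballots, videotaping and chain-of-custody controls):

```python
# Illustration of the parallel-monitoring comparison described above: testers
# cast a known, scripted set of votes on a set-aside machine, then compare the
# machine's reported tally with the expected tally. Names and counts are invented.

from collections import Counter

def tally(ballots: list) -> Counter:
    return Counter(ballots)

scripted_ballots = ["Candidate A"] * 12 + ["Candidate B"] * 8 + ["Candidate C"] * 5
expected = tally(scripted_ballots)

# In a real test this would be read from the DRE's results report.
machine_reported = Counter({"Candidate A": 12, "Candidate B": 8, "Candidate C": 5})

if machine_reported == expected:
    print("Parallel test passed: reported tally matches the scripted votes.")
else:
    for candidate in expected | machine_reported:
        e, r = expected[candidate], machine_reported[candidate]
        if e != r:
            print(f"Discrepancy for {candidate}: expected {e}, machine reported {r}")
```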
In his position paper, Shelley said he wants to ensure voter confidence in the election process. "I support a VVPAT not because DRE voting systems are inherently insecure, they are not," he wrote, "but rather because people understandably feel more confident when they can verify that their votes are being recorded as intended."
In the 30 days following the release of the Secretary of State's Ad Hoc Touch Screen Task Force Report in July 2003, the secretary accepted public comments. The 6,000-plus comments received ran approximately two-to-one in favor of a VVPAT, said California's Assistant Secretary of State for Communications Terri Carbaugh.
California Counties Divided
While those who took time to send comments to the Secretary of State's Office favored a VVPAT, many California county elections administrators worry it will cause problems that cash-strapped counties are ill-equipped to deal with. California faces its third straight year of budget cuts in 2005.
Ann Reed, Shasta County clerk/registrar of voters and president of the California Association of Clerks and Election Officials, said she is confident the machines are currently accurate.
"We do a lot of logic and accuracy testing before the election," she said. "We're very comfortable when we send our DREs, that they're going to record every vote, and they're going to record it accurately."
She has several concerns about Shelley's mandate. She worries that printer jams could compromise voter privacy, that the visually impaired will again be forced to use separate voting equipment, and that her county -- and many others -- will be burdened by the cost.
"This is a great expense for counties, and they're going to have to come up with the money," she said. "If the state doesn't kick in and pay for it, I don't know what we're going to do."
Reed also worried that unforeseen operational difficulties could further burden counties. "I have to have all the paper on hand, so it's going to be a storage problem. There's just a whole bunch of unknowns I'm not looking forward to dealing with."
Though Reed said she wrote a clause into the purchase contract requiring the vendor to bear 50 percent of the upgrade costs, even half of the estimated $500 per machine can amount to a formidable cost for counties already using DREs countywide.
Carbaugh of the Secretary of State's Office said other counties made similar agreements in anticipation of Shelley's decision. According to Carbaugh, a combination of federal and state funds will probably help pay for the printers, though state funding sources had not yet been identified.
Warren Slocum, chief elections officer and assessor/county clerk/recorder for San Mateo County, Calif., said his county will supplement needed funds with a voting system replacement fund the county began seven years ago. "For any election we do -- for a jurisdiction like a city, for instance -- we have a charge in the billing formula that right now is 8 cents per registered voter that we charge the entity in addition to standard things like printing and labor costs."
The fee is based on the price of equipment depreciation.
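As a back-of-the-envelope illustration of how such a surcharge builds a replacement fund, consider the following sketch (only the 8-cents-per-registered-voter rate comes from the article; the jurisdiction size, election frequency and time span are invented):

```python
# Worked example of the per-voter replacement-fund surcharge described above.
# Only the $0.08-per-registered-voter rate comes from the article; the other
# figures are hypothetical.

SURCHARGE_PER_VOTER = 0.08      # dollars, from the billing formula quoted above

registered_voters = 50_000      # hypothetical city contracting for an election
elections_per_year = 2          # hypothetical
years = 7                       # the fund in the article had been building about 7 years

per_election_surcharge = SURCHARGE_PER_VOTER * registered_voters
fund_contribution = per_election_surcharge * elections_per_year * years

print(f"Surcharge per election: ${per_election_surcharge:,.2f}")
print(f"Contribution to the replacement fund over {years} years: ${fund_contribution:,.2f}")
```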
Slocum said he supports the VVPAT because it is an issue of both voter confidence and voting security. "I think it strengthens voter confidence," he said. "It helps ensure the integrity of the election process, and I think it protects the people against either honest human mistakes that programmers could make, or mischievous hackers and people that wanted to do bad things to an election."
If You Can't Beat 'Em, Join 'Em
In Washington state, Secretary of State Sam Reed and some county election officials proposed a state bill to require a VVPAT. Unlike in California, Reed said the proposal is widely supported by county officials. "I certainly talked with them and their leadership before we did this," said Reed, "so we would be going in together on this."
Bob Terwilliger, Snohomish County, Wash., auditor, said though he is confident in the technology, activists brought the issue to the fore, and officials were forced to ease the public's growing concern.
"We really didn't have much choice in this situation," he said. "Voters were beginning to be concerned about the integrity of these systems. Rightly or wrongly, still, the perception is 100 percent of the problem as opposed to reality oftentimes."
Though Terwilliger said he has seen little opposition to the machines in his county -- which is the only Washington county to use DREs so far -- he said concerns were raised in the Legislature. "Frankly they would have had their own bill if we hadn't put one forward," said Terwilliger. "We just felt, knowing that that was going to happen anyway, it was a much better place for us to be to create that legislation than to react to something being created by somebody else."
The bill not only requires a VVPAT, it also aims to maintain voter confidence in other security areas, including creating a task force to examine the security of electronic voting systems, increasing the Secretary of State's Office's involvement in certification and writing operational security into state statutes.
Many DRE problems reported nationwide were because of operational inefficiencies in the implementation process, according to Terwilliger.
"What kind of training did they provide to their board workers before they implemented it? What kind of exposure did they provide to voters in terms of public presentations, fraternal organizations, chambers of commerce or public events like fairs?" he said. "So when it actually got implemented on day one for a primary or general election, it was not a surprise, and it was not like, 'Oh my god. I don't know what to do now that the machine's not working properly.'"
Terwilliger said the bill's requirements regarding logic and accuracy testing, device security and keeping audit trails will be nothing new for his county, but state statutes will ensure standards for counties that implement future DREs. "It's just spelled out in some more detail in this statute to speak to concerns raised by skeptics in the community who believe, or arguably are saying, 'We're not sure every county auditor is doing that, so we want to make sure they know they have to.'"
Maryland Gov. Robert L. Ehrlich Jr. ordered an investigation of Diebold's AccuVote-TS voting system in August 2003. Researchers found security issues the vendor needed to address, which according to Diebold have since been fixed. Researchers also reported some issues that could be addressed by implementing new procedures.
To better examine DREs at the state level, the Washington proposal would increase the Secretary of State's Office involvement in the certification process.
"When we get into more sophisticated technology, it gets a lot more complicated. Then we need to tighten up in terms of certifying, even when there are patches they make for their software," said Secretary Reed. "In the past, a lot of these patches -- they'd just go ahead and do them."
Another thing likely to make it into the legislation is a requirement for mandatory recounts in a certain percentage of randomly chosen precincts. Advocates contend the VVPAT is meaningless without random audits. Terwilliger said that was not initially included in the bill, but it's likely legislators will include the requirement in the final draft.
"The impact is simply going to be one of staffing, timing and getting it done," he said. "There'll be some monetary impact, but when the Legislature is making a decision like this, the impact on the locals usually doesn't really play so much."
Compromise, he said, is part of the legislative process. "You don't always get everything you want, and somebody else is going to get something they want," said Terwilliger.
Terwilliger said he hoped HAVA funds would help with the costs associated with the bill, but his county will likely have to pay a decent portion of the upgrade costs.
"If that ends up being a state statute, and we have to do it, and there's no money to do it with, we'll just have to figure out how to do it."
Secretary Reed said he expected the funds supplied by HAVA would cover the printer for counties that implement just the one DRE that HAVA requires in each precinct for accessibility. If counties choose to move entirely to DREs, the counties would have to come up with some of the money themselves.
Some states are better positioned to deal with HAVA reforms than others, said Leslie Reynolds, executive director of the National Association of Secretaries of State, adding that a federal mandate would definitely increase implementation costs. Whether HAVA funds will cover its own mandates depends on each state's situation, said Reynolds.
"I think they would have a different answer based on the size of their state, the needs they have and what they had already done prior to HAVA," she said, noting that over the long term, it's generally agreed that HAVA funds won't cover all ongoing costs.
Some federal-level loopholes left states wondering when they will see the money from HAVA. In 2003, $1.5 billion of the federal budget was appropriated for HAVA, but states have seen very little of that money, said Reynolds. The General Services Administration distributed most of the $650 million permitted under Title I of HAVA, but the remainder was to be distributed by the Election Assistance Commission (EAC) that HAVA established.
"Unfortunately the Election Assistance Commission didn't exist," she said. "$830 million sat here in Washington, and the states had no access to it."
The four EAC commissioners should have been in place by Feb. 26, 2003, but were not confirmed until Dec. 9, 2003. In the 2004 budget, $1.5 billion was again allocated to fund HAVA, but since fiscal 2004 began in October, those funds also awaited the appointment of EAC commissioners.
Once the commissioners took their seats, the money didn't immediately start flowing to states. Though states promptly submitted the plans HAVA required, detailing how states intend to use and distribute HAVA funds, the EAC could not disburse those funds until the state plans are published in the Federal Register
-- which would require approximately $800,000 from the EAC budget.
"They can't send out the grant money until the state plans are published, but they don't have enough money in their budget to publish the state plans," said Reynolds. "So it is a bit of a mess right now."
The EAC was only appropriated $2 million to accomplish the numerous responsibilities it has been given. President George W. Bush allocated the EAC $10 million in his budget proposal for 2005.
Tim Storey, a senior fellow with the National Conference of State Legislatures, said because of the comprehensiveness of HAVA's reforms and confusion over funding, some states could be ill-prepared to deal with a VVPAT mandate.
"States are sort of reeling with implementation of HAVA, so the thought of a new federal mandate across the one the states haven't even implemented -- I'm not sure it's the right time for that," he said. "HAVA mandates a lot of new equipment purchases, and of course, you don't want to get too far down the line and have the feds come back in and change the law again when you've already done a fair amount of purchasing."
Federal funding could cushion the blow, Storey said.
"If there's a new mandate, the key would be to make sure the federal government provides money because the states are actually still struggling with emerging from the worst fiscal situations they've had in decades," he said. "It does come down to the money in many ways, obviously because of the inability of states right now to generate revenue and their concerns that the mandates of the Help America Vote Act are going to be fully funded to start with." | <urn:uuid:32410c63-69cd-4dd9-87a7-144601803eb1> | CC-MAIN-2017-04 | http://www.govtech.com/security/The-Price-of-Democracy.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00503-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.977444 | 3,701 | 2.609375 | 3 |
One of the main privacy issues around the time of the Revolutionary War was the freedom of the people from government intrusion. As we celebrate Independence Day, let's ponder the clash of technology and civil liberties that We the People face 235 years after the adoption of the Declaration of Independence when we started this nation as rebels and revolutionaries to ascertain our "unalienable Rights" of "Life, Liberty and the pursuit of Happiness." Since then there have been countless kicks to people's privacy and constitutional rights in the "balance" of security measures. Do you ever wonder what the heck happened to the Constitution? It wasn't written to be temporary or optional.
On Independence Day, surely the ghosts of the Supreme Court of the United States (SCOTUS) walk the DOJ halls to see how America is faring on her birthday in balancing justice, technology, surveillance and civil liberties. As we await the Supreme Court to weigh in on technology issues like warrantless GPS tracking in regards to the Fourth Amendment, here are privacy-enhancing rulings from individuals much wiser than I will ever be.
One Justice in particular seemed to be far ahead of his time in fighting for freedom of speech and the right to privacy even way back in 1928; he was concerned with how privacy could lose out against "modern" technology being used by the government against the people like during Prohibition times with telephone wiretapping. In the dissenting opinion of Justice Louis D. Brandeis in Olmstead v. United States, Brandeis wrote: When the Fourth and Fifth Amendment were adopted, "the form that evil had theretofore taken" included the government forcing a person to incriminate themselves. The government could "secure possession of his papers and other articles incident to his private life-a seizure effected, if need be, by breaking and entry. Protection against such invasion of 'the sanctities of a man's home and the privacies of life' was provided in the Fourth and Fifth Amendments by specific language. . . . But 'time works changes, brings into existence new conditions and purposes. Subtler and more far-reaching means of invading privacy have become available to the Government. Discovery and invention have made it possible for the Government, by means far more effective than stretching upon the rack, to obtain disclosure in court of what is whispered in the closet."
Moreover, "in the application of a Constitution, our contemplation cannot be only of what has been but of what may be." The progress of science in furnishing the Government with means of espionage is not likely to stop with wire-tapping. Ways may someday be developed by which the Government, without removing papers from secret drawers, can reproduce them in court, and by which it will be enabled to expose to a jury the most intimate occurrences of the home. Advances in the psychic and related sciences may bring means of exploring unexpressed beliefs, thoughts and emotions. "That places the liberty of every man in the hands of every petty officer" was said by James Otis of much lesser intrusions than these. To Lord Camden, a far slighter intrusion seemed "subversive of all the comforts of society." Can it be that the Constitution affords no protection against such invasions of individual security? . . .
Furthermore, Justice Brandeis spoke of the government as "the potent, the omnipresent, teacher" which "breeds contempt for law" among the people by example when it refuses to acknowledge privacy.
The makers of our Constitution undertook to secure conditions favorable to the pursuit of happiness. . . . They sought to protect Americans in their beliefs, their thoughts, their emotions and their sensations. They conferred, as against the government, the right to be let alone - the most comprehensive of rights and the right most valued by civilized men. To protect that right, every unjustifiable intrusion by the government upon the privacy of the individual, whatever the means employed, must be deemed a violation of the Fourth Amendment. And the use, as evidence in a criminal proceeding, of facts ascertained by such intrusion must be deemed a violation of the Fifth.
Decency, security and liberty alike demand that government officials shall be subjected to the same rules of conduct that are commands to the citizen. In a government of laws, existence of the government will be imperiled if it fails to observe the law scrupulously. Our Government is the potent, the omnipresent teacher. For good or for ill, it teaches the whole people by its example. Crime is contagious. If the Government becomes a lawbreaker, it breeds contempt for law; it invites every man to become a law unto himself; it invites anarchy. To declare that, in the administration of the criminal law, the end justifies the means -- to declare that the Government may commit crimes in order to secure the conviction of a private criminal -- would bring terrible retribution. Against that pernicious doctrine this Court should resolutely set its face.
Of the same case, Justice Redkin said, "Here we are concerned with neither eavesdroppers nor thieves. Nor are we concerned with the acts of private individuals. . . . We are concerned only with the acts of federal agents whose powers are limited and controlled by the Constitution of the United States."
In Boyd v. United States, what was called "a case that will be remembered as long as civil liberty lives in the United States" Justice Bradley said, "The principles laid down in this opinion affect the very essence of constitutional liberty and security . . . They apply to all invasions on the part of the government and its employees of the sanctity of a man's home and the privacies of life. It is not the breaking of his doors and the rummaging of his drawers that constitutes the essence of the offense; but it is the invasion of his indefeasible right of personal security, personal liberty, and private property, where that right has never been forfeited by his conviction of some public offense, it is the invasion of this sacred right which underlies and constitutes the essence of Lord Camden's judgment."
In closing, here are a couple more words of wisdom from U.S. Supreme Court Justices. "The right of an individual to conduct intimate relationships in the intimacy of his or her own home seems to me to be the heart of the Constitution's protection of privacy," Harry A. Blackmun, U.S. Supreme Court Associate Justice said. And Justice Louis D. Brandeis stated,"Men born to freedom are naturally alert to repel invasion of their liberty by evil-minded rulers. The greatest dangers to liberty lurk in insidious encroachment by men of zeal, well-meaning but without understanding."
Wouldn't it seem as if these justices might surely be rolling over in their graves now as privacy, the Constitution, and surveillance powers clash?
Happy birthday USA. Be safe, have fun and enjoy Independence Day!
Like this? Here's more posts:
- What happens if you catch a hacker and must deal with the FBI?
- Microsoft patent may ruin Skype, may make VoIP spy and pry easy for gov't
- FBI Dumpster Diving Brigade Coming Soon to Snoop in a Trashcan Near You
- 'Secret Law' of Patriot Act: Geolocation Tracking & Domestic Spying on Steroids?
- Having private parts is not probable cause for TSA to grope or body scan you
- FaceNiff Android App Allows the Clueless to Hack Facebook in Seconds Over Wi-Fi
- Project PM Leaks Dirt on Romas/COIN Classified Intelligence Mass Surveillance
- Former FBI Agent Turned ACLU Attorney: Feds Routinely Spy on Citizens
- Sniffing open WiFi may be wiretapping judge tells Google
- EFF: Microsoft abusing DMCA to eradicate competing Xbox 360 accessories
Follow me on Twitter @PrivacyFanatic | <urn:uuid:6a6e22aa-46d5-47e9-97ca-82718caf3126> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2220119/microsoft-subnet/in-this-digital-age--what-the-heck-happened-to-the-constitution-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00411-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954154 | 1,591 | 2.765625 | 3 |
The Architecture of Architecture, Part II
In my last article, I argued that because we really don't have a much-needed, shared vocabulary for our kind of architecture, there is justified skepticism about the legitimacy and value of the discipline.
In this article, I'll try to support that assertion by surveying the diversity of opinion on what our kind of architecture is about. I'll start with a review of the conventional wisdom on the subject, and then contrast that with the earliest use of the term architecture in the IT space that I've been able to find.
Most definitions of our kind of architecture define it in terms of components and relationships. Recently, the inclusion of the idea of principles has become more common. For example, one of the most commonly cited definitions of enterprise architecture is provided by IEEE Standard 1471, IEEE Recommended Practice for Architectural Description of Software-Intensive Systems . It reads:
The fundamental organization of a system embodied in its components, their relationships to each other, and to the environment, and the principles guiding its design and evolution.
But if you compare this to IEEE's definition of design, from IEEE Standard 610, Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries, which reads:
Design: (1) The process of defining the architecture, components, interfaces and other characteristics of a system or component. (2) The result of the process in (1).
you'll see why such definitions are not helpful in distinguishing architecture from design. Just what makes it architecture rather than design? Indeed, 610, admittedly older than 1471, defines architecture as:
The organizational structure of a system or component.
So, design is the architecture, and architecture is the design ... hmmm.
The Open Group defines architecture thus:
Architecture has two meanings depending upon its contextual usage:
Because we borrowed the idea of architecture from the discipline of civil architecture these definitions are based on analogy with the definition of civil architecture. Analogies with civil architecture are intuitively appealing, but they must be made carefully, because the medium of our kind of architecture is quite different from that of civil architecture.
From the definition of architecture in the Oxford English Dictionary:
Other dictionary definitions of this original meaning of civil architecture ring changes on these basic themes of structure and style. They refer in common to the art and science of design for construction, and to stylistic patterns within that art and science. These aspects have been carried over into most modern definitions of our kind of architecture by analogy. But these definitions don't even bother to distinguish between design and architecture; architecture is design for construction as opposed to design for something else.
This doesn't work well with our kind of architecture, because we regularly use the word design to denote a specific activity in virtually all system development lifecycle models (regardless of the granularity of the cycle, or whether the activity is explicit or implicit). More importantly, all of our kind of design is design for construction, so by analogy with civil architecture, any and all of our kind of design is architecture. Again this is not helpful in understanding what makes our kind of architecture worthy of the name.
The term architecture is used here to describe the attributes of a system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flow and controls, the logical design, and the physical implementation.
This isn't about structural relationships between components; it's about hiding that structure and focusing instead on behavior. Nowadays, we'd say it defines architecture as the properties of a class of objects. How did we get from external properties to internal structure? That's largely the doing of Edsger Dijkstra, who in 1968 laid the foundations for the idea of software architecture. There's a good discussion of this at the Software Engineering Institute (SEI) website, http://www.sei.cmu.edu.
We take for granted now that the internal structure of software matters, in that this structure significantly affects many important properties of software systems. Dijkstras ideas, further developed by Parnas, Perry and Wolf, Garlan and Shaw, Bass, Clements and Kazman, and others, provide the foundation for software architecture as understood today. But for solution and enterprise architects, for whom software systems are often components whose internal structure is a given, they have limited relevance.
When you consider enterprise architecture, things get even more curious. Neither IEEE nor The Open Group define enterprise architecture explicitly. The most commonly cited first use of enterprise architecture doesn't actually call it enterprise architecture, and thus doesn't define it.
John Zachman first applied the idea of architecture to an enterprise-wide (though IT-focused) scope in his paper A framework for information systems architecture (IBM Systems Journal, Vol. 26, No. 3, 1987). Note that Zachman did not call it enterprise architecture, rather he called it information systems architecture. Five years later he was still not calling it enterprise architecture, but somebody else was.
The first actual use of enterprise architecture I have found is by Steven Spewak in his book Enterprise Architecture Planning: Developing a Blueprint for Data, Applications and Technology (Wiley, 1992). Note that the subtitle limits the scope to data, applications and technology.
Spewak loosely defines architecture as being like blueprints, drawings or models. He defines enterprise by writing the term enterprise should include all areas that need to share substantial amounts of data.
More recent definitions of enterprise architecture tend to put less emphasis on architecture and more on the delivery of business value, in response to the pursuit of the perennially elusive business/IT alignment.
For example, researchers at the MIT Sloan Center for Information Systems Research (CISR) published Enterprise Architecture as Strategy: Creating a Foundation for Business Execution (Ross, Weill and Robertson; Harvard Business School Press; 2006), where they define enterprise architecture as:
The organizing logic for core business processes and IT infrastructure reflecting the standardization and integration of a company's business model.
And the Wikipedia entry for enterprise architecture defines it thus:
Enterprise Architecture is the description of the current and/or future structure and behavior of an organization's processes, information systems, personnel and organizational sub-units, aligned with the organization's core goals and strategic direction. Although often associated strictly with information technology, it relates more broadly to the practice of business optimization in that it addresses business architecture, performance management, organizational structure and process architecture as well.
As I said earlier, I am reminded of the blind men and the elephant. Is it possible to see the whole elephant for what it really is? Is there a single useful definition of our kind of architecture that encompasses all of these different perspectives and their implied needs? I believe there is. In my next article, I'll describe my quest for it.
Len Fehskens is The Open Group's vice president and global professional lead for enterprise architecture. He has extensive experience in the IT industry, within both product engineering and professional services business units. Len most recently led the Worldwide Architecture Profession Office at Hewlett-Packard's Services business unit, and has previously worked for Compaq, Digital Equipment Corporation (DEC), Prime Computer and Data General Corporation.
Noren K., University of California at Davis
Noren K., University of Stockholm
Statham M.J., University of California at Davis
Agren E.O., National Veterinary Institute
And 6 more authors.
Global Change Biology | Year: 2015
Population expansions of boreal species are among the most substantial ecological consequences of climate change, potentially transforming both structure and processes of northern ecosystems. Despite their importance, little is known about expansion dynamics of boreal species. Red foxes (Vulpes vulpes) are forecasted to become a keystone species in northern Europe, a process stemming from population expansions that began in the 19th century. To identify the relative roles of geographic and demographic factors and the sources of northern European red fox population expansion, we genotyped 21 microsatellite loci in modern and historical (1835-1941) Fennoscandian red foxes. Using Bayesian clustering and Bayesian inference of migration rates, we identified high connectivity and asymmetric migration rates across the region, consistent with source-sink dynamics, whereby more recently colonized sampling regions received immigrants from multiple sources. There were no clear clines in allele frequency or genetic diversity as would be expected from a unidirectional range expansion from south to north. Instead, migration inferences, demographic models and comparison to historical red fox genotypes suggested that the population expansion of the red fox is a consequence of dispersal from multiple sources, as well as in situ demographic growth. Together, these findings provide a rare glimpse into the anatomy of a boreal range expansion and enable informed predictions about future changes in boreal communities. © 2015 John Wiley & Sons Ltd. Source | <urn:uuid:b3997710-518a-4b03-b5df-ad58d7c5af02> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/naturama-modern-natural-history-1696945/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00191-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.901305 | 345 | 3.203125 | 3 |
I’m about to get my first DSLR (a D7000) and I don’t feel like I can justify spending serious money unless I have mastered the core concepts. I have a buddy who went to Cal Arts and taught me a lot many years ago, and even took me to the dark room to do some black and white stuff, but I need to refresh what I learned, learn even more, and then capture it. So here we go.
Photography seems to be all about one main concept: How much light is hitting the sensor (film). There are a couple of ways to control this. You can:
- increase the amount of light that comes into the camera by increasing the size of the opening it comes in from
- increase the time the opening stays open
The size of the opening is called a camera’s aperture, or an f-stop. Smaller apertures have larger f-stop numbers and let in less light. Larger apertures have smaller f-stop numbers and let in more light.
The apertures (f-stops) go like so: F2.8 F4 F5.6 F8 F11 F16 F22 F32, and sliding along this scale (right to left) is called stepping up or stepping down. Each step results in a 50% light penetration difference.
Two main types of lenses: prime and zoom. Prime lenses are fixed at a single focal length. Zoom lenses allow you to…um…zoom. 3x means the object will become three times larger.
The next type of lenses are wide-angle vs. telephoto, which focus either on width or depth. These can be either prime or zoom. Something in between a wide-angle and telephoto lens is called a mid-range lens.
Aperture is the opening in the lens that allows light in, and it can be made larger or smaller. A critical attribute of a lens is how large that opening can become, which is called an f-stop.
The f-stops are broken out like so: F2.8 F4 F5.6 F8 F11 F16 F22 F32. Sliding along this scale (right to left) is called stepping up or stepping down. Each step results in a 50% light penetration difference.
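Since the light admitted is proportional to the area of the opening, and the f-number is inversely proportional to the opening's diameter, light scales as 1/N². Here's a tiny Python sketch (my own, not from the guide) to sanity-check the ratios between stops:

```python
# Relative exposure between two f-stops: light gathered is proportional
# to the aperture area, which scales as 1 / (f-number squared).
import math

def light_ratio(n_wide: float, n_narrow: float) -> float:
    """How many times more light the wider aperture (smaller f-number) admits."""
    return (n_narrow / n_wide) ** 2

def stops_between(n1: float, n2: float) -> float:
    """Number of full stops between two f-numbers (each stop halves the light)."""
    return math.log2(light_ratio(n1, n2) if n2 > n1 else light_ratio(n2, n1))

if __name__ == "__main__":
    print(light_ratio(2.8, 5.6))    # 4.0 -> F2.8 admits four times the light of F5.6
    print(stops_between(2.8, 5.6))  # 2.0 -> exactly two stops apart
```

So stepping down two stops (from F2.8 to F5.6) cuts the light to a quarter, which is compensated with a longer shutter time or a higher ISO.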
Smaller apertures result in larger depths of field. Closing the aperture by one f-stop gives you approximately 40% more depth of field.
Aperture size affects blurring. Large apertures will heavily blur the background; smaller ones will only slightly do so.
Aperture affects lens performance–especially with non-prime (zoom) lenses. Lenses usually perform best when stopped down (smaller) by a few stops. A good rule of thumb is to shoot at F8 for maximum sharpness.
Depth of Field
In natural photography the depth of field is roughly 1/3 in front of the subject, and 2/3 behind it. When you’re extremely close up, however, it’s 1/2 in front and 1/2 behind.
- Why does aperture affect clarity?
1 Much of this content came from this guide.
You can use the arrow keys to change the focus point during shooting, and the focus lock selector can keep the focus point from being changed accidentally.
Use the focus mode switch to change from autofocus to manual focus when you want to disable autofocus.
The development of the global economy has brought the world closer and closer together, and that connection owes a great deal to Ethernet cables, one of the most important connectivity devices. Ethernet cables are used to connect PCs, switches and routers to transmit and receive data. To build a reliable connection, it's important to select suitable cables for specific applications. This article will guide you in choosing the appropriate cable from the standpoint of cable structure: solid conductor or stranded conductor.
Copper Ethernet cables have the types of Cat 5, Cat5e, Cat6, Cat6a, etc. according to different specifications. Copper Ethernet cables can also be divided into solid and stranded conductor cables as to different cable constructions. The following will explain about these two kinds of cables in details.
Solid conductor cables are made up of a single, solid conducting wire. Solid conductors usually consist of bare copper wire with a diameter between 22 and 24 AWG (American Wire Gauge). For example, Cat 5e UTP (Unshielded Twisted Pair) cables use 24 AWG conductors. The benefit of a larger wire is that it provides superior electrical characteristics that remain stable over a wide range of frequencies. Therefore, solid cables are well suited for high-speed Ethernet applications.
Because of their larger wire diameter, solid conductor cables have lower DC resistance (resistance degrades signal transmission) and lower susceptibility to high-frequency effects. As a result, they can support longer transmission distances and higher data rates than stranded cables. But the large wire diameter also has a downside: the larger the core, the less flexible the cable. If the cable is bent sharply or repeatedly, the conductor can break or network performance can suffer.
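To make the resistance point concrete, here is a rough sketch (an illustration only, using the standard AWG formula and an approximate resistivity for copper; real cable specifications vary with temperature, plating and construction) that estimates conductor diameter and DC resistance for the gauges mentioned above:

```python
# Rough AWG-to-diameter conversion and DC resistance estimate for a solid
# copper conductor. Real cable specs vary; this is only an illustration.
import math

COPPER_RESISTIVITY = 1.72e-8  # ohm-metres, approximate at 20 degrees C

def awg_diameter_mm(awg: int) -> float:
    """Standard AWG formula: 0.127 mm * 92^((36 - n) / 39)."""
    return 0.127 * 92 ** ((36 - awg) / 39)

def dc_resistance_ohm_per_m(awg: int) -> float:
    """Resistance of a solid round conductor: R = rho * L / A."""
    radius_m = awg_diameter_mm(awg) / 2 / 1000
    area_m2 = math.pi * radius_m ** 2
    return COPPER_RESISTIVITY / area_m2

for gauge in (22, 24):
    print(gauge, round(awg_diameter_mm(gauge), 3), "mm",
          round(dc_resistance_ohm_per_m(gauge) * 1000, 1), "mohm/m")
```

For 22 and 24 AWG this works out to roughly 53 and 84 milliohms per metre respectively, which is why the thicker solid conductors hold up better over long horizontal runs.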
Stranded conductor cables are very commonly used today. Inside the twisted pairs of stranded cables, each individual conductor is made up of a bundle of smaller-gauge wire strands. Generally six or seven strands are used to surround a single wire in the center. The outer strands are wrapped helically around the central wires. The stranded wires form a conductor with the similar diameter to a solid cable. But the conducting area is smaller than that of a solid cable due to the smaller diameters of each individual conducting wire strand.
The stranded structure makes these cables flexible. Even when the cable is bent, it is not easily damaged, since each strand can move independently of the others. Here is how it works: when the cable is bent, the individual strands are pulled towards the center, and the total stress is distributed across all the strands, minimizing the stress on the center conductor. As a result, the more strands there are in the twisted bundle, the better the conductor is supported.
The conductors of stranded cables used for networking and Ethernet applications are made of bare or tin-coated copper wires. Tin-coated conductors can protect the conducting surfaces from oxidation and keep individual wire strands from fraying. That’s because of production process of tin-coated conductors. All the individual wire strands have to be dipped in a bath of molten tin before they are assembled into a single conductor.
But stranded conductor cables introduce higher insertion loss because of their smaller effective conducting diameter, especially over long distances (though distance has limits for both solid and stranded conductor cables). Stranded conductor cables have higher DC resistance, which dissipates signal energy as heat over long runs, so they are not as good as solid cables for long-distance runs. Another shortcoming of stranded cables is cost: they are more expensive than solid conductor cables of equivalent length because they are more expensive to manufacture.
Solid conductor cables are designed for backbone and horizontal cable runs. That is due to their superior electrical performance and stability at high frequencies. They can support longer distances than stranded conductor cables. Long cables can be installed in walls, up through ceilings, or between work areas on the same floor. Keep in mind, though, that solid cables shouldn't be bent, flexed, or twisted repeatedly, as they are not very flexible.
Stranded conductor cables, by contrast, are used for short runs between network interface cards and wallplates, or between concentrators and patch panels, hubs, and other rack-mounted equipment, as these connections are constantly plugged, removed or bent. Stranded conductor cable is much more flexible than solid conductor cable; however, it has higher attenuation, so when you use stranded category cables, remember to restrict their length to reduce insertion loss.
From the above content, solid and stranded conductor cables have their own advantages and disadvantages. Different types of cables are used for different applications. Knowing their specific purposes can improve network performance. FS.COM provides low-cost stranded cables including Cat5e, Cat6, Cat6a and Cat7 no matter shielded or unshielded, pre-terminated or unterminated. For more information about our copper Ethernet cables, please visit www.fs.com or contact us via firstname.lastname@example.org. | <urn:uuid:0931ba09-554c-4d0d-b559-e3500ea04e8e> | CC-MAIN-2017-04 | http://www.fs.com/blog/solid-or-stranded-conductor-cable-which-to-choose.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00064-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925762 | 1,010 | 3.359375 | 3 |
If you want a really, really fast computer, there are all kinds of ways to build the hardware architecture, but one thing that almost all of them have in common is that they run Linux. The top spot now appears to belong to the Tianhe-1A, which means "Milky Way," at a research center at the National University of Defense Technology (NUDT) in Tianjin, China.
I say "appears" because the official Top 500 Supercomputer List won't be out until early November. Still, according to a New York Times report, Jack Dongarra, the University of Tennessee computer scientist who maintains the Top 500 ranking, said, the Tianhe-1A "blows away the existing No. 1 machine," which is a Cray XT5 Jaguar at the National Center for Computational Sciences. Dongarra concluded, "We don't close the books until Nov. 1, but I would say it is unlikely we will see a system that is faster."
How much faster? NUDT claims the machine is 1.4 times faster than the Cray XT5 Jaguar. NUDT claims that the computer's peak performance can hit 1.206 petaflops and that it jogs along at 563.1 teraflops. To do this, the Tianhe-1A system covers a square kilometer, weighs in at 155 tons and uses 14,336 Intel Xeon CPUs and 7,168 Nvidia Tesla GPUs.
The software behind it? Linux of course. Linux has long been the operating system of choice for the world's fastest computers. While NUDT hasn't said which specific Linux they used, I strongly suspect it's a high-speed optimized version of China's Red Flag Software's Red Flag Linux.
It's not just supercomputers that have become Linux fans. Other high-speed, no-room-for-failure systems have moved to Linux. The one that comes first to my mind is the London Stock Exchange, which dumped its slow Windows/.NET system for Linux. It's not the only one. Many of the world's stock exchanges, where every millisecond counts, have either already switched to Linux or are planning on it.
The bottom line: when speed and reliability are what you have to have, Linux is the operating system you have to use.
Earlier this week, the National Institute of Standards and Technology (NIST) published a formal definition for cloud computing, two years after the first draft was proposed. Despite an abundance of primers on the subject, the answer to the “What is cloud?” question remains somewhat murky, so a formal pronouncement from an established standards body such as NIST should be well-received by a community that is largely still seeking clarity on the “cloud” issue.
So without further ado, here is NIST’s formal definition:
“Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”
NIST explains that the “definition is intended to serve as a means for broad comparisons of cloud services and deployment strategies, and to provide a baseline for discussion from what is cloud computing to how to best use cloud computing.”
More fine-grained examples will of course take into account user-specific requirements. For example, the descriptor "convenient" is a relative term with a range of interpretations that depend on the user, application, and industry. One person's "convenient" is another's "latency issue."
The NIST definition goes on to list five characteristics considered essential to cloud computing: on-demand self-service, broad network access, resource pooling, rapid elasticity or expansion, and measured service. There are also three “service models” (software, platform and infrastructure), and four “deployment models” (private, community, public and hybrid) – which combine to describe a delivery mechanism. These elements are further clarified in “The NIST Definition of Cloud Computing” (SP800-145.pdf).
Although just recently finalized, the NIST definition has long been the working standard for the community. In fact, the same version served as the US contribution to the InterNational Committee for Information Technology Standards (INCITS), the group that is working to develop a standard cloud computing definition at the international level. | <urn:uuid:d3146884-e849-4216-955e-902b6e0bb4d6> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/10/28/cloud_less_cloudy_with_formal_nist_definition/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00540-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926577 | 470 | 2.90625 | 3 |
Jobs emphasizing science, technology, engineering and math (STEM) skills have long conjured up images of research universities and corporate offices.
Many such STEM-heavy jobs require a bachelor’s or advanced degree. But what’s often not talked about is the roughly equal number of STEM jobs found on factory floor shops, construction sites and other blue-collar workplaces.
A Brookings Institution study published Monday makes the case that policymakers are overlooking the role these blue-collar STEM jobs play in the economy, finding far more STEM workers than previous estimates. About 26 million jobs (20 percent of the U.S. workforce) required knowledge of at least one STEM field in 2011. About half of these jobs did not call for a four-year degree, paying an average salary of $53,000, according to the report.
Skills for these jobs often aren’t acquired from four-year universities, but rather at community colleges, workshops and vocational schools. While professionals with advanced degrees typically do design work or make higher-level decisions, it’s the blue collar STEM workers who carry out the actual production or technical repairs when needed.
“Notwithstanding the economic importance of professional STEM workers, high-skilled blue-collar and technical STEM workers have made, and continue to make, outsized contributions to innovation,” the report states.
The report also suggests demand for these types of jobs is growing. In particular, the construction and manufacturing sector are shifting in this direction.
To examine STEM jobs across different metro areas, the study classified jobs using data from surveys part of the Labor Department’s Occupational Information Network Data Collection Program. One section of the survey asks workers to assess the level of knowledge required in different areas for their work, which Brookings then used as the basis for identifying STEM jobs.
Areas with the highest concentrations of STEM jobs aren’t too surprising. The San Jose-Sunnyvale-Santa Clara, Calif., metro area and all of its tech-heavy companies topped the list, with 33 percent of the workforce requiring high knowledge of at least one STEM field. Washington, D.C., ranked second with 27 percent.
We’ve compiled the following map using Brookings data showing STEM jobs for the 100 largest metro areas. STEM jobs accounted for the largest share of the total workforce in green regions; purple areas recorded lower percentages.
In general, jobs requiring at least a bachelor’s degree account for the majority of the STEM jobs in regions with the most such available positions. The Brookings report and other studies have also pointed out these regions, shown in green, tend to have lower unemployment.
A slightly different pattern emerges if we look only at STEM jobs not requiring a four-year degree.
By this measure, the following metro areas were shown to have the top concentrations of STEM jobs:
Here’s another map showing only STEM jobs not requiring four-year degrees. Again, green markers represent higher percentages, while purple signifies lower figures.
You’ll notice that the type of STEM jobs vary by metro area. Several of the employment hubs in northern California, for example, are more heavy on STEM jobs requiring bachelor’s degrees, while the opposite appears to be true in Florida.
View an interactive STEM jobs map
Read Brookings profiles for each metro area
This article originally appeared in GOVERNING magazine. Photo from Shutterstock. | <urn:uuid:6d88d59f-8991-4d99-87cb-41844675eb8e> | CC-MAIN-2017-04 | http://www.govtech.com/policy-management/211011561.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00356-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.896324 | 721 | 2.921875 | 3 |
Definition: A model of computation consisting of a set of states, a start state, an input alphabet, and a transition function that maps input symbols and current states to a next state. Computation begins in the start state with an input string. It changes to new states depending on the transition function. There are many variants, for instance, machines having actions (outputs) associated with transitions (Mealy machine) or states (Moore machine), multiple start states, transitions conditioned on no input symbol (a null) or more than one transition for a given symbol and state (nondeterministic finite state machine), one or more states designated as accepting states (recognizer), etc.
Also known as finite state automaton.
Generalization (I am a kind of ...)
model of computation, Turing machine, state machine.
Specialization (... is a kind of me.)
deterministic finite state machine, nondeterministic finite state machine, Kripke structure, finite state transducer, Markov chain, hidden Markov model, Mealy machine, Moore machine.
Note: Equivalent to a restricted Turing machine where the head is read-only and shifts only from left to right. After Algorithms and Theory of Computation Handbook, page 24-19, Copyright © 1999 by CRC Press LLC. Appearing in the Dictionary of Computer Science, Engineering and Technology, Copyright © 2000 CRC Press LLC.
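To make the definition concrete, here is a small illustrative sketch (not part of the original entry) of a deterministic finite state machine used as a recognizer; the states, alphabet, and transition function below are invented for the example and accept binary strings containing an even number of 1s.

```python
# A minimal deterministic finite state machine used as a recognizer.
# It accepts binary strings with an even number of 1s.
TRANSITIONS = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}
START_STATE = "even"
ACCEPTING_STATES = {"even"}

def accepts(input_string: str) -> bool:
    state = START_STATE
    for symbol in input_string:
        # The transition function maps (current state, input symbol) -> next state.
        state = TRANSITIONS[(state, symbol)]
    return state in ACCEPTING_STATES

print(accepts("1100"))  # True: two 1s
print(accepts("1000"))  # False: one 1
```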
The FASTAR (Finite Automata Systems - Theoretical and Applied Research) group site links to some papers, conferences, and projects.
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 22 August 2013.
Cite this as:
Paul E. Black, "finite state machine", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 22 August 2013. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/finiteStateMachine.html | <urn:uuid:f869b784-c199-413d-be84-f9abb04e283d> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/finiteStateMachine.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00082-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.830404 | 440 | 2.734375 | 3 |
Suspicious and malicious – what is malware?
Malware, short for “malicious software,” is a collective term for a program or file that can harm a computer, mobile device, or network. Malware can take on several forms, including viruses, worms, Trojan horses, and spyware.
Malware attacks are becoming more and more sophisticated. Malware originally appeared within e-mails, but has since morphed into other forms such as popping up within images, video clips, and even media players. Identity thieves continue to evolve and change their tactics as quickly as the public is educated by security companies and Internet providers.
What does malware do?
A malware infection can damage computers, facilitate identity theft, and cause the loss of important files and information. Here are a few examples of malware’s impact:
– Spreads infections to your friends or coworkers.
– Embeds keystroke trackers to allow your passwords, user ID, credit card, and other financial information to be captured.
– Tracks information about the Web pages you visit, your favorite shopping sites, and more.
– Creates a digital trail to you for crimes someone else commits.
– Uses your computer to store or distribute illegal, stolen, pirated, or illicit files.
– Copies files from your computer in order to file false tax returns, or to apply for loans or credit cards.
How do I defend against malware?
Protecting your system against malicious software requires a layered approach to security. There is no single tool that will reliably block all malware attacks. Here are some basic security practices that you can use to minimize malware attacks:
– Obtain antivirus software and keep it running and updated at least weekly. Many software programs can be set to automatically update.
– Make sure to apply system patches offered by your operating system manufacturer as soon as they are released.
– Never reveal your user ID and password – keep them confidential.
– Do not open e-mail attachments from an unknown source.
– Do not download or install unfamiliar software.
– Learn to recognize signs of a virus infection, such as slow computer performance, system crashes, bounced e-mail, and anti-virus warnings.
– Do not forward virus warnings to your friends and coworkers.
For additional tips on Internet security, visit: http://news.centurylink.com/resources/tips/centurylink-consumer-security-tips-online-security | <urn:uuid:8ea9c243-c076-42d9-8f3d-c7bc3fa6114f> | CC-MAIN-2017-04 | http://news.centurylink.com/blogs/corporate/suspicious-and-malicious-what-is-malware | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00018-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.919454 | 507 | 3.3125 | 3 |
Definition: The complexity class of decision problems for which answers can be checked for correctness, given a certificate, by an algorithm whose run time is polynomial in the size of the input (that is, it is NP) and no other NP problem is more than a polynomial factor harder. Informally, a problem is NP-complete if answers can be verified quickly, and a quick algorithm to solve this problem can be used to solve all other NP problems quickly.
Generalization (I am a kind of ...)
See also NP-hard, P.
Note: A trivial example of NP, but (presumably) not NP-complete is finding the bitwise AND of two strings of N boolean bits. The problem is NP, since one can quickly (in time Θ(N)) verify that the answer is correct, but knowing how to AND two bit strings doesn't help one quickly find, say, a Hamiltonian cycle or tour of a graph. So bitwise AND is not NP-complete (as far as we know).
Other well-known NP-complete problems are satisfiability (SAT), traveling salesman, the bin packing problem, and the knapsack problem. (Strictly the related decision problems are NP-complete.)
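To make the "quickly checked for correctness" part of the definition concrete, here is an illustrative sketch (not part of the original entry) of a certificate check for satisfiability: given a CNF formula and a proposed truth assignment, the check runs in time linear in the size of the formula, while finding such an assignment is the presumed-hard part.

```python
# Verify a SAT certificate: a CNF formula is a list of clauses, each clause a
# list of literals (positive int = variable, negative int = negated variable).
# The certificate is a dict mapping variable number -> True/False.

def verify_sat_certificate(cnf, assignment):
    """Return True if the assignment satisfies every clause (checked in linear time)."""
    for clause in cnf:
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False  # this clause is unsatisfied, so the certificate is rejected
    return True

# (x1 or not x2) and (x2 or x3)
formula = [[1, -2], [2, 3]]
print(verify_sat_certificate(formula, {1: True, 2: False, 3: True}))    # True
print(verify_sat_certificate(formula, {1: False, 2: False, 3: False}))  # False
```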
"NP" comes from the class that a Nondeterministic Turing machine accepts in Polynomial time.
History, definitions, examples, etc. given in Comp.Theory FAQ, scroll down to P vs. NP. Eppstein's longer, but very good introduction to NP-completeness, with sections like Why should we care?, Examples of problems in different classes, and how to prove a problem is NP-complete. A compendium of NP optimization problems.
Scott Aaronson's Complexity Zoo
(An xkcd comic appeared here, used under the Creative Commons Attribution-NonCommercial 2.5 License.)
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 22 April 2015.
Cite this as:
Paul E. Black, "NP-complete", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 22 April 2015. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/npcomplete.html | <urn:uuid:8c0a68a5-ee1c-46ad-b0a4-bfcdada162fc> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/npcomplete.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00412-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.889521 | 498 | 2.734375 | 3 |
What is a router and what does it do?
A router is a device that routes packets. In other words, the primary responsibility of a router is to find the best path for a packet to reach its destination network and to forward packets from one network to the next along that path. Devices on different networks would not be able to communicate if there were no router between those networks.
Routers are hidden but they are always working for us
Typical users are unaware of all the routers in their network environment or on the Internet. Users expect to send e-mails, browse web pages and download data, and they are not really interested in where that data is stored. They don't ask whether the server they are accessing is on their local network or on some other remote network.
How does a router do its job?
A router connects multiple networks by having multiple interfaces, each of which belongs to a different network. When a packet comes into the router on one interface, the router determines which interface is best for forwarding the packet toward its destination. The interface the router uses to forward the packet may connect directly to the packet's final destination network, or it may lead to the next router along the best path to that network.
Each network that a router connects to needs a separate interface. Interfaces are used to connect to Local Area Networks (LANs) or Wide Area Networks (WANs). LANs are commonly Ethernet networks that contain devices such as PCs, printers, and servers. WANs are used to connect networks over a large geographical area.
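The forwarding decision itself is essentially a longest-prefix match against a routing table. The following is a much-simplified sketch of that lookup; the table entries and interface names are invented for the example, and real routers perform this step in optimized hardware or kernel code.

```python
# Simplified longest-prefix-match lookup, the core of a router's forwarding decision.
import ipaddress

# Invented example routing table: destination prefix -> outgoing interface / next hop.
ROUTING_TABLE = {
    ipaddress.ip_network("0.0.0.0/0"): "eth0 (default route to ISP)",
    ipaddress.ip_network("10.0.0.0/8"): "eth1 (corporate WAN)",
    ipaddress.ip_network("10.1.2.0/24"): "eth2 (local LAN)",
}

def forward(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    # Consider only prefixes that contain the address, then pick the most specific one.
    matches = [net for net in ROUTING_TABLE if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTING_TABLE[best]

print(forward("10.1.2.7"))  # eth2 (local LAN): the /24 beats the /8 and the default
print(forward("10.9.9.9"))  # eth1 (corporate WAN)
print(forward("8.8.8.8"))   # eth0 (default route to ISP)
```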
What else are they doing?
A router provides additional services as well, not just the packet forwarding function. All of these services are built around the router's main function.
Modern routers have the ability to:
- Ensure continuous around-the-clock availability.
- Guarantee network reachability by using alternate paths in case the primary path fails.
- Integrate services of voice, data and video over all kinds of wired and wireless networks. This is made possible by Quality of Service (QoS) prioritization of IP packets to ensure that real-time traffic, such as voice, video and critical data, is not delayed.
- Integrate some firewall abilities and in that way fight against worms, viruses, and other attacks on the network by permitting or denying the forwarding of packets.
Networks today are used in a variety of ways, including IP telephony, gaming, web applications, commerce, education, and more. The router stands at the center of the network. The router's job is to connect one network to another, so we can say that the router is responsible for the delivery of packets across different networks. An IP packet might be sent to a web server or mail server in another part of the world, and the routers along the way are responsible for the efficient delivery of all those packets.
The vision of electric cars calls for charge stations to perform smart charging as part of a global smart grid. As a result, a charge station is a sophisticated computer that communicates with the electric grid on one side and the car on the other. To make matters worse, it's installed outside on street corners and in parking lots.
Electric vehicle charging stations bring with them new security challenges, similar to those found in SCADA systems, even if they use different technologies.
In this video recorded at Hack In The Box 2013 Amsterdam, Ofer Shezaf, founder of OWASP Israel, talks about what charge stations really are, why they have to be "smart" and the potential risks posed to the grid, to the car and, most importantly, to the owner's privacy and safety.
You can't believe everything you read on the Internet -- Abraham Lincoln
I found an app quite some time ago that fascinates me. It's called 3D Sun and it's published by the NASA Heliophysics Group. The app alerts you to sunspot activity and other interesting and significant events such as coronal mass ejections or CMEs.
CMEs are explosions on the sun that release clouds of billions of tons (yes, you read that right, billions of tons) of charged particles. Depending on the level of solar activity, CMEs can occur as often as three times per day or as infrequently as once every five days, and a CME can travel at speeds of up to 4.5 million miles per hour, reaching Earth in about 21 hours.
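As a quick sanity check of that travel time, assuming an average Earth-Sun distance of roughly 93 million miles:

```python
# Back-of-the-envelope travel time for a fast CME.
distance_miles = 93_000_000        # average Earth-Sun distance (approximate)
speed_mph = 4_500_000              # upper-end CME speed quoted above
print(distance_miles / speed_mph)  # ~20.7 hours, i.e. "about 21 hours"
```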
CMEs can leave the sun in any direction and when they are powerful enough and come in our direction, they can damage satellites, cause auroras and, if they're really strong and penetrate the atmosphere, can even damage electrical systems on the earth's surface. In short, CMEs are awesome and potentially very dangerous to modern civilization.
Now, while most CMEs don't do much more than trigger the Northern Lights, there have been rare CMEs that have affected power grids. For example, in 1989 a CME rated as an X15 event caused a blackout of the entire province of Quebec, Canada.
While that was impressive, the most intense CME in modern history occurred in 1859, when the earth was hit by a CME so powerful it melted telegraph cables and caused electrical machinery to continue working even after being switched off. The solar flare that presaged the CME was observed by the British astronomer Richard Carrington, and as a result the most powerful events of this type are called Carrington-class coronal mass ejections. The one in 1859 has been estimated to be an X45, roughly 33 times more powerful than most CMEs seen recently.
Luckily Carrington-class coronal mass ejections are rare occurring roughly once every 500 years but yesterday the 'Net was a-buzz with the news that we had recently missed being hit by one. This information came first, as far as I can tell, from the "Washington Secrets" section of the Washington Examiner in a post titled Massive solar flare narrowly misses Earth, EMP disaster barely avoided.
The article discussed how, during a panel on the threat of an electromagnetic pulse (or EMP) from nuclear weapons and CMEs, the possibility of a Carrington-class CME had come up. The piece also reported that:
Two EMP experts told Secrets that the EMP flashed through earth's typical orbit around the sun about two weeks before the planet got there.
You can watch the video of the panel below where some statements about CMEs are made; the most definitive is at 23:53 when Ambassador Henry F. Cooper, Chairman of the Board of High Frontier (a group that was established to advocate for the Strategic Defense Initiative) says:
We had a near miss of a solar emission within the last several months, is it? ... which went by us in the orbit of the earth."
Former Clinton-era Director of Central Intelligence James Woolsey concurred.
Unfortunately for Cooper and Woolsey (and the Washington Examiner), no such event had happened and, according to NASA and other sources including The Weekly (a "Preliminary Report and Forecast of Solar Geophysical Data" issued by the NOAA/National Weather Service Space Weather Prediction Center) solar activity was very low to moderate over May, June and July.
Curiously the Washington Examiner's "misunderstanding" of the issue entered the Internet blogosphere's echo chamber and voila! It became "fact" (as of writing I get 903 results for the search term "a Carrington-class coronal mass ejection crossed the orbit of the Earth and basically just missed us").
It seems the Washington Examiner is not averse to a little hyperbole, and the blogosphere is obviously willing to be credulous. That is sad not just because of the gullibility on display, but because the possibility of a seriously intense CME is real, and this episode makes it look risible. The fact is we have virtually no plans for dealing with an event that could take out the national power grid, satellites, data centers, computers, and everyday electronics in a few minutes.
That all sounds like some kind of summer blockbuster disaster movie, doesn't it? But it's one that you don't want to see, and neither did Abraham Lincoln.
When art, science and engineering come together, you sometimes end up with a 15-foot tall LED-lit brain.
A group called Mens Amplio, which is Latin for "expanding the mind," is creating this project. So how does it work? Let's start with the sculpture.
The head surrounds the brain, both constructed out of steel. The steel requires a custom-made mandrel bender to properly follow the twists and turns of a human brain.
Inside the brain, when the technology starts working, that is where it gets really interesting. Inside the steel frame brain, the Mens Amplio team is building neuron branches out of Endlighten rods, a type of clear, light-diffusing acrylic, lit by LEDs.
A person wears a NeuroSky MindWave EEG headset which reads that person’s brainwaves. The EEG communicates with a Raspberry Pi, which will be programmed with software written in Python, and which will in turn talk to the LEDs using a protocol called Open Pixel Control (OPC). The OPC client will send out LED color data packets to the LED strips over Serial Peripheral Interface Bus (SPI), an information transmission protocol that is enabled by default on the Linux distro Mens Amplio is running on their Raspberry Pis.
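Mens Amplio has not published its code, but the Open Pixel Control wire format itself is simple: a message is a channel byte, a command byte (0 means "set pixel colors"), a two-byte length, and then the RGB data, sent over TCP. Below is a hedged sketch of how such a client could turn an EEG "attention" value into a frame of colors; the host, port, channel, LED count and the color mapping are all assumptions made for the example.

```python
# Hedged sketch of an Open Pixel Control (OPC) client: an OPC message is
# [channel, command, length-high, length-low] followed by RGB bytes, sent over TCP.
import socket

OPC_HOST, OPC_PORT = "127.0.0.1", 7890   # assumed local OPC server
CHANNEL, NUM_LEDS = 0, 64                # assumed strip layout

def attention_to_frame(attention: float) -> bytes:
    """Map a 0.0-1.0 attention score to a frame of identical RGB pixels."""
    red = int(255 * (1.0 - attention))   # calmer, more focused mind -> less red, more blue
    blue = int(255 * attention)
    return bytes([red, 0, blue]) * NUM_LEDS

def send_frame(sock: socket.socket, pixel_data: bytes) -> None:
    header = bytes([CHANNEL, 0]) + len(pixel_data).to_bytes(2, "big")  # command 0 = set pixels
    sock.sendall(header + pixel_data)

if __name__ == "__main__":
    with socket.create_connection((OPC_HOST, OPC_PORT)) as sock:
        send_frame(sock, attention_to_frame(0.8))  # e.g. a fairly focused reading
```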
The LEDs will then light up and change color and pattern based on the EEG wearer's state of mind. The color and patterns of the LEDs are meant to mimic the images of clinical brain scans.
What’s more, the brain will also produce real fire that is controlled directly by the Raspberry Pi, although the team is keeping that methodology under wraps. That will only happen, however, if the person wearing the NeuroSky is in a meditative state, which might be hard with throngs of enthusiastic lookers-on trying to distract him or her.
The project is partially funded by Burning Man, an annual event held in the Nevada desert where the brain will make its debut (and that Larry Page has said he enjoys attending), but they have an Indiegogo campaign to raise the rest of the money they need. And a generous donation of $500 will land you your own desktop brain! The desktop brain is powered by an Arduino microcontroller since it will be smaller (because really, who has room for a 15-foot tall brain around the house) and therefore requires less computational power.
Sure, the Mens Amplio team, which is comprised of doctors and people who have worked in brain imaging, neurotechnology, computer programming and electronics and metal fabrication, is excited to build a 15-foot tall brain. Who wouldn't be? But what they are really looking forward to is bringing this project into local schools to get kids interested in art, science, math and electronics. Seeing a giant, colorful brain around Oakland, CA just got a little more common.
By Adrian Leon Mare
The world we live in today is a technologically advanced world. While on one hand, commercialization of IT (Information technology) revolutionized our modern day lifestyle, it has raised a big question mark about the confidentiality and privacy of the information shared and managed using advanced means of communication. As computer technology continues to evolve, the task of managing and handling private and sensitive information is becoming more and more challenging with each passing day. Increased rates of cyber crimes leading to unsolicited invasions of privacy have resulted in the emergence of a new field of computer science known as cyber forensics. With the increasing demand of computer security in recent times, it has become more important than ever to understand the digital forensic technology.
What is Digital Forensics/Cyber Forensics?
Also known as cyber forensics, computer forensics involves acquiring and analyzing digital information, as part of a structured investigation, to be used as evidence in a court of law.
Digital Forensics-Primary Goals
The primary goal of Digital Forensics is to carry out an organized and structured investigation in order to preserve, identify, extract, document and interpret digital information that is then utilized to prevent, detect and solve cyber incidents.
A typical forensic investigation consists of the following main steps:
1. Preserving the data.
2. Acquiring the data.
3. Authenticating the data.
4. Analyzing the data.
Figure 1: Steps involved in a Forensic Investigation Process
1. Preserving and acquiring the data-The first and foremost step of a digital forensic investigation is to preserve and acquire the data from a computer. The step involves creating a bit by bit copy of the hard drive data.
2. Authenticating the data - The next step involves verifying the seized data. To ensure that the acquired data is an exact copy of the contents of the hard drive, the MD5/SHA-1 digests of the original and the copy are computed and matched (a minimal example follows the list below).
3. Analyzing the data-This is perhaps the most important part of the investigation process which involves careful examination and analysis of the data using forensic tools.
The process mainly involves:
– Recovering deleted files /Data Recovery
– Tracking or identifying hacking activities
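Here is the minimal example of the authentication step promised above: hash the original evidence and the acquired image in chunks and compare the digests. The file names are placeholders, and in practice examiners record the digests (MD5, SHA-1 or stronger) in their case notes.

```python
# Minimal illustration of forensic image verification: the acquired copy is
# considered authentic only if its hash matches the hash of the original evidence.
import hashlib

def file_digest(path: str, algorithm: str = "sha1", chunk_size: int = 1 << 20) -> str:
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)  # hash the evidence in chunks to handle large images
    return h.hexdigest()

original = file_digest("/evidence/source_drive.dd")    # placeholder paths
acquired = file_digest("/evidence/acquired_image.dd")
print("Image verified" if original == acquired else "Hash mismatch - image not authentic")
```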
Digital Forensics and Windows
21st century is the century of revolution and change. The transformation of the analog world into a digital world has raised new challenges and opportunities for technology lovers.
New forensic challenges arise with the introduction of newly released and latest operating systems. While on one hand, these newly released versions of Windows are aimed at making things easier for users, many of the functions (such as auto play, file indexing) performed by your operating system for your convenience can actually be used against you.
If you look at the current cyber crime statistics, you will notice that the highest percentage of cyber crimes is committed in the United States of America. 23% of the total cyber crimes take place in U.S. This calls for increased security measures to protect your confidential information from being misused.
The average user is mostly unaware of the fact that their newly upgraded operating system is leaving traces of their activity. It is essential for users to know that valuable pieces of sensitive and confidential information are stored in Windows artifacts. These artifacts can be used to recreate and restore the account history of a particular user.
Digital Forensics and Windows-The Windows Artifacts
Some of the artifacts of Windows 7 operating system include:
– Root user Folder
– Pinned files
– Recycle Bin Artifacts
– Registry Artifacts
– App Data Artifacts
– Favorites Artifacts
– Send to Artifacts
– Swap Files Artifacts
– Thumb Cache artifacts
– HKey Class Root Artifacts
– Cookies Artifacts
– Program files Artifacts
– Meta Data Artifacts
– My Documents Artifacts
– Recent Folder Artifacts
– Restore Points Artifacts
– Print Spooler Artifacts
– Log Artifacts
– Start menu Artifacts
– Jump lists
Information collected from any of these artifacts can be used to recreate the account history of a user. To gain a better understanding of how these artifacts can be used to access or retrieve valuable information, it is essential to briefly discuss some of the most important Artifacts of Windows 7.
1. Root User Folder artifacts
The Root User Folder gives access to the complete operating system. The Root User reserves the right to delete and modify files on the operating system besides having the rights to generate new users and award them some rights. Nonetheless, these rights cannot exceed the rights of a root user.
The Windows Folder is specified by %SYSTEMROOT%. The Folder can be accessed through Start\Run\%SYSTEMROOT%\System32.
2. Desktop Artifacts
All the files present on the desktop of a user are stored in the desktop folder of the operating system. Typically, the desktop is populated either,
– By the user, or
– By programs that automatically create files and place them on the desktop.
The Desktop can be accessed using the following link;
3. Pinned Files/Jump Lists Artifacts
Pinned Files or Jump lists is a relatively new feature introduced in Windows 7 released by Microsoft. Using the Jump lists all the pinned files can be accessed. Additionally, these lists also maintain a record of recently or last visited files relative to a particular software. Pinned files can be accessed from the jump list using the following link,
4. Recycle Bin Artifacts
The Recycle Bin stores the recently deleted files temporarily. These files can be restored easily. You can only view the Recycle Folder after un-checking the hide\protect system files option using the following link;
5. Registry Artifacts
Registry is the location where the configuration information of Windows is kept and stored. It can be used to obtain information related to historical and current use of applications in addition to obtaining valuable pieces of information about option preferences and system settings. It can be accessed using the following link;
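As a small illustration (not taken from the article), Python's standard winreg module can read such keys on a live Windows system; the RunMRU key below, which records commands typed into the Run dialog, is one commonly cited user-activity artifact. On a seized drive, an examiner would instead parse the offline hive files with a dedicated tool.

```python
# Hedged example: list recent Run-dialog commands from the live registry (Windows only).
import winreg

RUN_MRU = r"Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_MRU) as key:
    value_count = winreg.QueryInfoKey(key)[1]   # (subkeys, values, last_write_time)
    for index in range(value_count):
        name, data, _type = winreg.EnumValue(key, index)
        print(name, "->", data)                 # each value holds one remembered command
```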
6. App Data Artifacts
Application data or App data is a junction designed to provide backward compatibility. A junction can roughly be defined as a shortcut that serves to redirect programs and files to different locations. All the information related to settings configuration (of various apps) is stored in this folder. Furthermore, information related to the Windows address book and recently accessed files are also stored in this folder. The junction can be accessed through:
C:\Users\(username)\AppData\Roaming folder
7. Favorite Artifacts
The folder contains valuable bits of information related to Windows Explorer and Internet Explorer favorites. The folder can be accessed using the following link;
8. Send To Artifacts
The Send to folder stores information pertaining to shortcuts to different locations, and other software apps on the operating system of your computer. These shortcuts serve as destination points. Using these destination points a file can be sent or activated. Furthermore, these points can also be modified as per your convenience. The Send to folder can be accessed using the following link;
9. Swap Files Artifacts
Page Files or Swap files are the memory files of your computer that aid in expanding the memory of your computer. These files are not visible and are hidden by default settings. To view these files, following link can be used;
My Computer > Properties > Task menu > Advanced System Settings > Advanced tab > Performance > Settings > Performance options dialogue box > Advanced tab > Change.
10. Thumbs Cache Artifacts
Thumbs.db files are files that are stored in every directory on the Windows systems that includes thumbnails. These are default files (created by default) and store valuable information that is not available elsewhere. The file is created locally amongst the images. The location where cache is stored is as follows;
The display can be stopped by a user by checking on the ‘Always show icon, not thumbnails’ from the list of Folder options.
11. HKey Class Root Artifacts
The HKey Class Root or simply HKCR key contains sensitive information about different file name extensions in addition to containing information related to COM class registration. Furthermore, it is designed to be compatible with the 16-bit Window registry.
HKEY _LOCAL_MACHINE and HKEY_CURRENT_USER key both store valuables information related to file name extensions and class registration.
HKEY_LOCAL_MACHINE\Software\Classes: This key stores all the information pertaining to different users using the system.
The HKEY_CURRENT_USER\Software\Classes: On the other hand, this key stores information pertaining to the interactive user.
12. Cookies Artifacts
A number of website store information on your computer in the form of cookies. Cookies can roughly be defined as small text files containing information related to preferences and configuration of a particular user.
These files can be accessed using the following link;
C:\Users\(username)\AppData\Roaming\Microsoft\Windows\Cookies.
13. Program Files Artifacts
Windows 7 consists of two Program files folders including;
1. C:\program files
2. C:\Program files (x86)
These folders are designed to support both 32-bit and 64-bit applications on Windows 7. On a 64-bit installation, the first folder holds 64-bit applications, whereas Program Files (x86) holds 32-bit applications.
14. Meta Data Artifacts
Meta Data simply refers to information related to data itself. Using the metadata artifacts, valuable strings of file information can be obtained that can be used as evidence in digital forensic investigation.
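As a small illustration of the kind of file metadata an examiner records, the sketch below pulls timestamps and size for a single file; the path is a placeholder, and on Windows st_ctime reports the creation time.

```python
# Pull basic file-system metadata (timestamps and size) for a file of interest.
import os
import datetime

def file_metadata(path: str) -> dict:
    info = os.stat(path)
    ts = datetime.datetime.fromtimestamp
    return {
        "size_bytes": info.st_size,
        "modified": ts(info.st_mtime),  # content last changed
        "accessed": ts(info.st_atime),  # last read (if access-time updates are enabled)
        "created": ts(info.st_ctime),   # creation time on Windows
    }

print(file_metadata(r"C:\Users\suspect\Documents\report.docx"))  # placeholder path
```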
15. Restore Points Artifacts
Windows 7 gives its users the option of creating restore points, thereby creating an image of the system. This essentially gives users an option to revert to a point when the system was working correctly in case of fatal system errors. This system image also contains the drives that are required by your operating system to run, in addition to including program settings, system settings and file settings.
16. My Documents Artifacts
My Documents contains all the information related to files that have been created by users themselves. Usually when a program is installed on a system, the information is stored in this folder. It is also known as the primary storage space meant for storing all the key information. The folder can be accessed through;
17. Start Menu Artifacts
The traditional Start menu has been replaced by Start in Windows 7. Using software like Classic Shell, it is absolutely possible to get the menu back. In Windows 7, the right column of Start (the new version of the Start menu) shows links to the respective libraries instead of folders.
18. Log Artifacts
The logs included in the Windows 7 operating system contain valuable information pertaining to application events, security-related events, setup events, and forwarded events.
19. Print Spooler Artifacts
Print Spooler is a software program responsible for organizing all the print jobs that have been sent to the print server or the computer printer. In essence all the print related information is stored in this folder.
The folder can be accessed by using the following link;
20. Recent Folder Artifacts
The Recent Folder stores links of the recently accessed or opened files by a specific user. The folder can be accessed by using the following link;
Windows Forensics- Analysis of Windows Artifacts
Analysis of Windows artifacts is the perhaps the most crucial and important step of the investigation process that requires attention to detail.
The following flowchart depicts a typical Windows artifact analysis for the collection of evidence.
Green benefits of telework aren't just for Earth Day
Telecommuting helps reduce greenhouse gas emissions, study says
- By Alysha Sideman
- Apr 21, 2011
Many people are choosing not to drive to work on April 22 and instead work from home in honor of Earth Day. They might also use less energy and green up their environments any way they can.
According to Earthday.org’s "A Billion Acts of Green" blog, people choose to telework for various reasons.
“By teleworking, I reduce [my] carbon footprint: no exhaust pollution,” comments Angela LeFall. Teleworking also reduces congestion on highways and streets, and reduces the wear and tear on public property, such as streets, buses, trains, etc., she adds.
And if people telework more frequently, Earth Day initiatives can have far-reaching effects in daily life.
A recent telecommuting study conducted by the American Consumer Institute said greenhouse gas emissions could be reduced by about 588 tons in the next decade if another 10 percent of the workforce jumped into the movement, reports the Philadelphia Inquirer.
A number of federal agencies are doing their part thanks to the Telework Enhancement Act signed by President Barack Obama in December 2010, which requires all agencies to develop telework policies.
For example, thousands of Defense Information Systems Agency employees relocating to Fort Meade, Md., are being encouraged to telework because many of them still live in Northern Virginia.
Other environmental benefits of teleworking include fewer dry-cleaning chemicals being used and workers saving money on child care. In addition, firms “will need less equipment, office space, parking spaces,” the American Consumer Institute study states.
Telework arrangements also allow professionals to control their work environments. Teleworkers can also manage “their own work waste at home by reducing paper usage, increasing recycling efforts, using CFL or LED light bulbs, opting for windows and fans instead of air conditioning, and turning off electrical appliances when they’re not in use,” writes Sara Sutton Fell on her "Broadband for America" blog.
Alysha Sideman is the online content producer for Washington Technology. | <urn:uuid:422e080f-1f4a-4f78-ac21-32729b5e8f2f> | CC-MAIN-2017-04 | https://fcw.com/articles/2011/04/21/green-telework-earth-day.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00513-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934322 | 467 | 2.578125 | 3 |
In the final chapter of our Apple in classroom series, we hear from six more educators on ways Apple has impacted their classrooms.
Vickie S. Cook, Ph.D., Director of the University of Illinois at Springfield’s Center for Online Learning, Research and Service @DrVickieCook
Using the power of touch to teach and challenge students.
For Vickie S. Cook, director of the University of Illinois at Springfield’s Center for Online Learning, Research and Service, it’s all about using the power of touch to create a new experience for students: “Students can touch and interact with devices such as the iPhone, iPad, and Mac products using sensory perception that engages learners in new ways that teach and challenge the students. This highly visual approach allows students to engage with learning objects to build the skills needed in the 21st century.”
Apple technology has delivered educators the ability to tailor learning to individuals, groups and entire classes. This means that teachers can level the playing field for students with varying learning modality preferences, and create solid visualizations of concepts more easily. It also helps “bring learning to life anytime, anywhere, through connectivity and a highly personalized, visual environment,” she adds.
Sam Gliksman, EdTech Author, Speaker, Consultant and Owner/Blogger for EducationalMosaic.com @samgliksman
It’s not just about tech. Traditional educational paradigms are changing.
“To be absolutely clear it's far more about changing traditional educational paradigms than about any one particular device,” explains Sam Gliksman, EdTech consultant and owner/blogger of EducationalMosaic.com.
Gliksman is partial to the iPad as it allows students to engage with material in new ways. One example he offers is a field trip to a California mission where students used their iPads to capture photos, sounds and interviews. Coming back, they recorded video in front of a green screen, acting as virtual tour guides for the mission as they weaved images and sounds in and out of the video background. “Mobile devices such as iPads empower students with tools that spark creativity and innovation,” he says.
Tom Kuhlmann, Chief Learning Architect at Articulate.com @tomkuhlmann
With classroom content it’s now “pull over push.”
It's one thing for people to easily consume content as part of their learning. And mobile devices do that. But the more interesting part is that they allow the learners to create and share what they learned, says Tom Kuhlmann, chief learning architect at Articulate.com. “This ability to integrate creativity with personal learning and then share it with others not only creates a dynamic learning experience, it [also] makes it fun and engaging,” he adds.
Kuhlmann calls out the App Store and “all of the easy-to-use content creation apps” as a valuable resource that is helping change the learning and teaching dynamic from push to pull. “In the past, learning was mostly a push mechanism where the instructor pushed content out for the learner to consume. Today, the Apple devices let learners easily explore and pull content in,” says Kuhlmann. “They also allow anyone to create their own content and in turn demonstrate better understandings of what they’ve learned.”
Cory Tressler, Associate Director at the Ohio State University @TresslerTech
Mobile technology is sparking a revolution in education.
“Students, teachers, parents and administrators are all now part of a highly mobile society that has access to an incredible amount of information, collaboration tools and computing power right in the palm of their hand. Apple sparked this revolution and it has transitioned into education,” states Cory Tressler of the Ohio State University.
He is a firm believer in the iPad, describing it as “the most powerful educational technology tool ever developed,” as it provides “a mobile option for schools, teachers and students that is extremely powerful, easy to use, and enhances any learning environment.” From creating video and photos, to accessing information and research, to taking notes and writing papers, Tressler says, “it delivers any student of any age the power to do and learn instantly.”
Brianna Crowley, Teacherpreneur at The Center for Teaching Quality (CTQ) @AkaMsCrowley
The mobile device as a portal to grow student imagination.
Apple is making learning accessible through its touch interface and quality education apps, says Brianna Crowley, teacherpreneur at the Center for Teaching Quality.
From kindergarteners that cannot yet type, or high schoolers seeking individualized learning experiences, the iPad offers wonderful tools. The same goes for students with disabilities: “When a special education teacher asks for technology support for students with disabilities, I can confidently suggest an iPad because of the accessibility settings.”
Crowley loves the iPhone because it blurs the boundaries between classroom and "real world" for both herself and her students. “I love showing them how a device they used mainly for social or family communication can help them take ownership over their learning process. Showing them this helps me also reinforce the idea that learning happens everywhere, constantly,” she says. “Our devices are simply portals for our imaginations to take shape or our curiosities to be explored.”
John Wetter, Technology Services Manager at Hopkins Public Schools @johnwetter
Students constantly surprise us by what they do on their iPads.
“Look at what students are doing every day and you’ll see the transformation in learning,” says John Wetter of Hopkins Public Schools who is charged with managing over 4,000 iPads used by the district’s students.
He calls out the iPad as a transformative device in education. “Seeing an elementary student start programming with an app like Kodable, to students creating an entire movie project in iMovie on their iPad really shows the creative power unleashed in this environment,” he says. “Great teachers working with great technology to help enhance student learning, it's what we're all about.”
We’d love to hear about your experience or thoughts on technology in the classroom. If you have an idea, feel free to join the conversation by Tweeting @JAMFSoftware with your insight or story.
And as an added bonus, during the International Society for Technology in Education (ISTE) conference, we’re hosting an “Apple a Day Giveaway” Twitter contest where you could win an Apple TV. Simply tell us why or how you leverage Apple in the classroom.
Interested? Learn more about the rules and participation criteria. | <urn:uuid:34a7de40-8183-4bc9-a16e-211b85b68513> | CC-MAIN-2017-04 | https://www.jamf.com/blog/how-has-apple-transformed-your-classroom-part-iii/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00513-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943336 | 1,419 | 3.078125 | 3 |
Blood Groups Types Explained - Blood Group Diet, Blood Group Matching
When your practitioner tells you your "blood group type", you are being given two pieces of information - your Blood Group and your Rh Status.
Blood Group Types Explained
- Your blood group will be A, B, AB, or O. If you have "A" "B" or "O" blood group, you have antibodies in your blood plasma that destroy some of the other blood groups. If you have group "A" blood, you cannot receive blood that is group "B" and vice versa. If you have "O" blood, your body will create antibodies to fight "A" or "B" blood. If you have group "AB" blood however, your body will not create antibodies for any of the other blood groups.
- Your Rh status will be listed as negative (-) or positive (+). If you have Rh- blood, your body may form antibodies against Rh+ blood and destroy it. In order for this to happen, you must first be exposed to Rh+ blood (i.e., through a blood transfusion or carrying an Rh+ fetus). This can be a problem if you have antibodies against Rh+ blood and are pregnant with an Rh+ fetus. However, there is medication that can prevent this reaction from occurring if it is given immediately after you are exposed to Rh+ blood.
|Blood type and Rh||How many people have it?|
|O +||40 %|
|O -||7 %|
|A +||34 %|
|A -||6 %|
|B +||8 %|
|B -||1 %|
|AB +||3 %|
|AB -||1 %|
Does your blood group type reveal your personality?
According to a Japanese institute that does research on blood types, there are certain personality traits that seem to match up with certain blood group types. How do you rate?
|Blood Group||About Personality|
|TYPE O||You want to be a leader, and when you see something you want, you keep striving until you achieve your goal. You are a trend-setter, loyal, passionate, and self-confident. Your weaknesses include vanity and jealously and a tendency to be too competitive.|
|TYPE A||You like harmony, peace and organization. You work well with others, and are sensitive, patient and affectionate. Among your weaknesses are stubbornness and an inability to relax.|
|TYPE B||You're a rugged individualist, who's straightforward and likes to do things your own way. Creative and flexible, you adapt easily to any situation. But your insistence on being independent can sometimes go too far and become a weakne ss.|
|TYPE AB||Cool and controlled, you're generally well liked and always put people at ease You're a natural entertainer who's tactful and fair. But you're standoffish, blunt, and have difficulty making decisions.|
The chart of blood groups from whom you can receive the blood is given below.
|If Your Blood Group is||O-||O+||B-||B+||A-||A+||AB-||AB+|
Blood Group Diet
The blood group diet is said to have originated from two American Naturopaths, Dr James D'Adamo, and his son Dr Peter D'Adamo, who believe that your blood group type is the key to how you burn your calories, which foods you should eat and how you would benefit from certain types of exercise. They recommend that eating to suit your blood group may, help you to lose weight, help you fight disease, boost your immune system and slow down the ageing process.
It is believed that a chemical reaction occurs between your blood and foods as they are digested. Lectins, a diverse and abundant protein found in food, may be incompatible with your blood group and adverse side effects may occur. The avoidance of these Lectins which can agglutinate (adhere or stick to one another) can be important if your particular cells-determined by your blood type may react with them.
Blood Group Matching For Marriage
Matching blood group before marriage is important. This is to prevent Rh incompatibility. Rh incompatibility can lead to erythroblastosis fetalis (Hemolytic disease of the newborn-HDN). Fetal RBC get destroyed & newborn may get severe anaemia, jaundice. This jaundice is more severe than Physiological jaundice ( which is the most common and will usually resolve on its own). In very severe form, fetus may die due to heart failure. This is mediated by antigen-antibodies reaction. Transfer of maternal antibodies across the placenta occurs. This happens when Rh +ve man marries Rh-ve lady. So Rh +ve man should try to avoid marrying Rh-ve lady.
So, it becomes important to match the bood group before marriage otherwise newborn with erythroblastosis fetalis may need exchange transfusion. Complete blood count, bilirubin levels are done. High levels of bilirubin may lead to kernicterus. Kernicterus means deposition of bilirubin in basal ganglia region & can cause severe brain damage (bilirubin encephalopathy). In kernicterus, baby will be lethargic, slowly responding when breast-feeding is tried. Bulging fontanelles may be seen. In 1st pregnancy problem is less severe but in subsequent pregnancies problem becomes more severe. Hemolytic disease of the newborn can be treated before birth by intrauterine transfusion.
Incompatibilities of ABO blood types do not cause erythroblastosis fetalis. Erythroblastosis fetalis can be prevented by giving the mother Rh0(D) immune globulin at 28 wk gestation and within 72 hours of pregnancy termination. Due to preventive treatments given to the mother, erythroblastosis fetalis is less common now-a-days. Direct antiglobulin test (DAT, Direct Coomb's test) is used to diagnose HDN.
If a girl has RH-ve & boy has RH+ve Blood Group factor (irrespective of A, B, AB, O), then there is about 50% chance that the child will be RH+ve. I such case, complication may occur. Dring pregnancy if child's blood and mother's blood mixes, mother's immune system starts to develop antibodies against RH factor. This may result in destruction in RBC as mentioned above but even in that case, it is possible to prevent such complications and the pregnancy is possible. This is because of the developments in the field of medical science.
This list is based on the content of the title and/or content of the article displayed above which makes them more relevant and more likely to be of interest to you.
We're glad you have chosen to leave a comment. Please keep in mind that all comments are moderated according to our comment policy, and all links are nofollow. Do not use keywords in the name field. Let's have a personal and meaningful conversation.comments powered by Disqus | <urn:uuid:5dc12d9f-5bd2-4538-96f6-4a7416fda7ad> | CC-MAIN-2017-04 | http://www.knowledgepublisher.com/article-326.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00165-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.913429 | 1,495 | 2.75 | 3 |
There has been a lot of talk and speculation about the size of the botnet formed by computers targeted with the Flashback malware and whether all these machines were, in fact, all Macs.
The botnet was first discovered by the researchers working for Dr. Web, a Russian security company, who managed to redirect the botnet traffic to their servers. Initially they counted 550,000 infected machines, but the number has reached 600,000 shortly after.
Their announcement has sparked a great debate, likely fueled in part by the Macs’ image as machines that can’t be easily infected by malware.
Kaspersky Lab researchers decided to check for themselves if the claim was true, and by reverse engineering the malware’s C&C domain generation algorithm and using the date, they managed to beat the botnet herders to the registration of a domain that the infected machines proceeded to send requests to.
“Since every request from the bot contains its unique hardware UUID, we were able to calculate the number of active bots,” said the researchers. “Our logs indicate that a total of 600 000+ unique bots connected to our server in less than 24 hours. They used a total of 620 000+ external IP addresses. More than 50% of the bots connected from the United States.”
In addition to that, they used passive OS fingerprinting techniques to estimate which percent of the machines were actually Macs, and as it turns out, 98 percent likely are.
It is believed that the number of active Mac machines around the world reaches 60 million, making the ones infected by this particular malware part of one percent of the total number.
Given that Flashback has the capability of downloading additional malware on the affected machines, it is a good idea for all those who suspect that they might have been hit to verify the speculation with this free tool. | <urn:uuid:dde0f070-e524-42ed-8e93-08f88d5de569> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2012/04/10/600k-strong-flashback-botnet-comprises-mostly-macs/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00073-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.96023 | 384 | 2.78125 | 3 |
SACRAMENTO, Calif. — Advanced geological mapping and subsurface drilling tools must still be developed in order for geothermal power to become a more prominent source of renewable energy, experts said at the National Geothermal Summit.
In a breakout session at the summit on Wednesday, Aug. 8, a panel of government and private-sector geothermal experts discussed the value that additional 3-D modeling, geophysical surveys to indicate hot water flow below ground, and other research and development work could have on the future of geothermal exploration.
James Faulds, director of the Nevada Bureau of Mines and Geology, said that like oil and gas deposits, about two-thirds of the geothermal energy sources remain hidden, with no surface expression. Advanced technologies for fossil fuels has been developed, but similar advancements for geothermal have lagged behind.
In addition, Faulds added that often you’ll have a geothermal well in production, but then another site just a few hundred feet away may be hot, but dry. Those risks are a major impediment for developing geothermal systems.
“We need a better understanding why certain wells are productive and why others are not,” Faulds said. “Fundamentally we need better conceptual models of these geothermal systems to figure out where to drill and reduce the risk.”
“Some of this can be funded by industry, but some of the broader studies probably need to be funded by government entities,” Faulds added.
The U.S. Department of Energy was represented by Hildigunnur Thorsteinsson, team lead of the U.S. DOE Hydrothermal and Resource Confirmation. She agreed with Faulds that in comparison with oil and gas production, there’s a lack of high-performance tools and temperature devices for geothermal exploration.
Thorsteinsson pointed out that there are a variety of technical pathways to overcome, organized by the DOE into categories like advancements in non-invasive geophysics, invasive geophysics, geology and structure, remote sensing, geo-chemistry and cross-cutting, seismic gravity tools.
Joe Iovenitti, vice president of resource for Alta Rock Energy Inc., added that short-term goals should be established to find ways to lower the cost of drilling and to establish further techniques for boreholes and wells.
In the long term, Iovenitti believed geothermal exploration would be dependent on data integration. He suggested the government purchase private data sets related to geoscience and make that data available to companies that are interested in geothermal pursuits. Iovenitti said that would help correlate where to drill and what drilling applications will work in a certain area.
“There’s definitely a technological challenge here that we can address,” Thorsteinsson said regarding the U.S. government’s ability to help companies interested in geothermal drilling. “And it’s an opportunity for the department to put up funding opportunities and make some progress.”
Thorsteinsson revealed that the U.S. DOE is pushing forward with funding projects. She said there are more than 30 federal R&D efforts under way. One in particular deals with percussive and encapsulated drilling techniques that have particles in the drilling buds that make tiny explosions down in a hole to help break up rock.
Those projects that meet technical milestones will receive funding for a second phase starting in 2013, Thorsteinsson added. | <urn:uuid:5168176c-1cda-4523-8e3e-0f4d57b8cfd7> | CC-MAIN-2017-04 | http://www.govtech.com/technology/Geothermal-Power-Needs-Inventions-to-Thrive.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00009-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939172 | 723 | 2.578125 | 3 |
New Threats Leave Less Room for Error
As the number of broadband users increases, the ripple effect of any security vulnerability becomes much greater. Combine that with a wide array of new virtual threats, and you have a much more susceptible IT environment, said David Redekop, co-founder of Nerds On-Site, a kind of brain trust for assisting enterprises and individual consumers with IT products and concepts.
“We used to have reasonably big windows when a security vulnerability was discovered,” he said. “We would find out about it and say, ‘That’s a bit of risk for our customers. We should plan on patching that system.’ As the number of broadband users—and potentially malicious users—grows, those windows are getting smaller and smaller.”
Although it used to be acceptable to employ a reactive, patch-as-you-go strategy, information security professionals have to take more preventative measures today. “You can no longer assume that you’re going to get a warning, and you can patch it and then you’re safe,” Redekop said. “You have to assume that by the time you’ve been warned, that particular exploit has been tested on your network by some zombie or some hacker. All of a sudden, we have to put big fences around our (network), and additional fences just in case there are some holes we weren’t aware of.”
Redekop recommends using reverse firewalls to cut down on spam and prevent malware from slipping in and out through back doors. “A reverse firewall virtually inspects any computer’s outbound request,” he said. “Spam is sent out by some piece of spyware on the users’ computers that helps the spammers’ cause by sending out masses of e-mails. A reverse firewall implementation (ensures) computers on a network can only send mail through a server, which inspects for viruses and spam. It’s an easy implementation, and it can be done on any professional-grade router.”
Users also need to be aware of the vulnerability of information sent through wireless networks or in public hot spots. Part of the problem is that more than 90 percent of users still use clear-text e-mail in all situations, Redekop said. Hackers use Cain-and-Abel programs to pick up traffic in exposed areas like this, and can intercept user names and passwords relatively easily. To avoid compromising sensitive information, he suggests using at least some level of encryption to send and receive e-mail.
For more information, see http://www.nerdsonsite.us. | <urn:uuid:b91f546e-261a-4db7-a28f-d3f380b9b2aa> | CC-MAIN-2017-04 | http://certmag.com/new-threats-leave-less-room-for-error-expert-says/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00222-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939458 | 561 | 2.53125 | 3 |
According to scientist, it has been calculated that on September 1 2016. Sky watchers and some parts of Africa will see the moon passing in front of the sun. This will create an Annual Solar Eclipse.
Scientist have proved that during an Annual Solar Eclipse, the moon is father away from earth than during a solar eclipse.
The Annual Solar Eclipse appear small and doesn’t completely cover the sun. it will be available in most parts of Africa.
A partial solar eclipse will be visible on most part of the African continent.
So, get ready to experience the beauty of the Solar Annual Eclipse.
What is Eclipse?
Eclipse is a terms used to describe Solar Eclipse. That is when the Moon’s shadow crosses the Earth. Eclipse is also an event that occurs when an astronomical object is temporarily obscured. These could be when an object is passing into the shadow of another body or when another object is passing in between.
Types of Eclipse
- Solar Eclipse: This event occurs when the shadow of the moon passes the surface of Earth.
- Lunar Eclipse: Is an event that occurs when the Moon slowly moves in to the shadows of earth.
Tips on Viewing the Annual Eclipse
On Thursday, 1st September 2016, the sunshine will blaze like a ring with fire rounding it. This is caused by the distance between the Moon and Earth. The Event is called the Annual Eclipse, unlike it is popularly known as the Total eclipse.
To enjoy this Event without damaging you eye lens, below are some of the safety tips.
- Get a dark shade
- Put it ON before looking towards the direction of the sun.
- Gradually look at the sun either from the left or from the right to the direction of the sun
- Enjoy the moment.
- Ensure you don’t look at the sun while driving
- Do not Walk while looking at the sun.
- Avoid direct contact to the sun to prevent eye lens damage.
Next Solar Eclipse Event
The Eclipse cycle occurs when different eclipse are separated by time interval. Eclipse Cycle takes place when the orbital movement of the object form repeating harmonic patterns. Which results in a repetition of a solar or lunar eclipse every 6,585.3 days
The next annular solar eclipse will occur Feb. 26, 2017, with the point where the eclipse will appear to last the longest located off the eastern coast of South America.
The next highly anticipated total solar eclipse, set to take place Aug. 21, 2017, is being called the “Great American Eclipse” because the best viewing locations will be within the continental U.S.
For this event, Pasachoff said now is the time to make necessary travel plans and reservations if you want to observe the eclipse within the path of totality. Amazing isn’t it.
Solar Eclipse Videos
More Video of Solar Eclipse | <urn:uuid:91eaa1b6-3ce1-4eaa-af57-533e578c246b> | CC-MAIN-2017-04 | http://mikiguru.com/solar-eclipse-2016-eclipse-see/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00038-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.919158 | 587 | 3.234375 | 3 |
With the release of the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors came a push to hold software developers to be held liable for any insecure code they write.
Alan Paller, director of research for the SANS Institute commented that "wherever a commercial entity or government agency asks someone to write software for them, there is now a way they can begin to make the suppliers of that software accountable for [security] problems."
Put in place to protect the buyer from liability, these efforts are also guided at producing secure code from the very beginning since vendors won’t be able to charge for fixing vulnerabilities found in their software.
The Most Dangerous Programming Errors is a list compiled yearly by the Common Weakness Enumeration, a community initiative sponsored by the US Department of Homeland Security and the MITRE corporation, and the SANS Institute. Drawing from an international pool of approximately 40 software security experts from businesses, governments, and universities, the Top 25 are created by building from the previous year’s list through a private discussion board. Once the threats discussed were evaluated by the research team and the list was narrowed down to 41 entries. These entries were then rated according to two metrics: prevalence and importance and the 25 with the highest ratings were selected to the list. The ratings for prevalence were:
When looking at the importance of a threat, the ratings were:
In addition to ranking the Top 25 weaknesses, the report broke them down in to three categories: Insecure Interaction Between Components, Risky Resource Management, and Porous Defenses. The following table displays the rank, score, weakness, and category it falls under.
|1||346||Failure to preserve web page structure (Cross-site scripting)||Insecure interaction between components|
|2||330||Improper sanitization of special elements used in a SQL command (SQL injection)||Insecure interaction between components|
|3||273||Buffer copy without checking the size of input (Classic buffer overflow)||Risky resource management|
|4||261||Cross-site request forgery||Insecure interaction between components|
|5||219||Improper access control (Authorization)||Porous defenses|
|6||202||Reliance on untrusted inputs in a security decision||Porous defenses|
|7||197||Improper limitation of a pathname to a restricted directory (Path traversal)||Risky resource management|
|8||194||Unrestricted upload of a file with dangerous type||Insecure interaction between components|
|9||188||Improper sanitization of special elements used in an OS command (OS command injection)||Insecure interaction between components||10||188||Missing encryption of sensitive data||Porous defenses|
|11||176||Use of hard-coded credentials||Porous defenses|
|12||158||Buffer access with incorrect length value||Risky resource management|
|13||157||Improper control of filename for include/require statement in PHP program (PHP file inclusion)||Risky resource management|
|14||156||Improper validation of array index||Risky resource management|
|15||155||Improper check for unusual or exceptional conditions||Risky resource management|
|16||154||Information exposure through an error message||Risky resource management|
|17||154||Integer overflow or wraparound||Insecure interaction between components|
|18||153||Incorrect calculation of buffer size||Risky resource management|
|19||147||Missing authentication for critical function||Porous defenses|
|20||146||Download of code without integrity check||Risky resource management|
|21||145||Incorrect permission assignment for critical resource||Porous defenses|
|22||145||Allocation of resources without limits or throttling||Risky resource management|
|23||142||URL redirection to untrusted site (Open redirect)||Insecure interaction between components|
|24||141||Use of a broken or risky cryptographic algorithm||Porous defenses|
|25||138||Race condition||Insecure interaction between components|
If developers are expected to ensure that their code is void of any weakness listed in the Top 25, they need to a) know how to identify the weakness and b) know how to prevent it. The report further breaks down those making the list and provides a detailed description for each weakness. These descriptions are broken down for the developer, providing a Summary that provides the weakness prevalence rating, a rating for the remediation cost, the attack frequency, the consequences of the weakness being exploited, how easy it is to detect the weakness, and how aware attackers are that the weakness exists. Following the summary is a discussion that provides the developer with a quick description of how an attack is carried out against the weakness and a sequence of prevention techniques to help developers avoid making such mistakes in their code.
In developing their Top 25 list, CWE/SANS included a comparison to the OWASP Top Ten making a clear statement of the importance of OWASP’s list while also recognizing distinct differences between the two. Most clearly defined is that the OWASP Top Ten deals strictly with vulnerabilities found in web applications where the Top 25 deals with weaknesses found in desktop and server applications as well. A further contrast is seen in how the list is compiled. OWASP giving more credence to the risk each vulnerability presents as opposed to the CWE/SANS Top 25 that included the prevalence of each weakness. This factor is what gives Cross-site scripting the edge in the Top 25 as it is ranked number 1 while OWASP has it ranked at number 2.
Pushing to make the Top 25 a checklist for developers to use to avoid lawsuits seems like the panacea for all programming vulnerabilities to those who support adopting such standards. Opponents claim that it will result in costlier software. Neither addresses what happens in the event of a zero-day attack. With an estimated 78% of all vulnerabilities being found in web applications, the likelihood of falling victim to an unknown threat is high.
So what happens if XYZ software abides by the Top 25 but weeks after deploying a new application, they are hit by an unknown threat? Is XYZ liable? Or is the buyer? And what if the buyer fails to protect the application? If the code is attacked as a result of their failure to secure other resources, where does the blame lie?
In my opinion, this has already been answered and is put into practice daily. Merchants who want to accept credit cards must comply with PCI standards that require either a code review or the deployment of a web application filter. Taking either route insures compliance but the recommendation is to make use of both solutions. Code review, which would help address Top 25 weaknesses, helps the developer indentify weaknesses and address them before releasing the application to the marketplace. Building secure code from the beginning is always a best practice to be followed. Combining this with the protection a web application firewall provides a line of defense against unknown vulnerabilities to stave off potential zero-day threats. | <urn:uuid:d1896048-fc6e-47d3-bb00-1ccbaae0f43b> | CC-MAIN-2017-04 | http://www.applicure.com/blog/cwe-sans-top-25-dangerous-programming-errors | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00066-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.905487 | 1,468 | 2.703125 | 3 |
Public-Key Cryptography Standards (PKCS)
The Public-Key Cryptography Standards are specifications produced by RSA Laboratories in cooperation with secure systems developers worldwide for the purpose of accelerating the deployment of public-key cryptography. First published in 1991 as a result of meetings with a small group of early adopters of public-key technology, the PKCS documents have become widely referenced and implemented. Contributions from the PKCS series have become part of many formal and de facto standards, including ANSI X9 documents, PKIX, SET, S/MIME, and SSL.
Further development of PKCS occurs through mailing list discussions and occasional workshops, and suggestions for improvement are welcome. For more information, please contact us.
The draft Version 2.30 of the PKCS #11 specification is now available for 30-day public review. The public review will continue through Wednesday 28-Oct-2009. Please send all comments to firstname.lastname@example.org.
See the PKCS #11 page for links to the draft documents.
Contributions for PKCS are welcome! Please read our contribution agreement.
Note: PKCS #2 and PKCS #4 have been incorporated into PKCS #1
- Draft Version 2.30 of the PKCS #11 specification available
- Amendment 1 to PKCS #5 v2.0: XML Schema for Password-Based Cryptography available
- PKCS #11 v2.20 Amendment 3: Additional PKCS #11 Mechanisms available
- PKCS #11 v2.20 Amendment 1: PKCS #11 Mechanisms for One-Time Password Tokens available
- PKCS #11 v2.20 Amendment 2: PKCS #11 Mechanisms for the Cryptographic Token Key Initialization Protocol available
- Notice regarding PKCS #15 | <urn:uuid:88efdeae-f051-4812-a034-3b9badaca0bc> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/public-key-cryptography-standards.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00552-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.876157 | 371 | 2.828125 | 3 |
ChallengeBy Edward Cone | Posted 2002-04-08 Email Print
Its tortuous route to modernizing air traffic control systems has cost the Federal Aviation Administration billions. Has the agency finally learned its lessons?: Fixing System of Systems">
Challenge: Fixing System of Systems
The challenge is immense.
Current air navigation systems are organized by assigning planes to preset flight paths, defined by a series of radio beacon codes; flight progress is monitored by radar, with controllers speaking instructions over radio. The FAA would like to move to a system that makes more use of satellite-assisted navigation, giving pilots moving displays of the other planes in their vicinity, and decreasing reliance on voices by adding a digital data link between controllers and pilots.
First, the agency has to break free of the limitations of systems designed in the 1950s and '60s. For example, one of the backbone systems, the Host Computer System, continues to run on software written in a dead mainframe computer language, Jovial, and even those scarce Jovial programmers still working have a hard time navigating all the patches that have been applied to the software over the years.
The system has been upgraded by adding new mainframe hardware, and a new front end was added through a project called the Display System Replacement (DSR) that gave the controllers Unix-based workstations.
Host is one of many systems that was supposed to be replaced by AAS, but the agency is only now ramping up a replacement project. Under the FAA's new strategy of proceeding a step at a time, replacing an old mainframe system that still basically works just hasn't been a priority.
The air traffic control system is really several interlocking systems. Airport tower controllers are in charge of landing, take-off, runway assignments, and preventing collisions on the ground.
Flight arrivals and departures are managed by Terminal Radar Approach Control facilities (TRACONs), which track and direct planes from just outside the range at which they are visible from the tower (about 5 miles) to a range of 40 or 50 miles. Although a TRACON is typically located at a major airport, TRACON controllers also have to worry about traffic bound for smaller airports in their region, helicopter flights over nearby cities, and so on.
For years, TRACONs have used the computer system called ARTS that was originally built by IBM and is now in the hands of Lockheed Martin, which acquired IBM's old Federal Systems business. The replacement for ARTS, Raytheon's STARS, wound up running about four years behind schedule.
Both ARTS and STARS track arriving and departing flights using radar data, display the position of each plane on a controller's screen, and help with the process of making sure planes don't get too close together. There are many versions of ARTS in use, including some with updated color displays that rival those offered by STARS, but STARS is supposed to provide one common replacement system that makes more use of open standards like Unix and off-the-shelf hardware. <[>As a departing flight moves away from the airport, TRACON controllers take it to the edge of the area they control and hand it off to the next control center in its flight path. Usually, for a cross-country flight, that's an En Route facilityone of the air traffic control centers that specialize in managing the long-haul portion of a flight.
Here, the key computer systems are Host, the legacy back end, and DSR, the more modernized controller's workstation. It's also where URET is being deployed, as a sort of sidecar to DSR for the second member of a controller team. While one controller watches the real-time radar monitor, his partner uses URET's 20-minute projections of flight paths to detect and avert conflicts that would put one plane into the path of another. | <urn:uuid:0f497430-032d-440a-a65b-537d971693b2> | CC-MAIN-2017-04 | http://www.baselinemag.com/c/a/Projects-Enterprise-Planning/Can-FAA-Salvage-Its-IT-Disaster/4 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00092-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959731 | 797 | 2.640625 | 3 |
Édouard-Léon Scott de Martinville was this French dude who lived in 1800s Paris, where he made a living as a printer and seller of books.
But he was interested in inventions and eventually tried his hand at a few. Scott de Martinville shrewdly realized that if a technology (photography) could be invented to record visual images -- as it was three decades before -- it stood to reason that a technology also could be developed to capture sound.
Scott de Martinville knew that if he could scale this technology to capture all of Earth's sounds, he could rule the world.
Kidding. Rather than indulge any megalomania, the Paris printer built something called the "phonautograph," and with it he made this recording -- the first-ever sound recording, according to the video's narrator -- in 1860.
Quality-wise, it falls well short of Neil Young's standards. But it's still the first. And that's cool.
This story, "Listen to the first sound recording, made in 1860" was originally published by Fritterati. | <urn:uuid:020bdf19-26d3-4ec8-b143-ef89c70a9d07> | CC-MAIN-2017-04 | http://www.itnews.com/article/2906136/listen-to-the-first-sound-recording-made-in-1860.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00304-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.979762 | 228 | 2.875 | 3 |
Whilst most people know what phishing is, very few realize the lengths criminals will go to in order to initiate a phishing attack. Phishing has moved beyond simply distributing emails with fake corporate logos, attacks are now much more sophisticated, using clickable advertising to create legitimate-looking phishing websites to capture the sensitive data of an unsuspecting victim. This infographic highlights the five steps taken by cybercriminals to help you better understand the anatomy of a phishing attack.
Jon Collins’ in-depth look at tech and society
Kathryn Cave looks at the big trends in tech | <urn:uuid:77c649dd-1b13-41f5-8206-59f79b74f281> | CC-MAIN-2017-04 | http://www.idgconnect.com/view_abstract/39847/the-anatomy-phishing-attack | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00148-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93626 | 119 | 2.796875 | 3 |
DDoS Detection: Separating Friend from Foe
Full traffic visibility to diagnose those nasty attacks
In many organizations, networks are at the core of the business, enabling not only internal functions such as HR, supply chain, and finance but also the services and transactions on which the business depends for revenue. That makes network availability critical. Any interruption of access from the outside world turns off the revenue spigot, impacting profit and creating a bad user experience that can damage customer satisfaction and result in permanent loss of patronage. The worse the outage, the worse the damage. That’s why speed is so important in detecting, diagnosing, and responding to Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks.
One of the chief challenges in responding to an attack is to distinguish friend from foe. Without a way to drill down into traffic details and examine host-level traffic behavior, it can be difficult to tell the difference. That’s why it’s critical to have a network visibility tool like Kentik Detect, which allows you to quickly filter for key attack metrics. The sooner you can determine where a traffic spike is coming from and going to, the sooner you can decide on the appropriate response. And even after an attack has passed, examining full resolution traffic data from that attack can still reveal information that can be applied to better prepare for future events.
About DoS and DDoS
Just to be sure we’re on the same page, a DoS attack is an attempt to make computing or network resources unavailable for normal usage, such as interrupting a host’s access to the Internet or suspending its services to the Internet. DoS becomes DDoS when the source of the attack is distributed, meaning that the attack comes from more than one unique IP address. DDoS attacks are commonly launched from “botnets” of compromised hosts that can number up into the thousands.
It’s widely known that DDoS attacks are rapidly increasing in frequency and size. While mega attacks that last for many hours and reach 200 Gbps or more make the news, the vast majority of attacks last under an hour and are less than 1 Gbps in volume. Smaller attacks often happen without being noticed, though they may be harbingers of larger attacks to come. Mid-sized attacks are more readily felt, but distinguishing between a friendly surge in normal traffic and an attack is key to timely response. Large attacks are fairly obvious, and in these cases diagnosing the traffic is important to understand network entry points and sources. In all cases, a clear assessment is important to understand the best way to mitigate the attack.
The most common form of DDoS is the volumetric attack, in which the intent is to congest all of the target network’s bandwidth. Roughly 90% of all DDoS attacks are volumetric, with application-layer attacks making up the remaining 10%. According to Akamai’s Q1 2015 State of the Internet report, over 50% of volumetric attacks are IP flood attacks involving a high volume of spoofed packets such as TCP SYN, UDP, or UDP fragments. A growing percentage of attacks are reflection and amplification attacks using small, spoofed SNMP, DNS, or NTP requests to many distributed servers to bombard a target with the much more bandwidth-heavy responses to those requests.
In the last few quarters, both Akamai and other Internet security observers have noted rapid growth of reflection attacks based on spoofed Simple Service Discovery Protocol (SSDP) requests sent to IP-enabled home devices with poorly protected Universal Plug-n-Play (UPnP) protocol stacks. SSDP reflection now accounts over 20% of all volumetric DDoS attacks.
DDoS detection and analysis cases
Depending on your organization type (e.g. ISP, hosting company, content provider, or end-user organization), you may be concerned only with attacks that directly affect your resources. Or you might want to know about any attack traffic that’s passed — or is passing — through your network. Either way, there are two general cases of DDoS analysis:
- Diagnosing — You’ve already detected that something is amiss, for example one of your resources is experiencing service degradation, you’re seeing anomalous server log entries, or a circuit is unusually full. In this case, you’ll need to identify the traffic that’s causing the problem, and, if it’s not legitimate, to characterize the attack clearly enough to enable specific mitigating actions.
- Spelunking — In this case you’re not aware of any current attacks but you want to explore your network traffic data to learn more about previous attacks (and maybe even find an attack in progress). We’ll cover this second mode of DDoS analysis in a separate forthcoming post.
In both of these analysis cases, the NetFlow and BGP information in Kentik Detect’s big data datastore gives you many ways to analyze volumetric traffic. In the following examples we’ll look at how this data can help you separate an attack from an innocent spike.
Diagnosis by Destination IP
For this first example, let’s say that you’re suddenly being alerted — e.g. by server overload alarms from your network management software, or by alert notifications from Kentik Detect — that the IP address “192.168.10.22” (anonymized here to protect the innocent) is getting hammered by anomalous traffic, indicating that it is possibly under attack. You’ll want to rapidly drill down on key characteristics of the suspect traffic to determine if it’s actually an attack, and if so to gather information that will help you to mitigate the attack quickly.
As a starting point, we would go to the Data Explorer in the Kentik Detect portal. By clicking All in the device selection pane in the sidebar at left, and then the Apply button, we can see total network-wide traffic for all of the listed device(s). Since we know the IP address that we suspect is under attack, we’ll can then use the Filters pane in the left-hand sidebar to filter the total traffic for that address:
- Click Add Group. The Filters pane expands, showing the first filter group (Group 1).
- Define a filter for traffic whose destination IP is 192.168.10.22:
– From the drop-down metrics menu, we choose Dest IP/CIDR.
– We leave the operator setting at “=” (default).
– In the value field, we enter 192.168.10.22/32.
– The Filters pane then appears as shown at left.
- Click the Apply button at the upper right of the page. A graph is rendered (Fig. 2) and a corresponding table is populated below the graph (Fig. 3).
Once we’ve applied this destination IP address filter, the resulting graph shows clearly that there is a significant, anomalous spike of traffic for over 20 minutes and continuing.
Viewing by Source IP
Now we need to characterize this traffic. An abnormally large number of source IP addresses from atypical countries is indicative of a botnet, so we’ll look at traffic by source country to unique source IP addresses:
- In the Group by Metric drop-down above the graph, choose Source » Country.
- In the Units drop-down, choose Unique Src Ip, then click Apply.
We can now see that there is a huge number of unique source IP’s from China, U.S., Vietnam and other countries that are generating nearly a million packets per second in aggregate. Since this IP happens to be the U.S. and doesn’t typically get traffic from Asia, that’s clearly suspicious. China is the biggest contributor to this suspect traffic, so we’ll isolate China and look at packets per second per source IP:
- In the Group by Metric drop-down, choose Source » IP/CIDR.
- In the Units drop-down, choose Packets/s, then click Apply.
This view validates that we’re looking at a rather large number of source IP addresses that are sending equivalent packets per second, which is indicative of a botnet.
Additional attack characteristics
Now that we’re pretty sure we’re under DDoS attack, it would be helpful to know a bit more about the traffic we’re being hit with. So next we’ll look at the protocol and destination port # of the traffic. First, in the Group by Metric drop-down, choose Full » Proto. We can see that the traffic is primarily UDP:
Next we’ll set the Group by Metric drop-down to Destination » Port to look at the destination port number.
Assessing mitigation options
When we look at the destination port # of the traffic, we can see that there is remarkable consistency, in that the vast majority of the UDP packets are going to port 3074, which is the Xbox protocol. Now we can be pretty certain that this is a botnet attack. Since this address otherwise doesn’t receive traffic from Asia, we can mitigate the majority of the attack by dropping this traffic from China and some of the other Asian countries.
Remember, though, that our look at source countries listed the U.S. right after China, with over a hundred thousand packets per second. So to develop a complete mitigation plan we need to explore that issue next. Since this IP gets traffic from the U.S. under normal conditions, simply dropping traffic from the U.S. isn’t a good idea. But what we can do is to look at packets by source IP, but this time instead of /32 host source IPs, we’ll look at /24s.
We can see that there is a good number of /24s that are sending a fair amount of pps. So, one possible mitigation approach would be to rate-limit the pps from each of these /24. Another mitigation would be to redirect traffic from these /24s to an internal or cloud-based scrubbing center.
There’s a large-scale dark market that trades in DDoS, and that market continuously innovates and evolves to meet new demand. With the nature of DDoS attacks constantly changing, network-centric organizations need an agile approach approach to DDoS detection. By offering complete visibility into network traffic anomalies, including both alerting and full-resolution drill-down on raw flow records, Kentik Detect enables operators to respond rapidly and effectively to each DDoS threat. | <urn:uuid:4d6d62f9-f99c-4d8d-95c3-d38a76aa8af7> | CC-MAIN-2017-04 | https://www.kentik.com/ddos-separating-friend-from-foe/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00202-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91967 | 2,235 | 2.53125 | 3 |
By: Candid Wueest, Software Engineer at Symantec
Virtual machines (VM) have been used for many years and are popular among researchers because malware can be executed and analyzed on them without having to reinstall production systems every time. These tests can be done manually or on automated systems, with each method providing different benefits or drawbacks. Every artifact is recorded and a conclusion is made to block or allow the application. For similar reasons, sandbox technology and virtualization technology have become a common component in many network security solutions. The aim is to find previously unknown malware by executing the samples and analyzing their behavior.
However, there is an even bigger realm of virtual systems out there. Many customers have moved to virtual machines in their production environment and a lot of servers are running VM, performing their daily duty with real customer data. This leads to a common question when talking to customers: “Does malware detect that it is running on a virtual system and quit?”
It is true that some malware writers try to detect if their creation is running on a VM by using tricks such as:
- Checking certain registry keys that are unique to virtual systems
- Check if helper tools like VMware tools are installed
- Execute special assembler code and compare the results
- And more.
In some rare cases we have encountered malware that does not quit when executed on a VM, but instead sends false data. These “red herrings” might ping command-and-control servers that never existed or check for random registry keys. These tactics are meant to confuse the researcher or have the automation process declare the malware a benign application.
Malware authors want to compromise as many systems as possible, so if malware does not run on a VM, it limits the number of computers it could compromise. So, it should not come as a surprise that most samples today will run normally on a virtual machine and that the features can be added if the cybercriminal wishes to do so.
In order to answer the initial question with some real data, we selected 200,000 customer submissions since 2012 and ran them each on a real system and on a VMware system and compared the results. For the last two years, the percentage of malware that detects VMware hovered around 18 percent. On average, one in five malware samples will detect virtual machines and abort execution.
This means that malware still detects if it is running on a VM, but only in some minor cases. Symantec recommends that virtualized systems should be properly protected in order to keep them safe from threats. Symantec engineers are always on the lookout for new techniques that malware authors may employ to bypass automated analysis. With the combination of various proactive detection methods, like reputation based detection, we can ensure maximum security for our customers. | <urn:uuid:f280855f-0c15-453b-88fa-e164b7b39706> | CC-MAIN-2017-04 | http://www.itbestofbreed.com/sponsors/symantec/best-tech/does-malware-still-detect-virtual-machines | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00230-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941218 | 565 | 2.53125 | 3 |
Thanks to user-friendly distributions like Ubuntu, more people are running Linux than ever before. But many users stick to the GUI and point and click their way through tasks, missing out on one of the key advantages of Linux: the command line.
The command line interface is the most efficient and powerful way to interact with Linux; by typing commands, users can quickly move files, install new packages, and make complex tasks easy.
The Linux Command Line is a complete introduction to the command line. Author William Shotts, a Linux user for over 15 years, guides readers from their first keystrokes to writing full programs in Bash, the most popular Linux shell.
The book’s extensive coverage tackles file navigation, environment configuration, command chaining, pattern matching with regular expressions, and much more.
“The command line is like a window into Linux,” said No Starch Press founder William Pollock. “Strip away the GUI and you’re in control of your machine. The difference is kind of like driving a stick versus an automatic. The automatic is great for shepherding the family around town, but the stick puts you in control of that souped up sports car.”
Among the command line’s many features, readers will learn how to:
- Create and delete files, directories, and symlinks
- Administer their system, manage networking, and control processes
- Use standard input and output, redirection, and pipelines
- Edit files with Vi and write shell scripts to automate tasks
- Slice and dice text files with cut, paste, grep, patch, and sed. | <urn:uuid:64ec1d48-c23b-4b02-8fdd-70c3b19d886b> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2012/01/11/the-linux-command-line/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00138-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.910422 | 337 | 2.84375 | 3 |
Gnone G., Acquario di Genova, Area Porto Antico
Bellingeri M., Acquario di Genova, Area Porto Antico
Dhermain F., GECEM
Dupraz F., GECEM
and 17 more authors
Aquatic Conservation: Marine and Freshwater Ecosystems | Year: 2011
The Pelagos Sanctuary is the largest marine protected area of the Mediterranean Sea (87,500 km2), and is located in the north-west part of the basin. The presence of the bottlenose dolphin in this area is well documented but its distribution and abundance are not well known. The present study collected and analysed data from 10 different research groups operating in the Pelagos Sanctuary from 1994 to 2007. Photo-identification data were used to analyse the displacement behaviour of the dolphins and to estimate their abundance through mark-recapture modelling. Results show that the distribution of bottlenose dolphin is confined to the continental shelf within the 200 m isobath, with a preference for shallow waters of less than 100 m depth. Bottlenose dolphins seem to be more densely present in the eastern part of the sanctuary and along the north-west coast of Corsica. Bottlenose dolphins show a residential attitude with excursions usually within a distance of 80 km (50 km on average). A few dolphins exhibit more wide-ranging journeys, travelling up to 427 km between sub-areas. The displacement analysis identified two (sub)populations of bottlenose dolphins, one centred on the eastern part of the sanctuary and the other one around the west coast of Corsica. In 2006, the eastern (sub)population was estimated to comprise 510-552 individuals, while 368-429 individuals were estimated in the Corsican (sub)population. It was estimated that in total, 884-1023 bottlenose dolphins were living in the Pelagos Sanctuary MPA in the same year. The designation of a number of Special Areas of Conservation (SACs) under the Habitats Directive is discussed as a possible tool to protect the bottlenose dolphin in the Pelagos Sanctuary and in the whole of the Mediterranean Sea. Copyright © 2011 John Wiley & Sons, Ltd. Volume 21, Issue 4, June 2011. DOI: 10.1002/aqc.1191.
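The abundance figures above come from mark-recapture modelling of photo-identification histories. As a rough illustration of the underlying idea only (the study fitted more elaborate models to multi-year catalogues pooled from 10 research groups), the sketch below applies Chapman's bias-corrected Lincoln-Petersen estimator to two hypothetical photo-ID occasions; all counts are invented.

```python
# Illustrative Chapman (bias-corrected Lincoln-Petersen) abundance estimate
# from two photo-identification "capture" occasions. The counts used below are
# hypothetical; the study itself relied on more elaborate mark-recapture models.

def chapman_estimate(n1, n2, m2):
    """Abundance estimate from two sampling occasions.

    n1 -- individuals identified on the first occasion
    n2 -- individuals identified on the second occasion
    m2 -- individuals seen on both occasions (recaptures)
    """
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    # Approximate variance of the Chapman estimator
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)) / ((m2 + 1) ** 2 * (m2 + 2))
    return n_hat, var ** 0.5

if __name__ == "__main__":
    # Hypothetical photo-ID catalogues from two survey years
    n_hat, se = chapman_estimate(n1=180, n2=210, m2=75)
    print(f"Estimated abundance: {n_hat:.0f} (SE {se:.0f})")
```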
Druon J.-N.,European Commission - Joint Research Center Ispra |
Panigada S.,Tethys Research Institute |
David L.,EcoOcean Institute |
Gannier A.,British Petroleum |
And 7 more authors.
Marine Ecology Progress Series | Year: 2012
The development of synoptic tools is required to derive the potential habitat of fin whales Balaenoptera physalus on a large-scale basis in the Mediterranean Sea, as the species has a largely unknown distribution and is at high risk of ship strike. We propose a foraging habitat model for fin whales in the western Mediterranean Sea relying on species ecology for the choice of predictors. The selected environmental variables are direct predictors and resource predictors available at daily and basin scales. Feeding habitat was determined mainly from the simultaneous occurrence of large oceanic fronts of satellite-derived sea-surface chlorophyll content (chl a) and temperature (SST). A specific range of surface chl a content (0.11 to 0.39 mg m-3) and a minimum water depth (92 m) were also identified to be important regional criteria. Daily maps were calibrated and evaluated against independent sets of fin whale sightings (presence data only). Specific chl a fronts represented the main predictor of feeding environment; therefore, derived habitat is a potential, rather than effective, habitat, but is functionally linked to a proxy of its resource (chl a production of fronts). The model performs well, with 80% of the presence data <9.7 km from the predicted potential habitat. The computed monthly, seasonal and annual maps of potential feeding habitat from 2000 to 2010 correlate, for the most part, with current knowledge on fin whale ecology. Overall, fin whale potential habitat occurs frequently during summer in dynamic areas of the general circulation, and is substantially more spread over the basin in winter. However, the results also displayed high year-to-year variations (40 to 50%), which are essential to consider when assessing migration patterns and recommending protection and conservation measures. © Inter-Research 2012.
Carpinelli E.,Information and Research on Cetaceans |
Carpinelli E.,Tethys Research Institute |
Carpinelli E.,University of Pavia |
Gauffier P.,Information and Research on Cetaceans |
And 12 more authors.
Aquatic Conservation: Marine and Freshwater Ecosystems | Year: 2014
The Mediterranean sperm whale sub-population is considered 'Endangered' by both ACCOBAMS and the IUCN. Conservation policies require protected species populations to be monitored, but the distribution and movements of sperm whales across the Mediterranean Sea are still poorly understood. To provide insight into sperm whale movements, the photo-identification catalogue from the Strait of Gibraltar was compared with seven other collections: (a) the North Atlantic and Mediterranean Sperm Whale Catalogue (NAMSC), and with photo-identification catalogues from (b) the Alboran Sea, Spain, (c) the Balearic Islands, Spain, (d) the Corso-Provençal Basin, France, (e) the Western Ligurian Sea, Italy, (f) the Tyrrhenian Sea, Italy, and (g) the Hellenic Trench, Greece. Of 47 sperm whales identified in the Strait of Gibraltar between 1999 and 2011 a total of 15 animals (32%) were photographically recaptured in other sectors of the western Mediterranean Sea in different years. None of the Strait of Gibraltar sperm whales were resighted in Atlantic waters or in the eastern Mediterranean basin. These results indicate long-range movements of the species throughout the whole western Mediterranean Sea, with a maximum straight-line distance of about 1600km. The absence of any photographic recaptures between the Mediterranean Sea and the North Atlantic Ocean supports the genetic evidence of an isolated sub-population within the Mediterranean Sea. Long-term photo-identification efforts and data sharing between institutions should be further encouraged to provide basic information necessary for the implementation of effective sperm whale conservation measures in the whole basin. Copyright © 2014 John Wiley & Sons, Ltd.
Campana I.,University of Tuscia |
Crosti R.,European Commission - Joint Research Center Ispra |
Angeletti D.,University of Tuscia |
Carosso L.,University of Pisa |
And 8 more authors.
Marine Environmental Research | Year: 2015
Maritime traffic is one of many anthropogenic pressures threatening the marine environment. This study was specifically designed to investigate the relationship between vessels presence and cetacean sightings in the high sea areas of the Western Mediterranean Sea region. We recorded and compared the total number of vessels in the presence and absence of cetacean sightings using data gathered during the summer season (2009-2013) along six fixed transects repeatedly surveyed. In locations with cetacean sightings (N = 2667), nautical traffic was significantly lower, by 20%, compared to random locations where no sightings occurred (N = 1226): all cetacean species, except bottlenose dolphin, were generally observed in locations with lower vessel abundance. In different areas the species showed variable results likely influenced by a combination of biological and local environmental factors. The approach of this research helped create, for the first time, a wide vision of the different responses of animals towards a common pressure. © 2015 Elsevier Ltd.
Virtualization: The tech trick that makes hardware vanish
This feature first appeared in the Summer 2016 issue of Certification Magazine. Click here to get your own print or digital copy.
In the beginning, there was hardware. Big, chunky, heavy computing hardware that filled entire buildings built with specially reinforced floors to bear the immense weight. These early machines were essentially as smart as the programmers who “taught” them to perform complex calculations, through the use of coded instructions which evolved into the first software programs.
Software caught up to computing hardware, and then became the spur that drove the creation of faster, more powerful machines. One type of software in particular — simulation programs — needed serious hardware to run effectively. Airlines used simulation software to create virtual jumbo jets for pilots to train on; NASA astronauts-in-training spent hundreds of hours in virtual space shuttles powered by simulation software.
As simulation software became more sophisticated, and computer hardware grew more powerful, the inevitable question was asked: Could a physical computer be used to simulate one or more virtual computer systems?
While the earliest work on virtualization goes back several decades, it wasn’t until the late 1990s that it made a significant impact on the mainstream enterprise. Virtualization gave IT departments the ability to create virtual machines (VMs), a feature that helped generate additional value from the latest hardware.
What is a VM?
A virtual machine consists of an actual operating system which is installed on simulated hardware components. To the virtual machine and the OS powering it, all of the perceived hardware (including CPU, RAM, and hard disk) belongs to it. The computer hosting the virtual machine shares its memory, CPU, and other components with the virtualization software.
The VM setup offers a huge advantage, in that the VM’s hard disk actually exists as a single file on the host machine. Because of this, a VM can be saved, copied, moved, and restored just like a typical file can.
This save-and-restore feature of VMs was a revolutionary improvement for many different environments. Software testers could use VMs to run prototype programs, and if a VM blew up, it could quickly be restored to its original state, rather than requiring the re-imaging of an entire hard disk.
Schools would also greatly benefit from virtualization, as VMs enabled the quick and inexpensive setup of classrooms and computer labs.
The VM in your house
It wasn’t long before virtualization trickled down to the home computing market. Two popular software programs, VMware Fusion and Parallels Desktop, let Mac owners create virtual Windows PCs on their Apple systems.
Oracle VM VirtualBox, a powerful open source program that runs on Windows, OS X, Linux, and Solaris, can create VMs capable of running versions of Windows, Linux, Solaris, OpenBSD, and OS/2.
Impressed by the potential of virtualization, hardware vendors began to support it in their products. Industry leader Intel created a CPU feature called VT-x which provides tailored hardware assistance to virtualization software, making VMs work more efficiently and reducing resource overhead on the host machine. Chip maker AMD added a similar feature, AMD-V, to its processors.
Beyond simulated computers
As virtualization has evolved, it has grown beyond the mere simulation of workstations and servers. Virtualization has grown to include the following:
Virtual applications that can run as though they are installed on the client PC.
Storage virtualization, which provides an abstraction layer separating physical storage devices and how they can be presented and accessed.
Memory virtualization, which turns multiple networked RAM resources into a single shared memory pool.
Today, virtualization has grown to include nearly anything of which a virtual version can be made. Virtual servers can be used to host virtual networks, which are built using virtual routers and switches — this may seem like virtual overkill, but it is a viable example of how virtualization is being used to replace traditional networking hardware devices.
Network virtualization enables combining a number of physical networks into one logical network. Alternatively, you can take a single physical network and split it into a number of logical networks. You can even create a virtual network between two virtual machines which exist on the same physical server.
Put in the simplest terms, virtualization is excellent at taking one physical resource (a server, for example) and carving it into several virtual resources, which saves on the number of physical machines required. Alternatively, it also excels at taking several physical resources (like a large number of networked RAM chips) and making them appear as a single resource.
Virtualization has also revolutionized the economy of network computing resources for small businesses and individual entrepreneurs. Virtual private servers can be leased from service providers for as little as $20/month, with full support and maintenance included.
That said, virtualization is more complicated than just throwing a ton of CPU cores, RAM, and hard disks into a server, and spinning up an army of virtual machines. One of the current challenges of virtualization is application management on VMs. In a perfect world, applications would run as smoothly and consistently on VMs as they do on physical computers.
As it turns out, applications are often finicky. While the operating system loaded on a VM may be quite happily convinced it is running on its own physical computer, applications can often make arcane resource demands that cause VMs to pitch fits. This is particularly true for web applications, which have grown in complexity during the cloud computing revolution.
Software installed on VMs can cause problems like application incompatibilities with the VM’s virtual hardware management, or unbalanced application workloads causing bottlenecks in the host machine’s CPU, RAM, or network bandwidth.
Containing the problems
Much of the discussion concerning the future of virtualization (and virtual machines in particular) is on the use of containers. A container bundles a software application with all of the code, libraries, and tools necessary for the application to run. This bundle is capable of running in any OS environment, and doesn’t require any system emulation like a virtual machine.
In essence, a container makes an application platform-agnostic, taking away the requirement for a virtual version of the app’s native OS.
A recent example of the use of containers is Google’s plan to make its popular Chromebook computers (which run the very limited Chrome OS operating system) compatible with the full catalog of existing Android apps. This will be done by running the Android apps in a container that contains the full Android Framework. The container will let Android apps run on Chrome OS without any virtualization required.
Containers have a key advantage over virtual machines — an app running in a container can still communicate with the OS the container is running on. In Google’s case, an Android app in a container can still communicate with Chrome OS to get access to the onboard hardware. This means the Chromebook doesn’t have to provide any heavy virtual hardware emulation.
The virtual tomorrow
Perhaps the most interesting item being discussed about the future of virtualization, is the possible resurgence of the “big, chunky, heavy computing hardware” that we mentioned in the opening paragraph of this article. Yes, the mainframe computer is trying to come back in style!
Aren’t mainframes dead? Not to some experts who assert that using mainframes for virtual machine infrastructure provides greater security than using commodity servers. Given the potential costs associated with security breaches, these proponents believe that the higher costs associated with mainframes are worth it.
To be fair, mainframes have continued to evolve along with the rest of the computing technology industry. In particular, mainframe computing has become more powerful and less expensive, while also becoming easier to administrate and maintain.
It is unlikely, however, that mainframes represent the future of virtualization, except perhaps for organizations with very ambitious requirements. The relatively low cost of commodity hardware servers is a powerful incentive for businesses and public service groups to stay with traditional virtualization solutions. After all, virtualization’s primary advantage is the reduction of costs associated with purchasing physical servers.
Here is one safe prediction: The future of virtualization is directly related to the future of cloud computing. Virtualization and the cloud go hand in hand, and as technologists come up with bigger and better ways to implement the cloud in our daily lives, virtualization will be called upon to efficiently and affordably enable these new ideas. | <urn:uuid:225600fd-4086-4a2f-b7e7-a163f8925d31> | CC-MAIN-2017-04 | http://certmag.com/virtualization-tech-trick-makes-hardware-vanish/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00074-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930645 | 1,750 | 3.5625 | 4 |
MP codes convert packed decimals to unpacked decimal representation for output or decimal values to packed decimals for input.
The MP code is most often used as an output conversion. On input, the MP processor combines pairs of 8-bit ASCII digits into single 8-bit packed digits.
When displaying packed decimal data, you should always use an MP or MX code. Raw packed data is almost certain to contain control codes that will upset the operation of most terminals and printers.
Input conversion is valid. Generally, for selection processing you should specify MP codes in field 7 of the data definition record.
yields 0x D01234 | <urn:uuid:3d235b0e-36f7-406e-b3c4-c6aa8a57a643> | CC-MAIN-2017-04 | http://www.jbase.com/r5/knowledgebase/manuals/3.0/30manpages/man/jql2_CONVERSION.MP.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00074-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.710385 | 132 | 3.0625 | 3 |
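As a rough illustration of the underlying idea (generic binary-coded-decimal packing, not the exact jBASE MP semantics), the following Python sketch packs a string of ASCII digits two per byte and unpacks them again:

def pack_digits(s):
    # Pack a string of ASCII decimal digits into bytes, two digits per
    # byte (binary-coded decimal). Pad with a leading zero if the digit
    # count is odd.
    if len(s) % 2:
        s = "0" + s
    out = bytearray()
    for i in range(0, len(s), 2):
        hi, lo = int(s[i]), int(s[i + 1])
        out.append((hi << 4) | lo)
    return bytes(out)

def unpack_digits(b):
    # Expand packed BCD bytes back into an ASCII digit string.
    return "".join("%d%d" % (byte >> 4, byte & 0x0F) for byte in b)

print(pack_digits("1234").hex())              # '1234'
print(unpack_digits(bytes.fromhex("1234")))   # '1234'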
This year is noteworthy for more than the 10th anniversary of the Sept. 11, 2001, terrorist attacks. Forty years ago, President Nixon declared a war on drugs. What do the two dates have in common? Arguably the most important countermeasure against both terrorism and narcotics trafficking--financial intelligence.
Most criminal activity is about greed, i.e. money. To help criminal investigators follow the money, Congress in 1970 passed a series of laws and regulations collectively known as the Bank Secrecy Act. "Secrecy" is a misnomer, however. A better term is "financial transparency" or "financial intelligence."
The data that emerged quickly proved its worth, helping combat money laundering related to narcotics and many other crimes. In 1990, Congress' confidence in financial intelligence was further evidenced by the creation of the Financial Crimes Enforcement Network, or FinCEN, at the Treasury Department. FinCEN's mandate was to collect, analyze and disseminate financial intelligence to more effectively support law enforcement.
Long before the idea of breaking down bureaucratic walls became popular, FinCEN was sharing financial intelligence with non-Treasury agents at the federal, state, local and, increasingly, international levels. FinCEN also is a network, a link among law enforcement, financial and regulatory organizations. Subsequently, Treasury gave the agency responsibility for enforcing the Bank Secrecy Act.
Approximately 16 million to 18 million pieces of financial intelligence are filed with FinCEN each year, many of which include names, addresses, account numbers and other identifiers used to root out criminal activity. Currency transaction reports filed with FinCEN for cash deposits or withdrawals of $10,000 or more have approximately 150 data fields. Similar reports are filed for cross-border transport of cash or assets and large business transactions at places such as car dealerships, real estate agencies, jewelers and precious metals dealers.
Particularly during the war on drugs, bankers were considered the first line of defense against money launderers.
Bankers are supposed to know their customers. If they sense a customer's activity is inappropriate, they file a suspicious activity report. In addition to identifying information, the reports often contain narratives that detail suspicious activity. FinCEN receives around 1 million SARs annually, about half of which are generated from banks and half from money service businesses.
Most financial intelligence is used reactively. A criminal investigator or analyst assigned to solve a crime queries the financial databases seeking information that could prove useful, including information about a subject's assets. While financial intelligence by itself does not generally solve a crime, it often buttresses other investigative techniques, such as interviews, informants, surveillance and undercover operations.
But analysis of financial intelligence also can be proactive when crime has not yet occurred. In such cases, law enforcement officials examine data to identify anomalies, patterns and trends to intercept or prevent criminal activity.
Analytics add value to financial intelligence by combining it with other databases, including criminal records, immigration records, trade data, commercially available business information and social networks. Connections spotted between individuals, companies, bank accounts and other links, can uncover suspicious financial relationships and money flows, expanding the money trail.
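As a toy illustration of this kind of proactive pattern analysis (hypothetical data and a deliberately simplistic rule, not an actual FinCEN method), a script could flag accounts that repeatedly deposit cash just under the $10,000 reporting threshold:

from collections import defaultdict

# Hypothetical transaction records: (account_id, amount_usd)
transactions = [
    ("A-1001", 9_800), ("A-1001", 9_500), ("A-1001", 9_900),
    ("B-2002", 12_000), ("C-3003", 9_700),
]

# Count cash deposits that fall just under the $10,000 reporting threshold,
# a crude illustration of "structuring" detection.
near_threshold = defaultdict(int)
for account, amount in transactions:
    if 9_000 <= amount < 10_000:
        near_threshold[account] += 1

suspects = [acct for acct, count in near_threshold.items() if count >= 3]
print(suspects)   # ['A-1001']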
Financial reporting was originally intended to detect large amounts of dirty money related to the war on drugs, and it has helped uncover terror activity in the United States and overseas. It is difficult to detect the small amounts of money used to finance terrorism. A plot in the Arabian Peninsula to send explosive devices to the United States in a printer, mobile phones and other air freight cost about $4,500. When the scheme was intercepted by authorities, al Qaeda in the Arabian Peninsula proclaimed, "It is such a good bargain for us to spread fear among the enemy and keep him on his toes in exchange for a few months' work and a few thousand dollars."
In another recent attempt, the Pakistani Taliban used an underground money transfer system--a crack in U.S. countermeasures--to finance an attempted bombing in New York's Times Square. That operation cost approximately $12,000.
The Way Forward
Criminal and terrorist methodologies have evolved during the past 40 years. Law enforcement and intelligence communities need to do a better job of recognizing new threats and developing innovative countermeasures.
FinCEN and other government entities must employ state-of-the art analytics to effectively exploit financial intelligence. Advanced analytics help identify anomalies, outliers, typical versus atypical behavior and patterns, and apply early-warning detection, predictive modeling, data integration, entity resolution, sentiment analysis and social media analytics. And industries must develop robust programs to comply with Treasury's financial reporting mandates.
Adversaries, criminal methodologies and threats do not remain static. They shift and change, yet money will remain the essential ingredient in crime and terror. Successfully collecting, analyzing and disseminating financial intelligence will be their nemesis.
John A. Cassara spent more than 25 years as an intelligence officer and Treasury Department special agent and is author of several books on money laundering and terror finance. He also is an industry adviser to SAS Federal LLC. | <urn:uuid:e317463f-fc52-40c2-91a4-9a1947057bea> | CC-MAIN-2017-04 | http://www.nextgov.com/mobile/2011/11/analysis-financial-intelligence-is-a-critical-weapon-against-terrorist-networks/50149/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00074-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943864 | 1,032 | 2.625 | 3 |
NASA Servers At High Risk Of Cyber Attack
Auditors were able to pull encryption keys, passwords, and user account information over the Internet from systems that help control spacecraft and process critical data.
The network NASA uses to control the International Space Station and Hubble Telescope has unpatched vulnerabilities that could be exploited over the Internet, NASA's inspector general warned in a new report.
The risk of an attack is real, according to the report. In 2009 alone, hackers stole 22 GB of export-restricted data from NASA Jet Propulsion Laboratory systems and were able to make thousands of unauthorized connections to the network from as far afield as China, Saudi Arabia, and Estonia.
"Until NASA addresses these critical deficiencies and improves its IT security practices, the agency is vulnerable to computer incidents that could have a severe to catastrophic effect on agency assets, operations, and personnel," according to the report, titled "Inadequate Security Practices Expose Key NASA Network To Cyber Attack."
The inspector general pinned the problems on the lack of oversight. Despite agreeing to establish an IT security oversight effort for the network after a critical audit last May, that effort hadn't yet been launched as of February.
As part of its investigation, NASA's inspector general used open source network mapping and security auditing tool nmap to uncover the fact that 54 separate NASA servers -- all associated with efforts used to "control spacecraft or process critical data" -- were able to be accessed over the Internet.
Network vulnerability scanner NESSUS uncovered several servers at high risk of attack. For example, one server was susceptible to an FTP bounce attack, which can be used to, among other things, scan servers through a firewall for other vulnerabilities.
Several other servers, which were configured improperly, served up encryption keys, user account information, and passwords to investigating auditors, which could have opened the door to more NASA systems and personally identifiable data.
In response to the report, NASA CIO Linda Cureton agreed to add continuous monitoring to the network, mitigate risks to currently Internet-accessible servers, and put in place more comprehensive agency-wide cyber risk management strategies. However, neither the report nor Cureton's response indicate whether the vulnerabilities in question have yet been patched. | <urn:uuid:2677b897-e16e-4b98-bdfc-ccab46dde1c7> | CC-MAIN-2017-04 | http://www.darkreading.com/risk-management/nasa-servers-at-high-risk-of-cyber-attack/d/d-id/1096929 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00223-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933845 | 470 | 2.5625 | 3 |
Digital Forensics: Tech detectives follow the computer trail
From hipsters in lab coats to gun-toting crime solvers, television programming is full of depictions of computer forensics as a fast-moving, action-packed career where analysts routinely interface with law enforcement and often confront perpetrators with evidence of their crimes in dramatic courtroom showdowns. Is that really the case?
As with any television dramatization, this depiction of digital forensics certainly glamorizes the field, but there are grains of truth behind the flashy Hollywood embellishments. Computer forensic technicians do often uncover critical evidence that solves crimes, and they do testify in court about their findings. The reality, however, is that forensic analysis is painstaking work that requires great attention and tremendous expertise. Successful forensic analysts have many lucrative career opportunities in both the public and private sectors.
Introduction to Digital Forensics
Digital forensics as a field includes the retrieval and analysis of information stored, processed or transmitted by digital devices. While the field originally covered only traditional computers, the proliferation of device types over the years now requires forensic analysts to routinely extract information from smartphones, tablets, embedded computers and even automobiles. Any digital device with storage, processors and/or memory is fair game for the forensic analyst seeking to uncover hidden information.
One of the most common forms of forensic analysis is the retrieval of information from storage media, such as a hard drive, flash device, or camera memory card. Because they may need to testify about their findings in court, forensic analysts must take care when retrieving information from storage that they do not accidentally alter any of the information stored on that device. Any intentional or accidental modification of data taints the evidence and may result in that evidence becoming inadmissible in court.
Therefore, forensic technicians accessing the contents of storage can’t simply boot up a computer and browse its hard drive. Instead, they work with duplicate images of the actual evidence and also use special write blocking devices to prevent accidental data corruption. The evidence retrieved from storage devices may include documents, pictures or even temporary cache files that contain information about browser history and other use of a computer system.
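One routine step that supports this care is hashing: the examiner records a cryptographic hash of the image at acquisition time and re-computes it later to show the working copy has not changed. A minimal Python sketch (the file name and recorded hash are placeholders):

import hashlib

def hash_image(path, algorithm="sha256", chunk_size=1024 * 1024):
    # Hash a forensic image in fixed-size chunks so that very large
    # images do not have to fit in memory.
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the working copy against the hash recorded at acquisition time.
# acquired_hash = "..."   # value recorded when the image was first taken
# if hash_image("evidence_copy.dd") != acquired_hash:
#     print("WARNING: working copy does not match the acquired image")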
Forensic analysts also often turn to network-level analysis to determine the activity that took place on a network during a period of time. In some cases, forensic analysts make use of specialized tools that capture information about network traffic and store it for later analysis. In some cases, the device may capture the full contents of network transmissions, allowing analysts to completely reconstruct any activity that took place on the network. This approach, however, is quite costly because it consumes massive amounts of storage.
Therefore, many organizations choose to capture summary data, known as network flows. The information captured using the network flow approach includes the IP addresses of the source and destination system, the ports and applications used and the amount of data transferred. It’s quite similar to the type of data you would find on a telephone bill. An analyst can tell which systems talked to each other and how much information was exchanged, but they can’t reconstruct the contents of the communication.
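A short Python sketch of how such summary records might be aggregated to find the heaviest talkers (the flow records here are hypothetical):

from collections import defaultdict

# Hypothetical flow records: (src_ip, dst_ip, dst_port, bytes_transferred)
flows = [
    ("10.0.0.5", "203.0.113.9", 443, 1_200_000),
    ("10.0.0.5", "203.0.113.9", 443, 800_000),
    ("10.0.0.7", "198.51.100.2", 22, 5_000),
]

totals = defaultdict(int)
for src, dst, port, nbytes in flows:
    totals[(src, dst)] += nbytes

# Sort talkers by volume to spot unusually large transfers.
for (src, dst), nbytes in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{src} -> {dst}: {nbytes} bytes")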
The combination of device and network forensics can paint a detailed picture of a user’s network activity. Forensic analysts can then take this information and reconstruct the circumstances surrounding an event to support law enforcement or other types of investigations.
Careers in Digital Forensics
Career opportunities abound for qualified forensic analysts. Digital forensics is an extremely technical field and individuals with this expertise are coveted and in high demand. Government agencies are the most obvious potential employers, including both law enforcement agencies, the military and other units that conduct investigations. Opportunities exist at the federal, state and, in some cases, even local level. The use of digital evidence is so prevalent in our judicial system that even small cities now often have digital investigation units, or at least an individual qualified to perform forensic analysis of smartphones and other devices.
The private sector also provides opportunities for careers in digital forensics. In fact, many analysts who start their careers working for a government agency often gain experience and then move to the private sector in search of more lucrative career opportunities. Many private investigators employ forensic analysts on a contract or freelance basis and some firms specialize in digital investigations, hiring analysts around the world to conduct private digital forensic investigations in support of corporate clients. Analysts working in the private sector may find themselves working in support of legal defense teams, the internal investigations of private corporations or various other causes.
Digital Forensics Certifications
IT professionals seeking to shift careers and specialize in digital forensics often find certification programs an excellent way to get started. Earning a professional certification validates that job candidates successfully achieved a base level of knowledge, regardless of whether they participated in a college degree program, enrolled in instructor-led technical training courses or completed a program of self-study.
The International Society of Forensic Computer Examiners (ISFCE) offers the Certified Computer Examiner (CCE) program as a vendor-neutral certification for forensic analysts. Candidates for this credential must either complete an approved training program (either through a bootcamp or self-study) or have 18 months of verifiable professional experience in digital forensics. Candidates who meet these requirements must submit a written application for the program to the CCE board. Once approved, they then must pass both written and hands-on exams demonstrating their knowledge of digital forensics.
The Information Assurance Certification Review Board (IACRB) offers a similar vendor-neutral credential: the Certified Computer Forensic Examiner (CCFE) certification. Similar to the CCE program, the CCFE requires that candidates pass both a written exam and a hands-on practical evaluation. The written exam includes 50 multiple-choice questions administered during a two-hour testing period. Candidates who pass the written exam may then take the practical exam which requires performing a forensic examination of case files and writing a formal analysis report suitable for presentation in court.
Candidates seeking a more focused experience may choose to pursue certification on a particular forensic tool. The EnCase Certified Examiner (EnCE) program offers an approach that focuses on the EnCase forensic toolkit. This program requires passing a two-hour online multiple choice examination with a score of 80 percent or higher and then taking a practical examination using the EnCase tools. Candidates for this credential must either complete 64 hours of online or classroom training in digital forensics or demonstrate that they have twelve months of relevant work experience.
Finally, the SANS Institute offers three certification programs focused on digital forensics. The GIAC Certified Forensic Analyst, Certified Forensic Examiner, and Network Forensic Analyst are highly regarded certification programs that require strong technical depth to pass. SANS offers training courses focused on each one of these exams and candidates must pass written exams to earn any of the GIAC credentials.
Digital forensics is an exciting career field with many diverse employment opportunities. IT professionals seeking to expand their technical skills may wish to pursue training in this field through self-study or a formal training program. In addition to completing training, employers always appreciate candidates who take the additional step of demonstrating their practical knowledge by successfully completing one or more digital forensic certifications. | <urn:uuid:aadd11ba-5a27-4a35-8474-a9155dc4b162> | CC-MAIN-2017-04 | http://certmag.com/digital-forensics-tech-detectives-follow-computer-trail/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00157-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934149 | 1,452 | 3.078125 | 3 |
Multiprocessors, Multicomputers, and Clusters
In this and following lectures, we shall investigate a number of strategies for parallel computing, including a review of SIMD architectures but focusing on MIMD.
The two main classes of SIMD are vector processors and array processors.
We have already discussed each, but will mention them again just to be complete.
There are two main classes of MIMD architectures (Ref. 4, page 612):
a) Multiprocessors, which appear to have a shared memory and a shared address space.
b) Multicomputers, which comprise a large number of independent processors (each with its own memory) that communicate via a dedicated network.
Note that each of the SIMD and MIMD architectures calls for multiple independent processors. The main difference lies in the instruction stream.
SIMD architectures comprise a number of processors, each executing the same set of instructions (often in lock step).
MIMD architectures comprise a number of processors, each executing its own program.
It may be the case that a number are executing the same program; it is not required.
The Origin of Multicomputing
The basic multicomputing organization dates from the 19th century, if not before.
The difference is that, before 1945, all computers were human; a “computer” was defined to be “a person who computes”. An office dedicated to computing employed dozens of human computers who would cooperate on solution of one large problem.
They used mechanical desk calculators to solve numeric equations, and paper as a medium of communication between the computers. Kathleen McNulty, an Irish immigrant, was one of the more famous computers. As she later described it:
“You do a multiplication and when the answer appeared, you had to write it down and reenter it. … To hand compute one trajectory took 30 to 40 hours.”
This example, from the time of human computers, illustrates the important features of multicomputing.
1. The problem was large, but could be broken into a large number of independent pieces, each of which was rather small and manageable.
2. Each subproblem could be assigned to a single computer, with the expectation that communication between independent computers would not occupy a significant amount of the time devoted to solving the problem.
An Early Multicomputer
Here is a picture, probably from the 1940’s.
Note that each computer is quite busy working on a mechanical adding machine.
We may presume that computer–to–computer (interpersonal) communication was minimal and took place by passing data written on paper.
Note here that the computers appear all to be boys. Early experience indicated that grown men quickly became bored with the tasks and were not good computers.
Consider a computing system with N processors, possibly independent.
Let C(N) be the cost of the N–processor system, with C1 = C(1) being the cost of one processor. Normally, we assume that C(N) scales up approximately as fast as the number of processors.
Let P(N) be the performance of the N–processor system, measured in some conventional measure such as MFLOPS (Million Floating-Point Operations Per Second), MIPS (Million Instructions per Second), or some similar terms.
Let P1 = P(1) be the performance of a single processor system on the same measure.
The goal of any parallel processor system is linear speedup: P(N) ≈ N·P1.
Define the speedup factor as S(N) = [P(N)/P1]. The goal is S(N) ≈ N.
Recall the pessimistic estimates from the early days of the supercomputer era that for large values we have S(N) < [N / log2(N)], which is not an encouraging number.
It may be that it was these values that slowed the development of parallel processors.
Here is a variant of Amdahl’s Law that addresses the speedup due to N processors.
Let T(N) be the time to execute the program on N processors, and T1 = T(1) be the time to execute the program on 1 processor.
The speedup factor is obviously S(N) = T(1) / T(N).
We consider any program as having two distinct parts: the code that can be sped up by parallel processing, and the code that is essentially serialized.
Assume that the fraction of the code that can be sped up is denoted by variable X.
The time to execute the code on a single processor can be written as follows:
T(1) = X·T1 + (1 – X)·T1 = T1
Amdahl’s Law states that the time on an N–processor system will be
T(N) = (X·T1)/N + (1 – X)·T1 = [(X/N) + (1 – X)]·T1
The speedup is S(N) = T(1) / T(N) = 1 / [(X/N) + (1 – X)] = N / [X + N·(1 – X)].
It is easy to show that S(N) = N if and only if X = 1.0; there is no part of the code that is essentially sequential in nature and cannot be run in parallel.
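The formula above is easy to tabulate. A small Python helper, directly implementing the speedup expression just derived:

def speedup(n, x):
    # Amdahl's-law speedup for n processors when a fraction x of the
    # code can run in parallel (0 <= x <= 1).
    return 1.0 / (x / n + (1.0 - x))

for x in (1.0, 0.99, 0.95, 0.90):
    print(x, [round(speedup(n, x), 1) for n in (2, 8, 32, 128)])
# Even with x = 0.95, the speedup on 128 processors is only about 17.4.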
Some Results Due to Amdahl’s Law
Here are some results on speedup as a function of number of processors.
Note that even 5% purely sequential code really slows things down.
Overview of Parallel Processing
Early on, it was discovered that the design of a parallel processing system is far from trivial if one wants reasonable performance.
In order to achieve reasonable performance, one must address a number of issues:
1. How do the parallel processors share data?
2. How do the parallel processors coordinate their computing schedules?
3. How many processors should be used?
4. What is the minimum speedup S(N) acceptable for N processors? What are the factors that drive this decision?
In addition to the above questions, there is the important one of matching the problem to the processing architecture. Put another way, the questions above must be answered within the context of the problem to be solved.
For some hard real time problems (such as anti–aircraft defense), there might be a minimum speedup that needs to be achieved without regard to cost. Commercial problems rarely show this critical dependence on a specific performance level.
There are two main categories here, each having subcategories.
Multiprocessors are computing systems in which all programs share a single address space. This may be achieved by use of a single memory or a collection of memory modules that are closely connected and addressable as a single unit.
All programs running on such a system communicate via shared variables in memory.
There are two major variants of multiprocessors: UMA and NUMA.
In UMA (Uniform Memory Access) multiprocessors, often called SMP (Symmetric Multiprocessors), each processor takes the same amount of time to access any memory location. This property may be enforced by use of memory delays.
In NUMA (Non–Uniform Memory Access) multiprocessors, some memory accesses are faster than others. This model presents interesting challenges to the programmer in that race conditions become a real possibility, but offers increased performance.
Multicomputers are computing systems in which a collection of processors, each with its private memory, communicate via some dedicated network. Programs communicate by use of specific send message and receive message primitives.
There are 2 types of multicomputers: clusters and MPP (Massively Parallel Processors).
Coordination of Processes
Processes operating on parallel processors must be coordinated in order to ensure proper access to data and avoid the “lost update” problem associated with stale data.
In the stale data problem, a processor uses an old copy of a data item that has been updated. We must guarantee that each processor uses only “fresh data”.
One of the more common mechanisms for coordinating multiple processes in a single address space multiprocessor is called a lock. This feature is commonly used in databases accessed by multiple users, even those implemented on single processors.
Multicomputers, by contrast, must use explicit synchronization messages in order to coordinate the processes. One method is called “barrier synchronization”, in which there are logical spots, called “barriers” in each of the programs. When a process reaches a barrier, it stops processing and waits until it has received a message allowing it to proceed.
The common idea is that each processor must wait at the barrier until every other processor has reached it. At that point every processor signals that it has reached the barrier and received the signal from every other processor. Then they all continue.
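In a shared-memory setting the same waiting discipline is available directly; for example, Python's threading.Barrier blocks each thread until all participants have arrived (a minimal sketch, not the message-passing variant described above):

import threading

N_WORKERS = 4
barrier = threading.Barrier(N_WORKERS)

def worker(rank):
    # ... compute this worker's share of the current phase ...
    print(f"worker {rank} reached the barrier")
    barrier.wait()          # block until all N_WORKERS arrive
    # ... now safe to use results produced by the other workers ...
    print(f"worker {rank} continuing")

threads = [threading.Thread(target=worker, args=(r,)) for r in range(N_WORKERS)]
for t in threads: t.start()
for t in threads: t.join()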
Classification of Parallel Processors
Here is a figure from Tanenbaum (Ref 4, page 588). It shows a taxonomy of parallel computers, including SIMD, MISD, and MIMD.
Note Tanenbaum’s sense of humor. What he elsewhere calls a cluster, he here calls a COW for Collection of Workstations.
Levels of Parallelism
Here is another figure from Tanenbaum (Ref. 4, page 549). It shows a number of levels of parallelism including multiprocessors and multicomputers.
a) On–chip parallelism, b) An attached coprocessor (we shall discuss these later), c) A multiprocessor with shared memory, d) A multicomputer, each processor having its private memory and cache, and e) A grid, which is a loosely coupled multicomputer.
This is a model discussed by Harold Stone [Ref. 3, page 342]. It is formulated in terms of a time–sharing model of computation.
In time sharing, each process that is active on a computer is given a fixed time allocation, called a quantum, during which it can use the CPU. At the end of its quantum, it is timed out, and another process is given the CPU. The Operating System will move the place a reference to the timed–out process on a ready queue and restart it a bit later.
This model does not account for a process requesting I/O and not being able to use its entire quantum due to being blocked.
Let R be the length of the run–time quantum, measured in any convenient time unit.
Typical values are 10 to 100 milliseconds (0.01 to 0.10 seconds).
Let C be the amount of time during that run–time quantum that the process spends in communication with other processes.
The applicable ratio is (R/C), which is defined only for 0 < C ≤ R.
In course–grain parallelism, R/C is fairly high so that computation is efficient.
In fine–grain parallelism, R/C is low and little work gets done due to the excessive overhead of communication and coordination among processors.
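A trivial Python sketch of the classification (the cut-off of 10 is an arbitrary illustrative threshold):

def grain(run_quantum_ms, comm_ms):
    # Classify parallelism by the R/C ratio described above.
    if not (0 < comm_ms <= run_quantum_ms):
        raise ValueError("C must satisfy 0 < C <= R")
    ratio = run_quantum_ms / comm_ms
    return ratio, ("coarse-grain" if ratio >= 10 else "fine-grain")

print(grain(100, 2))   # (50.0, 'coarse-grain')
print(grain(100, 40))  # (2.5, 'fine-grain')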
UMA Symmetric Multiprocessor Architectures
This is based on Section 9.3 of the text (Multiprocessors Connected by a Single Bus), except that I like the name UMA (Uniform Memory Access) better.
Beginning in the later 1980’s, it was discovered that several microprocessors can be usefully placed on a bus. We note immediately that, though the single–bus SMP architecture is easier to program, bus contention places an upper limit on the number of processors that can be attached. Even with use of cache memory for each processor to cut bus traffic, this upper limit seems to be about 32 processors (Ref 4. p 599).
Here, from Tanenbaum (Ref. 4, p. 594), is a depiction of three classes of bus–based UMA architectures: a) No caching, and two variants of individual processors with b) Just cache memory, and c) Both cache memory and a private memory.
In each architecture, there is a global memory shared by all processors.
UMA: Other Connection Schemes
The bus structure is not the only way to connect a number of processors to a number of shared memories. Here are two others: the crossbar switch and the omega switch.
The Crossbar Switch
To attach N processors to M memories requires a crossbar switch with N·M switches. This is a non–blocking switch in that no processor will be denied access to a memory module due to the action of another processor. It is also quite expensive, as the number of switches essentially varies as the square of the number of connected components.
The Omega Switch
An Omega Switching Network routes packets of information between the processors and the memory units. It uses a number of 2–by–2 switches to achieve this goal.
Here is a three–stage switching network. One can trace a path between any one processor and any one memory module. Note that this may be a blocking network.
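To see why the omega network is attractive despite possible blocking, compare switch counts. For the textbook case of N processors and N memories with N a power of two, the omega network needs log2(N) stages of N/2 two-by-two switches, against N·N crosspoints for the crossbar:

import math

def crossbar_switches(n, m):
    # One crosspoint per processor/memory pair.
    return n * m

def omega_switches(n):
    # n must be a power of two: log2(n) stages, each with n/2 2-by-2 switches.
    return (n // 2) * int(math.log2(n))

for n in (8, 64, 1024):
    print(n, crossbar_switches(n, n), omega_switches(n))
# For 1024 processors: 1,048,576 crosspoints versus 5,120 omega switches.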
A big issue with the realization of the UMA multiprocessors was the development of protocols to maintain cache coherency. Briefly put, this insures that the value in any individual processor’s cache is the most current value and not stale data.
Ideally, each processor in a multiprocessor system will have its own “chunk of the problem”, referencing data that are not used by other processors. Cache coherency is not a problem in that case as the individual processors do not share data.
In real multiprocessor systems, there are data that must be shared between the individual processors. The amount of shared data is usually so large that a single bus would be overloaded were it not that each processor had its own cache.
When an individual processor accesses a block from the shared memory, that block is copied into that processor's cache. There is no problem as long as the processor only reads the cache. As soon as the processor writes to the cache, we have a cache coherency problem. Other processors accessing those data might get stale copies.
One logical way to avoid this process is to implement each individual processor’s cache using the write–through strategy. In this strategy, the shared memory is updated as soon as the cache is updated. Naturally, this increases bus traffic significantly.
The next lecture will focus on strategies to maintain cache coherence.
In this lecture, material from one or more of the following references has been used.
1. Computer Organization and Design, David A. Patterson & John L. Hennessy, Morgan Kaufmann (3rd Edition, Revised Printing), 2007 (the course textbook). ISBN 978-0-12-370606-5.
2. Computer Architecture: A Quantitative Approach, John L. Hennessy and David A. Patterson, Morgan Kaufmann, 1990. There is a later edition. ISBN 1-55860-069-8.
3. High–Performance Computer Architecture, Harold S. Stone, Addison–Wesley (Third Edition), 1993. ISBN 0-201-52688-3.
4. Structured Computer Organization, Andrew S. Tanenbaum, Pearson/Prentice–Hall (Fifth Edition), 2006. ISBN 0-13-148521-0.
A DNS hijack means that someone has intentionally modified the settings on your router without your consent. This type of attack allows an attacker to monitor, control, or redirect your Internet traffic. For example, if your router’s DNS has been hijacked, any time you visit an online banking site on any device connected to that router, you may end up being redirected to a fake version of the site. From there, the attacker can gain access to your banking session and use it to transfer money without your knowledge. Home routers can be hacked if they contain vulnerabilities, or if they are misconfigured. | <urn:uuid:11006980-56a1-4ce4-b105-749d77aa63cd> | CC-MAIN-2017-04 | https://campaigns.f-secure.com/router-checker/en_global/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00397-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91188 | 121 | 2.734375 | 3 |
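One rough self-check (illustrative only) is to compare the addresses a sensitive hostname resolves to against addresses you have previously verified. The hostname and expected addresses below are placeholders, and legitimate sites often rotate addresses behind CDNs, so a mismatch is only a prompt to investigate further, not proof of a hijack:

import socket

# Hypothetical addresses previously verified for your bank's login page.
EXPECTED = {"192.0.2.10", "192.0.2.11"}

def resolved_addresses(hostname):
    # Collect every address the local resolver returns for the hostname.
    return {info[4][0]
            for info in socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)}

addrs = resolved_addresses("bank.example.com")
if not addrs & EXPECTED:
    print("Resolved addresses", addrs, "do not match any expected address")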
3.6.9 What are some other signature schemes?
Merkle proposed a digital signature scheme based on both one-time signatures (see Question 7.7) and a hash function (see Question 2.1.6); this provides an infinite tree of one-time signatures [Mer90b].
One-time signatures normally require the publishing of large amounts of data to authenticate many messages, since each signature can only be used once. Merkle's scheme solves the problem by implementing the signatures via a tree-like scheme. Each message to be signed corresponds to a node in a tree, with each node consisting of the verification parameters used to sign a message and to authenticate the verification parameters of subsequent nodes. Although the number of messages that can be signed is limited by the size of the tree, the tree can be made arbitrarily large. Merkle's signature scheme is fairly efficient, since it requires only the application of hash functions.
The Rabin signature scheme [Rab79] is a variant of the RSA signature scheme (see Question 3.1.1). It has the advantage over the RSA system that finding the private key and forgery are both provably as hard as factoring. Verification is faster than signing, as with RSA signatures. In Rabin's scheme, the public key is an integer n where n = pq, and p and q are prime numbers which form the private key. The message to be signed must have a square root mod n; otherwise, it has to be modified slightly. Only about 1/4 of all possible messages have square roots mod n. The signature s of m is s = m1/2 mod n. Thus to verify the signature, the receiver computes m = s2 mod n.
The signature is easy to compute if the prime factors of n are known, but provably difficult otherwise. Anyone who can consistently forge the signature for a modulus n can also factor n. The provable security has the side effect that the prime factors can be recovered under a chosen message attack. This attack can be countered by padding a given message with random bits or by modifying the message randomly, at the loss of provable security. See [GMR86] for a discussion of a way to get around the paradox between provable security and resistance to chosen message attacks. | <urn:uuid:59b756a0-9104-4f1b-8e70-35b2ffbfddd4> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/other-signature-schemes.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00305-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.913985 | 477 | 2.828125 | 3 |
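A toy Python sketch of Rabin signing and verification with tiny primes (both congruent to 3 mod 4, which makes the modular square root easy; real use requires large secret primes and the message padding discussed above; pow(p, -1, q) needs Python 3.8 or later):

def sqrt_mod_prime(a, p):
    # Square root mod a prime p with p % 4 == 3 (the easy case).
    r = pow(a, (p + 1) // 4, p)
    if (r * r) % p != a % p:
        raise ValueError("a is not a quadratic residue mod p")
    return r

def rabin_sign(m, p, q):
    # Toy Rabin signature: one square root of m modulo n = p*q,
    # combined from the roots mod p and mod q via the CRT.
    rp, rq = sqrt_mod_prime(m, p), sqrt_mod_prime(m, q)
    inv_p = pow(p, -1, q)                 # p^-1 mod q
    return rp + p * (((rq - rp) * inv_p) % q)

def rabin_verify(s, m, n):
    return pow(s, 2, n) == m % n

p, q = 7, 11          # toy primes, both congruent to 3 mod 4
n = p * q             # public key
m = 15                # message; must already be a square mod n
s = rabin_sign(m, p, q)
print(s, rabin_verify(s, m, n))   # 64 True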
The weaknesses in SNMP, a core set of protocols widely used in network management systems, were discovered during rigorous tests by Finland's Oulu University secure programming group.
SNMP is used in systems from core network devices such as firewalls, switches, routers and wireless access points, to operating systems and networked printers.
The university found eight vulnerabilities that mean key Internet systems could be brought down by a denial of service attack. The weaknesses affect both the management systems and the agents that sit on networked devices and report operational states to the management console.
Ian Finlay, Internet security analyst at Cert, said, "We are viewing this as potentially more serious than the Code Red worm attacks. Code Red attacked [only] those using Microsoft's IIS server. SNMP is more widespread." The vulnerabilities probably exist in many of the Web's control systems so some outages may occur. | <urn:uuid:de682d9e-f307-4eeb-b3bb-9fde638cfacf> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/2240044314/Faults-in-core-Web-protocols | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00149-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955127 | 208 | 3.125 | 3 |
Oct. 31 — Scientists at The University of Manchester have used Polaris, the N8 Research Partnership High Performance Computing (N8 HPC) facility, to simulate for the first time ever how one of the world’s largest dinosaurs would have walked.
The analysis has revealed that Argentinosaurus, which weighed more than 80 tonnes, would have walked with a slow steady gait. The research, published in PLOS ONE, is important for understanding more about musculoskeletal systems since all vertebrates, from dinosaurs to humans to fish, all share the same basic muscles, bones and joints.
N8 HPC, which is funded by the Engineering and Physical Science Research Council (EPSRC), provides a high performance computing facility service for the Universities of Durham, Lancaster, Leeds, Liverpool, Manchester, Newcastle, Sheffield and York – an established collaboration known as the N8 Research Partnership – and industry partners. Capable of a peak performance of 110 trillion operations per second – the approximate equivalent to half a million iPads – it enables academic and private sector researchers to build more realistic models involving large amounts of data and to undertake more complex analyses in many research fields, including life sciences, energy, digital media and aerospace.
Dr. Bill Sellers, lead researcher on the project from the University of Manchester, Faculty of Life Sciences said, “If you want to work out how dinosaurs walked, the best approach is computer simulation. This is the only way of bringing together all the different strands of information we have on this dinosaur so that we can reconstruct how it once moved.
“To understand how muscles, bones and joints function, we can compare how they are used in different animals. Argentinosaurus is the biggest animal that ever walked on the surface of the earth and understanding how it did this will tell us a lot about the maximum performance of the vertebrate musculoskeletal system. We need to know more about this to understand how it functions in ourselves.
“Similarly, if we want to build better legged robots then we need to know more about the mechanics of legs in a whole range of animals, and nothing has bigger, more powerful legs than Argentinosaurus.”
The £3.25m N8 HPC facility is a Tier 2 SGI 5,000+ core high performance computing cluster, with 332 compute nodes. Each node has two of the latest generation Intel E5-2670 ‘Sandy Bridge’ processors and these nodes have a capacity of 320 GigaFLOPS. By using a Mellanox QDR InfiniBand interconnect to join all of Polaris’ nodes together, a peak performance of 110 TeraFLOPS is possible, making it one of the 250 most powerful computers in the world.
Dr. Lee Margetts, of The University of Manchester Research Computing Services, said, “Access to the N8 HPC system was a critical factor that enabled the team to finish the research in time for the PLOS ONE Special Collection on Sauropods. Timing is important as this collection is likely to be the “de facto” international reference for Sauropods for decades to come. The researchers report that they were very impressed by the system as their software ran twice as fast on N8HPC than HECToR, when using the same number of cores.”
The team of scientists also included Dr Rodolfo Coria from Carmen Funes Museum, Plaza Huincal, Argentina, who was behind the first physical reconstruction of this dinosaur that takes its name from the country where it was found. Dr Phil Manning, from Manchester who also contributed to the paper, said: “It is frustrating there was so little of the original dinosaur fossilized, making any reconstruction difficult. The digitization of such vast dinosaur skeletons using laser scanners brings Walking with Dinosaurs to life…this is science not just animation.”
The University Manchester team now plans to use the same method to recreate the steps of other dinosaurs including Triceratops, Brachiosaurus and T.Rex.
Source: University of Manchester | <urn:uuid:63855282-a697-4611-8b98-e96bf63b2466> | CC-MAIN-2017-04 | https://www.hpcwire.com/off-the-wire/n8-supercomputer-tracks-first-dinosaur-steps-90-million-years/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00149-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929324 | 837 | 3.3125 | 3 |
CurrentControlSet (Windows Registry)
A Control Set contains system configuration information for a Windows Operating System. Windows maintains at least two Control Sets, and knowing which one to focus on during your examination is critical. Identifying the CurrentControlSet is important for gathering information of evidentiary value such as the Computer Name, Time Zone information, Shutdown Times, and even which USB Devices were connected to the system.
Once you have exported the Registry Hive from the computer that you are examining, you can use MiTeC’s Windows Registry Analyzer or AccessData’s Registry Viewer to determine what the CurrentControlSet is. Use either of those programs to open the SYSTEM Hive.
Now navigate to the SYSTEM\Select key. Here you will see four entries: Current, Default, Failed and LastKnownGood. Current identifies the control set that was used the last time the system booted. Default usually matches Current. Failed denotes a control set that was unable to boot the system successfully, and LastKnownGood is the control set that last booted the system successfully.
Going back to your registry viewer of choice, find the Select key and highlight it to display its values.
In this example, Current has a value of 0x1 (1). This means that the CurrentControlSet is ControlSet001, so you must focus on ControlSet001 to gather the information that you are looking for during your examination. The Default value matches the Current value. The Failed entry shows a value of 0x0, which means there were no failed boot-ups. Finally, the LastKnownGood value shows 0x2 (2), meaning that ControlSet002 previously booted the system successfully.
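For examiners who prefer to script this step, the same values can be read straight from an exported SYSTEM hive. The sketch below is illustrative rather than a validated forensic tool; it assumes the third-party python-registry package (pip install python-registry) and a hive file named SYSTEM in the working directory.

from Registry import Registry  # third-party package: python-registry

def resolve_current_control_set(hive_path="SYSTEM"):
    """Report the Select key values from an exported SYSTEM hive."""
    reg = Registry.Registry(hive_path)
    select = reg.open("Select")  # the Select key sits directly under the hive root

    values = {v.name(): v.value() for v in select.values()}
    for name in ("Current", "Default", "Failed", "LastKnownGood"):
        print(f"{name:13} = {values.get(name)}")

    current = values.get("Current")
    if current:
        # A value of 1 maps to ControlSet001, 2 to ControlSet002, and so on.
        print(f"Focus your examination on ControlSet{current:03d}")

if __name__ == "__main__":
    resolve_current_control_set()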
Forensic Programs of Use
MiTeC Windows Registry Analyzer (by Michal Mutl)- http://www.mitec.cz/Data/XML/data_downloads.xml (found under Registry/INI Tools)
AccessData Registry Viewer- www.accessdata.com/support/downloads | <urn:uuid:df2fca24-83f1-42c9-b447-bdbc46d49b4f> | CC-MAIN-2017-04 | http://forensicartifacts.com/tag/select/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00359-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.867529 | 432 | 2.828125 | 3 |
Talk about a high-capacity wireless network. Researchers at DARPA this week will detail a program - with over $18.3 million in funding behind it - that looks to develop wireless communications links capable of supporting 100 Gb/sec capacity at ranges of 125 miles for air-to-air links and about 62 miles for air-to-ground links from an altitude of 60,000 ft.
From DARPA: The goal of the 100 Gb/s RF Backbone (100G) Program is to design, build and test an airborne-based communications link with fiber-optic-equivalent capacity and long reach that can propagate through clouds and provide high availability. Additionally, the system will provide an all-weather (cloud, rain, and fog) capability while maintaining tactically-relevant throughput and link ranges. Size, weight, and power will be limited by the host platforms, which will primarily be high-altitude, long-endurance aerial platforms.
"Backbone communications networks rely on high-capacity links to interconnect the major nodes of the network and to handle the aggregated voice, video, internet, and enterprise data flows. Our nation's telecommunication infrastructure relies heavily on single-mode optical fiber as the data backbone. However, our military can't rely on a fixed infrastructure for deployed operations and instead needs a means of projecting fiber-optic-equivalent capacity anywhere within the area of responsibility.
A logical approach is to use free-space optical (FSO) links to project the capacity. FSO links have been shown to have fiber-optic-equivalent capacity at long ranges and are expected to play a significant role in the implementation of the military's airborne-based data backbone. However, FSO links can't propagate through clouds, which are present 40% of the time in some regions and lead to unacceptable network availability," DARPA stated.
DARPA researchers said potential technology to increase wireless system capacity could include:
- Multiple independent channels, such as spatial multiplexing, polarization multiplexing, and/or orbital angular momentum, some of which require multiple antenna apertures.
- Increased system bandwidth, which usually requires moving to higher frequencies where atmospheric losses can reduce link performance.
- Spectrally-efficient modulation, such as quadrature amplitude modulation, which requires increasing the signal power in order to achieve the signal-to-noise ratio required to demodulate the signal (the rough capacity calculation below illustrates this bandwidth/SNR tradeoff).
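To see why those trade-offs bite, it helps to run the numbers against the Shannon limit, C = B x log2(1 + SNR). The bandwidth figures below are illustrative assumptions, not DARPA design parameters; they simply show that squeezing 100 Gb/sec into a modest RF bandwidth demands either an enormous signal-to-noise ratio, much wider bandwidth, or several parallel channels.

import math

def required_snr_db(capacity_bps, bandwidth_hz):
    """Return (spectral efficiency, SNR in dB) needed to hit a capacity in a given bandwidth."""
    spectral_eff = capacity_bps / bandwidth_hz   # bits per second per hertz
    snr_linear = 2 ** spectral_eff - 1           # from C = B * log2(1 + SNR)
    return spectral_eff, 10 * math.log10(snr_linear)

target_bps = 100e9  # 100 Gb/sec
for bandwidth_hz in (2e9, 5e9, 10e9, 20e9):  # hypothetical channel bandwidths
    eff, snr_db = required_snr_db(target_bps, bandwidth_hz)
    print(f"{bandwidth_hz / 1e9:4.0f} GHz -> {eff:4.1f} b/s/Hz, needs roughly {snr_db:5.1f} dB SNR")

Splitting the load across independent spatial or polarization channels lowers the spectral efficiency each channel must carry, which is exactly why the multiplexing approaches listed above are attractive.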
The 100G program comprises three phases. Phase 1 will focus on technology development, maturation, and characterization leading to their incorporation into a prototype system in subsequent phases. Phase 2 will develop prototype 100G transceivers and integrate them into aircraft and fixed ground sites. Phase 3 will focus on final prototype development and flight tests involving air-to-air and air-to-ground configurations.
What if your wireless communications just absolutely, positively have to be heard above the din of other users or in the face of massive interference?
The 100G program is the second major wireless move the research agency will make this month. DARPA is expected to detail its Spectrum Challenge - a $150,000 competition that aims to find developers who can create software-defined radio protocols that best use communication channels in the presence of other users and interfering signals.
High priority radios in the military and civilian sectors must be able to operate regardless of the ambient electromagnetic environment, to avoid disruption of communications and potential loss of life. Rapid response operations, such as disaster relief, further motivate the desire for multiple radio networks to effectively share the spectrum without requiring direct coordination or spectrum preplanning. Consequently, the need to provide robust communications in the presence of interfering signals is of great importance, DARPA stated.
DARPA says the Challenge is not focused on developing new radio hardware, but instead is targeted at finding strategies for guaranteeing successful communication in the presence of other radios that may have conflicting co-existence objectives. The Spectrum Challenge will entail head-to-head competitions between your radio protocol and an opponent's in a structured test bed environment. In addition to bragging rights for the winning teams, one team could win as much as $150,000, the agency stated.
"The Spectrum Challenge is focused on developing new techniques for assured communications in dynamic environments - a necessity for military and first responder missions. We have created a head-to-head competition to see who can transmit a set of data from one radio to another the most effectively and efficiently while being bombarded by interference and competing signals," said Dr. Yiftach Eisenberg, DARPA program manager in a statement. "To win this competition teams will need to develop new algorithms for software-defined radios at universities, small businesses and even on their home computers."
Check out these other hot stories: | <urn:uuid:5e18761d-e0c1-44bc-8d0c-e2b8b5d4d9f7> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2223786/wi-fi/darpa-in-search-of-a-100-gb-sec-wireless-technology-that-can-penetrate-clutter.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00177-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933948 | 972 | 2.96875 | 3 |
A guide to troubleshooting theory from a CompTIA A+ perspective
In addition to underpinning the Software Troubleshooting domain on the upcoming 220-902 certification exam (a domain that constitutes 24 percent of the test), troubleshooting theory is the explicit subject of topic 5.5, which asks that you explain it. This requires you to know the six steps of the theory as given by CompTIA and always follow them in order (taking into consideration corporate policies, procedures and impacts):
1. Identify the problem: Question the user, identify any user changes to the computer, and perform backups before making changes.
2. Establish a theory of probable cause (question the obvious): If necessary, conduct external or internal research based on symptoms.
3. Test the theory to determine cause: Once the theory is confirmed, determine the next steps to resolve the problem; if the theory is not confirmed, establish a new theory or escalate.
4. Establish a plan: Make a plan of action to resolve the problem and implement the solution.
5. Determine system status: Verify full system functionality and, if applicable, implement preventive measures.
6. Make a record: Document findings, actions and outcomes.
Since this is a key topic that bleeds over from one domain to another (and truly, from one test to another), it is important to walk through some of the main subject matter you should know as you study.
Identifying the Problem
While this may seem obvious, it can’t be overlooked: If you can’t define the problem, you can’t begin to solve it. Sometimes problems are relatively straightforward, but other times they’re just a symptom of a bigger issue. For example, if a user isn’t able to connect to the Internet from their computer, it could indeed be an issue with their system. But if other users are having similar problems, then the first user’s difficulties might just be one example of the real problem. Problems in computer systems generally occur in one (or more) of four areas, each of which is in turn made up of many smaller pieces:
● A collection of hardware pieces integrated into a working system. As you know, the hardware can be quite complex, what with motherboards, hard drives, video cards, and so on.
● An operating system, which in turn is dependent on the hardware.
● An application or software program that is supposed to do something. Programs such as Microsoft Word and Excel are bundled with a great many features.
● A computer user, ready to take the computer system to its limits (and occasionally beyond). A technician can often forget that the user is a very complex and important part of the puzzle.
Many times you can define the problem by asking questions of the user. One of the keys to working with your users or customers is to ensure, much like a medical professional, that you have a good bedside manner. Most people are not as technically competent as you, and when something goes wrong they become confused or even fearful that they’ll take the blame. Assure them that you’re just trying to fix the problem but that they can probably help because they know what went on before you got there. It’s important to establish trust with your customer. Believe what they are saying, but also believe that they might not tell you everything right away. It’s not that they’re necessarily lying; they just might not know what’s important to tell.
Help clarify things by having the customer show you what the problem is. One of the best methods can be to ask them to show you what “not working” looks like. That way, you see the conditions and methods under which the problem occurs. The problem may be a simple matter of an improper method. The user may be performing an operation incorrectly or performing the operation in the wrong order. During this step, you have the opportunity to observe how the problem occurs, so pay attention.
Here are a few questions to ask the user to aid in determining what the problem is:
1. Can you show me the problem?
2. How often does this happen?
3. Has any new hardware or software been installed recently?
4. Has the computer recently been moved?
5. Has someone who normally doesn’t use the computer recently used it?
6. Have any other changes been made to the computer recently?
Be careful of how you ask questions so you don’t appear accusatory. You can’t assume that the user did something to mess up the computer. Then again, you also can’t assume that they don’t know anything about why it’s not working. The key is to find out everything you can that might be related to the problem. Document exactly what works and what doesn’t and, if you can, why.
Establishing a Theory
Once you have determined what the problem is, you need to develop a theory as to why it is happening. No video? It could be something to do with the monitor or the video card. Can’t get to your favorite website? Is it that site? Is it your network card, the cable, your IP address, DNS server settings, or something else? Once you have defined the problem, establishing a theory about the cause of the problem—what is wrong—helps you develop possible solutions to the problem.
Theories can either state what can be true or what can’t be true. However you choose to approach theory generation, it’s usually helpful to take a mental inventory to see what is possible and what’s not. Start eliminating possibilities and eventually the only thing that can be wrong is what’s left. This type of approach works well when the problem is ambiguous; start broad and narrow your scope. For example, if the hard drive won’t read, the culprit is likely one of three things: the drive itself, the cable it’s on, or the connector on the motherboard. Try plugging the drive into the other connector or using a different cable. Narrow down the options.
Once you have isolated the problem, slowly rebuild the system to see if the problem comes back (or goes away). This helps you identify what is really causing the problem and determine if there are other factors affecting the situation. For example, there are times when memory problems have been fixed by switching the slot that the memory chips are in.
Sometimes you can figure out what’s not working, but you have no idea why or what you can do to fix it. That’s okay. In situations like those, it can be best to turn to documentation. The service manuals are your instructions for troubleshooting and service information. Virtually every computer and peripheral made today has service documentation on the company’s website, or on a DVD, or even in a paper manual. Don’t be afraid to use them!
If you’re lucky enough to have experienced, knowledgeable, and friendly co-workers, be open to asking for help if you get stuck on a problem.
Test the Theory
You’ve eliminated possibilities and developed a theory as to what the problem is. Your theory may be pretty specific, such as “the power cable is fried,” or it may be a bit more general, like “the hard drive isn’t working” or “there’s a connectivity problem.” No matter your theory, now is the time to start testing solutions. Again, if you’re not sure where to begin to find a solution, the manufacturer’s website is a good place to start!
This step is the one that even experienced technicians overlook. Often, computer problems are the result of something simple. Technicians overlook these problems because they’re so simple that the technicians assume they couldn’t be the problem. Here are some examples of simple problems:
Is it plugged in? And plugged in at both ends? Cables must be plugged in at both ends to function correctly. Cables can easily be tripped over and inadvertently pulled from their sockets.
Is it turned on? This one seems the most obvious, but everyone has fallen victim to it at one point or another. Computers and their peripherals must be turned on to function. Most have power switches with LEDs that glow when the power is turned on.
Is the system ready? Computers must be ready before they can be used. Ready means the system is ready to accept commands from the user. An indication that a computer is ready is when the operating system screens come up and the computer presents you with a menu or a command prompt. If that computer uses a graphical interface, the computer is ready when the mouse pointer appears. Printers are ready when the Online or Ready light on the front panel is lit.
Do the chips and cables need to be reseated? You can solve some of the strangest problems (random hang-ups or errors) by opening the case and pressing down on each socketed chip (known as reseating). This remedies the chip-creep problem, which happens when computers heat up and cool down repeatedly as a result of being turned on and off, causing some components to begin to move out of their sockets. In addition, you should reseat any cables to make sure they’re making good contact.
Is it user error? User error is common but preventable. If a user can’t perform some very common computer task, such as printing or saving a file, the problem is likely due to user error. As soon as you hear of a problem like this, you should begin asking questions to determine if the solution is as simple as teaching the user the correct procedure. A good question to ask is, “Were you ever able to perform that task?” If the answer is no, it means they are probably doing the procedure wrong. If they answer yes, you must ask additional questions to get at the root of the problem.
If you suspect user error, tread carefully in regard to your line of questioning to avoid making the user feel defensive. User errors provide an opportunity to teach the users the right way to do things. Again, what you say matters. Offer a “different” or “another” way of doing things instead of the “right” way.
It’s amazing how often a simple computer restart can solve a problem. Restarting the computer clears the memory and starts the computer with a clean slate. If restarting doesn’t work, try powering down the system completely and then powering it up again (rebooting). More often than not, that will solve the problem.
Establish a Plan of Action
If your fix worked, then you’re brilliant! If not, then you need to reevaluate and look for the next option. After testing solutions, your plan of action may take one of three paths:
1. If the first fix didn’t work, try something else.
2. If needed, implement the fix on other computers.
3. If everything is working, document the solution.
When evaluating your results and looking for that golden “next step,” don’t forget other resources you might have available. Use the Internet to look at the manufacturer’s website. Read the manual. Talk to your friend who knows everything about obscure hardware (or arcane versions of Windows). When fixing problems, two heads can be better than one.
If the problem was isolated to one computer, this step doesn’t apply. But some problems you deal with may affect an entire group of computers. For example, perhaps some configuration information was entered incorrectly into the DHCP server, giving everyone the wrong DNS server address. The DHCP server is now fixed, but all of the clients need to renew their IP addresses.
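As a concrete illustration of that DHCP scenario, the hypothetical snippet below simply wraps Windows’ built-in ipconfig utility to release and renew the lease on the machine where it runs; distributing it to many clients (through a login script, a management tool, or just waiting for leases to expire) is left to whatever mechanism your environment already uses.

import subprocess

def renew_dhcp_lease():
    """Release and renew this Windows client's DHCP lease so it picks up the corrected options."""
    subprocess.run(["ipconfig", "/release"], check=True)
    subprocess.run(["ipconfig", "/renew"], check=True)
    # Display the refreshed configuration so the technician can confirm the DNS fix took effect.
    subprocess.run(["ipconfig", "/all"], check=False)

if __name__ == "__main__":
    renew_dhcp_lease()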
Once everything is working, you’ll need to document what happened and how you fixed it. If the problem looks to be long and complex, take copious notes as you’re trying to fix it. They will help you remember what you’ve already tried and what didn’t work. We’ll discuss documentation in more depth in the “Document the Work” step just a bit later.
After fixing the system, or all of the systems, affected by the problem, go back and verify full functionality. For example, if the users couldn’t get to any network resources, check to make sure they can get to the Internet as well as internal resources.
Some solutions may actually cause another problem on the system. For example, if you update software or drivers, you may inadvertently cause another application to have problems. There’s obviously no way you can or should test all applications on a computer after applying a fix, but know that these types of problems can occur. Just make sure that what you’ve fixed works and that there aren’t any obvious signs of something else not working all of a sudden.
Another important thing to do at this time is to implement preventive measures, if possible. If it was a user error, ensure that the user understands ways to accomplish the task that don’t cause the error. If a cable melted because it was too close to someone’s space heater under their desk, resolve the issue. If the computer overheated because there was an inch of dust clogging the fan…you get the idea.
Document the Work
Lots of people can fix problems. But can you remember what you did when you fixed a problem a month ago? Maybe. Can one of your co-workers remember something you did to fix the same problem on that machine a month ago? Unlikely. Always document your work so that you or someone else can learn from the experience. Good documentation of past troubleshooting can save hours of stress in the future. Documentation can take a few different forms, but the two most common are personal and system-based.
It is highly recommended that technicians always carry a personal notebook and take notes. The type of notebook doesn’t matter—use whatever works best for you. The notebook can be a lifesaver, especially when you’re new to a job. Write down the problem, what you tried, and the solution. The next time you run across the same or a similar problem, you’ll have a better idea of what to try. Eventually you’ll find yourself less and less reliant on it, but it’s incredibly handy to have!
System-based documentation is useful to both you and your co-workers. Many facilities have server logs of one type or another, conveniently located close to the machine. If someone makes a fix or a change, it gets noted in the log. If there’s a problem, it’s noted in the log. It’s critical to have a log for a few reasons. One, if you weren’t there the first time it was fixed, you might not have an idea of what to try and it could take you a long time using trial and error. Two, if you begin to see a repeated pattern of problems, you can make a permanent intervention before the system completely dies.
There are many different forms of system-based documentation. Again, the type of log doesn’t matter as long as you use it! Often it’s a notebook or a binder next to the system or on a nearby shelf. If you have a rack, you can mount something on the side to hold a binder or notebook. For client computers, one way is to tape an index card to the top or side of the power supply (don’t cover any vents!) so if a tech has to go inside the case, they can see if anyone else has been in there to fix something too. In larger environments, there is often an electronic knowledge base or incident repository available for use; it is just as important to contribute to these systems as it is to use them to help diagnose problems. | <urn:uuid:ba584980-235a-4b4c-934a-008c24cb4008> | CC-MAIN-2017-04 | http://certmag.com/guide-troubleshooting-theory-comptia-perspective/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00571-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942766 | 3,318 | 3.171875 | 3 |
Software Development Failures Plague North American Enterprises: Study
Service Virtualization addresses these challenges by enabling teams to develop and test an application using a virtual service environment that has been configured to imitate a real production environment. This provides the ability to change the behavior and data of these virtual services easily in order to validate different scenarios.

"Service virtualization (or SV) is the automated practice of capturing and simulating any system or service IT teams depend on to deliver software," Mittal said in his post. "This is not like conventional hardware virtualization that copies some servers to free up hard drives and rack space in your own data center. We are talking about simulating every constraint in the software environment—the very distributed and over-utilized stuff that software teams need to interact throughout development, such as complex customer-response data, core mainframes, integration middleware and performance data that is either costly or unavailable to you."

"This research follows a European study conducted in July 2012 in which 32 percent of respondents revealed that they were expected to deliver and manage four to seven releases a year, compared to 53 percent in North America," Ian Parkes, managing director of Coleman Parkes Research, said in a statement. "Even more surprising, 75 percent of respondents across North America and Europe reported they were seeking additional budget to pay for more application development man-hours, when we know that additional labor is not in fact the ideal solution."

"In short—it's time for enterprise IT to industrialize simulation and modeling, or suffer more delays and failures," Mittal said. "While the concept is new for software, it shouldn't be. Other industries such as aviation and automotive manufacturing have done it for years, using things like wind tunnels, flight simulators and computer models to avoid real-world constraints and do their engineering and testing much earlier, far before real products are assembled. IT needs to apply these same principles of simulation and modeling."
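As a toy illustration of the idea - and emphatically not any vendor's actual tooling - the sketch below stands up a fake back-end dependency using only Python's standard library. It returns canned customer-response data and injects artificial latency, so an application under test can be exercised against behavior that the real system makes costly or impossible to reproduce.

import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses standing in for a constrained back-end system (illustrative data only).
CANNED_RESPONSES = {
    "/customers/42": {"id": 42, "status": "gold", "balance": 1234.56},
    "/customers/99": {"id": 99, "status": "delinquent", "balance": -87.10},
}
SIMULATED_LATENCY_SECONDS = 0.25  # model the slow, over-utilized dependency

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(SIMULATED_LATENCY_SECONDS)          # inject the performance constraint
        body = CANNED_RESPONSES.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "not found"}).encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), VirtualService).serve_forever()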
According to the study, "These survey results suggest that development managers often bring new applications or services from testing environments into production without complete insight into how their integrated applications might fail. For engineers, understanding failure modes is a critical part of the job, yet according to this study, 69 percent did not have this insight on a consistent basis. This is an alarming prospect for any board giving the green light for new software projects, especially those that impact the customer. It is also concerning that only nine percent have comprehensive insight into how complex, integrated applications could break in production." | <urn:uuid:38dbb8ac-14e3-40c6-9455-146f1d1a26e6> | CC-MAIN-2017-04 | http://www.eweek.com/developer/software-development-failures-plague-north-american-enterprises-study-2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00387-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958477 | 503 | 2.578125 | 3 |
With the ongoing movement toward statewide education standards, the search for the best tools to measure student learning at the state level has begun. Idaho, for example, has chosen "level testing," a system that gauges the skills each child has mastered compared with benchmarks set for his or her grade.
As Idaho pilots the new tests this year, few of the second through ninth graders taking them will need to sharpen their number-two pencils - most schools will download the exams over the Internet and serve them up to students on computers.
Although the Northwest Evaluation Association (NWEA) has supplied computerized tests to about 850 school districts in 37 states, Idaho is the first to implement its tests statewide, said Michael Patterson, director of information technology for the not-for-profit organization in Portland, Ore.
Computers simplify a key element of level testing, tailoring questions individually to each student's abilities. "As the student answers questions correctly, the test gets more difficult. As he answers incorrectly, it gets less difficult," Patterson explained.
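A minimal sketch of that adaptive loop might look like the following. It is purely illustrative - not NWEA's Test Taker code - but it captures the behavior described here: difficulty steps up after a correct answer, steps down after an incorrect one, and questions the student has already seen are never repeated.

import random

def run_adaptive_test(item_bank, ask, start_level=5, max_items=20):
    """item_bank maps a difficulty level to a list of questions; ask(question) returns True when answered correctly."""
    level = start_level
    seen = set()
    results = []

    for _ in range(max_items):
        # Choose an unseen question at the current difficulty level; stop if that level is exhausted.
        candidates = [q for q in item_bank.get(level, []) if q not in seen]
        if not candidates:
            break
        question = random.choice(candidates)
        seen.add(question)

        correct = ask(question)
        results.append((question, level, correct))

        # A correct answer raises the difficulty, an incorrect answer lowers it.
        if correct:
            level = min(level + 1, max(item_bank))
        else:
            level = max(level - 1, min(item_bank))

    return results

A real system would also convert the response pattern into a scale score, but even this stripped-down loop shows why each student can resume a later session at the level where the previous one ended.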
NWEA offers paper-and-pencil versions of its "adaptive" tests, but administering them is a somewhat ungainly process. Students first answer a set of qualifying questions; each student then receives a set of questions geared to a certain skill level, based on how many qualifying questions he or she answered correctly.
"When we did this initially, we printed 80,000 tests," recalled Linda Clark, director of instruction at Joint School District 2 in Meridian, Idaho. Meridian has used NWEA's adaptive tests district-wide for reading, math, language and science in grades three through eight for the past four years. It switched to the computerized versions in all areas but science in the fall of 2000.
Convenience and Accuracy
"You can do level tests with paper and pencil, but obviously the computerized versions are more convenient and, I think, over the long haul are more accurate," said Karen McGee, a member of the Idaho State Board of Education and its interim director of assessment and accountability.
Forty-four school districts in Idaho already have worked with NWEA, Patterson said. Starting this year, about 136,000 students in all of Idaho's districts will take state tests in reading, math and language arts annually. All but a handful of the schools - those that lack the necessary equipment - will administer them by computer.
"All students will be tested in the spring and fall to show growth," McGee said. "Teachers will have the option to give it again in mid-year, if they want to see how the children are doing."
To administer the Idaho Standards Achievement Tests, the school first uploads its class roster to the NWEA. The association then transmits the test and its Test Taker software over the Internet to a server at the school or at a district office. "The school district then needs to install the Test Taker application to each of the workstations that they're going to use to do the test," said Dan Hawkins, networking and telecommunications specialist at the Idaho State Department of Education. Schools can use either Windows-based or Macintosh machines.
NWEA employs local servers rather than hosting the tests itself because "most school districts just don't have the bandwidth to get everybody on the Web and have a test," Patterson said. Relying on a local area network rather than the Internet also ensures that students don't experience lags as they take their exams.
Pick Up Where They Left Off
When a group arrives to take a test, the proctor logs onto each workstation, selects the appropriate test and selects the name of the student who will work there, Hawkins said. The student's response to each question is recorded on the local server. To make sure the test is customized for each student's abilities, the system remembers the questions he or she answered previously. When students sit down to take the test again, "it picks up at the same level where they stopped, so you can continue your measurement. But it also takes out any questions that the student has already seen," he said.
Along with the adaptive questions, tests for grades three through eight also will include some standard questions for every student in a given grade, which Idaho is adding this spring in order to comply with the federal No Child Left Behind Act.
After finishing a test, the student's preliminary score appears immediately on the screen. That night, the school's server uploads all students' answers and NWEA checks them for any irregularities that could render a test invalid and compiles reports. The following day, each teacher can log onto NWEA's Web site with a password to get his or her class's scores.
NWEA also offers school-wide and district-wide reports, including historical data, within 72 hours. "Beginning this fall, they will also have the ability to disaggregate by ethnic groups, by special ed or whatever other parameters they've given us," Patterson said.
A few schools with slower Internet connections receive the tests on CD-ROM, Patterson said. Transmitting students' answers back to NWEA is no problem, however, since they involve a great deal less data than the tests themselves. Schools that give the paper-and-pencil tests send the completed answer sheets to NWEA, where they're scanned to capture the data.
To prepare for the Idaho Standards Achievement Tests, each school needed a computer lab. "There were quite a few schools that didn't have labs," Hawkins said, adding that nearly all have created them by now. Many schools redeployed under-utilized classroom computers to gather the required number of machines in one room, and the workstations don't need to be expensive to handle the tests. "What I've seen done is low-end Pentiums with at least 32 MB of RAM," he said.
The workstations in the lab must be networked to a dedicated server. "For most school districts that have good connectivity among the buildings, a server per district will work," Hawkins said. Where connectivity isn't adequate, districts need one server per building.
Using adaptive tests to measure achievement allows teachers to tailor their instruction to different students' needs. The computerized system, with its quick turnaround, helps them fine-tune their lesson plans without delay. In Meridian, for example, the teachers have the results within 24 hours after giving tests at the start of the fall term, Clark said. "It allows them then to look at the year's curriculum and see what skills students have mastered, and what skills and knowledge students still need to work on. So they're targeting their instructional time to things they need to work on in order to grow."
Meridian shifted from its district-wide testing program to the state program this year; the tests are similar in content, Clark said.
Teachers will likely put students' scores to use as soon as possible, but Idaho considers the testing program a pilot until 2005, McGee said. During this time, education officials will make any necessary adjustments to the questions. "But also, when we call it a pilot phase, we don't want to tie it to any teacher accountability, because we want teachers to learn how to really use the data effectively to affect teaching." Training sessions starting in October will show teachers how to interpret and respond to the scores, she said.
If teachers throughout Idaho are similar to their colleagues in the Meridian school district, they will like what they learn in those sessions. "We've been delighted over the four years with the kind of data we've received from the testing and how it has empowered our instruction and changed how we teach," Clark said. | <urn:uuid:93b90e87-e6be-4884-be5d-2ef93db3649d> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Putting-Computers-to-the-Test.html?page=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00231-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.969598 | 1,571 | 3.1875 | 3 |
Citing the United Nations’ Universal Declaration of Human Rights, Access Now, an organization that advocates for open digital communication, called for the prohibition of government hacking in its report “A Human Rights Response to Government Hacking,” released on Tuesday.
“Hacking is one of the most invasive activities governments can engage in, yet it is occurring in the dark, without public debate. It is critical for governments, law enforcement, technologists, and civil society to have an honest conversation about the impact of government hacking in the digital age,” said Amie Stepanovich, U.S. policy manager at Access Now.
The report said that most government hacking infringes upon the human rights of property, freedom of opinion, freedom of thought, freedom from arbitrary attacks on privacy, freedom of assembly, and right to a fair trial.
It defines hacking in three areas: to control messaging to the public, to intentionally cause damage, and to commission intelligence or surveillance gathering.
The report condemns the first two hacking practices outright, and proposes “Ten Human Rights Safeguards for Government Hacking,” restricting intelligence and surveillance hacking to the “rare, limited, exceptional cases” for which it is essential:
- Government hacking must be explicitly provided for by law.
- The government must explain why hacking is the least invasive means for accessing protected information.
- Hacking operations must never occur in perpetuity.
- Governments must apply to a “competent judicial authority who is legally and practically independent from the entity requesting the authorization.”
- Governments must provide notice to the target of the operation and to owners of the devices and networks affected, when possible.
- Agencies must publish the extent of their hacking in annual reports.
- Governments cannot compel private entities to act in a way that would undermine the security of their products and services.
- Governments must report back to the judicial authority if their hacking exceeds initial authorization.
- Extraterritorial hacking should not occur without specific authority.
- Agencies conducting hacking should disclose all vulnerabilities that they discover or purchase.
Through the seventh requirement, the report strongly sides with the private sector’s stance on strong encryption, which has often run up against the FBI stance that law enforcement should have some means of access when in possession of a warrant.
Though the report cites U.N. policy and international law, it is less specific about who should ideally enforce its Ten Human Rights Safeguards for Government Hacking. | <urn:uuid:05da38f7-46b0-4898-8afc-ecfb015fc15f> | CC-MAIN-2017-04 | https://www.meritalk.com/articles/access-now-proposes-presumptive-ban-on-government-hacking/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00497-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.922093 | 518 | 2.8125 | 3 |