At the University of Malaga, in Spain, musicians have “taught” a computer cluster to compose contemporary classical music. The work is being spearheaded by Gustavo Diaz Jerez, full-time pianist and part-time software consultant. According to Wikipedia, the Linux cluster, known as Iamus, is equipped with 352 AMD processors and 704 GB of memory — certainly not a large system by HPC standards.
The key is the software. Developed by Melomics, the application uses evolutionary algorithms to compose melodies — the name Melomics is a blend of "melodies" and "genomics" — in which the software encodes a musical theme into a "genome" and then evolves it into a musical score. It can blend a variety of instruments into the composition, respecting the physical limitations of the instruments and their human players. Once developed, the composition can be rendered into a variety of formats — MP3, MIDI, MusicXML, and a readable score in PDF — for editing or playing.
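Melomics' actual system is proprietary and far more sophisticated, but the core evolutionary loop described above (mutate a genome, score it against aesthetic rules, keep the fitter variant) can be sketched in a few lines. Everything below, from the fitness rule to the note values, is an invented illustration rather than Melomics code.

```python
import random

def fitness(melody):
    # Toy aesthetic rule: prefer small steps between consecutive notes.
    return -sum(abs(a - b) for a, b in zip(melody, melody[1:]))

def mutate(melody):
    # Copy the genome and nudge one randomly chosen note up or down a semitone.
    child = list(melody)
    i = random.randrange(len(child))
    child[i] += random.choice([-1, 1])
    return child

def evolve(seed, generations=200):
    best = seed
    for _ in range(generations):
        child = mutate(best)
        if fitness(child) > fitness(best):
            best = child          # keep the fitter variant, discard the other
    return best

seed_theme = [60, 64, 67, 72, 67, 64, 60]   # a C-major arpeggio as MIDI note numbers
print(evolve(seed_theme))
```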
The computer doesn’t actually generate a performance; it still relies on humans for that. While the compositions have gotten mixed reviews, they are seen as several steps above earlier attempts at computer-generated music, which relied on much less sophisticated algorithms. According to a recent article posted on the BBC website, Diaz Jerez says their approach sets itself apart by using complex aesthetic rules to grow the musical structures in the computer.
The resulting music has been noteworthy enough to attract the attention of The London Symphony Orchestra, which has produced a studio album based on a number of the computer’s better compositions. The album was released in 2012, under the title Iamus.
Since the cluster can compose a complete score in less than a second, the capability to generate music is virtually unlimited. And according to Diaz Jerez, Iamus doesn’t have to be restricted to classical melodies. They could reprogram it to incorporate more notes in the scale and generate Hindu or Arabic music.
Here is Nasciturus, one of the computer's signature pieces.
For other performances based on Iamus' compositions, check out Diaz-Jerez's YouTube page.
Experts claim to have created an artificial intelligence (AI) system that has accurately predicted the outcomes of cases at the European Court of Human Rights (ECHR).
Researchers from University College London (UCL), Sheffield and Pennsylvania allege that their AI “judge” has predicted the same verdict as real judges in 79 percent of cases.
Dr Nikolaos Aletras, lead researcher at UCL Computer Science, said: “There is a lot of hype about AI but we don’t see AI replacing the judges or lawyers, but we think they’d find it useful for rapidly identifying patterns in cases that lead to certain outcomes.”
“It could also be a valuable tool for highlighting which cases are most likely to be violations of the European convention on human rights.”
Trial by algorithm
Supposedly the software is able to assess legal evidence against moral questions of right and wrong to accurately predict the result of real life cases.
The algorithm examined English-language data sets for 584 cases related to three articles of the European Convention on Human Rights: Article 3, regarding torture and inhuman and degrading treatment; Article 6, which protects the right to a fair trial; and Article 8, which protects the right to a private life.
These were chosen because they represent cases about fundamental rights and because there is a large amount of published data on them.
The algorithm searched for patterns in the text, which it was then able to label as a "violation" or "non-violation". It found that ECHR judgments depend more on non-legal facts than purely legal arguments, suggesting the court's judges are more legal "realists" than "formalists".
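The published study used n-gram and topic features fed into a support vector machine; the sketch below is a much-simplified stand-in for that pipeline. The case texts and labels here are invented placeholders, and the real work used hundreds of judgments with cross-validation rather than a handful of sentences.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Placeholder data: in the real study these would be the text of ECHR judgments
# (facts, circumstances and law sections) and their known outcomes.
case_texts = [
    "applicant detained without trial and subjected to degrading treatment",
    "applicant received a fair and public hearing within a reasonable time",
    "correspondence intercepted and home searched without judicial oversight",
    "domestic courts examined the complaint and gave a reasoned judgment",
]
labels = [1, 0, 1, 0]   # 1 = violation found, 0 = no violation

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # word and bigram features from the case text
    LinearSVC(),                            # linear classifier separating the two outcomes
)
model.fit(case_texts, labels)

# A new "case" sharing language with the violation examples should be flagged as such.
print(model.predict(["applicant subjected to degrading treatment while detained"]))
```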
Using artificial intelligence for efficiency
The experiment, apparently the first of its kind, found the most reliable factors for predicting court decisions were language alongside the topics and circumstances of the case.
Dr Vasileios Lampos, UCL Computer Science, and co-author, said: “Previous studies have predicted outcomes based on the nature of the crime, or the policy position of each judge, so this is the first time judgements have been predicted using analysis of text prepared by the court.
“We expect this sort of tool would improve efficiencies of high level, in demand courts, but to become a reality, we need to test it against more articles and the case data submitted to the court.”
A friend recently told me a scary story about why he changed the password on his account with one of the leading online securities trading firms. He was perusing his six-figure portfolio when it occurred to him that he hadn’t changed his password in a while. Quite a while, it turned out; about nine years.
He was further dismayed to realize that the password he had been using all that time – the name of a beloved pet followed by a single number – could probably be guessed by anyone who followed him on social media. For a sophisticated password cracking program, guessing it would be a layup.
Surprisingly, many online services don’t regularly challenge customers to change their passwords, despite the fact that password-cracking technology has advanced by leaps and bounds. Bad guys now follow their victims on social networks to mine keywords that they feed into malicious programs that use machine intelligence to test variations until the door is unlocked. A small fortune may be protected by the cyber security equivalent of tin foil.
No one likes passwords, but they are more important than ever these days. And the ones that worked for you five years ago are probably useless today. If your money, health records or any other personally identifiable information (PII) is at stake, you owe it to yourself to use a secure, random code that a machine can’t guess. As you go about resetting your passwords, avoid these eight common mistakes.
- Using the same password everywhere
The easiest way to remember a password is to use only one, but that’s also the fastest route to disaster. Once a successful phishing attack captures that password – and studies have found that as many as 97% of people can’t detect a phishing email – the attacker essentially has the keys to the kingdom. While it’s probably okay to use the same password for sites that don’t store any PII, you should use different and secure passwords in any situation where your identity or financial information could be compromised.
- Varying passwords with a single character
This is a trap many people fall into when asked to change their passwords; they comply by changing a “12” to a “13.” Password-guessing programs are wise to this trick and can sniff it out in seconds.
A variation of this dangerous practice is to include a non-alphanumeric character by tacking “!” onto the end of your existing password. That’s the oldest dodge in the book, and password crackers are wise to it. Non-alphanumeric characters should be used within the password, not at either end.
- Using personal information in passwords
Avoid using names of relatives, celebrities, sports teams, pets or any other common terms in your passwords. Cracking software automatically looks for the most common combinations like Yoda123. Don’t think that you can protect yourself by invoking personal information like the name of a loved one or your high school mascot. Social networks make it straightforward for crooks to harvest that information.
You also shouldn’t assume that adding a string of characters to a common name is protection enough. Password crackers know this trick and cycle through combinations of common names and numbers until they hit the right one. The only safe password is one with random – or seemingly random – sets of characters.
- Sharing passwords with others
You might have the strongest password in the world, but if you share it with someone who stores it in an email account protected by “qwerty,” it won’t make a bit of difference. Your passwords are for your eyes only.
- Using passwords that are too short
A decade ago, a five- or six-character password was enough to beat most cracking programs, but computers are so much faster now that a six-character password can be guessed by a brute-force attack. Think 12 characters at a minimum.
- Storing passwords in plain text
One easy way to remember passwords is to store them in a spreadsheet or mail them to yourself. Bad idea. Have you heard of ransomware? It’s the fastest-growing category of malware. Criminals hold your data hostage until you pay them a ransom. In the meantime, they scour your hard drive looking for anything that resembles a password list. Once they find it, the ransom payment is the least of your problems.
- Using recognizable keystroke patterns
“1qaz2wsx” may seem like a pretty tough password to guess until you look at your keyboard and notice the pattern. A random series of letters and numbers must be truly random to have a chance.
- Substituting numbers for letters
This used to be an effective technique, but “Spr1ngst33n” doesn’t survive a determined attack any more. The software is on to that trick.
Your best bet is to use a password manager protected by strong encryption. The best ones generate secure passwords for you and give you total protection with two-factor authentication.
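To put rough numbers on the length advice above, and to show what a generated password looks like, here is a small illustration. The guesses-per-second figure is an assumption chosen for the sake of arithmetic, not a benchmark of any particular cracking tool.

```python
import secrets
import string

# How fast does the search space grow with length?
alphabet_size = 62                  # a-z, A-Z, 0-9
guesses_per_second = 10e9           # assumed attacker speed, for illustration only

for length in (6, 8, 12):
    keyspace = alphabet_size ** length
    years = keyspace / guesses_per_second / (3600 * 24 * 365)
    print(f"{length} characters: {keyspace:.2e} combinations, ~{years:.2e} years to exhaust")

# Generating a random 16-character password, the way a password manager might.
charset = string.ascii_letters + string.digits + "!@#$%^&*"
password = "".join(secrets.choice(charset) for _ in range(16))
print(password)
```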
In preparation of our CCNA exam, we want to make sure we cover the various concepts that we could see on our Cisco CCNA exam. So to assist you, below we will discuss UDP.
UDP: User Datagram Protocol
UDP is a connectionless transport layer (Layer 4) protocol in the OSI model, which provides a simple and unreliable message service for transaction-oriented services. UDP is basically an interface between IP and upper-layer processes. UDP protocol ports distinguish multiple applications running on a single device from one another.
Since many network applications may be running on the same machine, computers need something to make sure the correct software application on the destination computer gets the data packets from the source machine, and some way to make sure replies get routed to the correct application on the source computer. This is accomplished through the use of the UDP "port numbers". For example, if a station wished to query a Domain Name System (DNS) server at station 192.0.2.53, it would address the packet to station 192.0.2.53 and insert destination port number 53 in the UDP header. The source port number identifies the application on the local station that requested the domain name service, and all response packets generated by the destination station should be addressed to that port number on the source station. Details of UDP port numbers can be found in the TCP/UDP Port Number document and in the reference.
Unlike TCP, UDP adds no reliability, flow-control, or error-recovery functions to IP. Because of UDP's simplicity, UDP headers contain fewer bytes and consume less network overhead than TCP.
UDP is useful in situations where the reliability mechanisms of TCP are not necessary, such as in cases where a higher-layer protocol provides error and flow control, or where real-time data transport is required.
UDP is the transport protocol for several well-known application-layer protocols, including Network File System (NFS), Simple Network Management Protocol (SNMP), Domain Name System (DNS), and Trivial File Transfer Protocol (TFTP).
Protocol Structure – UDP User Datagram Protocol Header
- Source port – Source port is an optional field. When used, it indicates the port of the sending process and may be assumed to be the port to which a reply should be addressed in the absence of any other information. If not used, a value of zero is inserted.
- Destination port – Destination port has a meaning within the context of a particular Internet destination address.
- Length – It is the length in octets of this user datagram, including this header and the data. The minimum value of the length is eight.
- Checksum – The sum of a pseudo header of information from the IP header, the UDP header and the data, padded with zero octets at the end, if necessary, to make a multiple of two octets.
- Data – Contains upper-level data information.
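Each of the four header fields above is 16 bits wide, giving UDP its fixed 8-octet header. As a quick illustration (outside the scope of the CCNA exam itself), Python's struct module can pack and unpack such a header; the port and payload values here are arbitrary examples.

```python
import struct

# Build an 8-byte UDP header: source port, destination port, length, checksum.
# Values are examples only; a real checksum is computed over the pseudo header,
# UDP header and data as described above.
source_port, dest_port = 54321, 53          # e.g. a client querying DNS
payload = b"example dns query"
length = 8 + len(payload)                    # header (8 octets) plus data
checksum = 0                                 # 0 means "no checksum" in IPv4 UDP

header = struct.pack("!HHHH", source_port, dest_port, length, checksum)

# Unpacking reverses the operation, e.g. when inspecting a captured datagram.
print(struct.unpack("!HHHH", header))        # -> (54321, 53, 25, 0)
```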
I hope you found this article to be of use and it helps you prepare for your Cisco CCNA certification. Achieving your CCNA certification is much more than just memorizing Cisco exam material. It is having the real world knowledge to configure your Cisco equipment and be able to methodically troubleshoot Cisco issues. So I encourage you to continue in your studies for your CCNA exam certification.
Schopmeyer S.A. (University of Miami); Lirman D. (University of Miami); Bartels E. (Center for Tropical Research); Byrne J. (The Nature Conservancy); and 7 more authors.
Restoration Ecology | Year: 2012
During an unusual cold-water event in January 2010, reefs along the Florida Reef Tract suffered extensive coral mortality, especially in shallow reef habitats in close proximity to shore and with connections to coastal bays. The threatened staghorn coral, Acropora cervicornis, is the focus of propagation and restoration activities in Florida and one of the species that exhibited high susceptibility to low temperatures. Complete mortality of wild staghorn colonies was documented at 42.9% of donor sites surveyed after the cold event. Remarkably, 72.7% of sites with complete A. cervicornis mortality had fragments surviving within in situ coral nurseries. Thus, coral nurseries served as repositories for genetic material that would have otherwise been completely lost from donor sites. The location of the coral nurseries at deeper habitats and distanced from shallow nearshore habitats that experienced extreme temperature conditions buffered the impacts of the cold-water event and preserved essential local genotypes for future Acropora restoration activities. © 2011 Society for Ecological Restoration International.
Karubian J. (Tulane University); Carrasco L. (Center for Tropical Research); Mena P. (Center for Tropical Research); Olivo J. (Center for Tropical Research); and 4 more authors.
Wilson Journal of Ornithology | Year: 2011
The Brown Wood Rail (Aramides wolfi) is a globally threatened, poorly known species endemic to the Chocó rain forests of South America. We provide a first report on the species' nesting biology, home range, and habitat use. Nests (n = 16) were open cups ∼2 m above ground and were more common in secondary forest than expected by chance. Median clutch size was four eggs, incubation lasted >19 days, the precocial young departed the nest within 24 hrs of hatching, and 66% of nests successfully produced young. At least two adults participated in parental care and pair bonds appear to be maintained year-round. The home range of an adult radio-tracked for 7 months was 13.5 ha in secondary and selectively-logged forest contiguous to primary forest. This easily overlooked species may be more resilient to moderate levels of habitat degradation than previously suspected, but extensive deforestation throughout its range justifies its current status as 'Vulnerable to Extinction'. © 2011 by the Wilson Ornithological Society.
The global preventive vaccines market accounted for a value of USD 27.6 billion in 2015 and is expected to reach USD 55 billion by 2021, growing at a CAGR of 12.4% during the forecast period from 2016 to 2021.
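As a quick sanity check, the growth rate implied by those two figures can be recomputed directly; the small gap from the quoted 12.4% comes from rounding and from which base year the report actually uses.

```python
start_value, end_value = 27.6, 55.0       # USD billion, 2015 and 2021
years = 2021 - 2015

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")        # prints roughly 12%
```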
The vaccines market is different from any other commodity market. Compared to the pharmaceutical market, the vaccines market is relatively small, although growing fast. The vaccines market is rather concentrated on both supply and demand sides. It is highly regulated and largely dependent on public purchasers and donor policies. The vaccines market has very distinct features that increase the complexity of assessing and understanding pricing and procurement in their context. It is made up of individual markets for individual vaccines or vaccine types, each with its own specificities, particularly on the supply side.
New candidates for vaccination against cancer and HIV, which are under development, are also projected to hit the magical milestone. Multinational vaccine companies historically have conducted much of the innovation, research, and development in the field of vaccine production. The companies have used their significant revenues, global size, and deeper expertise to fund these R&D efforts. New technologies are also being introduced to curb the time and expense incurred on new vaccine discovery and production. However, despite the success of vaccines, infectious diseases are still a major cause of illnesses worldwide. At least 40 major infectious diseases are still uncontrolled by vaccination.
Global Preventive Vaccines Market- Market Dynamics
This study aims to give a detailed overview of the dynamics of the preventive vaccines market during the forecast period. It focuses on the need to develop strategic insights into the global and country-level markets, taking into consideration demand drivers such as awareness, government initiatives and birth rates. A holistic study of the market has been carried out by incorporating various factors, extending from country-specific demographic conditions to market-specific microeconomic influences, that were needed to analyze the future trends of this market.
The report details several factors driving the growth of the global preventive vaccines market. Some of these are:
High regulatory intervention and costs are turning out to be the major restraints for this market.
The market has been segmented based on the type of vaccines and application. The segment based on the type of vaccine includes live, attenuated vaccine, inactivated vaccines, subunit vaccine, toxoid vaccines, conjugate vaccines, DNA vaccine, and recombinant vector vaccine. The market segment based on applications has been further subdivided into pediatric vaccine and adult vaccine. The pediatric vaccines include pneumococcal vaccine, varicella vaccine, combination vaccines (Bivalent, trivalent, tetravalent, and pentavalent vaccines), MMR, poliovirus, hepatitis, HIB, and other pediatric vaccines. The segment on adult vaccines includes influenza, cervical cancer, hepatitis, zoster, and other adult vaccines.
There are currently close to 120 new vaccines in the pipeline of various companies across the globe, set to hit the market in the next five years. In anticipation of this and with the emergence of a new target market and smaller players in it, the global preventive vaccines market is set to grow at a CAGR of close to 12.4 % by the year 2021.
Some of the key players in the market are:
What the Report Offers
2.2.3 What is a key agreement protocol?
A key agreement protocol, also called a key exchange protocol, is a series of steps used when two or more parties need to agree upon a key to use for a secret-key cryptosystem. These protocols allow people to share keys freely and securely over any insecure medium, without the need for a previously-established shared secret.
Suppose Alice and Bob want to use a secret-key cryptosystem (see Question 2.1.2) to communicate securely. They first must decide on a shared key. Instead of Bob calling Alice on the phone and discussing what the key will be, which would leave them vulnerable to an eavesdropper, they decide to use a key agreement protocol. By using a key agreement protocol, Alice and Bob may securely exchange a key in an insecure environment. One example of such a protocol is called the Diffie-Hellman key agreement (see Question 3.6.1). In many cases, public-key cryptography is used in a key agreement protocol. Another example is the use of digital envelopes (see Question 2.2.4) for key agreement.
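As an illustration of the idea, a minimal Diffie-Hellman exchange can be written in a few lines. The parameters below are toy values chosen only to keep the example short; real deployments use vetted 2048-bit (or larger) groups or elliptic curves, and a maintained cryptography library rather than hand-rolled code.

```python
import secrets

# Toy Diffie-Hellman exchange; parameters are for illustration only.
p = 2**127 - 1        # a Mersenne prime, far too small for real security
g = 5                  # public generator

a = secrets.randbelow(p - 3) + 2      # Alice's private exponent
b = secrets.randbelow(p - 3) + 2      # Bob's private exponent

A = pow(g, a, p)      # Alice sends A over the insecure channel
B = pow(g, b, p)      # Bob sends B over the insecure channel

# Each side combines its own private value with the other's public value.
alice_key = pow(B, a, p)
bob_key = pow(A, b, p)
assert alice_key == bob_key           # both now hold the same secret-key material
```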
A new economic model developed by the Center for Strategic and International Studies, a prominent D.C. think tank, quantifies sizable economic impact from malicious cyber activity.
Attacks on businesses and consumers are a blight on the economy, with criminals foreign and domestic using the Internet to steal identities, intellectual property, trade secrets and just about anything else they can get their hands on.
A new economic model developed at a prominent D.C. think tank puts the cost to the U.S. economy as high as $100 billion annually, with a corresponding loss of as many as half a million jobs.
The report, released by the Center for Strategic and International Studies (CSIS) and written by James Lewis and Stewart Baker, two old hands in the Washington cybersecurity policy discussion, offers a quantitative approach based on data from the Commerce Department and analogous losses from activities such as car crashes, piracy and other losses and crimes.
Cybercrime: The Cost of Doing Business?
The authors explain: "One way to think about the costs of malicious cyber activity is that people bear the cost of car crashes as a tradeoff for the convenience of automobiles; similarly they may bear the cost of cybercrime and espionage as a tradeoff for the benefits to business of information technology."
[Related: Obama Signs Cybersecurity Order]
But what is the price of all that nefarious activity?
The report, sponsored by security software vendor McAfee, eschews survey data, which the authors say is flawed because respondents "self-select," and businesses often either conceal or do not realize the full extent of the losses from a cyber attack.
"We believe the CSIS report is the first to use actual economic modeling to build out the figures for the losses attributable to malicious cyber activity," Mike Fey, executive vice president and CTO at McAfee, said in a statement.
"As policymakers, business leaders and others struggle to get their arms around why cybersecurity matters, they need solid information on which to base their actions."
Lawmakers Divided Over Government's Role
And cybersecurity is the subject of a long-running policy debate in Congress, with lawmakers divided over what role the government should play in setting and enforcing security standards for critical infrastructure operators in the private sector.
The CSIS report evaluated malicious cyber activity in a variety of forms, including crime, intellectual property loss, reputational damage and the cost of bolstering network security and recovery after an attack. The authors also considered the opportunity costs associated with downtime and lost trust, as well as the loss of sensitive business information.
Through an analysis of Commerce Department data on exports and job losses, the authors estimated that cyber espionage could rob the economy of as many as 508,000 jobs. Though he described that figure as a "high-end estimate," co-author Lewis suggested that the real impact could be more severe.
"As with other estimates in the report, however, the raw numbers might tell just part of the story," he said. "If a good portion of these jobs were high-end manufacturing jobs that moved overseas because of intellectual property losses, the effects could be more wide ranging."
The authors are planning to produce a second report that will focus on the less tangible impacts of malicious cyber activity, attempting to quantify the impact on the pace of innovation and the flow of trade.
Kenneth Corbin is a Washington, D.C.-based writer who covers government and regulatory issues for CIO.com. Follow Kenneth on Twitter @kecorb. Follow everything from CIO.com on Twitter @CIOonline, Facebook, Google + and LinkedIn.
Read more about government in CIO's Government Drilldown.
This story, "Cybercrime Costs U.S Economy $100 Billion and 500,000 Jobs" was originally published by CIO.
In South Africa, a land for many years under oppressive racist rule, a new openness and spirit of reconciliation is growing. The new constitution of that land, developed on the Internet with an open invitation for suggestions and comments as it proceeded, has one of the world's most comprehensive freedom of information acts.
Thomas Blanton, U.S. National Security Archives director, said in 1995 that while most freedom of information statutes -- including that of the United States -- are limited and were passed mostly for the purpose of embarrassing former leaders or regimes, South Africa's constitution specifically guarantees freedom of information to all South Africans. However, Blanton said, "because South Africa lacks established administrative procedures and the judiciary is only beginning to be reformed, this constitutional right is only an idea, not yet an actual practice."
Since 1995, though, South Africa has taken its new constitution and freedom-of-information idea and plunged into the deep divide between whites and blacks. As stated on the Web page of South Africa's Truth and Reconciliation Commission, "This Constitution provides a historic bridge between the past of a deeply divided society characterized by strife, conflict, untold suffering and injustice, and a future founded on the recognition of human rights, democracy and peaceful coexistence and development opportunities for all South Africans, irrespective of color, race, class, belief or sex.
"The pursuit of national unity, the well-being of all South African citizens and peace require reconciliation between the people of South Africa and the reconstruction of society."
To help accomplish this, the country established its Truth and Reconciliation Commission and gave it the power to grant amnesty for crimes committed under the old regime if the following conditions were met: the act, omission or offense was associated with a political objective; the act, omission or offense took place between the period March 1, 1960, and May 10, 1994; and full disclosure has been made.
If a decision to grant amnesty is made, the president must be informed, and the person's full name and acts committed must be published in the government newspaper.
The commission had no easy job. Political murders, bombings, torture and other acts were reviewed. In some cases, offenders were pardoned for political acts and convicted of criminal ones. A white police officer was pardoned for suffocating black suspects in pursuit of confessions, and in one of its final acts, the commission pardoned four black men who dragged a white American from her car and stabbed her to death. While many called for revenge and said such amnesties were unjust, others, including the parents of the American girl, praised the commission for its efforts in the pursuit of full disclosure and reconciliation.
It's easy to see the results of crimes concealed, and how mysteries continue to plague our senses and trap our attention. The "disappeared" people of Argentina. The vanished millions in Cambodia. And what really happened in the assassination of President Kennedy? Mystery attracts and upsets human beings. But unlike mystery novels, where a final resolution brings satisfaction and relief, real life's unresolved mysteries stir suspicion and distrust.
New technologies allow us to learn the truth as never before. DNA analysis, for example, was brought into the debate over President Clinton's sexual relationship with Monica Lewinsky. DNA evidence has exonerated hundreds of men, jailed before DNA analysis was available, for rapes they did not commit. It was Thomas Jefferson who said, "A patient pursuit of facts, and cautious combination and comparison of them, is the drudgery to which man is subjected by his maker, if he wishes to attain sure knowledge." Ironically, it is DNA evidence that now appears to confirm that Jefferson fathered a child with Sally Hemings, one of his slaves.
And if it is sure knowledge that allows us to operate with certainty and reason, we are indeed fortunate to live in the Information Age. But information alone is not enough. South Africa's Truth and Reconciliation Commission is engaged in an amazing reversal of traditional justice by offering amnesty in exchange for truth. When used wisely, forgiveness is a very powerful force that appeals to the best in us all.
Remember when a small boy fell into a zoo's gorilla enclosure? A female gorilla carried the unconscious boy to a door so zookeepers could remove him safely. Instead of taking her revenge on the humans who had entrapped her, she chose to help. Millions watching on television were deeply touched by the animal's actions.
Remember the American family whose son was murdered by a gunman in Italy? Instead of condemning the country for an action of one of its citizens, or crying for revenge, the family donated their son's organs to Italian children and created an outpouring of love and support so powerful it revolutionized organ donation practices in that country and will undoubtedly save the lives of many.
If South African justice can successfully tap this force for reconciliation, its people may teach us all a lesson about justice.
A CD-ROM version of the commission's report is available online.
Letters to the Editor may be faxed to Dennis McKenna at 916/932-1470 or sent via e-mail to . Please list your telephone number for confirmation. Publication is solely at the discretion of the editors. "Government Technology" reserves the right to edit submissions for length.
In this series, we’ll discuss routing and routing protocols. First, let’s define what we mean by a “route”. In common usage, a “route” is an entry in the IP routing table. You can display the IP routing table (available routes) with the command show ip route.
Each entry in the routing table gives the best way for that router to reach a particular IP prefix. Remember that a “prefix” is a particular address/mask combination. Examples of prefixes are:
- 10.0.0.0/8 (a classful network)
- 172.168.100.0/24 (a subnet)
- 192.168.1.32/29 (another subnet)
- 188.8.131.52/32 (a host route)
- 0.0.0.0/0 (the default route)
For each prefix displayed in the routing table, the entry will indicate how the route was learned, the next hop router’s address and/or outbound interface used to reach it, and other information. Although a routing table entry always represents the best known way to reach a particular prefix, the router may be aware of other possible paths to that prefix. If so, those additional paths would be tracked in other behind-the-scenes data structures separate from the routing table.
There are three ways that a router can learn about the existence of a route:
- Directly connected (to an interface)
- Static configured (by a person)
- Dynamically learned (via a routing protocol)
Directly connected routes are those prefixes to which the router has a direct physical connection. Assuming that the interface is “up/up”, the router will calculate the prefix based on the address and mask configured on the interface, and place a “C” (Connected) route for that prefix in the routing table.
Static routes are those that are configured by an administrator with the ip route command, instructing the router to use a particular next hop or outbound interface to reach a particular prefix. Assuming that the interface in question is “up/up”, the router will place an “S” (Static) route for that prefix in the routing table.
Dynamic routes are those learned via a routing protocol. The mechanism by which the router learns the route varies by routing protocol, as does the letter representing the way the route was learned. Examples include “R” (RIP), “O” (OSPF) and “D” (EIGRP) routes.
Once all routers have learned their best paths to all available prefixes, the network is said to be “converged”. Note that after the network is converged, the routers do not have identical routing tables, but the tables are consistent and correct. When a change occurs, the time lag between the change and re-convergence is referred to as the “convergence interval” or “convergence time”, and is a function of the routing protocol(s) and the size of the network.
We can classify the dynamic protocols several ways. First there’s the method of operation, which can include:
- Distance-Vector (D-V)
The basic idea of a D-V protocol such as RIP is that each router determines its directly-connected routes, and places them in its routing table. The router then sends advertisements on all participating interfaces to inform its neighbors about what it knows.
When a router receives an update, it checks its routing table to see if the update contains an advertised prefix that was previously unknown. If so, that prefix is added to the routing table, with the advertising router as the best next hop.
If a router receives an advertisement for a known prefix, the router checks to see if the advertised route has a better metric than the current route. If so, the router updates the routing table to use the advertising router as the next hop for that prefix. If not, the router ignores the advertisement.
When all information has been passed around and the routing tables have stabilized, the network is converged. So that all routers become aware of changes to the topology in a timely fashion, the routing tables are advertised periodically.
D-V protocols get their name from the fact that each update sent from router to router is a mathematical vector (a multi-valued variable) containing prefix and metric (distance) information. The metric varies by routing protocol, and includes such things as hop count, cost, bandwidth, and delay. The vectors used with routing protocols are mathematical vectors, not navigational vectors (such as Northeast).
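The receive-and-compare behavior described above is simple enough to model directly. The snippet below is an illustrative sketch of a distance-vector table update using a hop-count metric, as RIP does; it is not Cisco IOS code, and the prefixes and next-hop addresses are arbitrary examples.

```python
# routing_table maps prefix -> (metric, next hop)
routing_table = {
    "10.0.0.0/8": (0, "connected"),
    "192.168.1.0/24": (1, "172.16.0.2"),
}

def process_advertisement(advertising_router, advertised_routes):
    """Apply one distance-vector update. advertised_routes maps prefix -> metric
    as seen by the neighbor; reaching it through that neighbor costs one more hop."""
    for prefix, neighbor_metric in advertised_routes.items():
        candidate = neighbor_metric + 1
        current = routing_table.get(prefix)
        if current is None or candidate < current[0]:
            routing_table[prefix] = (candidate, advertising_router)
        # otherwise the advertisement is ignored, as described above

process_advertisement("172.16.0.3", {"172.31.0.0/16": 2, "192.168.1.0/24": 3})
print(routing_table)   # the unknown prefix is added; the worse path is ignored
```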
Author: Al Friebe
Primer: Network Worm
By Kevin Fogarty | Posted 2005-01-13
How a network worm infects your computers.
Isn't this just a regular worm? Yes, but there is more than one meaning for "regular." E-mail worms and viruses are designed to spread by using the e-mail system itself as a carrier. A network worm is more insidious. It might arrive via e-mail, but could also slip in attached to files in a portable hard drive, a flash-memory stick, a PDA or, increasingly, a cell phone.
Why the distinction? Because it's possible to screen out most, if not all, e-mail worms and viruses using virus scanners at the firewall or on the e-mail servers. But network worms can come in via pathways that become more numerous with every advance in mobile computing, wireless networks and smart phones. Many companies aren't sufficiently aggressive about virus screening inside the firewall. So network worms not only have more ways to get into a corporate network, but once they're in, they're more likely to be free to operate uninterrupted.
How does a network worm attack? Most simply copy themselves to every computer with which the host computer can share data. Most Windows networks allow machines within defined subgroups to exchange data freely, making it easier for a worm to propagate itself. Some worms can also lodge in the startup folder of a networked computer, launch when that computer is restarted and reinfect a network that may have already been cleaned out. A worm that lodges in a server can infect every user who logs on to that server.
How can it affect cell phones? Russian cybersecurity firm Kaspersky Labs recently identified a network worm called Cabir that can infect a cell phone running the Symbian operating system by posing as a security utility. The worm can change the phone's operating system so it is launched every time the phone is turned on, then propagate itself to other phones via Bluetooth wireless connections. No infections have been reported so far.
How do you fix infected computers? Manually, by shutting down the network and going to each infected computer to delete the offending files, then erasing the System Restore data to make sure it won't reinfect a cleaned machine. Or buy a sophisticated virus-scanning application that will sit on each computer and server and clean it of anything that resembles worm or virus code.
What's the solution? Pretty obvious: Buy a good enterprise virus-scanning utility that will monitor activity inside your network as well as data coming in through the firewall. Once they've cleaned out an existing infection, virus scanners continue to watch the network for other threats. Make sure you set all machines to download the most recent worm and virus filters automatically.
A Microsoft Professional Developers Conference panel on the future of programming languages looks at what is on programmers' minds. The PDC panel of experts debates what's best for programmers and languages.
LOS ANGELES-What are some of the most pressing issues facing developers today, and what can be done with programming languages to help with them? Those were among the questions posed to a group of language and programming experts at the Microsoft PDC (Professional Developers Conference) here.

Gilad Bracha, Anders Hejlsberg, Douglas Crockford, Wolfram Schulte and Jeremy Siek made up the distinguished panel of computer language designers and researchers addressing "The Future of Programming Languages." And the moderator was no slouch either. Erik Meijer, a Microsoft software architect and language expert in his own right, moderated the panel. Meijer was influential in the evolution of the Haskell language and is the leader of Microsoft's "Volta" project to simplify Web and cloud development.

The panel touched on a wide variety of issues, including not only the most pressing issues facing developers, but also such topics as whether IDEs (integrated development environments) matter more than languages, whether modeling is important, the degree to which programmers should be allowed freedom with the language, and the inevitable dynamic-versus-static language debate.
First, a bit about the panelists ... Gilad Bracha is the creator of the Newspeak programming language. He is currently a distinguished engineer at Cadence Design Systems; previously he was a computational theologist and distinguished engineer at Sun Microsystems. Douglas Crockford is a senior JavaScript architect at Yahoo. Anders Hejlsberg, a technical fellow in the Developer Division at Microsoft, is the chief designer of the C# programming language and a key participant in the development of the Microsoft .NET framework. Hejlsberg also developed Turbo Pascal, the first-ever IDE, and the Delphi language. Wolfram Schulte is a senior researcher at Microsoft, and Jeremy Siek is an assistant professor at the University of Colorado. Siek's areas of research include generic programming, programming language design and compilers.
Regarding IDEs, Bracha said, "I come from a world where IDEs matter a lot. They are enormously important, but the language is also enormously important."

Hejlsberg said IDEs certainly do matter, "but a lot less than they did 25 years ago." He said frameworks and IDEs have dwarfed languages, but languages remain important. However, Hejlsberg lamented the fact that languages evolve so slowly as compared with other areas of computing.

Schulte said he believes "languages and libraries don't matter so much. You have to look at what problem you want to solve and then pick the language." Indeed, Crockford said he encourages developers to learn as many languages as possible.
Yet, when asked whether languages should be designed by committee or by a benevolent dictator, all five panelists, in unison, replied: benevolent dictator. One panelist did note the value of languages that are standards-based, and said although a standards body or committee may be stodgy, it is the structure the organization provides that is most important.
A funny thing happened on the way to our supposedly 3D-printed future: A simpler, older, but no less revolutionary technology made its way into every automated factory on earth, and now it’s coming to a garage near you. If you haven’t heard of it, it’s mostly because it has a completely unbankable name—CNC routing (or CNC milling). Also, unlike the usurper technology 3D printing, which has only lately become popular, CNC milling has been around since MIT pioneered the technology starting in the 1950s.
CNC routing is basically the inverse of 3D printing. Instead of using a computer to control a basic armature and print head that deposits plastic or some other material in three dimensions, CNC routing uses a spinning drill bit to carve wood, metal or plastic. It’s the difference between making a sculpture out of clay and carving it from marble, only there’s a robot doing it instead of a human.
And now CNC milling is becoming as accessible as 3D printing. Shopbot, which has made CNC routers since 1996, has launched a Kickstarter crowd-funding campaign for its Handibot miniature CNC router.
NASA's Mars rover Curiosity made its third short trip this week as part of a long trek that could take as much as a year.
Curiosity, an SUV-sized rover carrying 10 scientific instruments, drove 135 feet on Tuesday. The first two drives were made on July 4 and July 7, kicking off an approximately six-mile trip to the base of Mount Sharp, the goal of Curiosity's two-year exploratory mission.
NASA's Mars rover Curiosity took this image of the lower slopes of Mount Sharp, its next big destination, after a drive on Tuesday. (Photo: NASA/JPL-Caltech)
Before heading out on this trek, Curiosity had only driven about 500 yards from where it landed in August 2012.
However, this latest trip is one of the longest any rover has made on Mars. Curiosity's predecessor, Opportunity, made the longest trek, traveling 13 miles in 1,000 days.
After this third drive, Curiosity will have traveled a total of 325 feet into its long journey.
Mount Sharp, which sits in the middle of Gale Crater, where Curiosity landed, exposes many geological layers where scientists hope to find clues to how the ancient Martian environment evolved.
Curiosity isn't expected to climb to the top of Mount Sharp, though it will drive up a portion of it to investigate as many geological layers as possible.
NASA scientists have been eager to get Curiosity to Mount Sharp, since that is their main point of interest, but the rover already has made significant findings.
Less than two months after landing on the Red Planet, Curiosity found evidence of what scientists described as a "vigorous" thousand-year water flow on the planet's surface.
The rover is on a two-year mission to find evidence of whether Mars has, or ever had, an environment that could support life. The water evidence was a key discovery since it is one of the main elements necessary for life as we know it.
In March, Curiosity sent back what appears to be proof that Mars could have supported life in the distant past. The evidence came from the first rock that NASA technology has ever drilled on another planet.
The sample, which was analyzed by chemistry instruments on the rover, contained sulfur, nitrogen, hydrogen, oxygen, phosphorus and carbon -- key chemical ingredients for life.
This article, NASAs Mars rover Curiosity takes third short trip in long journey, was originally published at Computerworld.com.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is email@example.com.
Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center.
This story, "NASA's Mars rover Curiosity takes third short trip in long journey" was originally published by Computerworld.
University researchers have found that HTML5-based mobile apps, which are expected to become more prevalent over the next several years, could add security risks for businesses.
Through developer error, the Web technology could automatically execute malicious code sent by an attacker via Wi-Fi, Bluetooth or a text message, researchers at Syracuse University reported last month at the Mobile Security Technologies Conference in San Jose, Calif.
"The malicious code can surreptitiously capture the victim’s sensitive information off their mobile device and ex-filtrate it to an attacker," Jack Walsh, a mobile security expert at ICSA Labs, said Monday in a blog post on the research. "Second, and potentially worse, the app may spread its malicious payload like a worm -- SMS text messaging itself to all of the user’s contacts."
Security weaknesses introduced in HTML5-based apps could become a bigger problem as their use grows. Because of the cross-platform nature of the Web technology, it is expected to be in more than half of all mobile apps by 2016, according to Gartner.
If the developers just want to process data, but use the wrong APIs, the code in the mixture can be automatically executed, the researchers said.
"If such a data-and-code mixture comes from an untrustworthy place, malicious code can be injected and executed inside the app," the researchers said.
The risk of developer error is not unique to HTML5 apps.
"An HTML5-based app is no different from a web-based application and the same security measures should apply to both," Bogdan Botezatu, senior e-threat analyst for Bitdefender, said.
Ways in which an attacker could send a malicious code-data string to an HTML5 app include an SSID field sent over a Wi-Fi access point, a QR code, JPEG image or as metadata within an MP3 music file. The SSID, or service set identifier, is used in connecting devices to a network.
Other places malicious code could be hidden are in an SMS message displayed by the app. The code could also be sent from an infected device via Bluetooth if the app attempts a pairing.
In order for HTML5-based apps to be cross-platform, they require a middleware framework that lets them connect to the underlying system resources, such as files, device sensors and the camera.
Google Android, Apple iOS and Windows Phone have different containers that apps use for accessing services, so developers let the framework creators handle the plumbing underneath the Web app.
Examples of frameworks include PhoneGap, RhoMobile and Appcelerator. The researchers studied 186 PhoneGap plugins and found 11 that were vulnerable to the code-injection attack.
While the researchers only used PhoneGap and Android for their work, the same problems were applicable across operating systems.
"Since apps are portable across platforms, so are their vulnerabilities," the researchers said. "Therefore, our attacks also work on other platforms."
This story, "Why businesses should use caution with HTML5-based mobile apps" was originally published by CSO.
The MIT SENSEable City Lab's Real Time Rome project aggregates data from cell phones to better understand urban dynamics in real time. By collecting location data from cell phone users, and speed and location data from bus and taxi fleets, the project aims to help Roman commuters make better decisions about their environment. "Imagine being able to avoid traffic congestion or knowing where people are congregating on a Saturday afternoon," said project director Carlo Ratti, director of the SENSEable City Lab. "In a worst-case scenario, such real-time systems could also make it easier to evacuate a city in case of emergency."
For more information on Real Time Rome, visit the Web site. -MIT
CARRIER-TO-NOISE RATIO IN CABLE NETWORKS
Cable operators routinely measure carrier-to-noise ratio (CNR) as one means of characterizing the health of their cable networks. Government regulations require that cable networks meet certain minimum standards for analog television signal CNR. Also, the DOCSIS® Radio Frequency Interface Specification includes in its assumed RF channel transmission characteristics a minimum CNR parameter for data signals. But just what is CNR? This white paper provides a comprehensive tutorial on the subject.
THE CONCEPT OF CNR
CNR is in cable industry vernacular a pre-detection measurement-that is, a measurement performed in the frequency domain. By definition, CNR is the difference, in decibels, between the amplitude of an RF signal and the amplitude of noise present in the transmission path of the RF signal. The RF signal may be unmodulated (also called continuous wave, or CW) or modulated. The noise may be one or a combination of several types: thermal noise; shot noise and relative intensity noise (RIN) in optical fiber links; and, in cable systems carrying a mix of analog TV channels and digitally modulated carriers, non-thermal noise such as composite and intermodulation noise. This paper focuses on thermal noise generated by passive and active devices through which the RF signal is transmitted. The amplitude of thermal noise-also known as additive white Gaussian noise, or AWGN-is usually specified over a certain bandwidth, called noise power bandwidth.
Figure 1 is an example of a typical spectrum analyzer display when making a CNR measurement.
Figure 1. CNR Is a Frequency Domain Measurement
Many of today's signal level meters, spectrum analyzers, and quadrature amplitude modulation (QAM) analyzers support the measurement of both analog TV channels and 64- and 256-QAM digitally modulated carriers. TV channel signal level or amplitude generally refers to the visual carrier amplitude, which is defined as the root mean square (rms) value of the instantaneous synchronizing peak. Digitally modulated carrier amplitude is a measure of the signal's average power. The signal level, or amplitude, of a TV channel or digitally modulated carrier is the "C" in CNR.
Analog TV channel visual carrier amplitude and digitally modulated carrier average power are commonly measured using decibel millivolt (dBmV), a unit of power expressed in terms of voltage.
dBmV = 20log(signal amplitude in millivolts/1 millivolt)   (Equation 1)
But what about the "N" in CNR? That is the thermal noise rms amplitude.
According to Fundamentals of RF and Microwave Noise Figure Measurement, thermal noise is "the fluctuating voltage across a resistance due to the random motion of free charge caused by thermal agitation." The HP application note goes on to say "The probability distribution of the voltage is Gaussian with mean square voltage..."
The equivalent circuit model for a noisy resistor can be represented as a noise voltage source in series with a noiseless resistor, or as a noise current source in parallel with a noiseless resistor. Figure 2 illustrates these models.
Figure 2. Equivalent Circuit Model for a Noisy Resistor
These equivalent circuit models are mentioned here because in the world of electronic amplifier design, the concepts of input referred noise voltage and input referred noise current are important. The optimum source resistance necessary to minimize the noise figure of an electronic amplifier is the ratio of input referred noise voltage to input referred noise current-in general. This statement is true for low-frequency amplifiers, where input referred noise voltage and input referred noise current are uncorrelated. In higher-frequency RF-type amplifiers, the input referred noise voltage and input referred noise current have finite correlation such that the optimum input impedance becomes a complex value. That is, the phase angle becomes important.
The previously mentioned Fundamentals of RF and Microwave Noise Figure Measurement states that the "power delivered by a thermal source into an impedance matched load is kTB watts," where
kTB Equation
k = Boltzmann's Constant (1.38*10-23 joules/kelvin)
T = Temperature in kelvin (K)
B = Bandwidth
At a reference source temperature2 of 290 K, the 1 Hz bandwidth thermal noise power delivered to any load impedance from a matched source impedance is 4.002*10-21 watt or -203.98 dBW. This shows that available thermal noise power into a matched load is directly proportional to bandwidth. For example, if the bandwidth doubles from 1 to 2 Hz, the available thermal noise power increases by 3.01 dB to 8.004*10-21 watt or -200.97 dBW.Converting Thermal-Noise Power to dBmV
To get thermal noise power into the world of the more familiar dBmV, we start with a variation of Equation . From that, we can derive the following formula for calculating the open-circuit noise voltage from a resistance or impedance:
k = Boltzmann's Constant (1.38*10-23 joules/kelvin)
T = Temperature in kelvin (K)
B = Bandwidth in Hz
R = Resistance (or impedance) in ohms
Equation allows us to calculate the open-circuit noise voltage over a 4 MHz bandwidth (the noise power bandwidth used for analog National Television System Committee [NTSC] television channel CNR measurements) generated by a 75-ohm resistor at room temperature (68° F, or 293.15 K). Figure 3 shows the open-circuit noise voltage of a 75-ohm resistor equivalent circuit model and a standalone 75-ohm resistor.
en = 2.2033075*10-6
en = 2.2033075 microvolts (µV)Figure 3. 75-Ohm Resistor Open Circuit Noise Voltage
When this 75-ohm impedance noise source is terminated by an equal value resistance-say, connected to the input of a 75-ohm impedance amplifier-the thermal noise is en/2 or 1.10165375 microvolts. This is shown in Figure 4.Figure 4. 75-Ohm Resistor Terminated Noise Voltage
The formula to convert microvolts to dBmV follows:
dBmV = 20log(microvolts/1000) Equation
dBmV = 20*log(1.10165375/1000)
dBmV = 20*log(0.001101653755)
dBmV = 20*(-2.95795)
dBmV = -59.16
That is, 1.1 microvolts = -59.16 dBmV
Now consider Equation . If we plug some now familiar values into that equation, we can validate the previous calculations:
1.62*10-14 watt or -137.91 dBW
To convert dBW to dBmV in a 75-ohm system, add 78.75 to the value in dBW: -137.91 dBW + 78.75 = -59.16 dBmV.
If we use a reference source temperature of 290 K rather than room temperature (293.15 K), the answer is -59.21 dBmV, although most cable network CNR calculations assume room temperature.
If a 75-ohm resistor at room temperature is capable of generating measurable noise power (-59.16 dBmV in a 4 MHz bandwidth), imagine the noise that is generated by active devices such as amplifiers. Indeed, real-world amplifiers do generate noise, which must be accounted for when calculating or measuring CNR.
If we had a perfect amplifier, the RF carrier and noise output levels would be greater-by the amplifier's gain in decibels-than the input carrier and noise levels. For example, a 20 dB gain amplifier with +12 dBmV RF carrier input level would have an output carrier level of +12 dBmV + 20 dB = +32 dBmV. If the input noise level at that same amplifier were -59.16 dBmV, the output noise level would be -59.16 + 20 dB = -39.16 dBmV. In addition, the CNR at the amplifier input and output would be equal: 71.16 dB in this example. Figure 5 illustrates this.Figure 5. Ideal Amplifier
A real-world amplifier behaves more like what is shown in Figure 6.Figure 6. Real-World Amplifier
As expected, the output RF carrier level is 20 dB greater than the input RF carrier level. However, the output noise level is 28 dB greater than the input noise level. Furthermore, the output CNR is 8 dB worse than the input CNR. How can this be? Is the amplifier amplifying noise more than it does the RF carrier? No-the CNR degradation is related to the noise figure of the amplifier.
Fundamentals of RF and Microwave Noise Figure Measurement defines noise figure as the "...degradation in signal-to-noise ratio as the signal passes through the [device under test]." The most commonly accepted definition originated in the 1940s3, which stated that the noise figure (F) of a network is the ratio of the signal-to-noise power ratio at the input to the signal-to-noise power ratio at the output: F = (Si/Ni)/(So/No).
In the previous example, the amplifier output CNR is 8 dB worse than the input CNR, so the amplifier noise figure is 8 dB. The noise figure of an amplifier is independent of input and output levels. Amplifier manufacturers try to reduce the noise figure by optimizing impedance levels and circuit design, and choosing low-noise transistors or hybrids. Typical cable TV amplifier noise figures are in the 7 to 10 dB range.
NOISE POWER BANDWIDTH
As previously mentioned, the noise power bandwidth for analog NTSC television channels is 4 MHz. When calculating or measuring the CNR of a digitally modulated carrier, the noise power bandwidth should be equal to the symbol rate4. For example, the symbol rate of a 6 MHz bandwidth downstream 64-QAM digitally modulated carrier is 5.056941 million symbols per second (Msym/sec), so the noise power bandwidth is 5.056941 MHz. This value, expressed in Hz (5,056,941 Hz), is substituted for B in Equation to calculate the thermal noise level. Tables 1 and 2 summarize noise power bandwidth and thermal noise level for several common digitally modulated carrier bandwidths used in DOCSIS and Euro-DOCSIS® networks.Table 1 Noise Power Bandwidth-Symbol Rate Bandwidth
Channel RF Bandwidth Symbol Rate1 Noise Power Bandwidth Thermal Noise Level at 68°F (75-ohm impedance) 6 MHz 5.056941 Msym/sec 5,056,941 Hz 1.24 microvolts -58.14 dBmV 6 MHz 5.360537 Msym/sec 5,360,537 Hz 1.28 microvolts -57.89 dBmV 8 MHz 6.952 Msym/sec 6,952,000 Hz 1.45 microvolts -56.76 dBmV 200 kHz 160 ksym/sec 160,000 Hz 0.22 microvolt -73.14 dBmV 400 kHz 320 ksym/sec 320,000 Hz 0.31 microvolt -70.13 dBmV 800 kHz 640 ksym/sec 640,000 Hz 0.44 microvolt -67.12 dBmV 1.6 MHz 1,280 ksym/sec 1,280,000 Hz 0.62 microvolt -64.11 dBmV 3.2 MHz 2,560 ksym/sec 2,560,000 Hz 0.88 microvolt -61.10 dBmV 6.4 MHz 5,120 ksym/sec 5,120,000 Hz 1.25 microvolts -58.09 dBmV
1DOCSIS 2.0 uses modulation rate in kHz rather than symbol rate for upstream digitally modulated carriers.
Table 2 Noise Power Bandwidth-Full RF Channel Bandwidth
The CNR of an individual cable TV amplifier can be calculated with the formula:
C/Ni = Nt - NF + I Equation
In this equation:
C/Ni is the CNR of an individual amplifier.
Nt is the thermal noise level from Equation (expressed as a positive number so that the answer will come out positive). Note that for the following example, analog NTSC television channels are assumed, so 59.16 is used.
NF is the amplifier noise figure in dB.
I is the amplifier RF input level in dBmV.
For example, the standalone CNR of an amplifier with 8 dB noise figure and +12 dBmV input is
C/Ni = 59.16 - 8 + 12
C/Ni = 63.16 dB
Figure 7 provides an example of the standalone CNR of a cable amplifier.Figure 7. Amplifier CNR
Federal Communications Commission regulations require that the analog NTSC TV channel CNR in U.S. cable systems be no less than 43 dB at the subscriber terminal. Good engineering practice suggests that the worst-case CNR should be better than the FCC minimum-most modern cable networks are designed to provide end-of-line CNR in the mid to upper 40s.
The assumed channel transmission characteristics in the DOCSIS Radio Frequency Interface Specification include the following minimum CNRs for digitally modulated carriers, regardless of modulation format:
Downstream: 35 dB
Upstream: 25 dB
Calculating downstream CNR in a cable network is generally done by first calculating the CNR of each type of standalone amplifier used in the network, and then calculating the CNR of the longest cascade of amplifiers in the network. The cascaded amplifier CNR is then combined with the headend and fiber link CNR using power addition. This exercise yields the overall CNR from headend to the network end-of-line.
A cascade of identical cable TV amplifiers has a combined CNR of
C/Nt = C/Ni - 10log(N) Equation
For instance, a cascade of six identical amplifiers (shown in Figure 8), each with a standalone CNR of 63.16 dB, has a combined end-of-line CNR of
C/Nt = 63.16 - 10*log(6)
C/Nt = 63.16 - 10*0.7782
C/Nt = 63.16 - 7.78
C/Nt = 55.38 dBNote: We can get even more accurate with a cascaded CNR calculation by accounting for the thermal noise contribution of the coaxial cable between each amplifier, although the overall impact is small. In addition, coaxial cable has frequency-dependent attenuation, which may affect CNR. Most cable distribution network cascade CNR calculations do not consider the effects of the cable-only the active devices.Figure 8. Amplifier Cascade CNR
The following power addition formula can be used for combining individual CNRs:
Using this formula, we can calculate the downstream end-of-line CNR for a cable network when the CNRs of individual components or elements are known. In addition, we can use Equation to combine unlike CNRs. For instance, if we know the headend, fiber link, and coax plant CNRs, we can combine them using Equation to calculate the end-of-line CNR. Assume the headend, fiber link, and coax plant have the following standalone CNRs, as shown in Figure 9:
Headend CNR: 55 dB
Fiber link CNR: 52 dB
Coax plant CNR: 49 dBFigure 9. Cable Network End-of-Line CNR
If the headend CNR is increased from 55 dB to, say, 60 dB, the end-of-line CNR improves slightly from 46.56 dB to 47.01 dB. Indeed, excluding the headend CNR contribution from the calculation-that is, calculating the combined CNR for only the fiber link and coaxial plant-results in an insignificant change to the results, increasing the end-of-line CNR to 47.24 dB:
But what if we want to calculate the CNR of one of the contributing elements, say, the coaxial plant, when only the fiber-link and end-of-line CNRs are known? This is possible, but it requires a slight juggling of the power addition formula (subtraction is used inside the formula brackets rather than addition). Note that the headend CNR has been excluded.
From this, the coax plant CNR contribution is 49 dB, which agrees with the value used in the earlier example. To calculate the fiber link CNR when only the coax plant and end-of-line CNRs are known, we use the following variation of the formula:
The upstream CNR of a cable network is calculated somewhat differently than the downstream CNR. In the forward path, the network branches out from a common point-say, a node. The worst-case downstream CNR is almost always through the longest individual cascade of amplifiers. In the reverse path, the network combines at a common point-the node, hub site, or headend. This results in a reverse funneling effect for system noise and impairments. Instead of calculating the CNR for a given cascade of amplifiers, the upstream CNR accounts for all the reverse amplifiers that are connected to a common point.
If a network design is such that 50 amplifiers are connected to a node, the downstream CNR is the end-of-line value through the longest single cascade of amplifiers, which may be only 6 or 8 (not the entire 50). But going the other direction, noise from all 50 amplifiers combines back at the node, so upstream CNR must account for that. Assuming all 50 reverse amplifiers are identical, we first calculate the CNR of a standalone amplifier using Equation . For example, if the noise figure of each reverse amplifier is 10 dB and the RF input level is +18 dBmV, the CNR of one amplifier is 67.16 dB.
C/Ni = Nt - NF + I
C/Ni = 59.16 - 10 + 18
C/Ni = 67.16
The combined CNR at the upstream input of the node for 50 identical reverse amplifiers can be found using Equation , where N is the total number of reverse amplifiers rather than the longest cascade of amplifiers.
C/Nt = C/Ni - 10log(N)
C/Nt = 67.16 - 10log(50)
C/Nt = 67.16 - 10*log(50)
C/Nt = 67.16 - 10*1.70
C/Nt = 67.16 - 16.99
C/Nt = 50.17
Equation is used to combine the total upstream amplifier CNR (50.17 dB in this example) with the upstream fiber link CNR. If the standalone CNR of the fiber link is 39 dB, the combined CNR at the headend is 38.68 dB.
The power addition formula (Equation ) also can be used to calculate the combined CNR at the upstream input port to a cable modem termination system (CMTS). Assume the following CNRs from four upstream fiber links (including the respective node and coax plant ):
Upstream output from fiber receiver A: 35 dB
Upstream output from fiber receiver B: 32 dB
Upstream output from fiber receiver C: 26 dB
Upstream output from fiber receiver D: 29 dB
Figure 10 illustrates this example.Figure 10. Combining CNRs at CMTS Upstream Input
In this example, four-to-one combining results in a CNR at the CMTS input that does not meet the DOCSIS assumed upstream channel transmission characteristic minimum of 25 dB. We could either migrate to two-to-one combining (combining the 35 dB and 26 dB upstream values yields 25.49 dB CNR, whereas combining the 32 dB and 29 dB upstream values yields 27.24 dB CNR), or troubleshoot the plant to correct-if possible-the two lower CNRs5.
CNR is generally accepted to be a pre-detection measurement-that is, one made at RF. When we carried only analog TV channels on our networks, CNR was understood to be the difference, in decibels, between the amplitude of the visual carrier of a TV channel and the rms amplitude of system noise in some specified bandwidth. Today's cable networks carry a variety of signals in addition to traditional analog channels, including digitally modulated carriers that use high-order modulation formats such as 64- and 256-QAM. Understanding CNR, how it degrades through a cascade of devices, and how it affects all the signals carried on a cable network are critical parts of ensuring reliable network operation.
CNR is a powerful tool for characterizing the health of an RF transmission medium or standalone device. Cable operators have long used CNR as a measure of network performance, along with other parameters such as carrier-to-composite triple beat (CTB), carrier-to-composite second order (CSO), and carrier-to-cross modulation (XMOD) ratios, hum modulation, and broadband sweep response. CNR by itself does not necessarily describe the quality of signals carried on a cable network, although CNR does have an impact on signal quality. Maintaining CNR at or above certain performance thresholds is one way to minimize that impact.
ReferencesANSI/SCTE 17 2001 (formerly IPS TP 216), "Test Procedure for Carrier to Noise (C/N, CCN, CIN, CTN)"DOCSIS 1.0 Radio Frequency Interface Specification-DOCSIS 1.1 Radio Frequency Interface Specification-DOCSIS 2.0 Radio Frequency Interface Specification-Federal Communications Commission Regulations, Part 76Friis, H.T., Noise Figures of Radio Receivers, Proceedings of the IRE, July 1944, pages 419-422Fundamentals of RF and Microwave Noise Figure Measurement, Application Note 57-1, Hewlett-Packard, July 1983Hranac, R., "CNR versus SNR," Communications Technology magazine, March 2003Hranac, R., "Spectrum Analyzer CNR Versus CMTS SNR," Communications Technology magazine, September 2003Hranac, R., "More on CMTS SNR," Communications Technology magazine, October 2003 | <urn:uuid:a04a49a2-9204-40cf-a258-2b8974765d44> | CC-MAIN-2017-04 | http://www.cisco.com/en/US/products/hw/cable/ps2217/products_white_paper0900aecd800fc94c.shtml | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00485-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.853445 | 4,773 | 2.9375 | 3 |
What makes it so easy to fall prey to phishing emails is that they look perfectly legitimate–official-looking emails from reputable companies getting in touch for seemingly reasonable purposes. It could be Apple, PayPal, or your bank. No matter how shrewd you are, it only takes one moment with your guard down to become a victim. A phishing email will typically focus on getting you to give up or verify information. But how do you recognize the difference between a genuine email and a spoof?
Best practice tips to avoid phishing scams:
Email is not encrypted by default. For that reason, it would be extremely rare and irresponsible for a reputable company to ask for private information–passwords, credit card numbers, etc. A real company might alert you using an email, but it would never disclose personal details or ask for them in return over email. It’s highly unlikely that a bank or any other official body will ask you for passwords or personal info.
Don’t trust display names. A display name is the reader-friendly title of a person or company that appears in your inbox. Check the actual email address first. If you don’t like the look of it, don’t open it. Many inboxes will only reveal a display name and nothing else, so always check the email address thoroughly.
In the email address, look for fake domains. The domain is whatever comes after the ‘@’ symbol in the email address. A scammer’s address will have a slight deviation in spelling from the real one – it could be something as simple as a hyphen or a different letter. For instance, a phishing email from PayPal could come from email@example.com instead of firstname.lastname@example.org.
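For those who want to automate that check, the logic is simple enough to script. Below is a minimal Python sketch; the list of expected domains and the sample addresses are made up for the example, and a real mail filter would also have to handle lookalike characters and subdomain tricks.

```python
# Minimal sketch: flag senders whose domain isn't one we actually expect.
# EXPECTED_DOMAINS and the sample addresses are illustrative only.
EXPECTED_DOMAINS = {"paypal.com", "apple.com", "yourbank.com"}

def sender_domain(address: str) -> str:
    """Return the part after the last '@', lowercased."""
    return address.rsplit("@", 1)[-1].lower()

def looks_suspicious(address: str) -> bool:
    # Exact match only: "paypal-billing.com" or "paypa1.com" both fail this test.
    return sender_domain(address) not in EXPECTED_DOMAINS

for addr in ("service@paypal.com", "service@paypal-billing.com"):
    print(addr, "->", "suspicious" if looks_suspicious(addr) else "known domain")
```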
Look for a logo. Counterfeits are usually copied from an authentic site but have been altered or appear in low resolution. Check it against the organization you usually deal with; look at their website and compare. Better still, run it against a previous email from past correspondence if you have any. Go on the company website and see if they have any literature about what to look out for. If clients are often targeted by cyber criminals, they may have a knowledge base of advice, guidance, and warning signs.
If you notice anything suspicious about the links in an email, don’t click on them. It sounds abundantly obvious, but sometimes we’re in such a rush we don’t think before we click on a link. Links can lead to websites that will download malware or spyware to your machine, a mock website that tricks you into entering a username and password, or sites filled with malicious advertisements and trackers.
Fake emails are notorious for bad spelling, if you see some obvious spelling mistakes and appalling grammar, then your inner alarm bell should be ringing.
How do they greet you? Phishing emails are usually sent to a huge list of addresses amassed from a number of sources. As a result, they are usually not personalized. If they address you vaguely and not by name, or the salutation is overly friendly, then there's a good chance the email is a scam. Some phishing emails are more targeted, however, homing in on a specific person or group of people. This is called "spear phishing", and it accounts for the vast majority of successful phishing scams, but only a small fraction of the total phishing emails sent every day.
Keep an eye out for language that’s coated in fearful words and a sense of urgency. Nothing that can be handled via email should be so urgent that it needs your immediate attention. If it is, ring your bank or whoever this email is from and ask them.
Remember to check the authenticity of digital signatures provided if your email client is able to do so. A digital signature is a sort of stamp that often appears as an attachment, such as smime.p7s on Mac OS X and iOS email clients. These attachments are verified with a third party to prove the sender is who they say they are. If the digital signature cannot be verified, you should see some sort of alert. Tread carefully.
Whatever you do, don’t click on attachments. Phishing emails typically use social engineering–a type of psychology used to manipulate people’s behavior–to trick victims into voluntarily giving up information. Attachments, however, often contain viruses, spyware, trojans, and malware. Once installed on a victim’s device, they can spy on the user’s activity or hijack the device.
A recent example of this type of scam occurred when cybercriminals used the identity of the Irish Government to target PayPal users. They used a fake government address–email@example.com, for instance. The emails snuck past spam filters and landed in people's inboxes instead, giving them an impression of authenticity. A dramatic message stated the receiver's account would be limited. Victims were ordered to contact PayPal immediately in order to restore their accounts. Of course, a dodgy link was given for doing so rather than a genuine phone number.
Another example of a phishing email comes from an imitation Royal Bank of Scotland asking for verification of account details in order to update security information.
HMRC has its own handout on examples of phishing emails, showing just how prevalent the problem is for its customers. The handout includes an exhaustive list of potential fake email addresses to look out for.
It’s good to be skeptical
It’s important to remain skeptical at all times. If in doubt, ring the supposed senders of the email and ask them to confirm whether it was them who sent it. Make sure you use the phone number on the official website, not one given in the email. As well as being skeptical, remain vigilant and check the details. Here’s a quick recap of what to look out for:
- Check the subject header – Spelling mistakes, a sense of urgency and fear
- Salutation – Uriah Heep type greetings, or completely impersonal out-of-character ones
- Dodgy links — Don’t open them
- Email addresses – Check them. Do they look legitimate? If not, don’t click.
- Poor spelling and grammar – A big give-away.
If you stay vigilant at all times, hopefully you won’t fall prey to a phishing email. Most webmail clients include free and automatic virus scans for attachments, but you should still invest in good antivirus software.
Forewarned is to be forearmed, and we hope we’ve given you enough information to fend off those phishing emails for the foreseeable future. | <urn:uuid:06498a46-b908-4cc5-bca6-e0b39ac5daa6> | CC-MAIN-2017-04 | https://www.comparitech.com/blog/information-security/how-to-spot-a-fake-spoof-or-phishing-email/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00421-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930548 | 1,397 | 2.75 | 3 |
In Part 2 of CIO.com's three-part series on the technology skills gap in America, Gary Beach suggests that the issue is really an education gap. When it comes to math and science education, is the United States a nation at risk?
The skills gap, as we discussed in the first installment of this series, is a controversial topic. But some, like Adam Davidson, founder of National Public Radio's "Planet Money" program, claim the term is misnamed. For him, a skills gap is really an "education gap."
Based on six years of research I invested in writing my book The U.S. Technology Skills Gap, I agree with Davidson. And with Glen Whitney, the founder of the Museum of Mathematics, the country's only math museum located in New York City, who says math (and science) are subjects Americans "love to hate and believe were done by dead Greek guys 1,000 years ago."
The first cracks in America's education gap could be observed in 1909, according to A Short History of Mathematics in the United States, a book written by David Klein.
In his work, Klein tracks a precipitous 41 percent drop in the percentage of American high school students enrolled in math courses from 1909 through 1934, even at a time of incredible technological innovation in America: Henry Ford's Model T automobile (1908), the radio circuit (1918) and Polaroid photography (1931).
That American kids were not math whizzes should not have come as a surprise. Education was not valued in America at the time. In fact, though the inventions just mentioned were brought to market by Americans, the world's center of technological innovation in the 1930s was not America. It was Germany -- a country where math and science skills were revered. A country that was putting those math and science skills to work building massive war machines in the run-up to World War II under Adolf Hitler and the Nazis.
I often ask CIOs and IT executives this question: Who was/is the most famous scientist in the history of America? More often than not, the reply I get back is "Albert Einstein."
Technically, the answer is correct. Einstein was an American citizen for the last 15 years of his life. But he never was taught in an American classroom. Rather, Einstein was educated in Switzerland and Germany and immigrated to the United States in 1933 as Hitler was about to come to power in Germany.
After America entered the war in December 1941, the United States War Department bluntly awakened America to its math and science problem. Though the American military at that time had more mules than tanks, the new equipment the War Department did have was far more sophisticated than the equipment used at the end of World War I, and it demanded intelligent people to operate complex machines.
The hitch was this: though millions of patriotic men and women lined up to serve, many of them lacked skills in math, science and cognitive thinking. The War Department, therefore, was forced to quickly assess those deficiencies by creating an aptitude/IQ test called the "Army General Classification Test."
Introducing this test to the American public, the War Department claimed it was necessary "to minimize the effects of public schooling."
The goal of the Army General Classification Test was to identify intelligent people to fly the new planes, drive the new tanks, command the new ships and operate the new cannon. One year after the test's deployment, the War Department issued this assessment of the intelligence of the recruits: Nearly 40 percent had the mental capacity of eight-year-olds.
Regardless of their intellectual abilities, these brave men and women fought, and won, World War II. But as they returned home from war, they were confronted with the weak U.S. public school system whose effects the U.S. War Department had sought to "minimize" as the war started: a system where 60 percent of students dropped out of high school before graduation.
And a system that was not prepared for the onslaught of the Baby Boomer Generation, a generation of Americans born from 1946 to 1964. A history-defining generation of Americans who entered the U.S. public school system in 1952 at a staggering pace of two million additional students per year. A generation of Americans that crippled an already ailing school system and infrastructure.
Prior to World War II, the process of teacher certification was arduous. After the war, however, as millions of Baby Boomers created overcrowded classrooms, another huge problem arose: there were not enough teachers to teach them. In fact, there was a shortage of 132,000 K-12 teachers in America.
To address the situation, many states lowered, or abolished entirely, their teacher certification requirements. Teachers who would never have qualified to teach before the war now stood in front of millions of young American students.
Life magazine, in March 1958, ran a four-part series on the state of American education entitled "Crisis In Education" where it compared lives of teenagers in America to those living in Moscow. A comparison that didn't fare well for America.
Just as the Life series was being published, and only months after the Soviets launched Sputnik into space, the U.S. government sent two delegations from the U.S. Office of Education to the Soviet Union, our country's Cold War adversary, to study how its school system functioned. The delegations' conclusion read, "we came back deeply concerned about our poorer schools now suffering neglect with this question: will we Americans work and sacrifice to improve public education in the United States?"
Throughout the 1950s and early 1960s, Baby Boomer students, many of them taught by incompetent, unqualified teachers, didn't learn their math lessons well. Here's proof. American high school students generally take their SAT tests when they are 17. Do the math. The first group of Baby Boomers to turn 17 did so in 1963. And how did they do? Not very well. For 14 consecutive years, from 1963 through 1976, SAT math and verbal scores for Baby Boomers declined year after year.
The long tail of overcrowded classrooms and incompetent teaching from this era remains with America to this day: about 40 percent of the current teacher population in the United States are Baby Boomers. Teachers whose generation was subjected to horrendous education conditions in America. Teachers whose generation did not learn math and science skills well from teachers who shouldn't have been teachers.
(Aside: if you took the SAT test prior to 1995, I can guarantee you that reading The U.S.Technology Skills Gap will add over 100 points to your score. I am not kidding.)
Other cracks were forming in the United States' education gap. One year after the SAT train wreck began in 1963, the First International Mathematics Study, organized by the International Association for the Evaluation of Educational Achievement, was fielded in 1964 among eighth-grade students around the world.
America's students didn't do well. They came in 13th.
Out of 14 countries included in the study.
Seven years later, in 1971, the same organization conducted a science assessment test again among eighth grade students. Different subject. Same result. America's students came in next to last among the 13 countries that participated in the test.
Those results should have shocked America. Instead, they were pushed aside by even more prominent news, as the political assassinations of President Kennedy and Martin Luther King Jr., racial tension in America's cities, the growing involvement of our country in the Vietnam War and Watergate dominated headlines across the United States.
Read this paragraph. After you do, I have two questions for you.
When was it written? And, by whom was it written?
"If an unfriendly foreign power had attempted to impose on America the mediocre educational performance that exists today, we might well have viewed it as an act of war. As it stands, we have allowed this to happen to ourselves."

This paragraph is extracted from A Nation at Risk, a report released by the U.S. Department of Education in April 1983 (http://datacenter.spps.org/uploads/sotw_a_nation_at_risk_1983.pdf). The report was an immediate hit with the media, with headlines like "Education Panel Sees Rising Tide of Mediocrity", "U.S. Education Unsatisfactory" and "Failure in Education" appearing in editorials across the country.
But the findings and recommendations of A Nation at Risk, a report written to warn Americans about how our country was falling behind Japan in key industries like automobiles, electronics, photography and office automation, were not embraced.
Besides generating attention-grabbing headlines, the report did little to stem the tide of mediocre student performance in academic assessment tests administered by the U.S. Department of Education or private organizations like the College Entrance Examination Board, which conducts the well-known SAT test.
Over the next 30 years, from 1983 to 2013, as a litany of results was released from other tests fielded by the International Association for the Evaluation of Educational Achievement (1995, 1999, 2003, 2007 and 2011), the Programme for International Student Assessment (2000, 2003, 2006, 2009) and the more stringent national testing mandated by law through the U.S. Department of Education's National Assessment of Educational Progress and the "No Child Left Behind" initiative, this sobering picture of America's education gap came into clear focus:
The deeper an American student proceeded through the U.S. public education system, the further behind the rest of the world American young people fell, even though the $600 billion the United States spends annually on public education is, by far, the most of any nation in the world.
Here's a story that illustrates why America's education gap threatens our country's future prosperity. Earlier this year I attended a technology conference that included a keynote panel on the topic of the "skills gap."
The panel members included a high-ranking official from the U.S. Department of Labor and several business executives. As the panel began, the government official claimed that despite 12 million unemployed Americans and nearly 4 million open job postings, jobs that cannot be filled because employers say applicants do not have the right skills, "there is no skills gap in America, because if there was, the Department of Labor would be monitoring higher weekly wages (because employers would have to compete with higher salaries for valued workers) and a lengthening of hours worked per week (because employed workers would have to work overtime to do the work of open job positions)."
As the Labor Department official ended his opening comment, one of the business executives on the panel strongly disagreed with his claim.
And then another panel member, this one the CEO of a global manufacturing firm, said, "Mr. Secretary, my firm has just concluded an internal audit of our employment needs in the coming three years. The audit claims that for us to remain globally competitive our company will need to hire 5,000 IT workers. 5,000 workers."
He continued, "My business, the business of manufacturing, is changing rapidly. In fact, it has become a software-driven business. A business where software drives robots, lasers and computers on my manufacturing floor. I can source work anywhere in the world where a talented job candidate has a computer and an Internet connection. My audit concludes we will not be able to find those workers here in America."
America's education gap is real. After 60 years of widening, many, including myself, feel it is rapidly reaching a national tipping point that threatens our nation's future economic growth, the employability of our workers and our national security as the prospect of cyberwar lurks on the horizon.
I have heard this analogy several times: America seems like the proverbial frog in the pot of water, content as the temperature rises slowly, then unable to escape once it reaches 212 degrees.
In 1962, as President Kennedy was encouraging Americans to look to the end of the decade and land a person on the moon, an obscure Japanese physicist by the name of Mitsutomo Yuasa was looking back 450 years. In an essay in a Japanese scientific journal, he concluded since 1540 the world's center of scientific activity has shifted west from one country to another every 80-110 years.
Yuasa placed the mantle of worldwide scientific leadership on the East Coast of America in 1920. Do the math: 1920 plus 80 to 110 years puts the next shift between 2000 and 2030. If Yuasa's theory, often referred to as Yuasa's Phenomenon, is in play again, it claims that between now and 2030 another country, a country to America's west, will take over as world scientific leader.
Some say the next center of world scientific activity by 2030, if Yuasa's theory is to be believed, will be the People's Republic of China. I am not thoroughly convinced it will be. But what I am sure of is this: If America wants to prolong its position as the world's scientific leader it must continue to excel at innovation and invention. Two areas that put a premium on a country's ability to produce a world-class education system.
In 1990, the Commission on the Skills of the American Workforce released a report with a provocative title that read "America's Choice: High Skills or Low Wages?"
Sadly, in my opinion, America has not yet made that choice.
Our nation's education gap continues to widen.
The temperature of the sea of mediocrity that America seems content to swim in is fast approaching 212 degrees. Our nation remains at risk.
This story, "IT Skills Gap Is Really an Education Gap" was originally published by CIO. | <urn:uuid:0501b317-72a9-4da3-8551-02a621b03d16> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2170829/data-center/it-skills-gap-is-really-an-education-gap.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00147-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.966181 | 2,782 | 2.875 | 3 |
Ranking the world's best big data supercomputers
- By Kevin McCaney
- Jun 26, 2013
There’s more to measuring a supercomputer than just going by its raw floating-point processing power, especially as agencies find more uses for them.
One of the emerging uses for high-performance computing (HPC) is in analyzing big data, a process different from the 3D modeling and simulations supercomputers have traditionally been used for. Which supercomputer architectures are best for analyzing data-intensive loads? That’s the idea behind the Graph 500, a project announced three years ago at the International Supercomputing Conference, which this month released its latest rankings.
At the top of the list, as it has been since November 2011, is Lawrence Livermore National Laboratory's Sequoia, a 20 petaflop machine not lacking in raw power — it currently ranks third on the Top 500 list of the world's fastest supercomputers, and was first a year ago — but with a focus on performing analytic calculations on vast stores of data.
The Graph 500 serves as a complement to the Top 500, which uses the Linpack benchmark to measure a computer’s capacity to execute floating-point operations, the mathematical calculations used in 3D physics modeling for things such as simulating hurricanes, the Big Bang or nuclear tests. (China’s Tianhe-2, recently named the fastest computer, achieved 33.9 petaflops, or 33.9 quadrillion floating point operations per second.)
The Graph 500, as the name suggests, instead measures how fast a computer handles the graph-type problems commonly used in cybersecurity, medical informatics and other data-intensive applications, LLNL said in an announcement. By way of explanation, LLNL compared a graph of vertices and edges to a graphical image of Facebook (and its billion users), in which each vertex represents a user and each edge represents a connection between users. The Graph 500 employs an enormous data set to measure how fast a machine can start at one vertex and discover all the others. For the record, Sequoia achieved 15,363 GTEPS, for giga (billions) traversed edges per second.
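To get a feel for what the benchmark times, here is a deliberately tiny, single-machine sketch of a breadth-first search kernel that reports traversed edges per second (TEPS). It is not the official Graph 500 reference code (real runs traverse enormous synthetic graphs distributed across thousands of nodes), but it shows the kind of pointer-chasing work being measured.

```python
from collections import deque
import time

def bfs_teps(adjacency, root):
    """Breadth-first search from root; return traversed edges per second."""
    visited = {root}
    queue = deque([root])
    edges = 0
    start = time.perf_counter()
    while queue:
        vertex = queue.popleft()
        for neighbor in adjacency[vertex]:
            edges += 1                      # every edge examined counts as traversed
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return edges / (time.perf_counter() - start)

# Toy graph: vertex -> list of neighbors (Graph 500 runs use billions of vertices)
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(f"{bfs_teps(graph, 0):,.0f} traversed edges per second")
```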
Sequoia is an IBM Blue Gene/Q system, a type well-suited to data-intensive tasks. Blue Gene/Q systems take up nine of the top 11 spots on the list, including the second-place Mira at Argonne National Laboratory.
"The Graph 500 provides an additional measure of supercomputing performance, a benchmark of growing importance to the high-performance computing community," said Jim Brase, deputy associate director for big data in LLNL’s Computation Directorate. "Sequoia's top Graph 500 ranking reflects the IBM Blue Gene/Q system's capabilities. Using this extraordinary platform Livermore and IBM computer scientists are pushing the boundaries of the data-intensive computing critical to our national security missions."
The list, along with the Green Graph 500, which rates supercomputers’ power efficiency (another growing HPC trend), is managed by international steering committee of more than 50 experts from national labs, academia and industry.
Considering the move toward big data analytics for everything from health care to controversial anti-terrorism monitoring, the importance of the Graph 500 — and the approach to supercomputing architectures and software it represents — is just starting to be recognized.
In fact, despite its name, when the first list appeared in November 2010, there were only eight machines on it. That number has grown to 142 with the latest list. But it probably won’t be long until the Graph 500 name can be taken literally.
Kevin McCaney is a former editor of Defense Systems and GCN. | <urn:uuid:ca31afeb-9967-4668-8b18-f7da9ec3ce88> | CC-MAIN-2017-04 | https://gcn.com/Articles/2013/06/26/Graph-500-ranks-HPC-supercomputers.aspx?Page=1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00045-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923928 | 772 | 2.640625 | 3 |
The most memorable scene in the 1986 movie Star Trek IV: The Voyage Home involved a time-traveling Scotty trying to use a computer from the 1980s. He walks up to a Macintosh Plus and says: "Computer!" When the computer doesn't respond, it occurs to Dr. McCoy that, because this is a primitive computer from the past, perhaps it needs a close-up microphone. So he hands the mouse to Scotty, who speaks his voice command into the mouse.
The scene is as prescient as it is funny.
What Star Trek always got right was that the user interface of the future was conversational.
A conversational UI can take place as a back-and-forth text chat, email or spoken conversation. The difference between text bots and the virtual assistants you talk to is slight. By simply adding off-the-shelf speech recognition on one end and text-to-speech on the other, you can turn any text bot into a speech assistant.
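A rough sketch of that glue is below, with the speech-recognition, bot and text-to-speech pieces left as stand-ins for whichever off-the-shelf services you choose; the function names here are hypothetical, not a real vendor API.

```python
def speech_to_text(audio: bytes) -> str:
    """Stand-in for an off-the-shelf speech-recognition service."""
    raise NotImplementedError

def text_bot_reply(message: str) -> str:
    """Stand-in for the existing text bot: message in, reply out."""
    raise NotImplementedError

def text_to_speech(text: str) -> bytes:
    """Stand-in for a text-to-speech engine."""
    raise NotImplementedError

def voice_assistant_turn(audio: bytes) -> bytes:
    """One spoken turn: transcribe, hand to the text bot, speak the reply."""
    transcript = speech_to_text(audio)
    reply = text_bot_reply(transcript)
    return text_to_speech(reply)
```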
In fact, bots are highly portable, and the companies that make them don't care where they show up.
Dennis Mortensen, the CEO and founder of a New York startup called x.ai, which makes the Amy virtual assistant for scheduling meetings, told me that Amy could in the future be made available via Amazon's Echo, Apple's Siri or through some other channel. It doesn't matter. While today Amy is an email-based virtual assistant that lives in the cloud, it could eventually converse with users by phone, by text, in a social network or in any other space where a conversation can take place.
Mortensen also said recently that he believes bots would soon replace apps.
Rise of the conversational UI
The idea that chat bots and virtual assistants that we can talk to would replace apps is as alien to us now as today's computing scene would have appeared 10 years ago.
For example, if I told you in 2006 that in 10 years the mobile web would be faster and more feature rich than the version of the web you get on the desktop, it would have made no sense. (Refresher: This is what phones were like in 2006.) If I told you in 2006 that Apple would be the world's leading phone maker, most valuable company and was working on a car; that Google would be delivering Internet access via balloon; that Facebook's CEO was the world's fourth richest person; that stringing together little cartoon icons would become a major form of social interaction; or that everyone can stream live video globally but it's too banal for most people to bother with -- you would have thought I was nuts.
Likewise, a few years from now, we'll use computers and the Internet in ways that make no sense today. The dominance of conversational UIs sounds less appealing than how we use computers today. But that's because we can't picture what that will be like.
Lucas Ives, who works as head of conversation engineering at ToyTalk (which makes the conversation engine for Hello Barbie), told me that "in five or 10 years you'll be walking through your kitchen, and your refrigerator will say: 'Your milk is going to go bad in three days, do you want me to order some more for you?' "
Ives' example is a perfect illustration of three ways the conversational UI will change our lives. First, the interface is a conversation. Second, the conversation is with the refrigerator -- the Internet of Things will turn everything into an Internet-connected computer. And third, the refrigerator can start the conversation. Pre-emptive interaction today is a novelty, found mainly in Google Now. In the future, many objects, devices and apps will initiate conversations with us.
In fact, the conversational UI trend has already begun.
The conversational UI has got everybody talking
Quartz this month introduced an app that gives you the news in a conversational UI. It simply chats with you, as if you were getting the news from a friend via text. It tells you a little bit about a new story, then if you ask to hear more, it will go into more detail, complete with photos, links and, eventually, ads.
While the possible user input is narrow (ultimately you tell it that you want to hear more about the current story or you want to move on to the next story), the experience is just like texting with a friend, where the subject happens to be the news and the friend happens to be a fast-typing journalist who's banging out news stories just for you.
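Stripped to its essentials, that interaction loop is tiny. Here is a toy sketch of the two-intent pattern in Python; the stories are invented for the example, and the real app of course adds photos, links and ads.

```python
# Toy two-intent conversational news UI: 'more' expands the current story,
# 'next' moves on. The story data is made up for illustration.
stories = [
    {"teaser": "Launch delayed.", "detail": "Weather pushed the launch to Friday."},
    {"teaser": "Rates unchanged.", "detail": "The central bank held rates steady."},
]

for story in stories:
    print("BOT:", story["teaser"], "(reply 'more' or 'next')")
    while True:
        choice = input("YOU: ").strip().lower()
        if choice == "more":
            print("BOT:", story["detail"])
        elif choice == "next":
            break
        else:
            print("BOT: I only understand 'more' or 'next'.")
print("BOT: That's all the news for now.")
```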
At Mobile World Congress last week, Sony unveiled a range of products, including something called Xperia Ear, an "intelligent earbud," and Xperia Agent, an Amazon Echo-like virtual assistant appliance. Both show that Sony is preparing for the conversational UI future.
Sony's Xperia Ear video is a perfect illustration of the subtle shift to conversational interfaces. In the video, the users are doing normal things like texting, making calls and getting directions. But instead of doing these things directly, they're asking a virtual assistant to do it. And the assistant responds with the information.
One of the surprise darlings of the Los Angeles Auto Show in November was a Silicon Valley startup called Capio, which makes a conversational UI on a chip for cars (and other appliances). If you visit their website and watch the short video, you'll get a sense of what the conversational UI of the future will be like inside a car.
These new products and services are part of a much larger trend toward the conversational UI.
What parents got wrong about Barbie
Hello Barbie works like Siri. You talk to the doll, the doll talks back. And like Siri, children are actually talking to software in a remote data center. Their voices are recorded. The recording is sent and processed. A Barbie-like response is constructed and sent back down through the Internet and home Wi-Fi network to the doll, which replies.
The initial press on this product, which shipped in November, skewed negative. As an Internet-of-Things appliance connected to a home Wi-Fi network, Hello Barbie was called insecure. Others said that a product that records the voices of children is creepy.
The critics are completely wrong on both counts. Hello Barbie should serve as a model for how IoT devices should handle security.
Late last year, a security company called Blue Box discovered potential vulnerabilities in Hello Barbie and the companion app, but later acknowledged that due to ToyTalk's "fast response time, a number of the issues have already been resolved." No doll has been hacked. And the company launched a security bug bounty program, paying security researchers for finding any future problems.
Few IoT device makers are acting this responsibly on security.
Regarding the discomfort people feel about children interacting with a virtual assistant bot housed on a remote server, I'll just come right out and say it: Get used to it, people. This is the future.
Talking to a virtual assistant and having the virtual assistant talk back is what using a computer will be like in just a few years.
It's also worth pointing out that actively engaging a virtual personality in child-directed conversation is probably better for kids than passively watching TV for hours, or using any screen-related technology, for that matter.
So if you want to prepare your daughter for the future of technology, get her a Hello Barbie. Because Barbie works today like everything will work tomorrow.
This story, "How a Barbie doll prepares your child for the future" was originally published by Computerworld. | <urn:uuid:ff97bdbe-8835-4cbe-a419-0b29630187e8> | CC-MAIN-2017-04 | http://www.cio.com/article/3038220/emerging-technology/how-a-barbie-doll-prepares-your-child-for-the-future.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00074-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963396 | 1,563 | 2.515625 | 3 |
Apache Hadoop: Big data's big player
- By Patrick Marshall
- Feb 07, 2012
If there is a key technology that enabled the analysis of big data, it is the introduction of Apache Hadoop.
Hadoop is software that allows for the distributed processing of large datasets across clusters of computers. It is designed to scale up from single servers to thousands of machines, with computation and storage of pieces of the dataset taking place on each local machine.
The framework was originally developed by Doug Cutting (then at Yahoo and now chairman of the board of the Apache Software Foundation), who named it after his son's toy elephant. It is based on Google's MapReduce programming model, which splits a job into small "map" tasks that run on the machines holding the data and "reduce" tasks that combine their results.
The Hadoop framework includes several modules, including Hadoop Common, the common utilities that support other Hadoop projects; Hadoop Distributed File System, a distributed file system that provides high-throughput access to application data; and Hadoop MapReduce, a software framework for distributed processing of large datasets on compute clusters.
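To make the map/reduce split concrete, here is the classic word-count example written as two small Python scripts in the style Hadoop Streaming expects: a mapper and a reducer that read stdin and write stdout. It is a sketch of the programming model, not production code.

```python
# --- mapper.py -- emit "word<TAB>1" for every word in the input split ---
import sys
for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")

# --- reducer.py -- input arrives grouped/sorted by key; sum counts per word ---
import sys
current_word, running_total = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").rsplit("\t", 1)
    if word != current_word:
        if current_word is not None:
            print(f"{current_word}\t{running_total}")
        current_word, running_total = word, 0
    running_total += int(count)
if current_word is not None:
    print(f"{current_word}\t{running_total}")
```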
While the public sector is just beginning to set up projects based on Hadoop, more than 100 major private-sector big data applications have been built on the Apache Hadoop framework. Some of the more notable include Facebook, eBay, Twitter, Yahoo and the New York Times.
Patrick Marshall is a freelance technology writer for GCN. | <urn:uuid:1e35a9b5-53f1-424c-88a5-9009cc8fdd2b> | CC-MAIN-2017-04 | https://gcn.com/articles/2012/02/06/feature-1-apache-hadoop-sidebar.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00468-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932689 | 315 | 2.640625 | 3 |
Every government agency has to deal with managing identity, and protecting sensitive data. From passwords to employee information to agency information, securing information should be a top priority.
According to John Bennett of Oracle, 84 percent of North American enterprises suffered a security breach in the past year, which is a 17 percent increase over three years. What can be done about keeping information secure?
The most important thing in identity management is planning security policies -- having a specific plan for access (who can access what, and when and how). Without such a plan, the agency is setting itself up for a security breach, and that can be both costly and embarrassing.
Bennett uses this simile to help explain security: think of identity management like a Ding-Dong. The high-calorie (but admittedly tasty) treat is a creamy filling, covered in chocolate cake, sealed in a foil wrapper. The foil is the network perimeter security; the chocolate is the bulk of the agency's information, important but of little value to hackers; the creamy filling is the sensitive data identity thieves covet most.
To protect the sensitive "creamy filling," encryption is the key. If sensitive information is not encrypted, it can be visible to hackers using nothing more than a hex editor. Information such as SSNs, health history or credit card numbers could all be there for the taking. If, however, it is encrypted, such information is safe from would-be identity thieves.
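As one illustration of what encrypting the "creamy filling" can look like, the sketch below uses the open-source cryptography library's Fernet recipe to encrypt a single sensitive field before it is stored. The sample SSN is fake, and key management (vaults, rotation, hardware security modules) is the genuinely hard part left out of this example.

```python
from cryptography.fernet import Fernet

# In practice the key comes from a secure key store, never from source code.
key = Fernet.generate_key()
cipher = Fernet(key)

ssn = b"123-45-6789"                      # fake sample data
ciphertext = cipher.encrypt(ssn)          # safe to persist; useless to a hex editor
print(cipher.decrypt(ciphertext))         # b'123-45-6789'
```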
The White House on Tuesday welcomed some of America’s most innovative students for the fourth-ever White House Science Fair, which this year emphasized the specific contributions of girls and young women who are excelling in science, technology, engineering and math.
Among those highlighted at the conference was Elana Simon, 18, who was diagnosed with a rare liver cancer at age 12, and her work with one of her surgeons to find a common genetic mutation across samples of other patients coping with the same cancer.
Cassandra Baquero, 13, Caitlin Gonzolez, 12 and Janessa Leija, 11, of Los Fresnos, Texas, also showcased their work as part of an all-girl team of app builders who built “Hello Navi,” an app that gives verbal directions to help their visually-impaired classmates navigate unfamiliar spaces based on measurements of a user’s stride and digital building blueprints. Girl Scout Troop 2612 of Baltimore, Md., demonstrated their computer program designed to automatically retract a bridge when flood conditions are detected by a motion sensor embedded in the river bed.
In remarks after viewing this year’s science projects, President Obama cited statistics that just one in five Bachelor’s degrees in engineering and computer science are earned by women, while fewer than three in 10 workers in science and engineering fields are women.
“That means we have half of our team we’re not even putting on the field,” Obama said. “We have to change those numbers. These are the fields of the future.”
Obama announced new efforts to invest in STEM education, including a $35 million grant competition by the Education Department to help train and prepare STEM teachers in support of the President’s goal to train 100,000 excellent STEM teachers.
The president also announced an expanded effort to provide STEM learning opportunities to more than 18,000 low-income students this summer through the STEM AmeriCorps program, which launched at the 2013 White House Science Fair. The summer program will bring together AmeriCorps members with community groups, educational institutions and corporate sponsors to help students learn about STEM – from building robots to writing code for the International Space Station to participating in “scientist-for-a-day” programs to explore various careers.
Seven cities across the country also will launch STEM mentoring efforts through the US2020 City Competition, sponsored by Cisco, which challenges cities to develop innovative models for scaling STEM mentorship for young students, particularly girls, minorities and low-income families. The goal of the program is to mobilize 1 million STEM mentors annually by the year 2020.
“Last week, we had the Superbowl champion Seattle Seahawks here, and that was cool,” Obama said. “But I believe what’s being done by these young people is even more important. As a society, we have to celebrate outstanding work by young people in science at least as much as we do Superbowl winners.” | <urn:uuid:af0f1d06-4cb9-48e7-a15f-8fc8470337de> | CC-MAIN-2017-04 | http://www.nextgov.com/cio-briefing/wired-workplace/2014/05/white-house-spotlights-contributions-girls-stem/85234/?oref=ng-dropdown | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00496-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960108 | 617 | 2.84375 | 3 |
For much of its history, Google has been a widely admired company that could seemingly do no wrong. But in recent years, some observers have cast a suspicious eye at the search giant. From censoring content in China to accusations of invading user privacy at the behest of the U.S. government, the company with the motto "Do no evil" has lost some of its luster.
If its image has been tarnished, much of the blame stems from Google's ability to intimately track users' Web browsing habits. Though Google is by far the most popular site for searching the Web, users are growing more uncomfortable with the notion they may be under the lens of Google's microscope.
But what if Google could use its considerable power for good? The company will tell you that's what it's always done. If you want proof, look no further than Flu Trends, a remarkably simple service Google devised to help the nation's health officials get an upper hand during flu season.
If advertisers can determine your shopping trends based on Web searches, health officials should be able to monitor health trends the same way. That's the underlying, albeit simplified, rationale behind Detecting influenza epidemics using search engine query data, a paper that appeared in the November 2008 issue of Nature. The authors - Jeremy Ginsberg, Matthew H. Mohebbi, Rajan S. Patel, Mark S. Smolinski and Larry Brilliant of Google and Lynnette Brammer of the Centers for Disease Control and Prevention (CDC) - analyzed years of search terms and concluded they could develop a model to quickly identify influenza outbreaks.
"By processing hundreds of billions of individual searches from five years of Google Web search logs, our system generates more comprehensive models for use in influenza surveillance, with regional and state-level estimates of ILI (influenza-like illness) activity in the United States," they wrote.
The authors gathered historical logs of Google search queries from 2003 to 2008. From that data they developed a formula to track the occurrence of common search queries amid the 50 million most common searches in the U.S. during that time. The formula was then further refined to narrow the query tracking to ILI-related searches. The resulting search trends were then compared to the data gathered by the CDC across its nine public health regions. The CDC's influenza-surveillance data is gathered by 1,500 doctors who report to the CDC on 16 million annual physician visits concerning ILI - a process that can take several weeks. It turned out that the researchers' Web query analysis produced trends similar to those discovered by the CDC.
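The kind of model the authors describe can be sketched in a few lines of Python. The weekly figures below are invented and the fit is simplified to ordinary least squares on logit-transformed fractions, in the general spirit of the paper's approach rather than a reproduction of it:

```python
import numpy as np

def logit(x):
    return np.log(x / (1.0 - x))

def inv_logit(z):
    return 1.0 / (1.0 + np.exp(-z))

# Invented weekly data: fraction of all searches that are ILI-related (q)
# and the CDC-reported fraction of physician visits for ILI (p).
q = np.array([0.004, 0.006, 0.011, 0.018, 0.025, 0.019, 0.012])
p = np.array([0.010, 0.014, 0.024, 0.038, 0.051, 0.040, 0.027])

# Fit logit(p) = b0 + b1 * logit(q) by least squares.
X = np.column_stack([np.ones_like(q), logit(q)])
b0, b1 = np.linalg.lstsq(X, logit(p), rcond=None)[0]

# Estimate ILI activity for a new week straight from query data,
# available one to two weeks before surveillance reports.
q_new = 0.016
print("Estimated ILI visit fraction:", inv_logit(b0 + b1 * logit(q_new)))
```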
"Google Web search queries can be used to estimate ILI percentages accurately in each of the nine public health regions of the United States," according to the authors. "Because search queries can be processed quickly, the resulting ILI estimates were consistently one to two weeks ahead of CDC ILI surveillance reports. The early detection provided by this approach may become an important line of defense against future influenza epidemics in the United States, and perhaps eventually in international settings."
The authors are quick to note, however, that their model is not intended to replace the sort of on-the-ground surveillance conducted by the CDC. Instead, Google Flu Trends is designed to help public health officials spot an outbreak before it starts. "This system is not designed to be a replacement for traditional surveillance networks or supplant the need for laboratory-based diagnoses and surveillance. Notable increases in ILI-related search activity may indicate a need for public health inquiry to identify the pathogen or pathogens involved. Demographic data, often provided by traditional surveillance, cannot be obtained using search queries," the authors said.
"In the event that a deadly strain of influenza emerges, accurate and early detection of ILI percentages may enable
public health officials to mount a more effective early response."
They also point out that, during the process of tracking queries, no personal information is recorded, nor are user IP addresses or users' specific physical locations.
You can see just how accurate the gathered data is at www.google.org/flutrends. With the formulas in place, Google engineers can show flu trends just as easily as they show webmasters their sites' analytics. When the data is charted, the results are strikingly similar to those found by the CDC's surveillance system. In fact, from 2004 through 2008, the flu activity reported by Google and the CDC are almost identical. The Google numbers skew slightly higher, but that can be attributed to people searching the Web for flu information when they don't actually have the flu.
So what search terms give hints there may be a flu outbreak on the way? According to Google spokeswoman Katy Bacon, it could be something as mundane as "thermometer." When taken together, these search terms can give vital, advance notice to health officials.
"Maybe you're [searching] for where you can buy a thermometer or what the best chest congestion remedy is, or things like that," she explained. "By tracking the popularity of certain Web search queries, we can accurately estimate the level of flu in each state in near real time. The reason this is important is early detection is critical to helping health officials respond quickly. That's why the CDC tracks the disease. But Flu Trends can help inform the public and officials about flu levels one or two weeks before the traditional surveillance system."
With Flu Trends helping to inform the public about influenza, the obvious question is whether these sorts of analytics can be applied to fight other outbreaks.
"We have a product called Google Trends that lets you track the popularity of specific search queries," Bacon said. "I know the team is excited about where they can go next. But for right now they're just focused on making sure Flu Trends continues to work."
The team of researchers who gathered the data for Flu Trends wants to expand the capability to regions with inadequate medical care. They believe the tool can be particularly useful in developing nations.
"We hope to extend this system to enhance global influenza surveillance, especially in areas that currently lack the necessary resources, including laboratory diagnostic capacity." One problem, of course, is that many areas that could most benefit from this data are those that have limited Internet access. But as Internet access continues to spread, Google is hoping Flu Trends will help ensure the flu doesn't. | <urn:uuid:01e6cebd-ac68-408f-8468-3c4509894845> | CC-MAIN-2017-04 | http://www.govtech.com/health/Google-Flu-Trends-Gives-Early-Warning.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00128-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953373 | 1,288 | 2.625 | 3 |
Researchers enable computers to teach themselves common sense
While some people may think they're getting dumbed down as they scroll through images of cats playing the piano or dogs playing in the snow, one computer is doing the same and getting smarter and smarter.
A computer cluster running the so-called Never Ending Image Learner at Carnegie Mellon University runs 24 hours a day, 7 days a week, searching the Internet for images, studying them on its own and building a visual database. The process, scientists say, is giving the computer an increasing amount of common sense.
"Images are the best way to learn visual properties," said Abhinav Gupta, assistant research professor in Carnegie Mellon's Robotics Institute. "Images also include a lot of common sense information about the world. People learn this by themselves and, with [this program], we hope that computers will do so as well."
The computers have been running the program since late July, analyzing some three million images. The system has identified 1,500 types of objects in half a million images and 1,200 types of scenes in hundreds of thousands of images, according to the university.
The program has connected the dots to learn 2,500 associations from thousands of instances.
Thanks to advances in computer vision that enable software to identify and label objects found in images and recognize colors, materials and positioning, the Carnegie Mellon cluster is better understanding the visual world with each image it analyzes.
The program also is set up to enable a computer to make common sense associations, like buildings are vertical instead of lying on their sides, people eat food, and cars are found on roads. All the things that people take for granted, the computers now are learning without being told.
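NEIL's actual pipeline is far more sophisticated, but the basic idea of distilling relationships out of many labeled images can be illustrated with a simple co-occurrence count; the labels and threshold below are hypothetical:

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-image label sets produced by object/scene classifiers.
images = [
    {"car", "road", "building"},
    {"car", "road", "traffic light"},
    {"person", "food", "table"},
    {"person", "food", "kitchen"},
    {"building", "road", "person"},
]

label_counts = Counter()
pair_counts = Counter()
for labels in images:
    label_counts.update(labels)
    pair_counts.update(frozenset(p) for p in combinations(sorted(labels), 2))

# Keep label pairs seen together in most of the images containing either
# label -- a crude stand-in for associations like "cars are found on roads".
for pair, n in pair_counts.items():
    a, b = sorted(pair)
    if n > 1 and n / min(label_counts[a], label_counts[b]) >= 0.5:
        print(f"{a} <-> {b}  (together in {n} images)")
```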
"People don't always know how or what to teach computers," said Abhinav Shrivastava, a robotics Ph.D. student at CMU and a lead researcher on the program. "But humans are good at telling computers when they are wrong."
He noted, for instance, that a human might need to tell the computer that pink isn't just the name of a singer but also is the name of a color.
While previous computer scientists have tried to "teach" computers about different real-world associations, compiling structured data for them, the job has always been far too vast to tackle successfully. CMU noted that Facebook alone has more than 200 billion images.
The only way for computers to scan enough images to understand the visual world is to let them do it on their own.
"What we have learned in the last five to 10 years of computer vision research is that the more data you have, the better computer vision becomes," Gupta said.
CMU's computer learning program is supported by Google and the Office of Naval Research.
This story, "Researchers enable computers to teach themselves common sense" was originally published by Computerworld. | <urn:uuid:70209942-dcf3-4be0-ba23-c9ff1c7cd22e> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2172282/data-center/researchers-enable-computers-to-teach-themselves-common-sense.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00340-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957617 | 658 | 3.75 | 4 |
In addition, both traditional desktops and notebooks are plagued with overheating problems and unexpected reboot issues. Inopportune reboots can occur as power fluctuates in a system designed more to provide power for a car's lights, wiper motors and fans than the smooth, constant voltage required for typical computing.
Car PCs Require Special Components
A reliable in-car PC needs to use small yet rugged components whose size and shape allow proper placement, whether on a crowded dashboard, in or under a dashboard, or under the car's seats. The PC needs to be out of the way of the car's critical instruments.
A car PC must also be designed to be a real workhorse. It will need to handle heat and cold, exposure to sunlight, rapid changes in temperature and humidity, along with shocks and vibrations from the road. The system must also survive as an add-on to a power system that is frequently switched off, often for long periods, and that is prone to deep cycle discharging.
Automotive electrical systems—unlike the clean, steady household current from a wall receptacle—operate off a DC (direct current) storage system that is constantly changing. First, it discharges to deliver power to 'turn-over' a cold car engine. Then it charges the battery back to capacity as the car is driven. Car PCs must operate from power fully conditioned to remove the risk of low voltage during cranking and carefully regulated to prevent damage to PC components during charging.
In summary, an in-car PC must work consistently and meet the challenges of in-vehicle operability. It must be able to take a beating, have short boot times, offer power-saving features, and keep its applications easily and safely accessible. The system must be mountable where it is easily visible but doesn't block the driver's line of vision or the path of airbags. The system also must be fastened securely so it won't come loose in a minor accident.
Schools face tough decisions when preparing for online exams. The dilemma typically lies in these three areas:
- Technology Readiness vs Student Readiness
- Standardized vs Personalized Learning
- Exam Focused vs Learning Focused
But does this have to be an either-or? Why can’t it be an AND? With today’s technology, AND is an option.
Whereas most online exams only provide a one-size-fits-all approach to assessments, new technology and software make possible more types of problem-solving and higher-order thinking questions that go beyond multiple choice and help students further their understanding. Assessing standards through the use of technology can, and inevitably will, eliminate the constraints of its paper-and-pencil predecessor.
Prepare students by using enhanced question types that help put the learning experience first. This helps make exam day like any other day for the teachers and the students. | <urn:uuid:b0fefbce-5ecf-4530-9150-75f3691b8fb9> | CC-MAIN-2017-04 | https://www.jamf.com/blog/eliminate-either-or-decisions-when-preparing-for-online-exams/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00212-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.917594 | 190 | 3.078125 | 3 |
Why are people so willing to give away their personal information to complete strangers?
It's because humans want to share information. And in fact, they share information a lot more freely than other "things" such as goods and services.
Which of these are you most likely to provide without thinking much about it?
• To give a stranger directions to the bus stop (information).
• To take a stranger to the bus stop (service).
• To give a stranger bus fare (goods).
If you're like most people, you'll freely give directions, but you'll resist giving away your money.
And that's how civil human society works: we share, and we especially share information, because it costs us little and it helps society function more efficiently.
This idea was expressed by Clay Shirky at Austin's South by Southwest (SXSW) in 2010. Shirky has given multiple TED Talks and is widely respected for his thoughts on technology's effects on society. If you're interested in the subject of privacy, you should really watch Shirky's 2008 Web 2.0 Expo NY presentation: It's Not Information Overload. It's Filter Failure.
During the presentation, Shirky makes the following observation: privacy is a way of managing information flow. According to Shirky, the big question we're facing about privacy revolves around the fact that we aren't moving from one engineered system to another with different characteristics… but that we're moving from an evolved system to an engineered system.
"Managing our privacy" isn't a natural act.
What maintained our privacy in the past was that it was generally inconvenient to spy on people. Platforms such as Facebook present a new and unique problem, and new solutions (filters) are needed, rather than retooling the old existing filters.
IBM‘s question-answering computer platform will analyze aerospace research and data to help NASA answer inquiries on spaceflight science and support decision-making processes during air travel, Space.com reported Thursday.
Sarah Lewin writes NASA’s Langley Research Center will utilize the Watson computer to help researchers sort through large amounts of data generated by aerospace research.
“The idea here is to have a Watson system that can be a research development adviser to people who work in the aerospace fields,” IBM engineer Chris Codella told Space.com in an interview.
“There’s so much data out there that consists of unstructured text that usually only humans can make sense of, but the challenge is that there’s too much of it for any human being to read.”
Codella added IBM has discussed other uses of Watson such as the diagnosis of astronauts’ illnesses in flight and automated flight operations. | <urn:uuid:53cc99a0-fbe1-4ffe-bf79-d5b8c79fdc6a> | CC-MAIN-2017-04 | https://blog.executivebiz.com/2016/12/space-com-nasa-to-utilize-ibm-developed-watson-computer-for-spaceflight-science-inquiries/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00514-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938827 | 195 | 3.125 | 3 |
On Friday, the Government Accountability Office released a report analyzing preparedness at federal agencies for Internet Protocol Version 6 (IPv6). IPv6 is a new standard for giving computers addresses on a network or the Internet that dramatically increases the number of available addresses, increases flexibility, and enhances security.
The 4.3 billion addresses supported by the current Internet protocol, IPv4, are not expected to be sufficient for the worldwide growth of the Internet into the future. The Internet protocol provides the addressing mechanism that defines how information such as text, voice and video is moved across the Internet. With IPv6, the number of available addresses grows to roughly 3.4 × 10^38 (2 to the 128th power), providing ample room for growth.
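The gap between the two address spaces is easy to verify with a quick calculation:

```python
# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

print(f"IPv4 addresses: {ipv4_space:,}")    # 4,294,967,296 (~4.3 billion)
print(f"IPv6 addresses: {ipv6_space:.3e}")  # ~3.403e+38
print(f"IPv6 offers {ipv6_space // ipv4_space:.2e} addresses "
      f"for every IPv4 address")            # ~7.92e+28
```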
The GAO urged agency IT staff to be aware that IPv6-compatible hardware is already installed in agency networks. The GAO recommended making an inventory of IPv6-compatible hardware on agency networks, assessing the security risks of the new standard and taking other measures involved with developing a strategic plan around the implementation of IPv6. The report also warned that IPv6 is still vulnerable to manipulation, meaning an attacker could abuse features of the new standard to allow otherwise unauthorized network traffic or make agency computers accessible directly from the Internet.
Earlier this month, the National Institute of Standards and Technology issued a report that came to similar conclusions about both public and private sector voice-over-IP networks. Challenges outlined in that report include the need to protect both voice and data on a VoIP network due to differences in how the data travels over the Internet instead of through traditional phone networks and the need to protect against denial of service attacks that can crash a VoIP device or a device running VoIP software.
Government and city planning agencies are extending their services to citizens and communities through information technologies such as the Internet, wide area networks, and mobile computing. The digital relationship of city agencies with citizens and businesses has enabled smart cities and connected communities. Governance, Risk, and Compliance (GRC) solutions can help authorities manage smart city policies, and ensure that the necessary controls and risk management procedures are in place for governance.
A Practical Approach to Enable Smart Cities
Smart cities will be interconnected with government and private subsystems such as transportation, healthcare, safety and security, education, utilities, and real estate. These systems create an infrastructure for energy policy management, healthcare governance, Automated Demand Response (ADR), remote monitoring, and Automated Fault Detection and Diagnostics (AFDD).
City planning officials can adopt an integrated GRC framework on top of their city management system as given below:
Complaints Management: Smart cities will enable citizens to register their complaints (related to transportation, healthcare, security, utilities, and others) with city/government officials through a complaints management system. The system should support information technology channels such as mobile SMS, web based applications, telephone, and IVR systems to register complaints and route them to appropriate government officials or departments. Citizens should be able to use the system to request better citizen services from their elected government officials or city planning authorities, and track their complaints from initiation to closure through a closed loop process.
Energy Policy Management: Cities consume massive amounts of energy in commercial and residential buildings, rail network and transit systems, industrial and consumer appliances, etc. They must use energy policy management systems to enhance reliability, promote economic growth, and address environmental concerns. An energy policy management system should allow city officials to measure, plan, forecast, and implement energy policies for better citizen services. It should also help promote incentives for peak load management technologies and benefits of high-performance building designs.
Intelligent Buildings: The building policy management system should support governance of buildings, and provide capabilities to analyze energy demand trends for building components, predict future energy requirements, and perform energy audits based on environment management systems or regulatory requirements
Healthcare Governance: The healthcare industry involves stakeholders such as health regulators (Dept of Health, FDA, MHRA), healthcare providers, hospitals, pharmaceutical, and medical devices companies. City planning authorities can provide access to the best healthcare facilities through an integrated system which tracks pharmaceutical drug quality and medical device safety incidents. Government officials can use the governance system to ensure that life sciences companies follow 21 CFR regulations and cGMP quality processes such as deviations, corrective actions, and change control.
City Disaster/Emergency Management Governance: Smart cities manage incidents (loss of life and property, natural hazards, and acts of terrorism) through policies for emergency preparedness, protection, response, recovery, and mitigation. The governance system can help city officials in aggregating loss information for multiple incidents, triaging incidents, triggering investigative and remedial actions, calculating gross loss of information, and reporting risk exposures to government agencies.
Transportation System Governance: Smart cities can enhance citizen experience in commuting and transportation through effective governance of operational policies for route optimization, yield/revenue management, and compliance with regulations such as FAA (for airlines).
Financial Policy Governance: The financial oversight of cities requires governance systems to manage financial policies, and provide capabilities to recommend norms and procedures for stronger internal controls. The system should allow city financial authorities to identify, measure, mitigate, monitor and communicate key risk exposures, as well as manage financial policy compliance.
- Simplifies the delivery of services to citizens
- Ensures less corruption, increased transparency, and greater convenience
- Enables governance of and compliance with city and federal government policies
- Eliminates layers involved in interacting with city and government agencies
- Enables citizens and businesses to easily find information and timely services from city agencies
- Simplifies government agencies' business processes, and reduces costs
- Shares GRC and policy management best practices and frameworks across city planning, execution, and management functions | <urn:uuid:ea61dbb3-154b-47a2-bd5a-71f8b5f0f4d2> | CC-MAIN-2017-04 | http://www.metricstream.com/insights/smart-cities.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00148-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930555 | 819 | 2.8125 | 3 |
Ethernet II Frame:
In preparation of your CCNA exam, we want to make sure we cover the various concepts that we could see on your Cisco CCNA exam. So to assist you, below we will discuss the CCNA concept of Ethernet Technologies. So even though it may be a difficult concept and confusing at first, keep at it as this is the first step in obtaining your Cisco certification!
Preamble – Synchronization. They give components in the network time to detect the presence of a signal and read the signal before the frame data arrives.
Start of Frame (SOF) – A one-byte delimiter marking the start of the frame.
Destination and Source Addresses – Physical or MAC addresses. The source address is always a unicast address; the destination address can be unicast, multicast, or broadcast.
Length – Indicates the number of bytes of data that follow this field.
Type – Specifies the upper layer protocol to receive the data.
Data – User or application data. Ethernet II expects a minimum of 46 bytes of data.
If the 802.3 frame does not have the minimum of 64 bytes, padding bytes are added to bring it up to 64.
Frame Check Sequence (FCS) – A CRC value used to check for damaged frames. This value is recalculated at the destination network adapter. If the value differs from what was transmitted, the receiving network adapter assumes that an error occurred during transmission and discards the frame.
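The field layout above can be made concrete by unpacking a raw frame with Python's struct module. The frame bytes here are made up, and the 46 zero bytes of padding bring the frame (plus its 4-byte FCS) up to the 64-byte minimum:

```python
import struct

def mac(raw):
    """Format six raw bytes as a colon-separated MAC address."""
    return ":".join(f"{b:02x}" for b in raw)

def parse_ethernet_ii(frame):
    """Return (destination MAC, source MAC, EtherType) from a raw frame."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return mac(dst), mac(src), ethertype

frame = (bytes.fromhex("ffffffffffff")    # destination MAC (broadcast)
         + bytes.fromhex("001122334455")  # source MAC (unicast)
         + struct.pack("!H", 0x0806)      # EtherType 0x0806 = ARP
         + b"\x00" * 46)                  # data/padding

dst, src, ethertype = parse_ethernet_ii(frame)
print(dst, src, hex(ethertype))  # ff:ff:ff:ff:ff:ff 00:11:22:33:44:55 0x806
```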
EIA/TIA Horizontal Cabling:
(Using CAT5 cabling in an Ethernet network)
3 Meters – 90 Meters – 6 Meters
Collision Domains – A collision domain is defined as a network segment that shares bandwidth with all other devices on the same network segment. When two hosts on the same network segment transmit at the same time, the resulting digital signals will fragment or collide, hence the term collision domain. It's important to know that a collision domain is found only in an Ethernet half-duplex network
Broadcast Domain – A broadcast domain is defined as all devices on a network segment that hear broadcasts sent on that segment.
All devices plugged into a hub are in the same collision domain and the same broadcast domain.
All devices plugged into a switch are in separate collision domains but the same broadcast domain. However, you can buy special hardware to break up broadcast domains in a switch, or use a switch capable of creating VLANs. VLANs break up broadcast domains.
Hubs and Repeaters extend collision and broadcast domains.
Switches, Bridges and Routers break up collision domains.
Routers (and Switches using VLANs) break up broadcast domains.
I hope you found this article to be of use and it helps you prepare for your Cisco CCNA certification. Achieving your CCNA certification is much more than just memorizing Cisco exam material. It is having the real world knowledge to configure your Cisco equipment and be able to methodically troubleshoot Cisco issues. So I encourage you to continue in your studies for your CCNA exam certification. | <urn:uuid:56903e5e-3c0d-4bf5-8265-8de8654c5990> | CC-MAIN-2017-04 | https://www.certificationkits.com/ccna-ethernet/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00056-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.894691 | 627 | 3.328125 | 3 |
Flu season worst in a decade, H1N1 and H3N2 make appearances
The 2012-2013 flu season is one of the worst in 10 years, according to the Centers for Disease Control and Prevention.
The CDC said the season hasn’t yet peaked and is nevertheless running five weeks ahead of its typical yearly schedule.
Forty-one states are reporting widespread geographic influenza activity for the week of December 23-29, 2012, said the CDC on Jan. 4 -- an increase from 31 states the previous week. The proportion of people seeing their doctors for influenza-like illness (ILI) is above the national baseline for the fourth consecutive week, climbing sharply from 2.8 percent to 5.6 percent over the past four weeks, said the health agency.
Since October 1, 2,257 laboratory-confirmed influenza-associated hospitalizations have been reported, marking an increase of 735 hospitalizations from the previous week. The numbers translate to a rate of 8.1 influenza-associated hospitalizations per 100,000 people in the U.S., it said.
The current flu season has claimed the lives of almost two dozen children, according to the CDC, with two influenza-related pediatric deaths reported during the week of December 23-29. Both deaths were associated with influenza B viruses, it said. Eighteen influenza-associated pediatric deaths occurring during the 2012-2013 season have been reported, said the CDC.
Influenza A (H3N2), 2009 influenza A (H1N1), and influenza B viruses have all been identified in the U.S. this season, it said. During the week of December 23-29, 2,346 of the 2,961 influenza positive tests reported to CDC were influenza A and 615 were influenza B viruses. Of the 1,234 influenza A viruses that were subtyped, 98 percent were H3 viruses and two percent were 2009 H1N1 viruses. Those virus variants are all covered by the current 2012-2013 Northern Hemisphere Flu vaccine, it said. | <urn:uuid:5422883f-6d68-4d60-8ec5-d6dde8327942> | CC-MAIN-2017-04 | http://gsnmagazine.com/node/28192?c=federal_agencies_legislative | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00542-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957206 | 456 | 2.53125 | 3 |
The Internet of Things describes the Home Area Networking, Building Automation and AMI protocols and their evolution towards open protocols based on IP such as 6LoWPAN and ETSI M2M.
The authors discuss the approach taken by service providers to interconnect the protocols and solve the challenge of massive scalability of machine-to-machine communication for mission-critical applications, based on the next generation machine-to-machine ETSI M2M architecture.
The authors demonstrate, using the example of the smartgrid use case, how the next generation utilities, by interconnecting and activating our physical environment, will be able to deliver more energy (notably for electric vehicles) with less impact on our natural resources.
- Offers a comprehensive overview of major existing M2M and AMI protocols
- Covers the system aspects of large scale M2M and smart grid applications
- Focuses on system level architecture, interworking, and nationwide use cases
- Explores recent emerging technologies: 6LoWPAN, ZigBee SE 2.0 and ETSI M2M, and for existing technologies covers recent developments related to interworking
- Relates ZigBee to the issue of smartgrid, in the more general context of carrier grade M2M applications
- Illustrates the benefits of the smartgrid concept based on real examples, including business cases.
This book will be a valuable guide for project managers working on smartgrid, M2M, telecommunications and utility projects, system engineers and developers, networking companies, and home automation companies. It will also be of use to senior academic researchers, students, and policy makers and regulators. | <urn:uuid:f227ac47-567d-4dd6-9576-54c118808ea6> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2012/02/16/the-internet-of-things-key-applications-and-protocols/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00230-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.884369 | 339 | 2.515625 | 3 |
Over the past decade, communication technologies have given rise to a wide range of online services for both individuals and organizations via the Internet and other interconnected networks. Network routing protocols play a critical role in these networks to effectively move traffic from any source to any destination.
The core routing decisions on the Internet are made by the Border Gateway Protocol (BGP), as specified in IETF RFC 4271, which uses routing tables to determine reachability among autonomous systems and make route selections. By default, BGP prefers the route with the shortest AS path to a destination; however, it does not guarantee that the route is optimal in terms of performance (e.g. latency, loss, etc.) and/or costs, as shown in the following figure.
Internap’s Managed Internet Route Optimizer™ (MIRO) was specifically designed to overcome this problem by evaluating different path characteristics to create performance metrics that are used to select the best routes for Internap customers.
What is MIRO?
MIRO is a highly engineered, distributed system whose functionality can be separated into four core subsystems: Route Collection and Injection, Traffic Estimation, Performance Measurement and Route Optimization. The following is a greatly summarized description of each subsystem:
Route Collection and Injection – MIRO actively learns full BGP tables (prefixes) announced by each provider to be aware of the different routes available to each destination. There are different ways to learn this information including direct BGP sessions with edge routers or via SNMP queries. Also, this subsystem is in charge of updating routes (moving routes) by telling the routers which provider is preferred for each route.
Traffic Estimation – To estimate the volume of traffic, MIRO consumes network flow information from the edge routers (e.g. Cisco Netflow, IPFIX). The flow information contains source and destination IP addresses, port numbers, octets, etc. This information is aggregated into subnetworks (prefixes) that should match the ones collected by the Route Collection subsystem, and then the total amount of traffic to each destination is calculated and handed over to the Route Optimization Engine.
Performance Measurement – Performance metrics can be defined as a combination of one or more measurement variables like latency, packet loss, jitter, etc. MIRO selects target IPs on each destination network for which it collects performance metrics, and does so via different techniques including pings and traces. This information is then combined and normalized, and handed to the Route Optimization Engine.
Route Optimization – The Route Optimization Engine is the brains of the MIRO system. It consumes routes and provider information, traffic estimates, performance metrics, and user rules and parameters, and runs a mathematical model to find the absolute best route for each destination. The Route Optimization Engine then sends the selected routes for the destinations to the Route Injection subsystem, which makes sure the changes are applied.
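As a deliberately simplified illustration of the decision the Route Optimization Engine faces, the sketch below scores each (prefix, provider) pair and keeps the lowest score per prefix. All names and numbers are hypothetical, and the real engine solves a global model with cost and capacity constraints rather than making a per-prefix greedy choice:

```python
# Hypothetical inputs from the subsystems above: per-(prefix, provider)
# measurements and per-prefix traffic estimates.
metrics = {
    ("203.0.113.0/24", "provider_a"): {"latency_ms": 42.0, "loss": 0.001},
    ("203.0.113.0/24", "provider_b"): {"latency_ms": 55.0, "loss": 0.000},
    ("198.51.100.0/24", "provider_a"): {"latency_ms": 80.0, "loss": 0.004},
    ("198.51.100.0/24", "provider_b"): {"latency_ms": 71.0, "loss": 0.000},
}
traffic_mbps = {"203.0.113.0/24": 450.0, "198.51.100.0/24": 120.0}

LOSS_PENALTY_MS = 1000.0  # how heavily packet loss is weighed against latency

def score(m):
    """Collapse latency and loss into one comparable number (lower is better)."""
    return m["latency_ms"] + LOSS_PENALTY_MS * m["loss"]

best = {}
for (prefix, provider), m in metrics.items():
    if prefix not in best or score(m) < best[prefix][1]:
        best[prefix] = (provider, score(m))

for prefix, (provider, s) in best.items():
    print(f"{prefix}: prefer {provider} "
          f"(score {s:.1f}, ~{traffic_mbps[prefix]:.0f} Mbps of traffic)")
```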
In order to optimize each component and meet quality and performance requirements, our engineering efforts had to overcome many challenges, including:
- How to accurately calculate traffic at the prefix level, a problem which is still an open issue in the academic and research community;
- How to optimize routes in polynomial time considering there are hundreds of millions of possible solutions;
- How to keep track of thousands of route changes per minute from several providers without negatively impacting our edge routers’ performance; and
- How to calculate convergence points for target selection to ensure stable and reliable probing for collecting performance measurements.
One of the main differences from its predecessor is the way the new MIRO optimizes routes. Our previous method, a heuristic TCP/IP route management control, worked with BGP in an automated manner. It updated routing tables with the best-performing routes available to provide a superior alternative to the manual route selection approach that many data centers employ to compensate for BGP's inherent deficiencies. The new method is a deterministic approach based on a mathematical model, expressed with a linear programming formulation that considers performance, cost and efficiency as required.
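One plausible shape for such a formulation, written here purely as an illustration and not as Internap's published model, treats route selection as an assignment of prefixes p to providers j, with x_{pj} the share of prefix p's traffic sent to provider j:

```latex
\[
\begin{aligned}
\min_{x}\quad & \sum_{p}\sum_{j}\bigl(\alpha\,\mathrm{perf}_{pj}+\beta\,\mathrm{cost}_{j}\bigr)\,t_{p}\,x_{pj}\\
\text{subject to}\quad & \sum_{j}x_{pj}=1 \quad\text{for every prefix } p,\\
& \sum_{p}t_{p}\,x_{pj}\le C_{j} \quad\text{for every provider } j,\\
& 0\le x_{pj}\le 1.
\end{aligned}
\]
```

Here t_p is the estimated traffic for prefix p, perf_pj the normalized performance score of provider j for that prefix, cost_j the provider's unit cost, C_j its capacity or commit level, and the weights alpha and beta trade performance against cost. Requiring x_pj to be 0 or 1 assigns whole prefixes to providers; relaxing it to the unit interval keeps the problem a linear program that standard solvers handle quickly.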
With the completed deployment of this new MIRO system in all our markets, we immediately confirmed better performance and much faster response times to network events. In Atlanta (ACS), for example, we selected a random day to show the best-case average latency for all carriers compared against MIRO, and as expected, MIRO was faster by 2 to 15 milliseconds:
In the previous figure, you can see there is a network event on provider RED (represented with the red line), where the average latency increased from 120 milliseconds to approximately 180 milliseconds. If we look at the amount of traffic MIRO was putting on provider RED, we can notice it reacted to the event almost instantly, moving about 2.8 Gigabits of traffic per second to other providers (from 4.3Gbps to 1.5Gbps).
Similarly, we selected the same day in New York (NYM) to compare the average packet loss per provider against MIRO. MIRO’s average packet loss is 0.01% versus 0.04% for the rest of the providers:
MIRO brings our customers faster and more stable gaming networks, Content Delivery Networks (CDNs), social networks and general availability for end users as a result of consistent low latency and packet loss. Even though the Internet wasn’t designed for speed, MIRO addresses the deficiencies in BGP and routes traffic along the best available paths.
To learn more about MIRO, watch the video, Maximize Internet Performance, Reduce Latency. | <urn:uuid:05777a69-53a6-4f33-95f5-47ee27e3a97b> | CC-MAIN-2017-04 | http://www.internap.com/2014/11/05/bgp-gets-smart-optimized-network-routing-protocols-miro/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00046-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.919622 | 1,132 | 3.0625 | 3 |
Open source used to be a small portion of the web dedicated to Linux and the programming community, but now it’s an integral part of many web applications. Whether you’re a programmer looking for a solution or a site owner who needs a little help, the open source community can provide something for you.
With that in mind, here are a handful of awesome ways open source technology is used today.
1. Twitter Bootstrap – Web Design
Twitter Bootstrap, commonly referred to simply as Bootstrap, has revolutionized the way people design websites. Bootstrap is a collection of CSS files that use media queries for built-in responsive support. So, if you need to build a site and aren't ready to bring in a designer, you can use Bootstrap to create a basic interface. Chances are that the designer you'd hire also uses Bootstrap, as it's widely used and powers many large (and seriously cool looking) websites.
2. AngularJS – Web Design
AngularJS is backed by Google, so it has a lot of star power associated with it, as well as a lot of interested eyes. It's already very popular among front-end designers.
Is your interest piqued? Take a quick dive into AngularJS with this Ben Finkel webinar. Then expand on what you learn with his AngularJS training course. It’s an open-source double whammy.
3. Android – Operating System
Android is commonly known as a Google product, but it’s actually an open source operating system that was used for Google’s main mobile platform. No wonder then, earlier this week, we dubbed Android as open source’s biggest gift!
You might think it’s just a Google operating system, but anyone can make Android a part of their design. It powers phones and mobile devices and its main competitor is closed-source iOS. Not surprisingly, Android and iOS dominate the market.
4. Mozilla – Browsing
Want to be a part of one of the most popular browsers on the market? The Mozilla developer network and open-source projects exist to give users a good browsing experience and provide developers with the right tools to create rich web applications.
Mozilla’s projects include Thunderbird and Firefox. Firefox is its flagship browser project, and Thunderbird is their email platform.
5. Apache – Server Software
Continuing with the web-based/browser theme, Apache is one of the most commonly used web servers.
It runs on both Linux and Windows-based servers although it’s much more popular with Linux than it is with Windows. Windows has a built-in web server that comes with the operating system, so most Apache systems run on Linux. Many developers work with Apache source code to create plugins and add-ons to the web server.
If you want to explore more Apache goodness, this Garth Schulte course covers Apache’s Hadoop solution and its relation to big data.
6. WordPress – Blogging
WordPress is one of the most popular platforms on the web. It powers many of today's blogs and e-commerce stores. The WordPress developers paved the way by enabling the platform to be completely customizable. For example, developers can create themes and plugins, and customize WordPress to build powerful sites for large businesses. Guess what the CBT Nuggets blog runs on?
WordPress is open source, but also has an API that enables developers to call only certain platform functions, keeping radical changes gated from the core.
7. Python – Script Automation
Developers love Python. It’s easy to learn and it enables you to create scripts that power data collection and automation. See for yourself with the help of Ben Finkel. It’s also an open source language, so it has the backing of the open source community itself.
While Python is primarily a part of the Linux community, it also runs on Windows platforms. Its automation scripting capabilities attract both Windows and Linux developers.
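A small, self-contained example of the kind of everyday automation the article has in mind (reporting the largest files under a directory) runs unchanged on both Linux and Windows:

```python
#!/usr/bin/env python3
"""Report the five largest files under a directory."""
import os
import sys

def largest_files(root, count=5):
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                continue  # skip files that disappear or can't be read
    return sorted(sizes, reverse=True)[:count]

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for size, path in largest_files(root):
        print(f"{size:>12,} bytes  {path}")
```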
These seven tools aren’t exhaustive. You can find more tools on GitHub and Microsoft’s NuGet. These repositories have thousands of projects for you to choose from. Happy hunting!
Browse our training and start planning your foray into open source tech today!
Not a CBT Nuggets subscriber? Start your free week today. | <urn:uuid:00f52bfe-13b9-4ce5-b5a0-6431f9835caf> | CC-MAIN-2017-04 | https://blog.cbtnuggets.com/2017/01/7-practical-ways-to-use-open-source/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00351-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.90777 | 915 | 2.890625 | 3 |
Data at Rest Remains Secure With TrueCrypt
In past articles we’ve looked at how to encrypt data to protect it “in flight” as it passes from one computer to another over the Internet. In this article we’ll look at protecting data “at rest,” stored on a laptop or desktop computer, on a removable disk, a data CD or DVD, or on a USB memory stick.
Business or personal data stored in this way represents a huge security risk: hundreds of thousands of laptops and memory sticks are lost or stolen every year, and hardly a day goes by without reports in the media about large organizations losing customers’ confidential information when computer equipment goes astray. The cost of losing this data can be very high - data may have to be recreated or regathered, customers may have to be compensated, and there may be legal ramifications and a loss to the organization’s reputation. Yet this risk can be mitigated almost completely by taking the simple precaution of encrypting the data before it is stored.
Microsoft now includes its BitLocker data encryption system in some versions of Windows. But if you use a version of Windows without BitLocker, or if you use Linux or Mac OS X - or if you simply don't want to use an encryption system provided by Microsoft - then the good news is that there is an open source alternative called TrueCrypt which is powerful, easy to use, and free.
TrueCrypt can encrypt an entire device such as a USB stick or hard disk drive, or it can create an encrypted container on a device. This is a virtual disk: a file containing encrypted information which can be mounted (when the correct password is supplied) and used like a normal disk drive. In the Windows version of TrueCrypt (for XP, Vista, Server 2003 and Server 2008) the software can also encrypt the system drive which contains the operating system, storing a TrueCrypt boot loader in the first track of the boot drive in the drive’s boot sector. This prevents anyone from booting the computer without the necessary password.
One of the key points about TrueCrypt is that it carries out encryption and decryption transparently and "on the fly." This means that data in an encrypted disk or container is always stored in an encrypted form, and decrypted as it is transferred from disk to memory when it is being used. Any data saved to an encrypted disk or container (or dragged and dropped from an unencrypted disk to an encrypted one, for example) is encrypted automatically without any intervention on the part of the user. In fact, once set up, the only interaction the user has with TrueCrypt is to supply the correct passwords to allow access to encrypted devices. In theory any encryption system must incur a performance overhead, but in practice this is negligible.
To access data stored in an encrypted volume it's necessary to supply the password that was specified when the volume was first encrypted. A password provides good protection as long as it remains confidential, and provided it is unguessable. In practice this means it must be long and preferably a random string of characters. To add additional security a keyfile can also be used. This can be any type of computer file stored on any type of device. For example, you could choose as a keyfile a particular JPEG image or MP3 file stored on your computer. To gain access to an encrypted device you would have to supply your password and specify the image or music file which you have chosen as your keyfile.
In fact the keyfile need not be stored on your computer at all. By storing a particular image or music file (or a keyfile containing random data, which TrueCrypt can generate for you) on a USB key you can create a two-factor authentication system: a protected volume can only be made accessible by providing the password (something you know) and by inserting the USB key containing the keyfile (something you have.)
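The "something you know" plus "something you have" idea can be illustrated in a few lines of Python. This is only a conceptual sketch of combining a password with a keyfile into an encryption key; it is not TrueCrypt's actual keyfile processing or header-key derivation:

```python
import hashlib
import os

def derive_key(password, keyfile_path=None, salt=b"", iterations=200_000):
    """Derive a 256-bit key from a password and, optionally, a keyfile."""
    secret = password.encode("utf-8")
    if keyfile_path is not None:
        with open(keyfile_path, "rb") as f:
            # Mix the keyfile's contents into the secret: without the same
            # file, the password alone will not reproduce the key.
            secret += hashlib.sha256(f.read()).digest()
    return hashlib.pbkdf2_hmac("sha512", secret, salt, iterations, dklen=32)

salt = os.urandom(64)
key = derive_key("a long, hard-to-guess passphrase", salt=salt)
print(key.hex())
```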
The easiest way to start with TrueCrypt is to create a container which you mount as a virtual drive - a process which I’ll outline now.
The first step is to download TrueCrypt. For the purposes of this HowTo I’ll be using the Windows version, but the container I create (which is actually just a file) can be moved to a Linux or OS X based machine and mounted as a drive on either of those operating systems.
Once TrueCrypt is installed and running, you'll be presented with the main TrueCrypt window.
Click on the Create Volume button to get started. This brings up the Volume Creation Wizard, presenting the option of creating an encrypted container, encrypting a non-system partition/drive, or encrypting the system partition or entire system drive. (Note: the Linux and OS X versions of the software do not include this last option.)
To create an encrypted container, click Next, and Next again to create a Standard TrueCrypt volume
You’ll now be asked to create a file which will be the encrypted container.
This window is actually quite misleading. Clicking the Select File… button brings up a file selector window, but what you need to do next is navigate to the location where you want to create your secure container (which you can move later) and then provide a name for the file. If you choose an existing file it will be deleted and replaced with an empty container.
It may be helpful to provide an obvious name for the file, like "my encrypted container", or you may prefer to disguise it by giving it an innocuous name such as "Readme.txt" or "Rainbow.jpg". This is only necessary if you are worried about parties such as foreign governments searching the contents of your computer and compelling you to provide the passwords to any encrypted volumes they find.
Next you need to choose an encryption algorithm and hash algorithm to use. Unless you have a particular reason not to do so, or new vulnerabilities are discovered, the defaults (AES (Rijndael) and RIPEMD-160) are a good choice.
Now choose the size of the container you want to create, and specify the password you want to use to protect the container. If you want to use one or more keyfiles as well, then click the keyfiles checkbox and click the Keyfiles… button to select a keyfile, or create a random one.
At the Volume Format screen you’ll be asked to move your mouse around on top of the screen for a period of time to help introduce randomness into the process (30 seconds minimum is recommended) before clicking the Format button to complete the volume creation process.
Using the Encrypted Container
Once you’ve created your container, it simply appears as a file in Windows Explorer. To use it as a virtual drive, you’ll first need to mount it. To do this, go back to the main TrueCrypt window, click on the Select File… button, and choose the file which is your encrypted container. You can also select a drive letter to mount it to, or let TrueCrypt choose an unused drive letter for you.
You’ll then be asked to supply your password (and keyfile if used), and after a second or two your encrypted volume will appear in Windows Explorer as a Local Disk (in this case P:) which you can use to store anything you like. Any files saved to this disk or dragged onto it will be encrypted automatically.
When you have finished with the virtual drive you can click the Dismount button in the TrueCrypt window, or the drive will dismount automatically when you shut down the computer.
Encrypted containers can be moved from one computer to another, and the virtual disks they contain can then be mounted as long as the computer has TrueCrypt installed. To make it more convenient to move USB drives or optical disks containing encrypted containers between Windows machines which may not have TrueCrypt installed, the Windows version of TrueCrypt enables the creation of a Traveller Disk.
Accessed from the Tools menu on the main TrueCrypt window, the Traveller Disk Setup option allows you to install the files needed to run TrueCrypt directly from the removable media, without needing to install anything on a Windows computer it is attached to. You can also specify that the virtual disk should automatically mount when the media is inserted into a computer (as long as the correct password and keyfile - if applicable - are supplied.)
TrueCrypt includes many other features - such as the ability to have a hidden volume within an encrypted volume - which are beyond the scope of this article.
The biggest difference between TrueCrypt and BitLocker - and commercial disk encryption products such as CheckPoint, PGP, Safeboot or Utimaco - is that TrueCrypt doesn't include any key management system. That means that if you forget your password or lose access to your keyfile, you won't be able to access the encrypted data ever again. By contrast BitLocker keys, for example, can be stored in an Active Directory database by default when they are created so that users who forget their keys can retrieve them. But if the lack of key management is not important to you, then TrueCrypt, as a simple way to secure your data using strong encryption on multiple platforms, is very hard to beat.
The verdict on biofuels is in and the catchphrase seems to be over-promised and under-delivered. Biofuels have been in public use in some form or other for a long time (remember the Ford Model T that ran on hemp-derived fuel?). However, in reality, innovation in biofuels for widespread use is much more recent. First-generation biofuels made out of sugar, starch and edible oils still occupy a major share of the total market. All the same, the biofuel industry has come a long way.
The market for biofuels is expected to cross the 105 billion mark by 2016. The demand for biofuels is on the rise and will continue to grow rapidly through 2022. This rapid expansion is changing the dynamics of the food, agricultural and energy markets in a big way. The US, Brazil (ethanol) and EU (bio diesel) are currently driving most of the demand for biofuels. Government energy policies have contributed greatly to this rapid rise in demand and have coaxed producers to find ways to increase production.
The US, for instance, has seen a rapid increase in production in the last decade, thanks to the Energy Policy Act of 2005 that provided tax incentives and loan guarantees for energy production of various types. EISA followed in 2007 with a bigger goal of moving the US towards energy security and independence.
The EU renewable energy directive of 2009 and the Federal Law 12,249/10 in Brazil have had a similar effect on the respective regional biofuel markets. In 2011, global biofuels production stood at 1,897,000 barrels per day, up from 1,635,000 barrels per day in 2009 - a 16% rise in just 2 years. Production levels are expected to reach 2,500,000 barrels per day by 2020. Increasing biofuels production in the current format has its challenges. For one, first generation biofuels (notably ethanol, bio diesel and bio gas) are made from sugar, starch and edible oils, and constitute a majority of all biofuel production. This has reduced the amount of arable land available for growing food for human and livestock consumption.
Over the past 10 years, many countries like Brazil and Indonesia have noticed a considerable decrease in arable land for food production as a result. According to UNEP, 35.7 million ha were used for biofuel production in 2008 and an estimated 80 million ha are to be used by 2020 at the current rate - a 124% increase. Second-generation biofuels, on the other hand, are produced using inedible plant parts. Unlike first-generation biofuels, they do not compete with the use of raw materials as food. The fuel-over-food issue has been a cause for concern even with second-generation biofuels. Although Jatropha is a cost-effective feedstock plant for bio diesel production, large swathes of land expressly used for Jatropha cultivation have decreased arable land for food production significantly in Tanzania and Kenya. Many companies are eyeing the next generation of biofuels to overcome such challenges. Third-generation biofuels from algal biomass and fourth-generation biofuels from specially engineered plants and biomass (with higher energy yields or with lower barriers to cellulosic breakdown) are currently in various stages of testing and production. The key challenge with next-generation biofuel technologies currently, as seen in the case of KiOR, is one of reaching production economies of scale.
The race for finding sustainable and economical biofuels is on. Major companies like ADM, Cargill, Butamax and Abengoa are partnering with new startups to help deliver innovative biofuels technologies as part of their long-term strategies. The ADM-Virent Energy partnership for better bio refinery solutions, the POET-DSM partnership for producing cellulosic bio-ethanol and the Genesis Biofuel-Abundant Energy Solutions joint venture are just some of the many bets placed by private and government players. Striking the right balance between energy freedom and food security, and efficiency and price parity remains a challenge for biofuels today, but will not be for very long.
What do you see as the biggest online security threats today?
Today’s biggest online threats come from malicious software (e.g., viruses, worms, and ad-bots), phishing scams, and direct attacks by hackers.
Malicious software typically exploits unpatched software bugs in widely used software such as operating systems, browsers, and office software. These malware agents are often propagated through email attachments, often associated with SPAM, or by leveraging unprotected file shares, hopping like frogs from one object to another. Spyware from freeware and software suppliers that plants unwanted "monitors" and alters system operation (e.g., interfering with normal web browser operation and "homesite" selection on end-user systems) is a constant headache for both end-users and desktop security software suppliers. Using built-in web browser safeguards and vigilantly keeping anti-virus and anti-spyware software current is a way of life to ensure secure web browsing and workstation software reliability.
Phishing is accomplished through the copying of prominent financial services sites (e.g., major banks, PayPal on eBay) to create bogus sites. The victims are lured to the bogus web sites by phony emails claiming that the customers need to update their accounts; the victims then proceed to log in to the bogus sites while their account numbers and passwords are being captured. User awareness is an important countermeasure against this sinister threat.
Direct hacking of websites continues to be a problem associated with vulnerabilities created by a combination of unpatched software bugs and failure to use bundled security features in the software. Security and network software from prominent vendors, such as Cisco, Internet Security Systems (ISS), Symantec, and Zone Labs, has also come under attack during the past year. Attack objectives range from denial of service, web site defacement, and privilege escalation to direct theft of credit card information and other valuable electronic information. Being proactive with intensive web and database application coding guidelines and testing along with the use of up-to-date intrusion prevention systems is an absolute must in today's Internet environment.
Direct attacks on recently implemented wireless LANs are also prevalent, but usually result in only the theft of high-speed Internet service or possible relaying to “interesting” targets. Use of strong authentication, encryption, firewalls, and ongoing audits, such as wired and wireless vulnerability testing, are critical safeguards to protect wireless network access points.
What are the people that come to the MIS Training Institute most worried about?
In recent years, the human resource and financial impact of trying to document internal controls and comply with regulatory security laws such as HIPAA, Graham-Leach-Bliley, and most recently Sarbanes Oxley are top priorities in most businesses. A major area of internal security controls associated with regulatory compliance issues is the “bread and butter” area of identity and access control management, in past years just simply referred to as “access control”. Accurately identifying users, their privileges or entitlements, and having an accurate record of what they did while using computerized resources is no longer just a “best practice” but a legal issue with serious non-compliance consequences to the senior management of all publicly owned businesses. All that, is in addition to dealing on a day-to-day basis with frequent software patches to address the major online threats we mentioned in response to the previous question.
Wireless insecurity is also a widespread concern, but can be more easily addressed by treating a wireless connection the same as an Internet connection by applying firewalls, intrusion detection, virtual private networks, and strong authentication.
The CSO is becoming increasingly aware of the dangers posed by mobile devices that contain confidential information and that are subject to theft or loss. What can they do to mitigate those risks? Is the education of end users within a company the only way to go?
There are three areas of security attention related to mobile devices which can range from handheld intelligent cell phones and PDAs to more robust notebook computers: protecting the information content on the mobile device, securing the interaction of that device with other computers across a network, and making sure that additional “backdoor” entry points are not introduced to accommodate “convenient” network access for mobile devices. Effective control of mobile devices begins with intelligent policies and vibrant security awareness and training. From a technical perspective, security for mobile devices includes the use of strong encryption and authentication based on a well-managed public key infrastructure. Remote access gateways, which continually convert “full size” web applications to miniature versions that can operate on the limited size and powered handhelds, must also be protected by strong physical and technical security safeguards. The major issue with theft or loss is not the device, but rather its contents; strong encryption and authentication make the device useless other than its face resale value in the black market.
What’s your take on the open source vs. closed source security debate? In your opinion, what operating system is better, when taking a look from the security perspective?
Open source software, usually with a strong Unix flavor, has proven to be a viable alternative to the world of the “install wizard”. It is often more compact and efficient and uses much fewer resources to provide equal or superior functionality to the end-user. From a purest high security standpoint, Microsoft Windows has still to prove that it is the equal of a well-tuned Unix system. Linux, a popular open source version of Unix has the potential to be very secure, but suffers from “too many fingers in the pie” unless it is stripped to the barer essentials to allow it to be more easily secured.
For the reader to draw their own unbiased conclusions about which operating system and typically associated web server has a better track record, I will refer them to the US National Institute of Standards and Technology (NIST) vulnerability tracking web site, icat.nist.gov, to make their own comparisons of publicly reported security alert bulletins to see which operating systems and web servers have the best track record in the area of fewest serious security bugs and other vulnerabilities. No system has a clean record, but there is a significant difference between the recent history (last 10 years) of open source and proprietary (“closed source”) software.
From a Chief Information Officer/Chief Technology Officer perspective, despite the clear security benefits, formal support for open source software is only available through informal channels in Internet news and discussion groups. However, formal support for closed source, commercial software, especially in light of the increased use of off-shore support that has not approached that of “good ol’ home cooking”, does not always provide a superior benefit. For example, I recently had an experience with a major handheld computer vendor’s off-shore support which involved a problem with the handheld not recognizing an inserted SDIO card. I reported the problem to the vendor via email and grew continually annoyed after three email exchanges. Each reply from the customer support was from a different technician who never responded directly to my questions and comments. Instead their responses read like a text book and did not directly address my problem.
What do you think about the full disclosure of vulnerabilities?
Vulnerabilities should be disclosed, as promptly as possible by the affected IT product vendor(s), accompanied by corrective action (e.g., software patch, additional firewall/intrusion prevention system filtering, security configuration changes or other tightening of access controls). A major concern by opponents of full disclosure is that by revealing the details of the vulnerability, it accelerates the creation of exploit scripts that can be used to attack the vulnerability. The software patches are also a resource to future attackers who can reverse engineer them to provide ideas on attack schemes. Some of the opponents of disclosure are the software authors/vendors themselves who failed to properly code and test their software that created the vulnerabilities in the first place-¦then the customer is again put back on their heels trying to keep up with all of the patches and possible side effects associated with those patches. Vulnerabilities must be disclosed in a timely fashion as long as the announcement includes a fix which may be a patch, a configuration change, or both. Consumer organizations must be able to protect themselves and test for vulnerabilities, so I don’t see any practical way to keep the vulnerabilities a big secret. What you don’t know-¦can kill you!
What is, in your opinion, the biggest challenge in protecting sensitive information at the enterprise level?
The biggest challenge is getting the full support of all levels of management and the work force in making information security a sincere top priority on a continuous basis. Senior management support, accountability “up and down the line”, relentless security awareness, and training are the key ingredients. Technical and physical security safeguards are no better than the people who administer and use them.
What are the future plans for the MIS Training Institute? Any exciting new projects?
MIS is continually in the process of securing new and industry-leading speakers and keynotes for our upcoming event schedule. For our 2005 conference schedule, several new events have been introduced including Cracking E-Fraud, The Conference on Enterprise Risk Management, The Summit on Managing Security & Privacy Compliance in the Era of Sarbanes-Oxley, as well as IT Security World in San Francisco. IT Security World is unique in that it will feature a full conference, including Sector Summits such as HealthSec, FinSec, GovernmentSec, LegalSec, EnergySec and CISO Executive Summit.
Detailed information on all of these events can be found on our Web site. I would encourage readers to visit the site for the most up-to-date information on upcoming conferences, seminars and symposiums. | <urn:uuid:4829a855-4c86-4b1c-b8c5-f3aa932daad9> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2005/01/12/interview-with-ken-cutler-vice-president-information-security-mis-training-institute/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00377-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.9429 | 2,014 | 2.640625 | 3 |
The security of your systems and communication, especially those that utilize the Internet should be paramount for any business. Over the past few weeks a massive new security flaw has been uncovered. This flaw, codenamed Heartbleed, could potentially expose all your vital data and communications that flows between your computer and websites online. All businesses and Internet users should be aware of this Heartbleed so that they can take steps to stay safe.
Most sites on the Internet rely on Secure Sockets Layer (SSL) technology to ensure that information is transmitted securely from a computer to server. SSL and the slightly older Transport Layer Security (TLS) are the main technology used to essentially verify that the site you are trying to access is indeed that site, and not a fake one which could contain malware or any other form of security threat. They essentially ensure that the keys needed to confirm that a site is legitimate and communication can be securely exchanged.
You can tell sites are using SSL/TLS by looking at the URL bar of your browser. If there is a padlock or HTTPS:// before the Web address, the site is likely using SSL or TLS verifications to help ensure that the site is legitimate and communication will be secure. These technologies work well and are an essential part of the modern Internet. The problem is not actually with this technology but with a software library called OpenSSL. This breach is called Heartbleed, and has apparently been open for a number of years now.
OpenSSL is an open-source version of SSL and TSL. This means that anyone can use it to gain SSL/TSL encryption for their site, and indeed a rather large percentage of sites on the Internet use this software library. The problem is, there was a small software glitch that can be exploited. This glitch is heartbleed.
Heartbleed is a bug/glitch that allows anyone on the Internet to access and read the memory of systems that are using certain versions of OpenSSL software. People who choose to exploit the bugs in the specific versions of OpenSSL can actually access or ‘grab’ bits of data that should be secured. This data is often related to the ‘handshake’ or key that is used to encrypt data which can then be observed and copied, allowing others to see what should be secure information.
There are two major problems with this bug. The first being that if an attacker can uncover the SSL handshake used by your computer and the server that hosts the site when you login or transmit data they will be able to see this information. This information usually is made up of your login name, password, text messages, content and even your credit card numbers. In other words, anything that gets transmitted to the site using that version of SSL can be viewed.
Scary right? Well, the second problem is much, much bigger. The hacker won’t only be able to see the data you transmit, but how the site receiving it employs the SSL code. If a hacker sees this, they can copy it and use it to create spoof sites that use the same handshake code, tricking your browser into thinking the site is legitimate. These sites could be made to look exactly same as the legitimate site, but may contain malware or even data capture software. It’s kind of like a criminal getting the key to your house instead of breaking the window.
But wait, it gets worse. This bug has been present in certain versions of OpenSSL for almost two years which means the sites that have been using the version of OpenSSL may have led to exposure of your data and communication. And any attacks that were carried out can’t usually be traced.
What makes this so different from other security glitches is that OpenSSL is used by a large percentage of websites. What this means is that you are likely affected. In fact, a report published by Netcraft cited that 66% of active sites on the Internet used OpenSSL. This software is also used to secure chat systems, Virtual Private Networks, and even some email servers.
We have to make it clear here however: Just because OpenSSL is used by a vast percentage of the Internet, it doesn’t mean every site is affected by the glitch.
The latest versions of OpenSSL have already patched this issue and any website using these versions will still be secure. The version with Heartbleed came out in 2011. The issue is while sites may not be using the 2011 version now, they likely did in the past meaning your data could have been at risk. On the other hand, there are still a wide number of sites using this version of OpenSSL.
This is a big issue, regardless of whether a website uses this version of OpenSSL or not. The absolute first thing you should do is go and change your passwords for everything. When we say everything, we mean everything. Make the passwords as different as possible from the old ones and ensure that they are strong.
It can be hard to tell whether your data or communications were or are actually exposed or not, but it is safe to assume that at some time or another it was. Changing your passwords should be the first step to ensuring that you are secure and that the SSL/TSL transmissions are secure.
Another thing you should be aware of is what sites are actually using this version of OpenSSL. According to articles on the Web some of the most popular sites have used the version with the bug, or are as of the writing of this article, using it. Here are some of the most popular:
It would be a good idea to visit the blogs of each service to see whether they have updated to a new version of OpenSSL. As of the writing of this article, most had actually done so but some were still looking into upgrading. For a full list of sites, check out this Mashable article.
If you have a website that uses SSL/TSL and OpenSSL you should update it to the latest version ASAP. This isn’t a large update but it needs to be done properly, so it is best to contact an IT partner like us who can help ensure the upgrade goes smoothly and that all communication is infact secure.
Contact us today to see how we can help ensure that your company is secure. | <urn:uuid:d6557e14-9f5b-42b5-9587-801a82a31992> | CC-MAIN-2017-04 | https://www.apex.com/urgent-change-passwords/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00433-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951578 | 1,279 | 2.8125 | 3 |
Most people don’t typically associate the Central Intelligence Agency with historical UFO investigations but the agency did have a big role in such investigations many years ago.
That’s why I thought it was unusual and kind of interesting that the agency this week issued a release called “How to investigate a flying saucer.” [The release is also a nod to the fact that the science fiction TV series X-Files returns to the screen this weekend]
In the article the CIA talks about the Air Force’s Project Blue Book which investigated public reports of UFOs and operated between 1952-1969. Project Blue Book was based at Wright-Patterson Air Force Base near Dayton, Ohio. Between 1947 and 1969, the Air Force recorded 12,618 sightings of strange phenomena — 701 of which remain "unidentified.”
+More on Network World: 26 of the craziest and scariest things the TSA has found on travelers
“Although the CIA was not directly affiliated with Project Blue Book, the Agency did play a large role in investigating UFOs in the late 1940s and early 1950s, which led to the creation of several studies, panels, and programs. Former CIA Chief Historian, Gerald K. Haines, wrote an in-depth article looking at the Agency’s role in studying the UFO phenomenon for Studies in Intelligence. In his article, “CIA’s Role in the Study of UFOs, 1947-90,” Haines says that ‘while the Agency’s concern over UFOs was substantial until the early 1950s, CIA has since paid only limited and peripheral attention to the phenomena,’” the CIA wrote.
The CIA went on to say: While most government officials and scientists now dismiss flying saucer reports as a quaint relic of the 1950s and 1960s, there’s still a lot that can be learned from the history and methodology of “flying saucer intelligence.”
From there it issued a list: 10 Tips When Investigating a Flying Saucer. Here’s a summary of that list directly from the CIA:
1. Establish a Group To Investigate and Evaluate Sightings: Before December 1947, there was no specific organization tasked with the responsibility for investigating and evaluating UFO sightings. There were no standards on how to evaluate reports coming in, nor were there any measurable data points or results from controlled experiment for comparison against reported sightings. To end the confusion, head of the Air Force Technical Service Command, General Nathan Twining, established Project Sign (initially named Project Saucer) in 1948 to collect, collate, evaluate, and distribute within the government all information relating to such sightings, on the premise that UFOs might be real (although not necessarily extraterrestrial) and of national security concern.
+More on Network World: The coolest Air Force UFO videos+
2. Determine the Objectives of Your Investigation: The CIA’s concern over UFOs was substantial until the early 1950s because of the potential threat to national security from these unidentified flying objects. Most officials did not believe the sightings were extraterrestrial in origin; they were instead concerned the UFOs might be new Soviet weapons. Although Blue Book, like previous investigative projects on the topic, did not rule out the possibility of extraterrestrial phenomena, their research and investigations focused primarily on national security implications, especially possible Soviet technological advancements.
3. Consult With Experts: Throughout the 1950s and 1960s, various projects, panels, and other studies were led or sponsored by the US government to research the UFO phenomenon. This includes the CIA-sponsored 1953 Scientific Advisory Panel on Unidentified Flying Objects, also known as the “Robertson Panel.” It was named after the noted physicist H.P. Robertson from the California Institute of Technology, who helped put together the distinguished panel of nonmilitary scientists to study the UFO issue. Project Blue Book also frequently consulted with outside experts, including: astrophysicists, Federal Aviation officials, pilots, the US Weather Bureau, local weather stations, academics, the National Center for Atmospheric Research, NASA, Kodak (for photo analysis), and various laboratories (for physical specimens). Even the famous astronomer Carl Sagan took part in a panel to review Project Blue Book’s findings in the mid-1960s. The report from that panel concluded that “no UFO case which represented technological or scientific advances outside of a terrestrial framework” had been found, but the committee did recommend that UFOs be studied intensively to settle the issue once and for all.
4. Create a Reporting System To Organize Incoming Cases: The US Air Force’s Air Technical Intelligence Center (ATIC) developed questionnaires to be used when taking reports of possible UFO sightings, which were used throughout the duration of Project Blue Book. The forms were used to provide the investigators enough information to determine what the unknown phenomenon most likely was. The duration of the sighting, the date, time, location, or position in the sky, weather conditions, and the manner of appearance or disappearance are essential clues for investigators evaluating reported UFO sightings. Project Blue Book categorized sightings according to what the team suspected they were attributable to: Astronomical (including bright stars, planets, comets, fireballs, meteors, and auroral streamers); Aircraft (propeller aircraft, jet aircraft, refueling missions, photo aircraft, advertising aircraft, helicopters); Balloons; Satellites; Other (including missiles, reflections, mirages, searchlights, birds, kites, spurious radar indications, hoaxes, fireworks, and flares); Insufficient Data; and finally, Unidentified.
5. Eliminate False Positives: Eliminate each of the known and probable causes of UFO sightings, leaving a small portion of “unexplained” cases to focus on. By ruling out common explanations, investigators can focus on the truly mysterious cases. Some common explanations for UFO sightings discovered by early investigations included: misidentified aircrafts (the U-2, A-12, and SR-71 flights accounted for more than half of all UFO reports from the late 1950s and most of the 1960s); celestial events; mass hysteria and hallucination; “war hysteria;” “midsummer madness;” hoaxes; publicity stunts; and the misinterpretation of known objects.
6. Develop Methodology To Identify Common Aircraft and Other Aerial Phenomena Often Mistaken for UFOs: Because of the significant likelihood a common (or secret military) aircraft could be mistaken for a UFO, it’s important to know the characteristics of different types of aircraft and aerial phenomenon to evaluate against each sighting. To help investigators go through the troves of reports coming in, Project Blue Book developed a methodology to determine if the UFO sighting could likely be attributable to a known aircraft or aerial phenomenon. They wrote up detailed descriptions characterizing each type of aircraft or astronomical phenomenon, including how it might be mistaken for a UFO, to help investigators evaluate the incoming reports.
7. Examine Witness Documentation: Any photographs, videos, or audio recordings can be immensely helpful in evaluating a reported UFO sighting.
8. Conduct Controlled Experiments: Controlled experiments might be required to try and replicate the unknown phenomena.
9. Gather and Test Physical and Forensic Evidence: In the Zamora case [a famous case of a police officer, Lonnie Zamora, detailed an incredible experience with a UFO in New Mexico]. [the last chief officer of the US Air Force’s Blue Book UFO investigation Hector] Quintanilla contends that during the course of the investigation and immediately thereafter, “everything that was humanly possible to verify was checked.” This included bringing in Geiger counters from Kirtland Air Force Base to test for radiation in the landing area and sending soil samples to the Air Force Materials Laboratory. “The soil analysis disclosed no foreign material. Radiation was normal for the ‘tracks’ and surrounding area. Laboratory analysis of the burned brush showed no chemicals that could have been propellant residue,” according to Quintanilla. “The findings were all together negative.” No known explanation could be found for the mysterious event.
10. Discourage False Reporting: The Robertson Panel found that the Air Force had “instituted a fine channel for receiving reports of nearly anything anyone sees in the sky and fails to understand.” This is a classic example of needing to separate the “signal from the noise.” If you have too many false or junk reports, it becomes increasingly difficult to find the few good ones worthy of investigation or attention. The CIA in the early 1950s was concerned that because of the tense Cold War situation and increased Soviet capabilities, the Soviets could use UFO reports to ignite mass panic and hysteria. Even worse, the Soviets could use UFO sightings to overload the US air warning system so that it could not distinguish real targets from supposed UFOs. By knowing how to correctly recognize objects that were commonly mistaken for UFOs, investigators could quickly eliminate false reports and focus on identifying those sightings that remained unexplained.
The CIA declassified hundreds of documents in 1978 detailing the Agency’s investigations into UFOs. Take a look here.
Check out these other hot stories: | <urn:uuid:571e313f-0f9e-4aaf-ab71-0e9e33fa3eb0> | CC-MAIN-2017-04 | http://www.networkworld.com/article/3025838/security/cia-10-tips-when-investigating-a-flying-saucer.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00249-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949758 | 1,891 | 2.875 | 3 |
As technology expands within school districts around the globe, educational organizations continue to face the challenges of bridging the gap between their IT departments and curriculum teams. While IT seeks to have control over tools and network resources, educators desire to provide technology resources to their teachers and students in an effort to improve personalized learning. Can they work together? Is there an opportunity to create an environment where both IT and educators can collaborate? In today’s session, presenters and Ridley School District IT experts, Don Otto and Ray Howanski, showed it’s possible. | <urn:uuid:614cc5d5-9483-4ad3-bd53-753de9b3b567> | CC-MAIN-2017-04 | https://www.jamf.com/resources/bringing-it-and-curriculum-together-to-get-results/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00213-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94682 | 112 | 2.671875 | 3 |
Mongolia is richly endowed with natural resources, with commodities such as copper, gold and coal making up the majority of the country's exports. Even with only 15 percent of the country fully mapped, the World Bank Group states that there are over 6,000 deposits of around 80 different minerals in Mongolia1. Of the 400 deposits that have been defined, 160 are in production2.
The country has a highly favorable investment climate by frontier market standards, with broad-based political stability paving the way for investor protection and a strong legal framework. Significant infrastructure development, aimed at facilitating mineral exports, should boost the country's mining sector over the coming years.
Interest in Mongolia Awakens
Mongolia's legal framework drew concerns when 106 mining licenses were revoked in 20133, and those concerns were compounded by a parliament that at times failed to consult the wider investment and business community on new legislation. However, in 2014 the country expanded the area available to mining and exploration to 20 percent (from roughly 8 percent), by lifting a 2010 ban on new licenses4.
The Mongolian government also recently resolved an ongoing dispute with Rio Tinto-owned Turquoise Hill Resources5, ending a three-year stalemate over revenue sharing and the foreign investor’s role in the Mongolia’s mining sector6.
Other recent policy changes and actions by Mongolian government have increased its attractiveness as an investment destination:
In a bid to revive investor interest in its mining sector, a wave of reforms to the country's 2006 Minerals Law in February 2015 and July 2014 were approved7,8,9.
Previously, the government was entitled to an equity interest in a mineral deposit of between 34 and 50 percent10. However, now there is an option for the government to either exercise the right to equity or impose a special royalty in lieu. This option allows the state's equity interest to be transferred to the license holder.
In May 2016, one of the world's largest undeveloped copper projects (Oyu Tolgoi)11, received approval for expansion. The Oyu Tolgoi mine in the South Gobi Desert of Mongolia is one of the world's largest and highest-grade copper and gold mines.
Despite periodic bouts of resource nationalism, Mongolia is expected to be more accommodating towards multinationals in the coming quarters. With China consuming about 80 percent12 of all Mongolian exports, there is room for further growth, as the government is unlikely to press ahead with policies that would further jeopardize investment in the face of China's economic slowdown. There is a view that, irrespective of the winner of the elections in June 2016, the major political parties do not have any political motivation to disturb or to bring any big large, sensitive changes into this investment agreement now13.
1Cope, Louis W, Mongolia: little known mineral wealth, Mining Engineering, 1 January 2006. Factiva, Inc. All Rights Reserved.
2Taehyun Lee, Altantsetseg Shiilegmaa, Khandtsooj Gombosuren, Gregory Smith, Mongolia Economic Update, The World Bank Group in Mongolia, April 2013.
3Frik Els, Mongolia revokes 106 exploration licences, Mining.com, 6 November 2013.
4Cecilia Jamasmie, Mongolia approves major overhaul to mining law, Mining.com, 2 July 2014.
5Rhiannon Hoyle and Alex MacDonald, Rio Tinto in $5.3 Billion Expansion of Mongolia Project, Dow Jones Newswires, 9 May 2016. Factiva, Inc. All Rights Reserved.
6Rio's $5.3 bln go-ahead fuels hopes of end to Mongolia's hangover, Reuters, 9 May 2016. Factiva, Inc. All Rights Reserved.
7Deepali Sharma, Kincora Copper sees end to Mongolian mining licences dispute, Metal Bulletin, 29 April 2014. Factiva, Inc. All Rights Reserved.
8S.Bold-Erdene, From draft to State Policy - a long journey, The Mongolian Mining Journal, 26 February 2014. Factiva, Inc. All Rights Reserved.
9Proposed amendments to the Minerals Law, The Mongolian Mining Journal, 19 May 2014. Factiva, Inc. All Rights Reserved.
10Andrea Hotter, Mongolia draft minerals law a threat to investors, Metal Bulletin, 18 January 2013. Factiva, Inc. All Rights Reserved.
11Fawad Mir, Rio Tinto, Mongolia green-light US$5.3B investment for Oyu Tolgoi underground mine development, SNL Financial LC, 9 May 2016. Factiva, Inc. All Rights Reserved.
12As China sneezes, those closest are catching the worst colds, Macau Daily Times, 1 June 2016. Factiva, Inc. All Rights Reserved.
13Mongolia election won't impact Rio Tinto's $5.3 bln deal -mining CEO, Reuters, 11 May 2016. Factiva, Inc. All Rights Reserved. | <urn:uuid:05feaee3-621a-4886-a28f-2421c97bb7da> | CC-MAIN-2017-04 | https://www.accenture.com/bd-en/insight-highlights-natural-resources-allure-mongolia | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00267-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.905139 | 1,030 | 2.78125 | 3 |
Data, whether it’s Big or small, is naturally row-based. That’s not a technical revelation. Quite the contrary; it’s how people think. If you’re having dinner or drinks with a group of friends after work, you see each member as an individual; you know their first and last names. You may know the names of their spouse and children, as well as other information about them. But all of that information naturally coalesces into a virtual container about that person. It’s simply a natural way of thinking about things.
That’s how source systems organize and collect information. Much ado has been made around analytical engines about speeding performance by reorganizing the data into columns, instead of naturally occurring row-based structures. The columnar-based approach promises fast query speeds on vast amounts of data, and it delivers on that promise, but at a significant cost that is no longer worth it. Here’s why.
Column-based data is not a new concept. Mainframe systems used some element of storing data in columns back in the 1960s, on purpose-built platforms for specialized operations. It was essentially used to enable fast retrieval of data by creating an index, just like an index in a book. Using this index, again as someone would do with a book, fragments of subjects and mentions of concepts were referenced and pointed to in another logical order – e.g., alphabetically, as opposed to the chapters in which they were originally organized.
Think about any transaction of which you have been a part; it may be buying something at a retail store, taking money from an ATM, even booking a suspected criminal (although I hope you’ve not been part of that): Every one of these transactions follows the same format: “SMITH, JOHN; 26 Queens Boulevard, Apartment F; 10129; BURGLARY; May 26, 2001; 10:30pm ET,” and so on. Again, this is row-based in its approach, referring to one individual and his complete record.
To achieve their speed, column-store database systems essentially create many, many indexes – so many indexes, in fact, that they themselves become the database. Rather than being a new and better “size” alternative in a “one-size-fits-all” world, columnar databases perform unnatural acts of indexing on row-based data stores for one primary reason – to reduce the number of query-slowing input/output (I/O) calls against spinning hard drives; which is very much a 2005-type problem. Eight years later, we submit there’s a better way to approach this challenge, especially in the era of Big Data.
the good old spinning hard disk
Spinning disks – the venerable hard drive
At one time, hard drive spindle speeds were a serious data analysis bottleneck. While processors and CPU cores gained speed by leaps and bounds, spinning-disk hard drives lagged in their ability to quickly find and read data. Higher capacity drives only exacerbated the speed gap between disks and processor chips. Solid state disk (SSD), while much faster, offered little relief as it was low in capacity and high in price.
Because this problem persisted for years, the cumbersome data manipulations and associated IT complications of columns seemed worthwhile. But technology, like time, marches on and the calculus has changed – significantly.
Today, even low-cost commodity server platforms (aka “industry-standard servers”) have access to terabytes of lightning-fast Random Access Memory (RAM), making analytic reads from disks strictly optional. Executing through hyper-threaded, multi-core processors, a new generation of in-memory, massively parallel processing databases – architected from the ground up to take advantage of abundant RAM – enable row-based databases to deliver the query performance of column-based databases with none of the cost and complications brought on by the previous
sly unnatural acts of data mutilation. But that’s not the only advantage.
Limiting your options
Columnar databases impose a subtle, but very real, constraint on query parameters. To achieve its goal of limiting disk I/O, columnar databases query against what are essentially indexed summaries of the original row-based data store. Each row of data must be first split into its component column values and each of those values must then be written to a different place within the database. These indexing schemas must be correct for them to work properly – which can require multiple iterations to determine formats – and even if they work properly, a close examination of the process reveals that user queries are frequently limited by the index scheme itself. Without access to the original data in its original form, true ad hoc querying of the data is not possible. Rather, users are restricted to queries that conform to categories of comparison, called projections, anticipated by the original indexing.
In today’s world, that simply no l
onger works. People think of the oddest questions, and column indexing cannot cover each and every scenario. This is especially true as petabytes of additional data are added every single day; business analysts in the “Facebook Generation” know that every click, fact, status, sensor, etc. is tracked, stored and should be available for processing. Those analysts, who are not now (nor will they ever be) Database Specialists, write ad hoc queries in their favorite Business Intelligence tools, in standard user interfaces, or even just in Excel that do not conform to a standard that is friendly to any locked-in schema.
This columnar requirement to alter the original data structure introduces other practical issues, not least of which is operational latency when attempting to conduct more complex queries to perform more sophisticated analysis. For example, imagine a retailer using a columnar data warehousing system wants to run complex market basket analysis-type queries on a large data set, say something more than 5TB. Due to multiple fact tables and complex joins, it can take days or longer to get a columnar database properly set up, since the schema must be constructed multiple times to get it working right before data can even be loaded and analyzed. In a world where insights are increasingly required on an immediate, “need it now” basis, this presents companies with an untenable situation.
Further, updating the data warehouse with fresh data is not a straight-forward process, causing columnar database vendors to employ complicated tricks to do updates in a reasonable timeframe. Overall, from an IT management perspective, columnar data makes life complicated. Complexity is costly. For companies seeking the agility and advantages of near-real time analysis, this type of latency between data collection and data analysis is a real problem only exacerbated by the information fire hose effect that is “Big Data.”
In addition, the need to index and project can significantly diminish another one of the hyped benefits of columnar databases – compression. Because they develop indexes of the actual data, columnar databases are touted as providing exponential levels of data compression; seemingly, a very attractive proposition for companies dealing with massive amounts of information. What’s less publicized, however, is the effect that creating multiple indexed projections has on this benefit. As data sets grow larger and more complex, the need to perform more complex queries scales along with them. This, in turn, multiplies the number of projections that are created. Fairly quickly, this can significantly reduce the initial compression benefit of indexing. In fact, many of the columnar database purveyors recommend having as much disk as the uncompressed data for this very purpose.
An Example: on-line gaming analytics
A leading online gambling operator in the UK wanted to up its player analytics game. They were initially interested in columnar database technology because of the perceived uniqueness of high compression rates.on-line gaming and gambling – a tremendous performance requirement best served by in-memory analytics
In order to accomplish what they wanted, data would have to be duplicated over and over in order to build so-called “projections” in the database. In this way, the benefit of having a high level of compression was lost as they still had to have as much disk available as the total amount of uncompressed data.
Dust in the wind?
So, as spinning disks and other limitations they grew up with fade in the face of continued technological progress, will columnar data analysis disappear? In a word, no – it’s more likely columnar will become a feature or capability within a larger, more capable solution.
For instance, in applications where a careful effort has been made to tune the data for query performance and there is a need to repeatedly run the exact same set of real-time queries, columnar indexing may make sense. They are, however, the exception…and not the rule. More typical for businesses grappling with getting the most out of their Big Data investment is a scenario where a broad range of user types seek the answers from ad hoc, often fairly complex questions, against at least near-real-time information. For the reasons explained above, adapting a columnar architecture as the data warehouse engine in such a scenario poses significant costs in terms of operational cost and complexity.
So why go to all the trouble of putting data into columns if you don’t have to? Columnar databases were invented to solve yesterday’s problems. It’s time to look forward.
In-memory analytics: the cure for the common column
By comparison, an in-memory analytical platform can augment any data storage infrastructure and maintain the original row-based data structure while letting multiple users run queries at train-of-thought speed. Unbounded by the constraints of columnar indexing, users are free to explore any and all possible relations present within the data. Further, because the data structure is preserved and information passes quickly and easily from collection point to data warehouse, users are assured they’re working against near real-time data that accurately depicts the current lay of the land. This is often referred to as “performance at the glass,” and reflects the immediacy that drives many companies’ analytical needs today.
The future is always arriving
Even now, industry leaders Intel and AMD are reportedly working on new CPU technology that would enable 46-bit address spaces, overcoming a longstanding limit and thus allowing up to 64 terabytes of addressable RAM on a single server.
Memory technology itself is being pushed toward exciting new advances with direct application in and benefit for data warehousing and analytics. Dynamic RAM today is fast and getting faster on a regular basis, but it’s a volatile medium. If you lose your power, you lose your in-flight data – making disks a necessary safe harbor for persistence. But that present-day reality appears poised for a slide into the rearview mirror as years of research into different forms of non-volatile RAM (NVRAM) appears poised to alter the commercial landscape for fast, persistent, enterprise-class memory. The NVRAM just ahead in the commercialization pipeline will be a significant leap beyond the flash memory of today, which, though offering faster performance than spinning disks, is not up to DRAM speeds and suffers from data reliability issues under constant, heavy use. The next generation technologies for NVRAM, such as phase-change memory (PRAM), are promising to deliver something very close to universal memory; offering performance that eclipsing both RAM in speed and spinning disks in data durability.
Make no mistake; these technologies will not arrive via special delivery next week – or even next year – shrink-wrapped and ready for deployment at scale at fire sale prices. The trend, however, is clear and inexorable toward more and persistent memory, more efficient use of increasingly capable multi-core CPUs and increased bandwidth tying these platforms together. For these reasons, loading data in-memory on an analytical platform that augments an existing infrastructure represents a far superior solution for today – and tomorrow.
Find out more about how the Kognitio Analytical Platform provides scalable in-memory processing for advanced analytics at www.kognitio.com/analyticalplatform | <urn:uuid:de07a5f4-66b0-4ed7-a2c3-1ede2934a3ac> | CC-MAIN-2017-04 | http://kognitio.com/unnaturalacts/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00571-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938982 | 2,508 | 2.625 | 3 |
This, says Professor Dan Wallach, a computer science professor at Rice University in the US, means that anyone using a WiFi sniffer application can eavesdrop and possibly intercept user sessions on a variety of web portals.
Wallach also asserts that the lack of security – with the exception of the password on Facebook – could allow a user's online session to be hijacked.
According to Phil Lieberman, CEO of Lieberman Software, the professor's discovery is typical of open source software, as there is little incentive for the software developer to use secure protocols unless the destination system requires it.
And this, he explained, is the biggest issue with open source software.
"Whilst the economic imperative to go open source is clearly very strong, companies that use open source, such as Android, which is based on Linux code, also need to ensure their software is robust on the security front, and this process costs money", he explained.
Lieberman, whose company specialises in privileged identity management and security solutions, went on to say that Android apps are an interesting case as, unlike most open source software, the apps are usually designed to run on as as-is basis, so adding security to the IP transmission side is not always as easy task.
"I would go one step further and state that this disclosure is but, one early warning shot about the use of cloud computing and new platforms such as Android and Windows Mobile 7", he said.
"The other element is the stark reality that computer science graduates rarely, if ever, receive any training on how to write secure applications. So it should come as no surprise that many applications created by these same people are insecure", he added.
Lieberman went on to say that, depending on the platform provided by a vendor, the core security available to the developer can also be woefully inadequate.
"As a consequence, developers of applications frequently find themselves needing to add layer upon layer of additional technology which may beyond their expertise and budget", he said.
"Because security is frequently an 'out of sight, out of mind' problem, it does not get addressed/funded until someone complains or something bad happens", he added.
Lieberman concludes that Wallach’s findings are a great lesson that it is time for developers to hit the books on how to secure their applications.
"Platform vendors need to complete their security and encryption suites to make it easy for developers to write secure applications", he said. | <urn:uuid:3abfab9f-4f43-4648-88f6-1de22e709f0e> | CC-MAIN-2017-04 | https://www.infosecurity-magazine.com/news/google-android-apps-send-credentials-in-the-clear/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00387-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951017 | 498 | 2.765625 | 3 |
Sun is working on technology to make it easier to run different languages on the Java Virtual Machine. Called the Da Vinci Machine, the project is being described by Sun as "a multi-language renaissance for the Java Virtual Machine architecture." The project features prototype JVM extensions to run non-Java languages efficiently, as well as architectural support.
Although many languages besides Java have been implemented on the JVM, including Ruby, the intent is to make the JVM more compatible with other languages, said Charles Nutter, core developer of JRuby, which is a version of Ruby to run on the JVM. "For the most part, almost every language that's more than five years old has some kind of implementation on the JVM," he said.
The JVM allows programs using it to run on any platform supporting the JVM; it provides hardware and OS independence. Benefits like flexible online code loading and online garbage collection, in which objects are moved out of the way automatically rather than having to be saved manually, are featured.
Da Vinci Machine is intended to overcome obstacles like mismatches between a source language's design patterns and JVM capabilities. Because the JVM was designed for Java and Java favors some design patterns over others, implementers can find themselves dealing with these mismatches, Sun said.
"Specifically, the JVM was originally for Java, and many other languages have features unlike what Java provides. We need to find ways to support those features," said Nutter.
Some pain points to running new languages on the JVM include limitations on calling sequences and control stack management, finite inheritance, and scaling problems when generating classes.
Nutter pointed out that Java, in being a statically typed language, differs from scripting languages like Ruby, which are dynamically typed. Thus, Java gives the JVM more clues about what code is going to be executed. Ways need to be found to let the JVM make the correct call for these languages, he said. In JRuby, this obstacle is addressed via a piece of code to inspect target operations.
Capabilities of Da Vinci Machine are planned for inclusion in the upcoming JDK (Java SE Development Kit) 7, which is based on Java Platform, Standard Edition 7. Sun could not provide a release date for JDK 7. It is not known how many Da Vinci features might actually get into JDK 7, Nutter said.
Da Vinci represents an experimental branch, or even a fork, of the JVM, said Nutter. He cautioned that fork in this case is not meant to carry the same negative connotations associated with forking of a platform.
Java developers questioned liked the idea of Da Vinci Machine.
"[Da Vinci Machine] sounds like something I was thinking was going to happen and should have happened," said Daniel Hinojosa, an independent Java developer and a founder of the Albuquerque Java Users Group. "I think there's going to be a race between Java and the Microsoft [CLR (Common Language Runtime)]." The CLR has been likened to a virtual machine supporting multiple languages. "I think Java's a great language, but I think people like to program differently, whether it's a functional language or a scripting language," he said. Hinojosa said has been running JRuby, Groovy, and Java on the JVM.
Developer Alex Miller, tech lead at Java clustering technology vendor Terracotta, also agreed with the Da Vinci Machine effort.
"There are lots of people writing and running dynamic languages on the JVM these days, and there are certain things that are complicated or obscure with the use of [dynamic] languages," Miller said.
A lot of dynamic languages differ from Java in that they leverage functions rather than objects, Miller said. Making them work on the JVM requires extra work, he said.
Sun's JRuby project, meanwhile, is getting an upgrade. Version 1.1, due within a month, features a full compiler to greatly improve performance, Nutter said.
This story, "Sun's DaVinci: A Renaissance for JVM?" was originally published by InfoWorld. | <urn:uuid:5203823d-117b-4eb8-b84f-1d6d3bf07f63> | CC-MAIN-2017-04 | http://www.cio.com/article/2437222/java/sun-s-davinci--a-renaissance-for-jvm-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00295-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957076 | 848 | 3.015625 | 3 |
Setting and Maintaining Employee Storage Limits
Since the first computers appeared, one of the biggest problems facing system administrators has been how to allocate storage space to users.
In the early days, this was especially difficult because of the limited amount of storage on the systems at the time. Fortunately, most of the files that users were saving were fairly small, so this was not as critical of an issue. As the size of storage has increased, however, so has the size and number of files that are saved. In addition, with the wide popularity of various music and movie formats, many users often have gigabytes of files they wish to save. Therefore, despite the increase in storage capacities, the same problem exists now as before.
Systems administrators need to deal with such needs while still keeping the systems functional and storage space available when necessary. By setting quotas, they can do just that. Most modern operating systems offer some way of setting storage limits. Generally, they are set with a warning level, whereby users are notified when they are nearing the maximum amount of space they are allowed to use.
A level is also set whereby the user is no longer able to save any files until disk usage drops below the set storage level, which can be set per group and per user. On most systems, the default administrator accounts are not restricted by any quotas.
For those working with Linux systems, setting storage limits requires a number of steps be completed. (Details on how to do this can be found at http://tldp.org/HOWTO/Quota.html.) Essentially, quotas must be enabled in the kernel, the quota software installed and then changes are made to the file /etc/fstab to set the limits for each file system.
Once the server is rebooted, the quotas are in effect. These limits can be on the number of files a user can store in addition to the amount of space used. To check on the status of a user’s quota, the command “quota” can be used to check the quota status for a user or group.
Quotas on Windows systems work a lot like those on Linux systems. Starting with Windows 2000, administrators can restrict the amount of space users have available. On Windows, however, the limits are based strictly on storage space, not the number of files. In addition, enabling quotas on Windows is a much easier task.
To enable quotas on Windows, right-click on the drive on which you wish to set quotas and choose “Properties.” In the properties window, choose the “Quota” tab. This is where the quota settings can be configured. By checking the “Enable quota management” box, all the other options are available. An administrator can stop users who exceed their quota from saving any files to this drive by checking the “Deny disk space to users exceeding quota limit.”
Default quotas can be set by setting the appropriate options, along with specific settings for different users and groups by choosing the “Quota Entries” button. Events also can be sent to the event log for those users who exceed the warning level and the quota limit.
Unfortunately, in an enterprise environment, the built-in tools of managing quotas are extremely inadequate. Although quotas can be enabled through group policy, settings are limited, and reporting is almost nonexistent. For this reason, many different programs are available to provide more functionality when working with quotas on Windows systems.
Such programs include SpaceGuard SRM, WinQuota and Northern Quota Server. These programs improve upon the quota features included with Windows, including (in some cases) the ability to set quotas on Windows NT 4.0 systems, something not included with the operating system.
For system administrators, setting storage limits can be extremely important to making sure disk space is available when necessary. Fortunately, using tools available with modern operating systems, administrators can set such limits and manage the space used. For those who require additional features, third-party software can be purchased that will make this process even easier. | <urn:uuid:9d97fae9-1622-420b-9bd4-3cd496a24c04> | CC-MAIN-2017-04 | http://certmag.com/setting-and-maintaining-employee-storage-limits/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00203-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932779 | 834 | 2.796875 | 3 |
Machine learning is in the news a lot these days, but what does that even mean?
Certainly computers are good at crunching numbers, storing and retrieving bits of information, searching and sorting, comparing and detecting patterns. They’re faster than the human brain and they’re arguably capable of storing more information. They can be programmed with algorithms by which they can analyze past events and predict future ones. They can play chess, recognized by most as a game of strategy that requires intelligent reasoning, and have been beating strong human players since the 1980s, a run capped by the famous victory of IBM’s Deep Blue over world champion Garry Kasparov in the late ‘90s.
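To make that last idea concrete, here is a minimal sketch, in Python and purely as an illustration, of a program that "learns" a trend from past observations and uses it to predict the next one. The numbers and the scenario are invented; this is about the simplest form such prediction can take.

```python
# Toy illustration: fit a straight line y = slope*x + intercept to past data
# points with ordinary least squares, then extrapolate the next value.
# This is one of the simplest examples of a program basing its predictions
# on data rather than on rules written out in advance.

past_sales = [10.0, 12.5, 14.8, 17.6, 19.9]  # hypothetical yearly figures

n = len(past_sales)
xs = list(range(n))
mean_x = sum(xs) / n
mean_y = sum(past_sales) / n

# Classic least-squares formulas for slope and intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, past_sales)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

next_year = n
prediction = slope * next_year + intercept
print(f"Learned trend: y = {slope:.2f}x + {intercept:.2f}")
print(f"Predicted value for year {next_year}: {prediction:.2f}")
```

Nothing in that snippet resembles human thought; the program simply fits a line to whatever data it was handed. That gap is a useful frame for the question that follows.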
But do they really learn the same way we do? Let’s delve deeper into the world of neural networks and AML (Advanced Machine Learning) and examine some predictions from real people regarding just how far computers are capable of going and the ways in which they may never be able to emulate our “grey matter.”
Before we try to answer the question about whether computers can really learn, we have to back up and ask a few other, foundational questions: What is “learning?” In order to learn, must you be able to think? And if so, what is “thinking?” For that matter, what is a “computer?”
That last question might seem silly, but the term “computer” was originally used in the early 1600s and applied to people, not machines (someone who does computations – duh). There is some disagreement as to exactly when and by whom the non-human computer was invented, but Charles Babbage is generally credited with creating the concept, in 1822. His mechanical computing machines, the Difference Engine and the Analytical Engine, were the ancestors of today’s modern electronic computers. The first binary programmable machine was the Z1, built by Konrad Zuse in the 1930s.
Once computers became more sophisticated and were capable of doing many of the things that humans can do (often much more quickly and efficiently), it was inevitable that those humans would exercise one quality machines don’t (at least yet) have – their imaginations – and dream of building machines capable of thinking, and even feeling, in the same way we do. And that road inevitably leads to the scariest possibility of all.
Machines that are sentient – able to feel, perceive and experience, with self-awareness and emotion – have been a subject of science fiction for many decades: from Isaac Asimov's The Brain to HAL 9000 in Arthur C. Clarke's 2001: A Space Odyssey to my personal favorite, Jane from Orson Scott Card's Ender series. And we can't forget WOPR from that classic 80s movie, WarGames. Of course, there have been many, many more, some of which – like Star Trek's voice-controlled computer on board the Enterprise, which sounds amazingly like Majel Barrett – don't even have names.
Speaking of Star Trek: as if it weren’t tricky enough to deal with computers that hold intelligent conversations from within the confines of their somewhat traditional hardware form factors, the logical next step is the robotic computer that takes on the physical form of a human being, such as Lieutenant Data, every Trekkie’s favorite Android. He is, after all, so much friendlier (and funny) than Battlestar Galactica’s cylons and Transformers’ Megatron. If you’re more into the idea of hybrid IT, there’s always the half-human, half-machine Borg and other fictional cybernetic organisms.
But back to the real world: The idea of artificial intelligence as a serious field of study began to take hold in the 1940s and became a reality in the 1950s. The term AI is generally recognized to have been first used by John McCarthy in 1955. As an interesting aside, that was also the year in which some of the people whose innovations brought us to the technological state where we are today, such as Bill Gates and Steve Jobs, were born.
McCarthy’s credentials were impressive; he taught at Princeton, Stanford, Dartmouth and MIT. He advanced the theory that machines can be said to have beliefs in a paper called Ascribing Mental Qualities to Machines. Although he died in 2011 at the age of 84, his web page is still alive on the Stanford.edu site with links to many of his lectures, articles and papers that earned him the title of “Father of AI.”
McCarthy’s 1955 project proposed to find a way to “make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” Certainly we have come a long way in some of those areas in the subsequent 60 years. Computers are speaking to us in English (and other human languages) and the technology isn’t confined to high dollar military and academic environments (or spaceships); Cortana, Siri, Google Now (who really needs a better name) and other “digital personal assistant” engines are proliferating in the consumer space.
A few years after McCarthy’s team first tackled this mission, Arthur Samuel implemented one of the first uses of machine learning – on an IBM 701 that had only 9 KB of memory – with a checkers-playing program. That was in the late 1950s. Almost forty years later, IBM was still at it and Deep Blue, an RS/6000 SP2 “supercomputer” built on the concept of massively parallel processing with thirty-two RISC CPUs and 512 special chess processors, declared victory over Kasparov.
The problem (or perhaps the good thing) with Deep Blue was that its artificial intelligence was very narrow. It was like a savant – very, very good at one particular thing but with no abilities outside its specialty. There was no danger of Deep Blue taking over the world, unless that world was being played out on a chess board.
Human-level AI requires a broader, more generalized intelligence. Humans can not only learn to deal with a huge number of very different problems, from how to cook a turkey to how to build a vehicle that will travel to Mars – we're also able to apply what we've learned in one situation to other situations. Creating a machine that can think, and even outthink us, in a narrow field is relatively easy. We're already using such technology in everyday life, even if we don't think of it as AI. Cars that detect when we're drifting out of our lanes and correct it, planes that fly on autopilot, spam filters that analyze your mail and decide which messages you don't want to see, medical devices that determine how much medication to dispense, and so many more.
On the other hand, creating a machine that can plan, reason, grasp complicated concepts and think abstractly like a person of even average intellectual capability is much harder. Some would argue that it’s impossible. In the sci-fi movies, the robots always combine superintelligence and superhuman strength that makes them formidable opponents. However, it’s easier to write about it than to actually do it (which is probably fortunate for us).
Before we can devise a computer that thinks the way we do, we must first understand exactly how we think. And the real challenge isn't in making computers that can do the really difficult things – like out-playing the world chess champion or performing quantum calculations. It's the things that most people can do easily that are so hard for a computer, such as looking at a picture of a cloud and seeing an animal shape in it, or "getting" the sarcasm in a seemingly complimentary comment, or telling a story with poignancy that will make you cry. Computers are very, very good at collecting and storing data, and they can sort and analyze it and detect patterns – but computers lack one important element that we humans have (in greater or lesser degrees): imagination. And in order to learn, we often have to imagine.
We can keep making computers with more and more powerful processors and more and more memory, making them capable of doing more and more complex tasks faster and faster, but that won’t make them “smarter” in the human sense. Thus many AI scientists believe the only way to emulate human thinking and learning is to emulate the human brain. Since the brain works through biological neural networks, the goal is to build artificial neural networks to create a computer “brain” that thinks like we do.
Despite their shortcomings in comparison to the wild predictions of science fiction writers (at least so far), we have come a very long way in the field of AI – further than most people realize. We have robots performing surgery, cars that don’t just assist you when you’re driving but drive themselves, factories once staffed by human assembly line workers that now “employ” robots. In fact, the big fear isn’t so much that machines will turn against us, conquer and kill or enslave us, but that they’ll simply take all our jobs. If you’re middle-aged or older, you can probably think of many jobs that existed when you were a kid, that have been completely or mostly replaced by technology: switchboard operators, typists, file clerks, mail sorters and handlers, farm workers and many more.
Is this a good thing or a bad one? Does it mean we’ll all be living in poverty because we can’t find work, or does it mean the cost savings will make it possible for us to spend our time enjoying ourselves and taking advantage of the capabilities of all this awesome tech? As physicist Niels Bohr said, prediction is very difficult, especially if it’s about the future.
One thing is a good bet: AI is a trend that’s not going away, and it’s likely to take us places we can’t even currently imagine – even with our superior human imaginations. | <urn:uuid:8e9bc812-5867-4899-a592-c627b5357684> | CC-MAIN-2017-04 | https://techtalk.gfi.com/the-learning-machine/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00111-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965038 | 2,115 | 3.53125 | 4 |
German computer scientists working on a product that seems either doomed or useless to its theoretical customers have actually done more than create a bicycle brake controlled using a wireless network.
They have highlighted, quantified and laid out in clearly definable terms the false assumptions, poor decisions and sloppy systems design that makes building or using wireless applications chancy.
In building a wireless bike brake no one will buy, they've provided all the evidence you'll ever need on how to build wireless apps that work quickly and reliably no matter how thin the bandwidth, or intolerant the users or other applications are of any delay.
If you're a cyclist, or interested in cycling gear, read the press release here, because it may be the only time you ever see anything about wireless brakes again.
If you're interested in wireless application development or deployment, go straight to the research paper itself (PDF). The wireless brake may be destined to fail, but the things its inventors learned about how to make machines talk efficiently over wireless networks are definitely worth knowing.
"Wireless brake" and "hit by a truck" sound the same to a cyclist
Despite the impression you may have gotten from Lance Armstrong's obsession with gear and all the new designs, fashions and colorful stuff packed into your local bike store, the cycling industry and cyclists themselves are not quick adopters of new technology.
Sure, bike companies manipulate carbon fiber with the best of them and obsolete their own products so customers can buy replacements nearly as quickly as computer-industry vendors do.
They don't change the basics much, though.
The basic double-diamond shape of the bicycle itself hasn't changed in more than a hundred years. Design and functions of the components evolve slowly. New generations often look identical to old generations, with a few percentage points of improvement in performance or reliability built in or a few grams of weight shaved off.
So word that a group of computer scientists at a German university has built a set of brakes controlled by a small motor, with a wireless signaling device to tell it when to brake and how hard, is unlikely to make cyclists line up to try it.
Even its inventor only wanted to teach his wireless brake to talk
Making a popular set of bike brakes wasn't really the point of the project, however.
The project was to find out how to make the wireless connections between two components of a system that has to operate in real time – with milliseconds of difference between success and failure – more reliable than systems that are connected by a wire normally are.
Bike brakes are small and cheap, compared to controls for a locomotive or chemical plant, for example. So they're easy to work on.
And the timing requirements make for a pretty demanding obstacle to overcome.
On a bicycle – which gives the rider no protection at all from obstacles and on which even the most expert riders frequently crash – a brake that responds precisely as you expect it to, exactly when you need it, is not optional. There is no time to wait for a lag caused by static or conflicting radio signals or magnetic interference or the million other things that make your cell phone or laptop freeze up on a WLAN once in a while.
The need to slow suddenly from 30 MPH to zero provides so small a window of opportunity that the brake has only 250 milliseconds to engage from the time the wireless control is pressed, according to the functional analysis done by a team of German researchers trying to figure out how to build wireless controls that work quickly and respond accurately within tiny slivers of time that make up the normal operating requirements of real-time computer-driven control systems.
The Germans don't really care about bikes, let alone bike brakes. They care about stopping trains, cranes, airplanes, drawbridge motors, industrial machinery and every other technical appliance or machine being designed with wireless controls to make them more convenient and up-to-date, according to a paper published by IEEE called A Verified Wireless Safety Critical Hard Real-Time Design.
ProTip on adding reliability to wireless: Add a wire
"Wireless networks are never a fail-safe method" for controls of any kind because of the limitations and difficulties of broadcasting complex digital commands via radio, almost all new complex industrial systems are being designed with wireless controls, according to Holger Hermanns, chairman of the Dependable Systems and Software department at Saarland Univ.
Everything from pacemakers to chemical-plant controls is going wireless; freight and passenger locomotive systems that rely on wireless for brakes and other controls are being tested in Europe already, and will be in commercial use within half a decade, Hermanns said.
Making wireless applications like that reliable is far more important than bulletproofing wireless brakes for a bicycle. The quick response time, inability to tolerate failure and even stark limits on size and energy make bikes an ideal test bed for experiments in making wireless controls more reliable, Hermanns said.
If a bike brake fails during a test, someone's probably going to bounce off a tree or wall. Even if it's kept out of live-traffic situations, failing to stop a moving locomotive has much weightier consequences.
"The wireless bicycle brake gives us the necessary playground to optimize these methods for operation in much more complex systems ," according to Hermann who, with his research team, tested the brakes using quality assurance processes and algorithms normally used for aircraft or chemical factories.
The goal was to build the radio-frequency send/receive systems that were as reliable as possible, test configurations that should deliver the best performance, and examine the process and protocol involved in sending commands from brake lever to brake, to spot errors that might stretch out stopping times.
The end result was a system that responded quickly and accurately enough every time but three out of a trillion. That's a reliability rate of 99.999999999997 percent. It's also 13 nines, just so you don't have to count, significantly higher than the data-center quality test of "five nines," or 99.999 percent uptime.
Repetition is not the answer; repetition is not the answer; repetition is not the answer
While the result was good, the way Hermanns and his team found to get there was exactly opposite the one they thought would work.
As you'd expect, they assumed that with wireless, the main reason for failure is that the signal doesn't reach the receiver in time.
In this case that meant a radio signal sent from a hand-operated controller on the handlebars of a cruiser bike to a brake on the wheel.
Rather than rely on just one point from which commands could be broadcast, researchers put five senders on various points of the bike, each of which would send the same message several times. With all of them sending the same message over and over, all at once, the chances that the signal would not go through in time would have to be divided by the number of senders and the number of times each sent the message, right? Assume three repeats of each command and you cut the likelihood of failure by 15x?
Sender and receiver communicated using the gMAC networking protocol, which is based on TDMA – a call-and-response system in which each component gets to send just one data point before having to stop and wait for a response from the other.
Each round-trip data exchange made up one slot in a frame of TDMA requests; the length of the frame itself was determined by how long it took the completed message to arrive.
The command language allowed slots in a TDMA frame to be assigned to sender and receiver randomly, using a scheme called Dynamic Slot Allocation (DSA), or it could hand out a seating chart that would tell both sender and receiver which got to speak when and in what order each should send the bits of an overall message. The scripted process was called Fixed Slot Allocation (FSA).
DSA is easier for programmers to use because it doesn't require them to decide every detail about which slot to fill when, and it is far more common in wireless systems than FSA.
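To make the distinction concrete, here is a toy sketch (mine, not the paper's) of what a fixed slot schedule boils down to: a table agreed on in advance that says who owns each slot, so the two radios can never transmit at the same moment.

#include <stdio.h>

enum node { SENDER, RECEIVER };

/* One TDMA frame of eight slots; ownership is fixed at design time. */
static const enum node schedule[8] = {
    SENDER, RECEIVER, SENDER, RECEIVER,
    SENDER, RECEIVER, SENDER, RECEIVER
};

/* A radio transmits only during the slots the schedule gives it. */
static int may_transmit(enum node who, int slot)
{
    return schedule[slot % 8] == who;
}

int main(void)
{
    for (int slot = 0; slot < 8; slot++)
        printf("slot %d belongs to the %s\n", slot,
               may_transmit(SENDER, slot) ? "sender" : "receiver");
    return 0;
}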
Experiments don't always turn out the way you expect; that's why you do them
The team started with a single sender and receiver, but realized they had a problem from the get-go. The quickest round-trip response they managed to get from the brake was 125 milliseconds – 25 milliseconds longer than they wanted to average.
The reason had nothing to do with the radios or interference. The receiver wasn't getting half the messages the sender put out.
That's where redundancy should come in. More broadcast points = fewer lost messages, and it worked, kind of.
The number of messages completely lost dropped by 25 percent – which put the message-loss in the still-unacceptable 37 percent range overall.
Actual response was worse, though. Response times were longer and failures more common because the messages that were getting through were too old – they repeated bits of message the receiver already had, or were part of commands that were already out of date. They were sending 'slow down,' when the delay had caused the message to become 'SSSTOOOOOOOOPPPP.' Not a good characteristic in a brake.
What's the problem
The slots allocated in a TDMA frame boil down to opportunities to talk. If the sender is allocated slots 1, 3, 5, and 7, then it says its piece during those slots of time, and the receiver answers during slots 2, 4 and 6.
Dynamically allocating those slots didn't result in an orderly conversation in which sender and receiver each waited its turn so both could say what they needed to most quickly. No, they fought over who got to talk when, often both talking at the same time, with no one listening.
The result is a lot like the collision of packets in a local-area Ethernet network. If the port onto an external network is narrower than the number of packets trying to squeeze through, packets bump into each other and both have to wait their turn.
IT's simplest solution to that is to expand the bandwidth so there are fewer collisions.
What Hermanns and his team found is that it's much better to avoid collisions in the first place.
While DSA let both sender and receiver shout without listening, FSA told each when to speak. The result was that, even with only one sender rather than five, commands nearly always got through with little or no delay.
Average response times dropped far below the 100 millisecond limit, and the percentage of dropped or failed messages fell from between 35 percent and 50 percent to...0.003 percent.
Once the messages are actually getting through, then you can test for the kinds of things you assumed were the problem in the first place – external sources of static that cause problems with the radio signal itself.
You might not need to, though.
"The key to arrive at a safe [reliable] design is to drastically reduce the individual message loss probabilities," Hermanns and his team concluded. "For the [wireless brake] system this is achieved – maybe not surprising – by avoiding randomness in slot assignment, using the fixed slot allocation scheme... this twist results in a design with very high reliability guarantees, far beyond the 'five-nines' yardstick."
The dryly worded research paper doesn't drive the point home too sharply, but for developers and networking engineers, the message should be clear: redundant signals and overabundant bandwidth do not deliver rapid response times or eliminate lag on their own. Adding more bandwidth is an inefficient way to fix a bottleneck within an application, especially one that is very time sensitive.
The best way to make a wireless network an efficient and reliable medium for time-sensitive command-and-control signals is to make sure the messages being sent are as clearly defined as possible and that the developer or the application itself determines ahead of time seemingly trivial variables like whether client or server gets to talk first and when it's acceptable for them to just talk over each other.
I hate to tell you this if you work in IT, because you're almost certainly working on one set of wireless applications or another, whether that means supporting WLANs, building apps to run on handhelds and communicate via 3G, or working on more complex system-programming jobs like teaching locks, RFID cards and building controls to talk using Near Field Communications, Bluetooth or some other short-distance wireless protocol.
Whichever it is, and whatever limits you face because you're using commercial software rather than something whose roots you can grab deeply enough to change the way it creates data frames or packets, the best approach is the same. Pare down the options the application has to communicate, give it the shortest, most concrete commands and ways to communicate them, and don't bother with signal replicators until you know for sure the unacceptable lag times you're trying to troubleshoot aren't caused by bits of the application contending for space to talk and not giving any other component a chance.
It's a critical difference that will become more important as wireless networks become the norm for corporate networks, rather than an add-on that's considered a benefit, but not a reliable resource.
Holger Hermanns and his unmarketable wireless bike brake, circumlocutory academic writing style and abstruse choice of topics may seem an odd source from which to get confirmation of what seems like a core principle of good wireless application design, but you take insight where you can get it.
This seems like a good one. Much better than the assumption that the world needs another failed high-tech bicycle component, anyway.
Why water is not always the best temperature monitoring system
Tuesday, Jul 23rd 2013
As the costs of running air conditioning units rise, many data centers are turning to alternative cooling techniques, including water. However, this method is not always the best way to maintain a set computer room temperature.
Companies have become more concerned about rising electricity usage in data center environments, with cooling mechanisms frequently being targeted for efficiency gains. As a result, many have turned to water and other alternative cooling techniques to keep data centers at ideal conditions at all times. For instance, Google has reported that by using water-based cooling and other non-legacy methods, the company has made its data centers 50 percent more energy efficient than the standard facility.
"We take advantage of local conditions and use free cooling at all of our data centers," Google said on its website about its data centers. "Avoiding the need for mechanical chillers is the largest opportunity for energy and cost savings."
Google is not the only organization to utilize water as a central data center commodity. According to early reports, the new data center being built by the National Security Agency in Bluffdale, Utah, will use 1.7 million gallons of water a day to keep 100,000 square feet of hardware cool.
When water-based cooling goes wrong
Although Google and others have touted the environmental benefits of using water for data center cooling, the new NSA facility illustrates why this technique is not always ideal. In this instance, the government agency's data center will likely overtax already depleted water resources. TreeHugger reported that the facility will account for 1 percent of all water used in the area. As a result of its construction, Bluffdale municipal officials have started looking at alternative sources of water.
One of the key reasons why the new NSA data center's water cooling plan is "environmentally appalling," according to TreeHugger, is because that region is currently beset by drought. The latest statistics from the U.S. Drought Monitor show that most of Utah is currently in the grips of a moderate to severe drought, and that the state has been abnormally dry for the past 12 months. Utah is far from alone too, as the state is currently one of 15 dealing with similar or worse drought conditions.
Why temperature monitoring equipment is more sustainable
While water-based cooling works for data centers in states like Georgia that have a more abundant supply, such a system is not ideal in states like Utah, where already scarce water is becoming scarcer. Instead of using water, data center operators with facilities in drought-stricken geographies should rely on temperature monitoring equipment. Google noted that servers can withstand temperatures of up to 80 degrees Fahrenheit, and turning up the thermostat can lead to enormous cost savings.
This is risky, though, as raising the computer room temperature even slightly increases the odds of servers becoming too hot and breaking down. To make sure this doomsday scenario does not come to fruition, data center managers should leverage temperature monitoring equipment such as a temperature sensor. That way, staff can maintain effective oversight of internal conditions at any time and from any location. | <urn:uuid:9178d19c-4e8a-4a92-a2c8-4e2a29ab8505> | CC-MAIN-2017-04 | http://www.itwatchdogs.com/environmental-monitoring-news/data-center/why-water-is-not-always-the-best-temperature-monitoring-system-476457 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00076-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951469 | 638 | 2.875 | 3 |
Humidity and Temperature Sensor
The Humidity and Temperature Sensor is an Arduino-compatible sensor board that carries an SHT21 digital humidity and temperature sensor from Sensirion. It has a 4-pin interface that can communicate directly with the analog pins on the Arduino.
The SHT21 utilizes a capacitive sensor element to measure humidity, while the temperature is measured by a band gap sensor. Both sensors are seamlessly coupled to a 14-bit ADC, which then transmits digital data to the Arduino over the I2C protocol. Because of the sensor’s tiny size, it has incredibly low power consumption, making it suited for virtually any application.
To optimize accuracy of temperature and humidity readings, the SHT21 sensor is placed at the tip of the board, isolating it from heat producing circuitry. Two of the sensor’s four pins are ground and +5V; the other two are clock and data pins that make up the 2-wire serial interface. The board is 5V tolerant and designed to operate directly off the supply voltage coming from the Arduino. The software library for the Humidity Sensor is integrated into the latest Antipasto Arduino IDE, making it truly plug-and-play.
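If you would rather talk to the sensor directly instead of through the bundled library, a minimal sketch using the standard Arduino Wire library might look like the one below. The 0x40 address, the 0xF3/0xF5 "no hold" measurement commands and the conversion formulas are taken from the Sensirion SHT21 datasheet; the timing and the assumption that the board's data and clock pins sit on the Arduino's SDA (A4) and SCL (A5) lines should be checked against the cheatsheet.

#include <Wire.h>

const uint8_t SHT21_ADDRESS  = 0x40;   // default I2C address per the datasheet
const uint8_t CMD_TEMP_NOHOLD = 0xF3;  // trigger temperature measurement
const uint8_t CMD_RH_NOHOLD   = 0xF5;  // trigger humidity measurement

// Send a measurement command, wait for the conversion, read the 16-bit raw value.
uint16_t readRaw(uint8_t command) {
  Wire.beginTransmission(SHT21_ADDRESS);
  Wire.write(command);
  Wire.endTransmission();
  delay(100);                                   // generous worst-case conversion time
  Wire.requestFrom(SHT21_ADDRESS, (uint8_t)3);  // MSB, LSB, checksum
  uint8_t msb = Wire.read();
  uint8_t lsb = Wire.read();
  Wire.read();                                  // discard the checksum byte
  uint16_t raw = ((uint16_t)msb << 8) | lsb;
  return raw & 0xFFFC;                          // clear the two status bits
}

void setup() {
  Wire.begin();
  Serial.begin(9600);
}

void loop() {
  float temperature = -46.85 + 175.72 * (readRaw(CMD_TEMP_NOHOLD) / 65536.0);
  float humidity    = -6.0   + 125.0  * (readRaw(CMD_RH_NOHOLD)   / 65536.0);
  Serial.print("Temperature: "); Serial.print(temperature); Serial.print(" C, ");
  Serial.print("Humidity: ");    Serial.print(humidity);    Serial.println(" %RH");
  delay(2000);
}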
The sensor can be placed in soil to measure moisture and temperature, which makes it ideal for using it in a garden or greenhouse. The humidity sensor is also especially useful in building remote weather stations.
Board comes assembled with 4-pin male headers soldered on.
- Energy consumption: 80 uW (at 12 bit, 3V, 1 measurement/s)
- Relative Humidity operating range: 0-100% RH
- Relative Humidity resolution of 0.03%
- Relative Humidity Response Time of 8 sec (tau 63%)
- Temperature operating range: -40 to +125°C
- Temperature resolution of 0.01 C
- 4 pins: +5V, GND, Clock (SCL), Data (SDA)
- Bidirectional communication over a single pin on I2C protocol
- Board is 5V tolerant, allowing sensor to run from a 5V supply on Arduino I/O pins
- Homes, basements, and HVAC systems for measuring humidity
- People with physical conditions sensitive to humidity
- Home ventilating, heating and air conditioning systems
- Meteorology stations to predict or check weather temperatures
- Gardens or greenhouses to check humidity and temperatures
Humidity Sensor Cheatsheet (175.766 KBytes, Document file, May 31, 2010)
Code and pin connections to get the humidity sensor up and running in 5 minutes or less. | <urn:uuid:074a5acf-006b-4706-8c04-b2f7208015b0> | CC-MAIN-2017-04 | http://www.liquidware.com/shop/show/SEN-SHT/Humidity+and+Temperature+Sensor | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00076-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.859162 | 544 | 2.59375 | 3 |
NASA scientists are working to bring the Mars Reconnaissance Orbiter, which has been orbiting the Red Planet for eight years, back online after the spacecraft suffered a glitch Sunday.
The orbiter put itself into safe mode and swapped from its main computer to a backup, NASA said.
"The spacecraft is healthy, in communication and fully powered," said Dan Johnston, NASA's project manager for the orbiter. "We have stepped up the communication data rate, and we plan to have the spacecraft back to full operations within a few days."
The orbiter is one of several NASA robotic machines studying the Red Planet. The spacecraft has been working in conjunction with the Mars rovers Curiosity and Opportunity, and another orbiter, the Odyssey.
In addition to studying Mars, the Reconnaissance orbiter relays data and images from Curiosity and Opportunity back to Earth, and relays commands from Earth to the rovers.
Sunday's glitch has kept NASA from receiving information about the movements of the two rovers. Scientists also have been unable to send new commands to the rovers.
This isn't the first time the orbiter has put itself into safe mode. NASA reported that this has happened four other times in the spacecraft's eight years in the Mars orbit. The last time it happened was in November 2011.
NASA's tech team never discovered what problem sent the orbiter into safe mode on the other occasions.
This time, though, the orbiter went into safe mode after switching from a main radio transponder to a backup. The transponder is used to gather signals from the rovers and send those signals back to Earth.
According to NASA, scientists won't try to switch the orbiter back to the main transponder but will try to figure out why it made the switch.
The Reconnaissance orbiter began its work in March 2006 and completed a two-year mission. It is now on its third extension.
This story, "NASA tries long-distance repair of Mars orbiter" was originally published by Computerworld. | <urn:uuid:b5fc3d06-6533-4f81-a9f0-667924ea595e> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2175108/data-center/nasa-tries-long-distance-repair-of-mars-orbiter.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00470-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956095 | 525 | 2.71875 | 3 |
Linux Security: Easy as 1-2-3
"Linux is a secure OS."
You've probably heard this statement from time to time, and compared to Windows you could argue that it is. But really it's kind of a meaningless statement: no system which is connected to a network or used by human beings is completely secure, and if it were, it would probably be useless.
But you can certainly beef up the security of a given Linux system to make it more secure than it would otherwise be - while still enabling it to do its job - and it's that process, known as hardening, that is the subject of this article. Without going in to the finer details, we'll be looking at the general steps you should take to harden any system under your control that warrants extra security beyond what you believe is necessary for your "normal" systems.
Before you can start the process of hardening a given system you need to have a clear idea of what the system is to be used for, what software it will therefore need to run, and the sorts of threats or vulnerabilities you want to protect against.
1. Start From Nothing
It's only possible to harden a system from a known secure state, and in practice this means starting from scratch with a bare system. This gives you the opportunity to partition the system's disk however you like, and separating the OS files from all the other data you'll be putting on the system is a prudent security measure that costs nothing.
The next step is to configure a minimum install to get the system booted, and add in the extra packages you need to enable the system to do the job you want it to. Why a minimum install? Because the less code on your machine, the fewer vulnerabilities there are waiting to be exploited: you can't exploit what isn't there. You'll also need to apply any security patches to the OS, or to any of the packages running on it.
At this point it is worth noting that all the patching in the world is pointless if a potential attacker can get physical access to the machine - he could simply pick it up and walk off with it. So part of the process of hardening a server involves placing it in a secure environment. It's also possible that a more stealthy intruder who gains physical access to the server could boot the server from a CD-drive or other device and then browse, modify or steal any data the OS he boots into could see. (Booting a Windows server from a Linux CD is a classic way of gaining access to passwords in the supposedly secure Windows SAM database.) So it's wise to configure the system's BIOS to restrict booting to the system's internal hard drive, and to lock the BIOS and boot loader down with a strong password.
The next thing is to compile your own kernel, again including only the parts that you really need. Once your custom kernel is built and installed and you reboot into your kernel, you have a running system with a limited attack surface. But there are still plenty of ways to harden it further - the fun has only really just begun ...
2. Pare Down Services
With a running, slimmed down system, the next step is to make sure that only the system services you really need are running. Most will have been weeded out by now, but it's still likely that some will still be running in the background. You'll have to track them down by looking in the various locations like /etc/init.d and /etc/rc.d/rc.local that hold boot scripts for various daemons, checking anything launched by cron, and so on. You can also check what services are listening on sockets using netstat or even a port mapper like Nmap.
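For example, the checks might look something like this (the exact flags and tool availability vary a little between distributions, and the scan target address is a placeholder):

netstat -tulpn                 # listening TCP/UDP sockets plus the daemons that own them
ss -tulpn                      # same idea on newer systems
nmap -sT -sU 192.168.0.10      # scan the box from another host to confirm what's reachable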
Some of the services you should look at disabling if they are running (unless you specifically need them) include:
- Network file systems e.g. samba
- Printing services e.g. cups
- Mail services e.g. pop, imap, sendmail
- Remote access daemons e.g. telnetd, ftpd, rlogind
- X (window system)
Of course there'll be some services that you will wish to allow, and one way to limit the amount of potential damage they could do to the rest of the system is to run them in their own chroot jails when possible so that they are isolated from the rest of the file system.
3. Consider Permissions
Mirroring the old spy diktat about "need to know", you can also harden your file system by ensuring that no user is given the power to do anything that is not strictly necessary. You can do this by performing an audit and reducing permissions for each file to the minimum possible, and ensuring that group permissions reflect the make-up of your groups. Ultimately, the aim is that no-one should be able to read or write files that they have no business to. You should also probably encrypt any particularly sensitive data.
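A couple of find commands can kick off that audit (run as root; -xdev keeps the search on the local file system, and the second one simply flags setuid/setgid binaries for review rather than changing anything):

find / -xdev -type f -perm -0002 -print
find / -xdev \( -perm -4000 -o -perm -2000 \) -type f -print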
The logical continuation of this is to make sure no-one can get access to accounts that they shouldn't by ensuring you have a secure root password known by as few administrators as practical, that other user credentials are up to date, and that policies like password expiry periods are adhered to. It's also very wise to remove any predefined accounts provided with default passwords, or to change the default passwords at the very least.
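On most distributions the relevant commands look something like the following; the account names here are only examples:

passwd -l games          # lock a predefined account you can't simply remove
chage -M 90 alice        # force a password change at least every 90 days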
As has been said many times before, security is a process, not a job. This means that to keep a machine in a hardened state it's vital that you keep watching it and hardening it further when necessary. To do this you'll need to monitor the system and its logs, apply any patches in a timely fashion, and keep up with security news so you can deal with any vulnerabilities as soon as they become known. And remember that this article is not an exhaustive checklist - just a series of pointers to areas where there are opportunities to harden your systems.
There are always more steps you can take which will make your Linux system more secure and less productive. The key, as ever, is finding the right balance. | <urn:uuid:9c40d7f6-c7a4-4ad4-b97f-3b68380f9f12> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsecur/article.php/3755376/Linux-Security-Easy-as-123.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00066-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963595 | 1,263 | 2.90625 | 3 |
Carl Manion is a managing principal of Raytheon Foreground Security.
Targeted attack campaigns by advanced cyber adversaries have become a mainstay that most—if not all—organizations now need to be concerned about. This type of threat may stay hidden on your network, undetected for long periods of time, laterally moving across your systems as the attackers try to find the valuable information they’re interested in stealing.
Although such targeted attacks are difficult to detect, there are proven techniques and best practices, such as threat hunting, that can be implemented to significantly improve your chances of finding clues that serve as indicators of ongoing attacks. As such, it’s highly critical for enterprises to incorporate best practices into their security operations to mitigate the risks that targeted attacks pose.
Implementing a threat-hunting capability, along with standard IT security controls and monitoring systems, can improve an organization’s ability to detect and respond to threats. Because threat hunting is primarily a human-based activity, it takes skilled threat-hunting experts to implement an effective program.
So what makes a threat hunter successful? Here’s a list of four critical skills:
1. Pattern Recognition/Deductive Reasoning: Attackers are constantly finding new, creative ways to exploit weaknesses in popular operating systems and applications. Unforeseen zero-day exploits with no existing signatures are nearly an everyday occurrence; therefore, threat hunters need to look for patterns that match the tactics, techniques and procedures of known threat actors, advanced malware and unusual behaviors. To detect such patterns, a skilled threat hunter must also understand what normal behavior and patterns look like on their network. They must also be able to formulate and develop logical theories on how to access a network or exploit a system to gain access to specific critical information. Once they’ve developed their theory, they need to be able to work backward, using deductive reasoning, to look for likely clues and traces that would be left behind by attackers within those scenarios.
2. Data Analytics: Threat hunters rely on technology to monitor environments and collect logs and other data to perform data analytics. As such, threat hunters must have a solid understanding of data analytics and data science approaches, tools and techniques. Leveraging best practices such as the use of data visualization tools to create charts and diagrams significantly helps threat hunters identify patterns so they can determine the best actions to take in conducting threat-hunting activities and related investigations.
3. Malware Analysis/Data Forensics: When threat hunters find new threats, they often have to analyze and reverse engineer newly discovered malware and data forensics activities to understand how the malware was initially deployed, what its capabilities are and the extent of any damage or exposure it may have caused.
4. Communication: Once a threat hunter detects a threat, vulnerability, or weakness within the target network, they must effectively communicate to the appropriate stakeholders and staff members so the issue can be addressed and mitigated. If threats and related risks aren’t properly communicated to the right stakeholders, attackers will continue to have the upper hand.
As cyber adversaries continue to evolve, skilled threat analysts are needed to help defend our networks. Fortunately, a recent survey conducted by the National Cyber Security Alliance found 37 percent of young adults say they’re more likely to consider a cyber career than they were a year ago. Young adults also said they’re interested in career opportunities that will allow them to use their problem-solving, data analysis and communication skills. Threat hunting is an opportunity for them to use all of those skills. | <urn:uuid:f580246a-9ac7-420a-ba09-25e92cc6ff42> | CC-MAIN-2017-04 | http://www.nextgov.com/technology-news/tech-insider/2017/01/4-skills-every-threat-hunter-should-have/134186/?oref=ng-relatedstories | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00094-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947349 | 746 | 2.53125 | 3 |
Network Your Files in a Snap with NFS
NFS, Network File System, is the original file-sharing method among UNIX-based computers. Originally developed by Sun, NFS is still widely used, since it is a (relatively) simple and effective means to provide a centralized file server.
We will be implementing an NFS server step by step in this article, exploring methods for simply sharing a directory, and also briefly talking about making users' home directories live on the server. A second installment will deal with the intricacies of NFS options, auto-mounting, and the differences between operating systems' NFS implementations.
Older NFS versions, which most people use for the sake of interoperability, have practically zero security. The server will believe what it's told about the UID/GID of files, so it should be protected from the Internet. Additionally, it should be limited to only serving files for clients that you designate. The easiest way to limit NFS mounts is with tcpwrappers, configurable via /etc/hosts.allow. Portmap, lockd, rquotad, statd, and mountd should all be limited to networks or specific IP addresses of trusted NFS clients.
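As a rough illustration, assuming your NFS clients all live on 192.168.0.0/24, the pair of tcpwrappers files might look like this (the exact daemon names can differ between distributions, e.g. mountd vs. rpc.mountd, so check your own process names):

# /etc/hosts.allow
portmap lockd rquotad statd mountd : 192.168.0.0/255.255.255.0

# /etc/hosts.deny
portmap lockd rquotad statd mountd : ALL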
Since Linux' NFS configuration options are quite similar to other Unix variants, we will be assuming a Linux client and server for this article.
First things first: We should begin by starting the necessary NFS services. On the server side, most distributions have a startup script designed to accomplish this. Running something like /etc/init.d/nfs start will fire up the NFS server properly on most distributions.
Using rpcinfo -p should return a bit of information about which RPC (define) services are running. At a minimum, for NFS to function, you should see: portmap, status, mountd, nfs, and nlockmgr. Any missing items will require that you figure out why they are missing before proceeding. Note that these names are based on the most current nfs-utils package, currently nfs-utils-1.0.6-22. Your specific Linux distribution's documentation should provide more information about how to make sure everything is started at boot time.
Now on to the fun part: sharing directories. The file /etc/exports is used to specify which file systems should be exported to which clients. This is basically a listing of:
"directory machine1(options) machine2(options)…"
Examples should make it clear:
To share /usr read-only to two IP addresses:
/usr 192.168.0.1(ro) 192.168.0.2(ro)
To share /usr/local read-write to one machine, and read-only to everyone else:
/usr/local 192.168.0.5(rw) *(ro)
There are many ways to share directories, and many configurable options. Client lists can be netgroups, IP addresses, a single host, wildcards, or IP networks. Refer to "man exports" for more exhaustive details. The server also needs to be told to reread the configuration when it changes. This can be accomplished by sending -HUP to the nfs daemon, or by running exportfs -ra.
If everything was done properly, this server should be ready to serve NFS. The command showmount -e will list the exported file systems. If an RPC error was returned, that generally means a necessary service is not running.
Most current Linux distributions will support NFS mounting out of the box. To check for kernel support, run grep nfs /proc/filesystems.
If enabled, a few lines mentioning NFS will be present. If not, you'll need to get a different kernel with built-in NFS support, or compile the module. Running insmod nfs should load the module if it happens to be present already.
On the client side, NFS requires a few RPC services before you can successfully mount a remote file system. Portmap, lockd, and statd all need to be running, and should be visible in the output of rpcinfo -p.
You can manually mount an NFS file system as root in the same manner as you would mount a local file system or CDROM.
For testing, I created a directory called /mnt/remote.
Then ran mount 192.168.0.100:/usr /mnt/remote, where 192.168.0.100 is the NFS server's IP address. It should silently return to your prompt.
ls /mnt/remote will now reflect the contents of the server's /usr directory.
It's really that simple. The most frequent cause of NFS (clients or servers) not working is because the appropriate services are not running. Knowing how to check for them (rpcinfo -p) makes troubleshooting NFS a breeze. Once the file system is mounted, the mount command will show you that /mnt/remote is mounted from the server, and it will also display the mount options.
Mounting many different directories manually is no fun, and you can't do it as a normal user. On the bright side, NFS file systems can be added to /etc/fstab just like any other file system. An example of our above mount in /etc/fstab looks like:
192.168.0.100:/usr /mnt/remote nfs ro 0 0
A few options worth mentioning here are hard, soft and intr.
Mounting "hard" or "soft" has to do with how NFS will deal with the server disappearing suddenly because of a crash or network outage. Soft mounting is dangerous. If a read or write operation fails, the NFS client will report and error to the process that was executing the read or write. Most programs won't handle this properly, so it is best to simply use the hard mount option.
Hard mounting means that (just like when a disk fails) the process will hang waiting on i/o if a read or write doesn't process immediately. When the server comes back up, the process will continue normally. This method guarantees data integrity, but can be a bit annoying. Processes will become hung and non-killable unless you also specify the "intr" option, meaning interruptible. Using these recommended options, the fstab entry above will now have "ro,hard,intr" in the options field.
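Putting that together, the earlier example entry with the recommended options would look something like this:

192.168.0.100:/usr  /mnt/remote  nfs  ro,hard,intr  0 0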
That about covers the basics of NFS. There are many uses for NFS aside from the occasional mounting of directories to copy files.
In an organization with multiple UNIX computers where users login to them all, it may be beneficial to move the user's home directories onto one file server. You can simply export /home, and then tell all the clients to get /home from the server, and the home directories will be the same no matter which machine users are logged in to. There are other ways to accomplish this as well, such as using autofs.
The next part of this NFS series will have more detail about automatic directory mounting. | <urn:uuid:b658b75b-4d40-4c27-a7b8-7ead311c64bc> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netos/article.php/3490921/Network-Your-Files-in-a-Snap-with-NFS.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00214-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.912621 | 1,487 | 3.125 | 3 |
Just a couple of months after everybody freaked out at the idea of some guys in Texas inventing a 3D-printed gun for the masses, those crazy guys finally produced a working prototype. "Working" might be an overstatement. In the weapon's first test at a shooting range, it managed to fire just six bullets before literally falling apart. No big deal. They expected this to happen, Cody Wilson, the founder of the Wiki Weapon project, told Wired. "We knew it would break, probably," said Wilson. "But I don't think we thought it'd break within six [rounds]. We thought it'd break within 20." It is made of plastic, after all.
Before we all laugh about this, let's clear up a couple of things. First of all, the 3D-printed gun as we know it isn't actually 3D-printed. Only part of it is -- the lower receiver, to be exact. The lower receiver is arguably the most important part of the gun, though, since it basically holds everything together, and according to the Gun Control Act of 1968, it's important enough to be regulated as if it were the entire gun itself. In fact, the lower receiver is the gun in the eyes of the law. However, it's less difficult to get the various other parts of a gun, like the barrel, the stock and the trigger.
Second is the unnervingly never-ending saga of Defense Distributed, the shell organization set up for the Wiki Weapon project. Under Wilson's fearless leadership, this group wants to upload the designs for a 3D-printed gun to a publicly available website so that anyone can download them and feed them into a 3D printer.
So this first column as we know is the ones column, then our tens, hundreds, and thousands. But really that’s ten to the zero, our ten to the 1 column, our ten squared column, and ten cubed, which means ten times ten times ten. So this is what we’re used to with decimal numbers, different positions have different values.
In binary numbers, it’s the exact same thing except instead of using base 10, we use base 2. Our values are 2 to the zero is one, then two to the one is two, two squared is four, then two times two times two is eight, that’s two cubed. Then two the fourth is 16, our next column is 32, then 64 and 128. This makes eight binary bits.
Your binary is 11010010
How to Determine the Size of a Network
So how do we determine the size of a network? The size of the network is determined by the number of host bits. The more host bits you have, the larger your network will be. If you have one host bit, it could either be a 0 or a 1. If you have two bits, it could be 00, 01, 10, or 11, so we have four combinations with two bits. So the size is equal to 2 raised to the number of host bits. That determines the size of our network. If we have one bit, it's 2^1 or 2. If we have two bits, it's 2^2 or 4 possible combinations. If we have 8 host bits, it would be 2^8, which equals 256, and that would be the size of that network.
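If it helps to see the arithmetic run, here's a small C program (just an illustration, not part of the lesson) that prints the powers of two and converts the binary answer above back to decimal:

#include <stdio.h>

int main(void)
{
    /* Network size is 2 raised to the number of host bits. */
    for (int host_bits = 1; host_bits <= 8; host_bits++)
        printf("%d host bit(s) -> %u addresses\n", host_bits, 1u << host_bits);

    /* Convert the binary pattern 11010010 to decimal, column by column. */
    const char *bits = "11010010";
    unsigned value = 0;
    for (const char *p = bits; *p; p++)
        value = value * 2 + (*p - '0');
    printf("%s in decimal is %u\n", bits, value);
    return 0;
}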
Guest Blogger: Jill Liles | <urn:uuid:1917ba77-153e-48e4-ad88-d2560f9c849c> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2012/03/05/subnetting-made-easy-part-1-decimal-binary-numbers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00204-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.895846 | 350 | 4 | 4 |
Scientists at Argonne National Laboratory say they have created "hairy" electronic materials that grow like Chia pets.
The Argonne researchers said they are interested in the tiny fibers for use in technologies like batteries, photovoltaic cells or sensors.
"'Hairy" materials offer up a lot of surface area. Many chemical reactions depend on two surfaces making contact with one another, so a structure that exposes a lot of surface area will speed the process along. (For example, grinding coffee beans gives the coffee more flavor than soaking whole beans in water.) Micro-size hairs can also make a surface that repels water, called superhydrophobic, or dust," the researchers said in a statement.
+More on Network World: US lab developing technology for space traffic control+
The tiny-fiber structure is so useful that it's evolved several times in nature. For example, blood vessels are lined with a layer of similar tiny protein "hairs," thought to help reduce wear and tear by blood cells and bacterial infections, among other properties, according to Argonne physicist Igor Aronson, who co-authored the study.
The process that produced the Chia Pet-like growth involved a mixture of epoxy, hardener and solvent inside an electric cell. The scientists then ran an alternating current through the cell and watched long, twisting fibers spring up -- much the way Chia Pets grow. The researchers said they can grow different shapes: short forests of dense straight hairs, long branching strands or "mushrooms" with tiny pearls at the tips.
In one experiment the researchers said they laid down a molecule-thick layer of material over the entire hairy structure, like a fresh blanket of snow, to add a layer of semiconductor material. Semiconductors are essential ingredients in many technologies, such as solar cells and electronics. This experiment provided proof of concept that the polymer could be incorporated into semiconductor-based renewable energy technologies. It also proved that it could survive high temperatures, up to 150°C, an essential property for many manufacturing processes.
The study, "Self-assembled tunable networks of sticky colloidal particles," was published in Nature Communications. Researchers from the Illinois Institute of Technology, the Russian Academy of Sciences and N.I. Lobachevsky State University in Russia co-authored the study.
What's in a Function Description?
An overview of what's in the documentation for a function
The entry for each function in this reference typically includes the following sections:
This section gives the header files that should be included within a source file that references the function or macro. It also shows an appropriate declaration for the function or for a function that could be substituted for a macro. This declaration isn't included in your program; only the header file(s) should be included.
When a pointer argument is passed to a function that doesn't modify the item indicated by that pointer, the argument is shown with const before it. For example, the following indicates that the array pointed at by string isn't changed:
const char *string
This section gives a brief description of the arguments to the function.
The section indicates the library that you need to bind with your application in order to use the function.
To link against a library, use the -l option to qcc, omitting the lib prefix and any extension from the library's name. For example, to link against libsocket, specify -l socket. For more information, see the Compiling and Debugging chapter of the Neutrino Programmer's Guide.
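For example, a trivial program that calls a libsocket function (a generic illustration, not an entry from this reference) includes the header named in the function's synopsis:

/* client.c */
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    return (fd == -1) ? 1 : 0;
}

and is then built and linked as described above:

qcc -o client client.c -l socket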
This section describes the function or macro.
This section gives the return value (if any) for the function or macro.
This section describes the special values that the function might assign to the global variable errno.
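As a generic sketch (not an excerpt from any one function's entry), a caller typically checks the return value and then consults errno, whose possible values are what this section documents:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    int fd = open("/no/such/file", O_RDONLY);
    if (fd == -1) {
        /* errno now holds one of the values listed in the Errors section for open() */
        perror("open");
        return 1;
    }
    return 0;
}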
This optional section gives one or more examples of the use of the function. The examples are often just code snippets, not complete programs.
This section tells where the function or macro is commonly found, which may be helpful when porting code from one environment to another. Here are the classes:
- These functions or macros are defined by the ANSI C99 standard.
- Large-file support
- These functions support 64-bit offsets.
- These functions are part of the NetBSD free, open-source system. For more information, see http://netbsd.org/ .
- POSIX 1003.1
- These functions are specified in the document
Information technology — Portable Operating System Interface
(IEEE Std 1003.1, 2004 Edition).
This standard incorporates the POSIX 1003.2-1992 and 1003.1-1996 standards, the approved drafts (POSIX 1003.1a, POSIX 1003.1d, POSIX 1003.1g and POSIX 1003.1j) and the Standard Unix specification. A joint technical working group — the Austin Common Standards Revision Group (CSRG) — was formed to merge these standards.For information about the many POSIX drafts and standards, see the IEEE website at http://www.ieee.org/ .
A classification of POSIX 1003.1 can be followed by one or more codes that indicate which option or options the functions belong to. The codes include the following:
Code Meaning ADV Advisory Information AIO Asynchronous Input/Output BAR Barriers CPT Process CPU-Time Clocks CS Clock Selection CX Extension to the ISO C standard FSC File Synchronization MF Memory Mapped Files ML Process Memory Locking MLR Range Memory Locking MPR Memory Protection MSG Message Passing OB Obsolescent PS Process Scheduling RTS Realtime Signals Extension SEM Semaphores SHM Shared Memory Objects SIO Synchronous Input/Output SPI Spin Locks SPN Spawn TCT Thread CPU-Time Clocks THR Threads TMO Timeouts TMR Timers TPI Thread Priority Inheritance TPP Thread Priority Protection TPS Thread Execution Scheduling TSA Thread Stack Address Attribute TSF Thread-Safe Functions TSH Thread Process-Shared Synchronization TSS Thread Stack Size Attribute TYM Typed Memory Objects XSI X/Open Systems Interfaces Extension XSR XSI Streams
If two codes are separated by a space, you need to use both options; if the codes are separated by a vertical bar (|), the functionality is supported if you use either option.
For more information, see the Standard for Information Technology — Portable Operating System Interface: Base Definitions.
- QNX 4
- These functions or macros are neither ANSI nor POSIX.
They perform a function related to the QNX OS version 4.
They may be found in other implementations of C for personal computers with the QNX 4 OS.
Use these functions with caution if portability is a consideration.
Any QNX 4 functions in the C library are provided only to make it easier to port QNX 4 programs. Don't use these in QNX Neutrino programs.
- QNX Neutrino
- These functions or macros are neither ANSI nor POSIX. They perform a function related to the QNX Neutrino OS. They may be found in other implementations of C for personal computers with the QNX OS. Use these functions with caution if portability is a consideration.
- RFC 2292
- Based on W. Stevens and M. Thomas, Advanced Sockets API for IPv6, RFC 2292, February 1998.
- Simple Network Management Protocol is a network-management protocol whose base document is RFC 1067. It's used to query and modify network device states.
- These Unix-class functions reside on some Unix systems, but are outside of the
POSIX or ANSI standards.
We've created the following Unix categories to differentiate:
- Legacy Unix
- Functions included for backwards compatibility only. New applications shouldn't use these functions.
- Other Unix functions.
This section summarizes whether or not it's safe to use the C library functions in certain situations:
- Cancellation point
- Indicates whether calling a function may or may not cause the thread to be terminated if a cancellation is pending.
- Interrupt handler
- An interrupt-safe function behaves as documented even if used in an interrupt handler. Functions flagged as interrupt-unsafe shouldn't be used in interrupt handlers.
- Signal handler
- A signal-safe function behaves as documented even if called from a
signal handler even if the signal interrupts a signal-unsafe function.
Some of the signal-safe functions modify errno on failure. If you use any of these in a signal handler, asynchronous signals may have the side effect of modifying errno in an unpredictable way. If any of the code that can be interrupted checks the value of errno (this also applies to library calls, so you should assume that most library calls may internally check errno), make sure that your signal handler saves errno on entry and restores it on exit.
All of the above also applies to signal-unsafe functions, with one exception: if a signal handler calls a signal-unsafe function, make sure that signal doesn't interrupt a signal-unsafe function.
- A thread-safe function behaves as documented even if
called in a multi-threaded environment.
Most functions in the QNX Neutrino libraries are thread-safe. Even for those that aren't, there are still ways to call them safely in a multi-threaded program (e.g. by protecting the calls with a mutex). Such cases are explained in each function's description.
- The safety designations documented in this manual are valid for the current release and could change in future versions.
- It isn't safe to use floating-point operations in Interrupt Service Routines (ISRs) or signal handlers.
For a summary, see the Full Safety Information appendix. | <urn:uuid:654add0f-96e2-4b2d-954e-b56151df2504> | CC-MAIN-2017-04 | https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/summary.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00048-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.808994 | 1,569 | 3.125 | 3 |
MIT researchers have released a report titled The Future of the Electric Grid which examines the current state of the nation's electric power infrastructure and the challenges faced by the industry as it evolves over the next two decades.
Of particular concern is the ability to protect these critical resources as they are converted to "smart" networks with greater reliance on modern two-way information systems.
As the majority of nation's power provider authorities operate essentially as individual fiefdoms, one of the major challenges identified in the study is the integration of a smorgasbord of systems and technology to produce one unified national energy delivery system, and to avoid creating vulnerabilities in the process.
"From a cybersecurity perspective, interfacing so many different hardware and software components introduces vulnerabilities—especially when new and legacy hardware and software need to operate together... Perfect protection from cyber attacks is not possible. There will be a successful attack at some point," the MIT report concludes.
Though the individual utilities are governed by the Department of Energy (DoE), the study indicates that there is a lack of a clearly designated entity to address cyber security in the emerging smart grid system which creates a leadership gap that may amplify potential weaknesses in the network.
"Lack of a single operational entity with responsibility for grid cybersecurity preparedness as well as response and recovery creates a security vulnerability in a highly interconnected electric power system," the report states.
The report also notes that even with the most diligent of planning, the evolution of the smart grid will more than likely present vulnerabilities that can not be anticipated ahead of time - vulnerabilities we may not uncover until after they have been exploited by an attacker.
"The highly interconnected grid communications networks of the future will have vulnerabilities that may not be present in today’s grid. Millions of new communicating electronic devices, from automated meters to synchrophasors, will introduce attack vectors— paths that attackers can use to gain access to computer systems or other communicating equipment—that increase the risk of intentional and accidental communications disruptions," MIT researchers stated.
The MIT study echos a September report prepared by the Idaho National Laboratory (INL) for the Department of Energy which examined security issues for the nation's next generation electrical grid.
The report, titled Vulnerability Analysis of Energy Delivery Control Systems, underscores the need to design and implement these new energy delivery systems with security as a top priority regardless of budgetary concerns.
"Cybersecurity for energy delivery systems has emerged as one of the Nation’s most serious grid modernization and infrastructure protection issues. Cyber adversaries are becoming increasingly targeted, sophisticated, and better financed... The energy sector must research, develop and deploy new cybersecurity capabilities faster than the adversary can launch new attack tools and techniques," the report states.
While the notion that administrators will be able to deploy mitigation strategies faster than attackers can exploit them may seem somewhat optimistic - if not naive - the potential consequences of successful exploit could be devastating to the system as a whole, and the report points to the Stuxnet virus attacks in Iran as prime example.
The Stuxnet virus is a highly sophisticated designer-virus that wreaks havoc with SCADA systems which provide operations control for critical infrastructure and production networks.
"The Stuxnet worm—designed to attack a specific control system similar to those found in some energy sector applications—underscores the seriousness of targeted cyber attacks on energy control systems," the INL report notes.
The INL report also examines in great detail a myriad of vulnerabilities identified in security audits over the last seven years, noting that each of the top ten risks have been discovered in multiple systems with a wide range of deployed equipment and software configurations, many of which are attributed to the lack of secure coding practices.
"Vulnerabilities caused by less secure coding practices can be found in new and old products alike, and the introduction of Web applications into SCADA systems has created more, as well as new, types of vulnerabilities. The 10 most significant cybersecurity risks identified during NSTB software and production SCADA assessments are:"
- Unpatched published known vulnerabilities
- Web Human-Machine Interface (HMI) vulnerabilities
- Use of vulnerable remote display protocols
- Improper access control (authorization)
- Improper authentication
- Buffer overflows in SCADA services
- SCADA data and command message manipulation and injection
- SQL injection
- Use of standard IT protocols with clear-text authentication
- Unprotected transport of application credentials
Given the evidence presented, one has to wonder whether the rush to implement a smart grid system on a national level in the face of limited resources for expenditure is only inviting serious and even catastrophic events down the line.
The consensus is and always has been that there is not absolute security, and we must ask ourselves how big our risk appetite is in regards to the potential for major disruptions to commerce, communications, and national security - especially in light of the less than optimistic appraisals presented by the researchers who produced both of these studies.
"With rapidly expanding connectivity and rapidly evolving threats, making the grid invulnerable to cyber events is impossible, and improving resilience to attacks and reducing the impact of attacks are important. As a joint NERC–DOE report notes, 'It is impossible to fully protect the system from every threat or threat actor. Sound management of these and all risks to the sector must take a holistic approach, with specific focus on determining the appropriate balance of resilience, restoration, and protection'," the MIT report noted. | <urn:uuid:90c6e937-aee2-4a1b-9ffa-cc4e0a1102c1> | CC-MAIN-2017-04 | http://www.infosecisland.com/blogview/18590-Smart-Grid-There-Will-be-a-Successful-Attack.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00013-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939577 | 1,111 | 2.640625 | 3 |
On Mars, it's as if the shutdown never happened
- By Frank Konkel
- Oct 18, 2013
Curiosity chugs toward Mars' Mt. Sharp. (NASA photo)
On Mars, it's as if the government shutdown never happened. NASA's Mars rover, Curiosity, continued chugging toward a mountain on the Red Planet while Congress fought over conditions for a spending deal.
NASA staffed a skeleton crew of 550 out of its total 18,000 workforce during the shutdown, keeping Curiosity on its eight-kilometer trek to Mount Sharp. Moving at a maximum speed of 1.5 inches per second, Curiosity is sometimes able to cover 40 meters of ground per day. While it continued sending images back to NASA as it journeyed to its ultimate Martian destination, the rover's team was happy to reintroduce itself via its favorite means of public communication: Twitter.
Eager to be back after a nearly three-week social media hiatus, the Curiosity team linked to a picture of the 5.5-km Mount Sharp, which Curiosity could reach by the end of 2013.
"Allow me to reintroduce myself," the Curiosity team announced on Twitter to its 1.5 million followers. "I'm back on Twitter & even closer to Mars' Mount Sharp."
Fortunately for NASA, the shutdown does not appear to have delayed the launch of its next Mars probe, the Mars Atmosphere and Volatile Evolution (MAVEN) orbiter.
According to NASA officials, the $650 million mission remains on pace to launch Nov. 18 because NASA granted it a shutdown exemption. In addition to studying the Martian atmosphere, MAVEN is to act as a communications relay between NASA and the two rovers cruising around on the Red Planet: Curiosity and Opportunity. The orbiter NASA currently uses as a communications relay is more than a decade old.
Had it not been exempted, the shutdown could have caused MAVEN to miss its window, which closes Dec. 7. If that had happened, the next possible launch date for Maven would have been 2016 due to the positioning of Mars and Earth.
NOAA assessing shutdown impacts
Now back to full staff, the National Oceanic and Atmospheric Administration is sorting out whether the shutdown affected the development of its two largest satellite programs, the Joint Polar Satellite System and the Geostationary Operational Environmental Satellite-R (GOES-R) program.
Worth a collective $22 billion in estimated lifecycle costs, the satellite programs are vital to NOAA's mission of providing weather forecast data to scientists on the ground.
"Currently, NOAA is assessing the short and long-term impacts of the government shutdown to the development of, and launch schedules for, all the spacecraft in its satellite acquisition portfolio, particularly, GOES-R and JPSS," a NOAA spokesperson told FCW.
Both programs have experienced cost setbacks and launch delays in the past, and both received congressional attention in September after critical reports were released by the Government Accountability Office.
A team of experts from NOAA, NASA, the Department of Defense and international partners and contractors will complete an analysis of the impact of the shutdown to their costs and schedules over the next several weeks.
Frank Konkel is a former staff writer for FCW. | <urn:uuid:bb6d6e36-6bce-400f-bc2e-a632212692b8> | CC-MAIN-2017-04 | https://fcw.com/articles/2013/10/18/mars-curiosity-shutdown.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00013-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949988 | 660 | 2.78125 | 3 |
Jerseygirlinfl asked the Answer Line forum if photos floating around the Internet could contain malware.
Cybercriminals use images in a number of ways to infect your computer. In most cases, the photo itself is harmless; it's just a trick to get you to do something stupid. But sometimes, a .jpg file itself will contain malicious code.
Let's look at a few ways in which an image can contain some real bad news.
[Email your tech questions to email@example.com.]
As you may have noticed, a lot of spam exists for the specific purpose of tricking you into visiting a particular website--often one that intends to download malware. Images can play a big part in that. You probably already know not to click a link in a suspicious email, but photos can be embedded in emails just as they are in webpages--and do their dirty work when you open the mail.
Fortunately, most modern mail clients don't display such pictures by default. Best to keep it that way.
Another trick is the double extension, which takes advantage of Windows' file-naming conventions. If a file is named adorable.jpg.exe, most Windows computers will display it as adorable.jpg. Most users, therefore, will think it a harmless image file, even though it's really an executable program. And when you run the program, it probably will show you an adorable picture...while it infects your PC.
And finally, there's steganography, which in a digital context means the art of hiding data in another type of file. A .jpg can easily contain additional bits interwoven within the image, without noticeably affecting the image's appearance. That additional data can include code, which is encrypted to make it harder to identify.
Luckily, such an altered image can't do much by itself. No image viewer will see or know what to do with that code, even if it isn't encrypted. But malware developers often break up their code into multiple pieces and distribute them separately to avoid detection. The information hidden in a picture could contain instructions useful to another piece of malware on your computer. See Zeus banking malware hides crucial file inside a photo for one recent example.
How do you protect yourself? Giving up on images seems a bit extreme. There are better methods.
Keep your operating system, browser, and antivirus software up-to-date. Of course, you should be doing that already.
Be wary of photos whose origins you don't know.
And finally, have Windows show you file extensions so you won't be fooled. In the Start menu's Search field, or in Windows 8's Search charm, type folder options. Select Folder Options. On the View tab, uncheck Hide extensions for known file types.
See the original forum discussion.
This story, "Watch Out for Photos Containing Malware" was originally published by PCWorld. | <urn:uuid:b0b2963e-2464-4e98-a210-72438db582af> | CC-MAIN-2017-04 | http://www.cio.com/article/2377548/malware/watch-out-for-photos-containing-malware.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00407-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923536 | 597 | 2.828125 | 3 |
Buffer Overflows are one of the most common and potentially deadly forms of attack against computer systems to date. They allow an attacker to locally or remotely inject malicious code into a system and compromise its security. This paper deals with the technical details concerning buffer overflows and the methods of prevention. Examples are in C and x86 assembly.
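To make the idea concrete (this example is mine, not taken from the paper, though the paper's examples are also in C), the classic flaw is an unchecked copy into a fixed-size stack buffer: input longer than the buffer overwrites adjacent stack memory, including the saved return address an attacker needs to hijack.

    #include <stdio.h>
    #include <string.h>

    /* Unsafe: copies caller-supplied input into a 16-byte stack buffer
       with no length check, so longer input overruns the buffer. */
    void vulnerable(const char *input)
    {
        char buffer[16];
        strcpy(buffer, input);               /* no bounds checking */
        printf("Got: %s\n", buffer);
    }

    /* Safer: bound the copy to the destination size and terminate it. */
    void safer(const char *input)
    {
        char buffer[16];
        strncpy(buffer, input, sizeof(buffer) - 1);
        buffer[sizeof(buffer) - 1] = '\0';
        printf("Got: %s\n", buffer);
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            safer(argv[1]);                  /* swap in vulnerable() to see the flaw */
        return 0;
    }

Defences typically discussed alongside examples like this include bounds-checked copies, compiler stack protection, and non-executable stacks.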
Download the paper in DOC format here. | <urn:uuid:e1940370-f711-4be2-9b44-8d692c62cb40> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2002/09/06/buffer-overflows---defending-against-arbitrary-code-execution/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00131-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958193 | 75 | 2.78125 | 3 |
by Gregory R. Scholz, Northrop Grumman Information Technology
Wireless networks are described as both a boon to computer users and a security nightmare; both statements are correct. The primary purpose of this article is to describe a strong security architecture for wireless networks. Additionally, the reader should take from it a better understanding of the variety of options available for building and securing wireless networks, regardless of whether all options are implemented. The security inherent in IEEE 802.11 wireless networks is weak at best. The 802.11 standard provides only for Wired Equivalent Privacy, or WEP, which was never intended to provide a high level of security. For an overview of 802.11 and WEP, see the references at the end of this article.
Customer needs range from highly secure applications containing financial or confidential medical information to convenience for the public "hot spot" needing access to the Internet. The former requires multiple layers of authentication and encryption that ensures a hacker will not be able to successfully intercept any usable information or use the wireless network undetected. The latter requires little or no security other than policy directing all traffic between the wireless network and the Internet. Security is grouped into two areas: maintaining confidentiality of traffic on the wireless network and restricting use of the wireless network. Some options discussed here provide both, whereas others provide for a specific area of security.
The level of security required on the wireless network is proportional to the skill set required to design it. However, the difficulty of routine maintenance of a secure wireless network is highly dependant on the quality of the design. In most cases, routine maintenance of a well-designed wireless network is accomplished in a similar manner to the existing administrative tasks of adding and removing users and devices on the network. It is also assumed that security-related services such as authentication servers and firewall devices are available on the wired network to control the wireless network traffic.
It is not necessarily the case that one can see the user or device attempting to use the wireless network. This is the most alarming part of wireless network security. In a wired network, an unauthorized connected host can often be detected by link status on an access device or by actually seeing an unknown user or device connected to the network. The term "inside threat" is often used to refer to authorized users attempting unauthorized access. They are the inside threat because they exist within the boundaries that traditional network security is designed to protect. Wireless hackers must be considered more dangerous than traditional hackers and the inside threat combined because, if they gain access, they are already past any traditional security mechanisms. A wireless network hacker does not need to be present in the facility. This new inside threat may be outside in the parking lot.
War driving is the new equivalent of traditional war dialing. All that is required to intercept wireless network communications is to be within range of a wireless access point, inside or outside the facility.
Physical Wireless Network
In a highly secure environment, a best practice is to have the wireless access points connect to a wired network physically or logically separate from the existing user network. This is accomplished using a separate switched network as the wireless backbone or with a Virtual LAN (VLAN) that does not have a routing interface to pass its traffic to the existing wired network. This network terminates at a Virtual Private Network (VPN) device, which resides behind a firewall. In this manner, traffic to and from the wireless network is controlled by the firewall policy and, if available, filters on the VPN device. The VPN device will not allow any traffic that is not sent through an encrypted tunnel to pass through, with the exception of directed authentication traffic described later. With this model, the wireless clients can communicate among themselves on the wireless network, but there is no access to internal network resources unless traffic is fully encrypted from the wireless client to the VPN. This design may be further secured by configuring legitimate wireless-enabled devices to automatically initiate a VPN tunnel at bootup and by enabling a software firewall on the devices that does not allow communication directly with other clients on the local wireless subnet. In this manner, all legitimate communication is encrypted while traversing the wireless network and must be between authenticated wireless clients and internal network resources.
Many of the available security measures relate to access control through individual user authentication. Authentication can be accomplished at many levels using a combination of methods. For example, Cisco provides Lightweight Extensible Authentication Protocol (LEAP) authentication based on the IEEE 802.1X security standard. LEAP uses Remote Authentication Dial-In User Service (RADIUS) to provide a means for controlling both the devices and the users allowed access to the wireless network.
Although LEAP is Cisco proprietary, similar functionality is available from other vendors. Enterasys Networks, for example, also uses RADIUS to provide a means for controlling the Media Access Control (MAC) addresses allowed to use the wireless network. With these features, the access points behave as a kind of proxy, passing credentials to the RADIUS server on behalf of the client. When these features are properly deployed, access to the wireless network is denied if the MAC address of the device or the username does not match an entry in the authentication server. The access points in this case will not pass traffic to the wired network behind them. For security, the authentication server should be placed outside the local subnet of the wireless network. The firewall and VPN devices must allow directed traffic between the access points and the authentication server further inside the network, and only to the ports required for authentication. This design protects the authentication server from being attacked directly.
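To make the traffic rules concrete, here is a minimal sketch of what such a policy could look like on a Linux/iptables firewall. The article does not name a firewall product, and the wireless subnet, access-point, VPN-gateway and RADIUS-server addresses below are invented for illustration.

    # Wireless subnet 10.10.0.0/24; access point 10.10.0.2; VPN gateway
    # 172.16.1.1; RADIUS server 172.16.1.10 (all addresses are examples).

    # Allow IKE negotiation and ESP (IPSec) from wireless clients to the VPN gateway
    iptables -A FORWARD -s 10.10.0.0/24 -d 172.16.1.1 -p udp --dport 500 -j ACCEPT
    iptables -A FORWARD -s 10.10.0.0/24 -d 172.16.1.1 -p esp -j ACCEPT

    # Allow only the access point to reach the RADIUS server, and only on the
    # standard authentication and accounting ports
    iptables -A FORWARD -s 10.10.0.2 -d 172.16.1.10 -p udp --dport 1812 -j ACCEPT
    iptables -A FORWARD -s 10.10.0.2 -d 172.16.1.10 -p udp --dport 1813 -j ACCEPT

    # Everything else originating on the wireless subnet is dropped
    iptables -A FORWARD -s 10.10.0.0/24 -j DROP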
In addition to authenticating users to the wireless network, the VPN authentication and standard network logon can be used to control access further into the wired network. In this solution, the VPN client has the ability to build its tunnel prior to the workstation attempting its network logon, but after the device has been allowed on the wireless network. After the tunnel is built, specific rules on the VPN and the firewall allow the traditional network logon to occur. A robust VPN solution also treats the users differently based on the group to which they are assigned. Different IP address ranges are assigned to each group, allowing highly detailed rules to be created at the firewall controlling access to internal network resources based on user or group needs. The policy on the firewall must be as specific as possible to restrict access to internal resources to only those clients for whom it is necessary. Building very specific policy for users' access will also allow an Intrusion Detection System (IDS) to better detect unauthorized access attempts.
LEAP also provides for dynamic per-user, per-session WEP keys. Although WEP still uses the 128-bit RC4 algorithm, which has proven to be ineffective by itself, LEAP adds features that maintain a secure environment. Using LEAP, a new WEP key is generated for each user, every time the user authenticates to use the wireless network. Additionally, using the RADIUS timeout attribute on the authentication server, a new key is sent to the wireless client at predetermined intervals. The primary weakness of WEP is due to an algorithm that was easy to break after a significant number of encrypted packets were intercepted. With LEAP, the number of packets encrypted with a given key can be tiny compared to the number needed to break the algorithm.
When using LEAP for user and device authentication, WEP encryption is automatically enabled and cannot be disabled. However, if added security is needed, a VPN, as described earlier, can provide any level of encryption desired. Using a VPN as the bridge between the wired and wireless network is recommended regardless of the underlying vendor or technology used on the wireless network.
IP Security (IPSec) is a proven, highly secure encryption protocol available in VPNs. By requiring all wireless network traffic to be IPSec encrypted to the VPN over the WEP-encrypted 802.11 Layer 2 protocol, any data passed to and from wireless clients can be considered secure. All traffic is still susceptible to eavesdropping, but will be completely undecipherable.
Aside from WEP and LEAP, some vendors provide other forms of built-in security. Symbol Technologies' Spectrum24 product provides Kerberos encryption when combined with a Key Distribution Center. Kerberos is more lightweight than IPSec and, therefore, may be better suited to certain applications such as IP phones or low-end personal digital assistants (PDAs). Other methods of automating the assignment and changing of WEP keys are also available, such as Enterasys' Rapid-Rekey. Wireless vendors have realized that security has become of critical importance and most, if not all, are working on methods for conveniently securing wireless networks. When available, most vendors seemingly prefer to use open-standard, interoperable security mechanisms, with proprietary security additionally available.
Bringing it all together
Numerous options are available to secure a wireless network. A highly secure design will include, at a minimum, an authentication server such as RADIUS, a high-level encryption algorithm such as IPSec over a VPN, and access points that are capable of restricting access to the wireless network based on some form of authentication. When all the security options are tied together, the wireless network requires explicit authentication to allow a device and the user on the wireless network, the traffic on the wireless network is highly encrypted, and traffic directed to internal network resources is controlled per user or group by an access policy at the firewall or in the VPN.
There is no substitute for experience and research when designing a network security solution. Using network security and design experience to exploit available technologies can further increase the security of a wireless network. For example, grouping users into IP address ranges based on access requirements allows firewall access policy to help restrict unnecessary access. This can be accomplished using Dynamic Host Configuration Protocol (DHCP) reservations, assigning per-user or per-group IP address ranges to the VPN tunnels, or statically assigning addresses. Using a centralized accounts database for all authentication helps avoid inadvertently allowing an account that has been disabled in one part of the network to access resources through the wireless network. To use an existing user database for authentication while providing for dynamic WEP keys, use a LEAP-enabled RADIUS server that has the ability to query another server for account credentials. As with most network designs, a solid understanding of the available technologies is paramount to achieving a secure environment.
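As an illustration of the DHCP-reservation idea, an ISC DHCP server (one possible implementation; the host name, MAC address and IP addresses below are invented) can pin a known wireless device to a predictable address that firewall policy can then reference:

    # dhcpd.conf fragment: general pool plus a fixed reservation for one device
    subnet 10.10.0.0 netmask 255.255.255.0 {
        range 10.10.0.100 10.10.0.200;           # general wireless pool
    }

    host clinical-laptop-01 {
        hardware ethernet 00:0c:29:4f:8e:35;     # MAC address of the authorized device
        fixed-address 10.10.0.21;                # address inside the "clinical" policy range
    }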
Utilizing all of the security measures described in this article would yield the following design. When a device first boots up, it receives an IP address within a specified range on a segregated portion of the network. This IP range is based on the typical usage of the device and is most useful for machines dedicated to specific applications. As a user attempts to log onto a wireless device, a RADIUS server authenticates both the MAC address of the device and the username. If the user authentication is successful, access is granted within the wireless network. In order for traffic to leave the wireless network to access other network resources, a VPN tunnel must be established. Again, the IP address assigned to the tunnel can be controlled based on individual user authentication to help enforce access policy through the firewall. When the tunnel is established, firewall access policy will restrict access to resources on the network. Most, if not all, of the authentications required may be automated to use a user's existing network logon and transparently complete each authentication. This is not the most secure model, but it would be as secure as any single sign-on environment.
A secure wireless network is possible using available techniques and technologies. After researching needs and security requirements, any combination of the options discussed here, as well as others not discussed, may be implemented to secure a wireless network. With the right selection of security measures, one can ensure a high level of confidentiality of data flowing on the wireless network and protect the internal network from attacks initiated through access gained from an unsecured wireless network. At a minimum, consider the current level of network security and ensure that the convenience of the wireless network does not undermine any security precautions already in place in the existing infrastructure.
"Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications," IEEE Standard 802.11, 1999 Edition.
"802.11," Edgar Danielyan,
The Internet Protocol Journal
, Volume 5, Number 1, March 2002.
"War Driving," Andrew Woods,
, last viewed August 11, 2002.
"Cisco Aironet® Product Overview," Cisco Systems, , last viewed August 11, 2002.
"IEEE Standard for Local and Metropolitan Area Networks?Port-Based Network Access Control,&quto; IEEE Standard 802.1X, 2001.
"Remote Authentication Dial-In User Service," C. Rigney, S. Willens, A. Rubens, and W. Simpson, IETF
, June 2000.
"Security of the WEP Algorithm," Nikita Borisov, Ian Goldberg, and David Wagner,
, last viewed August 11, 2002.
"802.11 Wireless Networking Guide," Enterasys Networks, June 2002,
"Wireless LAN Security in Depth," Sean Convery and Darrin Miller, Cisco Systems,
, last viewed August 11, 2002.
"Making IEEE 802.11 Networks Enterprise-Ready," Arun Ayyagari and Tom Fout, Microsoft Corporation, May 2001, last viewed August 11, 2002.
GREGORY SCHOLZ holds a BS in Computer and Information Science from the University of Maryland. Additionally, he has earned a number of certifications from Cisco and Microsoft as well as vendor-neutral certifications, including a wireless networking certification. After serving in the Marine Corps for six years as an electronics technician, he continued his career working on government IT contracts. Currently he works for Northrop Grumman Information Technology as a Network Engineer supporting Brooke Army Medical Center, where he performs network security and design functions and routine LAN maintenance.
Computer science is a fundamental skill in the modern economy, President Obama declared on Tuesday as the White House announced a series of initiatives aimed at advancing education in the STEM fields of science, technology, engineering and mathematics.
That includes a $200 million investment from Oracle to extend computer science education to 125,000 U.S. students, along with a host of commitments from federal agencies, schools and other groups to promote STEM training.
In remarks at the sixth annual White House science fair, Obama touted the efforts his administration has made to expand STEM education, and called on schools and businesses to encourage students "to actively engage and pursue science and push the boundaries of what's possible."
Reading, writing, arithmetic and computer science
"And that's why we're building on our efforts to bring hands-on computer science learning, for example, to all students," Obama said. "As I've said before, in the new economy, computer science isn't optional -- it's a basic skill, along with the three Rs."
Obama also called attention to the low rates of participation among women and minorities in the STEM fields, urging action to counter the "structural biases" within the STEM fields that have made those subjects feel like hostile ground for some students.
"We want to make sure every single one of our students -- no matter where they're from, what income their parents bring in, regardless of their backgrounds -- we want to make sure that they've got access to hands-on [STEM] education that's going to set them up for success and keep our nation competitive in the 21st century," Obama said. "But the fact is, is that we've got to get more of our young women and minorities into science and technology, engineering and math and computer science."
"[W]e're not going to succeed if we got half the team on the bench, especially when it's the smarter half of the team," he added to laughter from the audience, though it was plain that he was serious about the point.
As part of the administration's STEM push, the Education Department is issuing guidance to states, districts and individual schools to help secure federal grant funding to improve instruction in technical fields, including computer science.
In addition to Oracle's pledge of funding for computer science programs, more than 500 schools have committed to broadening access to computer science education, thanks in part to support from Code.org, a nonprofit group promoting education in the field.
Another nonprofit group, US2020, is supporting a new online program to help STEM workers find volunteer and mentor opportunities.
The White House announced a host of other initiatives to promote STEM education from federal agencies, schools and private-sector groups.
Before delivering his remarks in the White House East Room, Obama made the rounds at the science fair, chatting with several of the student teams about their projects. In the course of those conversations, he asked each of the students how they became interested in science. Their responses, he said, made a powerful argument for promoting STEM education from the earliest stages of school.
"[T]here were a couple whose parents were in the sciences, but for the majority of them, there was a teacher, a mentor, a program -- something that just got them hooked," Obama said. "And it's a reminder that science is not something that is out of reach, it's not just for the few, it's for the many, as long as it's something that we're weaving into our curriculum and it’s something that we're valuing as a society."
This story, "Obama announces computer-science-for-all initiative" was originally published by CIO. | <urn:uuid:bfb822a0-9536-4b83-a89f-806466fd75ff> | CC-MAIN-2017-04 | http://www.itnews.com/article/3057074/education/obama-announces-computer-science-for-all-initiative.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00307-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.975506 | 757 | 2.6875 | 3 |
When we left off, we were looking at Frame Relay as an example of a Layer-2 infrastructure. Some customers will have very few sites, and others may have thousands. The logical topology (hub-and-spoke, partial-mesh or full-mesh) for a particular customer would be negotiated between that customer and the provider, based on the customer’s number of sites, which sites communicate with which, and with what bandwidth and latency requirements. Of course, the customer’s cost increases as the number of PVCs configured and their bandwidths increase. Once the negotiations are complete, the WAN provider will establish the required PVCs between the customer’s sites.
Speaking of money, it's in the provider's best interest to have as many customers as possible, and a large provider may have thousands. Refer to Figure 1, which shows the PVCs for two customers, "A" (in red) and "B" (in blue).
Note that while each customer has three sites, “A” has a full-mesh, while “B” has a hub-and-spoke (with site B1 as the hub), and that although the customers are sharing the same physical infrastructure, their traffic is kept separate. Thus, each customer has a VPN (Virtual Private Network), which means that the provider’s network acts as if there is a private WAN for each customer.
As you can see, Customer A’s site A1 has the same IPv4 address space as does Customer B’s site B1, etc. Since we’re using VPNs (which act as logically separate networks) there are no “address collisions” despite the overlapping address spaces. In fact, the provider’s addressing scheme (Layer-2) is completely independent of those of the customers’ Layer-3 networks. In other words, the provider doesn’t know or care what IPv4 subnets the customers use, or whether the customers are using IPv4 at all (they could just as well be using IPv6, IPX, Appletalk, DECnet, SNA, or whatever). As long as the packet can be encapsulated using Frame Relay, the provider can get it where it needs to go.
Also, because the provider doesn't know what routed protocols the customers are using, the provider has nothing to do with the customer routing protocols, either. The system we're using is commonly referred to as an "overlay VPN", because we have "overlayed" (superimposed) a VPN for each customer onto the provider's Layer-2 network.
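To see that independence at the customer edge, consider a hedged Cisco IOS sketch for site A1, with PVCs to sites A2 and A3 (the DLCI numbers and addresses are invented for illustration):

    interface Serial0/0
     encapsulation frame-relay
     ip address 10.1.1.1 255.255.255.0
     ! Map each remote site's next-hop IP address to the local DLCI
     ! that identifies the PVC toward that site
     frame-relay map ip 10.1.1.2 102 broadcast
     frame-relay map ip 10.1.1.3 103 broadcast

Customer B's routers could carry exactly the same IP addressing without conflict, because the provider's switches forward on the DLCIs alone and never inspect the Layer-3 header.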
Considering all of this, we can summarize the advantages of overlay VPNs as follows:
- A common physical infrastructure is shared between customers.
- Customers can independently choose any logical topology they want.
- Customers can use any combination of Layer-3 protocols they desire.
- There are no “address collisions” between customers.
- The provider does not participate in customer routing.
Since they offered great flexibility at reasonable cost, overlay VPNs using X.25, ATM and Frame Relay became very popular over the past few decades.
Next time, we’ll look at the disadvantages of using overlay VPNs.
Author: Al Friebe | <urn:uuid:a2f58f01-b32d-48db-9d36-5ee5378b8c5f> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2010/04/15/mpls-part-3/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00123-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95963 | 708 | 3.125 | 3 |
Last week I noted how the World Wide Web Consortium's (W3C) work on XML was probably its most significant contribution to the online world, at least in recent years. That commitment was underlined with the release of the XML Conformance Suite. According to the accompanying press release, the suite consists of more than 2,000 test files for establishing the conformance of code to the XML 1.0 (Second Edition) standard.
Just as important as this bolstering of current standards is the work driving them forward. It is hard to tell from the rather muddled XML activity statement, but the W3C is very productive here. Some of this work seems rather specialised, like the XML Schemas, although the extent of Robin Cover's XML Schemas links suggests that this is a subject of lively interest to those in the field.
For the rest of us, three other areas look likely to have a more immediate impact on the way we use the Web, especially in business. The first of these is the Extensible Stylesheet Language (XSL). This is a good example of how far the W3C has moved on since the early days of XML.
XSL was originally designed as a way of applying a stylesheet to a general XML file to produce another file, for example HTML, for display in a browser. In this respect, it was very similar to the W3C's other stylesheet standard, Cascading Style Sheets (CSS), and the resulting confusion forced it to explain why two such standards were needed.
But XSL has moved on, and now consists of three parts (see http://www.w3.org/TR/xslt): XSL Transformations (XSLT), which handles the transformation of one XML file into another; XML Path Language (XPath), which is a language used by XSLT to access or refer to parts of an XML document; and XSL Formatting Objects, which handles the actual formatting. Much more about this increasingly rich area can be found in Robin Cover's pages devoted to the subject.
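A small, self-contained example (mine, not from the W3C documents) shows how the pieces divide the work: XPath expressions select nodes, and the XSLT templates turn them into HTML.

    <?xml version="1.0"?>
    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- Turn a simple <catalogue> of <book> elements into an HTML list -->
      <xsl:template match="/catalogue">
        <html>
          <body>
            <ul>
              <!-- book[price &lt; 30] is an XPath expression selecting the cheaper books -->
              <xsl:for-each select="book[price &lt; 30]">
                <li><xsl:value-of select="title"/></li>
              </xsl:for-each>
            </ul>
          </body>
        </html>
      </xsl:template>
    </xsl:stylesheet>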
The XML trio of XML Pointer (XPointer), XML Base and XML Linking (XLink) is about creating a kind of generalised hyperlink in XML documents.
XLink allows elements to be inserted into XML documents in order to create and describe links between resources. As well as the simple unidirectional kind of links found in HTML documents, other, more sophisticated variants are also possible.
XPointer allows the internal structures of XML documents to be addressed (and external ones, too), and builds on XPath, which is also used by XSL Transformations. Once again, Robin Cover's resources provide invaluable help in disentangling what is a complex and intertwined set of standards.
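To give a flavour of the linking trio, a "simple" XLink is expressed entirely through attributes in the XLink namespace, and an XPointer fragment can pick out a location inside the target document (the element names and target here are invented):

    <report xmlns:xlink="http://www.w3.org/1999/xlink">
      <!-- A simple XLink behaves much like an HTML hyperlink -->
      <see xlink:type="simple"
           xlink:href="figures.xml#xpointer(//figure[3])"
           xlink:title="Third figure in the companion document">
        See the supporting figure.
      </see>
    </report>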
The third area where the W3C is doing some potentially important work is that of XQuery. This is a query language for XML documents that is designed to bridge the gap between the worlds of traditional databases and online documents. The kind of situations where XQuery might be used are illustrated in a background paper from the W3C.
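The SQL-like feel comes from XQuery's FLWOR expressions (for, let, where, order by, return). A minimal, illustrative query against an invented books.xml document might read:

    for $b in doc("books.xml")/catalogue/book
    where $b/price < 30
    order by $b/title
    return <cheap>{ $b/title/text() }</cheap>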
It is striking that already a number of commercial products support XQuery. The presence of both Oracle and Microsoft hints at great things for this nascent technology. More background is available from the relevant page from Cover. | <urn:uuid:2dc5aa21-db39-49e2-b463-422926980f28> | CC-MAIN-2017-04 | http://www.computerweekly.com/opinion/XML-the-next-generation | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00333-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946035 | 712 | 2.734375 | 3 |
Using open source software, the National Security Agency was able to gather a community of professional and amateur security experts together to make unprecedented security protections available to the public.
The National Security Agency has a mission. It is not just the nation’s code keeper and code breaker, but it must ensure the security of the nation’s digital infrastructure. Ironically, it had a security problem: the ecosystem for software that was keeping top secret information secret was deeply broken. There was little competition, no innovation and this essential software was expensive, slow to market, and antiquated.
Multi-Level Security, or MLS, is a complex problem: how do you allow data with many different security classifications to exist on the same machine? MLS software is difficult to get right, and easy to get wrong. It is subject to a stringent certification process. Although useful in certain areas of the private sector, there's really only one customer for this kind of software: government. Once you've deployed MLS software, it's very difficult to move to another solution, as every MLS system is different. These are near-perfect conditions for very expensive, proprietary software that doesn't innovate.
The NSA didn’t care for this situation at all. It was spending too much money to acquire software that was quickly obsolete. It was dependent on a handful of companies who had every reason to lock the NSA in to their platform. What’s worse, the private sector had no ready access to this technology that could be enormously helpful in the war against hackers and viruses.
The NSA had a new approach to this security problem, called Flexible Mandatory Access Control. They also had a new approach to bringing this theory into the real world. They knew that if they could solve the MLS problem with an open source implementation of this new approach, it would simultaneously reduce the cost of the software, open the field to new innovations, and make the technology available to the private sector. In one stroke.
So the team did something unprecedented: they took their proof of concept and released it to the world as a project called SELinux. It began as a set of changes to the open source Linux operating system, but soon it was completely integrated. What was once expensive and proprietary was now available to millions of Linux users and developers, at no charge.
At first glance, this is strange. Detractors of the SELinux project warned that this software must have backdoors that would give the NSA access to their computer systems. Others claimed that an open source security project could never be secure, since anyone could see where the flaws might be.
After careful scrutiny — scrutiny on a scale that was only possible because the software was open — it was quickly determined that SELinux had no backdoors. Likewise, the NSA knew that the best way to ensure the security of the software was to make it open and available to anyone's scrutiny. They knew that software is never perfect, and the most effective strategy for identifying and quickly fixing security problems is to make sure that anyone can find the flaws, and anyone can provide a fix.
The SELinux project now has a life of its own. There’s a broad community of developers working on new SELinux features and improvements. The project solves much more than the MLS problem. It now provides a generalized framework for access control that’s as useful to the private sector as it is to the government. A number of companies now provide consulting and development services around SELinux. The availability of the SELinux project has drastically expanded the use of these controls and created a private sector market that maintains the software over time, which is exactly what the NSA needed.
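To give a sense of what that framework looks like in practice, SELinux policy is written largely as type-enforcement rules. A minimal, illustrative fragment (not a complete or production policy) granting a web server read-only access to its content might read:

    # Processes labelled httpd_t may search content directories and read the
    # files inside them; nothing here allows writing or executing that content.
    allow httpd_t httpd_sys_content_t:dir  { getattr search open read };
    allow httpd_t httpd_sys_content_t:file { getattr open read };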
Millions of Linux users now protect themselves from attack with SELinux, dramatically improving the security of computer systems around the world. Healthcare companies can now use sophisticated security measures to protect personal health records and meet the government-mandated HIPAA requirements. Cloud computing has introduced serious security concerns, and SELinux is being used to safely and efficiently allow many users to share the same computing resources.
Open source software creates markets. It spreads innovation, and harnesses the collective intelligence of every member of the community. Without open source, the NSA would still be saddled with expensive and antiquated MLS systems. That’s the power of open source software: we can do more when we work together. | <urn:uuid:1d06aa98-4d61-45c9-bce1-3ef46cec98f8> | CC-MAIN-2017-04 | https://atechnologyjobisnoexcuse.com/2009/07/the-nsas-security-challenge/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00333-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.969924 | 907 | 2.6875 | 3 |
We know it's fashionable to find an IT crisis to worry about every year or so, but we really are facing one now—the end of the "free lunch". For years, you've been able to write poor quality, bloated code, secure in the knowledge that next years' computers will be a bit faster and will run your bloated application fast enough.
Not any longer. This is because, although computers are still getting faster, they're not doing this by cranking up the clock speed of the CPU any more. Instead, CPUs will run more slowly but there will be more of them, so overall throughput will still increase year on year. This approach is inescapable, because heat production goes up with increasing clock speed rather faster than processing power does; and the largest datacentres (used by people like Amazon and eBay) are finding that the power available from the electricity grid (not just to run those CPUs but also the associated air-conditioning) is limiting growth.
The crisis comes because many applications written in the past, many of which are still in use, can't multiprocess very well. Put them on your new state of the art computer and they only run on one of its CPUs and therefore slow down (the clock speed is lower). Writing programs that run on arbitrarily many CPUs is hard and many programmers can't do it very well. When they try, they find that things sometimes run in the wrong order (especially when the system is heavily loaded) or lock up solid as one process waits for another process to release resources—if the other process is itself waiting for resources held by the first process. A lot of software isn't designed for multiprocessing (multi-CPU) environments and many programmers aren't equipped (whether from lack of ability or lack of training) to rewrite it. It's a crisis recognised by Intel (which invented the free lunch with Moore's Law) and even by Microsoft, which once appeared to endorse the writing of bloatware, as a rational response to the free lunch.
However, there is hope. J2EE can handle multiprocessing quite well, for small online transactions, and people are producing multiprocessing frameworks that let programmers write simple single-threaded programmes which are then run on multiple CPUs, more or less transparently to the programmers. It is simply too dangerous to let programmers loose on locks and semaphores—there are too many opportunities for subtle bugs that only turn up in production, when things get stressed.
One good example of such a framework comes from what is perhaps an unexpected place—Pervasive Software, vendor of embedded databases going right back to Btrieve. In fact, its DataRush framework is a logical outcome of its database expertise. It is aimed at large batch oriented database applications of the sort that are starting to require unacceptable amounts of time to run on conventional computers as data volumes shoot past the terabyte barrier. The DataRush approach is based on dataflow processing and an understanding of the properties of the data—data analysis or database specialists understand the approach easily; programmers sometimes take a little longer. There is a DataRush FAQ here.
Nevertheless, we aren't going to get a new free lunch to replace the one we mentioned earlier as now ending. People sometimes talk as though the multi-CPU issues will go away just as soon as we get cleverer compilers that can multi-process serial programmes automatically—but this is an unrealistic expectation. Compilers will improve in their exploitation of parallel processing, but they can't (in general) anticipate patterns in the data the compiled program will be fed, nor can they know about activities which can or can't be parallelised safely for business reasons—unless you tell them with compiler ‘hints’ (and these imply that the programmer understands parallel processing and future data patterns).
Even DataRush, however, isn't a free lunch. Taking advantage of its parallelisation techniques currently involves writing Java code "customizers", which are invoked during the DataRush compile cycle. The customizers can take advantage of information such as the number of processors configured to partition data and control the parallelisation appropriately. These techniques are useful for what are traditionally called batch-oriented or data analytics applications.
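DataRush's customizer API is not something we can reproduce here, but the underlying idea (partition the data according to the number of available processors and work on the chunks concurrently) can be sketched in plain Java; the class and variable names are invented:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class PartitionedSum {
        public static void main(String[] args) throws Exception {
            final double[] data = new double[10000000];   // stand-in for a large column of values
            java.util.Arrays.fill(data, 1.0);

            int cpus = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cpus);
            List<Future<Double>> partials = new ArrayList<Future<Double>>();

            // One contiguous chunk per CPU; each chunk is summed independently.
            int chunk = (data.length + cpus - 1) / cpus;
            for (int i = 0; i < cpus; i++) {
                final int from = i * chunk;
                final int to = Math.min(data.length, from + chunk);
                partials.add(pool.submit(new Callable<Double>() {
                    public Double call() {
                        double sum = 0.0;
                        for (int j = from; j < to; j++) sum += data[j];
                        return sum;
                    }
                }));
            }

            double total = 0.0;
            for (Future<Double> f : partials) total += f.get();   // waits for each chunk
            pool.shutdown();
            System.out.println("Total: " + total);
        }
    }

The point of a framework is to hide this boilerplate and, more importantly, the subtler hazards that appear as soon as the chunks are no longer independent.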
The "dataflow" technique of computer processing it uses (based on Kahn networks and Parks scheduling—see the article by Jim Falgout and Matt Walker here and its references) has been around for some time, although there isn't space to go into its technicalities in this article. However, it requires a new way of thinking about pipelined applications, a new process description language (DRXML) and a new graphical design approach in the Eclipse IDE (and programmers don't like change). DataRush doesn't make you throw away your existing programmes and rewrite them from scratch, but it isn't a simple recompile or automated port either—some rewriting of code is necessary. And it probably requires a degree of professionalism from your programmers, who'll have to follow established good practice.
That said, there aren't any alternative magic solutions out there that we can see; other solutions have their own problems (typically, complexity of programming and/or expense). For the subset of problems DataRush is suited to, it delivers orders of magnitude improvements in throughput. For new applications, Jim Falgout (DataRush Solutions Architect at Pervasive) says that its approach fits well, in practice, with innovative technologies such as the Vertica column-oriented database and Azul's 768-core appliances. He has even contemplated producing a DataRush data processing appliance, superficially similar to those produced by Netezza but, in DataRush's case, running on commodity hardware and targeting data-intensive applications.
The issue of changing the programming culture is, in part, being addressed by using existing open source and standards-based environments. The DataRush GUI is based on Eclipse 3.3, it supports JMX performance monitoring, it exploits the parallel processing features of Java 6 and it now includes support for several scripting languages. It is supported on Windows XP, Windows Server 2003, Vista, Linux (Red Hat, Suse and Azul), HP-UX, AIX and Solaris and is currently in Beta 2 (you can download it from here—registration required). However, organisations will need to address the cultural issues internally as well—probably by providing interactive (face to face) training and by encouraging programmers to take part in the DataRush community.
The issue of parallel processing on multi-cpu processors (and think in terms of hundreds of CPUs, not just 4 or 8 way processors) is a real one and will require new skills from programmers and significant cultural change. DataRush seems, to us, to promise a cost-effective way of processing very large data volumes on modern multi-cpu processors—without the need to load your data into a heavyweight data warehouse-style database. And (despite the framework as a whole being in beta) it is already the basis for a shipping product called Pervasive Data Profiler, which performs computationally intensive calculations (such as Sum, Avg, Min, Max, Frequency Distributions, Tests, regulatory compliance checks, etc.) on arbitrary columns, in all the rows/records of a table simultaneously. And, intriguingly, perhaps DataRush's underlying processing paradigms will offer a more generalised way of thinking about applications in future, now that the shortcomings of the, essentially serial, Von Neumann architecture are being recognised. | <urn:uuid:e535eded-53b4-44f1-9752-ab5a4ddaab60> | CC-MAIN-2017-04 | https://www.bloorresearch.com/analysis/the-end-of-the-free-lunch-is-a-final-course-appearing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00573-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950682 | 1,527 | 2.734375 | 3 |
Today is Digital Learning Day 2014, a day dedicated to sharing new and better ways to engage children in digital learning, making them better prepared for success in college and a career. Schools, libraries, and education centers around the country are taking the Digital Learning Pledge to support the effective use of technology to improve the education of children.
In addition to being a Digital Learning Day National Core Partner, NCTA and Cable in the Classroom, the cable industry’s education foundation, created InCtrl, a series of free, standards-based lessons and videos that teach digital citizenship.
Perhaps a new term for some, digital citizenship is the idea that we can empower students to make thoughtful decisions online and develop a sound digital foundation for the rest of their lives. InCtrl is a holistic and positive approach to teaching digital citizenship by helping students learn how to be safe and secure, as well as smart and effective participants in a digital world. You can learn more about InCtrl by watching the video below.
When the program was released in 2013, Eric Langhorst, a teacher at Discovery Middle School in Liberty, Missouri wrote, “Teaching students to be good digital citizens is an extremely important topic. Unfortunately it is too often neglected because adults are not sure how to address and explain the issues. The InCtrl curriculum helps teachers start the discussion about digital citizenship topics which can then be supported by and continued with discussions at home.” We couldn’t have said it better ourselves.
And while we’re particularly proud of InCtrl and the success it’s had in bringing the topic of digital citizenship to the front of the education technology conversation, it’s not the only example of using the power of cable broadband to grow and improve digital learning. For example:
Cartoon Network launched Stop Bullying: Speak Up, a multi-platform pro-social initiative focused on motivating bystanders to speak up and help prevent bullying. Organizations participating in the Partner Network include the U.S. Department of Health and Human Services (HHS), Boys & Girls Clubs of America, Barnes and Noble, CNN, Facebook, and Time Warner.
And Discovery Education Techbook, a digital textbook series, offers robust digital content to engage students and enhance their digital learning. A multimodal platform, the Discovery Education Techbook includes text, video, virtual experiments and collaborative projects. It even allows students to change reading levels for better comprehension.
For young people, the programming and access that cable television and broadband delivers can transform their education. Digital literacy and an expansive, accessible education is not a luxury – it’s a necessity. We’re pleased to be a part of Digital Learning Day and we commend the many organizations who not just today, but every day, recognize the importance of digital learning. | <urn:uuid:62a68ecd-497d-4ad9-a95d-5ca2394cdeff> | CC-MAIN-2017-04 | https://www.ncta.com/platform/industry-news/how-digital-learning-day-is-making-education-better-for-all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00297-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933184 | 568 | 3.265625 | 3 |
With the coming of age of civil aviation in the 1930s and 1940s, the rural settings of many U.S. airports began to change. By the 1950s, expanding metropolitan centers surrounded formerly remote airfields with residential and commercial developments. As more powerful airliners rolled off the drawing boards, noise became an issue for those living nearby. With the arrival of commercial jets in the late '50s, neighborhoods under the flight paths of major airports became almost unlivable.
Mounting complaints, declining property values and class-action lawsuits by residents and cities prompted the aviation industry and state and federal governments to find solutions to the problem. California passed Title 21 and Title 24, mandating that airports and local governments provide mitigation measures. By 1979, Federal Aviation Regulations Part 150 enabled airports and local jurisdictions to apply for funding to either sound-insulate affected homes or acquire them and relocate their owners. In the 1960s and 1970s, the Los Angeles Department of Airports -- now Los Angeles World Airports (LAWA) -- removed 2,800 homes and relocated 7,000 residents from around Los Angeles International Airport (LAX).
Airport Noise Mitigation Program
In compliance with California regulations, the LAWA Noise Management Bureau (NMB) recently completed plans for an Airport Noise Mitigation Program (ANMP) to sound-insulate approximately 25,000 residences surrounding LAX. The construction phase of the program, begun in 1996, will be completed in the next five to seven years, at a cost of more than $200 million. Funding for the LAWA program is generated from "passenger facility charges," a $3 surcharge on departing passengers, authorized by the Federal Aviation Administration. Jurisdictions in the program areas can use LAWA funds and/or FAA grants to underwrite sound-insulation work in their respective areas. At present, ANMP applies only to residences. In the future, schools, churches, hospitals and other sensitive land uses may be added to the program.
To determine which land uses (number and type of parcels) qualified for sound-insulation, LAWA set up a sophisticated network of noise-monitoring stations in jurisdictions surrounding LAX. Data from the network is loaded into an ArcInfo GIS running a program that models noise contours at the 65, 70, and 75 decibel (dB) levels. The contours are overlaid on a parcel-level basemap. Detailed information on parcels within the contours is then obtained from the related database, also used to phase qualifying residences into the sound-insulation schedule.
NMB Environmental Supervisor Mark Adams pointed out that contours also help define costs. "For example, we know that a single-family home at the 75dB contour is going to cost more to insulate than one at 65dB. To estimate the total cost of the project and compute a construction schedule, we need a fairly accurate estimate of the number and type of homes at these noise levels. The contours help us to access that information."
Data, tables, maps and information on all ANMP phases were required to be submitted in a lengthy annual report to the California Department of Transportation's Division of Aeronautics. Preparation and publication of the documents required a month or more. Wyle Laboratories, acoustical engineers and prime contractor for the program, was responsible for coordinating development of the initial report. Psomas and Associates, civil engineers with GIS expertise, had the task of updating and expanding a parcel-level database, and developing tools to speed up preparation of ANMP reports.
According to Psomas Vice President Matt Rowe, the Santa Monica, Calif.-based firm began with a parcel-level database developed and maintained by NMB since the early 1980s. Although originally intended for a different project, the database covered much of the LAWA noise-mitigation planning area. An updated and expanded version, Rowe explained, could be used not only for spatial data management and noise analysis but also for monitoring the sound-insulation phase of the program.
"Obviously, LAWA is not going to insulate and/or acquire 25,000 properties all at once," he said. "They will use the database to phase in that part of the program, beginning with the most heavily impacted areas close to the airport, and work their way out."
Using ArcView and AutoCAD, Psomas expanded the original database to encompass additional areas in the five jurisdictions surrounding LAX. The process included updating general community plans and incorporating changes in jurisdictions, zoning and housing. The firm also populated the database with local-use codes, parcel numbers, TRW information, census data from TIGER line files and Thomas Brothers street maps. General community plans were then overlaid on the basemaps, and the noise contours placed over these.
With ArcView AVENUE and a previous ANMP report as a template, Psomas programmers developed a structured-query application that automated many of the complex steps involved in querying the database, and in identifying and quantifying spatial relationships.
Wyle used the data and GIS application provided by Psomas to identify parcels within the contours; develop tables, reports and maps for noise mitigation plans; and calculate cost estimates and construction schedules -- all required for the annual report. Since neither Wyle nor NMB is a high-end GIS developer, the application enabled them to produce the ANMP report in considerably less time than with earlier methods.
"What used to take a month," said Psomas Project Manager Matt Caraway, "now takes three to four days."
"By automating much of the report," Adams added, "Psomas enabled all our GIS users to produce a relatively sophisticated product regardless of their skill levels."
Projected Superjets Noise
Psomas is also assisting LAX master planners Landrum and Brown in analyzing the projected noise from 550-passenger superjets now on the drawing boards. Airliners of the 21st century will have larger, more powerful engines and will need runways of two miles and longer. At this point, however, runway configurations for LAX are in the study phase. Final approval depends on the Los Angeles City Council and numerous federal and state regulatory agencies.
Psomas' role in the project is similar to its work with the ANMP; the firm provides the database, and overlays the projected noise contours from Landrum and Brown onto the updated basemaps. Planners use the data to calculate the probable noise impact on surrounding communities.
GIS enabled LAWA not only to expedite the complex process of documenting the airport noise mitigation program, but, as Adams pointed out, it also enabled them "to get a better handle on the scope of the program," particularly in identifying and scheduling residences for sound-insulation construction. The technology is currently helping LAX airport planners estimate with greater accuracy some of the environmental costs of accommodating the next generation of superjets.
Bill McGarigle is a writer specializing in communication and information technology.
October Table of Contents | <urn:uuid:d6a1f3e1-a382-42ea-8fd6-c9368fab78fb> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Airport-Soundbytes.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00021-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929275 | 1,439 | 3.09375 | 3 |
There is a certain beauty in working with numbers. They can be collected, averaged, analyzed and combined to create new definitions and projections. In business continuity, we rely on numbers to determine the likelihood of an event occurring, and how best to offset it.
Humans base risk-related decisions on two systems of thinking, the intuitive and the analytical (Gardner 2008). Most of us are familiar with the analytical side of risk, as in the case of auto insurance. Actuaries compare your key characteristics against the national claims averages of others like you to determine the premium you must pay to offset the company’s financial risk in possibly having to pay for your accident.
This system of thinking is logical, affirming our ability to make better judgments about risk using clear statistics like hazard, exposure, consequence and probability. The analytical system works slowly; it examines, calculates and considers all evidence. A decision based on facts is easy to explain, as in this common equation for risk: Risk = Probability * Impact.
While this equation has long been used to measure risk, the risk management community has seen the value of a missing variable — perception. What we believe or do not believe about risks has an enormous effect on how well we prepare ourselves for them and the action we take when they occur. This is where the intuitive system of thinking comes into play. As it works without our conscious awareness, the intuitive system is automatic, emotional and swayed by our culture, social environment and personal experiences.
The psychology of risk is a burgeoning topic. We are just beginning to research and understand patterns of perception, processing and response to threats. While there is a lot we don’t know, having an understanding of the patterns discovered so far can better prepare us to recognize these patterns when we encounter them in real life.
Journalist Amanda Ripley has reported on human response and survivor stories in recent natural and manmade disasters. After being instructed to evacuate for a hurricane or flood, most people check at least four sources (e.g., family, neighbors, news sources, officials) before actually deciding to leave (Ripley 2008). In such a situation, while the facts are straightforward — there is a hurricane coming — and the logical response is to get moving, our intuitive system still demands confirmation from familiar sources before encouraging action.
Familiarity breeds trust. Therefore trust in the source of a communication, regardless of how serious or accurate the information, is critical in predicting behavioral outcomes. Decisions based on trust or other value-based factors are hard or even impossible to explain because they are based on automatic settings operating within our unconscious. This makes our simple equation less easy to compute: Risk = Probability * Impact * Perception.
If value-based factors such as trust are so difficult to pin down, and vary for each individual, why do we need to take them into account when managing risks? These additional considerations not traditionally factored into the analytical system of thinking explain why our fears don’t always match the facts, cluing us in to how our perceptions can completely alter the outcome of an incident.
Let’s examine what factors into fear. Most of us are more concerned about a plane crash than an automobile crash, and more fearful of sharks while at the beach than of developing skin cancer. In reality, it is 67 times riskier to travel the same distance by car than by plane, and annually there are only 6 deaths from shark attacks compared to 48,000 from melanoma (Sunstein 2002). So what drives this false sense of risk?
Sunstein (adapted from Paul Slovic, 1993) has identified several common factors that influence our perception of risk, including:
- Catastrophic potential: If fatalities occur in large numbers in a single event (instead of dispersed over time), our perception of risk rises.
- Familiarity: Risks that are new or rare cause more fear than familiar ones.
- Understanding: If we believe that how an activity or technology works is not well understood, our sense of risk goes up.
- Personal control: We worry more if we feel the potential for harm is beyond our control (e.g., a passenger in an airplane versus a driver of a car).
- Media attention: More media attention to a risk means more worry.
- Dread: If the effects generate fear, the sense of risk rises.
- Future generations: If the risk threatens future generations, we worry more.
- Accident history: A history of bad events boosts the sense of risk.
- Reversibility: If the negative effects of an event cannot be reversed, perception of the severity of risk rises.
- Origin: Man-made risks seem more threatening than those of natural origin.
- Timing: More immediate threats loom larger than those whose impact will not be felt for some time.
The above factors influence how we perceive a risk, threat or incident and will subsequently influence our response, given that perceptions often override facts. This is important to understand when thinking about how people will respond in a crisis.
The Impact Of Stress
In a crisis, people behave in unexpected ways. You cannot know how you will respond until you are actually facing a threat, regardless of how much you have thought about or anticipated the event. There are many options we are faced with in responding to an incident, and we either consciously or instinctively make a choice about how to react.
When we consider how we process information related to risk and incidents, we must look at the way our systems are affected by stress. Stress is not an element that is exclusive to disasters or incidents; some studies indicate we respond to some level of stress 100 times a day. But when we are faced with a life-or-death situation or a terrible incident, these stress levels can skyrocket way beyond what we are used to dealing with. When our stress levels are out of control, we often find our mind, and sometimes even our body, also out of control. Learning about how stress and fear affect our mind and body can help us anticipate behaviors that can facilitate or inhibit a successful response to incidents.
Some common sources of disaster-related stress include a threat to our values (our core belief about what is right and wrong), our personal finances, our family and our beliefs about the world. We don’t feel in control of what happens, and we don’t know what’s going to happen or how much of our own actions might influence the outcome. We don’t know if what we are doing is right. We are forced to act and respond very quickly in many incidents without time to weigh the options and outcomes like we normally do. We don’t have enough information about what is happening, what has happened, how it has affected things, or what it might affect further down the line. All of these contribute to stress in respondents of disaster, particularly for those with a leadership role whose decisions will affect others.
Particularly for those of us serving in crisis team roles during an incident, another source of stress comes from tending to our personal life and our work life. For incidents that affect the region and not just your company, our personal life is also likely to come into impact with the incident. Even when the incident is contained to the workplace or the home environment, stresses from one environment are likely to weigh on the other.
This leads to what is called “person-role conflict,” which refers to the tension between our personal and professional roles and responsibilities. Members of a business continuity or crisis team will be expected to spend many hours resolving the incident and contributing to recovery of the site. If you also have a lot of demands for attention from your personal life, or if you are concerned for the welfare of your loved ones, it can be challenging to focus your attention on the company’s recovery, and difficult to cope with the stress from balancing both environments (Greenhaus & Beutell 1985; Kahn, Wolfe, Quinn, Snoek, & Rosenthal 1964).
When under severe stress or fear, our ability to intake and interpret information falls drastically. Message retention falls by 80%. Some of the ways we cope with this is by reducing complexity of the information heard; acting on our pre-existing beliefs; seeking analogies in the current situation to problems we already understand; and eventually, blocking out new information when it becomes too much to process.
Understanding this is critical to communicating with others during an emergency. One way to treat this is to focus most on what people hear first. Get to the point by giving key information first, instead of bombarding your audience with too much (Covello 2010).
Given the difficulties associated with communicating during stressful situations, it is critical to communicate regularly prior to an emergency. Consistent communication of information needed during an emergency will allow for internalization of the information for when it might become relevant. Developing and testing communications is crucial for ensuring that recipients understand information as intended.
One of the ways stress impacts our ability to successfully respond to an incident is through its impact on our decision-making abilities. If our stress level has crossed over into the “too high” zone, it will be difficult to think rationally, and in stressful incidents we won’t have the information, time or mental capabilities to use our typical reasoning processes.
One manifestation of this impact to decision-making ability is known as “Cognitive Lock-In.” Since our ability to intake information is reduced when under stress, our more complex reasoning capabilities decrease. This results in a tendency to make an initial decision, and stick with it, despite later information indicating a better course of action. In a state of intense concentration, decision-makers desperately want to solve problems; new information or evidence that distracts from what has already been decided as a good solution is sometimes treated as distracting or annoying for causing “cognitive dissonance,” and we may disregard important new information after we have “locked in” to our early decision (Rouse and Morris 1986).
Another way decisions are often affected is by what is known as task saturation, or a focus on solving small problems and losing sight of the big picture (Dörner 1997). A strong desire to solve problems in a crisis can manifest in a tendency to hyper focus on smaller problems or find relief in manual labor that can create a feeling of contributing to a solution. While small problems and physical problems are important, we have to keep sight of the big picture.
A third common way we rush to decisions when forced to make decisions under stress is known as “Groupthink.” Groupthink references a condition where group members are more eager to come to an agreement and minimize group conflict than to come to the right decision (Kamau and Harorimana 2008). It has a tendency to happen in close groups when put under stress, and people may censor themselves or refute contradictory evidence to maintain the decision of the group. Groupthink can affect even seasoned problem-solvers if the mix of people in the group comes from a similar background, and particularly if there is a very assertive leader.
This impact on decision-making can affect even very savvy leaders under times of stress. This is one of the reasons why planning is so important. Thinking through the decisions you will have to make while you are not under the stress of the incident helps ensure that you are thinking rationally and making well-thought-out decisions.
By increasing our knowledge about what drives fear and behavior, we can improve communication and training around these elements to gain a greater sense of control over our risk environment, lessen our distress of the unknown, and become better prepared to react. | <urn:uuid:14d5462c-b342-4dec-8087-8d10cab6214e> | CC-MAIN-2017-04 | http://www.continuityinsights.com/article/2014/02/risk-shrink-human-element-risk | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00443-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947132 | 2,389 | 3.453125 | 3 |
Cisco Telepresence Zones
There are many zones in Cisco TelePresence products; so many, in fact, that there is a lot of confusion about the different zones and their uses. The concept of zones originally came from the H.323 RAS protocol; zones are utilized by gatekeepers to resolve phone number to IP address mapping and to manage device bandwidth using Call Admission Control (CAC). The concept of zones in Cisco TelePresence products is confusing to many people, but this white paper should clear up some of the confusion.
Material & Resource Use
PRODUCT MATERIAL CONTENT
Information technology (IT) devices contain substances that are essential to the functionality and safe use of the product, but some of them can adversely impact ecological and human health when not properly managed. To protect people and the environment, EMC takes a proactive approach to minimizing the use of these substances in our products by researching and, where feasible, substituting alternative materials. We also take measures to prevent these substances from entering the natural ecosystem. To learn more, visit Product End-of-Life.
DESIGN FOR ENVIRONMENT
The EMC® Design for Environment (DfE) program incorporates environmental considerations throughout product design. EMC engineers take what we have learned about the environmental impact of existing product designs and use that knowledge to implement best practices for ongoing design. To learn more, visit Efficient Products.
To eliminate environmentally sensitive materials in our products, viable alternatives must be found. When we believe that a material may be of concern, we take a precautionary approach by exploring alternatives that are safer for ecological and human health. We prioritize the substances to assess, and then collaborate across the industry and academia to identify and qualify alternatives that meet the same or higher standards of reliability, cost-effectiveness, performance, and availability as the materials we currently use. We implement substitutes in new designs where feasible.
Flame retardants in IT products are essential for product functionality and human safety. Halogens are an ingredient in flame retardants commonly used in laminates for printed circuit boards (PCBs), but there are concerns about halogens’ impact on the environment and human health. EMC has been working for several years to identify halogen-free substitutes that meet the rigorous technical requirements for our products.
In 2011, EMC successfully shifted the majority of its new PCBs to a halogen-free material. However, that halogen-free substitute could not be used in our high-performance PCBs, which have more stringent requirements. Because a suitable halogen-free substitute did not exist on the market, EMC decided to develop a solution.
In the spring of 2012, EMC invited chemists and engineers from a PCB manufacturer and a laminate supplier to work with EMC on this challenge. EMC set the vision to identify, test, and implement a new flame retardant that is halogen-free, meets the technical requirements of our high-performance PCBs, and is affordable to implement. EMC’s own experts in PCB design, signal integrity, and electrical and mechanical engineering participated in the project.
By the end of 2012, this collaborative group identified a halogen-free material that meets EMC’s requirements and will be implemented on our high-performance PCBs in 2013.
Originally, EMC was the only customer for these halogen-free substitutes. Today, our suppliers report that there is significant interest from other companies. By driving this effort with our suppliers to identify these substitutes, EMC is not only helping our own business, but also the rest of the industry and the planet’s ecosystem.
EMC participates in the U.S. Environmental Protection Agency (EPA) Partnership on Alternatives to Certain Phthalates, a project of their Design for Environment Program. This project has identified eight phthalates of high concern and a list of potential alternatives. We are currently working with our suppliers to evaluate these and other alternatives for use in our products. We are also members of the Green Chemistry and Commerce Council (GC3), which is conducting tests of alternative materials to determine human toxicity. In 2013, we intend to identify substitutes for those eight phthalates identified by the EPA, with the intent to implement changes in 2014.
FULL MATERIAL DISCLOSURE
EMC’s Full Material Disclosure (FMD) database catalogs the substances used in EMC products. This database enables us to quickly and easily identify the presence of substances—when there are new regulations regarding their use—and to respond more rapidly to those requirements. It also helps with identifying where “conflict minerals” (tin, tantalum, tungsten, and gold) are used in our products so that we can trace their source. To gather this information, we ask suppliers to identify materials used in every part of EMC products by CAS number (a unique identifier for chemical substances).
Compiling this database is complex due to the vast number of parts in our hardware products, the constant evolution of our product portfolio, and the maturity level of each supplier’s ability to report FMD. We continue to gather this information from our suppliers, adding data for our new products and backfilling data from our older product releases.
MEETING COMPLIANCE AND CUSTOMER REQUIREMENTS
As interest in reducing the environmental impact of IT products has grown, regulations on product material content worldwide have followed. There has also been an increase in requests for information from our customers about specific substances in our products. The initiatives mentioned above are critical to our efforts to stay ahead of government regulations and customer desires, but the proliferation of regulations and the lack of global harmonization can be a challenge. EMC has a governance body that oversees environmental product compliance and regularly anticipates and communicates requirements to our engineering organization and supply chain. In 2013, we plan to further educate our suppliers to help them understand and prepare for the quickly changing regulatory landscape. | <urn:uuid:6ba32563-492b-4f44-9d1e-ce77469b5c75> | CC-MAIN-2017-04 | https://www.emc.com/corporate/sustainability/sustaining-ecosystems/material-content.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00224-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920082 | 1,113 | 3.078125 | 3 |
Additionally, many social networking tools and technologies that consumers are comfortable with could be leveraged in both running the government as well providing services. For example, using Twitter to convey important information and communicate directly with citizens during disaster relief. Or, another example is using Facebook for law enforcement to help identify suspects, etc. Social networking tools allow direct engagement with citizens, rather than relying on the proxy of government employees or elected representatives. That poses challenges, but even more it creates opportunities for engagement, responsiveness, accuracy and better services.
Set the Standard for Privacy Regulation
It must regulate the use of data to protect citizens' privacy as government agencies and systems become more integrated. Increased integration means more controls need to be put in place to regulate the use of data and protect citizens' privacy. While transparency is needed, data security must be top of mind.
We need to build a proof of concept using the federal government as a case study for the rest of the nation. That concept must clearly illustrate how sound privacy policies could enable citizen control over information and create greater efficiencies at less cost for the government. Unfortunately, most people simply assume any shared information is either bad or a violation of their rights. Unless citizens can visualize how they personally will benefit, this perception will remain.
One way to address that perception would be to identify cases that require a citizen to volunteer information that could make their lives easier. It should happen not just in one transaction (paying taxes or license renewal), but across many. For example, one compelling scenario in the business world is health care. Providing personal information to providers, insurance companies and employees can lead to better health care, reduced hassle for the consumers and save the companies involved money. Privacy becomes much less of a concern when there are mutual benefits for the parties involved and the right steps are taken to ensure security of that data.
As more and more federal systems become integrated, protecting that data becomes even more crucial for agencies and U.S. citizens.
State a Clear Business Case
How problems are solved is just as important as the solution. CIOs and CTOs at companies across the country must make the business case for their IT vision to business executives, providing rationale for recommended solutions and showing the business costs associated with them. The U.S. CIO and CTO also should provide a rationale for their plans and make a clear business case for how citizen dollars are spent to improve the government's IT infrastructure.
As taxpayers, were all partial owners of the millions of government servers, applications and IT systems. It is important to explain the value of IT investments in a way that citizens can understand and support. Just as with enterprises, that value can come in many forms, but the common thread of all good business cases is that they are presented to the right people, and address a pain or need that is relevant and material.
Finally, as the new CIO and CTO create their new vision, they have the opportunity to include those of us in the private sector in the dialog and formation of this vision. The new U.S. CIO and CTO can become the ultimate role models for others in their same positions, creating a mantra for efficient business that provides secure access to its constituents.
That would be good business and good government.
Tyson Hartman is Avanade's global CTO and VP of Enterprise Technology Solutions. Tyson is responsible for Avanade's technology vision and R&D investments. He also leads the worldwide strategy and team driving Avanade's business in application development, enterprise infrastructure and managed services that provide solutions across the complete enterprise IT lifecycle.
As CIO and corporate VP, Dale Christian guides the development of technical infrastructure and applications architecture for Avanade. He also works closely with Microsoft to ensure Avanade's position as an aggressive early adopter of Microsoft enterprise technologies. Dale joined Avanade after more than 14 years at Microsoft, where he held a variety of IT leadership positions in application development and architecture. Most recently he served as general manager of application development for IT and managed solutions.
During fiber optic network installation, maintenance, or restoration, it is often necessary to identify a specific fiber without disrupting live service. The battery-powered instrument used for this, which looks like a long handheld bar, is called a fiber optic identifier or live fiber identifier.
Optical fiber identifier employs safe and reliable macro bending technology to avoid disruption of network communications that would normally be caused by disconnecting or cutting a fiber optic cable for identification and testing. The fiber optic identifier is intended for engineers and technicians to identify dark or live fiber and excessive losses due to the misalignment of mechanical splices or poor connections.
There is a slot on the top of fiber identifier. The fiber under test is inserted into the slot, then the fiber identifier performs a macro-bend on the fiber. The macro-bend makes some light leak out from the fiber and the optical sensor detects it. The detector can detect both the presence of light and the direction of light.
A fiber optic identifier can detect “no signal”, “tone” or “traffic” and it also indicates the traffic direction. The optical signal loss induced by this technique is so small, usually at 1dB level, that it doesn’t cause any trouble on the live traffic. Fiber optic identifiers can detect 250um bare fibers, 900um tight buffered fibers, 2.0mm fiber cables, 3.0mm fiber cables, bare fiber ribbons and jacketed fiber ribbons.
Most fiber identifiers require changing a head adapter in order to support all these kinds of fibers and cables, while some other models are cleverly designed so that no adapter change is needed at all. Some models only support single-mode fibers; others support both single-mode and multimode fibers.
Difference Between Fiber Identifier and Visual Fault Locator
The fiber optic identifier and the fiber optic visual fault locator are both important test tools for our networks, but they are sometimes confused with each other. They are, in fact, different test tools.
1. Fiber Optic Identifier. A fiber optic identifier is built around a very sensitive photodetector. When the fiber is bent, some light escapes from the fiber core; the identifier detects this light, and technicians can use it to pick out a single fiber from the other fibers in a multi-fiber cable or patch panel. The identifier detects the status and direction of the light without disrupting the transmission. To make this work easier, a test signal modulated at 270 Hz, 1000 Hz, or 2000 Hz is usually injected into the specific fiber at the sending end. Most fiber identifiers operate on single-mode fiber at 1310 nm or 1550 nm and use macro-bend technology to indicate the direction and power of the traffic on the live fiber under test.
2. VFL (Visual Fault Locator)
This product is based on a visible (red) laser diode light source. When the light is injected into the fiber, faults such as fiber fractures, connector failures, tight bends, and poor-quality splices leak visible light, so the fault can be located visually. A visual fault locator emits light in continuous-wave (CW) or pulsed mode. Common pulse frequencies are 1 Hz or 2 Hz, although units can also work in the kHz range. The output power is usually 0 dBm (1 mW) or less, the working distance is 2 to 5 km, and all the common connector types are supported.
You can get fiber optic identifiers from Wilcom, Ideal, 3M, FiberStore, and other network test equipment manufacturers. We recommend Wilcom and FiberStore products since both manufacturers have very high customer satisfaction rates.
The Various Approaches to Going “Green” in Data Centers
There has been a major push in reducing the energy footprint of data centers, especially with the amount of data that is accumulated each and every day. One approach is to go lean by cutting out unneeded servers and adopting other energy reducing measures wherever possible, which seems simple enough, right?
While being green is attractive from a business standpoint, many data centers are not too eager to take this route. The need for cooling or other energy requirements exists for a reason, and cutting it down comes at the cost of imposing some restrictions in other places. Again, while identifying and eliminating unneeded servers and other appliances may indeed cut costs, the cost to actually do that may outweigh those savings.
Data centers unable to go lean due to their business model or any other reason should consider the following approaches to go green if they are faced with these challenges:
- Reduce consumption of fossil fuel energy and tap energy from renewable sources, including solar photo-voltaic and wind. Microsoft's proposed facility, the Pilot Hill Wind Project, which aims to provide 675,000 MWh of renewable energy per year from 2015 to power its data centers, is a sign of the changes that are coming. The European Commission's recently announced RenewIT initiative, which aims for 80% of the European data center industry to be powered from renewable and sustainable resources, is sure to provide a push in this direction as well.
- Avoid the use of highly polluting diesel generators for backup power and use biodiesel-powered generators instead. Biodiesel reduces the carbon footprint of diesel engines significantly, but it does come with the trade-off of generators requiring more frequent care and cleaning.
- Upgrade equipment periodically to remove obsolete and energy guzzling equipment. A case in point is the Direct Expansion (DX) cooling systems, where the focus was on the most cost efficient system a decade ago. Today, the primary focus of the cooling system is on the COP (Coefficient of Performance), or the ratio of energy moved to energy used to move it.
- Use recycled equipment as long as possible. This would greatly reduce the carbon footprint associated with the manufacturing of the equipment, even if there are no major operational savings.
- Adopt energy efficient architectural approaches, especially incorporating green innovations in ventilation and air conditioning, with the overall aim of minimizing the data center's impact on environment.
Green data centers are not just an environmental friendly move; they are a sound business preposition. Most of the time, the investment to “go green” sees a return on investment and starts to generate additional savings very shortly after it's implemented.
Lifeline Data Centers prides itself on energy efficient practices. If you're interested in learning more, schedule a tour of our facility today. | <urn:uuid:7d907d47-7765-46e1-a9d5-880de6b10058> | CC-MAIN-2017-04 | http://www.lifelinedatacenters.com/data-center/going-green-in-data-centers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00518-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948048 | 568 | 2.65625 | 3 |
Black Box Explains...50-micron vs. 62.5-micron fiber optic cable
As today’s networks expand, the demand for more bandwidth and greater distances increases. Gigabit Ethernet and the emerging 10 Gigabit Ethernet are becoming the applications of choice for current and future networking needs. Thus, there is a renewed interest in 50-micron fiber optic cable.
First used in 1976, 50-micron cable has not experienced the widespread use in North America that 62.5-micron cable has.
To support campus backbones and horizontal runs over 10-Mbps Ethernet, 62.5 fiber, introduced in 1986, was and still is the predominant fiber optic cable because it offers high bandwidth and long distance.
One reason 50-micron cable did not gain widespread use was because of the light source. Both 62.5 and 50-micron fiber cable can use either LED or laser light sources. But in the 1980s and 1990s, LED light sources were common. Since 50-micron cable has a smaller aperture, the lower power of the LED light source caused a reduction in the power budget compared to 62.5-micron cable—thus, the migration to 62.5-micron cable. At that time, laser light sources were not highly developed and were rarely used with 50-micron cable—mostly in research and technological applications.
The cables share many characteristics. Although 50-micron fiber cable features a smaller core, which is the light-carrying portion of the fiber, both 50- and 62.5-micron cable use the same glass cladding diameter of 125 microns. Because they have the same outer diameter, they’re equally strong and are handled in the same way. In addition, both types of cable are included in the TIA/EIA 568-B.3 standards for structured cabling and connectivity.
As with 62.5-micron cable, you can use 50-micron fiber in all types of applications: Ethernet, FDDI, 155-Mbps ATM, Token Ring, Fast Ethernet, and Gigabit Ethernet. It is recommended for all premise applications: backbone, horizontal, and intrabuilding connections, and it should be considered especially for any new construction and installations. IT managers looking at the possibility of 10 Gigabit Ethernet and future scalability will get what they need with 50-micron cable.
The big difference between 50-micron and 62.5-micron cable is in bandwidth. The smaller 50-micron core provides a higher 850-nm bandwidth, making it ideal for inter/intrabuilding connections. 50-micron cable features three times the bandwidth of standard 62.5-micron cable. At 850 nm, 50-micron cable is rated at 500 MHz-km over 500 meters versus 160 MHz-km for 62.5-micron cable over 220 meters.
| Fiber Type | Minimum Bandwidth (MHz-km) | Distance at 850 nm | Distance at 1310 nm |
| 62.5/125 µm | 160/500 | 220 m | 500 m |
| 50/125 µm | 500/500 | 500 m | 500 m |
As we move towards Gigabit Ethernet, the 850-nm wavelength is gaining importance along with the development of improved laser technology. Today, a lower-cost 850-nm laser, the Vertical-Cavity Surface-Emitting Laser (VCSEL), is becoming more available for networking. This is particularly important because Gigabit Ethernet specifies a laser light source.
Other differences between the two types of cable include distance and speed. The bandwidth an application needs depends on the data transmission rate. Usually, data rates are inversely proportional to distance. As the data rate (MHz) goes up, the distance that rate can be sustained goes down. So a higher fiber bandwidth enables you to transmit at a faster rate or for longer distances. In short, 50-micron cable provides longer link lengths and/or higher speeds in the 850-nm wavelength. For example, the proposed link length for 50-micron cable is 500 meters in contrast with 220 meters for 62.5-micron cable.
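As a rough, first-order illustration of this trade-off (actual link budgets depend on the specific transceivers and cable, so treat the arithmetic below as an approximation only), modal bandwidth is specified in MHz-km, and the usable bandwidth of a link scales roughly inversely with its length:

    Usable bandwidth ≈ modal bandwidth (MHz-km) ÷ link length (km)
    50-micron at 850 nm:   500 MHz-km ÷ 0.5 km  ≈ 1000 MHz over a 500-meter link
    62.5-micron at 850 nm: 160 MHz-km ÷ 0.22 km ≈ 727 MHz over a 220-meter link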
Standards now exist that cover the migration of 10-Mbps to 100-Mbps or 1 Gigabit Ethernet at the 850-nm wavelength. The most logical solution for upgrades lies in the connectivity hardware. The easiest way to connect the two types of fiber in a network is through a switch or other networking “box.“ It is not recommended to connect the two types of fiber directly. | <urn:uuid:416a9d5a-5b4f-47c8-9a74-52338cbe58c7> | CC-MAIN-2017-04 | https://www.blackbox.com/en-au/products/black-box-explains/black-box-explains-50-micron-vs-62-5-micron-fiber-optic-cable | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00060-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.896837 | 961 | 2.78125 | 3 |
The Session Announcement Protocol (SAP) is an experimental protocol designed for multicasting session information; the IETF published it as RFC 2974. SAP uses SDP (Session Description Protocol) as the format for describing the sessions, typically Real-time Transport Protocol sessions, that it announces. With SAP, an announcer periodically transmits SDP descriptions to a well-known multicast address and port.
Sytek Inc. developed NetBIOS in 1983 as an API (a specification that software components use as an interface to communicate with each other) for software communication over IBM PC LAN networking technology. IBM first introduced the Network Basic Input/Output System (NetBIOS) to give applications access to LAN resources. Since its creation, NetBIOS has served as the starting point for many other networking applications. It is an interface specification for accessing networking services.
It’s time for part 2 of our closer look at RFID frequencies and technologies. Today I’ll be providing additional information about High Frequency (HF) Passive RFID. 13.56 MHZ HF RFID (passive) has the following characteristics:
- Read range: from approximately 1 inch to 3.28 ft.
- Reads multiple tags simultaneously
- Moderate memory: 256 to 16KB
- Penetrates most materials well, including water and body tissue
- Easily embedded in non-metallic items
- Not as effective as LF RFID in the presence of metal
- Not typically affected by electrical noise in an industrial environment
- Orientation of tags influences communication range; optimum range requires the reader and tag to be parallel
Typical HF RFID applications include:
- Access control-ID cards and employee badges
- Asset tracking
- Retail security and Electronic Article Surveillance (EAS)
- Patient and specimen tracking in healthcare
- Maintenance and inspections
Check back next week for part 3 when I take a closer look at Ultra-High Frequency (UHF) RFID. | <urn:uuid:fffd1380-93dd-416c-b9af-ebf65a2bf14d> | CC-MAIN-2017-04 | http://blog.decisionpt.com/high-frequency-hf-rfid-technology | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00390-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.859911 | 230 | 2.53125 | 3 |
Introduction to temporal data management with DB2
DB2 supports time-based data management that allows you to insert, update, delete, and query data in the past, the present, and future while keeping a complete history of what you knew and when you knew it.
Temporal tables in DB2
DB2 supports three types of temporal tables:
- System-period temporal tables— Where DB2 transparently keeps a history of old rows that have been updated or deleted over time. With new constructs in the SQL language standard, users can go back in time and query the database at any chosen point in the past. This is based on internally assigned system timestamps that DB2 uses to manage system time, which is also known as transaction time.
- Application-period temporal tables— Where applications supply dates or timestamps to describe the business validity of their data. New SQL constructs enable applications to insert, query, update, and delete data in the past, present, or future. DB2 automatically applies constraints and row splits to correctly maintain the application-supplied business time, also known as valid time. A short example follows this list.
- Bitemporal tables— Manage system time and business time. Bitemporal tables combine all the capabilities of system-period and application-period temporal tables. This combination enables applications to manage the business validity of their data while DB2 keeps a full history of any updates and deletes.
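To make the application-period (business time) model concrete, here is a minimal sketch of such a table; the table name, column names, and values are illustrative and not taken from this article series:

CREATE TABLE positions (
    empid     BIGINT NOT NULL,
    jobtitle  VARCHAR(20),
    bus_begin DATE NOT NULL,
    bus_end   DATE NOT NULL,
    PERIOD BUSINESS_TIME (bus_begin, bus_end),
    PRIMARY KEY (empid, BUSINESS_TIME WITHOUT OVERLAPS)
);

-- Update only part of the business period; DB2 automatically splits the
-- affected row so that the portions outside the range keep their old value.
UPDATE positions
   FOR PORTION OF BUSINESS_TIME FROM '2011-06-01' TO '2011-09-01'
   SET jobtitle = 'Senior Analyst'
 WHERE empid = 500;

Such tables are queried with FOR BUSINESS_TIME AS OF, analogous to the FOR SYSTEM_TIME queries shown later in this article.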
This article series assumes you are familiar with temporal tables in DB2. The article "A Matter of Time: Temporal Data Management in DB2" provides an introduction to these topics. "DB2 best practices: Temporal data management with DB2" provides additional usage guidelines. For example, range partitioning for temporal tables, privileges for history data, history conscious schema design, and other recommendations are included in the best practices piece.
System-period temporal tables — the basics
When you create a table with a system time period, you're instructing DB2 to automatically capture changes made to the table and to save "old" rows in a history table— a separate table with the same structure as your base table (also called current table).
Defining a new system-period temporal table from scratch involves the following steps:
Create the base table for the current data
Listing 1. Definition of the base table
CREATE TABLE employees (
    empid        BIGINT NOT NULL PRIMARY KEY,
    name         VARCHAR(20),
    deptid       INTEGER,
    salary       DECIMAL(7,2),
    system_begin TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
    system_end   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
    trans_start  TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
    PERIOD SYSTEM_TIME (system_begin, system_end)
);
Create the history table with the same structure as the base table
Listing 2. Definition of the history table
CREATE TABLE employees_hist (
    empid        BIGINT NOT NULL,
    name         VARCHAR(20),
    deptid       INTEGER,
    salary       DECIMAL(7,2),
    system_begin TIMESTAMP(12) NOT NULL,
    system_end   TIMESTAMP(12) NOT NULL,
    trans_start  TIMESTAMP(12)
);
Alternatively, the history table can also be created with the LIKE clause in the CREATE TABLE statement, which ensures that the columns are the same as in the base table:
CREATE TABLE employees_hist LIKE employees;
Enable versioning and specify which history table to use
Listing 3. Enabling versioning
ALTER TABLE employees ADD VERSIONING USE HISTORY TABLE employees_hist;
Once this link is established, history rows are automatically created and time-travel queries are supported.
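For example, once versioning is enabled, an ordinary update automatically produces a history row. The sample values below are illustrative and assume the tables from Listings 1 through 3:

INSERT INTO employees (empid, name, deptid, salary)
    VALUES (1000, 'Ann', 42, 52000.00);

UPDATE employees SET salary = 55000.00 WHERE empid = 1000;

-- The before-image of the updated row is now stored in employees_hist,
-- with its system_end set to the point in time of the updating transaction.

Note that the INSERT does not supply values for the generated timestamp columns; DB2 maintains them automatically.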
Querying a system-period temporal table
You can run regular SQL queries on a system-period temporal table as on any other table. For example, the following query returns the current data for employee 1000:
SELECT empid, name, deptid, salary, system_begin, system_end
FROM employees
WHERE empid = 1000;
Additionally, a system-period temporal table can be queried with the new FOR SYSTEM_TIME clause to retrieve past states of your data. For example, the next query returns the record of employee 500 as it was in the database on 1 Feb 2011:
SELECT empid, name, deptid, salary, system_begin, system_end
FROM employees FOR SYSTEM_TIME AS OF '2011-02-01'
WHERE empid = 500;
The row returned by this query can be a current row or a historical row. DB2 transparently examines the current table and the history table, and returns the correct result. The value in the FOR SYSTEM_TIME AS OF clause can be a date, timestamp, expression, parameter marker, or a host variable.
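As a sketch of those non-literal forms, the point in time can be computed by an expression or supplied by the application through a parameter marker (both statements below are illustrative):

SELECT empid, name, salary
FROM employees FOR SYSTEM_TIME AS OF CURRENT TIMESTAMP - 1 YEAR
WHERE empid = 500;

-- The '?' parameter marker is bound by the application when the
-- statement is prepared and executed.
SELECT empid, name, salary
FROM employees FOR SYSTEM_TIME AS OF ?
WHERE empid = 500;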
NOTE: In DB2 for z/OS®, the literal value 2011-02-01 must be written as TIMESTAMP '2011-02-01' so that the value is correctly cast to the target data type.
To retrieve all data (current and history rows) for employee 4711, use this SQL:
SELECT empid, name, deptid, salary, system_begin, system_end
FROM employees FOR SYSTEM_TIME FROM '0001-01-01' TO '9999-12-30'
WHERE empid = 4711;
Sometimes you might wish to retrieve the previous version of a row, which is the latest version before the current version. The following query returns the previous version of the row for employee 1212:
SELECT prev.*
FROM employees curr,
     employees FOR SYSTEM_TIME BETWEEN '0001-01-01' AND CURRENT_TIMESTAMP prev
WHERE curr.empid = 1212
  AND prev.empid = 1212
  AND curr.system_begin = prev.system_end;
Alternatively, the same result can be obtained by querying the history table directly:
SELECT * FROM employees_hist WHERE empid = 1212 AND system_end = (SELECT MAX(system_end) FROM employees_hist WHERE empid = 1212) ;
Typical migration scenarios
Adopting system-period temporal tables in DB2 is easy. The exact steps to migrate an existing solution to system-period temporal tables depend on the characteristics of your existing tables and data, such as:
- Whether you are already recording history rows, for example with triggers
- Whether you use a single table to hold current and historical data or two separate tables
- Whether you store one or two timestamp values for each version of a row
- The period model (inclusive-exclusive vs. inclusive-inclusive) you have chosen for your existing temporal solution
Based on these properties, Part 1 and Part 2 of this series examine five flavors of existing solutions and describe their migration to system-period temporal tables in DB2.
Table 1. Overview of migration scenarios discussed in part 1 and 2 of this article series
| Scenario   | Existing solution records history? | Existing history is in a separate table? | No. of timestamp columns used for versioning? | Your period model?  | Article |
|------------|------------------------------------|------------------------------------------|-----------------------------------------------|---------------------|---------|
| Scenario 0 | No                                 | N/A                                      | None                                          | None                | Part 1  |
| Scenario 1 | Yes                                | Yes                                      | Two                                           | Inclusive-exclusive | Part 1  |
| Scenario 2 | Yes                                | No                                       | Two                                           | Inclusive-exclusive | Part 2  |
| Scenario 3 | Yes                                | No                                       | One                                           | Inclusive-exclusive | Part 2  |
| Scenario 4 | Yes                                | No                                       | Two                                           | Inclusive-inclusive | Part 2  |
Scenario 0 is the simplest and is covered first.
For scenarios 1-4, here are the general steps for migration to a system-period temporal table in DB2:
- Ensure that no table access (read or write) takes place during the migration.
- Disable any custom triggers or application code responsible for creating history rows on update/delete.
- Align the table schema: Create a history table and any additional timestamp columns, if necessary. Ensure that the current table and the history table have the same columns with the same names, order, nullability, and data types. Tip: use a CREATE TABLE statement with a LIKE clause, as shown earlier.
- Change the data type of existing timestamp columns to TIMESTAMP(12), if necessary.
- Move existing history rows into a history table, if your existing solution uses a single table for current and history data.
- Adjust applications: INSERT/UPDATE/DELETE statements typically require minimal or no changes. Depending on the existing period and the number of timestamp columns, minor changes to existing queries might be necessary. Additional changes are recommended for ease of use and better performance.
Scenario 0 — Enable versioning for a non-temporal table
This scenario describes how to start recording history and enable time-travel queries for a regular table that does not yet have any existing history associated with it. Let's take the table employees_s0 in Listing 4 as an example.
Listing 4. Existing table definition
CREATE TABLE employees_s0 ( empid BIGINT NOT NULL PRIMARY KEY, name VARCHAR(20), deptid INTEGER, salary DECIMAL(7,2) );
You can easily turn this table into a system-period temporal table using the three
statements in Listing 5. The
ALTER TABLE statement adds
the three mandatory timestamp columns and declares the
SYSTEM_TIME period. The
subsequent statements create the corresponding history table and enable versioning,
respectively. After these steps, reorganization of the table is not necessary.
The new timestamp columns are defined as
IMPLICITLY HIDDEN, which is optional and
ensures that they do not appear in the result set of
SELECT * queries. Hence,
existing applications will see exactly the same query results as before the
migration. No changes to existing queries or insert, update, and delete statements are required.
Listing 5. Converting the table employees_s0 into an STT
ALTER TABLE employees_s0
  ADD COLUMN sys_start TIMESTAMP(12) NOT NULL
      GENERATED AS ROW BEGIN IMPLICITLY HIDDEN
  ADD COLUMN sys_end TIMESTAMP(12) NOT NULL
      GENERATED AS ROW END IMPLICITLY HIDDEN
  ADD COLUMN trans_id TIMESTAMP(12)
      GENERATED AS TRANSACTION START ID IMPLICITLY HIDDEN
  ADD PERIOD SYSTEM_TIME (sys_start, sys_end);

CREATE TABLE employees_s0_hist LIKE employees_s0;

ALTER TABLE employees_s0
  ADD VERSIONING USE HISTORY TABLE employees_s0_hist;
When you add timestamp columns to an existing table, as in Listing 5, all existing rows get the value 9999-12-30 in the sys_end column, indicating that all rows are current rows. However, DB2 does not know when these rows were originally inserted and what their sys_start values should be. Hence, all existing rows initially get the sys_start value 0001-01-01, which is January 1 in the year 0001.
If you prefer to use the current time as the system start time for all existing rows, you have two options to achieve this:
- Add the sys_start column with the desired default value first, and then issue an ALTER TABLE statement to make the column GENERATED AS ROW BEGIN. See Listing 6.
- Use the steps in Listing 5, but export and reload all rows before you enable versioning. This is shown in Listing 7.
Listing 6. Converting the table employees_s0 into an STT, with a custom value for sys_start
ALTER TABLE employees_s0
  ADD COLUMN sys_start TIMESTAMP(12) NOT NULL
      DEFAULT CURRENT_TIMESTAMP IMPLICITLY HIDDEN
  ADD COLUMN sys_end TIMESTAMP(12) NOT NULL
      GENERATED AS ROW END IMPLICITLY HIDDEN
  ADD COLUMN trans_id TIMESTAMP(12)
      GENERATED AS TRANSACTION START ID IMPLICITLY HIDDEN;

ALTER TABLE employees_s0
  ALTER COLUMN sys_start
    DROP DEFAULT
    SET GENERATED AS ROW BEGIN
  ADD PERIOD SYSTEM_TIME (sys_start, sys_end);

CREATE TABLE employees_s0_hist LIKE employees_s0;

ALTER TABLE employees_s0
  ADD VERSIONING USE HISTORY TABLE employees_s0_hist;
Listing 7. Reloading the data with the PERIODIGNORE modifier before enabling versioning
EXPORT TO emp.del OF DEL MODIFIED BY IMPLICITLYHIDDENINCLUDE
  SELECT * FROM employees_s0;

LOAD FROM emp.del OF DEL MODIFIED BY PERIODIGNORE IMPLICITLYHIDDENINCLUDE
  REPLACE INTO employees_s0;

ALTER TABLE employees_s0
  ADD VERSIONING USE HISTORY TABLE employees_s0_hist;
PERIODIGNORE in the
LOAD command instructs DB2 to ignore the timestamps
in the exported data and instead generate new timestamps during load. In other
situations, you might find it helpful to use the modifier
PERIODOVERRIDE, which loads existing timestamps into the system time columns instead of generating new
timestamps during the load operation.
Scenario 1 — Migrate two tables for current and historical data
In this scenario, we look at an existing temporal solution that has similar properties as system-period temporal tables in DB2, including the following:
- There are two separate tables, one for the current data and one for history data.
- The period model is inclusive-exclusive.
- Two timestamp columns are used for the period.
Existing table definitions
Let's assume the existing temporal solution uses the following tables to capture information about employees and the history of changes.
Listing 8. Existing table definitions
CREATE TABLE employees_s1 (
  empid         BIGINT NOT NULL PRIMARY KEY,
  name          VARCHAR(20),
  deptid        INTEGER,
  salary        DECIMAL(7,2),
  system_begin  TIMESTAMP(6) NOT NULL DEFAULT CURRENT TIMESTAMP,
  system_end    TIMESTAMP(6) NOT NULL DEFAULT TIMESTAMP '3000-01-01 00:00:00.000000'
);

CREATE TABLE employees_s1_hist (
  empid         BIGINT NOT NULL,
  name          VARCHAR(20),
  deptid        INTEGER,
  salary        DECIMAL(7,2),
  system_begin  TIMESTAMP(6) NOT NULL,
  system_end    TIMESTAMP(6) NOT NULL
);
The table employees_s1 holds current information about employees. Each row in this table has the system_end value 3000-01-01 00:00:00.000000 to indicate that the information is current until changed.
We assume that appropriate
AFTER DELETE and
AFTER UPDATE triggers are defined on the
table employees_s1 to insert the before images of updated and deleted rows into the history table employees_s1_hist.
Table 2. Data in employees_s1

| empid | name  | deptid | salary  | system_begin               | system_end                 |
| 1000  | John  | 1      | 5000.00 | 2010-05-11 12:00:00.000000 | 3000-01-01 00:00:00.000000 |
| 1212  | James | 2      | 4500.00 | 2011-05-11 09:30:00.100000 | 3000-01-01 00:00:00.000000 |
| 4711  | Maddy | 1      | 5250.00 | 2011-07-30 09:25:47.123456 | 3000-01-01 00:00:00.000000 |
Table 3. Data in employees_s1_hist

| empid | name  | deptid | salary  | system_begin               | system_end                 |
| 500   | Peter | 1      | 4000.00 | 2010-05-11 12:00:00.000000 | 2011-06-30 09:15:45.123456 |
| 1212  | James | 1      | 4000.00 | 2010-05-11 12:00:00.000000 | 2011-05-11 09:30:00.100000 |
| 4711  | Maddy | 1      | 4000.00 | 2010-05-11 12:00:00.000000 | 2011-07-30 09:25:47.123456 |
Drop any triggers that create history rows
Since DB2 generates history rows for updated or deleted records automatically, you should remove any triggers that generate history rows in your existing solution. Drop the triggers at the beginning of the migration process to avoid the unnecessary generation of (possibly) incorrect history rows, in case any rows need to be updated as part of the migration process itself.
Migrate the table definition and data
The existing tables differ from a system-period temporal table in several ways:
- The existing system_end value for current rows is 3000-01-01, but should be 9999-12-30 in a system-period temporal table.
- The data type of the system_begin and system_end columns is TIMESTAMP(6), but it must be TIMESTAMP(12) for a system-period temporal table.
- System-period temporal tables must have a transaction ID column in the base table and the history table.
- The generation definitions of the system_begin and system_end columns differ: DEFAULT CURRENT TIMESTAMP / TIMESTAMP '…' vs. GENERATED ALWAYS AS ROW BEGIN/END.
- A system-period temporal table requires a PERIOD SYSTEM_TIME declaration.
- For a system-period temporal table, versioning must be enabled.
The statements in Listing 9 address these differences and convert the table employees_s1 into a system-period temporal table.
Listing 9. Converting the table employees_s1 into an STT
-- 1. Change the system_end values to the same value that DB2 generates
UPDATE employees_s1 SET system_end = '9999-12-30';

-- 2.+3. Change data types to TIMESTAMP(12) and add the transID column
ALTER TABLE employees_s1
  ALTER COLUMN system_begin SET DATA TYPE TIMESTAMP(12)
  ALTER COLUMN system_end   SET DATA TYPE TIMESTAMP(12)
  ADD COLUMN trans_start TIMESTAMP(12)
      GENERATED ALWAYS AS TRANSACTION START ID IMPLICITLY HIDDEN;

ALTER TABLE employees_s1_hist
  ALTER COLUMN system_begin SET DATA TYPE TIMESTAMP(12)
  ALTER COLUMN system_end   SET DATA TYPE TIMESTAMP(12)
  ADD COLUMN trans_start TIMESTAMP(12) IMPLICITLY HIDDEN;

-- 4.+5. Set the auto generation of the columns system_begin and system_end, and
--       declare these columns as a system time period
ALTER TABLE employees_s1
  ALTER COLUMN system_begin DROP DEFAULT SET GENERATED ALWAYS AS ROW BEGIN
  ALTER COLUMN system_end   DROP DEFAULT SET GENERATED ALWAYS AS ROW END
  ADD PERIOD SYSTEM_TIME (system_begin, system_end);

-- 6. Reorg the tables and enable versioning
REORG TABLE employees_s1;
REORG TABLE employees_s1_hist;
ALTER TABLE employees_s1
  ADD VERSIONING USE HISTORY TABLE employees_s1_hist;
Let's discuss each of these steps in more detail:
Updating the existing system_end values from 3000-01-01 to the same value that DB2 generates (9999-12-30) is highly recommended but not strictly necessary for the migration process. To identify current rows by a single common system_end value after the migration, that value must be 9999-12-30.
Updating many rows in a single statement might require a lot of log space. To avoid a log-full condition, make sure your log is large enough or update the rows in a series of smaller batches with intermediate commits.
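A minimal sketch of such a batched update, assuming the DB2 idiom of updating a fullselect and a batch size of 10,000 rows (repeat the pair of statements until no more rows are updated):

UPDATE (SELECT system_end FROM employees_s1
        WHERE system_end = '3000-01-01'
        FETCH FIRST 10000 ROWS ONLY)
   SET system_end = '9999-12-30';
COMMIT;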
Note that DB2 does not use the value 9999-12-31 as the system_end value for current rows. The reason is that the value 9999-12-31 might change to a value in the year 10000 if an application converts it to a different time zone. This is undesirable because a date with five-digit year cannot be inserted or loaded in DB2 again.
- When you increase the precision of the system_begin and system_end columns from TIMESTAMP(6) to TIMESTAMP(12), existing values in these columns are automatically cast and padded with six additional zeros. For example, the value 2010-05-11 12:00:00.000000 becomes 2010-05-11 12:00:00.000000000000.
- Because the new column trans_start is defined as nullable, all existing rows
have the NULL value in this column. For new rows, DB2 generates a value for this
column automatically, if needed. The trans_start column is defined as IMPLICITLY HIDDEN so it does not show up in the result set of SELECT * queries. But it can still be retrieved or compared if you use its column name explicitly in the query.
- Once the system_begin and system_end columns have been changed to
GENERATED ALWAYS AS ROW BEGIN/END, users cannot provide values for these columns in
INSERT or UPDATE statements. Instead, DB2 always generates a timestamp value for the transaction in which the insert or update took place.
- Adding the PERIOD SYSTEM_TIME declaration will fail if the columns involved do not have the required properties.
- Explicit activation of versioning is required so the table becomes an STT,
history is automatically recorded, and temporal queries are supported. Enabling
versioning will fail if the columns of the history table don't match the base
table. A REORG of both tables is required before any DELETE statements can be executed. You might also want to update statistics at this time by issuing the RUNSTATS command on the base table and on the history table separately.
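For example, statistics for both tables could be collected like this (the schema name db2admin is only a placeholder):

RUNSTATS ON TABLE db2admin.employees_s1 WITH DISTRIBUTION AND INDEXES ALL;
RUNSTATS ON TABLE db2admin.employees_s1_hist WITH DISTRIBUTION AND INDEXES ALL;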
Existing applications that read or write to your tables may or may not require minor
changes, depending on how exactly they access the tables. For example, a
Java™ application is not affected by the data type change from TIMESTAMP(6) to TIMESTAMP(12).
In this migration scenario, most if not all existing INSERT, UPDATE, and DELETE statements
continue to work unmodified. The reason is that in the existing solution (Listing 8) the values for the system_begin and system_end
columns were automatically supplied by default values, so applications did not
have to provide values for these columns explicitly. After the migration to a
system-period temporal table, DB2 continues to generate values for these columns
automatically. Also, in the existing solution history rows were created by a
database trigger, which got replaced by DB2's automatic generation of history rows.
Again, no changes on the application side are required.
If an application contains
UPDATE statements that write to the history
table or provide values for the system_begin and system_end columns in the current
table, those statements need to be changed. The reason is that DB2 performs
these writes automatically for you.
Any application that explicitly tests for the previous system_end value, either in
application code or in an SQL
WHERE clause, such as
system_end = '3000-01-01 00:00:00.000000', should be changed to test for
the DB2 generated system_end value 9999-12-30 instead.
Many common temporal queries will still work unchanged, but can be greatly simplified. For example, assume you want to retrieve the information recorded for employee 500 as of midnight on 1 Feb 2011. Listing 10 shows what such a query would have looked like before the migration as well as a simplified version of the same query you can use after the migration.
Listing 10. Retrieving employee 500 as of 1 Feb 2011
-- Before the migration:
SELECT empid, name, deptid, salary, system_begin, system_end
  FROM employees_s1
  WHERE system_begin <= '2011-02-01'
    AND system_end   >  '2011-02-01'
    AND empid = 500
UNION ALL
SELECT empid, name, deptid, salary, system_begin, system_end
  FROM employees_s1_hist
  WHERE system_begin <= '2011-02-01'
    AND system_end   >  '2011-02-01'
    AND empid = 500;
-- Simplified query after the migration:
SELECT empid, name, deptid, salary, system_begin, system_end
  FROM employees_s1 FOR SYSTEM_TIME AS OF '2011-02-01'
  WHERE empid = 500;
Although the original query still works on the migrated tables (if you have the read privilege for the history table), the simplified query is a lot shorter, easier to understand, and thus less error-prone.
Similarly, imagine you want to see all current and history rows for employee 4711. Listing 11 illustrates how you can code such queries before and after the migration.
Listing 11. Retrieving all current and history rows for employee 4711
-- Before the migration:
SELECT empid, name, deptid, salary, system_begin, system_end
  FROM employees_s1
  WHERE empid = 4711
UNION ALL
SELECT empid, name, deptid, salary, system_begin, system_end
  FROM employees_s1_hist
  WHERE empid = 4711;
-- Simplified query after the migration:
SELECT empid, name, deptid, salary, system_begin, system_end
  FROM employees_s1 FOR SYSTEM_TIME FROM '0001-01-01' TO '9999-12-30'
  WHERE empid = 4711;
Variations and additional considerations
In this section, we discuss several variations of migration scenario 1.
Null value in the ROW_END column to indicate "until the end of time"
In this migration scenario, we have assumed that the existing data uses 3000-01-01 as the system_end value for current rows to indicate "until the end of time." What if your application has used the NULL value instead? In that case, you must replace all system_end values for current rows with the value 9999-12-30. Additionally, any queries that previously tested for the NULL value in the system_end column should be changed.
For example, the predicate system_end IS NULL should be changed to system_end = '9999-12-30'. Similarly, a search condition in your existing solution such as system_end > '2012-02-01' OR system_end IS NULL can be simplified to system_end > '2012-02-01'.
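A minimal sketch of the corresponding data fix, assuming the NULL marker was used only for current rows:

UPDATE employees_s1 SET system_end = '9999-12-30' WHERE system_end IS NULL;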
Null values in the ROW BEGIN column
In a system-period temporal table, the system_begin column cannot be NULL. If your existing system_begin column contains NULL values, maybe to indicate that rows existed since some unknown point in time, you must replace these NULL values with a non-null value such as 0001-01-01 or 1900-01-01.
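For example, a hypothetical cleanup could look like this:

UPDATE employees_s1 SET system_begin = '0001-01-01' WHERE system_begin IS NULL;
UPDATE employees_s1_hist SET system_begin = '0001-01-01' WHERE system_begin IS NULL;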
ROW BEGIN value is not less than ROW END value
In each row of a system-period temporal table, the system_begin value must be less
than the system_end value. If this is not true for your existing data, queries
might return unexpected results. If you are not sure, consider defining the check constraint CHECK(system_begin < system_end) on your
existing tables employees_s1 and employees_s1_hist before the migration. If there
are any rows that violate this constraint, you must correct or delete them.
You can drop the constraint once all rows satisfy the constraint and the migration is completed. When versioning is enabled for a system-period temporal table, DB2 automatically ensures that system_begin is less than system_end, so the constraint would only be unnecessary overhead.
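A sketch of how such a temporary constraint could be added and later dropped (the constraint names are arbitrary):

ALTER TABLE employees_s1 ADD CONSTRAINT chk_period CHECK (system_begin < system_end);
ALTER TABLE employees_s1_hist ADD CONSTRAINT chk_period_hist CHECK (system_begin < system_end);
-- after the migration:
ALTER TABLE employees_s1 DROP CONSTRAINT chk_period;
ALTER TABLE employees_s1_hist DROP CONSTRAINT chk_period_hist;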
History table contains more or fewer columns than the base table
In your existing temporal solution, the history table might have a different number of columns than the base table. However, the number of columns as well as their names, position, data type, and nullability must be the same in both tables before you can convert them into a system-period temporal table. So you must add or remove columns so that both tables have the same schema.
If you want to record history only for a subset of the columns in the base table, consider splitting the base table vertically into two tables, as discussed in "DB2 best practices: Temporal data management with DB2."
Need to record the user ID or application ID for each row change
A system-period temporal table does not automatically record which user ID or
application ID has caused a particular row change. To record this
information, you need to have explicit columns for it and logic to provide the proper
values. For example, you could use a
BEFORE INSERT trigger such as in Listing 12 to record the user ID in the history table:
Listing 12. Trigger to record user IDs in the history table
-- Assumes a column user_id_col has been added to the history table
CREATE TRIGGER pop_user_id_col
  NO CASCADE BEFORE INSERT ON employees_s1_hist
  REFERENCING NEW AS NROW
  FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
  SET NROW.user_id_col = CURRENT CLIENT_USERID;
END#
History table also contains current rows
If your existing solution stores current rows redundantly in the base table and the history table, you must delete the current rows from the history table. Otherwise, temporal queries might return incorrect results.
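A hedged sketch of that cleanup, assuming current rows in the history table carry the existing "until the end of time" marker:

DELETE FROM employees_s1_hist WHERE system_end = '3000-01-01 00:00:00.000000';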
History table contains rows with overlapping periods
For a given key value, such as an empid value in our employee example, the history table must not contain two or more rows with the same key and overlapping periods. Overlapping periods for the same key value means that at some point in time, there were two current rows for the same primary key in the current table, which would be inconsistent. As a result, temporal queries might return unexpected results.
If you are not sure, you should verify that your existing history data does not contain overlaps. The query in Listing 13 returns all overlaps, if any exist.
Listing 13. A query that detects temporal overlaps
SELECT empID,
       previous_end AS overlap_start,
       system_begin AS overlap_end
FROM (SELECT empID, system_begin, system_end,
             MIN(system_end) OVER (PARTITION BY empID
                                   ORDER BY system_begin
                                   ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING) AS previous_end
      FROM employees_s1_hist)
WHERE system_begin < previous_end;
Generation of history rows: One per transaction vs. one per statement
One difference between versioning with a homegrown trigger-based solution and a DB2 system-period temporal table is that the trigger-based solution records history rows per statement whereas a system-period temporal table records history rows per transaction.
If the same row is updated multiple times in a single transaction, a trigger generates a history row for each of these updates. As a result, the history table will contain intermediate versions of the row that were never committed in the database. Consequently, any queries against the history table can see uncommitted data from the base table. From a transactional point of view this means that any history queries are performing "dirty reads," as if the isolation level was uncommitted read (UR), which is clearly not desirable for most applications.
This problem is solved by the migration to system-period temporal tables in DB2. For each modified row, a system-period temporal table creates at most one history row per transaction, which records the state of the row before the current transaction. This behavior in DB2 saves storage space and ensures that the history data is transactionally correct.
The temporal capabilities in DB2 provide sophisticated support for time-aware data management, compliance, and auditing requirements. Part 1 of this series has described two common migration scenarios to adopt system-period temporal tables in DB2. As it turns out, migrating to temporal tables is really easy.
Scenario 0 is the most basic case where a regular (non-versioned) table is converted to a system-period temporal table with just three simple DDL statements. Scenario 1 assumes that an application has an existing pair of tables to record current and history data using timestamps and triggers. Again, only a handful of DDL statements is required to migrate them to temporal tables. Additional migration scenarios are discussed in subsequent parts of this series.
- For a technical introduction to temporal tables in DB2, read "A Matter of Time: Temporal Data Management in DB2."
- Consult usage and performance guidelines: "DB2 best practices: Temporal data management with DB2."
- The benefits of temporal data management are discussed in "Improving data quality for exceptional business accuracy and compliance: Temporal data management with IBM DB2 Time Travel Query."
- Refer to the DB2 product documentation: Time Travel Query using temporal tables.
- Learn more about Information Management at the developerWorks Information Management zone. Find technical documentation, how-to articles, education, downloads, product information, and more.
Get products and technologies
- Download a DB2 trial version or the free DB2 Express-C to try out the new temporal data management features yourself.
- Build your next development project with IBM trial software, available for download directly from developerWorks.
- Now you can use DB2 for free. Download DB2 Express-C, a no-charge version of DB2 Express Edition for the community that offers the same core data features as DB2 Express Edition and provides a solid base to build and deploy applications.
- Participate in the discussion forum.
- Ask questions in the DB2 Temporal discussion forum.
- Check out the developerWorks blogs and get involved in the developerWorks community. | <urn:uuid:78e8af22-0ec9-4dec-a268-2485ff92de9a> | CC-MAIN-2017-04 | http://www.ibm.com/developerworks/data/library/techarticle/dm-1210temporaltablesdb2/index.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00206-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.762429 | 7,189 | 2.828125 | 3 |
Using the Windows 2000 Distributed File System
In Windows NT 4.0, Microsoft provided an add-on product called Distributed File System (DFS) that allowed physically separate network file resources to be grouped together and accessed as if they were a single logical structure. The product, which was a free download, failed to make a great impact with network administrators and went largely unnoticed. With Windows 2000, DFS is included with the OS and provides a number of new functions. The tool for managing the DFS structure has been improved, and wizards serve to make setup an easy task.
DFS is a service that gives administrators a way to provide users with simple access to increasingly distributed amounts of data. In this article, I will look at some of the features of DFS and how to create a DFS tree in Windows 2000.
DFS in a Heterogeneous Environment
The functionality of DFS is not limited to Microsoft operating systems. For instance, if the server hosting the DFS root has access to a NetWare server through client or gateway software, directories on the NetWare server can be added to the DFS tree. This is a major advantage to administrators managing data in a heterogeneous environment.
DFS file structures can be accessed from any workstation that is running the DFS client software. This software is included with Windows 98, Windows NT 4.0, and Windows 2000. A downloadable client is available for systems running Windows 95. To take full advantage of the fault tolerance capabilities of DFS, the updated Active Directory Client Extensions must be installed for the respective client platforms.
What Is DFS?
DFS provides the ability to create a single logical directory tree from different areas of data. The data included in a DFS tree can be in any location accessible from the computer acting as the DFS root. In other words, the data can be on the same partition, disk, or server, or on a completely different server. As far as DFS is concerned, it makes no difference. A DFS tree appears as one contiguous directory structure, regardless of the logical or physical location of the data.
After the DFS root is created, links to directories can be added or removed to construct the single logical directory structure. The DFS tree can be navigated using standard file utilities such as Windows Explorer. Unless users are made aware of the fact that the data is being accessed from different locations, they will not realize that they are using a DFS system at all.
DFS trees can be used with both FAT and NTFS partitions. If you do use NTFS, the inclusion of a file or directory in a DFS structure has no effect on security permissions.
There are two types of DFS:
- Stand-alone DFS--Refers to a DFS tree that is hosted on a single physical server, and is accessed by connecting to a DFS share point on that server. DFS configuration information is stored in the server's Registry. Stand-alone DFS provides no fault tolerance. If the server hosting the DFS root should go down, users will no longer be able to access their data unless they explicitly know where the data is stored.
- Domain DFS--Provides more functionality, including features such as replication and load-balancing capabilities. Domain DFS information is stored in Active Directory. A domain member server must act as the host for the DFS tree. By storing the domain DFS configuration in Active Directory, the server-centric nature of stand-alone DFS is removed, enabling the administrator to create DFS root replicas. If a server were to go down, users would be redirected to a DFS root replica and could continue to access the DFS tree.
DFS Disk Space Reports
When a DFS share is accessed, the amount of free disk space on the drive is reported for the drive that hosts the DFS root. This amount will often differ from the amount of disk space available through different areas of the DFS structure. As an administrator, this change is easy to account for, but it can be confusing for users.
Advantages of DFS
DFS brings with it advantages for both users and administrators. All the directories and files users need to access exist in one easy-to-navigate structure. This has two effects. First, users can easily locate data, reducing the need for administrative assistance. Second, users can more easily save data in the right place, thereby increasing the effectiveness of backups and reducing related support calls. From an administrative perspective, DFS provides the ability to manage data from within one simplified structure. Other benefits include the ability to move a data structure from its original location to another drive, or even another server, without affecting the DFS structure or the users' perception of the location of the data.
Creating a DFS Tree
The initial creation of a DFS tree takes just a couple of minutes, thanks to a wizard that guides you through the necessary steps. The wizard is accessed from within the Distributed File System management utility, which can be found in the Administrative Tools menu. After starting the Management Utility, choose Action|New to launch the DFS Root Creation Wizard. After you click Next on the introduction screen, the wizard prompts you to select whether to create a stand-alone DFS root or a domain DFS root. For this example, I will create a domain DFS root.
The next two screens allow you to select first the domain, and then the server that will host the DFS root. Each server can only host one DFS root. The following screen requires that you specify the share point at which you wish to create the DFS root. You can either select an existing share by using the drop-down box, or create a new share point for the DFS root. The next screen allows you to specify a name for the DFS root, and to include a comment. Clicking Next then takes you to a summary screen, in which you can check the information that has been entered. Figure 1 shows a completed summary screen. Once the information has been checked, click Finish to create the new DFS system.
Adding new links to the DFS tree is simple. With the DFS root object selected in the management utility, right-click and choose New DFS Link. Then, simply add the path to the data you want included in the DFS tree. Repeat this procedure for each data area that you wish to add to the tree. In Figure 2, you can see the view of a DFS tree with a number of links added. The left pane shows the DFS Management Utility; the right pane shows what the tree looks like when viewed through Windows Explorer.
DFS provides a simple solution to one of network administration's most time-consuming challenges: managing data access. By creating a DFS tree, Windows 2000 administrators can manage data easily. //
Drew Bird (MCT, MCNI) is a freelance instructor and technical writer. He has been working in the IT industry for 12 years and currently lives in Kelowna, Canada. You can e-mail Drew at firstname.lastname@example.org. | <urn:uuid:885fdeb4-dd8b-4ea6-b7ab-92c829f909f2> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsysm/article.php/624521/Using-the-Windows-2000-Distributed-File-System.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00142-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.909112 | 1,476 | 2.71875 | 3 |
There are various types of software testing techniques. Each individual technique is good at finding a particular type of defect. Each testing technique falls into a number of different categories.
There are two main categories - static and dynamic. Dynamic techniques are subdivided into three more categories:
- Specification-based Testing (black-box, also known as behavioral techniques)
- Structure-based Testing (white-box or structural techniques)
- Experience-based Testing
Figure 1: Static Techniques
Figure 2: Dynamic Techniques
Let’s discuss structure-based testing marked in red in the above diagram.
Structure-based testing techniques use the internal structure of the software to derive test cases. They are commonly called 'white-box' or 'glass-box' techniques. Structure-based techniques can also be used at all levels of testing.
E.g.: In component testing, component integration testing, and system and acceptance testing.
Structure-based test design techniques are a good way to help ensure more thorough testing. To measure what percentage of the code has been exercised by a test suite, one or more coverage criteria are used. A coverage criterion is usually defined as a rule or requirement that a test suite needs to satisfy.
There are a number of coverage criteria. Let’s discuss Statement, Decision (Branch) and Path coverage, and understand how to calculate, with examples.
READ X
READ Y
IF X+Y > 100 THEN
    Print "Large"
ENDIF
IF X > 50 THEN
    Print "X Large"
ENDIF
For calculating Statement, Decision (Branch) and Path coverage, I’ve created a flow chart for better understanding:
Figure 3: Coverage - Flow Chart
- Nodes represent statements of code (e.g., entry, exit, decisions)
- Edges represent links between nodes
Statement coverage is a white-box testing technique in which every statement in the source code is executed at least once. To calculate statement coverage, find the minimum number of paths that cover all the nodes.
In the above example, following the "Yes" path (A1-B2-C4-5-D6-E8) traverses every statement of code, so all the nodes (A, B, C, D and E) are covered by a single path.
Statement coverage (SC) = 1
Branch coverage covers both outcomes (true and false) of every decision, so that all the possible outcomes of each condition are exercised at least once. It is a white-box testing method that ensures that every possible branch from each decision point in the code is executed at least once. To calculate branch coverage, find the minimum number of paths that cover all the edges.
In the above example, traversing the 'Yes' path (A1-B2-C4-5-D6-E8) covers most of the edges (1, 2, 4, 5, 6 and 8), but edges 3 and 7 are left out. To cover these edges, we also have to follow the 'No' path (A1-B3-5-D7). So by travelling through two paths (Yes and No), all the edges (1, 2, 3, 4, 5, 6, 7, 8) are covered.
Branch coverage / decision coverage (BC) = 2
Path coverage requires that every distinct path through the code is executed at least once, from beginning to end; as a consequence, every statement is also guaranteed to be executed at least once. In the above example, with two independent decisions, all the possible paths are:
- X+Y > 100 true, X > 50 true
- X+Y > 100 true, X > 50 false
- X+Y > 100 false, X > 50 true
- X+Y > 100 false, X > 50 false
Path coverage (PC) = 4
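For illustration only, here is one possible set of concrete test inputs that exercises all four paths of the two-decision example above (the values are merely examples):

X=60, Y=60  -> X+Y > 100 true,  X > 50 true
X=40, Y=70  -> X+Y > 100 true,  X > 50 false
X=60, Y=30  -> X+Y > 100 false, X > 50 true
X=40, Y=30  -> X+Y > 100 false, X > 50 false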
Condition coverage is related to decision coverage but has better sensitivity to the control flow. Condition coverage reports the true or false outcome of each condition and measures the conditions independently of each other. Multiple condition coverage, also known as condition combination coverage, requires all combinations of condition outcomes to be tested.
Let us take an example to explain condition coverage:
IF (X && Y)
To achieve condition coverage for this pseudo-code, the following tests are sufficient.
TEST 1: X=TRUE, Y=FALSE
TEST 2: X=FALSE, Y=TRUE
I hope this blog has helped you understand and calculate the coverage in White Box Code testing. | <urn:uuid:da9cfe50-2f64-45cd-b669-b571795e5fd7> | CC-MAIN-2017-04 | https://www.hcltech.com/blogs/structure-based-or-whitebox-testing-techniques | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00355-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.909467 | 924 | 3.078125 | 3 |
Relational databases (RDBMSs) have been the dominant data management tool for 30 years. They proved to be a good solution for the capture and management of structured data and fairly reasonable for decision support analysis. Their shortcomings, however, have become increasingly obvious in recent years as unstructured information has begun flooding into the data center.
Business leaders are realizing the tremendous value of this unstructured data, which includes corporate email, documents, video, audio, still images, and social networking data, for such vital uses as:
1. Predicting market trends
2. Identifying informal relationship structures and key influencers inside large enterprises and in external markets
3. Targeting marketing investments to gain the most advantage in the market
4. Predicting the needs of individual customers in order to increase service levels while decreasing costs
NoSQL as an Alternative?
To capture these diverse data types and support this type of analysis, businesses have turned to two new classes of database technology: big data systems (or key/value systems) such as Hadoop and Hbase, and semantic web systems, aka "triplestores." These have been lumped into the general term of "not only SQL" (NoSQL) and are typically not seen as replacements but rather supplements to RDMBSs, with the capability of organizing very large volumes of both structured and unstructured data and combining them in various kinds of analysis. Each of these has its own strengths and weaknesses and its own natural application areas.
Relational databases are strongest in regular enterprise applications that only deal with structured data. The enterprise values in particular the transactionality and ACID (atomicity, consistency, isolation, durability) properties of the relational database model.
Big data technologies are designed to work with billions of nested objects (a webpage, a Facebook account, etc.) that by virtue of its size needs to run on large clusters of machines. These big data databases don't have the rigor of a relational database when it comes to transactions and ACIDness and they have given up on doing any complex joins, but they do an amazing job at making billions of objects available for millions of requests per second.
Semantic web triplestore databases are best at complex metadata applications where the number of classes changes on a day-by-day basis, where classes can change on-the-fly, and where it is really important to have self-descriptions of data. Modern triplestores have developed to the point where they offer the rigor of relational databases, the scalability of big data systems, and still support big complicated joins.
In particular, NoSQL as the "big data" type of database has been a movement to offer nonrelational distributed data storage that does not try to provide full ACID compliance. These offerings provide weak consistency guarantees such as eventual consistency and transactions restricted to single data items. While this offers significant flexibility and scaling, it may not be the best choice for primary storage of business-critical data.
The Hbases or big data databases are designed to accept very high volumes of data objects that are largely self-contained and involve very few joins. Like the RDBMSs they are very good at concurrent dynamic access. Big data systems also provide high availability. One thing they cannot do well is complex graph searches, and they are not good at combining structured and unstructured data, two areas where triplestores excel. Triplestores offer a viable option for NoSQL flexibility along with the ACID compliance you need from RDBMSs. The scaling capabilities of triplestores are continually maturing, and we are starting to see large-scale projects rely on triplestores in an enterprise setting.
Need Ultimate Flexibility? Triplestores Come Out on Top
The highly structured nature of RDBMSs makes them inflexible in the kinds of data they can accept. If you would want to add relationships between data, you would have to overhaul your schema system and add new link tables. In comparison, triplestores offer several ways of adding new relationships.
For instance, in the triplestore data model shown in Figure 1-Conceptual Triplestore Model, the simplest way to add a new relationship is to add a triple like "person1 uncle-of person2," with no need to make a new schema and add a new link table. Just add this new triple and now you can ask new queries involving uncles.
The disadvantage of this approach is that you would have to add a lot of triples to record all these family relationships. Thus, it is faster to just add a few rules, such as:
- if p0 has-child p1 and p0 has-child p2 then p1 has-sibling p2.
- p1 uncle-of p3 if p1 is male & p1 has-sibling p2 & p2 has-child p3.
Triplestores are highly flexible, making the addition of new information not anticipated in the original database design far more straightforward. In fact, triplestore databases are so flexible that database designers do not have to create a schema up front but can build an ontology based on the data they need to include, editing it as they go. But nothing prevents the designer from creating an initial ontology. Because of this structural flexibility it is easy to integrate databases in an almost lazy, bottom-up fashion.
In the traditional top-down master data approach, you spend an eternity getting the entire "truth" for all the data that you will integrate. With the triple store approach, you can keep (most of) the data in the original databases and slowly start building a set of triples and rules to integrate your data.
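To make this concrete, here is a minimal sketch of how the derived uncle relationship could be queried in SPARQL, the W3C query language for triplestores (the prefix and property name are hypothetical):

PREFIX ex: <http://example.org/family#>
SELECT ?uncle ?nephew
WHERE {
  ?uncle ex:uncleOf ?nephew .
}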
Complex Event Analysis? Triplestores Win Hands Down
We see a number of companies requiring event analysis with real-time, complex query capabilities. These companies are using large data warehouses with disparate RDF (Resource Description Framework)-based triple stores describing various types of events, where each event has at least two actors, usually a beginning and end time, and very often a geospatial component. These events are literally everywhere:
- In healthcare applications, we see hospital visits, drugstore visits, and medical procedures.
- In the communications industry, we see telephone call detail records including locations.
- In large corporations, email and calendar databases are basically social network databases filled with events in time and, in many cases, space.
- In the financial industry, every transaction is essentially an event.
- In the insurance industry, claims are important events that need more activity recognition.
- In the homeland security industry, basically everything focuses on events and actors.
So How Can Triplestores Help With This?
Some triplestores now offer social network analysis libraries and efficient geospatial and temporal indexing. With these capabilities they can do queries such as "find all meetings that happened in November 2010 within 5 miles of Berkeley that were attended by the three most influential people among Joe's friends and friends-of-friends." This kind of relationship analysis is becoming important in business both for the identification of macro trends and micro opportunities for sales to individual customers, and in governmental areas such as intelligence and defense.
This complex relationship analysis is nearly impossible to do with traditional RDMBSs, which are too inflexible to capture data on complex, evolving relationships effectively, while big data systems cannot accommodate the large numbers of joins required. Semantic technologies, however, can provide these insights and adapt their answers to changing conditions and increased data availability, making them ideal for the kind of pattern recognition analysis that is the heart of both market trend identification and intelligence.
Where Are Triplestores Used Today?
Triplestore technologies are already in use in several industries including pharmaceuticals, the defense industry (and the U.S. Department of Defense), telecommunications, media companies, and IT. They are used in such areas as:
- The analysis of the relative effectiveness of different cancer drugs in combination with other treatments on different patient populations
- The capture and analysis of detailed information on very large numbers of companies and the interrelationships among them
- The analysis of how all the customers of large cell phone providers use their phones and which, for instance, are good prospects for plan upgrades
- The integration of multiple complex databases such as those that enter a large enterprise as part of acquisitions
A Combination Is Best
A successful combination of technologies is an ideal approach. Wholesale replacement of your RDBMS or NoSQL investment is a fool's errand. A more practical approach is using a triplestore to "add a brain" to your legacy system. For a NoSQL approach, a combined system could provide fast, scalable access to the full content, with the inference and aggregation from a triplestore that is needed for the added richness to round out the solution. | <urn:uuid:515c306f-d5dd-4265-b410-67a5fed988d5> | CC-MAIN-2017-04 | http://www.dbta.com/Articles/ReadArticle.aspx?ArticleID=74251 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00263-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936313 | 1,837 | 2.625 | 3 |
September 17th, 2014 - by Walker Rowe
The world is running out of IPv4 IP addresses and will soon only issue IPv6 addresses. What does the system administrator need to do to make the transition to this new reality?
An IPv4 address is in the form 255.255.255.255, which is 4 octets of 8 bits each, or 32 bits in total. A bit can be 0 or 1, so there are 2^32, or approximately 4.3 billion, possible IP addresses. That is a smaller number than the number of people on the planet, and since people living in developed countries have multiple computers and cellphones, the world is running out of IP addresses.
The Microsoft cloud, Windows Azure, for example, has already run out of IP addresses for new customers in the USA. The only way to obtain more is to buy them from an ISP since ICANN does not have many more to issue. But in fact ISPs do not have many left to sell. ISPs in Europe only have about 16 million left according to RIPE; to say that these are taken means they have been allocated to ISPs. Brazil has some still available from its allotment. Africa as a continent has more.
The answer to this problem is to move to IPv6, which will allow 3.4×10^38 addresses, enough for every computer, TV set, cell phone, wind machine, thermostat, and many trillion more devices and domains to have an IP address. In other words, the plan is to allow the internet to grow to an almost unlimited number of domains and devices. In the future, cars and devices not even built yet will be attached to the internet. This IPv6 number, 3.4×10^38, is greater than the weight of the earth in grams (roughly 6×10^27), so we will never (never say 'never'?) run out again.
The new format for IP addresses is 8 groups of hexadecimal digits, written like this:
FFFF:FFFF:FFFF:FFFF:0000:0000:0000:0000. In this example the last four groups are zero, and a run of zero groups like this can be left off and abbreviated as '::'. The address prefix also carries scope information, which indicates the routing type, such as locally routable or globally routable.
Each of these groups has 16 bits, so a group can take 2^16 = 65,536 different values, ranging from 0x0000 up to 0xFFFF (65,535). Leading zeros within a group can also be dropped; you can abbreviate 0000 to 0 just like you can abbreviate 0DEF to DEF.
What do I need to do?
Google keeps track of IPv6 adoption here. For example, they show that 6.11% of domains in the Czech Republic have adopted IPv6, but that this is not enough because there is a 0.1% increase in latency due to routing issues there.
First, you need to make sure that your domains have an AAAA record (IPv6) in addition to an A record (IPv4).
For example, Facebook has one: 2a03:2880:2050:3f07:face:b00c::1.
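For example (a sketch using the reserved documentation prefix 2001:db8::/32 rather than a real address), a DNS zone entry and a quick check with the dig utility might look like this:

www.example.com.    IN    AAAA    2001:db8::1

$ dig AAAA www.example.com +short
2001:db8::1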
Some things you are going to need to do are the following.
In addition to updating your DNS record, if you run a DNS server, it will need to be upgraded or configured to support IPv6. For example, IPv6 makes DNS queries using both UDP and TCP. IPv4 uses UDP only.
If you are an ISP, cloud company, or web hosting company, you can read information from the RIPE Network Coordination Center (Europe) here on how to get IPv6 addresses for your customers.
For detailed information on routing and other DNS and security issues you can read Guidelines for the Secure Deployment of IPv6 from the American NIST's (National Institute for Standards and Technology's).
Some older Cisco routers have already hit a 500K limit on routing table space, resulting so far in at least one global slowdown. This means that the global routing table has grown to such proportions that at times it exhausts the 500K memory limit in certain Cisco routers under certain conditions. IPv6 is supposed to make routing easier, since routing information is built into the zone part of the IP address itself. Less memory will be required, but that does not mean internet backbones can keep using their old Cisco routers.
Because there are enough IP addresses to give every device on this planet (and other planets too!) an IP address, there will be no more need to do NAT routing. With NAT, all the devices on an internal network translate to one IP address on the internet. With IPv6 each device can have its own address.
All of this means you will need to reconfigure or replace your routers, especially those connected directly to the internet.
Web servers too must be configured to listen on an IPv6 address, like the Apache httpd.conf file:
<VirtualHost [2607:f0d0:1002:11::4]>
    ServerAdmin email@example.com
    DocumentRoot /home/httpd/cyberciti.biz/http
    ServerName cyberciti.biz
    ServerAlias www.cyberciti.biz
    ErrorLog logs/cyberciti.biz-error_log
    TransferLog logs/cyberciti.biz-access_log
    ErrorLog "/home/httpd/cyberciti.biz/logs/ipv6.error.log"
    CustomLog "/home/httpd/cyberciti.biz/logs/ipv6.access.log" common
    ScriptAlias /cgi-bin/ "/home/httpd/cyberciti.biz/cgi-bin/"
</VirtualHost>
Storage arrays and Apache Hadoop also have IP addresses that you might need to change. So have application servers. Ubuntu is set up by default without IPv6 support. Windows has had IPv6 support since the year 2000.
You can choose to keep IPv4 addresses for internal systems for many years as internal networks can still use IPv4, and Apache and other devices and software can run in dual-stack mode. However, as you add new domains to your hosting environment, your ISP is going to run out of IPv4 addresses for new customers, so your web server, firewall, and routers all need to be configured to support this. Your ISP will also need to upgrade their DHCP servers.
Mobile cell phones too will have to make the switch to IPv6. This is being addressed in the 4G mobile phone standard. But 4G is not available everywhere. 3G and even 2G remain the only options in most of the world. So you might find that people using mobile devices cannot even access the internet: but this is a problem for the phone company to fix; not you.
Cloud Tools and Other Apps
If your Microsoft Exchange server is configured to connect to the cloud and to your antispam and antivirus vendor, it might need to be changed to IPv6. Domains get blocked when their IP addresses are blacklisted for sending spam, but what if that IP address is IPv6? SpamHaus and others maintain a list of these blocked domains. You will need to make sure that your antivirus and antispam software and vendor support IPv6.
These are just a few of the issues and action items to address for the system administrator who is facing the end of IPv4 and will be forced onto IPv6 - and soon. Now is the time to plan to revise all of your devices and applications, as well as your domain names for you and for all of your hosting clients, many of whom may have no technical knowledge about this and will rely on your for direction.
Akamai’s State of the Internet Q1 2014 Report
American NIST (National Institute for Standards and Technology) Guidelines for the Secure Deployment of IPv6
RIPE Network Coordination Center (Europe) guide for IPv6 | <urn:uuid:0f9d6f68-85a0-44aa-9b9a-12e9cce422db> | CC-MAIN-2017-04 | https://anturis.com/blog/ipv6-guide-for-system-administrators/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00381-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929182 | 1,635 | 2.703125 | 3 |
In part five of this blog series we discussed that classification is considered to be the traffic that is important or grouping the traffic into groups based on the type of traffic. In this part of the series we’ll explore policy maps and how they impact what will be done to the traffic after it has been classified.
Marking is related to classification and allows network devices to classify a packet or frame based on a specific traffic descriptor. Some traffic descriptors include CoS, DSCP, IP precedence, QoS group, and MPLS. Marking can take place at Layer 2 or Layer 3.
Marking a packet or frame with its classification allows network devices to easily distinguish the marked packet or frame as belonging to a specific class.
Link layer media often changes as a packet travels from its source to its destination. Because a CoS field does not exist in a standard Ethernet frame, CoS markings at the link layer are not preserved as packets traverse non-trunked or non-Ethernet networks. Using Marking at the network layer provides a more permanent marker that is preserved from source to destination. Some edge devices only mark frames at the data link layer making it necessary for there to be a way to map QoS marking between the data link layer and the network layer.
A policy map matches the classes from the class map with how much bandwidth and/or priority has been assigned to this traffic. A policy map contains three elements:
- A case sensitive name
- A traffic class specified in the class command
- And the QoS policies
All traffic that is not classified by any of the class maps is considered to be part of the class default. The class default is part of every policy map, even if not configured.
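For instance, a minimal sketch of configuring the default class explicitly so that unclassified traffic gets fair queuing (the policy name and the action are only examples):

policy-map Example
 class class-default
  fair-queue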
In the following example, the Modular QoS CLI (MQC) Three-Level Hierarchical Policer has been configured for three classes within three separate policy maps. The three classes, called “c1,” “c2,” and “c3,” respectively, have been configured using the match criteria specified as follows:
class-map c1
 match any
class-map c2
 match ip precedence 1 2 3
class-map c3
 match ip precedence 2
Next, the classes are configured in three separate policy maps, called p_all (the primary-level policy map), pmatch_123 (the secondary-level policy map), and pmatch_2 (the tertiary-level policy map), as shown below.
policy p_all
 class c1
  police 100000
  service-policy pmatch_123
policy pmatch_123
 class c2
  police 20000
  service-policy pmatch_2
policy pmatch_2
 class c3
  police 8000
The primary goal of this configuration is to limit all traffic to 100 kbps. Within the primary goal, the secondary goal is to make sure packets with precedence values of 1, 2, or 3 do not exceed 20 kbps and that packets with a precedence value of 2 never exceed 8 kbps.
To verify that the classes have been configured correctly and to confirm the results of the traffic policing configuration in the policy maps, use the show policy-map command. The following sample output of the show policy-map command verifies the configuration of the classes in the policy maps:
Router# show policy-map
Policy Map p_all
  Class c1
    police cir 100000 bc 3000
      conform-action transmit
      exceed-action drop
    service-policy pmatch_123
Policy Map pmatch_123
  Class c2
    police cir 20000 bc 1500
      conform-action transmit
      exceed-action drop
    service-policy pmatch_2
Policy Map pmatch_2
  Class c3
    police cir 8000 bc 1500
      conform-action transmit
      exceed-action drop
In the next example, the first two classes are configured separately using the class-map command, and the third class is configured by specifying the match condition directly after the class name within the policy map.
Class-map match-all Test1
 Match protocol http
 Match access-group 100
Class-map match-any Test2
 Match protocol http
 Match access-group 101
Policy-map Test
 Class Test1
  Bandwidth 100
 Class Test2
  Bandwidth 200
 Class Test3 access-group 100
  Bandwidth 100
Access-list 100 permit tcp any host 10.1.1.1
Access-list 101 permit tcp any host 10.1.1.2
Service Policy Map
The purpose of the service-policy command is to attach service policies to interfaces. The same command is also used in policy-map class configuration mode to create hierarchical service policies.
The service-policy command has the following restrictions:
- The set command is not supported on the child policy
- The priority command can be used in either the parent or the child policy, but not both policies simultaneously
- The fair-queue command cannot be defined in the parent policy
In this example, all traffic on FastEthernet0/0 is shaped to 2 Mbps, and HTTP traffic is guaranteed 1 Mbps within that shaped rate.
(Parent)
class-map AllTraffic
 match any
policy-map ShapeAll
 class AllTraffic
  shape average 2000000
  service-policy QueueAll
interface FastEthernet0/0
 service-policy output ShapeAll

(Child)
class-map HTTP
 match protocol http
policy-map QueueAll
 class HTTP
  bandwidth 1000
In the next part of this series on QoS, the third piece of the MQC setup, the service policy, will be discussed. The service policy identifies where the policy is applied.
Author: Paul Stryer
- Cisco IOS Quality of Service Solutions Configuration Guide, Release 12.4T
- End-To-End QoS network Design by Tim Szigeti and Christina Hattingh – ISBN # 1-58705-176-1
- DiffServ – The Scalable End-To-End QoS Model
- Integrated Services Architecture
- Definition of the Differentiated Services Field
- An Architecture for Differentiated Services
- Requirements for IP Version 4 Routers
- An Expedited Forwarding PHB (Per-Hop Behavior) | <urn:uuid:6d8bed54-f68e-455c-8e4b-043b3dd792b4> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2009/12/21/quality-of-service-part-6-marking/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00289-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.837473 | 1,233 | 2.546875 | 3 |
A team of scientists from the University of Chicago Research Computing Center, the Texas Advanced Computing Center at the University of Texas at Austin, the San Diego Supercomputer Center at the University of California, San Diego, and the Department of Defense High Performance Computing Center in Vicksburg, Miss., is using some of the nation’s most powerful supercomputers to study influenza virus replication.
The primary treatment for influenza A is Amantadine. The drug is an organic compound that blocks proton flow through the M2 channel, one of the main targets for antiviral therapies. Unfortunately, the treatment is becoming less effective as a consequence of viral evolution. Mutations in the flu virus have changed the ability of Amantadine to bind to the M2 protein. Currently, there is a big push to identify more effective compounds for blocking influenza proteins to help guard against deadly pandemics.
To simulate the complex process of proton transfer through the M2 channel, the research team commandeered four high-performance computing systems: the Midway high-performance computing cluster at the University’s Research Computing Center, as well as resources from the Texas Advanced Computing Center at the University of Texas at Austin, the San Diego Supercomputer Center at the University of California, San Diego, and the Department of Defense High Performance Computing Center in Vicksburg, Miss.
The combined HPC power facilitated multiscale simulations with unprecedented detail. The results validate the link between mutations on the M2 protein and drug resistance, a connection that had been demonstrated in experiments, but up to now had not been described computationally. It’s a breakthrough that’s two decades in the making – as that’s how long scientists and drug designers have been striving to understand the intricacies of the M2 channel.
“Computer simulation, when done very well, with all the right physics, reveals a huge amount of information that you can’t get otherwise,” reports one of the lead researchers, Gregory Voth, the Haig P. Papazian Distinguished Service Professor in Chemistry. “In principle you could do these calculations with potential drug targets and see how they bind and if they are in fact effective.”
Deconstructing the flow of protons and the role of the M2 channel will enable scientists to predict the effectiveness of potential drug targets. The team is now gearing up to make the simulation run faster and to explain the effects of drug resistant mutations. They also plan to expand their study to other forms of influenza, like influenza B, which has a different M2 channel and is completely resistant to Amantadine.
A paper describing the research appears in the Proceedings of the National Academy of Sciences Online Early Edition for the week of June 16-20, 2014. It was written by Ruibin Liang, a graduate student in chemistry at UChicago, and three co-authors. | <urn:uuid:436264b5-9b9d-43b9-b5f6-23cc0dbcb255> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/06/17/computer-models-advance-flu-research/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00015-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936275 | 592 | 2.875 | 3 |
Article by Coleen Torres
Cell phones don’t feel newfangled but in truth they are. With innovation comes swift change, sometimes so swift that it is difficult for forensic scientists to keep up.
Criminals use cell phones in a variety of crimes and it is up to the forensic scientists to uncover their transgressions.
But where do they start? What are some complications that scientists encounter?
- Innovation - Change is the number one issue for forensic scientists to overcome. Even the cell phone manufacturers don’t always know how to retrieve information stored in new phones, so how can scientists retrieve the information? Staying up-to-date on new cell phones is challenging but not impossible. As fast as they are created, criminals come up with ways to abuse them. Strangely enough, this can be beneficial for forensic scientists. Using online tips can allow scientists to access information that would otherwise remain unreachable.
- Charge – Unlike computers, much of what is stored in a phone’s memory is reliant upon the battery. When the electricity goes, so does the information. Depending on what information you are looking for and how it is stored, battery or charger power is an essential thing to think about.
- SIM cards and removable media - SIM cards are the soul of a cell phone. They carry vital user information. Likewise, removable media, such as SD cards, can have lots of stored data on them. It is important that forensic scientists have the appropriate equipment to read and evaluate the data.
- Passwords – Password protection on cell phones is challenging to overcome, though not impossible. Depending on the model, passwords can be circumvented in several ways.
- Internet connection – The smarter cell phones become, the harder they are to examine. Using an internet connection instead of SMS or voice makes a forensic scientist’s job much more difficult.
- Quarantine – One thing that is often disregarded is the need to sequester the cell phone before analyzing it. New text messages can overwrite old material, and connections to the internet can invalidate old data. It is imperative to make sure the phone is isolated.
- Security augmentations - Forensic scientists must be especially alert when dealing with cell phones that have been modified in some way. Some users have the capability of putting in dead man’s switches, effectively wiping the contents after an action or a period of time. Malware can also be downloaded onto the phone, placing the examiner’s computer systems in danger.
There are many more problems for forensic scientists to watch out for, but these are the seven most common. Tracing cell phone data is a laborious task, but it can be done. All it takes is a little investigation, a few tools, and a lot of persistence.
This is a guest post by Coleen Torres, blogger at Phone Internet. She writes about saving money on home phone, digital TV and high-speed Internet by comparing prices from providers in your area for standalone service or phone TV Internet bundles.
Talkback and comments are most welcome...
Cross-posted from Short Infosec | <urn:uuid:f2d1b107-416d-4220-b927-ac03904b04dd> | CC-MAIN-2017-04 | http://www.infosecisland.com/blogview/20180-Seven-Problems-with-Cell-Phone-Forensics.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00225-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93011 | 629 | 3.25 | 3 |
A critical part of any CME setup is the dial peers. Dial peers are what are used to make calls in and out of the CME system. In modern telephone systems (generally speaking) there are two types of dial peers, POTS and VOIP dial peers. The POTS type is used when we are connecting to any traditional type voice connection and the VOIP type is used when defining an IP addressable voice device. Since my labs have dealt with VOIP phones and SIP trunks (IP voice connectivity) I’ll be talking mostly about the VOIP dial peers. The dial peers themselves consist of several critical pieces. Let’s walk through the two major pieces one at a time.
I like to think of destination patterns sort of like route statements. When a call is placed the router looks through the available dial peers and their associated destination patterns. It takes the digits that were dialed and matches them against all of the possible destination patterns looking for a match. When it finds a match, it uses that dial peer to place the call.
Destination Patterns are composed of numbers and ‘operators’. The most basic of operators are the wild card operators. These come in handy when you are only interested in matching part of a number. For instance, you might want to match 4 digit extensions at your second office. Since all of the numbers start with 4 at the second office, you want the dial peer to match a number that looks like…
4 , <Any Number>, <Any Number>, <Any Number>
Operators help you complete tasks such as this. Let’s run through them quickly and give an example of each. Note – I’m going to use spaces in my examples below between each character. In real life, don’t use spaces when defining your destination patterns.
Period (.)
Used to represent the wild card for any single digit 0-9.
6 1 2 . . . . . . .
Any 10 digit number that begins with ‘612’
The Letter T (T)
Used to match a dial string of variable length; the router collects digits until the interdigit timeout expires.
1 T
The digit 1 followed by a variable number of digits (anything else)
Brackets ([ ])
Used to represent a range of digits. The range can be represented by a contiguous number of digits (1-3), by individual digits (6,9), or as a combination (1-3,6,9). Additionally, you can use the ^ operator as a ‘not’ symbol to create a range that shouldn’t be matched.
1 [ 1 - 3 ] . . . . . . . . [ 6 , 8 ]
An 11 digit number that starts with 1, whose second digit is 1 through 3, and whose last digit is a 6 or an 8.
1 [ ^ 0 - 7 ] . . . . . . . . [ 6 , 8 ]
The digit 1, followed by an 8 or a 9, then 8 wild card digits, and finally a 6 or an 8.
Plus Sign (+)
Used to match one or more instances of the preceding digit.
1 + 6 1 2 4 5 6 3
1 or more 1s followed by 6, 1, 2, 4, 5, 6, and 3
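Because these operators behave much like a restricted regular-expression syntax, a small script can help you sanity-check which dialed strings a destination pattern would match. The Python sketch below translates only the operators described above (., T, +, and bracketed ranges) into a regex; it is a lab-study approximation, not a reproduction of the router’s digit-by-digit matching or longest-match selection.

import re

def pattern_to_regex(pattern):
    """Translate a simplified destination pattern into a Python regex."""
    out = []
    i = 0
    while i < len(pattern):
        ch = pattern[i]
        if ch == '.':
            out.append(r'\d')                  # any single digit 0-9
        elif ch == 'T':
            out.append(r'\d*')                 # variable-length remainder
        elif ch == '+':
            out.append('+')                    # one or more of the preceding digit
        elif ch == '[':
            j = pattern.index(']', i)
            out.append('[' + pattern[i + 1:j].replace(',', '') + ']')
            i = j
        else:
            out.append(re.escape(ch))          # literal digit
        i += 1
    return ''.join(out)

def matches(pattern, dialed):
    return re.fullmatch(pattern_to_regex(pattern), dialed) is not None

print(matches('612.......', '6125551234'))            # True: 10 digits starting with 612
print(matches('1[1-3]........[68]', '12555123468'))   # True: 11 digits matching the rules above
print(matches('4...', '4321'))                        # True: 4-digit extension starting with 4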
Session Protocol / Target
The other critical piece of the dial peer is the session protocol and target settings. Once the router has matched the destination pattern and decided which dial peer to use, we have to know where and how to send the call. In my case, this was the SIP-UA.com SIP trunk. I walked through how those worked in an earlier post.
There are many different competing vendors for virtual environments out there. Two of the major vendors are Microsoft and VMWare. While all of their products have more or less the same capabilities, each has its own quirks and peculiarities. Virtual machines are extremely useful tools for data storage. But your virtual environment is only as fault-tolerant as your server. Physical and logical damage to your server can lock you out of your virtual machines. If you’ve lost data from a Hyper-V virtual machine, our Hyper-V data recovery experts can help.
What Is Hyper-V?
Hyper-V is Microsoft’s Enterprise-class virtualization hypervisor. A hypervisor is the software that allows you to create virtual hard drives and virtual machines. Hyper-V is a Type I “bare metal” hypervisor, meaning that the hypervisor itself is the next step up from the hardware.
A diagram of the Hyper-V architecture. Hyper-V makes a virtual “parent partition” with Windows Server 2008, 2012, or 2016, and then creates “child partitions” for each guest machine
In general, the hypervisor is the mediator between “host” machine and “guest” machine. The host machine is the actual physical device itself. The guest machine (or machines) is the virtual environment created by the hypervisor. A Type I hypervisor like Hyper-V (can you guess where the name comes from?) manages the server’s hardware the same way the operating system itself would.
Type II embedded hypervisors, such as VMWare Workstation or VirtualBox, run inside an O/S just like a normal program. Hyper-V differs from an embedded hypervisor in that it occupies the spot in the “chain of command” where the host machine’s operating system would usually sit. Somewhat counter-intuitively, the operating system (Windows Server 2008, 2012, or 2016) is installed first. The Hyper-V kernel then inserts itself in between the O/S and the hardware. This gives Hyper-V the control over the hardware that the Windows Server O/S usually has. Hyper-V creates a “parent partition” to contain the Windows Server O/S. Hyper-V can then create further “child partitions” to contain each virtual guest machine. The operating systems in the child partitions do not have any direct access to the server’s hardware. Instead, Hyper-V manages that on their behalf.
Recovering Hyper-V VMs After a Server Crash
Hyper-V data recovery from a crashed server or SAN can be a difficult and intensive procedure. Essentially, there are two data recovery cases going on in every virtual environment recovery scenario. First is the recovery of the files from the physical media. Second is the recovery of the files from the virtual media.
If the server or SAN containing your Hyper-V virtual environments crashes, Gillware can help. In these situations, we commonly see the virtual machines stored on RAID-5, RAID-6, or nested RAID arrays. While these arrays are fault-tolerant, they are not failure-proof. A failure of multiple hard drives can occur for a variety of reasons. Our engineers will repair the failed hard drives in your server or SAN and reconstruct the array as best they can. Heavy damage to the drives can lead to “gaps” in the array. These gaps can affect the integrity of your Hyper-V virtual machines.
Our Hyper-V data recovery technicians clone the recovered virtual environments onto physical hard drives for thorough analysis. Through clever use of status mapping, our technicians can cross-reference the failures on your server with the data on your virtual machines. Cross-referencing the status maps helps our engineers make sure we are obtaining the best Hyper-V data recovery results possible.
How We Recover Deleted Hyper-V Files
If a Hyper-V virtual hard drive goes missing due to accidental deletion, you may see an error message like this: “The absolute path is valid for the ‘Hard Disk Image’ pool, but references a file that does not exist.”
One of the benefits of Hyper-V is that it is difficult to accidentally delete the virtual hard disk itself. Using Hyper-V Manager to remove a virtual workstation only deletes the checkpoints and configuration files and removes the virtual machine from its manager. The actual virtual hard disk file remains untouched, and you would have to go out of your way to delete it. That said, how many times have you intentionally deleted a file only to realize after the fact that you still needed something inside it?
No matter how much stuff is inside a virtual machine, behind the magic your hypervisor makes to make it look like a real computer, it’s still just a really big single file. Your host machine’s filesystem determines where a file goes when it’s deleted. Windows Server 2008 uses NTFS, while Server 2012 and 2016 can both use NTFS and Microsoft’s new ReFS filesystem. In these filesystems, a deleted file doesn’t automatically disappear. Instead, the filesystem removes the flags that mark the clusters containing the file as “in use”.
This becomes a big problem if your server remains in use afterward. Every new bit of data written to your server has a chance of overwriting part of the virtual machine. And that chance climbs higher and higher the more new data gets written. This can corrupt the critical files inside your Hyper-V virtual machine. After recovering your deleted VM, our technicians look through it and judge how much data, if any, has been corrupted.
We can also recover deleted data from within a healthy Hyper-V virtual machine. Our technicians take the healthy virtual machine and clone it to one of our own hard drives. We can then perform a normal deleted file recovery operation on the now-physical disk. Our technicians can also recover data that has been lost after an accidental checkpoint revert operation.
Why Choose Gillware for Hyper-V Data Recovery?
At Gillware, our computer scientists have pioneered and mastered virtual machine data recovery techniques. We can give you the most optimal Hyper-V data recovery results possible. Our technicians are well-acquainted with the unique properties of the Hyper-V virtual environment.
Our Hyper-V data recovery services are financially risk-free. Our evaluations are free. Even inbound shipping is free! Our evaluation process takes about one to two business days on average. After our technicians have evaluated your Hyper-V data recovery needs, we send you a statement of work. This includes a price quote and probability of success. If you don’t approve of the quote, we close your recovery case without charging you a cent. But if you do, we go ahead and perform the whole procedure before sending you a bill. We only bill you for our efforts after we’ve recovered your critical data.
Your recovered data is extracted to a password-protected hard drive for security and shipped to you. If you need certain critical files that were inside your Hyper-V virtual machines ASAP, we can send a small amount of your recovered files through a secure FTP connection. Your data never leaves our data recovery lab in any other way, shape, or form. We wait a week after you’ve received your data to erase it from our storage. This is all done in accordance with our SOC 2 Type II data security policies.
Ready to Have Gillware Assist You with Your Hyper-V Data Recovery Needs?
Best-in-class engineering and software development staff
Gillware employs a full time staff of electrical engineers, mechanical engineers, computer scientists and software developers to handle the most complex data recovery situations and data solutions
Strategic partnerships with leading technology companies
Gillware is proud to be a recommended provider for Dell, Western Digital and other major hardware and software vendors. These partnerships allow us to gain unique insight into recovering from these devices.
RAID Array / NAS / SAN data recovery
Using advanced engineering techniques, we can recover data from large capacity, enterprise grade storage devices such as RAID arrays, network attached storage (NAS) devices and storage area network (SAN) devices.
Virtual machine data recovery
Thanks to special engineering and programming efforts, Gillware is able to recover data from virtualized environments with a high degree of success.
SOC 2 Type II audited
Gillware has been security audited to ensure data safety, meaning all our facilities, networks, policies and practices have been independently reviewed and determined as completely secure.
Facility and staff
Gillware’s facilities meet the SOC 2 Type II audit requirements for security to prevent entry by unauthorized personnel. All staff are pre-screened, background checked and fully instructed in the security protocol of the company.
We are a GSA contract holder.
We meet the criteria to be approved for use by government agencies
GSA Contract No.: GS-35F-0547W
Our entire data recovery process can be handled to meet HIPAA requirements for encryption, transfer and protection of e-PHI.
No obligation, no up-front fees, free inbound shipping and no-cost evaluations.
Gillware’s data recovery process is 100% financially risk free. We only charge if the data you want is successfully recovered.
Our pricing is 40-50% less than our competition.
By using cutting edge engineering techniques, we are able to control costs and keep data recovery prices low.
Instant online estimates.
By providing us with some basic information about your case, we can give you an idea of how much it will cost before you proceed with the recovery.
We only charge for successful data recovery efforts.
We work with you to define clear data recovery goals for our technicians, and only charge you upon successfully meeting these goals and recovering the data that is most important to you.
Gillware is trusted, reviewed and certified
Gillware has the seal of approval from a number of different independent review organizations, including SOC 2 Type II audit status, so our customers can be sure they’re getting the best data recovery service possible.
Gillware is a proud member of IDEMA and the Apple Consultants Network. | <urn:uuid:66e7c8a8-3c82-40c1-b458-cc69d7463aae> | CC-MAIN-2017-04 | https://www.gillware.com/hyper-v-data-recovery/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00161-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.906081 | 2,118 | 3.171875 | 3 |
A study conducted at Texas A&M University has found that driver response times are significantly delayed when drivers voice messages aloud to their phones - troublesome news for the likes of Apple and its voice command system, Siri.
It is the first study to compare traditional texting with voice-to-text on a handheld device during driving.
Christine Yager, the woman who headed the study, told Reuters: "In each case, drivers took about twice as long to react as they did when they weren't texting. Eye contact to the roadway also decreased, no matter which texting method was used."
The research revolved around 43 participants, all of whom were made to drive along a test track without using any electronic devices. They were then made to take the same route, first whilst texting and then again whilst using voice-to-text.
Yager revealed that voice-to-text actually took longer than ordinary texting, due to the need to correct errors during transcription.
Research carried out by The Cellular Telecommunications Industry Association found that 6.1 billion text messages per day were sent in the United States in 2012 alone. Data collected from AAA, the national driver's association, revealed that 35 per cent of drivers admitted to reading a text or email while driving, whilst 26 per cent admitted to typing a message.
Yager voiced concerns that drivers actually feel safer whilst using the voice-to-text method of communicating whilst driving, even though driving performance is equally hindered. The worry is that this may lead to a false belief that texting while using spoken commands is safe, when this isn't the case.
Last year, a survey carried out by ingenie, a driving insurance company for 17-25 year olds, asked 1,000 customers how they use their phone whilst driving. 17 per cent admitted to playing Angry Birds behind the wheel.
This doesn't bode well for Volkswagen; the German car giant has just unveiled the iBeetle, which is based around the idea of being able to manipulate your car through voice commands issued to the iPhone.
This story, "Apple's Siri Could Make You Crash" was originally published by Macworld U.K.. | <urn:uuid:957c082a-c191-4a6d-afe3-d8a41d480b0f> | CC-MAIN-2017-04 | http://www.cio.com/article/2386443/mobile-apps/apple-s-siri-could-make-you-crash.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00519-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.974941 | 438 | 2.828125 | 3 |
Electromagnetic radiation is pervasive. We are surrounded by a wealth of appliances, mobile devices and their signals, not to mention a number of other technologies that emit microwaves that pass through our bodies.
Major questions have emerged in recent years about the effect of this radiation on the human body—not the least of which is the question of whether brain and other cancers might be linked to the use of cell phones and other microwave-radiating devices. In an attempt to discover just how these microwaves interact with our bodies as they pass through us, the University of Texas at Austin has been tasked with a five-year interdisciplinary study that uses one of the highest-resolution electromagnetic human models to date.
This human model, called AustinMan, is helping scientists understand in great detail what happens to body tissues when they encounter microwaves, particularly those from mobile devices. The scientists behind the AustinMan modeling project claim that their method is superior to the “traditional” method of gauging the impact of wireless devices on our bodies—one that relied almost exclusively on survey and statistical data to make broad generalizations about relationships between wireless devices and human health problems.
AustinMan is, as the University of Texas describes, “a publicly available model that represents the human body with one-millimeter-cubed resolution (something akin to a virtual Lego body composed of extremely small parts).” To create the AustinMan model, the group worked with anatomists to transform the image slices into computational maps of the body’s tissues. “Whereas previous models had included only a handful of tissue types, the current model contains 30 types of tissues, each with unique electromagnetic properties. Overall, the model contains more than 100 million voxels (3-D versions of pixels) that interact with one another during the virtual cellphone calls.”
According to Aaron Dubrow from the Texas Advanced Computing Center (TACC), “Such extreme simulations are impossible using traditional computing methods and software. Even with the efficient algorithms that the researchers are developing, each simulation would take about five years of continuous execution on an ordinary desktop computer. Crunching the numbers on the Ranger supercomputer at TACC on the Pickle Research Campus in North Austin, however, the lead researcher and his team can perform these simulations in less than six hours.”
Dubrow writes, “During the past two years, the project has used more than 3 million computing hours on TACC’s supercomputers, the equivalent of 342 years on a single processor.”
While the team’s goal is not to explore the direct medical connections in depth, the creation of this model will allow for more in-depth studies that can shed light on the effect of the use of cellphones and other devices that are reliant on wireless signals. | <urn:uuid:6bf82d76-70d1-477c-92c7-cdd9144e7bb8> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/10/18/tacc_supermodel_takes_on_microwaves/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00427-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942633 | 574 | 3.46875 | 3 |
In the state of New York, one of the world’s most pristine natural ecosystems is being threatened. Road salt, storm water runoff and invasive species are harming Lake George -- a long, narrow lake at the southeast base of the Adirondack Mountains.
So to both understand and manage these threats, Rensselaer Polytechnic Institute, IBM and the FUND for Lake George have launched a three-year, multi-million dollar collaboration, called "The Jefferson Project at Lake George."
This project, according to a press release, includes an environmental lab with a monitoring and prediction system that will give scientists and the community a real-time picture of the health of the lake. The facility, according to the release, is expected to "create a new model for predictive preservation and remediation of critical natural systems on Lake George, in New York, and ultimately around the world."
To gain a scientific understanding of the lake, a combination of advanced data analytics, computing and data visualization techniques, new scientific and experimental methods, 3-D computer modeling and simulation, and historical data will be used -- as will weather modeling and sensor technology.
The monitoring system is expected to give scientists a view of circulation models in Lake George -- something they've not seen before. These 3-D models could then be used to understand how currents distribute nutrients and contaminants across the 32-mile lake and their correlation to specific stressors, according to the release. The models also can be overlaid with historical and real-time weather data to see the impact of weather and tributary flooding on the lake's circulation patterns.
In addition, a new Smarter Water laboratory and visualization studio will help local leaders see a real-time picture of the current and future computer modeled conditions, water chemistry and health of the lake's natural systems -- data that local groups could use to make informed decisions about protecting the lake and its ecosystem.
“Lake George has a lot to teach us, if we look closely,” said Rensselaer President Shirley Ann Jackson. “By expanding Rensselaer’s Darrin Fresh Water Institute with this remarkable new cyberphysical platform of data from sensors and other sources, and with advanced analytics, high performance computing and web science, we are taking an important step to protect the timeless beauty of Lake George, and we are creating a global model for environmental research and protection of water resources.” | <urn:uuid:7c4b8d94-deb0-463b-add0-3931b547fc99> | CC-MAIN-2017-04 | http://www.govtech.com/data/Making-New-Yorks-Lake-George-the-Worlds-Smartest-Lake.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00153-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926847 | 494 | 3.375 | 3 |
Definition: An efficient, in-place variant of radix sort that distributes items into hundreds of buckets. The first step counts the number of items in each bucket, and the second step computes where each bucket will start in the array. The last step cyclically permutes items to their proper bucket. Since the buckets are in order in the array, there is no collection step. The name comes by analogy with the Dutch national flag problem in the last step: efficiently partition the array into many "stripes". Using some efficiency techniques, it is twice as fast as quicksort for large sets of strings.
See also histogram sort.
Note: This works especially well when sorting a byte at a time, using 256 buckets.
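For concreteness, here is a short Python sketch of the idea, sorting byte strings one byte at a time with one bucket per byte value plus a bucket for exhausted strings. It follows the count / compute-offsets / permute outline in the definition; the published algorithm adds further efficiency tricks not shown here.

def american_flag_sort(items, lo=0, hi=None, depth=0):
    """In-place MSD radix sort of items[lo:hi] (a list of bytes objects)."""
    if hi is None:
        hi = len(items)
    if hi - lo < 2:
        return
    # Bucket 0 holds strings with no byte at this depth; bytes 0..255 map to buckets 1..256.
    key = lambda s: s[depth] + 1 if depth < len(s) else 0

    counts = [0] * 257                        # pass 1: count items per bucket
    for i in range(lo, hi):
        counts[key(items[i])] += 1

    starts = [0] * 257                        # pass 2: where each bucket begins
    starts[0] = lo
    for b in range(1, 257):
        starts[b] = starts[b - 1] + counts[b - 1]
    ends = [starts[b] + counts[b] for b in range(257)]

    nxt = starts[:]                           # pass 3: cyclically permute in place
    for b in range(257):
        while nxt[b] < ends[b]:
            k = key(items[nxt[b]])
            if k == b:
                nxt[b] += 1                   # already in its own bucket
            else:                             # move it to its bucket's next free slot
                items[nxt[b]], items[nxt[k]] = items[nxt[k]], items[nxt[b]]
                nxt[k] += 1

    for b in range(1, 257):                   # recurse on each bucket for the next byte
        american_flag_sort(items, starts[b], ends[b], depth + 1)

data = [b"banana", b"apple", b"band", b"app", b"cherry"]
american_flag_sort(data)
print(data)    # [b'app', b'apple', b'banana', b'band', b'cherry']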
[Image: the flag of the United States of America]
Peter M. McIlroy, Keith Bostic, and M. Douglas McIlroy, Engineering Radix Sort, Computing Systems, 6(1):5-27, 1993.
Entry modified 2 December 2009.
Cite this as:
Paul E. Black, "American flag sort", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 2 December 2009. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/americanFlagSort.html | <urn:uuid:b23b800b-bea4-4ae3-9204-a69a02b5c52d> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/americanFlagSort.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00547-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.855499 | 315 | 3.765625 | 4 |
Definition: The (weight) balance of a tree is the number of leaves of the left subtree of the tree, denoted |T_l|, divided by the total number of leaves of the tree. Formally, ρ(T) = |T_l| / |T|.
Also known as root balance.
See also BB(α) tree, height-balanced tree, right rotation, left rotation, relaxed balance.
Note: The balance of a node is the balance of the (sub)tree rooted at that node. After Johann Blieberger, Discrete Loops and Worst Case Performance, page 22.
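A minimal Python sketch of the definition, assuming a simple binary-tree node with left/right children (the Node class is only illustrative, not part of the entry):

class Node:
    def __init__(self, left=None, right=None):
        self.left = left
        self.right = right

def leaves(t):
    """Number of leaves |T| of the (sub)tree rooted at t."""
    if t is None:
        return 0
    if t.left is None and t.right is None:
        return 1
    return leaves(t.left) + leaves(t.right)

def balance(t):
    """Weight balance rho(T) = |T_l| / |T| of the tree rooted at t."""
    total = leaves(t)
    return None if total == 0 else leaves(t.left) / total

# Root whose left subtree has 2 leaves and whose right subtree is a single leaf.
tree = Node(left=Node(Node(), Node()), right=Node())
print(balance(tree))    # 0.666..., i.e. 2/3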
Entry modified 20 December 2004.
Cite this as:
Paul E. Black, "balance", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 20 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/balance.html | <urn:uuid:7bc09af4-a358-4e01-8b43-251edc60cc5a> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/balance.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00547-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.874648 | 256 | 3 | 3 |
Best Practices for Safe Computing - Prevention of Malware Infection
Common sense, Good Security Habits and safe surfing are essential to protecting yourself from malware infection. No amount of security software is going to defend against today's sophisticated malware writers for those who do not practice these principles and stay informed. Knowledge and the ability to use it is the best defensive tool anyone could have. This includes educating yourself as to the most common ways malware is contracted and spread as well as prevention.
Important Fact: It has been proven time and again that the user is a more substantial factor (weakest link) in security than the architecture of the operating system or installed protection software.
- End Users Remain Biggest Security Headache as Compromised Endpoints Increase
- Studies prove once again that users are the weakest link in the security chain
- Social Engineering: Attacking the Weakest Link in the Security Chain
- Social media platforms...a hotbed of cyber criminal activity
- Millions of users open spam emails, click on links
Therefore, security begins with personal responsibility.
Tips to protect yourself against malware infection:
Keep Windows and Internet Explorer current with all security updates from Microsoft which will patch many of the security holes through which attackers can gain access to your computer. When necessary, Microsoft releases security updates on the second Tuesday of each month and publishes Security update bulletins to announce and describe the update. If you're not sure how to install updates, please refer to How To Access Windows Update.
Avoid pirated software (warez), cracking tools, and keygens. They are a security risk which can make your computer susceptible to a smörgåsbord of malware infections, remote attacks, exposure of personal information, and identity theft. In some instances an infection may cause so much damage to your system that recovery is not possible and the only option is to wipe your drive, reformat and reinstall the OS.
Avoid peer-to-peer (P2P) file sharing programs (i.e. Limewire, eMule, Kontiki, BitTorrent, BitComet, uTorrent, BitLord, BearShare). They too are a security risk which can make your computer susceptible to malware infections. File sharing networks are thoroughly infested with malware according to security firm Norman ASA and many of them are unsafe to visit or use. Malicious worms, backdoor Trojans, IRCBots, Botnets, and rootkits spread across P2P file sharing networks, gaming and underground sites. Users visiting such sites may encounter innocuous-looking banner ads containing code which can trigger pop-up ads and malicious Flash ads that install viruses, Trojans, and spyware. Ads are a target for hackers because they offer a stealthy way to distribute malware to a wide range of Internet users. The best way to reduce the risk of infection is to avoid these types of web sites and not use any P2P applications. If you must use file sharing, scan your downloads with anti-virus software before opening them and ensure Windows is configured to show file extensions - Why you should set your folder options to “show known file types”.
Avoid Bundled software. Many toolbars, add-ons/plug-ins, browser extensions, screensavers and useless or junk programs like registry cleaners, optimizers, download managers, etc, come bundled with other software (often without the knowledge or consent of the user) and can be the source of various issues and problems to include Adware, pop-up ads browser hijacking which may change your home page/search engine, and cause user profile corruption. Thus, bundled software may be detected and removed by security scanners as a Potentially Unwanted Program (PUP), a very broad threat category which can encompass any number of different programs to include those which are benign as well as problematic. Since the downloading of bundled software sometimes occurs without your knowledge, folks are often left scratching their heads and asking "how did this get on my computer." Even if advised of a toolbar or Add-on, many folks do not know that it is optional and not necessary to install in order to operate the program. If you install bundled software too fast, you most likely will miss the "opt out" option and end up with software you do not want or need. The best practice is to take your time during installation of any program and read everything before clicking that "Install" or "Next" button. Even then, in some cases, this opting out does not always seem to work as intended.
Beware of Rogue Security software and crypto ransomware as they are some of the most common sources of malware infection. They infect machines by using web exploits, drive-by downloads, exploit kits, social engineering and scams.
The best defensive strategy to protect yourself from ransomware (crypto malware infections) is a comprehensive approach...make sure you are running an updated anti-virus and anti-malware product, update all vulnerable software, disable VSSAdmin.exe, use supplemental security tools with anti-exploitation features capable of stopping (preventing) infection before it can cause any damage and routinely backup your data...then disconnect the external drive when the backup is completed.
- Use an Anti-Exploit Program to Help Protect Your PC From Zero-Day Attacks
- Best practices for securing your environment against ransomware
- Ransomware: 7 Defensive Strategies
- How to Strengthen Enterprise Defenses against Ransomware
- Ransomware Do's and Dont's: Protecting Critical Data
You should also rely on behavior detection programs like McAfee Real Protect rather than standard anti-virus definition (signature) detection software only. This means using programs that can detect when malware is in the act of modifying/encrypting files AND stop it, rather than just detecting the malicious file itself, which in most cases is not immediately detected by anti-virus software.
...Prevention before the fact is the only guaranteed peace of mind on this one.
Some anti-virus and anti-malware programs include built-in anti-exploitation protection so be sure to familiarize yourself with all their features and settings.
For more specific information on how these types of malware install themselves and spread infections, read How Malware Spreads - How your system gets infected.
Keeping Autorun enabled on flash drives has become a significant security risk as they are one of the most common infection vectors for malware which can transfer the infection to your computer. One in every eight malware attacks occurs via a USB device. Many security experts recommend you disable Autorun as a method of prevention. Microsoft recommends doing the same.
* Microsoft Security Advisory (967940): Update for Windows Autorun
* Microsoft Article ID: 971029: Update to the AutoPlay functionality in Windows
Note: If using Windows 7 and above, be aware that in order to help prevent malware from spreading, the Windows 7 engineering team made important changes and improvements to AutoPlay so that it will no longer support the AutoRun functionality for non-optical removable media.
Always update vulnerable software like browsers, Adobe Reader and Java Runtime Environment (JRE) with the latest security patches. Older versions of these and several other popular programs have vulnerabilities that malicious sites can use to exploit and infect your system.
* Kaspersky Lab report: Evaluating the threat level of software vulnerabilities
* Time to Update Your Adobe Reader
* Adobe Security bulletins and advisories
* Microsoft: Unprecedented Wave of Java Exploitation
* eight out of every 10 Web browsers are vulnerable to attack by exploits
Exploit kits are a type of malicious toolkit used to exploit security holes found in software applications...for the purpose of spreading malware. These kits come with pre-written exploit code and target users running insecure or outdated software applications on their computers.
Tools of the Trade: Exploit Kits
To help prevent this, install and use Secunia Personal Software Inspector (PSI), a FREE security tool designed to detect vulnerable and out-dated programs/plug-ins which expose your computer to malware infection.
Use strong passwords and change them anytime you encounter a malware infection, especially if the computer was used for online banking, paying bills, has credit card information or other sensitive data on it. This would include any used for taxes, email, eBay, paypal and other online activities. You should consider them to be compromised and change all passwords immediately as a precaution in case an attacker was able to steal your information when the computer was infected. Many of the newer types of malware are designed to steal your private information to include passwords and logins to forums, banks, credit cards and similar sensitive web sites. Always use a different password for each web site you log in to. Never use the same password on different sites. If using a router, you also need to reset it with a strong password.
Don't disable UAC in Windows. Limit user privileges, remove admin rights or use limited user accounts, AND be sure to turn on file extensions in Windows so that you can see them. Ransomware disguises .exe files as fake PDF files with a PDF icon inside a .zip file attached to the email. Since Microsoft does not show extensions by default, they look like normal PDF files and people routinely open them. A common tactic of malware writers is to disguise malicious files by hiding the file extension or adding spaces to the existing extension, so be sure you look closely at the full file name.
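As a rough illustration of why showing extensions matters, the small Python check below flags the two disguises mentioned above: a double extension such as invoice.pdf.exe and spaces padded in front of the real extension. The list of executable extensions is only a sample and the check is a heuristic, not a substitute for security software.

import os

EXECUTABLE_EXTS = {".exe", ".scr", ".com", ".pif", ".bat", ".cmd", ".js", ".vbs"}

def looks_disguised(filename):
    """Heuristic check for filenames that try to hide their real extension."""
    root, ext = os.path.splitext(filename)
    findings = []
    if ext.lower() in EXECUTABLE_EXTS:
        inner = os.path.splitext(root.rstrip())[1]   # the "decoy" extension, if any
        if inner and inner.lower() not in EXECUTABLE_EXTS:
            findings.append("double extension: looks like '%s' but runs as '%s'" % (inner, ext))
        if root != root.rstrip():
            findings.append("spaces inserted before the real extension")
    return findings

for name in ["report.pdf", "invoice.pdf.exe", "photo.jpg                .scr"]:
    print(name, "->", looks_disguised(name) or "no obvious disguise")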
Know how to recognize Email scams and do not open unsolicited email attachments as they can be dangerous and result in serious malware infection. For example, Zbot/Z-bot (Zeus) is typically installed through opening disguised malicious email attachments which appear to be legitimate correspondence from reputable companies such as banks and Internet providers or UPS or FedEx with tracking numbers.
* Using Caution with Email Attachments
* How to Avoid Getting a Virus Through Email
* Safety tips for handling email attachments
Beware of phony Tech Support Scamming.
Cybercriminals don't just send fraudulent email messages and set up fake websites. They might also call you on the telephone and claim to be from Microsoft. They might offer to help solve your computer problems or sell you a software license...Neither Microsoft nor our partners make unsolicited phone calls (also known as cold calls) to charge you for computer security or software fixes...Do not trust unsolicited calls. Do not provide any personal information.
For more specific information about these types of scams, please read this topic.
Finally, back up your important data and files on a regular basis. Backing up is among the most important maintenance tasks users should perform on a regular basis, yet it's one of the most neglected areas. Some infections may render your computer unbootable during or before the disinfection process. Even if you're computer is not infected, backing up is part of best practices in the event of hardware or system failure related to other causes.
* Methods for backing up your files
* Windows Backup - The essential guide
It is also a good practice to make a disk image with an imaging tool (i.e. Acronis True Image, Drive Image, Ghost, Macrium Reflect, etc.). Disk Imaging allows you to take a complete snapshot (image) of your hard disk which can be used for system recovery in case of a hard disk disaster or malware resistant to disinfection. The image is an exact, byte-by-byte copy of an entire hard drive (partition or logical disk) which can be used to restore your system at a later time to the exact same state the system was when you imaged the disk or partition. Essentially, it will restore the computer to the state it was in when the image was made.
Security Resources from Microsoft:
* How can I help protect my computer from viruses?
* Threats and Countermeasures: Security Settings in Windows Server 2003 and Windows XP
* Threats and Countermeasures: Security Settings in Windows Server 2008 and Windows Vista
* Microsoft Solutions for Security: The Antivirus Defense-in-Depth Guide
Other Security Resources:
* US-CERT: Safeguarding Your Data
* US-CERT: Good Security Habits
* Simple and easy ways to keep your computer safe and secure on the Internet
* Malware Prevention - Preventing Re-infection
* Hardening Windows Security - Part 1 & Part 2
* How to Stop 11 Hidden Security Threats
Browser Security Resources:
* Configuring Internet Explorer for Practical Security and Privacy
* How to Secure Your Web Browser
* Safe Web practices - How to remain safe on the Internet
* Use Task Manager to close pop-up messages to safely exit malware attacks
Simple Ways To Secure Your Privacy:
* The Simplest Security: A Guide To Better Password Practices
* Securing Privacy Part 1: Hardware Issues
* Securing Privacy Part 2: Software Issues
* Securing Privacy Part 3: E-mail Issues
* Securing Privacy Part 4: Internet Issues
Other topics discussed in this thread:
- Choosing an Anti-Virus Program
- Replacing your Anti-virus - Why should you use Antivirus software?
- Supplementing your Anti-Virus Program with Anti-Malware Tools
- Choosing a Firewall
- Glossary of Malware Related Terms
- Why you should not use Registry Cleaners and Optimization Tools
- I have been hacked...What should I do? - How Do I Handle Identify Theft, Scams and Internet Fraud
- About those Toolbars and Add-ons - Potentially Unwanted Programs (PUPs)
- About In-text advertising: Text Enhanced Ads & How to remove Them
- File Sharing (P2P), Keygens, Cracks, Warez, and Pirated Software are a Security Risk
- There are no guarantees or shortcuts when it comes to malware removal - When should I reformat?
- Beware of Phony Emails &Tech Support Scams | <urn:uuid:faff5e99-9925-4f69-87b2-aad832946878> | CC-MAIN-2017-04 | https://www.bleepingcomputer.com/forums/t/407147/answers-to-common-security-questions-best-practices/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00455-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.897589 | 2,896 | 2.90625 | 3 |
Some exemplary data center cooling techniques
Friday, Mar 7th 2014
Computers are hot. Any person who ever actually places a laptop on their lap knows this. After 20 or 30 minutes of uninterrupted computing - or sooner depending on the demands of the program you're running - the device starts to heat up a bit. If it is not given a rest, that heat can grow scorching. Now imagine that heat on an exponentially greater scale and you have an idea of the conditions of a data center. Without cooling options, data centers would quickly overheat and become completely unusable. For this reason, data room cooling systems represent one of the most integral components of any center's functionality. Here are some examples of facilities that take creative approaches to cooling, reaping benefits in the process:
1. Microsoft Data Center (Dublin, Ireland): The computing giant has dealt with a greater influx of big data ever since it began building its cloud platform. In order to accommodate its cloud presence in Europe, Africa and parts of the Middle East, the company built a data center in Dublin, according to Data Center Knowledge. But despite covering a massive 584,000 square feet, the facility does not use conventional chillers to maintain a good server room temperature. Instead, it uses fresh air.
Such a process would not be possible just anywhere. In an extremely hot region, for instance, using free air would present a greater challenge. However, by building the center in a naturally cool area like Dublin, Microsoft guaranteed its facility would benefit from outside air, thereby cutting cooling costs and remaining environmentally friendly. Microsoft's efforts have not gone unnoticed. The center received an award for Best European Enterprise Data Center Facility at the 6th Data Centres Europe 2010 Conference.
2. Verne Global Data Center (Keflavik, Iceland): If you plan on maximizing the benefits of outside air for data room cooling, there are few locations more optimal than near to the Arctic Circle. And that is exactly where the Verne Global Data Center set up shop. By placing its facility in one of the most consistently cool climates in the world, Verne Global guaranteed a steady stream of natural resources to keep servers chilled, Data Center Knowledge reported.
Data centers are often housed in buildings and locations repurposed from other industries. This is the case with Verne's facility, which was built on a former NATO base. Because it uses hydroelectric and geothermal power, the facility remains focused on environmental conservation even as it houses an energy-intensive operation. According to Verne's website, the center's cooling solutions enable it to cut costs by 80 percent. Because the Icelandic climate is so conducive to cooling data centers, Verne boasts the use of outside air as a cooling method 365 days a year.
3. Deltalis RadixCloud (Swiss Alps): Like Verne's facility, the Deltalis RadixCloud data center takes full advantage of a singularly incredible location. Buried in the center of the Swiss Alps, the Deltalis RadixCloud center uses glacial water to maintain a workable data center temperature, according to Energy Manager Today. Housed in an old Swiss Air Force base, the center remains cool year round.
I’m sorry to put here something that is not really technical but for a blog with the name “howdoesinternetwork.com” it would be strange not to follow the story about the future of DNS governance given the fact that DNS is a crucial part of internet functionality.
You probably know how the internet works given the fact that you are visiting a blog like this. Regardless of that, it will not hurt to explain in few words the importance of DNS (Domain Name System) for a normal internet operation.
Let’s surf to se how this works
If you want to open this webpage or send an email to someone, you must enter a destination into your computer so it knows where to send your stuff. As you are most surely a human being, you would like to use a name like google.com for opening a webpage or an e-mail address in order to send a message to your colleagues (rather than some strange numbers separated by dots or colons). Almost all humans are like that and they want to use names and addresses. Computers, on the other hand, know how to reach each other only by IP addresses.
You can see that we needed someone to take the role of the “address book” as soon as we got the internet. | <urn:uuid:7a765ddc-beb2-43de-a020-78ad123e766b> | CC-MAIN-2017-04 | https://howdoesinternetwork.com/category/word-from-the-author | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00483-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948298 | 265 | 2.875 | 3 |
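To see that "address book" in action, you can ask your system's resolver to translate a name into the numeric addresses computers actually use. A quick Python illustration (the addresses returned will differ depending on where and when you run it):

import socket

def lookup(hostname):
    """Resolve a human-friendly name to the IP addresses computers use."""
    answers = socket.getaddrinfo(hostname, None)
    return sorted({answer[4][0] for answer in answers})   # unique IPv4/IPv6 addresses

print(lookup("google.com"))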
UDP port 5152 protocol information
A port is an address associated with a particular process on a computer. Ports have a unique number in the header of a data packet that is used to map this data to that process. Port numbers are divided into three ranges: Well Known Ports, Registered Ports, and Dynamic/Private Ports. Default port values for commonly used TCP/IP services have values lower than 255 and Well Known Ports have numbers that range from 0 to 1023. Registered Ports range from 1024 to 49151 and Dynamic/Private Ports range from 49152 to 65535. An "open port" is a TCP/IP port number that is configured to accept packets while a "closed port" is one that is set to deny all packets with that port number.
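Those ranges are easy to encode. A small Python helper that classifies a port number according to the ranges described above:

def port_category(port):
    """Classify a TCP/UDP port number into its range."""
    if not 0 <= port <= 65535:
        raise ValueError("port numbers are 16-bit values (0-65535)")
    if port <= 1023:
        return "well known port"
    if port <= 49151:
        return "registered port"
    return "dynamic/private port"

for p in (80, 5152, 51234):
    print(p, "->", port_category(p))
# 80 -> well known port, 5152 -> registered port, 51234 -> dynamic/private port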
Hackers use "port scanning" to search for vulnerable computers with open ports using IP addresses or a group of random IP address ranges so they can break in and install malicious programs (viruses, Trojans). Botnets and Zombie computers scour the net and will randomly scan a block of IP addresses. These infected computers are searching for "vulnerable ports" and make repeated attempts to access them. If your PC is sending out large amounts of data, this usually indicates that your system may have a virus or a Trojan horse.
You can use netstat, a command-line tool that displays incoming and outgoing network connections, from a command prompt to obtain Local/Foreign Addresses, PID and listening state.
- netstat /? lists all available parameters that can be used.
- netstat -a lists all active TCP connections and the TCP and UDP ports on which the computer is listening.
- netstat -b lists all active TCP connections, Foreign Address, State and process ID (PID) for each connection.
- netstat -n lists active TCP connections. Addresses and port numbers are expressed numerically and no attempt is made to determine names.
- netstat -o lists active TCP connections and includes the process ID (PID) for each connection. You can find the application based on the PID on the Processes tab in Windows Task Manager. This parameter can be combined with -a, -n, and -p (example: netstat -ano).
If the port in question is listed as "Listening" there is a possibility that it is in use by a Trojan server but your firewall, if properly configured, should have blocked any attempt to access it.
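If you would rather go through that output programmatically, the Python sketch below runs netstat -ano on a Windows machine and lists every TCP endpoint reported as LISTENING together with the owning PID. It assumes the usual five-column layout (Proto, Local Address, Foreign Address, State, PID); adjust the parsing if your output differs.

import subprocess

def listening_ports():
    """Return (protocol, local address, pid) for each TCP line in the LISTENING state."""
    output = subprocess.run(["netstat", "-ano"], capture_output=True, text=True).stdout
    results = []
    for line in output.splitlines():
        parts = line.split()
        if len(parts) >= 5 and parts[0] == "TCP" and parts[3] == "LISTENING":
            proto, local, _foreign, _state, pid = parts[:5]
            results.append((proto, local, pid))
    return results

for proto, local, pid in listening_ports():
    print(proto, local, "PID", pid)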
There are third party utilities that will allow you to manage, block, and view detailed listings of all TCP and UDP endpoints on your system, including local/remote addresses, state of TCP connections and the process that opened the port. Caution: If you're going to start blocking ports, be careful which ones you block or you may lose Internet connectivity. For a list of TCP/UDP ports and notes about them, please refer to a published port reference list.
You can investigate IP addresses and gather additional information at:
You can use Process Monitor, an advanced monitoring tool for Windows that shows real-time file system, Registry and process/thread activity, or various Internet Traffic Monitoring Tools for troubleshooting and malware investigation.