Anyone who has a child, or who has seen a child recently, can tell you that the youth of America are highly connected users of Wi-Fi. According to recent data from the Pew Internet Project, three-quarters of teenagers use mobile devices like smartphones and tablets to access the Internet. The FCC has recognized and responded to the ever-increasing demand for mobile broadband, recently facilitating the use of additional spectrum for Wi-Fi. This additional spectrum and the widespread use of mobile technology could be leveraged to greatly enhance kids' learning experiences, if only Wi-Fi were ubiquitously available in educational settings.

Although the E-rate program administered by the FCC distributes nearly $2.5 billion each year for technological advancements in schools and libraries, to date very little of this funding has been used to make Wi-Fi available to students within school buildings and libraries. Fortunately, the FCC is currently revamping the E-rate program to respond to the changing broadband needs of schools and libraries. As part of these reforms, the FCC has identified $2 billion within the existing program that it can use to fund broadband. Chairman Wheeler is proposing to put this $2 billion to work for America's students by funding Wi-Fi deployments in schools and libraries beginning next year.

While the Commission often is called on to decide complex, highly contentious issues, every once in a while it finds itself in a situation where a particular decision has the potential to achieve important public interest objectives with enthusiastic support from the full spectrum of stakeholders. Using E-rate funds to provide schools and libraries with much-needed Wi-Fi falls into this category, garnering support from schools and libraries, education companies, equipment manufacturers, and service providers, including cable operators.

If this proposal is adopted, schools and libraries that have not been able to access funding for Wi-Fi will be able to offer wireless broadband to their students and educators. High-speed broadband will no longer be limited to the front office or computer lab, but can be untethered and available to each student on individual devices. And this can be done immediately without collecting a single additional dollar from the consumers who pay into the fund. This is a clear win-win solution for everyone, and NCTA strongly encourages the FCC to put this proposal into action as quickly as possible.
5.3.1 What are ANSI X9 standards?

The American National Standards Institute (ANSI) is broken down into committees, one being ANSI X9. The X9 committee develops standards for the financial industry, more specifically for personal identification number (PIN) management, check processing, electronic transfer of funds, etc. Within the X9 committee there are subcommittees, which in turn produce the actual standards documents, such as X9.9 and X9.17.

ANSI X9.9 [ANS86a] is a United States national wholesale banking standard for authentication of financial transactions. ANSI X9.9 addresses two issues: message formatting and the particular message authentication algorithm. The algorithm defined by ANSI X9.9 is the so-called DES-MAC (see Question 2.1.7) based on DES (see Section 3.2) in either CBC or CFB modes (see Question 2.1.4). A more detailed standard for retail banking was published as X9.19 [ANS96]. The equivalent international standards are ISO 8730 [ISO87] and ISO 8731 for ANSI X9.9, and ISO 9807 for ANSI X9.19. The ISO standards differ slightly in that they do not limit themselves to DES to obtain the message authentication code but allow the use of other message authentication codes and block ciphers (see Question 5.3.4).

ANSI X9.17 [ANS95] is the Financial Institution Key Management (Wholesale) standard. It defines the protocols to be used by financial institutions, such as banks, to transfer encryption keys. This protocol is aimed at the distribution of secret keys using symmetric (secret-key) techniques. Financial institutions need to change their bulk encryption keys on a daily or per-session basis due to the volume of encryptions performed, which rules out the costs and other inefficiencies associated with manual transfer of keys. The standard therefore defines a three-level hierarchy of keys:

- The highest level is the master key (KKM), which is always manually distributed.
- The next level consists of key-encrypting keys (KEKs), which are distributed on-line.
- The lowest level has data keys (KDs), which are also distributed on-line.

The data keys are used for bulk encryption and are changed on a per-session or per-day basis. New data keys are encrypted with the key-encrypting keys and distributed to the users. The key-encrypting keys are changed periodically and encrypted with the master key. The master keys are changed less often but are always distributed manually in a very secure manner. ANSI X9.17 defines a format, CSM (cryptographic service messages), for messages that establish new keys and replace old ones. ANSI X9.17 also defines two-key triple-DES encryption (see Question 3.2.6) as a method by which keys can be distributed. ANSI X9.17 is gradually being supplemented by public-key techniques such as Diffie-Hellman key agreement (see Question 3.6.1).

One of the major limitations of ANSI X9.17 is the inefficiency of communicating in a large system, since each pair of terminal systems that needs to communicate will need a common master key. To resolve this problem, ANSI X9.28 was developed to support the distribution of keys between terminal systems that do not share a common key center. The protocol defines a multiple-center group as two or more key centers that implement this standard. Any member of the multiple-center group is able to exchange keys with any other member.
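To make the three-level hierarchy concrete, here is a minimal sketch in Python (assuming the pycryptodome package) of keys being wrapped layer by layer with two-key triple-DES. It illustrates only the layering; the actual X9.17 CSM formats, counters, and MACs are omitted, and all key values are made up.

    from Crypto.Cipher import DES3

    def wrap(wrapping_key: bytes, key_to_wrap: bytes) -> bytes:
        # ECB is used here purely to show the layering; X9.17 itself
        # defines dedicated key-wrapping procedures.
        return DES3.new(wrapping_key, DES3.MODE_ECB).encrypt(key_to_wrap)

    # Made-up 16-byte (two-key triple-DES) keys with DES parity applied.
    kkm = DES3.adjust_key_parity(bytes(range(1, 17)))    # master key: manual distribution
    kek = DES3.adjust_key_parity(bytes(range(17, 33)))   # key-encrypting key
    kd  = DES3.adjust_key_parity(bytes(range(33, 49)))   # data key: changed per session/day

    wrapped_kek = wrap(kkm, kek)  # the KEK travels on-line under the master key
    wrapped_kd  = wrap(kek, kd)   # the data key travels on-line under the KEK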
ANSI X9.30 [ANS97] is the United States financial industry standard for digital signatures based on the federal Digital Signature Algorithm (DSA), and ANSI X9.31 [ANS98] is the counterpart standard for digital signatures based on the RSA algorithm. ANSI X9.30 requires the SHA-1 hash algorithm (see Question 3.6.5); ANSI X9.31 requires the MDC-2 hash algorithm [ISO92c]. A related document, X9.57, covers certificate management.

ANSI X9.42 [ANS94a] is a draft standard for key agreement based on the Diffie-Hellman algorithm, and ANSI X9.44 [ANS94b] is a draft standard for key transport based on the RSA algorithm. The former is intended to specify techniques for deriving a shared secret key; techniques currently being considered include basic Diffie-Hellman (see Question 3.6.1), authenticated Diffie-Hellman, and the MQV protocols [MQV95]. Some work to unify the various approaches is currently in progress. ANSI X9.44 will specify techniques for transporting a secret key with the RSA algorithm. It is currently based on IBM's Optimal Asymmetric Encryption Padding, a "provably secure" padding technique related to work by Bellare and Rogaway [BR94]. ANSI X9.42 was previously part of ANSI X9.30, and ANSI X9.44 was previously part of ANSI X9.31.
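Since X9.42 centers on Diffie-Hellman key agreement, a toy version of the underlying computation may help. The parameters below are deliberately tiny, made-up values chosen for readability; real deployments use large, standardized prime groups.

    # Toy Diffie-Hellman key agreement. These parameters are far too small
    # to be secure and are for illustration only.
    p = 0xFFFFFFFB   # a small public prime modulus
    g = 5            # a public base
    a = 123456789    # party 1's private value
    b = 987654321    # party 2's private value

    A = pow(g, a, p)          # party 1 publishes A
    B = pow(g, b, p)          # party 2 publishes B

    shared_1 = pow(B, a, p)   # party 1's view of the shared secret
    shared_2 = pow(A, b, p)   # party 2 computes the same value
    assert shared_1 == shared_2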
The History And Culture Of Chinese Tea

Chinese tea was first chronicled in the Zhou dynasty, when it was praised for its medicinal value. Since then, there has been no turning back for tea drinking in Chinese civilization. At EasternTea.com, we detail the main types of tea from China, providing succinct descriptions of these teas as well as their tastes and levels of fermentation.

At EasternTea.com, we hope to bring you the development of Chinese tea, the teapot, and the tea ceremony since the Zhou dynasty. In other words, we are an informative, knowledge-based website that brings you comprehensive information about tea and how it goes beyond being merely a beverage. We have well-researched articles that detail the intricacies of the Chinese tea ceremony and how it has evolved over the years. We also have a user-friendly dictionary that contains the tea words, phrases, and vocabulary necessary for understanding tea and tea culture.

In China, tea drinking has been elevated to an art form. Thus, it is a mission of EasternTea.com to discuss some of the aesthetics involved in the Chinese tea ceremony and tea arts. During the Tang and Song dynasties, the golden era of tea culture, Chinese civilization spawned an elaborate tea ceremony that remains in Japan today. Since then, Chinese tea arts have gone through several stages of evolution, including the transformation of the principal drinking vessels from bowls to cups. We hope to detail this historical development in a chronological way.

For the health conscious, we have articles that detail the medicinal properties of tea and how it contributes to one's health. We try to incorporate as much tea knowledge about health and tea properties as possible, in a bid to keep up with modern science and see what contribution tea can bring to an average consumer's health. We also examine more exotic teas, such as floral and scented teas, to find out how they can contribute to the ever-evolving needs of tea drinking. Though we are not physicians, we obtain such information from friends who are, to ensure an up-to-date information base on the health properties of tea leaves.

For Singaporean viewers there is an additional treat: for the first time in cyberspace history, we attempt to record Singapore's own eccentricities in tea drinking, including the ubiquitous Bak Kut Teh, a dish closely associated with Chinese tea drinking. Another first in cyberspace is the provision of a comparative element in Japanese and Chinese tea culture: we are the first website with information on both. For this purpose we are honoured to have a Japanese tea master from the renowned Omote Senke tea school as our resident advisor and information provider. Through her we can see the differences and the divergence that have taken place between the two civilizations, even though they share the same roots in Song-dynasty China.

All rights reserved. No parts of this article may be reproduced unless authorization is given by EasternTea.com.

The famous Chinese poet Li Bai liked things that could be found within a cup. He recited a poem saying that whoever had a beverage should leave their name; the thing he loved in the cup was wine. However, in Tang poems, praising the smell of tea is also common. Lu Yu's Cha Jing, as well as yu chuan cha ge (the song of jade tea), are examples of works that had tea as their subject.
source: http://www.securityfocus.com/bid/2823/info

Outlook Express is the standard e-mail client that ships with Microsoft Windows 9x/ME/NT. The address book in Outlook Express is normally configured to create entries for all addresses that the user of the mail client replies to. An attacker may construct a message header that tricks the Address Book into making an entry for an untrusted user under the guise of a trusted one. This is done by sending a message with a misleading "From:" field. When the message is replied to, the Address Book makes an entry that actually replies to the attacker.

Situation: two good users, Target1 and Target2, with addresses target1@example.com and target2@example.com, and one bad user, Attacker, with address attacker@example.net. Imagine Attacker wants to get the messages Target1 sends to Target2.

Scenario:

1. Attacker composes a message with the headers:

From: "target2@example.com" <attacker@example.net>
Reply-To: "target2@example.com" <attacker@example.net>
To: Target1 <target1@example.com>
Subject: how to catch you on Friday?

and sends it to target1@example.com.

2. Target1 receives the mail, which looks exactly like mail received from target2@example.com, and replies to it. The reply is received by Attacker. At this point a new entry is created in the address book pointing the NAME "target2@example.com" to the ADDRESS attacker@example.net.

3. Now, if while composing a new message Target1 directly types the e-mail address target2@example.com instead of picking Target2 from the address book, Outlook will compose the address as "target2@example.com" <attacker@example.net> and the message will be received by Attacker.

Related: CVE-2001-1088, OSVDB 1852.
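For illustration, the misleading header combination above can be reproduced with any mail library. A minimal sketch in Python, using the placeholder addresses from the scenario:

    from email.message import EmailMessage

    msg = EmailMessage()
    # The display name mimics Target2, but the actual address is the attacker's.
    msg["From"] = '"target2@example.com" <attacker@example.net>'
    msg["Reply-To"] = '"target2@example.com" <attacker@example.net>'
    msg["To"] = "Target1 <target1@example.com>"
    msg["Subject"] = "how to catch you on Friday?"
    msg.set_content("Are you free on Friday?")

    # Any reply, and the address book entry Outlook Express creates from it,
    # goes to attacker@example.net rather than to Target2.
    print(msg)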
By Lucas Apa @lucasapa

Is the risk associated with a remote code execution vulnerability in an industrial plant the same when it affects human life? When calculating risk, certain variables and metrics are combined into equations that are rendered as static numbers so that risk remediation efforts can be prioritized. But such calculations sometimes ignore the environmental metrics and rely exclusively on exploitability and impact. The practice of scoring vulnerabilities without auditing the potential for collateral damage can underestimate a cyber attack that affects human safety in an industrial plant and leads to catastrophic damage or loss. These deceiving scores are always attractive to attackers, since lower-priority security issues are less likely to be resolved on time with a quality remediation.

In the last few years, the world has witnessed advanced cyber attacks against industrial components using complex and expensive malware engineering. Today the lack of entry points for hacking an isolated process inside an industrial plant means that attacks require a combination of zero-day vulnerabilities and more money.

Two years ago, Carlos Mario Penagos (@binarymantis) and I (Lucas Apa) realized that the most valuable entry point for an attacker is in the air. Radio frequencies leak out of a plant's perimeter through the high-power antennas that interconnect field devices. Communicating with the target devices from a distance is priceless because it allows an attack to be totally untraceable and frequently unstoppable.

In August 2013 at Black Hat Briefings, we reported multiple vulnerabilities in the industrial wireless products of three vendors and presented our findings. We censored vendor names from our paper to protect the customers who use these products, primarily nuclear, oil and gas, refining, petro-chemical, utility, and wastewater companies mostly based in North America, Latin America, India, and the Middle East (Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and UAE). These companies have trusted expensive but vulnerable wireless sensors to bridge the gap between the physical and digital worlds.

First, we decided to target wireless transmitters (sensors). These sensors gather the physical, real-world values used to monitor conditions, including liquid level, pressure, flow, and temperature. These values are precise enough to be trusted by all of the industrial hardware and machinery in the field, and crucial decisions are based on them. We also targeted wireless gateways, which collect this information and communicate it to the backbone SCADA systems (RTU/EFM/PLC/HMI).

In June 2013, we reported eight different vulnerabilities to the ICS-CERT (Department of Homeland Security). Three months later, one of the vendors, ProSoft Technology, released a patch to mitigate a single vulnerability. After a patient year, IOActive Labs in 2014 released an advisory titled "OleumTech Wireless Sensor Network Vulnerabilities" describing four vulnerabilities that could lead to process compromise, public damage, and threats to employee safety, potentially including the loss of life.

Figure 1: OleumTech transmitters in the field

The following OleumTech products are affected:

- All OleumTech Wireless Gateways: WIO DH2 and Base Unit (RFv1 Protocol)
- All OleumTech Transmitters and Wireless Modules (RFv1 Protocol)
- BreeZ v126.96.36.199

An untrusted user or group within a 40-mile range could inject false values into the wireless gateways in order to modify measurements used to make critical decisions.
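The injected values are dangerous because downstream logic trusts whatever arrives over the radio. As the demonstration below notes, a proper failsafe was missing; a minimal, hypothetical plausibility filter of the kind such a layer might apply could look like this (the thresholds are invented for illustration):

    def plausible(new_reading: float, last_reading: float,
                  low: float = -40.0, high: float = 400.0,
                  max_step: float = 5.0) -> bool:
        """Reject readings outside the physical range of the process,
        or jumping faster than the process can physically change."""
        if not (low <= new_reading <= high):
            return False
        if abs(new_reading - last_reading) > max_step:
            return False
        return True

    # A temperature drifting from 80.0 to 81.2 passes; a value spoofed
    # from 80.0 down to 20.0 in a single sample is rejected.
    assert plausible(81.2, 80.0)
    assert not plausible(20.0, 80.0)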
In the following video demonstration, an attacker makes a chemical react and explode by targeting a wireless transmitter that monitors the process temperature. This was possible because a proper failsafe mechanism had not been implemented and physical controls failed. Heavy machinery makes crucial decisions based on the false readings; this could give the attacker control over part of the process.

Figure 2: OleumTech DH2 used as the primary wireless gateway to collect wireless end node data

Video: attack launched using a $40 RF transceiver and antenna

Industrial embedded systems' vulnerabilities that can be exploited remotely without needing any internal access are inherently appealing to terrorists. Mounting a destructive, real-world attack in these conditions is possible. These products are in commercial use in industrial plants all over the world. As if causing unexpected chemical reactions were not enough, exploiting a remote, wireless memory corruption vulnerability could shut down the sensor network of an entire facility for an undetermined period of time.

In May 2015, two years after the initial private vulnerability disclosure, OleumTech created an updated RF protocol version (RFv2) that appears to allow users to encrypt their wireless traffic with AES256. Firmware for all products was updated to support this new feature. Still, are OleumTech customers aware of how the new AES encryption key is generated? Which encryption key is the network using?

Figure 3: Picture from OleumTech BreeZ 5 – Default Values (AES Encryption)

Since every hardware device must be unmounted from its field location for a manual update, what is the cost? IOActive Labs hasn't tested these firmware updates. We hope that OleumTech's technical team performed testing to ensure that the firmware properly secures radio communications.

I am proud that IOActive has one of the largest professional teams of information security researchers working with ICS-CERT (DHS) in the world. In addition to identifying critical vulnerabilities and threats for power system facilities, the IOActive team provides security testing directly for control system manufacturers and businesses that have industrial facilities – proactively detecting weaknesses and anticipating exploits in order to improve the safety and operational integrity of technologies.

Needless to say, the companies that rely on vulnerable devices could lose much more than millions of dollars if these vulnerabilities are exploited. These flaws have the potential for massive economic and sociological impact, as well as loss of human life. On the other hand, some attacks are undetectable, so it is possible that some of these devices have already been exploited in the wild. We may never know. Fortunately, customers now have a stronger security model, and I expect that they are now motivated enough to get involved and ask the vulnerable vendors these open questions.
Alphanumeric CAPTCHAs – those more or less difficult-to-read combinations that are used by many online services to discern whether a user is human or a bot – have been in use for over 15 years now, but I've yet to meet a person who likes "solving" them. While the general consensus is that they serve a good purpose, researchers have tried to make solving them as pleasant as possible for users and, at the same time, as difficult as possible for computers.

Computer security researcher Elie Bursztein, who joined Google in 2012, is known for his extensive research into different types of CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart). He leads Google's anti-abuse efforts and has recently shared a few discoveries about CAPTCHA use that have spurred Google to prefer numeric CAPTCHAs when the chances – as calculated by the firm's risk analysis system – that a user is human are high.

"The meaning of the word used in the CAPTCHA deeply influences both how people perceive its difficulty and how pleasant they find the task of answering it," the researchers discovered. While actual words and pseudo-words are more easily and accurately solved than numeric CAPTCHAs and those consisting of random letters or random combinations of letters and numbers, words carry meaning.

"People are unconsciously biased by the meaning and the frequency of the words used in the CAPTCHA," Bursztein explained. "For example, for our survey we chose the word pretty as an example of a high-frequency positive word, and cutest as a low-frequency positive word. Google search returned approximately 1.1 billion results for pretty and about 36 million results for cutest. We chose guilty as a high-frequency negative word, and abject as a low-frequency negative word. These words had 150 million and 4 million Google search results, respectively."

The results of the survey proved that users vastly prefer positive words to negative ones, and that the latter negatively impact user perception (and satisfaction). Since it is impossible to generate hundreds of millions of CAPTCHAs with consistent user sentiment, numeric CAPTCHAs are a better choice when dealing with humans.

Bursztein says the change was a success. "People are 6.7% more accurate at solving it compared to the old one," he noted, adding that the new system has helped to reduce frustration: "People click 55% less on the reload button for the new CAPTCHA."
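The scaling problem can be sketched in a few lines: a word-based generator would need a reliable sentiment judgment for every word it might ever emit, whereas digits carry no sentiment at all. A toy illustration in Python (the lexicon here is a made-up stand-in, not Google's):

    import random

    # Made-up stand-in for a sentiment lexicon; a real word-based generator
    # would need dependable sentiment data for every candidate word.
    NEGATIVE_WORDS = {"guilty", "abject"}

    def make_captcha(candidates: list[str], digits: int = 6) -> str:
        safe = [w for w in candidates if w not in NEGATIVE_WORDS]
        if safe:
            return random.choice(safe)
        # Numeric fallback: digits sidestep the sentiment problem entirely.
        return "".join(random.choice("0123456789") for _ in range(digits))

    print(make_captcha(["pretty", "cutest", "guilty"]))  # picks a positive word
    print(make_captcha(["guilty", "abject"]))            # e.g. "493817"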
Fixing the flaws in North American maps

Geodetic survey taps the latest in geospatial technology to improve accuracy

By William Jackson - Jul 07, 2010

Advances in geospatial technology have, in a relatively short time, outpaced the standard terrestrial mapping systems that defined the world's geography for centuries. The National Geodetic Survey is putting that new technology to use, embarking on a 10-year program to rewrite the map of North America.

The National Spatial Reference System, which is the basis for most mapping and surveying in the United States, primarily consists of 1.5 million passive markers installed by NGS during the past 200 years. The markers provide reference points for surveyors, but the accuracy of their positioning varies, depending on the state of the art at the time they were set. The system, last updated in the 1980s, contains errors of as much as 2 meters in latitude and longitude and about 1 meter in elevation. State-of-the-art technology, with global navigation satellite systems, can place a position to within a centimeter using a few hours of data collection, and that process is expected to speed up significantly during the next 10 years. "The future of positioning is GNSS," states a recent NGS white paper titled "Improving the National Spatial Reference System."

Better accuracy can have a significant impact. A one-meter mistake in elevation in a coastal region such as Louisiana could be the difference between a hurricane evacuation route being passable or underwater, and could also result in water flowing the wrong way when dikes and channels are designed. Satellite positioning technology that will soon be available to consumers will have a degree of accuracy that puts it out of alignment with existing, less accurate maps, said NGS Chief Geodesist Dru Smith. That could, for example, result in drivers being directed by a car navigation system into the wrong lane in traffic. To address those issues, the National Oceanic and Atmospheric Administration, NGS' parent agency, convened the first federal geospatial summit in May.

"No longer are passive marks the way of doing business," Smith said. "The geodetic control of the future is the orbits of the satellites in the sky." Those orbits, coupled with increasingly precise measurements of Earth's gravity field and computing tools for massaging data, can provide more accurate positioning than what is now available from the National Spatial Reference System. NGS already uses satellite positioning data from the Global Positioning System to provide survey-grade positioning that is more accurate than the standard used for most mapping.

The goal of the modernization program is to bring the entire system into the 21st century, aligning the National Spatial Reference System with global satellite systems, with constantly updated data provided from satellites, combined with increasingly accurate models of the Earth and its gravity field. "We have the technology in place to do it," Smith said. "We need to bring the world along."

That will not necessarily be simple. It will make millions of existing maps, charts and other records that federal, state and local governments now depend on obsolete. So NGS is moving slowly, phasing in the new standards during the next 10 years before abandoning the existing system.
Advances in mapping, positioning and other geospatial technology have resulted in a proliferation of sensors and systems that provide information, much of it in digital formats. A lot of geospatial data exists today only in digital form. Although a good deal of it soon might be obsolete, it is valuable for comparative uses, and the Library of Congress is engaged in an effort to help preserve that data. Whether in maps, aerial imaging or other forms, the data is used to track changes in the Earth's geography, structures, land use and environment, said William Lefurgy, digital initiatives project manager at the library's National Digital Information Infrastructure and Preservation Program.

"There is so much of it cranked out every day that managing it is becoming a problem," Lefurgy said. "There is a growing awareness that you not only need to preserve what you have now but also the material you had yesterday." Because the data is being produced in a variety of digital formats, the library and Columbia University are creating a Web-based clearinghouse for sharing best practices in preserving digital data.

Geodesy is the science of studying the Earth's size and shape, its gravity, and its magnetic fields. At the National Geodetic Survey, "we are concerned with precisely positioning things on the Earth," Smith said. "Precisely" is a relative term because of errors that creep into measurements and the constantly changing surface of the Earth. Land shifts and elevation change as land rises and subsides. Sea level, the traditional benchmark for measuring elevation, is not constant. It changes from hour to hour and place to place, and more significantly, it is rising globally.

The existing U.S. geodetic baselines — the references from which measurements are made — are the North American Datum of 1983 for latitude and longitude and the North American Vertical Datum of 1988 for elevation. Those were updated from the previous standards that had been established in the 1920s. "It was quite good for the era," Smith said of the 1920s references. But "the technology was pretty primitive by modern standards." By the 1960s, techniques for electronic distance measurement and early satellite measurements were advancing the state of the art, and by the 1980s, the data had to be updated to correct errors of as much as hundreds of meters in some positions. That occurred "just in time to be obsolete when GPS went up," Smith said.

Although the baselines in use are a significant improvement over those of the 1920s and are good enough for most uses today, the submeter accuracy that soon will be available on handheld consumer positioning devices will result in "glaring errors to general users," according to the NGS white paper. Moreover, a system of static reference points, no matter how accurately positioned, will not address changes in the Earth's surface over time. "Decisions made based on marks set in a subsiding crust may yield unintentional harm to life or property," the NGS white paper states. "For example, decisions about building homes in flood-prone areas or declaring roads to be high enough to serve as evacuation routes must be based on accurate heights or the results can be devastating." Areas of largest concern today are low-lying areas, including the Gulf of Mexico, Chesapeake Bay and California agricultural regions.

Global navigation satellites orbit around the center of the Earth's mass, which now has been located to within less than a centimeter.
Those precisely measured orbits enable this generation of GPS devices for general consumer use to be accurate to within a few meters instantaneously. But NGS can provide more accurate measurements by massaging the data and comparing it with measurements from a nationwide network of about 1,400 precisely positioned and continuously operating GPS receivers. NOAA operates about 5 percent of the permanent receivers in this network, called Continuously Operating Reference Stations. The rest are operated by universities, state transportation departments and other organizations. Surveyors who need accurate positioning send their GPS data to NGS' Online Positioning User Service, and OPUS figures the position based on its relationship to the permanent CORS receivers. "What we do is massage the data using precise orbits, clock corrections and accounting for other phenomena like atmospheric conditions" and use it with terrestrial measurements of the gravity field, NGS' Smith said.

The actual definition of height depends on knowing the strength of gravity at a point, and gravity's strength at a given point depends on the distance from the center of the Earth and the distribution of the Earth's masses, especially near the point in question. Precise measurements of the gravity field help determine the height of a surveyed point. Although the human body is not sensitive enough to detect such small changes in the gravity field, flood patterns are affected by them.

That system will replace the existing system of static markers to take advantage of the strengths of global satellite systems and NGS' own expertise with modeling the gravity field. However, many of the gravity measurements across the country are out-of-date and in need of a consistent, coordinated resurvey. NGS has initiated the Gravity for the Redefinition of the American Vertical Datum project to resurvey the gravity field. It is expected to take about 10 years and cost $40 million to complete before its data will be ready to update the National Spatial Reference System. In contrast, merely updating the existing system by resurveying benchmarks that use the same techniques from the 1980s would cost an estimated $200 million and would not solve any of the problems that result from using a system of passive marks. NOAA has estimated that the modernization program could produce benefits of $4.8 billion in 15 years, including $2.2 billion in savings from improved floodplain management.

The federal, state and local planners that use this information need to be able to manage and access it over time so that changes can be tracked, but the rapid changes in the technology for gathering and recording geospatial data make it difficult to maintain and access. The preservation clearinghouse that the Library of Congress and Columbia University are establishing will be a source for best practices already in use for maintaining data. "The data includes both current and legacy information about geography and structures, land use and environmental measurements," LOC's Lefurgy said. "Quite a bit of it is photographic material." Because there are many different types of data, there are silos of expertise in managing it. At a meeting on preservation of geospatial data hosted by LOC last year, users complained that there was no easy way to take advantage of that expertise. "Everyone agreed that there is a lot they could learn from each other," Lefurgy said.
The clearinghouse will not seek to reinvent the wheel but will focus on sharing existing best practices to enable curators to take best advantage of the state of the art. "There probably still is a lot of work to be done in developing best practices," Lefurgy said. "A lot already has been done, but nobody is going to say they have [all] the answers."

The new geospatial standards that NGS is putting in place are accurate enough that this could well be the last time that they have to be updated, Smith said. Additional precision undoubtedly could be squeezed out of present measurements, but with the center of the Earth's mass located to within less than a centimeter and the ability to accurately model the gravity field and satellite orbits, it is unlikely that any more order-of-magnitude changes will be made, he said.
Some Windows computers are infecting Android devices with malware

Since Android is based on Linux, many users consider it rather safe and secure. However, this is not at all true -- most malware that targets mobile devices targets Android. For the most part, though, it is easy to stay safe by only installing reputable apps from the Play Store. What if, however, your desktop operating system was infecting your Android device without you knowing? Sadly, this can happen, as some Windows users are finding out.

Symantec announced it has found such a case, and it is really nasty. "We've seen Android malware that attempts to infect Windows systems before [...] Interestingly, we recently came across something that works the other way round: a Windows threat that attempts to infect Android devices," says Flora Liu of Symantec.

Liu further explains, "The infection starts with a Trojan named Trojan.Droidpak. It drops a malicious DLL (also detected as Trojan.Droidpak) and registers it as a system service." What makes this particularly devious and nasty is that Droidpak downloads a configuration file, which causes the mayhem. This file triggers a download of a malicious Android .apk file and adb (the Android Debug Bridge) for Windows. If an Android device with USB debugging enabled is connected to the infected Windows PC, the malicious .apk file is pushed to the device.

Once the .apk file is pushed to the device, the user is presented with a fake "Google App Store". The fake app store will then intercept the user's text messages as well as replace Korean banking apps with malicious versions.

Symantec suggests the following in order to stay safe:

- Turn off USB debugging on your Android device when you are not using it
- Exercise caution when connecting your mobile device to untrustworthy computers
- Install reputable security software, such as Norton Mobile Security
- Visit the Symantec Mobile Security website for general safety tips

While this all sounds horrible, in reality the majority of Android users should not have debugging enabled and thus are safe. However, it is not uncommon for power users to have this feature turned on for tinkering purposes. Have you encountered Trojan.Droidpak? Tell me about it in the comments.
Wednesday's 24-hour worldwide test of IPv6, the next-generation Internet addressing standard, is sure to yield valuable data and some unexpected results. Government agencies and other public entities that are participating in World IPv6 Day could also see some effects, such as citizens who have trouble accessing public-facing websites.

But fear not. The transition to IPv6 — Internet protocol version 6 — will likely take several years, if not a decade. There's still time to prepare for the new 128-bit standard, which will support trillions of unique IP addresses. In February, the Internet Corp. for Assigned Names and Numbers, one of the nonprofits that coordinate IP distribution, announced all IPv4 addresses had been distributed and that IPv6 would be the new standard going forward.

The test on June 8 is a starting point. Google, Facebook and other online heavyweights have publicly committed to participate, as will several federal agencies and a few municipal governments and universities. "We want to find holes," said Timothy Winters, who studies IPv6 as senior manager of the University of New Hampshire InterOperability Laboratory, which tests data communications technology. "If the day goes perfectly that's great, but I fully suspect that we're going to find issues — and I hope we do because then we can solve them." Better to deal with problems now than during a full-scale deployment down the road, he said.

What might happen? One possibility is that a website visitor coming in through an IPv6-aware device might get a timeout notice and be unable to access content on a website that's supporting IPv6 — if the user's device or router is misconfigured or their Internet service provider isn't supporting v6. "That's the real disaster scenario because what happens is your packets are going nowhere," Winters said. A firewall will eat those data packets up. For the government agencies that are testing IPv6-enabled websites Wednesday, that could mean at least a few citizens won't be able to get to a government webpage.

Rob Barnes, a division manager in Fresno, Calif.'s IT department, said he has read that about 1 percent of website users could fall into this "black hole" situation. Last month the Fresno city government set up a test page in anticipation of the test. If incoming IPv6 traffic proves to be significant, the city might have to begin considering how to support IPv6 full time, he said. The city will also have 20 workstations running IPv6 on June 8 so that staff can start testing outside websites.

The test will give enterprises and website operators a good look for the first time at how many Internet users could use IPv6 if it were turned on everywhere. Cyber-security will also be examined. During the past few days, there have been rumblings that the test has taken on a level of significance with high-profile hackers, said Carl Herberger, vice president of security solutions at Radware. He said he's concerned about the possibility of a significant security breach related to the test event. While the security industry backs the new standard, Herberger said, there are vulnerabilities that have yet to be addressed. One issue is that IPv6 is a "heavy" protocol that requires four times the processing power, which in effect makes it a force multiplier for those attempting denial-of-service attacks.
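The "black hole" scenario described above is essentially a failed IPv6 connection with no fallback. A minimal sketch in Python of a client that prefers IPv6 but falls back to IPv4 on timeout, a simplified take on what dual-stack-aware clients do:

    import socket

    def connect_with_fallback(host: str, port: int, timeout: float = 3.0) -> socket.socket:
        # Gather every address the name resolves to, trying IPv6 first.
        infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
        infos.sort(key=lambda info: 0 if info[0] == socket.AF_INET6 else 1)
        last_error = OSError("name resolved to no addresses")
        for family, socktype, proto, _, addr in infos:
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(timeout)
            try:
                sock.connect(addr)   # a black-holed IPv6 route times out here
                return sock
            except OSError as err:
                sock.close()         # on failure, fall through to the next address
                last_error = err
        raise last_error

    conn = connect_with_fallback("www.example.com", 80)
    print("connected via", conn.family)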
Other potential problems that could crop up, Winters said, are server load-balancing issues for IPv4 versus IPv6 traffic, as well as the discovery of consumer-level routers — the kind available at electronics stores — that advertise they're IPv6-enabled but really aren't, or support it poorly.

Government agencies, in particular, should be thinking about identifying their legacy systems that can only communicate over IPv4, and formulating plans to bring them onto v6, Winters said. He said this can be done in one of two ways: translating existing v4 addresses to IPv6, or tunneling them over a v4 connection. Also, governments (and all organizations) should only be buying hardware that supports the new standard, he said.

Participants in World IPv6 Day are eager to see what the day will bring. More than 400 entities have publicly announced they are on board. N.C. State has obtained a significant amount of IPv6 address space and already is running a Cisco dual stack, according to William Brockelsby, the university's lead network architect. Testing on Wednesday will be confined to a lab rather than the university's entire network.

Ed Furia, network design engineer at Indiana University, isn't expecting too many problems. The school has been running a dual-stack wired network for the past eight years for research purposes, and internal users are running completely on IPv6. But the university's public-facing applications and websites were brought onto v6 only recently.

Winters said governments still operating exclusively on IPv4 won't see much difference on Wednesday. But there still could be a few hidden problems. Some agencies might not be supporting IPv6 today on the upstream connection, he said, but many of them have some equipment on their network that supports the new standard. If that functionality isn't turned off, it could lead to network problems. More will be known Wednesday. "I'm sure there are other issues we're just not aware of," Winters said.
Technology can be applied to artistic pursuits — Photoshop, music recording, video editing — and Tech Page One reports that technology is pushing art education forward via the "cyber arts." This means that traditional art forms such as painting, drawing, and sculpture are no longer the only ways students express themselves creatively in school.

With the growing pressure to balance budgets in light of rising costs, art programs are usually the first to be cut in schools. But new technological innovations have presented creative students with opportunities — and these same innovations are also presenting opportunities for VARs to capitalize on the new trend. Many cyber artistic pursuits are skills that can be applied in the working world; many graphic designers, for instance, use Photoshop every day. Cyber arts are already being implemented in schools.

"Technology is their generation, their life. Bringing these valuable tools into learning will help build conceptual ideas such as conservation," teacher Hasmick Cochran told the Daily Bruin, UCLA's newspaper.

VARs could overlook the art department when considering the technology needs of their education customers — it might be worth adding to the discussion of a client's needs, along with how to keep these solutions functioning at an optimum and how to keep them secure.
The idea of infecting BIOS has long been a highly intriguing prospect for cybercriminals: by launching from BIOS immediately after the computer is turned on, a malicious program can gain control of all the boot-up stages of the computer or operating system. Since 1998 and the CIH virus, which could merely corrupt BIOS, malware writers have made little progress on this front. That changed, however, in September when a Trojan was detected that could infect BIOS and as a result gain control of the system. The rootkit is designed to infect BIOS manufactured by Award and appears to have originated in China. The Trojan's code is clearly unfinished and contains debug information, but Kaspersky Lab analysts have verified that its functionality works.

Attacks against individual users

The DigiNotar hack. One of the main aims of the hackers who attacked the Dutch certificate authority DigiNotar was the creation of fake SSL certificates for a number of popular resources, including social networks and email services that are used by home users. The hack occurred at the end of July and went unnoticed throughout August while the attacker manipulated the DigiNotar system to create several dozen certificates for resources such as Gmail, Facebook and Twitter. Their use was later recorded on the Internet as part of an attack on Iranian users. The fake certificates are installed at the provider level and allow data flows between a user and a server to be intercepted. The DigiNotar story once again demonstrates that the existing system of hundreds of certificate authorities is poorly protected and merely discredits the very idea of digital certificates.

MacOS threats: the new Trojan concealed inside a PDF. Cybercriminals are taking advantage of the complacency shown by many MacOS users. For instance, most Windows users who receive email attachments with additional file extensions such as .pdf.exe or .doc.exe will simply delete them without opening them. However, this tactic proved to be a novelty for Mac users, who are more prone to unwittingly launch malicious code masquerading as a PDF, an image or a doc, etc. This mechanism was detected in late September in the malicious program Backdoor.OSX.Imuler.a, which is capable of receiving additional commands from a control server as well as downloading arbitrary files and screenshots from the infected system to the server. In this case, the cybercriminals used a PDF document as a mask.

Kaspersky Lab detected 680 new variations of malicious programs for different mobile platforms in September, 559 of them for Android. In recent months there has been a significant increase in the overall number of malicious programs for Android and, in particular, the number of backdoors: of the 559 malicious programs detected for Android, 182 (32.5%) were modifications with backdoor functionality. More and more malicious programs for mobile devices are now making extensive use of the Internet for such things as connecting to remote servers to receive commands. Mobile Trojans designed to intercept text messages containing mTANs used in online banking are becoming increasingly popular among cybercriminals. Following in the footsteps of ZitMo, which has been operating on the four most popular platforms for the last year, is SpitMo, which works in much the same way but in tandem with the SpyEye Trojan rather than ZeuS.

Attacks via QR codes. At the end of September the first attempted malicious attacks using QR codes were detected.
When it comes to installing software on smartphones, a variety of websites offer users a simplified process that involves scanning a QR code to start downloading an app without having to enter a URL. Predictably, cybercriminals have also decided to make use of this technology to download malicious software to smartphones: Kaspersky Lab analysts detected several malicious websites containing QR codes for mobile apps (e.g. Jimm and Opera Mini) which included a Trojan capable of sending text messages to premium-rate short numbers.

Attacks on corporate networks

The number of serious attacks on large organizations that make use of emails in the initial stages is on the increase. In September alone there was news of two major incidents that made use of this tactic.

The first, named Lurid, was uncovered by Trend Micro during research by the company's experts. They managed to intercept traffic to several servers that were being used to control a network of 1,500 compromised computers located mainly in Russia, former Soviet republics and countries in eastern Europe. Analysis of the Russian victims showed that it was a targeted attack against very specific organizations in the aerospace industry, as well as scientific research institutes, several commercial organizations, state bodies and a number of media outlets. The attackers managed to gain access to data by sending malicious files via email to employees in these organizations.

Attack on Mitsubishi. News about an attack on the Japanese corporation Mitsubishi appeared in the middle of the month, although research by Kaspersky Lab suggests that it was most probably launched as far back as July and entered its active phase in August. According to the Japanese press, approximately 80 computers and servers were infected at plants manufacturing equipment for submarines, rockets and the nuclear industry. Malware was also detected on computers at the company's headquarters. There is now no way of knowing exactly what information was stolen by the hackers, but it is likely that the affected computers contained confidential information of strategic importance.

"It is safe to say that the attack was carefully planned and executed," says Alexander Gostev, Chief Security Expert at Kaspersky Lab. "It was a familiar scenario: in late July a number of Mitsubishi employees received emails from cybercriminals containing a PDF file, which was an exploit for a vulnerability in Adobe Reader. The malicious component was installed as soon as the file was opened, resulting in the hackers getting full remote access to the affected system. From the infected computer the hackers then penetrated the company's network still further, cracking servers and gathering information that was then forwarded to the hackers' server. A dozen or so different malicious programs were used in the attack, some developed specifically with the company's internal network structure in mind."

The war on cybercrime

Closure of the Hlux/Kelihos botnet. September saw a major breakthrough in the battle against botnets – the closure of the Hlux botnet. Cooperation between Kaspersky Lab, Microsoft and Kyrus Tech not only led to the takeover of the network of Hlux-infected machines, the first time this had ever been done with a P2P botnet, but also the closure of the entire cz.cc domain. Throughout 2011 this domain had hosted command and control centers for dozens of botnets and was a veritable hotbed of security threats.
At the time it was taken offline the Hlux botnet numbered over 40,000 computers and was capable of sending out tens of millions of spam messages on a daily basis, performing DDoS attacks and downloading malware to victim machines. Kaspersky Lab currently controls the botnet and the company's experts are in contact with the service providers of the affected users to clean up infected systems. Detection for Hlux has been added to Microsoft's Malicious Software Removal Tool, helping to significantly reduce the number of infected machines.

More detailed information about the IT threats detected by Kaspersky Lab on the Internet and on users' computers in September 2011 is available at http://www.securelist.com.
If you've ever driven through the poor ends of a major U.S. city, like many people, you've probably wondered what could be done to improve conditions there. While so-called "slums" in the U.S. may not compare to slums in developing nations, they are still considerably disadvantaged when it comes to infrastructure, and the adults and children who live there are usually at a higher risk for crime and health problems, and at a disadvantage in general prosperity.

Many cities have tried to tackle the problem with limited success. The problems that usually dog attempts to improve conditions are familiar: limited budgets, bureaucracy and delays. According to the University of Pennsylvania's Eugenie Birch and Amy Lynch, co-authors of "Measuring U.S. Sustainable Urban Development" in "State of the World 2012," more than 200 U.S. cities have developed plans for improving economic, environmental and social sustainability, but few have established specific metrics to monitor their progress. Birch and Lynch say that a national indicator system would help cities more uniformly measure their success in moving toward sustainable development.

The report's authors say that the best approach is for the U.S. to establish a national sustainable development agenda and a set of standardized national indicators; as effective indicators are identified, they should be assessed and folded into a national monitoring system.

While monitoring and standards are great, how do U.S. cities make a concrete push to improve neighborhoods in need of a path to prosperity? Ultimately, these ideas may converge with new ideas about "smart cities," or urban environments that make maximum use of technology and "smart" components to improve life in a number of ways. There are smart city technologies designed to improve the flow of traffic and public transportation (critical in poorer neighborhoods), such as car sharing, centralized communications and public Wi-Fi; self-reporting buildings and infrastructure; and "smart housing" that can offer urban officials more control over crime and residents more security.

For now, smart cities are still largely in the prototype stage. ABI Research predicted last year that while $8.1 billion was spent on smart city technologies in 2010, by 2016 that number will likely reach $39.5 billion. As of today, there are 102 smart city projects worldwide, says ABI, with Europe leading the way at 38 cities, North America at 35, Asia Pacific at 21, the Middle East and Africa at six, and Latin America with two.

As the world's population continues migrating to its cities, those cities will inevitably become larger. To keep slums from spreading and the condition of those who live in them from deteriorating, cities would be well advised to include smart city concepts in their long-term sustainable development agendas.
The Next Version of the Internet Protocol - IPv6

By Pete Loshin

A lot of hot air has been blowing over the past year or so about the next version of the Internet Protocol, IPv6. Unlike the Y2K problem or even the move to support the latest version of Microsoft Windows, there are no "flag day" transitions by which time systems must be upgraded or else. Members of the IETF (Internet Engineering Task Force) recognized by the late 1980s that the current version of IP, IPv4, would need an upgrade, and RFCs specifying the new protocol began appearing by 1995. But some big questions remain unanswered: why support IPv6 at all, and how will it work?

Savvy network professionals already know quite a bit about IPv6. For one thing, they know that IPv4 has limited address space and that IPv6 increases the network address size from 32 bits to 128 bits. They know that IPv6 smoothes the rough edges around IPv4 and adds some very nice features such as stateless autoconfiguration ("plug-and-play" networking). But they may not know all that much more about IPv6, especially not about the upgrade paths from IPv4 to IPv6, how to migrate individual hosts and networks, what to do about applications, and where to find more real-world resources for deploying IPv6.

IPv4 is sufficiently robust and scalable to have gone from serving as the network layer protocol for a research network linking a few dozen government and academic research sites to today's Internet, a global network now linking something on the order of 100 million nodes. But IPv4 was published in RFC 791 back in 1981, and it has needed a face-lift for some time. The number one problem is the IPv4 address space. As anyone who has requested a globally unique IP network address in the past five years knows, they are in very short supply. Despite the fact that the 32-bit IPv4 address could (in theory at least) uniquely identify over four billion different nodes, much of that space is inaccessible (either reserved or unused). The problem is that addresses were originally apportioned inefficiently. But perhaps an even more pressing problem is how to cope with the explosive growth in Internet routing tables.

Part 2: The Trouble with IPv4

Figure 1 shows how the IPv4 address space is allocated: as you can see, the original architecture allocated fully half of all IPv4 addresses to 126 Class A networks. Originally intended for very, very large networks maintained at the national level (or multinational, for corporations), quite a few Class A addresses were snatched up by net-savvy organizations such as MIT and Carnegie Mellon University early on. Each Class A network is capable of handling as many as 16 million nodes, so, since few organizations with Class A network addresses have that many nodes, much of that address space is wasted. Another 25% of all addresses are allocated for Class B networks: roughly 16,000 Class B networks are possible, each capable of addressing as many as 65,000 nodes. Again, net-savvy organizations scooped these up early even though they might never come close to having that many nodes. The problem was that Class C networks, which compose only one eighth of the entire IPv4 address space and of which there are over 2 million, can handle no more than 254 nodes. Clearly, these addresses are inappropriate for companies with 1,000 nodes even if a Class B is overkill.
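These class sizes are easy to verify for yourself. A quick sketch using Python's ipaddress module (a modern convenience used here purely to illustrate the arithmetic; the example networks are the familiar RFC 1918 private ranges):

    import ipaddress

    # One representative network per class; the prefix length is what
    # determines each class's host capacity.
    for name, net in [("Class A (/8)",  "10.0.0.0/8"),
                      ("Class B (/16)", "172.16.0.0/16"),
                      ("Class C (/24)", "192.168.0.0/24")]:
        n = ipaddress.ip_network(net)
        # Subtract the network and broadcast addresses to get usable hosts.
        print(f"{name}: {n.num_addresses - 2:,} usable host addresses")

    # Class A (/8):  16,777,214 usable host addresses
    # Class B (/16): 65,534 usable host addresses
    # Class C (/24): 254 usable host addresses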
IPv4 started slowly strangling on this structure by the mid-1990s, even as corporations began embracing TCP/IP and the Internet in earnest. Each new IP network address assigned took more addresses out of circulation. Even though there are still plenty of addresses left, that is only due to a series of stopgap measures: strict rationing, repackaging, and better utilization of existing addresses. The IETF and the IANA (the Internet Assigned Numbers Authority, in the process of being superseded by the Internet Corporation for Assigned Names and Numbers, ICANN) used several approaches to extend IPv4's lifetime while IPv6 was being readied. These steps can be characterized as rationing, repackaging, recycling, and replacing.

First, rationing. This one is easy: the process of getting a Class B or Class A network address was tightened up, and Class C addresses were distributed by ISPs, who get a limited number of addresses and need to take care that they are not wasted. Class B addresses were very hard to come by as early as 1990 or so, and Class A addresses virtually impossible. By holding onto the Class A and B network addresses, it is now possible to break them up and redistribute them in smaller chunks.

Next, repackaging. Classless Inter-Domain Routing (CIDR) does away with the class system, allowing ISPs to allocate groups of contiguous Class C addresses as a single route. The alternative would be to have routers treat each individual Class C address as a separate route, resulting in a nightmarishly large routing table. Instead of Class A, B, or C, routed addresses are expressed along with a number indicating how many bits of the address are to be treated as the route. For example, 256 Class C addresses could be aggregated into a single route by indicating that 16 bits of the address are to be treated as the route (the same as for a Class B address). In this way, an ISP or other entity that administers CIDR networks can handle the routing from the Internet.

Address space can be recycled, sort of, in two ways. First, Class A and B addresses that have not yet been assigned can be divided up and allocated to smaller organizations. Where the CIDR approach is sometimes referred to as "supernetting," this approach simply breaks the larger networks into subnets, which can be routed by some entity handling routing for the entire (undivided) network address. Another approach is to use the reserved network addresses, sometimes called Network 10, to do network address translation, or NAT. RFC 1918 sets aside the address ranges 10.0.0.0 - 10.255.255.255, 172.16.0.0 - 172.31.255.255, and 192.168.0.0 - 192.168.255.255 for private intranets. These ranges provide one Class A, 16 Class B, and 256 Class C network addresses to be used by anyone, as long as they don't attempt to forward packets to or from those networks on the global Internet.

The last option is to replace IPv4 addresses entirely. This is the IPv6 option. Each of the other approaches pushes back the day when IPv4 will no longer work, but does not relieve the stress.
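The repackaging idea is easy to demonstrate. The following Python sketch (again my own, not from the article) collapses 256 contiguous Class C (/24) networks into the single 16-bit route described above, and checks addresses against the RFC 1918 private ranges; the 203.0.x.x block is an arbitrary example, not a real allocation:

```python
import ipaddress

# 256 contiguous "Class C" networks: 203.0.0.0/24 through 203.0.255.0/24.
class_cs = [ipaddress.ip_network(f"203.0.{i}.0/24") for i in range(256)]

# CIDR "supernetting": collapse them into the fewest covering routes.
routes = list(ipaddress.collapse_addresses(class_cs))
print(routes)  # [IPv4Network('203.0.0.0/16')]: one route instead of 256

# The RFC 1918 ranges reserved for private intranets.
PRIVATE = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE)

print(is_rfc1918("192.168.1.20"))  # True: fine inside a NAT'd intranet
print(is_rfc1918("198.51.100.7"))  # False: globally routable space
```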
Part 3: The IPv6 Solution

IPv6 adds 128-bit addresses and an aggregatable address space to solve the address shortage while at the same time making possible much smaller backbone routing tables. Its streamlined header and design refinements fix nagging issues such as network autoconfiguration, mobile IP, IP security, fragmentation, source routing, and the very large packets known as jumbograms.

The IPv6 global aggregation addressing architecture splits addresses into two parts: the high-order 64 bits identify the network, and the low-order 64 bits identify the node. A format prefix gives the type of IPv6 address. Next comes a top-level aggregation entity, likely to be a country or a large carrier, followed by 8 bits reserved for future growth. Then comes another aggregation entity, likely to be a large company or Internet provider, and finally a site-level aggregation entity, probably assigned by the entity above it. Such addresses are far more efficient to route across backbones. Aggregation means any address contains its own route. The first few bits of the address might indicate, say, Europe. The packet would go to a router serving Europe, which might see Portugal in the next few bits and forward the packet to Portugal's router. From there, the packet might go on to a router in Lisbon and then on to its final destination.

Figure 2 shows that the Top-Level Aggregation ID (TLA) uses 13 bits. This gives an upper limit of 8,192 (2 to the 13th power) top-level entities, which pares down the size of the routing table a backbone router would have to deal with to forward packets anywhere in an IPv6 Internet. The next 8 bits are reserved, presumably held back in case the TLA allocation (or the Next-Level Aggregation ID allocation) should need to be bigger.

Figure 2: The aggregatable global unicast address format (from RFC 2373)

NLA entities are expected to include large ISPs, among others. These entities get their address allocations from the TLAs, who also handle routing for the NLAs. Each TLA can allocate as many as 16 million or so NLA networks (2 to the 24th). The NLAs, in turn, can each allocate as many as 65,536 networks (2 to the 16th) to Site-Level Aggregation (SLA) entities; in other words, network sites. And each SLA entity still has 64 bits of address space to play with, for as many as 18 million trillion (18,446,744,073,709,551,616) nodes per network.

While the IPv6 address is longer than we're used to, the IPv6 header is simpler (see Figure 3). IPv6 eliminates the length, identification, flag, fragment offset, header checksum, options, and padding fields found in IPv4 headers. Because IPv6 headers are all the same length, no header-length field is necessary. IPv6 prohibits fragmentation except between end nodes, so the identification, flag, and fragment offset fields go away, too.

Figure 3: The IPv6 header (from RFC 2460)

IPv6 options are handled in separate extension headers, so options no longer clutter the main header. The IPv4 type-of-service field has evolved into the traffic class field, and the time-to-live field is replaced by the hop limit field. A flow label field supports IPv6 packet sequences that require the same routing treatment, such as video streams. The simplified, standard-sized IPv6 header also makes routing easier for packets with special options. IPv4 forces routers to sense and handle all special packets, such as those using IP Security encryption and authentication. But IPv6 routers can ignore the end-to-end options and process only those relevant to the routing process.
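Because the header is fixed at 40 bytes, building one is a single pack operation. This Python sketch (my illustration of the RFC 2460 layout just described, not production code) assembles the fixed header; the addresses are documentation examples:

```python
import socket
import struct

def build_ipv6_header(src: str, dst: str, payload_len: int,
                      next_header: int = 6, hop_limit: int = 64,
                      traffic_class: int = 0, flow_label: int = 0) -> bytes:
    """Pack the fixed 40-byte IPv6 header: version, traffic class,
    flow label, payload length, next header (6 = TCP), hop limit,
    then the two 128-bit addresses."""
    first_word = (6 << 28) | (traffic_class << 20) | flow_label
    return (struct.pack("!IHBB", first_word, payload_len,
                        next_header, hop_limit)
            + socket.inet_pton(socket.AF_INET6, src)
            + socket.inet_pton(socket.AF_INET6, dst))

hdr = build_ipv6_header("2001:db8::1", "2001:db8::2", payload_len=20)
print(len(hdr))  # always 40; the fixed size is why no header-length field
```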
Part 4: Migrating to IPv6

Software upgrades, particularly operating system upgrades, can have a huge impact on organizations. Remember the transition from Windows 3.x to Windows 95? In addition to the raw cost of the OS upgrade, system hardware had to be upgraded, or systems discarded, because they lacked the RAM, CPU, or hard drive resources to run the new OS. Migration to IPv6 is likely to produce less intense pain and has the potential to be less expensive. For one thing, the transition will be gradual. Brian Carpenter, Internet Architecture Board (IAB) chair and Program Director of Internet Standards and Technology for IBM, explains: "We never expected the transition process to take less than 15 years, counting from around 1994."

Jim Bound, another active member of the IPng working group and a senior member of technical staff at Compaq's UNIX Internet Engineering Group, urges us not to "view IPv6 as a migration or transition for the majority of organizations, but rather the 'interoperation' of IPv6 with IPv4 for some time." Bound continues, "It's important to realize that IPv6 is an evolution from IPv4, not a revolution to a [totally] new Internet Protocol." By design, moving to support IPv6 will mean moving to a multiprotocol Internet rather than a full-blown protocol cutover or flag-day conversion. No one expects IPv4 to go away, ever. That means the big question will not be whether to upgrade to IPv6, but rather when, how, where, and how much to transition to support for IPv6.

Supporting IPv6 is going to be both simpler and more complex than any other networking decision you'll make. IPv6 interoperability with IPv4 is supported in three ways: tunnels, translators, and dual stacks. As Bound explained, these are all works in progress: "Right now, to build any products on these technologies is premature." He continued, "multiple tools will be defined...a user will have a range of tools to use just like a carpenter, mason, or landscaper does in their tasks." According to Bound, no single mechanism is "better" than the others; he can "see a case where all three are used in one organization eventually."

There is no single road to IPv6 support. Some individual networks will be upgraded en masse, creating reservoirs of IPv6 support surrounded by oceans of IPv4. Individual nodes within the IPv6 networks can be IPv6-only, but IPv4/IPv6 gateways are necessary at their borders for these networks to interoperate with IPv4 networks. And different IPv6 networks can communicate with each other through the IPv4 Internet by setting up IPv6-over-IPv4 tunnels. Other organizations will migrate host by host, with dual-protocol IPv4/IPv6 nodes scattered throughout the existing IPv4 network like raisins in a loaf of raisin bread. These nodes will be able to interoperate with each other in native IPv6, or with IPv6 nodes outside the network by tunneling IPv6 inside IPv4 packets.

Part 5: Rolling IPv6 Out

Even though IPv6 lacks broad-based demand, router vendors Bay Networks, 3Com, Digital, Hitachi, Nokia, Sumitomo, and Telebit all currently support IPv6; the Linux kernel also includes IPv6 support. Other vendors are working on IPv6 routers as well as IPv6 stacks for nodes. Microsoft Research, for example, currently offers an alpha version of an experimental IPv6 stack that works with Windows NT and Windows 2000; the Microsoft Windows networking group is reportedly working on a commercial version.
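The dual-stack model described above is visible at the application layer, too. As a small illustration (mine, not the article's), this Python sketch asks the resolver for every address of a host, IPv6 and IPv4 alike, and connects over whichever family works first; the hostname is a placeholder:

```python
import socket

def connect_dual_stack(host: str, port: int) -> socket.socket:
    """Try each address the resolver returns (AAAA and A records alike),
    the way a dual-protocol node would."""
    last_err = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.connect(sockaddr)
            return sock  # the first family that works wins
        except OSError as err:
            last_err = err
    raise last_err or OSError("no addresses returned")

conn = connect_dual_stack("www.example.com", 80)
print("connected via", "IPv6" if conn.family == socket.AF_INET6 else "IPv4")
conn.close()
```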
The next issue is finding an IPv6 network to connect to. Though you can deploy IPv6 on a testbed network within your organization, that level of implementation will not adequately demonstrate IPv6's strengths or identify potential problems. Right now your only options are the 6BONE and the 6REN: 6BONE is an experimental IPv6 backbone, and 6REN offers production-quality IPv6 networking. In either case, you can't connect to an IPv6 network without connecting to an IPv6 access point: either a pTLA (pseudo top-level aggregator) for 6BONE backbone transit or a pNLA (pseudo next-level aggregator) for non-backbone transit. Access providers are designated pseudo-TLAs and pseudo-NLAs because no official registry is yet assigning "real" TLA or NLA address spaces. The access provider allocates IPv6 network address space to its customers. At that point, you can build a configured IPv4 tunnel from your site's IPv6 router to your 6BONE point of entry.

Internet Architecture Board (IAB) chair Carpenter suggests that "right now, the thing to do is to learn about IPv6." Once implementers are freed from the constraints of an overly full IP address space, almost anything is possible. Carpenter suggests that IPv6 will soon make possible very interesting applications like "small appliances such as smart cell phones, that roll out in millions." Allison Mankin, a computer scientist at the University of Southern California/Information Sciences Institute (USC/ISI), adds that one "potential killer app in IPv6 is efficient, transparent mobility. The pull for continuously connected moving devices is not here yet, but someone could create it with IPv6." Compaq's Bound sees great potential for IPv6, especially where the plentiful IPv6 addresses can reflect a business model, as in "retail department stores where each aisle is an IP subnet."

What should you do about IPv6? Organizations can support IPv6 from the inside out or from the outside in. Early implementers have the option of building islands of IPv6 connectivity within the organization to meet a specific need; research groups may begin IPv6 support this way. Other groups may support IPv6 as requested by end users, for example to enable mobile IPv6 networking, IP security architecture (IPsec) networking, and IPv6-enabled applications. Expect network vendors to fold IPv6 support into all their products just as they now support IPv4. IAB chair Carpenter says, "If it is shipped as a standard operating system or router upgrade, the costs will be operational in nature. That makes it very dangerous to generalize about the cost--a fair analogy would be with the costs of implementing an operating system release."

Mankin suggests that supporting IPv6 will reduce costs in the long run. Moving to IPv6, she says, is for ISPs "not as costly as making a transition to nested NATs (between providers)," while for end users, "the cost of transition is as low as just the cost of upgrading the operating system or router version." Overall, Mankin claims, "the cost of running an IPv6 network is less than the cost of running an equivalent IPv4 network."

Part 6: IPv6 - The Bottom Line?

Is IPv6 all that and a bag of chips? Not everyone agrees, but it's hard to find anyone close to the issues who believes that IPv4 is fine the way it is and needs no updating. Even so, foes of IPv6 proclaim deep flaws and plan to wait for "something better than IPv6" before they give up on IPv4. They believe that address assignment and routing problems are under control.
According to John Levine, author of IDG's "Internet for Dummies," the original motivation for IPv6 was a shortage of IPv4 addresses, and that is no longer enough reason to change. Levine claims that conservation measures have worked so well that "the original impetus for IPv6 has disappeared, and now it's a solution casting about for a problem." While ISPs seem to dislike IPv6 more than most, they should also be the ones who gain the most from a new IP with no restrictions on addressing. Mankin says "providers are opposed to adding another protocol to their operations, because just operating IPv4 is a strain," but "the same providers do respond to customer requests, so I believe that when customers request IPv6, the providers will be less opposed." The IETF has already invested almost a decade in the development of the next generation of IP; it's hard to imagine someone else coming up with an alternative solution any time soon. Continued growth puts the Internet at risk unless relief can be found for the address space crunch as well as the routing table explosion. Despite these pressures, it may ultimately be the pent-up demand for ever more ubiquitous networks that drives acceptance of IPv6. With the future of the Internet hanging in the balance, the next ten years should prove interesting, to say the least.

Pete Loshin (email@example.com) began using the Internet as a TCP/IP networking engineer in 1988, and began writing about it in 1994. He runs the website Internet-Standard.com, where you can find out more about Internet standards.
SAS vs. SATA

In the subsequent sections I'm going to focus on silent data corruption (SDC) in the SAS and SATA data channels, which affects the data integrity of each type of connection. Then I'll talk about how T10 DIF/PI and T10 DIX reduce silent data corruption and how they integrate (or not) with SAS and SATA. Finally, I'll mention the ZFS file system as a way to (possibly) help the situation.

Silent Data Corruption in the Channel

In the SAS vs. SATA argument, a key area that is often overlooked is the data channel itself. "Channel" refers to the connection from the HBA (host bus adapter) to the drive itself. Data travels through these channels from the controller to the drive and back. As with most things electrical, channels have an error rate due to various influences. The troubling aspect of SAS and SATA channel errors is that they result in what is termed silent data corruption (SDC): you don't know when they happen. A bit is flipped and you have no way of detecting it, hence the word "silent."

In general, the standard specification for most channels is one bit error in 10^12 bits. That is, for every 10^12 bits transmitted through the channel, one data bit is corrupted silently (with no knowledge of it). This number is referred to as the SDC rate: the number of bits transferred before you encounter a silent error. The larger the SDC rate, the more data needs to pass through the channel before an error occurs. The table below lists the number of SDCs likely to be encountered in a year for a given SDC rate and data transfer rate (the table is courtesy of Henry Newman, from a presentation given at the IEEE MSST 2013 conference).

Table 2: Number of errors as a function of SDC rate and throughput

For example, if the SDC rate is 10^19 and the data transfer rate is 100 GiB/s, you will encounter about 2.7 SDCs in a year. The key thing to remember is that these errors are silent: you cannot detect them. The SATA channel (like the InfiniBand channel) has an SDC rate of about 10^17. If you transfer data at 0.5 GiB/s, you will likely encounter 1.4 SDCs in a year. In the case of faster storage with a transfer rate of 10 GiB/s, you are likely to encounter 27.1 SDCs in a year (one every two weeks). For very high-speed storage systems that use a SATA channel with a transfer rate of about 1 TiB/s, you could encounter 2,708 SDCs (one every 3.2 hours).

On the other hand, SAS channels have an SDC rate of about 10^21. Running at a speed of 10 GiB/s, you probably won't hit any SDCs in a year. Running at 1 TiB/s, you are likely to have 0.3 SDCs in a year. The importance of this table should not be underestimated: a SATA channel encounters many more SDCs than a SAS channel, and the key word in the abbreviation SDC is "silent." You cannot tell when or if the data is corrupted.

Sometimes even an SDC rate of 10^21 is not enough. We have systems with transfer rates hitting the 1 TiB/s mark pretty regularly, and new systems being planned and procured with transfer rates of 10 TiB/s or higher (100 TiB/s is not out of the realm of possibility). Even with SAS channels, at 10 TiB/s you could encounter 2.7 SDCs a year. This may seem like a fairly small number, but if data integrity is important to you, it is too big.
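The table's arithmetic is easy to check. Here is a small Python sketch (my own, not from the article or from Newman's presentation) that computes the expected number of silent errors per year from an SDC rate and a sustained transfer rate:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600
BITS_PER_GIB = 8 * 2**30  # GiB -> bits

def sdcs_per_year(sdc_rate_bits: float, rate_gib_s: float) -> float:
    """Expected silent errors in a year of sustained transfer:
    bits moved per year divided by bits per silent error."""
    return rate_gib_s * BITS_PER_GIB * SECONDS_PER_YEAR / sdc_rate_bits

# Reproducing figures quoted in the text:
print(round(sdcs_per_year(1e17, 0.5), 1))        # SATA-class, 0.5 GiB/s -> 1.4
print(round(sdcs_per_year(1e17, 10), 1))         # SATA-class, 10 GiB/s -> 27.1
print(round(sdcs_per_year(1e21, 1024), 1))       # SAS-class, 1 TiB/s -> 0.3
print(round(sdcs_per_year(1e21, 10 * 1024), 1))  # SAS-class, 10 TiB/s -> 2.8
                                                 # (the article rounds to 2.7)
```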
What if the data corruption occurs on someone's genome sequence? All of a sudden they may appear to have a gene mutation that they actually don't, and the course of cancer treatment follows a direction that may not actually help the person. Moreover, what happens if the channel is failing and its bit error rate gets worse before it fails entirely? For example, what happens if the channel rate decreases from 10^12 to something smaller, perhaps 10^11.5 or 10^11? At the very least, systems with very high data rates could see a few more SDCs than expected.

There is a committee that is part of the InterNational Committee for Information Technology Standards (INCITS), which in turn reports to the American National Standards Institute (ANSI). This committee, called the "T10 Technical Committee on SCSI Storage Interfaces," or T10 for short, is responsible for the SCSI architecture standards and most of the SCSI command set standards (used by almost all modern I/O interfaces).
Q&A: MIT Media Lab professor Nicholas Negroponte says his plan to build low-power, wireless-enabled laptops in bulk includes strategies to generate the power they will need.

Providing electronic communications to underserved populations has been a challenge for as long as many of these technologies have existed. Cable television, for example, started in part to provide video where broadcast or satellite couldn't. Projects such as the Royal Flying Doctor Service and the Alice Springs School of the Air, both in Australia, leveraged two-way radios powered by sewing-machine treadles, now largely supplanted by the Internet. A few decades ago, Arthur C. Clarke, a noted science and science-fiction author, proposed bringing information to remote villages using a combination of televisions and satellite broadcast. In the early 1990s, initiatives such as Montana's Big Sky Internet worked to bring even one or two computers to reservations and remote sites, using store-and-forward not only for e-mail but also for queries to the Web and its predecessors, such as Gopher and Usenet. More recently, there have been reports of the Pony Express of the 21st century: Wi-Fi-equipped motorcycles driving through areas, picking up and delivering e-mail without needing to stop.

One of the latest proposals for bringing computer and Internet technology to remote regions that are often unelectrified and unnetworked comes from Nicholas Negroponte, a professor at the MIT Media Lab, with what has been dubbed the "$100 Notebook Program." It is a proposal for creating low-power, wireless-enabled notebook computers in major quantities, targeted at a per-unit cost of about $100. Guy Kewney's Feb. 2 column, "Power Politics Overshadow $100 PC Concept," took issue with aspects of the plan, particularly the power requirements. Negroponte responded briefly to Kewney's column, pointing out that the Media Lab is working on a variety of power options. Daniel P. Dern conducted this interview (by e-mail, due to Negroponte's travel schedule) for eWEEK.com as a more in-depth follow-up.

What is the relationship of your plans to other initiatives, such as the "Wi-Fi-on-Wheels" motorcycles, or the Global Services Trust Fund efforts discussed at the Arthur C. Clarke Institute?

Arthur is an old and dear friend. He has been an inspiration since we met in 1976. I was with him the day my book "Being Digital" came out. In 1981, Seymour Papert and I started in Senegal, under the Paris-based "World Center." [Editor's Note: Papert is a mathematician, a co-founder with Marvin Minsky of the Artificial Intelligence Lab at MIT, and a founding faculty member of the MIT Media Lab, where he continues to work.] Steve Jobs gave us Apple IIs. That, and later work in Colombia and Costa Rica, was geared toward primary schools, way ahead of its time.

In the '90s, the Media Lab had projects in what you call "providing life-changing information technology to rural areas and Third World countries" in Brazil, India and a handful of other countries that formed Digital Nations at the MIT Media Lab. One project was called LINCOS. The Pony Express you mention was one of those projects as well. The story is not new, and we have been at parts of it for almost 25 years. What is new is the attack, focused on the laptop, for several reasons: One, while not solved, telecommunications is working itself out, and bandwidth scales, in the sense that it is very elastic for asynchronous applications. A 2-megabit line can well serve 10 or 100 kids.
Two, we believe that children learn far better with a "one laptop per child" model: something they own and carry back and forth, use for work, play, at home, etc. Three, the cost of laptops does not scale the same way; 100 kids cost ten times what 10 kids cost.
A European company specializing in embedded systems announced today it would make available on the Internet of Things a resource that has become increasingly important to the actual Internet (The Internet With Some People, But Mostly 'Bots, Trolls and Marketing Remoras). The Internet of Things, according to Amaro, Italy-based Eurotech, consists of any device with the ability to gather and process information, whether it's mounted in buildings or vehicles or carried by humans, and communicate that data across a network. Not surprisingly, it's difficult for many of those devices to communicate their data to the servers interested in hearing from them.

Eurotech has created its own cloud designed to make it easier for machines to find one another in this crazy, mixed-up world, and to give companies too small to build their own Internet-of-Things meeting places a place to let their devices go and do what comes naturally (or industrially, depending on your point of view). The Everyware Cloud 2.0 is a software development and runtime platform designed to allow customers to connect automated sensors, embedded devices, mobile devices and other automated gear using relatively standard protocols optimized for efficient machine-to-machine (M2M) communication.

The cloud as middleware tier in multi-tier web apps

Everyware Cloud 2.0 isn't quite "open" and "standard." It depends on Eurotech's own Everyware Software Framework (ESF), a proprietary, specialized layer of middleware that installs on top of the operating system of an embedded device, but below the application that runs it. ESF allows companies with devices that are too smart to remain unconnected to get connected without having to write all the arcane, low-level systems-management code that differs with each device. Instead, they can write to the APIs in ESF, which handles all the picayune requirements of making software talk to non-standard, non-PC hardware that could be anything from a smartphone to the temperature-regulation monitor on a nuclear-fuel-storage facility.

Everyware Cloud is "device-independent," meaning in this case that it supports any device with embedded intelligence, as long as the device itself runs ESF. That's not quite what most people think of as "open," but things are different in the device world. Not every device has the same sense of ethics as the imprecise, fallible, wetware-occluded bags of random biochemistry for which it works. A little extra propriety in the middleware might be acceptable, even in an "open" cloud.

Eurotech's own devices, which range from smart gateways for other devices to embedded systems boards, run Wind River Linux and support development tools, full networking capability, full remote access and control, remote on/off, and the ESF middleware. Both ESF and the Everyware Cloud support other vendors' hardware as well, as long as it runs Eurotech's ESF first.

The real benefit of the Kind-of-Proprietary Cloud of Things, according to Eurotech, is that customers can run analysis and reporting apps on the cloud to crunch the numbers all those lonely devices are sending out in their search for love. Depending on how immediate the need, data from those devices can be analyzed in real time, at intervals, or in retrospect after all the excitement is over and the fire trucks have gone home.
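The announcement doesn't spell out which wire protocols Everyware Cloud 2.0 speaks, so as a generic illustration of the M2M pattern, here is a Python sketch of a device publishing telemetry over MQTT, a lightweight publish/subscribe protocol widely used for this kind of traffic; the broker address, topic, and readings are all hypothetical:

```python
import json
import time

import paho.mqtt.client as mqtt  # pip install "paho-mqtt<2" (1.x API shown)

BROKER = "broker.example.com"  # hypothetical M2M broker, not Eurotech's
TOPIC = "devices/helmet-042/telemetry"

client = mqtt.Client(client_id="helmet-042")
client.connect(BROKER, 1883)
client.loop_start()  # service network traffic on a background thread

# A sensor-equipped device periodically publishing small readings.
for _ in range(3):
    reading = {"accel_g": 2.4, "ts": time.time()}
    client.publish(TOPIC, json.dumps(reading), qos=1)  # at-least-once delivery
    time.sleep(1)

client.loop_stop()
client.disconnect()
```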
This kind of cloud-side analysis is a good option for new services like the one Euro-startup Sensuss is launching to prevent head injuries in sports, using helmets with embedded sensors whose data can be transmitted and analyzed by apps running in the Everyware Cloud, according to quotes from the company's chief engineer in the Eurotech announcement. Sounds kosher to me. The first real clouds were also proprietary, as were the first real web-based apps. The Everyware Cloud isn't designed as a universal solution, but it's certainly a start. Now we just have to hope our devices are mature and responsible enough to handle all that direct Internet access without individual supervision. Otherwise the Internet of Things would require the construction of another layer of the network: The Internet of Things That Monitor Networked Things That Can't Be Trusted. That would be too expensive.
It's one of the most essential questions, one that speaks to the very fact of our existence: why is the universe made of matter? Researchers at Brookhaven National Laboratory are attempting to determine why the early universe ended up with an excess of matter. Without that excess, the matter and antimatter created by the Big Bang would have cancelled each other out, leaving the universe devoid of matter. Imagine a world that contains nothing but light: no planets, no stars, no people.

Theoretical physicists have long suspected there was a way to solve for this imbalance, and by doing so, shed light on our very existence. They've spent the last 50 years attempting to unravel this fundamental riddle. "The fact that we have a universe made of matter strongly suggests that there is some violation of symmetry," said Taku Izubuchi, a theoretical physicist at the US Department of Energy's (DOE) Brookhaven National Laboratory. This asymmetry is called charge conjugation-parity (CP) violation. It occurs when "certain subatomic interactions happen differently if viewed in a mirror (violating parity) or when particles and their oppositely charged antiparticles swap each other (violating charge conjugation symmetry)."

Scientists at Brookhaven discovered evidence of this symmetry "switch-up" in experiments conducted in 1964 at the Alternating Gradient Synchrotron, with additional evidence coming from experiments at CERN. The work led to a Nobel Prize for the researchers, who were able to observe the decay of a subatomic particle, known as a kaon, into two other particles called pions. These particles are further composed of quarks. But that's as far as the research went: understanding kaon decay in terms of its quark composition was the next horizon.

The next step was for theoretical physicists to develop a theory to explain this kaon decay process, a mathematical description that could calculate how frequently it happens and whether it would help explain the fundamental matter imbalance in the universe. "Our results will serve as a tough test for our current understanding of particle physics," Izubuchi said. The work belongs to a field called Quantum Chromodynamics, or QCD, which comprises a multitude of variables and possible values for those variables. The necessary computational tools only recently became sophisticated enough to handle such advanced calculations.

Currently, theoretical physicists are performing kaon calculations using the QCDOC supercomputer at Brookhaven. Still, even with best-in-class supercomputers, the problem would have taken many years if not for a new, efficient algorithm developed by the Brookhaven group in late 2012. "The algorithm…divides the whole calculation into a 'difficult' but small piece and an 'easier' large piece, and devotes more computation time to the latter part to save the total computation required," explains Izubuchi. "It accelerates the speed of the computations by a factor of ten or more. This very simple idea of dividing the calculation into two pieces actually helped to reduce the statistical error of the computation by a lot," he adds.

So did the theorists achieve their long-sought answer? It's a matter of yes and no. The calculated strength of the weak interaction only partially accounts for the matter-antimatter asymmetry after the Big Bang, according to Izubuchi. He adds: "We cannot explain why the universe is matter-rich based solely on the amount of CP violation that this kaon decay accounts for.
So there may be other sources of CP violation other than the weak interaction that would be revealed if a discrepancy were found between our calculation and the experimental results.” This research is part of DOE’s Scientific Discovery through Advanced Computing program “Searching for Physics Beyond the Standard Model: Strongly-Coupled Field Theories at the Intensity and Energy Frontiers,” supported by the DOE Office of Science. The project relied on a number of Blue Gene/Q systems in labs around the world as well as PC cluster machines at Fermi National Accelerator Laboratory and at RIKEN.
In today's world, people spend a lot of time catching up on several fields: education, science, sports, politics, and travel. As all this information is scattered across various websites on the internet, we usually gather information from each site and then collate it. Accessing a single website that has all the information or services can significantly reduce that time. This approach brings content from multiple sites under a single umbrella and also reduces the user's internet bandwidth: the user accesses multiple websites through a single application URL in the web browser and need not enter each source website's URL.

The WebClient wrapper class is one of the features available in .NET. It defines the data extraction procedure used to download resources from user-specified URLs, and it provides common methods for sending data to, or receiving data from, any local intranet or Internet resource identified by a URL. The WebClient class lives in the "System.Net" namespace (not "System.Net.Sockets", which holds the lower-level socket types). It can be used to extract data from HTML, XML and CSV resources. The WebClient class provides four methods for downloading data from a resource:
- The OpenRead method returns data from the resource as a stream
- The DownloadString method returns data from the resource as a string
- The DownloadData method downloads data from a resource and returns a byte array
- The DownloadFile method downloads data from a resource to a local file

How it works

The target resource website provides data in HTML format. Every ten minutes, we automatically fetch the updated data from the target website using the WebClient object in .NET. WebClient also provides asynchronous methods for fetching webpage data; they are named similarly to the synchronous methods. The OpenRead method is used to retrieve the following information, which is then appended in the web application:
- Train status
- Electricity board

Possibility: We will be able to integrate content from multiple sites in our application. I have implemented a sample application using this logic and attached a screenshot below.

It takes more time to fetch the content from the target website if the internet connection is slow.

Supports the following platforms: Windows Phone 8.1, Windows Phone 8, Windows 8.1, Windows Server 2012 R2, Windows 8, Windows Server 2012, Windows 7, Windows Vista SP2, Windows Server 2008 (Server Core Role not supported) & Windows Server 2008 R2 (Server Core Role supported with SP1 or later; Itanium not supported).
- Supported in: 4.5.1, 4.5, 4, 3.5, 3.0, 2.0, 1.1 & 1.0
- .NET Framework Client Profile
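For readers outside the .NET world, the four download patterns map directly onto most HTTP libraries. Here is a rough Python analogue (my sketch, using only the standard library; the URL is a placeholder, and this is not the WebClient API itself):

```python
import urllib.request

URL = "http://www.example.com/data.html"  # placeholder target resource

# OpenRead analogue: read the resource as a stream.
with urllib.request.urlopen(URL) as stream:
    first_chunk = stream.read(1024)

# DownloadString analogue: fetch the body and decode it to a string.
with urllib.request.urlopen(URL) as resp:
    text = resp.read().decode("utf-8", errors="replace")

# DownloadData analogue: fetch the body as a raw byte array.
with urllib.request.urlopen(URL) as resp:
    raw = resp.read()

# DownloadFile analogue: save the resource to a local file.
urllib.request.urlretrieve(URL, "data.html")

print(len(first_chunk), len(text), len(raw))
```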
App Inventor for Android is designed to help people with no computer programming experience write applications for smartphones based on Google's Android open-source mobile operating system.

Google on July 12 launched App Inventor for Android, a tool people without programming knowledge can use to build applications for smartphones based on the company's open-source Android platform. While Android was designed for software programmers who speak geek, App Inventor is a sort of software Lego set for amateur programmers, who can sign up to use the tool with a Gmail account. Instead of writing code, users drag and drop blocks, which are ready-made code sets, onto a programming palette to construct their applications. These blocks include images, sound, text and screen arrangement. See Google's demo video, in which an amateur programmer connects her Google Nexus One to her desktop PC to build an application with App Inventor.

The App Inventor Web page in Google Labs states that the tool provides building blocks for "just about everything you can do with an Android phone," as well as blocks for storing information, repeating actions and communicating with Web services. While the Web page said users can use App Inventor to construct games or draw pictures, users may also do more useful things, such as creating a quiz application to help classmates study for a test. Users may even take advantage of Android's text-to-speech capabilities, for example to make the phone ask the test questions aloud.

App Inventor also features a GPS-location sensor to let users build applications that know their location. Those who already command some Web programming knowledge can use App Inventor to write Android applications that talk to Twitter, Amazon.com and other websites and services. However, Google's intent is to let average consumers build their own applications for the smartphones they use every day. This is something that has never yet caught on among desktop computer users, despite tools such as Basic, Logo and Scratch.

Hal Abelson, a computer scientist at the Massachusetts Institute of Technology who led the project as a visiting faculty member at Google, said more than a year ago on Google's Research Blog that several major universities, including Harvard, MIT, the University of California at Berkeley and the University of Michigan, were testing App Inventor. Abelson told the New York Times that Google tested the tool with "sixth graders, high school girls, nursing students and university undergraduates who are not computer science majors."

Developers have written close to 100,000 applications for the Android platform. If App Inventor catches on among nonprogrammer Android phone users, it could boost that number considerably. At the least, App Inventor could increase awareness of Android as an alternative to proprietary platforms such as Apple's iPhone. The next logical leap for App Inventor would be an App Inventor Mashup Maker. In such an instance of classic crowdsourcing, Google would provide tools allowing users to build mashups, or application chimeras.
The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have together published a new standard governing the use of biometric authentication technology. The newly issued standard, designated ISO/IEC 24745:2011, Information technology – Security techniques – Biometric information protection, is designed to provide guidance for implementing biometric technology to further protect sensitive online transactions.

"As the Internet is increasingly used to access services with highly sensitive information, such as eBanking and remote healthcare, the reliability and strength of authentication mechanisms is critical. Biometrics is regarded as a powerful solution because of its unique link to an individual that is nearly or absolutely impossible to fake," said Myung Geun Chun, Project Editor of ISO/IEC 24745.

"And the technology has come of age. The cost of biometric techniques has been decreasing, while their reliability and popularity have been growing. But biometric identification raises unique privacy concerns," Chun continued.

The privacy concerns center on the need to collect, process and store sensitive biometric information from users of such systems. Unlike other authentication systems, a breach of biometric data is difficult to remedy. Users cannot simply alter the authenticating data used to access secure networks, as one would with usernames and passwords; the data is permanently and uniquely identifiable to the individual user.

"While the unchanging and distinct association with an individual on the one hand, provides strong assurance of authentication, this binding which links biometrics with personally identifiable information on the other hand, carries some risks, including the unlawful processing and use of data. ISO/IEC 24745 is an invaluable tool for addressing those risks," Chun stated.

According to the ISO website, the new standard specifies:
- Analysis of the threats to, and countermeasures inherent in, biometrics and biometric system application models
- Security requirements for binding between a biometric reference and an identity reference
- Biometric system application models with different scenarios for the storage and comparison of biometric references
- Guidance on the protection of an individual's privacy during the processing of biometric information
Software builds - that is, compiling programs into machine-executable code - are an important part of most developers' lives. When builds fail due to compilation errors, programmers must take extra time and brainpower to find and fix the problem, reducing their productivity. A better understanding of the causes of frequent build errors, then, could help lead to new or improved development tools that reduce these errors and increase developer output.

That was the motivation behind a new study from a group of researchers from Google, the Hong Kong University of Science and Technology and the University of Nebraska. The team wanted to address three main questions: How often do builds fail? Why do builds fail? And how long does it take to fix builds? To answer these questions, they looked at the results of over 26 million builds by 18,000 Google engineers from November 2012 through July 2013. The builds were of Java or C++ code, Google's most common languages, and errors were generated by either the javac compiler (for Java) or the LLVM Clang compiler (for C++). A build was defined as a single request from a programmer that executes one or more compiles, and was deemed a failure if any compile in the build failed. Compile error messages were grouped into one of five categories (dependency, type mismatch, syntax, semantic and other). After reading through the study, a few findings struck me as particularly interesting:

Build failure rates are not related to build frequency or developer experience

Going in, the researchers had hypothesized that developers who build more frequently would experience a higher rate of build failure, but they found no correlation between a developer's build count and the build failure ratio. They also theorized that more experienced developers would have a lower failure rate. Again, this was found not to be the case: there was no evidence in the data that experienced developers (defined as those with at least 1,000 builds in the previous nine months) had a lower failure ratio than novice developers (those with fewer than 200 builds in the previous three months).

The majority of build errors are dependency-related

Almost 65% of all Java build errors were classified as dependency-related, such as cases where the compiler couldn't find a symbol (the most common error, at 43% of all Java build errors), a package didn't exist, or a Google-specific dependency check failed. Similarly, almost 53% of all C++ build errors were classified as dependency-related. The most common such errors were using an undeclared identifier and missing class variables.

C++ generates more build errors than Java, but they're easier to fix

The study found that the median build failure rate for C++ code was 38.4%, while the median for Java was 28.5%. Syntax errors also occurred more frequently when building C++ code than Java. The researchers attribute this difference to the greater use of IDEs in Java development, which helps to cut down on these simpler errors. It probably also helps to explain why C++ build errors tended to be resolved more quickly than Java errors.
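The five-way grouping used above can be approximated mechanically. This illustrative Python sketch (my own; the study's real taxonomy is derived from the compilers' diagnostics and is far more detailed) buckets raw error messages with simple pattern matching:

```python
import re

# Rough, illustrative patterns for the study's five buckets.
RULES = [
    ("dependency",    r"cannot find symbol|package \S+ does not exist|"
                      r"undeclared identifier|no such file"),
    ("type mismatch", r"incompatible types|cannot convert|invalid conversion"),
    ("syntax",        r"';' expected|parse error|unexpected token"),
    ("semantic",      r"unreachable (code|statement)|duplicate|never used"),
]

def classify(message: str) -> str:
    for label, pattern in RULES:
        if re.search(pattern, message, re.IGNORECASE):
            return label
    return "other"

print(classify("error: cannot find symbol"))            # dependency
print(classify("error: incompatible types: int to T"))  # type mismatch
print(classify("error: ';' expected"))                  # syntax
```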
One of the key implications of the study is that tool developers should focus on helping software engineers prevent or resolve dependency errors. Cutting back on the number of these build errors, or at least the time required to resolve them, should help improve developer productivity. In fact, the authors say that, based on this study, the Google infrastructure team is now trying to do just that. How generalizable these Google-specific findings are to the rest of the software developer population is unclear. But it's a good start for a type of research that could eventually make developers' lives easier and improve the efficiency of the whole software development process.

This story, "Why Software Builds Fail," was originally published by ITworld.
The old saw "an ounce of prevention is worth a pound of cure" can certainly be applied to all aspects of computer security. To protect our computer networks from the endless stream of attacks by hackers and malicious code, we employ preventive measures that may include sound security policies, well-designed system architecture, properly configured firewalls and strong authentication programs. While these tools are helpful, they may not be enough for today's breed of sophisticated attacks.

A misconception of some network administrators is that firewalls first recognize attacks, then block them. In actuality, a firewall is more like a fence around your home or business with a couple of gates as entry points. The fence has no ability to determine whether somebody coming through a gate should be permitted entry; it simply restricts all access to those entry points, or gates.

Enter intrusion detection systems (IDSs). An intrusion detection system acts as a burglar alarm, alerting you to potential external break-ins or internal misuse of the system(s) being monitored. Network intrusion detection and prevention systems are software programs and/or hardware-based devices designed to detect attempts to compromise the confidentiality, integrity or availability of the protected network or associated computer systems.

One fundamental objective of computer security management is to affect the behavior of individual users in a way that protects information systems from security problems. Intrusion detection systems help organizations accomplish this goal by increasing the perceived risk of discovery and punishment of attackers. This serves as a significant deterrent to those who would violate security policy. IDSs examine patterns of computer activity instead of just individual files, giving them further-ranging protective abilities than ordinary antivirus software.

Of the two basic IDS types, the most versatile is the HIDS (host intrusion detection system), which is installed locally on host machines. From this local installation, a HIDS can ascertain how attacks are affecting a particular host or system (which processes and which users). Since they can directly access and keep track of the data files and OS processes that may be targeted by an attack, HIDSs can see the outcome of an attempted breach.

NIDSs, or network-based intrusion detection systems, identify breaches by monitoring and capturing network traffic. In a NIDS, the software or hardware is part of the system (dedicated software/hardware) and examines network packets. A NIDS can be composed of a set of single-purpose sensors situated at different sites on a network. At these sites, network traffic is monitored, including local analysis of the traffic as well as reporting of attacks to a centrally located console.

As with firewall and antivirus products, there is no shortage of vendors in the IDS market. Since IDS products are notorious for producing false positives, a high-quality (albeit more expensive) security appliance is recommended for network intrusion detection. Three industrial-strength (and popular) IDS appliances are:
- Real-Time Network Awareness by Sourcefire (www.sourcefire.com).
- RealSecure by Internet Security Systems (www.iss.net).
- Cisco IDS (www.cisco.com).
These products help make intrusion detection systems more efficient and as such are valuable in enterprise networks.
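To give a feel for the kind of traffic-pattern heuristic a NIDS sensor applies, here is a toy Python sketch (my own, vastly simpler than any real product) that flags a source address once it has probed an unusual number of distinct ports:

```python
from collections import defaultdict

SCAN_THRESHOLD = 20  # distinct ports before we raise an alert

def detect_port_scans(events):
    """events: iterable of (source_ip, destination_port) pairs, e.g. drawn
    from firewall logs or captured packet headers. Yields alert strings."""
    ports_seen = defaultdict(set)
    alerted = set()
    for src, dport in events:
        ports_seen[src].add(dport)
        if len(ports_seen[src]) >= SCAN_THRESHOLD and src not in alerted:
            alerted.add(src)
            yield (f"ALERT: possible port scan from {src} "
                   f"({len(ports_seen[src])} distinct ports probed)")

# Synthetic traffic: one host probing ports 1-25, another behaving normally.
traffic = [("203.0.113.9", p) for p in range(1, 26)] + [("198.51.100.4", 443)]
for alert in detect_port_scans(traffic):
    print(alert)
```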
For those looking for a no-cost, software-based IDS solution, Snort may be just the product to fit the bill. According to Snort.org, "Snort is a lightweight network intrusion detection system, capable of performing real-time traffic analysis and packet logging on IP networks. It can perform protocol analysis, content searching/matching and can be used to detect a variety of attacks and probes, such as buffer overflows, stealth port scans, CGI attacks, SMB probes, OS fingerprinting attempts and much more." A compelling Snort feature is that it has been ported to many different operating systems, including Mac OS X. For more information or to download a free copy of Snort, visit www.snort.org. Additional information on configuring Snort can be found at www.winsnort.com. At this Web site, the novice as well as the expert will find tips and useful information on the installation and configuration of Snort in a Windows, Solaris 9 (beta) or Red Hat 9 environment.

Figure 1: GFI LANguard S.E.L.M. 4 Monitor

Some network managers mistakenly assume that unauthorized access is largely attempted by external parties. According to GFI, "the majority of corporate security threats stem from internal sources, against which a firewall offers no protection. GFI LANguard Security Event Log Monitor (S.E.L.M.) monitors the security event logs of all your Windows NT/2000/XP/2003 servers and workstations and alerts you to possible intrusions/attacks in real time, giving you peace of mind." (See Figure 1.)

GFI LANguard S.E.L.M. ships with a security event analysis engine that takes into account the type of security event, the security level of each computer, when the event occurred (outside or during operating hours), the role of the computer, and its operating system (workstation, member server or domain controller). Based on this information, GFI LANguard S.E.L.M. can decide whether a security event is critical, high, medium or low (see Figure 2). For more information and pricing, visit www.gfi.com. GFI also provides a freeware version of GFI LANguard S.E.L.M. that performs event-log-based intrusion detection and network-wide event log management for one server and up to five workstations.

Figure 2: GFI LANguard S.E.L.M. 5.0 Configuration

Douglas Schweitzer, A+, Network+, i-Net+, CIW, is an Internet security specialist and the author of "Securing the Network from Malicious Code" and "Incident Response: Computer Forensics Toolkit." He can be reached at firstname.lastname@example.org.
A lone police officer in Vancouver, Wash., detained a shooting suspect at a local park and radioed for backup. But the police channels were jammed with radio traffic, and the officer's call never got through. Luckily, another officer heard the location before the arresting officer went off the air, and arrived to provide backup. This incident last year prompted Vancouver City Councilmember Pat Jollota, a former dispatcher, to create a solution to the crowded public safety channels in her community. But it was costly. The city's stopgap measure was to borrow $2 million from a water and sewer fund to purchase some new communications equipment and to bounce signals off a repeater in nearby Portland, Ore., in a less-crowded frequency range.

This anecdote illustrates a common problem in public safety communications. As more public safety vehicles, outfitted with more communications equipment, are put on the street, the narrow bands used by police cars, ambulances and fire engines are becoming crowded, especially in urban areas. When this happens, a local government can try to get more frequencies assigned by the federal government. But this is becoming more difficult as cellular telephones, beepers and other civilian communications applications take over the airwaves.

Public safety should have a particular part of the spectrum set aside for use by ambulances, police and fire departments, assert local government and public safety advocates. Without this allocation, it could become more difficult for local public safety agencies to acquire radio frequencies and establish interoperability between agencies. It could also hinder effective use of emerging applications, such as sending archived mug shots or fingerprints to a patrol car.

BRIEFING ON SPECTRUM

Spectrum is the finite space that contains all radio, television and microwave frequencies. Different types of signals are allocated certain parts of the spectrum, with television using a different allocation than beepers, for example. The FCC assigns frequencies within these allocations, and equipment is manufactured to work within the assigned areas of the spectrum. The FM radio station you listen to (unless it's a "pirate" station) has an FCC license to broadcast at a given power using a particular frequency within the FM radio allocation.

Telecommunications providers used to be assigned frequencies by the FCC. But that changed with the Omnibus Budget Reconciliation Act of 1993, which authorized the FCC to use competitive bidding to award licenses in some areas of the spectrum, with the proceeds earmarked for deficit reduction. By early this year, more than $9 billion had been raised through auctions of hundreds of new licenses for new wireless services.

The problem is that public safety agencies, which have generally not had much space allocated over the years, do not have the financial resources to compete in auctions against the private sector. As radio sections of the spectrum become more crowded, especially in urban areas, there is concern that too much space could be auctioned off, leaving public safety without sufficient room both now and in the future.

When Congress voted in 1993 to allow spectrum auctions, it ordered a report with recommendations on spectrum allocations for public safety use.
The Public Safety Wireless Advisory Committee (PSWAC) was then formed, under the auspices of the FCC and the Commerce Department's National Telecommunications and Information Administration, to study and make recommendations on what public safety's spectrum needs could be between now and 2010. PSWAC has been working for about a year, and participants have included the FBI director, a New York City deputy police commissioner, and representatives from the military and the private sector. The commission's report and recommendations, which took a year to develop, were scheduled to be released in September and should be available on the Internet or from the FCC. The report becomes part of the FCC's process to allocate spectrum for public safety. A docket has been opened on the matter (WT Docket 96-86), and comments were taken until Sept. 20. Reply comments on submissions are being taken until Oct. 18, but this period could be extended. The commission intends to act on the issue by the end of the year, said Tom Stanley, an FCC spectrum advisor. A good way for public safety advocates to get involved is through trade organizations participating in the process, such as the Association of Public-Safety Communications Officers (APCO) and the International Association of Chiefs of Police. The National League of Cities is also active in the process.

Interviewed before the final report was released, PSWAC chair Phil Verveer, a Washington, D.C., telecommunications attorney, said the group was hoping to secure spectrum between what is now television channels 60 through 69. This range is used by only about 100 stations nationwide, including Boston's Channel 69, which broadcasts the region's Red Sox games. The frequencies of this range fall between about 706 megahertz and 806 megahertz, just below a spectrum area used for mobile communications. But Verveer said the committee doesn't expect public safety to get all of that from the FCC. "It would be hard for them to accommodate 100 megs," he said. If just four of those channels are set aside, the current public safety allocation of 23 megahertz scattered across the spectrum would be doubled, he said. There are other possibilities being looked at, including some Defense Department bands.

The commission's preference is to find public safety frequencies in the megahertz range, but this is difficult because that area is crowded, said David Wye, a technical advisor in the FCC's Wireless Division working on the issue. "We want to find one area for interoperability." Public safety agencies around the country are scattered across the spectrum, making interoperability between agencies, and even within large ones, difficult and sometimes impossible. A solution is to have a space on the spectrum allocated for interoperability.

INTEROPERABLE ECONOMIES OF SCALE

If public safety is eventually allocated spectrum space different from what a local jurisdiction currently uses, locals won't necessarily have to give up their current space and move to a new frequency. But when expanding or upgrading a communications system, a local agency could move into the allocated range. It would also be able to get equipment in the allocated area for interoperability purposes. This could help governments save money, too. When a local agency gets a space on the spectrum, manufacturers sell equipment made to work in those frequencies. But if there is a common allocation, then equipment such as radios may not be as expensive, because fewer units would need to be customized by manufacturers.
"It would be wonderful if we had the same spectrum across the country," said Linda L. Bunker, commanding officer of the Los Angeles Police Department Emergency Command Control Communications System Division. "There could be a lot of economies of scale realized." IN THE MEANTIME But until public safety allocations and interoperability are established by the federal government, local agencies will probably continue to struggle with clogged airways. When radio users begin to step on each other because too many people are on the same frequencies, agencies try to get more space or use more channels to split up the air traffic. A police department may divide its radio channels into sectors to deal with growing numbers of users, for example. It costs money to do all of this, because new radios and other equipment have to be purchased, sometimes for the entire force. Agencies can even be pushed off frequencies. The Los Angeles Police Department, for example, had to move its microwave communications because that portion of the spectrum was auctioned as PCS licenses last year by the FCC. Los Angeles is also having a difficult time finding spectrum. "We have a tremendous problem enhancing and enlarging," said Bunker. "We've been going all over the spectrum." Vancouver has some of the same problems with space, and bouncing its communications off Portland's repeater is only a temporary solution. A $14 million plan for the city and Clark County, Wash., to switch communications to a different frequency and get new radios for fire, police and other vehicles was being considered by local government bodies. Funding could come from a bond issue which may be put on this November's ballot. But it will be difficult to get voter approval for the borrowing, Councilmember Jollota said, adding that part of the problem is that people can't see or touch what they would be voting to spend money on. "Traditionally, we don't even pass park bond issues," she said. Spur Local Reaction The public safety community and local government advocates were upset this summer by congressional and administration proposals to auction spectrum space to pay for tax credits and cuts before public safety needs were addressed. Shortly before leaving the Senate to campaign for president, Bob Dole proposed repealing some federal gas taxes and making up the revenue by auctioning some spectrum space. President Clinton proposed selling spectrum as a way to pay for a tax deduction for some college expenses. Federal law requires that revenue reductions, such as tax cuts, must be balanced either by reducing spending or increasing revenue elsewhere. The National League of Cities and International Association of Chiefs of Police protested the proposals. "We are concerned that the proposed mechanism could interfere with emergency and public safety communications and endanger the lives and safety of citizens," wrote NLC President and Columbus, Ohio, Mayor Greg Lashutka, in a letter to Congress members. Queries as to why spectrum auctions were chosen by Clinton and Dole to pay for tax reductions were not fully answered. Lawrence Haas, associate director for communications at the Office of Management and Budget, said that spectrum auctions were picked because "the president thought that this was the best option, and that is why he chose it." Haas would not elaborate on what Clinton's other choices were in paying for the tax credits. The Dole presidential campaign referred Government Technology to Sen. 
Trent Lott, who was elected Senate majority leader when Dole resigned. Lott's office did not respond to messages asking why spectrum auctions were chosen to pay for the gas tax repeal, and several calls to the office of Rep. Andrea Seastrand, who authored the House version of the bill, were not returned.
Highlighting helps users identify the item that they're acting on. The highlight on a component behaves differently depending on the component and the context. If you use BlackBerry UI components, highlighting is built into the component.

Best practices for highlighting custom components

For binary interactions, highlight the component when a user touches it. Remove the highlight when the user moves their finger off the control. The highlight should return when the control is touched again, unless the control scrolls with the view. If users scroll through a list or view (for example, a grid view), don't highlight individual items. If an item requires continuous interaction (for example, a slider), highlight the item until the user releases their finger. In this case, you might need to lock other items that allow scrolling, such as lists. Don't let users highlight items they can't act on. Disable components by dimming them or remove them from the screen.

Highlighting in action: Touch and hold highlight

Users can touch and hold list and grid items to bring up context menus. To give the user a visual cue that the context menu will be triggered, the item cycles through a three-step highlight progression.
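As a rough sketch of the binary-interaction rule above (framework-agnostic Python, not the actual BlackBerry API), the highlight simply tracks whether the touch point is inside the control:

```python
# Toy model of the rule: highlight on touch-down, clear when the finger
# leaves the control, re-highlight when it is touched again.
class Control:
    def __init__(self, name: str):
        self.name = name
        self.highlighted = False

    def on_touch(self, finger_inside: bool) -> None:
        self.highlighted = finger_inside
        print(f"{self.name}: {'highlighted' if finger_inside else 'normal'}")

button = Control("submit")
button.on_touch(True)   # finger down on the control
button.on_touch(False)  # finger slides off: highlight is removed
button.on_touch(True)   # touched again: highlight returns
```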
Article adapted from: "Putting Out the Fire," TechBeat, Fall 1998. Reprinted with permission. At times it looked like Utica, N.Y., was going up in smoke. Its arson rate was twice the national average and three times the state average. Utica's arson problems could be traced to several sources. The city had lost more than 30 percent of its population due to the closing of Griffiss Air Force Base and defense-related businesses. With the local economy spiraling downward and home sales plummeting, some property owners started burning their homes for the insurance money. Others just boarded them up and walked away. These areas became prey to drug dealers who often burned property used by competitors to take over additional turf. At the same time, New York City cracked down on criminal activities, significantly lowering its crime rate but sending many criminals scurrying for new and more lucrative areas. In addition, profiteers bought abandoned houses at fire-sale prices, insured them for $100,000, and torched them. At the height of the problem, Utica firefighters battled two or three blazes a night, with 45 percent of all structure fires ruled arson. The national average for arson case closures was 15 percent; Utica, a town of 65,000 people living in nine square miles, only closed 2 percent. The inner city bore the brunt of the arson-related crimes, but it was where hope was born. With $10,000 from the Federal Emergency Management Agency (FEMA), Utica, several surrounding local agencies and federal agencies formed the Utica Arson Strike Force in April 1997. From each participating agency, the strike force tapped experts in arson investigation and housed them in an abandoned firehouse in the city's most fire-ravaged section. Utica was designated the fourth pilot city in FEMA's National Arson Prevention Initiative. FEMA asked the National Institute of Justice's (NIJ) National Law Enforcement and Corrections Technology Center (NLECTC) to assess and provide the team's technology requirements. "They needed a digital camera, a color scanner, printers, and funding to build a custom database, which we provided," says John Ritz, director of NLECTC-Northeast. "They also needed a local area network, which we designed, built, and implemented. This network gives them the capability to send and receive information with other agencies." Through the U.S. Air Force's Law Enforcement Analysis Facility, NLECTC-Northeast also cleaned up audio tapes taken from body wires and enhanced the quality of surveillance audio and videotapes. "The actual number of dollars invested has not been that much," Ritz says. "The task force has substantial manpower and expertise in every area of arson investigation. We provided the technology that supports what they do. With the digital camera, they can develop high-quality investigative documents, which increases their conviction rate. It also lets them e-mail suspect photos to other agencies, which has helped them arrest arsonists in New York City, North Carolina, Nevada, and Florida." The strike force consists of a commander, a deputy commander, a technical-resource coordinator, an operations officer, three fire marshals, an arson-detection dog and handler, a forensic technician, a special agent from the Bureau of Alcohol, Tobacco and Firearms (ATF), an assistant district attorney on call 24 hours a day, and six investigators.
Participating agencies include the Utica Police Department (UPD), Oneida County Sheriff's Department, Utica Fire Department (UFD), New York State Office of Fire Prevention and Control, and New York State Police. Part-time members come from the U.S. Marshals Service, the New York State Insurance Fraud Bureau, and NLECTC-Northeast. The strike force also took advantage of cooperation, donations and funding from the community: A local communications company provided intercoms for the strike force's offices; a cellular phone company supplied phones to investigators for free for six months; local businesses, agencies and colleges donated office furniture, computers and supplies; area insurance companies donated money and camera equipment; the UFD donated pagers with group paging capabilities; the ATF provided a radio base station, portable radios, surveillance equipment and a van; the sheriff's department provided two computers and three vehicles seized from drug investigations; and the U.S. Marshals Service provided prisoner-transportation services. In addition to accessing technologies and expertise, the arson strike force changed the basic structure of the typical arson investigation. Instead of waiting for the fire marshal to investigate and rule on a particular blaze, the strike force assumed every fire was arson and treated the area as a crime scene. Investigators and fire marshals rolled alongside the fire department at the moment the fire alarm sounded. They watched how the structure burned, canvassed the crowd for suspects and witnesses, conducted on-scene interviews, and took photographs of the crowd and fire scene. If the fire marshal decided it was arson after the fire was out, the strike force simply continued its investigation. Arson has dropped 50 percent, closure rates stand at 52 percent, and the conviction rate is 100 percent, according to UPD Capt. Claude DeMetri, who heads the strike force. Not only did the strike force investigate current fires, DeMetri says, it also opened more than 120 old cases dating back to 1991. Nineteen of those led to arrests. The strike force has been such a success that it is expanding to cover the entire county and is being used as a model for an area drug task force. Even more important to the city's economic welfare is that downtown business owners are starting to rebuild, remodel and restore their properties. Utica is truly rising from the ashes. For more information about the Utica Arson Strike Force and its operations, contact John Ritz or Dave Hallett at NLECTC-Northeast, 888/338-0584; or Capt. Claude DeMetri, 315/732-7260. You can also access the strike force's Web site. TechBeat is the flagship publication of the National Law Enforcement and Corrections Technology Center system. Contact Rick Neimiller, managing editor, by calling 800/248-2742 or via email. Writer and contributing editor: Lois Pilant.
On Wed, Jun 4, 2008 at 8:48 PM, Shannon ODonnell wrote:
> I keep thinking it's called the route table, but I don't think that's it. More along the lines of an SNMP table or something like that maybe?

The routing table contains routing instructions for IP packets: on a PC this usually consists of the local subnet and a default gateway. This information is usually received via DHCP (Ethernet) or IPCP (PPP). On routers or firewalls, things can get more complicated; there are even protocols for automatic distribution of routing table entries (BGP, OSPF, RIP).
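As a rough illustration of how such a table is consulted (a longest-prefix-match lookup, sketched in Python with invented addresses; real stacks do this inside the kernel):

```python
import ipaddress

# A minimal routing table: (destination network, gateway). The /0 entry
# is the default route, matching anything nothing more specific covers.
ROUTES = [
    (ipaddress.ip_network("192.168.1.0/24"), "on-link"),
    (ipaddress.ip_network("0.0.0.0/0"), "192.168.1.1"),
]

def lookup(dest: str) -> str:
    addr = ipaddress.ip_address(dest)
    matches = [(net, gw) for net, gw in ROUTES if addr in net]
    net, gw = max(matches, key=lambda m: m[0].prefixlen)  # longest prefix wins
    return f"{dest} -> {net} via {gw}"

print(lookup("192.168.1.42"))  # same subnet: delivered on-link
print(lookup("8.8.8.8"))       # anything else: sent to the default gateway
```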
All of these items are critical factors contributing to the TCP protocol's overall success. The problems begin, however, when congestion controls from the outer TCP protocol interfere with those of the inner one, and vice versa. TCP divides a data stream into segments which are sent as individual Internet Protocol (IP) datagrams. Each segment carries a sequence number that numbers bytes within the data stream, along with an acknowledgement number indicating to the other side what sequence number was last received. TCP uses adaptive timeouts to decide when a re-send should occur. This design can backfire when stacking TCP connections, though, because a slower outer connection can cause the upper layer to queue up more retransmissions than the lower layer is able to process. This type of network slowdown is known as the "TCP meltdown" problem. Surprisingly, this is not a design flaw: the idea of running TCP within itself had not even occurred to the protocol designers at the time, which is why this dilemma was not originally addressed. Fortunately, some computer scientists have been able to demonstrate situations where a stacked TCP arrangement actually improves performance. In any case, Virtual Private Networking products like OpenVPN have been designed to compensate for the problems that may occur with tunneling TCP within TCP. Unlike SSTP, OpenVPN is able to run over UDP to handle such times when a stacked TCP connection would actually degrade performance. Although SSTP may be suitable in some situations, it is severely limited by only being compatible with the latest versions of the Windows operating system. Microsoft has not announced any plans to port it to previous Windows OS versions, or any other OS for that matter.
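To make the feedback loop concrete, here is a deliberately crude Python sketch (all timings invented, no real networking) of why the inner connection's retransmissions pile up behind a slow outer tunnel:

```python
def meltdown_sketch(outer_delay: float = 1.0, inner_rto: float = 0.25,
                    steps: int = 5) -> None:
    """The inner TCP's timeout fires several times per outer-tunnel delivery,
    so duplicate segments accumulate in the outer connection's queue."""
    queue = 0
    for t in range(steps):
        queue += int(outer_delay / inner_rto)  # inner retransmissions queued
        queue -= 1                             # outer tunnel delivers only one
        print(f"t={t}: {queue} duplicate segments waiting in the tunnel")

meltdown_sketch()
```

Each step the queue grows, which is the meltdown: the harder the inner layer tries, the more backlogged the outer layer becomes.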
Definition: Efficiently deciding whether a temporal logic formula is satisfied in a finite state machine model. See also Kripke structure, BDD. Note: Model checking is increasingly used in the formal verification of hardware and software. The decision process often uses some form of binary decision diagram (BDD). If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 17 December 2004. Cite this as: Sandeep Kumar Shukla, "model checking", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/modelcheckng.html
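To ground the definition, here is a minimal Python sketch of one model-checking question, the CTL property EF p ("some reachable state satisfies p"), over a hand-built Kripke structure; production model checkers answer such queries symbolically, e.g. with BDDs:

```python
from collections import deque

# Tiny Kripke structure, invented for illustration: transitions between
# states plus a labeling of which atomic propositions hold in each state.
TRANSITIONS = {"s0": ["s1", "s2"], "s1": ["s0"], "s2": ["s2"]}
LABELS = {"s0": set(), "s1": {"p"}, "s2": {"q"}}

def ef(prop: str, start: str) -> bool:
    """EF prop: does some path from `start` reach a state labeled `prop`?"""
    seen, frontier = set(), deque([start])
    while frontier:
        state = frontier.popleft()
        if prop in LABELS[state]:
            return True
        if state not in seen:
            seen.add(state)
            frontier.extend(TRANSITIONS[state])
    return False

print(ef("p", "s0"))  # True: s0 -> s1 satisfies p
print(ef("p", "s2"))  # False: s2 only loops on itself
```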
A joint project involving five space agencies (NASA, Roscosmos of Russia, the Japan Aerospace Exploration Agency, the Canadian Space Agency and the European Space Agency), the International Space Station (ISS) is the largest artificial object orbiting the earth. Traveling at approximately 17,211 miles per hour and at an altitude of about 220 miles, the ISS takes about 92 minutes to complete one orbit. Six people live and work inside the ISS, which is essentially an "orbiting laboratory" used to conduct research that's not possible on earth. The space station currently operates more than 100 commercial workstations and laptops for a variety of functions, including engineering, testing and operations. The computers are replaced about every six years.

Controlling costs without jeopardizing mission-critical operations

Stephen Hunter, resources manager for the ISS, was tasked with finding ways to control IT costs without any loss of functionality. Hunter found a solution: deploying HP ZBook 15 and Z240 mobile workstations. In an interview with Computer Dealer News (CDN), Hunter explained that he made the decision to standardize on commercial, off-the-shelf computing solutions and reached out to leading vendors like HP to deliver the level of performance and reliability NASA required. The Z series of mobile workstations met these demanding requirements. "I did not want a one-off machine," Hunter told CDN. "I wanted one that could rise to the challenge and make it easier to use in the lab or for other sciences…I can't send a technician 240 nautical miles to the space station. It would cost too much." With advanced capabilities like 3D graphics, powerful processors and massive memory capacity, ZBook 15 Mobile Workstations and Z420 Desktop Workstations are currently being deployed on the ISS as well as at NASA Mission Control in Houston. They're being used across all NASA missions to maintain mission-critical functions such as life support, command and control, maintenance and operations. HP Z Workstations also support science experiments, research studies, and the physical and psychological health of astronauts.

New frontiers to conquer and hurdles to overcome

One of NASA's new goals is to have an astronaut spend more than a year on the ISS. Currently, the average stay is between three and six months. This presents many challenges, however. For example, the effects of radiation in space can be harmful to IT components on board. Moreover, the effects of long-term space habitation can cause a person's retina to detach. "The crew becomes an experiment on its own," noted Hunter in the CDN interview. To help address these effects, ZBook Workstations will be loaded with Fundoscope, an app that checks the retina, and Vision Acuity Pro, which monitors the astronauts' vision. Another new ISS initiative is the utilization of Microsoft HoloLens technology, which is designed to give astronauts high-definition holograms for an augmented view of space environments. Intended largely for training purposes, the HoloLens can simulate a 3D walk on Mars. (Currently, all that is available are 2D raw images in black and white.) HoloLens technology will offer astronauts a closer glimpse into what they can expect on future expeditions. With HP Z Workstations at the helm, NASA and the ISS are blazing new trails in space exploration. Stay tuned for more exciting developments.
A multimodel database management system (DBMS) is a database system that supports more than one data model within a single integrated store. The data is stored using a variety of logical models or views, typically a flexible combination of key-value pairs, documents and graphs. ArangoDB is a flexible multimodel database that uses a combination of documents, key-value pairs and graphs to store data. It is easy to use thanks to a graphical interface, and it is licensed under the Apache License. It is a fast database that takes up less space than conventional NoSQL databases. The Alchemy database is a hybrid database that combines an RDBMS and a NoSQL datastore. It can store unstructured and structured data, as the Alchemy database does not place limits on tables, columns or indexes. It operates on commodity hardware and is easy to install.
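As a toy illustration of the multimodel idea (invented data, no real DBMS API), the same in-memory store can serve key-value, document, and graph views:

```python
store = {}  # key -> document (a plain dict)

def put(key, doc):                 # key-value write
    store[key] = doc

def get(key):                      # key-value / document read
    return store[key]

def neighbors(key, edge):          # graph view: follow stored edge lists
    return [store[k] for k in store[key].get(edge, [])]

put("alice", {"name": "Alice", "knows": ["bob"]})
put("bob", {"name": "Bob", "knows": []})

print(get("alice")["name"])                              # document access by key
print([d["name"] for d in neighbors("alice", "knows")])  # graph traversal
```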
Lister A.M. (Natural History Museum, London; Climate Change Research Group), Fenberg P.B. (Climate Change Research Group), Glover A.G. (Climate Change Research Group) and 10 more authors. Trends in Ecology and Evolution, 2011. In the otherwise excellent special issue of Trends in Ecology and Evolution on long-term ecological research (TREE 25(10), 2010), none of the contributors mentioned the importance of natural history collections (NHCs) as sources of data that can strongly complement past and ongoing survey data. Whereas very few field surveys have operated for more than a few decades, NHCs, conserved in museums and other institutions, comprise samples of the Earth's biota typically extending back well into the nineteenth century and, in some cases, before this time. They therefore span the period of accelerated anthropogenic habitat destruction, climate warming and ocean acidification, in many cases reflecting baseline conditions before the major impact of these factors. © 2010 Elsevier Ltd.
Kaspersky Lab announces that almost 32 million vulnerable applications and files were detected on users' computers in Q3 of 2010. The ten most widespread vulnerabilities even included some for which vendors had distributed patches back in 2007-2009. Cybercriminals often use flaws in program code to gain access to data and resources on targeted computers. Malicious programs that are designed especially to take advantage of these vulnerabilities are called exploits, and they are increasingly widespread. They regularly dominate the Monthly Malware Statistics ratings compiled by Kaspersky Lab's experts. The notorious Stuxnet worm, which exploits not one but four zero-day vulnerabilities in Windows, is yet another example of just how popular these programs are with cybercriminals. "Previously, cybercriminals mainly targeted vulnerabilities in the MS Windows family of operating systems. However, over the last few years they have shifted their focus to include Adobe products such as Flash Player and Adobe Reader," commented Vyacheslav Zakorzhevsky, Senior Virus Analyst at Kaspersky Lab and author of the article 'Cybercrime Raiders', devoted to the problem of exploits. "As a result, a new product called Adobe Updater was released to perform a function similar to that of Windows Update: the automatic download and installation of patches for programs installed on users' computers. At present, Sun, whose Java engine also has vulnerabilities targeted by exploits, is also trying to resolve its update situation." Unfortunately, many users do not regularly update the software on their computers. This explains why exploits for patched vulnerabilities are still amongst the most widespread malicious programs detected on users' computers. In his article, Vyacheslav Zakorzhevsky strongly recommends that users do the following to avoid infections via vulnerable software: regularly check for software updates; install them as soon as they are released, manually if necessary; and do not click on unknown links or open emails that appear in your inbox if you do not know and trust the sender. In other words, follow the basic rules of computer security. Using browsers such as Google Chrome, Mozilla Firefox and Internet Explorer that come with inbuilt filters that block phishing and other malicious websites will also help reduce the risk of being infected. The full version of the article 'Cybercrime Raiders' is available at www.securelist.com/en.
For most malware, performing its malicious task(s) is the primary goal, and a close second is staying unnoticed on the system for as long as possible. As developers of security software constantly improve detection methods, malware creators are always trying to stay one step ahead of their efforts. Take, for example, the Poweliks malware recently discovered and analyzed by G Data researchers. Poweliks is a trojan whose main objective is to download additional malware onto the system. So far, that is nothing new. "When security researchers talk about malware, they usually refer to files stored on a computer system, which intends to damage a device or steal sensitive data from it. Those files can be scanned by AV engines and can be handled in a classic way," says researcher Paul Rascagneres. But this malware is capable of surviving on the infected system without creating a file; all its tasks are performed within the memory. "To prevent attacks like this, AV solutions have to either catch the file (the initial Word document) before it is executed (if there is one), preferably before it reached the customer's email inbox. Or, as a next line of defense, they need to detect the software exploit after the file's execution, or, as a last step, in-registry surveillance has to detect unusual behavior, block the corresponding processes and alert the user." As we've said, Poweliks doesn't create a file, but it does create an encoded autostart registry key that ensures the malicious activities survive system reboots. And here, again, the malware authors have found a way for this key to keep a low profile and resist analysis attempts: the key's name is not a valid ASCII string, which hides it from system tools and prevents it from being opened. "This trick prevents a lot of tools from processing this malicious entry at all and it could generate a lot of trouble for incident response teams during the analysis. The mechanism can be used to start any program on the infected system and this makes it very powerful," commented Rascagneres.
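As a gesture at detection (a Windows-only Python sketch using the standard winreg module; note that Poweliks' actual key is crafted to resist normal enumeration, so this only illustrates the idea of flagging non-ASCII autostart names, not a working Poweliks detector):

```python
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

def flag_non_ascii_values(root=winreg.HKEY_CURRENT_USER, path=RUN_KEY):
    """Print autostart value names that contain non-ASCII characters."""
    with winreg.OpenKey(root, path) as key:
        _, value_count, _ = winreg.QueryInfoKey(key)
        for i in range(value_count):
            name, _, _ = winreg.EnumValue(key, i)
            if any(ord(ch) > 127 for ch in name):
                print(f"suspicious autostart value name: {name!r}")

if __name__ == "__main__":
    flag_non_ascii_values()
```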
The Riken Advanced Institute for Computational Science in Kobe, already home to Japan's largest computer system, said last week it will lead Japan's exascale program, with "successful development of the exascale supercomputer scheduled for completion by 2020." "We will devote our energy to this project," said Kimihiko Hirao, director of the Riken institute, in a statement. An exascale system "will be a great boon for science and technology, as well as industry," he said. The U.S., meanwhile, is aiming for an "early 2020s" delivery of an exascale system, a Department of Energy official said during a presentation that coincided with the annual supercomputing conference, SC13, in November. In December, Congress approved a fiscal 2014 defense budget bill that requires development of an exascale system within a 10-year period, or by 2024. This is an improvement over an earlier Senate defense funding bill that included a "20 year plan." The Europeans are developing an ARM-based exascale system and have set a delivery goal of 2020. That goal, though, doesn't have the stake-in-the-ground clarity of Japan's. China, which presently operates the world's fastest supercomputer, is believed to be targeting the 2018-2020 timeframe for exascale delivery, but has not yet made an official announcement. An exascale system is capable of a quintillion, or a million trillion, floating point operations per second. It is approximately 1,000 times faster than a one-petaflop system. The fastest systems in use today are well under 50 petaflops. Exascale development may be a race, but no one has yet defined what will constitute a winner. Today, the fastest supercomputers are determined by their ranking on the Top 500 list. But if a nation deploys an exascale system that uses 100 MW of power, and another nation deploys one two years later with technology that uses a third as much power, which nation is better off? It now costs about $1 million a year to run a 1 megawatt system, and current supercomputers are already in the range of 10 megawatts. There are numerous technical challenges in reducing those power requirements. For instance, memory is a major challenge for exascale developers. DRAM memory is too slow and expensive to support exascale, but scientists aren't sure yet what will replace it. Along with the race to deliver exascale, another technology competition is taking shape: quantum computing. The U.K. last month said it is investing $444 million in quantum computing over the next five years. The money will fund a network of quantum computing centres. "Science is a personal priority of mine," said U.K. Chancellor George Osborne in a speech last month outlining the quantum computing effort. Quantum computing uses subatomic particles and has the potential to leapfrog all other forms of computing. Today, computation is based on bits that can be either 0 or 1, with calculations done one after the other. But a quantum bit can hold both states, 0 and 1, simultaneously, increasing processing power exponentially for certain problems. In the U.S., quantum computing work is underway at federal research facilities. NASA's Ames Research Center has two 512-qubit D-Wave Two quantum computers. In November, it announced that it was working with Google and others to create the Quantum Artificial Intelligence Laboratory.
NASA Ames Research Center is exploring the technology requirements to develop what it calls a state-of-the-art device that could detect health-related biomarkers of astronauts in space. The agency has issued a Request for Information (RFI), seeking detailed information regarding compact technologies currently available that can analyze health-related biomarkers in breath, saliva, [skin], blood, and urine using a single compact device. Such a device sounds like the legendary fictional medical Tricorder of Star Trek fame. From NASA: "The specific biomarkers to be detected are currently under evaluation by NASA, but include a broad range of molecules and cells associated with health status, impact of the space environment on individual astronauts, and prediction of future health events. Analyses and analytes of interest include cell profiles, proteins and peptides, and small organic molecules." NASA said that, for existing technology, it wants to know: Which sample types are currently analyzable using the instrument? Which biomarkers are currently analyzable using this instrument? What are the weight, dimensions and power requirements of the device? NASA said it is seeking responses regarding fully functioning devices (fully integrated systems) that are currently available, developed to at least the advanced prototype stage. The space agency said it isn't looking for information about conceptual designs or individual component technologies that have not yet been integrated into a single device. The space agency may want to look into an ongoing X Prize Foundation challenge that is offering a $10 million prize to the company that can build a mobile platform that accurately diagnoses 15 diseases across 30 consumers in three days. The idea is to use artificial intelligence and wireless sensing to make medical diagnoses independent of a physician or healthcare provider, X Prize stated. Health metrics the device will need to measure could include such elements as blood pressure, respiratory rate, and temperature. Ultimately, this tool will collect large volumes of data from ongoing measurement of health and give consumers a way to see the state of their health from a mobile device, the group said. A few of the other requirements for competitors, from the X Prize website on the Tricorder Challenge, include: "Given that each team will take its own approach to design and functionality, the device's physical appearance and functionality may vary immensely from team to team. Indeed, the only stated limit on form is that the mass of its components together must be no greater than five pounds. But because an important part of the qualifying round will be evaluating consumer experience in using it, the limitations set by this competition will force teams to make choices. Teams will have to consider tradeoffs amongst weight, functionality, power requirements, battery life, screen resolution, AI engine location, diagnosis capability, end consumer cost, and so on."
Illuminato X Machina

It's a new way of organizing the traditional von Neumann computer, called robust physical computing. Rather than subdividing the system into its functional components, robust physical computing breaks the system down into a network of mini-computer units. These units, or 'cells', can combine to form a modular, scalable computer capable of adapting its performance to the task at hand. In less than four square inches, each Illuminato X Machina board contains the elements of a fully functional computer. A single Illuminato X Machina module, or 'cell', is equipped with a 72 MHz ARM-based microprocessor, a dedicated EEPROM chip for data storage, and RAM. LEDs serve as a simple output mechanism, and 14 I/O pins line each of its four edges for maximal node-to-node connectivity. Each IXM board is blacked out with gold vias and surface-mount components for a slim profile. The surface is lined with multiple symmetrical sets of RGB LEDs, which serve as status lights or a desktop light show. It's smart enough to know if it's plugged into a neighbor right side up, upside down or sideways, and dynamically establishes the correct power and signal wires to exchange power and information with its neighbors. It truly is a complex adaptive information and power system. In this sense, it is also robust. If a cell in the motherboard grid detects a faulty neighbor, it can attempt to reprogram and reboot that neighbor (because the distinction between the system's firmware, software, and hardware is intentionally ambiguous; they are one). If this fails, each cell can then elect to disconnect power to the faulty neighbor and "terminate" it from the network, much as a biological cell would do if it detected cancerous growth in its neighbor. Like living organisms, IXM cells are "social". They function best when interacting with other groups of cells, autonomously programming, reprogramming, processing and communicating with each other. They can be attached to the computer via USB using a special cable or connector board, and a grid can accept as many USB inputs as it has free edges. Fundamentally, it's about making computer architecture accessible to people besides Intel and AMD: a do-it-yourself, open source physical computer, where the computer itself can adapt and be adapted in plug-and-play fashion, no matter what the project may be. And the beauty of open source means that today, it's a square board with an ARM processor; tomorrow, it may be an octagon with a processor that doesn't even exist yet. Welcome to the future of computing.

Item ID: IXM-C-RevA-20090623

Technical Elements per Cell
- Weight: 24 g
- L x W x H: 1.87" x 1.87" x 0.25"
- General Purpose I/O: 16 pins
- Total I/O: 24 pins
- Processor Type: 32-bit ARM
- Processor Name: LPC2368
- Processor Speed: 72 MHz @ 64 Dhrystone MIPS
- Processor UART: 4 hardware UARTs
- EEPROM: IC SRL EEPROM; 128 KB

Living Elements per Cell
- Senses: Outside Voltage Sense, Inside Voltage Sense
- Reflexes: Neighbor Shutdown, 4 Blue LEDs, 1 RGB LED
- Mode of Interaction: Single Switch Input
- Power Management: Output Power, Real-time Frequency Shifting

Resources and References

The study of Robust Physical Computation uses the IXM processing cells and related hardware to research robust systems.
- Frequently Asked Questions
- Illuminato Labs downloads
- "Quick link to some tutorial sketches in the reference pages"
- Minimal Linux Install, for hints about getting running on the Linux command line.
- The programmer's reference pages, for sample sketches as well as API reference material.
- Robust Physical Computation individual project pages using the IXM.

Programming the Illuminato X Machina: a demo with two cells. Illuminato X Machina Arrays in Action: a longer demo with two arrays of cells that shows more of the autonomous, cellular functionality of the system.
As the IT industry grapples with mounting e-waste, how can your company do all it can to reduce its environmental footprint? Storing your tape libraries offsite in a climate-controlled vault is far more resource-efficient than using your own office space. The basic principle behind energy-neutral technology is simple: Don’t take more than you make. An energy-neutral home, for instance, is full of enough energy-creating features (such as solar panels) to offset its energy consumption. It’s a potentially difficult goal to achieve, especially in the business world. Yet for both environmental and economic reasons, many technology-related companies are looking for ways to cut energy use. And according to GreenBiz.com, the IT industry is already doing a fairly good job. Estimates indicate that data centers, PCs and networks consume between 1.5 and 3 percent of the world’s energy, and that’s more than offset by the efficiencies technology creates (using videoconferencing rather than flying to Tokyo, for example). What more can individual companies do to reduce their energy consumption and move toward an energy-neutral state in which they save as much as they consume? Consider the following steps. Step 1: Grow—or shrink—your office. It’s simple: A smaller office consumes less energy. So how can you downsize? One way is to move your archived data offsite and let a trusted information management service provider take care of it. Think about your tape archives, for example. When you opt to store them offsite, they’ll be housed in an optimized climate-controlled environment around the clock. That’s far more energy-efficient for your office than having tapes share valuable square footage with your employees. When you move archives offsite, it may also free up office space you can use to grow your business. Your company may even consider downsizing. Step 2: Get to know the power ratio. If your business runs its own data center, you should know its Power Usage Effectiveness (PUE) ratio. This measurement reveals how much of the data center’s power is used by its computers versus other overhead. A perfect ratio of 1.0 is theoretically impossible, since some energy will always be lost. But data managers should try to get the ratio as close to 1.0 as possible. The bigger your data center, the more important its PUE ratio becomes. Step 3: Energize old media. Here’s a fun fact: Decreasing your paper output by one ton saves 10,785 kWh of electricity. That’s enough to light 4,500 100-watt light bulbs for 24 hours. But paper isn’t the only media that’s prime for energy-efficient disposal. Consider CDs, DVDs, backup tapes, microfilm, photos, X-rays and more; a data destruction service provider is equipped to destroy all of these. Iron Mountain handles 1,500 tons of X-ray film annually, from which chemicals are extracted for reuse in manufacturing. Iron Mountain also sends materials to energy-from-waste facilities, which in 2010 turned more than 4,000 tons of material into energy, saving more than 8,250 barrels of oil and generating enough power to supply 404 homes for a year. Step 4: Send e-waste to its proper resting place. According to the EPA, only 25 percent of electronic waste, including old computers, monitors and printers, is recycled. Putting all that old hardware into landfills creates an enormous environmental burden. You can help by taking care to properly recycle your obsolete equipment. As the EPA notes, recycling one million laptops saves the energy equivalent of the electricity used by 3,657 U.S. 
homes in a year. Besides, you may have no choice. Since 2004, more than 93 bills concerning end-of-life electronics and landfill disposal bans have been introduced at the state and federal levels. Finding a recycling partner to help you manage your old equipment may become essential. On the bright side, clearing out old assets is another way to save space and labor costs and become more efficient. Partner to Reduce Resources Remember that energy neutrality isn't just about saving electricity. It involves saving resources and preventing the need for the costly pursuit of more raw materials to make more new products. A trusted data management and secure media destruction partner can help you play a part in that effort. You'll score big environmental points while helping your company's bottom line. What Is E-Waste? Though wordsmiths have yet to define "e-waste," one of the organizations dedicated to fighting it has taken on the challenge. The nonprofit e-Stewards Initiative defines e-waste as any piece of electrical or electronic equipment or gadgetry that contains potentially toxic materials, including (but not limited to) cell phones, laptops, and televisions. When burned, the already poisonous mercury, lead, cadmium, arsenic, beryllium and brominated flame retardants often found in e-waste generate even more toxins. Eventual health perils to anyone inhaling smoke containing these compounds can include cancer, reproductive disorders, endocrine disruption and many other health problems. For these reasons, it's easy to understand why dumping, burning or exporting e-waste is not a long-term sustainable solution. To win the war against e-waste, U.S. businesses must explore ways to improve upon our current 11 to 14 percent recycling rate for electronic equipment. Iron Mountain Suggests: Work with a Recycling Partner Iron Mountain delivers comprehensive e-waste recycling and asset management services through its partnership with an e-Stewards Electronic Recycler. A key advantage of partnering with an e-Stewards Electronic Recycler is a certified, auditable process which ensures that your e-waste is not exported, landfilled or incinerated—regardless of the kind of electrical or electronic equipment you've discarded. Do you have questions about data backup and recovery? Read additional Knowledge Center stories on this subject or contact Iron Mountain's Data Backup and Recovery team. You'll be connected with a knowledgeable product and services specialist who can address your specific challenges.
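Returning to the PUE ratio from Step 2: it is simply total facility energy divided by IT equipment energy. A minimal sketch with invented numbers:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: a value of 1.0 would mean zero overhead."""
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,500 kWh to deliver 1,000 kWh of IT load:
print(round(pue(1500.0, 1000.0), 2))  # 1.5 -> a third of the power is overhead
```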
It seems I'm reading this question more and more: "I'm an Administrator on a Windows Vista box, but I can't run program X with administrator rights". I'll try to explain this quickly and simply, omitting a lot of details (if there is enough interest, I'll make a follow-up post). The cause of this program's behavior is simple: restricted tokens. A token is a Windows kernel object that represents a user with all his privileges and group memberships. The token is created when a user logs on, and is associated with all programs started by that user (i.e. processes). The Windows kernel uses the token to decide if the process is granted access to the securable objects it tries to access. A restricted token is a special token: it's a token that represents only a part of what a user is allowed to do. Some privileges and permissions have been removed or denied (restricted). Restricted tokens have existed since Windows 2000, but as a user, you weren't really confronted with them until Windows Vista. Since Windows Vista, restricted tokens are used to run most user programs, instead of the normal (unrestricted) tokens. In Windows Vista, when an administrator logs on, two tokens are created: the normal token (with all administrative rights) and a restricted token. For security reasons, most programs are started with the restricted token. And that's why some programs don't run as you expect: they need more privileges and permissions than the restricted token gives them. UAC decides if a program is started with the unrestricted token or the restricted token. There are several rules that guide UAC in its decision process between the two tokens; the application manifest is one source of information used by the UAC rules. The manifest is an XML file stored as a resource inside a PE file, and it can contain information about the execution level the program needs to run correctly. If an application needs administrative rights, the developer should add a requireAdministrator value to the manifest file, so that UAC uses the unrestricted token. If your application is missing this manifest, chances are that UAC will make the wrong decision and run the program with the wrong token. As a user, you can also instruct UAC to use the unrestricted token: right-click the program you want to start and select "Run as administrator". If you often need to run the same program with administrative rights and UAC systematically makes the wrong decision about the token to use, create a shortcut to the program and check the "Run as administrator" toggle in the Advanced tab. Another way to achieve this is to add (or update) a manifest to the executable file with a resource editor.
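If you want to check from code which token your process ended up with, a small Python sketch (Windows only; IsUserAnAdmin is a real but deprecated shell32 call) looks like this:

```python
import ctypes

def running_elevated() -> bool:
    """True if this process runs with the unrestricted (admin) token."""
    try:
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    except (AttributeError, OSError):
        return False  # not on Windows, or the call is unavailable

if __name__ == "__main__":
    if running_elevated():
        print("Unrestricted token: administrative rights available.")
    else:
        print("Restricted token: UAC elevation would be required.")
```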
VPL (Visual Programming Language)

Included with the Robotics Studio is the Visual Programming Language. This is both a language and an IDE for visually programming robots: not by writing code, but by graphically connecting dataflow pieces on a diagram. The VPL simplifies the process of developing robotics applications. (However, advanced programmers typically prefer not to use it, instead opting to write code manually, such as in C#, C++ or even Python. Still, Microsoft is quick to point out that while the VPL is targeted at beginners, it can be used by advanced programmers as well.) However, below the panel of basic activities is another panel containing more advanced services that are preconfigured for common tasks. I don't have space to list them all here (and you probably wouldn't want to read the whole list, anyway), but there are interesting services, such as Game Controller, which lets developers read a game controller via a DirectInput interface; Generic Motor, which, as its name implies, is a generic interface to a motor; and Generic Sonar, which lets programmers interface to a sonar device. Other services represent devices such as batteries, articulated arms, differential drives and Web cams. However, there are also interfaces for data processing, such as message logging and access to SQL data stores.

Programming Robotics Studio with Visual Studio

The fundamental approach to programming a robot with Robotics Studio is piecing together various services so they can all work together. These services can operate independently and concurrently, just as most advanced robots need. Developers use pre-existing services that ship with Robotics Studio along with their own custom-made services. Programmers use services that send messages to a robot to control its actuators, and they use other services to receive messages from its sensors. Developers can have services that take input from a human controlling the device, such as through a dialog box on the computer screen or through a remote device such as a game controller. All of these pieces work together to easily create a robot controller. The Robotics Studio is actually a whole set of tools, but once involved in a project, users will likely be working within good old Visual Studio, writing their own code, such as in C#. The official Microsoft Robotics Studio site includes several tutorials and introductions to help get started. One tutorial is a video that includes a PowerPoint presentation explaining the basic steps of piecing together services to control a wheeled robot (specifically, one of the Lego robots that connects to the PC via Bluetooth). These services are quite simple, but they're representative of a more advanced project. The first service involves displaying a dialog box called a Direction dialog, which is just a box with five buttons on it: one for each direction (forward, back, left, and right), and one for stop. The next service might seem a bit trivial to a seasoned programmer, but it's nevertheless a required step: waiting for a button press on the Direction dialog. And then the following service sends the appropriate command to the robot. The Lego robot in the tutorial has a drive mechanism whereby two wheels are independently controlled and can move either forward or backward at different speeds. Using the Robotics Studio, developers can create a differential drive service that controls the wheels; the Robotics Studio includes a ready-made service called Lego NXT Drive specifically for this purpose.
Users can see this service in the VPL, but they don't need to use VPL: they can access the service from their C# code in Visual Studio. Programmers can then write the code to connect the different services together by creating instances of classes and then "partnering" their objects using a Partner attribute in the C# code. In no time they'll have the system up and running. In addition, it's the user's choice whether to use the VPL IDE or to code the robotics by hand using C#.

Conclusion

The Robotics Studio is surprisingly easy to use. Programmers can quickly piece together all the services necessary to control a robot and respond to its signals, with little, if any, programming. The software comes in two forms: a free Express edition and a Premium version. For many hobbyists, the free Express version should be sufficient. Senior Editor Jeff Cogswell can be reached at jeffrey.cogswell@ZiffDavisEnterprise.com.

When piecing together the various entities in the VPL, the display on the screen is similar to a workflow diagram and includes common programming constructs such as variables and if-statements. In the left panel of the VPL is a list of the basic activities, including those in the previous diagram as well as a few others, such as one for handling calculations.
Applications are generally built from many different components, some of which are intended for internal use by other components and should not be accessible to external users. In a secure environment, such internal components are hidden and blocked from external access. An insecure system, however, might allow remote access to some or all of the internal components. Attackers may take advantage of this in their attempts to attack the application or system. Internal modules used by developers can be divided into two groups. The first group comprises components that are called by the browser from within other pages. These are not truly "internal" to the application, as they are normally exposed to requests from outside. What makes them internal is the way the developer treats them: the developer assumes that since they are called only from within another page (and not directly by a link or a form), they are not exposed to attacks, and therefore present no real security risk. Yet this type of component does not really differ from a normal page, and can be vulnerable to attacks such as SQL injection, cross-site scripting and parameter tampering. The second group comprises components that are truly internal, and are called by other pages on the server side only. For example, instead of having all ASP pages access the database directly, the developer may build a Data Access Layer, implemented by another component, which all ASP pages access. This enables connectivity to the database to be handled in a single location, rather than in every page. Although using internal reusable components appears to be the right way to build an application, it may also be very dangerous. Many programmers allow these components to be accessed externally via the Internet, even though this is not required and not recommended. An attacker may access one of the main pages and cause it to send out an error indicating the name of the internal component, and then access the internal component directly. By doing so, the attacker may be able to execute attacks against that component. Internal components often do not include their own security checks. They are called by the main pages, and it is assumed that the main pages handle all security-related checks. Looking at the Data Access Layer in the example above, it is likely that there is no further need to handle access permissions for the database, as the pages that call it already check permissions. However, by accessing the Data Access Layer directly, the attacker may gain complete access to the database.
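One common mitigation is defense in depth: the internal component re-checks authorization itself instead of trusting every caller. A minimal Python sketch, with invented names and data:

```python
PERMISSIONS = {"alice": {"read"}, "bob": {"read", "write"}}

class DataAccessLayer:
    """Toy internal component that does not trust its callers."""

    def query(self, user: str, action: str, statement: str) -> str:
        # Re-verify the caller even though the front-end pages are
        # supposed to have checked permissions already.
        if action not in PERMISSIONS.get(user, set()):
            raise PermissionError(f"{user} may not {action}")
        return f"executing for {user}: {statement}"

dal = DataAccessLayer()
print(dal.query("bob", "write", "UPDATE accounts SET ..."))
try:
    dal.query("alice", "write", "UPDATE accounts SET ...")
except PermissionError as err:
    print("blocked:", err)
```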
I found the script in the Windows file system, and when I opened it, I could see the code below. It appears that the file was encoded, but not as one normal Base64 string... Staring at the file, I found some strings at the bottom which were not encoded... We know that Base64 uses a padding character, which is usually "=". In the picture above we can see a lot of "==" characters followed by "-". It is as if it were not a single encoded string, but multiple strings encoded one by one and separated by "-"... could this be possible? On the website www.base64decode.org we can decode Base64 strings. If we decode the Base64 string "Jw==" we can see it corresponds to the ASCII string " ' ". If, for example, we decode the Base64 string "DQ==" we can see it corresponds to a carriage return (0x0D)... OK, we know how to decode the script... Each Base64 string is separated by a "-" and corresponds to a single character. But how can we decode it quickly? The first thing I thought of was writing another script to decode the first one, but I chose to get there another way... If I used Notepad to replace the character "-" with a line break, I would have a document with one line for each Base64 string, like in the picture below. Now we can decode all the coded strings by just executing a Linux command. base64 -d script_to_decode.vbs We can see the entire script decoded, and now we can continue researching the malware's behaviour. Reading the code, I would say that this script is used to connect to the command and control server in order to download instructions and upload data from the infected computer. To continue researching the malware, we could change the hostname to another one where we would have a computer listening on port 8088. We would receive the HTTP GET or POST requests from the infected computer. Doing that, we would know what commands are used in this botnet without needing to do advanced static analysis.
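The same decoding can also be done in one short Python snippet instead of the Notepad-plus-base64 route; the sample string below is invented, so substitute the script's real contents:

```python
import base64

encoded = "Jw==-DQ==-SGVsbG8="  # placeholder: one Base64 chunk per fragment

decoded = "".join(
    base64.b64decode(chunk).decode("latin-1")
    for chunk in encoded.split("-")
    if chunk
)
print(repr(decoded))  # "'\rHello" for the placeholder above
```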
<urn:uuid:a30feec6-6e1e-4a9e-a40f-e25d0015e6e2>
CC-MAIN-2017-04
http://www.behindthefirewalls.com/2013/10/decoding-code-encoded.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00254-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932115
447
2.6875
3
An information security risk assessment is a comprehensive examination of all the aspects that come into direct or indirect contact with the organization's information systems. Within the framework of the assessment, the organization's information systems are mapped at an abstract level, at which it is easier to examine their different components and grade the level of risk arising from each system. Numerous risks may affect the organization's information assets, such as flawed allocation of authorizations to employees in various departments; information leakage among departments; lack of compartmentalization; deficient password management; uncoordinated information availability; inadequate disaster recovery; and erroneous firewall definitions. The risks are weighted in accordance with the level of importance of the organization's assets; therefore the performance of the assessment depends on the cooperation of the organization's various departments. By mapping and assessing the risks, it is possible to arrive at an organized plan according to which penetration tests will be carried out on the systems, based on their importance to the organization.
<urn:uuid:e9e8ab14-ea3b-45c3-a73d-bbe4c07f2e8e>
CC-MAIN-2017-04
https://www.bugsec.com/service/risk-assessment-2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00254-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946585
205
2.53125
3
A couple of weeks ago I wrote about some of things that I found most frustrating about being a programmer, back when I was a programmer. One of those things was the size of indentation in code. Somewhere along the line, I was trained (or just started on my own; can't recall) to set the tab key in my editor to four spaces. That's what I stuck with for years and got used to seeing. When I read someone else's code that used two or eight character tabs, I found it annoying. It wasn't a dogmatic thing for me; I didn't care that much. Four spaces was just my personal preference. In the programmer community, though, discussions of coding style such as the size of code indentation can quickly turn into a holy war. Some people tend to have very strong opinions on it. It flared up a little bit in the comments some people made on the article, but there are plenty of lengthier (and more heated) discussions on it elsewhere on the web. Spacing and indentation in code are important, of course, to help organize things and to make it readable. Code is often read by different people, and the preferences (or training) can differ from programmer to programmer. Also, depending on what type of indentation is used, the same code opened on different operating systems can look different (and less legible). It's not as trivial an issue as it may sound to the non-programmer, and it's also an argument that will likely never end. Among the choices are things like: hard tabs (that is, the tab key inserting the actual ASCII tab character) versus soft tabs (replacing the tab character with multiple spaces), how many columns wide the indentation should be, how to deal with indentation or spacing within a line (some prefer hard tabs at the start of lines, but soft tabs in the middle of lines), etc. There are pros and cons to each choice; hard tabs use less space in the file, but soft tabs will lead to consistent spacing across operating systems. Two spaces were better than four or eight when trying to fit things on 80-character lines. On and on (and on) it can go.
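As a quick illustration of why the same file can look different from editor to editor, this hedged Python snippet (the fragment and names are invented for the example) renders one hard-tabbed piece of code at three common tab stops:

```python
snippet = "if ok:\n\tdo_thing()\n\t\tnested()"

# The same hard tabs render at different widths depending on the
# viewer's tab setting, which is why hard-tabbed code can look
# misaligned when it moves between editors or operating systems.
for width in (2, 4, 8):
    print(f"--- tab stop = {width} ---")
    print(snippet.expandtabs(width))
```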
<urn:uuid:af4f95e3-a58b-4b15-9de5-e724aaf9230f>
CC-MAIN-2017-04
http://www.itworld.com/article/2713010/it-management/religion--politics-and-coding-indentation-style--the-three-great-debates.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00006-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952479
465
2.5625
3
There are plenty of myths about malware in general, but Macs especially seem to attract an extra dose of mythos due to a smug sense of invulnerability among the Mac community. We covered 10 malware myths that refuse to die for USA Today, but there are plenty more than 10 misconceptions being passed around. Of the many reasons to love Macs, immunity to danger is not one of them. For a while now, people have felt a sense of security because they're on an operating system that doesn't inspire hundreds of thousands of new malware a day. But the total number of malware crawling around the Internet waiting to infect your computer is less important than this simple fact: it only takes one to ruin your day. By going out on the Internet with a false sense of safety, you can leave yourself more open to that malware bullet with your name on it. So what are some of the biggest Mac malware misconceptions that need to be cleared up? Here are five of the most prevalent ones:

1. Macs Don't Get Viruses

If you mean Windows-specific file viruses do not harm Macs, you're totally right. If you mean self-replicating code doesn't happen on Macs, there is really no period of time in which this statement has ever been true. Elk Cloner, the very first virus to be discovered in the wild, was written specifically for Apple DOS 3.3. Since then, every Mac OS has had some manner of virus or worm. There have been macro viruses capable of spreading on Macs as long as people have been using MS Office on Macs. The first OS X-specific worm was discovered in 2006, so they do indeed exist. There are not a lot of viruses running around in Mac-land today, because there are not a lot of viruses running around, period. They've fallen out of favor with the malware-writing crowds, even on Windows. They're a heck of a lot of work to make, they tend to cause system instability, and they're no more difficult to find and remove than other non-replicating forms of malware. Bang-for-buck-wise, viruses are just not worth the effort. But that's not to say there aren't other types of malicious code causing problems for Mac users. Malware doesn't have to replicate to be a pain in your machine. There are a lot of different types of threats to Macs, but the most common one these days is spyware: it gets into your system and steals your data, whether it's text lying around your file system or it requires eavesdropping on your chat sessions.

2. Mac Malware Requires You to Input Your Password

You generally need to input your password to install things on a Mac, so this is true with malware too, right? Not even a little. There was a very brief period of time in which this might have been partially true: the first OS X malware was what we call "Proof of Concept," meaning its intent was to prove a point rather than to actually cause damage or steal anything. It didn't mean to be harmful, so it didn't have to be particularly stealthy, because people who were running the file knew exactly what it was meant to do (prove a point about malicious code). But again, we run into that whole macro virus thing, and those did not require separate installation or entering of passwords. Now, malware is meant to bring in cash, so malware writers are motivated to make their creations stealthier. It can be tricky to get people to install some random piece of software as folks get more wary of threats. So most malware now employs some kind of exploit in order to install the malicious code without you even knowing.
Drive-by downloads on compromised websites or in malicious advertisements are now the order of the day. It doesn't matter which browser you use; they're all vulnerable to some extent. Removing commonly attacked browser plugins can certainly help, but Java and Flash are not the only culprits (they're the most popular targets because most people use them, regardless of operating system).

3. OS X's Built-In Protection Will Save the Day

OS X has a handful of ways to improve your security, some built into the operating system itself and some that are separate components. The three most important components are the Application Firewall, Gatekeeper, and XProtect (also known as File Quarantine). These are all fantastic, and we heartily recommend people use them. However, they're all limited by design. The Application Firewall will block incoming communications, but not outgoing ones. Gatekeeper is still vulnerable to malware that uses exploits. And XProtect will protect you only against certain specific, prevalent malware, and usually quite a while after the malware picks up steam. OSX/Flashback hit over 600,000 Mac users before it was incorporated into XProtect. When Apple's own developers were hit with OSX/Pintsized, none of those protections saved them.

4. There Is No Mac Malware Affecting Real People

Let me throw a few names at you: Flashback. Pintsized. DNSChanger. MacDefender. Three of these hit large numbers of Mac users in the last few years; one of them also hit Apple's own developers. Malware is real, and it hurts. Two of them left infected users' machines open to attackers, to do what they pleased with them. One stole credit card information and "nominal fees," and the last redirected users' attempts to surf the web so the attackers could increase ad revenue. The underlying theme is profit motive: where there is a buck to be gained, there is a way. Choice of operating system or other software is not a sufficient deterrent.

5. OS X Is Inherently a Safer Operating System

As we discussed in point three, there are a few baked-in parts of the OS X operating system and add-on components that help prevent malware. And to some extent, OS X has enjoyed "security through obscurity," as it has less market share and was considered less interesting to malware authors. But that's all changed now, as OS X steadily increases in popularity. When it comes right down to it, the differences between the major OSes in terms of security are pretty negligible. None of them present sufficient hurdles for malware writers who are looking to get onto your machine.

None of this means you're helpless against the malware onslaught. There are plenty of ways to improve your level of protection. For instance, make sure all the software on your machine is updated regularly, remove (or limit) browser plugins that you don't frequently use, encrypt your data, and have up-to-date Mac antivirus software and a full firewall. It's better to be properly prepared than to wander blindly and be rudely awakened after things go wrong.
<urn:uuid:ac3d529d-994e-4af1-8537-9112b15d30b5>
CC-MAIN-2017-04
https://www.intego.com/mac-security-blog/5-more-mac-malware-myths-and-misconceptions/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00308-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950081
1,461
2.515625
3
Definition: A binary relation R for which a R b and b R c implies a R c. See also reflexive, symmetric.

Note: The relation "less than" is transitive: if a < b and b < c, then a < c. The relation "is an ancestor of" is also transitive: if Reuben is an ancestor of Bill and Bill is an ancestor of Patrick, Reuben is an ancestor of Patrick. However, "likes" is not transitive, since someone I like may like someone that I don't like.

Entry modified 17 December 2004.

Cite this as: Paul E. Black, "transitive", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/transitive.html
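The definition translates almost directly into code. The following Python sketch (identifiers are illustrative, not part of the DADS entry) tests a finite relation represented as a set of ordered pairs:

```python
def is_transitive(relation):
    # R is transitive iff whenever (a, b) and (b, c) are in R,
    # (a, c) is in R as well.
    return all(
        (a, d) in relation
        for (a, b) in relation
        for (c, d) in relation
        if b == c
    )

less_than = {(1, 2), (2, 3), (1, 3)}
likes = {("me", "alice"), ("alice", "bob")}  # ("me", "bob") is absent
print(is_transitive(less_than))  # True
print(is_transitive(likes))      # False
```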
<urn:uuid:56e462dd-bbdb-4dd9-80e5-be6af8cb44d8>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/transitive.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00456-ip-10-171-10-70.ec2.internal.warc.gz
en
0.922249
229
3.328125
3
"Imagine you're at the airport and you want to find the free Wi-Fi. When you scan, your phone is going to display the Wi-Fi access points. That could be an easy channel for a hacker to inject malicious worm code into your smartphone," Du says. "Once the worm takes control, it can duplicate itself, and send copies to your friends via SMS messages, multimedia file sharing, and other methods." Du and a team of researchers from the College of Engineering and Computer Science at Syracuse University are warning about Cross-Device Scripting (XDS) attacks on smartphones if apps are based on HTML5. Details of the attacks are in the research paper "XDS: Cross-Device Scripting Attacks on Smartphones through HTML5-based Apps" (pdf), which will be presented at the Mobile Security Technologies (MoST) workshop in May. To help even technically challenged folks grasp the risks, the team put together video examples demonstrating the following four attack scenarios: - If you are at an Airport, and scan for free Wi-Fi access points using an HTML5-based app, you may be attacked. - If you receive an SMS message, and use an HTML5-based app to read the message, you may be attacked. - If you play an MP3 song or music using an HTML5-based app, you may be attacked. - If you scan a 2D barcode using an HTML5-based app, you may be attacked. Put another way, even basic activities like listening to music, watching a video, opening an image, sending a text message, or scanning for Wi-Fi can leave smartphones "vulnerable to harmful 'computer worms'." If an attacker injects malicious code into a victim's smartphone, it doesn't end there. The researchers warned (pdf), "It can be spread to other people's phones like a worm. The more popular the technology becomes, the more quickly a worm can spread." All major mobile platforms "will be affected, including Android, iOS, Blackberry, Windows Phone, etc., because they all support HTML5-based mobile apps." Xing Jin is a doctoral candidate at SU who has worked with Du on software security for the past year and a half. Jin said, “Professor Du always said, ‘You need to have an evil mind, but have a good heart'. I would like to use my knowledge to help the systems developer. I would like to see my work implemented within Samsung’s technology to benefit the greater good." So far, the Syracuse team has "identified 14 vulnerable HTML5-based apps from three types of mobile systems, including Android, iOS and Blackberry. Developers of those vulnerable apps have been informed and in an effort to give them time to fix the problem, researchers have decided not to disclose the names of the vulnerable apps." There is one simple solution; don't use apps based on HTML5. The researchers said, "If the app is written using the language native to the platform (e.g. Java for Android and Object-C for iOS), it is immune to this type of attacks." I encourage you to watch the plethora of videos showing the attacks, the one showing how to track the victim's location, and/or the longer version embedded above about code injection attacks on HTML5 apps. It's interesting work. You can also read the research, "XDS: Cross-Device Scripting Attacks on Smartphones through HTML5-based Apps" (pdf), before it hits the "mainstream" at the Mobile Security Technologies conference in May. Like this? 
<urn:uuid:398debc7-7a83-4bc6-8dac-532823f03dfb>
CC-MAIN-2017-04
http://www.networkworld.com/article/2226707/microsoft-subnet/research--attacks-on-html5-based-apps-infect-smartphones--spread-like-a--worm-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00364-ip-10-171-10-70.ec2.internal.warc.gz
en
0.910597
956
2.578125
3
In my previous articles on subnet masks, I gave a detailed picture of subnet masking. All network engineers, from novices to experts, need to master IP addressing and subnet masks in order to perform their job. For the CCNA through CCIE examinations, you need the skills to work fluently with subnet masks. In this article, I am going to show how you can use Variable Length Subnet Masking (VLSM) in your network design and implementation.

What is VLSM and why do we use it?

Variable Length Subnet Masking (VLSM) is the more realistic way of subnetting a network to make the most efficient use of all the bits, allowing much tighter control over your addressing scheme.

Remember that when we perform classful subnetting, all subnets have the same number of hosts because they all use the same subnet mask. This is inefficient: many host addresses are wasted. For example, if you borrow 4 bits on a class C network, you end up with 14 valid subnets of 14 valid hosts each. A serial link to another router only needs 2 host addresses, but with classful subnetting you end up wasting 12 of those hosts. Also, if you use a class C address with the default subnet mask, you end up with one subnet containing 256 addresses. By using VLSM, you can adjust the number of subnets and the number of addresses depending on the specific needs of your network. The same rules apply to class A and B addresses. That is why VLSM is used. To put it simply, it is the process of "subnetting a subnet," using different subnet masks for different networks in your IP plan. What you have to remember is to make sure that none of the address ranges overlap.

VLSM is supported by vendor-independent routing protocols such as RIPv2, OSPF, Integrated IS-IS and BGP-4, as well as Cisco's proprietary EIGRP. You configure your router for Variable Length Subnet Masking by setting up one of these protocols, then configure the subnet masks of the various interfaces in the ip address interface sub-command. To use supernets you must also configure IP classless routing.

Classful and classless routing:
- Classful (basic subnetting): classful routing protocols require that all subnets of a single network use the same subnet mask.
- Classless (FLSM and VLSM): VLSM allows a single autonomous system to have networks with different subnet masks. This is often referred to as "subnetting a subnet."

There are two problems with using FLSM: it wastes addresses if the number of hosts on the subnets varies in size, and it forces the routers that talk to these subnets to process too much routing information. If we use a subnet mask that provides enough interface addresses for the three networks with 30 hosts, then we waste 28 addresses on each of the three 2-interface networks. Further, the upstream router must maintain six separate network addresses in its routing table.

The alternative is to use VLSM. In this method, an existing subnetwork is further subnetted, and the resulting subnets of the subnet are all of a size that best fits the networks in question. If the router on the far left of the diagram that follows has been assigned the 172.16.32.0/20 network, this network can be further subdivided to achieve better IP address utilization and fewer routes in that router's routing table. Notice that the /30 is formed from the space left after the /20 has been formed.
In other words, counting from the far left of the 32 available bits, the first 20 bits are where the far-left router's network prefix stops. The remaining 12 bits are meant to be used for interface addresses. Instead, we can use part of this 12-bit space to create new subnets. In this example we first subnet at the /26 line, then take one of these /26 networks and subnet it again at the /30 line. VLSM can be applied to any class of network: A, B or C.

Honestly, subnetting can be very confusing. Even network administrators with extensive hands-on experience in network engineering may not be able to design a VLSM network quickly, because it involves many fundamentals: private and public address classes, their respective ranges, and the many other factors that come into play when applying VLSM to a network. The tables below may help you visualize the world of VLSM so you can be comfortable with it for all classes:

When should you use VLSM? Why use it? What is the need for it? These questions come to mind, so we'll take a quick look at the following scenario to answer them.

In the following scenario, we have a class C pool of IP addresses, 192.168.1.0/24, and we subnetted the given network according to our requirements, as described below. Go through the whole scenario, then consider whether the IPs are private or public. If you want an in-depth classification, you can read my previous article on IPv4. After a brief examination, you will see that many IP addresses are made useless, or wasted. Where we need only 2 hosts, 30 host addresses are available: a clear waste of 28 hosts. In FLSM, all the subnets must be the same size, and in the given scenario the network is subnetted with a /27 mask. The described scenario is small because a class C network has only 254 valid hosts, and you can calculate the total wastage of IPs: approximately 140 hosts out of 210 valid hosts. Imagine a bigger scenario with a class A range and you'll realize how many IPs are wasted using FLSM.

By now you should have the answers to all the questions above, and it should be clear that the solution to FLSM's IP wastage is VLSM. Now let's look at the same scenario subnetted with VLSM; the only difference is the subnetting technique used. If you compare fig. 1 with fig. 2 (the FLSM scenario with the VLSM scenario), IP wastage is decreased by about 90%, though this may vary per specific scenario. A programmatic check of such an addressing plan appears in the sketch below, just before the references.

Tips for CCNA exams: To pass the CCNA examination, you need to be able to answer the following: Why is VLSM used in your network? What technique is required to use VLSM? What are the differences between VLSM and FLSM?

I hope this article clears up all doubts about VLSM and makes you confident in deploying it. To master IPv4 addressing and subnetting, please read all my articles on the topic and try to practice. If you have any queries, please comment below; I will try to resolve them as soon as possible. Best of luck with your exams and in choosing a career in Cisco technology.
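As a practical aside, such an addressing plan can be checked with Python's standard ipaddress module. This minimal sketch (variable names and the exact carve-up are mine, mirroring the /27-plus-/30 scenario above) prints each subnet and its usable host count:

```python
import ipaddress

pool = ipaddress.ip_network("192.168.1.0/24")

# Three /27 LANs (30 usable hosts each) ...
subnets_27 = list(pool.subnets(new_prefix=27))
lans = subnets_27[:3]
# ... then "subnet a subnet": carve /30 point-to-point links
# (2 usable hosts each) out of the next unused /27.
links = list(subnets_27[3].subnets(new_prefix=30))[:3]

for net in lans + links:
    print(net, "usable hosts:", net.num_addresses - 2)
```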
References:
Guide to Cisco Certified Network Associate Certification by Todd Lammle, Sybex Press
Guide to Cisco Certified Network Associate by Richard Deal
Cisco Certified Network Professional - Route by Wendell Odom, Ciscopress.com
CCNP Route Quick Reference by Denise Donohue, Ciscopress.com
Cisco Certified Internetwork Expert by Wendell Odom and others, Ciscopress.com
Cisco Certified Internetwork Expert Quick Reference by Brad Ellis, Ciscopress.com
<urn:uuid:8f73ad08-49d8-494d-9af5-cb27e2eec331>
CC-MAIN-2017-04
http://resources.intenseschool.com/ccna-prep-variable-length-subnet-masking-vlsm/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00208-ip-10-171-10-70.ec2.internal.warc.gz
en
0.923929
1,695
3.65625
4
As citizens of the United States prepare to cast their votes in the upcoming presidential election, the time is right to consider what implications, if any, Internet-borne threats may have on this process. With political candidates increasingly relying on the web to communicate their positions, assemble supporters and respond to critics, Internet-based risks are a serious concern: they can be used to disseminate misinformation, defraud candidates and the public, and invade privacy. Protecting against these risks requires a careful examination of the attack vectors most likely to have an immediate and material effect on an election: those that target voters, candidates or campaign officials. Once individuals and organizations have a better understanding of these risks, they can put in place many of the same tools and processes that have proven effective in providing Internet protection for both consumers and enterprises.

Barbarians at the Gateway

As malware has evolved into crimeware, Internet threats are no longer noisy devices designed to get attention. Rather, today's malicious code has moved out of basements and dorm rooms and into the hands of organized crime, aggressive governments and organizations intent on using this ubiquitous high-tech tool for their own criminal purposes. Businesses and consumers are responding by adopting a more proactive approach to Internet security. Both at home and at work, many Internet users are implementing technologies and practices to mitigate their risk as they work and play online. After all, with their identities, financial well-being and reputations on the line, consumers and businesses have little choice but to tighten their defenses. However, an equally insidious yet less publicized threat remains: the potential impact of this malicious activity on the election process. Many of the same risks that users have become accustomed to as they leverage the Internet in their daily lives can also manifest themselves when the Internet is extended to the election process. Beyond the concerns about voter fraud and the challenges of electronic voting, many of today's threats from Internet-borne crimeware also have the potential to influence the election process leading up to voting day. From domain name abuse to campaign-targeted phishing, traditional malicious code and security risks, denial-of-service attacks, election hacking and voter information manipulation, the potential impact of these risks deserves consideration.

What's in a Domain?

In today's online environment, a number of risks are posed by individuals attempting to abuse the domain name system of the Internet. These include typo squatters, domain speculators and bulk domain name parkers. Typo squatting aims to benefit from mistakes users might make as they enter a URL directly into the address bar of their web browser. It used to be that a typo resulted in an error message indicating that the specified site could not be found. Now, however, a user is likely to be directed to a different website unrelated to the intended one. Unfortunately, organizations have rarely registered all potential variations of their domain names in an effort to protect themselves. Typo squatters anticipate which misplaced keystrokes will be most common for a given entity (in the case of election-focused activities, these would be websites related to the leading candidates) and register the resulting domain names so that traffic intended for the correct site goes instead to the typo squatter's own web properties.
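As a rough illustration of how squatters enumerate candidate domains, the following Python sketch (entirely hypothetical; "candidate.org" is a placeholder, and real squatters use far richer models of keyboard errors) generates simple one-keystroke typo variants:

```python
def typo_variants(domain):
    # Generate simple one-keystroke typos of a domain's main label:
    # a dropped character, a doubled character, or two swapped neighbors.
    name, _, tld = domain.rpartition(".")
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:])                 # omission
        variants.add(name[:i] + name[i] * 2 + name[i + 1:])   # repetition
    for i in range(len(name) - 1):
        chars = list(name)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
        variants.add("".join(chars))                          # transposition
    variants.discard(name)
    return {v + "." + tld for v in variants if v}

print(sorted(typo_variants("candidate.org"))[:6])
```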
The relative scarcity of simple, recognizable “core” domain names has resulted in the development of an after-market for those domain names and has led to the creation of a community of speculators who profit from the resale of domain names. In fact, typo squatters and domain name speculators no longer even need to host the physical web infrastructure for their own web content or advertisements. Domain parking companies now handle this, for a cut of the advertising profits. What’s more, some typo squatters’ sites may not simply host advertisements whose profits go back to them rather than to the intended site’s owner, but they may actually forward the user to an alternative site with differing political views. Worse yet, the real potential for future abuse of typo domains may revolve around the distribution and installation of security risks and malicious code, the potential impact of which is evident in online banking, ecommerce and other business-related online activities today. Phishers, Hackers, and More The use of malicious code and security risks for profit is certainly not new. It seems the authors of such creations are quick to reach into their bag of tricks in the wake of everything from natural disasters to economic downturns and even elections to try to manipulate users into becoming unwitting participants in their latest cyber scheme. For example, phishers targeted the Kerry-Edwards campaign during the 2004 federal election—in one case, setting up a fictitious website to solicit online campaign contributions and in another, setting up a fictitious “toll-free” number for supporters to call (and then charging each caller nearly $2 per minute). Whether leveraging a fundraising site to which users have been redirected, a candidate’s legitimate site, spoofed emails or typo-squatted domains, phishers have a wide range of vehicles from which to deliver their malicious activity. Malicious code infection represents one of the most concerning potential online threats to voters, candidates and campaign officials. With malicious tools that monitor user behavior, steal user data, redirect browsers and deliver misinformation, malicious code targeted at voters has the potential to cause damage, confusion and loss of confidence in the election process itself. By placing keyloggers or Trojans on a user’s system, a cyber criminal could hold the user’s data hostage until a fee is paid to release it; such threats have already surfaced and been leveraged in the larger Internet user community. In addition, a carefully placed targeted keylogger might potentially result in the monitoring of all communications from an individual, including the candidate, campaign manager and other key personnel. Denial-of-service attacks, which make a computer network or website unavailable and therefore unusable, have become increasingly common on the Internet today. In May 2007, one such attack was launched against the country of Estonia by Russian patriots who disabled numerous key government systems over the course of several weeks. Regardless of the motivation of such attacks or their geographic setting, in an election process they could potentially prevent voters from reaching campaign websites and impede campaign officials from communicating with voters. In fact, the security of a campaign’s website plays a role in how much faith voters have in the election process. Yet, these websites can also be hacked so that attackers can post misinformation or deploy malicious code to unsuspecting visitors. 
Attempts to deceive voters through the spread of misinformation using traditional forms of communication are not new. Past campaigns have aimed at intimidating minorities and individuals with criminal records, announced erroneous voting dates and introduced other tactics to create voter confusion. Such activities lend themselves to the Internet because of the ease with which they can be conducted by a single attacker rather than an organized group. As campaigns increasingly look to the Internet as a tool for gathering support, the inherent risks that follow must also be considered. From domain name abuses to phishing, hacking and other security threats, the risks of online advocacy must be understood by election campaigns so that the necessary precautions can be put in place to protect against them. By keeping a vigilant watch on cyber activities, candidates, their campaigns and voters can help maintain a dynamic yet reliable election process.
<urn:uuid:f20c878a-f792-4885-87f1-bd8df84cee6a>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2008/08/04/cybercrime-and-politics/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00116-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949947
1,498
2.71875
3
As aging power grids are connected to the Internet, more systems will be transformed by the addition of information and communications technologies. But security experts worry about the potential threats that transition will pose to an already fragile electrical network. If hackers launch a major cyberattack, there are real questions about whether the North American electrical system would hold up. Indeed, the implications of an attack against poorly protected critical electric-grid sites are profound. The nonprofit organization that acts as watchdog and standards-bearer for North America's power grid told Congress that a worst-case cyberattack on the electric grid could trigger an outage lasting one to two weeks. Considering the increasing reliance on electricity in the United States (the EIA expects energy consumption to continue increasing at a steady clip over the next couple of decades), an outage of that magnitude would qualify as a crisis. Lloyd's of London published a study of worst-case scenarios involving cyberattacks against the U.S. power grid. The good news is that Lloyd's believes the scenarios it considered were still "improbable." The bad news is that the attacks remained "technologically possible" and had the potential to inflict up to $1 trillion in total damage on the economy. The challenge of upgrading an aging communications and network infrastructure to meet heightened cybersecurity standards is further compounded by the fact that there's not just one grid under central authority. Rather, there exists a "system of systems" owned or used by more than 3,000 utilities.

Lack of urgency

At the same time, new security requirements will surface as more smart systems and appliances get connected to the grid. Between 2007 and 2014, for example, the number of smart meters in the U.S. soared from 10 million to more than 50 million as part of the rapid deployment of smart devices belonging to the Internet of Things, which also includes so-called smart houses, smart cars and other IoT-enabled devices. The concern is that attackers hacking into any of these IoT implementations could then tunnel their way upstream into the electric grid. The worry is compounded by a seeming lack of urgency to prepare against potential threats. The Department of Homeland Security has published a set of cybersecurity guidelines that grid operators and other industrial control systems can follow to reduce their attack surface. It includes the usual reminders about patch management and other best practices for building defensible environments. But the biggest challenge may be philosophical. Only 29 percent of U.S. companies are starting to implement a cyberphysical strategy, 36 percent are still developing a strategy, and 18 percent have no plans to even develop one, according to the SANS Institute. The lesson should be clear: if grid operators can modify their thinking about security to fit changing times, they can avoid a lot of needless stress. Otherwise, they are fated to live in interesting times. Charles Cooper has covered technology and business for the past three decades. All opinions expressed are his own. AT&T has sponsored this blog post.
<urn:uuid:744e6a2b-dc98-49e2-a628-c96d8013a482>
CC-MAIN-2017-04
http://www.csoonline.com/article/3143616/internet-of-things/what-s-lacking-in-grid-cybersecurity.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00326-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957346
630
2.859375
3
One would think that these two terms are synonyms - after all, isn't information security all about computers? Not really. The basic point is this: you might have perfect IT security measures, but a single malicious act by, for instance, an administrator can bring the whole IT system down. This risk has nothing to do with computers; it has to do with people, processes, supervision, and so on. Further, important information might not even be in digital form; it can also be on paper: for instance, an important contract signed with the largest client, personal notes made by the managing director, or printed administrator passwords stored in a safe. Therefore, I always like to say to my clients: IT security is 50% of information security, because information security also comprises physical security, human resources management, legal protection, organization, processes, and more. The purpose of information security is to build a system that takes into account all possible risks to the security of information (IT-related or not) and implements comprehensive controls to reduce all kinds of unacceptable risks. This integrated approach to the security of information is best defined in ISO 27001, the leading international standard for information security management. In short, it requires risk assessment to be done on all of the organization's assets (including hardware, software, documentation, people, suppliers, partners, etc.) and applicable controls to be chosen for decreasing those risks. ISO 27001 offers 133 controls in its Annex A. I have performed a brief analysis of the controls, and the results are the following:
- IT-related controls: 46%
- controls related to organization/documentation: 30%
- physical security controls: 9%
- legal protection: 6%
- controls related to relationships with suppliers and buyers: 5%
- human resources management controls: 4%
What does all this mean in terms of information security / ISO 27001 implementation? This kind of project should not be viewed as an IT project, because as such not all parts of the organization would be willing to participate in it. It should be viewed as an enterprise-wide project in which relevant people from all business units take part: top management, IT personnel, legal experts, human resource managers, physical security staff, the business side of the organization, and so on. Without such an approach you will end up working on IT security alone, and that will not protect you from the biggest risks.
<urn:uuid:57d54151-de63-408b-bbb3-8e8771fde3f9>
CC-MAIN-2017-04
http://www.infosecisland.com/blogview/5482-Information-Security-or-IT-Security.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00145-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940818
491
2.65625
3
WDM (Wavelength Division Multiplexing) systems are popular in fiber optic networks because they allow operators to expand the capacity of the network without laying more fiber. The capacity of a given link can be expanded by simply upgrading the multiplexer and demultiplexer at each end. By using WDM and optical amplifiers, operators can accommodate several generations of technology development in their optical infrastructure without having to overhaul the backbone network. WDM wavelengths are positioned in a grid having exactly 100 GHz (about 0.8 nm) spacing in optical frequency, with a reference frequency fixed at 193.10 THz (1552.52 nm). The main grid is placed inside the optical fiber amplifier bandwidth, but can be extended to wider bandwidths. Today's DWDM systems use 50 GHz or even 25 GHz channel spacing for up to 160-channel operation. Dense WDM (DWDM) uses the same 3rd transmission window (C-band) but with denser channel spacing. A typical DWDM system would use 40 channels at 100 GHz spacing or 80 channels with 50 GHz spacing. For example, FiberStore provides a 50G DWDM multiplexer module. Coarse WDM (CWDM), in contrast to conventional WDM and DWDM, uses increased channel spacing to allow less sophisticated and thus cheaper transceiver designs. To again provide 16 channels on a single fiber, CWDM uses the entire frequency band between the 2nd and 3rd transmission windows, including both windows but also the critical area between them. The channels 31, 49, 51, 53, 55, 57, 59 and 61 remain, and these are the most commonly used. WDM, DWDM and CWDM are based on the same concept of using multiple wavelengths of light on a single fiber, but differ in the spacing of the wavelengths, the number of channels, and the ability to amplify the multiplexed signals in the optical space. DWDM systems have to maintain a more stable wavelength, or frequency, than CWDM because of the closer spacing of the wavelengths. In addition, since DWDM provides greater maximum capacity, it tends to be used at a higher level in the communications hierarchy than CWDM. These factors of smaller volume and higher performance result in DWDM systems typically being more expensive than CWDM. DWDM transponders served originally to translate the transmit wavelength of a client-layer signal into one of the DWDM system's internal wavelengths in the 1550 nm band. Signal regeneration in transponders quickly evolved through 1R to 2R to 3R and into overhead-monitoring multi-bitrate 3R regeneration. One DWDM transponder, with its tunable channel feature, can act as a spare for all DWDM-channel 10G transceivers. It eliminates the need to purchase individual transceivers (XFPs/XENPAKs) for each DWDM channel and greatly reduces sparing costs. Transceivers versus Transponders Transceivers – Since communication over a single wavelength is one-way (simplex communication), and most practical communication systems require two-way (duplex) communication, two wavelengths will be required (which might or might not be on the same fiber, but typically they will each be on a separate fiber in a so-called fiber pair). As a result, at each end both a transmitter (to send a signal over a first wavelength) and a receiver (to receive a signal over a second wavelength) will be required. A combination of a transmitter and a receiver is called a transceiver; it converts an electrical signal to and from an optical signal. Transceivers are commonly available in WDM-specific variants; for example, CWDM XENPAK transceivers are available.
Transponder – In practice, the signal inputs and outputs will not be electrical but optical instead (typically at 1550 nm). This means that in effect we need wavelength converters instead, which is exactly what a transponder is. Transponders that don’t use an intermediate electrical signal (all-optical transponders) are in development.
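As a footnote to the grid spacing described earlier, this small Python sketch (constants and names are mine) reproduces the 100 GHz ITU grid around the 193.10 THz anchor and converts each channel frequency to its wavelength:

```python
C = 299_792_458.0  # speed of light in m/s

def dwdm_channel(n, spacing_ghz=100.0):
    # ITU DWDM grid: f(n) = 193.1 THz + n * channel spacing,
    # anchored at the 193.10 THz reference mentioned above.
    freq_thz = 193.1 + n * spacing_ghz / 1000.0
    wavelength_nm = C / (freq_thz * 1e12) * 1e9
    return freq_thz, wavelength_nm

for n in (-2, -1, 0, 1, 2):
    f, wl = dwdm_channel(n)
    print(f"n={n:+d}: {f:.2f} THz ~ {wl:.2f} nm")
```

Running it shows the ~0.8 nm wavelength step per 100 GHz channel, with n=0 landing on 1552.52 nm as quoted in the article.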
<urn:uuid:6298faa7-71bd-40d8-9d8f-db7a7665e569>
CC-MAIN-2017-04
http://www.fs.com/blog/a-simple-guide-to-wdm-system.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00081-ip-10-171-10-70.ec2.internal.warc.gz
en
0.921377
854
3.5
4
Hurricane Sandy caused 48 fatalities in the state of New York alone and brought the total death toll across the Northeast to more than 110 by the end of October. This Category 1 hurricane affected a handful of Atlantic states and caused widespread destruction, particularly in New York City and New Jersey, where many streets and subway tunnels were flooded by the violent coastal surge. The storm, however, places renewed importance on some next-generation sources of electricity that can remain operational when everything else is out: smart grids. The gale-force winds also brought down trees and power lines, and as a result of the wind damage the storm left many residents without electricity. New Yorkers were not only stranded and left in the dark; the lack of power left them with no heat as the country neared the winter season. The continuing exposure to weather-related risks in the U.S. teaches an important lesson in emergency preparedness and risk planning for the electric companies people count on to help prevent power failures. The responsibility for providing these essential services rests on electric utilities, which need to find alternative methods to help predict and prevent potential losses of power. That solution may be as simple as deploying smart grids to improve our use and supply of electricity. The smart grid is a fairly new concept that is re-engineering the electricity services industry. Combining the alternating-current power grid with distributed power sources, it can deliver electricity when it is needed most. "Smart meters," software-controllable devices dedicated to electricity metering, have also proved their worth in power restoration efforts. Their two-way communication capability gives utilities real-time data on any loss of power, pinpointing the areas where a break has occurred. Meter readings, or the lack thereof, provide the vital information for those responsible for taking the appropriate action: either to solve a problem with the meter itself or an outage caused by extreme weather, or to restore power remotely from a linked smart command center, which can turn power back on. Whether part of a smart grid or used on their own, smart meters can help ensure electricity is available whenever a disastrous storm like Sandy hits. To avoid having millions of people left without power for days, utility companies should plan to have smart grids and/or smart meters in place. These technologies would have kept them informed of damaged and destroyed power lines, and relieved many workers in the electricity industry from having to physically drive to each one to check it out. Edited by Braden Becker
<urn:uuid:fcc6c9b3-cdab-4d6e-b0fa-6aba6e21f129>
CC-MAIN-2017-04
http://www.iotevolutionworld.com/topics/smart-grid/articles/2012/11/06/314730-power-outages-from-hurricane-sandy-suggest-value-smart.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00291-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954156
525
3.171875
3
When worlds combine: the convergence of storage and data networking
Data networking and storage networking have existed independently from each other, for the most part, since the early days of computing. The requirements of each were unique. Storage required high performance over relatively short distances with low latency and predictable performance. Data networking was willing to trade performance to gain distance, cost and operational flexibility. High-performance computing (HPC) applications have their own requirements, but tend to focus on latency and high throughput, much like storage networks. Approaches to handling the information carried were different, too. Storage systems expected the network to provide reliable delivery, while data networks gave best-effort service with the expectation that the applications running over them would handle any information lost in transit. These differing requirements led to a proliferation of specialized networking implementations in the datacenter, with each technology requiring its own operational and administrative expertise.
<urn:uuid:15a9e924-c310-47bb-a159-90ef45178aeb>
CC-MAIN-2017-04
http://www.bitpipe.com/detail/RES/1349193626_276.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00319-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95379
181
3.21875
3
Back before Apple Inc. made computers that fit in your pocket, it made computers that fit on your desk. Some were big-box machines, others were not-so-portable portables and still others were -- literally -- cube-shaped. But the first Macintosh, the one that started Apple's rise to iconic status, is to the computer industry what the wheel was to cave men. It was launched during the Super Bowl on Jan. 22, 1984 -- in a minute-long commercial directed by Ridley Scott that became a classic of its own -- and went on sale two days later. It was the first of a string of Apple computers that would captivate users for the next quarter of a century. Much has changed in technology over the course of the past 25 years, with Apple often at the center of the advances we now take for granted. To celebrate the Mac's 25th anniversary, I looked back over the years and picked 10 Apple computers that altered the company's course and changed the way the world works and communicates. My first pick, naturally, is the first Mac.

The Macintosh (1984)

The original Mac, with its compact all-in-one design, innovative mouse and user-friendly graphical user interface (GUI), changed the computer industry. Like the wheel, the Mac just made things convenient for the rest of us. Most computers in the early 1980s were controlled exclusively through text commands, limiting their audience to true geeks. True, Apple had released a GUI with the introduction of the $9,995 Lisa in 1983, but the Mac, priced at $2,495, was the first computer to capture the attention of everyday people, who could now use a computer without learning an entirely cryptic command-line language. The mouse, coupled with a user interface that closely followed the physical "desktop" metaphor, allowed users to tackle tasks unheard of for rival computers using its two included applications: MacWrite and MacPaint. Thus was born desktop publishing. Coupled with the PostScript software licensed from Adobe Systems Inc., Apple was able to also sell the Apple LaserWriter, which helped bring about WYSIWYG design, allowing artists to output precisely what was on the Mac's 9-in. black-and-white screen. In case you forgot, the first Mac came with 128KB of RAM and zipped along with an 8-MHz processor. Reviewers were not always friendly, but the stories of those who helped bring it to life, collected at Folklore.org, offer a fascinating look at the first computer to capture mainstream attention.

The PowerBook 100 series (1991)

On Oct. 21, 1991, Apple unveiled its new portable lineup, which included the PowerBook 100, 140 and 170. These "good, better and best" models, the culmination of a joint venture between Apple and Sony Corp., featured a 10-in. monochrome screen and yielded a design that became the blueprint for all subsequent laptop designs from all computer manufacturers. Apple's earlier attempt at a portable Macintosh -- aptly named the Macintosh Portable -- weighed in at a not-so-portable 16 lb. But the Macintosh Portable did introduce the trackball to mobile computing, in this case located to the right of the keyboard. The PowerBook line placed the keyboard back toward the LCD screen, allowing room for users to rest their palms. It also conveniently allowed Apple to locate the trackball at the center of the palm rest. That made it easy for either left- or right-handed users to operate the machine.
The PowerBook series also introduced Target Disk Mode, which allowed the laptop to be used as a hard drive when connected to another Macintosh using the built-in SCSI port. It also came in a fashionable dark gray, breaking from the standard beige of the PC industry. The PowerBook 100 series brought in $1 billion in revenue for Apple in its first year, and its impact is still felt to this day. If you're using a laptop with a trackball or track pad between your palms, you can thank the PowerBook 100 design. (If you've got a track pad, you can thank the PowerBook 500. In 1991, that particular model was still three years away.)
<urn:uuid:15d7e386-046f-4d87-83ca-c0048e97ec3c>
CC-MAIN-2017-04
http://www.computerworld.com/article/2530440/computer-hardware/opinion--the-top-10-standout-macs-of-the-past-25-years.html?nsdr=true
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00493-ip-10-171-10-70.ec2.internal.warc.gz
en
0.966093
863
2.609375
3
Generation Z: they've been tweeting, posting and blogging since their fingertips got hold of their first laptop or tablet. They know their way around a smartphone like they were born with it as part of their hand. Technology is well and truly woven into the fabric of their lives. But as tech becomes an increasingly natural part of their lives, and with 52% of students admitting to being victims of cyberbullying, the risks of the online world grow. Did you know that in 2006 the state of California enacted legislation with specific reference to technology in the classroom? However, according to a September 2015 study by the US Department of Health and Human Services Cyberbullying Research Center, California is the number 1 state for the highest level of reported bullying incidents. With increasing pressure, worrying statistics and a legal duty of care to educate both students and teachers on the appropriate and ethical use of education technology, how clued-up are you on what school districts in California are legally required to implement? We've pulled some key points from California's legislation that we think should be on your radar. While Californian legislation requires school districts to have a long-term education technology plan in place, and many districts have adopted their own guidelines and standards for internet safety in schools, have you considered a new approach to keeping students safe in the online world? Monitoring online activity allows students to learn how to navigate the digital world safely. Additionally, it allows instructors and administrators to address cyberbullying as incidents occur. And with good real-time remote monitoring and management software in place, you can manage online behaviour on the fly, just like any other behavioural management issue. Find out more about our remote monitoring and management software.
<urn:uuid:c5705df6-58eb-4abb-991d-c3d588973923>
CC-MAIN-2017-04
https://www.imperosoftware.com/internet-safety-in-california-five-requirements-school-districts-should-know/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00217-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964403
362
3.015625
3
Data from Consumer Reports.
Not many consumers know about Radio Frequency Identification (RFID), a wireless technology that allows objects and people to be tagged and tracked. RFID tags contain microchips and tiny radio antennas that are embedded in all kinds of products and credit cards, or stuck on labels. A three-month investigation in the June 2006 issue of Consumer Reports found the RFID industry lacking the measures necessary to strengthen tag security against identity thieves. RFID technology offers huge cost savings to business, and it offers consumers conveniences such as speedier checkouts, as well as public benefits, including ways to manage toxic waste and encourage recycling. However, the tags are also a powerful new means of data collection about consumers, the things they buy, the books they read, and the places they travel. During the investigation, Consumer Reports found:
- Consumers are barely aware of RFID technology, yet its use is exploding, with sales of an estimated 1.3 billion tags this year.
- RFID tags are currently being used in credit cards, prescription-medicine packaging, computer equipment, TVs, clothing, cell phones, and the workplace. Soon the tags will be embedded in tires for safety recalls.
- Plans for RFID tag technology include incorporating tags into the entire drug-supply chain; they are already being used in packaging for prescription medicines such as Viagra.
- The U.S. government has begun issuing new e-Passports, which contain an RFID chip in the back cover. The chip in the e-Passport will have enough memory to add fingerprints or iris patterns in addition to the basic data already in the passport.
- The potential for snooping has increased dramatically. Security experts in the U.S. and abroad have cracked payment devices and chips implanted in humans.
- In an effort to patch up privacy protections, at least seven states are considering RFID bills of varying quality.
- The RFID industry has mounted a subtle PR campaign to fend off consumer and government objections and to forestall government regulation.
The report appears in the June 2006 issue of Consumer Reports, which will be available May 9th and is also available online.
How RFID Tags Work
An RFID tag contains a microchip and a tiny radio antenna. The tag broadcasts the unique identifying number of the item to any compatible reader within range: from a few inches for credit cards, up to 20 feet for merchandise tags, and up to 750 feet for battery-powered tags in toll passes. The reader then communicates with a computer database, where information linked to that ID number is stored, such as details about when a product was manufactured or medical records for people who have tags implanted. That database in turn can be linked to other networks via the Internet to allow for more widespread data sharing. While the RFID business steams along, several matters remain unaddressed. Several data-security experts recently demonstrated that when information is communicated wirelessly between RFID devices and readers, it is possible to eavesdrop electronically and pluck sensitive information out of thin air. Some argue that RFID technology could give the government a ready-made surveillance system as scanners become ubiquitous. Federal agencies and local law-enforcement agencies already negotiate contracts with private data collectors to obtain personal information they might otherwise be legally prohibited from collecting.
Commercial data brokers such as ChoicePoint, Lexis-Nexis, and Acxiom compile computerized dossiers that in one click reveal to government agencies, potential employers, loan officers, or private investigators information that may include your home address, phone number, Social Security number, photograph, legal transgressions, details about divorces, and financial records, among other personal data. The idea that a tiny radio chip might be traveling in their shirts or shorts doesn't sit well with Americans. The public unease has put the RFID industry on the defensive, and its leaders proclaim the importance of addressing consumers' privacy concerns. But when Consumer Reports asked to discuss the subject with executives of one company, its attempts were stonewalled by public relations representatives. "It's essential to develop the proper framework to protect consumers from the unprecedented privacy and identity theft risks that come with RFID," said Andrea Rock, senior editor at Consumer Reports.
<urn:uuid:e05772ca-8172-4e45-8cc8-adea3cea02ac>
CC-MAIN-2017-04
http://www.govtech.com/security/Consumer-Reports-Finds-Personal-Privacy-Concerns.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00245-ip-10-171-10-70.ec2.internal.warc.gz
en
0.938686
874
2.734375
3
How do you stop a spacecraft traveling at 47,500 mph? Well, if you're NASA, you fire its rockets, and that's exactly what the space agency did this week to slow down its Stardust satellite and ultimately let it snap better pictures of the Tempel 1 comet it will pass by next year. NASA this week said Stardust-NExT fired its engines for 22 minutes 53 seconds on Feb. 17 to purposely delay its arrival at comet Tempel 1 by 8 hours 21 minutes, altering the spacecraft's speed by 54 miles per hour. The spacecraft's velocity relative to the sun is 47,500 mph, NASA said. The Lockheed Martin-built spacecraft will still fly by the comet on Feb. 14, 2011, Valentine's Day, but hopefully the delay will let the satellite get better high-resolution images of the comet. NASA said that's important because the comet rotates, allowing different regions of the comet to be illuminated by the sun's rays at different times. Mission scientists want to maximize the probability that areas of interest previously photographed by NASA's Deep Impact mission in 2005 will also be covered by the sun's rays and visible to Stardust's camera when it passes by. According to NASA, the comet's surface features three pockets of thin ice. The area the ice covers is small. The surface area of Tempel 1 is roughly 45 square miles, or 1.2 billion square feet. The ice, however, covers roughly 300,000 square feet. And only 6% of that area consists of pure water ice. The rest is dust. Along with new high-resolution images of the comet's surface, Stardust-NExT will also measure the composition, size distribution, and flux of dust emitted into the coma, and provide new information on how Jupiter-family comets evolve and how they formed 4.6 billion years ago, NASA said. NASA launched the satellite on Feb. 7, 1999, and it was the first spacecraft in history to collect samples from a comet and return them to Earth for study. After its sample return capsule parachuted to Earth in January 2006, mission controllers placed the still-viable spacecraft on a trajectory that would let NASA re-use the already-proven flight system if a target of opportunity presented itself, NASA said. In January 2007, NASA re-christened the mission "Stardust-NExT" (New Exploration of Tempel), and the Stardust team began a four-and-a-half-year journey to comet Tempel 1. This will be humanity's second exploration of the comet, and the first time a comet has been "re-visited," NASA stated. The spacecraft has completed its 4,000th day of flight and traveled approximately 3.4 billion miles since its launch.
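A rough back-of-the-envelope check of the figures above, sketched in Python (this treats the burn as a constant acceleration and ignores orbital mechanics entirely):

    burn_seconds = 22 * 60 + 53              # 22 minutes 53 seconds = 1,373 s
    delta_v = 54 * 0.44704                   # 54 mph is about 24.1 m/s
    acceleration = delta_v / burn_seconds    # roughly 0.018 m/s^2 -- a very gentle push

    ice_fraction = 300_000 / 1_200_000_000   # exposed ice vs. total surface area
    print(acceleration, ice_fraction * 100)  # ice covers only about 0.025% of the comet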
<urn:uuid:78e05413-8aa6-44eb-ae27-decda894b7af>
CC-MAIN-2017-04
http://www.networkworld.com/article/2229905/security/nasa-taps-the-brakes-on-comet-chasing-satellite.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00153-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94193
589
3.109375
3
Many people confuse data privacy and data security. While there are similarities, privacy and security are not the same thing. Data security focuses on the confidentiality, integrity and availability of information and information technology resources, whereas data privacy is about an individual's ability to retain control over his or her personally identifiable information (PII). As individuals, we should ensure we are responsible "digital citizens" when using the Internet. Part of this responsibility includes understanding how to configure and manage the privacy settings for the Internet services that we use. This includes social networking services like Facebook and Twitter. Social networking services tend to change their privacy options frequently, so it is important to ensure you understand how you have configured the privacy settings for the social networking services you use. In the case of Facebook, the company has recently introduced a powerful new search feature called Facebook Graph Search. This new feature will improve the ability to search and find information; however, it can increase the likelihood that other people can find your information through the search if your privacy settings aren't set correctly. You must be sure your privacy settings are properly configured so that your personal information (posts, photos, likes, etc.) doesn't end up as a search result for someone you don't wish to have access to your data. The EFF has an informative article about how to protect your Facebook privacy from the new Graph Search. In addition to social networking, many of us are now using applications on our smartphones and tablets. Some of these applications are able to access private data from the device on which they run. One example of this is "location settings" for applications. The ability to have the application know your location can improve the application's functionality and ease of use, but it can also put your privacy at risk. Many devices have the capability to restrict an application's ability to determine the user's geographical location (also known as "geolocation"). Mobile devices often use a built-in GPS along with wireless hotspot proximity to determine location. You should carefully consider sharing geolocation information with applications, especially on devices used by minors. Decide which applications should have access to location services and disable access for all others. Does the game app you're playing really need to know where you're physically located? Think about it. Geolocation privacy concerns are not limited to apps, though, as most smartphones include built-in cameras that can embed geolocation metadata in each digital photograph captured by the device. Unless you disable the location awareness setting for the phone's camera, every photo you take and share will contain geolocation metadata that can be examined by anyone with whom you share the photo.
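As a concrete illustration of the photo-metadata point above, the following sketch uses Python and the widely used Pillow imaging library to pull GPS data out of a photo's EXIF record (the filename is hypothetical, and exact EXIF handling varies by Pillow version; this is the classic access pattern, not the only one):

    from PIL import Image
    from PIL.ExifTags import TAGS

    exif = Image.open("vacation_photo.jpg")._getexif() or {}   # hypothetical file
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "GPSInfo":
            print("Embedded GPS metadata:", value)   # latitude/longitude, if present

If that loop prints anything, the photo will carry your location wherever you share it.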
With the explosive growth in the number of applications available, it shouldn't be surprising that some of them have been discovered to have software defects with unintended consequences for privacy. Here is a case in point: a recently popular mobile application, "Crazy Blind Date," coordinates blind dates ("Pick a time, pick a place, we find you a blind date") and claims to keep your personal contact information, such as your phone number and email address, confidential. However, the Wall Street Journal discovered that, due to a programming mistake, technically inclined users of the service were able to access the profile information of other users (including birth date and email address). The developer of the application, OKCupid.com, promptly fixed the problem after being informed by the Wall Street Journal. It is important to be aware that applications may have access to your private information, and that there is potential for unintentional disclosure of this information, either as a result of software defects or improper configurations. For individuals, the key points in regard to electronic data privacy are: - Understand the services and devices you're using to make certain you know how your private data is, or isn't, being shared electronically. - Take time to review the settings for the Internet services and devices you and your family members use. - Think about what information you are comfortable sharing, and the impact of the improper disclosure of the information you've shared. Businesses and data privacy
<urn:uuid:37b77484-f6e4-4b03-8e41-d1ea61501fdd>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2013/01/29/privacy-tips-for-social-networking-apps-and-geolocation/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00061-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9423
873
3.28125
3
Yang Y.,Beijing Municipal Environmental Monitoring Center | Yang Y.,Beijing Municipal Key Laboratory of Atmospheric Particulate Monitoring Technology | Li J.,Beijing Municipal Environmental Monitoring Center | Li J.,Beijing Municipal Key Laboratory of Atmospheric Particulate Monitoring Technology | And 13 more authors. Huanjing Kexue Xuebao/Acta Scientiae Circumstantiae | Year: 2015 In this study, 491 PM2.5 samples from 10 sites in Beijing were collected in a campaign from August 2012 to July 2013 and used to analyze the major sources of PM2.5. Five types of point source emissions, two types of mobile emissions and four types of fugitive emissions were defined, and the chemical mass balance (CMB) model was used to conduct the source apportionment analysis. Results indicated that the major sources of PM2.5 were organic matter (20%), secondary nitrate (20%), secondary sulfate (16%), motor vehicles (16%), coal burning (15%), soil dust (6%) and unidentified sources (7%). Compared with previous results, the contribution from coal burning declined, while those of secondary inorganic matter and organic matter increased. Source apportionment of the key components showed that 25% of sulfate came from coal-burning boiler emissions and 17% of OM was emitted by motor vehicles. Sources differed considerably from site to site, reflecting the characteristics of local pollutant emissions. To improve air quality in Beijing, it is important to take regional action to reduce PM2.5 and precursor gas emissions. In the meantime, local traffic and coal-burning emissions should be more strictly controlled. © 2015, Science Press. All rights reserved.
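For readers unfamiliar with it, the chemical mass balance (CMB) model referenced in the abstract is, at its core, a least-squares problem: measured species concentrations at a receptor site are modeled as a weighted sum of known source profiles. A minimal illustration in Python/NumPy, with made-up numbers rather than the study's data:

    import numpy as np

    # Rows are chemical species, columns are candidate sources (profile fractions).
    profiles = np.array([[0.60, 0.10],    # e.g., organic matter
                         [0.05, 0.50],    # e.g., sulfate
                         [0.10, 0.20]])   # e.g., nitrate
    observed = np.array([3.2, 2.1, 1.0])  # measured concentrations at the receptor

    # Solve for the source contributions that best explain the observations;
    # production CMB codes add uncertainty weighting and non-negativity constraints.
    contributions, *_ = np.linalg.lstsq(profiles, observed, rcond=None)
    print(contributions)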
<urn:uuid:7558a930-4954-4bed-b3e6-b4342bcd6f88>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/beijing-municipal-key-laboratory-of-atmospheric-particulate-monitoring-technology-2106764/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00061-ip-10-171-10-70.ec2.internal.warc.gz
en
0.90489
348
2.765625
3
People in developed countries are living longer. This isn't exactly shocking news, but it is a harbinger of greater reported instances of disease, especially cancer. However, mortality rates from cancer are in decline. What makes that possible is increasingly personalized radiation treatments that harm fewer healthy cells than ever before. Cloud environments featuring more elasticity help that process along by providing hospitals a cost-efficient avenue to run simulations of particular radiation treatments, as highlighted in the presentation by BonFIRE. The biggest key to developing personalized radiation treatments is finding the right angles at which to launch the high-energy beams that will destroy the cancerous cells. Previously, this was often done in ways that harmed a significant portion of healthy cells, weakening the immune system and leaving the person vulnerable to other disease and infection. With enough computing power, however, facilities can run simulations that closely approximate the effect of the beams at given angles. "Thanks to the elasticity of cloud environments," the video noted, "it is possible to control the execution of the treatment simulation to add more virtual machines to the cluster if necessary and return the result at the time initially set by the radiophysicist." Returning simulation results promptly and then executing the chosen treatment is critical to eliminating cancer in its early stages. This infrastructure has been made possible by the eIMRT project, which identifies and calls upon multiple cloud providers and their clusters for each project a single hospital needs to run. Of particular importance here is the ability to run simulations even when certain providers are experiencing bottlenecks or outright failures at their respective data centers. "There is always the risk that the physical infrastructure of the cloud provider fails," as was noted in the presentation, "as happened in the summer of 2012 with two main players in the world cloud market. If the hospital must have the results on a given day, eIMRT offers a fault tolerance solution using multiple cloud providers so that if one fails, the simulation proceeds normally." Cloud computing, in this case through BonFIRE and the eIMRT project, could be a step in providing access to the computing services that small but vital institutions like hospitals need to keep humans healthy.
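The deadline-driven elasticity described above can be sketched very simply in Python (the numbers and function are hypothetical; real schedulers such as eIMRT's are far more sophisticated):

    import math

    def vms_needed(tasks, tasks_per_vm_hour, hours_to_deadline):
        # Size the cluster so the simulation finishes by the radiophysicist's deadline.
        required_rate = tasks / hours_to_deadline
        return math.ceil(required_rate / tasks_per_vm_hour)

    # e.g., 12,000 dose-calculation tasks, 150 tasks per VM-hour, results due in 8 hours
    print(vms_needed(12_000, 150, 8))   # -> 10 virtual machines

If a provider fails mid-run, the same arithmetic can be re-applied against the remaining time to decide how many machines the surviving providers must supply.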
<urn:uuid:2d5d6c36-bd91-41b5-b73e-34e66ff9ad5a>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/05/30/throwing_cancer_on_the_bonfire/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00264-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952718
455
2.984375
3
Meador D.P.,Center for Applied Horticultural Research | Fisher P.R.,University of Florida | Guy C.L.,University of Florida | Harmon P.F.,University of Florida | And 2 more authors. Journal of Environmental Quality | Year: 2016 Petrifilms are dehydrated agar culture plates that have been used to quantify colony forming units (CFU mL-1) of either aerobic bacteria (Petrifilm-AC) or fungi (Petrifilm-YM), depending on substrate composition. Microbes in irrigation systems can indicate biofilm risk and potential clogging of irrigation emitters. The research objective was to compare counts on Petrifilms versus traditional, hydrated-agar plates using samples collected from recirculated irrigation waters and cultures of isolated known species. The estimated count (in CFU mL-1) from a recirculated irrigation sample after 7 d of incubation on Petrifilm-YM was only 5.5% of the count quantified using Sabouraud dextrose agar (SDA) with chloramphenicol after 14 d. In a separate experiment with a known species, Petrifilm-YM did not successfully culture zoospores of Phytophthora cactorum. Isolates of viable P. cactorum zoospores were cultured successfully on potato-dextrose agar (PDA), with comparable counts on a vegetable juice medium supplemented with the antimicrobials pimaricin, ampicillin, rifamycin, pentachloronitrobenzene and hymexazol (PARP-H). The quantification of Xanthomonas campestris pv. Begoniaceae on Petrifilm-AC was not significantly different (p < 0.05) from that on PDA, but was lower than on Reasoner and Geldreich agar (R2A) or with a hemocytometer. The current formulation of Petrifilm-YM is unlikely to be a useful monitoring method for plant pathogens in irrigation water because of its inability to culture oomycetes. However, Petrifilm-AC was an effective method to quantify bacteria and can provide an easy-to-use on-farm tool to monitor biofilm risk and microbial density. © American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. 5585 Guilford Rd., Madison, WI 53711 USA. All rights reserved. Villavicencio L.E.,Center for Applied Horticultural Research | Villavicencio L.E.,Everris Inc. | Bethke J.A.,University of San Diego | Dahlke B.,Center for Applied Horticultural Research | And 2 more authors. Journal of Economic Entomology | Year: 2014 The aloe mite, Aceria aloinis Keifer, causes physiological and morphological alterations in species of Aloe L. We conducted three trials to evaluate the potential of various miticides for curative and preventive control of damage caused by A. aloinis. In the first trial, the efficacy of nine miticides against aloe mite damage was assessed without the removal of infested tissue in Aloe reitziiae Reynolds. Although significant reductions in the number of mites and eggs were found due to the treatments, miticide application did not reduce the amount of plant area damaged or the damage severity. Once the plants are infested, the irreversible damage caused by aloe mite progresses. The second trial analyzed the effects of seven miticides on aloe mite damage on Aloe 'Goliath' plants from which the damaged tissue was removed. Reduced damage severity and mite numbers were observed in all treated plants. To determine whether aloe mite damage could be prevented, the effects of six miticides with and without surfactant were tested on uninfested plants of Aloe spinosissima A. Berger in a third trial. 
Except for chlorfenapyr and fenazaquin, all treatments reduced the area of plant damage, damage severity, and the number of mites 60 wk following three miticide applications. The severity index in the second and third trials suggested that all treated plants would be marketable. Our study demonstrated that there were miticides effective by contact (carbaryl), translaminar (spiromesifen), and systemic (spirotetramat) action, which can be used to cure and to prevent aloe mite plant damage alone or in combination with cultural practices. © 2014 Entomological Society of America.
<urn:uuid:b4443e8e-3f72-407d-bc2b-d7c60b479725>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/center-for-applied-horticultural-research-1219668/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00292-ip-10-171-10-70.ec2.internal.warc.gz
en
0.912097
966
2.6875
3
In which of the following is the size of the buffer used during an online backup defined? a. DB2_backupsz registry variable b. DBM configuration of the database instance. c. DBM.INI file for the database instance d. DB configuration for the database being backed up. Given the following: CREATE TABLE tab1(col1 INT CONSTRAINT notnull CHECK(col1 IS NOT NULL), col2 CHAR(10)) which of the following will enforce uniqueness of col1, which currently does not contain duplicate values? a. Create a primary key on col1. b. Create a unique index on col1 c. Create a cluster index on col1 d. Create a unique constraint on col1 backbufsz: This database manager configuration parameter specifies the size of the buffer used when backing up the database if a value is not explicitly specified in the backup utility. Here is the explanation: If the statement had included a unique constraint, then DB2 would automatically create a unique index. Since it does not, a unique index has to be created explicitly, for example (index name chosen for illustration): CREATE UNIQUE INDEX ux1 ON tab1 (col1). There is a difference between defining a unique constraint and creating a unique index: both enforce uniqueness, but a unique index allows nullable columns and generally cannot be used as a primary key.
<urn:uuid:a6663dcb-3aba-49b1-b757-3d1b6c0359b6>
CC-MAIN-2017-04
http://ibmmainframes.com/about21193.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00200-ip-10-171-10-70.ec2.internal.warc.gz
en
0.76734
263
2.640625
3
Which statement about the GIF and JPEG file formats is true? Armand has been searching the Web for a gift for his son Tomas. Tomas asks to use the computer. Armand wants to ensure that Tomas cannot see which sites he has visited. What should Armand do? Which Internet address class uses 8 bits for the host portion of an IP address and 24 bits for the network portion of the IP address? Which protocol is used at the application layer of the Internet architecture model? Which option does a browser typically allow you to customize after you install the browser software? Which of the following choices is an example of a country-level domain? Which hardware device's only function is to amplify an electronic signal? Which of the following was a major feature of the ARPANET? Which option does a browser software package typically allow you to customize after you install the browser software? Anna wants to search the Internet for information about network cards with an optical interface. She enters the following string in a search engine: "network card optical". This string returns too many results. Which string can Anna enter to narrow her search results?
<urn:uuid:e416aea5-636f-4f9b-8dae-a92670f5d837>
CC-MAIN-2017-04
http://www.aiotestking.com/ciw/category/exam-1d0-410-ciw-foundations/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00256-ip-10-171-10-70.ec2.internal.warc.gz
en
0.909454
236
2.515625
3
10 Security Concerns for Cloud Computing The flexibility, reduced cost, and mobility of cloud computing have made the concept a hot topic. Before implementing this method of computing, however, it is important to consider the security of the "cloud." During this webinar, we will help you understand some of the risks and benefits of cloud computing so you can decide if it is the right solution for you. Global Knowledge instructor Debbie Dahlin has more than 30 years of IT experience as a practitioner and educator. She started her career as a trainer in the US Navy and has since trained military personnel and civilians, locally and abroad, through her efforts as a continuing education instructor, a college adjunct professor, and a high school instructor. When not training, she performs programming and systems integration consulting. Debbie holds a BS in Chemistry, with minors in Math and Computer Science, and is currently completing an MS in Computer Security. Her certifications include CISA, CISSP, CASP, Security+, CWNE, CWNA, CWSP, CWAP, CEH, CHFI, ECSA, and LPT. - The concept of cloud computing - Cloud computing models - Cloud computing providers - The benefits of cloud computing - Security concerns - Types of attacks
<urn:uuid:c1ac57b3-ec2b-41a5-beb6-9c5f733e2b47>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/resources/resource-library/recorded-webinar/10-security-concerns-for-cloud-computing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00192-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952248
258
2.515625
3
IBM researchers have developed a super-efficient chip inspired by the functioning of the human brain. Named TrueNorth, the chip features 5.4 billion transistors arranged in a network of 4,096 neurosynaptic cores, yielding the equivalent of one million neurons and 256 million synapses. Despite being one of the largest CMOS chips ever built, TrueNorth consumes just 70mW during real-time operation. The chip was built for IBM by Samsung Electronics using 28nm process technology. This is the latest breakthrough to come out of the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program, which has received $53 million in funding since 2008 from the Pentagon's Defense Advanced Research Projects Agency. A single-core hardware prototype was announced in 2011, while the software ecosystem, programming language and chip simulator developed in 2013 enabled applications to be created for this new kind of device, so different from the von Neumann architecture that has dominated computing since the 1940s. The effort undertaken by IBM with help from Cornell Tech and researchers around the world is part of a new approach to computation, called cognitive computing, and TrueNorth represents a major step forward for the emerging science. The synaptic chip will be featured in tomorrow's edition of Science. "IBM has broken new ground in the field of brain-inspired computers, in terms of a radically new architecture, unprecedented scale, unparalleled power/area/speed efficiency, boundless scalability, and innovative design techniques. We foresee new generations of information technology systems – that complement today's von Neumann machines – powered by an evolving ecosystem of systems, software, and services," said Dr. Dharmendra S. Modha, IBM Fellow and IBM Chief Scientist, Brain-Inspired Computing, IBM Research. "These brain-inspired chips could transform mobility, via sensory and intelligent applications that can fit in the palm of your hand but without the need for Wi-Fi. This achievement underscores IBM's leadership role at pivotal transformational moments in the history of computing via long-term investment in organic innovation." This is a truly transformative approach, with each core integrating memory, computation and communication. Cores operate without a clock in an event-driven fashion, and the distributed mesh network supports very fast parallel processing that is inherently fault tolerant. Perhaps most impressive is the device's energy profile. The chip has a power density of 20mW/cm2, nearly four orders of magnitude less than today's microprocessors. While it's not quite as efficient as the human brain, it is a major step forward and could pave the way for next-generation supercomputers, which will be expected to deliver 100-1000X more compute power with only a 10X increase in energy draw. To scale beyond the single-chip level, additional chips can be aggregated using a tile approach to build a foundation for future neurosynaptic supercomputers. IBM has already succeeded in building a 16-chip system with sixteen million programmable neurons and four billion programmable synapses. The next step is creating a system with one trillion synapses that requires only 4kW of energy. After that, IBM plans to build a synaptic chip system with ten billion neurons and one hundred trillion synapses that consumes only one kilowatt of power and occupies less than two liters of volume. 
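TrueNorth's programming model is proprietary, but the event-driven, spiking style of computation it embodies can be loosely illustrated with a textbook leaky integrate-and-fire neuron, sketched here in Python (an analogy only, not IBM's actual architecture or API):

    def lif_neuron(weighted_events, threshold=1.0, leak=0.9):
        # Accumulate incoming events into a decaying membrane potential;
        # emit a spike and reset whenever the threshold is crossed.
        potential, spikes = 0.0, []
        for weight in weighted_events:
            potential = potential * leak + weight
            if potential >= threshold:
                spikes.append(1)
                potential = 0.0
            else:
                spikes.append(0)
        return spikes

    print(lif_neuron([0.4, 0.5, 0.3, 0.0, 0.9, 0.6]))   # -> [0, 0, 1, 0, 0, 1]

The key efficiency idea is visible even in this toy: work happens only when an event arrives, rather than on every tick of a global clock.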
IBM has commercial ambitions for the new hardware and software ecosystem that extend to mobile, cloud, supercomputing and distributed sensor applications. Targeted fields for the neuro-inspired chips include public safety, vision assistance, home health monitoring and transportation.
<urn:uuid:f38301bc-b18f-4a8e-8969-6f98522bbd76>
CC-MAIN-2017-04
https://www.hpcwire.com/2014/08/07/brainy-ibm-chip-packs-one-million-neuron-punch/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00036-ip-10-171-10-70.ec2.internal.warc.gz
en
0.918551
743
3.1875
3
A new report on green technologies estimates that their use could limit total data center greenhouse gas emissions by 13 percent through 2016. The report, "Green Data Centers" from Pike Research, explores global green data center trends with regional forecasts for market size and opportunities. Data center efficiencies – and ways to reduce their energy footprints – continue to top IT executives' agendas. Budgets remain tight and companies dislike having to spend their hard-earned cash on operational expenses that do little for top-line growth (except, well, keep the lights on), and data center operators are finding it tough to keep energy demand in check while continuing to grow their capacity. The rising price of electricity, greenhouse gas emissions, IT improvements, cloud computing, virtualization, large advances in cooling techniques and improvements in monitoring and management solutions are all driving the need to reduce energy consumption, according to a press release issued by Pike Research about the new study. Those are combined with the fact that today's data center industry consumes around 1.5% of the world's energy. According to the new report, if energy-efficient data center technologies and best practices are widely adopted, the growth of greenhouse gas (GHG) emissions from data centers could be significantly reduced over the next several years. The research firm says that if current trends continue, GHG emissions from data centers are expected to total 1,326 million tons of carbon dioxide equivalent through 2016. But green data center best practices could reduce that total to 1,156 million tons, a difference of 13% compared to the business-as-usual trend, according to the firm's analysis. "The drive toward green data centers is a response to business requirements to reduce costs across the company as well as a response to environmental concerns," research director Eric Woods said in a prepared statement. "Within the data center environment, that translates to a mandate to reduce energy consumption, which in turn is driving innovation. Data center operators are exploring new ideas related to business models, facility construction, layout and design, air flow dynamics, new technology, and monitoring and management tools." Pike Research forecasts that the green data center will offer an annual market opportunity that exceeds $45 billion worldwide by 2016. The Asia Pacific region is projected to have the highest revenue growth through 2016, with a compound annual growth rate (CAGR) of just under 30% between 2011 and 2016. Double-digit revenue growth is also projected for Europe and North America (CAGRs of almost 27% for both markets). Interestingly, tech companies that are seemingly tackling the environmental impact of their data centers are taking heat from Greenpeace. The international environmental organization this month released its report, How Clean is Your Cloud?, on energy consumption and energy sourcing in the data centers of some of the largest tech companies. The report looks at the data center deployments of 14 of the leading players in the market. You can download the full report here. Greenpeace doesn't hold back, and few – if any – of the technology companies included in the report appear exceptional in their green efforts. Yahoo fared the best, while Apple took a tough beating. I'll delve more into the report in a blog to follow. Meanwhile, I'd like to hear from you… what are you all doing in your organizations to "green" up your data center act? And do you think it matters?
<urn:uuid:06ab952b-5acc-4087-ae9d-1ed0daddb4ae>
CC-MAIN-2017-04
http://www.itworld.com/article/2725877/data-center/greening-up-data-centers-could-matter.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00430-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947698
694
2.75
3
Google, joining forces with CERN, The LEGO Group, National Geographic and Scientific American, has announced the 2012 Google Science Fair, an online competition open to 13-to-18-year-olds around the world. Budding scientists can submit projects (pose a question, develop a hypothesis, conduct science experiments to test it) as individuals or in groups of up to three people. The deadline is April 1 and parental consent is required. Prizes include a $50,000 college scholarship, a 10-day trip to the Galapagos Islands and more. Judges include Google VP and Internet pioneer Vinton Cerf, CERN Director Steve Myers, oceanographer Sylvia Earle and others. The top 3 2011 Google Science Fair winners -- all girls -- were recognized for innovations in areas such as ovarian cancer research and lung health, as well as for making grilled chicken safer to eat. The science fair attracted more than 7,500 entries from 90-plus countries. Last year, only English submissions were accepted, but this year submissions will be accepted in 13 languages. 90 regional finalists will be announced in May, 15 finalists will be announced in June and winners will be unveiled July 23 in Mountain View.
<urn:uuid:ad161002-3caa-4c9c-a1f2-54021e2dac17>
CC-MAIN-2017-04
http://www.networkworld.com/article/2221457/google-science-fair-back-for-2nd-year.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00366-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9618
242
2.59375
3
As computers become more powerful, the cooling systems they rely upon to beat the heat must become more sophisticated. Recent advancements from IBM and Sandia National Laboratories show how far technology has come from the old days, when a single fan was enough to cool down a computer tower. Europe's fastest supercomputer, called SuperMUC, will be one of the first facilities to use a hot-water cooling system. The hot water flowing in the cooling system can reach 113 degrees F. Built by IBM, the supercomputer and cooling system at the Leibniz Supercomputing Centre in Germany consume 40 percent less energy than air-cooled facilities, according to the company. IBM explains how the system works in this video. Source: IBM Labs What's 10 times smaller, yet 30 times more efficient than a typical CPU cooling fan? The "Sandia Cooler," developed by Sandia National Laboratories. Sandia says its new cooler design combines the functionality of cooling fins with a centrifugal impeller. Spinning at 2,000 RPM just a thousandth of an inch from the CPU, the device is much more effective at eliminating heat and doesn't clog with dust. Source: Sandia National Laboratories
<urn:uuid:690f7165-d574-45ab-97a3-a1119dfba081>
CC-MAIN-2017-04
http://www.govtech.com/technology/Two-Cool-Technologies-Cooling-Computers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00484-ip-10-171-10-70.ec2.internal.warc.gz
en
0.925877
253
3.984375
4
Significant investments have flowed into Canada's solar sector in the last four years. Photovoltaic power capacity in Canada grew at an annual rate of 25% between 1994 and 2008. In recent years, growth was 49% in 2011 and 50% in 2012, due to Ontario incentive programs. The Canadian solar market is growing by leaps and bounds, with the government estimating that by 2025 solar energy could contribute as much as five per cent of the country's energy needs. With fossil fuel prices fluctuating continuously, and disasters like Fukushima and Chernobyl raising serious questions about nuclear power, renewable sources of energy are the answer to the world's growing need for power. Hydro power raises environmental concerns of its own, so apart from water, the other renewable source of energy available in abundance is solar. The Earth continuously receives about 174 petawatts of incoming solar radiation, making the sun the largest energy source available to the Earth. Other resources like oil, gas, water and coal require many steps and a great deal of effort to produce electricity; solar farms can be established comparatively easily, and the electricity they produce is fed directly into the grid. The Canada Solar Power Market is estimated to reach $XX billion in 2020 with a CAGR of 9.1% from 2014 to 2020. Moreover, global annual solar power production is estimated to reach 500 GW by 2020, from 40.134 GW in 2014, making this market one of the fastest growing ones. Falling costs; stable policy and regulation; downstream innovation and expansion; and various incentive schemes for the use of renewable energy for power generation are driving the solar power market at an exponential rate. On the flip side, Canada does not have optimal access to sunlight owing to the high latitude of much of the country; high initial investment, the intermittency of solar as an energy source, and the large installation areas required to set up solar farms are restraining the market's growth. In recent years, a great deal of research has gone into making production easier and cheaper, and into making solar panels smaller and more customer-friendly. Considerable effort is also being put into increasing the efficiency of solar panels, which historically has been very low. Techniques such as nanocrystalline solar cells, thin-film processing, metamorphic multijunction solar cells, polymer processing and many more will aid the future of this industry. This report comprehensively analyzes the Canada Solar Power Market by segmenting it based on type (Concentrating type, Non Concentrating type, Fixed Array, Single Axis Tracker, and Dual Axis Tracker) and by materials (Crystalline Silicon, Thin Film, Multijunction Cell, Adaptive Cell, Nano crystalline, and others). Estimates in each segment are provided for the next five years. Key drivers and restraints affecting the growth of this market are discussed in detail. The study also elucidates the competitive landscape and key market players.
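For reference, the compound annual growth rate quoted in reports like this one is (end/start)^(1/years) - 1; a quick check in Python against the 2014 and 2020 figures above shows why the two growth numbers differ:

    start_gw, end_gw, years = 40.134, 500.0, 6        # 2014 to 2020
    cagr = (end_gw / start_gw) ** (1 / years) - 1
    print(f"{cagr:.1%}")   # roughly 52% per year implied by the volume figures;
                           # the 9.1% CAGR cited above refers to market value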
<urn:uuid:bc0153cb-70b7-4880-b4a5-b75309a9f434>
CC-MAIN-2017-04
https://www.mordorintelligence.com/industry-reports/canada-solar-power-market-industry
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00026-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947876
587
2.625
3
A cluster of 1,716 Sony PlayStation 3 games consoles has been used in a real-life "call of duty" at the US Air Force Research Laboratory. The Condor Cluster uses the consoles to create a heterogeneous supercomputer capable of 500 trillion calculations per second, according to Mark Barnell, director of high performance computing at the Air Force Research Laboratory. "It is about the 35th or 36th fastest supercomputer in the world, but with some upgrades in the next eight or nine months we could boost this to, say, the 20th fastest, and at the same time make it the greenest supercomputer." The $2m (£1.27m) machine is one-fortieth of the price of equivalent supercomputers and will be used to research computational intelligence. The computer could be used to read millions of lines of data and correct human errors. In theory it could be taught to learn by filling in missing information itself. According to Barnell, the Condor Cluster can read up to 30 pages of information per second with 99.9% accuracy, even if 20% of the actual information is missing.
<urn:uuid:9296bc2c-3da6-4ef7-94cf-ebc00dd82805>
CC-MAIN-2017-04
http://www.computerweekly.com/news/1280094623/Call-of-duty-for-PS3-in-US-Airforce-Condor-supercomputer
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00420-ip-10-171-10-70.ec2.internal.warc.gz
en
0.90636
259
2.734375
3
Protecting the Personal Information of K-12 Students Are your students a target for identity theft? Think of all the applications, cloud services, and educational sites your teachers and students log into on any given day. How many user names, profiles, and passwords do your students need to keep track of and remember? As school leaders and parents begin to realize how much private student data these educational technology tools have collected, there has been a strong push to restrict the educational technology industry from using personal information collected from students. In 2015, 15 states passed 28 laws to protect data privacy. California, to name one state, passed the Student Online Personal Information Protection Act, which prohibits companies working with schools from selling or disclosing student personal information or marketing to students. Whether your school is in California or not, it is important for administrators and teachers to understand the importance of protecting children's privacy. In addition to passing laws to protect student data privacy, schools must practice due diligence in evaluating technology service providers. Schools that provide awareness and training to faculty and staff on data security and on how to evaluate educational websites and applications do the students in their care a service. Free resources like Common Sense Media, which rates educational websites and applications, can save educators a lot of time and help them make better choices when implementing a new solution for their students. For the record, All Covered Education has signed the Student Privacy Pledge. Flanigan, Robin L. "Why K-12 Data-Privacy Training Needs to Improve." Education Week, October 21, 2015. Web. Accessed July 20, 2016. Posted by Judy Nguyen, All Covered Teacher & Learning Consultant/Trainer
<urn:uuid:8369449e-12c5-44aa-9413-99683fc12f89>
CC-MAIN-2017-04
https://www.allcovered.com/blog/protecting-personal-information-k12-students-295/1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00055-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941775
347
2.6875
3
Electronic Waste Recycling Resource Center E-waste is a popular, informal name for electronic products that are discarded. Computers, monitors, televisions, DVD players, stereos, and fax and copy machines are examples of common electronic products. By recycling your unwanted and broken electronics, you can help reduce e-waste. Depending upon your state, some electronic equipment may be covered by a manufacturer "take back" or electronic recycling program. Please check with your state's environmental agency for more information about recycling programs in your area. To help you find your state's environmental agency, click on your state.
<urn:uuid:5d35d93a-1661-4f30-a8ac-d80023f02371>
CC-MAIN-2017-04
https://www.cdwg.com/content/about/recycle/recycle.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00385-ip-10-171-10-70.ec2.internal.warc.gz
en
0.904975
125
2.53125
3
Nov 95 Level of Govt: State. Function: Environmental Protection. Problem/situation: Effective response to environmental management was inhibited by multiple databases in the Massachusetts Executive Office of Environmental Affairs. Solution: EOEA developed an integrated environmental management system connecting each division. Jurisdiction: Massachusetts. Vendors: EDS, Oracle. Contact: John Rodman, assistant secretary of Environmental Affairs, 617/727-9800 x217. Bill Loller G2 Research Massachusetts' Executive Office of Environmental Affairs (EOEA) faced a series of problems in 1988. They boiled down to this: a lack of coordination among the myriad subordinate agencies involved in monitoring environmental processes was obstructing the agency's ability to assess environmental quality holistically and accurately. Within the EOEA, the Department of Environmental Protection maintained separate databases to support the divisions managing the air, land and water quality programs. The separation of these activities meant that multiple inspectors visited a single facility to approve air, land and water permits, and that key environmental information was stored in separate databases. These factors created inefficiency within the department and confusion in the private sector. Inspectors often entered the same data into each of the three databases, and companies were forced to work with numerous engineers and inspectors in order to receive the permits necessary to legally operate. In 1988, the Commissioner of the Department of Environmental Protection developed a new environmental agenda and reorganized the programs under three new bureaus. The goal was to develop comprehensive, not segmented, environmental protection; cross-media inspections and compliance strategies; and inter-agency data sharing. Getting Connected In order to successfully fulfill these objectives, EOEA required a new environmental management system which would integrate environmental data and connect each division. The development and implementation of an Environmental Protection Integrated Computer System (EPICS) ensured that the new operational framework for the department would succeed. The implementation of an integrated system became a priority in 1989. In response to the Waste Prevention Facility-wide Inspections to Reduce the Source of Toxins (FIRST) initiative, the department solicited proposals from the vendor community for an integrated environmental database. EDS was selected based on the technical merits of the proposed system as well as the commitment of EDS to EPICS implementation and future planning. The department had previously worked with EDS for hardware and software in addition to consulting services. EPICS revolves around the facility master file (FMF), the comprehensive data model which can be accessed by all programs within the Department of Environmental Protection. CASE methodology was utilized in developing and designing this integrated data model. Essentially, the FMF integrated ten individual databases from the following areas: hazardous waste transporters, hazardous waste transfer/storage/disposal, hazardous waste handlers, air quality, solid waste, industrial waste water, water pollution control, water supply, water management, and cross connections. The integration of these databases eliminated data redundancy and provided all programs with a complete environmental picture of a facility and the status of its regulatory compliance. A single Oracle database stores all of the FMF information. 
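Conceptually, the facility master file keys every program's data to a single facility record, something like the following sketch (hypothetical field names, expressed in Python for illustration; the actual Oracle schema was not published):

    # One integrated record per facility replaces ten program-specific databases.
    facility_record = {
        "facility_id": "MA-04217",   # hypothetical identifier
        "air_quality": {"permit": "A-981", "last_inspection": "1994-06-02"},
        "water_pollution": {"permit": "W-407", "violations": 0},
        "hazardous_waste": {"handler_class": "small-quantity generator"},
    }
    # A cross-media compliance report then reduces to a single keyed lookup.
    print(facility_record["facility_id"], sorted(facility_record))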
The regional offices located across the state and within the Boston area access EPICS via a network of LANs and WANs. In fact, approximately 2,000 employees of EOEA at over a dozen locations throughout the state are connected by the network to the EPICS database. From an operational standpoint, cross-trained inspectors are now able to perform all necessary functions at one facility during a single trip. A consolidated report is generated from the FMF which details the facility's entire compliance history and permit fees. This streamlines the permitting and inspection process and also provides the facility with a single point of contact if problems should arise. Data on individual programs are also stored on EPICS through the program subsystems. Inter-Agency Data Sharing The department has found increased functionality with EPICS with respect to inter-agency data sharing and improved response to potentially serious situations. In some cases, the Department of Health will contact the department about an increase in illnesses in a particular area. The Department of Environmental Protection can quickly assess the environmental status of that area through the FMF, which contains information on air, land and water pollution sources. Similarly, if the department suspects a problem, data contained in the FMF provides the department with a quick way to analyze a facility's environmental compliance and target potential violators for inspections. Moreover, EPICS identifies, tracks, and monitors the use of toxic chemicals within the state. This feature allows the department to pinpoint problem areas and also make recommendations to companies about alternative non-toxic chemical substitutions. The department expanded the Toxic Use Reduction subsystem to track the total volume of toxic chemicals within the state, as well as the manner in which the chemicals are transported. The Department of Environmental Protection is extremely satisfied with the EPICS system. Once the department had a system that complemented the organizational framework, it experienced a large improvement in productivity through more efficient and streamlined business processes as well as better customer service. Data Manager Douglas Priest pointed out that "just having the information available at a moment's notice is a major savings." EPICS also presents the opportunity to expand into other areas. The department believes that public access may become an important issue in the near future. Moreover, the department highly values EPICS based on two key issues: revenue generation and environmental response and protection. According to Victoria Phillips, fees coordinator for EPICS, the automated billing module for permit fees raised $7 million in 1994. The fees are used to hire inspectors and make additions to the system. Skip Russell, a senior systems analyst with the department, noted that "without the possibility of fee generation, the EPICS system would have died. Fee generation allows continued development of the system." Secondly, through the facility master file, EPICS has dramatically improved the department's response to environmental inquiries or problems and its enforcement against violators. Thus, these two factors have made EPICS an effective method of improving the state's protection of the environment. Overcoming Resistance The department experienced resistance to EPICS at the beginning of the project. 
However, after the inspectors realized the tremendous benefits and power of the system, EPICS came to be viewed very favorably by the entire department. As the department adds more modules to the system, an increase in technical personnel will be required to maintain it. In the near future, the department plans to implement new components for EPICS to combine the power of such a comprehensive database with other available technologies. In particular, the department is planning to integrate EPICS with a Geographic Information System on which the environmental data can be layered in a mapped format. The department is also interested in implementing Electronic Data Interchange (EDI) to more reliably transmit and receive data from other environmental databases, namely the U.S. EPA's, as well as data from the companies that are being regulated. On a smaller scale, the department has already increased the system's reporting features through a reporting tool that allows end-users to query and analyze data without having to go through the MIS group. The department is also evaluating the implementation of a GUI front end. These additions, particularly GIS and EDI, will have an enormous impact on the department. Both will increase the functionality of the system and reduce administrative costs within the department and EOEA. The Massachusetts Executive Office of Environmental Affairs was named winner of the 1994 Computerworld Smithsonian award for the EPICS project. The award recognizes the technology industry's most creative and innovative uses of information technology that benefit a wide spectrum of society.
<urn:uuid:b14fc77e-2fea-4384-b9d9-865108c73bf3>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/Massachusettes-Integrates-Environmental-Data.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00229-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947534
1,526
2.59375
3
The San Diego Supercomputer Center launched a public cloud system for universities in the area, designed specifically to run on commodity hardware with high-performance solid-state drives. The center, which currently holds 5.5 PB of raw storage, is open to educational and research users in the University of California system. But perhaps the most notable feature of the system is that it is built on OpenStack Swift object-based storage, important for the research facilities SDSC plans to serve. Object-based storage is not a new thing for cloud providers; indeed, it exists as the foundation for stores built by Amazon, Google, and Microsoft. However, the fact that an open source initiative like OpenStack forms the backbone of this cloud-based research network offers cost and customization advantages. Part of that customization includes supporting both the Rackspace/Swift API and the standard Amazon Simple Storage Service (S3) toolsets. "My group is a group of three," said SDSC Storage Platform Manager Steve Meier on the advantages of an open source platform permeating the research network. "We manage the infrastructure. We do some development and keep things going, but we didn't have a large team to build and support clients and run the infrastructure as well. If you have users that are currently using S3, and they have scripts or command-line clients or other ways to manipulate their data upload, download, search, theoretically they could now point that tool at SDSC's cloud storage and it would just work." The system was actually born out of SDSC's movement away from tape-based storage, a system used by many for long-term data. However, the tape method, whose data is slow and expensive to recover, does not jibe well with the realization by research institutions that all data could be useful data. "The best use case for tape is 'write once, read never,'" Meier said. "Our researchers archive and look at data more often," Meier said. "When you have lots of accesses coming from reading back, and then [you have to] keep up with all the writes, there are additional costs to have enough hardware resources to also validate the tapes. All of those considerations made tape an expensive technology for us…With object storage, you can use relatively cheap hardware. You can spread your investment out." The inexpensive hardware to which Meier referred includes 14 Aberdeen x539 storage servers, each equipped with 24 Hitachi 2 TB near-line SAS drives. The promise is that SDSC's cloud-based research network will be relatively inexpensive, coming in at a set rate of $0.0325 per month for a GB of storage. That translates to $32.50 per TB per month, or $390 per TB per year. Meier's hope was not to compete with major cloud providers like Amazon and Google but rather to build an inexpensive option for researchers. "We never intended to directly compete [with major cloud providers]. As a non-profit, that's not our charter," Meier said. "Our competition was to try to come up with technology that gave our researchers competitive advantages to get grants and have technologies that they could use to help further their research."
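Meier's point about S3-compatible tooling can be illustrated with a short boto-style sketch in Python (the endpoint, credentials, and bucket below are placeholders, not SDSC's actual service details):

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://swift.example.edu",    # placeholder for an SDSC-style endpoint
        aws_access_key_id="RESEARCHER_KEY",
        aws_secret_access_key="RESEARCHER_SECRET",
    )
    # The same script that targets Amazon S3 works once pointed at the new endpoint.
    s3.upload_file("results.dat", "my-lab-bucket", "runs/results.dat")

Keeping the S3 interface means researchers' existing upload, download, and search scripts need only a configuration change, which is exactly the low-friction migration Meier described.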
<urn:uuid:d0f84ef2-f95e-40a7-9a15-f6227304f3ae>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/06/06/openstack_and_the_sdsc_research_cloud/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00349-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960379
683
2.640625
3
Bloetscher F.,Florida Atlantic University | Stambaugh D.,Calvin | Hart J.,Calvin | Cooper J.,Calvin | And 5 more authors. Journal of Water Reuse and Desalination | Year: 2013 The City of Pembroke Pines is embarking on an alternative water supply (AWS) project that includes the potential of using treated wastewater for aquifer recharge. The concept includes the use of reverse osmosis membranes, ultraviolet disinfection and advanced oxidation processes downstream of activated sludge and microfiltration. One of the problems is that the permeate leaves the process grossly under-saturated because, with respect to minerals, virtually everything in the water is removed by the reverse osmosis membranes. The practical natural minimum hardness level for water is 40 mg L-1 as CaCO3, while the permeate water was <7 mg L-1. As a result, a post-treatment system needed to be designed to restore minerals to the water to achieve stability, so that the water does not dissolve metals or other piping and treatment-tank materials. Traditionally, reverse osmosis plants for potable water systems use caustic soda, polyphosphates, orthophosphates and other chemicals to address the stability issue. These are costly, and for an aquifer recharge project the costs seemed high. For this project, the research focused on alternative solutions to restore hardness, alkalinity and pH using lime, limestone and kiln dust. All three resolved the pH and stability issues for the pilot process. © IWA Publishing 2013.
<urn:uuid:469d99a4-3610-4615-ad7f-38091db6c974>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/calvin-1586229/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00349-ip-10-171-10-70.ec2.internal.warc.gz
en
0.89554
327
2.921875
3
You may or may not be old enough to remember the TV commercial for margarine whose tag line was "It's not nice to fool Mother Nature." But that commercial came to mind as I was reading a report out recently that looked at the viability of large climate engineering projects that would basically alter large parts of the atmosphere to reduce greenhouse gases or reverse some of the effects of climate change. The congressional watchdogs at the Government Accountability Office took a look at the current state of climate engineering science and technology, which generally aims at either carbon dioxide removal or solar radiation management. Carbon dioxide removal would reduce the atmospheric concentration of carbon dioxide (CO2), reducing greenhouse warming, while solar radiation management would either deflect sunlight before it reaches Earth or otherwise cool Earth by increasing the reflectivity of its surface or atmosphere. The GAO gathered experts' views of the future of U.S. climate engineering research and potential public responses to climate engineering. Some of their key findings were that: - Climate engineering technologies are not now an option for addressing global climate change, given our assessment of their maturity, potential effectiveness, cost factors, and potential consequences. Experts told us that gaps in collecting and modeling climate data, identified in government and scientific reports, are likely to limit progress in future climate engineering research. - The majority of the experts consulted supported starting significant climate engineering research now. Advocates and opponents of research described concerns about its risks and the possible misuse of its results. Research advocates supported balancing such concerns against the potential for reducing risks from climate change. They further envisioned a future federal research effort that would emphasize risk management, have an international focus, engage the public and national leaders, and anticipate new trends and developments. - A survey of the public suggests that the public is open to climate engineering research but is concerned about its possible harm and supports reducing CO2 emissions. So what exactly are some of the major climate change projects that could emerge somewhere in the future? In the carbon dioxide removal world, the GAO said most technologies can be characterized as predominantly land-based or predominantly ocean-based. "Land-based technologies include direct air capture, bioenergy with CO2 capture and sequestration, biochar and other biomass-related methods, land-use management, and enhanced weathering. Direct air-capture systems attempt to capture CO2 from air directly and then store it in deep subsurface geologic formations. Bioenergy with CO2 capture and sequestration would also store CO2 underground, and biochar and other biomass-related methods would sequester carbon in soil or bury it. Land-use management practices we reviewed would enhance natural sequestration of CO2 in forests. Enhanced weathering would fix atmospheric CO2 in silicate rocks in a chemical reaction and then store it as either carbonate rock or dissolved bicarbonate in the ocean. 
Ocean-based technologies would fertilize the ocean to promote the growth of phytoplankton to sequester CO2," the GAO stated.

The GAO said seven solar radiation management technologies have been reported in sufficient detail for the agency to assess them as candidates for climate engineering. Two would be deployed in the atmosphere: one scattering solar radiation back into space using stratospheric aerosols, the other reflecting solar radiation by brightening marine clouds. Two would be deployed in space: one scattering or reflecting solar radiation from Earth orbit, the other scattering or reflecting solar radiation at a stable position between Earth and the Sun. The three remaining technologies would artificially reflect additional solar radiation from Earth's surfaces: covered deserts, more reflective flora, or more reflective settled areas, according to the GAO.

In the end the GAO stated that since most climate engineering technologies are in early stages of development, none could be used to engineer the climate on a large scale at this time. "Considerable uncertainty surrounds the potential effectiveness of the technologies we reviewed, in part because they are immature. Additionally, for several proposed carbon dioxide technologies, the amount of CO2 removed may be difficult to verify through modeling or direct measurements."

From the GAO: "Both research advocates and opponents cautioned that climate engineering research carries risks either in conducting certain kinds of research or in using the results (for example, deploying potentially risky technologies that were developed on the basis of the research). Some also noted that other nations are conducting research and warned that, in the future, a single nation might unilaterally deploy a technology with transboundary effects. The research advocates suggested managing risks from climate engineering by, for example, conducting interdisciplinary risk assessments, developing norms and best practice guidelines for open and safe research, evaluating deployment risks in advance and, potentially, as we discuss below, conducting joint research with other countries. Some advocates also indicated that rigorous research could help reduce risks from the uninformed use of risky technologies (as, for example, might occur in a perceived emergency) or emphasized the need to weigh potential risks from climate engineering against risks from climate change."
<urn:uuid:5bf3dc94-952f-4d02-a67e-145ee2cf5427>
CC-MAIN-2017-04
http://www.networkworld.com/article/2220511/data-center/will-climate-engineering-ever-be-ready-for-prime-time-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00073-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95092
1,044
3.328125
3
The idea is cool enough - build a reusable aircraft-like system that could easily and relatively cheaply launch satellites into orbit. The catch is that the system must do that for somewhere in the $5 million-per-launch range and, oh yeah, fly at well over Mach 10. As you might have guessed, the project to develop such a system is being put forth by the Defense Advanced Research Projects Agency (DARPA), which will more fully detail the program, known as the Experimental Spaceplane (XS-1), in October.

From DARPA: "The objective of the XS-1 program is to design, build, and demonstrate a reusable Mach 10 aircraft capable of carrying and deploying an upper stage that inserts 3,000-5,000 lb. payloads into low earth orbit (LEO) at a target cost of less than $5M per launch. The XS-1 program envisions that a reusable first stage would fly to hypersonic speeds at a suborbital altitude. At that point, one or more expendable upper stages would separate and deploy a satellite into Low Earth Orbit. The reusable hypersonic aircraft would then return to earth, land and be prepared for the next flight. Modular components, durable thermal protection systems and automatic launch, flight, and recovery systems should significantly reduce logistical needs, enabling rapid turnaround between flights."

DARPA said that the long-term intent is for XS-1 technologies to be transitioned to support not only next-generation launch for government and commercial customers, but also global-reach hypersonic and space-access aircraft. The lofty technical challenges that will be part of the XS-1 program include:

- A reusable first stage vehicle designed for aircraft-like operations
- Robust airframe composition leveraging state-of-the-art materials, manufacturing processes, and analysis capabilities
- Durable, low-maintenance thermal protection systems that provide protection from temperatures and heating rates ranging from orbital vacuum to atmospheric re-entry and hypersonic flight
- Reusable, long-life, high thrust-to-weight, and affordable propulsion systems
- Streamlined "clean pad" operations dramatically reducing infrastructure and manpower requirements while enabling flight from a wide range of locations

For the first round of testing the XS-1, DARPA says it wants to see the spacecraft:

- Fly ten times in ten days
- Fly to Mach 10 at least once
- Launch a representative payload to orbit at least once

"We want to build off of proven technologies to create a reliable, cost-effective space delivery system with one-day turnaround," said Jess Sponable, DARPA program manager heading XS-1. "How it's configured, how it gets up and how it gets back are pretty much all on the table - we're looking for the most creative yet practical solutions possible."

Commercial, civilian and military satellites provide crucial real-time information essential to providing strategic national security advantages to the United States. The current generation of satellite launch vehicles, however, is expensive to operate, often costing hundreds of millions of dollars per flight. Moreover, U.S. launch vehicles fly only a few times each year and normally require scheduling years in advance, making it extremely difficult to deploy satellites without lengthy pre-planning. Quick, affordable and routine access to space is increasingly critical for U.S. Defense Department operations.
In the end the idea is to lower satellite launch costs by developing a reusable hypersonic unmanned vehicle with costs, operation and reliability similar to traditional aircraft, Sponable stated.

The agency noted that it already has one quick, cheap satellite launch program working. The Airborne Launch Assist Space Access (ALASA) program looks to develop an aircraft-based satellite launch platform for 100 lb. payloads and to build low-cost, small satellites that could rapidly be launched into any required orbit, a capability not possible today from fixed ground launch sites, DARPA stated. Boeing, Lockheed Martin and Virgin Galactic are working on separate offerings for that project.

DARPA also has the Integrated Hypersonics program aimed at researching and developing what it calls "next-generation technologies needed for global-range, maneuverable, hypersonic flight at Mach 20 and above for missions ranging from space access to survivable, time-critical transport to conventional prompt global strike. The program seeks technological advances in the areas of: next generation aero-configurations; thermal protection systems and hot structures; precision guidance, navigation, and control; enhanced range and data collection methods; and advanced propulsion concepts."

DARPA has in the past equated the development of hypersonic equipment to the development of stealth technology in the 1970s and 1980s. The strategic advantage once provided by stealth technology is threatened as other nations' abilities in stealth and counter-stealth improve. "Restoring that battle space advantage requires advanced speed, reach and range. Hypersonic technologies have the potential to provide the dominance once afforded by stealth to support a range of varied future national security missions," DARPA said.

There are a ton of technological issues to be addressed, one of the biggest being the heat generated by extreme speeds. At Mach 20, vehicles flying inside the atmosphere experience intense heat, exceeding 3,500 degrees Fahrenheit, which is hotter than a blast furnace capable of melting steel, as well as extreme pressure on the shell of the aircraft, DARPA stated. The thermal protection materials and hot structures technology area aims to advance understanding of high-temperature material characteristics to withstand both high thermal and structural loads. Another goal is to build structural designs and manufacturing processes to enable faster production of high-speed aeroshells, DARPA stated.
<urn:uuid:85cbe664-eee5-402a-bf0c-d77143c1768c>
CC-MAIN-2017-04
http://www.networkworld.com/article/2225388/security/darpa-hunts-airplane-like-spacecraft-that-can-go-mach-10.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00073-ip-10-171-10-70.ec2.internal.warc.gz
en
0.920956
1,177
2.953125
3
It turns out, moths are actually pretty good at driving—at least when scientists hook them up to the right equipment. In a recent study published in JoVE Video Journal, researchers at the University of Tokyo leveraged a moth’s acute sense of smell to let the insects “steer” a vehicle toward a specific odor. It’s part of their ongoing research into making robots that mimic the insect’s odor-tracking skills. How does a moth driving a tiny car fit into their plan? This isn’t actually the core part of their research. The researchers are primarily working to build a model of a moth’s brain, using data to figure out how the insect localizes odors and translates those sensations into movement. But once they finish that model, they’ll need a way to test it, and that’s where the moth-driven robot comes in. The experiment proves a moth’s sensory-motor system can effectively steer not only a living creature, but also a mechanical robot. And it also shows how that future brain model would be expected to perform when connected to a robotic car. Watch the video here for more details about the research, and the significant practical applications of odor-tracking robots.
<urn:uuid:b88b1272-af9e-4794-b0c8-3fdc478acbad>
CC-MAIN-2017-04
http://www.nextgov.com/emerging-tech/2017/01/moths-are-driving-miniature-cars-help-scientists-build-odor-tracking-robots/134450/?oref=ng-channelriver
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00559-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931285
265
4.0625
4
Mobile browsing as a way of accessing the Internet has been building in popularity over recent years. Since 2014, the number of people who prefer to access the online world through their smartphone has surpassed the number of individuals using a wired desktop, which has therefore increased the demand for reliable network capacity. One potential solution identified by experts is "microcell technology" - a strategy that allows for coverage of smaller areas while providing better performance, increased access points, improved throughput, and reliability.

Cellphones now offer a wide range of data applications and technical solutions, prompting operators to search for opportunities to promote wide-area coverage for high-rate data. What were once regarded as understandable gaps in coverage have become the focus of carriers attempting to expand their reach. Because of this, various carriers embrace femtocells and microcells as a way of helping individuals and businesses gain better access.

Why Use Microcells?

The traditional "macrocell" is the largest cell in a mobile phone network and provides the widest area of coverage. Often, the antennas for macrocells are located on existing structures like rooftops and antenna towers, since they must be high enough to reduce obstruction from buildings or terrain. In technology, the word "microcell" applies to any cell that is smaller in size than the traditional macrocell. Fundamentally, a microcell covers less than two kilometers, while picocells and femtocells offer coverage below 200m. Picocells, microcells, and femtocells are each versions of "small cells", which use cable and DSL connections to broadcast cellular service across a small space.

Boosters for cellphone signals amplify an existing cellular signal for broadcasting, and so require that a signal already be present. Contrastingly, microcells can create their own signals, allowing for placement in areas with poor reception.

Macrocells often leave gaps between areas of coverage - imagine pouring basketballs into a bathtub; there would be quite a few empty spaces. However, even if you could cover an entire area with macrocells, the throughput at locations in the in-between areas suffers from low signal strength, which leads to a poor connection. The installation of microcells throughout these areas could resolve such issues.

Concerns Surrounding Microcell Deployment

Despite the fact that a number of major carriers already use microcells, a few concerns exist. For instance, in order to install microcells within a densely populated area, carriers will need access to locations such as civic centers, stadiums, and hospitals. Sometimes, this need leads to objections from government personnel, who question why private carriers should have access to public space as a way of deploying their technology.

Perhaps the most significant concern when deploying microcells in any location is whether an appropriate site will be available in which to install the equipment and connect it to the rest of the network. Unfortunately for many carriers, it has become difficult to find a reasonable location in which to deploy a macrocell, due to the visual appearance of the antenna towers that traditional large-area cell sites need to operate. With microcell positioning, the use of antenna towers isn't typically required; however, decisions regarding the placement and size of cable conduits still remain.
Many carriers hit a significant hurdle when attempting to work with the city to manage pole, space, and power-related issues: although plenty of citizens are happy to have more powerful coverage for their cellphones, they are not willing to risk the aesthetics of their town for that purpose. The electrical equipment used to support microcell technology is no bigger than a household appliance, and it can run off the standard residential or commercial power grids that run throughout a city or town. In spite of this, however, the public prefers to keep technology equipment and utilities underground and out of sight.

For carriers hoping to install microcells, it's crucial to strategize regarding power, physical space, and access. Carriers may need to rely upon end-users' willingness to allow the installation of microcells in their local area to boost coverage and Internet access. Utilizing on-demand service labor to install microcells will become increasingly popular as mobile browsing and data usage continue to rise.

Find trusted, on-demand talent for your microcell projects through Field Nation. Or, visit our website to learn more about Field Nation's support of carriers and OEMs servicing the microcell and distributed antenna systems industry. Assets include:

- Efficiency, Transparency, Performance & Speed: Meeting the Needs of Today's Wireless Infrastructure Industry - Ebook
- "Wireless & Broadband Infrastructure Trends 2016" - Infographic
<urn:uuid:5f5a132d-a3a5-4afe-a2af-88bd3943e28a>
CC-MAIN-2017-04
https://fieldnation.com/blog/understanding-the-trends-features-of-microcells
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00009-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947792
968
3.203125
3
History of Distributed Computing: Hadoop MapReduce and FlumeJava

[Editor's note: This is a guest post from Nolan Grace, a software developer and consultant at BP3 who has a passion for data science. This post is shared with permission]

Distributed computing is definitely the cool kid in the tech world right now. From Amazon shopping recommendations, to Pandora deciding what music you may like, to AlphaGo mastering one of the most complicated board games of all time, distributed computing is at the heart of each solution. It took a lot of blood, sweat, and tears to get big data processing to the point we are at right now, and it is going to take a lot more to get where we are going; but I am as excited as can be for the ride. To really get a full appreciation for how and why the big data ecosystem looks the way it does, you have to sit down and understand the major breakthroughs that were made, and how each step forward introduced a group of challenges just as difficult as the last. In this blog I would like to introduce you to two big papers that are important to read and understand if you are interested in getting into big data.

The need for distributed computing arose from the realization that one supercomputer would never be enough and more would always be better. A group of basic computers could outperform the world's most powerful supercomputers at a fraction of the cost, and multiple supercomputers would perform better than one. A solution to this problem, "MapReduce: Simplified Data Processing on Large Clusters" by Jeffrey Dean and Sanjay Ghemawat, was published by Google researchers in 2004. In this paper the MapReduce design pattern was documented and popularized. MapReduce involves breaking down any complicated operation into a series of simple Map and Reduce tasks. Map is the act of organizing information onto the correct machine in a cluster. Reduce is the act of performing some sort of action on the data within an individual machine. By breaking down something complicated into simple pieces, it becomes much easier to assign small generic pieces of work to individual machines across a cluster.

This theory can be compared to the time-tested concept of divide and conquer. How did the Romans perform a census? I can tell you they didn't ask one person to count everyone; they split the empire into smaller pieces, and those pieces were split into even smaller chunks, allowing individuals to count all the people in a reasonable area and then come together in the end to sum up each piece. By taking something like conducting a national census and simplifying it into small simple tasks, it becomes easy for everyday people to assist in a monumental operation. "How do you eat an elephant?" *

In 2006 Apache Hadoop emerged, driven largely by a group of Yahoo engineers. Hadoop may not have been the first application to leverage the MapReduce design pattern, but it was by far the most successful. Hadoop allowed developers to build large-scale MapReduce jobs that could be executed across massive clusters of commodity machines in a way that was extremely resilient and reliable. Thus far, Hadoop is still the default in massive and reliable computing. However, despite the stability and consistency of Hadoop, it is still lacking in usability. Hadoop MapReduce jobs were still fairly unwieldy, and problems with job optimization, scheduling, and writability were still commonplace.
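The pattern is easy to make concrete. Here is a minimal sketch of the map/shuffle/reduce flow for the classic word-count problem, written in plain Python purely to illustrate the design pattern (this is not Hadoop's actual API, and the function names here are hypothetical):

    from collections import defaultdict

    def map_phase(document):
        # Map: emit (key, value) pairs -- here, (word, 1) for each word.
        for word in document.split():
            yield (word.lower(), 1)

    def shuffle(pairs):
        # Shuffle: group all values by key, the way the framework routes
        # pairs to the machine responsible for each key.
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(key, values):
        # Reduce: collapse all values for one key into a single result.
        return (key, sum(values))

    documents = ["the quick brown fox", "the lazy dog", "the fox"]
    pairs = [p for doc in documents for p in map_phase(doc)]
    counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
    print(counts)  # {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}

In a real cluster the shuffle step is where the framework earns its keep: pairs are routed across the network so that all values for a given key land on the same machine.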
Beyond single jobs, most real-world applications of data processing require a large number of MapReduce jobs, executed in sequence on multiple and distinct sets of data, in order to make useful discoveries. These data pipelines were very difficult to build with Hadoop alone, and that led to the creation of FlumeJava. FlumeJava was introduced in 2010 in a paper called "FlumeJava: Easy, Efficient Data-Parallel Pipelines," also written by a group of researchers at Google. FlumeJava introduced a framework for organizing, executing, and debugging large-scale pipelines of MapReduce jobs. It allowed developers to write code that would be used to build an execution plan for a series of MapReduce jobs. Think of it as a SQL optimizer for MapReduce.

FlumeJava was able to take the MapReduce tasks that needed to be executed and build them into an "internal execution plan graph structure" that could be evaluated as each task was needed. This allowed the same code, which would normally need to run in a large-scale cluster, to be debugged piece by piece on a local machine and transplanted directly to production. The optimized execution plans also significantly decreased the execution time of MapReduce pipelines by reducing the amount of rework and making it easy for failed jobs to roll back stages rather than restart from the beginning. This innovation led to the creation of Apache Oozie for Hadoop and the popularization of the Directed Acyclic Graph (DAG) for large-scale data processing. Many of the ideas from the FlumeJava paper have been implemented in Apache Spark and enable many of the features that have made Apache Spark a world-class data processing platform.

*denotes a Hadoop joke

[The original post in its entirety is published here, on Medium]

http://research.google.com/archive/mapreduce.html
https://en.wikipedia.org/wiki/Apache_Hadoop
https://www.safaribooksonline.com/library/view/hadoop-application-architectures/9781491910313/
http://research.google.com/pubs/pub35650.html
http://oozie.apache.org/
https://en.wikipedia.org/wiki/Directed_acyclic_graph
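To make FlumeJava's deferred-evaluation idea concrete, here is a toy sketch in Python: operations are recorded into a plan rather than executed immediately, and the plan only runs when a result is demanded. The class and method names are hypothetical, chosen purely for illustration; FlumeJava itself is a Java library with a far richer optimizer:

    class DeferredCollection:
        """Records map/filter steps into a plan instead of running them."""
        def __init__(self, data, plan=None):
            self.data = data
            self.plan = plan or []   # the execution plan (a simple chain here)

        def map(self, fn):
            return DeferredCollection(self.data, self.plan + [("map", fn)])

        def filter(self, pred):
            return DeferredCollection(self.data, self.plan + [("filter", pred)])

        def run(self):
            # Only now is the accumulated plan executed; a real system would
            # first optimize the graph (e.g., fuse adjacent map steps).
            items = self.data
            for kind, fn in self.plan:
                if kind == "map":
                    items = [fn(x) for x in items]
                else:
                    items = [x for x in items if fn(x)]
            return items

    pipeline = (DeferredCollection(range(10))
                .map(lambda x: x * x)
                .filter(lambda x: x % 2 == 0))
    # Nothing has executed yet; the plan is only recorded.
    print(pipeline.run())  # [0, 4, 16, 36, 64]

Spark's RDDs follow the same recipe: transformations only record lineage, and nothing runs until an action forces evaluation.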
<urn:uuid:af8d4bf5-6124-42b0-a663-a9999181b312>
CC-MAIN-2017-04
http://www.bp-3.com/blogs/2016/07/history-of-distributed-computing-hadoop-mapreduce-and-flumejava/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00495-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957544
1,179
2.734375
3
Web Essentials: Introduction to HTML5, CSS3 and Responsive Design (TT4605)

Design Flexible, Rich Web Applications using HTML5, CSS3 and Responsive Design

In this course, you will learn how to use the latest web technologies and responsive design practices that are central to targeting the entire spectrum of user platforms and browsers. This course provides you with an in-depth introduction to, and exploration of, a variety of key cutting-edge technologies used in modern web design. Working in a highly engaging, hands-on format, you will learn the foundational technologies and skills necessary to design highly interactive, feature-rich and user-friendly websites and webpages. The topics and skills covered in this course are fast-changing and on the cutting edge of web development.
<urn:uuid:dce51b62-0f9a-466e-9414-c69b8a57e777>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/course/120958/web-essentials-introduction-to-html5-css3-and-responsive-design-tt4605/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00311-ip-10-171-10-70.ec2.internal.warc.gz
en
0.844147
166
2.5625
3
- Test for the Heartbleed vulnerability.
- Get details of an SSL certificate.
- Detect weak ciphers and SSLv2, a version of SSL with known security vulnerabilities.

About SSL Certificate Checking

SSL (and TLS) provide encrypted communication over the Internet. SSL 2.0 has known vulnerabilities, and it is recommended that it no longer be used; PCI compliance, for example, mandates that SSL 2.0 not be used and that SSL 3.0 or later be used instead. While SSL can be used for any TCP-based service such as FTP, NNTP or SMTP, it is most commonly used to encrypt web traffic. Even non-technical users have become aware of the importance of the HTTPS in the URL and the "padlock" in the browser status bar when browsing secure sites such as Internet banking and email.
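A basic version of such a check can be scripted with Python's standard library. This sketch assumes a hypothetical target host serving HTTPS on port 443; dedicated scanners go much further, probing for Heartbleed and enumerating weak cipher suites:

    import socket
    import ssl

    host = "example.com"  # hypothetical target host
    context = ssl.create_default_context()

    with socket.create_connection((host, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print("Protocol:", tls.version())      # e.g. TLSv1.3
            print("Cipher:  ", tls.cipher())       # negotiated cipher suite
            print("Subject: ", dict(x[0] for x in cert["subject"]))
            print("Issuer:  ", dict(x[0] for x in cert["issuer"]))
            print("Expires: ", cert["notAfter"])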
<urn:uuid:aac82fcc-5475-4472-9578-474c4e63af0c>
CC-MAIN-2017-04
https://hackertarget.com/ssl-check/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00037-ip-10-171-10-70.ec2.internal.warc.gz
en
0.91817
175
3.109375
3
With the news that an American woman has received a pacemaker with a wireless connection to the Internet, the so-called “Internet of Things” has taken on a new dimension. Reuters reported this week that a 61-year-old woman became the first American recipient of the pacemaker, which was approved by the FDA just last month and allows the doctor to monitor how her heart is doing. At least once a day, a server will communicate with the pacemaker over the Internet and get an update. If there is anything unusual, the server can contact the doctor and patient, literally calling the doc on the phone in the middle of the night, if necessary. The Reuters article quotes the doctor as saying that in the future, wireless devices could monitor high blood pressure, glucose levels or heart failure. The technology is part of a much broader trend of reaching out to objects in the physical world to bring them into the Internet, so to speak, to build an “Internet of Things.” RFID, short-range wireless technologies and sensor networks are enabling this to happen as they become more commonly used. IPv6, with its greatly expanded address space, allows for many more devices to connect to the Internet. If all things are connected, all things can be tracked. The earliest applications have centered around tracking shipments in a supply chain, but if the tracking devices are left in objects when they are in use, that could be extremely powerful. It’s a little scary to think of connecting one’s heart to the Internet. I know the connection is being used in a very narrow way, but if it were at all possible for hackers to tamper with the pacemaker, they probably would, given what we know about what some are capable of.
<urn:uuid:5927aee2-69a1-445a-8544-0c789cc4e261>
CC-MAIN-2017-04
http://www.networkworld.com/article/2246930/lan-wan/-the-internet-of-things--now-includes-a-human-heart.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00339-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959555
362
2.8125
3
NASA to develop metroplex air traffic management tools

As the nation's air traffic grows increasingly complex, NASA intends to develop a planning system to help coordinate flights between en route and metroplex airspaces as well as to manage surface traffic using Next Generation Air Transportation System concepts. A metroplex is a metropolitan area that has more than one principal anchor city, such as Baltimore, Md.-Washington, D.C., or Raleigh-Durham-Chapel Hill, North Carolina.

The space agency awarded Intelligent Automation Inc. a $1.5 million SBIR Phase II contract to develop "futuristic tools to increase the efficiency of air traffic control systems [that] must deal with the complexity caused by a mixture of different aircraft types ... increased traffic volume and varying aircraft performance," IAI said about the contract.

IAI said its key innovation in the effort would be the development of MetroSim, a metroplex-based arrival, departure and surface optimization system that would allow airport planners, traffic flow experts, airline dispatchers, air traffic controllers and pilots to recover from disruptive events and handle the uncertainties of irregular operations. "MetroSim will use a collection of different tools that will perform analytic computations, physics-based computations, and mathematical optimization calculations," according to IAI.

Initial development of MetroSim tools will be in the New York environment, including the adaptation of MetroSim for John F. Kennedy International Airport, Newark Liberty International Airport, LaGuardia Airport, Teterboro Airport, and Long Island MacArthur Airport. MetroSim will link to existing NASA and FAA terminal and surface planning tools, the firm said, and interoperate using thin interfaces, minimal data shared between the tools and limited reliance on a centralized database, "thus enabling coordination of the tools in the distributed environment." The MetroSim architecture will also allow NASA researchers to reconfigure or replace any MetroSim component in order to experiment with new flight management techniques or new air traffic control concepts.

Connect with the GCN staff on Twitter @GCNtech.
<urn:uuid:b30e2c3d-ef3e-419a-90e7-7dabca7b4ab4>
CC-MAIN-2017-04
https://gcn.com/articles/2014/08/26/nasa-metroplex-air-traffic.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00459-ip-10-171-10-70.ec2.internal.warc.gz
en
0.896595
425
2.515625
3
The fact that most computer hardware is produced outside the US and Europe has long presented a worry for the governments of those countries and for the companies and corporations based in them. They are especially concerned about the security of integrated circuits used in military devices, industrial control systems, medical and other critical devices, and are aware that the possibility of hardware Trojans being integrated in them during the manufacturing process is not at all far-fetched.

A group of researchers from several universities in the US, Switzerland, the Netherlands and Germany have recently published a paper dealing with precisely that possibility, and have proposed an "extremely stealthy approach for implementing hardware Trojans below the gate level".

"Often circuit blocks in a single IC are designed by different parties, manufactured by an external and possibly off-shore foundry, packaged by a separate company and supplied by an independent distributor. This increased exploitation of out-sourcing and aggressive use of globalization in circuit manufacturing has given rise to several trust and security issues, as each of the parties involved potentially constitutes a security risk," they pointed out, adding that the threat of hardware Trojans is expected to only increase with time, especially with the recent concerns about cyberwar.

Theirs is not the first research into creating a hardware Trojan, but it is among the first that, instead of adding extra circuitry to the IC's design, concentrates on changing the dopant polarity of a few of its transistors. "Doping" a transistor means introducing impurities into its structure in order to change its electrical properties. Previous research has managed to make doctored transistors fail before they should have, but this group has succeeded in making the protection provided by an Intel random number generator (RNG) weaker than intended, and in creating a hidden side channel in an AES S-box implementation in order to leak secret keys.

But most important of all, their modifications fooled a number of common Trojan testing methods, including optical inspection and checking against "golden chips" (i.e., a definitive, verified example of how the chip should look and behave).

"To the best of our knowledge, our dopant-based Trojans are the first proposed, implemented, tested, and evaluated layout-level hardware Trojans that can do more than act as denial-of-service Trojans based on aging effects," they concluded.
<urn:uuid:d95819df-4c82-409b-968a-91b2f2f48773>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2013/09/17/researchers-create-undetectable-layout-level-hardware-trojans/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00056-ip-10-171-10-70.ec2.internal.warc.gz
en
0.963392
497
2.75
3
"It's a moment I'm going to remember for the rest of my life. Walking out ... experiencing the sunshine and wind on our faces." That's a quote from Oleg Abramov, a scientist who was among six people to have spent the past four months simulating life on Mars as part of NASA-funded research into what types of foods astronauts traveling in space or living on another planet could eat. The half-dozen researchers on Tuesday left the small dome in which they had been living in a desolate lava field 8,000 feet above sea level in Hawaii. The goal was to recreate as closely as possible conditions on the surface of the Red Planet. But that's really hard to do, even when the research "astronauts" were required to wear their spacesuits every time they stepped outside their dome into the Hawaiian air. Because they were still on Earth, and they knew they were still on Earth. None of them had to worry about oxygen starvation and near-instant death if their space suits were punctured. Abramov said he'll remember for the rest of his life the sunshine and wind on his face after emerging from four months in a simulated environment that was far from the real thing. That's my takeaway from this, not the food experiment. What he said hints at a huge challenge to humans leaving Earth for long stretches or even permanently, a challenge perhaps even more formidable than the many physical challenges yet to be overcome. And that's coping with the psychological and emotional impact of living in a harsh, foreign environment. More than 100,000 people have applied to be one of four astronauts aboard a Mars One ship that is scheduled for a one-way trip to the Red Planet in 2022. How many of these people have seriously considered how they would handle the isolation, the boredom, the loneliness, the deafening silence of living on Mars or elsewhere beyond Earth? The shrieking of nothing is killing me, indeed. They'd never again see a bird fly, smell the ocean, watch a dog run in a field, build a fire, or hold hands in a mall. None of that would exist anymore for Mars colonists. Humans have interacted with Earth in ways that have been encoded in us for thousands of years. What happens to your mind when all of that is gone? I doubt any of the Mars One applicants would go on vacation for two weeks in a barren lava field in Hawaii (except for maybe a geologist), yet they're willing to spend the rest of their lives in a far more perilous and pitiless environment. Good luck with that. Now read this: Some of today's 'desktop' mini-PCs make laptops seem downright bulky in comparison. Sensing a possible stall in your coding career? Here’s how to break free and tap your true potential Microsoft on Friday said it will again provide Internet Explorer security patches as a separate... Sponsored by Puppet Among many other provisions, the legislation "explicitly prohibits" the replacement of American workers... After ending an investigation into a fatal crash involving a Tesla sedan with its semi-autonomous... Viewers may soon see a big change coming in the way they experience the chills and thrills of live...
<urn:uuid:ef100659-242b-46ec-8838-1ce5c45a49f1>
CC-MAIN-2017-04
http://www.itworld.com/article/2708220/enterprise-software/can-humans-handle-the-psychological-challenge-of-living-on-mars-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00542-ip-10-171-10-70.ec2.internal.warc.gz
en
0.969999
651
2.75
3
Looking to take a giant step toward low Earth orbit transportation, exploration and the servicing of orbiting space structures, the European Space Agency today said it would team with Thales Alenia Space Italia to begin building an experimental spacecraft for launch in 2013. Planning for ESA's Intermediate eXperimental Vehicle (IXV) has been in the works for about two years and follows on the agency's Atmospheric Reentry Demonstrator flight, which took place in 1998.

More on space: Eight hot commercial space projects

The IXV will be reusable, more maneuverable and able to make precise landings, the ESA stated. Its success will provide Europe with valuable know-how on reentry systems and flight-proven technologies that are necessary to support the agency's future ambitions, including return missions from low Earth orbit, the agency stated.

According to the ESA, the IXV will be launched into a suborbital trajectory on ESA's small Vega rocket from Europe's Spaceport in French Guiana. IXV will then return to Earth as if from a low-orbit mission, to test and qualify new European critical reentry technologies such as advanced ceramic and ablative thermal protection.
<urn:uuid:428ff260-34da-4e46-b044-a802fda9c694>
CC-MAIN-2017-04
http://www.networkworld.com/article/2229559/security/european-space-agency-set-to-build-experimental-transport-spacecraft.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00450-ip-10-171-10-70.ec2.internal.warc.gz
en
0.881665
265
2.921875
3
The creation and deployment of applications has evolved rapidly over the last few years. Microservices have become the most pervasive software architecture approach, and their adoption continues to rise in the enterprise world. Taking a monolithic app and decentralizing its components into independently developed, deployable and scalable micro apps allows IT departments to be more agile and efficient when they add, update, or change the apps they deliver. Microservices have become an answer to how the largest web and cloud infrastructure properties across the world can quickly update, scale and adjust to service massive numbers of customers and meet changing business demands.

Who benefits? Frankly, everyone. However, delivering apps on an enterprise scale can put heavy pressure on servers and can lead to greater costs for companies trying to keep up with consumer demand. This leaves the IT department with the choice of trying to convince decision makers to invest further in IT infrastructure or to look for alternatives.

The Challenges of Implementing Microservices and the Appeal of Containers

For deploying applications, server virtualization has been a transformative technology for IT. In its initial form, every major enterprise app had its own physical server. This quickly became costly and a less efficient form of app delivery. With the advent of server virtualization, which leverages a single physical server to run multiple operating system instances simultaneously and independently, the data center as we know it today began to take form. Since the concept of microservices is to simplify the development of a complex application by decentralizing its components, the number of dedicated operating system instances naturally increases and results in more complex infrastructure management.

Containerization has allowed enterprises to move away from dedicating an independent operating system to each application; instead, applications share the operating system kernel and execute simultaneously. In effect, these "containers" are lighter and more efficient than hypervisors, and scale without the overhead of a full operating system per app. This is a great marriage for microservices, considering the reduction in infrastructure overhead.

Not only is this a more effective way of delivering virtualization; if there is ever an update to the app, IT doesn't need to update each individual virtual machine or operating system, just the containerized component. Containerization also helps eliminate errors when installing or deploying new apps and allows developers to review the performance of an app more effectively. Most significantly, developers and operations benefit from better security, with the portability to move on-premise applications to the cloud.

Of course, with the increased management needs, containerization is only as effective as the Application Delivery Controller (ADC) or load balancer that it is paired (or built) with. As the name suggests, ADCs control and manage the performance, security and resiliency of delivering applications on the servers. So, if one server goes down from a power outage or experiences higher loads, traffic is redirected, or balanced, to the remaining online servers or containers. Furthermore, ADCs can provide security policies and protection between containers.

Solutions Embracing Containerization

Over the last three years, the shift towards containers was primarily sparked by Docker - an open-sourced project built for the delivery of Linux-based apps.
The Docker engine also powers the new container feature released in Windows Server 2016 and the Windows 10 Anniversary Update. Since the Docker platform provides a single toolset and APIs for managing Linux and Windows app containers, it allows the IT department to run far more instances, with or without hypervisors, on the same server hardware. Combined with the portability of containers, it becomes much easier to package, deliver, and scale apps into the cloud.

Other software development companies aren't blind to the benefits that Docker has unearthed. This year, Citrix worked with Docker to release the first containerized Application Delivery Controller - NetScaler CPX. The CPX is designed to insert L4-L7 services early in the development cycle for DevOps and agile IT environments. On August 24th, Citrix went a step further and launched NetScaler CPX Express - a free version of CPX targeted at developers. It offers the same enterprise quality as NetScaler CPX but allows developers to explore the program on their own terms. Specifically, developers can create their apps with load balancing configurations during development; these verified configurations can then be rapidly pushed to quality assurance testing and into production, avoiding several time-consuming steps in the process.

The new CPX is ideal for microservices app deployments; by providing app-to-app L4-L7 traffic management at scale, CPX allows e-Biz and cloud service providers to transition their services into containerized and microservices applications. Together with NetScaler Management and Analytics System (NMAS), a centralized network management, analytics, and orchestration solution, CPX ties into the application creation workflow and automates the delivery of containerized and microservices apps. As a container, CPX will even run on a laptop and can be programmed with ease, so that NetScaler configurations can be verified from development to production. The CPX is portable across hosts within the data center and across different cloud environments for both hybrid and multi-cloud initiatives.

For operational consistency, IT teams can deploy a Platform-as-a-Service (PaaS) for their internal app developers and production apps, leveraging the same rich L4-L7 features they use with the NetScaler MPX (physical), SDX (multi-tenant), and VPX (virtual) appliances. Since these platforms use the same code base and configuration, they can leverage the same management platforms and tooling, including NMAS, to manage app delivery across all NetScaler form factors.

Whether containerization will overtake traditional full virtualization remains to be seen. However, the benefits have already proven to be lucrative and full of potential. With so many free and open-source resources available, the art of the possible is literally at your fingertips.
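As a concrete footnote to the ADC discussion above, the essential balancing behavior (steering requests away from failed servers) can be sketched in a few lines of Python. The addresses below are hypothetical, and a real ADC such as CPX layers health probes, TLS offload and L7 policy on top of logic like this:

    from itertools import cycle

    servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # hypothetical backend pool
    healthy = {"10.0.0.1": True, "10.0.0.2": False, "10.0.0.3": True}

    def pick_backend(pool):
        """Round-robin over backends, skipping any marked unhealthy."""
        rotation = cycle(pool)
        while True:
            candidate = next(rotation)
            if healthy[candidate]:
                yield candidate

    picker = pick_backend(servers)
    for request_id in range(4):
        # 10.0.0.2 is down, so traffic alternates between the two healthy nodes.
        print(f"request {request_id} -> {next(picker)}")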
<urn:uuid:26441e33-8b92-4f8a-a102-2a4703461559>
CC-MAIN-2017-04
https://www.citrix.com/blogs/2016/12/27/embracing-change-with-containerization-microservices/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00358-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931419
1,194
2.703125
3
Inside Today's Communication Technologies

Not until recently did I realize that technologies that use IP telephony (Internet Protocol telephony) and VoIP (Voice over IP) were different. The main difference is that IP telephony technologies use Internet Protocol packet-switched connections to exchange voice, fax and other forms of information, whereas VoIP technologies send voice information in digital form in discrete packets. To simplify this description even further, VoIP sends only voice traffic over an IP network, while IP telephony refers to any telephone-type service carried over IP. Furthermore, a major advantage of VoIP and IP telephony is that they avoid the tolls charged by ordinary telephone service, because they use Internet protocols rather than the traditional circuit-switched protocols of the public switched telephone network (PSTN).

Because both IP telephony and VoIP technologies have matured greatly over the last 10 years, it is no wonder that they may become vastly popular in the mass marketplace soon. Currently, IP telephony service and VoIP service are relatively unregulated by the Federal Communications Commission (FCC). However, VoIP regulation has become a hot topic recently, and there are new regulations on the table, one in particular being the Barton bill. The Barton bill states that national franchisees would be permitted to operate cable services in areas where local, municipal and some statewide authorities have already granted limited monopolies or duopolies to cable TV (CATV) providers in designated regions. Also, the U.S. House of Representatives Subcommittee on Telecommunications granted VoIP service providers access to critical Enhanced 911 (E911) infrastructure, which was a huge step for companies like Vonage. This bill grants access to selective routers, databases, numbering resources and other essential elements for the provision of E911 for nomadic VoIP services.

One thing is for sure: the landscape of communications will continue to evolve, and regulatory issues will continue to arise. It should be an interesting landscape to watch during the next few years.
<urn:uuid:c6460604-8813-492f-865f-52c7c23f5a4c>
CC-MAIN-2017-04
http://certmag.com/inside-todays-communication-technologies/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00266-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92644
405
3.203125
3
The Google Doodle on the search giant's main page today honors Grace Hopper, the American computer scientist and Navy Rear Admiral whose work led directly to the creation of the COBOL programming language as well as to advances in computer networking. Hopper, who has had everything from a Navy destroyer to a supercomputer named after her, would have been 107 today, Dec. 9. The doodle shows Hopper sitting at a computer terminal, and upon first visiting the search page, clicking on the doodle triggers a computing process that calculates what her current age would be. An annual conference named after Hopper focuses on bringing computer education to women around the world.

RELATED: The unsung women of tech
<urn:uuid:24d907ae-bb86-4b3f-9013-52188588af55>
CC-MAIN-2017-04
http://www.networkworld.com/article/2225958/google-doodle-geekier-than-ever-in-honoring-grace-hopper.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00174-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959075
145
2.953125
3
Definition: An algorithm that must process each input in turn, without detailed knowledge of future inputs. See also off-line algorithm, adversary. Note: From Algorithms and Theory of Computation Handbook, pages 10-17 and 19-27, Copyright © 1999 by CRC Press LLC. Appearing in the Dictionary of Computer Science, Engineering and Technology, Copyright © 2000 CRC Press LLC. If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 17 December 2004. HTML page formatted Mon Feb 2 13:10:40 2015. Cite this as: Algorithms and Theory of Computation Handbook, CRC Press LLC, 1999, "on-line algorithm", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/online.html
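For illustration, here is a small sketch in Python of an on-line algorithm: a running mean that must commit to an answer after each input, with no knowledge of future inputs (an off-line version could inspect the entire sequence first). The names are illustrative only.

    def online_mean(stream):
        """Process each input in turn; after every item, the current
        mean is available without knowledge of future inputs."""
        count, mean = 0, 0.0
        for x in stream:
            count += 1
            mean += (x - mean) / count   # incremental update, O(1) per input
            yield mean

    for running in online_mean([4, 8, 15, 16, 23, 42]):
        print(running)   # 4.0, 6.0, 9.0, 10.75, 13.2, 18.0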
<urn:uuid:f82fa193-66e1-4127-a45d-63cfa0366f21>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/online.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00386-ip-10-171-10-70.ec2.internal.warc.gz
en
0.776383
206
3.03125
3
Not since World War II has encryption received so much attention. Germany, during that time, had some of the most able scientists and cryptographers, but the Allies cracked Germany's submarine codes and discovered valuable information on Germany's strategic plans. Today, encryption has become an important key in securing computer data from prying eyes. Users can encrypt files that contain sensitive data and protect them from theft or access by unauthorized co-workers or network hackers. Information traveling between computers goes through numerous routes, systems and servers. A hacker can intercept message packets in transit and attempt to reconstruct your message before it reaches its destination.

Computer data security concerns are similar to those of any confidential communication. The reality is that the Internet is no more insecure than any other medium of commerce, such as bank, postal or telephone credit card transactions. But Internet security concerns cannot be overstated either, because computerized tools such as network "sniffers" are employed by hackers to sort, filter and intercept sensitive information from a network.

Many of the newer versions of popular applications -- Microsoft Word, Excel, Corel WordPerfect and others -- already provide encryption. Many experts predict that encryption will soon become an integral part of any application. While these applications feature less secure algorithms, their encryption is sufficient for most needs. Inexpensive but very effective software programs such as Symantec's Norton DiskLock, Pretty Good Privacy and Netscape Communicator 4.0 provide an excellent way for users to test encryption.

Asymmetric or Symmetric Keys

While the technical details of cryptography are very complicated, the concept is rather simple. Basically, encryption is the scrambling and altering of data until it is no longer readable by anyone who does not have the proper decryption key. Cryptographers have developed various methods to perform this task. The asymmetric method, also called public-key cryptography, requires two keys -- one to encrypt, and the other to decrypt a message. The user's public key is freely distributable to anyone through several key servers on the Internet. These servers act as public-key white pages.

For example, say Joe wants to send Mary some secure files or messages. To do so, he must request and receive Mary's public key via e-mail or look for it in a public-key server and use that key to encrypt the files. When Mary receives the message, she uses her private key to decrypt the message, which was encrypted with her public key. The security of this system resides in the combination of the two keys; if the keys don't match, the file or message can't be viewed. Similarly, Mary uses Joe's public key to encrypt her reply before sending it. To assure Joe that she sent the answer and that it was not forged, Mary signs this message with her private key, which generates a digital signature block that Joe can verify using Mary's public key. Digital certificate authorities issue digital signatures and verify the user's identity much the same way a DMV verifies an identity and issues a driver's license.

Symmetric cryptography uses a single key to encrypt and decrypt messages. Its weakness is that, to transmit an encoded message, users must also send the private key, which means a secure distribution route is needed.
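The Joe-and-Mary exchange above can be sketched in a few lines using the modern Python "cryptography" package (an anachronism relative to this article, and not any product described here; purely an illustration of the public/private-key roles):

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # Mary generates a key pair; the public half is freely distributable.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Joe encrypts with Mary's public key...
    ciphertext = public_key.encrypt(
        b"meet at noon",
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )

    # ...and only Mary's private key can decrypt it.
    plaintext = private_key.decrypt(
        ciphertext,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    assert plaintext == b"meet at noon"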
Key Bit Rate

No matter how securely the doors are locked, a persistent intruder can find a way through. While no encryption program is 100 percent uncrackable, most intruders lack the time or skill to bypass or dismantle such security tools. One primary indicator of encryption strength is the key's bit rate, which is the number of bits in a key. A bit is a single digit in a binary number -- either 0 or 1. The amount of time required to decode depends on the length of the decryption key. A longer key means a hacker must try more combinations in order to decode the data.

For example, a combination lock with only one, single-digit number on the tumbler is simple to open by just trying each number. With two or more numbers in the tumbler, the difficulty rises considerably. The cracker must set the first dial on one number, then try each number on the second dial, then repeat the process with the next number on the first dial, and so on. The more numbers to try, the more difficult the cracker's job.

Just as with the combination lock example, the higher the bit rate, the harder it is to break the encryption scheme. A 40-bit key, for example -- the U.S. government restricts export of key lengths greater than 40 bits -- requires the cracker to attempt more than a trillion combinations. While this may seem like an extremely large number of keys, an Intel Pentium-based PC -- attempting various combinations in what is called "brute force" -- could crack the key in a matter of hours. A 56-bit key requires trying more than 72,000 trillion possible combinations. A conventional PC might take about 1,000,000,000,000,000,000,000 years to crack a 128-bit key. In the United States, domestic versions of 128-bit keys are used and are virtually impossible to crack by brute force methods using current computing technologies.

The easiest way to crack a message is to obtain a copy of the sender's private key, or in the case of symmetric encryption, to intercept the message and the key en route to its destination. When DES encryption was devised in the 1970s, the 56-bit key was considered very safe; with the computers of today, a DES-encrypted message is still fairly secure, but a 56-bit key was recently cracked.
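The brute-force arithmetic above is easy to check with a few lines of Python, assuming a hypothetical attacker who can test one billion keys per second:

    SECONDS_PER_YEAR = 60 * 60 * 24 * 365
    TRIALS_PER_SECOND = 1e9   # hypothetical brute-force rate

    for bits in (40, 56, 128):
        keyspace = 2 ** bits              # number of possible keys
        years = keyspace / TRIALS_PER_SECOND / SECONDS_PER_YEAR
        print(f"{bits}-bit key: {keyspace:.3e} keys, "
              f"~{years:.3e} years to exhaust")

Even at that rate, a 128-bit keyspace works out to roughly 10^22 years, which is why brute force is hopeless against well-implemented long keys.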
One of the shortcomings of public-key technology is the extra time it takes to encrypt and decrypt data. The longer the key, the more time required to encrypt or decrypt a message. To increase the speed of encryption, nCipher's nFast line of cryptographic hardware could be used to accelerate these operations. It does that by off-loading the cryptographic burden from the CPU. Each nFast accelerator improves performance by up to 100 times and is able to handle up to 300 1024-bit key public signings per second.

For additional information on the Internet:

Previous versions of Symantec's DiskLock focused on locking the hard disk and preventing access to specific files. With the spread of the Internet, other networks and e-mail, DiskLock shifted its focus to the encryption of files, thereby rendering them useless to an unauthorized user. The program comes with a group of encryption and decryption tools that provide protection at the file and folder level. Encrypted files and folders cannot be moved, copied or deleted by unauthorized users; if they are opened, the encryption renders them unreadable. After the encryption and screenlock components are installed on the system, users must enter their user name and password to activate the program each time the machine is turned on. Once the application is activated, users can access the encryption and decryption options.

DiskLock uses an asymmetric encryption scheme that requires two different keys to encrypt and decrypt files. It works with a public and a private key, allowing public keys to be exchanged between users wishing to access each other's work. Without a user's private key, however, the public key remains useless, so security is not compromised. Additionally, DiskLock provides a time frame during which someone can access information from a hard drive. It also features an audit log that tracks system activity, revealing what was done to the system and when it occurred. For additional information, contact Symantec, 10201 Torre Ave., Cupertino, CA 95014. Call 800/441-7234. Internet: .

Netscape Communicator 4.0

Netscape Communicator 4.0 gives users the most powerful and flexible data security. For secure communication across the Internet, Netscape developed the Secure Sockets Layer (SSL) protocol, which utilizes encryption. Web browsers, for example, routinely encrypt credit card numbers and other sensitive information when helping perform online purchases. The encrypted data goes to an online merchant, who decrypts the message and processes the order. SSL makes sure traffic between the two hosts is not modified in transit. It uses a technique called "hashing" to ensure that message integrity is guaranteed. Mutual authentication is guaranteed by SSL digital certificates, which are exchanged by the communicating machines at the time they initiate a connection. SSL offers potentially broader security, since it works at the network-transport level. Any program conversing over the network can use SSL, which sets up a safe passageway or tunnel between a client and server. Once erected, everything traveling within the tunnel is secure from outsiders. For additional information, contact Netscape Communications, 501 East Middlefield Road, Mountain View, CA 94043. Call 415/937-3777. Internet: .
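The "hashing" integrity check mentioned in the SSL section can be illustrated with Python's standard hashlib module. This is a simplification: SSL actually uses keyed MACs rather than bare digests, but the tamper-detection idea is the same:

    import hashlib

    def digest(message: bytes) -> str:
        # A fixed-length fingerprint of the message.
        return hashlib.sha256(message).hexdigest()

    original = b"transfer $100 to account 42"
    fingerprint = digest(original)        # sent alongside the message

    # In transit, an attacker alters one character...
    tampered = b"transfer $900 to account 42"

    # ...and the receiver's recomputed digest no longer matches.
    print(digest(tampered) == fingerprint)   # False
    print(digest(original) == fingerprint)   # True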
The Government Agenda

Longer keys and more complex algorithms are clearly required for meaningful security, but proposals for government access to data are pulling in the opposite direction. Some government and law enforcement agencies want to keep strong encryption out of the hands of terrorists and other criminals. As a result, mandatory key escrow has been proposed, under which government agencies would keep a sort of "skeleton key" to all encrypted data. The FBI wants real-time access to all encrypted communications. Privacy advocates understandably worry that as voice and data networks carry an ever larger share of the nation's communications traffic, government agencies will be able to access private networks without safeguards.

Encryption is grabbing headlines elsewhere as well, with other countries contemplating similar moves. The European Union is launching a pilot project called EuroTrust, which could be the first step toward a single authority managing the copies of private keys needed for back-door access to computer data. In the most extreme example, France has outlawed the use of encryption of any kind.

So in the final analysis, encryption is not just a matter of technology and bit length but a political, social and policy issue -- one that will become more prominent as global electronic commerce increases and as computer networks reach into more and more homes, businesses and government agencies.
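The "skeleton key" at issue is usually not a literal single key. Escrow proposals of the era generally contemplated splitting each escrowed key between two agencies so that neither could unlock traffic alone. The Python sketch below is a toy illustration of that splitting idea -- the generic XOR secret-sharing construction, not the mechanism of any specific government proposal:

    import os

    session_key = os.urandom(16)   # the key protecting a user's traffic

    # Escrow: split the key into two shares held by separate agencies.
    # Either share alone is statistically indistinguishable from random noise.
    share_a = os.urandom(16)
    share_b = bytes(x ^ y for x, y in zip(session_key, share_a))

    # Only when both agencies cooperate (e.g., under a court order)
    # can the original key be reconstructed:
    recovered = bytes(x ^ y for x, y in zip(share_a, share_b))
    assert recovered == session_key

The policy debate is precisely over who holds the shares, under what legal process they may be combined, and what happens if the escrow databases themselves are compromised.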
1. When NASA began launching astronauts into space, they found that the astronauts' pens wouldn't work at zero gravity (the ink wouldn't flow down to the writing surface). It took a decade and $12 million to solve the problem: a pen that worked at zero gravity, upside down, underwater, on practically any surface including crystal, and at temperatures ranging from below freezing to over 300 degrees C. And what did the Russians do? The Russians used a pencil.

2. A 50-foot trailer with 48-inch wheels got stuck while entering a midtown tunnel in New York because it was approximately 2.5 feet taller than the tunnel. The fire department and the state department of transportation spent the whole day searching for a solution, to no avail. Then a child of about 9 asked his father, "Why can't they let the air out of the tyre tubes? The height will automatically come down."

3. One of the most memorable case studies on Japanese management techniques is the case of the empty soap box, which occurred at one of Japan's biggest cosmetics companies. The company received a complaint that a consumer had bought a soap box that was empty. The authorities immediately isolated the problem to the assembly line that transported the packaged boxes of soap to the delivery department: for some reason, one soap box had gone through the line empty. Management asked its engineers to solve the problem. Post-haste, the engineers devised an X-ray machine with high-resolution monitors, manned by two people, to watch every soap box that passed down the line and make sure none was empty. No doubt they worked hard and they worked fast, but they spent a whopping amount of time and money doing so. When a rank-and-file employee at a small company was posed the same problem, he skipped the complications of X-rays and came up with another solution: he bought a strong industrial electric fan and pointed it at the assembly line. He switched the fan on, and as each soap box passed, the empty ones were simply blown off the line.

Moral: Always look for simple solutions, and learn to focus on solutions, not on problems. If you look at what you do not have in life, you don't have anything. If you look at what you have in life, you have everything.
In this PGP encrypted hard drive recovery case study, the client had used full-drive encryption to secure the data on their laptop. With Symantec PGP whole disk encryption, the entirety of their hard drive was password-protected. PGP encryption, also known as "Pretty Good Privacy" encryption, was invented by Phil Zimmermann in 1991, and technology companies such as Symantec offer software that uses this strong encryption method to protect users' data. PGP encryption helps protect the data on your hard drive from unwanted access -- but it doesn't protect the drive itself from physical or logical damage.

When this client's laptop failed to boot up one day, the client removed the hard drive. They found that the drive grew very hot when they tried to power it on, and they could not get it to detect on another machine. The client quickly contacted our recovery client advisers here at Gillware Data Recovery and sent the hard drive to our data recovery lab.

PGP Encrypted Hard Drive Recovery Case Study: Laptop Not Booting
Drive Model: Hitachi HTS725050A7E630
Drive Capacity: 500 GB
Operating System: Windows
Situation: Laptop became very hot and wouldn't boot
Type of Data Recovered: User Word and Excel documents
Binary Read: 67.2%
Gillware Data Recovery Case Rating: 9

Firmware and Parts Compatibility Issues
When our data recovery engineers inspected the client's hard drive in our cleanroom, they found that the drive's read/write heads had crashed, with moderate damage to the drive's platters as well. The drive needed its read/write heads replaced.

Even when two hard drives share the same model number, each one is still a special snowflake. Every hard drive has to be calibrated separately in the factory for its own unique tolerances and minor defects; the calibration makes sure the drive's internal components work properly despite those differences, and the calibration data is stored in a ROM chip on the drive's control board. A hard drive will never truly behave optimally with another drive's read/write heads inside it, simply because its calibrations do not line up with the unimaginably tiny variations between the two sets of heads. This can make finding suitable donor parts frustrating.

This hard drive was particularly uncooperative. Normally, when a hard drive powers on, its read/write heads find the firmware, read it, and store it in the drive's RAM before the drive continues normal operations. The new read/write heads wouldn't do this properly: they could read the firmware, but our engineers had to load it into the drive's RAM manually. Due to adaptive drift, it took multiple sets of donor heads to read this hard drive. As a repaired drive continues to operate, its operating conditions change, and when they shift too far, the replacement parts become incompatible and must themselves be replaced. Eventually, after multiple donors had been used and the drive's condition had continued to degrade, we had gotten all we could get: 67% of the drive's binary.

Symantec PGP Decryption
Symantec PGP whole drive encryption encrypts the entire hard drive (hence the name) -- well, almost all of it. The only part of the drive that remains unencrypted is a small portion at the beginning of Sector 0 that tells anything talking to the drive how it's encrypted. There's no way to decrypt the drive on the fly, unfortunately, which puts our engineers in a bind when the drive is damaged.
There isn't any way to target the used areas of the disk, because there is no way to tell encrypted data from encrypted zeroes. When a drive is damaged to the point where a full (or near-full) disk image isn't possible, the situation is very worrying for our engineers: there is no way of knowing how much usable data we've actually captured. And if the tiny portion of the disk containing the encryption metadata couldn't be recovered, we can't decrypt the recovered data at all, even with the correct password.

And so our logical engineer Cody took the encrypted disk image out of our cleanroom, used the client's password to decrypt the disk, crossed his fingers, held his breath, and waited. As a byproduct of its design, Symantec PGP whole disk encryption takes a very long time to undo, and our engineers are, unfortunately, at its mercy. Cody began the decryption process on a Friday morning; by the end of that day, about five percent of the disk had been decrypted, and the process did not finish until the next Tuesday.

PGP Encrypted Hard Drive Recovery - Conclusion
Cody reviewed the results as soon as the operation finished, and they were very good. Imaging the drive had been a shot in the dark because of the encryption, yet our engineers had recovered 99.8% of the drive's file definitions, and of those files the vast majority had been completely recovered. All of the client's critical data was there. We rated this PGP encrypted hard drive recovery case a 9 on our ten-point data recovery case rating scale.
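Why can't engineers tell encrypted data from encrypted free space? Because a good cipher's output is statistically indistinguishable from random noise. The sketch below is a generic illustration (it uses the third-party Python "cryptography" package, not Gillware's or Symantec's tooling): it encrypts a block of zeroes with AES-CTR and shows that the result's byte-level entropy is essentially the same as that of genuinely random data -- close to the 8-bits-per-byte maximum.

    import math
    import os
    from collections import Counter
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def entropy_bits_per_byte(data: bytes) -> float:
        """Shannon entropy of a byte string, in bits per byte (max 8.0)."""
        counts, n = Counter(data), len(data)
        h = 0.0
        for c in counts.values():
            p = c / n
            h -= p * math.log2(p)
        return h

    key, nonce = os.urandom(32), os.urandom(16)
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()

    zeroes = bytes(1024 * 1024)                  # one megabyte of "empty" disk
    encrypted_zeroes = encryptor.update(zeroes)  # what the platters actually hold

    print(entropy_bits_per_byte(zeroes))            # 0.0  -- obviously empty
    print(entropy_bits_per_byte(encrypted_zeroes))  # ~8.0 -- looks like noise
    print(entropy_bits_per_byte(os.urandom(1024 * 1024)))  # ~8.0 as well

Since encrypted free space and encrypted files both read as near-maximum-entropy noise, the only option is to image as much of the drive as possible and hope the important sectors are among what was read.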
What could better fit a month of discussing UPS systems in the data center than a controversial design topic like AC versus DC power? Over the past two weeks, the Data Center Journal has reviewed the basics of UPS systems as well as broad guidelines for selecting an appropriate design. But what if you could use an electrical system that would simplify both your UPS system and other aspects of your electrical infrastructure? Proponents of DC (direct current) power believe this is possible, all with a significant increase in efficiency. Naysayers dismiss these benefits as resting on inaccurate comparisons with AC (alternating current). So, who's right?

AC Proponents Poisoning the Well?
If the history of science shows anything, it shows that people who are quick to call one thing science and another pseudoscience are just as likely to be wrong as they are to be right. Although science is supposedly neutral, unbiased and based largely on empirical evidence, scientists are often among the most dogmatic, biased and committed individuals on the planet. The debate over AC versus DC in the data center, although not as rancorous as, say, the matter of climate change, seems to bring out the dogmatist in many people. Case in point: in a company blog post, Kevin Brown (Vice President, Data Center Global Offer for Schneider Electric) consistently uses scare quotes around the word "study" when referencing publications on DC efficiency benefits. The tell is that he does this for a paper he admits to not having thoroughly investigated: "We are digging into this one but science is science. . . It'll be interesting to understand the details of this 'study' to see if they tipped the scales in favor of DC." Even the blog title says much: "Great Hoaxes: Bigfoot, UFO's, and DC vs. AC efficiency studies." (One wonders if even the capitalization, or lack thereof, is significant.) Disagree with the findings of a paper? Fine. But labeling your own position as science and the other guy's as a hoax, pseudoscience or nonscience is often just unprofessional. To be sure, not all AC proponents are dogmatic, not all DC proponents are detached and objective, and sorting through the hype to find the nuggets of truth can be difficult. But that's the case for any topic subject to heated debate, whether it's evolution, climate change or DC power in the data center.

Maybe DC Really Is a Little Better in the Data Center
To his credit, Brown does admit the potential for DC to be more efficient than AC, albeit reluctantly: "there is really very little difference (~2-4% at most) in the efficiency of DC versus a well designed AC system." So maybe the arguments for DC power in the data center do have some validity; at least they are not as unscientific as some might suggest, even if the claims are a little exaggerated. Even if the efficiency improvement isn't 20-30% but in the low single digits, that is potentially tremendous savings over the long term. And as data centers max out their gains from virtualization, consolidation and free cooling, they'll begin looking elsewhere to reduce power consumption: isn't even a few percent worth considering? As always, however, some restraint is needed when facing wild claims about a new or different technology. Every year, some development is touted in the press as the next (fill in the blank), or as something that will revolutionize the (fill in the blank) industry. Most of these do not pan out.
Thus, a new strategy or technology should be viewed with caution, and sometimes even skepticism. Nevertheless, one should also keep an open mind.

What DC Has Going for It
Consider the double-conversion UPS design: incoming power to the data center is converted from AC (what the electric generator produces) to DC to charge the battery and remove spikes, dips and other power-quality anomalies. It must then be converted back to AC for distribution over the typical data center's infrastructure, which is designed for AC. That conversion back to AC bears some inefficiency, as does any power-conversion step. So what's the advantage of DC in this case? The reconversion to AC can be eliminated, saving some wasted energy. Furthermore, since IT equipment generally runs on DC internally (most products convert input AC power to usable DC power inside the chassis), another conversion stage could be eliminated: the conversion of the "cleaned" AC power from the UPS/PDU back to DC once more. The elimination of several power conversion stages is the most widely touted benefit of DC over AC; a rough sketch of the arithmetic follows at the end of this article. Eliminating equipment also frees floor space (a precious commodity in data centers), generally improves reliability (one less thing to break), and reduces both capital and operating expenses. Some quibbling over other elements of the system, such as reduced losses in cabling, takes place as well.

What DC Has Going Against It
Now, the cons. Perhaps the biggest concern for companies building data centers is the lack of equipment designed to run on DC power. The problem here is not so much technical as it is a matter of product availability. On the other hand, some equipment is designed to use either AC or DC (or just DC), and as more data centers implement DC infrastructure, more products will become available. Another concern is safety: some opponents suggest that high-voltage DC poses a greater risk to data center personnel than high-voltage AC (for instance, charge could build in areas of a cable, leading to arcing). The size and weight of cabling for DC relative to AC may also be a concern, although a DC design could require less cabling overall. But what really hampers DC is the need for a solid track record. This problem will ease as more data centers adopt DC and as more publications (unbiased and down to earth, preferably) compare good DC designs with good AC designs, thereby giving companies solid data on the potential savings of each approach.

And the Winner Is...
Perhaps the biggest impediment to adoption of DC power is the contentious nature of the subject, with each side making claims that either misrepresent the other side or that simply are not accurate about its own champion. If a design technique is controversial, a conservative company -- and how many are willing to take a big risk in today's struggling economy? -- is likely to pursue the status quo rather than a unique approach. DC does seem to offer some efficiency improvement over AC, and this will no doubt drive increasing adoption of DC power distribution in data centers. Whether it will become a tsunami, however, remains to be seen. Unless vastly superior efficiency can be demonstrated, AC and DC will probably remain competing options rather than one taking the lion's share of the market (unless AC remains dominant simply owing to historical momentum). In the meantime, a healthy debate is wonderful -- but a little respect to and from both sides would be nice.
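To see roughly where the claimed savings come from, note that end-to-end efficiency is the product of each conversion stage's efficiency, so removing a stage raises the total. The per-stage values in this Python sketch are illustrative assumptions, not measurements of any vendor's gear; with modern high-efficiency components, the gap narrows toward the low single digits cited above.

    # End-to-end efficiency of a power chain = product of stage efficiencies.
    # Stage values below are illustrative assumptions only.
    from functools import reduce
    from operator import mul

    ac_chain = {                         # classic double-conversion AC path
        "rectifier (AC->DC)": 0.96,
        "inverter (DC->AC)": 0.95,
        "server PSU (AC->DC)": 0.90,
    }
    dc_chain = {                         # DC distribution: reconversions removed
        "rectifier (AC->DC)": 0.96,
        "server DC regulator": 0.93,
    }

    def end_to_end(chain):
        return reduce(mul, chain.values(), 1.0)

    print(f"AC chain: {end_to_end(ac_chain):.1%}")   # ~82.1%
    print(f"DC chain: {end_to_end(dc_chain):.1%}")   # ~89.3%

The exact numbers matter less than the structure of the argument: every stage you can delete multiplies out of the chain, which is why both sides of the debate argue so intently about how many stages a "well designed" system of each type really needs.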
Photo courtesy of Philippe Put.
LONDON; Sept. 17, 2015 - New research from Accenture (NYSE: ACN) reports that more than half (60 percent) of 12-year-old girls in the United Kingdom and Ireland believe that science, technology, engineering and mathematics (STEM) subjects are too difficult to learn.

The survey of more than 4,000 girls, young women, parents and teachers demonstrates clearly that there is a perception that STEM subjects and careers are better suited to male personalities, hobbies and brains. Half (51 percent) of the teachers and 43 percent of the parents surveyed believe this perception helps explain the low uptake of STEM subjects by girls. Nearly half (47 percent) of the young girls surveyed said they believe such subjects are a better match for boys.

The research also suggests that parents and teachers must do more to encourage girls in the early stages of development to embrace STEM subjects if government and business initiatives to increase the number of women in STEM careers are to succeed. Although girls ranked parents and teachers as their biggest influencers when making a decision about subject choice, more than half (51 percent) of parents say they feel ill-informed on the benefits of STEM subjects specifically, and only one in seven (14 percent) say they understand the different career opportunities that exist for their daughters.

"It's worrying that girls' interest in STEM subjects tails off so early in their time at secondary school. With such a small percentage of parents understanding what these subjects can offer their daughters, it is not surprising that girls become disconnected from STEM," said Emma McGuigan, managing director for Accenture Technology in the UK & Ireland. "Our research suggests that while getting girls enthused about subjects like technology or engineering must start at home, encouragement needs to continue in early education, such as nursery and primary school, so that girls don't conclude at a young age that math and science are too difficult."

Additionally, while emerging sectors like technology are starting to bridge the gender gap -- with groups and initiatives like TechFuture Girls, Stemettes, The Science Museum, techUK and Girls in Tech encouraging women to embrace the digital era -- more than three-quarters (77 percent) of girls still believe that the science and technology sector lacks high-profile female role models.

"It's important that girls understand that these subjects are as much for them as they are for boys," said the Tech Partnership's CEO, Karen Price. "While a lot of fantastic work has been done to encourage women and girls to embrace STEM, females still only comprise a small percentage of the workforce in related industries. If STEM businesses work together to support teachers and parents to get young girls excited about these subjects from a much younger age, we will be much closer to the goal of making the balance of men versus women in these careers more equal."

Tom O'Leary, director of learning at the Science Museum, said: "At the Science Museum Group, we recognize the importance and scale of the challenge to ensure that young people, especially girls, see that a STEM career is for them. Our own Enterprising Science research project reflects findings similar to Accenture's, and as such, we have put programmes in place to help more young people find science engaging outside of the classroom. Museums and science centres are in pivotal positions to help build science capital by developing connections between teachers, young people and their families.
We support efforts by secondary schools to integrate engaging museum experiences and approaches into their teaching, and to help them tap into their students' home-based knowledge and experiences to make science more meaningful and relevant to young people."

Commissioned by Accenture and conducted by Loudhouse, a specialist research division of the Octopus Group, the online research covered a total of 1,571 girls of secondary school age (11-18) and 2,509 young women (19-23) across the United Kingdom and the Republic of Ireland. Samples of 535 parents and 112 teachers were also taken to determine the influencing factors for girls in their academic subject choices. The survey was conducted in April 2015.

Accenture is a global management consulting, technology services and outsourcing company, with more than 336,000 people serving clients in more than 120 countries. Combining unparalleled experience, comprehensive capabilities across all industries and business functions, and extensive research on the world's most successful companies, Accenture collaborates with clients to help them become high-performance businesses and governments. Through its Skills to Succeed corporate citizenship initiative, Accenture is equipping more than 3 million people around the world with the skills to get a job or build a business. The company generated net revenues of US$30.0 billion for the fiscal year ended Aug. 31, 2014. Its home page is www.accenture.com.

+44 7825 023 622
+44 7769 955302
HTTPS stands for HyperText Transfer Protocol Secure -- HTTP layered over the Secure Sockets Layer (SSL). It is a widely used Internet protocol for secure communication over a computer network. When a client accesses a website or a web application, HTTPS authenticates both the website and the associated web server and encrypts the data exchanged between the client and the server. In other words, HTTPS creates a secure channel over an insecure network and guarantees that the website or web application you are trying to access is in fact legitimate.

The potential problem with SSL encryption is that many traditional network security products aren't designed to inspect this traffic. As a result, attackers have leveraged SSL encryption to sneak past security controls. A10 helps organizations eliminate this potential blind spot in their defenses by providing SSL Insight, an essential feature of the A10 application delivery controller (ADC) product line. To learn more, visit SSL Decryption, Encryption and Inspection with A10's SSL Insight.
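To make the authentication-plus-encryption idea concrete, here is a minimal sketch using Python's standard ssl module. It is a generic illustration (example.com is a placeholder host, and nothing here is A10-specific): the library verifies the server's certificate chain and hostname, then reports the negotiated protocol and cipher for the encrypted channel.

    import socket
    import ssl

    HOST = "example.com"                     # placeholder host for illustration
    context = ssl.create_default_context()   # verifies certificate and hostname

    with socket.create_connection((HOST, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            print("Negotiated:", tls.version(), tls.cipher()[0])
            print("Server certificate subject:", tls.getpeercert()["subject"])
            # From here, ordinary HTTP flows inside the encrypted channel:
            tls.sendall(b"HEAD / HTTP/1.1\r\nHost: " + HOST.encode()
                        + b"\r\nConnection: close\r\n\r\n")
            print(tls.recv(200).decode(errors="replace"))

If the certificate does not match the hostname or cannot be validated, the handshake fails before any application data is sent -- that failure is exactly the legitimacy guarantee described above.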
Current international laws surrounding warfare can be applicable to instances of cyber war, according to experts. However, concerns about attribution and automation can complicate the degree of response a nation is legally allowed to take.

"The problem is: What is the law?" said Michael Schmitt, chairman of the Stockton Center for the Study of International Law at the United States Naval War College and professor of public international law at the University of Exeter. Schmitt and 20 other experts have spent the past six years analyzing the applicability of international law to cyber conflict in a study commissioned by the NATO Cyber Defense Centre of Excellence and titled the "Tallinn Manual Project." Many nations, including the United States, have struggled with how to define a digital act of war, and what the response should be. Schmitt and his colleagues focused on what a nation-state could legally do in the event of a cyberattack by another country or non-state actor.

"We said you have four options in such cases," said Schmitt. These options are self-defense, countermeasures, necessity, and traditional lawful responses.

Schmitt defined self-defense under Article 51 of the U.N. Charter, which states, "Nothing in the present charter shall impair the inherent right of individual or collective self-defense if an armed attack occurs against a member of the United Nations." "We felt in our discussions that this was a very high threshold," Schmitt said of determining whether a cyberattack constitutes an "armed attack." He and his colleagues agreed that significant physical damage or harm to citizens would have to take place to meet that definition.

Tom Wingfield, a professor of cyber law at National Defense University, listed seven characteristics that categorize a cyber event as a military attack: severity, immediacy, directness, invasiveness, measurability, presumptive legitimacy, and responsibility. "If you look at any event [...] and you look at these seven different facets, you can figure out if it is a military attack or something else," Wingfield said.

The second option that nation-states have in responding legally to a cyberattack is to enact countermeasures. "You break the law, and I get to break the law in response," Schmitt said, explaining that this is only an option if the action is legally attributable to a state actor. Wingfield added that the two biggest factors to consider in attributing a cyberattack are a nation's degree of certainty that another state was involved and the degree to which that state was involved. "This is about getting the other side to knock it off," Schmitt said, adding that the countermeasure does not have to be in kind -- it need not, say, hack the election systems of a nation that has hacked your election systems.

According to Schmitt, the third option is that of necessity: "You can still strike back, if you don't know who it is, if it impacts the essential interests of the state." Finally, if none of the previous requirements can be met, a state can respond with traditional lawful responses such as diplomacy and sanctions.

"What we found in our project is that the current law applies pretty well," said Schmitt.
"We may see a slow evolution in the law as states respond."

Though current law can by and large apply to cyber actions taken by another country or non-state actor, Wingfield noted that the rise of autonomous robots and weaponry, which make life-and-death decisions without human involvement, complicates the legal understanding of responsibility. "If I had to say what the next big thing is going to be, it's going to be [...] killer robots," said Wingfield. "Whatever selects targets gets a huge amount of responsibility legally."

He added that an understanding of the law would have to be built into the "killer robot's" code, and that those in charge of the robot would remain responsible for its actions, whether or not they knew about them. "If we start releasing autonomous lethal agents, what the commander should have known is how she is going to be judged," Wingfield said.
The change in regime in India in May 2014 was accompanied by expectations of bold reforms in economic and industrial policies. While the reforms have been incremental rather than radical, the government has clearly set the economy on a path of high growth, and the chemical industry stands to benefit from this growth. The policy changes that specifically impact the sector include the following:

Under the "Make in India" campaign, the government has altered policies to boost investments in the country, including in the chemical industry. The Foreign Direct Investment (FDI) policy allows 100 percent FDI under the automatic route in the chemical sector. In addition, the last 20 items that had been reserved for the micro-, small- and medium-scale enterprises sector were de-reserved in April 2015, opening these areas to greater investment. Upcoming petroleum, chemicals and petrochemicals investment regions and plastic parks will provide state-of-the-art infrastructure for the chemical and petrochemicals sectors. [1]

The national chemical policy, one of the most awaited in the history of the Indian chemical industry, creates an enabling framework to accelerate manufacturing of chemicals and petrochemicals in order to meet growing internal and external demand as well as reduce dependence on imports. A multifaceted approach, the policy will include establishing the Indian Bureau of Corrosion Control to regulate and prevent the huge losses caused by corrosion (approximately US$29.6 billion annually [2]), and a National Chemical Centre to act as a repository of information on the chemical sector. Further, the national policy aims to promote research and development with a focus on sustainability and green technologies for consistent, long-term growth. Talks have been initiated with the petroleum industry so that 20 percent of total feedstock is made available to downstream chemical companies. [3] Increasing the number of Central Institutes of Plastics Engineering and Technology (CIPET) will also help promote human resource development and skillsets for the chemical industry. [4]

Big infrastructural investments in the chemical industry are expected to significantly increase production in the coming years. Many large manufacturers are already investing heavily to set up greenfield plants or increase capacity, and several companies are improving their existing infrastructure to increase energy efficiency and cut pollution.

There is a surge in domestic consumption, driven by increasing demand for plastics in primary forms and synthetic rubber, as well as fertilizers and nitrogen compounds. Growth in domestic production of basic chemicals is encouraged by India's general economic growth and burgeoning consumerism, since the basic chemical industry is intertwined with many industries related to consumer products.

A global shift is being observed toward Asia as the world's chemical manufacturing hub. India enjoys low-cost manufacturing capabilities by virtue of low-cost labor and geographic proximity to the Middle East, one of the world's key sources of hydrocarbons, coupled with refining capacity in India. This further reduces the cost of production and brings economies of scale. With the government investing to plug the country's infrastructure gaps, India could emerge as the next manufacturing hub for the chemical industry.

In conclusion, the chemical industry is poised for a phase of growth and investment on the back of domestic demand.
The coming years will provide an opportunity for domestic industry players to gain scale and consolidate, and for international players to set up a robust manufacturing base.

1. Make in India, http://www.makeinindia.com/sector/chemicals.
2. Converted from INR 2 lakh crore using the May 20, 2016 conversion rate on XE.com.
3. "Plan for Growth of the Indian Chemicals Industry Revealed," Confederation of Indian Industry (CII), September 16, 2015, http://www.cii.in/
4. "Government will soon come out with a National Chemical Policy, says the Chemicals and Fertilizers Minister," Press Information Bureau, Government of India, Ministry of Chemicals and Fertilizers, December 4, 2015, http://pib.nic.in/newsite/PrintRelease.aspx?relid=132490.
Your mobile is fast becoming your new PC, wallet and identity card -- but is it secure? The EU Agency ENISA (the European Network and Information Security Agency) has launched a Position Paper on authentication issues for mobile eID, identifying 11 security threats and offering 7 key conclusions for enhancing security.

In the near future, we will pay our taxes, buy metro tickets and open bank accounts over our phones. Mobile devices, national ID cards, smart phones and Personal Digital Assistants (PDAs) will play an ever more important role in the digital environment; mobile devices can act as an identity or payment card for online services. In Asia, there is already growing demand for these services, particularly in Hong Kong, Singapore and Taiwan, where the main driver is consumer interest in convenient, easy solutions in as few devices as possible. In Europe, by contrast, the main driver is enhanced security, with the mobile phone seen as a security identification tool for, for example, electronic ticketing, payment and even online banking.

However, as is the case with many new technologies, the pervasive use of mobile devices also brings new security and privacy risks. People who make extensive use of mobile devices continuously leave traces of their identities and transactions, sometimes just by carrying the devices around in their pockets. Statistics show an increase in the theft of mobile devices, which nowadays store more and more personal information about their users. Although the secure elements (based on smart card technology) are well suited to storing data, vulnerabilities do exist and new weaknesses may be discovered. Due to their increasing complexity, mobile devices are now prone to attacks that previously applied only to desktop PCs. BitDefender lists the exploitation of mobile device vulnerabilities three times among the top ten "e-Threats" for 2008, and according to its E-Threats Landscape Report, mobile devices are about to be increasingly targeted by new virus generations because of their permanent connectivity. Classical scam methods using SMS are expected to rise in parallel. The original notion of the mobile device as a personal, trusted and trustworthy device therefore needs to be re-evaluated.

Throughout the paper, ENISA looks at different use cases for electronic authentication using mobile devices. The Agency identifies the security risks that need to be overcome, gives an opinion on their relevance, and presents mechanisms that help mitigate those risks. It also looks at use cases where mobile devices act as a security-enhancing element by providing an out-of-band channel or a trustworthy display.