aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1611.00172 | 2953178061 | Society is often polarized by controversial issues that split the population into groups of opposing views. When such issues emerge on social media, we often observe the creation of 'echo chambers', i.e., situations where like-minded people reinforce each other's opinion, but do not get exposed to the views of the opposing side. In this paper we study algorithmic techniques for bridging these chambers, and thus, reducing controversy. Specifically, we represent the discussion on a controversial issue with an endorsement graph, and cast our problem as an edge-recommendation problem on this graph. The goal of the recommendation is to reduce the controversy score of the graph, which is measured by a recently-developed metric based on random walks. At the same time, we take into account the acceptance probability of the recommended edge, which represents how likely the edge is to materialize in the endorsement graph. We propose a simple model based on a recently-developed user-level controversy score, which is competitive with state-of-the-art link-prediction algorithms. We thus aim at finding the edges that produce the largest reduction in the controversy score, in expectation. To solve this problem, we propose an efficient algorithm, which considers only a fraction of all the combinations of possible edges. Experimental results show that our algorithm is more efficient than a simple greedy heuristic, while producing comparable score reduction. Finally, a comparison with other state-of-the-art edge-addition algorithms shows that this problem is fundamentally different from what has been studied in the literature. | Graells- @cite_26 show that the mere display of opposing-view content has a negative emotional effect. To overcome this effect, they propose a visual interface for making recommendations from a diverse pool of users, where diversity is with respect to user stances on a topic. 
In contrast, @cite_21 show that not all users value diversity and that the way of presenting information (e.g., highlighting vs. ranking) makes a difference in the way users perceive information. In a different direction, Graells- @cite_6 propose to find "intermediary topics" (i.e., topics that may be of interest to both sides) by constructing a topic graph. They define intermediary topics to be those topics that have high betweenness centrality and topic diversity. | {
"cite_N": [
"@cite_26",
"@cite_21",
"@cite_6"
],
"mid": [
"1571597797",
"2119758323",
"2294090774"
],
"abstract": [
"Social networks allow people to connect with each other and have conversations on a wide variety of topics. However, users tend to connect with like-minded people and read agreeable information, a behavior that leads to group polarization. Motivated by this scenario, we study how to take advantage of partial homophily to suggest agreeable content to users authored by people with opposite views on sensitive issues. We introduce a paradigm to present a data portrait of users, in which their characterizing topics are visualized and their corresponding tweets are displayed using an organic design. Among their tweets we inject recommended tweets from other people considering their views on sensitive issues in addition to topical relevance, indirectly motivating connections between dissimilar people. To evaluate our approach, we present a case study on Twitter about a sensitive topic in Chile, where we estimate user stances for regular people and find intermediary topics. We then evaluated our design in a user study. We found that recommending topically relevant content from authors with opposite views in a baseline interface had a negative emotional effect. We saw that our organic visualization design reverts that effect. We also observed significant individual differences linked to evaluation of recommendations. Our results suggest that organic visualization may revert the negative effects of providing potentially sensitive content.",
"Is a polarized society inevitable, where people choose to be exposed to only political news and commentary that reinforces their existing viewpoints? We examine the relationship between the numbers of supporting and challenging items in a collection of political opinion items and readers' satisfaction, and then evaluate whether simple presentation techniques such as highlighting agreeable items or showing them first can increase satisfaction when fewer agreeable items are present. We find individual differences: some people are diversity-seeking while others are challenge-averse. For challenge-averse readers, highlighting appears to make satisfaction with sets of mostly agreeable items more extreme, but does not increase satisfaction overall, and sorting agreeable content first appears to decrease satisfaction rather than increasing it. These findings have important implications for builders of websites that aggregate content reflecting different positions.",
"In online social networks, people tend to connect with like-minded people and read agreeable information. Direct recommendation of challenging content has not worked well because users do not value diversity and avoid challenging content. In this poster, we investigate the possibility of an indirect approach by introducing intermediary topics, which are topics that are common to people having opposing views on sensitive issues, i.e., those issues that tend to divide people. Through a case study about a sensitive issue discussed in Twitter, we show that such intermediary topics exist, opening a path for future work in recommendation promoting diversity of content to be shared."
]
} |
1611.00172 | 2953178061 | Society is often polarized by controversial issues that split the population into groups of opposing views. When such issues emerge on social media, we often observe the creation of 'echo chambers', i.e., situations where like-minded people reinforce each other's opinion, but do not get exposed to the views of the opposing side. In this paper we study algorithmic techniques for bridging these chambers, and thus, reducing controversy. Specifically, we represent the discussion on a controversial issue with an endorsement graph, and cast our problem as an edge-recommendation problem on this graph. The goal of the recommendation is to reduce the controversy score of the graph, which is measured by a recently-developed metric based on random walks. At the same time, we take into account the acceptance probability of the recommended edge, which represents how likely the edge is to materialize in the endorsement graph. We propose a simple model based on a recently-developed user-level controversy score, which is competitive with state-of-the-art link-prediction algorithms. We thus aim at finding the edges that produce the largest reduction in the controversy score, in expectation. To solve this problem, we propose an efficient algorithm, which considers only a fraction of all the combinations of possible edges. Experimental results show that our algorithm is more efficient than a simple greedy heuristic, while producing comparable score reduction. Finally, a comparison with other state-of-the-art edge-addition algorithms shows that this problem is fundamentally different from what has been studied in the literature. 
| (c) The studies discussed above suggest that ( @math ) it is possible to nudge people by recommending content from an opposing side @cite_9 , ( @math ) extreme recommendations might not work @cite_6 , ( @math ) people "in the middle" are easier to convince @cite_22 , and ( @math ) expert users and hubs are often less biased and can play a role in convincing others @cite_13 @cite_18 . | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_9",
"@cite_6",
"@cite_13"
],
"mid": [
"2085731449",
"1959339536",
"2182265315",
"2294090774",
"2137809006"
],
"abstract": [
"Deciding whether a claim is true or false often requires a deeper understanding of the evidence supporting and contradicting the claim. However, when presented with many evidence documents, users do not necessarily read and trust them uniformly. Psychologists and other researchers have shown that users tend to follow and agree with articles and sources that hold viewpoints similar to their own, a phenomenon known as confirmation bias. This suggests that when learning about a controversial topic, human biases and viewpoints about the topic may affect what is considered \"trustworthy\" or credible. It is an interesting challenge to build systems that can help users overcome this bias and help them decide the truthfulness of claims. In this article, we study various factors that enable humans to acquire additional information about controversial claims in an unbiased fashion. Specifically, we designed a user study to understand how presenting evidence with contrasting viewpoints and source expertise ratings affect how users learn from the evidence documents. We find that users do not seek contrasting viewpoints by themselves, but explicitly presenting contrasting evidence helps them get a well-rounded understanding of the topic. Furthermore, explicit knowledge of the credibility of the sources and the context in which the source provides the evidence document not only affects what users read but also whether they perceive the document to be credible.",
"The wide availability of user-provided content in online social media facilitates the aggregation of people around common interests, worldviews, and narratives. Despite the enthusiastic rhetoric on the part of some that this process generates \"collective intelligence\", the WWW also allows the rapid dissemination of unsubstantiated conspiracy theories that often elicit rapid, large, but naive social responses such as the recent case of Jade Helm 15 -- where a simple military exercise turned out to be perceived as the beginning of the civil war in the US. We study how Facebook users consume information related to two different kinds of narrative: scientific and conspiracy news. We find that although consumers of scientific and conspiracy stories present similar consumption patterns with respect to content, the sizes of the spreading cascades differ. Homogeneity appears to be the primary driver for the diffusion of contents, but each echo chamber has its own cascade dynamics. To mimic these dynamics, we introduce a data-driven percolation model on signed networks.",
"The Internet gives individuals more choice in political news and information sources and more tools to filter out disagreeable information. Citing the preference described by selective exposure theory — people prefer information that supports their beliefs and avoid counter-attitudinal information — observers warn that people may use these tools to access only agreeable information and thus live in ideological echo chambers. We report on a field deployment of a browser extension that showed users feedback about the political lean of their weekly and all time reading behaviors. Compared to a control group, showing feedback led to a modest move toward balanced exposure, corresponding to 1-2 visits per week to ideologically opposing sites or 5-10 additional visits per week to centrist sites.",
"In online social networks, people tend to connect with like-minded people and read agreeable information. Direct recommendation of challenging content has not worked well because users do not value diversity and avoid challenging content. In this poster, we investigate the possibility of an indirect approach by introducing intermediary topics, which are topics that are common to people having opposing views on sensitive issues, i.e., those issues that tend to divide people. Through a case study about a sensitive issue discussed in Twitter, we show that such intermediary topics exist, opening a path for future work in recommendation promoting diversity of content to be shared.",
"A review of research suggests that the desire for opinion reinforcement may play a more important role in shaping individuals’ exposure to online political information than an aversion to opinion challenge. The article tests this idea using data collected via a web-administered behavior-tracking study with subjects recruited from the readership of 2 partisan online news sites (N = 727). The results demonstrate that opinion-reinforcing information promotes news story exposure while opinion-challenging information makes exposure only marginally less likely. The influence of both factors is modest, but opinion-reinforcing information is a more important predictor. Having decided to view a news story, evidence of an aversion to opinion challenges disappears: There is no evidence that individuals abandon news stories that contain information with which they disagree. Implications and directions for future research are discussed."
]
} |
1611.00172 | 2953178061 | Society is often polarized by controversial issues that split the population into groups of opposing views. When such issues emerge on social media, we often observe the creation of 'echo chambers', i.e., situations where like-minded people reinforce each other's opinion, but do not get exposed to the views of the opposing side. In this paper we study algorithmic techniques for bridging these chambers, and thus, reducing controversy. Specifically, we represent the discussion on a controversial issue with an endorsement graph, and cast our problem as an edge-recommendation problem on this graph. The goal of the recommendation is to reduce the controversy score of the graph, which is measured by a recently-developed metric based on random walks. At the same time, we take into account the acceptance probability of the recommended edge, which represents how likely the edge is to materialize in the endorsement graph. We propose a simple model based on a recently-developed user-level controversy score, which is competitive with state-of-the-art link-prediction algorithms. We thus aim at finding the edges that produce the largest reduction in the controversy score, in expectation. To solve this problem, we propose an efficient algorithm, which considers only a fraction of all the combinations of possible edges. Experimental results show that our algorithm is more efficient than a simple greedy heuristic, while producing comparable score reduction. Finally, a comparison with other state-of-the-art edge-addition algorithms shows that this problem is fundamentally different from what has been studied in the literature. | Adding edges to modify the graph structure. 
In addition to the work on explicitly reducing polarization in social media, there are many papers aiming to make a network more cohesive by edge additions, where cohesiveness is quantified using graph-theoretic properties, such as shortest paths @cite_3 @cite_32 , closeness centrality @cite_7 , diameter @cite_0 , eccentricity @cite_11 , communicability @cite_20 @cite_25 , synchronizability @cite_35 , and natural connectivity @cite_17 . | {
"cite_N": [
"@cite_35",
"@cite_7",
"@cite_32",
"@cite_3",
"@cite_17",
"@cite_0",
"@cite_25",
"@cite_20",
"@cite_11"
],
"mid": [
"1877246760",
"",
"2053572557",
"2407891964",
"2406772186",
"1580292412",
"2964046693",
"2262000324",
"2067935583"
],
"abstract": [
"In this paper, we studied the strategies to enhance synchronization on directed networks by manipulating a fixed number of links. We proposed a centrality-based manipulating (CBM) method, where the node centrality is measured by the well-known PageRank algorithm. Extensive numerical simulation on many modeled networks demonstrated that the CBM method is more effective in facilitating synchronization than the degree-based manipulating method and the random manipulating method for adding or removing links. The reason is that the CBM method can effectively narrow the incoming degree distribution and reinforce the hierarchical structure of the network. Furthermore, we apply the CBM method to the links rewiring procedure where at each step one link is removed and one new link is added. The CBM method helps to decide which links should be removed or added. After several steps, the resulting networks are very close to the optimal structure from the theoretical analysis and the evolutionary optimization algorithm. The numerical simulations on the Kuramoto model further demonstrate that our method has an advantage in shortening the convergence time to synchronization on directed networks.",
"",
"Small changes in the network topology can have dramatic effects on its capacity to disseminate information. In this paper, we consider the problem of adding a small number of ghost edges in the network in order to minimize the average shortest-path distance between nodes, towards a smaller-world network. We formalize the problem of suggesting ghost edges and we propose a novel method for quickly evaluating the importance of ghost edges in sparse graphs. Through experiments on real and synthetic data sets, we demonstrate that our approach performs very well, for a varying range of conditions, and it outperforms sensible baselines.",
"The small world phenomenon is a desirable property of social networks, since it guarantees short paths between the nodes of the social graph and thus efficient information spread on the network. It is thus in the benefit of both network users and network owners to enforce and maintain this property. In this work, we study the problem of finding a subset of k edges from a set of candidate edges whose addition to a network leads to the greatest reduction in its average shortest path length. We formulate the problem as a combinatorial optimization problem, and show that it is NP-hard and that known approximation techniques are not applicable. We describe an efficient method for computing the exact effect of a single edge insertion on the average shortest path length, as well as several heuristics for efficiently estimating this effect. We perform experiments on real data to study the performance of our algorithms in practice.",
"The function and performance of networks rely on their robustness, defined as their ability to continue functioning in the face of damage (targeted attacks or random failures) to parts of the network. Prior research has proposed a variety of measures to quantify robustness and various manipulation strategies to alter it. In this paper, our contributions are twofold. First, we critically analyze various robustness measures and identify their strengths and weaknesses. Our analysis suggests natural connectivity, based on the weighted count of loops in a network, to be a reliable measure. Second, we propose the first principled manipulation algorithms that directly optimize this robustness measure, which lead to significant performance improvement over existing, ad-hoc heuristic solutions. Extensive experiments on real-world datasets demonstrate the effectiveness and scalability of our methods against a long list of competitor strategies.",
"We study the problem of minimizing the diameter of a graph by adding k shortcut edges, for speeding up communication in an existing network design. We develop constant-factor approximation algorithms for different variations of this problem. We also show how to improve the approximation ratios using resource augmentation to allow more than k shortcut edges. We observe a close relation between the single-source version of the problem, where we want to minimize the largest distance from a given source vertex, and the well-known k-median problem. First we show that our constant-factor approximation algorithms for the general case solve the single-source problem within a constant factor. Then, using a linear-programming formulation for the single-source version, we find a (1+ε)-approximation using O(k log n) shortcut edges. To show the tightness of our result, we prove that any @math -approximation for the single-source version must use Ω(k log n) shortcut edges assuming P≠NP.",
"The total communicability of a network (or graph) is defined as the sum of the entries in the exponential of the adjacency matrix of the network, possibly normalized by the number of nodes. This quantity offers a good measure of how easily information spreads across the network, and can be useful in the design of networks having certain desirable properties. The total communicability can be computed quickly even for large networks using techniques based on the Lanczos algorithm. In this work we introduce some heuristics that can be used to add, delete, or rewire a limited number of edges in a given sparse network so that the modified network has a large total communicability. To this end, we introduce new edge centrality measures, which can be used as a guide in the selection of edges to be added or removed. Moreover, we show experimentally that the total communicability provides an effective and easily computable measure of how \"well-connected\" a sparse network is.",
"We introduce new broadcast and receive communicability indices that can be used as global measures of how effectively information is spread in a directed network. Furthermore, we describe fast and effective criteria for the selection of edges to be added to (or deleted from) a given directed network so as to enhance these network communicability measures. Numerical experiments illustrate the effectiveness of the proposed techniques.",
"In practical military or first responder deployment scenarios, information flows need to adhere to specified policies regardless of the physical connectivity of nodes. Nodes in such networks are associated with various levels in a command-and-control hierarchy, and therefore typically form a logical hierarchical tree network that is used to route both command and data traffic. Associated with this logical hierarchical network is a communication network that represents the connectivity of these nodes in the deployed scenario. Such composite networks introduce constraints that can result in information flows having to traverse much longer paths in the underlying communication network. In this paper, we look at the problem of adding edges to a logical hierarchical network (or any other social network) so as to minimize the number of hops required to route data traffic in the underlying communication network from a node to other specified nodes. The edges added are a subset of all possible edges in the complementary logical hierarchical graph and have to satisfy specified hierarchical constraints. First, we consider the general problem of minimizing the eccentricity of a source node 's' (where eccentricity of 's' is the maximum of the shortest paths from 's' to all other nodes) in a metric graph on adding up to 'B' unequal cost metric edges from the set of all edges in the complementary graph. We develop an efficient constant factor approximation algorithm for this case that outperforms existing constant factor algorithms for eccentricity minimization. Here the added edge metric cost as well as the graph edge metric cost correspond to the number of hops in the shortest path required to route traffic in the actual deployed topology (i.e., underlying communication network). Next, we consider the case where the set of possible added edges is a specified subset of the edges in the complementary graph and the set of destinations is a subset of the graph nodes. 
For this case, we develop heuristic algorithms based on the previous eccentricity minimizing algorithm that show good performance. We validate our algorithms using two realistic military deployment scenarios. We find that adding even a low number of hierarchically constrained edges (of the order of 10) can cause a significant decrease (around 50%) in the eccentricity of a node in the logical hierarchical network and thus can reduce the number of hops required for data traffic traversal."
]
} |
1611.00172 | 2953178061 | Society is often polarized by controversial issues that split the population into groups of opposing views. When such issues emerge on social media, we often observe the creation of 'echo chambers', i.e., situations where like-minded people reinforce each other's opinion, but do not get exposed to the views of the opposing side. In this paper we study algorithmic techniques for bridging these chambers, and thus, reducing controversy. Specifically, we represent the discussion on a controversial issue with an endorsement graph, and cast our problem as an edge-recommendation problem on this graph. The goal of the recommendation is to reduce the controversy score of the graph, which is measured by a recently-developed metric based on random walks. At the same time, we take into account the acceptance probability of the recommended edge, which represents how likely the edge is to materialize in the endorsement graph. We propose a simple model based on a recently-developed user-level controversy score, which is competitive with state-of-the-art link-prediction algorithms. We thus aim at finding the edges that produce the largest reduction in the controversy score, in expectation. To solve this problem, we propose an efficient algorithm, which considers only a fraction of all the combinations of possible edges. Experimental results show that our algorithm is more efficient than a simple greedy heuristic, while producing comparable score reduction. Finally, a comparison with other state-of-the-art edge-addition algorithms shows that this problem is fundamentally different from what has been studied in the literature. | The paper that is conceptually closest to ours is the one by @cite_34 , which aims to add and remove edges in a graph to reduce the dissemination of content (e.g., viruses). 
The proposed approach works by manipulating the largest eigenvalue of the adjacency matrix, which determines the epidemic threshold and, thus, the properties of information dissemination in networks. | {
"cite_N": [
"@cite_34"
],
"mid": [
"2096423888"
],
"abstract": [
"Controlling the dissemination of an entity (e.g., meme, virus, etc.) on a large graph is an interesting problem in many disciplines. Examples include epidemiology, computer security, marketing, etc. So far, previous studies have mostly focused on removing or inoculating nodes to achieve the desired outcome. We shift the problem to the level of edges and ask: which edges should we add or delete in order to speed-up or contain a dissemination? First, we propose effective and scalable algorithms to solve these dissemination problems. Second, we conduct a theoretical study of the two problems and our methods, including the hardness of the problem, the accuracy and complexity of our methods, and the equivalence between the different strategies and problems. Third and lastly, we conduct experiments on real topologies of varying sizes to demonstrate the effectiveness and scalability of our approaches."
]
} |
1610.09786 | 2952861497 | Most of the online news media outlets rely heavily on the revenues generated from the clicks made by their readers, and due to the presence of numerous such outlets, they need to compete with each other for reader attention. To attract the readers to click on an article and subsequently visit the media site, the outlets often come up with catchy headlines accompanying the article links, which lure the readers to click on the link. Such headlines are known as Clickbaits. While these baits may trick the readers into clicking, in the long run, clickbaits usually don't live up to the expectation of the readers, and leave them disappointed. In this work, we attempt to automatically detect clickbaits and then build a browser extension which warns the readers of different media sites about the possibility of being baited by such headlines. The extension also offers each reader an option to block clickbaits she doesn't want to see. Then, using such reader choices, the extension automatically blocks similar clickbaits during her future visits. We run extensive offline and online experiments across multiple media sites and find that the proposed clickbait detection and the personalized blocking approaches perform very well, achieving 93% accuracy in detecting and 89% accuracy in blocking clickbaits. | There has been recent work on understanding the psychological appeal of clickbaits. Blom et al. @cite_5 examined how clickbaits employ two forms of forward referencing -- discourse deixis and cataphora -- to lure the readers to click on the article links. Chen et al. @cite_16 argued for labeling clickbaits as misleading content or false news. | {
"cite_N": [
"@cite_5",
"@cite_16"
],
"mid": [
"2046881809",
"2248267741"
],
"abstract": [
"Abstract This is why you should read this article. Although such an opening statement does not make much sense read in isolation, journalists often write headlines like this on news websites. They use the forward-referring technique as a stylistic and narrative luring device trying to induce anticipation and curiosity so the readers click (or tap on) the headline and read on. In this article, we map the use of forward-referring headlines in online news journalism by conducting an analysis of 100,000 headlines from 10 different Danish news websites. The results show that commercialization and tabloidization seem to lead to a recurrent use of forward-reference in Danish online news headlines. In addition, the article contributes to reference theory by expanding previous models on phoricity to include multimodal references on the web.",
"Tabloid journalism is often criticized for its propensity for exaggeration, sensationalization, scare-mongering, and otherwise producing misleading and low quality news. As the news has moved online, a new form of tabloidization has emerged: “clickbaiting.” “Clickbait” refers to “content whose main purpose is to attract attention and encourage visitors to click on a link to a particular web page” [“clickbait,” n.d.] and has been implicated in the rapid spread of rumor and misinformation online. This paper examines potential methods for the automatic detection of clickbait as a form of deception. Methods for recognizing both textual and non-textual clickbaiting cues are surveyed, leading to the suggestion that a hybrid approach may yield best results."
]
} |
1610.09712 | 2950016635 | Image distortion correction is a critical pre-processing step for a variety of computer vision and image processing algorithms. Standard real-time software implementations are generally not suited for direct hardware porting, so appropriated versions need to be designed in order to obtain implementations deployable on FPGAs. In this paper, hardware-compatible techniques for image distortion correction are introduced and analyzed in details. The considered solutions are compared in terms of output quality by using a geometrical-error-based approach, with particular emphasis on robustness with respect to increasing lens distortion. The required amount of hardware resources is also estimated for each considered approach. | Distortion correction algorithms have been the subject of several studies in the literature. With regard to the specific mathematical models for defining the distortion geometry, different models have been introduced @cite_3 @cite_11 @cite_17 @cite_8 . Determining the distortion of the particular lenses requires a calibration stage, in which the image information from the camera is used to estimate the parameters for the distortion model through different techniques @cite_11 @cite_19 . The distortion correction methodologies considered in this paper are agnostic to the calibration approach deployed. | {
"cite_N": [
"@cite_11",
"@cite_8",
"@cite_3",
"@cite_19",
"@cite_17"
],
"mid": [
"2167667767",
"2144325227",
"2112731915",
"36923225",
""
],
"abstract": [
"We propose a flexible technique to easily calibrate a camera. It only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. Either the camera or the planar pattern can be freely moved. The motion need not be known. Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution, followed by a nonlinear refinement based on the maximum likelihood criterion. Both computer simulation and real data have been used to test the proposed technique and very good results have been obtained. Compared with classical techniques which use expensive equipment such as two or three orthogonal planes, the proposed technique is easy to use and flexible. It advances 3D computer vision one more step from laboratory environments to real world use.",
"We introduce a new rational function (RF) model for radial lens distortion in wide-angle and catadioptric lenses, which allows the simultaneous linear estimation of motion and lens geometry from two uncalibrated views of a 3D scene. In contrast to existing models which admit such linear estimates, the new model is not specialized to any particular lens geometry, but is sufficiently general to model a variety of extreme distortions. The key step is to define the mapping between image (pixel) coordinates and 3D rays in camera coordinates as a linear combination of nonlinear functions of the image coordinates. Like a \"kernel trick\", this allows a linear algorithm to estimate nonlinear models, and in particular offers a simple solution to the estimation of nonlinear image distortion. The model also yields an explicit form for the epipolar curves, allowing correspondence search to be efficiently guided by the epipolar geometry. We show results of an implementation of the RF model in estimating the geometry of a real camera lens from uncalibrated footage, and compare the estimate to one obtained using a calibration grid.",
"In geometrical camera calibration the objective is to determine a set of camera parameters that describe the mapping between 3-D reference coordinates and 2-D image coordinates. Various methods for camera calibration can be found from the literature. However surprisingly little attention has been paid to the whole calibration procedure, i.e., control point extraction from images, model fitting, image correction, and errors originating in these stages. The main interest has been in model fitting, although the other stages are also important. In this paper we present a four-step calibration procedure that is an extension to the two-step method. There is an additional step to compensate for distortion caused by circular features, and a step for correcting the distorted image coordinates. The image correction is performed with an empirical inverse model that accurately compensates for radial and tangential distortions. Finally, a linear method for solving the parameters of the inverse model is presented.",
"Most algorithms in 3D computer vision rely on the pinhole camera model because of its simplicity, whereas video optics, especially low-cost wide-angle or fish-eye lenses, generate a lot of non-linear distortion which can be critical. To find the distortion parameters of a camera, we use the following fundamental property: a camera follows the pinhole model if and only if the projection of every line in space onto the camera is a line. Consequently, if we find the transformation on the video image so that every line in space is viewed in the transformed image as a line, then we know how to remove the distortion from the image. The algorithm consists of first doing edge extraction on a possibly distorted video sequence, then doing polygonal approximation with a large tolerance on these edges to extract possible lines from the sequence, and then finding the parameters of our distortion model that best transform these edges to segments. Results are presented on real video images, compared with distortion calibration obtained by a full camera calibration method which uses a calibration grid.",
""
]
} |
1610.09546 | 2544177338 | In future high-capacity wireless systems based on mmWave or massive multiple input multiple output (MIMO), the power consumption of receiver Analog to Digital Converters (ADC) is a concern. Although hybrid or analog systems with fewer ADCs have been proposed, fully digital receivers with many lower resolution ADCs (and lower power) may be a more versatile solution. In this paper, focusing on an uplink scenario, we propose to take the optimization of ADC resolution one step further by enabling variable resolutions in the ADCs that sample the signal received at each antenna. This allows us to give more bits to the antennas that capture the strongest incoming signal and fewer bits to the antennas that capture little signal energy and mostly noise. Simulation results show that, depending on the unquantized link SNR, a power saving in the order of 20-80% can be obtained by our variable resolution proposal in comparison with a reference fully digital receiver with a fixed low number of bits in all its ADCs. | Recent works such as @cite_14 @cite_17 @cite_2 study the capacity and energy efficiency (EE) of large antenna array receiver designs depending on the ADC resolution. The effect of the number of ADC bits @math and sampling rate @math on capacity and power consumption is analyzed in @cite_10 for both AC and DC. | {
"cite_N": [
"@cite_10",
"@cite_14",
"@cite_2",
"@cite_17"
],
"mid": [
"1892643793",
"2069209527",
"",
"1801517207"
],
"abstract": [
"The wide bandwidth and large number of antennas used in millimeter wave systems put a heavy burden on the power consumption at the receiver. In this paper, using an additive quantization noise model, the effect of analog-digital conversion (ADC) resolution and bandwidth on the achievable rate is investigated for a multi-antenna system under a receiver power constraint. Two receiver architectures, analog and digital combining, are compared in terms of performance. Results demonstrate that: (i) For both analog and digital combining, there is a maximum bandwidth beyond which the achievable rate decreases; (ii) Depending on the operating regime of the system, analog combiner may have higher rate but digital combining uses less bandwidth when only ADC power consumption is considered, (iii) digital combining may have higher rate when power consumption of all the components in the receiver front-end are taken into account.",
"In this paper, the effects of receive signal quantization on the channel capacity and the performance of error control coding in multi-input multi-output (MIMO) systems are investigated. The receive antennas of a MIMO system experience a channel-dependent superposition of modulated signals originating from all transmit antennas. A fine-granular analog to digital conversion of the resulting irregular constellation is difficult to obtain in practice. The quantization is therefore likely to be rather coarse. It turns out, however, that the loss in channel capacity due to coarse quantization is surprisingly small. On the other hand, existing coding schemes do not seem to tolerate coarse quantization gracefully, producing rather high error floors. Closing the huge gap between information theoretic opportunities offered by coarse quantized MIMO systems on the one hand, and the actual poor performance of coding schemes which are designed without signal quantization in mind, remains a challenging task.",
"",
"This paper considers automatic gain control (AGC) and quantization for multiple-input multiple-output (MIMO) wireless systems. We examine the effect of clipping and quantization on capacity and bit error rate (BER). We find that even quite low resolution quantizers can perform close to the capacity of ideal unquantized systems. Results are presented for BPSK and M-ary QAM, and for 2×2, 3×3, and 4×4 MIMO configurations. We find that in each case less than 6 quantizer bits are required to achieve 98% of unquantized capacity for SNRs above 15 dB."
]
} |
1610.09546 | 2544177338 | In future high-capacity wireless systems based on mmWave or massive multiple input multiple output (MIMO), the power consumption of receiver Analog to Digital Converters (ADC) is a concern. Although hybrid or analog systems with fewer ADCs have been proposed, fully digital receivers with many lower resolution ADCs (and lower power) may be a more versatile solution. In this paper, focusing on an uplink scenario, we propose to take the optimization of ADC resolution one step further by enabling variable resolutions in the ADCs that sample the signal received at each antenna. This allows us to give more bits to the antennas that capture the strongest incoming signal and fewer bits to the antennas that capture little signal energy and mostly noise. Simulation results show that, depending on the unquantized link SNR, a power saving in the order of 20-80% can be obtained by our variable resolution proposal in comparison with a reference fully digital receiver with a fixed low number of bits in all its ADCs. | DC systems using low-resolution ADCs to reduce power consumption are further analyzed in @cite_3 @cite_19 , showing that a few bits are enough to achieve almost the full spectral efficiency (SE) of an unquantized system of the same characteristics. It is possible to use analog switches instead of analog mixers to create a hybrid scheme that samples only the best subset of the antennas of the array to reduce power consumption, as proposed in @cite_18 . In addition to switching the best antennas to high resolution ADCs, it is possible to add 1-bit ADCs to sample the rest of the antennas as in @cite_13 , achieving a large fraction of the capacity of a full-high-resolution architecture. | {
"cite_N": [
"@cite_19",
"@cite_18",
"@cite_13",
"@cite_3"
],
"mid": [
"2189415357",
"2195833401",
"1560563382",
"2262641688"
],
"abstract": [
"The low-resolution analog-to-digital convertor (ADC) is a promising solution to significantly reduce the power consumption of radio frequency circuits in massive multiple-input multiple-output (MIMO) systems. In this letter, we investigate the uplink spectral efficiency (SE) of massive MIMO systems with low-resolution ADCs over Rician fading channels, where both perfect and imperfect channel state information are considered. By modeling the quantization noise of low-resolution ADCs as an additive quantization noise, we derive tractable and exact approximation expressions of the uplink SE of massive MIMO with the typical maximal-ratio combining (MRC) receivers. We also analyze the impact of the ADC resolution, the Rician @math -factor, and the number of antennas on the uplink SE. Our derived results reveal that the use of low-cost and low-resolution ADCs can still achieve satisfying SE in massive MIMO systems.",
"Hybrid analog/digital multiple-input multiple-output architectures were recently proposed as an alternative for fully digital precoding in millimeter wave wireless communication systems. This is motivated by the possible reduction in the number of RF chains and analog-to-digital converters. In these architectures, the analog processing network is usually based on variable phase shifters. In this paper, we propose hybrid architectures based on switching networks to reduce the complexity and the power consumption of the structures based on phase shifters. We define a power consumption model and use it to evaluate the energy efficiency of both structures. To estimate the complete MIMO channel, we propose an open-loop compressive channel estimation technique that is independent of the hardware used in the analog processing stage. We analyze the performance of the new estimation algorithm for hybrid architectures based on phase shifters and switches. Using the estimate, we develop two algorithms for the design of the hybrid combiner based on switches and analyze the achieved spectral efficiency. Finally, we study the tradeoffs between power consumption, hardware complexity, and spectral efficiency for hybrid architectures based on phase shifting networks and switching networks. Numerical results show that architectures based on switches obtain equal or better channel estimation performance than that obtained using phase shifters, while reducing hardware complexity and power consumption. For equal power consumption, all the hybrid architectures provide similar spectral efficiencies.",
"Motivated by the demand for energy-efficient communication solutions in the next generation cellular network, a mixed-ADC architecture for massive multiple-input-multiple-output (MIMO) systems is proposed, which differs from previous works in that herein one-bit analog-to-digital converters (ADCs) partially replace the conventionally assumed high-resolution ADCs. The information-theoretic tool of generalized mutual information (GMI) is exploited to analyze the achievable data rates of the proposed system architecture and an array of analytical results of engineering interest are obtained. For fixed single-input-multiple-output (SIMO) channels, a closed-form expression of the GMI is derived, based on which the linear combiner is optimized. The analysis is then extended to ergodic fading channels, for which tight lower and upper bounds of the GMI are obtained. Impacts of dithering and imperfect channel state information (CSI) are also investigated, and it is shown that dithering can remarkably improve the system performance while imperfect CSI only introduces a marginal rate loss. Finally, the analytical framework is applied to the multiuser access scenario. Numerical results demonstrate that the mixed-ADC architecture with a relatively small number of high-resolution ADCs is able to achieve a large fraction of the channel capacity of conventional architecture, while reducing the energy consumption considerably even compared with antenna selection, for both single-user and multiuser scenarios.",
"In this letter, we derive an approximate analytical expression for the uplink achievable rate of a massive multi-input multi-output (MIMO) antenna system when finite precision analog-digital converters (ADCs) and the common maximal-ratio combining technique are used at the receivers. To obtain this expression, we treat quantization noise as an additive quantization noise model. Considering the obtained expression, we show that low-resolution ADCs lead to a decrease in the achievable rate but the performance loss can be compensated by increasing the number of receiving antennas. In addition, we investigate the relation between the number of antennas and the ADC resolution, as well as the power-scaling law. These discussions support the feasibility of equipping highly economical ADCs with low resolution in practical massive MIMO systems."
]
} |
1610.09704 | 2548881486 | Patient notes contain a wealth of information of potentially great interest to medical investigators. However, to protect patients' privacy, Protected Health Information (PHI) must be removed from the patient notes before they can be legally released, a process known as patient note de-identification. The main objective for a de-identification system is to have the highest possible recall. Recently, the first neural-network-based de-identification system has been proposed, yielding state-of-the-art results. Unlike other systems, it does not rely on human-engineered features, which allows it to be quickly deployed, but does not leverage knowledge from human experts or from electronic health records (EHRs). In this work, we explore a method to incorporate human-engineered features as well as features derived from EHRs to a neural-network-based de-identification system. Our results show that the addition of features, especially the EHR-derived features, further improves the state-of-the-art in patient note de-identification, including for some of the most sensitive PHI types such as patient names. Since in a real-life setting patient notes typically come with EHRs, we recommend developers of de-identification systems to leverage the information EHRs contain. | A legal prerequisite for a patient note to be shared with a medical investigator is that it must be de-identified. The objective of the de-identification process is to remove all Protected Health Information (PHI). Not appropriately removing PHI may result in financial penalties @cite_27 @cite_32 . In the United States, the Health Insurance Portability and Accountability Act (HIPAA) @cite_19 defines PHI types that must be removed, ranging from phone numbers to patient names. Failure to accurately de-identify a patient note would jeopardize the patient's privacy: the performance of a de-identification system is therefore critical. | {
"cite_N": [
"@cite_19",
"@cite_27",
"@cite_32"
],
"mid": [
"",
"2074609720",
"2079946332"
],
"abstract": [
"",
"With nearly $30 billion in incentives available, it is critical to know to what extent US hospitals have been able to respond to those incentives by adopting electronic health record (EHR) systems that meet Medicare’s criteria for their “meaningful use.” Medicare has provided aggregate incentive payment data, but still missing is an understanding of how these payments are distributed across hospital types and years. Our analysis of Medicare data found a substantial increase in the percentage of hospitals receiving EHR incentive payments between 2011 (17.4 percent) and 2012 (36.8 percent). However, this increase was not uniform across all hospitals, and the overall proportion of hospitals receiving a payment for meaningful use was low. Critical-access, smaller, and publicly owned or nonprofit hospitals appeared to be at particular risk for failing to meet Medicare’s meaningful-use criteria, and the overall proportion of hospitals receiving a payment for meaningful use was low. Starting in 2015, hospitals t...",
"The HITECH Act created incentives to encourage adoption of electronic health records. As of May 2012, only 12.2% of 62,226 eligible professionals had attested to meaningful use, including 9.8% of specialists and 17.8% of primary care providers."
]
} |
1610.09704 | 2548881486 | Patient notes contain a wealth of information of potentially great interest to medical investigators. However, to protect patients' privacy, Protected Health Information (PHI) must be removed from the patient notes before they can be legally released, a process known as patient note de-identification. The main objective for a de-identification system is to have the highest possible recall. Recently, the first neural-network-based de-identification system has been proposed, yielding state-of-the-art results. Unlike other systems, it does not rely on human-engineered features, which allows it to be quickly deployed, but does not leverage knowledge from human experts or from electronic health records (EHRs). In this work, we explore a method to incorporate human-engineered features as well as features derived from EHRs to a neural-network-based de-identification system. Our results show that the addition of features, especially the EHR-derived features, further improves the state-of-the-art in patient note de-identification, including for some of the most sensitive PHI types such as patient names. Since in a real-life setting patient notes typically come with EHRs, we recommend developers of de-identification systems to leverage the information EHRs contain. | A naive approach to de-identification is to manually identify PHI. However, this is costly @cite_28 @cite_3 and unreliable @cite_1 . Consequently, there has been much work developing automated de-identification systems. These systems are either based on rules or machine-learning models. Rule-based systems typically rely on patterns, expressed as regular expressions and gazetteers, defined and tuned by humans @cite_20 @cite_30 @cite_25 @cite_18 @cite_9 @cite_21 @cite_1 @cite_23 @cite_0 @cite_5 . | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_28",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_23",
"@cite_5",
"@cite_25",
"@cite_20"
],
"mid": [
"",
"50158085",
"",
"",
"2104218280",
"",
"2117273822",
"1262131959",
"19235523",
"1540056032",
"",
""
],
"abstract": [
"",
"We created a software tool that accurately removes all patient identifying information from various kinds of clinical data documents, including laboratory and narrative reports. We created the Medical De-identification System (MeDS), a software tool that de-identifies clinical documents, and performed 2 evaluations. Our first evaluation used 2,400 Health Level Seven (HL7) messages from 10 different HL7 message producers. After modifying the software based on the results of this first evaluation, we performed a second evaluation using 7,190 pathology report HL7 messages. We compared the results of the MeDS de-identification process to a gold standard of human review to find identifying strings. For both evaluations, we calculated the number of successful scrubs, missed identifiers, and over-scrubs committed by MeDS and evaluated the readability and interpretability of the scrubbed messages. We categorized all missed identifiers into 3 groups: (1) complete HIPAA-specified identifiers, (2) HIPAA-specified identifier fragments, (3) non-HIPAA–specified identifiers (such as provider names and addresses). In the results of the first-pass evaluation, MeDS scrubbed 11,273 (99.06%) of the 11,380 HIPAA-specified identifiers and 38,095 (98.26%) of the 38,768 non-HIPAA–specified identifiers. In our second evaluation (status postmodification to the software), MeDS scrubbed 79,993 (99.47%) of the 80,418 HIPAA-specified identifiers and 12,689 (96.93%) of the 13,091 non-HIPAA–specified identifiers. Approximately 95% of scrubbed messages were both readable and interpretable. We conclude that MeDS successfully de-identified a wide range of medical documents from numerous sources and creates scrubbed reports that retain their interpretability, thereby maintaining their usefulness for research.",
"",
"",
"Electronic clinical documentation can be useful for activities such as public health surveillance, quality improvement, and research, but existing methods of de-identification may not provide sufficient protection of patient data. The general-purpose natural language processor MedLEE retains medical concepts while excluding the remaining text so, in addition to processing text into structured data, it may be able provide a secondary benefit of de-identification. Without modifying the system, the authors tested the ability of MedLEE to remove protected health information (PHI) by comparing 100 outpatient clinical notes with the corresponding XML-tagged output. Of 809 instances of PHI, 26 (3.2 ) were detected in output as a result of processing and identification errors. However, PHI in the output was highly transformed, much appearing as normalized terms for medical concepts, potentially making re-identification more difficult. The MedLEE processor may be a good enhancement to other de-identification systems, both removing PHI and providing coded data from clinical text.",
"",
"Medical researchers are legally required to protect patients' privacy by removing personally identifiable information from medical records before sharing the data with other researchers. We present an evaluation of methods for computer-assisted removal and replacement of protected health information (PHI) from free-text nursing notes collected in the intensive care unit as part of the MIMIC II project. A semiautomated method was developed to allow clinicians to highlight PHI on the screen of a tablet PC and to compare and combine the selections of different experts reading the same notes. An analysis of the performance of three human expert de-identifiers and of an automated system is presented. Expert adjudication demonstrated that inter-human variability was high, with few false positives and many false negatives. The sensitivity of human experts working alone ranged from 0.63 to 0.93, with an average of 0.81, and the average positive predictive value was 0.98. An algorithm generated few false negatives but many false positives. Its sensitivity was 0.85, but its positive predictive value was only 0.37. The de-identified database of nursing notes was re-identified with realistic surrogate (but unprotected) dates, serial numbers, names, and phrases to provide a gold standard database of over 2600 notes (approximately 340,000 words) with over 1700 instances of PHI. This reference gold standard database of nursing notes and the Java source code used to evaluate algorithm performance will be made freely available on Physionet in order to facilitate the development and validation of future de-identification algorithms.",
"We define a new approach to locating and replacing personally-identifying information in medical records that extends beyond straight search-and-replace procedures, and we provide techniques for minimizing risk to patient confidentiality. The straightforward approach of global search and replace properly located no more than 30-60% of all personally-identifying information that appeared explicitly in our sample database. On the other hand, our Scrub system found 99-100% of these references. Scrub uses detection algorithms that employ templates and specialized knowledge of what constitutes a name, address, phone number and so forth.",
"We present an original system for locating and removing personally-identifying information in patient records. In this experiment, anonymization is seen as a particular case of knowledge extraction. We use natural language processing tools provided by the MEDTAG framework: a semantic lexicon specialized in medicine, and a toolkit for word-sense and morpho-syntactic tagging. The system finds 98-99% of all personally-identifying information.",
"The ability to access large amounts of de-identified clinical data would facilitate epidemiologic and retrospective research. Previously described de-identification methods require knowledge of natural language processing or have not been made available to the public. We take advantage of the fact that the vast majority of proper names in pathology reports occur in pairs. In rare cases where one proper name is by itself, it is preceded or followed by an affix that identifies it as a proper name (Mrs., Dr., PhD). We created a tool based on this observation using substitution methods that was easy to implement and was largely based on publicly available data sources. We compiled a Clinical and Common Usage Word (CCUW) list as well as a fairly comprehensive proper name list. Despite the large overlap between these two lists, we were able to refine our methods to achieve accuracy similar to previous attempts at de-identification. Our method found 98.7% of 231 proper names in the narrative sections of pathology reports. Three single proper names were missed out of 1001 pathology reports (0.3%, no first name/last name pairs). It is unlikely that identification could be implied from this information. We will continue to refine our methods, specifically working to improve the quality of our CCUW and proper name lists to obtain higher levels of accuracy.",
"",
""
]
} |
1610.09704 | 2548881486 | Patient notes contain a wealth of information of potentially great interest to medical investigators. However, to protect patients' privacy, Protected Health Information (PHI) must be removed from the patient notes before they can be legally released, a process known as patient note de-identification. The main objective for a de-identification system is to have the highest possible recall. Recently, the first neural-network-based de-identification system has been proposed, yielding state-of-the-art results. Unlike other systems, it does not rely on human-engineered features, which allows it to be quickly deployed, but does not leverage knowledge from human experts or from electronic health records (EHRs). In this work, we explore a method to incorporate human-engineered features as well as features derived from EHRs to a neural-network-based de-identification system. Our results show that the addition of features, especially the EHR-derived features, further improves the state-of-the-art in patient note de-identification, including for some of the most sensitive PHI types such as patient names. Since in a real-life setting patient notes typically come with EHRs, we recommend developers of de-identification systems to leverage the information EHRs contain. | A more recent system has introduced the use of artificial neural networks (ANNs) for de-identification @cite_24 , and obtained state-of-the-art results. The system does not use any manually-curated features. Instead, it solely relies on character and token embeddings. While this allows the system to be developed and deployed faster, it fails to give users the possibility to add features engineered by human experts. 
Additionally, in practical settings of de-identification, patient notes typically come from a hospital EHR database, which contains metadata such as which patient each note pertains to, and other information such as the names of all doctors who work at the hospital where the patient was treated. The features derived from EHR databases may be useful for boosting the performance of de-identification systems. In this work, we present a method to incorporate features to this ANN-based system, and show that it further improves the state-of-the-art. | {
"cite_N": [
"@cite_24"
],
"mid": [
"2433185791"
],
"abstract": [
"Objective: Patient notes in electronic health records (EHRs) may contain critical information for medical investigations. However, the vast majority of medical investigators can only access de-identified notes, in order to protect the confidentiality of patients. In the United States, the Health Insurance Portability and Accountability Act (HIPAA) defines 18 types of protected health information (PHI) that needs to be removed to de-identify patient notes. Manual de-identification is impractical given the size of EHR databases, the limited number of researchers with access to the non-de-identified notes, and the frequent mistakes of human annotators. A reliable automated de-identification system would consequently be of high value. Materials and Methods: We introduce the first de-identification system based on artificial neural networks (ANNs), which requires no handcrafted features or rules, unlike existing systems. We compare the performance of the system with state-of-the-art systems on two datasets: the i2b2 2014 de-identification challenge dataset, which is the largest publicly available de-identification dataset, and the MIMIC de-identification dataset, which we assembled and is twice as large as the i2b2 2014 dataset. Results: Our ANN model outperforms the state-of-the-art systems. It yields an F1-score of 97.85 on the i2b2 2014 dataset, with a recall of 97.38 and a precision of 97.32, and an F1-score of 99.23 on the MIMIC de-identification dataset, with a recall of 99.25 and a precision of 99.06. Conclusion: Our findings support the use of ANNs for de-identification of patient notes, as they show better performance than previously published systems while requiring no feature engineering."
]
} |
1610.09950 | 2540057053 | Most of the existing graph embedding methods focus on nodes, which aim to output a vector representation for each node in the graph such that two nodes being "close" on the graph are close too in the low-dimensional space. Despite the success of embedding individual nodes for graph analytics, we notice that an important concept of embedding communities (i.e., groups of nodes) is missing. Embedding communities is useful, not only for supporting various community-level applications, but also to help preserve community structure in graph embedding. In fact, we see community embedding as providing a higher-order proximity to define the node closeness, whereas most of the popular graph embedding methods focus on first-order and/or second-order proximities. To learn the community embedding, we hinge upon the insight that community embedding and node embedding reinforce with each other. As a result, we propose ComEmbed, the first community embedding method, which jointly optimizes the community embedding and node embedding together. We evaluate ComEmbed on real-world data sets. We show it outperforms the state-of-the-art baselines in both tasks of node classification and community prediction. | In terms of the target to embed, most graph embedding methods focus on nodes. For example, earlier methods, such as MDS @cite_22 , LLE @cite_12 , IsoMap @cite_20 and Laplacian eigenmap @cite_14 , typically aim to solve for the leading eigenvectors of graph affinity matrices as node embeddings. Recent methods typically rely on neural networks to learn the representation for each node, with either shallow architectures @cite_8 @cite_6 @cite_19 or deep architectures @cite_0 @cite_17 @cite_4 . Other than node embedding, there is some attempt to learn edge embedding in a knowledge base @cite_21 or proximity embedding between two possibly distant nodes in a general heterogeneous graph @cite_16 . But there is no community embedding so far as we know. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_8",
"@cite_21",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_16",
"@cite_12",
"@cite_20",
"@cite_17"
],
"mid": [
"2156718197",
"2062797058",
"2479500547",
"",
"2251363251",
"",
"2406128552",
"2366141641",
"",
"2053186076",
"2001141328",
"2393319904"
],
"abstract": [
"Drawing on the correspondence between the graph Laplacian, the Laplace-Beltrami operator on a manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for constructing a representation for data sampled from a low dimensional manifold embedded in a higher dimensional space. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality preserving properties and a natural connection to clustering. Several applications are considered.",
"Data embedding is used in many machine learning applications to create low-dimensional feature representations, which preserves the structure of data points in their original space. In this paper, we examine the scenario of a heterogeneous network with nodes and content of various types. Such networks are notoriously difficult to mine because of the bewildering combination of heterogeneous contents and structures. The creation of a multidimensional embedding of such data opens the door to the use of a wide variety of off-the-shelf mining techniques for multidimensional data. Despite the importance of this problem, limited efforts have been made on embedding a network of scalable, dynamic and heterogeneous data. In such cases, both the content and linkage structure provide important cues for creating a unified feature representation of the underlying network. In this paper, we design a deep embedding algorithm for networked data. A highly nonlinear multi-layered embedding function is used to capture the complex interactions between the heterogeneous data in a network. Our goal is to create a multi-resolution deep embedding function, that reflects both the local and global network structures, and makes the resulting embedding useful for a variety of data mining tasks. In particular, we demonstrate that the rich content and linkage information in a heterogeneous network can be captured by such an approach, so that similarities among cross-modal data can be measured directly in a common embedding space. Once this goal has been achieved, a wide variety of data mining problems can be solved by applying off-the-shelf algorithms designed for handling vector representations. Our experiments on real-world network datasets show the effectiveness and scalability of the proposed algorithm as compared to the state-of-the-art embedding methods.",
"",
"",
"We consider the problem of embedding knowledge graphs (KGs) into continuous vector spaces. Existing methods can only deal with explicit relationships within each triple, i.e., local connectivity patterns, but cannot handle implicit relationships across different triples, i.e., contextual connectivity patterns. This paper proposes context-dependent KG embedding, a twostage scheme that takes into account both types of connectivity patterns and obtains more accurate embeddings. We evaluate our approach on the tasks of link prediction and triple classification, and achieve significant and consistent improvements over state-of-the-art methods.",
"",
"Numerous important problems can be framed as learning from graph data. We propose a framework for learning convolutional neural networks for arbitrary graphs. These graphs may be undirected, directed, and with both discrete and continuous node and edge attributes. Analogous to image-based convolutional networks that operate on locally connected regions of the input, we present a general approach to extracting locally connected regions from graphs. Using established benchmark data sets, we demonstrate that the learned feature representations are competitive with state of the art graph kernels and that their computation is highly efficient.",
"Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.",
"",
"Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text. How do we judge similarity? Our mental representations of the world are formed by processing large numbers of sensory in",
"Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs—30,000 auditory nerve fibers or 106 optic nerve fibers—a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure.",
"Network embedding is an important method to learn low-dimensional representations of vertexes in networks, aiming to capture and preserve the network structure. Almost all the existing network embedding methods adopt shallow models. However, since the underlying network structure is complex, shallow models cannot capture the highly non-linear network structure, resulting in sub-optimal network representations. Therefore, how to find a method that is able to effectively capture the highly non-linear network structure and preserve the global and local structure is an open yet important problem. To solve this problem, in this paper we propose a Structural Deep Network Embedding method, namely SDNE. More specifically, we first propose a semi-supervised deep model, which has multiple layers of non-linear functions, thereby being able to capture the highly non-linear network structure. Then we propose to exploit the first-order and second-order proximity jointly to preserve the network structure. The second-order proximity is used by the unsupervised component to capture the global network structure. While the first-order proximity is used as the supervised information in the supervised component to preserve the local network structure. By jointly optimizing them in the semi-supervised deep model, our method can preserve both the local and global network structure and is robust to sparse networks. Empirically, we conduct the experiments on five real-world networks, including a language network, a citation network and three social networks. The results show that compared to the baselines, our method can reconstruct the original network significantly better and achieves substantial gains in three applications, i.e. multi-label classification, link prediction and visualization."
]
} |
1610.09950 | 2540057053 | Most of the existing graph embedding methods focus on nodes, which aim to output a vector representation for each node in the graph such that two nodes being "close" on the graph are close too in the low-dimensional space. Despite the success of embedding individual nodes for graph analytics, we notice that an important concept of embedding communities (i.e., groups of nodes) is missing. Embedding communities is useful, not only for supporting various community-level applications, but also to help preserve community structure in graph embedding. In fact, we see community embedding as providing a higher-order proximity to define the node closeness, whereas most of the popular graph embedding methods focus on first-order and/or second-order proximities. To learn the community embedding, we hinge upon the insight that community embedding and node embedding reinforce with each other. As a result, we propose ComEmbed, the first community embedding method, which jointly optimizes the community embedding and node embedding together. We evaluate ComEmbed on real-world data sets. We show it outperforms the state-of-the-art baselines in both tasks of node classification and community prediction. | In terms of the information to preserve, most graph embedding methods try to preserve first-order and/or second-order proximity @cite_1 @cite_19 @cite_6 @cite_17 . Some recent attempts consider higher-order proximity by factorizing a higher-order node-node proximity matrix defined by PageRank or the Katz index @cite_2 @cite_3 . Hence their higher-order proximity is based on graph reachability via random walks, where the notion of community is missing. In contrast, our community embedding tries to preserve the first-order, second-order and community-aware higher-order proximities. | {
"cite_N": [
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_19",
"@cite_2",
"@cite_17"
],
"mid": [
"",
"",
"2387462954",
"2366141641",
"2090891622",
"2393319904"
],
"abstract": [
"",
"",
"Graph embedding algorithms embed a graph into a vector space where the structure and the inherent properties of the graph are preserved. The existing graph embedding methods cannot preserve the asymmetric transitivity well, which is a critical property of directed graphs. Asymmetric transitivity depicts the correlation among directed edges, that is, if there is a directed path from u to v, then there is likely a directed edge from u to v. Asymmetric transitivity can help in capturing structures of graphs and recovering from partially observed graphs. To tackle this challenge, we propose the idea of preserving asymmetric transitivity by approximating high-order proximity which are based on asymmetric transitivity. In particular, we develop a novel graph embedding algorithm, High-Order Proximity preserved Embedding (HOPE for short), which is scalable to preserve high-order proximities of large scale graphs and capable of capturing the asymmetric transitivity. More specifically, we first derive a general formulation that cover multiple popular high-order proximity measurements, then propose a scalable embedding algorithm to approximate the high-order proximity measurements based on their general formulation. Moreover, we provide a theoretical upper bound on the RMSE (Root Mean Squared Error) of the approximation. Our empirical experiments on a synthetic dataset and three real-world datasets demonstrate that HOPE can approximate the high-order proximities significantly better than the state-of-art algorithms and outperform the state-of-art algorithms in tasks of reconstruction, link prediction and vertex recommendation.",
"Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.",
"In this paper, we present GraRep , a novel model for learning vertex representations of weighted graphs. This model learns low dimensional vectors to represent vertices appearing in a graph and, unlike existing work, integrates global structural information of the graph into the learning process. We also formally analyze the connections between our work and several previous research efforts, including the DeepWalk model of as well as the skip-gram model with negative sampling of We conduct experiments on a language network, a social network as well as a citation network and show that our learned global representations can be effectively used as features in tasks such as clustering, classification and visualization. Empirical results demonstrate that our representation significantly outperforms other state-of-the-art methods in such tasks.",
"Network embedding is an important method to learn low-dimensional representations of vertexes in networks, aiming to capture and preserve the network structure. Almost all the existing network embedding methods adopt shallow models. However, since the underlying network structure is complex, shallow models cannot capture the highly non-linear network structure, resulting in sub-optimal network representations. Therefore, how to find a method that is able to effectively capture the highly non-linear network structure and preserve the global and local structure is an open yet important problem. To solve this problem, in this paper we propose a Structural Deep Network Embedding method, namely SDNE. More specifically, we first propose a semi-supervised deep model, which has multiple layers of non-linear functions, thereby being able to capture the highly non-linear network structure. Then we propose to exploit the first-order and second-order proximity jointly to preserve the network structure. The second-order proximity is used by the unsupervised component to capture the global network structure. While the first-order proximity is used as the supervised information in the supervised component to preserve the local network structure. By jointly optimizing them in the semi-supervised deep model, our method can preserve both the local and global network structure and is robust to sparse networks. Empirically, we conduct the experiments on five real-world networks, including a language network, a citation network and three social networks. The results show that compared to the baselines, our method can reconstruct the original network significantly better and achieves substantial gains in three applications, i.e. multi-label classification, link prediction and visualization."
]
} |
1610.09950 | 2540057053 | Most of the existing graph embedding methods focus on nodes, which aim to output a vector representation for each node in the graph such that two nodes being "close" on the graph are close too in the low-dimensional space. Despite the success of embedding individual nodes for graph analytics, we notice that an important concept of embedding communities (i.e., groups of nodes) is missing. Embedding communities is useful, not only for supporting various community-level applications, but also to help preserve community structure in graph embedding. In fact, we see community embedding as providing a higher-order proximity to define the node closeness, whereas most of the popular graph embedding methods focus on first-order and/or second-order proximities. To learn the community embedding, we hinge upon the insight that community embedding and node embedding reinforce with each other. As a result, we propose ComEmbed, the first community embedding method, which jointly optimizes the community embedding and node embedding together. We evaluate ComEmbed on real-world data sets. We show it outperforms the state-of-the-art baselines in both tasks of node classification and community prediction. | In terms of the interaction between nodes and communities, there are a few graph embedding models that use node embedding to assist community detection @cite_5 @cite_10 , but they have no notion of community in their node embedding. There is little work that allows community feedback to guide node embedding @cite_11 , and it lacks the concept of community embedding, while its community feedback requires extra supervision in the form of must-links. In contrast, we optimize node embedding, community embedding and community detection in a closed loop, and let them reinforce each other. | {
"cite_N": [
"@cite_5",
"@cite_10",
"@cite_11"
],
"mid": [
"2415243320",
"2187032421",
"2577326063"
],
"abstract": [
"In this paper, we propose a novel model for learning graph representations, which generates a low-dimensional vector representation for each vertex by capturing the graph structural information. Different from other previous research efforts, we adopt a random surfing model to capture graph structural information directly, instead of using the sampling-based method for generating linear sequences proposed by (2014). The advantages of our approach will be illustrated from both theorical and empirical perspectives. We also give a new perspective for the matrix factorization method proposed by Levy and Goldberg (2014), in which the pointwise mutual information (PMI) matrix is considered as an analytical solution to the objective function of the skip-gram model with negative sampling proposed by (2013). Unlike their approach which involves the use of the SVD for finding the low-dimensitonal projections from the PMI matrix, however, the stacked denoising autoencoder is introduced in our model to extract complex features and model non-linearities. To demonstrate the effectiveness of our model, we conduct experiments on clustering and visualization tasks, employing the learned vertex representations as features. Empirical results on datasets of varying sizes show that our model outperforms other stat-of-the-art models in such tasks.",
"We present a new algorithm for community detection. The algorithm uses random walks to embed the graph in a space of measures, after which a modification of k-means in that space is applied. The algorithm is therefore fast and easily parallelizable. We evaluate the algorithm on standard random graph benchmarks, including some overlapping community benchmarks, and find its performance to be better or at least as good as previously known algorithms. We also prove a linear time (in number of edges) guarantee for the algorithm on a p, q-stochastic block model with where p ≥ c · N- ½+ ∊ and p - q ≥ c′ √pN- ½+ ∊ log N.",
"Identification of module or community structures is important for characterizing and understanding complex systems. While designed with different objectives, i.e., stochastic models for regeneration and modularity maximization models for discrimination, both these two types of model look for low-rank embedding to best represent and reconstruct network topology. However, the mapping through such embedding is linear, whereas real networks have various nonlinear features, making these models less effective in practice. Inspired by the strong representation power of deep neural networks, we propose a novel nonlinear reconstruction method by adopting deep neural networks for representation. We then extend the method to a semi-supervised community detection algorithm by incorporating pairwise constraints among graph nodes. Extensive experimental results on synthetic and real networks show that the new methods are effective, outperforming most state-of-the-art methods for community detection."
]
} |
1610.09996 | 2548872772 | This paper proposes dynamic chunk reader (DCR), an end-to-end neural reading comprehension (RC) model that is able to extract and rank a set of answer candidates from a given document to answer questions. DCR is able to predict answers of variable lengths, whereas previous neural RC models primarily focused on predicting single tokens or entities. DCR encodes a document and an input question with recurrent neural networks, and then applies a word-by-word attention mechanism to acquire question-aware representations for the document, followed by the generation of chunk representations and a ranking module to propose the top-ranked chunk as the answer. Experimental results show that DCR could achieve a 66.3 Exact match and 74.7 F1 score on the Stanford Question Answering Dataset. | The Attentive Reader was the first neural model for factoid RCQA @cite_0 . It uses bidirectional RNNs to encode the document and the query respectively, and uses the query representation to match against every token in the document. The Attention Sum Reader @cite_1 simplifies the model to just predicting the position of the correct answer in the document, and both training speed and test accuracy are greatly improved on the CNN/Daily Mail dataset. @cite_24 also simplified the Attentive Reader and reported higher accuracy. Window-based Memory Networks (MemN2N) were introduced along with the CBT dataset @cite_29 ; they do not use RNN encoders, but embed contexts as memory and match questions with the embedded contexts. The mechanism of these models is to learn the match between the answer context and the question representation. In contrast, memory-enhanced neural networks such as Neural Turing Machines @cite_31 and their variants @cite_8 @cite_22 @cite_32 were also potential candidates for the task, but their reported results on the bAbI task are worse than those of memory networks. Similarly, sequence-to-sequence models were also used @cite_25 @cite_0 , but they did not yield better results either. | {
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_29",
"@cite_1",
"@cite_32",
"@cite_0",
"@cite_24",
"@cite_31",
"@cite_25"
],
"mid": [
"2470713034",
"2211729040",
"2126209950",
"2963595025",
"2204302769",
"2949615363",
"2962809918",
"2950527759",
"2190253607"
],
"abstract": [
"We extend neural Turing machine (NTM) model into a dynamic neural Turing machine (D-NTM) by introducing a trainable memory addressing scheme. This addressing scheme maintains for each memory cell two separate vectors, content and address vectors. This allows the D-NTM to learn a wide variety of location-based addressing strategies including both linear and nonlinear ones. We implement the D-NTM with both continuous, differentiable and discrete, non-differentiable read write mechanisms. We investigate the mechanisms and effects of learning to read and write into a memory through experiments on Facebook bAbI tasks using both a feedforward and GRUcontroller. The D-NTM is evaluated on a set of Facebook bAbI tasks and shown to outperform NTM and LSTM baselines. We have done extensive analysis of our model and different variations of NTM on bAbI task. We also provide further experimental results on sequential pMNIST, Stanford Natural Language Inference, associative recall and copy tasks.",
"Neural Turing Machines (NTM) contain memory component that simulates \"working memory\" in the brain to store and retrieve information to ease simple algorithms learning. So far, only linearly organized memory is proposed, and during experiments, we observed that the model does not always converge, and overfits easily when handling certain tasks. We think memory component is key to some faulty behaviors of NTM, and better organization of memory component could help fight those problems. In this paper, we propose several different structures of memory for NTM, and we proved in experiments that two of our proposed structured-memory NTMs could lead to better convergence, in term of speed and prediction accuracy on copy task and associative recall task as in ( 2014).",
"We introduce a new test of how well language models capture meaning in children's books. Unlike standard language modelling benchmarks, it distinguishes the task of predicting syntactic function words from that of predicting lower-frequency words, which carry greater semantic content. We compare a range of state-of-the-art models, each with a different way of encoding what has been previously read. We show that models which store explicit representations of long-term contexts outperform state-of-the-art neural language models at predicting semantic content words, although this advantage is not observed for syntactic function words. Interestingly, we find that the amount of text encoded in a single memory representation is highly influential to the performance: there is a sweet-spot, not too big and not too small, between single words and full sentences that allows the most meaningful information in a text to be effectively retained and recalled. Further, the attention over such window-based memories can be trained effectively through self-supervision. We then assess the generality of this principle by applying it to the CNN QA benchmark, which involves identifying named entities in paraphrased summaries of news articles, and achieve state-of-the-art performance.",
"",
"The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.",
"Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions posed on the contents of documents that they have seen, but until now large scale training and test datasets have been missing for this type of evaluation. In this work we define a new methodology that resolves this bottleneck and provides large scale supervised reading comprehension data. This allows us to develop a class of attention based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure.",
"Enabling a computer to understand a document so that it can answer comprehension questions is a central, yet unsolved goal of NLP. A key factor impeding its solution by machine learned systems is the limited availability of human-annotated data. (2015) seek to solve this problem by creating over a million training examples by pairing CNN and Daily Mail news articles with their summarized bullet points, and show that a neural network can then be trained to give good performance on this task. In this paper, we conduct a thorough examination of this new reading comprehension task. Our primary aim is to understand what depth of language understanding is required to do well on this task. We approach this from one side by doing a careful hand-analysis of a small subset of the problems and from the other by showing that simple, carefully designed systems can obtain accuracies of 72.4 and 75.8 on these two datasets, exceeding current state-of-the-art results by over 5 and approaching what we believe is the ceiling for performance on this task.1",
"We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.",
"In this paper we explore deep learning models with memory component or attention mechanism for question answering task. We combine and compare three models, Neural Machine Translation, Neural Turing Machine, and Memory Networks for a simulated QA data set. This paper is the first one that uses Neural Machine Translation and Neural Turing Machines for solving QA tasks. Our results suggest that the combination of attention and memory have potential to solve certain QA problem."
]
} |
1610.09785 | 2545779952 | This paper considers a diffusion-based molecular communication system, where the transmitter uses reaction shift keying (RSK) as the modulation scheme. We focus on the demodulation of the RSK signal at the receiver. The receiver consists of a front-end molecular circuit and a back-end demodulator. The front-end molecular circuit is a set of chemical reactions consisting of multiple chemical species. The optimal demodulator computes the a posteriori probability of the transmitted symbols given the history of the observation. The derivation of the optimal demodulator requires the solution to a specific Bayesian filtering problem. The solution to this Bayesian filtering problem had been derived for a few specific molecular circuits and specific choice(s) of observed chemical species. The derivation of such a solution is also lengthy. The key contribution of this paper is to present a general solution to this Bayesian filtering problem, which can be applied to any molecular circuit and any choice of observed species. | The interest of the research community in molecular communication is on the rise, as shown by recent surveys @cite_38 @cite_15 @cite_21 @cite_27 @cite_30 . | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_21",
"@cite_27",
"@cite_15"
],
"mid": [
"2168002711",
"2116194016",
"2018036523",
"2333907923",
"1997193784"
],
"abstract": [
"With much advancement in the field of nanotechnology, bioengineering, and synthetic biology over the past decade, microscales and nanoscales devices are becoming a reality. Yet the problem of engineering a reliable communication system between tiny devices is still an open problem. At the same time, despite the prevalence of radio communication, there are still areas where traditional electromagnetic waves find it difficult or expensive to reach. Points of interest in industry, cities, and medical applications often lie in embedded and entrenched areas, accessible only by ventricles at scales too small for conventional radio waves and microwaves, or they are located in such a way that directional high frequency systems are ineffective. Inspired by nature, one solution to these problems is molecular communication (MC), where chemical signals are used to transfer information. Although biologists have studied MC for decades, it has only been researched for roughly 10 year from a communication engineering lens. Significant number of papers have been published to date, but owing to the need for interdisciplinary work, much of the results are preliminary. In this survey, the recent advancements in the field of MC engineering are highlighted. First, the biological, chemical, and physical processes used by an MC system are discussed. This includes different components of the MC transmitter and receiver, as well as the propagation and transport mechanisms. Then, a comprehensive survey of some of the recent works on MC through a communication engineering lens is provided. The survey ends with a technology readiness analysis of MC and future research directions.",
"Nanotechnologies promise new solutions for several applications in biomedical, industrial and military fields. At nano-scale, a nano-machine can be considered as the most basic functional unit. Nano-machines are tiny components consisting of an arranged set of molecules, which are able to perform very simple tasks. Nanonetworks. i.e., the interconnection of nano-machines are expected to expand the capabilities of single nano-machines by allowing them to cooperate and share information. Traditional communication technologies are not suitable for nanonetworks mainly due to the size and power consumption of transceivers, receivers and other components. The use of molecules, instead of electromagnetic or acoustic waves, to encode and transmit the information represents a new communication paradigm that demands novel solutions such as molecular transceivers, channel models or protocols for nanonetworks. In this paper, first the state-of-the-art in nano-machines, including architectural aspects, expected features of future nano-machines, and current developments are presented for a better understanding of nanonetwork scenarios. Moreover, nanonetworks features and components are explained and compared with traditional communication networks. Also some interesting and important applications for nanonetworks are highlighted to motivate the communication needs between the nano-machines. Furthermore, nanonetworks for short-range communication based on calcium signaling and molecular motors as well as for long-range communication based on pheromones are explained in detail. Finally, open research challenges, such as the development of network components, molecular communication theory, and the development of new architectures and protocols, are presented which need to be solved in order to pave the way for the development and deployment of nanonetworks within the next couple of decades.",
"The ability of engineered biological nanomachines to communicate with biological systems at the molecular level is anticipated to enable future applications such as monitoring the condition of a human body, regenerating biological tissues and organs, and interfacing artificial devices with neural systems. From the viewpoint of communication theory and engineering, molecular communication is proposed as a new paradigm for engineered biological nanomachines to communicate with the natural biological nanomachines which form a biological system. Distinct from the current telecommunication paradigm, molecular communication uses molecules as the carriers of information; sender biological nanomachines encode information on molecules and release the molecules in the environment, the molecules then propagate in the environment to receiver biological nanomachines, and the receiver biological nanomachines biochemically react with the molecules to decode information. Current molecular communication research is limited to small-scale networks of several biological nanomachines. Key challenges to bridge the gap between current research and practical applications include developing robust and scalable techniques to create a functional network from a large number of biological nanomachines. Developing networking mechanisms and communication protocols is anticipated to introduce new avenues into integrating engineered and natural biological nanomachines into a single networked system. In this paper, we present the state-of-the-art in the area of molecular communication by discussing its architecture, features, applications, design, engineering, and physical modeling. We then discuss challenges and opportunities in developing networking mechanisms and communication protocols to create a network from a large number of bio-nanomachines for future applications.",
"Molecular communication is an emerging communication paradigm for biological nanomachines. It allows biological nanomachines to communicate through exchanging molecules in an aqueous environment and to perform collaborative tasks through integrating functionalities of individual biological nanomachines. This paper develops the layered architecture of molecular communication and describes research issues that molecular communication faces at each layer of the architecture. Specifically, this paper applies a layered architecture approach, traditionally used in communication networks, to molecular communication, decomposes complex molecular communication functionality into a set of manageable layers, identifies basic functionalities of each layer, and develops a descriptive model consisting of key components of the layer for each layer. This paper also discusses open research issues that need to be addressed at each layer. In addition, this paper provides an example design of targeted drug delivery, a nanomedical application, to illustrate how the layered architecture helps design an application of molecular communication. The primary contribution of this paper is to provide an in-depth architectural view of molecular communication. Establishing a layered architecture of molecular communication helps organize various research issues and design concerns into layers that are relatively independent of each other, and thus accelerates research in each layer and facilitates the design and development of applications of molecular communication.",
"Abstract Molecular communication uses molecules (i.e., biochemical signals) as an information medium and allows biologically and artificially created nano- or microscale entities to communicate over a short distance. It is a new communication paradigm; it is different from the traditional communication paradigm, which uses electromagnetic waves (i.e., electronic and optical signals) as an information medium. Key research challenges in molecular communication include design of system components (i.e., a sender, a molecular propagation system, a receiver, and a molecular communication interface) and mathematical modeling of each system component as well as entire systems. We review all research activities in molecular communication to date, from its origin to recent experimental studies and theoretical approaches for each system component. As a model molecular communication system, we describe an integrated system that combines a molecular communication interface (using a lipid vesicle embedded with channel-forming proteins), a molecular propagation system (using microtubule motility on kinesin molecular motors and DNA hybridization), and a sender receiver (using giant lipid vesicles embedded with gemini-peptide lipids). We also present potential applications and the future outlook of molecular communication."
]
} |
1610.09785 | 2545779952 | This paper considers a diffusion-based molecular communication system, where the transmitter uses reaction shift keying (RSK) as the modulation scheme. We focus on the demodulation of RSK signal at the receiver. The receiver consists of a front-end molecular circuit and a back-end demodulator. The front-end molecular circuit is a set of chemical reactions consisting of multiple chemical species. The optimal demodulator computes the posteriori probability of the transmitted symbols given the history of the observation. The derivation of the optimal demodulator requires the solution to a specific Bayesian filtering problem. The solution to this Bayesian filtering problem had been derived for a few specific molecular circuits and specific choice(s) of observed chemical species. The derivation of such solution is also lengthy. The key contribution of this paper is to present a general solution to this Bayesian filtering problem, which can be applied to any molecular circuit and any choice of observed species. | On the transmitter side, different modulation schemes have been proposed in the literature, as mentioned in Section . These schemes also use different signalling-molecule emission patterns at the transmitter, e.g. an impulse @cite_8 or a Poisson process @cite_16. However, this paper focuses on RSK, where the transmitter uses different chemical reactions to generate different emission patterns to represent different symbols @cite_20 @cite_23. | {
"cite_N": [
"@cite_16",
"@cite_23",
"@cite_20",
"@cite_8"
],
"mid": [
"2963922654",
"2053226033",
"2115084976",
"2148147164"
],
"abstract": [
"In this paper, a diffusion-based molecular communication channel between two nano-machines is considered. The effect of the amount of memory on performance is characterized, and a simple memory-limited decoder is proposed; its performance is shown to be close to that of the best possible decoder (without any restrictions on the computational complexity or its functional form), using genie-aided upper bounds. This effect is adapted to the case of Molecular Concentration Shift Keying; it is shown that a four-bit memory achieves nearly the same performance as infinite memory for all of the examples considered. A general class of threshold decoders is considered and shown to be suboptimal for a Poisson channel with memory, unless the SNR is higher than a computed threshold. During each symbol duration (symbol period), the probability that a released molecule hits the receiver changes over the duration of the period; thus, we also consider a receiver that samples at a rate higher than the transmission rate (a multi-read system). A multi-read system improves performance. The associated decision rule for this system is shown to be a weighted sum of the samples during each symbol interval. The performance of the system is analyzed using the saddle point approximation. The best performance gains are achieved for an oversampling factor of three for the examples considered.",
"Reaction shift keying (RSK) is a recently proposed modulation scheme for diffusion-based molecular communication networks. For RSK, the transmitter uses different chemical reactions to generate different emission patterns to represent different symbols. The receiver front end consists of a molecular circuit and the back end uses a bank of demodulation filter for detection. The aim of this paper is to study the impact of the choice of molecular circuits on communication performance. We consider two circuits: (1) ligand-receptor binding with feedback regulation; (2) ligand-receptor with multiple binding sites. For each circuit, we derive the maximum a posteriori demodulation filter by solving a Bayesian filtering problem. We show that appropriate amount of feedback can reduce symbol error rate. We also show how the choice of measurements in the multiple multiple binding site case can affect communication performance.",
"In a diffusion-based molecular communication network, transmitters and receivers communicate by using signalling molecules (or ligands) in a fluid medium. This paper assumes that the transmitter uses different chemical reactions to generate different emission patterns of signalling molecules to represent different transmission symbols, and the receiver consists of receptors. When the signalling molecules arrive at the receiver, they may react with the receptors to form ligand-receptor complexes. Our goal is to study the demodulation in this setup assuming that the transmitter and receiver are synchronised. We derive an optimal demodulator using the continuous history of the number of complexes at the receiver as the input to the demodulator. We do that by first deriving a communication model which includes the chemical reactions in the transmitter, diffusion in the transmission medium and the ligand-receptor process in the receiver. This model, which takes the form of a continuous-time Markov process, captures the noise in the receiver signal due to the stochastic nature of chemical reactions and diffusion. We then adopt a maximum a posteriori framework and use Bayesian filtering to derive the optimal demodulator. We use numerical examples to illustrate the properties of this optimal demodulator.",
"In this paper we have investigated into a multi-level amplitude modulation (M-AM) scheme for concentration-encoded unicast molecular communication between a transmitting nanomachine (TN) and a receiving nanomachine (RN) in a nanonetwork. The performance of M-AM scheme has been evaluated on the basis of signal strength in the form of average loss of concentration of received molecules at the location of RN, and interference strength in the form of concentration of undesired molecules that originate at previous symbol but provide additional concentration of molecules over the transmission rate, and interfere with the detection of current symbol. In order to evaluate signal strength and interference strength characteristics two performance metrics have been proposed and explained by observing their behavior as a function of communication range and number of amplitude levels. In addition, results of M-AM scheme have been compared with conventional binary scheme in order to show the corresponding improvement and or drawbacks when a random sequence of bits is transmitted."
]
} |
1610.09785 | 2545779952 | This paper considers a diffusion-based molecular communication system, where the transmitter uses reaction shift keying (RSK) as the modulation scheme. We focus on the demodulation of RSK signal at the receiver. The receiver consists of a front-end molecular circuit and a back-end demodulator. The front-end molecular circuit is a set of chemical reactions consisting of multiple chemical species. The optimal demodulator computes the posteriori probability of the transmitted symbols given the history of the observation. The derivation of the optimal demodulator requires the solution to a specific Bayesian filtering problem. The solution to this Bayesian filtering problem had been derived for a few specific molecular circuits and specific choice(s) of observed chemical species. The derivation of such solution is also lengthy. The key contribution of this paper is to present a general solution to this Bayesian filtering problem, which can be applied to any molecular circuit and any choice of observed species. | On the receiver side, different receiver designs have been proposed in the literature for molecular communication systems, e.g. @cite_13 @cite_9 @cite_35 @cite_43 @cite_25. Similarly, different demodulation schemes for molecular communication systems are presented in @cite_9 @cite_43. A common idea in these papers is that discrete-time samples of the number of output molecules are used to compute the likelihood of the transmitted symbol. However, the demodulation of RSK uses continuous-time signals @cite_20 @cite_23, and the processing of such continuous-time signals requires an analog filter. We further showed in our earlier work @cite_20 @cite_36 that information processing using a uniformly sampled version of the signals generally results in information loss. A similar conclusion was also reached in @cite_11 via an information-theoretic analysis of capacity. | {
"cite_N": [
"@cite_35",
"@cite_36",
"@cite_9",
"@cite_43",
"@cite_23",
"@cite_13",
"@cite_25",
"@cite_20",
"@cite_11"
],
"mid": [
"1984920148",
"2093072521",
"2031515082",
"1991545742",
"2053226033",
"2059488492",
"2004388769",
"2115084976",
"1622908138"
],
"abstract": [
"In the Molecular Communication (MC), molecules are utilized to encode, transmit, and receive information. Transmission of the information is achieved by means of diffusion of molecules and the information is recovered based on the molecule concentration variations at the receiver location. The MC is very prone to intersymbol interference (ISI) due to residual molecules emitted previously. Furthermore, the stochastic nature of the molecule movements adds noise to the MC. For the first time, we propose four methods for a receiver in the MC to recover the transmitted information distorted by both ISI and noise. We introduce sequence detection methods based on maximum a posteriori (MAP) and maximum likelihood (ML) criterions, a linear equalizer based on minimum mean-square error (MMSE) criterion, and a decision-feedback equalizer (DFE) which is a nonlinear equalizer. We present a channel estimator to estimate time varying MC channel at the receiver. The performances of the proposed methods based on bit error rates are evaluated. The sequence detection methods reveal the best performance at the expense of computational complexity. However, the MMSE equalizer has the lowest performance with the lowest computational complexity. The results show that using these methods significantly increases the information transmission rate in the MC.",
"Molecular communication is a promising approach to realize the communication between nanoscale devices. In a diffusion-based molecular communication network, transmitters and receivers communicate by using signalling molecules. The transmitter uses different time-varying functions of concentration of signalling molecules (called emission patterns) to represent different transmission symbols. The signalling molecules diffuse freely in the medium. The receiver is assumed to consist of a number of receptors, which can be in ON or OFF state. When the signalling molecules arrive at the receiver, they react with the receptors and switch them from OFF to ON state probabilistically. The receptors remain ON for a random amount of time before reverting to the OFF state. This paper assumes that the receiver uses the continuous history of receptor state to infer the transmitted symbol. Furthermore, it assumes that the transmitter uses two transmission symbols and approaches the decoding problem from the maximum a posteriori (MAP) framework. Specifically, the decoding is realized by calculating the logarithm of the ratio of the posteriori probabilities of the two transmission symbols, or log-MAP ratio. A contribution of this paper is to show that the computation of log-MAP ratio can be performed by an analog filter. The receiver can therefore use the output of this filter to decide which symbol has been sent. This analog filter provides insight on what information is important for decoding. In particular, the timing at which the receptors switch from OFF to ON state, the number of OFF receptors and the mean number of signalling molecules at the receiver are important. Numerical examples are used to illustrate the property of this decoding method.",
"In this paper, we perform receiver design for a diffusive molecular communication environment. Our model includes flow in any direction, sources of information molecules in addition to the transmitter, and enzymes in the propagation environment to mitigate intersymbol interference. We characterize the mutual information between receiver observations to show how often independent observations can be made. We derive the maximum likelihood sequence detector to provide a lower bound on the bit error probability. We propose the family of weighted sum detectors for more practical implementation and derive their expected bit error probability. Under certain conditions, the performance of the optimal weighted sum detector is shown to be equivalent to a matched filter. Receiver simulation results show the tradeoff in detector complexity versus achievable bit error probability, and that a slow flow in any direction can improve the performance of a weighted sum detector.",
"Abstract In this paper, a strength-based optimum signal detection scheme for binary concentration-encoded molecular communication (CEMC) system has been presented. In CEMC, a single type of information molecule is assumed to carry the information from the transmitting nanomachine (TN), through the propagation medium, to the receiving nanomachine (RN) in the form of received concentration of information molecules at the location of the RN. We consider a pair of nanomachines communicating by means of on–off keying (OOK) transmission protocol in a three-dimensional ideal (i.e. free) diffusion-based unbounded propagation environment. First, based on stochastic chemical kinetics of the reaction events between ligand molecules and receptors, we develop a mathematical receiver model of strength-based detection scheme for OOK CEMC system. Using an analytical approach, we explain the receiver operating characteristic (ROC) curves of the receiver thus developed. Finally, we propose a variable threshold -based detection scheme and explain its communication range and rate dependent characteristics. We show that it provides an improvement in the communication ranges compared to fixed threshold -based detection scheme. (Part of this paper has been peer-reviewed and published in BWCCA-2012 conference in Victoria, BC, 12–14 November, 2012 [20] .)",
"Reaction shift keying (RSK) is a recently proposed modulation scheme for diffusion-based molecular communication networks. For RSK, the transmitter uses different chemical reactions to generate different emission patterns to represent different symbols. The receiver front end consists of a molecular circuit and the back end uses a bank of demodulation filter for detection. The aim of this paper is to study the impact of the choice of molecular circuits on communication performance. We consider two circuits: (1) ligand-receptor binding with feedback regulation; (2) ligand-receptor with multiple binding sites. For each circuit, we derive the maximum a posteriori demodulation filter by solving a Bayesian filtering problem. We show that appropriate amount of feedback can reduce symbol error rate. We also show how the choice of measurements in the multiple multiple binding site case can affect communication performance.",
"This paper studies the mitigation of intersymbol interference in a diffusive molecular communication system using enzymes that freely diffuse in the propagation environment. The enzymes form reaction intermediates with information molecules and then degrade them so that they cannot interfere with future transmissions. A lower bound expression on the expected number of molecules measured at the receiver is derived. A simple binary receiver detection scheme is proposed where the number of observed molecules is sampled at the time when the maximum number of molecules is expected. Insight is also provided into the selection of an appropriate bit interval. The expected bit error probability is derived as a function of the current and all previously transmitted bits. Simulation results show the accuracy of the bit error probability expression and the improvement in communication performance by having active enzymes present.",
"Diffusion-based communication refers to the transfer of information using molecules as message carriers whos ep rop- agation is governed by the laws of molecular diffusion. It has been identified that diffusion-based communication is one of the most promising solutions for end-to-end communication between nanoscale devices. In this paper, the design of a diffusion-based communication system considering stochastic signaling, arbitrary orders of channel memory, and noisy receptio ni s proposed. The diffusion in the cases of one, two, and three dimensions are all considered. Three signal processing techniques for the molecular concentration with low computational complexity are proposed. For the detector design, both a low-complexity one-shot optimal detector for mutual information maximization and a near Max- imum Likelihood (ML) sequence detector are proposed. To the best of our knowledge, our paper is thefirst that gives an analytical treatment of the signal processing, estimation, and detection prob- lems for diffusion-based communication in the presence of ISI and reception noise. Numerical results indicate that the proposed signal processing technique followed by the one-shot detector achieves near-optimal throughput without the need of ap riori information in both short-range and long-range diffusion-based communication scenarios, whic hs uggests an ML sequence de- tector is not necessary. Furthermore, the proposed receiver design guarantees diffusion-based communication to operate without failure even in the case of infinite channel memory. A channel capacity of 1 bit per channel utilization can be ultimately achieved by extending the duration of the signaling interval.",
"In a diffusion-based molecular communication network, transmitters and receivers communicate by using signalling molecules (or ligands) in a fluid medium. This paper assumes that the transmitter uses different chemical reactions to generate different emission patterns of signalling molecules to represent different transmission symbols, and the receiver consists of receptors. When the signalling molecules arrive at the receiver, they may react with the receptors to form ligand-receptor complexes. Our goal is to study the demodulation in this setup assuming that the transmitter and receiver are synchronised. We derive an optimal demodulator using the continuous history of the number of complexes at the receiver as the input to the demodulator. We do that by first deriving a communication model which includes the chemical reactions in the transmitter, diffusion in the transmission medium and the ligand-receptor process in the receiver. This model, which takes the form of a continuous-time Markov process, captures the noise in the receiver signal due to the stochastic nature of chemical reactions and diffusion. We then adopt a maximum a posteriori framework and use Bayesian filtering to derive the optimal demodulator. We use numerical examples to illustrate the properties of this optimal demodulator.",
"We consider a channel model based on the diffusion of particles in the medium which is motivated by the natural communication mechanisms between biological cells based on exchange of molecules. In this model, the transmitter secretes particles into the medium via a particle dissemination rate. The concentration of particles at any point in the medium is a function of its distance from the transmitter and the particle dissemination rate. The reception process is a doubly stochastic Poisson process whose rate is a function of the concentration of the particles in the vicinity of the receiver. We derive a closed-form for the mutual information between the input and output processes in this communication scenario and establish useful properties about the mutual information. We also provide a signaling strategy using which we derive a lower bound on the capacity of the diffusion channel with Poisson reception process under average and peak power constraints. Furthermore, it is shown that the capacity of discretized diffusion channel can be a negligible factor of the capacity of continuous time diffusion channel. Finally, the application of the considered model to the molecular communication systems is discussed."
]
} |
1610.09785 | 2545779952 | This paper considers a diffusion-based molecular communication system, where the transmitter uses reaction shift keying (RSK) as the modulation scheme. We focus on the demodulation of RSK signal at the receiver. The receiver consists of a front-end molecular circuit and a back-end demodulator. The front-end molecular circuit is a set of chemical reactions consisting of multiple chemical species. The optimal demodulator computes the posteriori probability of the transmitted symbols given the history of the observation. The derivation of the optimal demodulator requires the solution to a specific Bayesian filtering problem. The solution to this Bayesian filtering problem had been derived for a few specific molecular circuits and specific choice(s) of observed chemical species. The derivation of such solution is also lengthy. The key contribution of this paper is to present a general solution to this Bayesian filtering problem, which can be applied to any molecular circuit and any choice of observed species. | The demodulation of RSK signals has previously been considered in @cite_20 @cite_23. However, each of these works considers only a specific choice of molecular circuit, whereas the results of this paper are general: the algorithm can be applied to any receiver molecular circuit and to any choice of measurements. | {
"cite_N": [
"@cite_23",
"@cite_20"
],
"mid": [
"2053226033",
"2115084976"
],
"abstract": [
"Reaction shift keying (RSK) is a recently proposed modulation scheme for diffusion-based molecular communication networks. For RSK, the transmitter uses different chemical reactions to generate different emission patterns to represent different symbols. The receiver front end consists of a molecular circuit and the back end uses a bank of demodulation filter for detection. The aim of this paper is to study the impact of the choice of molecular circuits on communication performance. We consider two circuits: (1) ligand-receptor binding with feedback regulation; (2) ligand-receptor with multiple binding sites. For each circuit, we derive the maximum a posteriori demodulation filter by solving a Bayesian filtering problem. We show that appropriate amount of feedback can reduce symbol error rate. We also show how the choice of measurements in the multiple multiple binding site case can affect communication performance.",
"In a diffusion-based molecular communication network, transmitters and receivers communicate by using signalling molecules (or ligands) in a fluid medium. This paper assumes that the transmitter uses different chemical reactions to generate different emission patterns of signalling molecules to represent different transmission symbols, and the receiver consists of receptors. When the signalling molecules arrive at the receiver, they may react with the receptors to form ligand-receptor complexes. Our goal is to study the demodulation in this setup assuming that the transmitter and receiver are synchronised. We derive an optimal demodulator using the continuous history of the number of complexes at the receiver as the input to the demodulator. We do that by first deriving a communication model which includes the chemical reactions in the transmitter, diffusion in the transmission medium and the ligand-receptor process in the receiver. This model, which takes the form of a continuous-time Markov process, captures the noise in the receiver signal due to the stochastic nature of chemical reactions and diffusion. We then adopt a maximum a posteriori framework and use Bayesian filtering to derive the optimal demodulator. We use numerical examples to illustrate the properties of this optimal demodulator."
]
} |
1610.09785 | 2545779952 | This paper considers a diffusion-based molecular communication system, where the transmitter uses reaction shift keying (RSK) as the modulation scheme. We focus on the demodulation of the RSK signal at the receiver. The receiver consists of a front-end molecular circuit and a back-end demodulator. The front-end molecular circuit is a set of chemical reactions consisting of multiple chemical species. The optimal demodulator computes the a posteriori probability of the transmitted symbols given the history of the observation. The derivation of the optimal demodulator requires the solution to a specific Bayesian filtering problem. The solution to this Bayesian filtering problem has been derived for a few specific molecular circuits and specific choice(s) of observed chemical species. The derivation of such a solution is also lengthy. The key contribution of this paper is to present a general solution to this Bayesian filtering problem, which can be applied to any molecular circuit and any choice of observed species. | An alternative way of designing receivers for molecular communication is by using molecular circuits; see @cite_28 @cite_46 for example. Various aspects of receiver molecular circuits have been studied in the literature. We will discuss two aspects here: capacity and noise properties. The information transmission capacity of a number of types of linear receiver molecular circuits is compared in @cite_28 . The capacity analysis for molecular communication based on ligand-receptor binding has been presented in @cite_14 @cite_2 . The capacity of these systems in continuous time is presented in @cite_10 . The noise properties of ligand-receptor binding receivers are studied in @cite_7 @cite_32 . All the above papers assume that the receiver is a ligand-receptor binding process with only two reactions: binding and unbinding. However, in this paper, we propose a methodology that can be used for any molecular circuit. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_28",
"@cite_32",
"@cite_2",
"@cite_46",
"@cite_10"
],
"mid": [
"1968363642",
"2070597721",
"2003495286",
"2125863473",
"2150275923",
"2145401736",
"1996262444"
],
"abstract": [
"A diffusion-based molecular communication system has two major components: the diffusion in the medium, and the ligand-reception. Information bits, encoded in the time variations of the concentration of molecules, are conveyed to the receiver front through the molecular diffusion in the medium. The receiver, in turn, measures the concentration of the molecules in its vicinity in order to retrieve the information. This is done via the ligand-reception process. In this paper, we develop models to study the constraints imposed by the concentration sensing at the receiver side and derive the maximum rate at which a ligand-receiver can receive information. Therefore, the overall capacity of the diffusion channel with the ligand receptors can be obtained by combining the results presented in this paper with our previous work on the achievable information rate of molecular communication over the diffusion channel.",
"Molecular Communication (MC), i.e., the exchange of information through the emission, propagation, and reception of molecules, is a promising paradigm for the interconnection of autonomous nanoscale devices, known as nanomachines. Synthetic biology techniques, and in particular the engineering of biological circuits, are enabling research towards the programming of functions within biological cells, thus paving the way for the realization of biological nanomachines. The design of MC systems built upon biological circuits is particularly interesting since cells naturally employ the MC paradigm in their interactions, and possess many of the elements required to realize this type of communication. This paper focuses on the identification and systems-theoretic modeling of a minimal subset of biological circuit elements necessary to be included in an MC system design where the message-bearing molecules are propagated via free diffusion between two cells. The system-theoretic models are here detailed in terms of transfer functions, from which analytical expressions are derived for the attenuation and the delay experienced by an information signal through the MC system. Numerical results are presented to evaluate the attenuation and delay expressions as functions of realistic biological parameters.",
"We consider diffusion-based molecular communication networks where the receivers consist of a set of chemical reactions or a molecular circuit. At the receivers of these networks, the signalling molecules react with the molecular circuit to produce output molecules. The count of output molecules over time is the output signal of the receiver. The aim of this paper is to investigate the impact of different molecular circuits on the noise properties and information transmission capacity of molecular communication networks. In particular, we show that some molecular circuits have lower noise and higher information transmission capacity.",
"Molecular communication (MC) will enable the exchange of information among nanoscale devices. In this novel bio-inspired communication paradigm, molecules are employed to encode, transmit and receive information. In the most general case, these molecules are propagated in the medium by means of free diffusion. An information theoretical analysis of diffusion-based MC is required to better understand the potential of this novel communication mechanism. The study and the modeling of the noise sources is of utmost importance for this analysis. The objective of this paper is to provide a mathematical study of the noise at the reception of the molecular information in a diffusion-based MC system when the ligand-binding reception is employed. The reference diffusion-based MC system for this analysis is the physical end-to-end model introduced in a previous work by the same authors, where the reception process is realized through ligand-binding chemical receptors. The reception noise is modeled in this paper by following two different approaches, namely, through the ligand-receptor kinetics and through the stochastic chemical kinetics. The ligand-receptor kinetics allows to simulate the random perturbations in the chemical processes of the reception, while the stochastic chemical kinetics provides the tools to derive a closed-form solution to the modeling of the reception noise. The ligand-receptor kinetics model is expressed through a block scheme, while the stochastic chemical kinetics results in the characterization of the reception noise using stochastic differential equations. Numerical results are provided to demonstrate that the analytical formulation of the reception noise in terms of stochastic chemical kinetics is compliant with the reception noise behavior resulting from the ligand-receptor kinetics simulations.",
"In diffusion-based molecular communications, messages can be conveyed via the variation in the concentration of molecules in the medium. In this paper, we intend to analyze the achievable capacity in transmission of information from one node to another in a diffusion channel. We observe that because of the molecular diffusion in the medium, the channel possesses memory. We then model the memory of the channel by a two-step Markov chain and obtain the equations describing the capacity of the diffusion channel. By performing a numerical analysis, we obtain the maximum achievable rate for different levels of the transmitter power, i.e., the molecule production rate.",
"Molecular communication networks can be used to realise communication between nanoscale devices. In a molecular communication network, transmitters and receivers communicate by using signalling molecules. At the receivers, the signalling molecules react, via a chain of chemical reactions, to produce output molecules. The count of output molecules over time is the output signal of the receiver. The output signal is noisy due to the stochastic nature of diffusion and chemical reactions. This paper aims to characterise the properties of the output signal. We do this by modelling the transmission medium, transmitter and receiver. In order to simplify the analysis, we model the transmitter as a sequence which specifies the number of molecules emitted by the transmitter over time. This paper considers two receiver reaction mechanisms, reversible conversion and linear catalytic, which can be used to approximate, respectively, ligand-receptor binding and enzymatic reactions. These two mechanisms are chosen because, if we consider them on their own (i.e. without the transmitter and diffusion), the ordinary differential equations describing the mean behaviour of these two reaction mechanisms have the same form; however, if we consider the end-to-end behaviour from the transmitter signal to the mean and variance of the number of output molecules, then these two receiver reaction mechanisms have very different behaviours. We show this by deriving analytical expressions for the mean, variance and frequency properties of the number of output molecules of these two receiver reaction mechanisms. In addition, for reversible conversion, we are able to derive the exact probability distribution of the number of output molecules. Our model allows us to study the impact of design parameters on the communication performance. For example, we assume that our receiver is enclosed by a membrane and we study the impact of the diffusibility of molecules across this membrane on the communication performance.",
"We model the ligand-receptor molecular communication channel with a discrete-time Markov model, and show how to obtain the capacity of this channel. We show that the capacity-achieving input distribution is iid; further, unusually for a channel with memory, we show that feedback does not increase the capacity of this channel."
]
} |
1610.09785 | 2545779952 | This paper considers a diffusion-based molecular communication system, where the transmitter uses reaction shift keying (RSK) as the modulation scheme. We focus on the demodulation of the RSK signal at the receiver. The receiver consists of a front-end molecular circuit and a back-end demodulator. The front-end molecular circuit is a set of chemical reactions consisting of multiple chemical species. The optimal demodulator computes the a posteriori probability of the transmitted symbols given the history of the observation. The derivation of the optimal demodulator requires the solution to a specific Bayesian filtering problem. The solution to this Bayesian filtering problem has been derived for a few specific molecular circuits and specific choice(s) of observed chemical species. The derivation of such a solution is also lengthy. The key contribution of this paper is to present a general solution to this Bayesian filtering problem, which can be applied to any molecular circuit and any choice of observed species. | Different models have been used in the molecular communication literature to model the transmission medium. The papers @cite_8 @cite_16 assume that the medium is continuous, while in this paper, as well as in our previous work @cite_20 @cite_23 , we assume that the medium is divided into cubic voxels. The use of voxels allows us to model the end-to-end communication system using the reaction-diffusion master equation (RDME) @cite_6 @cite_5 @cite_20 , which is a continuous-time Markov process (CTMP). An alternative end-to-end model appears in @cite_32 @cite_4 , which is based on particle tracking. An advantage of the RDME approach is that we can use the Markovian properties to analyse molecular communication @cite_36 @cite_20 . | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_36",
"@cite_32",
"@cite_6",
"@cite_23",
"@cite_5",
"@cite_16",
"@cite_20"
],
"mid": [
"2171125625",
"2148147164",
"2093072521",
"2125863473",
"1967221240",
"2053226033",
"2046221159",
"2963922654",
"2115084976"
],
"abstract": [
"Molecular communication (MC) is a promising bio-inspired paradigm, in which molecules are used to encode, transmit and receive information at the nanoscale. Very limited research has addressed the problem of modeling and analyzing the MC in nanonetworks. One of the main challenges in MC is the proper study and characterization of the noise sources. The objective of this paper is the analysis of the noise sources in diffusion-based MC using tools from signal processing, statistics and communication engineering. The reference diffusion-based MC system for this analysis is the physical end-to-end model introduced in a previous work by the same authors. The particle sampling noise and the particle counting noise are analyzed as the most relevant diffusion-based noise sources. The analysis of each noise source results in two types of models, namely, the physical model and the stochastic model. The physical model mathematically expresses the processes underlying the physics of the noise source. The stochastic model captures the noise source behavior through statistical parameters. The physical model results in block schemes, while the stochastic model results in the characterization of the noises using random processes. Simulations are conducted to evaluate the capability of the stochastic model to express the diffusion-based noise sources represented by the physical model.",
"In this paper we have investigated into a multi-level amplitude modulation (M-AM) scheme for concentration-encoded unicast molecular communication between a transmitting nanomachine (TN) and a receiving nanomachine (RN) in a nanonetwork. The performance of M-AM scheme has been evaluated on the basis of signal strength in the form of average loss of concentration of received molecules at the location of RN, and interference strength in the form of concentration of undesired molecules that originate at previous symbol but provide additional concentration of molecules over the transmission rate, and interfere with the detection of current symbol. In order to evaluate signal strength and interference strength characteristics two performance metrics have been proposed and explained by observing their behavior as a function of communication range and number of amplitude levels. In addition, results of M-AM scheme have been compared with conventional binary scheme in order to show the corresponding improvement and or drawbacks when a random sequence of bits is transmitted.",
"Molecular communication is a promising approach to realize the communication between nanoscale devices. In a diffusion-based molecular communication network, transmitters and receivers communicate by using signalling molecules. The transmitter uses different time-varying functions of concentration of signalling molecules (called emission patterns) to represent different transmission symbols. The signalling molecules diffuse freely in the medium. The receiver is assumed to consist of a number of receptors, which can be in ON or OFF state. When the signalling molecules arrive at the receiver, they react with the receptors and switch them from OFF to ON state probabilistically. The receptors remain ON for a random amount of time before reverting to the OFF state. This paper assumes that the receiver uses the continuous history of receptor state to infer the transmitted symbol. Furthermore, it assumes that the transmitter uses two transmission symbols and approaches the decoding problem from the maximum a posteriori (MAP) framework. Specifically, the decoding is realized by calculating the logarithm of the ratio of the posteriori probabilities of the two transmission symbols, or log-MAP ratio. A contribution of this paper is to show that the computation of log-MAP ratio can be performed by an analog filter. The receiver can therefore use the output of this filter to decide which symbol has been sent. This analog filter provides insight on what information is important for decoding. In particular, the timing at which the receptors switch from OFF to ON state, the number of OFF receptors and the mean number of signalling molecules at the receiver are important. Numerical examples are used to illustrate the property of this decoding method.",
"Molecular communication (MC) will enable the exchange of information among nanoscale devices. In this novel bio-inspired communication paradigm, molecules are employed to encode, transmit and receive information. In the most general case, these molecules are propagated in the medium by means of free diffusion. An information theoretical analysis of diffusion-based MC is required to better understand the potential of this novel communication mechanism. The study and the modeling of the noise sources is of utmost importance for this analysis. The objective of this paper is to provide a mathematical study of the noise at the reception of the molecular information in a diffusion-based MC system when the ligand-binding reception is employed. The reference diffusion-based MC system for this analysis is the physical end-to-end model introduced in a previous work by the same authors, where the reception process is realized through ligand-binding chemical receptors. The reception noise is modeled in this paper by following two different approaches, namely, through the ligand-receptor kinetics and through the stochastic chemical kinetics. The ligand-receptor kinetics allows to simulate the random perturbations in the chemical processes of the reception, while the stochastic chemical kinetics provides the tools to derive a closed-form solution to the modeling of the reception noise. The ligand-receptor kinetics model is expressed through a block scheme, while the stochastic chemical kinetics results in the characterization of the reception noise using stochastic differential equations. Numerical results are provided to demonstrate that the analytical formulation of the reception noise in terms of stochastic chemical kinetics is compliant with the reception noise behavior resulting from the ligand-receptor kinetics simulations.",
"We consider molecular communication networks consisting of transmitters and receivers distributed in a fluidic medium. In such networks, a transmitter sends one or more signaling molecules, which are diffused over the medium, to the receiver to realize the communication. In order to be able to engineer synthetic molecular communication networks, mathematical models for these networks are required. This paper proposes a new stochastic model for molecular communication networks called reaction-diffusion master equation with exogenous input (RDMEX). The key idea behind RDMEX is to model the transmitters as time series of signaling molecule counts, while diffusion in the medium and chemical reactions at the receivers are modeled as Markov processes using master equation. An advantage of RDMEX is that it can readily be used to model molecular communication networks with multiple transmitters and receivers. For the case where the reaction kinetics at the receivers is linear, we show how RDMEX can be used to determine the mean and covariance of the receiver output signals, and derive closed-form expressions for the mean receiver output signal of the RDMEX model. These closed-form expressions reveal that the output signal of a receiver can be affected by the presence of other receivers. Numerical examples are provided to demonstrate the properties of the model.",
"Reaction shift keying (RSK) is a recently proposed modulation scheme for diffusion-based molecular communication networks. For RSK, the transmitter uses different chemical reactions to generate different emission patterns to represent different symbols. The receiver front end consists of a molecular circuit and the back end uses a bank of demodulation filters for detection. The aim of this paper is to study the impact of the choice of molecular circuits on communication performance. We consider two circuits: (1) ligand-receptor binding with feedback regulation; (2) ligand-receptor with multiple binding sites. For each circuit, we derive the maximum a posteriori demodulation filter by solving a Bayesian filtering problem. We show that an appropriate amount of feedback can reduce the symbol error rate. We also show how the choice of measurements in the multiple binding site case can affect communication performance.",
"Abstract Molecular communication networks consist of transmitters and receivers distributed in a fluid medium. The communication in these networks is realised by the transmitters emitting signalling molecules, which are diffused in the medium to reach the receivers. This paper investigates the properties of noise, or the variance of the receiver output, in molecular communication networks. The noise in these networks come from multiple sources: stochastic emission of signalling molecules by the transmitters, diffusion in the fluid medium and stochastic reaction kinetics at the receivers. We model these stochastic fluctuations by using an extension of the master equation. We show that, under certain conditions, the receiver outputs of linear molecular communication networks are Poisson distributed. The derivation also shows that noise in these networks is a nonlinear function of the network parameters and is non-additive. Numerical examples are provided to illustrate the properties of this type of Poisson channels.",
"In this paper, a diffusion-based molecular communication channel between two nano-machines is considered. The effect of the amount of memory on performance is characterized, and a simple memory-limited decoder is proposed; its performance is shown to be close to that of the best possible decoder (without any restrictions on the computational complexity or its functional form), using genie-aided upper bounds. This effect is adapted to the case of Molecular Concentration Shift Keying; it is shown that a four-bit memory achieves nearly the same performance as infinite memory for all of the examples considered. A general class of threshold decoders is considered and shown to be suboptimal for a Poisson channel with memory, unless the SNR is higher than a computed threshold. During each symbol duration (symbol period), the probability that a released molecule hits the receiver changes over the duration of the period; thus, we also consider a receiver that samples at a rate higher than the transmission rate (a multi-read system). A multi-read system improves performance. The associated decision rule for this system is shown to be a weighted sum of the samples during each symbol interval. The performance of the system is analyzed using the saddle point approximation. The best performance gains are achieved for an oversampling factor of three for the examples considered.",
"In a diffusion-based molecular communication network, transmitters and receivers communicate by using signalling molecules (or ligands) in a fluid medium. This paper assumes that the transmitter uses different chemical reactions to generate different emission patterns of signalling molecules to represent different transmission symbols, and the receiver consists of receptors. When the signalling molecules arrive at the receiver, they may react with the receptors to form ligand-receptor complexes. Our goal is to study the demodulation in this setup assuming that the transmitter and receiver are synchronised. We derive an optimal demodulator using the continuous history of the number of complexes at the receiver as the input to the demodulator. We do that by first deriving a communication model which includes the chemical reactions in the transmitter, diffusion in the transmission medium and the ligand-receptor process in the receiver. This model, which takes the form of a continuous-time Markov process, captures the noise in the receiver signal due to the stochastic nature of chemical reactions and diffusion. We then adopt a maximum a posteriori framework and use Bayesian filtering to derive the optimal demodulator. We use numerical examples to illustrate the properties of this optimal demodulator."
]
} |
1610.09722 | 2548952790 | In order to extract event information from text, a machine reading model must learn to accurately read and interpret the ways in which that information is expressed. But it must also, as the human reader must, aggregate numerous individual value hypotheses into a single coherent global analysis, applying global constraints which reflect prior knowledge of the domain. In this work we focus on the task of extracting plane crash event information from clusters of related news articles whose labels are derived via distant supervision. Unlike previous machine reading work, we assume that while most target values will occur frequently in most clusters, they may also be missing or incorrect. We introduce a novel neural architecture to explicitly model the noisy nature of the data and to deal with these aforementioned learning issues. Our models are trained end-to-end and achieve an improvement of more than 12.1 F @math over previous work, despite using far less linguistic annotation. We apply factor graph constraints to promote more coherent event analyses, with belief propagation inference formulated within the transitions of a recurrent neural network. We show this technique additionally improves maximum F @math by up to 2.8 points, resulting in a relative improvement of @math over the previous state-of-the-art. | In terms of reading methodology, our scoring method is a slot-specific interpretation of the attentive reader @cite_19 , and our sum aggregation is closely related to , with differences described previously in Sec. . A similar method is found in the entailment model of , where alignment scores (between a premise and a hypothesis) are generated via attention and summed. Recent machine reading models have used iterative attention to refine model predictions @cite_0 . Such methods play a role similar to our factor graph constraint, though they incorporate no prior knowledge. | {
"cite_N": [
"@cite_0",
"@cite_19"
],
"mid": [
"2417356443",
"2949615363"
],
"abstract": [
"Described herein are systems and methods for providing a natural language comprehension system (NLCS) that iteratively performs an alternating search to gather information that may be used to predict the answer to the question. The NLCS first attends to a query glimpse of the question, and then finds one or more corresponding matches by attending to a text glimpse of the text.",
"Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions posed on the contents of documents that they have seen, but until now large scale training and test datasets have been missing for this type of evaluation. In this work we define a new methodology that resolves this bottleneck and provides large scale supervised reading comprehension data. This allows us to develop a class of attention based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure."
]
} |
1610.09204 | 2541811364 | Book covers communicate information to potential readers, but can that same information be learned by computers? We propose using a deep Convolutional Neural Network (CNN) to predict the genre of a book based on the visual clues provided by its cover. The purpose of this research is to investigate whether relationships between books and their covers can be learned. However, determining the genre of a book is a difficult task because covers can be ambiguous and genres can be overarching. Despite this, we show that a CNN can extract features and learn underlying design rules set by the designer to define a genre. Using machine learning, we can bring the large amount of resources available to the book cover design process. In addition, we present a new challenging dataset that can be used for many pattern recognition tasks. | In the field of genre classification, there have been attempts to classify music by genre @cite_25 @cite_13 @cite_23 . Genre classification has also been applied to paintings @cite_17 @cite_21 and text @cite_14 @cite_24 . However, most of these methods use designed features or features specific to the task. In a more general sense, document classification tackles a similar problem in that it classifies documents into architectural categories. In particular, deep CNNs have been successful in document classification @cite_3 @cite_16 . @cite_1 used region-based CNNs to guide document classification. | {
"cite_N": [
"@cite_14",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_24",
"@cite_23",
"@cite_16",
"@cite_13",
"@cite_25",
"@cite_17"
],
"mid": [
"2071238250",
"2166242527",
"1523493493",
"",
"",
"1861596447",
"",
"1801907668",
"",
"2128901179"
],
"abstract": [
"Current document-retrieval tools succeed in locating large numbers of documents relevant to a given query. While search results may be relevant according to the topic of the documents, it is more difficult to identify which of the relevant documents are most suitable for a particular user. Automatic genre analysis (i.e., the ability to distinguish documents according to style) would be a useful tool for identifying documents that are most suitable for a particular user. We investigate the use of machine learning for automatic genre classification. We introduce the idea of domain transfer—genre classifiers should be reusable across multiple topics—which does not arise in standard text classification. We investigate different features for building genre classifiers and their ability to transfer across multiple-topic domains. We also show how different feature-sets can be used in conjunction with each other to improve performance and reduce the number of documents that need to be labeled.",
"The style of an image plays a significant role in how it is viewed, but style has received little attention in computer vision research. We describe an approach to predicting style of images, and perform a thorough evaluation of different image features for these tasks. We find that features learned in a multi-layer network generally perform best -- even when trained with object class (not style) labels. Our large-scale learning methods result in the best published performance on an existing dataset of aesthetic ratings and photographic style annotations. We present two novel datasets: 80K Flickr photographs annotated with 20 curated style labels, and 85K paintings annotated with 25 style genre labels. Our approach shows excellent classification performance on both datasets. We use the learned classifiers to extend traditional tag-based image search to consider stylistic constraints, and demonstrate cross-dataset understanding of style.",
"A common practice to gain invariant features in object recognition models is to aggregate multiple low-level features over a small neighborhood. However, the differences between those models makes a comparison of the properties of different aggregation functions hard. Our aim is to gain insight into different functions by directly comparing them on a fixed architecture for several common object recognition tasks. Empirical results show that a maximum pooling operation significantly outperforms subsampling operations. Despite their shift-invariant properties, overlapping pooling windows are no significant improvement over nonoverlapping pooling windows. By applying this knowledge, we achieve state-of-the-art error rates of 4.57 on the NORB normalized-uniform dataset and 5.6 on the NORB jittered-cluttered dataset.",
"",
"",
"The literature on content-based music retrieval has largely finessed acoustic issues by using MIDI format music. This paper however considers content-based classification and retrieval of a typical (MPEG layer III) digital music archive. Two statistical techniques are investigated and appraised. Gaussian mixture modelling performs well with an accuracy of 92 on a music classification task. A tree-based vector quantization scheme offers marginally worse performance in a faster, scalable framework. Good results are also reported for music retrieval-by-similarity using the same techniques. Mel-frequency cepstral coefficients parameterize the audio well, though are slow to compute from the compressed domain. A new parameterization (MP3CEP), based on a partial decompression of MPEG layer III audio, is therefore proposed to facilitate music processing at user-interactive speeds. Overall, the techniques described provide useful tools in the management of a typical digital music library.",
"",
"This paper presents a system that extracts 109 musical features from symbolic recordings (MIDI, in this case) and uses them to classify the recordings by genre. The features used here are based on instrumentation, texture, rhythm, dynamics, pitch statistics, melody and chords. The classification is performed hierarchically using different sets of features at different levels of the hierarchy. Which features are used at each level, and their relative weightings, are determined using genetic algorithms. Classification is performed using a novel ensemble of feedforward neural networks and k-nearest neighbour classifiers. Arguments are presented emphasizing the importance of using high-level musical features, something that has been largely neglected in automatic classification systems to date in favour of low-level features. The effect on classification performance of varying the number of candidate features is examined in order to empirically demonstrate the importance of using a large variety of musically meaningful features. Two differently sized hierarchies are used in order to test the performance of the system under different conditions. Very encouraging classification success rates of 98% for root genres and 90% for leaf genres are obtained for a hierarchical taxonomy consisting of 9 leaf genres.",
"",
"This paper describes an approach to automatically classify digital pictures of paintings by artistic genre. While the task of artistic classification is often entrusted to human experts, recent advances in machine learning and multimedia feature extraction have made this task easier to automate. Automatic classification is useful for organizing large digital collections, for automatic artistic recommendation, and even for mobile capture and identification by consumers. Our evaluation uses variable-resolution painting data gathered across Internet sources rather than solely using professional high-resolution data. Consequently, we believe this solution better addresses the task of classifying consumer-quality digital captures than other existing approaches. We include a comparison to existing feature extraction and classification methods as well as an analysis of our own approach across classifiers and feature vectors."
]
} |
1610.09112 | 2545849206 | We consider the problem of decentralized clustering and estimation over multitask networks, where agents infer and track different models of interest. The agents do not know beforehand which model is generating their own data. They also do not know which agents in their neighborhood belong to the same cluster. We propose a decentralized clustering algorithm aimed at identifying and forming clusters of agents of similar objectives, and at guiding cooperation to enhance the inference performance. One key feature of the proposed technique is the integration of the learning and clustering tasks into a single strategy. We analyze the performance of the procedure and show that the error probabilities of types I and II decay exponentially to zero with the step-size parameter. While links between agents following different objectives are ignored in the clustering process, we nevertheless show how to exploit these links to relay critical information across the network for enhanced performance. Simulation results illustrate the performance of the proposed method in comparison to other useful techniques. | Distributed learning is a powerful technique for extracting information from networked agents (see, e.g., @cite_18 @cite_1 @cite_20 @cite_12 @cite_22 @cite_15 @cite_2 and the references therein). In this work, we consider a network of agents connected by a graph. Each agent senses data generated by some unknown model. It is assumed that there are clusters of agents within the network, where agents in the same cluster observe data arising from the same model. | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_1",
"@cite_2",
"@cite_15",
"@cite_20",
"@cite_12"
],
"mid": [
"",
"2169818402",
"2012445782",
"2330981327",
"2044212084",
"2949243244",
"2166935429"
],
"abstract": [
"",
"In this paper, the problem of adaptive distributed learning in diffusion networks is considered. The algorithms are developed within the convex set theoretic framework. More specifically, they are based on computationally simple geometric projections onto closed convex sets. The paper suggests a novel combine-project-adapt protocol for cooperation among the nodes of the network; such a protocol fits naturally with the philosophy that underlies the projection-based rationale. Moreover, the possibility that some of the nodes may fail is also considered and it is addressed by employing robust statistics loss functions. Such loss functions can easily be accommodated in the adopted algorithmic framework; all that is required from a loss function is convexity. Under some mild assumptions, the proposed algorithms enjoy monotonicity, asymptotic optimality, asymptotic consensus, strong convergence and linear complexity with respect to the number of unknown parameters. Finally, experiments in the context of the system-identification task verify the validity of the proposed algorithmic schemes, which are compared to other recent algorithms that have been developed for adaptive distributed learning.",
"This work deals with the topic of information processing over graphs. The presentation is largely self-contained and covers results that relate to the analysis and design of multi-agent networks for the distributed solution of optimization, adaptation, and learning problems from streaming data through localized interactions among agents. The results derived in this work are useful in comparing network topologies against each other, and in comparing networked solutions against centralized or batch implementations. There are many good reasons for the peaked interest in distributed implementations, especially in this day and age when the word \"network\" has become commonplace whether one is referring to social networks, power networks, transportation networks, biological networks, or other types of networks. Some of these reasons have to do with the benefits of cooperation in terms of improved performance and improved resilience to failure. Other reasons deal with privacy and secrecy considerations where agents may not be comfortable sharing their data with remote fusion centers. In other situations, the data may already be available in dispersed locations, as happens with cloud computing. One may also be interested in learning through data mining from big data sets. Motivated by these considerations, this work examines the limits of performance of distributed solutions and discusses procedures that help bring forth their potential more fully. The presentation adopts a useful statistical framework and derives performance results that elucidate the mean-square stability, convergence, and steady-state behavior of the learning networks. At the same time, the work illustrates how distributed processing over graphs gives rise to some revealing phenomena due to the coupling effect among the agents. These phenomena are discussed in the context of adaptive networks, along with examples from a variety of areas including distributed sensing, intrusion detection, distributed estimation, online adaptation, network system theory, and machine learning.",
"The popular least-mean-squares (LMS) algorithm for adaptive filtering is nonrobust against impulsive noise in the measurements. The presence of this type of noise degrades the transient and steady-state performance of the algorithm. Since the distribution of the impulsive noise is generally unknown, a robust semi-parametric approach to adaptive filtering is warranted, where the output error nonlinearity is adapted jointly with the parameter of interest. In this paper, a robust adaptive filtering algorithm is developed that effectively learns and tracks the output error distribution to improve estimation performance. The performance of the algorithm is analyzed mathematically and validated experimentally.",
"We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimizing his/her own objective function while exchanging information locally with other agents in the network over a time-varying topology. We provide convergence results and convergence rate estimates for the subgradient method. Our convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.",
"The problem of distributed or decentralized detection and estimation in applications such as wireless sensor networks has often been considered in the framework of parametric models, in which strong assumptions are made about a statistical description of nature. In certain applications, such assumptions are warranted and systems designed from these models show promise. However, in other scenarios, prior knowledge is at best vague and translating such knowledge into a statistical model is undesirable. Applications such as these pave the way for a nonparametric study of distributed detection and estimation. In this paper, we review recent work of the authors in which some elementary models for distributed learning are considered. These models are in the spirit of classical work in nonparametric statistics and are applicable to wireless sensor networks.",
"We introduce a distributed adaptive algorithm for linear minimum mean squared error (MMSE) estimation of node-specific signals in a fully connected broadcasting sensor network where the nodes collect multichannel sensor signal observations. We assume that the node-specific signals to be estimated share a common latent signal subspace with a dimension that is small compared to the number of available sensor channels at each node. In this case, the algorithm can significantly reduce the required communication bandwidth and still provide the same optimal linear MMSE estimators as the centralized case. Furthermore, the computational load at each node is smaller than in a centralized architecture in which all computations are performed in a single fusion center. We consider the case where nodes update their parameters in a sequential round robin fashion. Numerical simulations support the theoretical results. Because of its adaptive nature, the algorithm is suited for real-time signal estimation in dynamic environments, such as speech enhancement with acoustic sensor networks."
]
} |
1610.09112 | 2545849206 | We consider the problem of decentralized clustering and estimation over multitask networks, where agents infer and track different models of interest. The agents do not know beforehand which model is generating their own data. They also do not know which agents in their neighborhood belong to the same cluster. We propose a decentralized clustering algorithm aimed at identifying and forming clusters of agents of similar objectives, and at guiding cooperation to enhance the inference performance. One key feature of the proposed technique is the integration of the learning and clustering tasks into a single strategy. We analyze the performance of the procedure and show that the error probabilities of types I and II decay exponentially to zero with the step-size parameter. While links between agents following different objectives are ignored in the clustering process, we nevertheless show how to exploit these links to relay critical information across the network for enhanced performance. Simulation results illustrate the performance of the proposed method in comparison to other useful techniques. | However, the agents do not know which model is generating their own data. They also do not know which agents in their neighborhood belong to the same cluster. Scenarios of this type arise, for example, in tracking applications when a collection of networked agents is tasked with tracking several moving objects @cite_26 @cite_8 @cite_25 . Clusters end up being formed within the network with different clusters following different targets. The quality of the tracking estimation performance will be improved if neighboring agents following the same target know of each other to promote cooperation. It is not only cooperation within clusters that is useful, but also cooperation across clusters, especially when targets move in formation and the locations of the targets are correlated. 
Motivated by these considerations, the main objective of this work is to develop a distributed technique that enables agents to recognize neighbors from the same cluster and promotes cooperation for improved inference performance. | {
"cite_N": [
"@cite_26",
"@cite_25",
"@cite_8"
],
"mid": [
"2170190949",
"1482016817",
"2117189568"
],
"abstract": [
"In this article, a survey of techniques for tracking multiple targets in distributed sensor networks is provided, and some recent developments are introduced. Single-target tracking in distributed sensor networks is reviewed. The tracking and resource management issues can be readily extended to MTT. The MTT problem is also briefly reviewed, and the traditional approaches in centralized systems are described. The article then focuses on MTT in resource-constrained sensor networks and presents two distinct example methods demonstrating how limited resources can be utilized in MTT applications. Finally, the most important remaining problems are discussed and future directions are suggested",
"We examine the design of self-organizing mobile adaptive networks with multiple targets in which the network nodes form distinct clusters to learn about and pursue multiple targets, all while moving in a cohesive collision-free manner. We build upon previous distributed diffusion-based adaptive learning networks that focused on a single target to examine the case with multiple targets in which the nodes do not know the number of targets, and exchange local information with their neighbors in their learning objectives. In particular, we design a method allowing the nodes to switch the target they are tracking thereby engendering the formation of distinct stable learning groups that can split up and pursue their distinct targets over time. We provide analytical mean stability and steady state mean-square deviation results along with simulations that demonstrate the efficacy of the proposed method.",
"We propose the adaptive control and reconfiguration schemes for mobile wireless sensor networks (MWSN) to achieve timely and accurate mobile multi-target tracking (MMTT) with cost-effective energy consumption. In particular, our proposed schemes can detect the mobile multi-targets' random appearance and disappearance in the clutter environments with high accuracy and low energy cost. We develop the optimal mutual-information based techniques to adaptively control the reconfiguration of the proposed MWSN by designing the Distributed Decentralized Probability Hypothesis Density (DPHD) filtering algorithms. By dynamically adjusting the sensors' states, including their positions and activations, our schemes can efficiently improve the observabilities of the tracked multi-targets. We further analyze the asymptotic performance of our proposed schemes by deriving the upper-bounds of the detection-error probabilities. Also presented are the performance analyses which validate and evaluate our proposed adaptive control and reconfiguration schemes for MWSN in terms of the multi-target states estimation accuracy, the energy-consumption efficiency, and the robustness to the interference noise."
]
} |
1610.09112 | 2545849206 | We consider the problem of decentralized clustering and estimation over multitask networks, where agents infer and track different models of interest. The agents do not know beforehand which model is generating their own data. They also do not know which agents in their neighborhood belong to the same cluster. We propose a decentralized clustering algorithm aimed at identifying and forming clusters of agents of similar objectives, and at guiding cooperation to enhance the inference performance. One key feature of the proposed technique is the integration of the learning and clustering tasks into a single strategy. We analyze the performance of the procedure and show that the error probabilities of types I and II decay exponentially to zero with the step-size parameter. While links between agents following different objectives are ignored in the clustering process, we nevertheless show how to exploit these links to relay critical information across the network for enhanced performance. Simulation results illustrate the performance of the proposed method in comparison to other useful techniques. | Still, it is preferable to merge the clustering and learning mechanisms rather than have them run separately from each other. Doing so reduces the computational burden and, if successful, can also lead to enhancement in clustering accuracy relative to the earlier approaches @cite_11 @cite_3 @cite_10 . We showed in preliminary work @cite_0 that this is indeed possible for a particular class of inference problems involving mean-square-error risks. In this work, we generalize the results and devise an integrated clustering-learning approach for general-purpose risk functions. Additionally, and motivated by the results from @cite_7 on adaptive decision-making by networked agents, we further incorporate a smoothing mechanism into our strategy to enhance the belief that agents have about their clusters. 
We also show how to exploit the unused links among neighboring agents belonging to different clusters to relay useful information among agents. We carry out a detailed analysis of the resulting framework, and illustrate its superior performance by means of computer simulations. | {
"cite_N": [
"@cite_7",
"@cite_3",
"@cite_0",
"@cite_10",
"@cite_11"
],
"mid": [
"2112603423",
"2093795816",
"2211210082",
"2212984315",
"2024747714"
],
"abstract": [
"In distributed processing, agents generally collect data generated by the same underlying unknown model (represented by a vector of parameters) and then solve an estimation or inference task cooperatively. In this paper, we consider the situation in which the data observed by the agents may have risen from two different models. Agents do not know beforehand which model accounts for their data and the data of their neighbors. The objective for the network is for all agents to reach agreement on which model to track and to estimate this model cooperatively. In these situations, where agents are subject to data from unknown different sources, conventional distributed estimation strategies would lead to biased estimates relative to any of the underlying models. We first show how to modify existing strategies to guarantee unbiasedness. We then develop a classification scheme for the agents to identify the models that generated the data, and propose a procedure by which the entire network can be made to converge towards the same model through a collaborative decision-making process. The resulting algorithm is applied to model fish foraging behavior in the presence of two food sources.",
"The diffusion LMS algorithm has been extensively studied in recent years. This efficient strategy allows one to address distributed optimization problems over networks in the case where nodes have to collaboratively estimate a single parameter vector. Nevertheless, there are several problems in practice that are multitask-oriented in the sense that the optimum parameter vector may not be the same for every node. This brings up the issue of studying the performance of the diffusion LMS algorithm when it is run, either intentionally or unintentionally, in a multitask environment. In this paper, we conduct a theoretical analysis on the stochastic behavior of diffusion LMS in the case where the single-task hypothesis is violated. We analyze the competing factors that influence the performance of diffusion LMS in the multitask environment, and which allow the algorithm to continue to deliver performance superior to non-cooperative strategies in some useful circumstances. We also propose an unsupervised clustering strategy that allows each node to select, via adaptive adjustments of combination weights, the neighboring nodes with which it can collaborate to estimate a common parameter vector. Simulations are presented to illustrate the theoretical results, and to demonstrate the efficiency of the proposed clustering strategy.",
"Cooperation among agents across the network leads to better estimation accuracy. However, in many network applications the agents infer and track different models of interest in an environment where agents do not know beforehand which models are being observed by their neighbors. In this work, we propose an adaptive and distributed clustering technique that allows agents to learn and form clusters from streaming data in a robust manner. Once clusters are formed, cooperation among agents with similar objectives then enhances the performance of the inference task. The performance of the proposed clustering algorithm is discussed by commenting on the behavior of probabilities of erroneous decision. We validate the performance of the algorithm by numerical simulations, which show how the clustering process enhances the mean-square-error performance of the agents across the network.",
"Diffusion LMS was originally conceived for online distributed parameter estimation in single-task environments where agents pursue a common objective. However, estimating distinct but correlated objects (multitask problems) is useful in many applications. To address multitask problems with combine-then-adapt diffusion LMS strategies, we derive an unsupervised strategy that allows each node to continuously select the neighboring nodes with which it should exchange information to improve its estimation accuracy. Simulation experiments illustrate the efficiency of this clustering strategy. In particular, nodes do not know which other nodes share similar objectives.",
"Distributed processing over networks relies on in-network processing and cooperation among neighboring agents. Cooperation is beneficial when all agents share the same objective or belong to the same group. However, if agents belong to different clusters or are interested in different objectives, then cooperation can be damaging. In this work, we devise an adaptive combination rule that allows agents to learn which neighbors belong to the same cluster and which other neighbors should be ignored. In doing so, the resulting algorithm enables the agents to identify their grouping and to attain improved learning and estimation performance over networks."
]
} |
1610.09077 | 2546193274 | The research of personalized recommendation techniques today has mostly parted into two mainstream directions, i.e., the factorization-based approaches and topic models. Practically, they aim to benefit from the numerical ratings and textual reviews, correspondingly, which compose two major information sources in various real-world systems. However, although the two approaches are supposed to be correlated for their same goal of accurate recommendation, there still lacks a clear theoretical understanding of how their objective functions can be mathematically bridged to leverage the numerical ratings and textual reviews collectively, and why such a bridge is intuitively reasonable to match up their learning procedures for the rating prediction and top-N recommendation tasks, respectively. In this work, we exposit with mathematical analysis that, the vector-level randomization functions to coordinate the optimization objectives of factorizational and topic models unfortunately do not exist at all, although they are usually pre-assumed and intuitively designed in the literature. Fortunately, we also point out that one can avoid the seeking of such a randomization function by optimizing a Joint Factorizational Topic (JFT) model directly. We apply our JFT model to restaurant recommendation, and study its performance in both normal and cross-city recommendation scenarios, where the latter is an extremely difficult task for its inherent cold-start nature. Experimental results on real-world datasets verified the appealing performance of our approach against previous methods, on both rating prediction and top-N recommendation tasks. | To alleviate these problems, researchers have been investigating the incorporation of textual reviews for recommendation @cite_33 @cite_20 @cite_7 @cite_45 , which is another important information source beyond the star ratings in many systems. 
Early approaches rely on manually extracted item aspects from reviews for more informed recommendation @cite_48 @cite_17 and rating prediction @cite_31 @cite_26 , which improved the performance but also required extensive human participation. As a result, researchers recently have begun to investigate the possibility of integrating the automatic topic modeling techniques on textual reviews and the latent factor modeling approach on numerical ratings for boosted recommendation, and have achieved appealing results @cite_13 @cite_3 @cite_28 @cite_6 @cite_39 . | {
"cite_N": [
"@cite_26",
"@cite_33",
"@cite_7",
"@cite_28",
"@cite_48",
"@cite_3",
"@cite_6",
"@cite_39",
"@cite_45",
"@cite_31",
"@cite_13",
"@cite_20",
"@cite_17"
],
"mid": [
"",
"2046216022",
"2001892351",
"",
"2045471213",
"",
"",
"",
"2251814753",
"2166956738",
"",
"",
"2088435121"
],
"abstract": [
"",
"Previous research on Recommender Systems (RS), especially the continuously popular approach of Collaborative Filtering (CF), has been mostly focusing on the information resource of explicit user numerical ratings or implicit (still numerical) feedbacks. However, the ever-growing availability of textual user reviews has become an important information resource, where a wealth of explicit product attributes/features and user attitudes/sentiments are expressed therein. This information-rich resource of textual reviews has clearly exhibited brand-new approaches to solving many of the important problems that have been perplexing the research community for years, such as the paradox of cold-start, the explanation of recommendation, and the automatic generation of user or item profiles. However, it is only recently that the fundamental importance of textual reviews has gained wide recognition, perhaps mainly because of the difficulty in formatting, structuring and analyzing the free-texts. In this research, we stress the importance of incorporating textual reviews for recommendation through phrase-level sentiment analysis, and further investigate the role that the texts play in various important recommendation tasks.",
"Current approaches for contextual sentiment lexicon construction in phrase-level sentiment analysis assume that the numerical star rating of a review represents the overall sentiment orientation of the review text. Although widely adopted, we find through user rating analysis that this is not necessarily true. In this paper, we attempt to bridge the gap between phrase-level and review document-level sentiment analysis by leveraging the results given by review-level sentiment classification to boost phrase-level sentiment polarity labeling in contextual sentiment lexicon construction tasks, using a novel constrained convex optimization framework. Experimental results on both English and Chinese reviews show that our framework improves the precision of sentiment polarity labeling by up to 5.6%, which is a significant improvement from current approaches.",
"",
"Recommender systems attempt to predict items in which a user might be interested, given some information about the user's and items' profiles. Most existing recommender systems use content-based or collaborative filtering methods or hybrid methods that combine both techniques (see the sidebar for more details). We created Informed Recommender to address the problem of using consumer opinion about products, expressed online in free-form text, to generate product recommendations. Informed recommender uses prioritized consumer product reviews to make recommendations. Using text-mining techniques, it maps each piece of each review comment automatically into an ontology.",
"",
"",
"",
"The frequently changing user preferences and/or item profiles have put essential importance on the dynamic modeling of users and items in personalized recommender systems. However, due to the insufficiency of per user/item records when splitting the already sparse data across time dimension, previous methods have to restrict the drifting purchasing patterns to pre-assumed distributions, and were hardly able to model them rather directly with, for example, time series analysis. Integrating content information helps to alleviate the problem in practical systems, but the domain-dependent content knowledge is expensive to obtain due to the large amount of manual efforts. In this paper, we make use of the large volume of textual reviews for the automatic extraction of domain knowledge, namely, the explicit features/aspects in a specific product domain. We thus degrade the product-level modeling of user preferences, which suffers from the lack of data, to the feature-level modeling, which not only grants us the ability to predict user preferences through direct time series analysis, but also allows us to know the essence under the surface of product-level changes in purchasing patterns. Besides, the expanded feature space also helps to make cold-start recommendations for users with few purchasing records. Technically, we develop the Fourier-assisted Auto-Regressive Integrated Moving Average (FARIMA) process to tackle with the year-long seasonal period of purchasing data to achieve daily-aware preference predictions, and we leverage the conditional opportunity models for daily-aware personalized recommendation. Extensive experimental results on real-world cosmetic purchasing data from a major e-commerce website (JD.com) in China verified both the effectiveness and efficiency of our approach.",
"In this paper we show that the extraction of opinions from free-text reviews can improve the accuracy of movie recommendations. We present three approaches to extract movie aspects as opinion targets and use them as features for the collaborative filtering. Each of these approaches requires different amounts of manual interaction. We collected a data set of reviews with corresponding ordinal (star) ratings of several thousand movies to evaluate the different features for the collaborative filtering. We employ a state-of-the-art collaborative filtering engine for the recommendations during our evaluation and compare the performance with and without using the features representing user preferences mined from the free-text reviews provided by the users. The opinion mining based features perform significantly better than the baseline, which is based on star ratings and genre information only.",
"",
"",
"Collaborative filtering relies on numerical ratings for recommendations. While users consider various aspects of content as a basis of their evaluation, a numeric rating provides only an aggregated report of final assessment. The performance of a collaborative recommender system could be enhanced if the ratings are augmented by more specific information used for evaluation. In this paper, we present MovieCommenter, a recommender system that utilizes movie aspects - key features and users' opinions about the movie. We conducted a series of experiments to perform both qualitative and quantitative evaluations of the system performance. The results show that our approach makes more precise recommendations than traditional approaches. Moreover, the interface of MovieCommenter was found to enhance the recommendation explainability, i.e., the ability to explain how the recommendation was made. Because our approach is based on independent schema, this approach could be easily applied for recommending other domain contents."
]
} |
1610.09434 | 2547753369 | Barnum, Crepeau, Gottesman, Tapp, and Smith (quant-ph 0205128) proposed methods for authentication of quantum messages. The first method is an interactive protocol (TQA') based on teleportation. The second method is a noninteractive protocol (QA) in which the sender first encrypts the message using a protocol QEnc and then encodes the quantum ciphertext with an error correcting code chosen secretly from a set (a purity test code (PTC)). Encryption was shown to be necessary for authentication. We augment the protocol QA with an extra step which recycles the entire encryption key provided QA accepts the message. We analyze the resulting integrated protocol for quantum authentication and key generation, which we call QA+KG. Our main result is a proof that QA+KG is universal composably (UC) secure in the Ben-Or-Mayers model (quant-ph 0409062). More specifically, this implies the UC-security of (a) QA, (b) recycling of the encryption key in QA, and (c) key-recycling of the encryption scheme QEnc by appending PTC. For an m-qubit message, encryption requires 2m bits of key; but PTC can be performed using only O(log m) + O(log e) bits of key for probability of failure e. Thus, we reduce the key required for both QA and QEnc, from linear to logarithmic net consumption, at the expense of one bit of back communication which can happen any time after the conclusion of QA and before reusing the key. UC-security of QA also extends security to settings not obvious from quant-ph 0205128. Our security proof structure is inspired by and similar to that of quant-ph 0205128, reducing the security of QA to that of TQA'. In the process, we define UC-secure entanglement, and prove the UC-security of the entanglement generating protocol given in quant-ph 0205128, which could be of independent interest. | We also formalize how TQA uses a subroutine "EBIT[PTC]" which generates entanglement via an insecure channel using PTC as a subroutine.
TQA+KG teleports @cite_5 the quantum message using EBIT[PTC] and a perfect encrypted and authenticated classical channel denoted by C @math . After using the classical message to complete teleportation, it is output as a key. In other words, the protocol TQA+KG can be interpreted as (TP+KG)[EBIT[PTC],C @math ] where TP stands for teleportation. | {
"cite_N": [
"@cite_5"
],
"mid": [
"1978553093"
],
"abstract": [
"An unknown quantum state 〉 can be disassembled into, then later reconstructed from, purely classical information and purely nonclassical Einstein-Podolsky-Rosen (EPR) correlations. To do so the sender, Alice,'' and the receiver, Bob,'' must prearrange the sharing of an EPR-correlated pair of particles. Alice makes a joint measurement on her EPR particle and the unknown quantum system, and sends Bob the classical result of this measurement. Knowing this, Bob can convert the state of his EPR particle into an exact replica of the unknown state 〉 which Alice destroyed."
]
} |
1610.09434 | 2547753369 | Barnum, Crepeau, Gottesman, Tapp, and Smith (quant-ph 0205128) proposed methods for authentication of quantum messages. The first method is an interactive protocol (TQA') based on teleportation. The second method is a noninteractive protocol (QA) in which the sender first encrypts the message using a protocol QEnc and then encodes the quantum ciphertext with an error correcting code chosen secretly from a set (a purity test code (PTC)). Encryption was shown to be necessary for authentication. We augment the protocol QA with an extra step which recycles the entire encryption key provided QA accepts the message. We analyze the resulting integrated protocol for quantum authentication and key generation, which we call QA+KG. Our main result is a proof that QA+KG is universal composably (UC) secure in the Ben-Or-Mayers model (quant-ph 0409062). More specifically, this implies the UC-security of (a) QA, (b) recycling of the encryption key in QA, and (c) key-recycling of the encryption scheme QEnc by appending PTC. For an m-qubit message, encryption requires 2m bits of key; but PTC can be performed using only O(log m) + O(log e) bits of key for probability of failure e. Thus, we reduce the key required for both QA and QEnc, from linear to logarithmic net consumption, at the expense of one bit of back communication which can happen any time after the conclusion of QA and before reusing the key. UC-security of QA also extends security to settings not obvious from quant-ph 0205128. Our security proof structure is inspired by and similar to that of quant-ph 0205128, reducing the security of QA to that of TQA'. In the process, we define UC-secure entanglement, and prove the UC-security of the entanglement generating protocol given in quant-ph 0205128, which could be of independent interest. | The security of key recycling in QA was studied independently by M. Horodecki and Oppenheim @cite_23 in 2003. 
However, @cite_23 does not address the security of QA, and it assumes an adversary who does not possess the purification. For that reason, we believe their claim of UC security, even if it holds, requires a nontrivial proof, but none was given. | {
"cite_N": [
"@cite_23"
],
"mid": [
"2001213350"
],
"abstract": [
"Quantum information is a valuable resource which can be encrypted in order to protect it. We consider the size of the one-time pad that is needed to protect quantum information in a number of cases. The situation is dramatically different from the classical case: we prove that one can recycle the one-time pad without compromising security. The protocol for recycling relies on detecting whether eavesdropping has occurred, and further relies on the fact that information contained in the encrypted quantum state cannot be fully accessed. We prove the security of recycling rates when authentication of quantum states is accepted, and when it is rejected. We note that recycling schemes respect a general law of cryptography which we introduce relating the size of private keys, sent qubits, and encrypted messages. We discuss applications for encryption of quantum information in light of the resources needed for teleportation. Potential uses include the protection of resources such as entanglement and the memory of quantum computers. We also introduce another application: encrypted secret sharing and find that one can even reuse the private key that is used to encrypt a classical message. In a number of cases, one finds that the amount of private keymore » needed for authentication or protection is smaller than in the general case.« less"
]
} |
1610.09434 | 2547753369 | Barnum, Crepeau, Gottesman, Tapp, and Smith (quant-ph 0205128) proposed methods for authentication of quantum messages. The first method is an interactive protocol (TQA') based on teleportation. The second method is a noninteractive protocol (QA) in which the sender first encrypts the message using a protocol QEnc and then encodes the quantum ciphertext with an error correcting code chosen secretly from a set (a purity test code (PTC)). Encryption was shown to be necessary for authentication. We augment the protocol QA with an extra step which recycles the entire encryption key provided QA accepts the message. We analyze the resulting integrated protocol for quantum authentication and key generation, which we call QA+KG. Our main result is a proof that QA+KG is universal composably (UC) secure in the Ben-Or-Mayers model (quant-ph 0409062). More specifically, this implies the UC-security of (a) QA, (b) recycling of the encryption key in QA, and (c) key-recycling of the encryption scheme QEnc by appending PTC. For an m-qubit message, encryption requires 2m bits of key; but PTC can be performed using only O(log m) + O(log e) bits of key for probability of failure e. Thus, we reduce the key required for both QA and QEnc, from linear to logarithmic net consumption, at the expense of one bit of back communication which can happen any time after the conclusion of QA and before reusing the key. UC-security of QA also extends security to settings not obvious from quant-ph 0205128. Our security proof structure is inspired by and similar to that of quant-ph 0205128, reducing the security of QA to that of TQA'. In the process, we define UC-secure entanglement, and prove the UC-security of the entanglement generating protocol given in quant-ph 0205128, which could be of independent interest. 
| In 2005, Damgard, Pedersen, and Salvail @cite_20 proposed key recycling for the encryption of classical messages by using the Wegman-Carter classical authentication scheme followed by a quantum encryption scheme based on key uncertainty or locking @cite_34 . Encryption of quantum messages was said to be possible in the introduction but no proof of this assertion was given in the text. Regardless, the results in @cite_20 are quite different from ours because encryption and authentication of classical messages are much weaker tasks cryptographically. Also, locking is highly non-composable when a quantum adversary has quantum memory and delays measurements. It is unclear how the analysis in @cite_20 fits into the composability framework, despite a claim (without formal definition or proof) of the composable security of the regenerated key. (We detail the differences here since an earlier version of our paper was rejected in 2007 by a referee who assumed this work to be similar to @cite_20 .) | {
"cite_N": [
"@cite_34",
"@cite_20"
],
"mid": [
"1982305921",
"1498523337"
],
"abstract": [
"We show that there exist bipartite quantum states which contain a large locked classical correlation that is unlocked by a disproportionately small amount of classical communication. In particular, there are (2n + 1)-qubit states for which a one-bit message doubles the optimal classical mutual information between measurement results on the subsystems, from n 2 bits to n bits. This phenomenon is impossible classically. However, states exhibiting this behavior need not be entangled. We study the range of states exhibiting this phenomenon and bound its magnitude.",
"Assuming an insecure quantum channel and an authenticated classical channel, we propose an unconditionally secure scheme for encrypting classical messages under a shared key, where attempts to eavesdrop the ciphertext can be detected. If no eavesdropping is detected, we can securely re-use the entire key for encrypting new messages. If eavesdropping is detected, we must discard a number of key bits corresponding to the length of the message, but can re-use almost all of the rest. We show this is essentially optimal. Thus, provided the adversary does not interfere (too much) with the quantum channel, we can securely send an arbitrary number of message bits, independently of the length of the initial key. Moreover, the key-recycling mechanism only requires one-bit feedback. While ordinary quantum key distribution with a classical one time pad could be used instead to obtain a similar functionality, this would need more rounds of interaction and more communication."
]
} |
1610.09434 | 2547753369 | Barnum, Crepeau, Gottesman, Tapp, and Smith (quant-ph 0205128) proposed methods for authentication of quantum messages. The first method is an interactive protocol (TQA') based on teleportation. The second method is a noninteractive protocol (QA) in which the sender first encrypts the message using a protocol QEnc and then encodes the quantum ciphertext with an error correcting code chosen secretly from a set (a purity test code (PTC)). Encryption was shown to be necessary for authentication. We augment the protocol QA with an extra step which recycles the entire encryption key provided QA accepts the message. We analyze the resulting integrated protocol for quantum authentication and key generation, which we call QA+KG. Our main result is a proof that QA+KG is universal composably (UC) secure in the Ben-Or-Mayers model (quant-ph 0409062). More specifically, this implies the UC-security of (a) QA, (b) recycling of the encryption key in QA, and (c) key-recycling of the encryption scheme QEnc by appending PTC. For an m-qubit message, encryption requires 2m bits of key; but PTC can be performed using only O(log m) + O(log e) bits of key for probability of failure e. Thus, we reduce the key required for both QA and QEnc, from linear to logarithmic net consumption, at the expense of one bit of back communication which can happen any time after the conclusion of QA and before reusing the key. UC-security of QA also extends security to settings not obvious from quant-ph 0205128. Our security proof structure is inspired by and similar to that of quant-ph 0205128, reducing the security of QA to that of TQA'. In the process, we define UC-secure entanglement, and prove the UC-security of the entanglement generating protocol given in quant-ph 0205128, which could be of independent interest. | This paper has had an unusually long gestation. We presented a preliminary version of our results at QIP 2004 and a draft has informally circulated since 2008. 
An updated version appeared in QCRYPT 2011. (The full submission was provided to the authors of related works @cite_42 @cite_0 @cite_38 prior to their appearance on the arXiv.) | {
"cite_N": [
"@cite_0",
"@cite_38",
"@cite_42"
],
"mid": [
"2951334183",
"2530225612",
"2136675218"
],
"abstract": [
"We give a new class of security definitions for authentication in the quantum setting. These definitions capture and strengthen existing definitions of security against quantum adversaries for both classical message authentication codes (MACs) and well as full quantum state authentication schemes. The main feature of our definitions is that they precisely characterize the effective behavior of any adversary when the authentication protocol accepts, including correlations with the key. Our definitions readily yield a host of desirable properties and interesting consequences; for example, our security definition for full quantum state authentication implies that the entire secret key can be re-used if the authentication protocol succeeds. Next, we present several protocols satisfying our security definitions. We show that the classical Wegman-Carter authentication scheme with 3-universal hashing is secure against superposition attacks, as well as adversaries with quantum side information. We then present conceptually simple constructions of full quantum state authentication. Finally, we prove a lifting theorem which shows that, as long as a protocol can securely authenticate the maximally entangled state, it can securely authenticate any state, even those that are entangled with the adversary. Thus, this shows that protocols satisfying a fairly weak form of authentication security automatically satisfy a stronger notion of security (in particular, the definition of Dupuis, et al (2012)).",
"We show that a family of quantum authentication protocols introduced in [, FOCS 2002] can be used to construct a secure quantum channel and additionally recycle all of the secret key if the message is successfully authenticated, and recycle part of the key if tampering is detected. We give a full security proof that constructs the secure channel given only insecure noisy channels and a shared secret key. We also prove that the number of recycled key bits is optimal for this family of protocols, i.e., there exists an adversarial strategy to obtain all non-recycled bits. Previous works recycled less key and only gave partial security proofs, since they did not consider all possible distinguishers (environments) that may be used to distinguish the real setting from the ideal secure quantum channel and secret key resource.",
"In their seminal work on authentication, Wegman and Carter propose that to authenticate multiple messages, it is sufficient to reuse the same hash function as long as each tag is encrypted with a one-time pad. They argue that because the one-time pad is perfectly hiding, the hash function used remains completely unknown to the adversary. Since their proof is not composable, we revisit it using a composable security framework. It turns out that the above argument is insufficient: if the adversary learns whether a corrupted message was accepted or rejected, information about the hash function is leaked, and after a bounded finite amount of rounds it is completely known. We show however that this leak is very small: Wegman and Carter's protocol is still ( ) -secure, if ( ) -almost strongly universal (_2 ) hash functions are used. This implies that the secret key corresponding to the choice of hash function can be reused in the next round of authentication without any additional error than this ( ) . We also show that if the players have a mild form of synchronization, namely that the receiver knows when a message should be received, the key can be recycled for any arbitrary task, not only new rounds of authentication."
]
} |
1610.09434 | 2547753369 | Barnum, Crepeau, Gottesman, Tapp, and Smith (quant-ph 0205128) proposed methods for authentication of quantum messages. The first method is an interactive protocol (TQA') based on teleportation. The second method is a noninteractive protocol (QA) in which the sender first encrypts the message using a protocol QEnc and then encodes the quantum ciphertext with an error correcting code chosen secretly from a set (a purity test code (PTC)). Encryption was shown to be necessary for authentication. We augment the protocol QA with an extra step which recycles the entire encryption key provided QA accepts the message. We analyze the resulting integrated protocol for quantum authentication and key generation, which we call QA+KG. Our main result is a proof that QA+KG is universal composably (UC) secure in the Ben-Or-Mayers model (quant-ph 0409062). More specifically, this implies the UC-security of (a) QA, (b) recycling of the encryption key in QA, and (c) key-recycling of the encryption scheme QEnc by appending PTC. For an m-qubit message, encryption requires 2m bits of key; but PTC can be performed using only O(log m) + O(log e) bits of key for probability of failure e. Thus, we reduce the key required for both QA and QEnc, from linear to logarithmic net consumption, at the expense of one bit of back communication which can happen any time after the conclusion of QA and before reusing the key. UC-security of QA also extends security to settings not obvious from quant-ph 0205128. Our security proof structure is inspired by and similar to that of quant-ph 0205128, reducing the security of QA to that of TQA'. In the process, we define UC-secure entanglement, and prove the UC-security of the entanglement generating protocol given in quant-ph 0205128, which could be of independent interest. 
| One feature that slightly distinguishes @cite_0 @cite_38 from ours is that they demonstrate that the entire key can be recycled whereas we sacrifice a vanishing fraction of the key. While interesting theoretically, the distinction is not practically important because, in our case, additional key to make up for the small loss can be added to the message with negligible additional cost. Furthermore, some of the schemes that allow total key recycling require substantially more initial key (while QA is key-optimal up to an additive logarithmic amount, which we believe can be reduced to a constant in view of results in @cite_6 ). | {
"cite_N": [
"@cite_0",
"@cite_38",
"@cite_6"
],
"mid": [
"2951334183",
"2530225612",
""
],
"abstract": [
"We give a new class of security definitions for authentication in the quantum setting. These definitions capture and strengthen existing definitions of security against quantum adversaries for both classical message authentication codes (MACs) and well as full quantum state authentication schemes. The main feature of our definitions is that they precisely characterize the effective behavior of any adversary when the authentication protocol accepts, including correlations with the key. Our definitions readily yield a host of desirable properties and interesting consequences; for example, our security definition for full quantum state authentication implies that the entire secret key can be re-used if the authentication protocol succeeds. Next, we present several protocols satisfying our security definitions. We show that the classical Wegman-Carter authentication scheme with 3-universal hashing is secure against superposition attacks, as well as adversaries with quantum side information. We then present conceptually simple constructions of full quantum state authentication. Finally, we prove a lifting theorem which shows that, as long as a protocol can securely authenticate the maximally entangled state, it can securely authenticate any state, even those that are entangled with the adversary. Thus, this shows that protocols satisfying a fairly weak form of authentication security automatically satisfy a stronger notion of security (in particular, the definition of Dupuis, et al (2012)).",
"We show that a family of quantum authentication protocols introduced in [, FOCS 2002] can be used to construct a secure quantum channel and additionally recycle all of the secret key if the message is successfully authenticated, and recycle part of the key if tampering is detected. We give a full security proof that constructs the secure channel given only insecure noisy channels and a shared secret key. We also prove that the number of recycled key bits is optimal for this family of protocols, i.e., there exists an adversarial strategy to obtain all non-recycled bits. Previous works recycled less key and only gave partial security proofs, since they did not consider all possible distinguishers (environments) that may be used to distinguish the real setting from the ideal secure quantum channel and secret key resource.",
""
]
} |
1610.09434 | 2547753369 | Barnum, Crepeau, Gottesman, Tapp, and Smith (quant-ph 0205128) proposed methods for authentication of quantum messages. The first method is an interactive protocol (TQA') based on teleportation. The second method is a noninteractive protocol (QA) in which the sender first encrypts the message using a protocol QEnc and then encodes the quantum ciphertext with an error correcting code chosen secretly from a set (a purity test code (PTC)). Encryption was shown to be necessary for authentication. We augment the protocol QA with an extra step which recycles the entire encryption key provided QA accepts the message. We analyze the resulting integrated protocol for quantum authentication and key generation, which we call QA+KG. Our main result is a proof that QA+KG is universal composably (UC) secure in the Ben-Or-Mayers model (quant-ph 0409062). More specifically, this implies the UC-security of (a) QA, (b) recycling of the encryption key in QA, and (c) key-recycling of the encryption scheme QEnc by appending PTC. For an m-qubit message, encryption requires 2m bits of key; but PTC can be performed using only O(log m) + O(log e) bits of key for probability of failure e. Thus, we reduce the key required for both QA and QEnc, from linear to logarithmic net consumption, at the expense of one bit of back communication which can happen any time after the conclusion of QA and before reusing the key. UC-security of QA also extends security to settings not obvious from quant-ph 0205128. Our security proof structure is inspired by and similar to that of quant-ph 0205128, reducing the security of QA to that of TQA'. In the process, we define UC-secure entanglement, and prove the UC-security of the entanglement generating protocol given in quant-ph 0205128, which could be of independent interest. | The current manuscript differs from our QCRYPT'11 submission in four ways. 
(1) We found a mis-statement of the adversarial power in the QCRYPT'11 submission which is corrected here -- the adversary should be given the purification of the message (full quantum side information) for the attack. Our proof is independent of whether the adversary is given this purification or not. (2) We simplified the last step of the proof (and as a bonus reduced the insecurity parameter by a factor of 3). (3) In view of @cite_42 , we removed claims of proof of the composability of the Wegman-Carter authentication scheme for classical messages in this paper. Our claim was based on a simple (but slightly mistaken) proof in a half-page appendix. We decided to keep the appendix for readers who want a quick main idea, but refer to the detailed subsequent result in @cite_42 . (4) We revived an appendix on authentication of pure quantum states (which was removed in QCRYPT'11 due to the page limit). Finally, as mentioned earlier, we briefly discussed the case of transmission through a noisy channel, and made other minor changes. | {
"cite_N": [
"@cite_42"
],
"mid": [
"2136675218"
],
"abstract": [
"In their seminal work on authentication, Wegman and Carter propose that to authenticate multiple messages, it is sufficient to reuse the same hash function as long as each tag is encrypted with a one-time pad. They argue that because the one-time pad is perfectly hiding, the hash function used remains completely unknown to the adversary. Since their proof is not composable, we revisit it using a composable security framework. It turns out that the above argument is insufficient: if the adversary learns whether a corrupted message was accepted or rejected, information about the hash function is leaked, and after a bounded finite amount of rounds it is completely known. We show however that this leak is very small: Wegman and Carter's protocol is still ( ) -secure, if ( ) -almost strongly universal (_2 ) hash functions are used. This implies that the secret key corresponding to the choice of hash function can be reused in the next round of authentication without any additional error than this ( ) . We also show that if the players have a mild form of synchronization, namely that the receiver knows when a message should be received, the key can be recycled for any arbitrary task, not only new rounds of authentication."
]
} |
1610.09434 | 2547753369 | Barnum, Crepeau, Gottesman, Tapp, and Smith (quant-ph 0205128) proposed methods for authentication of quantum messages. The first method is an interactive protocol (TQA') based on teleportation. The second method is a noninteractive protocol (QA) in which the sender first encrypts the message using a protocol QEnc and then encodes the quantum ciphertext with an error correcting code chosen secretly from a set (a purity test code (PTC)). Encryption was shown to be necessary for authentication. We augment the protocol QA with an extra step which recycles the entire encryption key provided QA accepts the message. We analyze the resulting integrated protocol for quantum authentication and key generation, which we call QA+KG. Our main result is a proof that QA+KG is universal composably (UC) secure in the Ben-Or-Mayers model (quant-ph 0409062). More specifically, this implies the UC-security of (a) QA, (b) recycling of the encryption key in QA, and (c) key-recycling of the encryption scheme QEnc by appending PTC. For an m-qubit message, encryption requires 2m bits of key; but PTC can be performed using only O(log m) + O(log e) bits of key for probability of failure e. Thus, we reduce the key required for both QA and QEnc, from linear to logarithmic net consumption, at the expense of one bit of back communication which can happen any time after the conclusion of QA and before reusing the key. UC-security of QA also extends security to settings not obvious from quant-ph 0205128. Our security proof structure is inspired by and similar to that of quant-ph 0205128, reducing the security of QA to that of TQA'. In the process, we define UC-secure entanglement, and prove the UC-security of the entanglement generating protocol given in quant-ph 0205128, which could be of independent interest. 
| A second alternative is TQA -- teleport the quantum message using ebits obtained by potentially insecure means in addition to an insecure forward classical channel that needs classical message authentication. (Since we prove the composable security of EBIT[PTC] and the Wegman-Carter scheme is composably secure @cite_42 , this method is composably secure.) Classical message authentication requires a long key, but most of it can be reused securely regardless of the authentication result. EBIT[PTC] uses a small key and back communication, and generates a quantum key. (Thus, back communication is needed during the protocol itself, unlike for QA+KG.) Compared to QA+KG, this scheme uses a similar amount of quantum communication, more initial key and forward classical communication, in addition to a similar amount of classical back communication. But TQA offers two advantages over QA+KG. First, failing PTC when generating ebits does not destroy the quantum message itself (so the message is not only authenticated, but protected). Second, the classical authentication key is always recycled. | {
"cite_N": [
"@cite_42"
],
"mid": [
"2136675218"
],
"abstract": [
"In their seminal work on authentication, Wegman and Carter propose that to authenticate multiple messages, it is sufficient to reuse the same hash function as long as each tag is encrypted with a one-time pad. They argue that because the one-time pad is perfectly hiding, the hash function used remains completely unknown to the adversary. Since their proof is not composable, we revisit it using a composable security framework. It turns out that the above argument is insufficient: if the adversary learns whether a corrupted message was accepted or rejected, information about the hash function is leaked, and after a bounded finite amount of rounds it is completely known. We show however that this leak is very small: Wegman and Carter's protocol is still ( ) -secure, if ( ) -almost strongly universal (_2 ) hash functions are used. This implies that the secret key corresponding to the choice of hash function can be reused in the next round of authentication without any additional error than this ( ) . We also show that if the players have a mild form of synchronization, namely that the receiver knows when a message should be received, the key can be recycled for any arbitrary task, not only new rounds of authentication."
]
} |
1610.09453 | 2963900225 | Degrees of freedom (DoFs) gains are studied in wireless networks with cooperative transmission under a backhaul load constraint that limits the average number of messages that can be delivered from a centralized controller to base station transmitters. The backhaul load is defined as the sum of all the messages available at all the transmitters per channel use, normalized by the number of users. For Wyner's linear interference network, where each transmitter is connected to the receiver having the same index as well as one succeeding receiver, the per user DoF is characterized and the optimal scheme is presented. Furthermore, it is shown that the optimal assignment of messages to transmitters is asymmetric and satisfies a local cooperation constraint and the optimal coding scheme relies only on one-shot cooperative zero-forcing transmit beamforming. Using insights from the analysis of Wyner's linear interference network, the results are extended to the more practical hexagonal sectored cellular network, and coding schemes based on cooperative zero-forcing are shown to deliver significant DoF gains. It is established that by allowing for cooperative transmission and a flexible message assignment that is constrained only by an average backhaul load, one can deliver the rate gains promised by information-theoretic upper bounds with practical one-shot schemes that incur little or no additional load on the backhaul. Finally, useful upper bounds on the per user DoF for schemes based on cooperative zero-forcing are presented for lower values of the average backhaul load constraint, and an optimization framework is formulated for the general converse problem. | A major advance in the theoretical analysis of interference management in large wireless networks took place with the introduction of asymptotic interference alignment (IA) in @cite_6 .
IA beamforming relies on signaling over a number of time slots (symbol extension) that goes to infinity in order to enable the achievability of a per user DoF of @math in a fully connected interference network. However, the gains offered by IA are considered to be infeasible in practice, and a major reason for the infeasibility is the excessive requirement on the length of symbol extension, which would lead to impractical delays. An important aspect of this work is that we show that the promised gains of interference alignment can be achieved with one-shot coding schemes that do not require symbol extension, if we consider more practical network models than the fully connected model and allow for cooperative transmission, even without requiring additional overall load on the supporting backhaul. | {
"cite_N": [
"@cite_6"
],
"mid": [
"1979408141"
],
"abstract": [
"For the fully connected K user wireless interference channel where the channel coefficients are time-varying and are drawn from a continuous distribution, the sum capacity is characterized as C(SNR)=K 2log(SNR)+o(log(SNR)) . Thus, the K user time-varying interference channel almost surely has K 2 degrees of freedom. Achievability is based on the idea of interference alignment. Examples are also provided of fully connected K user interference channels with constant (not time-varying) coefficients where the capacity is exactly achieved by interference alignment at all SNR values."
]
} |
1610.09453 | 2963900225 | Degrees of freedom (DoFs) gains are studied in wireless networks with cooperative transmission under a backhaul load constraint that limits the average number of messages that can be delivered from a centralized controller to base station transmitters. The backhaul load is defined as the sum of all the messages available at all the transmitters per channel use, normalized by the number of users. For Wyner’s linear interference network, where each transmitter is connected to the receiver having the same index as well as one succeeding receiver, the per user DoF is characterized and the optimal scheme is presented. Furthermore, it is shown that the optimal assignment of messages to transmitters is asymmetric and satisfies a local cooperation constraint and the optimal coding scheme relies only on one-shot cooperative zero-forcing transmit beamforming. Using insights from the analysis of Wyner’s linear interference network, the results are extended to the more practical hexagonal sectored cellular network, and coding schemes based on cooperative zero-forcing are shown to deliver significant DoF gains. It is established that by allowing for cooperative transmission and a flexible message assignment that is constrained only by an average backhaul load, one can deliver the rate gains promised by information-theoretic upper bounds with practical one-shot schemes that incur little or no additional load on the backhaul. Finally, useful upper bounds on the per user DoF for schemes based on cooperative zero-forcing are presented for lower values of the average backhaul load constraint, and an optimization framework is formulated for the general converse problem. | Degrees of freedom gains in the hexagonal cellular downlink using cooperative transmission were considered in @cite_8 , where the transmitting base stations cooperate by exchanging quantized dirty paper coded signals.
However, implementing such a scheme can face practical challenges, as each transmitter gets its message only after a series of preceding transmitters have encoded their messages; this will require either significant delay or coding over multiple time slots. Further, under this setting, the only way for messages to be delivered to transmitters through a centralized controller is for the controller to be aware of the channel state information. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2180674075"
],
"abstract": [
"The implementation of a cloud radio access network (C-RAN) with full dimensional (FD) multiple-input multiple-output (MIMO) is faced with the challenge of controlling the fronthaul overhead for the transmission of baseband signals as the number of horizontal and vertical antennas grows larger. This paper proposes to leverage the special low-rank structure of the FD-MIMO channel, which is characterized by a time-invariant elevation component and a time-varying azimuth component, by means of a layered precoding approach, to reduce the fronthaul overhead. According to this scheme, separate precoding matrices are applied for the azimuth and elevation channel components, with different rates of adaptation to the channel variations and correspondingly different impacts on the fronthaul capacity. Moreover, we consider two different central unit (CU)-radio unit (RU) functional splits at the physical layer, namely, the conventional C-RAN implementation and an alternative one in which coding and precoding are performed at the RUs. Via numerical results, it is shown that the layered schemes significantly outperform conventional nonlayered schemes, particularly in the regime of low fronthaul capacity and a large number of vertical antennas."
]
} |
1610.09369 | 2952558995 | We present discriminative Gaifman models, a novel family of relational machine learning models. Gaifman models learn feature representations bottom up from representations of locally connected and bounded-size regions of knowledge bases (KBs). Considering local and bounded-size neighborhoods of knowledge bases renders logical inference and learning tractable, mitigates the problem of overfitting, and facilitates weight sharing. Gaifman models sample neighborhoods of knowledge bases so as to make the learned relational models more robust to missing objects and relations which is a common situation in open-world KBs. We present the core ideas of Gaifman models and apply them to large-scale relational learning problems. We also discuss the ways in which Gaifman models relate to some existing relational machine learning approaches. | Recent work on relational machine learning for knowledge graphs is surveyed in @cite_0 . We focus on a select few methods we deem most related to Gaifman models and refer the interested reader to the above article. A large body of work exists on learning inference rules from knowledge bases. Examples include @cite_9 and @cite_3 where inference rules of length one are learned; and @cite_2 where general inference rules are learned by applying a support threshold. Their method does not scale to large KBs and depends on predetermined thresholds. @cite_19 train a logistic regression classifier with path features to perform KB completion. The idea is to perform a random walk between objects and to exploit the discovered paths as features. SFE @cite_1 improves PRA by making the generation of random walks more efficient. More recent embedding methods have combined paths in KBs with KB embedding methods @cite_12 . Gaifman models support a much broader class of relational features subsuming path features. For instance, Gaifman models incorporate counting features that have been shown to be beneficial for relational models. | {
"cite_N": [
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_2",
"@cite_12"
],
"mid": [
"314565566",
"2250601658",
"2103729963",
"1529533208",
"1756422141",
"",
"2952854166"
],
"abstract": [
"The task of identifying synonymous relations and objects, or Synonym Resolution (SR), is critical for high-quality information extraction. The bulk of previous SR work assumed strong domain knowledge or hand-tagged training examples. This paper investigates SR in the context of unsupervised information extraction, where neither is available. The paper presents a scalable, fully-implemented system for SR that runs in O(KN log N) time in the number of extractions N and the maximum number of synonyms per word, K. The system, called RESOLVER, introduces a probabilistic relational model for predicting whether two strings are co-referential based on the similarity of the assertions containing them. Given two million assertions extracted from the Web, RESOLVER resolves objects with 78 precision and an estimated 68 recall and resolves relations with 90 precision and 35 recall.",
"We explore some of the practicalities of using random walk inference methods, such as the Path Ranking Algorithm (PRA), for the task of knowledge base completion. We show that the random walk probabilities computed (at great expense) by PRA provide no discernible benefit to performance on this task, so they can safely be dropped. This allows us to define a simpler algorithm for generating feature matrices from graphs, which we call subgraph feature extraction (SFE). In addition to being conceptually simpler than PRA, SFE is much more efficient, reducing computation by an order of magnitude, and more expressive, allowing for much richer features than paths between two nodes in a graph. We show experimentally that this technique gives substantially better performance than PRA and its variants, improving mean average precision from .432 to .528 on a knowledge base completion task using the NELL KB.",
"Extensive knowledge bases of entailment rules between predicates are crucial for applied semantic inference. In this paper we propose an algorithm that utilizes transitivity constraints to learn a globally-optimal set of entailment rules for typed predicates. We model the task as a graph learning problem and suggest methods that scale the algorithm to larger graphs. We apply the algorithm over a large data set of extracted predicate instances, from which a resource of typed entailment rules has been recently released (, 2010). Our results show that using global transitivity information substantially improves performance over this resource and several baselines, and that our scaling methods allow us to increase the scope of global learning of entailment-rule graphs.",
"Relational machine learning studies methods for the statistical analysis of relational, or graph-structured, data. In this paper, we provide a review of how such statistical models can be “trained” on large knowledge graphs, and then used to predict new facts about the world (which is equivalent to predicting new edges in the graph). In particular, we discuss two fundamentally different kinds of statistical relational models, both of which can scale to massive data sets. The first is based on latent feature models such as tensor factorization and multiway neural networks. The second is based on mining observable patterns in the graph. We also show how to combine these latent and observable models to get improved modeling power at decreased computational cost. Finally, we discuss how such statistical models of graphs can be combined with text-based information extraction methods for automatically constructing knowledge graphs from the Web. To this end, we also discuss Google's knowledge vault project as an example of such combination.",
"We consider the problem of performing learning and inference in a large scale knowledge base containing imperfect knowledge with incomplete coverage. We show that a soft inference procedure based on a combination of constrained, weighted, random walks through the knowledge base graph can be used to reliably infer new beliefs for the knowledge base. More specifically, we show that the system can learn to infer different target relations by tuning the weights associated with random walks that follow different paths through the graph, using a version of the Path Ranking Algorithm (Lao and Cohen, 2010b). We apply this approach to a knowledge base of approximately 500,000 beliefs extracted imperfectly from the web by NELL, a never-ending language learner (, 2010). This new system improves significantly over NELL's earlier Horn-clause learning and inference method: it obtains nearly double the precision at rank 100, and the new learning method is also applicable to many more inference tasks.",
"",
"Representation learning of knowledge bases (KBs) aims to embed both entities and relations into a low-dimensional space. Most existing methods only consider direct relations in representation learning. We argue that multiple-step relation paths also contain rich inference patterns between entities, and propose a path-based representation learning model. This model considers relation paths as translations between entities for representation learning, and addresses two key challenges: (1) Since not all relation paths are reliable, we design a path-constraint resource allocation algorithm to measure the reliability of relation paths. (2) We represent relation paths via semantic composition of relation embeddings. Experimental results on real-world datasets show that, as compared with baselines, our model achieves significant and consistent improvements on knowledge base completion and relation extraction from text."
]
} |
1610.09369 | 2952558995 | We present discriminative Gaifman models, a novel family of relational machine learning models. Gaifman models learn feature representations bottom up from representations of locally connected and bounded-size regions of knowledge bases (KBs). Considering local and bounded-size neighborhoods of knowledge bases renders logical inference and learning tractable, mitigates the problem of overfitting, and facilitates weight sharing. Gaifman models sample neighborhoods of knowledge bases so as to make the learned relational models more robust to missing objects and relations which is a common situation in open-world KBs. We present the core ideas of Gaifman models and apply them to large-scale relational learning problems. We also discuss the ways in which Gaifman models relate to some existing relational machine learning approaches. | Latent feature models learn features for objects and relations that are not directly observed in the data. Examples of latent feature models are tensor factorization @cite_29 @cite_28 @cite_8 and embedding models @cite_4 @cite_13 @cite_16 @cite_26 @cite_30 @cite_6 . The majority of these models can be understood as more or less complex neural networks operating on object and relation representations. Gaifman models can also be used to learn knowledge base embeddings. Indeed, one can show that it generalizes or complements existing approaches. For instance, the universal schema @cite_28 considers pairs of objects where relation membership variables comprise the model's features. We have the following interesting relationship between universal schemas @cite_28 and Gaifman models. Given a knowledge base @math . The Gaifman model for @math with @math , @math , @math , @math and @math is equivalent to the Universal Schema @cite_28 for @math up to the base model class @math . More recent methods combine embedding methods and inference-based logical approaches for relation extraction @cite_24 . 
Contrary to most existing multi-relational ML models @cite_0 , Gaifman models natively support higher-arity relations, functional and type constraints, numerical features, and complex target queries. | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_4",
"@cite_8",
"@cite_28",
"@cite_29",
"@cite_6",
"@cite_24",
"@cite_0",
"@cite_16",
"@cite_13"
],
"mid": [
"2433281745",
"2184957013",
"2156954687",
"2127426251",
"1852412531",
"205829674",
"2432356473",
"2296268288",
"1529533208",
"2127795553",
"1596986901"
],
"abstract": [
"We model knowledge graphs for their completion by encoding each entity and relation into a numerical space. All previous work including Trans(E, H, R, and D) ignore the heterogeneity (some relations link many entity pairs and others do not) and the imbalance (the number of head entities and that of tail entities in a relation could be different) of knowledge graphs. In this paper, we propose a novel approach TranSparse to deal with the two issues. In TranSparse, transfer matrices are replaced by adaptive sparse matrices, whose sparse degrees are determined by the number of entities (or entity pairs) linked by relations. In experiments, we design structured and unstructured sparse patterns for transfer matrices and analyze their advantages and disadvantages. We evaluate our approach on triplet classification and link prediction tasks. Experimental results show that TranSparse outperforms Trans(E, H, R, and D) significantly, and achieves state-of-the-art performance.",
"Knowledge graph completion aims to perform link prediction between entities. In this paper, we consider the approach of knowledge graph embeddings. Recently, models such as TransE and TransH build entity and relation embeddings by regarding a relation as translation from head entity to tail entity. We note that these models simply put both entities and relations within the same semantic space. In fact, an entity may have multiple aspects and various relations may focus on different aspects of entities, which makes a common space insufficient for modeling. In this paper, we propose TransR to build entity and relation embeddings in separate entity space and relation spaces. Afterwards, we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities. In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to state-of-the-art baselines including TransE and TransH. The source code of this paper can be obtained from https: github.com mrlyk423 relation_extraction.",
"",
"Knowledge bases are an important resource for question answering and other tasks but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships. In this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities. Previous work represented entities as either discrete atomic units or with a single entity vector representation. We show that performance can be improved when entities are represented as an average of their constituting word vectors. This allows sharing of statistical strength between, for instance, facts involving the \"Sumatran tiger\" and \"Bengal tiger.\" Lastly, we demonstrate that all models improve when these word vectors are initialized with vectors learned from unsupervised large corpora. We assess the model by considering the problem of predicting additional true relations between entities given a subset of the knowledge base. Our model outperforms previous models and can classify unseen relationships in WordNet and FreeBase with an accuracy of 86.2 and 90.0 , respectively.",
"© 2013 Association for Computational Linguistics. Traditional relation extraction predicts relations within some fixed and finite target schema. Machine learning approaches to this task require either manual annotation or, in the case of distant supervision, existing structured sources of the same schema. The need for existing datasets can be avoided by using a universal schema: the union of all involved schemas (surface form predicates as in OpenIE, and relations in the schemas of preexisting databases). This schema has an almost unlimited set of relations (due to surface forms), and supports integration with existing structured data (through the relation types of existing databases). To populate a database of such schema we present matrix factorization models that learn latent feature vectors for entity tuples and relations. We show that such latent models achieve substantially higher accuracy than a traditional classification approach. More importantly, by operating simultaneously on relations observed in text and in pre-existing structured DBs such as Freebase, we are able to reason about unstructured and structured data in mutually-supporting ways. By doing so our approach outperforms stateof- the-Art distant supervision.",
"Relational learning is becoming increasingly important in many areas of application. Here, we present a novel approach to relational learning based on the factorization of a three-way tensor. We show that unlike other tensor approaches, our method is able to perform collective learning via the latent components of the model and provide an efficient algorithm to compute the factorization. We substantiate our theoretical considerations regarding the collective learning capabilities of our model by the means of experiments on both a new dataset and a dataset commonly used in entity resolution. Furthermore, we show on common benchmark datasets that our approach achieves better or on-par results, if compared to current state-of-the-art relational learning solutions, while it is significantly faster to compute.",
"In statistical relational learning, the link prediction problem is key to automatically understand the structure of large knowledge bases. As in previous studies, we propose to solve this problem through latent factorization. However, here we make use of complex valued embeddings. The composition of complex embeddings can handle a large variety of binary relations, among them symmetric and antisymmetric relations. Compared to state-of-the-art models such as Neural Tensor Network and Holographic Embeddings, our approach based on complex embeddings is arguably simpler, as it only uses the Hermitian dot product, the complex counterpart of the standard dot product between real vectors. Our approach is scalable to large datasets as it remains linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks.",
"Matrix factorization approaches to relation extraction provide several attractive features: they support distant supervision, handle open schemas, and leverage unlabeled data. Unfortunately, these methods share a shortcoming with all other distantly supervised approaches: they cannot learn to extract target relations without existing data in the knowledge base, and likewise, these models are inaccurate for relations with sparse data. Rule-based extractors, on the other hand, can be easily extended to novel relations and improved for existing but inaccurate relations, through first-order formulae that capture auxiliary domain knowledge. However, usually a large set of such formulae is necessary to achieve generalization. In this paper, we introduce a paradigm for learning low-dimensional embeddings of entity-pairs and relations that combine the advantages of matrix factorization with first-order logic domain knowledge. We introduce simple approaches for estimating such embeddings, as well as a novel training algorithm to jointly optimize over factual and first-order logic information. Our results show that this method is able to learn accurate extractors with little or no distant supervision alignments, while at the same time generalizing to textual patterns that do not appear in the formulae.",
"Relational machine learning studies methods for the statistical analysis of relational, or graph-structured, data. In this paper, we provide a review of how such statistical models can be “trained” on large knowledge graphs, and then used to predict new facts about the world (which is equivalent to predicting new edges in the graph). In particular, we discuss two fundamentally different kinds of statistical relational models, both of which can scale to massive data sets. The first is based on latent feature models such as tensor factorization and multiway neural networks. The second is based on mining observable patterns in the graph. We also show how to combine these latent and observable models to get improved modeling power at decreased computational cost. Finally, we discuss how such statistical models of graphs can be combined with text-based information extraction methods for automatically constructing knowledge graphs from the Web. To this end, we also discuss Google's knowledge vault project as an example of such combination.",
"We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. Besides, it can be successfully trained on a large scale data set with 1M entities, 25k relationships and more than 17M training samples.",
"Open-text semantic parsers are designed to interpret any statement in natural language by inferring a corresponding meaning representation (MR – a formal representation of its sense). Unfortunately, large scale systems cannot be easily machine-learned due to a lack of directly supervised data. We propose a method that learns to assign MRs to a wide range of text (using a dictionary of more than 70,000 words mapped to more than 40,000 entities) thanks to a training scheme that combines learning from knowledge bases (e.g. WordNet) with learning from raw text. The model jointly learns representations of words, entities and MRs via a multi-task training process operating on these diverse sources of data. Hence, the system ends up providing methods for knowledge acquisition and wordsense disambiguation within the context of semantic parsing in a single elegant framework. Experiments on these various tasks indicate the promise of the approach."
]
} |
1610.09237 | 2951860819 | We propose a new approach to designing visual markers (analogous to QR-codes, markers for augmented reality, and robotic fiducial tags) based on the advances in deep generative networks. In our approach, the markers are obtained as color images synthesized by a deep network from input bit strings, whereas another deep network is trained to recover the bit strings back from the photos of these markers. The two networks are trained simultaneously in a joint backpropagation process that takes characteristic photometric and geometric distortions associated with marker fabrication and marker scanning into account. Additionally, a stylization loss based on statistics of activations in a pretrained classification network can be inserted into the learning in order to shift the marker appearance towards some texture prototype. In the experiments, we demonstrate that the markers obtained using our approach are capable of retaining bit strings that are long enough to be practical. The ability to automatically adapt markers according to the usage scenario and the desired capacity as well as the ability to combine information encoding with artistic stylization are the unique properties of our approach. As a byproduct, our approach provides an insight on the structure of patterns that are most suitable for recognition by ConvNets and on their ability to distinguish composite patterns. | Our work is partially motivated by the recent approaches that analyze and visualize pretrained deep networks by synthesizing color images evoking certain responses in these networks. 
Towards this end, @cite_20 generate examples that maximize probabilities of certain classes according to the network; @cite_18 generate visual illusions that maximize such probabilities while retaining similarity to a predefined image of a potentially different class; and @cite_23 also investigate ways of generating highly-abstract and structured color images that maximize probabilities of a certain class. Finally, @cite_13 synthesize color images that evoke a predefined vector of responses at a certain level of the network for the purpose of network inversion. Our approach is related to these approaches, since our markers can be regarded as stimuli invoking certain responses in the recognizer network. Unlike these approaches, our recognizer network is not kept fixed but is updated together with the synthesizer network that generates the marker images. | {
"cite_N": [
"@cite_13",
"@cite_18",
"@cite_23",
"@cite_20"
],
"mid": [
"2949987032",
"1849277567",
"",
"2962851944"
],
"abstract": [
"Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited. In this paper we conduct a direct analysis of the visual information contained in representations by asking the following question: given an encoding of an image, to which extent is it possible to reconstruct the image itself? To answer this question we contribute a general framework to invert representations. We show that this method can invert representations such as HOG and SIFT more accurately than recent alternatives while being applicable to CNNs too. We then use this technique to study the inverse of recent state-of-the-art CNN image representations for the first time. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance.",
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark [18]. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.",
"",
"This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13]."
]
} |
1610.09237 | 2951860819 | We propose a new approach to designing visual markers (analogous to QR-codes, markers for augmented reality, and robotic fiducial tags) based on the advances in deep generative networks. In our approach, the markers are obtained as color images synthesized by a deep network from input bit strings, whereas another deep network is trained to recover the bit strings back from the photos of these markers. The two networks are trained simultaneously in a joint backpropagation process that takes characteristic photometric and geometric distortions associated with marker fabrication and marker scanning into account. Additionally, a stylization loss based on statistics of activations in a pretrained classification network can be inserted into the learning in order to shift the marker appearance towards some texture prototype. In the experiments, we demonstrate that the markers obtained using our approach are capable of retaining bit strings that are long enough to be practical. The ability to automatically adapt markers according to the usage scenario and the desired capacity as well as the ability to combine information encoding with artistic stylization are the unique properties of our approach. As a byproduct, our approach provides an insight on the structure of patterns that are most suitable for recognition by ConvNets and on their ability to distinguish composite patterns. | Another obvious connection is to autoencoders @cite_24 , which are models trained to (1) encode inputs into a compact intermediate representation through the encoder network and (2) recover the original input by passing the compact representation through the decoder network. Our system can be regarded as a special kind of autoencoder with a specific format of the intermediate representation (a color image). Our decoder is trained to be robust to a certain class of transformations of the intermediate representations that are modeled by the rendering network.
In this respect, our approach is related to variational autoencoders @cite_21 that are trained with stochastic intermediate representations and to denoising autoencoders @cite_14 that are trained to be robust to noise. | {
"cite_N": [
"@cite_24",
"@cite_14",
"@cite_21"
],
"mid": [
"2072128103",
"2025768430",
"2951004968"
],
"abstract": [
"Can machine learning deliver AI? Theoretical results, inspiration from the brain and cognition, as well as machine learning experiments suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one would need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as in neural nets with many hidden layers, graphical models with many levels of latent variables, or in complicated propositional formulae re-using many sub-formulae. Each level of the architecture represents features at a different level of abstraction, defined as a composition of lower-level features. Searching the parameter space of deep architectures is a difficult task, but new algorithms have been discovered and a new sub-area has emerged in the machine learning community since 2006, following these discoveries. Learning algorithms such as those for Deep Belief Networks and other related unsupervised learning algorithms have recently been proposed to train deep architectures, yielding exciting results and beating the state-of-the-art in certain areas. Learning Deep Architectures for AI discusses the motivations for and principles of learning algorithms for deep architectures. By analyzing and comparing recent results with different learning algorithms for deep architectures, explanations for their success are proposed and discussed, highlighting challenges and suggesting avenues for future explorations in this area.",
"Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.",
"How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions is two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results."
]
} |
1610.09237 | 2951860819 | We propose a new approach to designing visual markers (analogous to QR-codes, markers for augmented reality, and robotic fiducial tags) based on the advances in deep generative networks. In our approach, the markers are obtained as color images synthesized by a deep network from input bit strings, whereas another deep network is trained to recover the bit strings back from the photos of these markers. The two networks are trained simultaneously in a joint backpropagation process that takes characteristic photometric and geometric distortions associated with marker fabrication and marker scanning into account. Additionally, a stylization loss based on statistics of activations in a pretrained classification network can be inserted into the learning in order to shift the marker appearance towards some texture prototype. In the experiments, we demonstrate that the markers obtained using our approach are capable of retaining bit strings that are long enough to be practical. The ability to automatically adapt markers according to the usage scenario and the desired capacity as well as the ability to combine information encoding with artistic stylization are the unique properties of our approach. As a byproduct, our approach provides an insight on the structure of patterns that are most suitable for recognition by ConvNets and on their ability to distinguish composite patterns. | Finally, our approach for creating textured markers can be related to steganography @cite_7 , which aims at hiding a signal in an image. Unlike steganography, we do not aim to conceal information, but just to minimize its "intrusiveness", while keeping the information machine-readable in the presence of distortions associated with printing and scanning. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2124890704"
],
"abstract": [
"Information-hiding techniques have recently become important in a number of application areas. Digital audio, video, and pictures are increasingly furnished with distinguishing but imperceptible marks, which may contain a hidden copyright notice or serial number or even help to prevent unauthorized copying directly. Military communications systems make increasing use of traffic security techniques which, rather than merely concealing the content of a message using encryption, seek to conceal its sender, its receiver, or its very existence. Similar techniques are used in some mobile phone systems and schemes proposed for digital elections. Criminals try to use whatever traffic security properties are provided intentionally or otherwise in the available communications systems, and police forces try to restrict their use. However, many of the techniques proposed in this young and rapidly evolving field can trace their history back to antiquity, and many of them are surprisingly easy to circumvent. In this article, we try to give an overview of the field, of what we know, what works, what does not, and what are the interesting topics for research."
]
} |
1610.08815 | 2544767710 | Sarcasm detection is a key task for many natural language processing tasks. In sentiment analysis, for example, sarcasm can flip the polarity of an "apparently positive" sentence and, hence, negatively affect polarity detection performance. To date, most approaches to sarcasm detection have treated the task primarily as a text categorization problem. Sarcasm, however, can be expressed in very subtle ways and requires a deeper understanding of natural language that standard text categorization techniques cannot grasp. In this work, we develop models based on a pre-trained convolutional neural network for extracting sentiment, emotion and personality features for sarcasm detection. Such features, along with the network's baseline features, allow the proposed models to outperform the state of the art on benchmark datasets. We also address the often ignored generalizability issue of classifying data that have not been seen by the models at learning phase. | NLP research is gradually evolving from lexical to compositional semantics @cite_22 through the adoption of novel meaning-preserving and context-aware paradigms such as convolutional networks @cite_35 , recurrent belief networks @cite_3 , statistical learning theory @cite_19 , convolutional multiple kernel learning @cite_12 , and commonsense reasoning @cite_14 . But while other NLP tasks have been extensively investigated, sarcasm detection is a relatively new research topic which has gained increasing interest only recently, partly thanks to the rise of social media analytics and sentiment analysis. Sentiment analysis @cite_7 , with the use of multimodal information as a new trend @cite_33 @cite_23 @cite_36 @cite_4 @cite_12 , is a popular branch of NLP research that aims to understand the sentiment of documents automatically using a combination of various machine learning approaches @cite_15 @cite_24 @cite_4 @cite_21 . | {
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_33",
"@cite_36",
"@cite_21",
"@cite_3",
"@cite_24",
"@cite_19",
"@cite_23",
"@cite_15",
"@cite_12"
],
"mid": [
"2427312199",
"2481359644",
"2740550900",
"2341587966",
"2556418146",
"2465534249",
"",
"",
"2466376234",
"",
"2493920898",
"2251394420",
"2244706744",
"2583643061"
],
"abstract": [
"In this paper, we present the first deep learning approach to aspect extraction in opinion mining. Aspect extraction is a subtask of sentiment analysis that consists in identifying opinion targets in opinionated text, i.e., in detecting the specific aspects of a product or service the opinion holder is either praising or complaining about. We used a 7-layer deep convolutional neural network to tag each word in opinionated sentences as either aspect or non-aspect word. We also developed a set of linguistic patterns for the same purpose and combined them with the neural network. The resulting ensemble classifier, coupled with a word-embedding model for sentiment analysis, allowed our approach to obtain significantly better accuracy than state-of-the-art methods.",
"This volume presents a knowledge-based approach to concept-level sentiment analysis at the crossroads between affective computing, information extraction, and common-sense computing, which exploits both computer and social sciences to better interpret and process information on the Web. Concept-level sentiment analysis goes beyond a mere word-level analysis of text in order to enable a more efficient passage from (unstructured) textual information to (structured) machine-processable data, in potentially any domain. Readers will discover the following key novelties, that make this approach so unique and avant-garde, being reviewed and discussed: Sentic Computing's multi-disciplinary approach to sentiment analysis-evidenced by the concomitant use of AI, linguistics and psychology for knowledge representation and inference Sentic Computing's shift from syntax to semantics-enabled by the adoption of the bag-of-concepts model instead of simply counting word co-occurrence frequencies in text Sentic Computing's shift from statistics to linguistics-implemented by allowing sentiments to flow from concept to concept based on the dependency relation between clauses This volume is the first in the Series Socio-Affective Computing edited by Dr Amir Hussain and Dr Erik Cambria and will be of interest to researchers in the fields of socially intelligent, affective and multimodal human-machine interaction and systems.",
"",
"",
"People share their opinions, stories, and reviews through online video sharing websites every day. The automatic analysis of these online opinion videos is bringing new or understudied research challenges to the field of computational linguistics and multimodal analysis. Among these challenges is the fundamental question of exploiting the dynamics between visual gestures and verbal messages to be able to better model sentiment. This article addresses this question in four ways: introducing the first multimodal dataset with opinion-level sentiment intensity annotations; studying the prototypical interaction patterns between facial gestures and spoken words when inferring sentiment intensity; proposing a new computational representation, called multimodal dictionary, based on a language-gesture study; and evaluating the authors' proposed approach in a speaker-independent paradigm for sentiment intensity prediction. The authors' study identifies four interaction types between facial gestures and verbal content: neutral, emphasizer, positive, and negative interactions. Experiments show statistically significant improvement when using multimodal dictionary representation over the conventional early fusion representation (that is, feature concatenation).",
"People are sharing their opinions, stories and reviews through online video sharing websites every day. Studying sentiment and subjectivity in these opinion videos is experiencing a growing attention from academia and industry. While sentiment analysis has been successful for text, it is an understudied research question for videos and multimedia content. The biggest setbacks for studies in this direction are lack of a proper dataset, methodology, baselines and statistical analysis of how information from different modality sources relate to each other. This paper introduces to the scientific community the first opinion-level annotated corpus of sentiment and subjectivity analysis in online videos called Multimodal Opinion-level Sentiment Intensity dataset (MOSI). The dataset is rigorously annotated with labels for subjectivity, sentiment intensity, per-frame and per-opinion annotated visual features, and per-milliseconds annotated audio features. Furthermore, we present baselines for future studies in this direction as well as a new multimodal fusion approach that jointly models spoken words and visual gestures.",
"",
"",
"We propose a deep recurrent belief network with distributed time delays for learning multivariate Gaussians. Learning long time delays in deep belief networks is difficult due to the problem of vanishing or exploding gradients with increase in delay. To mitigate this problem and improve the transparency of learning time-delays, we introduce the use of Gaussian networks with time-delays to initialize the weights of each hidden neuron. From our knowledge of time delays, it is possible to learn the long delays from short delays in a hierarchical manner. In contrast to previous works, here dynamic Gaussian Bayesian networks over training samples are evolved using Markov Chain Monte Carlo to determine the initial weights of each hidden layer of neurons. In this way, the time-delayed network motifs of increasing Markov order across layers can be modeled hierarchically using a deep model. To validate the proposed Variable-order Belief Network (VBN) framework, it is applied for modeling word dependencies in text. To explore the generality of VBN, it is further considered for a real-world scenario where the dynamic movements of basketball players are modeled. Experimental results obtained showed that the proposed VBN could achieve over 30% improvement in accuracy on real-world scenarios compared to the state-of-the-art baselines.",
"",
"The science of opinion analysis based on data from social networks and other forms of mass media has garnered the interest of the scientific community and the business world. Dealing with the increasing amount of information present on the Web is a critical task and requires efficient models developed by the emerging field of sentiment analysis. To this end, current research proposes an efficient approach to support emotion recognition and polarity detection in natural language text. In this paper, we show how to exploit the most recent technological tools and advances in Statistical Learning Theory (SLT) in order to efficiently build an Extreme Learning Machine (ELM) and assess the resultant model's performance when applied to big social data analysis. ELM represents a powerful learning tool, developed to overcome some issues in back-propagation networks. The main problem with ELM is in training them to work in the event of a large number of available samples, where the generalization performance has to be carefully assessed. For this reason, we propose an ELM implementation that exploits the Spark distributed in memory technology and show how to take advantage of the most recent advances in SLT in order to address the issue of selecting ELM hyperparameters that give the best generalization performance.",
"We present a novel way of extracting features from short texts, based on the activation values of an inner layer of a deep convolutional neural network. We use the extracted features in multimodal sentiment analysis of short video clips representing one sentence each. We use the combined feature vectors of textual, visual, and audio modalities to train a classifier based on multiple kernel learning, which is known to be good at heterogeneous data. We obtain 14% performance improvement over the state of the art and present a parallelizable decision-level data fusion method, which is much faster, though slightly less accurate.",
"There has been substantial progress in the field of text based sentiment analysis but little effort has been made to incorporate other modalities. Previous work in sentiment analysis has shown that using multimodal data yields to more accurate models of sentiment. Efforts have been made towards expressing sentiment as a spectrum of intensity rather than just positive or negative. Such models are useful not only for detection of positivity or negativity, but also giving out a score of how positive or negative a statement is. Based on the state of the art studies in sentiment analysis, prediction in terms of sentiment score is still far from accurate, even in large datasets [27]. Another challenge in sentiment analysis is dealing with small segments or micro opinions as they carry less context than large segments thus making analysis of the sentiment harder. This paper presents a Ph.D. thesis shaped towards comprehensive studies in multimodal micro-opinion sentiment intensity analysis.",
"Technology has enabled anyone with an Internet connection to easily create and share their ideas, opinions and content with millions of other people around the world. Much of the content being posted and consumed online is multimodal. With billions of phones, tablets and PCs shipping today with built-in cameras and a host of new video-equipped wearables like Google Glass on the horizon, the amount of video on the Internet will only continue to increase. It has become increasingly difficult for researchers to keep up with this deluge of multimodal content, let alone organize or make sense of it. Mining useful knowledge from video is a critical need that will grow exponentially, in pace with the global growth of content. This is particularly important in sentiment analysis, as both service and product reviews are gradually shifting from unimodal to multimodal. We present a novel method to extract features from visual and textual modalities using deep convolutional neural networks. By feeding such features to a multiple kernel learning classifier, we significantly outperform the state of the art of multimodal emotion recognition and sentiment analysis on different datasets."
]
} |
1610.08815 | 2544767710 | Sarcasm detection is a key task for many natural language processing tasks. In sentiment analysis, for example, sarcasm can flip the polarity of an "apparently positive" sentence and, hence, negatively affect polarity detection performance. To date, most approaches to sarcasm detection have treated the task primarily as a text categorization problem. Sarcasm, however, can be expressed in very subtle ways and requires a deeper understanding of natural language that standard text categorization techniques cannot grasp. In this work, we develop models based on a pre-trained convolutional neural network for extracting sentiment, emotion and personality features for sarcasm detection. Such features, along with the network's baseline features, allow the proposed models to outperform the state of the art on benchmark datasets. We also address the often ignored generalizability issue of classifying data that have not been seen by the models at learning phase. | An early work in this field was done by @cite_18 on a dataset of 6,600 manually annotated Amazon reviews using a kNN-classifier over punctuation-based and pattern-based features, i.e., ordered sequence of high frequency words. @cite_37 used support vector machine (SVM) and logistic regression over a feature set of unigrams, dictionary-based lexical features and pragmatic features (e.g., emoticons) and compared the performance of the classifier with that of humans. @cite_28 described a set of textual features for recognizing irony at a linguistic level, especially in short texts created via Twitter, and constructed a new model that was assessed along two dimensions: representativeness and relevance. @cite_13 used the presence of a positive sentiment in close proximity of a negative situation phrase as a feature for sarcasm detection. @cite_9 used the Balanced Window algorithm for classifying Dutch tweets as sarcastic vs. 
non-sarcastic; n-grams (uni, bi and tri) and intensifiers were used as features for classification. | {
"cite_N": [
"@cite_18",
"@cite_37",
"@cite_28",
"@cite_9",
"@cite_13"
],
"mid": [
"2099653665",
"2250489604",
"",
"2114661483",
"2250710744"
],
"abstract": [
"Sarcasm is a sophisticated form of speech act widely used in online communities. Automatic recognition of sarcasm is, however, a novel task. Sarcasm recognition could contribute to the performance of review summarization and ranking systems. This paper presents SASI, a novel Semi-supervised Algorithm for Sarcasm Identification that recognizes sarcastic sentences in product reviews. SASI has two stages: semisupervised pattern acquisition, and sarcasm classification. We experimented on a data set of about 66000 Amazon reviews for various books and products. Using a gold standard in which each sentence was tagged by 3 annotators, we obtained precision of 77% and recall of 83.1% for identifying sarcastic sentences. We found some strong features that characterize sarcastic utterances. However, a combination of more subtle pattern-based features proved more promising in identifying the various facets of sarcasm. We also speculate on the motivation for using sarcasm in online communities and social networks.",
"Sarcasm transforms the polarity of an apparently positive or negative utterance into its opposite. We report on a method for constructing a corpus of sarcastic Twitter messages in which determination of the sarcasm of each message has been made by its author. We use this reliable corpus to compare sarcastic utterances in Twitter to utterances that express positive or negative attitudes without sarcasm. We investigate the impact of lexical and pragmatic factors on machine learning effectiveness for identifying sarcastic utterances and we compare the performance of machine learning techniques and human judges on this task. Perhaps unsurprisingly, neither the human judges nor the machine learning techniques perform very well.",
"",
"To avoid a sarcastic message being understood in its unintended literal meaning, in microtexts such as messages on Twitter.com sarcasm is often explicitly marked with the hashtag ‘#sarcasm’. We collected a training corpus of about 78 thousand Dutch tweets with this hashtag. Assuming that the human labeling is correct (annotation of a sample indicates that about 85% of these tweets are indeed sarcastic), we train a machine learning classifier on the harvested examples, and apply it to a test set of a day’s stream of 3.3 million Dutch tweets. Of the 135 explicitly marked tweets on this day, we detect 101 (75%) when we remove the hashtag. We annotate the top of the ranked list of tweets most likely to be sarcastic that do not have the explicit hashtag. 30% of the top-250 ranked tweets are indeed sarcastic. Analysis shows that sarcasm is often signalled by hyperbole, using intensifiers and exclamations; in contrast, non-hyperbolic sarcastic messages often receive an explicit marker. We hypothesize that explicit markers such as hashtags are the digital extralinguistic equivalent of nonverbal expressions that people employ in live interaction when conveying sarcasm.",
"A common form of sarcasm on Twitter consists of a positive sentiment contrasted with a negative situation. For example, many sarcastic tweets include a positive sentiment, such as “love” or “enjoy”, followed by an expression that describes an undesirable activity or state (e.g., “taking exams” or “being ignored”). We have developed a sarcasm recognizer to identify this type of sarcasm in tweets. We present a novel bootstrapping algorithm that automatically learns lists of positive sentiment phrases and negative situation phrases from sarcastic tweets. We show that identifying contrasting contexts using the phrases learned through bootstrapping yields improved recall for sarcasm recognition."
]
} |
1610.08815 | 2544767710 | Sarcasm detection is a key task for many natural language processing tasks. In sentiment analysis, for example, sarcasm can flip the polarity of an "apparently positive" sentence and, hence, negatively affect polarity detection performance. To date, most approaches to sarcasm detection have treated the task primarily as a text categorization problem. Sarcasm, however, can be expressed in very subtle ways and requires a deeper understanding of natural language that standard text categorization techniques cannot grasp. In this work, we develop models based on a pre-trained convolutional neural network for extracting sentiment, emotion and personality features for sarcasm detection. Such features, along with the network's baseline features, allow the proposed models to outperform the state of the art on benchmark datasets. We also address the often ignored generalizability issue of classifying data that have not been seen by the models at learning phase. | @cite_10 compared the performance of different classifiers on the Amazon review dataset using the imbalance between the sentiment expressed by the review and the user-given star rating. Features based on frequency (gap between rare and common words), written-spoken gap (in terms of difference between usage), synonyms (based on the difference in frequency of synonyms) and ambiguity (number of words with many synonyms) were used by @cite_0 for sarcasm detection in tweets. @cite_2 proposed the use of implicit and explicit incongruity-based features along with lexical and pragmatic features, such as emoticons and punctuation marks. Their method is very similar to that proposed by @cite_13 , except that @cite_2 also used explicit incongruity features. Their method outperforms the approach by @cite_13 on two datasets. | {
"cite_N": [
"@cite_0",
"@cite_10",
"@cite_13",
"@cite_2"
],
"mid": [
"2251379416",
"2157961599",
"2250710744",
"2251920663"
],
"abstract": [
"Automatic detection of figurative language is a challenging task in computational linguistics. Recognising both literal and figurative meaning is not trivial for a machine and in some cases it is hard even for humans. For this reason novel and accurate systems able to recognise figurative languages are necessary. We present in this paper a novel computational model capable to detect sarcasm in the social network Twitter (a popular microblogging service which allows users to post short messages). Our model is easy to implement and, unlike previous systems, it does not include patterns of words as features. Our seven sets of lexical features aim to detect sarcasm by its inner structure (for example unexpectedness, intensity of the terms or imbalance between registers), abstracting from the use of specific terms.",
"Irony is an important device in human communication, both in everyday spoken conversations as well as in written texts including books, websites, chats, reviews, and Twitter messages among others. Specific cases of irony and sarcasm have been studied in different contexts but, to the best of our knowledge, only recently the first publicly available corpus including annotations about whether a text is ironic or not has been published by Filatova (2012). However, no baseline for classification of ironic or sarcastic reviews has been provided. With this paper, we aim at closing this gap. We formulate the problem as a supervised classification task and evaluate different classifiers, reaching an F1-measure of up to 74% using logistic regression. We analyze the impact of a number of features which have been proposed in previous research as well as combinations of them.",
"A common form of sarcasm on Twitter consists of a positive sentiment contrasted with a negative situation. For example, many sarcastic tweets include a positive sentiment, such as “love” or “enjoy”, followed by an expression that describes an undesirable activity or state (e.g., “taking exams” or “being ignored”). We have developed a sarcasm recognizer to identify this type of sarcasm in tweets. We present a novel bootstrapping algorithm that automatically learns lists of positive sentiment phrases and negative situation phrases from sarcastic tweets. We show that identifying contrasting contexts using the phrases learned through bootstrapping yields improved recall for sarcasm recognition.",
"The relationship between context incongruity and sarcasm has been studied in linguistics. We present a computational system that harnesses context incongruity as a basis for sarcasm detection. Our statistical sarcasm classifiers incorporate two kinds of incongruity features: explicit and implicit. We show the benefit of our incongruity features for two text forms tweets and discussion forum posts. Our system also outperforms two past works (with F-score improvement of 10-20%). We also show how our features can capture intersentential incongruity."
]
} |
1610.08815 | 2544767710 | Sarcasm detection is a key task for many natural language processing tasks. In sentiment analysis, for example, sarcasm can flip the polarity of an "apparently positive" sentence and, hence, negatively affect polarity detection performance. To date, most approaches to sarcasm detection have treated the task primarily as a text categorization problem. Sarcasm, however, can be expressed in very subtle ways and requires a deeper understanding of natural language that standard text categorization techniques cannot grasp. In this work, we develop models based on a pre-trained convolutional neural network for extracting sentiment, emotion and personality features for sarcasm detection. Such features, along with the network's baseline features, allow the proposed models to outperform the state of the art on benchmark datasets. We also address the often ignored generalizability issue of classifying data that have not been seen by the models at learning phase. | @cite_25 compared the performance of different language-independent features and pre-processing techniques for classifying text as sarcastic and non-sarcastic. The comparison was done over three Twitter datasets in two different languages, two of these in English with a balanced and an imbalanced distribution and the third one in Czech. The feature set included n-grams, word-shape patterns, pointedness and punctuation-based features. | {
"cite_N": [
"@cite_25"
],
"mid": [
"2251210340"
],
"abstract": [
"This paper presents a machine learning approach to sarcasm detection on Twitter in two languages – English and Czech. Although there has been some research in sarcasm detection in languages other than English (e.g., Dutch, Italian, and Brazilian Portuguese), our work is the first attempt at sarcasm detection in the Czech language. We created a large Czech Twitter corpus consisting of 7,000 manually-labeled tweets and provide it to the community. We evaluate two classifiers with various combinations of features on both the Czech and English datasets. Furthermore, we tackle the issues of rich Czech morphology by examining different preprocessing techniques. Experiments show that our language-independent approach significantly outperforms adapted state-of-the-art methods in English (F-measure 0.947) and also represents a strong baseline for further research in Czech (F-measure 0.582)."
]
} |
1610.09044 | 2950743331 | We propose that by integrating behavioural biometric gestures---such as drawing figures on a touch screen---with challenge-response based cognitive authentication schemes, we can benefit from the properties of both. On the one hand, we can improve the usability of existing cognitive schemes by significantly reducing the number of challenge-response rounds by (partially) relying on the hardness of mimicking carefully designed behavioural biometric gestures. On the other hand, the observation resistant property of cognitive schemes provides an extra layer of protection for behavioural biometrics; an attacker is unsure if a failed impersonation is due to a biometric failure or a wrong response to the challenge. We design and develop an instantiation of such a "hybrid" scheme, and call it BehavioCog. To provide security close to a 4-digit PIN---one in 10,000 chance to impersonate---we only need two challenge-response rounds, which can be completed in less than 38 seconds on average (as estimated in our user study), with the advantage that unlike PINs or passwords, the scheme is secure under observation. | A number of touch-based behavioural biometric schemes have been proposed for user authentication @cite_49 @cite_45 @cite_39 @cite_34 . Most of these schemes rely on simple gestures on smartphones, such as swipes. We have argued that if we were to use simple gestures, then a much larger number of them would need to be accumulated to achieve good accuracy. Moreover, swipes are prone to observation attacks @cite_59 . The work of @cite_52 does include more complex (free-form) gestures and partly inspired our symbol set of complex figures. However, their gestures are only known to resist shoulder-surfing attacks, not video-based observation attacks in which the attacker has full control over the playback. The closest work to ours is by Toan et al. @cite_32 .
Their scheme authenticates users on the basis of how they write their PINs on the smartphone touch screen using @math coordinates. In comparison, we perform a more detailed feature-selection process to identify features that are repeatable and resilient against observation attacks. Furthermore, they report an equal error rate (EER) of 6.7 | {
"cite_N": [
"@cite_52",
"@cite_32",
"@cite_39",
"@cite_45",
"@cite_59",
"@cite_49",
"@cite_34"
],
"mid": [
"2052525588",
"",
"2404603298",
"2151854612",
"2468988960",
"2102932275",
"2055389916"
],
"abstract": [
"This paper studies the security and memorability of free-form multitouch gestures for mobile authentication. Towards this end, we collected a dataset with a generate-test-retest paradigm where participants (N=63) generated free-form gestures, repeated them, and were later retested for memory. Half of the participants decided to generate one-finger gestures, and the other half generated multi-finger gestures. Although there has been recent work on template-based gestures, there are yet no metrics to analyze security of either template or free-form gestures. For example, entropy-based metrics used for text-based passwords are not suitable for capturing the security and memorability of free-form gestures. Hence, we modify a recently proposed metric for analyzing information capacity of continuous full-body movements for this purpose. Our metric computed estimated mutual information in repeated sets of gestures. Surprisingly, one-finger gestures had higher average mutual information. Gestures with many hard angles and turns had the highest mutual information. The best-remembered gestures included signatures and simple angular shapes. We also implemented a multitouch recognizer to evaluate the practicality of free-form gestures in a real authentication system and how they perform against shoulder surfing attacks. We discuss strategies for generating secure and memorable free-form gestures. We conclude that free-form gestures present a robust method for mobile authentication.",
"",
"The widespread usage of smartphones gives rise to new security and privacy concerns. Smartphones are becoming a personal entrance to networks, and may store private information. Due to its small size, a smartphone could be easily taken away and used by an attacker. Using a victim’s smartphone, the attacker can launch an impersonation attack, which threatens the security of current networks, especially online social networks. Therefore, it is necessary to design a mechanism for smartphones to re-authenticate the current user’s identity and alert the owner when necessary. Such a mechanism can help to inhibit smartphone theft and safeguard the information stored in smartphones. In this paper, we propose a novel biometric-based system to achieve continuous and unobservable re-authentication for smartphones. The system uses a classifier to learn the owner’s finger movement patterns and checks the current user’s finger movement patterns against the owner’s. The system continuously re-authenticates the current user without interrupting user-smartphone interactions. Experiments show that our system is efficient on smartphones and achieves high accuracy.",
"We investigate whether a classifier can continuously authenticate users based on the way they interact with the touchscreen of a smart phone. We propose a set of 30 behavioral touch features that can be extracted from raw touchscreen logs and demonstrate that different users populate distinct subspaces of this feature space. In a systematic experiment designed to test how this behavioral pattern exhibits consistency over time, we collected touch data from users interacting with a smart phone using basic navigation maneuvers, i.e., up-down and left-right scrolling. We propose a classification framework that learns the touch behavior of a user during an enrollment phase and is able to accept or reject the current user by monitoring interaction with the touch screen. The classifier achieves a median equal error rate of 0% for intrasession authentication, 2%-3% for intersession authentication, and below 4% when the authentication test was carried out one week after the enrollment phase. While our experimental findings disqualify this method as a standalone authentication mechanism for long-term authentication, it could be implemented as a means to extend screen-lock time or as a part of a multimodal biometric authentication system.",
"Touch input implicit authentication (``touch IA'') employs behavioural biometrics like touch location and pressure to continuously and transparently authenticate smartphone users. We provide the first ever evaluation of targeted mimicry attacks on touch IA and show that it fails against shoulder surfing and offline training attacks. Based on experiments with three diverse touch IA schemes and 256 unique attacker-victim pairs, we show that shoulder surfing attacks have a bypass success rate of 84% with the majority of successful attackers observing the victim's behaviour for less than two minutes. Therefore, the accepted assumption that shoulder surfing attacks on touch IA are infeasible due to the hidden nature of some features is incorrect. For offline training attacks, we created an open-source training app for attackers to train on their victims' touch data. With this training, attackers achieved bypass success rates of 86%, even with only partial knowledge of the underlying features used by the IA scheme. Previous work failed to find these severe vulnerabilities due to its focus on random, non-targeted attacks. Our work demonstrates the importance of considering targeted mimicry attacks to evaluate the security of an implicit authentication scheme. Based on our results, we conclude that touch IA is unsuitable from a security standpoint.",
"Current smartphones generally cannot continuously authenticate users during runtime. This poses severe security and privacy threats: A malicious user can manipulate the phone if bypassing the screen lock. To solve this problem, our work adopts a continuous and passive authentication mechanism based on a user’s touch operations on the touchscreen. Such a mechanism is suitable for smartphones, as it requires no extra hardware or intrusive user interface. We study how to model multiple types of touch data and perform continuous authentication accordingly. As a first attempt, we also investigate the fundamentals of touch operations as biometrics by justifying their distinctiveness and permanence. A onemonth experiment is conducted involving over 30 users. Our experiment results verify that touch biometrics can serve as a promising method for continuous and passive authentication.",
"With the rich functionalities and enhanced computing capabilities available on mobile computing devices with touch screens, users not only store sensitive information (such as credit card numbers) but also use privacy sensitive applications (such as online banking) on these devices, which make them hot targets for hackers and thieves. To protect private information, such devices typically lock themselves after a few minutes of inactivity and prompt a password/PIN/pattern screen when reactivated. Passwords/PINs/patterns based schemes are inherently vulnerable to shoulder surfing attacks and smudge attacks. Furthermore, passwords/PINs/patterns are inconvenient for users to enter frequently. In this paper, we propose GEAT, a gesture based user authentication scheme for the secure unlocking of touch screen devices. Unlike existing authentication schemes for touch screen devices, which use what user inputs as the authentication secret, GEAT authenticates users mainly based on how they input, using distinguishing features such as finger velocity, device acceleration, and stroke time. Even if attackers see what gesture a user performs, they cannot reproduce the behavior of the user doing gestures through shoulder surfing or smudge attacks. We implemented GEAT on Samsung Focus running Windows, collected 15009 gesture samples from 50 volunteers, and conducted real-world experiments to evaluate GEAT's performance. Experimental results show that our scheme achieves an average equal error rate of 0.5% with 3 gestures using only 25 training samples."
]
} |
1610.08904 | 2543665857 | Existing deep embedding methods in vision tasks are capable of learning a compact Euclidean space from images, where Euclidean distances correspond to a similarity metric. To make learning more effective and efficient, hard sample mining is usually employed, with samples identified through computing the Euclidean feature distance. However, the global Euclidean distance cannot faithfully characterize the true feature similarity in a complex visual feature space, where the intraclass distance in a high-density region may be larger than the interclass distance in low-density regions. In this paper, we introduce a Position-Dependent Deep Metric (PDDM) unit, which is capable of learning a similarity metric adaptive to local feature structure. The metric can be used to select genuinely hard samples in a local neighborhood to guide the deep embedding learning in an online and robust manner. The new layer is appealing in that it is pluggable to any convolutional networks and is trained end-to-end. Our local similarity-aware feature embedding not only demonstrates faster convergence and boosted performance on two complex image retrieval datasets, its large margin nature also leads to superior generalization results under the large and open set scenarios of transfer learning and zero-shot learning on ImageNet 2010 and ImageNet-10K datasets. | Hard sample mining is a popular technique used in computer vision for training robust classifiers. The method aims at progressively augmenting a training set with false positive examples found by the model learned so far. It is at the core of many successful vision solutions, such as pedestrian detection @cite_21 @cite_12 . In a similar spirit, contemporary deep embedding methods @cite_20 @cite_22 choose hard samples in a mini-batch by computing the Euclidean distance in the embedding space. For instance, Schroff et al. @cite_20 selected online the semi-hard negative samples with relatively small Euclidean distances.
Wang et al. @cite_34 proposed an online reservoir importance sampling algorithm to sample triplets by relevance scores, which are computed offline with different distance metrics. Similar studies on image descriptor learning @cite_29 and unsupervised feature learning @cite_4 also select hard samples according to the Euclidean distance-based losses in their respective CNNs. We argue in this paper that the global Euclidean distance is a suboptimal similarity metric for hard sample mining, and propose a locally adaptive metric for better mining. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_29",
"@cite_21",
"@cite_34",
"@cite_12",
"@cite_20"
],
"mid": [
"219040644",
"2176040302",
"1577117850",
"2161969291",
"1975517671",
"2168356304",
"2096733369"
],
"abstract": [
"Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52% mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4%. We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation.",
"Learning the distance metric between pairs of examples is of great importance for learning and visual recognition. With the remarkable success from the state of the art convolutional neural networks, recent works have shown promising results on discriminatively training the networks to learn semantic feature embeddings where similar examples are mapped close to each other and dissimilar examples are mapped farther apart. In this paper, we describe an algorithm for taking full advantage of the training batches in the neural network training by lifting the vector of pairwise distances within the batch to the matrix of pairwise distances. This step enables the algorithm to learn the state of the art feature embedding by optimizing a novel structured prediction objective on the lifted problem. Additionally, we collected Online Products dataset: 120k images of 23k classes of online products for metric learning. Our experiments on the CUB-200-2011, CARS196, and Online Products datasets demonstrate significant improvement over existing deep feature embedding methods on all experimented embedding sizes with the GoogLeNet network.",
"In this paper we propose a novel framework for learning local image descriptors in a discriminative manner. For this purpose we explore a siamese architecture of Deep Convolutional Neural Networks (CNN), with a Hinge embedding loss on the L2 distance between descriptors. Since a siamese architecture uses pairs rather than single image patches to train, there exist a large number of positive samples and an exponential number of negative samples. We propose to explore this space with a stochastic sampling of the training set, in combination with an aggressive mining strategy over both the positive and negative samples which we denote as \"fracking\". We perform a thorough evaluation of the architecture hyper-parameters, and demonstrate large performance gains compared to both standard CNN learning strategies, hand-crafted image descriptors like SIFT, and the state-of-the-art on learned descriptors: up to 2.5x vs SIFT and 1.5x vs the state-of-the-art in terms of the area under the curve (AUC) of the Precision-Recall curve.",
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.",
"Learning fine-grained image similarity is a challenging task. It needs to capture between-class and within-class image differences. This paper proposes a deep ranking model that employs deep learning techniques to learn similarity metric directly from images. It has higher learning capability than models based on hand-crafted features. A novel multiscale network structure has been developed to describe the images effectively. An efficient triplet sampling algorithm is also proposed to learn the model with distributed asynchronized stochastic gradient. Extensive experiments show that the proposed algorithm outperforms models based on hand-crafted visual features and deep classification models.",
"We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.",
"Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors."
]
} |
1610.08469 | 2949651313 | Food and nutrition occupy an increasingly prevalent space on the web, and dishes and recipes shared online provide an invaluable mirror into culinary cultures and attitudes around the world. More specifically, ingredients, flavors, and nutrition information become strong signals of the taste preferences of individuals and civilizations. However, there is little understanding of these palate varieties. In this paper, we present a large-scale study of recipes published on the web and their content, aiming to understand cuisines and culinary habits around the world. Using a database of more than 157K recipes from over 200 different cuisines, we analyze ingredients, flavors, and nutritional values which distinguish dishes from different regions, and use this knowledge to assess the predictability of recipes from different cuisines. We then use country health statistics to understand the relation between these factors and health indicators of different nations, such as obesity, diabetes, migration, and health expenditure. Our results confirm the strong effects of geographical and cultural similarities on recipes, health indicators, and culinary preferences across the globe. | Recently, public health has been increasingly analyzed through the lens of the web and social media. We refer the reader to @cite_20 for an overview of the recent research in this area. @cite_7 relate food mentions in Twitter conversations to obesity and diabetes rates, using caloric values, and find a high correlation (coefficient 0.77) between the caloric values of tweets and obesity rates in various US states. Low-obesity areas of the US have also been shown to be more socially active on Instagram (posting comments and likes) than high-obesity ones by @cite_16 , who present a large-scale analysis of pictures taken at 164K restaurants in the US.
@cite_26 identify cultural boundaries and similarities across populations at different scales based on the analysis of Foursquare check-ins. | {
"cite_N": [
"@cite_16",
"@cite_7",
"@cite_20",
"@cite_26"
],
"mid": [
"1974289028",
"2001488574",
"2049461911",
"1481628590"
],
"abstract": [
"We present a large-scale analysis of Instagram pictures taken at 164,753 restaurants by millions of users. Motivated by the obesity epidemic in the United States, our aim is three-fold: (i) to assess the relationship between fast food and chain restaurants and obesity, (ii) to better understand people's thoughts on and perceptions of their daily dining experiences, and (iii) to reveal the nature of social reinforcement and approval in the context of dietary health on social media. When we correlate the prominence of fast food restaurants in US counties with obesity, we find the Foursquare data to show a greater correlation at 0.424 than official survey data from the County Health Rankings would show. Our analysis further reveals a relationship between small businesses and local foods with better dietary health, with such restaurants getting more attention in areas of lower obesity. However, even in such areas, social approval favors the unhealthy foods high in sugar, with donut shops producing the most liked photos. Thus, the dietary landscape our study reveals is a complex ecosystem, with fast food playing a role alongside social interactions and personal perceptions, which often may be at odds.",
"Food is an integral part of our lives, cultures, and well-being, and is of major interest to public health. The collection of daily nutritional data involves keeping detailed diaries or periodic surveys and is limited in scope and reach. Alternatively, social media is infamous for allowing its users to update the world on the minutiae of their daily lives, including their eating habits. In this work we examine the potential of Twitter to provide insight into US-wide dietary choices by linking the tweeted dining experiences of 210K users to their interests, demographics, and social networks. We validate our approach by relating the caloric values of the foods mentioned in the tweets to the state-wide obesity rates, achieving a Pearson correlation of 0.77 across the 50 US states and the District of Columbia. We then build a model to predict county-wide obesity and diabetes statistics based on a combination of demographic variables and food names mentioned on Twitter. Our results show significant improvement over previous CHI research (Culotta 2014). We further link this data to societal and economic factors, such as education and income, illustrating that areas with higher education levels tweet about food that is significantly less caloric. Finally, we address the somewhat controversial issue of the social nature of obesity (Christakis & Fowler 2007) by inducing two social networks using mentions and reciprocal following relationships.",
"Background: Social networking sites (SNSs) have the potential to increase the reach and efficiency of essential public health services, such as surveillance, research, and communication. Objective: The objective of this study was to conduct a systematic literature review to identify the use of SNSs for public health research and practice and to identify existing knowledge gaps. Methods: We performed a systematic literature review of articles related to public health and SNSs using PubMed, EMBASE, and CINAHL to search for peer-reviewed publications describing the use of SNSs for public health research and practice. We also conducted manual searches of relevant publications. Each publication was independently reviewed by 2 researchers for inclusion and extracted relevant study data. Results: A total of 73 articles met our inclusion criteria. Most articles (n=50) were published in the final 2 years covered by our search. In all, 58 articles were in the domain of public health research and 15 were in public health practice. Only 1 study was conducted in a low-income country. Most articles (63/73, 86%) described observational studies involving users or usages of SNSs; only 5 studies involved randomized controlled trials. A large proportion (43/73, 59%) of the identified studies included populations considered hard to reach, such as young individuals, adolescents, and individuals at risk of sexually transmitted diseases or alcohol and substance abuse. Few articles (2/73, 3%) described using the multidirectional communication potential of SNSs to engage study populations. Conclusions: The number of publications about public health uses for SNSs has been steadily increasing in the past 5 years. With few exceptions, the literature largely consists of observational studies describing users and usages of SNSs regarding topics of public health interest. More studies that fully exploit the communication tools embedded in SNSs and study their potential to produce significant effects in the overall population’s health are needed. [J Med Internet Res 2014;16(3):e79]",
"Food and drink are two of the most basic needs of human beings. However, as society evolved, food and drink became also a strong cultural aspect, being able to describe strong differences among people. Traditional methods used to analyze cross-cultural differences are mainly based on surveys and, for this reason, they are very difficult to represent a significant statistical sample at a global scale. In this paper, we propose a new methodology to identify cultural boundaries and similarities across populations at different scales based on the analysis of Foursquare check-ins. This approach might be useful not only for economic purposes, but also to support existing and novel marketing and social applications. Our methodology consists of the following steps. First, we map food and drink related check-ins extracted from Foursquare into users' cultural preferences. Second, we identify particular individual preferences, such as the taste for a certain type of food or drink, e.g., pizza or sake, as well as temporal habits, such as the time and day of the week when an individual goes to a restaurant or a bar. Third, we show how to analyze this information to assess the cultural distance between two countries, cities or even areas of a city. Fourth, we apply a simple clustering technique, using this cultural distance measure, to draw cultural boundaries across countries, cities and regions."
]
} |
1610.08469 | 2949651313 | Food and nutrition occupy an increasingly prevalent space on the web, and dishes and recipes shared online provide an invaluable mirror into culinary cultures and attitudes around the world. More specifically, ingredients, flavors, and nutrition information become strong signals of the taste preferences of individuals and civilizations. However, there is little understanding of these palate varieties. In this paper, we present a large-scale study of recipes published on the web and their content, aiming to understand cuisines and culinary habits around the world. Using a database of more than 157K recipes from over 200 different cuisines, we analyze ingredients, flavors, and nutritional values which distinguish dishes from different regions, and use this knowledge to assess the predictability of recipes from different cuisines. We then use country health statistics to understand the relation between these factors and health indicators of different nations, such as obesity, diabetes, migration, and health expenditure. Our results confirm the strong effects of geographical and cultural similarities on recipes, health indicators, and culinary preferences across the globe. | @cite_11 study culture-specific ingredient connections, creating a ``flavor network'' from a dataset of about 56K recipes and relating them to the geographical groupings of countries. Similar ``flavor-based'' food pairing studies are conducted on cuisines in distinct geographical areas such as India @cite_0 . @cite_8 mine logs of recipe-related queries to uncover temporal patterns in consumption. Using Fourier transforms, they show the yearly and weekly periodicity in the ``food density'' of the searched recipes, with different trends in the Southern and Northern hemispheres, suggesting a link between food selection and climate. A study of Austrian recipe sites by @cite_21 also highlights differences in the recipes of regions which are further apart.
@cite_6 conduct a similar study on Chinese recipes to investigate the effect of geographical and climatic proximities on the ingredient similarity of domestic cuisines. | {
"cite_N": [
"@cite_8",
"@cite_21",
"@cite_6",
"@cite_0",
"@cite_11"
],
"mid": [
"2953044842",
"54088320",
"1992169730",
"2155844147",
""
],
"abstract": [
"Nutrition is a key factor in people's overall health. Hence, understanding the nature and dynamics of population-wide dietary preferences over time and space can be valuable in public health. To date, studies have leveraged small samples of participants via food intake logs or treatment data. We propose a complementary source of population data on nutrition obtained via Web logs. Our main contribution is a spatiotemporal analysis of population-wide dietary preferences through the lens of logs gathered by a widely distributed Web-browser add-on, using the access volume of recipes that users seek via search as a proxy for actual food consumption. We discover that variation in dietary preferences as expressed via recipe access has two main periodic components, one yearly and the other weekly, and that there exist characteristic regional differences in terms of diet within the United States. In a second study, we identify users who show evidence of having made an acute decision to lose weight. We characterize the shifts in interests that they express in their search queries and focus on changes in their recipe queries in particular. Last, we correlate nutritional time series obtained from recipe queries with time-aligned data on hospital admissions, aimed at understanding how behavioral data captured in Web logs might be harnessed to identify potential relationships between diet and acute health problems. In this preliminary study, we focus on patterns of sodium identified in recipes over time and patterns of admission for congestive heart failure, a chronic illness that can be exacerbated by increases in sodium intake.",
"Since food is one of the central elements of all human beings, a high interest exists in exploring temporal and spatial food and dietary patterns of humans. Predominantly, data for such investigations stem from consumer panels which continuously capture food consumption patterns from individuals and households. In this work we leverage data from a large online recipe platform which is frequently used in the German speaking regions in Europe and explore (i) the association between geographic proximity and shared food preferences and (ii) to what extent temporal information helps to predict the food preferences of users. Our results reveal that online food preferences of geographically closer regions are more similar than those of distant ones and show that specific types of ingredients are more popular on specific days of the week. The observed patterns can successfully be mapped to known real-world patterns which suggests that existing methods for the investigation of dietary and food patterns (e.g., consumer panels) may benefit from incorporating the vast amount of data generated by users browsing recipes on the Web.",
"Food occupies a central position in every culture and it is therefore of great interest to understand the evolution of food culture. The advent of the World Wide Web and online recipe repositories have begun to provide unprecedented opportunities for data-driven, quantitative study of food culture. Here we harness an online database documenting recipes from various Chinese regional cuisines and investigate the similarity of regional cuisines in terms of geography and climate. We find that geographical proximity, rather than climate proximity, is a crucial factor that determines the similarity of regional cuisines. We develop a model of regional cuisine evolution that provides helpful clues for understanding the evolution of cuisines and cultures.",
"Any national cuisine is a sum total of its variety of regional cuisines, which are the cultural and historical identifiers of their respective regions. India is home to a number of regional cuisines that showcase its culinary diversity. Here, we study recipes from eight different regional cuisines of India spanning various geographies and climates. We investigate the phenomenon of food pairing which examines compatibility of two ingredients in a recipe in terms of their shared flavor compounds. Food pairing was enumerated at the level of cuisine, recipes as well as ingredient pairs by quantifying flavor sharing between pairs of ingredients. Our results indicate that each regional cuisine follows negative food pairing pattern; more the extent of flavor sharing between two ingredients, lesser their co-occurrence in that cuisine. We find that frequency of ingredient usage is central in rendering the characteristic food pairing in each of these cuisines. Spice and dairy emerged as the most significant ingredient classes responsible for the biased pattern of food pairing. Interestingly while individual spices contribute to negative food pairing, dairy products on the other hand tend to deviate food pairing towards positive side. Our data analytical study highlighting statistical properties of the regional cuisines, brings out their culinary fingerprints that could be used to design algorithms for generating novel recipes and recipe recommender systems. It forms a basis for exploring possible causal connection between diet and health as well as prospection of therapeutic molecules from food ingredients. Our study also provides insights as to how big data can change the way we look at food.",
""
]
} |
1610.08557 | 2949483916 | Biomedical word sense disambiguation (WSD) is an important intermediate task in many natural language processing applications such as named entity recognition, syntactic parsing, and relation extraction. In this paper, we employ knowledge-based approaches that also exploit recent advances in neural word concept embeddings to improve over the state-of-the-art in biomedical WSD using the MSH WSD dataset as the test set. Our methods involve weak supervision - we do not use any hand-labeled examples for WSD to build our prediction models; however, we employ an existing well known named entity recognition and concept mapping program, MetaMap, to obtain our concept vectors. Over the MSH WSD dataset, our linear time (in terms of numbers of senses and words in the test instance) method achieves an accuracy of 92.24%, which is an absolute 3% improvement over the best known results obtained via unsupervised or knowledge-based means. A more expensive approach that we developed relies on a nearest neighbor framework and achieves an accuracy of 94.34%. Employing dense vector representations learned from unlabeled free text has been shown to benefit many language processing tasks recently and our efforts show that biomedical WSD is no exception to this trend. For a complex and rapidly evolving domain such as biomedicine, building labeled datasets for larger sets of ambiguous terms may be impractical. Here, we show that weak supervision that leverages recent advances in representation learning can rival supervised approaches in biomedical WSD. However, external knowledge bases (here sense inventories) play a key role in the improvements achieved. | Neural word representations have been shown to capture both semantic and syntactic information and a few recent approaches learn word vectors @cite_20 @cite_40 @cite_1 (as elements of @math , where @math is the dimension) in an unsupervised fashion from textual corpora.
These dense word vectors obviate the sparsity issues inherent to the so-called one-hot representations of words (one-hot representations lead to very large dimensionality, typically the size of the vocabulary, resulting in further issues in similarity computations, a phenomenon often termed the curse of dimensionality [Chapter 1.4] bishop2006pattern ). @cite_3 adapted neural word embeddings to compute different sense embeddings (of the same word) and showed competitive performance on the SemEval 2007 WSD dataset @cite_30 . Disambiguation is achieved by picking the sense that maximizes the cosine similarity of the corresponding sense vector with the context vector for an ambiguous term. Recently, @cite_42 evaluated and demonstrated the superiority of neural word embeddings as features in supervised WSD models on the same SemEval dataset. | {
"cite_N": [
"@cite_30",
"@cite_42",
"@cite_1",
"@cite_3",
"@cite_40",
"@cite_20"
],
"mid": [
"1988325893",
"2518202280",
"",
"2338526423",
"",
"2132339004"
],
"abstract": [
"This paper presents the coarse-grained English all-words task at SemEval-2007. We describe our experience in producing a coarse version of the WordNet sense inventory and preparing the sense-tagged corpus for the task. We present the results of participating systems and discuss future directions.",
"Recent years have seen a dramatic growth in the popularity of word embeddings mainly owing to their ability to capture semantic information from massive amounts of textual content. As a result, many tasks in Natural Language Processing have tried to take advantage of the potential of these distributional models. In this work, we study how word embeddings can be used in Word Sense Disambiguation, one of the oldest tasks in Natural Language Processing and Artificial Intelligence. We propose different methods through which word embeddings can be leveraged in a state-of-the-art supervised WSD system architecture, and perform a deep analysis of how different parameters affect performance. We show how a WSD system that makes use of word embeddings alone, if designed properly, can provide significant performance improvement over a state-of-the-art WSD system that incorporates several standard WSD features.",
"",
"Radiological reporting has generated large quantities of digital content within the electronic health record, which is potentially a valuable source of information for improving clinical care and supporting research. Although radiology reports are stored for communication and documentation of diagnostic imaging, harnessing their potential requires efficient and automated information extraction: they exist mainly as free-text clinical narrative, from which it is a major challenge to obtain structured data. Natural language processing (NLP) provides techniques that aid the conversion of text into a structured representation, and thus enables computers to derive meaning from human (ie, natural language) input. Used on radiology reports, NLP techniques enable automatic identification and extraction of information. By exploring the various purposes for their use, this review examines how radiology benefits from NLP. A systematic literature search identified 67 relevant publications describing NLP methods that support practical applications in radiology. This review takes a close look at the individual studies in terms of tasks (ie, the extracted information), the NLP methodology and tools used, and their application purpose and performance results. Additionally, limitations, future challenges, and requirements for advancing NLP in radiology will be discussed.",
"",
"A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts."
]
} |
1610.08442 | 2542188028 | Internet data has surfaced as a primary source for investigation of different aspects of human behavior. A crucial step in such studies is finding a suitable cohort (i.e., a set of users) that shares a common trait of interest to researchers. However, direct identification of users sharing this trait is often impossible, as the data available to researchers is usually anonymized to preserve user privacy. To facilitate research on specific topics of interest, especially in medicine, we introduce an algorithm for identifying a trait of interest in anonymous users. We illustrate how a small set of labeled examples, together with statistical information about the entire population, can be aggregated to obtain labels on unseen examples. We validate our approach using labeled data from the political domain. We provide two applications of the proposed algorithm to the medical domain. In the first, we demonstrate how to identify users whose search patterns indicate they might be suffering from certain types of cancer. This shows, for the first time, that search queries can be used as a screening device for diseases that are currently often discovered too late, because no early screening tests exists. In the second, we detail an algorithm to predict the distribution of diseases given their incidence in a subset of the population at study, making it possible to predict disease spread from partial epidemiological data. | Traditionally, most of the medical research exploiting internet data has focused on population-level disease incidence. The questions therein are of the form '' @cite_6 . Because of the large number of people involved, it is superfluous to identify each individual with the condition. Instead, it is sufficient to find correlations between disease incidence and specific keywords @cite_21 @cite_1 or even website visits @cite_23 . | {
"cite_N": [
"@cite_23",
"@cite_21",
"@cite_1",
"@cite_6"
],
"mid": [
"2070272165",
"1979941595",
"2151932005",
"2117239687"
],
"abstract": [
"Circulating levels of both seasonal and pandemic influenza require constant surveillance to ensure the health and safety of the population. While up-to-date information is critical, traditional surveillance systems can have data availability lags of up to two weeks. We introduce a novel method of estimating, in near-real time, the level of influenza-like illness (ILI) in the United States (US) by monitoring the rate of particular Wikipedia article views on a daily basis. We calculated the number of times certain influenza- or health-related Wikipedia articles were accessed each day between December 2007 and August 2013 and compared these data to official ILI activity levels provided by the Centers for Disease Control and Prevention (CDC). We developed a Poisson model that accurately estimates the level of ILI activity in the American population, up to two weeks ahead of the CDC, with an absolute average difference between the two estimates of just 0.27 over 294 weeks of data. Wikipedia-derived ILI models performed well through both abnormally high media coverage events (such as during the 2009 H1N1 pandemic) as well as unusually severe influenza seasons (such as the 2012–2013 influenza season). Wikipedia usage accurately estimated the week of peak ILI activity 17% more often than Google Flu Trends data and was often more accurate in its measure of ILI intensity. With further study, this method could potentially be implemented for continuous monitoring of ILI activity in the US and to provide support for traditional influenza surveillance tools.",
"",
"The Internet is an important source of health information. Thus, the frequency of Internet searches may provide information regarding infectious disease activity. As an example, we examined the relationship between searches for influenza and actual influenza occurrence. Using search queries from the Yahoo! search engine (http://search.yahoo.com) from March 2004 through May 2008, we counted daily unique queries originating in the United States that contained influenza-related search terms. Counts were divided by the total number of searches, and the resulting daily fraction of searches was averaged over the week. We estimated linear models, using searches with 1-10-week lead times as explanatory variables to predict the percentage of cultures positive for influenza and deaths attributable to pneumonia and influenza in the United States. With use of the frequency of searches, our models predicted an increase in cultures positive for influenza 1-3 weeks in advance of when they occurred (P<.001), and similar models predicted an increase in mortality attributable to pneumonia and influenza up to 5 weeks in advance (P<.001). Search-term surveillance may provide an additional tool for disease surveillance.",
"This report introduces a computational model based on internet search queries for real-time surveillance of influenza-like illness (ILI), which reproduces the patterns observed in ILI data from the Centers for Disease Control and Prevention."
]
} |
1610.08442 | 2542188028 | Internet data has surfaced as a primary source for investigation of different aspects of human behavior. A crucial step in such studies is finding a suitable cohort (i.e., a set of users) that shares a common trait of interest to researchers. However, direct identification of users sharing this trait is often impossible, as the data available to researchers is usually anonymized to preserve user privacy. To facilitate research on specific topics of interest, especially in medicine, we introduce an algorithm for identifying a trait of interest in anonymous users. We illustrate how a small set of labeled examples, together with statistical information about the entire population, can be aggregated to obtain labels on unseen examples. We validate our approach using labeled data from the political domain. We provide two applications of the proposed algorithm to the medical domain. In the first, we demonstrate how to identify users whose search patterns indicate they might be suffering from certain types of cancer. This shows, for the first time, that search queries can be used as a screening device for diseases that are currently often discovered too late, because no early screening tests exists. In the second, we detail an algorithm to predict the distribution of diseases given their incidence in a subset of the population at study, making it possible to predict disease spread from partial epidemiological data. | The task of determining labels for individuals from population statistics relates to the ecological inference problem. Ecological inference aims at inferring characteristics about individuals from ecological data (i.e., of the entire population). As an example, it might be used to answer the following question: '' Ecological inference has a long history in the fields of statistics and social studies @cite_17 . 
Recently, Flaxman et al. @cite_16 used kernel embeddings of distributions to predict which demographic groups supported Barack Obama in the 2012 US Presidential Election. Park and Ghosh @cite_25 introduced LUDIA, a low-rank approximation algorithm that leverages ecological inference to predict hospital spending for individuals based on their length of stay. Culotta et al. @cite_7 used website traffic data to predict the demographics of Twitter users. Ultimately, our problem differs from ecological inference in that we are interested in identifying individuals whose distribution is known rather than inferring behaviors at an individual level from population data. | {
"cite_N": [
"@cite_16",
"@cite_25",
"@cite_7",
"@cite_17"
],
"mid": [
"2073020428",
"2014974005",
"2286737780",
"2319964564"
],
"abstract": [
"We present a new solution to the ecological inference'' problem, of learning individual-level associations from aggregate data. This problem has a long history and has attracted much attention, debate, claims that it is unsolvable, and purported solutions. Unlike other ecological inference techniques, our method makes use of unlabeled individual-level data by embedding the distribution over these predictors into a vector in Hilbert space. Our approach relies on recent learning theory results for distribution regression, using kernel embeddings of distributions. Our novel approach to distribution regression exploits the connection between Gaussian process regression and kernel ridge regression, giving us a coherent, Bayesian approach to learning and inference and a convenient way to include prior information in the form of a spatial covariance function. Our approach is highly scalable as it relies on FastFood, a randomized explicit feature representation for kernel embeddings. We apply our approach to the challenging political science problem of modeling the voting behavior of demographic groups based on aggregate voting data. We consider the 2012 US Presidential election, and ask: what was the probability that members of various demographic groups supported Barack Obama, and how did this vary spatially across the country? Our results match standard survey-based exit polling data for the small number of states for which it is available, and serve to fill in the large gaps in this data, at a much higher degree of granularity.",
"In the past few years, the government and other agencies have publicly released a prodigious amount of data that can be potentially mined to benefit the society at large. However, data such as health records are typically only provided at aggregated levels (e.g. per State, per Hospital Referral Region, etc.) to protect privacy. Unfortunately aggregation can severely diminish the utility of such data when modeling or analysis is desired at a per-individual basis. So, not surprisingly, despite the increasing abundance of aggregate data, there have been very few successful attempts in exploiting them for individual-level analyses. This paper introduces LUDIA, a novel low-rank approximation algorithm that utilizes aggregation constraints in addition to auxiliary information in order to estimate or \"reconstruct\" the original individual-level values from aggregate data. If the reconstructed data are statistically similar to the original individual-level data, off-the-shelf individual-level models can be readily and reliably applied for subsequent predictive or descriptive analytics. LUDIA is more robust to nonlinear estimates and random effects than other reconstruction algorithms. It solves a Sylvester equation and leverages multi-level (also known as hierarchical or mixed-effect) modeling approaches efficiently. A novel graphical model is also introduced to provide a probabilistic viewpoint of LUDIA. Experimental results using a Texas inpatient dataset show that individual-level data can be reasonably reconstructed from county-, hospital-, and zip code-level aggregate data. Several factors affecting the reconstruction quality are discussed, along with the implications of this work for current aggregation guidelines.",
"Understanding the demographics of users of online social networks has important applications for health, marketing, and public messaging. Whereas most prior approaches rely on a supervised learning approach, in which individual users are labeled with demographics for training, we instead create a distantly labeled dataset by collecting audience measurement data for 1,500 websites (e.g., 50% of visitors to gizmodo.com are estimated to have a bachelor's degree). We then fit a regression model to predict these demographics from information about the followers of each website on Twitter. Using patterns derived both from textual content and the social network of each user, our final model produces an average held-out correlation of .77 across seven different variables (age, gender, education, ethnicity, income, parental status, and political preference). We then apply this model to classify individual Twitter users by ethnicity, gender, and political preference, finding performance that is surprisingly competitive with a fully supervised approach.",
""
]
} |
1610.08442 | 2542188028 | Internet data has surfaced as a primary source for investigation of different aspects of human behavior. A crucial step in such studies is finding a suitable cohort (i.e., a set of users) that shares a common trait of interest to researchers. However, direct identification of users sharing this trait is often impossible, as the data available to researchers is usually anonymized to preserve user privacy. To facilitate research on specific topics of interest, especially in medicine, we introduce an algorithm for identifying a trait of interest in anonymous users. We illustrate how a small set of labeled examples, together with statistical information about the entire population, can be aggregated to obtain labels on unseen examples. We validate our approach using labeled data from the political domain. We provide two applications of the proposed algorithm to the medical domain. In the first, we demonstrate how to identify users whose search patterns indicate they might be suffering from certain types of cancer. This shows, for the first time, that search queries can be used as a screening device for diseases that are currently often discovered too late, because no early screening tests exists. In the second, we detail an algorithm to predict the distribution of diseases given their incidence in a subset of the population at study, making it possible to predict disease spread from partial epidemiological data. | Another area of study that bears a similarity with our proposed algorithm is Learning with Label Proportions (LLP). In LLP, the training data is provided to the classifier in groups on which only the distribution of classes in each group is known. Many solutions have been proposed for the problem @cite_4 @cite_22 ; yet---to the best of our knowledge---none of them is designed to bias the learning process by incorporating individuals with known labels. 
Keerthi et al. @cite_20 introduced a semi-supervised SVM classifier that uses a small labeled dataset in conjunction with class proportions on the training data to predict labels on test data. While sharing some similarity with our algorithm, their method is less generalizable, as it does not handle learning from training data drawn from sets with different class distributions. Our proposed approach solves this issue by jointly optimizing correlation with all sets from which the training data is drawn. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_20"
],
"mid": [
"2166886337",
"1607038179",
"2114018781"
],
"abstract": [
"We propose a new problem formulation which is similar to, but more informative than, the binary multiple-instance learning problem. In this setting, we are given groups of instances (described by feature vectors) along with estimates of the fraction of positively-labeled instances per group. The task is to learn an instance level classifier from this information. That is, we are trying to estimate the unknown binary labels of individuals from knowledge of group statistics. We propose a principled probabilistic model to solve this problem that accounts for uncertainty in the parameters and in the unknown individual labels. This model is trained with an efficient MCMC algorithm. Its performance is demonstrated on both synthetic and real-world data arising in general object recognition.",
"Consider the following problem: given sets of unlabeled observations, each set with known label proportions, predict the labels of another set of observations, possibly with known label proportions. This problem occurs in areas like e-commerce, politics, spam filtering and improper content detection. We present consistent estimators which can reconstruct the correct labels with high probability in a uniform convergence sense. Experiments show that our method works well in practice.",
"In the design of practical web page classification systems one often encounters a situation in which the labeled training set is created by choosing some examples from each class; but, the class proportions in this set are not the same as those in the test distribution to which the classifier will be actually applied. The problem is made worse when the amount of training data is also small. In this paper we explore and adapt binary SVM methods that make use of unlabeled data from the test distribution, viz., Transductive SVMs (TSVMs) and expectation regularization constraint (ER EC) methods to deal with this situation. We empirically show that when the labeled training data is small, TSVM designed using the class ratio tuned by minimizing the loss on the labeled set yields the best performance; its performance is good even when the deviation between the class ratios of the labeled training set and the test set is quite large. When the labeled training data is sufficiently large, an unsupervised Gaussian mixture model can be used to get a very good estimate of the class ratio in the test set; also, when this estimate is used, both TSVM and EC ER give their best possible performance, with TSVM coming out superior. The ideas in the paper can be easily extended to multi-class SVMs and MaxEnt models."
]
} |
1610.08442 | 2542188028 | Internet data has surfaced as a primary source for investigation of different aspects of human behavior. A crucial step in such studies is finding a suitable cohort (i.e., a set of users) that shares a common trait of interest to researchers. However, direct identification of users sharing this trait is often impossible, as the data available to researchers is usually anonymized to preserve user privacy. To facilitate research on specific topics of interest, especially in medicine, we introduce an algorithm for identifying a trait of interest in anonymous users. We illustrate how a small set of labeled examples, together with statistical information about the entire population, can be aggregated to obtain labels on unseen examples. We validate our approach using labeled data from the political domain. We provide two applications of the proposed algorithm to the medical domain. In the first, we demonstrate how to identify users whose search patterns indicate they might be suffering from certain types of cancer. This shows, for the first time, that search queries can be used as a screening device for diseases that are currently often discovered too late, because no early screening tests exists. In the second, we detail an algorithm to predict the distribution of diseases given their incidence in a subset of the population at study, making it possible to predict disease spread from partial epidemiological data. | Finally, many have studied semi-supervised learning (SSL), the problem of learning when a combination of labeled and unlabeled examples is available @cite_19 . For example, Druck et al. @cite_5 proposed a framework that leverages labeled features---that is, features that are highly representative for a class---to learn constraints for a multinomial logistic regression. More recently, Ravi and Diao @cite_31 have proposed a graph model to efficiently use SSL on large datasets.
Compared to a classic SSL model, we not only leverage individual-level features, but also take advantage of population data. | {
"cite_N": [
"@cite_19",
"@cite_5",
"@cite_31"
],
"mid": [
"",
"2125327503",
"2295507045"
],
"abstract": [
"",
"It is difficult to apply machine learning to new domains because often we lack labeled problem instances. In this paper, we provide a solution to this problem that leverages domain knowledge in the form of affinities between input features and classes. For example, in a baseball vs. hockey text classification problem, even without any labeled data, we know that the presence of the word puck is a strong indicator of hockey. We refer to this type of domain knowledge as a labeled feature. In this paper, we propose a method for training discriminative probabilistic models with labeled features and unlabeled instances. Unlike previous approaches that use labeled features to create labeled pseudo-instances, we use labeled features directly to constrain the model's predictions on unlabeled instances. We express these soft constraints using generalized expectation (GE) criteria --- terms in a parameter estimation objective function that express preferences on values of a model expectation. In this paper we train multinomial logistic regression models using GE criteria, but the method we develop is applicable to other discriminative probabilistic models. The complete objective function also includes a Gaussian prior on parameters, which encourages generalization by spreading parameter weight to unlabeled features. Experimental results on text classification data sets show that this method outperforms heuristic approaches to training classifiers with labeled features. Experiments with human annotators show that it is more beneficial to spend limited annotation time labeling features rather than labeling instances. For example, after only one minute of labeling features, we can achieve 80% accuracy on the ibm vs. mac text classification problem using GE-FL, whereas ten minutes labeling documents results in an accuracy of only 77%",
"Traditional graph-based semi-supervised learning (SSL) approaches, even though widely applied, are not suited for massive data and large label scenarios since they scale linearly with the number of edges @math and distinct labels @math . To deal with the large label size problem, recent works propose sketch-based methods to approximate the distribution on labels per node thereby achieving a space reduction from @math to @math , under certain conditions. In this paper, we present a novel streaming graph-based SSL approximation that captures the sparsity of the label distribution and ensures the algorithm propagates labels accurately, and further reduces the space complexity per node to @math . We also provide a distributed version of the algorithm that scales well to large data sizes. Experiments on real-world datasets demonstrate that the new method achieves better performance than existing state-of-the-art algorithms with significant reduction in memory footprint. We also study different graph construction mechanisms for natural language applications and propose a robust graph augmentation strategy trained using state-of-the-art unsupervised deep learning architectures that yields further significant quality gains."
]
} |
1610.08462 | 2540831494 | Distributed representation learned with neural networks has recently shown to be effective in modeling natural languages at fine granularities such as words, phrases, and even sentences. Whether and how such an approach can be extended to help model larger spans of text, e.g., documents, is intriguing, and further investigation would still be desirable. This paper aims to enhance neural network models for such a purpose. A typical problem of document-level modeling is automatic summarization, which aims to model documents in order to generate summaries. In this paper, we propose neural models to train computers not just to pay attention to specific regions and content of input documents with attention models, but also distract them to traverse between different content of a document so as to better grasp the overall meaning for summarization. Without engineering any features, we train the models on two large datasets. The models achieve the state-of-the-art performance, and they significantly benefit from the distraction modeling, particularly when input documents are long. | Distributed representation has shown to be effective in modeling fine granularities of text as discussed above. Much recent work has also attempted to model longer spans of text with neural networks @cite_30 @cite_36 @cite_23 @cite_22 @cite_0 . This includes research that incorporates document-level information for language modeling @cite_22 @cite_23 and that answers questions @cite_0 by comprehending input documents with attention-based models. More relevant to ours, the work of @cite_30 learned distributed representation for short documents with an average length of about a hundred word tokens, although the objective is not summarization. Summarization typically faces documents longer than those, and summarization may be more necessary when documents are long. | {
In this paper, we propose neural models for summarizing typical news articles with up to thousands of word tokens. We find it is necessary to enable computers not just to pay attention to specific content of input documents with attention models, but also distract them to traverse between different content so as to better grasp the overall meaning for summarization, particularly when documents are long. | {
"cite_N": [
"@cite_30",
"@cite_22",
"@cite_36",
"@cite_0",
"@cite_23"
],
"mid": [
"2950752421",
"",
"609399965",
"2949615363",
"2251849926"
],
"abstract": [
"Natural language generation of coherent long texts like paragraphs or longer documents is a challenging problem for recurrent network models. In this paper, we explore an important step toward this generation task: training an LSTM (Long-short term memory) auto-encoder to preserve and reconstruct multi-sentence paragraphs. We introduce an LSTM model that hierarchically builds an embedding for a paragraph from embeddings for sentences and words, then decodes this embedding to reconstruct the original paragraph. We evaluate the reconstructed paragraph using standard metrics like ROUGE and Entity Grid, showing that neural models are able to encode texts in a way that preserves syntactic, semantic, and discourse coherence. While only a first step toward generating coherent text units from neural models, our work has the potential to significantly impact natural language generation and summarization. Code for the three models described in this paper can be found at www.stanford.edu jiweil .",
"",
"Automatic text summarization is widely regarded as the highly difficult problem, partially because of the lack of large text summarization data set. Due to the great challenge of constructing the large scale summaries for full text, in this paper, we introduce a large corpus of Chinese short text summarization dataset constructed from the Chinese microblogging website Sina Weibo, which is released to the public this http URL . This corpus consists of over 2 million real Chinese short texts with short summaries given by the author of each text. We also manually tagged the relevance of 10,666 short summaries with their corresponding short texts. Based on the corpus, we introduce recurrent neural network for the summary generation and achieve promising results, which not only shows the usefulness of the proposed corpus for short text summarization research, but also provides a baseline for further research on this topic.",
"Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions posed on the contents of documents that they have seen, but until now large scale training and test datasets have been missing for this type of evaluation. In this work we define a new methodology that resolves this bottleneck and provides large scale supervised reading comprehension data. This allows us to develop a class of attention based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure.",
"This paper proposes a novel hierarchical recurrent neural network language model (HRNNLM) for document modeling. After establishing a RNN to capture the coherence between sentences in a document, HRNNLM integrates it as the sentence history information into the word level RNN to predict the word sequence with cross-sentence contextual information. A two-step training approach is designed, in which sentence-level and word-level language models are approximated for the convergence in a pipeline style. Examined by the standard sentence reordering scenario, HRNNLM is proved for its better accuracy in modeling the sentence coherence. And at the word level, experimental results also indicate a significant lower model perplexity, followed by a practical better translation result when applied to a Chinese-English document translation reranking task."
]
} |
1610.08462 | 2540831494 | Distributed representation learned with neural networks has recently shown to be effective in modeling natural languages at fine granularities such as words, phrases, and even sentences. Whether and how such an approach can be extended to help model larger spans of text, e.g., documents, is intriguing, and further investigation would still be desirable. This paper aims to enhance neural network models for such a purpose. A typical problem of document-level modeling is automatic summarization, which aims to model documents in order to generate summaries. In this paper, we propose neural models to train computers not just to pay attention to specific regions and content of input documents with attention models, but also distract them to traverse between different content of a document so as to better grasp the overall meaning for summarization. Without engineering any features, we train the models on two large datasets. The models achieve the state-of-the-art performance, and they significantly benefit from the distraction modeling, particularly when input documents are long. | Automatic summarization has been intensively studied for both text @cite_34 @cite_25 @cite_35 and speech @cite_26 @cite_17 . Most state-of-the-art summarization models have focused on extractive summarization, although some efforts have also been exerted on abstractive summarization. Recent neural summarization models include the efforts of @cite_43 @cite_1 @cite_36 . The research performed in @cite_43 focuses on neural models for sentence compression and rewriting, but not full document summarization. The work of @cite_1 leverages neural networks to generate news headlines, where input documents are limited to 50 word tokens, and the work of @cite_36 also deals with short texts (up to dozens of word tokens), in which summarization problems such as content redundancy are less prominent and attention-based models seem to be sufficient.
However, summarization typically faces documents longer than that, and summarization is often more necessary when documents are long. In this work we attempt to explore neural summarization technologies for news articles with up to thousands of word tokens, in which we find distraction-based summarization models help improve performance. Note that our improvement is achieved over the model that has already outperformed the attention-based model reported in @cite_36 on short documents. | {
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_36",
"@cite_1",
"@cite_43",
"@cite_34",
"@cite_25",
"@cite_17"
],
"mid": [
"",
"2076749833",
"609399965",
"2191070669",
"1843891098",
"",
"2143205289",
"2125247927"
],
"abstract": [
"",
"This paper is concerned with the summarization of spontaneous conversations. Compared with broadcast news, which has received intensive study, spontaneous conversations have been less addressed in the literature. Previous work has focused on textual features extracted from transcripts. This paper explores and compares the effectiveness of both textual features and speech-related features. The experiments show that these features incrementally improve summarization performance. We also find that speech disfluencies, which have been removed as noise in previous work, help identify important utterances, while the structural feature is less effective than it is in broadcast news.",
"Automatic text summarization is widely regarded as the highly difficult problem, partially because of the lack of large text summarization data set. Due to the great challenge of constructing the large scale summaries for full text, in this paper, we introduce a large corpus of Chinese short text summarization dataset constructed from the Chinese microblogging website Sina Weibo, which is released to the public this http URL . This corpus consists of over 2 million real Chinese short texts with short summaries given by the author of each text. We also manually tagged the relevance of 10,666 short summaries with their corresponding short texts. Based on the corpus, we introduce recurrent neural network for the summary generation and achieve promising results, which not only shows the usefulness of the proposed corpus for short text summarization research, but also provides a baseline for further research on this topic.",
"We describe an application of an encoder-decoder recurrent neural network with LSTM units and attention to generating headlines from the text of news articles. We find that the model is quite effective at concisely paraphrasing news articles. Furthermore, we study how the neural network decides which input words to pay attention to, and specifically we identify the function of the different neurons in a simplified attention mechanism. Interestingly, our simplified attention mechanism performs better than the more complex attention mechanism on a held out set of articles.",
"Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.",
"",
"The increasing availability of online information has necessitated intensive research in the area of automatic text summarization within the Natural Language Processing (NLP) community. Over the past half a century, the problem has been addressed from many different perspectives, in varying domains and using various paradigms. This survey intends to investigate some of the most relevant approaches both in the areas of single-document and multiple-document summarization, giving special emphasis to empirical methods and extractive techniques. Some promising approaches that concentrate on specific details of the summarization problem are also discussed. Special attention is devoted to automatic evaluation of summarization systems, as future research on summarization is strongly dependent on progress in this area.",
"This paper presents a model for summarizing multiple untranscribed spoken documents. Without assuming the availability of transcripts, the model modifies a recently proposed unsupervised algorithm to detect re-occurring acoustic patterns in speech and uses them to estimate similarities between utterances, which are in turn used to identify salient utterances and remove redundancies. This model is of interest due to its independence from spoken language transcription, an error-prone and resource-intensive process, its ability to integrate multiple sources of information on the same topic, and its novel use of acoustic patterns that extends previous work on low-level prosodic feature detection. We compare the performance of this model with that achieved using manual and automatic transcripts, and find that this new approach is roughly equivalent to having access to ASR transcripts with word error rates in the 33--37% range without actually having to do the ASR, plus it better handles utterances with out-of-vocabulary words."
]
} |
1610.08136 | 2951239298 | Models such as latent semantic analysis and those based on neural embeddings learn distributed representations of text, and match the query against the document in the latent semantic space. In traditional information retrieval models, on the other hand, terms have discrete or local representations, and the relevance of a document is determined by the exact matches of query terms in the body text. We hypothesize that matching with distributed representations complements matching with traditional local representations, and that a combination of the two is favorable. We propose a novel document ranking model composed of two separate deep neural networks, one that matches the query and the document using a local representation, and another that matches the query and the document using learned distributed representations. The two networks are jointly trained as part of a single neural network. We show that this combination or 'duet' performs significantly better than either neural network individually on a Web page ranking task, and also significantly outperforms traditional baselines and other recently proposed models based on neural networks. | This paper considers local and distributed representations of queries and documents for use in Web page ranking. Our measure of ranking quality is NDCG @cite_32 , which rewards a ranker for returning documents with higher gain nearer to the top, where gain is determined according to labels from human relevance assessors. We describe different ranking methods in terms of their representations and how this should help them achieve good NDCG. | {
"cite_N": [
"@cite_32"
],
"mid": [
"2069870183"
],
"abstract": [
"Modern large retrieval environments tend to overwhelm their users by their large output. Since all documents are not of equal relevance to their users, highly relevant documents should be identified and ranked first for presentation. In order to develop IR techniques in this direction, it is necessary to develop evaluation approaches and methods that credit IR methods for their ability to retrieve highly relevant documents. This can be done by extending traditional evaluation methods, that is, recall and precision based on binary relevance judgments, to graded relevance judgments. Alternatively, novel measures based on graded relevance judgments may be developed. This article proposes several novel measures that compute the cumulative gain the user obtains by examining the retrieval result up to a given ranked position. The first one accumulates the relevance scores of retrieved documents along the ranked result list. The second one is similar but applies a discount factor to the relevance scores in order to devaluate late-retrieved documents. The third one computes the relative-to-the-ideal performance of IR techniques, based on the cumulative gain they are able to yield. These novel measures are defined and discussed and their use is demonstrated in a case study using TREC data: sample system run results for 20 queries in TREC-7. As a relevance base we used novel graded relevance judgments on a four-point scale. The test results indicate that the proposed measures credit IR methods for their ability to retrieve highly relevant documents and allow testing of statistical significance of effectiveness differences. The graphs based on the measures also provide insight into the performance of IR techniques and allow interpretation, for example, from the user point of view."
]
} |
1610.08136 | 2951239298 | Models such as latent semantic analysis and those based on neural embeddings learn distributed representations of text, and match the query against the document in the latent semantic space. In traditional information retrieval models, on the other hand, terms have discrete or local representations, and the relevance of a document is determined by the exact matches of query terms in the body text. We hypothesize that matching with distributed representations complements matching with traditional local representations, and that a combination of the two is favorable. We propose a novel document ranking model composed of two separate deep neural networks, one that matches the query and the document using a local representation, and another that matches the query and the document using learned distributed representations. The two networks are jointly trained as part of a single neural network. We show that this combination or 'duet' performs significantly better than either neural network individually on a Web page ranking task, and also significantly outperforms traditional baselines and other recently proposed models based on neural networks. | The authors of @cite_23 propose the use of matching matrices to represent the similarity of short texts, then apply a convolutional neural network inspired by those in computer vision. They populate the matching matrix using both local and distributed term representations. In the local representation, an exact match is used to generate binary indicators of whether the @math th term of one text and @math th term of the other are the same, as in our local model. In the distributed representation, a pre-trained term embedding is used instead, populating the match matrix with cosine or inner product similarities. The method works for some problems with short text, but not for document ranking @cite_23 .
However, by using the match matrix to generate summary statistics it is possible to make the method work well @cite_28 , which is our DRMM baseline. | {
"cite_N": [
"@cite_28",
"@cite_23"
],
"mid": [
"2536015822",
"2429667833"
],
"abstract": [
"In recent years, deep neural networks have led to exciting breakthroughs in speech recognition, computer vision, and natural language processing (NLP) tasks. However, there have been few positive results of deep models on ad-hoc retrieval tasks. This is partially due to the fact that many important characteristics of the ad-hoc retrieval task have not been well addressed in deep models yet. Typically, the ad-hoc retrieval task is formalized as a matching problem between two pieces of text in existing work using deep models, and treated equivalent to many NLP tasks such as paraphrase identification, question answering and automatic conversation. However, we argue that the ad-hoc retrieval task is mainly about relevance matching while most NLP matching tasks concern semantic matching, and there are some fundamental differences between these two matching tasks. Successful relevance matching requires proper handling of the exact matching signals, query term importance, and diverse matching requirements. In this paper, we propose a novel deep relevance matching model (DRMM) for ad-hoc retrieval. Specifically, our model employs a joint deep architecture at the query term level for relevance matching. By using matching histogram mapping, a feed forward matching network, and a term gating network, we can effectively deal with the three relevance matching factors mentioned above. Experimental results on two representative benchmark collections show that our model can significantly outperform some well-known retrieval models as well as state-of-the-art deep matching models.",
"Deep neural networks have been successfully applied to many text matching tasks, such as paraphrase identification, question answering, and machine translation. Although ad-hoc retrieval can also be formalized as a text matching task, few deep models have been tested on it. In this paper, we study a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc retrieval task. The MatchPyramid model employs a convolutional neural network over the interactions between query and document to produce the matching score. We conducted extensive experiments to study the impact of different pooling sizes, interaction functions and kernel sizes on the retrieval performance. Finally, we show that the MatchPyramid models can significantly outperform several recently introduced deep matching models on the retrieval task, but still cannot compete with the traditional retrieval models, such as BM25 and language models."
]
} |
1610.08136 | 2951239298 | Models such as latent semantic analysis and those based on neural embeddings learn distributed representations of text, and match the query against the document in the latent semantic space. In traditional information retrieval models, on the other hand, terms have discrete or local representations, and the relevance of a document is determined by the exact matches of query terms in the body text. We hypothesize that matching with distributed representations complements matching with traditional local representations, and that a combination of the two is favorable. We propose a novel document ranking model composed of two separate deep neural networks, one that matches the query and the document using a local representation, and another that matches the query and the document using learned distributed representations. The two networks are jointly trained as part of a single neural network. We show that this combination or 'duet' performs significantly better than either neural network individually on a Web page ranking task, and also significantly outperforms traditional baselines and other recently proposed models based on neural networks. | This paper learns a text representation end-to-end based on query-document ranking labels. This has not been done often in related work with document body text, but we can point to related papers that use short text such as title, for document ranking or related tasks. One such approach learns a distributed representation of query and title for document ranking. The input representation is character trigraphs, the training procedure asks the model to rank clicked titles over randomly chosen titles, and the test metric is NDCG with human labels. A follow-up work developed a convolutional version of the model. These are our DSSM and CDSSM baselines. Other convolutional models that match short texts using distributed representations include @cite_45 @cite_35 , also showing good performance on short text ranking tasks. | {
"cite_N": [
"@cite_35",
"@cite_45"
],
"mid": [
"1966443646",
"2951359136"
],
"abstract": [
"Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs needs to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain. Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. Our model demonstrates strong performance on the first task beating previous state-of-the-art systems by about 3 absolute points in both MAP and MRR and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers.",
"Semantic matching is of central importance to many natural language tasks bordes2014semantic,RetrievalQA . A successful matching algorithm needs to adequately model the internal structures of language objects and the interaction between them. As a step toward this goal, we propose convolutional neural network models for matching two sentences, by adapting the convolutional strategy in vision and speech. The proposed models not only nicely represent the hierarchical structures of sentences with their layer-by-layer composition and pooling, but also capture the rich matching patterns at different levels. Our models are rather generic, requiring no prior knowledge on language, and can hence be applied to matching tasks of different nature and in different languages. The empirical study on a variety of matching tasks demonstrates the efficacy of the proposed model on a variety of matching tasks and its superiority to competitor models."
]
} |
1610.08266 | 2949446367 | Network Function Virtualization (NFV) is a new paradigm, enabling service innovation through virtualization of traditional network functions located flexibly in the network in form of Virtual Network Functions (VNFs). Since VNFs can only be placed onto servers located in networked data centers, which is the NFV's salient feature, the traffic directed to these data center areas has significant impact on network load balancing. Network load balancing can be even more critical for an ordered sequence of VNFs, also known as Service Function Chains (SFCs), a common cloud and network service approach today. To balance the network load, VNFs can be placed in a smaller cluster of servers in the network, thus minimizing the distance to the data center. The optimization of the placement of these clusters is a challenge, as other factors also need to be considered, such as the resource utilization. To address this issue, we study the problem of VNF placement with replications, and especially the potential of VNF replications to help load balance the network. We design and compare three optimization methods, including a Linear Programming (LP) model, a Genetic Algorithm (GA) and a Random Fit Placement Algorithm (RFPA), for the allocation and replication of VNFs. Our results show that the optimum placement and replication can significantly improve load balancing, for which we also propose a GA heuristic applicable to larger networks. | Early work in @cite_7 studies the optimal VNF placement in hybrid scenarios, where some network functions are provided by dedicated physical hardware and some are virtualized, depending on demand. They propose an ILP model with the objective to minimize the number of physical nodes used, which limits the network size that can be studied due to the complexity of the ILP model.
In @cite_9 , a context-free language is proposed for the specification of VNFs and a Mixed Integer Quadratically Constrained Program (MIQCP) for the chaining and placement of VNFs in the network. The paper finds that the VNF placement depends on the objective, such as latency, number of allocated nodes, and link utilizations. In mobile core networks, @cite_8 discuss the virtualization of mobile gateways, i.e., Serving Gateways (S-GWs) and Packet Data Network Gateways (P-GWs) hosted in data centers. They analyze the optimum placements by taking into consideration the delay and network load. The authors of @cite_12 also propose the instantiation and placement of PDN-GWs in the form of VNFs. | {
"cite_N": [
"@cite_9",
"@cite_12",
"@cite_7",
"@cite_8"
],
"mid": [
"2000238032",
"",
"2043352445",
"1967912924"
],
"abstract": [
"Network appliances perform different functions on network flows and constitute an important part of an operator’s network. Normally, a set of chained network functions process network flows. Following the trend of virtualization of networks, virtualization of the network functions has also become a topic of interest. We define a model for formalizing the chaining of network functions using a context-free language. We process deployment requests and construct virtual network function graphs that can be mapped to the network. We describe the mapping as a Mixed Integer Quadratically Constrained Program (MIQCP) for finding the placement of the network functions and chaining them together considering the limited network resources and requirements of the functions. We have performed a Pareto set analysis to investigate the possible trade-offs between different optimization objectives.",
"",
"Network Functions Virtualization (NFV) is an upcoming paradigm where network functionality is virtualized and split up into multiple building blocks that can be chained together to provide the required functionality. This approach increases network flexibility and scalability as these building blocks can be allocated and reallocated at runtime depending on demand. The success of this approach depends on the existence and performance of algorithms that determine where, and how these building blocks are instantiated. In this paper, we present and evaluate a formal model for resource allocation of virtualized network functions within NFV environments, a problem we refer to as Virtual Network Function Placement (VNF-P). We focus on a hybrid scenario where part of the services may be provided by dedicated physical hardware, and where part of the services are provided using virtualized service instances. We evaluate the VNF-P model using a small service provider scenario and two types of service chains, and evaluate its execution speed. We find that the algorithms finish in 16 seconds or less for a small service provider scenario, making it feasible to react quickly to changing demand.",
"With the rapid growth of user data, service innovation, and the persistent necessity to reduce costs, today's mobile operators are faced with severe challenges. In networking, two new concepts have emerged aiming at cost reduction, increase of network scalability and service flexibility, namely Network Functions Virtualization (NFV) and Software Defined Networking (SDN). NFV proposes to run the mobile network functions as software instances on commodity servers or datacenters (DC), while SDN supports a decomposition of the mobile network into control-plane and data-plane functions. Whereas these new concepts are considered as very promising drivers to design cost efficient mobile network architectures, limited attention has been drawn to the network load and infringed data-plane delay imposed by introducing NFV or SDN. We argue that within a widely-spanned mobile network, there is in fact a high potential to combine both concepts. Taking load and delay into account, there will be areas of the mobile network rather benefiting from an NFV deployment with all functions virtualized, while for other areas, an SDN deployment with functions decomposition is more advantageous. We refer to this problem as the functions placement problem. We propose a model that resolves the functions placement and aims at minimizing the transport network load overhead against several parameters such as data-plane delay, number of potential datacenters and SDN control overhead. We illustrate our proposed concept along with a concrete use case example."
]
} |
1610.08266 | 2949446367 | Network Function Virtualization (NFV) is a new paradigm, enabling service innovation through virtualization of traditional network functions located flexibly in the network in the form of Virtual Network Functions (VNFs). Since VNFs can only be placed onto servers located in networked data centers, which is the NFV's salient feature, the traffic directed to these data center areas has a significant impact on network load balancing. Network load balancing can be even more critical for an ordered sequence of VNFs, also known as Service Function Chains (SFCs), a common cloud and network service approach today. To balance the network load, VNFs can be placed in a smaller cluster of servers in the network, thus minimizing the distance to the data center. The optimization of the placement of these clusters is a challenge, as other factors also need to be considered, such as resource utilization. To address this issue, we study the problem of VNF placement with replications, and especially the potential of VNF replications to help load balance the network. We design and compare three optimization methods, including a Linear Programming (LP) model, a Genetic Algorithm (GA), and a Random Fit Placement Algorithm (RFPA) for the allocation and replication of VNFs. Our results show that optimum placement and replication can significantly improve load balancing, for which we also propose a GA heuristic applicable to larger networks. | Due to the inherent complexity of these optimizations, heuristic or meta-heuristic algorithms have been proposed to find near-optimal solutions. Paper @cite_1 minimizes the OPEX in the VNF placement problem by separating it into two NP-hard sub-problems and proposing heuristic algorithms. Similarly, paper @cite_6 proposes heuristics to reduce computational complexity, considering the resource demand in data centers.
In @cite_2, two solutions are presented to the VNF-orchestration problem: an ILP model computing the optimal solution using CPLEX for small networks, and a heuristic computing sub-optimal solutions for large networks. Paper @cite_0 proposes a genetic algorithm for VNF chain placement to satisfy SLA and QoS objectives under dynamic traffic demands. Similar to the previous work, we use optimizations and heuristics to solve the VNF placement problem. Unlike previous work, we consider replications, which is novel. Also, our approach is tailored to operational mobile core networks, where the optimum placement of VNFs in data centers can be found by maximizing network load balancing, thus enabling scalable growth of mobile data traffic over the years, which is critical to the emerging 5G networks. | {
"cite_N": [
"@cite_0",
"@cite_1",
"@cite_6",
"@cite_2"
],
"mid": [
"2154389424",
"1578960134",
"1997650479",
"2374516805"
],
"abstract": [
"By allowing network functions to be virtualized and run on commodity hardware, NFV enables new properties (e.g., elastic scaling), and new service models for Service Providers, Enterprises, and Telecommunication Service Providers. However, for NFV to be offered as a service, several research problems still need to be addressed. In this paper, we focus on and propose a new service chaining algorithm. Existing solutions suffer from two main limitations: First, existing proposals often rely on mixed Integer Linear Programming to optimize VM allocation and network management, but our experiments show that such an approach is too slow, taking hours to find a solution. Second, although existing proposals have considered the VM placement and network configuration jointly, they frequently assume the network configuration cannot be changed. Instead, we believe that both computing and network resources should be able to be updated concurrently for increased flexibility and to satisfy SLA and QoS requirements. As such, we formulate and propose a Genetic Algorithm based approach to solve the VM allocation and network management problem. We built an experimental NFV platform and ran a set of experiments. The results show that our proposed GA approach can compute configurations up to three orders of magnitude faster than traditional solutions.",
"Network Function Virtualization (NFV) is a new networking paradigm where network functions are executed on commodity servers located in small cloud nodes distributed across the network, and where software defined mechanisms are used to control the network flows. This paradigm is a major turning point in the evolution of networking, as it introduces high expectations for enhanced economical network services, as well as major technical challenges. In this paper, we address one of the main technical challenges in this domain: the actual placement of the virtual functions within the physical network. This placement has a critical impact on the performance of the network, as well as on its reliability and operation cost. We perform a thorough study of the NFV location problem, show that it introduces a new type of optimization problems, and provide near optimal approximation algorithms guaranteeing a placement with theoretically proven performance. The performance of the solution is evaluated with respect to two measures: the distance cost between the clients and the virtual functions by which they are served, as well as the setup costs of these functions. We provide bi-criteria solutions reaching constant approximation factors with respect to the overall performance, and adhering to the capacity constraints of the networking infrastructure by a constant factor as well. Finally, using extensive simulations, we show that the proposed algorithms perform well in many realistic scenarios.",
"In an operator's datacenter, optical technologies can be employed to perform network function (NF) chaining for larger aggregated flows in parallel with the conventional packet-based fine-grained traffic steering schemes. When network function virtualization (NFV) is enabled, virtualized NFs (vNF) can be placed when and where needed. In this study, we identify the possibility of minimizing the expensive optical-electronic-optical (O/E/O) conversions for NFV chaining in packet optical datacenters, which is introduced by the on-demand placement of vNFs. When the vNFs of the same NF chain are properly grouped into fewer pods, traffic flows can avoid unnecessary traversals in the optical domain. We formulate the problem of optimal vNF placement in binary integer programming (BIP), and propose an alternative efficient heuristic algorithm to solve this problem. Evaluation results show that our algorithm can achieve near-optimal O/E/O conversions comparable to BIP. We also demonstrate the effectiveness of our algorithm under various scenarios, with comparison to a simple first-fit algorithm.",
"Middleboxes or network appliances like firewalls, proxies, and WAN optimizers have become an integral part of today’s ISP and enterprise networks. Middlebox functionalities are usually deployed on expensive and proprietary hardware that require trained personnel for deployment and maintenance. Middleboxes contribute significantly to a network’s capital and operation costs. In addition, organizations often require their traffic to pass through a specific sequence of middleboxes for compliance with security and performance policies. This makes the middlebox deployment and maintenance tasks even more complicated. Network function virtualization (NFV) is an emerging and promising technology that is envisioned to overcome these challenges. It proposes to move packet processing from dedicated hardware middleboxes to software running on commodity servers. In NFV terminology, software middleboxes are referred to as virtualized network functions (VNFs). It is a challenging problem to determine the required number and placement of VNFs that optimizes network operational costs and utilization, without violating service level agreements. We call this the VNF orchestration problem (VNF-OP) and provide an integer linear programming formulation with implementation in CPLEX. We also provide a dynamic programming-based heuristic to solve larger instances of VNF-OP. Trace driven simulations on real-world network topologies demonstrate that the heuristic can provide solutions that are within 1.3 times of the optimal solution. Our experiments suggest that a VNF-based approach can provide more than @math reduction in the operational cost of a network."
]
} |
1610.08452 | 2540757487 | Automated data-driven decision making systems are increasingly being used to assist, or even replace humans in many settings. These systems function by learning from historical decisions, often taken by humans. In order to maximize the utility of these systems (or, classifiers), their training involves minimizing the errors (or, misclassifications) over the given historical data. However, it is quite possible that the optimally trained classifier makes decisions for people belonging to different social groups with different misclassification rates (e.g., misclassification rates for females are higher than for males), thereby placing these groups at an unfair disadvantage. To account for and avoid such unfairness, in this paper, we introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates. We then propose intuitive measures of disparate mistreatment for decision boundary-based classifiers, which can be easily incorporated into their formulation as convex-concave constraints. Experiments on synthetic as well as real world datasets show that our methodology is effective at avoiding disparate mistreatment, often at a small cost in terms of accuracy. | Related Work There have been a number of studies, including our own prior work @cite_12, proposing methods for detecting @cite_4 @cite_22 @cite_14 @cite_28 and removing @cite_29 @cite_4 @cite_18 @cite_5 @cite_0 @cite_14 @cite_12 @cite_2 unfairness when it is defined in terms of disparate treatment, disparate impact, or both. However, as pointed out earlier, the disparate impact notion might be less meaningful in scenarios where ground truth decisions are available. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_28",
"@cite_29",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_12"
],
"mid": [
"2473695717",
"2026019770",
"2014352947",
"2059141064",
"2166454173",
"2100960835",
"1961345416",
"2162670686",
"2145234462",
"2295073825"
],
"abstract": [
"The goal of minimizing misclassification error on a training set is often just one of several real-world goals that might be defined on different datasets. For example, one may require a classifier to also make positive predictions at some specified rate for some subpopulation (fairness), or to achieve a specified empirical recall. Other real-world goals include reducing churn with respect to a previously deployed model, or stabilizing online training. In this paper we propose handling multiple goals on multiple datasets by training with dataset constraints, using the ramp penalty to accurately quantify costs, and present an efficient algorithm to approximately optimize the resulting non-convex constrained optimization problem. Experiments on both benchmark and real-world industry datasets demonstrate the effectiveness of our approach.",
"In the context of civil rights law, discrimination refers to unfair or unequal treatment of people based on membership to a category or a minority, without regard to individual merit. Rules extracted from databases by data mining techniques, such as classification or association rules, when used for decision tasks such as benefit or credit approval, can be discriminatory in the above sense. In this paper, the notion of discriminatory classification rules is introduced and studied. Providing a guarantee of non-discrimination is shown to be a non trivial task. A naive approach, like taking away all discriminatory attributes, is shown to be not enough when other background knowledge is available. Our approach leads to a precise formulation of the redlining problem along with a formal result relating discriminatory rules with apparently safe ones by means of background knowledge. An empirical assessment of the results on the German credit dataset is also provided.",
"What does it mean for an algorithm to be biased? In U.S. law, unintentional bias is encoded via disparate impact, which occurs when a selection process has widely different outcomes for different groups, even as it appears to be neutral. This legal determination hinges on a definition of a protected class (ethnicity, gender) and an explicit description of the process. When computers are involved, determining disparate impact (and hence bias) is harder. It might not be possible to disclose the process. In addition, even if the process is open, it might be hard to elucidate in a legal setting how the algorithm makes its decisions. Instead of requiring access to the process, we propose making inferences based on the data it uses. We present four contributions. First, we link disparate impact to a measure of classification accuracy that while known, has received relatively little attention. Second, we propose a test for disparate impact based on how well the protected class can be predicted from the other attributes. Third, we describe methods by which data might be made unbiased. Finally, we present empirical evidence supporting the effectiveness of our test for disparate impact and our approach for both masking bias and preserving relevant information in the data. Interestingly, our approach resembles some actual selection practices that have recently received legal scrutiny.",
"With the support of the legally-grounded methodology of situation testing, we tackle the problems of discrimination discovery and prevention from a dataset of historical decisions by adopting a variant of k-NN classification. A tuple is labeled as discriminated if we can observe a significant difference of treatment among its neighbors belonging to a protected-by-law group and its neighbors not belonging to it. Discrimination discovery boils down to extracting a classification model from the labeled tuples. Discrimination prevention is tackled by changing the decision value for tuples labeled as discriminated before training a classifier. The approach of this paper overcomes legal weaknesses and technical limitations of existing proposals.",
"Most of the decisions in the today's knowledge society are taken on the basis of historical data by extracting models, patterns, profiles, and rules of human behavior in support of (automated) decision making. There is then the need of developing models, methods and technologies for modelling the processes of discrimination analysis in order to discover and prevent discrimination phenomena. In this respect, discrimination analysis from data should build over the large body of existing legal and economic studies. This paper intends to provide a multi-disciplinary survey of the literature on discrimination data analysis, including methods for data collection, empirical studies, controlled experiments, statistical evidence, and their legal requirements and grounds. We cover the following mainstream research lines: labour economic models, (quasi-)experimental approaches such as auditing and controlled experiments, profiling-based approaches such as racial profiling and credit markets, and the recently blooming research on knowledge discovery approaches.",
"We study fairness in classification, where individuals are classified, e.g., admitted to a university, and the goal is to prevent discrimination against individuals based on their membership in some group, while maintaining utility for the classifier (the university). The main conceptual contribution of this paper is a framework for fair classification comprising (1) a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand; (2) an algorithm for maximizing utility subject to the fairness constraint, that similar individuals are treated similarly. We also present an adaptation of our approach to achieve the complementary goal of \"fair affirmative action,\" which guarantees statistical parity (i.e., the demographics of the set of individuals receiving any classification are the same as the demographics of the underlying population), while treating similar individuals as similarly as possible. Finally, we discuss the relationship of fairness to privacy: when fairness implies privacy, and how tools developed in the context of differential privacy may be applied to fairness.",
"With the spread of data mining technologies and the accumulation of social data, such technologies and data are being used for determinations that seriously affect individuals' lives. For example, credit scoring is frequently determined based on the records of past credit data together with statistical prediction techniques. Needless to say, such determinations must be nondiscriminatory and fair in sensitive features, such as race, gender, religion, and so on. Several researchers have recently begun to attempt the development of analysis techniques that are aware of social fairness or discrimination. They have shown that simply avoiding the use of sensitive features is insufficient for eliminating biases in determinations, due to the indirect influence of sensitive information. In this paper, we first discuss three causes of unfairness in machine learning. We then propose a regularization approach that is applicable to any prediction algorithm with probabilistic discriminative models. We further apply this approach to logistic regression and empirically show its effectiveness and efficiency.",
"We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole), and individual fairness (similar individuals should be treated similarly). We formulate fairness as an optimization problem of finding a good representation of the data with two competing goals: to encode the data as well as possible, while simultaneously obfuscating any information about membership in the protected group. We show positive results of our algorithm relative to other known techniques, on three datasets. Moreover, we demonstrate several advantages to our approach. First, our intermediate representation can be used for other classification tasks (i.e., transfer learning is possible); secondly, we take a step toward learning a distance metric which can find important dimensions of the data for classification.",
"The concept of classification without discrimination is a new area of research. (Kamiran & Calders, ) introduced the idea of Classification with No Discrimination (CND) and proposed a solution based on “massaging” the data to remove the discrimination from it with the least possible changes. In this paper, we propose a new solution to the CND problem by introducing a sampling scheme for making the data discrimination free instead of relabeling the dataset. On the resulting non-discriminatory dataset we then learn a classifier. This new method is not only less intrusive as compared to the “massaging” but also outperforms the “reweighing” approach of (, 2009). The proposed method has been implemented and experimental results on the Census Income dataset show promising results: in all experiments our method performs on par with the state-of-the-art non-discriminatory techniques.",
""
]
} |
1610.08452 | 2540757487 | Automated data-driven decision making systems are increasingly being used to assist, or even replace humans in many settings. These systems function by learning from historical decisions, often taken by humans. In order to maximize the utility of these systems (or, classifiers), their training involves minimizing the errors (or, misclassifications) over the given historical data. However, it is quite possible that the optimally trained classifier makes decisions for people belonging to different social groups with different misclassification rates (e.g., misclassification rates for females are higher than for males), thereby placing these groups at an unfair disadvantage. To account for and avoid such unfairness, in this paper, we introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates. We then propose intuitive measures of disparate mistreatment for decision boundary-based classifiers, which can be easily incorporated into their formulation as convex-concave constraints. Experiments on synthetic as well as real world datasets show that our methodology is effective at avoiding disparate mistreatment, often at a small cost in terms of accuracy. | A number of previous studies have pointed out racial disparities in both automated @cite_11 and human @cite_1 @cite_7 decision making systems related to criminal justice. For example, a recent work by @cite_1 detects racial disparities in the NYPD SQF program, inspired by a notion of unfairness similar to our notion of disparate mistreatment. More specifically, it uses ground truth (stops leading to the successful discovery of an illegal weapon on the suspect) to show that blacks were treated unfairly, since false positive rates in stops were higher for them than for whites. The study's findings provide further justification for the need for data-driven decision making systems without disparate mistreatment. | {
"cite_N": [
"@cite_1",
"@cite_7",
"@cite_11"
],
"mid": [
"2271514814",
"1999173016",
""
],
"abstract": [
"Recent studies have examined racial disparities in stop-and-frisk, a widely employed but controversial policing tactic. The statistical evidence, however, has been limited and contradictory. We investigate by analyzing three million stops in New York City over five years, focusing on cases where officers suspected the stopped individual of criminal possession of a weapon (CPW). For each CPW stop, we estimate the ex ante probability that the detained suspect has a weapon. We find that in more than 40% of cases, the likelihood of finding a weapon (typically a knife) was less than 1%, raising concerns that the legal requirement of “reasonable suspicion” was often not met. We further find that blacks and Hispanics were disproportionately stopped in these low hit rate contexts, a phenomenon that we trace to two factors: (1) lower thresholds for stopping individuals — regardless of race — in high-crime, predominately minority areas, particularly public housing; and (2) lower thresholds for stopping minorities relative to similarly situated whites. Finally, we demonstrate that by conducting only the 6% of stops that are statistically most likely to result in weapons seizure, one can both recover the majority of weapons and mitigate racial disparities in who is stopped. We show that this statistically informed stopping strategy can be approximated by simple, easily implemented heuristics with little loss in efficiency.",
"Allegations of racially biased policing are a contentious issue in many communities. Processes that flag potential problem officers have become a key component of risk management systems at major police departments. We present a statistical method to flag potential problem officers by blending three methodologies that are the focus of active research efforts: propensity score weighting, doubly robust estimation, and false discovery rates. Compared with other systems currently in use, the proposed method reduces the risk of flagging a substantial number of false positives by more rigorously adjusting for potential confounders and by using the false discovery rate as a measure to flag officers. We apply the methodology to data on 500,000 pedestrian stops in New York City in 2006. Of the nearly 3,000 New York City Police Department officers regularly involved in pedestrian stops, we flag 15 officers who stopped a substantially greater fraction of black and Hispanic suspects than our statistical benchmark pre...",
""
]
} |
1610.08372 | 2503291992 | On-line social networks are complex ensembles of inter-linked communities that interact on different topics. Some communities are characterized by what are usually referred to as deviant behaviors, conduct that is commonly considered inappropriate with respect to society's norms or moral standards. Eating disorders, drug use, and adult content consumption are just a few examples. We refer to such communities as deviant networks. It is commonly believed that such deviant networks are niche, isolated social groups, whose activity is well separated from mainstream social-media life. According to this assumption, research studies have mostly considered them in isolation. In this work we focused on adult content consumption networks, which are present in many on-line social media and in the Web in general. We found that a few small, densely connected communities are responsible for most of the content production. Differently from previous work, we studied how such communities interact with the whole social network. We found that the produced content flows to the rest of the network mostly directly or through bridge-communities, reaching at least 450 times more users. We also show that a large fraction of the users can be inadvertently exposed to such content through indirect content resharing. We also discuss a demographic analysis of the producer and consumer networks. Finally, we show that it is easily possible to identify a few core users to radically uproot the diffusion process. We aim at setting the basis to study deviant communities in context. | Computer science research has dealt extensively with the problem of classification of groups along structural, temporal, behavioral, and topical dimensions @cite_27 @cite_22 @cite_7.
The relationship between group connectivity and the shape of information cascades has also been explored, revealing an intertwinement between community boundaries and cascade reach that is particularly tight in communities built upon a common theme shared by all of their members @cite_31 @cite_3 @cite_9 @cite_17. The degree of inter-community interaction has been analyzed mostly in the context of heavily polarized networks, the most classical example being online discussions between two opposing political views @cite_28 @cite_0 @cite_10. These studies explored methods to quantify segregation @cite_29, but mainly focused on networks formed by two main divergent clusters. | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_28",
"@cite_9",
"@cite_29",
"@cite_3",
"@cite_0",
"@cite_27",
"@cite_31",
"@cite_10",
"@cite_17"
],
"mid": [
"2043383814",
"2282565172",
"2152284345",
"",
"2178519838",
"1806085624",
"91442942",
"",
"19838944",
"162554323",
"2156577639"
],
"abstract": [
"Social groups play a crucial role in social media platforms because they form the basis for user participation and engagement. Groups are created explicitly by members of the community, but also form organically as members interact. Due to their importance, they have been studied widely (e.g., community detection, evolution, activity, etc.). One of the key questions for understanding how such groups evolve is whether there are different types of groups and how they differ. In Sociology, theories have been proposed to help explain how such groups form. In particular, the common identity and common bond theory states that people join groups based on identity (i.e., interest in the topics discussed) or bond attachment (i.e., social relationships). The theory has been applied qualitatively to small groups to classify them as either topical or social. We use the identity and bond theory to define a set of features to classify groups into those two categories. Using a dataset from Flickr, we extract user-defined groups and automatically-detected groups, obtained from a community detection algorithm. We discuss the process of manual labeling of groups into social or topical and present results of predicting the group label based on the defined features. We directly validate the predictions of the theory showing that the metrics are able to forecast the group type with high accuracy. In addition, we present a comparison between declared and detected groups along topicality and sociality dimensions.",
"Dynamics of social systems are the result of the complex superposition of interactions taking place at different scales, ranging from the pairwise communications between individuals to the macroscopic evolutionary patterns of the full interaction graph. Social communities, namely groups of people originated by any spontaneous aggregation process, constitute the mid-ground between such two extremes. Groups are important constituents of social environments as they form the basis for people’s participation and engagement beyond their minute dyadic interactions. Communities in online social media have been studied widely in their static and evolutionary aspects, but only recently some attention has been devoted to the exploration of their nature. Besides the characterization of online communities along their spatio-temporal and activity features, the recent advancements in the emerging field of computational sociology have provided a new lens to study social aggregations along their social and topical dimensions. Using the online photo sharing community Flickr as a main running example, we survey some techniques that have been used to get a multi-faceted description of group types and we show that different types of groups impact on orthogonal interaction processes on the social graph, such as the diffusion of information along social ties. Our overview supports the intuition that a more nuanced description of groups could not only improve the understanding of the activity of the user base but can also foster a better interpretation of other phenomena occurring on social graphs.",
"In this paper, we study the linking patterns and discussion topics of political bloggers. Our aim is to measure the degree of interaction between liberal and conservative blogs, and to uncover any differences in the structure of the two communities. Specifically, we analyze the posts of 40 \"A-list\" blogs over the period of two months preceding the U.S. Presidential Election of 2004, to study how often they referred to one another and to quantify the overlap in the topics they discussed, both within the liberal and conservative communities, and also across communities. We also study a single day snapshot of over 1,000 political blogs. This snapshot captures blogrolls (the list of links to other blogs frequently found in sidebars), and presents a more static picture of a broader blogosphere. Most significantly, we find differences in the behavior of liberal and conservative blogs, with conservative blogs linking to each other more frequently and in a denser pattern.",
"",
"Polarization in social media networks is a fact in several scenarios such as political debates and other contexts such as same-sex marriage, abortion and gun control. Understanding and quantifying polarization is a long-term challenge to researchers from several areas, also being a key information for tasks such as opinion analysis. In this paper, we perform a systematic comparison between social networks that arise from both polarized and non-polarized contexts. This comparison shows that the traditional polarization metric -modularity - is not a direct measure of antagonism between groups, since non-polarized networks may be also divided into fairly modular communities. To bridge this conceptual gap, we propose a novel polarization metric based on the analysis of the boundary of a pair of (potentially polarized) communities, which better captures the notions of antagonism and polarization. We then characterize polarized and non-polarized social networks according to the concentration of high-degree nodes in the boundary of communities, and found that polarized networks tend to exhibit low concentration of popular nodes along the boundary. To demonstrate the usefulness of our polarization measures, we analyze opinions expressed on Twitter on the gun control issue in the United States, and conclude that our novel metrics help making sense of opinions expressed on online media.",
"People's interests and people's social relationships are intuitively connected, but understanding their interplay and whether they can help predict each other has remained an open question. We examine the interface of two decisive structures forming the backbone of online social media: the graph structure of social networks - who connects with whom - and the set structure of topical affiliations - who is interested in what. In studying this interface, we identify key relationships whereby each of these structures can be understood in terms of the other. The context for our analysis is Twitter, a complex social network of both follower relationships and communication relationships. On Twitter, \"hashtags\" are used to label conversation topics, and we examine hashtag usage alongside these social structures. We find that the hashtags that users adopt can predict their social relationships, and also that the social relationships between the initial adopters of a hashtag can predict the future popularity of that hashtag. By studying weighted social relationships, we observe that while strong reciprocated ties are the easiest to predict from hashtag structure, they are also much less useful than weak directed ties for predicting hashtag popularity. Importantly, we show that computationally simple structural determinants can provide remarkable performance in both tasks. While our analyses focus on Twitter, we view our findings as broadly applicable to topical affiliations and social relationships in a host of diverse contexts, including the movies people watch, the brands people like, or the locations people frequent.",
"In this study we investigate how social media shape the networked public sphere and facilitate communication between communities with different political orientations. We examine two networks of political communication on Twitter, comprised of more than 250,000 tweets from the six weeks leading up to the 2010 U.S. congressional midterm elections. Using a combination of network clustering algorithms and manually-annotated data we demonstrate that the network of political retweets exhibits a highly segregated partisan structure, with extremely limited connectivity between left- and right-leaning users. Surprisingly this is not the case for the user-to-user mention network, which is dominated by a single politically heterogeneous cluster of users in which ideologically-opposed individuals interact at a much higher rate compared to the network of retweets. To explain the distinct topologies of the retweet and mention networks we conjecture that politically motivated individuals provoke interaction by injecting partisan content into information streams whose primary audience consists of ideologically-opposed users. We conclude with statistical evidence in support of this hypothesis.",
"",
"Over the past decade there has been a growing public fascination with the complex connectedness of modern society. This connectedness is found in many incarnations: in the rapid growth of the Internet, in the ease with which global communication takes place, and in the ability of news and information as well as epidemics and financial crises to spread with surprising speed and intensity. These are phenomena that involve networks, incentives, and the aggregate behavior of groups of people; they are based on the links that connect us and the ways in which our decisions can have subtle consequences for others. This introductory undergraduate textbook takes an interdisciplinary look at economics, sociology, computing and information science, and applied mathematics to understand networks and behavior. It describes the emerging field of study that is growing at the interface of these areas, addressing fundamental questions about how the social, economic, and technological worlds are connected.",
"In the context of a national election, this study explores more than 69,000 Twitter messages containing mentions of political parties and about 2,500 related user profiles to investigate the network structure of political microbloggers with respect to, first, their party preference and, second, the topics they discuss. We find that political microbloggers tend to follow like-minded peers. Microbloggers in a cohesive group tend to have the same political preferences. In addition, we conduct a content analysis of the political debate on Twitter to explore which topics and politicians are discussed and whether this debate reflects an ideological divide among participating users. While there are some discussion topics that are dominated by politically like-minded microbloggers, the majority of topics is discussed by a diverse group of microbloggers with various political preferences.",
"Social groups play a crucial role in online social media because they form the basis for user participation and engagement. Although widely studied in their static and evolutionary aspects, not much attention has been devoted to the exploration of the nature of groups. In fact, groups can originate from different aggregation processes that may be determined by several orthogonal factors. A key question in this scenario is whether it is possible to identify the different types of groups that emerge spontaneously in online social media and how they differ. We propose a general framework for the characterization of groups along the geographical, temporal, and socio-topical dimensions and we apply it on a very large dataset from Flickr. In particular, we define a new metric to account for geographic dispersion, we use a clustering approach on activity traces to extract classes of different temporal footprints, and we transpose the “common identity and common bond” theory into metrics to identify the skew of a group towards sociality or topicality. We directly validate the predictions of the sociological theory showing that the metrics are able to forecast with high accuracy the group type when compared to a human-generated ground truth. Last, we frame our contribution into a wider context by putting in relation different types of groups with communities detected algorithmically on the social graph and by showing the effect that the group type might have on processes of information diffusion. Results support the intuition that a more nuanced description of groups could improve not only the understanding of the activity of the user base but also the interpretation of other phenomena occurring on social graphs."
]
} |
1610.08372 | 2503291992 | On-line social networks are complex ensembles of inter-linked communities that interact on different topics. Some communities are characterized by what are usually referred to as deviant behaviors, conducts that are commonly considered inappropriate with respect to the society's norms or moral standards. Eating disorders, drug use, and adult content consumption are just a few examples. We refer to such communities as deviant networks. It is commonly believed that such deviant networks are niche, isolated social groups, whose activity is well separated from the mainstream social-media life. According to this assumption, research studies have mostly considered them in isolation. In this work we focused on adult content consumption networks, which are present in many on-line social media and in the Web in general. We found that few small and densely connected communities are responsible for most of the content production. Differently from previous work, we studied how such communities interact with the whole social network. We found that the produced content flows to the rest of the network mostly directly or through bridge-communities, reaching at least 450 times more users. We also show that a large fraction of the users can be inadvertently exposed to such content through indirect content resharing. We also discuss a demographic analysis of the producers and consumers networks. Finally, we show that it is easily possible to identify a few core users to radically uproot the diffusion process. We aim at setting the basis to study deviant communities in context. | In the context of internet pornography consumption, computer science literature studied the categorization of content and frequency of use @cite_16 @cite_35 @cite_32 . A wider corpus of research has been produced by social and behavioral scientists by means of surveys administered to relatively small groups. 
Special attention has been given to the relationship between age or gender and the exposure (voluntary or unwanted) to internet porn @cite_34 @cite_42 @cite_4 @cite_14 @cite_46 , with particular interest to the age band of young teens @cite_14 @cite_46 @cite_20 . Numbers vary substantially between studies, but clearly men are more exposed than women (approximately 75 | {
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_4",
"@cite_42",
"@cite_32",
"@cite_34",
"@cite_46",
"@cite_16",
"@cite_20"
],
"mid": [
"2092079672",
"1991211884",
"",
"",
"1915498292",
"2089055658",
"2109163339",
"2339701177",
"2149815291"
],
"abstract": [
"The Internet has evolved into a huge video delivery infrastructure, with websites such as YouTube and Netflix appearing at the top of most traffic measurement studies. However, most traffic studies have largely kept silent about an area of the Internet that (even today) is poorly understood: adult media distribution. Whereas ten years ago, such services were provided primarily via peer-to-peer file sharing and bespoke websites, recently these have converged towards what is known as \"Porn 2.0\". These popular web portals allow users to upload, view, rate and comment videos for free. Despite this, we still lack even a basic understanding of how users interact with these services. This paper seeks to address this gap by performing the first large-scale measurement study of one of the most popular Porn 2.0 websites: YouPorn. We have repeatedly crawled the website to collect statistics about 183k videos, witnessing over 60 billion views. Through this, we offer the first characterisation of this type of corpus, highlighting the nature of YouPorn's repository. We also inspect the popularity of objects and how they relate to other features such as the categories to which they belong. We find evidence for a high level of flexibility in the interests of its user base, manifested in the extremely rapid decay of content popularity over time, as well as high susceptibility to browsing order. Using a small-scale user study, we validate some of our findings and explore the infrastructure design and management implications of our observations.",
"This national survey of youth, ages 10 to 17, and their caretakers has several implications for the current debate about young people and Internet pornography. Twenty five percent of youth had unwanted exposure to sexual pictures on the Internet in the past year, challenging the prevalent assumption that the problem is primarily about young people motivated to actively seek out pornography. Most youth had no negative reactions to their unwanted exposure, but one quarter said they were very or extremely upset, suggesting a priority need for more research on and interventions directed toward such negative effects. The use of filtering and blocking software was associated with a modest reduction in unwanted exposure, suggesting that it may help but is far from foolproof. Various forms of parental supervision were not associated with any reduction in exposure. The authors urge that social scientific research be undertaken to inform this highly contentious public policy controversy.",
"",
"",
"Previous research on exposure to different types of pornography has primarily relied on analyses of millions of search terms and histories or on user exposure patterns within a given time period rather than the self-reported frequency of consumption. Further, previous research has almost exclusively relied on theoretical or ad hoc overarching categorizations of different types of pornography, when investigating patterns of pornography exposure, rather than latent structure analyses of these exposure patterns. In contrast, using a large sample of 18- to 40-year-old heterosexual and nonheterosexual Croatian men and women, this study investigated the self-reported frequency of using 27 different types of pornography and statistically explored their latent structures. The results showed substantial differences in consumption patterns across gender and sexual orientation. However, latent structure analyses of the 27 different types of pornography assessed suggested that although several categories of consumpti...",
"Abstract We examined exposure to Internet pornography before the age of 18, as reported by college students (n = 563), via an online survey. Ninety-three percent of boys and 62% of girls were exposed to online pornography during adolescence. Exposure prior to age 13 was relatively uncommon. Boys were more likely to be exposed at an earlier age, to see more images, to see more extreme images (e.g., rape, child pornography), and to view pornography more often, while girls reported more involuntary exposure. If participants in this study are typical of young people, exposure to pornography on the Internet can be described as a normative experience, and more study of its impact is clearly warranted.",
"The purpose of this study was to analyze pornography exposure in a sample of 702 Italian adolescents (46% males; mean age = 18.2, SD = 0.8). Among male students, 11% were not exposed, 44.5% were exposed to nonviolent material, and 44.5% were exposed to violent degrading material. Among female students, 60.8% were not exposed, 20.4% were exposed to nonviolent material, and 18.8% were exposed to violent degrading material. Among males, adjusted odds ratio (AdjOR) of exposure to violent degrading pornography were higher if using alcohol, having friends who sell/buy sex, and taking sexual pictures. Females who were victims of family violence, attending technical/vocational schools, and taking sexual pictures had higher AdjOR of watching violent pornography; smoking and having friends who sell/buy sex were associated with both nonviolent and violent degrading exposure. Exposure to violent degrading pornography is common among adolescents, associated with at-risk behaviors, and, for females, it correlates with a history of victimization. School nurses have a pivotal role in including discussions about pornography in interventions about relationships, sexuality, or violence.",
"YouPorn is one of the largest providers of adult content on the web. Being free of charge, the video portal allows users - besides watching - to upload, categorize, and comment on pornographic videos. With this position paper, we point out the challenges of analyzing the textual data offered with the videos. We report on first experiments and problems with our YouPorn dataset, which we extracted from the non-graphical content of the YP website. To gain some insights, we performed association rule mining on the video categories and tags, and investigated preferences of users based on their nickname. Hoping that future research will be able to build upon our initial experiences, we make the ready-to-use YP dataset publicly available.",
"OBJECTIVE. The goal was to assess the extent of unwanted and wanted exposure to online pornography among youth Internet users and associated risk factors. METHODS. A telephone survey of a nationally representative sample of 1500 youth Internet users aged 10 to 17 years was conducted between March and June 2005. RESULTS. Forty-two percent of youth Internet users had been exposed to online pornography in the past year. Of those, 66% reported only unwanted exposure. Multinomial logistic regression analysis was used to compare youth with unwanted exposure only or any wanted exposure with those with no exposure. Unwanted exposure was related to only 1 Internet activity, namely, using filesharing programs to download images. Filtering and blocking software reduced the risk of unwanted exposure, as did attending an Internet safety presentation by law enforcement personnel. Unwanted exposure rates were higher for teens, youth who reported being harassed or sexually solicited online or interpersonally victimized offline, and youth who scored in the borderline or clinically significant range on the Child Behavior Checklist subscale for depression. Wanted exposure rates were higher for teens, boys, and youth who used file-sharing programs to download images, talked online to unknown persons about sex, used the Internet at friends’ homes, or scored in the borderline or clinically significant range on the Child Behavior Checklist subscale for rule-breaking. Depression also could be a risk factor for some youth. Youth who used filtering and blocking software had lower odds of wanted exposure. CONCLUSIONS. More research concerning the potential impact of Internet pornography on youth is warranted, given the high rate of exposure, the fact that much exposure is unwanted, and the fact that youth with certain vulnerabilities, such as depression, interpersonal victimization, and delinquent tendencies, have more exposure."
]
} |
1610.07733 | 2539535812 | Cross-validation (CV) is a technique for evaluating the ability of statistical models/learning systems based on a given data set. Despite its wide applicability, the rather heavy computational cost can prevent its use as the system size grows. To resolve this difficulty in the case of Bayesian linear regression, we develop a formula for evaluating the leave-one-out CV error approximately without actually performing CV. The usefulness of the developed formula is tested by statistical mechanical analysis for a synthetic model. This is confirmed by application to a real-world supernova data set as well. | There have been several studies of CV for linear regression. The analytical formula for evaluating the LOO CV error (LOOE) exactly, without actually performing CV, is widely known for standard linear regression and ridge regression @cite_8 . This formula was extended to the case in which linear constraints are present @cite_5 . An alternative measure, which has a property similar to that of LOOE and can be evaluated at a lower computational cost, was proposed as the "generalized cross-validation" in @cite_7 for regularized linear regression. Two types of LOOE approximation formulas for LASSO were recently provided in @cite_15 . In contrast to these, our aim here is to develop a computationally feasible approximate formula to evaluate LOOE in the Bayesian formalism in which sparse (singular) priors can be employed. A similar attempt has been made for feedforward neural networks in @cite_9 . | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_5",
"@cite_15"
],
"mid": [
"1990381576",
"2050297026",
"",
"2121504083",
"2231154347"
],
"abstract": [
"Consider the ridge estimate (λ) for β in the model unknown, (λ) = (X T X + nλI)−1 X T y. We study the method of generalized cross-validation (GCV) for choosing a good value for λ from the data. The estimate is the minimizer of V(λ) given by where A(λ) = X(X T X + nλI)−1 X T . This estimate is a rotation-invariant version of Allen's PRESS, or ordinary cross-validation. This estimate behaves like a risk improvement estimator, but does not require an estimate of σ2, so can be used when n − p is small, or even if p ≥ 2 n in certain cases. The GCV method can also be used in subset selection and singular value truncation methods for regression, and even to choose from among mixtures of these methods.",
"We show that data augmentation provides a rather general formulation for the study of biased prediction techniques using multiple linear regression. Variable selection is a limiting case, and Ridge regression is a special case of data augmentation. We propose a way to obtain predictors given a credible criterion of good prediction.",
"",
"Abstract There is a well-known simple formula for computing prediction sum of squares (PRESS) residuals in a regression problem without having to refit the curve for each observation. This note shows that the same basic result holds for fitting a regression function when the regression coefficients are subject to linear constraints.",
"We investigate leave-one-out cross validation (CV) as a determinator of the weight of the penalty term in the least absolute shrinkage and selection operator (LASSO). First, on the basis of the message passing algorithm and a perturbative discussion assuming that the number of observations is sufficiently large, we provide simple formulas for approximately assessing two types of CV errors, which enable us to significantly reduce the necessary cost of computation. These formulas also provide a simple connection of the CV errors to the residual sums of squares between the reconstructed and the given measurements. Second, on the basis of this finding, we analytically evaluate the CV errors when the design matrix is given as a simple random matrix in the large size limit by using the replica method. Finally, these results are compared with those of numerical simulations on finite-size systems and are confirmed to be correct. We also apply the simple formulas of the first type of CV error to an actual dataset of the supernovae."
]
} |
1610.07671 | 2544211228 | We present methods for offline generation of sparse roadmap spanners that result in graphs 79% smaller than existing approaches while returning solutions of equivalent path quality. Our method uses a hybrid approach to sampling that combines traditional graph discretization with random sampling. We present techniques that optimize the graph for the L1-norm metric function commonly used in joint-based robotic planning, purposefully choosing a @math -stretch factor based on the geometry of the space, and removing redundant edges that do not contribute to the graph quality. A high-quality pre-processed sparse roadmap is then available for re-use across many different planning scenarios using standard repair and re-plan methods. Pre-computing the roadmap offline results in more deterministic solutions, reduces the memory requirements by affording complex rejection criteria, and increases the speed of planning in high-dimensional spaces allowing more complex problems to be solved such as multi-modal task planning. Our method is validated through simulated benchmarks against the SPARS2 algorithm. The source code is freely available online as an open source extension to OMPL. | One of the most popular methods for solving the robotic motion planning problem has been the sampling-based probabilistic roadmap (PRM) @cite_8 . While in theory PRMs are pre-processed and reusable across all environments, in practice they are typically only re-usable for a given environment, though in some implementations more general re-use is actually achieved, such as in Dynamic Roadmaps @cite_4 . Even for versions of PRMs that can re-use their roadmap in changing environments, there are open questions on how to generate efficient and high-quality graphs. Traditional PRMs are probabilistically complete but do not provide any guarantees on the quality of the path returned.
PRM* @cite_5 has been proven to be asymptotically optimal as samples are infinitely added to the roadmap. | {
"cite_N": [
"@cite_5",
"@cite_4",
"@cite_8"
],
"mid": [
"1971086298",
"2022115960",
""
],
"abstract": [
"During the last decade, sampling-based path planning algorithms, such as probabilistic roadmaps (PRM) and rapidly exploring random trees (RRT), have been shown to work well in practice and possess theoretical guarantees such as probabilistic completeness. However, little effort has been devoted to the formal analysis of the quality of the solution returned by such algorithms, e.g. as a function of the number of samples. The purpose of this paper is to fill this gap, by rigorously analyzing the asymptotic behavior of the cost of the solution returned by stochastic sampling-based algorithms as the number of samples increases. A number of negative results are provided, characterizing existing algorithms, e.g. showing that, under mild technical conditions, the cost of the solution returned by broadly used sampling-based algorithms converges almost surely to a non-optimal value. The main contribution of the paper is the introduction of new algorithms, namely, PRM* and RRT*, which are provably asymptotically optimal, i.e. such that the cost of the returned solution converges almost surely to the optimum. Moreover, it is shown that the computational complexity of the new algorithms is within a constant factor of that of their probabilistically complete (but not asymptotically optimal) counterparts. The analysis in this paper hinges on novel connections between stochastic sampling-based path planning algorithms and the theory of random geometric graphs.",
"We present a practical strategy for real-time path planning for articulated robot arms in changing environments by integrating PRM for Changing Environments with 3D sensor data. Our implementation on Care-O-Bot 3 identifies bottlenecks in the algorithm and introduces new methods that solve the overall task of detecting obstacles and planning a path around them in under 100 ms. A fast planner is necessary to enable the robot to react to quickly changing human environments. We have tested our implementation in real-world experiments where a human subject enters the manipulation area, is detected and safely avoided by the robot. This capability is critical for future applications in automation and service robotics where humans will work closely with robots to jointly perform tasks.",
""
]
} |
1610.07671 | 2544211228 | We present methods for offline generation of sparse roadmap spanners that result in graphs 79% smaller than existing approaches while returning solutions of equivalent path quality. Our method uses a hybrid approach to sampling that combines traditional graph discretization with random sampling. We present techniques that optimize the graph for the L1-norm metric function commonly used in joint-based robotic planning, purposefully choosing a @math -stretch factor based on the geometry of the space, and removing redundant edges that do not contribute to the graph quality. A high-quality pre-processed sparse roadmap is then available for re-use across many different planning scenarios using standard repair and re-plan methods. Pre-computing the roadmap offline results in more deterministic solutions, reduces the memory requirements by affording complex rejection criteria, and increases the speed of planning in high-dimensional spaces allowing more complex problems to be solved such as multi-modal task planning. Our method is validated through simulated benchmarks against the SPARS2 algorithm. The source code is freely available online as an open source extension to OMPL. | Sparse roadmap spanners, instead, have recently been proven to provide asymptotically near-optimal guarantees within a @math -stretch factor. For example, if the @math -stretch factor is 1.1, then the length of a returned path can exceed that of the optimal solution by at most 10%. In order to have the same asymptotic optimality guarantees as PRM* within a @math -stretch factor, a number of checks are required to determine which potential vertices and edges should be saved to have coverage across a robot's free space. Only configurations that are useful for 1) coverage, 2) connectivity, or 3) improving the quality of paths on the sparse roadmap relative to the optimal paths in the c-space are added. Two parameters @math and the sparse delta factor @math control the sparsity of the graph.
For more background on these criteria the reader is encouraged to reference @cite_2 . | {
"cite_N": [
"@cite_2"
],
"mid": [
"2072127296"
],
"abstract": [
"Roadmap spanners provide a way to acquire sparse data structures that efficiently answer motion planning queries with probabilistic completeness and asymptotic near-optimality. The current SPARS method provides these properties by building two graphs in parallel: a dense asymptotically-optimal roadmap based on PRM* and its spanner. This paper shows that it is possible to relax the conditions under which a sample is added to the spanner and provide guarantees, while not requiring the use of a dense graph. A key aspect of SPARS is that the probability of adding nodes to the roadmap goes to zero as iterations increase, which is maintained in the proposed extension. The paper describes the new algorithm, argues its theoretical properties and evaluates it against PRM* and the original SPARS algorithm. The experimental results show that the memory requirements of the method upon construction are dramatically reduced, while returning competitive quality paths with PRM*. There is a small sacrifice in the size of the final spanner relative to SPARS but the new method still returns graphs orders of magnitudes smaller than PRM*, leading to very efficient online query resolution."
]
} |
1610.07671 | 2544211228 | We present methods for offline generation of sparse roadmap spanners that result in graphs 79% smaller than existing approaches while returning solutions of equivalent path quality. Our method uses a hybrid approach to sampling that combines traditional graph discretization with random sampling. We present techniques that optimize the graph for the L1-norm metric function commonly used in joint-based robotic planning, purposefully choosing a @math -stretch factor based on the geometry of the space, and removing redundant edges that do not contribute to the graph quality. A high-quality pre-processed sparse roadmap is then available for re-use across many different planning scenarios using standard repair and re-plan methods. Pre-computing the roadmap offline results in more deterministic solutions, reduces the memory requirements by affording complex rejection criteria, and increases the speed of planning in high-dimensional spaces allowing more complex problems to be solved such as multi-modal task planning. Our method is validated through simulated benchmarks against the SPARS2 algorithm. The source code is freely available online as an open source extension to OMPL. | Our hybrid approach of combining discretized lattices with random sampling is similar to the extensive work on the subject in @cite_3 . They surprisingly found that deterministic sampling methods are superior to the original PRM, noting that "by definition a collection of pseudo-random samples should have too many points in some places, and not enough in others." Our approach is most similar to their proposed subsampled grid search (SGS), where discretized vertices along a grid are coarsely spaced and a local planner is used to collision check the edges between the grid. The unique aspect of our approach is that we size the grid optimally for the requirements of the spanning graph, and additionally perform random sampling as a second step.
To the best of our knowledge this hybrid approach is unique in the literature. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2158207287"
],
"abstract": [
"We present, implement, and analyze a spectrum of closely-related planners, designed to gain insight into the relationship between classical grid search and probabilistic roadmaps (PRMs). Building on quasi-Monte Carlo sampling literature, we have developed deterministic variants of the PRM that use low-discrepancy and low-dispersion samples, including lattices. Classical grid search is extended using subsampling for collision detection and also the optimal-dispersion Sukharev grid, which can be considered as a kind of lattice-based roadmap to complete the spectrum. Our experimental results show that the deterministic variants of the PRM offer performance advantages in comparison to the original PRM and the recent Lazy PRM. This even includes searching using a grid with subsampled collision checking. Our theoretical analysis shows that all of our deterministic PRM variants are resolution complete and achieve the best possible asymptotic convergence rate, which is shown superior to that obtained by random sampling. Thus, in surprising contrast to recent trends, there is both experimental and theoretical evidence that some forms of grid search are superior to the original PRM."
]
} |
1610.07874 | 2768038943 | We provide new upper bounds for mixing times of general finite Markov chains. We use these bounds to show that the total variation mixing time is robust under rough isometry for bounded degree graphs that are roughly isometric to trees. | Returning to mixing times, Oliveira @cite_7 and independently Peres and Sousi @cite_6 showed that for reversible chains, not only does this hold, but in fact @math for any @math . Peres and Sousi then used this equivalence to show that if @math and @math are two lazy reversible Markov chains on the same tree @math with conductances bounded above and below by some strictly positive constants, then their mixing times agree up to constant factors. Our Theorem yields another simple proof of this: see Corollary . The equivalence @math in the more delicate case @math was later established by @cite_4 . | {
"cite_N": [
"@cite_4",
"@cite_6",
"@cite_7"
],
"mid": [
"2128271519",
"1989183425",
"1985980054"
],
"abstract": [
"Given an irreducible discrete time Markov chain on a finite state space, we consider the largest expected hitting time T(α) of a set of stationary measure at least α for α ∈ (0, 1). We obtain tight inequalities among the values of T(α) for different choices of α. One consequence is that T(α) ≤ T(1 2) α for all α < 1 2. As a corollary we have that if the chain is lazy in a certain sense as well as reversible, then T(1 2) is equivalent to the chain’s mixing time, answering a question of Peres. We furthermore demonstrate that the inequalities we establish give an almost everywhere pointwise limiting characterisation of possible hitting time functions T(α) over the domain α ∈ (0, 1 2].",
"We consider irreducible reversible discrete time Markov chains on a finite state space. Mixing times and hitting times are fundamental parameters of the chain. We relate them by showing that the mixing time of the lazy chain is equivalent to the maximum over initial states (x ) and large sets (A ) of the hitting time of (A ) starting from (x ). We also prove that the first time when averaging over two consecutive time steps is close to stationarity is equivalent to the mixing time of the lazy version of the chain.",
"Let @math . We show that that the mixing time of a continuous-time Markov chain on a finite state space is about as large as the largest expected hitting time of a subset of the state space with stationary measure @math . Suitably modified results hold in discrete time and or without the reversibility assumption. The key technical tool in the proof is the construction of random set @math such that the hitting time of @math is a light-tailed stationary time for the chain. We note that essentially the same results were obtained independently by Peres and Sousi."
]
} |
1610.07874 | 2768038943 | We provide new upper bounds for mixing times of general finite Markov chains. We use these bounds to show that the total variation mixing time is robust under rough isometry for bounded degree graphs that are roughly isometric to trees. | Another variant on the Lovász and Kannan result was obtained by Morris and Peres @cite_12 , who sharpened it in several ways, including allowing non-reversible chains and bounding the larger @math -mixing time. This mantle was then taken up by Goel, Montenegro and Tetali @cite_0 , who used the spectral profile @math instead of the conductance profile @math . For reversible chains @math may be thought of as the smallest eigenvalue of the Laplacian on the graph restricted to @math , minimized over all sets @math of invariant measure at most @math ; in fact the definition of @math used in @cite_0 is different, but for reversible chains it is equivalent to this definition up to constants provided that @math . Goel, Montenegro and Tetali gave an upper bound involving @math , and were able to recover the result of Morris and Peres @cite_12 in the reversible case by using a discrete Cheeger inequality to relate @math to @math . | {
"cite_N": [
"@cite_0",
"@cite_12"
],
"mid": [
"1996447604",
"2028920027"
],
"abstract": [
"On complete, non-compact manifolds and infinite graphs, Faber-Krahn inequalities have been used to estimate the rate of decay of the heat kernel. We develop this technique in the setting of finite Markov chains, proving upper and lower @math mixing time bounds via the spectral profile. This approach lets us recover and refine previous conductance-based bounds of mixing time (including the Morris-Peres result), and in general leads to sharper estimates of convergence rates. We apply this method to several models including groups with moderate growth, the fractal-like Viscek graphs, and the product group @math , to obtain tight bounds on the corresponding mixing times.",
"We show that a new probabilistic technique, recently introduced by the first author, yields the sharpest bounds obtained to date on mixing times of Markov chains in terms of isoperimetric properties of the state space (also known as conductance bounds or Cheeger inequalities). We prove that the bounds for mixing time in total variation obtained by Lovasz and Kannan, can be refined to apply to the maximum relative deviation |p n (x,y) π(y)−1| of the distribution at time n from the stationary distribution π. We then extend our results to Markov chains on infinite state spaces and to continuous-time chains. Our approach yields a direct link between isoperimetric inequalities and heat kernel bounds; previously, this link rested on analytic estimates known as Nash inequalities."
]
} |
1610.07874 | 2768038943 | We provide new upper bounds for mixing times of general finite Markov chains. We use these bounds to show that the total variation mixing time is robust under rough isometry for bounded degree graphs that are roughly isometric to trees. | Kozma @cite_13 showed that the upper bound on the @math -mixing time given in @cite_0 is not always correct up to constant factors. He then asked the general question ``is the mixing time a geometric property?'' and conjectured that the mixing time was robust (up to constant factors) under rough isometry for bounded degree graphs. | {
"cite_N": [
"@cite_0",
"@cite_13"
],
"mid": [
"1996447604",
"2118713653"
],
"abstract": [
"On complete, non-compact manifolds and infinite graphs, Faber-Krahn inequalities have been used to estimate the rate of decay of the heat kernel. We develop this technique in the setting of finite Markov chains, proving upper and lower @math mixing time bounds via the spectral profile. This approach lets us recover and refine previous conductance-based bounds of mixing time (including the Morris-Peres result), and in general leads to sharper estimates of convergence rates. We apply this method to several models including groups with moderate growth, the fractal-like Viscek graphs, and the product group @math , to obtain tight bounds on the corresponding mixing times.",
"We examine the spectral profile bound of Goel, Montenegro and Tetali for the L^1 mixing time of continuous-time random walk in reversible settings. We find that it is precise up to a log log factor, and that this log log factor cannot be improved."
]
} |
1610.07874 | 2768038943 | We provide new upper bounds for mixing times of general finite Markov chains. We use these bounds to show that the total variation mixing time is robust under rough isometry for bounded degree graphs that are roughly isometric to trees. | However, a construction of Ding and Peres @cite_10 shows that this cannot hold in general if the underlying graph is not a tree, even if it has bounded degree. The message that we take from this is that the total variation mixing time is not geometrically robust in general. One of the main aims of this article is to show that there is robustness amongst a wider class than just trees: indeed, holds if the graph has bounded degree and is roughly isometric to a tree. We will explain Ding and Peres' construction in more detail in Section . | {
"cite_N": [
"@cite_10"
],
"mid": [
"2063224062"
],
"abstract": [
"In this note, we demonstrate an instance of bounded-degree graphs of size @math , for which the total variation mixing time for the random walk is decreased by a factor of @math if we multiply the edge-conductances by bounded factors in a certain way."
]
} |
1610.08119 | 2542392706 | Describable visual facial attributes are now commonplace in human biometrics and affective computing, with existing algorithms even reaching a sufficient point of maturity for placement into commercial products. These algorithms model objective facets of facial appearance, such as hair and eye color, expression, and aspects of the geometry of the face. A natural extension, which has not been studied to any great extent thus far, is the ability to model subjective attributes that are assigned to a face based purely on visual judgements. For instance, with just a glance, our first impression of a face may lead us to believe that a person is smart, worthy of our trust, and perhaps even our admiration - regardless of the underlying truth behind such attributes. Psychologists believe that these judgements are based on a variety of factors such as emotional states, personality traits, and other physiognomic cues. But work in this direction leads to an interesting question: how do we create models for problems where there is no ground truth, only measurable behavior? In this paper, we introduce a new convolutional neural network-based regression framework that allows us to train predictive models of crowd behavior for social attribute assignment. Over images from the AFLW face database, these models demonstrate strong correlations with human crowd ratings. | Due to the proliferation of low-cost high performance computing resources (e.g., GPUs) and web-scale image data, large-scale image classification and labeling is now commonplace in computer vision. With respect to face images from the web, Labeled Faces in the Wild @cite_18 , YouTube Faces @cite_42 , MegaFace @cite_23 , Janus Benchmark A @cite_29 , and CelebA @cite_8 are all popular choices for a variety of facial modeling tasks beyond conventional face recognition.
Attribute prediction, where the objective is to assign semantically meaningful labels to faces in order to build a human interpretable description of facial appearance, is the particular task we concentrate on in this paper. | {
"cite_N": [
"@cite_18",
"@cite_8",
"@cite_29",
"@cite_42",
"@cite_23"
],
"mid": [
"2157558673",
"1834627138",
"1949778830",
"2019464758",
""
],
"abstract": [
"Unsupervised joint alignment of images has been demonstrated to improve performance on recognition tasks such as face verification. Such alignment reduces undesired variability due to factors such as pose, while only requiring weak supervision in the form of poorly aligned examples. However, prior work on unsupervised alignment of complex, real-world images has required the careful selection of feature representation based on hand-crafted image descriptors, in order to achieve an appropriate, smooth optimization landscape. In this paper, we instead propose a novel combination of unsupervised joint alignment with unsupervised feature learning. Specifically, we incorporate deep learning into the congealing alignment framework. Through deep learning, we obtain features that can represent the image at differing resolutions based on network depth, and that are tuned to the statistics of the specific data being aligned. In addition, we modify the learning algorithm for the restricted Boltzmann machine by incorporating a group sparsity penalty, leading to a topographic organization of the learned filters and improving subsequent alignment results. We apply our method to the Labeled Faces in the Wild database (LFW). Using the aligned images produced by our proposed unsupervised algorithm, we achieve higher accuracy in face verification compared to prior work in both unsupervised and supervised alignment. We also match the accuracy for the best available commercial method.",
"Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently. LNet is pre-trained by massive general object categories for face localization, while ANet is pre-trained by massive face identities for attribute prediction. This framework not only outperforms the state-of-the-art with a large margin, but also reveals valuable facts on learning face representation. (1) It shows how the performances of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images have strong indication of face locations. This fact enables training LNet for face localization with only image-level annotations, but without face bounding boxes or landmarks, which are required by all attribute recognition works. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and such concepts are significantly enriched after fine-tuning with attribute tags. Each attribute can be well explained with a sparse linear combination of these concepts.",
"Rapid progress in unconstrained face recognition has resulted in a saturation in recognition accuracy for current benchmark datasets. While important for early progress, a chief limitation in most benchmark datasets is the use of a commodity face detector to select face imagery. The implication of this strategy is restricted variations in face pose and other confounding factors. This paper introduces the IARPA Janus Benchmark A (IJB-A), a publicly available media in the wild dataset containing 500 subjects with manually localized face images. Key features of the IJB-A dataset are: (i) full pose variation, (ii) joint use for face recognition and face detection benchmarking, (iii) a mix of images and videos, (iv) wider geographic variation of subjects, (v) protocols supporting both open-set identification (1∶N search) and verification (1∶1 comparison), (vi) an optional protocol that allows modeling of gallery subjects, and (vii) ground truth eye and nose locations. The dataset has been developed using 1,501,267 million crowd sourced annotations. Baseline accuracies for both face detection and face recognition from commercial and open source algorithms demonstrate the challenge offered by this new unconstrained benchmark.",
"Recognizing faces in unconstrained videos is a task of mounting importance. While obviously related to face recognition in still images, it has its own unique characteristics and algorithmic requirements. Over the years several methods have been suggested for this problem, and a few benchmark data sets have been assembled to facilitate its study. However, there is a sizable gap between the actual application needs and the current state of the art. In this paper we make the following contributions. (a) We present a comprehensive database of labeled videos of faces in challenging, uncontrolled conditions (i.e., ‘in the wild’), the ‘YouTube Faces’ database, along with benchmark, pair-matching tests1. (b) We employ our benchmark to survey and compare the performance of a large variety of existing video face recognition techniques. Finally, (c) we describe a novel set-to-set similarity measure, the Matched Background Similarity (MBGS). This similarity is shown to considerably improve performance on the benchmark tests.",
""
]
} |
1610.08119 | 2542392706 | Describable visual facial attributes are now commonplace in human biometrics and affective computing, with existing algorithms even reaching a sufficient point of maturity for placement into commercial products. These algorithms model objective facets of facial appearance, such as hair and eye color, expression, and aspects of the geometry of the face. A natural extension, which has not been studied to any great extent thus far, is the ability to model subjective attributes that are assigned to a face based purely on visual judgements. For instance, with just a glance, our first impression of a face may lead us to believe that a person is smart, worthy of our trust, and perhaps even our admiration - regardless of the underlying truth behind such attributes. Psychologists believe that these judgements are based on a variety of factors such as emotional states, personality traits, and other physiognomic cues. But work in this direction leads to an interesting question: how do we create models for problems where there is no ground truth, only measurable behavior? In this paper, we introduce a new convolutional neural network-based regression framework that allows us to train predictive models of crowd behavior for social attribute assignment. Over images from the AFLW face database, these models demonstrate strong correlations with human crowd ratings. | Both @cite_30 and @cite_1 originally conceived of visual attributes as a development supporting object recognition, rather than a primary goal in and of itself. Faces, however, are a special case where standalone analysis supports applications in biometrics and affective computing. used facial attributes for face verification and image search @cite_4 . applied the statistical extreme value theory to facial attribute search spaces to create accurate multi-dimensional representations of attribute searches @cite_0 . 
modeled the relationships between different attributes to create more accurate multi-attribute searches @cite_33 . And captured the interdependencies of local face regions to increase classification accuracy @cite_16 . | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_33",
"@cite_1",
"@cite_0",
"@cite_16"
],
"mid": [
"2098411764",
"",
"2085660690",
"2134270519",
"2018006179",
"2143352446"
],
"abstract": [
"We propose to shift the goal of recognition from naming to describing. Doing so allows us not only to name familiar objects, but also: to report unusual aspects of a familiar object (“spotty dog”, not just “dog”); to say something about unfamiliar objects (“hairy and four-legged”, not just “unknown”); and to learn how to recognize new objects with few or no visual examples. Rather than focusing on identity assignment, we make inferring attributes the core problem of recognition. These attributes can be semantic (“spotty”) or discriminative (“dogs have it but sheep do not”). Learning attributes presents a major new challenge: generalization across object categories, not just across instances within a category. In this paper, we also introduce a novel feature selection method for learning attributes that generalize well across categories. We support our claims by thorough evaluation that provides insights into the limitations of the standard recognition paradigm of naming and demonstrates the new abilities provided by our attribute-based framework.",
"",
"We propose a novel approach for ranking and retrieval of images based on multi-attribute queries. Existing image retrieval methods train separate classifiers for each word and heuristically combine their outputs for retrieving multiword queries. Moreover, these approaches also ignore the interdependencies among the query terms. In contrast, we propose a principled approach for multi-attribute retrieval which explicitly models the correlations that are present between the attributes. Given a multi-attribute query, we also utilize other attributes in the vocabulary which are not present in the query, for ranking retrieval. Furthermore, we integrate ranking and retrieval within the same formulation, by posing them as structured prediction problems. Extensive experimental evaluation on the Labeled Faces in the Wild(LFW), FaceTracer and PASCAL VOC datasets show that our approach significantly outperforms several state-of-the-art ranking and retrieval methods.",
"We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image, collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, “Animals with Attributes”, of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes.",
"Recent work has shown that visual attributes are a powerful approach for applications such as recognition, image description and retrieval. However, fusing multiple attribute scores — as required during multi-attribute queries or similarity searches — presents a significant challenge. Scores from different attribute classifiers cannot be combined in a simple way; the same score for different attributes can mean different things. In this work, we show how to construct normalized “multi-attribute spaces” from raw classifier outputs, using techniques based on the statistical Extreme Value Theory. Our method calibrates each raw score to a probability that the given attribute is present in the image. We describe how these probabilities can be fused in a simple way to perform more accurate multiattribute searches, as well as enable attribute-based similarity searches. A significant advantage of our approach is that the normalization is done after-the-fact, requiring neither modification to the attribute classification system nor ground truth attribute annotations. We demonstrate results on a large data set of nearly 2 million face images and show significant improvements over prior work. We also show that perceptual similarity of search results increases by using contextual attributes.",
"Recent works have shown that facial attributes are useful in a number of applications such as face recognition and retrieval. However, estimating attributes in images with large variations remains a big challenge. This challenge is addressed in this paper. Unlike existing methods that assume the independence of attributes during their estimation, our approach captures the interdependencies of local regions for each attribute, as well as the high-order correlations between different attributes, which makes it more robust to occlusions and misdetection of face regions. First, we have modeled region interdependencies with a discriminative decision tree, where each node consists of a detector and a classifier trained on a local region. The detector allows us to locate the region, while the classifier determines the presence or absence of an attribute. Second, correlations of attributes and attribute predictors are modeled by organizing all of the decision trees into a large sum-product network (SPN), which is learned by the EM algorithm and yields the most probable explanation (MPE) of the facial attributes in terms of the region's localization and classification. Experimental results on a large data set with 22,400 images show the effectiveness of the proposed approach."
]
} |
1610.08119 | 2542392706 | Describable visual facial attributes are now commonplace in human biometrics and affective computing, with existing algorithms even reaching a sufficient point of maturity for placement into commercial products. These algorithms model objective facets of facial appearance, such as hair and eye color, expression, and aspects of the geometry of the face. A natural extension, which has not been studied to any great extent thus far, is the ability to model subjective attributes that are assigned to a face based purely on visual judgements. For instance, with just a glance, our first impression of a face may lead us to believe that a person is smart, worthy of our trust, and perhaps even our admiration - regardless of the underlying truth behind such attributes. Psychologists believe that these judgements are based on a variety of factors such as emotional states, personality traits, and other physiognomic cues. But work in this direction leads to an interesting question: how do we create models for problems where there is no ground truth, only measurable behavior? In this paper, we introduce a new convolutional neural network-based regression framework that allows us to train predictive models of crowd behavior for social attribute assignment. Over images from the AFLW face database, these models demonstrate strong correlations with human crowd ratings. | Certain traits such as age @cite_34 @cite_37 @cite_5 and gender @cite_14 @cite_5 have enjoyed disproportionate attention, but researchers also model numerous other facial attributes. The release of the large CelebA dataset @cite_8 also prompted several novel studies of facial attributes on all @math traits in the dataset @cite_7 @cite_31 @cite_9 . For a comprehensive review of facial attribute work in practical biometric systems, see the review authored by @cite_15 . | {
"cite_N": [
"@cite_37",
"@cite_14",
"@cite_7",
"@cite_8",
"@cite_15",
"@cite_9",
"@cite_5",
"@cite_31",
"@cite_34"
],
"mid": [
"",
"2952552312",
"2311038409",
"1834627138",
"297909767",
"",
"",
"",
"2151451863"
],
"abstract": [
"",
"We consider the task of predicting various traits of a person given an image of their face. We estimate both objective traits, such as gender, ethnicity and hair-color; as well as subjective traits, such as the emotion a person expresses or whether he is humorous or attractive. For sizeable experimentation, we contribute a new Face Attributes Dataset (FAD), having roughly 200,000 attribute labels for the above traits, for over 10,000 facial images. Due to the recent surge of research on Deep Convolutional Neural Networks (CNNs), we begin by using a CNN architecture for estimating facial attributes and show that they indeed provide an impressive baseline performance. To further improve performance, we propose a novel approach that incorporates facial landmark information for input images as an additional channel, helping the CNN learn better attribute-specific features so that the landmarks across various training images hold correspondence. We empirically analyse the performance of our method, showing consistent improvement over the baseline across traits.",
"Attribute recognition, particularly facial, extracts many labels for each image. While some multi-task vision problems can be decomposed into separate tasks and stages, e.g., training independent models for each task, for a growing set of problems joint optimization across all tasks has been shown to improve performance. We show that for deep convolutional neural network (DCNN) facial attribute extraction, multi-task optimization is better. Unfortunately, it can be difficult to apply joint optimization to DCNNs when training data is imbalanced, and re-balancing multi-label data directly is structurally infeasible, since adding removing data to balance one label will change the sampling of the other labels. This paper addresses the multi-label imbalance problem by introducing a novel mixed objective optimization network (MOON) with a loss function that mixes multiple task objectives with domain adaptive re-weighting of propagated loss. Experiments demonstrate that not only does MOON advance the state of the art in facial attribute recognition, but it also outperforms independently trained DCNNs using the same data. When using facial attributes for the LFW face recognition task, we show that our balanced (domain adapted) network outperforms the unbalanced trained network.",
"Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently. LNet is pre-trained by massive general object categories for face localization, while ANet is pre-trained by massive face identities for attribute prediction. This framework not only outperforms the state-of-the-art with a large margin, but also reveals valuable facts on learning face representation. (1) It shows how the performances of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images have strong indication of face locations. This fact enables training LNet for face localization with only image-level annotations, but without face bounding boxes or landmarks, which are required by all attribute recognition works. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and such concepts are significantly enriched after fine-tuning with attribute tags. Each attribute can be well explained with a sparse linear combination of these concepts.",
"Recent research has explored the possibility of extracting ancillary information from primary biometric traits viz., face, fingerprints, hand geometry, and iris. This ancillary information includes personal attributes, such as gender, age, ethnicity, hair color, height, weight, and so on. Such attributes are known as soft biometrics and have applications in surveillance and indexing biometric databases. These attributes can be used in a fusion framework to improve the matching accuracy of a primary biometric system (e.g., fusing face with gender information), or can be used to generate qualitative descriptions of an individual (e.g., young Asian female with dark eyes and brown hair). The latter is particularly useful in bridging the semantic gap between human and machine descriptions of the biometric data. In this paper, we provide an overview of soft biometrics and discuss some of the techniques that have been proposed to extract them from the image and the video data. We also introduce a taxonomy for organizing and classifying soft biometric attributes, and enumerate the strengths and limitations of these attributes in the context of an operational biometric system. Finally, we discuss open research problems in this field. This survey is intended for researchers and practitioners in the field of biometrics.",
"",
"",
"",
"Predicting the age of a person through face image analysis holds the potential to drive an extensive array of real world applications from human computer interaction and security to advertising and multimedia. In this paper the first application of the random forest for age regression is proposed. This method offers the advantage of few parameters that are relatively easy to initialize. Our method learns salient anthropometric quantities without a prior model. Significant implications include a dramatic reduction in training time while maintaining high regression accuracy throughout human development."
]
} |
1610.08119 | 2542392706 | Describable visual facial attributes are now commonplace in human biometrics and affective computing, with existing algorithms even reaching a sufficient point of maturity for placement into commercial products. These algorithms model objective facets of facial appearance, such as hair and eye color, expression, and aspects of the geometry of the face. A natural extension, which has not been studied to any great extent thus far, is the ability to model subjective attributes that are assigned to a face based purely on visual judgements. For instance, with just a glance, our first impression of a face may lead us to believe that a person is smart, worthy of our trust, and perhaps even our admiration - regardless of the underlying truth behind such attributes. Psychologists believe that these judgements are based on a variety of factors such as emotional states, personality traits, and other physiognomic cues. But work in this direction leads to an interesting question: how do we create models for problems where there is no ground truth, only measurable behavior? In this paper, we introduce a new convolutional neural network-based regression framework that allows us to train predictive models of crowd behavior for social attribute assignment. Over images from the AFLW face database, these models demonstrate strong correlations with human crowd ratings. | Also parallel to our work, and the current state-of-the-art attribute prediction, is the work of @cite_7 . employ a single custom Mixed Objective Optimization Network (MOON) to multi-task facial attribute recognition, minimizing the error of their networks over all forty traits of the CelebA dataset @cite_8 . We use our own implementation of the MOON architecture as a basis for each separate trait in our modeling. | {
"cite_N": [
"@cite_7",
"@cite_8"
],
"mid": [
"2311038409",
"1834627138"
],
"abstract": [
"Attribute recognition, particularly facial, extracts many labels for each image. While some multi-task vision problems can be decomposed into separate tasks and stages, e.g., training independent models for each task, for a growing set of problems joint optimization across all tasks has been shown to improve performance. We show that for deep convolutional neural network (DCNN) facial attribute extraction, multi-task optimization is better. Unfortunately, it can be difficult to apply joint optimization to DCNNs when training data is imbalanced, and re-balancing multi-label data directly is structurally infeasible, since adding or removing data to balance one label will change the sampling of the other labels. This paper addresses the multi-label imbalance problem by introducing a novel mixed objective optimization network (MOON) with a loss function that mixes multiple task objectives with domain adaptive re-weighting of propagated loss. Experiments demonstrate that not only does MOON advance the state of the art in facial attribute recognition, but it also outperforms independently trained DCNNs using the same data. When using facial attributes for the LFW face recognition task, we show that our balanced (domain adapted) network outperforms the unbalanced trained network.",
"Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently. LNet is pre-trained by massive general object categories for face localization, while ANet is pre-trained by massive face identities for attribute prediction. This framework not only outperforms the state-of-the-art with a large margin, but also reveals valuable facts on learning face representation. (1) It shows how the performances of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images have strong indication of face locations. This fact enables training LNet for face localization with only image-level annotations, but without face bounding boxes or landmarks, which are required by all attribute recognition works. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and such concepts are significantly enriched after fine-tuning with attribute tags. Each attribute can be well explained with a sparse linear combination of these concepts."
]
} |
1610.07650 | 2952300832 | Subspace clustering is the problem of partitioning unlabeled data points into a number of clusters so that data points within one cluster lie approximately on a low-dimensional linear subspace. In many practical scenarios, the dimensionality of data points to be clustered are compressed due to constraints of measurement, computation or privacy. In this paper, we study the theoretical properties of a popular subspace clustering algorithm named sparse subspace clustering (SSC) and establish formal success conditions of SSC on dimensionality-reduced data. Our analysis applies to the most general fully deterministic model where both underlying subspaces and data points within each subspace are deterministically positioned, and also a wide range of dimensionality reduction techniques (e.g., Gaussian random projection, uniform subsampling, sketching) that fall into a subspace embedding framework (Meng & Mahoney, 2013; , 2014). Finally, we apply our analysis to a differentially private SSC algorithm and established both privacy and utility guarantees of the proposed method. | proposed a novel dimensionality reduction algorithm to preserve independent subspace structures @cite_8 . They showed that by using @math , where @math is the number of subspaces, one can preserve the independence structure among subspaces. However, their analysis only applies to noiseless and independent subspaces, while our analysis applies even when the least principal angle between two subspaces diminishes and can tolerate a fair amount of noise. Furthermore, in our analysis the target dimension @math required depends on the maximum intrinsic subspace dimension @math instead of @math . Usually @math is quite small in practice . | {
"cite_N": [
"@cite_8"
],
"mid": [
"2132788060"
],
"abstract": [
"Modeling data as being sampled from a union of independent subspaces has been widely applied to a number of real world applications. However, dimensionality reduction approaches that theoretically preserve this independence assumption have not been well studied. Our key contribution is to show that @math projection vectors are sufficient for the independence preservation of any @math class data sampled from a union of independent subspaces. It is this non-trivial observation that we use for designing our dimensionality reduction technique. In this paper, we propose a novel dimensionality reduction algorithm that theoretically preserves this structure for a given dataset. We support our theoretical analysis with empirical results on both synthetic and real world data achieving results compared to popular dimensionality reduction techniques."
]
} |
1610.07650 | 2952300832 | Subspace clustering is the problem of partitioning unlabeled data points into a number of clusters so that data points within one cluster lie approximately on a low-dimensional linear subspace. In many practical scenarios, the dimensionality of data points to be clustered are compressed due to constraints of measurement, computation or privacy. In this paper, we study the theoretical properties of a popular subspace clustering algorithm named sparse subspace clustering (SSC) and establish formal success conditions of SSC on dimensionality-reduced data. Our analysis applies to the most general fully deterministic model where both underlying subspaces and data points within each subspace are deterministically positioned, and also a wide range of dimensionality reduction techniques (e.g., Gaussian random projection, uniform subsampling, sketching) that fall into a subspace embedding framework (Meng & Mahoney, 2013; , 2014). Finally, we apply our analysis to a differentially private SSC algorithm and established both privacy and utility guarantees of the proposed method. | Another relevant line of research is . In @cite_14 the authors proposed a neighborhood selection based algorithm to solve multiple matrix completion problems. However, @cite_14 requires an exponential number of data points to effectively recover the underlying subspaces. In contrast, in our analysis @math only needs to scale polynomially with @math . In addition, strong distributional assumptions are imposed in @cite_14 to ensure that data points within the same subspace lie close to each other, while our analysis is applicable to the fully general deterministic setting where no such distributional properties are required. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2613579343"
],
"abstract": [
"This paper considers the problem of completing a matrix with many missing entries under the assumption that the columns of the matrix belong to a union of multiple low-rank subspaces. This generalizes the standard low-rank matrix completion problem to situations in which the matrix rank can be quite high or even full rank. Since the columns belong to a union of subspaces, this problem may also be viewed as a missing-data version of the subspace clustering problem. Let X be an n×N matrix whose (complete) columns lie in a union of at most k subspaces, each of rank ≤ r 1 a constant depending on the usual incoherence conditions, the geometrical arrangement of subspaces, and the distribution of columns over the subspaces. The result is illustrated with numerical experiments and an application to Internet distance matrix completion and topology identification."
]
} |
1610.07384 | 2542779095 | In this paper, we study an NP-hard problem of a single machine scheduling minimizing the makespan, where the mixed-critical tasks with an uncertain processing time are scheduled. We show the derivation of F-shaped tasks from the probability distribution function of the processing time, then we study the structure of problems with two and three criticality levels for which we propose efficient exact algorithms and we present computational experiments for instances with up to 200 tasks. Moreover, we show that the considered problem is approximable within a constant multiplicative factor. | The concept of match-up scheduling was introduced by @cite_0 . In a case of a disruption, the goal is to construct a new schedule that matches the original one at some point in the future. This concept is mostly studied in the context of manufacturing problems @cite_5 . | {
"cite_N": [
"@cite_0",
"@cite_5"
],
"mid": [
"2058056324",
"2145151750"
],
"abstract": [
"This paper considers the rescheduling of operations with release dates and multiple resources when disruptions prevent the use of a preplanned schedule. The overall strategy is to follow the preschedule until a disruption occurs. After a disruption, part of the schedule is reconstructed to match up with the preschedule at some future time. Conditions are given for the optimality of this approach. A practical implementation is compared with the alternatives of preplanned static scheduling and myopic dynamic scheduling. A set of practical test problems demonstrates the advantages of the matchup approach. We also explore the solution of the matchup scheduling problem and show the advantages of an integer programming approach for allocating resources to jobs.",
"Abstract This paper addresses the problem of updating a machine schedule when either a random or an anticipated disruption occurs after a subset of the jobs has been processed. In such cases, continuing with the original schedule is likely to be suboptimal and may not even be feasible. The approach taken here differs from most rescheduling analysis in that the cost associated with the deviation between the original and the new schedule is included in the model. We concentrate on cases in which the shortest processing time (SPT) rule is optimal for the original problem. Both single and parallel two-machine environments are considered."
]
} |
1610.07384 | 2542779095 | In this paper, we study an NP-hard problem of a single machine scheduling minimizing the makespan, where the mixed-critical tasks with an uncertain processing time are scheduled. We show the derivation of F-shaped tasks from the probability distribution function of the processing time, then we study the structure of problems with two and three criticality levels for which we propose efficient exact algorithms and we present computational experiments for instances with up to 200 tasks. Moreover, we show that the considered problem is approximable within a constant multiplicative factor. | Taking a broader perspective, the problem can be viewed as a case of robust and stochastic optimization, due to uncertainty about transmission times while satisfying safety requirements. @cite_4 surveys robust versions of various optimization problems, though continuous rather than discrete ones. The field of stochastic optimization is reviewed by @cite_6 . They state that the integer variables introduced into stochastic programming complicate its solution, yielding suboptimal results even for small-sized problems. | {
"cite_N": [
"@cite_4",
"@cite_6"
],
"mid": [
"2131116400",
"2083259944"
],
"abstract": [
"In this paper we survey the primary research, both theoretical and applied, in the area of robust optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.",
"Abstract A large number of problems in production planning and scheduling, location, transportation, finance, and engineering design require that decisions be made in the presence of uncertainty. Uncertainty, for instance, governs the prices of fuels, the availability of electricity, and the demand for chemicals. A key difficulty in optimization under uncertainty is in dealing with an uncertainty space that is huge and frequently leads to very large-scale optimization models. Decision-making under uncertainty is often further complicated by the presence of integer decision variables to model logical and other discrete decisions in a multi-period or multi-stage setting. This paper reviews theory and methodology that have been developed to cope with the complexity of optimization problems under uncertainty. We discuss and contrast the classical recourse-based stochastic programming, robust stochastic programming, probabilistic (chance-constraint) programming, fuzzy programming, and stochastic dynamic programming. The advantages and shortcomings of these models are reviewed and illustrated through examples. Applications and the state-of-the-art in computations are also reviewed. Finally, we discuss several main areas for future development in this field. These include development of polynomial-time approximation schemes for multi-stage stochastic programs and the application of global optimization algorithms to two-stage and chance-constraint formulations."
]
} |
1610.07363 | 2540896251 | Breaking news leads to situations of fast-paced reporting in social media, producing all kinds of updates related to news stories, albeit with the caveat that some of those early updates tend to be rumours, i.e., information with an unverified status at the time of posting. Flagging information that is unverified can be helpful to avoid the spread of information that may turn out to be false. Detection of rumours can also feed a rumour tracking system that ultimately determines their veracity. In this paper we introduce a novel approach to rumour detection that learns from the sequential dynamics of reporting during breaking news in social media to detect rumours in new stories. Using Twitter datasets collected during five breaking news stories, we experiment with Conditional Random Fields as a sequential classifier that leverages context learnt during an event for rumour detection, which we compare with the state-of-the-art rumour detection system as well as other baselines. In contrast to existing work, our classifier does not need to observe tweets querying a piece of information to deem it a rumour, but instead we detect rumours from the tweet alone by exploiting context learnt during the event. Our classifier achieves competitive performance, beating the state-of-the-art classifier that relies on querying tweets with improved precision and recall, as well as outperforming our best baseline with nearly 40 improvement in terms of F1 score. The scale and diversity of our experiments reinforces the generalisability of our classifier. | Despite the increasing interest in analysing rumours in social media @cite_17 @cite_23 @cite_15 @cite_13 @cite_19 @cite_32 @cite_20 and the building of tools to deal with rumours that had been previously identified @cite_22 @cite_36 , there has been very little work in automatic rumour detection. Some of the work in rumour detection @cite_2 @cite_5 @cite_38 has been limited to finding rumours known . 
A classifier is fed with a set of predefined rumours (e.g., ), which then classifies new tweets as being related to one of the known rumours or not (e.g., would be about the rumour, while wouldn't). An approach like this can be useful for long-standing rumours, where one wants to identify relevant tweets to track the rumours that have already been identified; one may also refer to this task as rather than . However, this would not work for fast-paced contexts such as breaking news, where new, previously unseen rumours emerge, and one does not know the specific keywords linked to the rumour, which is yet to be detected. To deal with such situations, a classifier would need to learn generalisable patterns that will help identify new rumours during breaking stories. | {
"cite_N": [
"@cite_38",
"@cite_22",
"@cite_36",
"@cite_32",
"@cite_19",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_13",
"@cite_20",
"@cite_17"
],
"mid": [
"",
"2031490232",
"",
"",
"",
"",
"2159981908",
"2057026632",
"",
"1906632846",
"2281420995",
""
],
"abstract": [
"",
"Information that propagates through social networks can carry a lot of false claims. For example, rumors on certain topics can propagate rapidly leading to a large number of nodes reporting the same (incorrect) observations. In this paper, we describe an approach for finding the rumor source and assessing the likelihood that a piece of information is in fact a rumor, in the absence of data provenance information. We model the social network as a directed graph, where vertices represent individuals and directed edges represent information flow (e.g., who follows whom on Twitter). A number of monitor nodes are injected into the network whose job is to report data they receive. Our algorithm identifies rumors and their sources by observing which of the monitors received the given piece of information and which did not. We show that, with a sufficient number of monitor nodes, it is possible to recognize most rumors and their sources with high accuracy.",
"",
"",
"",
"",
"A rumor is commonly defined as a statement whose true value is unverifiable. Rumors may spread misinformation (false information) or disinformation (deliberately false information) on a network of people. Identifying rumors is crucial in online social media where large amounts of information are easily spread across a large network by sources with unverified authority. In this paper, we address the problem of rumor detection in microblogs and explore the effectiveness of 3 categories of features: content-based, network-based, and microblog-specific memes for correctly identifying rumors. Moreover, we show how these features are also effective in identifying disinformers, users who endorse a rumor and further help it to spread. We perform our experiments on more than 10,000 manually annotated tweets collected from Twitter and show how our retrieval model achieves more than 0.95 in Mean Average Precision (MAP). Finally, we believe that our dataset is the first large-scale dataset on rumor detection. It can open new dimensions in analyzing online misinformation and other aspects of microblog conversations.",
"Twitter is useful in a situation of disaster for communication, announcement, request for rescue and so on. On the other hand, it causes a negative by-product, spreading rumors. This paper describe how rumors have spread after a disaster of earthquake, and discuss how can we deal with them. We first investigated actual instances of rumor after the disaster. And then we attempted to disclose characteristics of those rumors. Based on the investigation we developed a system which detects candidates of rumor from twitter and then evaluated it. The result of experiment shows the proposed algorithm can find rumors with acceptable accuracy.",
"",
"Social media are frequently rife with rumours, and the study of rumour conversational aspects can provide valuable knowledge about how rumours evolve over time and are discussed by others who support or deny them. In this work, we present a new annotation scheme for capturing rumour-bearing conversational threads, as well as the crowdsourcing methodology used to create high quality, human annotated datasets of rumourous conversations from social media. The rumour annotation scheme is validated through comparison between crowdsourced and reference annotations. We also found that only a third of the tweets in rumourous conversations contribute towards determining the veracity of rumours, which reinforces the need for developing methods to extract the relevant pieces of information automatically.",
"As breaking news unfolds people increasingly rely on social media to stay abreast of the latest updates. The use of social media in such situations comes with the caveat that new information being released piecemeal may encourage rumours, many of which remain unverified long after their point of release. Little is known, however, about the dynamics of the life cycle of a social media rumour. In this paper we present a methodology that has enabled us to collect, identify and annotate a dataset of 330 rumour threads (4,842 tweets) associated with 9 newsworthy events. We analyse this dataset to understand how users spread, support, or deny rumours that are later proven true or false, by distinguishing two levels of status in a rumour life cycle i.e., before and after its veracity status is resolved. The identification of rumours associated with each event, as well as the tweet that resolved each rumour as true or false, was performed by journalist members of the research team who tracked the events in real time. Our study shows that rumours that are ultimately proven true tend to be resolved faster than those that turn out to be false. Whilst one can readily see users denying rumours once they have been debunked, users appear to be less capable of distinguishing true from false rumours when their veracity remains in question. In fact, we show that the prevalent tendency for users is to support every unverified rumour. We also analyse the role of different types of users, finding that highly reputable users such as news organisations endeavour to post well-grounded statements, which appear to be certain and accompanied by evidence. Nevertheless, these often prove to be unverified pieces of information that give rise to false rumours. Our study reinforces the need for developing robust machine learning techniques that can provide assistance in real time for assessing the veracity of rumours. The findings of our study provide useful insights for achieving this aim.",
""
]
} |
1610.07569 | 2542357048 | Vector representations of words have heralded a transformational approach to classical problems in NLP; the most popular example is word2vec. However, a single vector does not suffice to model the polysemous nature of many (frequent) words, i.e., words with multiple meanings. In this paper, we propose a three-fold approach for unsupervised polysemy modeling: (a) context representations, (b) sense induction and disambiguation and (c) lexeme (as a word and sense pair) representations. A key feature of our work is the finding that a sentence containing a target word is well represented by a low rank subspace, instead of a point in a vector space. We then show that the subspaces associated with a particular sense of the target word tend to intersect over a line (one-dimensional subspace), which we use to disambiguate senses using a clustering algorithm that harnesses the Grassmannian geometry of the representations. The disambiguation algorithm, which we call @math -Grassmeans, leads to a procedure to label the different senses of the target word in the corpus -- yielding lexeme vector representations, all in an unsupervised manner starting from a large (Wikipedia) corpus in English. Apart from several prototypical target (word,sense) examples and a host of empirical studies to intuit and justify the various geometric representations, we validate our algorithms on standard sense induction and disambiguation datasets and present new state-of-the-art results. | There are two main approaches to model polysemy: one is supervised and uses linguistic resources @cite_29 @cite_31 , and the other is unsupervised, inferring senses directly from a large text corpus @cite_25 @cite_32 @cite_14 @cite_23 . Our approach belongs to the latter category. | {
"cite_N": [
"@cite_14",
"@cite_29",
"@cite_32",
"@cite_23",
"@cite_31",
"@cite_25"
],
"mid": [
"2952802852",
"",
"2949364118",
"2963639656",
"2125786288",
"2164019165"
],
"abstract": [
"Learning a distinct representation for each sense of an ambiguous word could lead to more powerful and fine-grained models of vector-space representations. Yet while multi-sense' methods have been proposed and tested on artificial word-similarity tasks, we don't know if they improve real natural language understanding tasks. In this paper we introduce a multi-sense embedding model based on Chinese Restaurant Processes that achieves state of the art performance on matching human word similarity judgments, and propose a pipelined architecture for incorporating multi-sense embeddings into language understanding. We then test the performance of our model on part-of-speech tagging, named entity recognition, sentiment analysis, semantic relation identification and semantic relatedness, controlling for embedding dimensionality. We find that multi-sense embeddings do improve performance on some tasks (part-of-speech tagging, semantic relation identification, semantic relatedness) but not on others (named entity recognition, various forms of sentiment analysis). We discuss how these differences may be caused by the different role of word sense information in each of the tasks. The results highlight the importance of testing embedding models in real applications.",
"",
"There is rising interest in vector-space word embeddings and their use in NLP, especially given recent methods for their fast estimation at very large scale. Nearly all this work, however, assumes a single vector per word type ignoring polysemy and thus jeopardizing their usefulness for downstream tasks. We present an extension to the Skip-gram model that efficiently learns multiple embeddings per word type. It differs from recent related work by jointly performing word sense discrimination and embedding learning, by non-parametrically estimating the number of senses per word type, and by its efficiency and scalability. We present new state-of-the-art results in the word similarity in context task and demonstrate its scalability by training with one machine on a corpus of nearly 1 billion tokens in less than 6 hours.",
"Word embeddings are ubiquitous in NLP and information retrieval, but it is unclear what they represent when the word is polysemous. Here it is shown that multiple word senses reside in linear superposition within the word embedding and simple sparse coding can recover vectors that approximately capture the senses. The success of our approach, which applies to several embedding methods, is mathematically explained using a variant of the random walk on discourses model (, 2016). A novel aspect of our technique is that each extracted word sense is accompanied by one of about 2000 \"discourse atoms\" that gives a succinct description of which other words co-occur with that word sense. Discourse atoms can be of independent interest, and make the method potentially more useful. Empirical tests are used to verify and support the theory.",
"We present , a system to learn embeddings for synsets and lexemes. It is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The synset lexeme embeddings obtained live in the same vector space as the word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet as a lexical resource, but AutoExtend can be easily applied to other resources like Freebase. AutoExtend achieves state-of-the-art performance on word similarity and word sense disambiguation tasks.",
"Unsupervised word representations are very useful in NLP tasks both as inputs to learning algorithms and as extra word features in NLP systems. However, most of these models are built with only local context and one representation per word. This is problematic because words are often polysemous and global context can also provide useful information for learning word meanings. We present a new neural network architecture which 1) learns word embeddings that better capture the semantics of words by incorporating both local and global document context, and 2) accounts for homonymy and polysemy by learning multiple embeddings per word. We introduce a new dataset with human judgments on pairs of words in sentential context, and evaluate our model on it, showing that our model outperforms competitive baselines and other neural language models."
]
} |
1610.07569 | 2542357048 | Vector representations of words have heralded a transformational approach to classical problems in NLP; the most popular example is word2vec. However, a single vector does not suffice to model the polysemous nature of many (frequent) words, i.e., words with multiple meanings. In this paper, we propose a three-fold approach for unsupervised polysemy modeling: (a) context representations, (b) sense induction and disambiguation and (c) lexeme (as a word and sense pair) representations. A key feature of our work is the finding that a sentence containing a target word is well represented by a low rank subspace, instead of a point in a vector space. We then show that the subspaces associated with a particular sense of the target word tend to intersect over a line (one-dimensional subspace), which we use to disambiguate senses using a clustering algorithm that harnesses the Grassmannian geometry of the representations. The disambiguation algorithm, which we call @math -Grassmeans, leads to a procedure to label the different senses of the target word in the corpus -- yielding lexeme vector representations, all in an unsupervised manner starting from a large (Wikipedia) corpus in English. Apart from several prototypical target (word,sense) examples and a host of empirical studies to intuit and justify the various geometric representations, we validate our algorithms on standard sense induction and disambiguation datasets and present new state-of-the-art results. | Global structure: @cite_23 hypothesizes that the global word representation is a linear combination of its sense vectors. This linear algebraic hypothesis is validated by a surprising experiment wherein a single artificial polysemous word is created by merging two random words. The experiment is ingenious and the finding quite surprising but was under a restricted setting: a single artificial polysemous word is created by merging only two random words. 
Upon enlarging these parameters (i.e., many artificial polysemous words are created by merging multiple random words) to better suit the landscape of polysemy in natural language, we find the linear-algebraic hypothesis to be fragile: Figure plots the linearity fit as a function of the number of artificial polysemous words created, and also as a function of how many words were merged to create any polysemous word. We see that the linearity fit worsens fairly quickly as the number of polysemous words increases, a scenario that is typical of natural languages. | {
"cite_N": [
"@cite_23"
],
"mid": [
"2963639656"
],
"abstract": [
"Word embeddings are ubiquitous in NLP and information retrieval, but it is unclear what they represent when the word is polysemous. Here it is shown that multiple word senses reside in linear superposition within the word embedding and simple sparse coding can recover vectors that approximately capture the senses. The success of our approach, which applies to several embedding methods, is mathematically explained using a variant of the random walk on discourses model (, 2016). A novel aspect of our technique is that each extracted word sense is accompanied by one of about 2000 \"discourse atoms\" that gives a succinct description of which other words co-occur with that word sense. Discourse atoms can be of independent interest, and make the method potentially more useful. Empirical tests are used to verify and support the theory."
]
} |
1610.07569 | 2542357048 | Vector representations of words have heralded a transformational approach to classical problems in NLP; the most popular example is word2vec. However, a single vector does not suffice to model the polysemous nature of many (frequent) words, i.e., words with multiple meanings. In this paper, we propose a three-fold approach for unsupervised polysemy modeling: (a) context representations, (b) sense induction and disambiguation and (c) lexeme (as a word and sense pair) representations. A key feature of our work is the finding that a sentence containing a target word is well represented by a low rank subspace, instead of a point in a vector space. We then show that the subspaces associated with a particular sense of the target word tend to intersect over a line (one-dimensional subspace), which we use to disambiguate senses using a clustering algorithm that harnesses the Grassmannian geometry of the representations. The disambiguation algorithm, which we call @math -Grassmeans, leads to a procedure to label the different senses of the target word in the corpus -- yielding lexeme vector representations, all in an unsupervised manner starting from a large (Wikipedia) corpus in English. Apart from several prototypical target (word,sense) examples and a host of empirical studies to intuit and justify the various geometric representations, we validate our algorithms on standard sense induction and disambiguation datasets and present new state-of-the-art results. | The experiment, whose results are depicted in Figure , is designed to mimic these underlying simplifications of the proof in @cite_23 : we train word vectors via the skip-gram version of word2vec using the following steps. (a) We initialize the newly generated artificial polysemous words with random vectors; (b) we initialize, and do not update, the (two sets of) vector representations of other words @math with the existing word vectors.
The embeddings are learnt on the 2016-06-01 Wikipedia dump, tokenized via WikiExtractor ( http://medialab.di.unipi.it/wiki/Wikipedia_Extractor ); words that occur fewer than 1,000 times are ignored; words being merged are chosen randomly in proportion to their frequencies. Due to computational constraints, each instance of mergers is subjected to a single run of the word2vec algorithm. | {
"cite_N": [
"@cite_23"
],
"mid": [
"2963639656"
],
"abstract": [
"Word embeddings are ubiquitous in NLP and information retrieval, but it is unclear what they represent when the word is polysemous. Here it is shown that multiple word senses reside in linear superposition within the word embedding and simple sparse coding can recover vectors that approximately capture the senses. The success of our approach, which applies to several embedding methods, is mathematically explained using a variant of the random walk on discourses model (, 2016). A novel aspect of our technique is that each extracted word sense is accompanied by one of about 2000 \"discourse atoms\" that gives a succinct description of which other words co-occur with that word sense. Discourse atoms can be of independent interest, and make the method potentially more useful. Empirical tests are used to verify and support the theory."
]
} |
1610.07569 | 2542357048 | Vector representations of words have heralded a transformational approach to classical problems in NLP; the most popular example is word2vec. However, a single vector does not suffice to model the polysemous nature of many (frequent) words, i.e., words with multiple meanings. In this paper, we propose a three-fold approach for unsupervised polysemy modeling: (a) context representations, (b) sense induction and disambiguation and (c) lexeme (as a word and sense pair) representations. A key feature of our work is the finding that a sentence containing a target word is well represented by a low rank subspace, instead of a point in a vector space. We then show that the subspaces associated with a particular sense of the target word tend to intersect over a line (one-dimensional subspace), which we use to disambiguate senses using a clustering algorithm that harnesses the Grassmannian geometry of the representations. The disambiguation algorithm, which we call @math -Grassmeans, leads to a procedure to label the different senses of the target word in the corpus -- yielding lexeme vector representations, all in an unsupervised manner starting from a large (Wikipedia) corpus in English. Apart from several prototypical target (word,sense) examples and a host of empirical studies to intuit and justify the various geometric representations, we validate our algorithms on standard sense induction and disambiguation datasets and present new state-of-the-art results. | Local structure: @cite_25 @cite_32 model a context by the average of its constituent word embeddings and use this average vector as a feature to induce word senses by partitioning context instances into groups and to disambiguate word senses for new context instances. 
@cite_14 models the senses of a target word in a given context by a Chinese restaurant process, also models contexts by averaging their constituent word embeddings, and then applies standard word embedding algorithms (continuous bag-of-words (CBOW) or skip-gram). Our approach is broadly similar in spirit to these approaches, in that a local lexical-level model is conceived, but we depart in several ways, the most prominent being the modeling of contexts as subspaces (and not as vectors, which is what an average of constituent word embeddings would entail). | {
"cite_N": [
"@cite_14",
"@cite_32",
"@cite_25"
],
"mid": [
"2952802852",
"2949364118",
"2164019165"
],
"abstract": [
"Learning a distinct representation for each sense of an ambiguous word could lead to more powerful and fine-grained models of vector-space representations. Yet while multi-sense' methods have been proposed and tested on artificial word-similarity tasks, we don't know if they improve real natural language understanding tasks. In this paper we introduce a multi-sense embedding model based on Chinese Restaurant Processes that achieves state of the art performance on matching human word similarity judgments, and propose a pipelined architecture for incorporating multi-sense embeddings into language understanding. We then test the performance of our model on part-of-speech tagging, named entity recognition, sentiment analysis, semantic relation identification and semantic relatedness, controlling for embedding dimensionality. We find that multi-sense embeddings do improve performance on some tasks (part-of-speech tagging, semantic relation identification, semantic relatedness) but not on others (named entity recognition, various forms of sentiment analysis). We discuss how these differences may be caused by the different role of word sense information in each of the tasks. The results highlight the importance of testing embedding models in real applications.",
"There is rising interest in vector-space word embeddings and their use in NLP, especially given recent methods for their fast estimation at very large scale. Nearly all this work, however, assumes a single vector per word type ignoring polysemy and thus jeopardizing their usefulness for downstream tasks. We present an extension to the Skip-gram model that efficiently learns multiple embeddings per word type. It differs from recent related work by jointly performing word sense discrimination and embedding learning, by non-parametrically estimating the number of senses per word type, and by its efficiency and scalability. We present new state-of-the-art results in the word similarity in context task and demonstrate its scalability by training with one machine on a corpus of nearly 1 billion tokens in less than 6 hours.",
"Unsupervised word representations are very useful in NLP tasks both as inputs to learning algorithms and as extra word features in NLP systems. However, most of these models are built with only local context and one representation per word. This is problematic because words are often polysemous and global context can also provide useful information for learning word meanings. We present a new neural network architecture which 1) learns word embeddings that better capture the semantics of words by incorporating both local and global document context, and 2) accounts for homonymy and polysemy by learning multiple embeddings per word. We introduce a new dataset with human judgments on pairs of words in sentential context, and evaluate our model on it, showing that our model outperforms competitive baselines and other neural language models."
]
} |
1610.07459 | 2542635631 | Network latency can have a significant impact on the performance of transactional storage systems, particularly in wide area or geo-distributed deployments. To reduce latency, systems typically rely on a cache to service read-requests closer to the client. However, caches are not effective for write-heavy workloads, which have to be processed by the storage system in order to maintain serializability. This paper presents a new technique, called optimistic abort, which reduces network latency for high-contention, write-heavy workloads by identifying transactions that will abort as early as possible, and aborting them before they reach the store. We have implemented optimistic abort in a system called Gotthard, which leverages recent advances in network data plane programmability to execute transaction processing logic directly in network devices. Gotthard examines network traffic to observe and log transaction requests. If Gotthard suspects that a transaction is likely to be aborted at the store, it aborts the transaction early by re-writing the packet header, and routing the packets back to the client. Gotthard significantly reduces the overall latency and improves the throughput for high-contention workloads. | * Proxies and caches. Using a proxy to extend distributed services is a well-established idea @cite_49 that has been widely adopted @cite_5 @cite_34 @cite_35 @cite_27 . Proxies are often used to scale services by caching copies of data closer to clients, as with content distribution networks (CDNs) @cite_19 @cite_37 @cite_1 . CDNs are typically used for static content, although there are examples of proxies serving dynamic content @cite_28 . | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_28",
"@cite_1",
"@cite_19",
"@cite_27",
"@cite_49",
"@cite_5",
"@cite_34"
],
"mid": [
"2131414602",
"151129402",
"2131472583",
"1969490101",
"2051063627",
"",
"1541859602",
"2139703775",
"2039573521"
],
"abstract": [
"The Rover toolkit combines relocatable dynamic objects and queued remote procedure calls to provide unique services for roving mobile applications. A relocatable dynamic object is an object with a well-defined interface that can be dynamically loaded into a client computer from a server computer (or vice versa) to reduce client-server communication requirements. Queued remote procedure call is a communication system that permits applications to continue to make non-blocking remote procedure call requests even when a host is disconnected, with requests and responses being exchanged upon network reconnection. The challenges of mobile environments include intermittent connectivity, limited bandwidth, and channel-use optimization. Experimental results from a Rover-based mail reader, calendar program, and two non-blocking versions of World-Wide Web browsers show that Rover's services are a good match to these challenges. The Rover toolkit also offers advantages for workstation applications by providing a uniform distributed object architecture for code shipping, object caching, and asynchronous object invocation.",
"CoralCDN is a peer-to-peer content distribution network that allows a user to run a web site that offers high performance and meets huge demand, all for the price of a cheap broadband Internet connection. Volunteer sites that run CoralCDN automatically replicate content as a side effect of users accessing it. Publishing through CoralCDN is as simple as making a small change to the hostname in an object's URL; a peer-to-peer DNS layer transparently redirects browsers to nearby participating cache nodes, which in turn cooperate to minimize load on the origin web server. One of the system's key goals is to avoid creating hot spots that might dissuade volunteers and hurt performance. It achieves this through Coral, a latency-optimized hierarchical indexing infrastructure based on a novel abstraction called a distributed sloppy hash table, or DSHT.",
"Making the internet's edge easily extensible fosters collaboration and innovation on web-based applications, but also raises the problem of how to secure the execution platform. This paper presents Na Kika, an edge-side computing network, that addresses this tension between extensibility and security; it safely opens the internet's edge to all content producers and consumers. First, Na Kika expresses services as scripts, which are selected through predicates on HTTP messages and composed with each other into a pipeline of content processing steps. Second, Na Kika isolates individual scripts from each other and, instead of enforcing inflexible apriori quotas, limits resource consumption based on overall system congestion. Third, Na Kika expresses security policies through the same predicates as regular application functionality, with the result that policies are as easily extensible as hosted code and that enforcement is an integral aspect of content processing. Additionally, Na Kika leverages a structured overlay network to support cooperative caching and incremental deployment with low administrative overhead.",
"It is becoming increasingly common to construct network services using redundant resources geographically distributed across the Internet. Content Distribution Networks are a prime example. Such systems distribute client requests to an appropriate server based on a variety of factors---e.g., server load, network proximity, cache locality---in an effort to reduce response time and increase the system capacity under load. This paper explores the design space of strategies employed to redirect requests, and defines a class of new algorithms that carefully balance load, locality, and proximity. We use large-scale detailed simulations to evaluate the various strategies. These simulations clearly demonstrate the effectiveness of our new algorithms, which yield a 60--91 improvement in system capacity when compared with the best published CDN technology, yet user-perceived response latency remains low and the system scales well with the number of servers.",
"Proxy-based transcoding adapts Web content to be a better match for client capabilities (such as screen size and color depth) and last-hop bandwidths. Traditional transcoding breaks the end-to-end model of the Web, because the proxy does not know the semantics of the content. Server-directed transcoding preserves end-to-end semantics while supporting aggressive content transformations.We show how server-directed transcoding can be integrated into the HTTP protocol and into the implementation of a proxy. We discuss several useful transformations for image content, and present measurements of the performance impacts. Our results demonstrate that server-directed transcoding is a natural extension to HTTP, can be implemented without great complexity, and can provide good performance when carefully implemented.",
"",
"We present a novel view of the structuring of distributed systems, and a few examples of its utilization in an object-oriented context. In a distributed system, the structure of a service or subsystem may be complex, being implemented as a set of communicating server objects; however, this complexity of structure should not be apparent to the client. In our proposal, a client must rst acquire a local object, called a proxy, in order to use such a service. The proxy represents the whole set of servers. The client directs all its communication to the proxy. The proxy, and all the objects it represents, collectively form one distributed object, which is not decomposable by the client. Any higher-level communication protocols are internal to this distributed object. Such a view provides a powerful structuring framework for distributed systems; it can be implemented cheaply without sacricing much exibility. It subsumes may previous proposals, but encourages better information-hiding and encapsulation",
"This article summarizes the results of the BARWAN project, which focused on enabling truly useful mobile networking across an extremely wide variety of real-world networks and mobile devices. We present the overall architecture, summarize key results, and discuss four broad lessons learned along the way. The architecture enables seamless roaming in a single logical overlay network composed of many heterogeneous (mostly wireless) physical networks, and provides significantly better TCP performance for these networks. It also provides complex scalable and highly available services to enable powerful capabilities across a very wide range of mobile devices, and mechanisms for automated discovery and configuration of localized services. Four broad themes arose from the project: (1) the power of dynamic adaptation as a generic solution to heterogeneity, (2) the importance of cross-layer information, such as the exploitation of TCP semantics in the link layer, (3) the use of agents in the infrastructure to enable new abilities and to hide new problems from legacy servers and protocol stacks, and (4) the importance of soft state for such agents for simplicity, ease of fault recovery, and scalability.",
"We propose a new paradigm for network file system design: serverless network file systems . While traditional network file systems rely on a central server machine, a serverless system utilizes workstations cooperating as peers to provide all file system services. Any machine in the system can store, cache, or control any block of data. Our approach uses this location independence, in combination with fast local area networks, to provide better performance and scalability than traditional file systems. Furthermore, because any machine in the system can assume the responsibilities of a failed component, our serverless design also provides high availability via redundatn data storage. To demonstrate our approach, we have implemented a prototype serverless network file system called xFS. Preliminary performance measurements suggest that our architecture achieves its goal of scalability. For instance, in a 32-node xFS system with 32 active clients, each client receives nearly as much read or write throughput as it would see if it were the only active client."
]
} |
1610.07459 | 2542635631 | Network latency can have a significant impact on the performance of transactional storage systems, particularly in wide area or geo-distributed deployments. To reduce latency, systems typically rely on a cache to service read-requests closer to the client. However, caches are not effective for write-heavy workloads, which have to be processed by the storage system in order to maintain serializability. This paper presents a new technique, called optimistic abort, which reduces network latency for high-contention, write-heavy workloads by identifying transactions that will abort as early as possible, and aborting them before they reach the store. We have implemented optimistic abort in a system called Gotthard, which leverages recent advances in network data plane programmability to execute transaction processing logic directly in network devices. Gotthard examines network traffic to observe and log transaction requests. If Gotthard suspects that a transaction is likely to be aborted at the store, it aborts the transaction early by re-writing the packet header, and routing the packets back to the client. Gotthard significantly reduces the overall latency and improves the throughput for high-contention workloads. | Prior work has also explored leveraging the network to dynamically route requests to proxies @cite_1 . Notably, recent work on SwitchKV @cite_29 uses OpenFlow-enabled switches to dynamically route read requests to proxy caches. | {
"cite_N": [
"@cite_29",
"@cite_1"
],
"mid": [
"2319809716",
"1969490101"
],
"abstract": [
"SwitchKV is a new key-value store system design that combines high-performance cache nodes with resource-constrained backend nodes to provide load balancing in the face of unpredictable workload skew. The cache nodes absorb the hottest queries so that no individual backend node is over-burdened. Compared with previous designs, SwitchKV exploits SDN techniques and deeply optimized switch hardware to enable efficient content-based routing. Programmable network switches keep track of cached keys and route requests to the appropriate nodes at line speed, based on keys encoded in packet headers. A new hybrid caching strategy keeps cache and switch forwarding rules updated with low overhead and ensures that system load is always well-balanced under rapidly changing workloads. Our evaluation results demonstrate that SwitchKV can achieve up to 5× throughput and 3× latency improvements over traditional system designs.",
"It is becoming increasingly common to construct network services using redundant resources geographically distributed across the Internet. Content Distribution Networks are a prime example. Such systems distribute client requests to an appropriate server based on a variety of factors---e.g., server load, network proximity, cache locality---in an effort to reduce response time and increase the system capacity under load. This paper explores the design space of strategies employed to redirect requests, and defines a class of new algorithms that carefully balance load, locality, and proximity. We use large-scale detailed simulations to evaluate the various strategies. These simulations clearly demonstrate the effectiveness of our new algorithms, which yield a 60--91 improvement in system capacity when compared with the best published CDN technology, yet user-perceived response latency remains low and the system scales well with the number of servers."
]
} |
1610.07459 | 2542635631 | Network latency can have a significant impact on the performance of transactional storage systems, particularly in wide area or geo-distributed deployments. To reduce latency, systems typically rely on a cache to service read-requests closer to the client. However, caches are not effective for write-heavy workloads, which have to be processed by the storage system in order to maintain serializability. This paper presents a new technique, called optimistic abort, which reduces network latency for high-contention, write-heavy workloads by identifying transactions that will abort as early as possible, and aborting them before they reach the store. We have implemented optimistic abort in a system called Gotthard, which leverages recent advances in network data plane programmability to execute transaction processing logic directly in network devices. Gotthard examines network traffic to observe and log transaction requests. If Gotthard suspects that a transaction is likely to be aborted at the store, it aborts the transaction early by re-writing the packet header, and routing the packets back to the client. Gotthard significantly reduces the overall latency and improves the throughput for high-contention workloads. | * Geo-distributed databases. Many recent works deal with geo-distribution in the context of transactional databases, most with a focus on replication and partitioning. Works such as @cite_46 @cite_20 @cite_59 @cite_26 @cite_47 aim at providing strong consistency (i.e., serializability) over wide-area networks. To improve latency and availability, many works also propose weaker consistency criteria such as Parallel Snapshot Isolation @cite_2 , causal consistency @cite_8 @cite_60 and RAMP transactions @cite_11 . PLANET @cite_21 exposes transaction state to the application, enabling speculative processing and faster revocation using the paradigm of @cite_50 .
Gotthard's approach is complementary to the aforementioned research, and combining our approach with a full-fledged database solution is part of our future work. | {
"cite_N": [
"@cite_26",
"@cite_8",
"@cite_46",
"@cite_60",
"@cite_21",
"@cite_59",
"@cite_2",
"@cite_50",
"@cite_47",
"@cite_20",
"@cite_11"
],
"mid": [
"",
"2161730338",
"2130923111",
"",
"2077031886",
"",
"1969925795",
"2161407849",
"",
"",
"1995339490"
],
"abstract": [
"",
"Geo-replicated, distributed data stores that support complex online applications, such as social networks, must provide an \"always-on\" experience where operations always complete with low latency. Today's systems often sacrifice strong consistency to achieve these goals, exposing inconsistencies to their clients and necessitating complex application logic. In this paper, we identify and define a consistency model---causal consistency with convergent conflict handling, or causal+---that is the strongest achieved under these constraints. We present the design and implementation of COPS, a key-value store that delivers this consistency model across the wide-area. A key contribution of COPS is its scalability, which can enforce causal dependencies between keys stored across an entire cluster, rather than a single server like previous systems. The central approach in COPS is tracking and explicitly checking whether causal dependencies between keys are satisfied in the local cluster before exposing writes. Further, in COPS-GT, we introduce get transactions in order to obtain a consistent view of multiple keys without locking or blocking. Our evaluation shows that COPS completes operations in less than a millisecond, provides throughput similar to previous systems when using one server per cluster, and scales well as we increase the number of servers in each cluster. It also shows that COPS-GT provides similar latency, throughput, and scaling to COPS for common workloads.",
"Megastore is a storage system developed to meet the requirements of today’s interactive online services. Megastore blends the scalability of a NoSQL datastore with the convenience of a traditional RDBMS in a novel way, and provides both strong consistency guarantees and high availability. We provide fully serializable ACID semantics within ne-grained partitions of data. This partitioning allows us to synchronously replicate each write across a wide area network with reasonable latency and support seamless failover between datacenters. This paper describes Megastore’s semantics and replication algorithm. It also describes our experience supporting a wide range of Google production services built with Megastore.",
"",
"Latency unpredictability in a database system can come from many factors, such as load spikes in the workload, inter-query interactions from consolidation, or communication costs in cloud computing or geo-replication. High variance and high latency environments make developing interactive applications difficult, because transactions may take too long to complete, or fail unexpectedly. We propose Predictive Latency-Aware NEtworked Transactions (PLANET), a new transaction programming model and underlying system support to address this issue. The model exposes the internal progress of the transaction, provides opportunities for application callbacks, and incorporates commit likelihood prediction to enable good user experience even in the presence of significant transaction delays. The mechanisms underlying PLANET can be used for admission control, thus improving overall performance in high contention situations. In this paper, we present this new transaction programming model, demonstrate its expressiveness via several use cases, and evaluate its performance using a strongly consistent geo-replicated database across five data centers.",
"",
"We describe the design and implementation of Walter, a key-value store that supports transactions and replicates data across distant sites. A key feature behind Walter is a new property called Parallel Snapshot Isolation (PSI). PSI allows Walter to replicate data asynchronously, while providing strong guarantees within each site. PSI precludes write-write conflicts, so that developers need not worry about conflict-resolution logic. To prevent write-write conflicts and implement PSI, Walter uses two new and simple techniques: preferred sites and counting sets. We use Walter to build a social networking application and port a Twitter-like application.",
"Reliable systems have always been built out of unreliable components [1]. Early on, the reliable components were small such as mirrored disks or ECC (Error Correcting Codes) in core memory. These systems were designed such that failures of these small components were transparent to the application. Later, the size of the unreliable components grew larger and semantic challenges crept into the application when failures occurred. Fault tolerant algorithms comprise a set of idempotent subalgorithms. Between these idempotent sub-algorithms, state is sent across the failure boundaries of the unreliable components. The failure of an unreliable component can then be tolerated as a takeover by a backup, which uses the last known state and drives forward with a retry of the idempotent sub-algorithm. Classically, this has been done in a linear fashion (i.e. one step at a time). As the granularity of the unreliable component grows (from a mirrored disk to a system to a data center), the latency to communicate with a backup becomes unpalatable. This leads to a more relaxed model for fault tolerance. The primary system will acknowledge the work request and its actions without waiting to ensure that the backup is notified of the work. This improves the responsiveness of the system because the user is not delayed behind a slow interaction with the backup. There are two implications of asynchronous state capture: 1) Everything promised by the primary is probabilistic. There is always a chance that an untimely failure shortly after the promise results in a backup proceeding without knowledge of the commitment. Hence, nothing is guaranteed! 2) Applications must ensure eventual consistency [20]. Since work may be stuck in the primary after a failure and reappear later, the processing order for work cannot be guaranteed. Platform designers are struggling to make this easier for their applications. 
Emerging patterns of eventual consistency and probabilistic execution may soon yield a way for applications to express requirements for a “looser” form of consistency while providing availability in the face of ever larger failures. As we will also point out in this paper, the patterns of probabilistic execution and eventual consistency are applicable to intermittently connected application patterns. This paper recounts portions of the evolution of these trends, attempts to show the patterns that span these changes, and talks about future directions as we continue to “build on quicksand”.",
"",
"",
"Databases can provide scalability by partitioning data across several servers. However, multi-partition, multi-operation transactional access is often expensive, employing coordination-intensive locking, validation, or scheduling mechanisms. Accordingly, many real-world systems avoid mechanisms that provide useful semantics for multi-partition operations. This leads to incorrect behavior for a large class of applications including secondary indexing, foreign key enforcement, and materialized view maintenance. In this work, we identify a new isolation model---Read Atomic (RA) isolation---that matches the requirements of these use cases by ensuring atomic visibility: either all or none of each transaction's updates are observed by other transactions. We present algorithms for Read Atomic Multi-Partition (RAMP) transactions that enforce atomic visibility while offering excellent scalability, guaranteed commit despite partial failures (via synchronization independence), and minimized communication between servers (via partition independence). These RAMP transactions correctly mediate atomic visibility of updates and provide readers with snapshot access to database state by using limited multi-versioning and by allowing clients to independently resolve non-atomic reads. We demonstrate that, in contrast with existing algorithms, RAMP transactions incur limited overhead---even under high contention---and scale linearly to 100 servers."
]
} |
1610.07459 | 2542635631 | Network latency can have a significant impact on the performance of transactional storage systems, particularly in wide area or geo-distributed deployments. To reduce latency, systems typically rely on a cache to service read-requests closer to the client. However, caches are not effective for write-heavy workloads, which have to be processed by the storage system in order to maintain serializability. This paper presents a new technique, called optimistic abort, which reduces network latency for high-contention, write-heavy workloads by identifying transactions that will abort as early as possible, and aborting them before they reach the store. We have implemented optimistic abort in a system called Gotthard, which leverages recent advances in network data plane programmability to execute transaction processing logic directly in network devices. Gotthard examines network traffic to observe and log transaction requests. If Gotthard suspects that a transaction is likely to be aborted at the store, it aborts the transaction early by re-writing the packet header, and routing the packets back to the client. Gotthard significantly reduces the overall latency and improves the throughput for high-contention workloads. | * Data plane programming languages. Gotthard is written in P4 @cite_14 , although there are several other projects have proposed domain-specific languages for data plane programming. Notable examples including Huawei's POF @cite_7 and Xilinx's PX @cite_62 . We chose to focus on P4 because there is a growing community of active users, and it is relatively more mature than the other choices. However, the ideas for implementing Gotthard should generalize to other languages. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_62"
],
"mid": [
"",
"2040678819",
"2039146729"
],
"abstract": [
"",
"A flexible and programmable forwarding plane is essential to maximize the value of Software-Defined Networks (SDN). In this paper, we propose Protocol-Oblivious Forwarding (POF) as a key enabler for highly flexible and programmable SDN. Our goal is to remove any dependency on protocol-specific configurations on the forwarding elements and enhance the data-path with new stateful instructions to support genuine software defined networking behavior. A generic flow instruction set (FIS) is defined to fulfill this purpose. POF helps to lower network cost by using commodity forwarding elements and to create new value by enabling numerous innovative network services. We built both hardware-based and open source software-based prototypes to demonstrate the feasibility and advantages of POF. We report the preliminary evaluation results and the insights we learnt from the experiments. POF is future-proof and expressive. We believe it represents a promising direction to evolve the OpenFlow protocol and the future SDN forwarding elements.",
"Internet applications, notably streaming video, demand extremely high communication speeds in core networks, currently 100 Gbps and moving toward 400 Gbps and beyond. Data packets must be processed at these rates, presenting serious challenges for traditional computing approaches. This article presents a tool chain that maps a domain-specific packet-processing language called PX to high-performance reconfigurable-computing architectures based on field-programmable gate array (FPGA) technology. PX is a declarative language with object-oriented semantics. A customized computing architecture is generated to match the exact requirements expressed in the PX description. The architecture includes components for packet parsing and editing, and for table lookups. It is expressed in a register transfer level (RTL) description, which is then processed using standard FPGA implementation tools. The architecture is dynamically programmable via custom firmware updates when the packet-processing system is in operation. The authors illustrate the language, tool chain, and implementation results through a practical example involving a 100-Gbps OpenFlow implementation."
]
} |
1610.07459 | 2542635631 | Network latency can have a significant impact on the performance of transactional storage systems, particularly in wide area or geo-distributed deployments. To reduce latency, systems typically rely on a cache to service read-requests closer to the client. However, caches are not effective for write-heavy workloads, which have to be processed by the storage system in order to maintain serializability. This paper presents a new technique, called optimistic abort, which reduces network latency for high-contention, write-heavy workloads by identifying transactions that will abort as early as possible, and aborting them before they reach the store. We have implemented optimistic abort in a system called Gotthard, which leverages recent advances in network data plane programmability to execute transaction processing logic directly in network devices. Gotthard examines network traffic to observe and log transaction requests. If Gotthard suspects that a transaction is likely to be aborted at the store, it aborts the transaction early by re-writing the packet header, and routing the packets back to the client. Gotthard significantly reduces the overall latency and improves the throughput for high-contention workloads. | * Application logic in the network. Several recent projects investigate leveraging network programmability for improved application performance. One thread of research has focused on improving application performance through traffic management. Examples of such systems include PANE @cite_52 , EyeQ @cite_23 , and Merlin @cite_24 which all use resource scheduling to improve job performance. NetAgg @cite_36 leverages user-defined functions to reduce network congestion. Another thread of research has focused on moving application logic into network devices. @cite_48 proposed the idea of moving consensus logic in to network devices. Paxos Made Switch-y @cite_9 describes an implementation of Paxos in P4. 
István et al. @cite_42 implement Zookeeper's atomic broadcast on an FPGA. | {
"cite_N": [
"@cite_36",
"@cite_48",
"@cite_9",
"@cite_42",
"@cite_52",
"@cite_24",
"@cite_23"
],
"mid": [
"",
"2072811945",
"",
"2303620077",
"2071187149",
"2021234005",
"1503891749"
],
"abstract": [
"",
"This paper explores the possibility of implementing the widely deployed Paxos consensus protocol in network devices. We present two different approaches: (i) a detailed design description for implementing the full Paxos logic in SDN switches, which identifies a sufficient set of required OpenFlow extensions; and (ii) an alternative, optimistic protocol which can be implemented without changes to the OpenFlow API, but relies on assumptions about how the network orders messages. Although neither of these protocols can be fully implemented without changes to the underlying switch firmware, we argue that such changes are feasible in existing hardware. Moreover, we present an evaluation that suggests that moving Paxos logic into the network would yield significant performance benefits for distributed applications.",
"",
"Consensus mechanisms for ensuring consistency are some of the most expensive operations in managing large amounts of data. Often, there is a trade off that involves reducing the coordination overhead at the price of accepting possible data loss or inconsistencies. As the demand for more efficient data centers increases, it is important to provide better ways of ensuring consistency without affecting performance. In this paper we show that consensus (atomic broadcast) can be removed from the critical path of performance by moving it to hardware. As a proof of concept, we implement Zookeeper's atomic broadcast at the network level using an FPGA. Our design uses both TCP and an application specific network protocol. The design can be used to push more value into the network, e.g., by extending the functionality of middleboxes or adding inexpensive consensus to in-network processing nodes. To illustrate how this hardware consensus can be used in practical systems, we have combined it with a main-memory key value store running on specialized microservers (built as well on FPGAs). This results in a distributed service similar to Zookeeper that exhibits high and stable performance. This work can be used as a blueprint for further specialized designs.",
"We present the design, implementation, and evaluation of an API for applications to control a software-defined network (SDN). Our API is implemented by an OpenFlow controller that delegates read and write authority from the network's administrators to end users, or applications and devices acting on their behalf. Users can then work with the network, rather than around it, to achieve better performance, security, or predictable behavior. Our API serves well as the next layer atop current SDN stacks. Our design addresses the two key challenges: how to safely decompose control and visibility of the network, and how to resolve conflicts between untrusted users and across requests, while maintaining baseline levels of fairness and security. Using a real OpenFlow testbed, we demonstrate our API's feasibility through microbenchmarks, and its usefulness by experiments with four real applications modified to take advantage of it.",
"This paper presents Merlin, a new framework for managing resources in software-defined networks. With Merlin, administrators express high-level policies using programs in a declarative language. The language includes logical predicates to identify sets of packets, regular expressions to encode forwarding paths, and arithmetic formulas to specify bandwidth constraints. The Merlin compiler maps these policies into a constraint problem that determines bandwidth allocations using parameterizable heuristics. It then generates code that can be executed on the network elements to enforce the policies. To allow network tenants to dynamically adapt policies to their needs, Merlin provides mechanisms for delegating control of sub-policies and for verifying that modifications made to sub-policies do not violate global constraints. Experiments demonstrate the expressiveness and effectiveness of Merlin on real-world topologies and applications. Overall, Merlin simplifies network administration by providing high-level abstractions for specifying network policies that provision network resources.",
"The datacenter network is shared among untrusted tenants in a public cloud, and hundreds of services in a private cloud. Today we lack fine-grained control over network bandwidth partitioning across tenants. In this paper we present EyeQ, a simple and practical system that provides tenants with bandwidth guarantees as if their endpoints were connected to a dedicated switch. To realize this goal, EyeQ leverages the high bisection bandwidth in a datacenter fabric and enforces admission control on traffic, regardless of the tenant transport protocol. We show that this pushes bandwidth contention to the network's edge, enabling EyeQ to support end-to-end minimum bandwidth guarantees to tenant end-points in a simple and scalable manner at the servers. EyeQ requires no changes to applications and is deployable with support from the network available today. We evaluate EyeQ with an efficient software implementation at 10Gb/s speeds using unmodified applications and adversarial traffic patterns. Our evaluation demonstrates EyeQ's promise of predictable network performance isolation. For instance, even with an adversarial tenant with bursty UDP traffic, EyeQ is able to maintain the 99.9th percentile latency for a collocated memcached application close to that of a dedicated deployment."
]
} |
1610.07576 | 2542339198 | We investigate the secure connectivity of wireless sensor networks under a heterogeneous random key predistribution scheme and a heterogeneous channel model. In particular, we study a random graph formed by the intersection of an inhomogeneous random key graph with an inhomogeneous Erdős-Rényi graph. The former graph is naturally induced by the heterogeneous random key predistribution scheme while the latter graph constitutes a heterogeneous on/off channel model; wherein, the wireless channel between a class- @math node and a class- @math node is on with probability @math independently. We present conditions (in the form of zero-one laws) on how to scale the parameters of the intersection model so that it has no isolated node with high probability as the number of nodes gets large. We also present numerical results to support these zero-one laws in the finite-node regime. | Our main result extends the results established by Eletreby and Yağan in @cite_7 for the inhomogeneous random key graph intersecting the (homogeneous) ER graph. There, zero-one laws for the property that the graph has no isolated nodes and the property that the graph is connected were established. It is clear that our work generalizes the model given in @cite_7 by considering the ER graph, enabling the analysis of networks with heterogeneous radio capabilities. Indeed, when @math for @math and each @math , our result recovers the absence of isolated nodes result given in @cite_7 . | {
"cite_N": [
"@cite_7"
],
"mid": [
"2338794901"
],
"abstract": [
"We consider the network reliability problem in secure wireless sensor networks that employ a heterogeneous random key predistribution scheme. This scheme is introduced recently, as a generalization of the Eschenauer-Gligor scheme, to account for the cases when the network comprises sensors with varying level of resources or connectivity requirements; e.g., regular nodes vs. cluster heads. The scheme induces the inhomogeneous random key graph, denoted G(n;μ,K, P ), where each of the n nodes are independently assigned to one of r classes according to a probability distribution μ = (μ1, . . . , μr) and then a class-i node is assigned Ki keys uniformly at random from a pool of size P ; two nodes that share a key are then connected by an edge. We analyze the reliability of G(n;μ,K, P ) against random link failures. Namely, we consider G(n;μ,K, P, α) formed by deleting each edge of G(n;μ,K, P ) independently with probability 1 − α, and study the probability that the resulting graph i) has no isolated node; and ii) is connected. We present scaling conditions on K , P , and α such that both events take place with probability zero or one, respectively, as the number of nodes gets large. We also present numerical results to support these zero-one laws in the finite-node regime."
]
} |
1610.07008 | 2546257475 | Kernel normalization methods have been employed to improve robustness of optimization methods to reparametrization of convolution kernels, covariate shift, and to accelerate training of Convolutional Neural Networks (CNNs). However, our understanding of theoretical properties of these methods has lagged behind their success in applications. We develop a geometric framework to elucidate underlying mechanisms of a diverse range of kernel normalization methods. Our framework enables us to expound and identify geometry of space of normalized kernels. We analyze and delineate how state-of-the-art kernel normalization methods affect the geometry of search spaces of the stochastic gradient descent (SGD) algorithms in CNNs. Following our theoretical results, we propose a SGD algorithm with assurance of almost sure convergence of the methods to a solution at single minimum of classification loss of CNNs. Experimental results show that the proposed method achieves state-of-the-art performance for major image classification benchmarks with CNNs. | Popular kernel normalization methods have been implemented using reparametrizations @cite_50 @cite_49 , and additional constraints, such as orthogonality @cite_2 @cite_44 @cite_28 , in order to preserve unit norm property of the kernels for forward propagation @cite_51 , at initialization @cite_1 @cite_38 , or at each epoch of SGD @cite_32 @cite_49 . Unit norm kernels were used for symmetry invariant optimization at the first and second layers of a network in @cite_10 . One of the challenges of these approaches, besides the lack of theoretical understandings mentioned above, is the employment of the reparametrization and rescaling methods at new layers before/after convolution layers, resulting in an increase of complexity of the network structure by aggregation of the new layers. In addition, statistical properties of data need to be recorded during training, and testing.
Therefore, kernel normalization methods may increase computational overhead of CNNs for both training and testing. | {
"cite_N": [
"@cite_38",
"@cite_28",
"@cite_10",
"@cite_2",
"@cite_1",
"@cite_32",
"@cite_44",
"@cite_49",
"@cite_50",
"@cite_51"
],
"mid": [
"2125930537",
"2284050795",
"2148571195",
"2209990155",
"2178237821",
"2175402905",
"2204257188",
"2284050935",
"2287334441",
""
],
"abstract": [
"Despite the widespread practical success of deep learning methods, our theoretical understanding of the dynamics of learning in deep neural networks remains quite sparse. We attempt to bridge the gap between the theory and practice of deep learning by systematically analyzing learning dynamics for the restricted case of deep linear neural networks. Despite the linearity of their input-output map, such networks have nonlinear gradient descent dynamics on weights that change with the addition of each new hidden layer. We show that deep linear networks exhibit nonlinear learning phenomena similar to those seen in simulations of nonlinear networks, including long plateaus followed by rapid transitions to lower error solutions, and faster convergence from greedy unsupervised pretraining initial conditions than from random initial conditions. We provide an analytical description of these phenomena by finding new exact solutions to the nonlinear dynamics of deep learning. Our theoretical analysis also reveals the surprising finding that as the depth of a network approaches infinity, learning speed can nevertheless remain finite: for a special class of initial conditions on the weights, very deep networks incur only a finite, depth independent, delay in learning speed relative to shallow networks. We show that, under certain conditions on the training data, unsupervised pretraining can find this special class of initial conditions, while scaled random Gaussian initializations cannot. We further exhibit a new class of random orthogonal initial conditions on weights that, like unsupervised pre-training, enjoys depth independent learning times. We further show that these initial conditions also lead to faithful propagation of gradients even in deep nonlinear networks, as long as they operate in a special regime known as the edge of chaos.",
"We propose a unified framework for neural net normalization, regularization and optimization, which includes Path-SGD and Batch-Normalization and interpolates between them across two different dimensions. Through this framework we investigate issue of invariance of the optimization, data dependence and the connection with natural gradients.",
"Recent works have highlighted scale invariance or symmetry that is present in the weight space of a typical deep network and the adverse effect that it has on the Euclidean gradient based stochastic gradient descent optimization. In this work, we show that these and other commonly used deep networks, such as those which use a max-pooling and sub-sampling layer, possess more complex forms of symmetry arising from scaling based reparameterization of the network weights. We then propose two symmetry-invariant gradient based weight updates for stochastic gradient descent based learning. Our empirical evidence based on the MNIST dataset shows that these updates improve the test performance without sacrificing the computational efficiency of the weight updates. We also show the results of training with one of the proposed weight updates on an image segmentation problem.",
"In this work, we develop a novel method for automatically learning aspects of the structure of a deep model, in order to improve its performance, especially when labeled training data are scarce. We propose a new convolutional neural network model with the Indian Buffet Process (IBP) prior, termed ibpCNN. The ibpCNN automatically adapts its structure to provided training data, achieves an optimal balance among model complexity, data fidelity and training loss, and thus offers better generalization performance. The proposed ibpCNN captures complicated data distribution in an unsupervised generative way. Therefore, ibpCNN can exploit unlabeled data -- which can be collected at low cost -- to learn its structure. After determining the structure, ibpCNN further learns its parameters according to specified tasks, in an end-to-end fashion, and produces discriminative yet compact representations. We evaluate the performance of ibpCNN, on fully-and semi-supervised image classification tasks, ibpCNN surpasses standard CNN models on benchmark datasets, with much smaller size and higher efficiency.",
"Layer-sequential unit-variance (LSUV) initialization - a simple method for weight initialization for deep net learning - is proposed. The method consists of the two steps. First, pre-initialize weights of each convolution or inner-product layer with orthonormal matrices. Second, proceed from the first to the final layer, normalizing the variance of the output of each layer to be equal to one. Experiment with different activation functions (maxout, ReLU-family, tanh) show that the proposed initialization leads to learning of very deep nets that (i) produces networks with test accuracy better or equal to standard methods and (ii) is at least as fast as the complex schemes proposed specifically for very deep nets such as FitNets ( (2015)) and Highway ( (2015)). Performance is evaluated on GoogLeNet, CaffeNet, FitNets and Residual nets and the state-of-the-art, or very close to it, is achieved on the MNIST, CIFAR-10 100 and ImageNet datasets.",
"Recurrent neural networks (RNNs) are notoriously difficult to train. When the eigenvalues of the hidden to hidden weight matrix deviate from absolute value 1, optimization becomes difficult due to the well studied issue of vanishing and exploding gradients, especially when trying to learn long-term dependencies. To circumvent this problem, we propose a new architecture that learns a unitary weight matrix, with eigenvalues of absolute value exactly 1. The challenge we address is that of parametrizing unitary matrices in a way that does not require expensive computations (such as eigendecomposition) after each weight update. We construct an expressive unitary weight matrix by composing several structured matrices that act as building blocks with parameters to be learned. Optimization with this parameterization becomes feasible only when considering hidden states in the complex domain. We demonstrate the potential of this architecture by achieving state of the art results in several hard tasks involving very long-term dependencies.",
"Deep neural network architectures have recently produced excellent results in a variety of areas in artificial intelligence and visual recognition, well surpassing traditional shallow architectures trained using hand-designed features. The power of deep networks stems both from their ability to perform local computations followed by pointwise non-linearities over increasingly larger receptive fields, and from the simplicity and scalability of the gradient-descent training procedure based on backpropagation. An open problem is the inclusion of layers that perform global, structured matrix computations like segmentation (e.g. normalized cuts) or higher-order pooling (e.g. log-tangent space metrics defined over the manifold of symmetric positive definite matrices) while preserving the validity and efficiency of an end-to-end deep training framework. In this paper we propose a sound mathematical apparatus to formally integrate global structured computation into deep computation architectures. At the heart of our methodology is the development of the theory and practice of backpropagation that generalizes to the calculus of adjoint matrix variations. We perform segmentation experiments using the BSDS and MSCOCO benchmarks and demonstrate that deep networks relying on second-order pooling and normalized cuts layers, trained end-to-end using matrix backpropagation, outperform counterparts that do not take advantage of such global layers.",
"We present weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction. By reparameterizing the weights in this way we improve the conditioning of the optimization problem and we speed up convergence of stochastic gradient descent. Our reparameterization is inspired by batch normalization but does not introduce any dependencies between the examples in a minibatch. This means that our method can also be applied successfully to recurrent models such as LSTMs and to noise-sensitive applications such as deep reinforcement learning or generative models, for which batch normalization is less well suited. Although our method is much simpler, it still provides much of the speed-up of full batch normalization. In addition, the computational overhead of our method is lower, permitting more optimization steps to be taken in the same amount of time. We demonstrate the usefulness of our method on applications in supervised image recognition, generative modelling, and deep reinforcement learning.",
"Convolutional Neural Networks spread through computer vision like a wildfire, impacting almost all visual tasks imaginable. Despite this, few researchers dare to train their models from scratch. Most work builds on one of a handful of ImageNet pre-trained models, and fine-tunes or adapts these for specific tasks. This is in large part due to the difficulty of properly initializing these networks from scratch. A small miscalibration of the initial weights leads to vanishing or exploding gradients, as well as poor convergence properties. In this work we present a fast and simple data-dependent initialization procedure, that sets the weights of a network such that all units in the network train at roughly the same rate, avoiding vanishing or exploding gradients. Our initialization matches the current state-of-the-art unsupervised or self-supervised pre-training methods on standard computer vision tasks, such as image classification and object detection, while being roughly three orders of magnitude faster. When combined with pre-training methods, our initialization significantly outperforms prior work, narrowing the gap between supervised and unsupervised pre-training.",
""
]
} |
1610.07008 | 2546257475 | Kernel normalization methods have been employed to improve robustness of optimization methods to reparametrization of convolution kernels, covariate shift, and to accelerate training of Convolutional Neural Networks (CNNs). However, our understanding of theoretical properties of these methods has lagged behind their success in applications. We develop a geometric framework to elucidate underlying mechanisms of a diverse range of kernel normalization methods. Our framework enables us to expound and identify geometry of space of normalized kernels. We analyze and delineate how state-of-the-art kernel normalization methods affect the geometry of search spaces of the stochastic gradient descent (SGD) algorithms in CNNs. Following our theoretical results, we propose a SGD algorithm with assurance of almost sure convergence of the methods to a solution at single minimum of classification loss of CNNs. Experimental results show that the proposed method achieves state-of-the-art performance for major image classification benchmarks with CNNs. | Removal of scale and translation from kernels by normalization can be interpreted as imposition of a geometric structure such that the kernels lie on the sphere @cite_74 . In our approach, embedded kernel submanifolds can be described using the sphere, oblique and/or the Stiefel manifold. Additional constraints can also be imposed using immersed submanifolds such as rotation groups. Thus, our approach can be considered as generalization of the aforementioned approaches such that we can employ our methods to model different submanifolds according to various constraints, such as orthonormal kernels. Thereby, we employ geometry of kernels to identify the constraints on the optimization problem of CNNs.
Moreover, gradient descent of natural gradient (NG) methods can be cast as an approximation to SGD for submanifolds which are equipped with Riemannian structure and employed in our framework @cite_57 @cite_59 @cite_3 . In this aspect, our proposed methods can be considered as generalization of NG methods @cite_61 @cite_65 . Our contributions can be summarized as follows: | {
"cite_N": [
"@cite_61",
"@cite_65",
"@cite_3",
"@cite_57",
"@cite_59",
"@cite_74"
],
"mid": [
"1915968771",
"1877062207",
"",
"2138674039",
"1971945429",
""
],
"abstract": [
"We introduce Natural Neural Networks, a novel family of algorithms that speed up convergence by adapting their internal representation during training to improve conditioning of the Fisher matrix. In particular, we show a specific example that employs a simple and efficient reparametrization of the neural network weights by implicitly whitening the representation obtained at each layer, while preserving the feed-forward computation of the network. Such networks can be trained efficiently via the proposed Projected Natural Gradient Descent algorithm (PRONG), which amortizes the cost of these reparametrizations over many parameter updates and is closely related to the Mirror Descent online learning algorithm. We highlight the benefits of our method on both unsupervised and supervised learning tasks, and showcase its scalability by training on the large-scale ImageNet Challenge dataset.",
"Second-order optimization methods, such as natural gradient, are difficult to apply to highdimensional problems, because they require approximately solving large linear systems. We present FActorized Natural Gradient (FANG), an approximation to natural gradient descent where the Fisher matrix is approximated with a Gaussian graphical model whose precision matrix can be computed efficiently. We analyze the Fisher matrix for a small RBM and derive an extremely sparse graphical model which is a good match to the covariance of the sufficient statistics. Our experiments indicate that FANG allows RBMs to be trained more efficiently compared with stochastic gradient descent. Additionally, our analysis yields insight into the surprisingly good performance of the \"centering trick\" for training RBMs.",
"",
"Stochastic gradient descent is a simple approach to find the local minima of a cost function whose evaluations are corrupted by noise. In this paper, we develop a procedure extending stochastic gradient descent algorithms to the case where the function is defined on a Riemannian manifold. We prove that, as in the Euclidian case, the gradient descent algorithm converges to a critical point of the cost function. The algorithm has numerous potential applications, and is illustrated here by four examples. In particular a novel gossip algorithm on the set of covariance matrices is derived and tested numerically.",
"We prove the equivalence of two online learning algorithms, mirror descent and natural gradient descent. Both mirror descent and natural gradient descent are generalizations of online gradient descent when the parameter of interest lies on a non-Euclidean manifold. Natural gradient descent selects the steepest descent direction along a Riemannian manifold by multiplying the standard gradient by the inverse of the metric tensor. Mirror descent induces non-Euclidean structure by solving iterative optimization problems using different proximity functions. In this paper, we prove that mirror descent induced by a Bregman divergence proximity functions is equivalent to the natural gradient descent algorithm on the Riemannian manifold in the dual co-ordinate system. We use techniques from convex analysis and connections between Riemannian manifolds, Bregman divergences and convexity to prove this result. This equivalence between natural gradient descent and mirror descent, implies that (1) mirror descent is the steepest descent direction along the Riemannian manifold corresponding to the choice of Bregman divergence and (2) mirror descent with log-likelihood loss applied to parameter estimation in exponential families asymptotically achieves the classical Cramer-Rao lower bound.",
""
]
} |
1610.07008 | 2546257475 | Kernel normalization methods have been employed to improve robustness of optimization methods to reparametrization of convolution kernels, covariate shift, and to accelerate training of Convolutional Neural Networks (CNNs). However, our understanding of theoretical properties of these methods has lagged behind their success in applications. We develop a geometric framework to elucidate underlying mechanisms of a diverse range of kernel normalization methods. Our framework enables us to expound and identify geometry of space of normalized kernels. We analyze and delineate how state-of-the-art kernel normalization methods affect the geometry of search spaces of the stochastic gradient descent (SGD) algorithms in CNNs. Following our theoretical results, we propose a SGD algorithm with assurance of almost sure convergence of the methods to a solution at single minimum of classification loss of CNNs. Experimental results show that the proposed method achieves state-of-the-art performance for major image classification benchmarks with CNNs. | By making use of our theoretical results, we propose a SGD algorithm for optimization on kernel submanifolds for training of CNNs in . More precisely, our theoretical results first enable us to employ various smooth manifolds with different metrics to describe submanifolds. For computational efficiency, we then employ Riemannian manifolds to perform steps of SGD methods on submanifolds. We compute steps of SGD according to smooth structures of submanifolds, such as metrics and differential maps, defined on submanifolds, and their topological properties, such as compactness. Moreover, in our proposed SGD algorithm, we can employ momentum and Euclidean gradient decay for optimization on submanifolds extending the methods proposed in @cite_7 @cite_57 . In , we provide two theorems to analyze the convergence of the proposed SGD algorithm. 
We provide a discussion on computational complexity of the proposed algorithm in . | {
"cite_N": [
"@cite_57",
"@cite_7"
],
"mid": [
"2138674039",
"1804110266"
],
"abstract": [
"Stochastic gradient descent is a simple approach to find the local minima of a cost function whose evaluations are corrupted by noise. In this paper, we develop a procedure extending stochastic gradient descent algorithms to the case where the function is defined on a Riemannian manifold. We prove that, as in the Euclidian case, the gradient descent algorithm converges to a critical point of the cost function. The algorithm has numerous potential applications, and is illustrated here by four examples. In particular a novel gossip algorithm on the set of covariance matrices is derived and tested numerically.",
"Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction--illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary to algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra. Optimization Algorithms on Matrix Manifolds offers techniques with broad applications in linear algebra, signal processing, data mining, computer vision, and statistical analysis. It can serve as a graduate-level textbook and will be of interest to applied mathematicians, engineers, and computer scientists."
]
} |
1610.06940 | 2952345740 | Deep neural networks have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. We develop a novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT). We focus on safety of image classification decisions with respect to image manipulations, such as scratches or changes to camera angle or lighting conditions that would result in the same class being assigned by a human, and define safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image. We enable exhaustive search of the region by employing discretisation, and propagate the analysis layer by layer. Our method works directly with the network code and, in contrast to existing methods, can guarantee that adversarial examples, if they exist, are found for the given region and family of manipulations. If found, adversarial examples can be shown to human testers and or used to fine-tune the network. We implement the techniques using Z3 and evaluate them on state-of-the-art networks, including regularised and deep learning networks. We also compare against existing techniques to search for adversarial examples and estimate network robustness. | The notion of robustness studied in @cite_26 has some similarities to our definition of safety, except that the authors work with values over the input distribution @math , which is difficult to estimate accurately in high dimensions. As in @cite_31 @cite_10 , they use optimisation without convergence guarantees, as a result computing only an approximation to the minimal perturbation. 
In @cite_34, pointwise robustness is adopted, which corresponds to our notion of general safety; they also use a constraint solver, but represent the full constraint system by reduction to a convex LP problem, and only verify an approximation of the property. In contrast, we work directly with activations rather than an encoding of activation functions, and our method searches through the complete ladder tree for an adversarial example by iterative and nondeterministic application of manipulations. Further, our definition of a manipulation is more flexible, since it allows us to select a subset of dimensions, and each such subset can have a different region diameter computed with respect to a different norm. | {
"cite_N": [
"@cite_31",
"@cite_26",
"@cite_10",
"@cite_34"
],
"mid": [
"1673923490",
"1678450113",
"2460937040",
""
],
"abstract": [
"Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.",
"The goal of this paper is to analyze an intriguing phenomenon recently discovered in deep networks, namely their instability to adversarial perturbations (Szegedy et. al., 2014). We provide a theoretical framework for analyzing the robustness of classifiers to adversarial perturbations, and show fundamental upper bounds on the robustness of classifiers. Specifically, we establish a general upper bound on the robustness of classifiers to adversarial perturbations, and then illustrate the obtained upper bound on the families of linear and quadratic classifiers. In both cases, our upper bound depends on a distinguishability measure that captures the notion of difficulty of the classification task. Our results for both classes imply that in tasks involving small distinguishability, no classifier in the considered set will be robust to adversarial perturbations, even if a good accuracy is achieved. Our theoretical framework moreover suggests that the phenomenon of adversarial instability is due to the low flexibility of classifiers, compared to the difficulty of the classification task (captured by the distinguishability). Moreover, we show the existence of a clear distinction between the robustness of a classifier to random noise and its robustness to adversarial perturbations. Specifically, the former is shown to be larger than the latter by a factor that is proportional to d (with d being the signal dimension) for linear classifiers. This result gives a theoretical explanation for the discrepancy between the two robustness properties in high dimensional problems, which was empirically observed in the context of neural networks. To the best of our knowledge, our results provide the first theoretical work that addresses the phenomenon of adversarial instability recently observed for deep networks. Our analysis is complemented by experimental results on controlled and real-world data.",
"Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.",
""
]
} |
1610.06947 | 2479843978 | Smartphones and other mobile devices are today pervasive across the globe. As an interesting side effect of the surge in mobile communications, mobile network operators can now easily collect a wealth of high-resolution data on the habits of large user populations. The information extracted from mobile network traffic data is very relevant in the context of population mapping: it provides a tool for the automatic and live estimation of population densities, overcoming the limitations of traditional data sources such as censuses and surveys. In this paper, we propose a new approach to infer population densities at urban scales, based on aggregated mobile network traffic metadata. Our approach allows estimating both static and dynamic populations, achieves a significant improvement in terms of accuracy with respect to state-of-the-art solutions in the literature, and is validated on different city scenarios. | There is wide agreement on the suitability of mobile network traffic data as a source of information for generic positioning analysis. Previous works have demonstrated that this type of data allows for the effective estimation of important places @cite_11 , the inference of trips among such places @cite_13 , and the derivation of origin-destination matrices by aggregating a large number of trips @cite_1 . | {
"cite_N": [
"@cite_1",
"@cite_13",
"@cite_11"
],
"mid": [
"1990898695",
"2002644929",
"2004602565"
],
"abstract": [
"Using an algorithm to analyze opportunistically collected mobile phone location data, the authors estimate weekday and weekend travel patterns of a large metropolitan area with high accuracy.",
"This paper presents a strategy to evaluate long-distance travel patterns by tracking cellular phone positions. The authors first note that long-distance trips are generally under-reported in typical household surveys, because of relative low frequency of these trips. Yet transportation analysis and travel demand forecasting require data, including that for long-distance trips, in order to model the decisions that people make related to travel. They stress that their suggested approach allows passive data collection on many travelers over a long period of time at low costs. They present results of a study in Israel, conducted in 2007, that included an average sample of 10,200 cell phone numbers per week for 16 weeks. The tracking system was based on recording events that contain a change in the position of the cell phone with respect to a given antenna. The method was specifically designed to capture long distance trips, as part of the development of a national demand model conducted for the Economics and Planning Department of the Israel Ministry of Transport. Using this method, origin–destination tables can be constructed directly from the cellular phone positions. The authors conclude that this model offers the advantage of monitoring travel demand at the aggregate level and thus could be useful in several transportation and land use applications.",
"Mobile phone datasets allow for the analysis of human behavior on an unprecedented scale. The social network, temporal dynamics and mobile behavior of mobile phone users have often been analyzed independently from each other using mobile phone datasets. In this article, we explore the connections between various features of human behavior extracted from a large mobile phone dataset. Our observations are based on the analysis of communication data of 100,000 anonymized and randomly chosen individuals in a dataset of communications in Portugal. We show that clustering and principal component analysis allow for a significant dimension reduction with limited loss of information. The most important features are related to geographical location. In particular, we observe that most people spend most of their time at only a few locations. With the help of clustering methods, we then robustly identify home and office locations and compare the results with official census data. Finally, we analyze the geographic spread of users’ frequent locations and show that commuting distances can be reasonably well explained by a gravity model."
]
} |
1610.06947 | 2479843978 | Smartphones and other mobile devices are today pervasive across the globe. As an interesting side effect of the surge in mobile communications, mobile network operators can now easily collect a wealth of high-resolution data on the habits of large user populations. The information extracted from mobile network traffic data is very relevant in the context of population mapping: it provides a tool for the automatic and live estimation of population densities, overcoming the limitations of traditional data sources such as censuses and surveys. In this paper, we propose a new approach to infer population densities at urban scales, based on aggregated mobile network traffic metadata. Our approach allows estimating both static and dynamic populations, achieves a significant improvement in terms of accuracy with respect to state-of-the-art solutions in the literature, and is validated on different city scenarios. | As far as population distribution estimation is concerned, mobile communication data was first proposed as a proxy for the density of inhabitants in @cite_2 . Early evidence of an actual correlation between mobile network activity and the underlying population density was presented in @cite_10 : the authors showed that city population sizes and the number of mobile customers follow similar distributions. | {
"cite_N": [
"@cite_10",
"@cite_2"
],
"mid": [
"2165598812",
"2085009188"
],
"abstract": [
"We analyze the anonymous communication patterns of 2.5 million customers of a Belgian mobile phone operator. Grouping customers by billing address, we build a social network of cities that consists of communications between 571 cities in Belgium. We show that inter-city communication intensity is characterized by a gravity model: the communication intensity between two cities is proportional to the product of their sizes divided by the square of their distance.",
"The technology for determining the geographic location of cell phones and other handheld devices is becoming increasingly available. It is opening the way to a wide range of applications, collectively referred to as location-based services (LBS), that are primarily aimed at individual users. However, if deployed to retrieve aggregated data in cities, LBS could become a powerful tool for urban analysis. In this paper we aim to review and introduce the potential of this technology to the urban planning community. In addition, we present the ‘Mobile Landscapes’ project: an application in the metropolitan area of Milan, Italy, based on the geographical mapping of cell phone usage at different times of the day. The results enable a graphic representation of the intensity of urban activities and their evolution through space and time. Finally, a number of future applications are discussed and their potential for urban studies and planning is assessed."
]
} |
1610.06947 | 2479843978 | Smartphones and other mobile devices are today pervasive across the globe. As an interesting side effect of the surge in mobile communications, mobile network operators can now easily collect a wealth of high-resolution data on the habits of large user populations. The information extracted from mobile network traffic data is very relevant in the context of population mapping: it provides a tool for the automatic and live estimation of population densities, overcoming the limitations of traditional data sources such as censuses and surveys. In this paper, we propose a new approach to infer population densities at urban scales, based on aggregated mobile network traffic metadata. Our approach allows estimating both static and dynamic populations, achieves a significant improvement in terms of accuracy with respect to state-of-the-art solutions in the literature, and is validated on different city scenarios. | Subsequent works carried out more comprehensive evaluations. In @cite_11, the home location of each subscriber was localized as the most frequently visited cell with a home profile (i.e., where the activity peak occurs in the evening). The density of home locations was then found to match census data on nationwide population distribution very well, with a 0.92 correlation. Similarly, excellent agreement between the overnight spatial density of mobile subscribers and that of nationwide static populations was found in @cite_13 @cite_1 . However, these results refer to a nationwide population, and the spatial granularity of the studies is counties or tracts (i.e., large regions comprising whole cities or macroscopic city neighborhoods). Our focus is on intra-urban population distribution estimation: to that end, we downscale the study to the individual cell level, targeting orders-of-magnitude higher accuracy and making the task much more challenging. | {
"cite_N": [
"@cite_1",
"@cite_13",
"@cite_11"
],
"mid": [
"1990898695",
"2002644929",
"2004602565"
],
"abstract": [
"Using an algorithm to analyze opportunistically collected mobile phone location data, the authors estimate weekday and weekend travel patterns of a large metropolitan area with high accuracy.",
"This paper presents a strategy to evaluate long-distance travel patterns by tracking cellular phone positions. The authors first note that long-distance trips are generally under-reported in typical household surveys, because of relative low frequency of these trips. Yet transportation analysis and travel demand forecasting require data, including that for long-distance trips, in order to model the decisions that people make related to travel. They stress that their suggested approach allows passive data collection on many travelers over a long period of time at low costs. They present results of a study in Israel, conducted in 2007, that included an average sample of 10,200 cell phone numbers per week for 16 weeks. The tracking system was based on recording events that contain a change in the position of the cell phone with respect to a given antenna. The method was specifically designed to capture long distance trips, as part of the development of a national demand model conducted for the Economics and Planning Department of the Israel Ministry of Transport. Using this method, origin–destination tables can be constructed directly from the cellular phone positions. The authors conclude that this model offers the advantage of monitoring travel demand at the aggregate level and thus could be useful in several transportation and land use applications.",
"Mobile phone datasets allow for the analysis of human behavior on an unprecedented scale. The social network, temporal dynamics and mobile behavior of mobile phone users have often been analyzed independently from each other using mobile phone datasets. In this article, we explore the connections between various features of human behavior extracted from a large mobile phone dataset. Our observations are based on the analysis of communication data of 100,000 anonymized and randomly chosen individuals in a dataset of communications in Portugal. We show that clustering and principal component analysis allow for a significant dimension reduction with limited loss of information. The most important features are related to geographical location. In particular, we observe that most people spend most of their time at only a few locations. With the help of clustering methods, we then robustly identify home and office locations and compare the results with official census data. Finally, we analyze the geographic spread of users’ frequent locations and show that commuting distances can be reasonably well explained by a gravity model."
]
} |
1610.06947 | 2479843978 | Smartphones and other mobile devices are today pervasive across the globe. As an interesting side effect of the surge in mobile communications, mobile network operators can now easily collect a wealth of high-resolution data on the habits of large user populations. The information extracted from mobile network traffic data is very relevant in the context of population mapping: it provides a tool for the automatic and live estimation of population densities, overcoming the limitations of traditional data sources such as censuses and surveys. In this paper, we propose a new approach to infer population densities at urban scales, based on aggregated mobile network traffic metadata. Our approach allows estimating both static and dynamic populations, achieves a significant improvement in terms of accuracy with respect to state-of-the-art solutions in the literature, and is validated on different city scenarios. | Citywide population estimation from mobile traffic data has been addressed by a limited number of works in the literature. In @cite_8, LandScan, a tool for ambient population estimation, was employed to explore the relationship between the voice call activity and the underlying inhabitant density, at a 1-km @math resolution. The authors found a weak correlation of 0.24, which improved to 0.45 by limiting the analysis to selected time intervals rather than considering the daily communication volume. In @cite_14, telecommunications data is mixed with a number of other sources, including information on Corine land use, OpenStreetMap infrastructure, satellite nightlights, and slope. This plethora of data is processed through a dasymetric modeling approach, resulting in a high 0.92 correlation with census information. However, the correlation is a nationwide average, and the authors indicate that the accuracy is lower for the most densely populated areas, i.e., large cities.
Indeed, in such areas, a normalized error of around 0.6 is measured in @cite_14 , whereas we obtain values below 0.06. | {
"cite_N": [
"@cite_14",
"@cite_8"
],
"mid": [
"2055992762",
"2007443624"
],
"abstract": [
"During the past few decades, technologies such as remote sensing, geographical information systems, and global positioning systems have transformed the way the distribution of human population is studied and modeled in space and time. However, the mapping of populations remains constrained by the logistics of censuses and surveys. Consequently, spatially detailed changes across scales of days, weeks, or months, or even year to year, are difficult to assess and limit the application of human population maps in situations in which timely information is required, such as disasters, conflicts, or epidemics. Mobile phones (MPs) now have an extremely high penetration rate across the globe, and analyzing the spatiotemporal distribution of MP calls geolocated to the tower level may overcome many limitations of census-based approaches, provided that the use of MP data is properly assessed and calibrated. Using datasets of more than 1 billion MP call records from Portugal and France, we show how spatially and temporarily explicit estimations of population densities can be produced at national scales, and how these estimates compare with outputs produced using alternative human population mapping methods. We also demonstrate how maps of human population changes can be produced over multiple timescales while preserving the anonymity of MP users. With similar data being collected every day by MP network providers across the world, the prospect of being able to map contemporary and changing human population distributions over relatively short intervals exists, paving the way for new applications and a near real-time understanding of patterns and processes in human geography.",
"Today, large-volume mobile phone call datasets are widely applied to investigate the spatio-temporal characteristics of human urban activity. This paper discusses several fundamental issues in estimating population distributions based on mobile call data. By adopting an individual-based call activity dataset that consists of nearly two million mobile subscribers who made over one hundred million communications over seven consecutive days, we explore the relationships among the Erlang values, the number of calls, and the number of active mobile subscribers. Then, the LandScan population density dataset is introduced to evaluate the process of estimating the population. The empirical findings indicate that: (1) Temporal variation exists in the relation between the Erlang values and the number of calls; (2) The number of calls is linearly proportional to the number of active mobile subscribers; (3) The proportion between the mobile subscribers and the actual total population varies in different areas, thus failing to represent the underlying population. Hence, the call activity reflects \"activity intensity\" rather than population distribution. The Erlang is a defective indicator of population distribution, whereas the number of calls serves as a better measure. This research provides an explicit clarification with respect to using call activity data for estimating population distribution."
]
} |