| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1907.05587
|
2956252722
|
The problem of adversarial examples, evasion attacks on machine learning classifiers, has proven extremely difficult to solve. This is true even when, as is the case in many practical settings, the classifier is hosted as a remote service and so the adversary does not have direct access to the model parameters. This paper argues that in such settings, defenders have a much larger space of actions than have been previously explored. Specifically, we deviate from the implicit assumption made by prior work that a defense must be a stateless function that operates on individual examples, and explore the possibility for stateful defenses. To begin, we develop a defense designed to detect the process of adversarial example generation. By keeping a history of the past queries, a defender can try to identify when a sequence of queries appears to be for the purpose of generating an adversarial example. We then introduce query blinding, a new class of attacks designed to bypass defenses that rely on such a defense approach. We believe that expanding the study of adversarial examples from stateless classifiers to stateful systems is not only more realistic for many black-box settings, but also gives the defender a much-needed advantage in responding to the adversary.
|
To our knowledge, our scheme is the first to use the history of queries to detect query-based black-box attacks that create adversarial examples. The most closely related work is PRADA @cite_24 , which detects black-box model extraction attacks using the history of queries. They examine the @math distance between images and raise an alarm if the distribution of these distances is not Gaussian. Because they use @math distance directly on images, their scheme is not robust to query blinding, and because they examine only the distribution of distances, their scheme is not robust to the insertion of dummy queries that make the distribution Gaussian (see Section V.C of the PRADA paper). They do not consider how to detect the creation of adversarial examples.
|
{
"cite_N": [
"@cite_24"
],
"mid": [
"2802314446"
],
"abstract": [
"As machine learning (ML) applications become increasingly prevalent, protecting the confidentiality of ML models becomes paramount for two reasons: (a) models may constitute a business advantage to their owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can be used to evade classification by the original model. One way to protect model confidentiality is to limit access to the model only via well-defined prediction APIs. This is common not only in machine-learning-as-a-service (MLaaS) settings where the model is remote, but also in scenarios like autonomous driving where the model is local but direct access to it is protected, for example, by hardware security mechanisms. Nevertheless, prediction APIs still leak information so that it is possible to mount model extraction attacks by an adversary who repeatedly queries the model via the prediction API. In this paper, we describe a new model extraction attack by combining a novel approach for generating synthetic queries together with recent advances in training deep neural networks. This attack outperforms state-of-the-art model extraction techniques in terms of transferability of targeted adversarial examples generated using the extracted model (+15-30 percentage points, pp), and in prediction accuracy (+15-20 pp) on two datasets. We then propose the first generic approach to effectively detect model extraction attacks: PRADA. It analyzes how the distribution of consecutive queries to the model evolves over time and raises an alarm when there are abrupt deviations. We show that PRADA can detect all known model extraction attacks with a 100% success rate and no false positives. PRADA is particularly suited for detecting extraction attacks against local models."
]
}
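The stateful detection idea in the related-work passage above can be sketched in a few lines. This is an illustrative toy, not the paper's actual detector: it tracks each query's minimum distance to the query history and raises an alarm when the recent distance distribution stops looking Gaussian, substituting a crude skewness/kurtosis check for PRADA's statistical test. All thresholds and window sizes are assumptions.

```python
import numpy as np

def looks_gaussian(samples, skew_tol=1.0, kurt_tol=1.5):
    """Crude normality proxy standing in for PRADA's actual statistical
    test; the tolerances are illustrative, not from the paper."""
    s = np.asarray(samples, dtype=float)
    sigma = s.std()
    if sigma == 0:
        return False                    # degenerate: identical distances
    z = (s - s.mean()) / sigma
    skew = np.mean(z ** 3)
    excess_kurt = np.mean(z ** 4) - 3.0
    return bool(abs(skew) < skew_tol and abs(excess_kurt) < kurt_tol)

def observe(history, min_dists, query, window=100, min_samples=20):
    """Record one query in the (mutable) history; return True when the
    recent distance distribution looks non-Gaussian, i.e. raise an alarm."""
    q = np.asarray(query, dtype=float).ravel()
    if history:
        min_dists.append(min(np.linalg.norm(q - h) for h in history))
    history.append(q)
    recent = min_dists[-window:]
    return len(recent) >= min_samples and not looks_gaussian(recent)
```

Benign, independently drawn queries tend to produce a well-spread distance distribution, whereas an attack that makes many tiny perturbations of one image concentrates the distances near zero.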
|
1907.05587
|
2956252722
|
The problem of adversarial examples, evasion attacks on machine learning classifiers, has proven extremely difficult to solve. This is true even when, as is the case in many practical settings, the classifier is hosted as a remote service and so the adversary does not have direct access to the model parameters. This paper argues that in such settings, defenders have a much larger space of actions than have been previously explored. Specifically, we deviate from the implicit assumption made by prior work that a defense must be a stateless function that operates on individual examples, and explore the possibility for stateful defenses. To begin, we develop a defense designed to detect the process of adversarial example generation. By keeping a history of the past queries, a defender can try to identify when a sequence of queries appears to be for the purpose of generating an adversarial example. We then introduce query blinding, a new class of attacks designed to bypass defenses that rely on such a defense approach. We believe that expanding the study of adversarial examples from stateless classifiers to stateful systems is not only more realistic for many black-box settings, but also gives the defender a much-needed advantage in responding to the adversary.
|
Other work has been done to defend against white-box attacks, such as adversarial training @cite_28 . Such defenses are complementary to ours: our detection strategy can be applied on top of any model. In this paper we study our defense on top of a non-robust model, both for simplicity and to accurately measure the value added by this type of defense. Recent work on robust similarity metrics @cite_12 could also be useful for improving our scheme.
|
{
"cite_N": [
"@cite_28",
"@cite_12"
],
"mid": [
"2640329709",
"2948562596"
],
"abstract": [
"Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models.",
"It has been recently shown that the hidden variables of convolutional neural networks make for an efficient perceptual similarity metric that accurately predicts human judgment on relative image similarity assessment. First, we show that such learned perceptual similarity metrics (LPIPS) are susceptible to adversarial attacks that dramatically contradict human visual similarity judgment. While this is not surprising in light of neural networks' well-known weakness to adversarial perturbations, we proceed to show that self-ensembling with an infinite family of random transformations of the input --- a technique known not to render classification networks robust --- is enough to turn the metric robust against attack, while retaining predictive power on human judgments. Finally, we study the geometry imposed by our novel self-ensembled metric (E-LPIPS) on the space of natural images. We find evidence of \"perceptual convexity\" by showing that convex combinations of similar-looking images retain appearance, and that discrete geodesics yield meaningful frame interpolation and texture morphing, all without explicit correspondences."
]
}
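For context on what adversarial training @cite_28 defends against, here is a minimal PGD-style l-infinity attack on a hypothetical linear classifier sign(w·x); the model, step size, and radius are illustrative choices, not taken from the cited work.

```python
import numpy as np

def pgd_linf(x, w, y, eps=0.3, step=0.1, iters=10):
    """Projected gradient descent in the l_inf ball of radius eps around x,
    against the margin y * (w . x) of a toy linear classifier: each step
    moves in the sign of the gradient that shrinks the correct-class
    margin, then projects back onto the ball."""
    x = np.asarray(x, dtype=float)
    x_adv = x.copy()
    for _ in range(iters):
        grad = -y * np.asarray(w, dtype=float)     # d(-margin)/dx
        x_adv = x_adv + step * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)   # l_inf projection
    return x_adv
```

Adversarial training then feeds such worst-case points back into the training set, which is why @cite_28 frames robustness as security against exactly this first-order adversary.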
|
1907.05587
|
2956252722
|
The problem of adversarial examples, evasion attacks on machine learning classifiers, has proven extremely difficult to solve. This is true even when, as is the case in many practical settings, the classifier is hosted as a remote service and so the adversary does not have direct access to the model parameters. This paper argues that in such settings, defenders have a much larger space of actions than have been previously explored. Specifically, we deviate from the implicit assumption made by prior work that a defense must be a stateless function that operates on individual examples, and explore the possibility for stateful defenses. To begin, we develop a defense designed to detect the process of adversarial example generation. By keeping a history of the past queries, a defender can try to identify when a sequence of queries appears to be for the purpose of generating an adversarial example. We then introduce query blinding, a new class of attacks designed to bypass defenses that rely on such a defense approach. We believe that expanding the study of adversarial examples from stateless classifiers to stateful systems is not only more realistic for many black-box settings, but also gives the defender a much-needed advantage in responding to the adversary.
|
Transfer attacks are a common approach in the zero-query setting @cite_18 . We explore combining our defense with ensemble adversarial training @cite_19 , currently one of the most effective defenses against zero-query transfer attacks, but the recent Sitatapatra defense may also be effective @cite_3 .
|
{
"cite_N": [
"@cite_19",
"@cite_18",
"@cite_3"
],
"mid": [
"2963744840",
"2274565976",
"2914110731"
],
"abstract": [
"Adversarial examples are perturbed inputs designed to fool machine learning models. Adversarial training injects such examples into training data to increase robustness. To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model's loss. We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss. The model thus learns to generate weak perturbations, rather than defend against strong ones. As a result, we find that adversarial training remains vulnerable to black-box attacks, where we transfer perturbations computed on undefended models, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step. We further introduce Ensemble Adversarial Training, a technique that augments training data with perturbations transferred from other models. On ImageNet, Ensemble Adversarial Training yields models with strong robustness to black-box attacks. In particular, our most robust model won the first round of the NIPS 2017 competition on Defenses against Adversarial Attacks.",
"Advances in deep learning have led to the broad adoption of Deep Neural Networks (DNNs) to a range of important machine learning problems, e.g., guiding autonomous vehicles, speech recognition, malware detection. Yet, machine learning models, including DNNs, were shown to be vulnerable to adversarial samples: subtly (and often humanly indistinguishably) modified malicious inputs crafted to compromise the integrity of their outputs. Adversarial examples thus enable adversaries to manipulate system behaviors. Potential attacks include attempts to control the behavior of vehicles, have spam content identified as legitimate content, or have malware identified as legitimate software. Adversarial examples are known to transfer from one model to another, even if the second model has a different architecture or was trained on a different set. We introduce the first practical demonstration that this cross-model transfer phenomenon enables attackers to control a remotely hosted DNN with no access to the model, its parameters, or its training data. In our demonstration, we only assume that the adversary can observe outputs from the target DNN given inputs chosen by the adversary. We introduce the attack strategy of fitting a substitute model to the input-output pairs in this manner, then crafting adversarial examples based on this auxiliary model. We evaluate the approach on existing DNN datasets and real-world settings. In one experiment, we force a DNN supported by MetaMind (one of the online APIs for DNN classifiers) to mis-classify inputs at a rate of 84.24%. We conclude with experiments exploring why adversarial samples transfer between DNNs, and a discussion on the applicability of our attack when targeting machine learning algorithms distinct from DNNs."
"Convolutional Neural Networks (CNNs) are widely used to solve classification tasks in computer vision. However, they can be tricked into misclassifying specially crafted 'adversarial' samples -- and samples built to trick one model often work alarmingly well against other models trained on the same task. In this paper we introduce Sitatapatra, a system designed to block the transfer of adversarial samples. It diversifies neural networks using a key, as in cryptography, and provides a mechanism for detecting attacks. What's more, when adversarial samples are detected they can typically be traced back to the individual device that was used to develop them. The run-time overheads are minimal permitting the use of Sitatapatra on constrained systems."
]
}
|
1907.05587
|
2956252722
|
The problem of adversarial examples, evasion attacks on machine learning classifiers, has proven extremely difficult to solve. This is true even when, as is the case in many practical settings, the classifier is hosted as a remote service and so the adversary does not have direct access to the model parameters. This paper argues that in such settings, defenders have a much larger space of actions than have been previously explored. Specifically, we deviate from the implicit assumption made by prior work that a defense must be a stateless function that operates on individual examples, and explore the possibility for stateful defenses. To begin, we develop a defense designed to detect the process of adversarial example generation. By keeping a history of the past queries, a defender can try to identify when a sequence of queries appears to be for the purpose of generating an adversarial example. We then introduce query blinding, a new class of attacks designed to bypass defenses that rely on such a defense approach. We believe that expanding the study of adversarial examples from stateless classifiers to stateful systems is not only more realistic for many black-box settings, but also gives the defender a much-needed advantage in responding to the adversary.
|
Our query blinding approach takes inspiration from prior work on signature blinding @cite_31 and mimicry attacks @cite_17 .
|
{
"cite_N": [
"@cite_31",
"@cite_17"
],
"mid": [
"1601001795",
"2135143063"
],
"abstract": [
"Automation of the way we pay for goods and services is already underway, as can be seen by the variety and growth of electronic banking services available to consumers. The ultimate structure of the new electronic payments system may have a substantial impact on personal privacy as well as on the nature and extent of criminal use of payments. Ideally a new payments system should address both of these seemingly conflicting sets of concerns.",
"We examine several host-based anomaly detection systems and study their security against evasion attacks. First, we introduce the notion of a mimicry attack, which allows a sophisticated attacker to cloak their intrusion to avoid detection by the IDS. Then, we develop a theoretical framework for evaluating the security of an IDS against mimicry attacks. We show how to break the security of one published IDS with these methods, and we experimentally confirm the power of mimicry attacks by giving a worked example of an attack on a concrete IDS implementation. We conclude with a call for further research on intrusion detection from both attacker's and defender's viewpoints."
]
}
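As an illustration of the query-blinding idea (analogous to blinding a message before it is signed), one hypothetical blinding transform for image queries applies a small random shift and brightness jitter so that consecutive attack queries are no longer near-duplicates. The transform family and all parameters here are assumptions for the sketch, not the paper's actual construction.

```python
import numpy as np

def blind_query(image, rng, max_shift=2, max_brightness=0.05):
    """Hypothetical blinding transform for an image in [0, 1]: a small
    random translation plus a random brightness offset. The attacker
    submits the blinded image and treats the model's answer as a noisy
    answer about the original query."""
    out = np.roll(
        np.asarray(image, dtype=float),
        shift=(int(rng.integers(-max_shift, max_shift + 1)),
               int(rng.integers(-max_shift, max_shift + 1))),
        axis=(0, 1),
    )
    return np.clip(out + rng.uniform(-max_brightness, max_brightness), 0.0, 1.0)
```

Because each blinded copy differs from the last by more than a tiny perturbation, a detector that keys on small inter-query distances sees an innocuous-looking stream unless it is robust to such transformations.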
|
1907.05517
|
2956735194
|
Broadcast is a fundamental network operation, widely used in wireless networks to disseminate messages. The energy-efficiency of broadcast is important particularly when devices in the network are energy constrained. To improve the efficiency of broadcast, different approaches have been taken in the literature. One of these approaches is broadcast with energy accumulation. Through simulations, it has been shown in the literature that broadcast with energy accumulation can result in energy saving. The amount of this saving, however, has only been analyzed for linear multi-hop wireless networks. In this work, we extend this analysis to two-dimensional (2D) multi-hop networks. The analysis of saving in 2D networks is much more challenging than that in linear networks. This is because, unlike in linear networks, in 2D networks, finding minimum-energy broadcasts with or without energy accumulation are both NP-hard problems. Nevertheless, using a novel approach, we prove that this saving is constant when the path loss exponent alpha is strictly greater than two. Also, we prove that the saving is theta(log n) when alpha=2, where n denotes the number of nodes in the network.
|
Energy accumulation can be performed at receivers utilizing maximal ratio combining (MRC) of orthogonal signals in time, frequency or code domain (see @cite_35 @cite_30 @cite_3 ).
|
{
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_3"
],
"mid": [
"2140753648",
"2161586373",
"2097365725"
],
"abstract": [
"We formulate the problem of delay constrained energy-efficient broadcast in cooperative multihop wireless networks. We show that this important problem is not only NP-complete, but also o(log(n)) inapproximable. We derive approximation results and an analytical lower-bound for this problem. We break this NP hard problem into three parts: ordering, scheduling and power control. We show that when the ordering is given, the joint scheduling and power-control problem can be solved in polynomial time by a novel algorithm that combines dynamic programming and linear programming to yield the minimum energy broadcast for a given delay constraint. We further show empirically that this algorithm used in conjunction with an ordering derived heuristically using the Dijkstra's shortest path algorithm yields near-optimal performance in typical settings. We use our algorithm to study numerically the trade-off between delay and power-efficiency in cooperative broadcast and compare the performance of our cooperative algorithm with a smart non-cooperative algorithm.",
"A fundamental problem in large scale wireless networks is the energy efficient broadcast of source messages to the whole network. The energy consumption increases as the network size grows, and the optimization of broadcast efficiency becomes more important. In this paper, we study the optimal power allocation problem for cooperative broadcast in dense large-scale networks. In the considered cooperation protocol, a single source initiates the transmission and the rest of the nodes retransmit the source message if they have decoded it reliably. Each node is allocated an orthogonal channel and the nodes improve their receive signal-to-noise ratio (SNR), hence the energy efficiency, by maximal-ratio combining the receptions of the same packet from different transmitters. We assume that the decoding of the source message is correct as long as the receive SNR exceeds a predetermined threshold. Under the optimal cooperative broadcasting, the transmission order (i.e., the schedule) and the transmission powers of the source and the relays are designed so that every node receives the source message reliably and the total power consumption is minimized. In general, finding the best scheduling in cooperative broadcast is known to be an NP-complete problem. In this paper, we show that the optimal scheduling problem can be solved for dense networks, which we approximate as a continuum of nodes. Under the continuum model, we derive the optimal scheduling and the optimal power density. Furthermore, we propose low-complexity, distributed and power efficient broadcasting schemes and compare their power consumptions with those of a traditional noncooperative multihop transmission.",
""
]
}
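The MRC-based energy accumulation described above has a simple model: with orthogonal transmissions, the receive SNRs add, so a node decodes once its accumulated SNR crosses a threshold. The sketch below assumes unit noise power and a pure path-loss gain d^(-alpha); both are simplifying assumptions.

```python
import numpy as np

def decodes(node, transmitters, powers, alpha=3.0, tau=1.0):
    """Energy-accumulation decoding rule: under maximal ratio combining
    of orthogonal transmissions, per-link SNRs P / d**alpha (noise power
    normalised to 1) simply add, and the node decodes once the
    accumulated SNR reaches the threshold tau."""
    node = np.asarray(node, dtype=float)
    snr = 0.0
    for tx, p in zip(transmitters, powers):
        d = np.linalg.norm(node - np.asarray(tx, dtype=float))
        snr += p / d ** alpha       # MRC: accumulated SNR is the sum
    return bool(snr >= tau)
```

This is the mechanism that lets a node too far from any single transmitter still decode by pooling several weak receptions.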
|
1907.05517
|
2956735194
|
Broadcast is a fundamental network operation, widely used in wireless networks to disseminate messages. The energy-efficiency of broadcast is important particularly when devices in the network are energy constrained. To improve the efficiency of broadcast, different approaches have been taken in the literature. One of these approaches is broadcast with energy accumulation. Through simulations, it has been shown in the literature that broadcast with energy accumulation can result in energy saving. The amount of this saving, however, has only been analyzed for linear multi-hop wireless networks. In this work, we extend this analysis to two-dimensional (2D) multi-hop networks. The analysis of saving in 2D networks is much more challenging than that in linear networks. It is because, unlike in linear networks, in 2D networks, finding minimum-energy broadcasts with or without energy accumulation are both NP-hard problems. Nevertheless, using a novel approach, we prove that this saving is constant when the path loss exponent alpha is strictly greater than two. Also, we prove that the saving is theta(log n) when alpha=2, where n denotes the number of nodes in the network.
|
Existing energy-accumulation-based cooperative broadcast algorithms fall into two groups. The first group includes algorithms (e.g., @cite_0 @cite_35 ) in which receiving nodes can combine signals from all previous transmissions to benefit from transmission diversity. These are called cooperative broadcast algorithms with memory. The other group includes "memoryless" cooperative broadcast algorithms such as the one proposed in @cite_30 . In these algorithms, a node can only use transmissions in the present time slot to accumulate energy; signals received in previous time slots are discarded. Our work studies cooperative broadcast algorithms with memory, as they fully benefit from energy accumulation. As a result, our derived upper bounds on the cooperation gain also apply to "memoryless" cooperative broadcast algorithms.
|
{
"cite_N": [
"@cite_0",
"@cite_35",
"@cite_30"
],
"mid": [
"2119940417",
"2161586373",
"2140753648"
],
"abstract": [
"Broadcasting is a method that allows the distributed nodes in a wireless sensor network to share their data efficiently with each other. Due to the limited energy supplies of a sensor node, energy efficiency has become a crucial issue in the design of broadcasting protocols. In this paper, we analyze the energy savings provided by a cooperative form of broadcast, called the opportunistic large arrays (OLA), and compare it to the performance of conventional multi-hop networks where no cooperation is utilized for transmission. The cooperation in OLA allows the receivers to utilize for detection the accumulation of signal energy provided by the transmitters that are relaying the same symbol. In this work, we derive the optimal energy allocation policy that minimizes the total energy cost of the OLA network subject to the SNR (or BER) requirements at all receivers. Even though the cooperative broadcast protocol provides significant energy savings, we prove that the optimum energy assignment for cooperative networks is an NP-complete problem and, thus, requires high computational complexity in general. We then introduce several suboptimal yet scalable solutions and show the significant energy-savings that one can obtain even with the approximate solutions",
"A fundamental problem in large scale wireless networks is the energy efficient broadcast of source messages to the whole network. The energy consumption increases as the network size grows, and the optimization of broadcast efficiency becomes more important. In this paper, we study the optimal power allocation problem for cooperative broadcast in dense large-scale networks. In the considered cooperation protocol, a single source initiates the transmission and the rest of the nodes retransmit the source message if they have decoded it reliably. Each node is allocated an orthogonal channel and the nodes improve their receive signal-to-noise ratio (SNR), hence the energy efficiency, by maximal-ratio combining the receptions of the same packet from different transmitters. We assume that the decoding of the source message is correct as long as the receive SNR exceeds a predetermined threshold. Under the optimal cooperative broadcasting, the transmission order (i.e., the schedule) and the transmission powers of the source and the relays are designed so that every node receives the source message reliably and the total power consumption is minimized. In general, finding the best scheduling in cooperative broadcast is known to be an NP-complete problem. In this paper, we show that the optimal scheduling problem can be solved for dense networks, which we approximate as a continuum of nodes. Under the continuum model, we derive the optimal scheduling and the optimal power density. Furthermore, we propose low-complexity, distributed and power efficient broadcasting schemes and compare their power consumptions with those of a traditional noncooperative multihop transmission.",
"We formulate the problem of delay constrained energy-efficient broadcast in cooperative multihop wireless networks. We show that this important problem is not only NP-complete, but also o(log(n)) inapproximable. We derive approximation results and an analytical lower-bound for this problem. We break this NP hard problem into three parts: ordering, scheduling and power control. We show that when the ordering is given, the joint scheduling and power-control problem can be solved in polynomial time by a novel algorithm that combines dynamic programming and linear programming to yield the minimum energy broadcast for a given delay constraint. We further show empirically that this algorithm used in conjunction with an ordering derived heuristically using the Dijkstra's shortest path algorithm yields near-optimal performance in typical settings. We use our algorithm to study numerically the trade-off between delay and power-efficiency in cooperative broadcast and compare the performance of our cooperative algorithm with a smart non-cooperative algorithm."
]
}
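The memory vs. memoryless distinction can be made concrete with a toy decoding rule: a receiver with memory keeps accumulating SNR across time slots, while a memoryless receiver only combines transmissions within the current slot. The link gains and powers below are arbitrary illustrative numbers, not from any cited scheme.

```python
def slots_to_decode(gains, power, tau=1.0, memory=True):
    """Return the first time slot (1-indexed) in which a receiver
    decodes, or None if it never does. `gains` is a list of per-slot
    lists of link gains from that slot's transmitters; per-slot SNRs
    are combined by MRC (summed), and with memory=True the sum also
    carries over from earlier slots."""
    acc = 0.0
    for t, slot_gains in enumerate(gains, start=1):
        slot_snr = sum(power * g for g in slot_gains)
        acc = acc + slot_snr if memory else slot_snr
        if acc >= tau:
            return t
    return None
```

With the same transmissions, the receiver with memory can decode where the memoryless one never does, which is why upper bounds derived for the with-memory case also bound the memoryless case.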
|
1907.05517
|
2956735194
|
Broadcast is a fundamental network operation, widely used in wireless networks to disseminate messages. The energy-efficiency of broadcast is important particularly when devices in the network are energy constrained. To improve the efficiency of broadcast, different approaches have been taken in the literature. One of these approaches is broadcast with energy accumulation. Through simulations, it has been shown in the literature that broadcast with energy accumulation can result in energy saving. The amount of this saving, however, has only been analyzed for linear multi-hop wireless networks. In this work, we extend this analysis to two-dimensional (2D) multi-hop networks. The analysis of saving in 2D networks is much more challenging than that in linear networks. This is because, unlike in linear networks, in 2D networks, finding minimum-energy broadcasts with or without energy accumulation are both NP-hard problems. Nevertheless, using a novel approach, we prove that this saving is constant when the path loss exponent alpha is strictly greater than two. Also, we prove that the saving is theta(log n) when alpha=2, where n denotes the number of nodes in the network.
|
The problem of cooperative broadcast with minimum energy can be broken into two sub-problems: (i) transmission scheduling, which determines the set of transmitters and the order of transmissions; and (ii) power allocation, which sets the transmission powers. It has been proven that, given a transmission schedule, the optimal power allocation can be computed in polynomial time, but finding an optimal schedule that leads to minimum power consumption is NP-hard @cite_0 @cite_5 .
|
{
"cite_N": [
"@cite_0",
"@cite_5"
],
"mid": [
"2119940417",
"2136564093"
],
"abstract": [
"Broadcasting is a method that allows the distributed nodes in a wireless sensor network to share their data efficiently with each other. Due to the limited energy supplies of a sensor node, energy efficiency has become a crucial issue in the design of broadcasting protocols. In this paper, we analyze the energy savings provided by a cooperative form of broadcast, called the opportunistic large arrays (OLA), and compare it to the performance of conventional multi-hop networks where no cooperation is utilized for transmission. The cooperation in OLA allows the receivers to utilize for detection the accumulation of signal energy provided by the transmitters that are relaying the same symbol. In this work, we derive the optimal energy allocation policy that minimizes the total energy cost of the OLA network subject to the SNR (or BER) requirements at all receivers. Even though the cooperative broadcast protocol provides significant energy savings, we prove that the optimum energy assignment for cooperative networks is an NP-complete problem and, thus, requires high computational complexity in general. We then introduce several suboptimal yet scalable solutions and show the significant energy-savings that one can obtain even with the approximate solutions",
"We address the minimum-energy broadcast problem under the assumption that nodes beyond the nominal range of a transmitter can collect the energy of unreliably received overheard signals. As a message is forwarded through the network, a node will have multiple opportunities to reliably receive the message by collecting energy during each retransmission. We refer to this cooperative strategy as accumulative broadcast. We seek to employ accumulative broadcast in a large scale loosely synchronized, low-power network. Therefore, we focus on distributed network layer approaches for accumulative broadcast in which loosely synchronized nodes use only local information. To further simplify the system architecture, we assume that nodes forward only reliably decoded messages. Under these assumptions, we formulate the minimum-energy accumulative broadcast problem. We present a solution employing two subproblems. First, we identify the ordering in which nodes should transmit. Second, we determine the optimum power levels for that ordering. While the second subproblem can be solved by means of linear programming, the ordering subproblem is found to be NP-complete. We devise a heuristic algorithm to find a good ordering. Simulation results show the performance of the algorithm to be close to optimum and a significant improvement over the well known BIP algorithm for constructing energy-efficient broadcast trees. We then formulate a distributed version of the accumulative broadcast algorithm that uses only local information at the nodes and has performance close to its centralized counterpart."
]
}
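Given a fixed transmission ordering, the power-allocation sub-problem is a linear program, as the passage above notes: minimize total power subject to every node accumulating enough SNR from all earlier transmitters. A minimal sketch, assuming unit noise power and link gain d^(-alpha) (both simplifying assumptions):

```python
import numpy as np
from scipy.optimize import linprog

def min_power_allocation(positions, order, alpha=2.0, tau=1.0):
    """Solve the power-allocation LP for a fixed ordering: one power
    variable per node except the last in the order, objective = total
    power, and one accumulated-SNR constraint per receiving node."""
    pos = np.asarray(positions, dtype=float)
    n = len(order)
    c = np.ones(n - 1)                 # minimize sum of powers
    A_ub, b_ub = [], []
    for k in range(1, n):              # node order[k] must decode
        row = np.zeros(n - 1)
        for j in range(k):             # only earlier transmitters help
            d = np.linalg.norm(pos[order[k]] - pos[order[j]])
            row[j] = -(d ** -alpha)    # negated: linprog wants A_ub x <= b_ub
        A_ub.append(row)
        b_ub.append(-tau)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n - 1))
    return res.x
```

For a 3-node linear network at positions 0, 1, 2 with alpha=2 and tau=1, the LP gives the source power 1 (to reach its neighbor) and the relay power 0.75 (the middle node tops up the last node, which already accumulated SNR 0.25 from the source); the hard part left outside the LP is choosing the ordering itself.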
|
1907.05517
|
2956735194
|
Broadcast is a fundamental network operation, widely used in wireless networks to disseminate messages. The energy-efficiency of broadcast is important particularly when devices in the network are energy constrained. To improve the efficiency of broadcast, different approaches have been taken in the literature. One of these approaches is broadcast with energy accumulation. Through simulations, it has been shown in the literature that broadcast with energy accumulation can result in energy saving. The amount of this saving, however, has only been analyzed for linear multi-hop wireless networks. In this work, we extend this analysis to two-dimensional (2D) multi-hop networks. The analysis of saving in 2D networks is much more challenging than that in linear networks. This is because, unlike in linear networks, in 2D networks, finding minimum-energy broadcasts with or without energy accumulation are both NP-hard problems. Nevertheless, using a novel approach, we prove that this saving is constant when the path loss exponent alpha is strictly greater than two. Also, we prove that the saving is theta(log n) when alpha=2, where n denotes the number of nodes in the network.
|
In addition to saving energy, energy accumulation can be used to reduce broadcast latency @cite_34 . Some existing works study the tradeoff between energy and latency in cooperative broadcast @cite_30 @cite_2 . In @cite_30 , Baghaie and Krishnamachari prove that the problem of minimizing energy consumption while meeting a desired latency constraint is not only NP-hard but also @math inapproximable.
|
{
"cite_N": [
"@cite_30",
"@cite_34",
"@cite_2"
],
"mid": [
"2140753648",
"2158971341",
""
],
"abstract": [
"We formulate the problem of delay constrained energy-efficient broadcast in cooperative multihop wireless networks. We show that this important problem is not only NP-complete, but also o(log(n)) inapproximable. We derive approximation results and an analytical lower-bound for this problem. We break this NP hard problem into three parts: ordering, scheduling and power control. We show that when the ordering is given, the joint scheduling and power-control problem can be solved in polynomial time by a novel algorithm that combines dynamic programming and linear programming to yield the minimum energy broadcast for a given delay constraint. We further show empirically that this algorithm used in conjunction with an ordering derived heuristically using the Dijkstra's shortest path algorithm yields near-optimal performance in typical settings. We use our algorithm to study numerically the trade-off between delay and power-efficiency in cooperative broadcast and compare the performance of our cooperative algorithm with a smart non-cooperative algorithm.",
"Cooperative broadcast aims to deliver a source message to a locally connected network by means of collaborating nodes. In traditional architectures, node cooperation has been at the network layer. Recently, physical layer cooperative schemes have been shown to offer several advantages over the network layer approaches. This form of cooperation employs distributed transmission resources at the physical layer as a single radio with spatial diversity. In decentralized cooperation schemes, collaborating nodes make transmission decisions based on the quality of the received signal, which is the only parameter available locally. In this case, critical parameters that influence the broadcast performance include the source relay transmission powers and the decoding threshold (the minimum signal-to-noise ratio (SNR) required to decode a transmission). We study the effect of these parameters on the number of nodes reached by cooperative broadcast. In particular, we show that there exists a phase transition in the network behavior: if the decoding threshold is below a critical value, the message is delivered to the whole network. Otherwise, only a fraction of the nodes is reached, which is proportional to the source transmit power. Our approach is based on the idea of continuum approximation, which yields closed-form expressions that are accurate when the network density is high.",
""
]
}
|
1907.05517
|
2956735194
|
Broadcast is a fundamental network operation, widely used in wireless networks to disseminate messages. The energy-efficiency of broadcast is important particularly when devices in the network are energy constrained. To improve the efficiency of broadcast, different approaches have been taken in the literature. One of these approaches is broadcast with energy accumulation. Through simulations, it has been shown in the literature that broadcast with energy accumulation can result in energy saving. The amount of this saving, however, has only been analyzed for linear multi-hop wireless networks. In this work, we extend this analysis to two-dimensional (2D) multi-hop networks. The analysis of saving in 2D networks is much more challenging than that in linear networks. It is because, unlike in linear networks, in 2D networks, finding minimum-energy broadcasts with or without energy accumulation are both NP-hard problems. Nevertheless, using a novel approach, we prove that this saving is constant when the path loss exponent alpha is strictly greater than two. Also, we prove that the saving is theta(log n) when alpha=2, where n denotes the number of nodes in the network.
|
The best existing approximation algorithm to the problem is, however, due to Caragiannis, Flammini, and Moscardelli @cite_25 . In 2D wireless networks, their algorithm has an approximation ratio of 4.2 for Euclidean cost graphs, and a logarithmic approximation for non-Euclidean cost graphs.
|
{
"cite_N": [
"@cite_25"
],
"mid": [
"2032612188"
],
"abstract": [
"We present a new approximation algorithm for the Minimum Energy Broadcast Routing (MEBR) problem in ad hoc wireless networks that achieves an exponentially better approximation factor compared to the well-known Minimum Spanning Tree (MST) heuristic. Namely, for any instance where a minimum spanning tree of the set of stations is guaranteed to cost at most ρ ≥ 2 times the cost of an optimal solution for MEBR, we prove that our algorithm achieves an approximation ratio bounded by 2 ln ρ - 2 ln 2 +2. This result is particularly relevant for its consequences on Euclidean instances where we significantly improve previous results. In this respect, our experimental analysis confirms the better performance of the algorithm also in practice."
]
}
|
1907.05517
|
2956735194
|
Broadcast is a fundamental network operation, widely used in wireless networks to disseminate messages. The energy-efficiency of broadcast is important particularly when devices in the network are energy constrained. To improve the efficiency of broadcast, different approaches have been taken in the literature. One of these approaches is broadcast with energy accumulation. Through simulations, it has been shown in the literature that broadcast with energy accumulation can result in energy saving. The amount of this saving, however, has only been analyzed for linear multi-hop wireless networks. In this work, we extend this analysis to two-dimensional (2D) multi-hop networks. The analysis of saving in 2D networks is much more challenging than that in linear networks. It is because, unlike in linear networks, in 2D networks, finding minimum-energy broadcasts with or without energy accumulation are both NP-hard problems. Nevertheless, using a novel approach, we prove that this saving is constant when the path loss exponent alpha is strictly greater than two. Also, we prove that the saving is theta(log n) when alpha=2, where n denotes the number of nodes in the network.
|
Unlike 2D networks, in linear networks, the minimum energy of both cooperative and non-cooperative broadcast algorithms can be computed in polynomial time @cite_33 . For linear networks, the ratio of the two minimum power consumptions was proven to be constant with respect to the number of nodes in the network @cite_14 . Our work extends this study to general 2D networks.
|
{
"cite_N": [
"@cite_14",
"@cite_33"
],
"mid": [
"2247586146",
"2083556259"
],
"abstract": [
"We analyze the maximum gain that can be achieved through cooperative broadcast with energy accumulation and memory. We consider a linear network, where a known number of nodes are placed on a line, and derive an upper bound on @math , the gain of cooperative broadcast over noncooperative broadcast with respect to total power consumption. Specifically, we prove that, in linear networks with path loss exponent @math , @math , irrespective of the number of devices, the size of the network, the node placement strategy, and the cooperative broadcast strategy. We extend this result to any path loss exponent @math . We also show that the cooperation gain in short-range transmissions, wherein the circuit energy consumption is nonnegligible, is smaller than that in long-range transmissions. We further study the cooperative broadcast gain when the objective is to reduce the maximum transmission power used by any node in the network. In this case, we show that, when nodes are distributed uniformly at random, the maximum cooperation gain will be @math (log @math ), with high probability, where @math is the number of nodes in the network. These are important observations that should be considered in designing power-efficient broadcast algorithms in future network-wide cooperative broadcast.",
"In all-wireless networks a crucial problem is to minimize energy consumption, as in most cases the nodes are battery-operated. We focus on the problem of power-optimal broadcast, for which it is well known that the broadcast nature of the radio transmission can be exploited to optimize energy consumption. Several authors have conjectured that the problem of power-optimal broadcast is NP-complete. We provide here a formal proof, both for the general case and for the geometric one; in the former case, the network topology is represented by a generic graph with arbitrary weights, whereas in the latter a Euclidean distance is considered. We then describe a new heuristic, Embedded Wireless Multicast Advantage. We show that it compares well with other proposals and we explain how it can be distributed."
]
}
|
1907.05708
|
2959514140
|
Respiratory diseases are among the most common causes of severe illness and death worldwide. Prevention and early diagnosis are essential to limit or even reverse the trend that characterizes the diffusion of such diseases. In this regard, the development of advanced computational tools for the analysis of respiratory auscultation sounds can become a game changer for detecting disease-related anomalies, or diseases themselves. In this work, we propose a novel learning framework for respiratory auscultation sound data. Our approach combines state-of-the-art feature extraction techniques and advanced deep-neural-network architectures. Remarkably, to the best of our knowledge, we are the first to model a recurrent-neural-network based learning framework to support the clinician in detecting respiratory diseases, at either level of abnormal sounds or pathology classes. Results obtained on the ICBHI benchmark dataset show that our approach outperforms competing methods on both anomaly-driven and pathology-driven prediction tasks, thus advancing the state-of-the-art in respiratory disease analysis.
|
In @cite_9 , the authors proposed a method based on hidden Markov models and Gaussian mixture models. The preprocessing phase includes a noise-suppression step that relies on spectral subtraction @cite_9 . The input of the model consists of Mel-frequency cepstral coefficients (MFCCs) extracted in the range between 50 Hz and 2,000 Hz, in combination with their first derivatives. The method achieves up to a 39.37% improvement over the performance of a single classifier, though at the expense of a ten times greater computational burden.
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2149535104"
],
"abstract": [
"This paper describes a method for enhancing speech corrupted by broadband noise. The method is based on the spectral noise subtraction method. The original method entails subtracting an estimate of the noise power spectrum from the speech power spectrum, setting negative differences to zero, recombining the new power spectrum with the original phase, and then reconstructing the time waveform. While this method reduces the broadband noise, it also usually introduces an annoying \"musical noise\". We have devised a method that eliminates this \"musical noise\" while further reducing the background noise. The method consists in subtracting an overestimate of the noise power spectrum, and preventing the resultant spectral components from going below a preset minimum level (spectral floor). The method can automatically adapt to a wide range of signal-to-noise ratios, as long as a reasonable estimate of the noise spectrum can be obtained. Extensive listening tests were performed to determine the quality and intelligibility of speech enhanced by our method. Listeners unanimously preferred the quality of the processed speech. Also, for an input signal-to-noise ratio of 5 dB, there was no loss of intelligibility associated with the enhancement technique."
]
}
|
1907.05708
|
2959514140
|
Respiratory diseases are among the most common causes of severe illness and death worldwide. Prevention and early diagnosis are essential to limit or even reverse the trend that characterizes the diffusion of such diseases. In this regard, the development of advanced computational tools for the analysis of respiratory auscultation sounds can become a game changer for detecting disease-related anomalies, or diseases themselves. In this work, we propose a novel learning framework for respiratory auscultation sound data. Our approach combines state-of-the-art feature extraction techniques and advanced deep-neural-network architectures. Remarkably, to the best of our knowledge, we are the first to model a recurrent-neural-network based learning framework to support the clinician in detecting respiratory diseases, at either level of abnormal sounds or pathology classes. Results obtained on the ICBHI benchmark dataset show that our approach outperforms competing methods on both anomaly-driven and pathology-driven prediction tasks, thus advancing the state-of-the-art in respiratory disease analysis.
|
The boosted decision tree model proposed in @cite_13 utilizes two different types of features: MFCCs and low-level features extracted with the help of the library @cite_1 . This method was mainly evaluated in a binary prediction setting (i.e., healthy or unhealthy), achieving accuracy of up to 85%.
|
{
"cite_N": [
"@cite_1",
"@cite_13"
],
"mid": [
"2385545",
"2899493363"
],
"abstract": [
"Communication presented at the 14th International Society for Music Information Retrieval Conference, held in Curitiba (Brazil), November 4-8, 2013.",
"In modern medicine, every cardiac assessment or respiratory check-up includes an audio auscultation during which the medical specialist listens to sounds from the patient's body with different tools (stethoscope, sonography). This shows how important sound analysis is for heart and lung disease detection. During the ICBHI 2017 challenge, a database of 920 recordings acquired from 126 subjects was used to find a method that predicted whether or not a respiratory cycle contains adventitious sounds such as crackles, wheezes, or both. The team that submitted the best results reached around 50% correct detection. Using a machine learning approach with a boosted decision tree model and more audio features leads to the same results. A new approach consists in creating a model at the patient level, which is able to decide whether a patient sounds sick or not by taking as input the predicted results of the first classification model. This new model reaches 85% correct predictions and could be used as a tool to help doctors make better diagnoses."
]
}
|
1901.03729
|
2909596867
|
Automated rationale generation is an approach for real-time explanation generation whereby a computational model learns to translate an autonomous agent's internal state and action data representations into natural language. Training on human explanation data can enable agents to learn to generate human-like explanations for their behavior. In this paper, using the context of an agent that plays Frogger, we describe (a) how to collect a corpus of explanations, (b) how to train a neural rationale generator to produce different styles of rationales, and (c) how people perceive these rationales. We conducted two user studies. The first study establishes the plausibility of each type of generated rationale and situates their user perceptions along the dimensions of confidence, humanlike-ness, adequate justification, and understandability. The second study further explores user preferences between the generated rationales with regard to confidence in the autonomous agent, communicating failure and unexpected behavior. Overall, we find alignment between the intended differences in features of the generated rationales and the perceived differences by users. Moreover, context permitting, participants preferred detailed rationales to form a stable mental model of the agent's behavior.
|
Much of the previous work on explainable AI has focused on interpretability. While there is no single definition of interpretability with respect to machine learning models, we view interpretability as a property of machine-learned models that dictates the degree to which a human user---whether AI expert or end user---can come to conclusions about the performance of the model on specific inputs. Some types of models are inherently interpretable, meaning they require relatively little effort to understand. Other types of models require more effort to make sense of their performance on specific inputs. Some non-inherently interpretable models can be made interpretable in a post-hoc fashion through explanation or visualization. Model-agnostic post-hoc methods can help to make models intelligible without custom explanation or visualization technologies and without changing the underlying model to make it more interpretable @cite_9 @cite_8 .
|
{
"cite_N": [
"@cite_9",
"@cite_8"
],
"mid": [
"2282821441",
"1825675169"
],
"abstract": [
"Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.",
"Recent years have produced great advances in training large, deep neural networks (DNNs), including notable successes in training convolutional neural networks (convnets) to recognize natural images. However, our understanding of how these models work, especially what computations they perform at intermediate layers, has lagged behind. Progress in the field will be further accelerated by the development of better tools for visualizing and interpreting neural nets. We introduce two such tools here. The first is a tool that visualizes the activations produced on each layer of a trained convnet as it processes an image or video (e.g. a live webcam stream). We have found that looking at live activations that change in response to user input helps build valuable intuitions about how convnets work. The second tool enables visualizing features at each layer of a DNN via regularized optimization in image space. Because previous versions of this idea produced less recognizable images, here we introduce several new regularization methods that combine to produce qualitatively clearer, more interpretable visualizations. Both tools are open source and work on a pre-trained convnet with minimal setup."
]
}
|
1901.03729
|
2909596867
|
Automated rationale generation is an approach for real-time explanation generation whereby a computational model learns to translate an autonomous agent's internal state and action data representations into natural language. Training on human explanation data can enable agents to learn to generate human-like explanations for their behavior. In this paper, using the context of an agent that plays Frogger, we describe (a) how to collect a corpus of explanations, (b) how to train a neural rationale generator to produce different styles of rationales, and (c) how people perceive these rationales. We conducted two user studies. The first study establishes the plausibility of each type of generated rationale and situates their user perceptions along the dimensions of confidence, humanlike-ness, adequate justification, and understandability. The second study further explores user preferences between the generated rationales with regard to confidence in the autonomous agent, communicating failure and unexpected behavior. Overall, we find alignment between the intended differences in features of the generated rationales and the perceived differences by users. Moreover, context permitting, participants preferred detailed rationales to form a stable mental model of the agent's behavior.
|
Explanation generation can be described as a form of post-hoc interpretability @cite_41 @cite_13 ; explanations are generated on demand based on the current state of a model and---potentially---meta-knowledge about how the algorithm works. An important distinction between interpretability and explanation is that explanation does not elucidate precisely how a model works but aims to give useful information to practitioners and end users. @cite_20 conduct a comprehensive survey on trends in explainable and intelligible systems research.
|
{
"cite_N": [
"@cite_41",
"@cite_13",
"@cite_20"
],
"mid": [
"2439568532",
"",
"2795530988"
],
"abstract": [
"Supervised machine learning models boast remarkable predictive capabilities. But can you trust your model? Will it work in deployment? What else can it tell you about the world? We want models to be not only good, but interpretable. And yet the task of interpretation appears underspecified. Papers provide diverse and sometimes non-overlapping motivations for interpretability, and offer myriad notions of what attributes render models interpretable. Despite this ambiguity, many papers proclaim interpretability axiomatically, absent further explanation. In this paper, we seek to refine the discourse on interpretability. First, we examine the motivations underlying interest in interpretability, finding them to be diverse and occasionally discordant. Then, we address model properties and techniques thought to confer interpretability, identifying transparency to humans and post-hoc explanations as competing notions. Throughout, we discuss the feasibility and desirability of different notions, and question the oft-made assertions that linear models are interpretable and that deep neural networks are not.",
"",
"Advances in artificial intelligence, sensors and big data management have far-reaching societal impacts. As these systems augment our everyday lives, it becomes increasingly important for people to understand them and remain in control. We investigate how HCI researchers can help to develop accountable systems by performing a literature analysis of 289 core papers on explanations and explainable systems, as well as 12,412 citing papers. Using topic modeling, co-occurrence and network analysis, we mapped the research space from diverse domains, such as algorithmic accountability, interpretable machine learning, context-awareness, cognitive psychology, and software learnability. We reveal fading and burgeoning trends in explainable systems, and identify domains that are closely connected or mostly isolated. The time is ripe for the HCI community to ensure that the powerful new autonomous systems have intelligible interfaces built-in. From our results, we propose several implications and directions for future research towards this goal."
]
}
|
1901.03729
|
2909596867
|
Automated rationale generation is an approach for real-time explanation generation whereby a computational model learns to translate an autonomous agent's internal state and action data representations into natural language. Training on human explanation data can enable agents to learn to generate human-like explanations for their behavior. In this paper, using the context of an agent that plays Frogger, we describe (a) how to collect a corpus of explanations, (b) how to train a neural rationale generator to produce different styles of rationales, and (c) how people perceive these rationales. We conducted two user studies. The first study establishes the plausibility of each type of generated rationale and situates their user perceptions along the dimensions of confidence, humanlike-ness, adequate justification, and understandability. The second study further explores user preferences between the generated rationales with regard to confidence in the autonomous agent, communicating failure and unexpected behavior. Overall, we find alignment between the intended differences in features of the generated rationales and the perceived differences by users. Moreover, context permitting, participants preferred detailed rationales to form a stable mental model of the agent's behavior.
|
Our work on rationale generation is a model-agnostic explanation system that works by translating the internal state and action representations of an arbitrary reinforcement learning system into natural language. Andreas, Dragan, and Klein @cite_16 describe a technique that translates message-passing policies between two agents into natural language. An alternative approach to translating internal system representations into natural language is to add explanations to a supervised training set such that a model learns to output a classification as well as an explanation @cite_46 . This technique has been applied to generating explanations about procedurally generated game level designs @cite_47 .
|
{
"cite_N": [
"@cite_47",
"@cite_46",
"@cite_16"
],
"mid": [
"2950669295",
"2805897895",
""
],
"abstract": [
"Procedural content generation via Machine Learning (PCGML) is the umbrella term for approaches that generate content for games via machine learning. One of the benefits of PCGML is that, unlike search or grammar-based PCG, it does not require hand authoring of initial content or rules. Instead, PCGML relies on existing content and black box models, which can be difficult to tune or tweak without expert knowledge. This is especially problematic when a human designer needs to understand how to manipulate their data or models to achieve desired results. We present an approach to Explainable PCGML via Design Patterns in which the design patterns act as a vocabulary and mode of interaction between user and model. We demonstrate that our technique outperforms non-explainable versions of our system in interactions with five expert designers, four of whom lack any machine learning expertise.",
"The adoption of machine learning in high-stakes applications such as healthcare and law has lagged in part because predictions are not accompanied by explanations comprehensible to the domain user, who often holds the ultimate responsibility for decisions and outcomes. In this paper, we propose an approach to generate such explanations in which training data is augmented to include, in addition to features and labels, explanations elicited from domain users. A joint model is then learned to produce both labels and explanations from the input features. This simple idea ensures that explanations are tailored to the complexity expectations and domain knowledge of the consumer. Evaluation spans multiple modeling techniques on a game dataset, a (visual) aesthetics dataset, a chemical odor dataset and a Melanoma dataset showing that our approach is generalizable across domains and algorithms. Results demonstrate that meaningful explanations can be reliably taught to machine learning algorithms, and in some cases, also improve modeling accuracy.",
""
]
}
|
1901.03729
|
2909596867
|
Automated rationale generation is an approach for real-time explanation generation whereby a computational model learns to translate an autonomous agent's internal state and action data representations into natural language. Training on human explanation data can enable agents to learn to generate human-like explanations for their behavior. In this paper, using the context of an agent that plays Frogger, we describe (a) how to collect a corpus of explanations, (b) how to train a neural rationale generator to produce different styles of rationales, and (c) how people perceive these rationales. We conducted two user studies. The first study establishes the plausibility of each type of generated rationale and situates their user perceptions along the dimensions of confidence, humanlike-ness, adequate justification, and understandability. The second study further explores user preferences between the generated rationales with regard to confidence in the autonomous agent, communicating failure and unexpected behavior. Overall, we find alignment between the intended differences in features of the generated rationales and the perceived differences by users. Moreover, context permitting, participants preferred detailed rationales to form a stable mental model of the agent's behavior.
|
The dearth of established methods combined with the variable conceptions of explanations makes evaluation of XAI systems challenging. @cite_24 use scenario-based survey design @cite_6 and present different types of hypothetical explanations for the same decision to measure perceived levels of justice. Another study evaluates the usefulness and naturalness of explanations generated by a non-neural-network-based agent @cite_2 . @cite_29 use explanations manually generated from content analysis of Facebook's News Feed to study perceptions of algorithmic transparency. One key differentiating factor of our approach is that our evaluation is based on rationales that are actual system outputs (as opposed to hypothetical ones). Moreover, user perceptions of our system's rationales directly influence the design of our rationale generation technique.
|
{
"cite_N": [
"@cite_24",
"@cite_29",
"@cite_6",
"@cite_2"
],
"mid": [
"2786004891",
"2796133875",
"1547579513",
"2122889832"
],
"abstract": [
"Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals limited rights to 'meaningful information about the logic' behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three experimental studies examining people's perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions. Qualitative analysis identified several concerns and heuristics involved in justice perceptions including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles primarily matter to justice perceptions only when subjects are exposed to multiple different styles---under repeated exposure of one style, scenario effects obscure any explanation effects. Our results suggest there may be no 'best' approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.",
"Transparency can empower users to make informed choices about how they use an algorithmic decision-making system and judge its potential consequences. However, transparency is often conceptualized by the outcomes it is intended to bring about, not the specifics of mechanisms to achieve those outcomes. We conducted an online experiment focusing on how different ways of explaining Facebook's News Feed algorithm might affect participants' beliefs and judgments about the News Feed. We found that all explanations caused participants to become more aware of how the system works, and helped them to determine whether the system is biased and if they can control what they see. The explanations were less effective for helping participants evaluate the correctness of the system's output, and form opinions about how sensible and consistent its behavior is. We present implications for the design of transparency mechanisms in algorithmic decision-making systems based on these results.",
"From the Publisher: Difficult to learn and awkward to use, today's information systems often change our activities in ways that we do not need or want. The problem lies in the software development process. In this book John Carroll shows how a pervasive but underused element of design practice, the scenario, can transform information systems design. Traditional textbook approaches manage the complexity of the design process via abstraction, treating design problems as if they were composites of puzzles. Scenario-based design uses concretization. A scenario is a concrete story about use. For example: \"A person turned on a computer; the screen displayed a button labeled Start; the person used the mouse to select the button.\" Scenarios are a vocabulary for coordinating the central tasks of system developmentunderstanding people's needs, envisioning new activities and technologies, designing effective systems and software, and drawing general lessons from systems as they are developed and used. Instead of designing software by listing requirements, functions, and code modules, the designer focuses first on the activities that need to be supported and then allows descriptions of those activities to drive everything else. In addition to a comprehensive discussion of the principles of scenario-based design, the book includes in-depth examples of its application.",
"In this paper we focus on explaining to humans the behavior of autonomous agents, i.e., explainable agents. Explainable agents are useful for many reasons including scenario-based training (e.g. disaster training), tutor and pedagogical systems, agent development and debugging, gaming, and interactive storytelling. As the aim is to generate for humans plausible and insightful explanations, user evaluation of different explanations is essential. In this paper we test the hypothesis that different explanation types are needed to explain different types of actions. We present three different, generically applicable, algorithms that automatically generate different types of explanations for actions of BDI-based agents. Quantitative analysis of a user experiment (n=30), in which users rated the usefulness and naturalness of each explanation type for different agent actions, supports our hypothesis. In addition, we present feedback from the users about how they would explain the actions themselves. Finally, we hypothesize guidelines relevant for the development of explainable BDI agents."
]
}
|
1901.03814
|
2908943032
|
Compared with other semantic segmentation tasks, portrait segmentation requires both higher precision and faster inference speed. However, this problem has not been well studied in previous works. In this paper, we propose a lightweight network architecture, called Boundary-Aware Network (BANet) which selectively extracts detail information in boundary area to make high-quality segmentation output with real-time (>25FPS) speed. In addition, we design a new loss function called refine loss which supervises the network with image level gradient information. Our model is able to produce finer segmentation results which have richer details than annotations.
|
: PFCN+ @cite_8 provides a benchmark for the task of portrait segmentation. It first calculates an average mask from the training dataset, then aligns the average mask to each input image to provide prior information. However, the alignment process requires facial feature points, so this method needs an additional landmark detection model, which makes the whole pipeline slow and redundant. BSN @cite_3 proposes a boundary-sensitive kernel that treats the semantic boundary as a third class, and it applies many training tricks such as multi-scale training and multi-task learning. However, its results still lack fine details and its inference speed is very slow.
|
{
"cite_N": [
"@cite_3",
"@cite_8"
],
"mid": [
"2776001714",
"2400000673"
],
"abstract": [
"Compared to the general semantic segmentation problem, portrait segmentation has higher precision requirement on boundary area. However, this problem has not been well studied in previous works. In this paper, we propose a boundary-sensitive deep neural network (BSN) for portrait segmentation. BSN introduces three novel techniques. First, an individual boundary-sensitive kernel is proposed by dilating the contour line and assigning the boundary pixels with multi-class labels. Second, a global boundary-sensitive kernel is employed as a position sensitive prior to further constrain the overall shape of the segmentation map. Third, we train a boundary-sensitive attribute classifier jointly with the segmentation network to reinforce the network with semantic boundary shape information. We have evaluated BSN on the current largest public portrait segmentation dataset, the PFCN dataset, as well as the portrait images collected from other three popular image segmentation datasets: COCO, COCO-Stuff, and PASCAL VOC. Our method achieves the superior quantitative and qualitative performance over state-of-the-arts on all the datasets, especially on the boundary area.",
"Portraiture is a major art form in both photography and painting. In most instances, artists seek to make the subject stand out from its surrounding, for instance, by making it brighter or sharper. In the digital world, similar effects can be achieved by processing a portrait image with photographic or painterly filters that adapt to the semantics of the image. While many successful user-guided methods exist to delineate the subject, fully automatic techniques are lacking and yield unsatisfactory results. Our paper first addresses this problem by introducing a new automatic segmentation algorithm dedicated to portraits. We then build upon this result and describe several portrait filters that exploit our automatic segmentation algorithm to generate high-quality portraits."
]
}
|
1901.03814
|
2908943032
|
Compared with other semantic segmentation tasks, portrait segmentation requires both higher precision and faster inference speed. However, this problem has not been well studied in previous works. In this paper, we propose a lightweight network architecture, called Boundary-Aware Network (BANet) which selectively extracts detail information in boundary area to make high-quality segmentation output with real-time (>25FPS) speed. In addition, we design a new loss function called refine loss which supervises the network with image level gradient information. Our model is able to produce finer segmentation results which have richer details than annotations.
|
: Image matting has long been applied to portrait image processing. Traditional matting algorithms @cite_13 @cite_10 @cite_14 @cite_21 require a user-defined trimap, which limits their application in automatic image processing. Some works @cite_11 @cite_27 have proposed to generate the trimap with deep learning models, but they still treat trimap generation and detail refinement as two separate stages. In matting tasks, the trimap helps locate regions of interest for the alpha matte. Inspired by this function of the trimap, we propose a boundary attention mechanism to help our BANet focus on boundary areas. Different from previous matting models, we use a two-branch architecture: our attention map is generated by the high-level semantic branch and is used to guide the mining of low-level features. DIM @cite_1 designs a compositional loss which has proved effective in many matting tasks, but preparing a large amount of foreground and background images with high-quality alpha mattes is time-consuming. @cite_17 presents a gradient-consistency loss which can correct the gradient direction on predicted edges, but it cannot extract richer details. Motivated by this, we propose a refine loss that uses the image gradient to obtain richer detail features in boundary areas.
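The refine loss described above supervises the network with image-level gradient information near the boundary. The following is a hedged sketch of such a gradient-based loss; the central-difference gradient, L1 penalty, and boundary weighting are illustrative assumptions, not BANet's exact formulation:

```python
import numpy as np

def grad_mag(img):
    # Central-difference spatial gradient magnitude of a 2D array.
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    return np.hypot(gx, gy)

def refine_loss(pred_mask, gray_image, boundary_mask):
    # Penalise the mismatch between the gradients of the predicted mask
    # and of the input image, averaged over the boundary region only.
    diff = np.abs(grad_mag(pred_mask) - grad_mag(gray_image))
    return (diff * boundary_mask).sum() / max(boundary_mask.sum(), 1)
```

A loss of this shape is zero when the predicted mask reproduces the image's edges inside the boundary band, and grows as the prediction blurs them.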
|
{
"cite_N": [
"@cite_13",
"@cite_14",
"@cite_21",
"@cite_1",
"@cite_17",
"@cite_27",
"@cite_10",
"@cite_11"
],
"mid": [
"2103917701",
"",
"",
"2604469346",
"2780636175",
"2951018069",
"",
"2518810941"
],
"abstract": [
"This paper proposes a new Bayesian framework for solving the matting problem, i.e. extracting a foreground element from a background image by estimating an opacity for each pixel of the foreground element. Our approach models both the foreground and background color distributions with spatially-varying sets of Gaussians, and assumes a fractional blending of the foreground and background colors to produce the final output. It then uses a maximum-likelihood criterion to estimate the optimal opacity, foreground and background simultaneously. In addition to providing a principled approach to the matting problem, our algorithm effectively handles objects with intricate boundaries, such as hair strands and fur, and provides an improvement over existing techniques for these difficult cases.",
"",
"",
"Image matting is a fundamental computer vision problem and has many applications. Previous algorithms have poor performance when an image has similar foreground and background colors or complicated textures. The main reasons are prior methods 1) only use low-level features and 2) lack high-level context. In this paper, we propose a novel deep learning based algorithm that can tackle both these problems. Our deep model has two parts. The first part is a deep convolutional encoder-decoder network that takes an image and the corresponding trimap as inputs and predict the alpha matte of the image. The second part is a small convolutional network that refines the alpha matte predictions of the first network to have more accurate alpha values and sharper edges. In addition, we also create a large-scale image matting dataset including 49300 training images and 1000 testing images. We evaluate our algorithm on the image matting benchmark, our testing set, and a wide variety of real images. Experimental results clearly demonstrate the superiority of our algorithm over previous methods.",
"Augmented reality is an emerging technology in many application domains. Among them is the beauty industry, where live virtual try-on of beauty products is of great importance. In this paper, we address the problem of live hair color augmentation. To achieve this goal, hair needs to be segmented quickly and accurately. We show how a modified MobileNet CNN architecture can be used to segment the hair in real-time. Instead of training this network using large amounts of accurate segmentation data, which is difficult to obtain, we use crowd sourced hair segmentation data. While such data is much simpler to obtain, the segmentations there are noisy and coarse. Despite this, we show how our system can produce accurate and fine-detailed hair mattes, while running at over 30 fps on an iPad Pro tablet.",
"Human matting, high quality extraction of humans from natural images, is crucial for a wide variety of applications. Since the matting problem is severely under-constrained, most previous methods require user interactions to take user designated trimaps or scribbles as constraints. This user-in-the-loop nature makes them difficult to be applied to large scale data or time-sensitive scenarios. In this paper, instead of using explicit user input constraints, we employ implicit semantic constraints learned from data and propose an automatic human matting algorithm (SHM). SHM is the first algorithm that learns to jointly fit both semantic information and high quality details with deep networks. In practice, simultaneously learning both coarse semantics and fine details is challenging. We propose a novel fusion strategy which naturally gives a probabilistic estimation of the alpha matte. We also construct a very large dataset with high quality annotations consisting of 35,513 unique foregrounds to facilitate the learning and evaluation of human matting. Extensive experiments on this dataset and plenty of real images show that SHM achieves comparable results with state-of-the-art interactive matting methods.",
"",
"We propose an automatic image matting method for portrait images. This method does not need user interaction, which was however essential in most previous approaches. In order to accomplish this goal, a new end-to-end convolutional neural network (CNN) based framework is proposed taking the input of a portrait image. It outputs the matte result. Our method considers not only image semantic prediction but also pixel-level image matte optimization. A new portrait image dataset is constructed with our labeled matting ground truth. Our automatic method achieves comparable results with state-of-the-art methods that require specified foreground and background regions or pixels. Many applications are enabled given the automatic nature of our system."
]
}
|
1901.03814
|
2908943032
|
Compared with other semantic segmentation tasks, portrait segmentation requires both higher precision and faster inference speed. However, this problem has not been well studied in previous works. In this paper, we propose a lightweight network architecture, called Boundary-Aware Network (BANet) which selectively extracts detail information in boundary area to make high-quality segmentation output with real-time (>25FPS) speed. In addition, we design a new loss function called refine loss which supervises the network with image level gradient information. Our model is able to produce finer segmentation results which have richer details than annotations.
|
: ENet @cite_28 is the first semantic segmentation network to achieve real-time performance. It adopts the ResNet @cite_7 bottleneck structure and reduces the number of channels for acceleration, but it trades away too much accuracy. ICNet @cite_25 proposes a multi-stream architecture: three streams extract features from images of different resolutions, and these features are then fused by a cascade feature fusion unit. BiSeNet @cite_23 uses a two-stream framework to extract context information and spatial information independently, then combines the features of the two streams with a feature fusion module. Inspired by their idea of separate semantic and spatial branches, we design a two-stream architecture. Different from previous works, our two branches are not completely separated: in our framework, the low-level branch is guided by the high-level branch via a boundary attention map.
|
{
"cite_N": [
"@cite_28",
"@cite_23",
"@cite_25",
"@cite_7"
],
"mid": [
"2419448466",
"2950045474",
"2611259176",
"2949650786"
],
"abstract": [
"The ability to perform pixel-wise semantic segmentation in real-time is of paramount importance in practical mobile applications. Recent deep neural networks aimed at this task have the disadvantage of requiring a large number of floating point operations and have long run-times that hinder their usability. In this paper, we propose a novel deep neural network architecture named ENet (efficient neural network), created specifically for tasks requiring low latency operation. ENet is up to 18x faster, requires 75x less FLOPs, has 79x less parameters, and provides similar or better accuracy to existing models. We have tested it on CamVid, Cityscapes and SUN datasets and report on comparisons with existing state-of-the-art methods, and the trade-offs between accuracy and processing time of a network. We present performance measurements of the proposed architecture on embedded systems and suggest possible software improvements that could make ENet even faster.",
"Semantic segmentation requires both rich spatial information and sizeable receptive field. However, modern approaches usually compromise spatial resolution to achieve real-time inference speed, which leads to poor performance. In this paper, we address this dilemma with a novel Bilateral Segmentation Network (BiSeNet). We first design a Spatial Path with a small stride to preserve the spatial information and generate high-resolution features. Meanwhile, a Context Path with a fast downsampling strategy is employed to obtain sufficient receptive field. On top of the two paths, we introduce a new Feature Fusion Module to combine features efficiently. The proposed architecture makes a right balance between the speed and segmentation performance on Cityscapes, CamVid, and COCO-Stuff datasets. Specifically, for a 2048x1024 input, we achieve 68.4% Mean IOU on the Cityscapes test dataset with speed of 105 FPS on one NVIDIA Titan XP card, which is significantly faster than the existing methods with comparable performance.",
"We focus on the challenging task of real-time semantic segmentation in this paper. It finds many practical applications and yet is with fundamental difficulty of reducing a large portion of computation for pixel-wise label inference. We propose an image cascade network (ICNet) that incorporates multi-resolution branches under proper label guidance to address this challenge. We provide in-depth analysis of our framework and introduce the cascade feature fusion unit to quickly achieve high-quality segmentation. Our system yields real-time inference on a single GPU card with decent quality results evaluated on challenging datasets like Cityscapes, CamVid and COCO-Stuff.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation."
]
}
|
1901.03814
|
2908943032
|
Compared with other semantic segmentation tasks, portrait segmentation requires both higher precision and faster inference speed. However, this problem has not been well studied in previous works. In this paper, we propose a lightweight network architecture, called Boundary-Aware Network (BANet) which selectively extracts detail information in boundary area to make high-quality segmentation output with real-time (>25FPS) speed. In addition, we design a new loss function called refine loss which supervises the network with image level gradient information. Our model is able to produce finer segmentation results which have richer details than annotations.
|
: An attention mechanism lets high-level information guide low-level feature extraction. SENet @cite_16 applies channel attention to image recognition tasks and achieves state-of-the-art results. ExFuse @cite_0 proposes a semantic embedding mechanism that uses high-level features to guide low-level features. DFN @cite_15 learns a global feature as attention to revise the feature extraction process. In PFCN+ @cite_8 , the shape channel can be viewed as a kind of spatial attention: the aligned mean mask forces the model to focus on the portrait area.
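The channel attention used by SENet can be sketched in a few lines. The squeeze (global average pooling), excitation (a two-layer gating MLP with sigmoid), and per-channel reweighting follow the usual squeeze-and-excitation pattern; all weights and shapes below are random illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(x, w1, w2):
    # x: (C, H, W) feature map.
    s = x.mean(axis=(1, 2))                   # squeeze: global average pool -> (C,)
    e = sigmoid(w2 @ np.maximum(w1 @ s, 0))   # excitation: FC -> ReLU -> FC -> sigmoid
    return x * e[:, None, None]               # reweight each channel by its gate

C, H, W, r = 8, 4, 4, 2                       # r is the channel-reduction ratio
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))         # reduction weights (illustrative)
w2 = rng.standard_normal((C, C // r))         # expansion weights (illustrative)
y = se_block(x, w1, w2)
```

The gates lie in (0, 1), so each output channel is a softly scaled copy of the input channel, which is what lets high-level statistics modulate low-level features.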
|
{
"cite_N": [
"@cite_0",
"@cite_15",
"@cite_16",
"@cite_8"
],
"mid": [
"2798265555",
"2952577426",
"",
"2400000673"
],
"abstract": [
"Modern semantic segmentation frameworks usually combine low-level and high-level features from pre-trained backbone convolutional models to boost performance. In this paper, we first point out that a simple fusion of low-level and high-level features could be less effective because of the gap in semantic levels and spatial resolution. We find that introducing semantic information into low-level features and high-resolution details into high-level features is more effective for the later fusion. Based on this observation, we propose a new framework, named ExFuse, to bridge the gap between low-level and high-level features thus significantly improve the segmentation quality by 4.0% in total. Furthermore, we evaluate our approach on the challenging PASCAL VOC 2012 segmentation benchmark and achieve 87.9% mean IoU, which outperforms the previous state-of-the-art results.",
"Most existing methods of semantic segmentation still suffer from two aspects of challenges: intra-class inconsistency and inter-class indistinction. To tackle these two problems, we propose a Discriminative Feature Network (DFN), which contains two sub-networks: Smooth Network and Border Network. Specifically, to handle the intra-class inconsistency problem, we specially design a Smooth Network with Channel Attention Block and global average pooling to select the more discriminative features. Furthermore, we propose a Border Network to make the bilateral features of boundary distinguishable with deep semantic boundary supervision. Based on our proposed DFN, we achieve state-of-the-art performance 86.2% mean IOU on PASCAL VOC 2012 and 80.3% mean IOU on Cityscapes dataset.",
"",
"Portraiture is a major art form in both photography and painting. In most instances, artists seek to make the subject stand out from its surrounding, for instance, by making it brighter or sharper. In the digital world, similar effects can be achieved by processing a portrait image with photographic or painterly filters that adapt to the semantics of the image. While many successful user-guided methods exist to delineate the subject, fully automatic techniques are lacking and yield unsatisfactory results. Our paper first addresses this problem by introducing a new automatic segmentation algorithm dedicated to portraits. We then build upon this result and describe several portrait filters that exploit our automatic segmentation algorithm to generate high-quality portraits."
]
}
|
1901.03613
|
2909793699
|
We prove that every permutation of a Cartesian product of two finite sets can be written as a composition of three permutations, the first of which only modifies the left projection, the second only the right projection, and the third again only the left projection, and three alternations is indeed the optimal number. We show that for two countably infinite sets, the corresponding optimal number of alternations, called the alternation diameter, is four. The notion of alternation diameter can be defined in any category. In the category of finite-dimensional vector spaces, the diameter is also three. For the category of topological spaces, we exhibit a single self-homeomorphism of the plane which is not generated by finitely many alternations of homeomorphisms that only change one coordinate. The results on finite sets and vector spaces were previously known in the context of memoryless computation.
|
The case of finite sets, which started this paper, is inspired by @cite_1 , where it is shown that any permutation of @math , for three sets @math with @math , can be written as a composition of finitely many permutations where alternately only @math or @math is permuted. Our notion of alternation diameter in the category of finite sets is related to this definition, as it also refers to alternately permuting a product on the left and right. The difference is that there is no communication coordinate (making it harder), but we allow the permutation to depend on the value on the right when permuting the value on the left, and vice versa (making it easier).
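The finite-set theorem stated in the abstract (three alternations suffice) can be checked by brute force on tiny sets. This sketch enumerates all "left" moves (permuting only the first coordinate, possibly depending on the second) and "right" moves on a 2x2 product, and collects every left-right-left composition:

```python
from itertools import permutations, product

A, B = (0, 1), (0, 1)
cells = list(product(A, B))

def left_moves():
    # Choose one permutation of A independently for each value of b.
    for ps in product(permutations(A), repeat=len(B)):
        yield {(a, b): (ps[B.index(b)][A.index(a)], b) for (a, b) in cells}

def right_moves():
    # Symmetrically, one permutation of B for each value of a.
    for ps in product(permutations(B), repeat=len(A)):
        yield {(a, b): (a, ps[A.index(a)][B.index(b)]) for (a, b) in cells}

reachable = set()
for f in left_moves():
    for g in right_moves():
        for h in left_moves():
            # Compose: first h (left), then g (right), then f (left).
            reachable.add(tuple(f[g[h[c]]] for c in cells))

all_perms = {tuple(p) for p in permutations(cells)}
```

With |A| = |B| = 2 there are only 4 x 4 x 4 = 64 compositions, yet they already cover all 24 permutations of the 4-element product, matching the claimed diameter of three.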
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"2339438075"
],
"abstract": [
"Abstract We say that a reversible boolean function on n bits has alternation depth d if it can be written as the sequential composition of d reversible boolean functions, each of which acts only on the top n − 1 bits or on the bottom n − 1 bits. Moreover, if the functions on n − 1 bits are even, we speak of even alternation depth. We show that every even reversible boolean function of n ⩾ 4 bits has alternation depth at most 9 and even alternation depth at most 13."
]
}
|
1901.03613
|
2909793699
|
We prove that every permutation of a Cartesian product of two finite sets can be written as a composition of three permutations, the first of which only modifies the left projection, the second only the right projection, and the third again only the left projection, and three alternations is indeed the optimal number. We show that for two countably infinite sets, the corresponding optimal number of alternations, called the alternation diameter, is four. The notion of alternation diameter can be defined in any category. In the category of finite-dimensional vector spaces, the diameter is also three. For the category of topological spaces, we exhibit a single self-homeomorphism of the plane which is not generated by finitely many alternations of homeomorphisms that only change one coordinate. The results on finite sets and vector spaces were previously known in the context of memoryless computation.
|
It turns out that the results about finite sets and vector spaces have been proved before in the context of memoryless computation: [Theorem 3] BuGiTh14 and [Theorem 2] GaRi15 are essentially the same result as Theorem . More related results on permutation groups can be found in @cite_11 . The motivation and framework are ostensibly different, but the case of finite sets is proved in [Theorem 3.1] BuGiTh09 using a version of Hall's theorem.
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2100124210"
],
"abstract": [
"Memoryless computation is a modern technique to compute any function of a set of registers by updating one register at a time while using no memory. Its aim is to emulate how computations are performed in modern cores, since they typically involve updates of single registers. The memoryless computation model can be fully expressed in terms of transformation semigroups, or in the case of bijective functions, permutation groups. In this paper, we consider how efficiently permutations can be computed without memory. We determine the minimum number of basic updates required to compute any permutation, or any even permutation. The small number of required instructions shows that very small instruction sets could be encoded on cores to perform memoryless computation. We then start looking at a possible compromise between the size of the instruction set and the length of the resulting programs. We consider updates only involving a limited number of registers. In particular, we show that binary instructions are not enough to compute all permutations without memory when the alphabet size is even. These results, though expressed as properties of special generating sets of the symmetric or alternating groups, provide guidelines on the implementation of memoryless computation."
]
}
|
1901.03613
|
2909793699
|
We prove that every permutation of a Cartesian product of two finite sets can be written as a composition of three permutations, the first of which only modifies the left projection, the second only the right projection, and the third again only the left projection, and three alternations is indeed the optimal number. We show that for two countably infinite sets, the corresponding optimal number of alternations, called the alternation diameter, is four. The notion of alternation diameter can be defined in any category. In the category of finite-dimensional vector spaces, the diameter is also three. For the category of topological spaces, we exhibit a single self-homeomorphism of the plane which is not generated by finitely many alternations of homeomorphisms that only change one coordinate. The results on finite sets and vector spaces were previously known in the context of memoryless computation.
|
Theorem is also known previously: the number @math of alternations needed for a product of length @math (which can be obtained from Theorem ) can be found in @cite_7 @cite_2 (for a larger class of modules). It turns out that @math is not optimal, at least for finite fields: in [Theorem 2.1] CaFaGa14a it is proved that, over a finite field, the optimal number of alternations for a product of length @math is @math .
|
{
"cite_N": [
"@cite_7",
"@cite_2"
],
"mid": [
"1569126060",
"2101732749"
],
"abstract": [
"We investigate the computation of mappings from a set S^n to itself with in situ programs, that is using no extra variables than the input, and performing modifications of one component at a time. We consider several types of mappings and obtain effective computation and decomposition methods, together with upper bounds on the program length (number of assignments). Our technique is combinatorial and algebraic (graph coloration, partition ordering, modular arithmetics). For general mappings, we build a program with maximal length 5n - 4, or 2n - 1 for bijective mappings. The length is reducible to 4n - 3 when |S| is a power of 2. This is the main combinatorial result of the paper, which can be stated equivalently in terms of multistage interconnection networks as: any mapping of {0,1}^n can be performed by a routing in a double n-dimensional Benes network. Moreover, the maximal length is 2n - 1 for linear mappings when S is any field, or a quotient of an Euclidean domain (e.g. Z/sZ). In this case the assignments are also linear, thereby particularly efficient from the algorithmic viewpoint. The in situ trait of the programs constructed here applies to optimization of program and chip design with respect to the number of variables, since no extra writing memory is used. In a non formal way, our approach is to perform an arbitrary transformation of objects by successive elementary local transformations inside these objects only with respect to their successive states.",
"We investigate the computation of mappings from a set S^n to itself with \"in situ programs\", that is using no extra variables than the input, and performing modifications of one component at a time, hence using no extra memory. In this paper, we survey this problem introduced in previous papers by the authors, we detail its close relation with rearrangeable multicast networks, and we provide new results for both viewpoints. A bijective mapping can be computed by 2n-1 component modifications, that is by a program of length 2n-1, a result equivalent to the rearrangeability of the concatenation of two reversed butterfly networks. For a general arbitrary mapping, we give two methods to build a program with maximal length 4n-3. Equivalently, this yields rearrangeable multicast routing methods for the network formed by four successive butterflies with alternating reversions. The first method is available for any set S and practically equivalent to a known method in network theory. The second method, a refinement of the first, described when |S| is a power of 2, is new and allows more flexibility than the known method. For a linear mapping, when S is any field, or a quotient of an Euclidean domain (e.g. Z/sZ for any integer s), we build a program with maximal length 2n-1. In this case the assignments are also linear, thereby particularly efficient from the algorithmic viewpoint, and giving moreover directly a program for the inverse when it exists. This yields also a new result on matrix decompositions, and a new result on the multicast properties of two successive reversed butterflies. Results of this flavour were known only for the boolean field Z/2Z."
]
}
|
1901.03613
|
2909793699
|
We prove that every permutation of a Cartesian product of two finite sets can be written as a composition of three permutations, the first of which only modifies the left projection, the second only the right projection, and the third again only the left projection, and three alternations is indeed the optimal number. We show that for two countably infinite sets, the corresponding optimal number of alternations, called the alternation diameter, is four. The notion of alternation diameter can be defined in any category. In the category of finite-dimensional vector spaces, the diameter is also three. For the category of topological spaces, we exhibit a single self-homeomorphism of the plane which is not generated by finitely many alternations of homeomorphisms that only change one coordinate. The results on finite sets and vector spaces were previously known in the context of memoryless computation.
|
It should be possible to extract, from the results of @cite_4 , a natural category where alternation diameter is defined but infinite for some objects.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2736495855"
],
"abstract": [
"Code optimization is an important area of research that has remarkable contributions in addressing the challenges of information technology. It has introduced a new trend in hardware as well as in software. Efforts that have been made in this context led to introduce a new foundation, both for compilers and processors. In this report we study different techniques used for sequential decomposition of mappings without using extra variables. We focus on finding and improving these techniques of computations. Especially, we are interested in developing methods and efficient heuristic algorithms to find the decompositions and implementing these methods in particular cases. We want to implement these methods in a compiler with an aim of optimizing code in machine language. It is always possible to calculate an operation related to K registers by a sequence of assignments using only these K registers. We verified the results and introduced new methods. We described In Situ computation of linear mapping by a sequence of linear assignments over the set of integers and investigated bound for the algorithm. We introduced a method for the case of boolean bijective mappings via algebraic operations over polynomials in GF(2). We implemented these methods using Maple"
]
}
|
1901.03706
|
2911119619
|
We focus our attention on the problem of generating adversarial perturbations based on the gradient in the image classification domain
|
In the adversarial perturbation generation process, attack methods seek a perturbation matrix @math that, when added to the original image, fools a deep neural model into misclassifying @math from its correct label @math to another label @math ; we call such images adversarial examples. The work in @cite_20 first introduced the generation of adversarial examples for attacking state-of-the-art deep neural models. @cite_12 presented the gradient-based Fast Gradient Sign Method (FGSM), which updates pixels along the gradient direction in a single step to find the perturbation tensor @math . An iterative multi-step variant of FGSM is easily derived @cite_21 . @cite_16 proposed a targeted attack method, the @math attack, which generates adversarial examples that lower the detection rates of defenses. Finally, @cite_13 seeks a single universal perturbation that fools a deep neural model across many images.
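The single-step FGSM described above can be sketched in a few lines. This is a minimal illustration, assuming PyTorch; the function name and epsilon value are our own choices, not from the cited papers.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Single-step FGSM sketch: move each pixel by epsilon in the
    direction (sign of the loss gradient) that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    # Keep the adversarial image a valid image in [0, 1].
    return x_adv.clamp(0.0, 1.0).detach()
```

The iterative variant simply repeats this update with a smaller step size, clipping the accumulated perturbation to the epsilon ball after each step.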
|
{
"cite_N": [
"@cite_21",
"@cite_16",
"@cite_20",
"@cite_13",
"@cite_12"
],
"mid": [
"2460937040",
"2963857521",
"2964153729",
"2543927648",
"2963855547"
],
"abstract": [
"Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.",
"Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95 to 0.5 .In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100 probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.",
"Abstract: Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.",
"Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability. We propose a systematic algorithm for computing universal perturbations, and show that state-of-the-art deep neural networks are highly vulnerable to such perturbations, albeit being quasi-imperceptible to the human eye. We further empirically analyze these universal perturbations and show, in particular, that they generalize very well across neural networks. The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundary of classifiers. It further outlines potential security breaches with the existence of single directions in the input space that adversaries can possibly exploit to break a classifier on most natural images.",
"In this paper, we propose novel generative models for creating adversarial examples, slightly perturbed images resembling natural images but maliciously crafted to fool pre-trained models. We present trainable deep neural networks for transforming images to adversarial perturbations. Our proposed models can produce image-agnostic and image-dependent perturbations for targeted and non-targeted attacks. We also demonstrate that similar architectures can achieve impressive results in fooling both classification and semantic segmentation models, obviating the need for hand-crafting attack methods for each task. Using extensive experiments on challenging high-resolution datasets such as ImageNet and Cityscapes, we show that our perturbations achieve high fooling rates with small perturbation norms. Moreover, our attacks are considerably faster than current iterative methods at inference time."
]
}
|
1901.03706
|
2911119619
|
We focus our attention on the problem of generating adversarial perturbations based on the gradient in the image classification domain
|
Improving the robustness of deep neural models is a broad task. @cite_10 introduces Ensemble Adversarial Training, which augments training data with adversarial examples produced by pre-trained hold-out models and yields strong defenses against black-box and single-step attacks. @cite_0 presents the distillation defense method, which uses distillation to reduce the effectiveness of adversarial samples on DNNs and lower adversarial attack success rates. @cite_19 illustrates the Feature Squeezing method, which reduces the search space available to an adversary by coalescing samples that correspond to different feature vectors in the original space into a single sample.
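One common feature squeezer is color bit-depth reduction, which coalesces nearby pixel values onto a coarse grid. A minimal sketch, assuming pixels normalized to [0, 1]; the function name is illustrative, not from the cited work:

```python
import numpy as np

def squeeze_bit_depth(x, bits=4):
    """Map pixels in [0, 1] onto 2**bits discrete levels, so inputs
    that differ only by small perturbations collapse to one sample."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels
```

A detector can then compare the model's prediction on the original and squeezed inputs; a large disagreement suggests an adversarial example.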
|
{
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_10"
],
"mid": [
"2572504188",
"",
"2963744840"
],
"abstract": [
"Advances in machine learning (ML) in recent years have enabled a dizzying array of applications such as data analytics, autonomous systems, and security diagnostics. ML is now pervasive---new systems and models are being deployed in every domain imaginable, leading to rapid and widespread deployment of software based inference and decision making. There is growing recognition that ML exposes new vulnerabilities in software systems, yet the technical community's understanding of the nature and extent of these vulnerabilities remains limited. We systematize recent findings on ML security and privacy, focusing on attacks identified on these systems and defenses crafted to date. We articulate a comprehensive threat model for ML, and categorize attacks and defenses within an adversarial framework. Key insights resulting from works both in the ML and security communities are identified and the effectiveness of approaches are related to structural elements of ML algorithms and the data used to train them. We conclude by formally exploring the opposing relationship between model accuracy and resilience to adversarial manipulation. Through these explorations, we show that there are (possibly unavoidable) tensions between model complexity, accuracy, and resilience that must be calibrated for the environments in which they will be used.",
"",
"Adversarial examples are perturbed inputs designed to fool machine learning models. Adversarial training injects such examples into training data to increase robustness. To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model's loss. We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss. The model thus learns to generate weak perturbations, rather than defend against strong ones. As a result, we find that adversarial training remains vulnerable to black-box attacks, where we transfer perturbations computed on undefended models, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step. We further introduce Ensemble Adversarial Training, a technique that augments training data with perturbations transferred from other models. On ImageNet, Ensemble Adversarial Training yields models with strong robustness to black-box attacks. In particular, our most robust model won the first round of the NIPS 2017 competition on Defenses against Adversarial Attacks."
]
}
|
1901.03568
|
2963462250
|
The specification and enforcement of network-wide policies in a single administrative domain is common in today's networks and considered as already resolved. However, this is not the case for multi-administrative domains, e.g. among different enterprises. In such situations, new problems arise that challenge classical solutions such as PKIs, which suffer from scalability and granularity concerns. In this paper, we present an extension to Group-Based Policy, a widely used network policy language, for the aforementioned scenario. To do so, we take advantage of a permissioned blockchain implementation (Hyperledger Fabric) to distribute access control policies in a secure and auditable manner, preserving at the same time the independence of each organization. Network administrators specify policies that are rendered into blockchain transactions. A LISP control plane (RFC 6830) allows routers performing the access control to query the blockchain for authorizations. We have implemented an end-to-end experimental prototype and evaluated it in terms of scalability and network latency.
|
There are already several proposals in the literature @cite_7 that leverage blockchains for a wide range of network applications, such as mesh networks @cite_15 , IP address allocation @cite_1 , etc.
|
{
"cite_N": [
"@cite_15",
"@cite_1",
"@cite_7"
],
"mid": [
"2964041930",
"2799817285",
"2592590941"
],
"abstract": [
"Recently, mesh networking and blockchain are two of the hottest technologies in the telecommunications industry. Combining both can reformulate Internet access. While mesh networking makes connecting to the Internet easy and affordable, blockchain on top of mesh networks makes Internet access profitable by enabling bandwidth-sharing for crypto-tokens. Hyperledger Fabric (HLF) is a blockchain framework implementation and one of the Hyperledger projects hosted by The Linux Foundation. We evaluate HLF in a real production mesh network and in the laboratory. We quantify the performance, bottlenecks and limitations of the current implementation v1.0. We identify the opportunities for improvement to serve the needs of wireless mesh access networks. To the best of our knowledge, this is the first HLF deployment made in a production wireless mesh network.",
"We present IPchain, a blockchain to store the allocations and delegations of IP addresses, with the aim of easing the deployment of secure interdomain routing systems. Interdomain routing security is of vital importance to the Internet since it prevents unwanted traffic redirections. IPchain makes use of blockchains' properties to provide flexible trust models and simplified management when compared to existing systems. In this paper we argue that Proof of Stake is a suitable consensus algorithm for IPchain due to the unique incentive structure of this use-case. We have implemented and evaluated IPchain's performance and scalability storing around 150k IP prefixes in a 1GB chain.",
"Recent interest about the blockchain technology brought questions about its application to other systems than the cryptocurrency one. In this paper we present blockchain and discuss key applications to network systems in the literature."
]
}
|
1901.03568
|
2963462250
|
The specification and enforcement of network-wide policies in a single administrative domain is common in today's networks and considered as already resolved. However, this is not the case for multi-administrative domains, e.g. among different enterprises. In such situations, new problems arise that challenge classical solutions such as PKIs, which suffer from scalability and granularity concerns. In this paper, we present an extension to Group-Based Policy, a widely used network policy language, for the aforementioned scenario. To do so, we take advantage of a permissioned blockchain implementation (Hyperledger Fabric) to distribute access control policies in a secure and auditable manner, preserving at the same time the independence of each organization. Network administrators specify policies that are rendered into blockchain transactions. A LISP control plane (RFC 6830) allows routers performing the access control to query the blockchain for authorizations. We have implemented an end-to-end experimental prototype and evaluated it in terms of scalability and network latency.
|
Finally, there is also a growing body of work on blockchain-based access control for IoT: @cite_12 leverages a blockchain to store access permissions for IoT devices, with a strong emphasis on key management and distribution. @cite_11 also provides authentication, authorization and auditing for IoT, but separates them into four independent blockchains, and is generic enough to support a wide range of access control models typical of IoT, whereas in this paper we concentrate on a specific language, GBP.
|
{
"cite_N": [
"@cite_12",
"@cite_11"
],
"mid": [
"2788822833",
"2783845577"
],
"abstract": [
"In this paper, we propose IoTChain, a combination of the OSCAR architecture [1] and the ACE authorization framework [2] to provide an E2E solution for the secure authorized access to IoT resources. IoTChain consists of two components, an authorization blockchain based on the ACE framework and the OSCAR object security model, extended with a group key scheme. The blockchain provides a flexible and trustless way to handle authorization while OSCAR uses the public ledger to set up multicast groups for authorized clients. To evaluate the feasibility of our architecture, we have implemented the authorization blockchain on top of a private Ethereum network. We report on several experiments that assess the performance of different architecture components.",
"The IoT is pervading our daily activities and lives with devices scattered all over our cities, transport systems, buildings, homes and bodies. This invasion of devices with sensors and communication capabilities brings big concerns, mainly about the privacy and confidentiality of the collected information. These concerns hinder the wide adoption of the IoT. To overcome them, in this work, we present an Blockchain-based architecture for IoT access authorizations. Following the IoT tendency requirements, our architecture is user transparent, user friendly, fully decentralized, scalable, fault tolerant and compatible with a wide range of today's access control models used in the IoT. Finally, our architecture also has a secure way to establish relationships between users, devices and group of both, allowing the assignment of attributes for these relationships and their use in the access control authorization."
]
}
|
1901.03756
|
2910882903
|
In person attributes recognition, we describe a person in terms of their appearance. Typically, this includes a wide range of traits including age, gender, clothing, and footwear. Although this could be used in a wide variety of scenarios, it generally is applied to video surveillance, where attribute recognition is impacted by low resolution, and other issues such as variable pose, occlusion and shadow. Recent approaches have used deep convolutional neural networks (CNNs) to improve the accuracy in person attribute recognition. However, many of these networks are relatively shallow and it is unclear to what extent they use contextual cues to improve classification accuracy. In this paper, we propose deeper methods for person attribute recognition. Interpreting the reasons behind the classification is highly important, as it can provide insight into how the classifier is making decisions. Interpretation suggests that deeper networks generally take more contextual information into consideration, which helps improve classification accuracy and generalizability. We present experimental analysis and results for whole body attributes using the PA-100K and PETA datasets and facial attributes using the CelebA dataset.
|
Later work @cite_15 @cite_3 proposed using CNNs which showed that end-to-end learning (i.e., learning both feature and classification using stochastic gradient descent) could mitigate some of the limitations associated with support vector machines and hand-crafted features. This has three primary benefits. First, features are extracted using convolutional filters learned directly from the training data. This eliminates the need to hand-craft features for each dataset and attribute. Second, the feature extractors and the classifier parameters are optimized together in an end-to-end fashion. The extracted features are optimized for the particular attribute automatically. Finally, multi-label CNNs @cite_3 can significantly outperform SVMs because of their capacity to learn relationships among attributes.
|
{
"cite_N": [
"@cite_15",
"@cite_3"
],
"mid": [
"2147414309",
"1522973599"
],
"abstract": [
"We propose a method for inferring human attributes (such as gender, hair style, clothes style, expression, action) from images of people under large variation of viewpoint, pose, appearance, articulation and occlusion. Convolutional Neural Nets (CNN) have been shown to perform very well on large scale object recognition problems. In the context of attribute classification, however, the signal is often subtle and it may cover only a small part of the image, while the image is dominated by the effects of pose and viewpoint. Discounting for pose variation would require training on very large labeled datasets which are not presently available. Part-based models, such as poselets [4] and DPM [12] have been shown to perform well for this problem but they are limited by shallow low-level features. We propose a new method which combines part-based models and deep learning by training pose-normalized CNNs. We show substantial improvement vs. state-of-the-art methods on challenging attribute classification tasks in unconstrained settings. Experiments confirm that our method outperforms both the best part-based methods on this problem and conventional CNNs trained on the full bounding box of the person.",
"Recently, pedestrian attributes like gender, age and clothing etc., have been used as soft biometric traits for recognizing people. Unlike existing methods that assume the independence of attributes during their prediction, we propose a multi-label convolutional neural network (MLCNN) to predict multiple attributes together in a unified framework. Firstly, a pedestrian image is roughly divided into multiple overlapping body parts, which are simultaneously integrated in the multi-label convolutional neural network. Secondly, these parts are filtered independently and aggregated in the cost layer. The cost function is a combination of multiple binary attribute classification cost functions. Moreover, we propose an attribute assisted person re-identification method, which fuses attribute distances and low-level feature distances between pairs of person images to improve person re-identification performance. Extensive experiments show: 1) the average attribute classification accuracy of the proposed method is 5.2 and 9.3 higher than the SVM-based method on three public databases, VIPeR and GRID, respectively; 2) the proposed attribute assisted person re-identification method is superior to existing approaches."
]
}
|
1901.03756
|
2910882903
|
In person attributes recognition, we describe a person in terms of their appearance. Typically, this includes a wide range of traits including age, gender, clothing, and footwear. Although this could be used in a wide variety of scenarios, it generally is applied to video surveillance, where attribute recognition is impacted by low resolution, and other issues such as variable pose, occlusion and shadow. Recent approaches have used deep convolutional neural networks (CNNs) to improve the accuracy in person attribute recognition. However, many of these networks are relatively shallow and it is unclear to what extent they use contextual cues to improve classification accuracy. In this paper, we propose deeper methods for person attribute recognition. Interpreting the reasons behind the classification is highly important, as it can provide insight into how the classifier is making decisions. Interpretation suggests that deeper networks generally take more contextual information into consideration, which helps improve classification accuracy and generalizability. We present experimental analysis and results for whole body attributes using the PA-100K and PETA datasets and facial attributes using the CelebA dataset.
|
Conventional stacked CNNs have limited generalization ability when trained on smaller datasets. Because pedestrian attribute recognition datasets are limited in size, they are correspondingly harder for CNNs to learn. The multi-label CNN (ML-CNN) in @cite_3 , for instance, had to be shallow, with only 3 convolutional layers, to cope with the smaller sample sizes of these datasets. Existing CNN-based approaches are either shallow networks, limiting the learning of complex features, or deeper networks pre-trained on larger datasets such as ImageNet @cite_14 . In the latter case, representative features are not learned from the training data, as is the goal of this work. Examples of pre-trained deeper networks include @cite_12 @cite_11 @cite_17 @cite_15 .
|
{
"cite_N": [
"@cite_14",
"@cite_11",
"@cite_3",
"@cite_15",
"@cite_12",
"@cite_17"
],
"mid": [
"2410968923",
"2286727787",
"1522973599",
"2147414309",
"2103394661",
"262462045"
],
"abstract": [
"In real video surveillance scenarios, visual pedestrian attributes, such as gender, backpack, clothes types, are very important for pedestrian retrieval and person reidentification. Existing methods for attributes recognition have two drawbacks: (a) handcrafted features (e.g. color histograms, local binary patterns) cannot cope well with the difficulty of real video surveillance scenarios; (b) the relationship among pedestrian attributes is ignored. To address the two drawbacks, we propose two deep learning based models to recognize pedestrian attributes. On the one hand, each attribute is treated as an independent component and the deep learning based single attribute recognition model (DeepSAR) is proposed to recognize each attribute one by one. On the other hand, to exploit the relationship among attributes, the deep learning framework which recognizes multiple attributes jointly (DeepMAR) is proposed. In the DeepMAR, one attribute can contribute to the representation of other attributes. For example, the gender of woman can contribute to the representation oflong hair and wearing skirt. Experiments on recent popular pedestrian attribute datasets illustrate that our proposed models achieve the state-of-the-art results.",
"This paper addresses the problem of human visual attribute recognition, i.e., the prediction of a fixed set of semantic attributes given an image of a person. Previous work often considered the different attributes independently from each other, without taking advantage of possible dependencies between them. In contrast, we propose a method to jointly train a CNN model for all attributes that can take advantage of those dependencies, considering as input only the image without additional external pose, part or context information. We report detailed experiments examining the contribution of individual aspects, which yields beneficial insights for other researchers. Our holistic CNN achieves superior performance on two publicly available attribute datasets improving on methods that additionally rely on pose-alignment or context. To support further evaluations, we present a novel dataset, based on realistic outdoor video sequences, that contains more than 27,000 pedestrians annotated with 10 attributes. Finally, we explore design options to embrace the N A labels inherently present in this task.",
"Recently, pedestrian attributes like gender, age and clothing etc., have been used as soft biometric traits for recognizing people. Unlike existing methods that assume the independence of attributes during their prediction, we propose a multi-label convolutional neural network (MLCNN) to predict multiple attributes together in a unified framework. Firstly, a pedestrian image is roughly divided into multiple overlapping body parts, which are simultaneously integrated in the multi-label convolutional neural network. Secondly, these parts are filtered independently and aggregated in the cost layer. The cost function is a combination of multiple binary attribute classification cost functions. Moreover, we propose an attribute assisted person re-identification method, which fuses attribute distances and low-level feature distances between pairs of person images to improve person re-identification performance. Extensive experiments show: 1) the average attribute classification accuracy of the proposed method is 5.2 and 9.3 higher than the SVM-based method on three public databases, VIPeR and GRID, respectively; 2) the proposed attribute assisted person re-identification method is superior to existing approaches.",
"We propose a method for inferring human attributes (such as gender, hair style, clothes style, expression, action) from images of people under large variation of viewpoint, pose, appearance, articulation and occlusion. Convolutional Neural Nets (CNN) have been shown to perform very well on large scale object recognition problems. In the context of attribute classification, however, the signal is often subtle and it may cover only a small part of the image, while the image is dominated by the effects of pose and viewpoint. Discounting for pose variation would require training on very large labeled datasets which are not presently available. Part-based models, such as poselets [4] and DPM [12] have been shown to perform well for this problem but they are limited by shallow low-level features. We propose a new method which combines part-based models and deep learning by training pose-normalized CNNs. We show substantial improvement vs. state-of-the-art methods on challenging attribute classification tasks in unconstrained settings. Experiments confirm that our method outperforms both the best part-based methods on this problem and conventional CNNs trained on the full bounding box of the person.",
"This paper addresses the problem of image features selection for pedestrian gender recognition. Hand-crafted features (such as HOG) are compared with learned features which are obtained by training convolutional neural networks. The comparison is performed on the recently created collection of versatile pedestrian datasets which allows us to evaluate the impact of dataset properties on the performance of features. The study shows that hand-crafted and learned features perform equally well on small-sized homogeneous datasets. However, learned features significantly outperform hand-crafted ones in the case of heterogeneous and unfamiliar (unseen) datasets. Our best model which is based on learned features obtains 79 average recognition rate on completely unseen datasets. We also show that a relatively small convolutional neural network is able to produce competitive features even with little training data.",
"Learning to recognize pedestrian attributes at far distance is a challenging problem in visual surveillance since face and body close-shots are hardly available; instead, only far-view image frames of pedestrian are given. In this study, we present an alternative approach that exploits the context of neighboring pedestrian images for improved attribute inference compared to the conventional SVM-based method. In addition, we conduct extensive experiments to evaluate the informativeness of background and foreground features for attribute recognition. Experiments are based on our newly released pedestrian attribute dataset, which is by far the largest and most diverse of its kind."
]
}
|
1901.03756
|
2910882903
|
In person attributes recognition, we describe a person in terms of their appearance. Typically, this includes a wide range of traits including age, gender, clothing, and footwear. Although this could be used in a wide variety of scenarios, it generally is applied to video surveillance, where attribute recognition is impacted by low resolution, and other issues such as variable pose, occlusion and shadow. Recent approaches have used deep convolutional neural networks (CNNs) to improve the accuracy in person attribute recognition. However, many of these networks are relatively shallow and it is unclear to what extent they use contextual cues to improve classification accuracy. In this paper, we propose deeper methods for person attribute recognition. Interpreting the reasons behind the classification is highly important, as it can provide insight into how the classifier is making decisions. Interpretation suggests that deeper networks generally take more contextual information into consideration, which helps improve classification accuracy and generalizability. We present experimental analysis and results for whole body attributes using the PA-100K and PETA datasets and facial attributes using the CelebA dataset.
|
CNNs with branched connections in GoogLeNet @cite_4 and residual mapping using "shortcut" connections were proposed to tackle the accuracy degradation problem with deep CNNs @cite_21 @cite_8 . Residual networks are composed of residual units (or blocks) that have a double convolution residual leg and a direct input-to-output identity mapping, or "shortcut" connection @cite_13 . Stacking residual blocks of up to 1000 layers, residual networks were shown to achieve state-of-the-art results on the ImageNet large-scale visual recognition challenge in 2015 @cite_21 . They have also been shown to accelerate the learning process and converge more quickly than their "plain" stacked CNN counterparts @cite_23 . Recent work exploited joint prediction of weakly-supervised attribute locations @cite_19 and view @cite_16 together with attribute recognition using a pre-trained GoogLeNet and showed state-of-the-art performance on PETA. The increased performance of these methods was attributed to the joint attribute location and view prediction as well as complicated branching schemes. In contrast, our approach outperforms these methods using only deeper residual networks trained solely for attribute recognition.
|
{
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_21",
"@cite_19",
"@cite_23",
"@cite_16",
"@cite_13"
],
"mid": [
"2950179405",
"2949427019",
"2949650786",
"2950413999",
"2274287116",
"2739088263",
"2724323253"
],
"abstract": [
"We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"Deep residual networks have emerged as a family of extremely deep architectures showing compelling accuracy and nice convergence behaviors. In this paper, we analyze the propagation formulations behind the residual building blocks, which suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation. A series of ablation experiments support the importance of these identity mappings. This motivates us to propose a new residual unit, which makes training easier and improves generalization. We report improved results using a 1001-layer ResNet on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet. Code is available at: this https URL",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"State-of-the-art methods treat pedestrian attribute recognition as a multi-label image classification problem. The location information of person attributes is usually eliminated or simply encoded in the rigid splitting of whole body in previous work. In this paper, we formulate the task in a weakly-supervised attribute localization framework. Based on GoogLeNet, firstly, a set of mid-level attribute features are discovered by novelly designed detection layers, where a max-pooling based weakly-supervised object detection technique is used to train these layers with only image-level labels without the need of bounding box annotations of pedestrian attributes. Secondly, attribute labels are predicted by regression of the detection response magnitudes. Finally, the locations and rough shapes of pedestrian attributes can be inferred by performing clustering on a fusion of activation maps of the detection layers, where the fusion weights are estimated as the correlation strengths between each attribute and its relevant mid-level features. Extensive experiments are performed on the two currently largest pedestrian attribute datasets, i.e. the PETA dataset and the RAP dataset. Results show that the proposed method has achieved competitive performance on attribute recognition, compared to other state-of-the-art methods. Moreover, the results of attribute localization are visualized to understand the characteristics of the proposed method.",
"Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there are any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the ImageNet classification (CLS) challenge",
"Pedestrian attribute inference is a demanding problem in visual surveillance that can facilitate person retrieval, search and indexing. To exploit semantic relations between attributes, recent research treats it as a multi-label image classification task. The visual cues hinting at attributes can be strongly localized and inference of person attributes such as hair, backpack, shorts, etc., are highly dependent on the acquired view of the pedestrian. In this paper we assert this dependence in an end-to-end learning framework and show that a view-sensitive attribute inference is able to learn better attribute predictions. Our proposed model jointly predicts the coarse pose (view) of the pedestrian and learns specialized view-specific multi-label attribute predictions. We show in an extensive evaluation on three challenging datasets (PETA, RAP and WIDER) that our proposed end-to-end view-aware attribute prediction model provides competitive performance and improves on the published state-of-the-art on these datasets.",
"In surveillance images, soft biometric attributes have been demonstrated to be quite effective in the problem of person re-identification. Many of these attributes can vary greatly, which has motivated a number of hand crafted features designed to recognize individual attributes. Although deep learning is generally useful in learning features appropriate for classification, it usually requires more data to train than what is available in most person re-identification databases. In this paper, we propose a residual network (MAResNet) deeper than current pedestrian attribute recognition, which we use to recognize multiple attributes simultaneously. The proposed network is both efficient and accurate. We recognize attributes at a rate of 271 FPS while simultaneously outperforming state of the art in attribute recognition on the PETA dataset."
]
}
|
1901.03535
|
2911222431
|
We propose PiNcH, a methodology to detect the presence of a drone and its current status leveraging just the communication traffic exchanged between the drone and its Remote Controller (RC). PiNcH is built applying standard classification algorithms to the eavesdropped traffic, analyzing features such as packets inter-arrival time and size. PiNcH does not require either any special hardware or to transmit any signal. Indeed, it is fully passive and it resorts to cheap and general purpose hardware. To evaluate the effectiveness of our solution, we collected real communication measurements from the 3DR SOLO drone, being the most popular open-source hardware, running the widespread ArduCopter open-source firmware, mounted on-board on a wide range of commercial amateur drones. Then, we test our solution against different publicly available wireless traces. The results prove that PiNcH can efficiently and effectively: (i) identify the presence of the drone in several heterogeneous scenarios; (ii) identify the current state of a powered-on drone, i.e., flying or lying on the ground; (iii) discriminate the movement of the drone; and, finally, (iv) estimate a lower bound on the time required to identify a drone with the requested level of assurance. The quality and viability of our solution do prove that network traffic analysis can be successfully adopted for drone identification and pave the way for future research in the area.
|
Authors in @cite_18 built a proof-of-concept system for counter-surveillance against spy drones, determining whether a certain person or object is under aerial surveillance. They show methods that leverage physical stimuli to detect whether the drone's camera is directed towards a target in real time. They demonstrate how an interceptor can perform a side-channel attack to detect whether a target is being streamed by analyzing the encrypted FPV channel transmitted from a real drone (DJI Mavic) in two use cases: when the target is a private house and when the target is a subject. Although a significant step towards drone identification, this solution is specifically designed to identify drones that target a specific asset, and it is not suitable for drone detection at large or for drones that do not feature FPV.
|
{
"cite_N": [
"@cite_18"
],
"mid": [
"2783045275"
],
"abstract": [
"Drones have created a new threat to people's privacy. We are now in an era in which anyone with a drone equipped with a video camera can use it to invade a subject's privacy by streaming the subject in his/her private space over an encrypted first person view (FPV) channel. Although many methods have been suggested to detect nearby drones, they all suffer from the same shortcoming: they cannot identify exactly what is being captured, and therefore they fail to distinguish between the legitimate use of a drone (for example, to use a drone to film a selfie from the air) and illegitimate use that invades someone's privacy (when the same operator uses the drone to stream the view into the window of his neighbor's apartment), a distinction that in some cases depends on the orientation of the drone's video camera rather than on the drone's location. In this paper we shatter the commonly held belief that the use of encryption to secure an FPV channel prevents an interceptor from extracting the POI that is being streamed. We show methods that leverage physical stimuli to detect whether the drone's camera is directed towards a target in real time. We investigate the influence of changing pixels on the FPV channel (in a lab setup). Based on our observations we demonstrate how an interceptor can perform a side-channel attack to detect whether a target is being streamed by analyzing the encrypted FPV channel that is transmitted from a real drone (DJI Mavic) in two use cases: when the target is a private house and when the target is a subject."
]
}
|
1901.03535
|
2911222431
|
We propose PiNcH, a methodology to detect the presence of a drone and its current status leveraging just the communication traffic exchanged between the drone and its Remote Controller (RC). PiNcH is built applying standard classification algorithms to the eavesdropped traffic, analyzing features such as packets inter-arrival time and size. PiNcH does not require either any special hardware or to transmit any signal. Indeed, it is fully passive and it resorts to cheap and general purpose hardware. To evaluate the effectiveness of our solution, we collected real communication measurements from the 3DR SOLO drone, being the most popular open-source hardware, running the widespread ArduCopter open-source firmware, mounted on-board on a wide range of commercial amateur drones. Then, we test our solution against different publicly available wireless traces. The results prove that PiNcH can efficiently and effectively: (i) identify the presence of the drone in several heterogeneous scenarios; (ii) identify the current state of a powered-on drone, i.e., flying or lying on the ground; (iii) discriminate the movement of the drone; and, finally, (iv) estimate a lower bound on the time required to identify a drone with the requested level of assurance. The quality and viability of our solution do prove that network traffic analysis can be successfully adopted for drone identification and pave the way for future research in the area.
|
Authors in @cite_32 show that the radio control signal sent to a UAV using a typical transmitter can be captured and analyzed to identify the controlling pilot using machine learning techniques. Authors collected messages exchanged between the drone and the remote controller and used them to train multiple classifiers. They observed that the best performance is reached by a random forest classifier, achieving an accuracy of around 90%. While exploiting the same principle, i.e., classification of the traffic, that work focuses on identifying the pilot and not the drone.
|
{
"cite_N": [
"@cite_32"
],
"mid": [
"2789453091"
],
"abstract": [
"Analysis of interactions with remotely controlled devices has been used to detect the onset of hijacking attacks, as well as for forensics analysis, e.g., to identify the human controller. Its effectiveness is known to depend on the remote device type as well as on the properties of the remote control signal. This paper shows that the radio control signal sent to an unmanned aerial vehicle (UAV) using a typical transmitter can be captured and analyzed to identify the controlling pilot using machine learning techniques. Twenty trained pilots have been asked to fly a high-end research drone through three different trajectories. Control data have been collected and used to train multiple classifiers. Best performance has been achieved by a random forest classifier that achieved accuracy around 90% using simple time-domain features. Extensive tests have shown that the classification accuracy depends on the flight trajectory and that the pitch, roll, yaw, and thrust control signals show different levels of significance for pilot identification. This result paves the way to a number of security and forensics applications, including continuous identification of UAV pilots to mitigate the risk of hijacking."
]
}
|
1901.03535
|
2911222431
|
We propose PiNcH, a methodology to detect the presence of a drone and its current status leveraging just the communication traffic exchanged between the drone and its Remote Controller (RC). PiNcH is built applying standard classification algorithms to the eavesdropped traffic, analyzing features such as packets inter-arrival time and size. PiNcH does not require either any special hardware or to transmit any signal. Indeed, it is fully passive and it resorts to cheap and general purpose hardware. To evaluate the effectiveness of our solution, we collected real communication measurements from the 3DR SOLO drone, being the most popular open-source hardware, running the widespread ArduCopter open-source firmware, mounted on-board on a wide range of commercial amateur drones. Then, we test our solution against different publicly available wireless traces. The results prove that PiNcH can efficiently and effectively: (i) identify the presence of the drone in several heterogeneous scenarios; (ii) identify the current state of a powered-on drone, i.e., flying or lying on the ground; (iii) discriminate the movement of the drone; and, finally, (iv) estimate a lower bound on the time required to identify a drone with the requested level of assurance. The quality and viability of our solution do prove that network traffic analysis can be successfully adopted for drone identification and pave the way for future research in the area.
|
Authors in @cite_10 explored the feasibility of RF-based detection of drones by looking at radio physical characteristics of the communication channel when the drone's body is affected by vibration and body shifting. The analysis considered whether the received drone signals are uniquely differentiated from other mobile wireless phenomena such as cars equipped with Wi-Fi or humans carrying a mobile phone. The sensitivity of detection at distances of hundreds of meters, as well as the accuracy of the overall detection system, is evaluated using an SDR implementation. Since it relies on both the RSSI and the phase of the signals, the precision of the approach varies with the distance between the receiver and the transmitter. In addition, the solution requires physical-layer information and special hardware (an SDR), while our contribution exploits only network-layer information that can be collected by any WiFi device.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2625718276"
],
"abstract": [
"Drones are increasingly flying in sensitive airspace where their presence may cause harm, such as near airports, forest fires, large crowded events, secure buildings, and even jails. This problem is likely to expand given the rapid proliferation of drones for commerce, monitoring, recreation, and other applications. A cost-effective detection system is needed to warn of the presence of drones in such cases. In this paper, we explore the feasibility of inexpensive RF-based detection of the presence of drones. We examine whether physical characteristics of the drone, such as body vibration and body shifting, can be detected in the wireless signal transmitted by drones during communication. We consider whether the received drone signals are uniquely differentiated from other mobile wireless phenomena such as cars equipped with Wi- Fi or humans carrying a mobile phone. The sensitivity of detection at distances of hundreds of meters as well as the accuracy of the overall detection system are evaluated using software defined radio (SDR) implementation."
]
}
|
1901.03535
|
2911222431
|
We propose PiNcH, a methodology to detect the presence of a drone and its current status leveraging just the communication traffic exchanged between the drone and its Remote Controller (RC). PiNcH is built applying standard classification algorithms to the eavesdropped traffic, analyzing features such as packets inter-arrival time and size. PiNcH does not require either any special hardware or to transmit any signal. Indeed, it is fully passive and it resorts to cheap and general purpose hardware. To evaluate the effectiveness of our solution, we collected real communication measurements from the 3DR SOLO drone, being the most popular open-source hardware, running the widespread ArduCopter open-source firmware, mounted on-board on a wide range of commercial amateur drones. Then, we test our solution against different publicly available wireless traces. The results prove that PiNcH can efficiently and effectively: (i) identify the presence of the drone in several heterogeneous scenarios; (ii) identify the current state of a powered-on drone, i.e., flying or lying on the ground; (iii) discriminate the movement of the drone; and, finally, (iv) estimate a lower bound on the time required to identify a drone with the requested level of assurance. The quality and viability of our solution do prove that network traffic analysis can be successfully adopted for drone identification and pave the way for future research in the area.
|
An identification mechanism based on the correlation between the motion observed from an external camera and the acceleration measured on each UAV's accelerometer is proposed in @cite_4 . This solution combines FPV information with accelerometer information to remotely control a subset of swarm drones that are not equipped with a camera, and therefore it requires the collaboration of one or more drones in the swarm to perform the identification.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2626519255"
],
"abstract": [
"Unmanned aerial vehicle (UAV) swarms provide situation awareness in potential life-threatening tasks such as emergency response, search and rescue, etc. However, most of these scenarios take place in GPS-denied environments, where accurately localizing each UAV is challenging. Heterogeneous UAV swarms, in which only a subset of the drones carry cameras, face the additional challenge of identifying each individual UAV in order to avoid sending position updates to the wrong drone, thus crashing. This work presents an identification mechanism based on the correlation between motion observed from an external camera, and the acceleration measured on each UAV’s accelerometer."
]
}
|
1901.03535
|
2911222431
|
We propose PiNcH, a methodology to detect the presence of a drone and its current status leveraging just the communication traffic exchanged between the drone and its Remote Controller (RC). PiNcH is built applying standard classification algorithms to the eavesdropped traffic, analyzing features such as packets inter-arrival time and size. PiNcH does not require either any special hardware or to transmit any signal. Indeed, it is fully passive and it resorts to cheap and general purpose hardware. To evaluate the effectiveness of our solution, we collected real communication measurements from the 3DR SOLO drone, being the most popular open-source hardware, running the widespread ArduCopter open-source firmware, mounted on-board on a wide range of commercial amateur drones. Then, we test our solution against different publicly available wireless traces. The results prove that PiNcH can efficiently and effectively: (i) identify the presence of the drone in several heterogeneous scenarios; (ii) identify the current state of a powered-on drone, i.e., flying or lying on the ground; (iii) discriminate the movement of the drone; and, finally, (iv) estimate a lower bound on the time required to identify a drone with the requested level of assurance. The quality and viability of our solution do prove that network traffic analysis can be successfully adopted for drone identification and pave the way for future research in the area.
|
Fingerprinting of wireless radio traffic at the network layer is emerging as a promising technique to uniquely identify devices in the wild. Authors in @cite_6 showed that the extraction of unique fingerprints can provide a reliable and robust means for device identification.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"1874399837"
],
"abstract": [
"Node forgery or impersonation, in which legitimate cryptographic credentials are captured by an adversary, constitutes one major security threat facing wireless networks. The fact that mobile devices are prone to be compromised and reverse engineered significantly increases the risk of such attacks in which adversaries can obtain secret keys on trusted nodes and impersonate the legitimate node. One promising approach toward thwarting these attacks is through the extraction of unique fingerprints that can provide a reliable and robust means for device identification. These fingerprints can be extracted from transmitted signal by analyzing information across the protocol stack. In this paper, the first unified and comprehensive tutorial in the area of wireless device fingerprinting for security applications is presented. In particular, we aim to provide a detailed treatment on developing novel wireless security solutions using device fingerprinting techniques. The objectives are three-fold: (i) to introduce a comprehensive taxonomy of wireless features that can be used in fingerprinting, (ii) to provide a systematic review on fingerprint algorithms including both white-list based and unsupervised learning approaches, and (iii) to identify key open research problems in the area of device fingerprinting and feature extraction, as applied to wireless security."
]
}
|
1901.03535
|
2911222431
|
We propose PiNcH, a methodology to detect the presence of a drone and its current status leveraging just the communication traffic exchanged between the drone and its Remote Controller (RC). PiNcH is built applying standard classification algorithms to the eavesdropped traffic, analyzing features such as packets inter-arrival time and size. PiNcH does not require either any special hardware or to transmit any signal. Indeed, it is fully passive and it resorts to cheap and general purpose hardware. To evaluate the effectiveness of our solution, we collected real communication measurements from the 3DR SOLO drone, being the most popular open-source hardware, running the widespread ArduCopter open-source firmware, mounted on-board on a wide range of commercial amateur drones. Then, we test our solution against different publicly available wireless traces. The results prove that PiNcH can efficiently and effectively: (i) identify the presence of the drone in several heterogeneous scenarios; (ii) identify the current state of a powered-on drone, i.e., flying or lying on the ground; (iii) discriminate the movement of the drone; and, finally, (iv) estimate a lower bound on the time required to identify a drone with the requested level of assurance. The quality and viability of our solution do prove that network traffic analysis can be successfully adopted for drone identification and pave the way for future research in the area.
|
A fingerprinting approach for drone identification is proposed in @cite_37 . Authors analyzed the WiFi communication protocol used by drones and developed three methods to identify a specific drone model: (i) examining the time intervals between probe request frames; (ii) utilizing the signal strength carried in the frame header; and, finally, (iii) exploiting frame header fields with specific values. However, fingerprinting approaches require specific equipment, such as Software Defined Radios (SDRs).
|
{
"cite_N": [
"@cite_37"
],
"mid": [
"2889328478"
],
"abstract": [
"To better address the growing threats of small commercial Unmanned Aerial Systems (UASs), we developed novel methods to identify specific types of drone models by building their wireless communication profiles with machine learning algorithms. In this paper, we present a basic framework for identifying unique features and discuss basic methods for several drone models. The results based on our proposed methods demonstrate their effectiveness. We continue to collect experimental results involving several popular drones and investigate more features to enhance the accuracy of our methods. We also outline our future work involving additional drone models and further methods used to improve identification accuracy."
]
}
|
1901.03535
|
2911222431
|
We propose PiNcH, a methodology to detect the presence of a drone and its current status leveraging just the communication traffic exchanged between the drone and its Remote Controller (RC). PiNcH is built applying standard classification algorithms to the eavesdropped traffic, analyzing features such as packets inter-arrival time and size. PiNcH does not require either any special hardware or to transmit any signal. Indeed, it is fully passive and it resorts to cheap and general purpose hardware. To evaluate the effectiveness of our solution, we collected real communication measurements from the 3DR SOLO drone, being the most popular open-source hardware, running the widespread ArduCopter open-source firmware, mounted on-board on a wide range of commercial amateur drones. Then, we test our solution against different publicly available wireless traces. The results prove that PiNcH can efficiently and effectively: (i) identify the presence of the drone in several heterogeneous scenarios; (ii) identify the current state of a powered-on drone, i.e., flying or lying on the ground; (iii) discriminate the movement of the drone; and, finally, (iv) estimate a lower bound on the time required to identify a drone with the requested level of assurance. The quality and viability of our solution do prove that network traffic analysis can be successfully adopted for drone identification and pave the way for future research in the area.
|
Machine learning techniques have been successfully used for other purposes in this research field. In @cite_21 , authors proposed a wireless power transfer system that predicts the drone's behavior based on the flight data, utilizing machine learning techniques and Naive Bayes algorithms.
|
{
"cite_N": [
"@cite_21"
],
"mid": [
"2687556248"
],
"abstract": [
"In this paper, the design of a novel wireless power transfer system utilizing drones with machine learning techniques is presented. Research on drones is currently a fast growing field with a great potential in many ubiquitous applications. The wireless power transfer system with the fixed operation frequency at 13.56MHz is applied to an 1-coil receiver on the drone, with an array of transmitter coils on the ground. This work presents an approach where the data is considered “classified” using machine learning techniques, which allows the accurate prediction of the drone's position, thus enhancing the wireless power transfer efficiency."
]
}
|
1901.03535
|
2911222431
|
We propose PiNcH, a methodology to detect the presence of a drone and its current status leveraging just the communication traffic exchanged between the drone and its Remote Controller (RC). PiNcH is built applying standard classification algorithms to the eavesdropped traffic, analyzing features such as packets inter-arrival time and size. PiNcH does not require either any special hardware or to transmit any signal. Indeed, it is fully passive and it resorts to cheap and general purpose hardware. To evaluate the effectiveness of our solution, we collected real communication measurements from the 3DR SOLO drone, being the most popular open-source hardware, running the widespread ArduCopter open-source firmware, mounted on-board on a wide range of commercial amateur drones. Then, we test our solution against different publicly available wireless traces. The results prove that PiNcH can efficiently and effectively: (i) identify the presence of the drone in several heterogeneous scenarios; (ii) identify the current state of a powered-on drone, i.e., flying or lying on the ground; (iii) discriminate the movement of the drone; and, finally, (iv) estimate a lower bound on the time required to identify a drone with the requested level of assurance. The quality and viability of our solution do prove that network traffic analysis can be successfully adopted for drone identification and pave the way for future research in the area.
|
In @cite_17 , the authors demonstrated that machine learning can successfully predict transmission patterns in drone networks. The packet transmission rates of a communication network of twenty drones were simulated, and the results were used to train a linear regression model and a Support Vector Machine with Quadratic Kernel (SVM-QK).
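The quadratic-kernel idea above can be sketched compactly. The snippet below uses kernel ridge regression with a degree-2 polynomial kernel as a simpler stand-in for the SVM-QK of @cite_17 , fit on an invented quadratic packet-rate function (the features and target are assumptions, not the paper's data):

```python
import numpy as np

def quad_kernel(A, B, c=1.0):
    """Quadratic (degree-2 polynomial) kernel: k(x, y) = (x . y + c)^2."""
    return (A @ B.T + c) ** 2

def fit_krr(X, y, lam=1e-4):
    """Kernel ridge regression: alpha = (K + lam * I)^-1 y."""
    K = quad_kernel(X, X)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict_krr(X_train, alpha, X_new):
    return quad_kernel(X_new, X_train) @ alpha

# Invented target: packet rate as a quadratic function of two features.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (200, 2))
y = 5.0 - 3.0 * X[:, 0] ** 2 + 2.0 * X[:, 0] * X[:, 1]
alpha = fit_krr(X, y)
pred = predict_krr(X, alpha, X[:5])
print(float(np.max(np.abs(pred - y[:5]))))  # near-zero fitting error
```

Because the invented target is itself quadratic, it lies in the kernel's feature space and the fit is nearly exact.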
|
{
"cite_N": [
"@cite_17"
],
"mid": [
"2560062452"
],
"abstract": [
"Drones cooperate with each other by transmitting and receiving packets. Therefore, it is important to conjecture the packet transmission rates within the network. However, the conventional methods are not suitable to describe the transmission patterns with satisfactory computing speed and accuracy. In this paper, we demonstrated that machine learning can successfully predict the transmission patterns in drone network. The packet transmission rates of a communication network with twenty drones were simulated, of which results were used to train the linear regression and Support Vector Machine with Quadratic Kernel (SVM-QK). We found out SVM-QK can precisely predict the communication between drones."
]
}
|
1901.03535
|
2911222431
|
We propose PiNcH, a methodology to detect the presence of a drone and its current status leveraging just the communication traffic exchanged between the drone and its Remote Controller (RC). PiNcH is built applying standard classification algorithms to the eavesdropped traffic, analyzing features such as packets inter-arrival time and size. PiNcH does not require either any special hardware or to transmit any signal. Indeed, it is fully passive and it resorts to cheap and general purpose hardware. To evaluate the effectiveness of our solution, we collected real communication measurements from the 3DR SOLO drone, being the most popular open-source hardware, running the widespread ArduCopter open-source firmware, mounted on-board on a wide range of commercial amateur drones. Then, we test our solution against different publicly available wireless traces. The results prove that PiNcH can efficiently and effectively: (i) identify the presence of the drone in several heterogeneous scenarios; (ii) identify the current state of a powered-on drone, i.e., flying or lying on the ground; (iii) discriminate the movement of the drone; and, finally, (iv) estimate a lower bound on the time required to identify a drone with the requested level of assurance. The quality and viability of our solution do prove that network traffic analysis can be successfully adopted for drone identification and pave the way for future research in the area.
|
Finally, the authors of @cite_19 analyze the basic architecture of a drone and propose a generic drone forensic model that would improve the digital investigation process. They also provide recommendations on how to perform forensics on the various components of a drone, such as the camera and Wi-Fi.
|
{
"cite_N": [
"@cite_19"
],
"mid": [
"2606197117"
],
"abstract": [
"Ease of availability and affordability of unmanned aerial vehicles (UAV) have led to an increase in its popularity amongst the public. The proliferation of UAVs has also augmented several security issues. These devices are used for illegal activities such as drug smuggling and privacy invasion. The purpose of this research paper is to analyze the basic architecture of a drone, and to propose a generic drone forensic model that would improve the digital investigation process. This paper also provides recommendations on how one should perform forensics on the various components of a drone such as camera and Wi-Fi."
]
}
|
1901.03597
|
2910333870
|
In 2018, clinics and hospitals were hit with numerous attacks leading to significant data breaches and interruptions in medical services. An attacker with access to medical records can do much more than hold the data for ransom or sell it on the black market. In this paper, we show how an attacker can use deep-learning to add or remove evidence of medical conditions from volumetric (3D) medical scans. An attacker may perform this act in order to stop a political candidate, sabotage research, commit insurance fraud, perform an act of terrorism, or even commit murder. We implement the attack using a 3D conditional GAN and show how the framework (CT-GAN) can be automated. Although the body is complex and 3D medical scans are very large, CT-GAN achieves realistic results which can be executed in milliseconds. To evaluate the attack, we focused on injecting and removing lung cancer from CT scans. We show how three expert radiologists and a state-of-the-art deep learning AI are highly susceptible to the attack. We also explore the attack surface of a modern radiology network and demonstrate one attack vector: we intercepted and manipulated CT scans in an active hospital network with a covert penetration test. Demo video: this https URL Source code: this https URL
|
Many works have proposed methods for detecting forgeries in medical images @cite_49 , but none have focused on the attack itself. The most common methods of image forgery are: copying content from one image to another (image splicing), duplicating content within the same image to cover up or add something (copy-move), and enhancing an image to give it a different feel (image retouching) @cite_21 .
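A minimal illustration of the copy-move forgery just described: exact duplicate fixed-size blocks at different locations are the simplest fingerprint of such tampering. Real detectors use robust features (e.g. DCT coefficients) and near-duplicate matching to survive compression; this exact-match sketch on synthetic data is only illustrative:

```python
import numpy as np

def copy_move_blocks(img, bs=8):
    """Flag exact duplicate bs x bs grid-aligned blocks at different
    locations -- the simplest signature of a copy-move forgery."""
    seen, dupes = {}, []
    H, W = img.shape
    for i in range(0, H - bs + 1, bs):
        for j in range(0, W - bs + 1, bs):
            key = img[i:i + bs, j:j + bs].tobytes()
            if key in seen:
                dupes.append((seen[key], (i, j)))
            else:
                seen[key] = (i, j)
    return dupes

# Toy demo: copy a textured patch to another region of a noisy image.
rng = np.random.default_rng(2)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
img[32:40, 32:40] = img[0:8, 0:8]   # the "forgery"
print(copy_move_blocks(img))        # the duplicated block pair
```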
|
{
"cite_N": [
"@cite_21",
"@cite_49"
],
"mid": [
"2779804704",
"2904419556"
],
"abstract": [
"Authenticating digital images is increasingly becoming important because digital images carry important information and due to their use in different areas such as courts of law as essential pieces of evidence. Nowadays, authenticating digital images is difficult because manipulating them has become easy as a result of powerful image processing software and human knowledge. The importance and relevance of digital image forensics has attracted various researchers to establish different techniques for detection in image forensics. The core category of image forensics is passive image forgery detection. One of the most important passive forgeries that affect the originality of the image is copy-move digital image forgery, which involves copying one part of the image onto another area of the same image. Various methods have been proposed to detect copy-move forgery that uses different types of transformations. The goal of this paper is to determine which copy-move forgery detection methods are best for different image attributes such as JPEG compression, scaling, rotation. The advantages and drawbacks of each method are also highlighted. Thus, the current state-of-the-art image forgery detection techniques are discussed along with their advantages and drawbacks.",
"Abstract Editing a real-world photo through computer software or mobile applications is one of the easiest things one can do today before sharing the doctored image on one’s social networking sites. Although most people do it for fun, it is suspectable if one concealed an object or changed someone’s face within the image. Before questioning the intention behind the editing operations, we need to first identify how and which part of the image has been manipulated. It therefore demands automatic tools for identifying the intrinsic difference between authentic images and tampered images. This survey provides an overview on typical image tampering types, released image tampering datasets and recent tampering detection approaches. It presents a distinct perspective to rethink various assumptions of tampering clues behind different detection approaches. And this further encourages the research community to develop general tampering localization methods in the future instead of adhering to single-type tampering detection."
]
}
|
1901.03597
|
2910333870
|
In 2018, clinics and hospitals were hit with numerous attacks leading to significant data breaches and interruptions in medical services. An attacker with access to medical records can do much more than hold the data for ransom or sell it on the black market. In this paper, we show how an attacker can use deep-learning to add or remove evidence of medical conditions from volumetric (3D) medical scans. An attacker may perform this act in order to stop a political candidate, sabotage research, commit insurance fraud, perform an act of terrorism, or even commit murder. We implement the attack using a 3D conditional GAN and show how the framework (CT-GAN) can be automated. Although the body is complex and 3D medical scans are very large, CT-GAN achieves realistic results which can be executed in milliseconds. To evaluate the attack, we focused on injecting and removing lung cancer from CT scans. We show how three expert radiologists and a state-of-the-art deep learning AI are highly susceptible to the attack. We also explore the attack surface of a modern radiology network and demonstrate one attack vector: we intercepted and manipulated CT scans in an active hospital network with a covert penetration test. Demo video: this https URL Source code: this https URL
|
Copy-move attacks can be used to remove cancer or to duplicate an existing cancer. However, duplicating an existing cancer will raise suspicion because radiologists closely analyze each sample. Image splicing can be used to inject cancer into healthy lungs. However, CT scanners have distinct local noise patterns which are visually noticeable @cite_30 @cite_29 . The copied content would not fit the local noise pattern and would thus raise suspicion.
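The scanner-fingerprinting idea of @cite_30 @cite_29 can be sketched as follows: extract a noise residual from each image and correlate it against per-scanner reference patterns. The denoiser below is a crude 3x3 box filter rather than the wavelet-based Wiener filter used in the cited work, and the "scanners" are simulated with fixed additive noise patterns:

```python
import numpy as np

def noise_residual(img):
    """Noise residual: image minus a 3x3 box-filtered version
    (a crude stand-in for wavelet/Wiener denoising)."""
    p = np.pad(img.astype(float), 1, mode="edge")
    smooth = sum(p[di:di + img.shape[0], dj:dj + img.shape[1]]
                 for di in range(3) for dj in range(3)) / 9.0
    return img - smooth

def corr(a, b):
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Two simulated "scanners", each with a fixed additive noise pattern.
rng = np.random.default_rng(3)
patterns = [rng.normal(0.0, 1.0, (32, 32)) for _ in range(2)]
def scan(scanner, scene):
    return scene + patterns[scanner] + rng.normal(0.0, 0.1, scene.shape)

# Build reference residuals, then identify the scanner of a fresh scan.
refs = [noise_residual(scan(s, np.zeros((32, 32)))) for s in range(2)]
test_img = scan(1, rng.normal(0.0, 0.5, (32, 32)))
scores = [corr(noise_residual(test_img), r) for r in refs]
print(int(np.argmax(scores)))  # identified scanner index
```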
|
{
"cite_N": [
"@cite_30",
"@cite_29"
],
"mid": [
"2047790369",
"2410503403"
],
"abstract": [
"Medical image processing is considered as an important topic in the domain of image processing. It is used to help the doctors to improve and speed up the diagnosis process. In particular, computed tomography scanners (CT-Scanner) are used to create cross-sectional medical 3D images of bones. In this paper, we propose a method for CT-Scanner identification based on the sensor noise analysis. We built the reference noise pattern for each CT-Scanner from its 3D image, then we correlated the tested 3D images with each reference noise pattern in order to identify the corresponding CT-Scanner. We used a wavelet-based Wiener filter approach to extract the noise. Experimental results were applied on eight 3D images of 100 slices from different CT-Scanners, and we were able to identify each CT-Scanner separately.",
"In this paper, we focus on the “blind” identification of the computed tomography (CT) scanner that has produced a CT image. To do so, we propose a set of noise features derived from the image chain acquisition and which can be used as CT-scanner footprint. Basically, we propose two approaches. The first one aims at identifying a CT scanner based on an original sensor pattern noise (OSPN) that is intrinsic to the X-ray detectors. The second one identifies an acquisition system based on the way this noise is modified by its three-dimensional (3-D) image reconstruction algorithm. As these reconstruction algorithms are manufacturer dependent and kept secret, our features are used as input to train a support vector machine (SVM) based classifier to discriminate acquisition systems. Experiments conducted on images issued from 15 different CT-scanner models of 4 distinct manufacturers demonstrate that our system identifies the origin of one CT image with a detection rate of at least 94 and that it achieves better performance than sensor pattern noise (SPN) based strategy proposed for general public camera devices."
]
}
|
1901.03597
|
2910333870
|
In 2018, clinics and hospitals were hit with numerous attacks leading to significant data breaches and interruptions in medical services. An attacker with access to medical records can do much more than hold the data for ransom or sell it on the black market. In this paper, we show how an attacker can use deep-learning to add or remove evidence of medical conditions from volumetric (3D) medical scans. An attacker may perform this act in order to stop a political candidate, sabotage research, commit insurance fraud, perform an act of terrorism, or even commit murder. We implement the attack using a 3D conditional GAN and show how the framework (CT-GAN) can be automated. Although the body is complex and 3D medical scans are very large, CT-GAN achieves realistic results which can be executed in milliseconds. To evaluate the attack, we focused on injecting and removing lung cancer from CT scans. We show how three expert radiologists and a state-of-the-art deep learning AI are highly susceptible to the attack. We also explore the attack surface of a modern radiology network and demonstrate one attack vector: we intercepted and manipulated CT scans in an active hospital network with a covert penetration test. Demo video: this https URL Source code: this https URL
|
Since 2016, over 100 papers relating to GANs and medical imaging have been published @cite_2 . These publications mostly relate to image reconstruction, denoising, image generation (synthesis), segmentation, detection, classification, and registration. We will focus on the use of GANs to generate medical images.
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2890139949"
],
"abstract": [
"Abstract Generative adversarial networks have gained a lot of attention in the computer vision community due to their capability of data generation without explicitly modelling the probability density function. The adversarial loss brought by the discriminator provides a clever way of incorporating unlabeled samples into training and imposing higher order consistency. This has proven to be useful in many cases, such as domain adaptation, data augmentation, and image-to-image translation. These properties have attracted researchers in the medical imaging community, and we have seen rapid adoption in many traditional and novel applications, such as image reconstruction, segmentation, detection, classification, and cross-modality synthesis. Based on our observations, this trend will continue and we therefore conducted a review of recent advances in medical imaging using the adversarial training scheme with the hope of benefiting researchers interested in this technique."
]
}
|
1901.03597
|
2910333870
|
In 2018, clinics and hospitals were hit with numerous attacks leading to significant data breaches and interruptions in medical services. An attacker with access to medical records can do much more than hold the data for ransom or sell it on the black market. In this paper, we show how an attacker can use deep-learning to add or remove evidence of medical conditions from volumetric (3D) medical scans. An attacker may perform this act in order to stop a political candidate, sabotage research, commit insurance fraud, perform an act of terrorism, or even commit murder. We implement the attack using a 3D conditional GAN and show how the framework (CT-GAN) can be automated. Although the body is complex and 3D medical scans are very large, CT-GAN achieves realistic results which can be executed in milliseconds. To evaluate the attack, we focused on injecting and removing lung cancer from CT scans. We show how three expert radiologists and a state-of-the-art deep learning AI are highly susceptible to the attack. We also explore the attack surface of a modern radiology network and demonstrate one attack vector: we intercepted and manipulated CT scans in an active hospital network with a covert penetration test. Demo video: this https URL Source code: this https URL
|
Another approach to augmenting medical datasets is the generation of new instances. In @cite_7 , the authors use a DCGAN to generate 2D brain MRI images with a resolution of 220x172. In @cite_31 , the authors use a DCGAN to generate 2D liver lesions with a resolution of 64x64. In @cite_18 , the authors generate 3D blood vessels using a Wasserstein GAN (WGAN) @cite_20 . In @cite_22 , the authors use a Laplacian pyramid GAN (LAPGAN) to generate skin lesion images with a resolution of 256x256. In @cite_42 , the authors train two DCGANs for generating 2D chest X-rays (one for malign samples and the other for benign). However, the generated samples were downsampled to a resolution of 128x128, since the approach could not be scaled to the original resolution of 2000x3000. In @cite_54 , the authors generate 2D images of pulmonary lung nodules (lung cancer) with a resolution of 56x56. The authors' motivation was to create realistic datasets for doctors to practice on. The samples were generated using a deep convolutional GAN (DCGAN), and their realism was assessed with the help of two radiologists. The authors found that the radiologists were unable to differentiate between real and fake samples.
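To make the DCGAN-style generation above concrete, the sketch below traces only the generator's shape pipeline (latent vector → 4x4 feature map → repeated 2x upsampling + 3x3 convolution → image) with random, untrained weights in plain NumPy. Real DCGANs use learned transposed convolutions and batch normalization; this is shape bookkeeping only:

```python
import numpy as np

rng = np.random.default_rng(5)

def conv3x3(x, W):
    """'Same' 3x3 convolution over an (H, W, C_in) tensor followed by
    ReLU; W has shape (3, 3, C_in, C_out)."""
    H, Wd, _ = x.shape
    p = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((H, Wd, W.shape[-1]))
    for di in range(3):
        for dj in range(3):
            out += p[di:di + H, dj:dj + Wd] @ W[di, dj]
    return np.maximum(out, 0.0)

def upsample2x(x):
    """Nearest-neighbor 2x upsampling (stand-in for transposed conv)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

z = rng.normal(size=100)                                  # latent vector
x = (rng.normal(size=(100, 4 * 4 * 8)).T @ z).reshape(4, 4, 8)
for ci, co in [(8, 8), (8, 8), (8, 4), (4, 1)]:           # 4 -> 8 -> 16 -> 32 -> 64
    x = conv3x3(upsample2x(x), rng.normal(size=(3, 3, ci, co)) * 0.1)
print(x.shape)  # → (64, 64, 1)
```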
|
{
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_7",
"@cite_54",
"@cite_42",
"@cite_31",
"@cite_20"
],
"mid": [
"2797591221",
"2796550904",
"2781600508",
"2964261464",
"2793678581",
"",
""
],
"abstract": [
"Computationally synthesized blood vessels can be used for training and evaluation of medical image analysis applications. We propose a deep generative model to synthesize blood vessel geometries, with an application to coronary arteries in cardiac CT angiography (CCTA). In the proposed method, a Wasserstein generative adversarial network (GAN) consisting of a generator and a discriminator network is trained. While the generator tries to synthesize realistic blood vessel geometries, the discriminator tries to distinguish synthesized geometries from those of real blood vessels. Both real and synthesized blood vessel geometries are parametrized as 1D signals based on the central vessel axis. The generator can optionally be provided with an attribute vector to synthesize vessels with particular characteristics. The GAN was optimized using a reference database with parametrizations of 4,412 real coronary artery geometries extracted from CCTA scans. After training, plausible coronary artery geometries could be synthesized based on random vectors sampled from a latent space. A qualitative analysis showed strong similarities between real and synthesized coronary arteries. A detailed analysis of the latent space showed that the diversity present in coronary artery anatomy was accurately captured by the generator. Results show that Wasserstein generative adversarial networks can be used to synthesize blood vessel geometries.",
"Generative Adversarial Networks (GANs) have been successfully used to synthesize realistically looking images of faces, scenery and even medical images. Unfortunately, they usually require large training datasets, which are often scarce in the medical field, and to the best of our knowledge GANs have been only applied for medical image synthesis at fairly low resolution. However, many state-of-the-art machine learning models operate on high resolution data as such data carries indispensable, valuable information. In this work, we try to generate realistically looking high resolution images of skin lesions with GANs, using only a small training dataset of 2000 samples. The nature of the data allows us to do a direct comparison between the image statistics of the generated samples and the real dataset. We both quantitatively and qualitatively compare state-of-the-art GAN architectures such as DCGAN and LAPGAN against a modification of the latter for the task of image generation at a resolution of 256x256px. Our investigation shows that we can approximate the real data distribution with all of the models, but we notice major differences when visually rating sample realism, diversity and artifacts. In a set of use-case experiments on skin lesion classification, we further show that we can successfully tackle the problem of heavy class imbalance with the help of synthesized high resolution melanoma samples.",
"An important task in image processing and neuroimaging is to extract quantitative information from the acquired images in order to make observations about the presence of disease or markers of development in populations. Having a low-dimensional manifold of an image allows for easier statistical comparisons between groups and the synthesis of group representatives. Previous studies have sought to identify the best mapping of brain MRI to a low-dimensional manifold, but have been limited by assumptions of explicit similarity measures. In this work, we use deep learning techniques to investigate implicit manifolds of normal brains and generate new, high-quality images. We explore implicit manifolds by addressing the problems of image synthesis and image denoising as important tools in manifold learning. First, we propose the unsupervised synthesis of T1-weighted brain MRI using a Generative Adversarial Network (GAN) by learning from 528 examples of 2D axial slices of brain MRI. Synthesized images were first shown to be unique by performing a cross-correlation with the training set. Real and synthesized images were then assessed in a blinded manner by two imaging experts providing an image quality score of 1-5. The quality score of the synthetic image showed substantial overlap with that of the real images. Moreover, we use an autoencoder with skip connections for image denoising, showing that the proposed method results in higher PSNR than FSL SUSAN after denoising. This work shows the power of artificial networks to synthesize realistic imaging data, which can be used to improve image processing techniques and provide a quantitative framework to structural changes in the brain.",
"Discriminating lung nodules as malignant or benign is still an underlying challenge. To address this challenge, radiologists need computer aided diagnosis (CAD) systems which can assist in learning discriminative imaging features corresponding to malignant and benign nodules. However, learning highly discriminative imaging features is an open problem. In this paper, our aim is to learn the most discriminative features pertaining to lung nodules by using an adversarial learning methodology. Specifically, we propose to use un-supervised learning with Deep Convolutional-Generative Adversarial Networks (DC-GANs) to generate lung nodule samples realistically. We hypothesize that imaging features of lung nodules will be discriminative if it is hard to differentiate them (fake) from real (true) nodules. To test this hypothesis, we present Visual Turing tests to two radiologists in order to evaluate the quality of the generated (fake) nodules. Extensive comparisons are performed in discerning real, generated, benign, and malignant nodules. This experimental set up allows us to validate the overall quality of the generated nodules, which can then be used to (1) improve diagnostic decisions by mining highly discriminative imaging features, (2) train radiologists for educational purposes, and (3) generate realistic samples to train deep networks with big data.",
"Medical imaging datasets are limited in size due to privacy issues and the high cost of obtaining annotations. Augmentation is a widely used practice in deep learning to enrich the data in data-limited scenarios and to avoid overfitting. However, standard augmentation methods that produce new examples of data by varying lighting, field of view, and spatial rigid transformations do not capture the biological variance of medical imaging data and could result in unrealistic images. Generative adversarial networks (GANs) provide an avenue to understand the underlying structure of image data which can then be utilized to generate new realistic samples. In this work, we investigate the use of GANs for producing chest X-ray images to augment a dataset. This dataset is then used to train a convolutional neural network to classify images for cardiovascular abnormalities. We compare our augmentation strategy with traditional data augmentation and show higher accuracy for normal vs abnormal classification in chest X-rays.",
"",
""
]
}
|
1901.03459
|
2887684398
|
We introduce a new task named Story Ending Generation (SEG), which aims at generating a coherent story ending from a sequence of story plot. We propose a framework consisting of a Generator and a Reward Manager for this task. The Generator follows the pointer-generator network with coverage mechanism to deal with out-of-vocabulary (OOV) and repetitive words. Moreover, a mixed loss method is introduced to enable the Generator to produce story endings of high semantic relevance with story plots. In the Reward Manager, the reward is computed to fine-tune the Generator with policy-gradient reinforcement learning (PGRL). We conduct experiments on the recently-introduced ROCStories Corpus. We evaluate our model in both automatic evaluation and human evaluation. Experimental results show that our model exceeds the sequence-to-sequence baseline model by 15.75 and 13.57 in terms of CIDEr and consistency score respectively.
|
The encoder-decoder framework, which uses neural networks as the encoder and the decoder, was first proposed for machine translation @cite_7 @cite_12 and has been widely used in NLG tasks. The encoder reads and encodes a source sentence into a fixed-length vector, and the decoder then outputs a new sequence from the encoded vector. The attention mechanism @cite_3 extends the basic encoder-decoder framework by assigning different weights to input words when generating each target word. @cite_32 , @cite_22 and @cite_18 apply attention-based encoder-decoder models to text summarization. @cite_0 and @cite_1 develop new and effective attentive models for machine reading comprehension.
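A minimal numerical sketch of the additive attention of @cite_3 : score each encoder state against the current decoder state, softmax the scores into weights, and form the context as the weighted sum. The weight matrices below are random placeholders; in a real model they are learned:

```python
import numpy as np

def additive_attention(H, s, W1, W2, v):
    """Bahdanau-style attention: score_i = v . tanh(W1 h_i + W2 s),
    weights = softmax(scores), context = sum_i weights_i * h_i."""
    scores = np.tanh(H @ W1.T + s @ W2.T) @ v   # shape (T,)
    scores -= scores.max()                       # numerical stability
    w = np.exp(scores) / np.exp(scores).sum()
    return w @ H, w                              # context (d,), weights (T,)

rng = np.random.default_rng(4)
T, d, a = 5, 8, 16                    # source length, hidden size, attn size
H = rng.normal(size=(T, d))           # encoder states h_1 .. h_T
s = rng.normal(size=d)                # current decoder state
W1 = rng.normal(size=(a, d))
W2 = rng.normal(size=(a, d))
v = rng.normal(size=a)
context, weights = additive_attention(H, s, W1, W2, v)
print(weights.sum(), context.shape)
```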
|
{
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_7",
"@cite_1",
"@cite_32",
"@cite_3",
"@cite_0",
"@cite_12"
],
"mid": [
"1843891098",
"2341401723",
"2950635152",
"",
"2467173223",
"2133564696",
"",
"2130942839"
],
"abstract": [
"Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.",
"In this work, we model abstractive text summarization using Attentional Encoder-Decoder Recurrent Neural Networks, and show that they achieve state-of-the-art performance on two different corpora. We propose several novel models that address critical problems in summarization that are not adequately modeled by the basic architecture, such as modeling key-words, capturing the hierarchy of sentence-to-word structure, and emitting words that are rare or unseen at training time. Our work shows that many of our proposed models contribute to further improvement in performance. We also propose a new dataset consisting of multi-sentence summaries, and establish performance benchmarks for further research.",
"In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.",
"",
"",
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.",
"",
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier."
]
}
|
1901.03459
|
2887684398
|
We introduce a new task named Story Ending Generation (SEG), which aims at generating a coherent story ending from a sequence of story plot. We propose a framework consisting of a Generator and a Reward Manager for this task. The Generator follows the pointer-generator network with coverage mechanism to deal with out-of-vocabulary (OOV) and repetitive words. Moreover, a mixed loss method is introduced to enable the Generator to produce story endings of high semantic relevance with story plots. In the Reward Manager, the reward is computed to fine-tune the Generator with policy-gradient reinforcement learning (PGRL). We conduct experiments on the recently-introduced ROCStories Corpus. We evaluate our model in both automatic evaluation and human evaluation. Experimental results show that our model exceeds the sequence-to-sequence baseline model by 15.75 and 13.57 in terms of CIDEr and consistency score respectively.
|
The encoder-decoder framework is typically trained by maximizing the log-likelihood of the next word given the previous ground-truth input words, resulting in exposure bias @cite_15 and objective mismatch @cite_30 problems. Exposure bias refers to the input distribution discrepancy between training and testing time, which makes generation brittle as errors accumulate. Objective mismatch refers to using MLE at training time while using discrete and non-differentiable NLP metrics such as BLEU at test time. Recently, it has been shown that both problems can be addressed by incorporating RL in captioning tasks. Specifically, @cite_15 propose the MIXER algorithm to directly optimize the sequence-based test metrics. @cite_9 improve the MIXER algorithm and use a policy gradient method. @cite_2 present a new optimization approach called self-critical sequence training (SCST). Similar to the above methods, @cite_27 , @cite_30 and @cite_31 explore different reward functions for video captioning. Researchers have also made attempts on other NLG tasks such as dialogue generation @cite_6 , sentence simplification @cite_4 and abstract summarization @cite_23 , obtaining satisfying performance with RL.
|
{
"cite_N": [
"@cite_30",
"@cite_31",
"@cite_4",
"@cite_9",
"@cite_6",
"@cite_27",
"@cite_23",
"@cite_2",
"@cite_15"
],
"mid": [
"2777622844",
"2775506363",
"2953033958",
"2949376505",
"2410983263",
"2742943414",
"2612675303",
"2560313346",
"2176263492"
],
"abstract": [
"Captioning models are typically trained using the cross-entropy loss. However, their performance is evaluated on other metrics designed to better correlate with human assessments. Recently, it has been shown that reinforcement learning (RL) can directly optimize these metrics in tasks such as captioning. However, this is computationally costly and requires specifying a baseline reward at each step to make training converge. We propose a fast approach to optimize one's objective of interest through the REINFORCE algorithm. First we show that, by replacing model samples with ground-truth sentences, RL training can be seen as a form of weighted cross-entropy loss, giving a fast, RL-based pre-training algorithm. Second, we propose to use the consensus among ground-truth captions of the same video as the baseline reward. This can be computed very efficiently. We call the complete proposal Consensus-based Sequence Training (CST). Applied to the MSRVTT video captioning benchmark, our proposals train significantly faster than comparable methods and establish a new state-of-the-art on the task, improving the CIDEr score from 47.3 to 54.2.",
"Video captioning is the task of automatically generating a textual description of the actions in a video. Although previous work (e.g. sequence-to-sequence model) has shown promising results in abstracting a coarse description of a short video, it is still very challenging to caption a video containing multiple fine-grained actions with a detailed description. This paper aims to address the challenge by proposing a novel hierarchical reinforcement learning framework for video captioning, where a high-level Manager module learns to design sub-goals and a low-level Worker module recognizes the primitive actions to fulfill the sub-goal. With this compositional framework to reinforce video captioning at different levels, our approach significantly outperforms all the baseline methods on a newly introduced large-scale dataset for fine-grained video captioning. Furthermore, our non-ensemble model has already achieved the state-of-the-art results on the widely-used MSR-VTT dataset.",
"Sentence simplification aims to make sentences easier to read and understand. Most recent approaches draw on insights from machine translation to learn simplification rewrites from monolingual corpora of complex and simple sentences. We address the simplification problem with an encoder-decoder model coupled with a deep reinforcement learning framework. Our model, which we call Dress (as shorthand for D eep RE inforcement S entence S implification), explores the space of possible simplifications while learning to optimize a reward function that encourages outputs which are simple, fluent, and preserve the meaning of the input. Experiments on three datasets demonstrate that our model outperforms competitive simplification systems.",
"Current image captioning methods are usually trained via (penalized) maximum likelihood estimation. However, the log-likelihood score of a caption does not correlate well with human assessments of quality. Standard syntactic evaluation metrics, such as BLEU, METEOR and ROUGE, are also not well correlated. The newer SPICE and CIDEr metrics are better correlated, but have traditionally been hard to optimize for. In this paper, we show how to use a policy gradient (PG) method to directly optimize a linear combination of SPICE and CIDEr (a combination we call SPIDEr): the SPICE score ensures our captions are semantically faithful to the image, while CIDEr score ensures our captions are syntactically fluent. The PG method we propose improves on the prior MIXER approach, by using Monte Carlo rollouts instead of mixing MLE training with PG. We show empirically that our algorithm leads to easier optimization and improved results compared to MIXER. Finally, we show that using our PG method we can optimize any of the metrics, including the proposed SPIDEr metric which results in image captions that are strongly preferred by human raters compared to captions generated by the same model but trained to optimize MLE or the COCO metrics.",
"Recent neural models of dialogue generation offer great promise for generating responses for conversational agents, but tend to be shortsighted, predicting utterances one at a time while ignoring their influence on future outcomes. Modeling the future direction of a dialogue is crucial to generating coherent, interesting dialogues, a need which led traditional NLP models of dialogue to draw on reinforcement learning. In this paper, we show how to integrate these goals, applying deep reinforcement learning to model future reward in chatbot dialogue. The model simulates dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity (non-repetitive turns), coherence, and ease of answering (related to forward-looking function). We evaluate our model on diversity, length as well as with human judges, showing that the proposed algorithm generates more interactive responses and manages to foster a more sustained conversation in dialogue simulation. This work marks a first step towards learning a neural conversational model based on the long-term success of dialogues.",
"Sequence-to-sequence models have shown promising improvements on the temporal task of video captioning, but they optimize word-level cross-entropy loss during training. First, using policy gradient and mixed-loss methods for reinforcement learning, we directly optimize sentence-level task-based metrics (as rewards), achieving significant improvements over the baseline, based on both automatic metrics and human evaluation on multiple datasets. Next, we propose a novel entailment-enhanced reward (CIDEnt) that corrects phrase-matching based metrics (such as CIDEr) to only allow for logically-implied partial matches and avoid contradictions, achieving further significant improvements over the CIDEr-reward model. Overall, our CIDEnt-reward model achieves the new state-of-the-art on the MSR-VTT dataset.",
"Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries however these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit \"exposure bias\" - they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL the resulting summaries become more readable. We evaluate this model on the CNN Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries.",
"Recently it has been shown that policy-gradient methods for reinforcement learning can be utilized to train deep end-to-end systems directly on non-differentiable metrics for the task at hand. In this paper we consider the problem of optimizing image captioning systems using reinforcement learning, and show that by carefully optimizing our systems using the test metrics of the MSCOCO task, significant gains in performance can be realized. Our systems are built using a new optimization approach that we call self-critical sequence training (SCST). SCST is a form of the popular REINFORCE algorithm that, rather than estimating a \"baseline\" to normalize the rewards and reduce variance, utilizes the output of its own test-time inference algorithm to normalize the rewards it experiences. Using this approach, estimating the reward signal (as actor-critic methods must do) and estimating normalization (as REINFORCE algorithms typically do) is avoided, while at the same time harmonizing the model with respect to its test-time inference procedure. Empirically we find that directly optimizing the CIDEr metric with SCST and greedy decoding at test-time is highly effective. Our results on the MSCOCO evaluation sever establish a new state-of-the-art on the task, improving the best result in terms of CIDEr from 104.9 to 114.7.",
"Many natural language processing applications use language models to generate text. These models are typically trained to predict the next word in a sequence, given the previous words and some context such as an image. However, at test time the model is expected to generate the entire sequence from scratch. This discrepancy makes generation brittle, as errors may accumulate along the way. We address this issue by proposing a novel sequence level training algorithm that directly optimizes the metric used at test time, such as BLEU or ROUGE. On three different tasks, our approach outperforms several strong baselines for greedy generation. The method is also competitive when these baselines employ beam search, while being several times faster."
]
}
|
1901.03438
|
2909970382
|
Recent pretrained sentence encoders achieve state of the art results on language understanding tasks, but does this mean they have implicit knowledge of syntactic structures? We introduce a grammatically annotated development set for the Corpus of Linguistic Acceptability (CoLA; , 2018), which we use to investigate the grammatical knowledge of three pretrained encoders, including the popular OpenAI Transformer (, 2018) and BERT (, 2018). We fine-tune these encoders to do acceptability classification over CoLA and compare the models' performance on the annotated analysis set. Some phenomena, e.g. modification by adjuncts, are easy to learn for all models, while others, e.g. long-distance movement, are learned effectively only by models with strong overall performance, and others still, e.g. morphological agreement, are hardly learned by any model.
|
The Corpus of Linguistic Acceptability @cite_14 is a dataset of 10k example sentences including expert annotations for grammatical acceptability. The sentences are taken from 23 theoretical linguistics publications, mostly about syntax, including undergraduate textbooks, research articles, and dissertations. Such example sentences are usually labeled for acceptability by their authors or a small group of native English speakers. A small random sample of the CoLA development set (with our added annotations) can be seen in Table .
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"2806120502"
],
"abstract": [
"In this work, we explore the ability of artificial neural networks to judge the grammatical acceptability of a sentence. Machine learning research of this kind is well placed to answer important open questions about the role of prior linguistic bias in language acquisition by providing a test for the Poverty of the Stimulus Argument. In service of this goal, we introduce the Corpus of Linguistic Acceptability (CoLA), a set of 10,657 English sentences labeled as grammatical or ungrammatical by expert linguists. We train several recurrent neural networks to do binary acceptability classification. These models set a baseline for the task. Error-analysis testing the models on specific grammatical phenomena reveals that they learn some systematic grammatical generalizations like subject-verb-object word order without any grammatical supervision. We find that neural sequence models show promise on the acceptability classification task. However, human-like performance across a wide range of grammatical constructions remains far off."
]
}
|
1901.03440
|
2908591872
|
The representation of the posterior is a critical aspect of effective variational autoencoders (VAEs). Poor choices for the posterior have a detrimental impact on the generative performance of VAEs due to the mismatch with the true posterior. We extend the class of posterior models that may be learned by using undirected graphical models. We develop an efficient method to train undirected posteriors by showing that the gradient of the training objective with respect to the parameters of the undirected posterior can be computed by backpropagation through Markov chain Monte Carlo updates. We apply these gradient estimators for training discrete VAEs with Boltzmann machine posteriors and demonstrate that undirected models outperform previous results obtained using directed graphical models as posteriors.
|
REINFORCE @cite_60 is the most generic approach for computing the gradient of the approximate posteriors. However, in practice, this estimator suffers from high variance and must be augmented by variance reduction techniques. For a large class of continuous latent variable models, the reparameterization trick @cite_6 @cite_2 provides lower-variance gradient estimates. Reparameterization does not apply to discrete latent variables, and recent methods for discrete variables have focused on REINFORCE with control variates @cite_3 @cite_46 @cite_40 @cite_14 @cite_28 or continuous relaxations @cite_34 @cite_54 @cite_21 @cite_25 @cite_7 . See @cite_30 for a review of the recent techniques.
|
{
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_7",
"@cite_60",
"@cite_28",
"@cite_54",
"@cite_21",
"@cite_3",
"@cite_6",
"@cite_40",
"@cite_2",
"@cite_46",
"@cite_34",
"@cite_25"
],
"mid": [
"2892457101",
"",
"",
"2119717200",
"",
"",
"",
"2952264928",
"2953046278",
"",
"",
"",
"2952165242",
""
],
"abstract": [
"In many applications we seek to maximize an expectation with respect to a distribution over discrete variables. Estimating gradients of such objectives with respect to the distribution parameters is a challenging problem. We analyze existing solutions including finite-difference (FD) estimators and continuous relaxation (CR) estimators in terms of bias and variance. We show that the commonly used Gumbel-Softmax estimator is biased and propose a simple method to reduce it. We also derive a simpler piece-wise linear continuous relaxation that also possesses reduced bias. We demonstrate empirically that reduced bias leads to a better performance in variational inference and on binary optimization tasks.",
"",
"",
"This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.",
"",
"",
"",
"Highly expressive directed latent variable models, such as sigmoid belief networks, are difficult to train on large datasets because exact inference in them is intractable and none of the approximate inference methods that have been applied to them scale well. We propose a fast non-iterative approximate inference method that uses a feedforward network to implement efficient exact sampling from the variational posterior. The model and this inference network are trained jointly by maximizing a variational lower bound on the log-likelihood. Although the naive estimator of the inference model gradient is too high-variance to be useful, we make it practical by applying several straightforward model-independent variance reduction techniques. Applying our approach to training sigmoid belief networks and deep autoregressive networks, we show that it outperforms the wake-sleep algorithm on MNIST and achieves state-of-the-art results on the Reuters RCV1 document dataset.",
"Representation learning seeks to expose certain aspects of observed data in a learned representation that's amenable to downstream tasks like classification. For instance, a good representation for 2D images might be one that describes only global structure and discards information about detailed texture. In this paper, we present a simple but principled method to learn such global representations by combining Variational Autoencoder (VAE) with neural autoregressive models such as RNN, MADE and PixelRNN CNN. Our proposed VAE model allows us to have control over what the global latent code can learn and , by designing the architecture accordingly, we can force the global latent code to discard irrelevant information such as texture in 2D images, and hence the VAE only \"autoencodes\" data in a lossy fashion. In addition, by leveraging autoregressive models as both prior distribution @math and decoding distribution @math , we can greatly improve generative modeling performance of VAEs, achieving new state-of-the-art results on MNIST, OMNIGLOT and Caltech-101 Silhouettes density estimation tasks.",
"",
"",
"",
"The reparameterization trick enables optimizing large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack useful reparameterizations due to the discontinuous nature of discrete states. In this work we introduce Concrete random variables---continuous relaxations of discrete random variables. The Concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, Concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-probability of latent stochastic nodes) on the corresponding discrete graph. We demonstrate the effectiveness of Concrete relaxations on density estimation and structured prediction tasks using neural networks.",
""
]
}
|
1901.03281
|
2911115627
|
Hyperspectral imaging can help better understand the characteristics of different materials, compared with traditional image systems. However, only high-resolution multispectral (HrMS) and low-resolution hyperspectral (LrHS) images can generally be captured at video rate in practice. In this paper, we propose a model-based deep learning approach for merging an HrMS and LrHS images to generate a high-resolution hyperspectral (HrHS) image. In specific, we construct a novel MS HS fusion model which takes the observation models of low-resolution images and the low-rankness knowledge along the spectral mode of HrHS image into consideration. Then we design an iterative algorithm to solve the model by exploiting the proximal gradient method. And then, by unfolding the designed algorithm, we construct a deep network, called MS HS Fusion Net, with learning the proximal operators and model parameters by convolutional neural networks. Experimental results on simulated and real data substantiate the superiority of our method both visually and quantitatively as compared with state-of-the-art methods along this line of research.
|
The pansharpening technique in remote sensing is closely related to the investigated MS HS problem. This task aims to obtain a high-spatial-resolution MS image by fusing an MS image with a wide-band panchromatic image. A heuristic approach to perform MS HS fusion is to treat it as a number of pansharpening sub-problems, where each band of the HrMS image plays the role of a panchromatic image. There are mainly two categories of pansharpening methods: component substitution (CS) @cite_12 @cite_4 @cite_5 and multiresolution analysis (MRA) @cite_25 @cite_46 @cite_48 @cite_36 @cite_26 . These methods always suffer from high spectral distortion, since a single panchromatic image contains little spectral information as compared with the expected HS image.
|
{
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_36",
"@cite_48",
"@cite_5",
"@cite_46",
"@cite_25",
"@cite_12"
],
"mid": [
"2117853853",
"1654063000",
"2111896212",
"2103504761",
"",
"",
"2165509083",
"1553305639"
],
"abstract": [
"The limitations of commonly used separable extensions of one-dimensional transforms, such as the Fourier and wavelet transforms, in capturing the geometry of image edges are well known. In this paper, we pursue a \"true\" two-dimensional transform that can capture the intrinsic geometrical structure that is key in visual information. The main challenge in exploring geometry in images comes from the discrete nature of the data. Thus, unlike other approaches, such as curvelets, that first develop a transform in the continuous domain and then discretize for sampled data, our approach starts with a discrete-domain construction and then studies its convergence to an expansion in the continuous domain. Specifically, we construct a discrete-domain multiresolution and multidirection expansion using nonseparable filter banks, in much the same way that wavelets were derived from filter banks. This construction results in a flexible multiresolution, local, and directional image expansion using contour segments, and, thus, it is named the contourlet transform. The discrete contourlet transform has a fast iterated filter bank algorithm that requires an order N operations for N-pixel images. Furthermore, we establish a precise link between the developed filter bank and the associated continuous-domain contourlet expansion via a directional multiresolution analysis framework. We show that with parabolic scaling and sufficient directional vanishing moments, contourlets achieve the optimal approximation rate for piecewise smooth functions with discontinuities along twice continuously differentiable curves. Finally, we show some numerical experiments demonstrating the potential of contourlets in several image processing applications.",
"The spatial resolution of a multispectral digital image is enhanced in a process of the type wherein a higher spatial resolution panchromatic image is merged with a plurality of lower spatial resolution spectral band images. A lower spatial resolution panchromatic image is simulated and a Gram-Schmidt transformation is performed on the simulated lower spatial resolution panchromatic image and the plurality of lower spatial resolution spectral band images. The simulated lower spatial resolution panchromatic image is employed as the first band in the Gram-Schmidt transformation. The statistics of the higher spatial resolution panchromatic image are adjusted to match the statistics of the first transform band resulting from the Gram-Schmidt transformation and the higher spatial resolution panchromatic image (with adjusted statistics) is substituted for the first transform band resulting from the Gram-Schmidt transformation to produce a new set of transform bands. Finally, the inverse Gram-Schmidt transformation is performed on the new set of transform bands to produce the enhanced spatial resolution multispectral digital image.",
"This paper describes the undecimated wavelet transform and its reconstruction. In the first part, we show the relation between two well known undecimated wavelet transforms, the standard undecimated wavelet transform and the isotropic undecimated wavelet transform. Then we present new filter banks specially designed for undecimated wavelet decompositions which have some useful properties such as being robust to ringing artifacts which appear generally in wavelet-based denoising methods. A range of examples illustrates the results",
"We describe a technique for image encoding in which local operators of many scales but identical shape serve as the basis functions. The representation differs from established techniques in that the code elements are localized in spatial frequency as well as in space. Pixel-to-pixel correlations are first removed by subtracting a lowpass filtered copy of the image from the image itself. The result is a net data compression since the difference, or error, image has low variance and entropy, and the low-pass filtered image may represented at reduced sample density. Further data compression is achieved by quantizing the difference image. These steps are then repeated to compress the low-pass image. Iteration of the process at appropriately expanded scales generates a pyramid data structure. The encoding process is equivalent to sampling the image with Laplacian operators of many scales. Thus, the code tends to enhance salient image features. A further advantage of the present code is that it is well suited for many image analysis tasks as well as for image compression. Fast algorithms are described for coding and decoding.",
"",
"",
"Pansharpening aims at fusing a panchromatic image with a multispectral one, to generate an image with the high spatial resolution of the former and the high spectral resolution of the latter. In the last decade, many algorithms have been presented in the literature for pansharpening using multispectral data. With the increasing availability of hyperspectral systems, these methods are now being adapted to hyperspectral images. In this work, we compare new pansharpening techniques designed for hyperspectral data with some of the state of the art methods for multispectral pansharpening, which have been adapted for hyperspectral data. Eleven methods from different classes (component substitution, multiresolution analysis, hybrid, Bayesian and matrix factorization) are analyzed. These methods are applied to three datasets and their effectiveness and robustness are evaluated with widely used performance indicators. In addition, all the pansharpening techniques considered in this paper have been implemented in a MATLAB toolbox that is made available to the community.",
"The merging of multisensor image data is becoming a widely used procedure because of the complementary nature of various data sets. Ideally, the method used to merge data sets with high-spatial and high-spectral resolution should not distort the spectral characteristics of the high-spectral resolution data. This paper compares the results of three different methods used to merge the information contents of the Landsat Thermatic Mapper (TM) and Satellite Pour l'Observation de la Terre (SPOT) panchromatic data. The comparison is based on spectral characteristics and is made using statistical, visual, and graphical analyses of the results"
]
}
|
1901.03281
|
2911115627
|
Hyperspectral imaging can help better understand the characteristics of different materials, compared with traditional image systems. However, only high-resolution multispectral (HrMS) and low-resolution hyperspectral (LrHS) images can generally be captured at video rate in practice. In this paper, we propose a model-based deep learning approach for merging an HrMS and LrHS images to generate a high-resolution hyperspectral (HrHS) image. In specific, we construct a novel MS HS fusion model which takes the observation models of low-resolution images and the low-rankness knowledge along the spectral mode of HrHS image into consideration. Then we design an iterative algorithm to solve the model by exploiting the proximal gradient method. And then, by unfolding the designed algorithm, we construct a deep network, called MS HS Fusion Net, with learning the proximal operators and model parameters by convolutional neural networks. Experimental results on simulated and real data substantiate the superiority of our method both visually and quantitatively as compared with state-of-the-art methods along this line of research.
|
In the last few years, machine learning based methods have gained much attention on the MS HS fusion problem @cite_6 @cite_34 @cite_33 @cite_17 @cite_47 @cite_27 @cite_1 @cite_38 . Some of these methods use the sparse coding technique to learn a dictionary on the patches across an HrMS image, which delivers spatial knowledge of the HrHS image to a certain extent, and then learn a coefficient matrix from the LrHS image to fully represent the HrHS image @cite_6 @cite_34 @cite_33 @cite_38 . Some other methods, such as @cite_17 , use sparse matrix factorization to learn a spectral dictionary for LrHS images and then reconstruct HrHS images by exploiting both the spectral dictionary and HrMS images. The low-rankness of HS images can also be exploited with non-negative matrix factorization, which helps to reduce spectral distortions and enhances the MS HS fusion performance @cite_47 @cite_27 @cite_1 . The main drawback of these methods is that they are mainly designed based on human observations and strong prior assumptions, which may not be very accurate and would not always hold for diverse real-world images.
|
{
"cite_N": [
"@cite_38",
"@cite_33",
"@cite_1",
"@cite_6",
"@cite_27",
"@cite_47",
"@cite_34",
"@cite_17"
],
"mid": [
"2588805337",
"",
"2327364376",
"2126025632",
"2120273653",
"2011643180",
"",
"2053081714"
],
"abstract": [
"This paper proposes a blind model-based fusion method to combine a low-spatial resolution multi-band image and a high-spatial resolution panchromatic image. This method is blind in the sense that the spatial and spectral responses in the degradation model are unknown and estimated from the observed data pair. The Gaussian and total variation priors have been used to regularize the ill-posed fusion problem. The formulated optimization problem associated with the image fusion can be attacked efficiently using a recently developed robust multi-band image fusion algorithm in [1]. Experimental results including qualitative and quantitative ones show that the fused image can combine the spectral information from the multi-band image and the high spatial resolution information from the panchromatic image effectively with very competitive computational time.",
"",
"Unlike multispectral (MSI) and panchromatic (PAN) images, generally the spatial resolution of hyperspectral images (HSI) is limited, due to sensor limitations. In many applications, HSI with a high spectral as well as spatial resolution are required. In this paper, a new method for spatial resolution enhancement of a HSI using spectral unmixing and sparse coding (SUSC) is introduced. The proposed method fuses high spectral resolution features from the HSI with high spatial resolution features from an MSI of the same scene. Endmembers are extracted from the HSI by spectral unmixing, and the exact location of the endmembers is obtained from the MSI. This fusion process by using spectral unmixing is formulated as an ill-posed inverse problem which requires a regularization term in order to convert it into a well-posed inverse problem. As a regularizer, we employ sparse coding (SC), for which a dictionary is constructed using high spatial resolution MSI or PAN images from unrelated scenes. The proposed algorithm is applied to real Hyperion and ROSIS datasets. Compared with other state-of-the-art algorithms based on pansharpening, spectral unmixing, and SC methods, the proposed method is shown to significantly increase the spatial resolution while perserving the spectral content of the HSI.",
"For the instrument limitation and imperfect imaging optics, it is difficult to acquire high spatial resolution hyperspectral imagery. Low spatial resolution will result in a lot of mixed pixels and greatly degrade the detection and recognition performance, affect the related application in civil and military fields. As a powerful statistical image modeling technique, sparse representation can be utilized to analyze the hyperspectral image efficiently. Hyperspectral imagery is intrinsically sparse in spatial and spectral domains, and image super-resolution quality largely depends on whether the prior knowledge is utilized properly. In this article, we propose a novel hyperspectral imagery super-resolution method by utilizing the sparse representation and spectral mixing model. Based on the sparse representation model and hyperspectral image acquisition process model, small patches of hyperspectral observations from different wavelengths can be represented as weighted linear combinations of a small number of atoms in pre-trained dictionary. Then super-resolution is treated as a least squares problem with sparse constraints. To maintain the spectral consistency, we further introduce an adaptive regularization terms into the sparse representation framework by combining the linear spectrum mixing model. Extensive experiments validate that the proposed method achieves much better results.",
"Coupled non-negative matrix factorization (CNMF) is introduced for hyperspectral and multispectral data fusion. The CNMF fused data have little spectral distortion while enhancing spatial resolution of all hyperspectral band images owing to its unmixing based algorithm. CNMF is applied to the synthetic dataset generated from real airborne hyperspectral data taken over pasture area. The spectral quality of fused data is evaluated by the classification accuracy of pasture types. The experiment result shows that CNMF enables accurate identification and classification of observed materials at fine spatial resolution.",
"Hyperspectral (HS) remote sensing image with finer spectral information has great advantages in feature identification and classification. However, the spatial resolution of HS image is usually low due to practical limitations. In this paper, the low-spatial-resolution HS image is fused with the high-spatial-resolution multispectral (MS) image of the same observation scene to improve its spatial resolution. A novel spectral unmixing based HS and MS image fusion approach (VSC-CNMF) is proposed, in which CNMF with minimum endmember simplex volume and abundance sparsity constraints is employed for coupled unmixing of HS and MS images. Simulative experiments are employed for verification and comparison. The experimental results illustrate that the newly proposed VSC-CNMF based HS and MS fusion algorithm outperforms several state-of-the-art unmixing based fusion approaches in cases with moderate number of endmembers.",
"",
"In this paper, we present a novel spatial and spectral fusion model (SASFM) that uses sparse matrix factorization to fuse remote sensing imagery with different spatial and spectral properties. By combining the spectral information from sensors with low spatial resolution (LSaR) but high spectral resolution (HSeR) (hereafter called HSeR sensors), with the spatial information from sensors with high spatial resolution (HSaR) but low spectral resolution (LSeR) (hereafter called HSaR sensors), the SASFM can generate synthetic remote sensing data with both HSaR and HSeR. Given two reasonable assumptions, the proposed model can integrate the LSaR and HSaR data via two stages. In the first stage, the model learns from the LSaR data a spectral dictionary containing pure signatures, and in the second stage, the desired HSaR and HSeR data are predicted using the learned spectral dictionary and the known HSaR data. The SASFM is tested with both simulated data and actual Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and Terra Moderate Resolution Imaging Spectroradiometer (MODIS) acquisitions, and it is also compared to other representative algorithms. The experimental results demonstrate that the SASFM outperforms other algorithms in generating fused imagery with both the well-preserved spectral properties of MODIS and the spatial properties of ETM+. Generated imagery with simultaneous HSaR and HSeR opens new avenues for applications of MODIS and ETM+."
]
}
|
1901.03281
|
2911115627
|
Hyperspectral imaging can help better understand the characteristics of different materials, compared with traditional image systems. However, only high-resolution multispectral (HrMS) and low-resolution hyperspectral (LrHS) images can generally be captured at video rate in practice. In this paper, we propose a model-based deep learning approach for merging an HrMS and LrHS images to generate a high-resolution hyperspectral (HrHS) image. In specific, we construct a novel MS HS fusion model which takes the observation models of low-resolution images and the low-rankness knowledge along the spectral mode of HrHS image into consideration. Then we design an iterative algorithm to solve the model by exploiting the proximal gradient method. And then, by unfolding the designed algorithm, we construct a deep network, called MS HS Fusion Net, with learning the proximal operators and model parameters by convolutional neural networks. Experimental results on simulated and real data substantiate the superiority of our method both visually and quantitatively as compared with state-of-the-art methods along this line of research.
|
Recently, a number of DL-based pansharpening methods have been proposed by exploiting different network structures @cite_10 @cite_16 @cite_45 @cite_11 @cite_49 @cite_8 @cite_29 . These methods can be easily adapted to the MS/HS fusion problem. For example, very recently, @cite_55 proposed a 3D-CNN based MS/HS fusion method that uses PCA to reduce the computational cost. Such methods are usually trained on prepared training data: the network inputs are the HrMS images together with the LrHS images (the latter usually interpolated to the same spatial size as the HrMS images in advance), and the outputs are the corresponding HrHS images. Current DL-based methods have been verified to attain good performance. They, however, just employ networks assembled from off-the-shelf components in current deep learning toolkits, which are not specifically designed for the investigated problem. The main drawback of this technique is thus its lack of interpretability for the particular MS/HS fusion task. Specifically, both the intrinsic observation models and the evident prior structures, such as the spectral correlation property, possessed by HS images are neglected by such "black-box" deep models.
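As a contrast to such "black-box" networks, the model-based pipeline the surrounding text advocates (observation models plus a low-rank spectral prior, solved by proximal gradient and then unfolded into network stages) can be sketched as follows. Singular-value thresholding stands in for the learned CNN proximal operator, and all names and shapes are assumptions for illustration.

```python
import numpy as np

def svt(X, tau):
    # Singular-value soft-thresholding: a simple low-rank proximal
    # operator, standing in for the learned CNN proximal operator.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def unfolded_fusion(Y_m, Y_h, R, D, n_stage=20, tau=0.01):
    """Proximal-gradient sketch of a model-based MS/HS fusion objective:
        min_X  ||R X - Y_m||_F^2 + ||X D - Y_h||_F^2  + low-rank prior on X
    Y_m: (b, N) HrMS, Y_h: (B, n) LrHS, R: (b, B) spectral response,
    D: (N, n) spatial downsampling. Names/shapes are illustrative assumptions.
    """
    B, N = R.shape[1], Y_m.shape[1]
    # step size from the Lipschitz constant of the quadratic data term
    eta = 1.0 / (np.linalg.norm(R, 2) ** 2 + np.linalg.norm(D, 2) ** 2 + 1e-9)
    X = np.zeros((B, N))
    for _ in range(n_stage):               # each stage mirrors one network layer
        grad = R.T @ (R @ X - Y_m) + (X @ D - Y_h) @ D.T
        X = svt(X - eta * grad, tau)       # proximal step enforces low-rankness
    return X
```

In the unfolded network, each loop iteration becomes one layer, and `eta`, `tau`, and the proximal operator itself are learned from data instead of being hand-set.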
|
{
"cite_N": [
"@cite_8",
"@cite_29",
"@cite_55",
"@cite_45",
"@cite_49",
"@cite_16",
"@cite_10",
"@cite_11"
],
"mid": [
"",
"2792365373",
"",
"2700882095",
"2717881181",
"2462592242",
"2154789478",
"2619662254"
],
"abstract": [
"",
"Remote sensing images with different spatial and spectral resolution, such as panchromatic (PAN) images and multispectral (MS) images, can be captured by many earth-observing satellites. Normally, PAN images possess high spatial resolution but low spectral resolution, while MS images have high spectral resolution with low spatial resolution. In order to integrate spatial and spectral information contained in the PAN and MS images, image fusion techniques are commonly adopted to generate remote sensing images at both high spatial and spectral resolution. In this study, based on the deep convolutional neural network, a remote sensing image fusion method that can adequately extract spectral and spatial features from source images is proposed. The major innovation of this study is that the proposed fusion method contains a two branches network with the deeper structure which can capture salient features of the MS and PAN images separately. Besides, the residual learning is adopted in our network to thoroughly study the relationship between the high- and low-resolution MS images. The proposed method mainly consists of two procedures. First, spatial and spectral features are respectively extracted from the MS and PAN images by convolutional layers with different depth. Second, the feature fusion procedure utilizes the extracted features from the former step to yield fused images. By evaluating the performance on the QuickBird and Gaofen-1 images, our proposed method provides better results compared with other classical methods.",
"",
"We proposed a deep convolutional network for multi-spectral image pan-sharpening to overcome the drawbacks of traditional methods and improve the fusion accuracy. To break the performance limitation of deep networks, residual learning with specific adaption to image fusion tasks is applied to optimize the architecture of proposed network. Results of adequate experiments support that our model can yield high resolution multi-spectral images with state-of-the-art qualities, as the information in both spatial and spectral domains has been accurately preserved.",
"Pan-sharpening has become an important tool in remote sensing, which normally aims at fusing a multi-spectral image with high spectral resolution and a panchromatic image with high spatial resolution. However, some problems, such as spectral distortion, are facing pan-sharpening methods. Inspired by the applications of convolutional neural network (CNN) in many areas, we adopt an effective CNN model to fulfill pan-sharpening. In our method, only the sparse residuals between the interpolated MS and the pan-sharpened image are learned, which achieves fast convergence and high pan-sharpening quality. The experimental results on real-world data validate the effectiveness of the method.",
"A new pansharpening method is proposed, based on convolutional neural networks. We adapt a simple and effective three-layer architecture recently proposed for super-resolution to the pansharpening problem. Moreover, to improve performance without increasing complexity, we augment the input by including several maps of nonlinear radiometric indices typical of remote sensing. Experiments on three representative datasets show the proposed method to provide very promising results, largely competitive with the current state of the art in terms of both full-reference and no-reference metrics, and also at a visual inspection.",
"A deep neural network (DNN)-based new pan-sharpening method for the remote sensing image fusion problem is proposed in this letter. Research on representation learning suggests that the DNN can effectively model complex relationships between variables via the composition of several levels of nonlinearity. Inspired by this observation, a modified sparse denoising autoencoder (MSDA) algorithm is proposed to train the relationship between high-resolution (HR) and low-resolution (LR) image patches, which can be represented by the DNN. The HR LR image patches only sample from the HR LR panchromatic (PAN) images at hand, respectively, without requiring other training images. By connecting a series of MSDAs, we obtain a stacked MSDA (S-MSDA), which can effectively pretrain the DNN. Moreover, in order to better train the DNN, the entire DNN is again trained by a back-propagation algorithm after pretraining. Finally, assuming that the relationship between HR LR multispectral (MS) image patches is the same as that between HR LR PAN image patches, the HR MS image will be reconstructed from the observed LR MS image using the trained DNN. Comparative experimental results with several quality assessment indexes show that the proposed method outperforms other pan-sharpening methods in terms of visual perception and numerical measures.",
"In the field of multispectral (MS) and panchromatic image fusion (pansharpening), the impressive effectiveness of deep neural networks has recently been employed to overcome the drawbacks of the traditional linear models and boost the fusion accuracy. However, the existing methods are mainly based on simple and flat networks with relatively shallow architectures, which severely limits their performance. In this letter, the concept of residual learning is introduced to form a very deep convolutional neural network to make the full use of the high nonlinearity of the deep learning models. Through both quantitative and visual assessments on a large number of high-quality MS images from various sources, it is confirmed that the proposed model is superior to all the mainstream algorithms included in the comparison, and achieves the highest spatial–spectral unified accuracy."
]
}
|
1901.03302
|
2909963584
|
Software engineers make use of design patterns for reasons that range from performance to code comprehensibility. Several design patterns capturing the body of knowledge of best practices have been proposed in the past, namely creational, structural, and behavioral patterns. However, with the advent of mobile devices, a catalog of design patterns for energy efficiency becomes a necessity. In this work, we inspect commits, issues, and pull requests of 1027 Android and 756 iOS apps to identify common practices when improving energy efficiency. This analysis yielded a catalog, available online, with 22 design patterns related to improving the energy efficiency of mobile apps. We argue that this catalog might be of relevance to other domains such as Cyber-Physical Systems and the Internet of Things. As a side contribution, an analysis of the differences between Android and iOS devices shows that the Android community is more energy-aware.
|
In previous work, @cite_1 mined 290 energy-saving software commits, identifying 12 categories of source code modification to improve energy usage: , , , , , , , , , , , and . The programming languages used to implement the software systems in this study were diverse: C (158 projects), Java (25 projects), Bourne Shell (17 projects), Arduino Sketch (15 projects), and C++ (12 projects). They found that roughly 50% of the changes occur in low-level system software (e.g., kernels and drivers), which is not a level of abstraction commonly considered during the design of mobile apps. Our work extends this approach to the ecosystem of mobile apps by compiling a set of coding practices that can be used by practitioners across mobile apps on different platforms. Thus, our dataset of apps also includes projects written in , , , , and any other language used for mobile app development in iOS or Android. In addition, we detail these and other energy-saving categories with a context and guidelines to help developers decide on the most appropriate pattern. Moreover, we compare the prevalence of these patterns across different mobile platforms.
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"2273675851"
],
"abstract": [
"The IoT paradigm holds the promise to revolutionize the way we live and work by means of a wealth of new services, based on seamless interactions between a large amount of heterogeneous devices. After decades of conceptual inception of the IoT, in recent years a large variety of communication technologies has gradually emerged, reflecting a large diversity of application domains and of communication requirements. Such heterogeneity and fragmentation of the connectivity landscape is currently hampering the full realization of the IoT vision, by posing several complex integration challenges. In this context, the advent of 5G cellular systems, with the availability of a connectivity technology, which is at once truly ubiquitous, reliable, scalable, and cost-efficient, is considered as a potentially key driver for the yet-to emerge global IoT. In the present paper, we analyze in detail the potential of 5G technologies for the IoT, by considering both the technological and standardization aspects. We review the present-day IoT connectivity landscape, as well as the main 5G enablers for the IoT. Last but not least, we illustrate the massive business shifts that a tight link between IoT and 5G may cause in the operator and vendors ecosystem."
]
}
|
1901.03302
|
2909963584
|
Software engineers make use of design patterns for reasons that range from performance to code comprehensibility. Several design patterns capturing the body of knowledge of best practices have been proposed in the past, namely creational, structural, and behavioral patterns. However, with the advent of mobile devices, a catalog of design patterns for energy efficiency becomes a necessity. In this work, we inspect commits, issues, and pull requests of 1027 Android and 756 iOS apps to identify common practices when improving energy efficiency. This analysis yielded a catalog, available online, with 22 design patterns related to improving the energy efficiency of mobile apps. We argue that this catalog might be of relevance to other domains such as Cyber-Physical Systems and the Internet of Things. As a side contribution, an analysis of the differences between Android and iOS devices shows that the Android community is more energy-aware.
|
With a similar approach, @cite_0 mined 468 power management commits to find coding practices in Android apps. Using a hybrid card sort approach, six different power management practices were identified: , , , , . The study shows that power management activities are more prevalent in navigation apps. Conversely, our work focuses on energy-saving commits, pull requests, and issues. Using the same taxonomy, our work concentrates exclusively on coding practices for , and . Moreover, rather than analyzing the prevalence of power management activities among different app categories, we emphasize providing actionable findings for mobile app practitioners. Finally, we extend this work to the iOS mobile platform, which holds a large share of the mobile app market.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2402612280"
],
"abstract": [
"As the Android platform becomes more and more popular, a large number of Android applications have been developed. When developers design and implement Android applications, power consumption management is an important factor to consider since it affects the usability of the applications. Thus, it is important to help developers adopt proper strategies to manage power consumption. Interestingly, today, there is a large number of Android application repositories made publicly available on sites such as GitHub. These repositories can be mined to help crystalize common power management activities that developers do. These in turn can be used to help other developers perform similar tasks to improve their own Android applications. In this paper, we present an empirical study of power management commits in Android applications. Our study extends that of who perform an empirical study on energy aware commits; however, they do not focus on Android applications and only a few of the commits that they study come from Android applications. Android applications are often different from other applications (e.g., those running on a server) due to the issue of limited battery life and the use of specialized APIs. As subjects of our empirical study, we obtain a list of open source Android applications from F-Droid and crawl their commits from GitHub. We get 468 power management commits after we filter the commits using a set of keywords and by performing manual analysis. These 468 power management commits are from 154 different Android applications and belong to 15 different application categories. Furthermore, we use open card sort to categorize these power management commits and we obtain 6 groups which correspond to different power management activities. Our study also reveals that for different kinds of Android application (e.g., Games, Connectivity, Navigation, etc.), the dominant power management activities differ. For example, the percentage of power management commits belonging to the Power Adaptation activity is larger for Navigation applications than those belonging to other categories."
]
}
|
1901.03298
|
2909948158
|
This paper addresses the problem of flood classification and flood aftermath detection utilizing both social media and satellite imagery. Automatic detection of disasters such as floods is still a very challenging task. The focus lies on identifying passable routes or roads during floods. Two novel solutions are presented, which were developed for two corresponding tasks at the MediaEval 2018 benchmarking challenge. The tasks are (i) identification of images providing evidence for road passability and (ii) differentiation and detection of passable and non-passable roads in images from two complementary sources of information. For the first challenge, we mainly rely on object and scene-level features extracted through multiple deep models pre-trained on the ImageNet and Places datasets. The object and scene-level features are then combined using early, late and double fusion techniques. To identify whether or not it is possible for a vehicle to pass a road in satellite images, we rely on Convolutional Neural Networks and a transfer learning-based classification approach. The evaluation of the proposed methods is carried out on the large-scale datasets provided for the benchmark competition. The results demonstrate significant improvement in performance over recent state-of-the-art approaches.
|
Geo-located and time-stamped data available in the form of text and visual content on social media have also been widely utilized for disaster event analysis, gathering useful information for rescue and rehabilitation @cite_7 . To this aim, most approaches rely on two complementary types of information: the visual content itself and the additional information associated with images in the form of metadata, such as user tags, geo-location and temporal information. For instance, in @cite_30 , users' tags and other useful information from metadata are jointly utilized with visual features in an early fusion scheme. In @cite_22 , visual features extracted through deep models, pre-trained on ImageNet @cite_13 , are complemented by textual information, such as users' tags, geo-location and temporal information along with textual descriptions. Both textual and visual features are evaluated individually and jointly by concatenating the feature vectors.
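The early fusion scheme mentioned above boils down to concatenating per-modality feature vectors into a single representation before classification. A minimal sketch follows; the per-modality L2 normalization is an assumption for illustration, not necessarily what the cited works do.

```python
import numpy as np

def early_fuse(visual, textual):
    """Early fusion: L2-normalize each modality, then concatenate.

    visual  : (n_samples, d_v) visual feature matrix
    textual : (n_samples, d_t) textual/metadata feature matrix
    Returns a (n_samples, d_v + d_t) joint representation.
    """
    def l2norm(X):
        n = np.linalg.norm(X, axis=1, keepdims=True)
        return X / np.maximum(n, 1e-12)  # avoid division by zero
    return np.concatenate([l2norm(visual), l2norm(textual)], axis=1)
```

A single classifier (e.g., an SVM) is then trained on the fused vectors, so cross-modal interactions are learned jointly rather than combined after the fact.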
|
{
"cite_N": [
"@cite_30",
"@cite_13",
"@cite_22",
"@cite_7"
],
"mid": [
"2141317095",
"2108598243",
"2773321697",
"2800524086"
],
"abstract": [
"In this paper, a hierarchical disaster image classification (HDIC) framework based on multi-source data fusion (MSDF) and multiple correspondence analysis (MCA) is proposed to aid emergency managers in disaster response situations. The HDIC framework classifies images into different disaster categories and sub-categories using a pre-defined semantic hierarchy. In order to effectively fuse different sources (visual and text) of information, a weighting scheme is presented to assign different weights to each data resource depending on the hierarchical structure. The experimental analysis demonstrates that the proposed approach can effectively classify disaster images at each logical layer. In addition, the paper also presents an iPad application developed for situation report management using the proposed HDIC framework.",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"",
"Being able to automatically link social media and satellite imagery holds large opportunities for research, with a potentially considerable impact on society. The possibility of integrating different information sources in fact opens up new scenarios where the wide coverage of satellite imaging can be used as a collector of the fine-grained details provided by the social media. Remote-sensed data and social media data can well complement each other, integrating the wide perspective provided by the satellite view with the information collected locally, be it textual, audio, or visual. Among the possible applications, natural disasters are certainly one of the most interesting scenarios, where global and local perspectives are needed at the same time. In this paper, we present a system called JORD that is able to autonomously collect social media data (including the text analysis in local languages) about technological and environmental disasters, and link it automatically to remote-sensed data. Moreover, in order to ensure the quality of retrieved information, JORD is equipped with a hierarchical filtering mechanism relying on the temporal information and the content analysis of retrieved multimedia data. To show the capabilities of the system, we present a large number of disaster events detected by the system, and we evaluate both the quality of the provided information about the events and the usefulness of JORD from potential users' viewpoint, using crowdsourcing."
]
}
|
1901.03298
|
2909948158
|
This paper addresses the problem of flood classification and flood aftermath detection utilizing both social media and satellite imagery. Automatic detection of disasters such as floods is still a very challenging task. The focus lies on identifying passable routes or roads during floods. Two novel solutions are presented, which were developed for two corresponding tasks at the MediaEval 2018 benchmarking challenge. The tasks are (i) identification of images providing evidence for road passability and (ii) differentiation and detection of passable and non-passable roads in images from two complementary sources of information. For the first challenge, we mainly rely on object and scene-level features extracted through multiple deep models pre-trained on the ImageNet and Places datasets. The object and scene-level features are then combined using early, late and double fusion techniques. To identify whether or not it is possible for a vehicle to pass a road in satellite images, we rely on Convolutional Neural Networks and a transfer learning-based classification approach. The evaluation of the proposed methods is carried out on the large-scale datasets provided for the benchmark competition. The results demonstrate significant improvement in performance over recent state-of-the-art approaches.
|
Existing pre-trained models are also used in @cite_44 , which employs five models from four state-of-the-art deep architectures, namely AlexNet @cite_0 , GoogleNet @cite_34 , VggNet @cite_17 and ResNet @cite_8 , pre-trained on the large-scale ImageNet and Places datasets @cite_12 . The basic insight of the paper is to combine object and scene-level features for the flood classification task. Individual Support Vector Machines (SVMs) are trained on the features extracted through each model, followed by a fusion phase in which three different late fusion techniques combine the scores obtained from the individual classifiers along with a Random Forest classifier trained on textual features. Object and scene-level features are also used in @cite_36 @cite_3 @cite_56 for the classification of flooded and non-flooded images in social media.
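Score-level late fusion of this kind can be sketched generically as follows. The exact fusion rules of the cited work are not reproduced here; `mean`, `max`, and `weighted` are illustrative choices applied to per-model class-score arrays.

```python
import numpy as np

def late_fuse(score_list, method="mean", weights=None):
    """Combine per-model class-score arrays into one decision.

    score_list : list of arrays, each of shape (n_samples, n_classes),
                 e.g. probability outputs of the per-model SVMs.
    Returns the fused class index per sample.
    """
    S = np.stack(score_list)                  # (n_models, n_samples, n_classes)
    if method == "mean":
        fused = S.mean(axis=0)                # average the class scores
    elif method == "max":
        fused = S.max(axis=0)                 # take the most confident model
    elif method == "weighted":
        w = np.asarray(weights, dtype=float)
        fused = np.tensordot(w / w.sum(), S, axes=1)  # weighted average
    else:
        raise ValueError(method)
    return fused.argmax(axis=1)               # fused class decision
```

Weighted fusion is useful when validation performance differs across the base models, e.g. weighting a Places-based model higher for scene-centric classes.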
|
{
"cite_N": [
"@cite_8",
"@cite_36",
"@cite_3",
"@cite_56",
"@cite_44",
"@cite_0",
"@cite_34",
"@cite_12",
"@cite_17"
],
"mid": [
"2194775991",
"2771415089",
"2771771192",
"2889550687",
"2774411652",
"2163605009",
"2183341477",
"2134670479",
"1686810756"
],
"abstract": [
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"",
"",
"The paper addresses the problem of adverse events (natural disasters) recognition in user-generated images from social media, addressing the problem from two complementary perspectives. On one side, we aim to provide a comprehensive comparative analysis of different feature extraction and classification algorithms, relying on two different families of feature extraction algorithms, namely (i) Global features and (ii) Deep features. On the other hand, we demonstrate that the fusion of different feature extraction and classification strategies can outperform the single methods by jointly exploiting the capabilities of individual feature descriptors. The evaluation of the methods are carried out on two datasets, including a benchmark and a self-collected dataset.",
"",
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.",
"Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we are exploring ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set and demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error and 17.3% top-1 error on the validation set and 3.6% top-5 error on the official test set.",
"Scene recognition is one of the hallmark tasks of computer vision, allowing definition of a context for object recognition. Whereas the tremendous recent progress in object recognition tasks is due to the availability of large datasets like ImageNet and the rise of Convolutional Neural Networks (CNNs) for learning high-level features, performance at scene recognition has not attained the same level of success. This may be because current deep features trained from ImageNet are not competitive enough for such tasks. Here, we introduce a new scene-centric database called Places with over 7 million labeled pictures of scenes. We propose new methods to compare the density and diversity of image datasets and show that Places is as dense as other scene datasets and has more diversity. Using CNN, we learn deep features for scene recognition tasks, and establish new state-of-the-art results on several scene-centric datasets. A visualization of the CNN layers' responses allows us to show differences in the internal representations of object-centric and scene-centric networks.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision."
]
}
|
1901.03298
|
2909948158
|
This paper addresses the problem of floods classification and floods aftermath detection utilizing both social media and satellite imagery. Automatic detection of disasters such as floods is still a very challenging task. The focus lies on identifying passable routes or roads during floods. Two novel solutions are presented, which were developed for two corresponding tasks at the MediaEval 2018 benchmarking challenge. The tasks are (i) identification of images providing evidence for road passability and (ii) differentiation and detection of passable and non-passable roads in images from two complementary sources of information. For the first challenge, we mainly rely on object and scene-level features extracted through multiple deep models pre-trained on the ImageNet and Places datasets. The object and scene-level features are then combined using early, late and double fusion techniques. To identify whether or not it is possible for a vehicle to pass a road in satellite images, we rely on Convolutional Neural Networks and a transfer learning-based classification approach. The evaluation of the proposed methods is carried out on the large-scale datasets provided for the benchmark competition. The results demonstrate significant improvement in performance over recent state-of-the-art approaches.
|
@cite_21 rely on hand-crafted visual features, such as the color and edge directivity descriptor (CEDD) @cite_6 , color layout @cite_48 and Gabor wavelets @cite_33 . A more sophisticated solution has been proposed for the textual information (i.e., description, title and users' tags), relying on word embeddings trained on the entire YFCC100m dataset @cite_49 . Each textual feature is extracted separately and then concatenated to form a single feature vector. Moreover, a machine translation technique has been employed to translate users' tags into English. In @cite_55 , hand-crafted visual features are concatenated into a single feature vector, followed by dimensionality reduction and classification phases. Term Frequency-Inverse Document Frequency (TF-IDF) @cite_35 scores are computed for users' tags to represent textual features. In @cite_4 , hand-crafted visual features along with textual information are used for the classification of flood-related images.
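The TF-IDF weighting applied to users' tags can be sketched as follows. The tag lists are invented toy data, and the smoothing-free formula tf × log(N/df) is one common variant, not necessarily the exact one used in @cite_55 .

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute TF-IDF weights for a list of tag lists (one list per image).

    tf  = tag count / number of tags in the image
    idf = log(N / number of images containing the tag)
    """
    n = len(docs)
    df = Counter(tag for doc in docs for tag in set(doc))   # document frequency
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return out

# Toy tag sets for three images: rare tags get higher weight than common ones.
tags = [["flood", "river", "rain"], ["river", "boat"], ["cat", "rain"]]
weights = tfidf(tags)
```

Because "flood" occurs in only one of the three images while "river" occurs in two, "flood" receives the larger weight in the first image's vector.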
|
{
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_33",
"@cite_48",
"@cite_55",
"@cite_21",
"@cite_6",
"@cite_49"
],
"mid": [
"1978394996",
"2758405190",
"2154461202",
"1972195523",
"2773853695",
"2750871820",
"2163502583",
"2250384498"
],
"abstract": [
"The experimental evidence accumulated over the past 20 years indicates that text-indexing systems based on the assignment of appropriately weighted single terms produce retrieval results that are superior to those obtainable with other more elaborate text representations. These results depend crucially on the choice of effective term weighting systems. This paper summarizes the insights gained in automatic term weighting, and provides baseline single term indexing models with which other more elaborate content analysis procedures can be compared.",
"We introduce a domain-specific and late-fusion algorithm to cope with the challenge raised in The MediaEval 2017 Multimedia Satellite Task. Several known techniques are integrated based on domain-specific criteria such as late fusion, tuning, ensemble learning, object detection using deep learning, and temporal-spatial-based event confirmation. Experimental results show that the proposed algorithm can overcome the main challenges of the proper discrimination of the water levels in different areas as well as the consideration of different types of flooding events.",
"Recognizing smiles is of much importance for detecting happy moods. Gabor features are conventionally widely applied to facial expression recognition, but the number of Gabor features is usually too large. We proposed to use Pyramid Histogram of Oriented Gradients (PHOG) as the features extracted for smile recognition in this paper. The comparisons between the PHOG and Gabor features using a publicly available dataset demonstrated that the PHOG with a significantly shorter vector length could achieve as high a recognition rate as the Gabor features did. Furthermore, the feature selection conducted by an AdaBoost algorithm was not needed when using the PHOG features. To further improve the recognition performance, we combined these two feature extraction methods and achieved the best smile recognition rate, indicating a good value of the PHOG features for smile recognitions.",
"This paper proposes a color feature description for image and video retrieving applications such as personal video recorder, which manages a large amount of data. This descriptor specifies the spatial distribution of colors with a few nonlinear quantized DCT coefficients of grid based average colors. The tradeoff between the number of the coefficients enclosed in the descriptor and retrieval cost is studied. The experimental results show that the descriptor enclosing six for luminance and three for each chrominance coefficient achieves the best trade-off between the storage cost and retrieval efficiency. It requires 6 bits for DC and 5 bits for AC coefficients, and therefore total storage cost is just 63 bits per image. This description, named color layout descriptor, has been accepted as a part of the MPEG-7 final committee draft.",
"",
"This working note describes the work of the WISC team on the Multimedia Satellite Task at MediaEval 2017. We describe the runs that our team submitted to both the DIRSM and FDSI subtasks, as well as our evaluations on the development set. Our results demonstrate high accuracy in the detection of flooded areas from user-generated content in social media. In the first subtask consisting of disaster image retrieval from social media, we found that tags defined by users to describe the images are very helpful for achieving high accuracy classification. In the second subtask consisting of detecting flood in satellite images, we found that social media can increase the precision in analyses when combined with satellite images by taking advantage of spatial and temporal overlaps between data sources.",
"This paper deals with a new low level feature that is extracted from the images and can be used for indexing and retrieval. This feature is called \"Color and Edge Directivity Descriptor\" and incorporates color and texture information in a histogram. CEDD size is limited to 54 bytes per image, rendering this descriptor suitable for use in large image databases. One of the most important attributes of the CEDD is the low computational power needed for its extraction, in comparison with the needs of most MPEG-7 descriptors. The objective measure called ANMRR is used to evaluate the performance of the proposed feature. An online demo that implements the proposed feature in an image retrieval system is available at: http://orpheus.ee.duth.gr/image_retrieval.",
"This publicly available curated dataset of almost 100 million photos and videos is free and legal for all."
]
}
|
1901.03298
|
2909948158
|
This paper addresses the problem of floods classification and floods aftermath detection utilizing both social media and satellite imagery. Automatic detection of disasters such as floods is still a very challenging task. The focus lies on identifying passable routes or roads during floods. Two novel solutions are presented, which were developed for two corresponding tasks at the MediaEval 2018 benchmarking challenge. The tasks are (i) identification of images providing evidence for road passability and (ii) differentiation and detection of passable and non-passable roads in images from two complementary sources of information. For the first challenge, we mainly rely on object and scene-level features extracted through multiple deep models pre-trained on the ImageNet and Places datasets. The object and scene-level features are then combined using early, late and double fusion techniques. To identify whether or not it is possible for a vehicle to pass a road in satellite images, we rely on Convolutional Neural Networks and a transfer learning-based classification approach. The evaluation of the proposed methods is carried out on the large-scale datasets provided for the benchmark competition. The results demonstrate significant improvement in performance over recent state-of-the-art approaches.
|
An active learning framework intended to collect, filter and analyze social media content related to natural disasters has been proposed in @cite_28 . For data collection, a publicly available system, namely AIDR @cite_43 , is used to crawl social media platforms, followed by a crowd-sourcing activity for data annotation. A pre-trained model @cite_17 is then fine-tuned on the annotated images for classification purposes.
|
{
"cite_N": [
"@cite_28",
"@cite_43",
"@cite_17"
],
"mid": [
"2792198542",
"2250734828",
"1686810756"
],
"abstract": [
"ABSTRACTThe extensive use of social media platforms, especially during disasters, creates unique opportunities for humanitarian organizations to gain situational awareness as disaster unfolds. In addition to textual content, people post overwhelming amounts of imagery content on social networks within minutes of a disaster hit. Studies point to the importance of this online imagery content for emergency response. Despite recent advances in computer vision research, making sense of the imagery content in real-time during disasters remains a challenging task. One of the important challenges is that a large proportion of images shared on social media is redundant or irrelevant, which requires robust filtering mechanisms. Another important challenge is that images acquired after major disasters do not share the same characteristics as those in large-scale image collections with clean annotations of well-defined object categories such as house, car, airplane, cat, dog, etc., used traditionally in computer visi...",
"We present AIDR (Artificial Intelligence for Disaster Response), a platform designed to perform automatic classification of crisis-related microblog communications. AIDR enables humans and machines to work together to apply human intelligence to large-scale data at high speed. The objective of AIDR is to classify messages that people post during disasters into a set of user-defined categories of information (e.g., \"needs\", \"damage\", etc.) For this purpose, the system continuously ingests data from Twitter, processes it (i.e., using machine learning classification techniques) and leverages human-participation (through crowdsourcing) in real-time. AIDR has been successfully tested to classify informative vs. non-informative tweets posted during the 2013 Pakistan Earthquake. Overall, we achieved a classification quality (measured using AUC) of 80%. AIDR is available at http://aidr.qcri.org.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision."
]
}
|
1901.03298
|
2909948158
|
This paper addresses the problem of floods classification and floods aftermath detection utilizing both social media and satellite imagery. Automatic detection of disasters such as floods is still a very challenging task. The focus lies on identifying passable routes or roads during floods. Two novel solutions are presented, which were developed for two corresponding tasks at the MediaEval 2018 benchmarking challenge. The tasks are (i) identification of images providing evidence for road passability and (ii) differentiation and detection of passable and non-passable roads in images from two complementary sources of information. For the first challenge, we mainly rely on object and scene-level features extracted through multiple deep models pre-trained on the ImageNet and Places datasets. The object and scene-level features are then combined using early, late and double fusion techniques. To identify whether or not it is possible for a vehicle to pass a road in satellite images, we rely on Convolutional Neural Networks and a transfer learning-based classification approach. The evaluation of the proposed methods is carried out on the large-scale datasets provided for the benchmark competition. The results demonstrate significant improvement in performance over recent state-of-the-art approaches.
|
Satellite imagery is one of the most valuable sources of information for disaster analysis @cite_7 @cite_9 @cite_52 , and a growing body of research aims at the detection and classification of natural disaster events in it. @cite_1 propose a deep architecture together with a wavelet-transformation-based pre-processing scheme for the identification of disaster-affected areas in satellite imagery. @cite_5 propose a CNN-based deep architecture composed of five weighted layers for landslide and flood detection in satellite imagery.
|
{
"cite_N": [
"@cite_7",
"@cite_9",
"@cite_1",
"@cite_52",
"@cite_5"
],
"mid": [
"2800524086",
"2618263559",
"2492586371",
"2165056704",
"2548107336"
],
"abstract": [
"Being able to automatically link social media and satellite imagery holds large opportunities for research, with a potentially considerable impact on society. The possibility of integrating different information sources opens in fact to new scenarios where the wide coverage of satellite imaging can be used as a collector of the fine-grained details provided by the social media. Remote-sensed data and social media data can well complement each other, integrating the wide perspective provided by the satellite view with the information collected locally, being it textual, audio, or visual. Among the possible applications, natural disasters are certainly one of the most interesting scenarios, where global and local perspectives are needed at the same time. In this paper, we present a system called JORD that is able to autonomously collect social media data (including the text analysis in local languages) about technological and environmental disasters, and link it automatically to remote-sensed data. Moreover, in order to ensure the quality of retrieved information, JORD is equipped with a hierarchical filtering mechanism relying on the temporal information and the content analysis of retrieved multimedia data. To show the capabilities of the system, we present a large number of disaster events detected by the system, and we evaluate both the quality of the provided information about the events and the usefulness of JORD from potential users viewpoint, using crowdsourcing.",
"Being able to automatically link social media information and data to remote-sensed data holds large possibilities for society and research. In this paper, we present a system called JORD that is able to autonomously collect social media data about technological and environmental disasters, and link it automatically to remote-sensed data. In addition, we demonstrate that queries in local languages that are relevant to the exact position of natural disasters retrieve more accurate information about a disaster event. To show the capabilities of the system, we present some examples of disaster events detected by the system. To evaluate the quality of the provided information and usefulness of JORD from the potential users point of view we include a crowdsourced user study.",
"Abstract Geological disaster recognition, especially, landslide recognition, is of vital importance in disaster prevention, disaster monitoring and other applications. As more and more optical remote sensing images are available in recent years, landslide recognition on optical remote sensing images is in demand. Therefore, in this paper, we propose a deep learning based landslide recognition method for optical remote sensing images. In order to capture more distinct features hidden in landslide images, a particular wavelet transformation is proposed to be used as the preprocessing method. Next, a corrupting & denoising method is proposed to enhance the robustness of the model in recognize landslide features. Then, a deep auto-encoder network with multiple hidden layers is proposed to learn the high-level features and representations of each image. A softmax classifier is used for class prediction. Experiments are conducted on the remote sensing images from Google Earth. The experimental results indicate that the proposed wav DAE method outperforms the state-of-the-art classifiers both in efficiency and accuracy.",
"ABSTRACT Klemas, V., 2015. Remote sensing of floods and flood-prone areas: An overview. River floods and coastal storm surges affect the lives of more people than most other weather-related disaste...",
"Analysis of satellite images plays an increasingly vital role in environment and climate monitoring, especially in detecting and managing natural disaster. In this paper, we proposed an automatic disaster detection system by implementing one of the advance deep learning techniques, convolutional neural network (CNN), to analysis satellite images. The neural network consists of 3 convolutional layers, followed by max-pooling layers after each convolutional layer, and 2 fully connected layers. We created our own disaster detection training data patches, which is currently focusing on 2 main disasters in Japan and Thailand: landslide and flood. Each disaster's training data set consists of 30000∼40000 patches and all patches are trained automatically in CNN to extract region where disaster occurred instantaneously. The results reveal accuracy of 80%∼90% for both disaster detection tasks. The results presented here may facilitate improvements in detecting natural disaster efficiently by establishing automatic disaster detection system.",
]
}
|
1901.03298
|
2909948158
|
This paper addresses the problem of floods classification and floods aftermath detection utilizing both social media and satellite imagery. Automatic detection of disasters such as floods is still a very challenging task. The focus lies on identifying passable routes or roads during floods. Two novel solutions are presented, which were developed for two corresponding tasks at the MediaEval 2018 benchmarking challenge. The tasks are (i) identification of images providing evidence for road passability and (ii) differentiation and detection of passable and non-passable roads in images from two complementary sources of information. For the first challenge, we mainly rely on object and scene-level features extracted through multiple deep models pre-trained on the ImageNet and Places datasets. The object and scene-level features are then combined using early, late and double fusion techniques. To identify whether or not it is possible for a vehicle to pass a road in satellite images, we rely on Convolutional Neural Networks and a transfer learning-based classification approach. The evaluation of the proposed methods is carried out on the large-scale datasets provided for the benchmark competition. The results demonstrate significant improvement in performance over recent state-of-the-art approaches.
|
@cite_44 tackle flood detection in satellite imagery as a generative problem, proposing a framework based on Generative Adversarial Networks (GANs). The framework mainly relies on a GAN architecture, namely V-GAN @cite_46 , originally developed for retinal vessel segmentation. In order to adapt the architecture to the flood detection task, the top layer of the generative network is extended with a threshold mechanism to generate a binary segmentation mask of the flooded regions in satellite imagery. In another work by the same authors @cite_7 , the input layer is modified to support 4-channel input images (i.e., RGB and IR), and several experiments are conducted to evaluate the performance of the RGB and IR components individually and jointly. In @cite_21 , different indices, namely the Land Water Index (LWI), the Normalised Difference Vegetation Index (NDVI) and the Normalised Difference Water Index (NDWI), are selected from the spectral images. Subsequently, two different strategies based on supervised classification and unsupervised clustering techniques are adopted for the identification of flooded regions in satellite imagery. On the other hand, @cite_3 rely on the Mahalanobis distance @cite_29 and morphological operations for the task.
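As a minimal sketch of index-based water detection, the snippet below computes the standard NDWI, (Green − NIR)/(Green + NIR), and thresholds it at zero to obtain a water mask. The reflectance values are toy numbers and the zero threshold is illustrative; @cite_21 may use different band combinations and decision rules.

```python
import numpy as np

def ndwi(green, nir, eps=1e-9):
    """Normalised Difference Water Index: open water reflects green light
    strongly and absorbs near-infrared, so NDWI > 0 suggests water."""
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir + eps)   # eps avoids division by zero

# Toy 2x2 scene: the left column is water-like (green reflectance > NIR).
green = np.array([[0.30, 0.05], [0.28, 0.06]])
nir   = np.array([[0.05, 0.40], [0.04, 0.35]])
water_mask = ndwi(green, nir) > 0.0
```

In a real pipeline the mask would then be cleaned with the kind of morphological operations mentioned above (e.g., opening to remove isolated false positives).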
|
{
"cite_N": [
"@cite_7",
"@cite_29",
"@cite_21",
"@cite_3",
"@cite_44",
"@cite_46"
],
"mid": [
"2800524086",
"1996118086",
"2750871820",
"2771771192",
"2774411652",
"2726555724"
],
"abstract": [
"Being able to automatically link social media and satellite imagery holds large opportunities for research, with a potentially considerable impact on society. The possibility of integrating different information sources opens in fact to new scenarios where the wide coverage of satellite imaging can be used as a collector of the fine-grained details provided by the social media. Remote-sensed data and social media data can well complement each other, integrating the wide perspective provided by the satellite view with the information collected locally, being it textual, audio, or visual. Among the possible applications, natural disasters are certainly one of the most interesting scenarios, where global and local perspectives are needed at the same time. In this paper, we present a system called JORD that is able to autonomously collect social media data (including the text analysis in local languages) about technological and environmental disasters, and link it automatically to remote-sensed data. Moreover, in order to ensure the quality of retrieved information, JORD is equipped with a hierarchical filtering mechanism relying on the temporal information and the content analysis of retrieved multimedia data. To show the capabilities of the system, we present a large number of disaster events detected by the system, and we evaluate both the quality of the provided information about the events and the usefulness of JORD from potential users viewpoint, using crowdsourcing.",
"Abstract The theory of many multivariate chemometrical methods is based on the measurement of distances. The Mahalanobis distance (MD), in the original and principal component (PC) space, will be examined and interpreted in relation with the Euclidean distance (ED). Techniques based on the MD and applied in different fields of chemometrics such as in multivariate calibration, pattern recognition and process control are explained and discussed.",
"This working note describes the work of the WISC team on the Multimedia Satellite Task at MediaEval 2017. We describe the runs that our team submitted to both the DIRSM and FDSI subtasks, as well as our evaluations on the development set. Our results demonstrate high accuracy in the detection of flooded areas from user-generated content in social media. In the first subtask consisting of disaster image retrieval from social media, we found that tags defined by users to describe the images are very helpful for achieving high accuracy classification. In the second subtask consisting of detecting flood in satellite images, we found that social media can increase the precision in analyses when combined with satellite images by taking advantage of spatial and temporal overlaps between data sources.",
"",
"",
"Retinal vessel segmentation is an indispensable step for automatic detection of retinal diseases with fundoscopic images. Though many approaches have been proposed, existing methods tend to miss fine vessels or allow false positives at terminal branches. Let alone under-segmentation, over-segmentation is also problematic when quantitative studies need to measure the precise width of vessels. In this paper, we present a method that generates the precise map of retinal vessels using generative adversarial training. Our methods achieve dice coefficient of 0.829 on DRIVE dataset and 0.834 on STARE dataset which is the state-of-the-art performance on both datasets."
]
}
|
1901.03415
|
2909450233
|
We propose a principle for exploring context in machine learning models. Starting with a simple assumption that each observation may or may not depend on its context, a conditional probability distribution is decomposed into two parts: context-free and context-sensitive. Then by employing the log-linear word production model for relating random variables to their embedding space representation and making use of the convexity of natural exponential function, we show that the embedding of an observation can also be decomposed into a weighted sum of two vectors, representing its context-free and context-sensitive parts, respectively. This simple treatment of context provides a unified view of many existing deep learning models, leading to revisions of these models able to achieve significant performance boost. Specifically, our upgraded version of a recent sentence embedding model not only outperforms the original one by a large margin, but also leads to a new, principled approach for compositing the embeddings of bag-of-words features, as well as a new architecture for modeling attention in deep neural networks. More surprisingly, our new principle provides a novel understanding of the gates and equations defined by the long short term memory model, which also leads to a new model that is able to converge significantly faster and achieve much lower prediction errors. Furthermore, our principle also inspires a new type of generic neural network layer that better resembles real biological neurons than the traditional linear mapping plus nonlinear activation based architecture. Its multi-layer extension provides a new principle for deep neural networks which subsumes residual network (ResNet) as its special case, and its extension to convolutional neural network model accounts for irrelevant input (e.g., background in an image) in addition to filtering.
|
Among bag-of-words-based models, early attempts such as @cite_37 directly extended the idea of Word2Vec by treating each sentence as a whole with a unique feature id. Later, @cite_7 showed that a more flexible approach, which simply computes the sentence embedding by summing up the embeddings of the individual words, achieves significant improvements. @cite_59 also showed that starting from an existing word embedding model, training it towards a specific task, and then simply using the average of the new word embeddings in a sentence achieves satisfactory results. Recently, @cite_53 proposed a TF-IDF-like weighting scheme to composite the embeddings of the words in a sentence and showed that performance can be improved simply by removing the first principal component of all the sentence vectors. More recently, the transformer architecture using self-attention proposed by @cite_48 provides further evidence that bag-of-words models can be preferable to sequence models, at least in certain tasks.
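A minimal sketch of the averaging-plus-principal-component-removal idea attributed to @cite_53 : each sentence vector is the plain average of its word vectors, and every sentence vector is then stripped of its projection onto the dominant singular direction of the sentence matrix. The random word vectors are placeholders, and this omits the frequency-based word weights of the original scheme.

```python
import numpy as np

def sentence_embeddings(word_vec_lists):
    """Each sentence is the unweighted average of its word vectors."""
    return np.stack([np.mean(v, axis=0) for v in word_vec_lists])

def remove_first_pc(X):
    """Subtract each row's projection onto the first singular direction of X,
    which tends to capture a shared, syntax-like component."""
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    u = vt[0]                               # unit-norm dominant direction
    return X - np.outer(X @ u, u), u

rng = np.random.default_rng(0)
sents = [rng.normal(size=(n, 8)) for n in (3, 5, 4, 6, 2)]   # toy word vectors
X = sentence_embeddings(sents)
cleaned, u = remove_first_pc(X)
```

After the removal step, every sentence vector is orthogonal to the discarded direction, so similarity comparisons no longer reward the component shared by all sentences.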
|
{
"cite_N": [
"@cite_37",
"@cite_7",
"@cite_48",
"@cite_53",
"@cite_59"
],
"mid": [
"2131744502",
"2271328876",
"2963403868",
"2752172973",
"2963499246"
],
"abstract": [
"Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperforms bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.",
"Unsupervised methods for learning distributed representations of words are ubiquitous in today's NLP research, but far less is known about the best ways to learn distributed phrase or sentence representations from unlabelled data. This paper is a systematic comparison of models that learn such representations. We find that the optimal approach depends critically on the intended application. Deeper, more complex models are preferable for representations to be used in supervised systems, but shallow log-linear models work best for building representation spaces that can be decoded with simple spatial distance metrics. We also propose two new unsupervised representation-learning objectives designed to optimise the trade-off between training time, domain portability and performance.",
"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.",
"The success of neural network methods for computing word embeddings has motivated methods for generating semantic embeddings of longer pieces of text, such as sentences and paragraphs. Surprisingly, (ICLR'16) showed that such complicated methods are outperformed, especially in out-of-domain (transfer learning) settings, by simpler methods involving mild retraining of word embeddings and basic linear regression. The method of requires retraining with a substantial labeled dataset such as Paraphrase Database (, 2013). @PARASPLIT The current paper goes further, showing that the following completely unsupervised sentence embedding is a formidable baseline: Use word embeddings computed using one of the popular methods on unlabeled corpus like Wikipedia, represent the sentence by a weighted average of the word vectors, and then modify them a bit using PCA SVD. This weighting improves performance by about 10 to 30 in textual similarity tasks, and beats sophisticated supervised methods including RNN's and LSTM's. It even improves 's embeddings. This simple method should be used as the baseline to beat in future, especially when labeled training data is scarce or nonexistent. @PARASPLIT The paper also gives a theoretical explanation of the success of the above unsupervised method using a latent variable generative model for sentences, which is a simple extension of the model in (TACL'16) with new \"smoothing\" terms that allow for words occurring out of context, as well as high probabilities for words like and, not in all contexts.",
"Abstract: We consider the problem of learning general-purpose, paraphrastic sentence embeddings based on supervision from the Paraphrase Database (, 2013). We compare six compositional architectures, evaluating them on annotated textual similarity datasets drawn both from the same distribution as the training data and from a wide range of other domains. We find that the most complex architectures, such as long short-term memory (LSTM) recurrent neural networks, perform best on the in-domain data. However, in out-of-domain scenarios, simple architectures such as word averaging vastly outperform LSTMs. Our simplest averaging model is even competitive with systems tuned for the particular tasks while also being extremely efficient and easy to use. In order to better understand how these architectures compare, we conduct further experiments on three supervised NLP tasks: sentence similarity, entailment, and sentiment classification. We again find that the word averaging models perform well for sentence similarity and entailment, outperforming LSTMs. However, on sentiment classification, we find that the LSTM performs very strongly-even recording new state-of-the-art performance on the Stanford Sentiment Treebank. We then demonstrate how to combine our pretrained sentence embeddings with these supervised tasks, using them both as a prior and as a black box feature extractor. This leads to performance rivaling the state of the art on the SICK similarity and entailment tasks. We release all of our resources to the research community with the hope that they can serve as the new baseline for further work on universal sentence embeddings."
]
}
|
1901.03415
|
2909450233
|
We propose a principle for exploring context in machine learning models. Starting with a simple assumption that each observation may or may not depend on its context, a conditional probability distribution is decomposed into two parts: context-free and context-sensitive. Then by employing the log-linear word production model for relating random variables to their embedding space representation and making use of the convexity of natural exponential function, we show that the embedding of an observation can also be decomposed into a weighted sum of two vectors, representing its context-free and context-sensitive parts, respectively. This simple treatment of context provides a unified view of many existing deep learning models, leading to revisions of these models able to achieve significant performance boost. Specifically, our upgraded version of a recent sentence embedding model not only outperforms the original one by a large margin, but also leads to a new, principled approach for compositing the embeddings of bag-of-words features, as well as a new architecture for modeling attention in deep neural networks. More surprisingly, our new principle provides a novel understanding of the gates and equations defined by the long short term memory model, which also leads to a new model that is able to converge significantly faster and achieve much lower prediction errors. Furthermore, our principle also inspires a new type of generic neural network layer that better resembles real biological neurons than the traditional linear mapping plus nonlinear activation based architecture. Its multi-layer extension provides a new principle for deep neural networks which subsumes residual network (ResNet) as its special case, and its extension to convolutional neural network model accounts for irrelevant input (e.g., background in an image) in addition to filtering.
|
As for sequence-based approaches, @cite_17 proposed the SkipThought algorithm, which employs a sequence-based model to encode a sentence and lets it decode the preceding and following sentences during training. By including an attention mechanism in a sequence-to-sequence model, a better weighting scheme is achieved for compositing the final embedding of a sentence. Besides attention, @cite_5 also considered concept (context) as input, and @cite_45 included convolutional and max-pooling layers in sentence embedding. Our model includes both attention and context as its ingredients.
|
{
"cite_N": [
"@cite_5",
"@cite_45",
"@cite_17"
],
"mid": [
"2507868973",
"2963918774",
""
],
"abstract": [
"Most sentence embedding models typically represent each sentence only using word surface, which makes these models indiscriminative for ubiquitous homonymy and polysemy. In order to enhance representation capability of sentence, we employ conceptualization model to assign associated concepts for each sentence in the text corpus, and then learn conceptual sentence embedding (CSE). Hence, this semantic representation is more expressive than some widely-used text representation models such as latent topic model, especially for short-text. Moreover, we further extend CSE models by utilizing a local attention-based model that select relevant words within the context to make more efficient prediction. In the experiments, we evaluate the CSE models on two tasks, text classification and information retrieval. The experimental results show that the proposed models outperform typical sentence embed-ding models.",
"Many modern NLP systems rely on word embeddings, previously trained in an unsupervised manner on large corpora, as base features. Efforts to obtain embeddings for larger chunks of text, such as sentences, have however not been so successful. Several attempts at learning unsupervised representations of sentences have not reached satisfactory enough performance to be widely adopted. In this paper, we show how universal sentence representations trained using the supervised data of the Stanford Natural Language Inference datasets can consistently outperform unsupervised methods like SkipThought vectors on a wide range of transfer tasks. Much like how computer vision uses ImageNet to obtain features, which can then be transferred to other tasks, our work tends to indicate the suitability of natural language inference for transfer learning to other NLP tasks. Our encoder is publicly available.",
""
]
}
|
1901.03415
|
2909450233
|
We propose a principle for exploring context in machine learning models. Starting with a simple assumption that each observation may or may not depend on its context, a conditional probability distribution is decomposed into two parts: context-free and context-sensitive. Then by employing the log-linear word production model for relating random variables to their embedding space representation and making use of the convexity of natural exponential function, we show that the embedding of an observation can also be decomposed into a weighted sum of two vectors, representing its context-free and context-sensitive parts, respectively. This simple treatment of context provides a unified view of many existing deep learning models, leading to revisions of these models able to achieve significant performance boost. Specifically, our upgraded version of a recent sentence embedding model not only outperforms the original one by a large margin, but also leads to a new, principled approach for compositing the embeddings of bag-of-words features, as well as a new architecture for modeling attention in deep neural networks. More surprisingly, our new principle provides a novel understanding of the gates and equations defined by the long short term memory model, which also leads to a new model that is able to converge significantly faster and achieve much lower prediction errors. Furthermore, our principle also inspires a new type of generic neural network layer that better resembles real biological neurons than the traditional linear mapping plus nonlinear activation based architecture. Its multi-layer extension provides a new principle for deep neural networks which subsumes residual network (ResNet) as its special case, and its extension to convolutional neural network model accounts for irrelevant input (e.g., background in an image) in addition to filtering.
|
In parallel to neural network based models, another popular approach to feature embedding is based on matrix factorization, where a column represents a feature (e.g., a movie) and each row represents a cooccurrence of features (e.g., the list of movies watched by a user). In this setting, one can compute the embeddings for both the features and the cooccurrences, and the embedding for a new cooccurring case (a bag of features) can be computed in closed form. Recently, the concept of cooccurrence was generalized to n-grams by reformulating the embeddings of bags of features as a compressed sensing problem @cite_10 . Nevertheless, these methods are mostly limited to linear systems with closed-form solutions.
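The closed-form embedding of a new bag of features can be illustrated with a toy factorization. The matrix values and the helper name below are made up for the sketch: given item factors V from a truncated SVD, the embedding of a new cooccurrence row is just a least-squares solution.

```python
import numpy as np

def embed_new_bag(V, x):
    """Embedding of a new cooccurrence row x (a bag of features) given
    item factors V (n_items x k): the closed-form least-squares
    solution u minimizing ||x - V u||."""
    return np.linalg.lstsq(V, x, rcond=None)[0]

# Toy cooccurrence matrix: rows are "users", columns are "movies".
X = np.array([[1., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 1., 1.]])
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
V = Vt[:k].T * s[:k]                                   # item embeddings, shape (4, k)
u_new = embed_new_bag(V, np.array([1., 1., 1., 0.]))   # embed an unseen row
```

This is exactly the linearity the passage refers to: because the model is a linear system, an unseen bag of features never requires gradient-based retraining.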
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2785524755"
],
"abstract": [
"Low-dimensional vector embeddings, computed using LSTMs or simpler techniques, are a popular approach for capturing the “meaning” of text and a form of unsupervised learning useful for downstream tasks. However, their power is not theoretically understood. The current paper derives formal understanding by looking at the subcase of linear embedding schemes. Using the theory of compressed sensing we show that representations combining the constituent word vectors are essentially information-preserving linear measurements of Bag-of-n-Grams (BonG) representations of text. This leads to a new theoretical result about LSTMs: low-dimensional embeddings derived from a low-memory LSTM are provably at least as powerful on classification tasks, up to small error, as a linear classifier over BonG vectors, a result that extensive empirical work has thus far been unable to show. Our experiments support these theoretical findings and establish strong, simple, and unsupervised baselines on standard benchmarks that in some cases are state of the art among word-level methods. We also show a surprising new property of word embeddings such as GloVe and word2vec: they form a good sensing matrix for text that is more efficient than random matrices, a standard sparse recovery tool, which may explain why they lead to better representations in practice."
]
}
|
1901.03415
|
2909450233
|
We propose a principle for exploring context in machine learning models. Starting with a simple assumption that each observation may or may not depend on its context, a conditional probability distribution is decomposed into two parts: context-free and context-sensitive. Then by employing the log-linear word production model for relating random variables to their embedding space representation and making use of the convexity of natural exponential function, we show that the embedding of an observation can also be decomposed into a weighted sum of two vectors, representing its context-free and context-sensitive parts, respectively. This simple treatment of context provides a unified view of many existing deep learning models, leading to revisions of these models able to achieve significant performance boost. Specifically, our upgraded version of a recent sentence embedding model not only outperforms the original one by a large margin, but also leads to a new, principled approach for compositing the embeddings of bag-of-words features, as well as a new architecture for modeling attention in deep neural networks. More surprisingly, our new principle provides a novel understanding of the gates and equations defined by the long short term memory model, which also leads to a new model that is able to converge significantly faster and achieve much lower prediction errors. Furthermore, our principle also inspires a new type of generic neural network layer that better resembles real biological neurons than the traditional linear mapping plus nonlinear activation based architecture. Its multi-layer extension provides a new principle for deep neural networks which subsumes residual network (ResNet) as its special case, and its extension to convolutional neural network model accounts for irrelevant input (e.g., background in an image) in addition to filtering.
|
Modeling sequential data (e.g., sentences or sound) requires inferring the current output from both past observations and the current input. A recurrent neural network (RNN) assumes past observations can be summarized in its hidden state, which can then be learned recurrently (recursively in time). Despite its promising modeling power, its basic form is proven difficult to train @cite_22 . Hence various improvements have been proposed based on different assumptions (e.g., @cite_8 ), among which the LSTM model @cite_36 is perhaps the most successful and the most widely used. Encouraged by its success, various models have been proposed that modify its design, but very few are game-changing @cite_60 . One of the very few successful challengers of LSTM is the GRU model @cite_18 , which has been shown to perform better than LSTM in many cases.
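As a concrete reference point for the gating discussed here, a single GRU step @cite_18 combines an update gate and a reset gate. This minimal numpy sketch follows the standard equations; the weight names and shapes are illustrative assumptions and biases are omitted for brevity:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: the update gate z interpolates between the previous
    hidden state h and a candidate state built from the reset-gated h."""
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_cand
```

The convex combination in the return line is what lets gradients flow through the (1 - z) * h path, easing the vanishing-gradient problem that plagues the basic RNN.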
|
{
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_60",
"@cite_36",
"@cite_8"
],
"mid": [
"2157331557",
"2107878631",
"1689711448",
"",
"1815076433"
],
"abstract": [
"In this paper, we propose a novel neural network model called RNN Encoder‐ Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixedlength vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder‐Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.",
"Recurrent neural networks can be used to map input sequences to output sequences, such as for recognition, production or prediction problems. However, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present in the input output sequences span long intervals. We show why gradient based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases. These results expose a trade-off between efficient learning by gradient descent and latching on information for long periods. Based on an understanding of this problem, alternatives to standard gradient descent are considered. >",
"Several variants of the long short-term memory (LSTM) architecture for recurrent neural networks have been proposed since its inception in 1995. In recent years, these networks have become the state-of-the-art models for a variety of machine learning problems. This has led to a renewed interest in understanding the role and utility of various computational components of typical LSTM variants. In this paper, we present the first large-scale analysis of eight LSTM variants on three representative tasks: speech recognition, handwriting recognition, and polyphonic music modeling. The hyperparameters of all LSTM variants for each task were optimized separately using random search, and their importance was assessed using the powerful functional ANalysis Of VAriance framework. In total, we summarize the results of 5400 experimental runs ( @math years of CPU time), which makes our study the largest of its kind on LSTM networks. Our results show that none of the variants can improve upon the standard LSTM architecture significantly, and demonstrate the forget gate and the output activation function to be its most critical components. We further observe that the studied hyperparameters are virtually independent and derive guidelines for their efficient adjustment.",
"",
"There are two widely known issues with properly training recurrent neural networks, the vanishing and the exploding gradient problems detailed in (1994). In this paper we attempt to improve the understanding of the underlying issues by exploring these problems from an analytical, a geometric and a dynamical systems perspective. Our analysis is used to justify a simple yet effective solution. We propose a gradient norm clipping strategy to deal with exploding gradients and a soft constraint for the vanishing gradients problem. We validate empirically our hypothesis and proposed solutions in the experimental section."
]
}
|
1901.03415
|
2909450233
|
We propose a principle for exploring context in machine learning models. Starting with a simple assumption that each observation may or may not depend on its context, a conditional probability distribution is decomposed into two parts: context-free and context-sensitive. Then by employing the log-linear word production model for relating random variables to their embedding space representation and making use of the convexity of natural exponential function, we show that the embedding of an observation can also be decomposed into a weighted sum of two vectors, representing its context-free and context-sensitive parts, respectively. This simple treatment of context provides a unified view of many existing deep learning models, leading to revisions of these models able to achieve significant performance boost. Specifically, our upgraded version of a recent sentence embedding model not only outperforms the original one by a large margin, but also leads to a new, principled approach for compositing the embeddings of bag-of-words features, as well as a new architecture for modeling attention in deep neural networks. More surprisingly, our new principle provides a novel understanding of the gates and equations defined by the long short term memory model, which also leads to a new model that is able to converge significantly faster and achieve much lower prediction errors. Furthermore, our principle also inspires a new type of generic neural network layer that better resembles real biological neurons than the traditional linear mapping plus nonlinear activation based architecture. Its multi-layer extension provides a new principle for deep neural networks which subsumes residual network (ResNet) as its special case, and its extension to convolutional neural network model accounts for irrelevant input (e.g., background in an image) in addition to filtering.
|
More recently, rather than improving the basic RNN-like structures, it was found that changing how they are stacked and connected can help solve very challenging problems like language translation. The sequence-to-sequence model @cite_44 is one such success story. The attention mechanism was then proposed, significantly improving performance @cite_56 . Recently, it was empirically validated that even the basic RNN structure can be dispensed with by using the attention mechanism alone @cite_48 .
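In its scaled dot-product form used by @cite_48 , the attention mechanism reduces to a softmax-weighted sum over value vectors. This small sketch shows the computation; the function name and shapes are illustrative:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)             # attention weights, rows sum to 1
    return w @ V, w
```

Each output row is a convex combination of the value rows, so the mechanism itself contains no recurrence at all, which is why it can replace the RNN backbone entirely.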
|
{
"cite_N": [
"@cite_44",
"@cite_48",
"@cite_56"
],
"mid": [
"2130942839",
"2963403868",
"2964308564"
],
"abstract": [
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.",
"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.",
"Abstract: Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition."
]
}
|
1901.03415
|
2909450233
|
We propose a principle for exploring context in machine learning models. Starting with a simple assumption that each observation may or may not depend on its context, a conditional probability distribution is decomposed into two parts: context-free and context-sensitive. Then by employing the log-linear word production model for relating random variables to their embedding space representation and making use of the convexity of natural exponential function, we show that the embedding of an observation can also be decomposed into a weighted sum of two vectors, representing its context-free and context-sensitive parts, respectively. This simple treatment of context provides a unified view of many existing deep learning models, leading to revisions of these models able to achieve significant performance boost. Specifically, our upgraded version of a recent sentence embedding model not only outperforms the original one by a large margin, but also leads to a new, principled approach for compositing the embeddings of bag-of-words features, as well as a new architecture for modeling attention in deep neural networks. More surprisingly, our new principle provides a novel understanding of the gates and equations defined by the long short term memory model, which also leads to a new model that is able to converge significantly faster and achieve much lower prediction errors. Furthermore, our principle also inspires a new type of generic neural network layer that better resembles real biological neurons than the traditional linear mapping plus nonlinear activation based architecture. Its multi-layer extension provides a new principle for deep neural networks which subsumes residual network (ResNet) as its special case, and its extension to convolutional neural network model accounts for irrelevant input (e.g., background in an image) in addition to filtering.
|
The visual pathway is one of the most well understood components of the human brain, and the convolutional neural network model @cite_55 has been shown to model its hierarchical, multi-layer processing very closely. A series of upgrades have been added to the basic CNN structure to improve its performance, such as pooling @cite_1 , dropout @cite_23 , and normalization @cite_32 @cite_24 . Recently, the residual network model @cite_3 and its variants (e.g., @cite_40 @cite_63 ) have become very popular due to their superior performance. Among them, the highway network @cite_63 shares certain similarity with our CA-NN model in its use of a gate (mapping to our @math -function) to switch between an identity activation (we use a default value @math instead) and a nonlinear activation (mapping to @math ). Each year, combinations of these building blocks are proposed to reach higher accuracies on standard benchmarks for image classification (e.g., @cite_28 @cite_31 ) and other tasks.
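The highway gating @cite_63 referred to above can be sketched directly. In the toy layer below, the names, shapes, and the negative gate bias are illustrative assumptions; the gate t mixes a nonlinear transform H(x) with the identity carry path:

```python
import numpy as np

def highway_layer(x, Wh, Wt, bt=-1.0):
    """Highway layer: a transform gate t mixes a nonlinear transform
    H(x) with the identity carry path, y = t * H(x) + (1 - t) * x."""
    t = 1.0 / (1.0 + np.exp(-(Wt @ x + bt)))   # gate; bt < 0 favors the carry path
    return t * np.tanh(Wh @ x) + (1 - t) * x
```

When the gate saturates at t = 0 the layer is an identity map, which is the mechanism that lets very deep stacks of such layers train; a residual block is the special case where the mixing weights are fixed rather than learned.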
|
{
"cite_N": [
"@cite_28",
"@cite_55",
"@cite_1",
"@cite_32",
"@cite_3",
"@cite_24",
"@cite_40",
"@cite_23",
"@cite_63",
"@cite_31"
],
"mid": [
"2163605009",
"2154579312",
"2310919327",
"1836465849",
"2194775991",
"",
"2964137095",
"1904365287",
"2950621961",
"1663973292"
],
"abstract": [
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.",
"We present an application of back-propagation networks to handwritten digit recognition. Minimal preprocessing of the data was required, but architecture of the network was highly constrained and specifically designed for the task. The input of the network consists of normalized images of isolated digits. The method has 1 error rate and about a 9 reject rate on zipcode digits provided by the U.S. Postal Service.",
"",
"Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82 top-5 test error, exceeding the accuracy of human raters.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"",
"",
"When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This \"overfitting\" is greatly reduced by randomly omitting half of the feature detectors on each training case. This prevents complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors. Instead, each neuron learns to detect a feature that is generally helpful for producing the correct answer given the combinatorially large variety of internal contexts in which it must operate. Random \"dropout\" gives big improvements on many benchmark tasks and sets new records for speech and object recognition.",
"Theoretical and empirical evidence indicates that the depth of neural networks is crucial for their success. However, training becomes more difficult as depth increases, and training of very deep networks remains an open problem. Here we introduce a new architecture designed to overcome this. Our so-called highway networks allow unimpeded information flow across many layers on information highways. They are inspired by Long Short-Term Memory recurrent networks and use adaptive gating units to regulate the information flow. Even with hundreds of layers, highway networks can be trained directly through simple gradient descent. This enables the study of extremely deep and efficient architectures.",
"Christopher M. Bishop. Information Science and Statistics. Springer 2006, 738 pages. As the author writes in the preface of the book, pattern recognition has its origin in engineering, whereas machine learning grew out of computer science. However, these activities can be viewed as two facets of the same field, and they have undergone substantial development over the past years. Bayesian methods are widely used, while graphical models have emerged as a general framework for describing and applying probabilistic models. Similarly, new models based on kernels have had significant impact on both algorithms and applications. This textbook reflects these recent developments while providing a comprehensive introduction to the fields of pattern recognition and machine learning. It is aimed at advanced undergraduate or first year PhD students, as well as researchers and practitioners. It can be considered as an introductory course to the subject. The first four chapters are devoted to the concepts of Probability and Statistics that are needed for reading the rest of the book, so we can imagine that the speed is high in order to get from zero to infinity. I believe that it is better to study the book after a previous course on Probability and Statistics. On the other hand, a basic knowledge of linear algebra and multivariate calculus is assumed. The other chapters give to a classic probabilist or statistician a point of view on some applications that are very interesting but far from his usual world. In all the text the mathematical aspects are at the second level in relation with the ideas and intuitions that the author wants to communicate. The book is supported by a great deal of additional material, including lecture slides as well as the complete set of figures used in it, and the reader is encouraged to visit the book web site for the latest information. So it can be very useful for a course or a talk about the subject."
]
}
|
1901.03415
|
2909450233
|
We propose a principle for exploring context in machine learning models. Starting with a simple assumption that each observation may or may not depend on its context, a conditional probability distribution is decomposed into two parts: context-free and context-sensitive. Then by employing the log-linear word production model for relating random variables to their embedding space representation and making use of the convexity of natural exponential function, we show that the embedding of an observation can also be decomposed into a weighted sum of two vectors, representing its context-free and context-sensitive parts, respectively. This simple treatment of context provides a unified view of many existing deep learning models, leading to revisions of these models able to achieve significant performance boost. Specifically, our upgraded version of a recent sentence embedding model not only outperforms the original one by a large margin, but also leads to a new, principled approach for compositing the embeddings of bag-of-words features, as well as a new architecture for modeling attention in deep neural networks. More surprisingly, our new principle provides a novel understanding of the gates and equations defined by the long short term memory model, which also leads to a new model that is able to converge significantly faster and achieve much lower prediction errors. Furthermore, our principle also inspires a new type of generic neural network layer that better resembles real biological neurons than the traditional linear mapping plus nonlinear activation based architecture. Its multi-layer extension provides a new principle for deep neural networks which subsumes residual network (ResNet) as its special case, and its extension to convolutional neural network model accounts for irrelevant input (e.g., background in an image) in addition to filtering.
|
However, human vision is also known to be sensitive to changes in time and to attend only to content that has evolutionary value. @cite_0 proposed a reinforcement learning model to explain the dynamic process of fixation in human vision. @cite_34 directly generalizes the text attention model in the encoder-decoder structure @cite_56 to image caption generation. @cite_29 proposed a multilayer deep architecture to predict the saliency of each pixel. Note that our CA-CNN architecture also allows explicit computation of saliency maps for input images.
|
{
"cite_N": [
"@cite_0",
"@cite_29",
"@cite_34",
"@cite_56"
],
"mid": [
"1984052055",
"2612135493",
"1514535095",
"2964308564"
],
"abstract": [
"Vector-based models of word meaning have become increasingly popular in cognitive science. The appeal of these models lies in their ability to represent meaning simply by using distributional information under the assumption that words occurring within similar contexts are semantically similar. Despite their widespread use, vector-based models are typically directed at representing words in isolation, and methods for constructing representations for phrases or sentences have received little attention in the literature. This is in marked contrast to experimental evidence (e.g., in sentential priming) suggesting that semantic similarity is more complex than simply a relation between isolated words. This article proposes a framework for representing the meaning of word combinations in vector space. Central to our approach is vector composition, which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models that we evaluate empirically on a phrase similarity task.",
"In this paper, we aim to predict human eye fixation with view-free scenes based on an end-to-end deep learning architecture. Although convolutional neural networks (CNNs) have made substantial improvement on human attention prediction, it is still needed to improve the CNN-based attention models by efficiently leveraging multi-scale features. Our visual attention network is proposed to capture hierarchical saliency information from deep, coarse layers with global saliency information to shallow, fine layers with local saliency response. Our model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various reception fields. Final saliency prediction is achieved via the cooperation of those global and local predictions. Our model is learned in a deep supervision manner, where supervision is directly fed into multi-level layers, instead of previous approaches of providing supervision only at the output layer and propagating this supervision back to earlier layers. Our model thus incorporates multi-level saliency predictions within a single network, which significantly decreases the redundancy of previous approaches of learning multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark data sets demonstrate our method yields the state-of-the-art performance with competitive inference time. Our source code is available at https://github.com/wenguanwang/deepattention .",
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.",
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consist of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition."
]
}
|
1901.03465
|
2909206252
|
Hand segmentation and fingertip detection play an indispensable role in hand gesture-based human-machine interaction systems. In this study, we propose a method to discriminate hand components and to locate fingertips in RGB-D images. The system consists of three main steps: hand detection using RGB images providing regions which are considered as promising areas for further processing, hand segmentation, and fingertip detection using depth image and our modified SegNet, a single lightweight architecture that can process two independent tasks at the same time. The experimental results show that our system is a promising method for hand segmentation and fingertip detection which achieves a comparable performance while model complexity is suitable for real-time applications.
|
Regarding the hand component segmentation problem, researchers have focused on two main approaches: wearable device-based and image processing-based methods @cite_17 @cite_7 @cite_18 @cite_10 . While the former does not meet the natural and comfortable interaction criteria, the latter is considered a promising approach with the support of a depth camera @cite_14 @cite_4 @cite_16 . The authors proposed fitting systems to represent 3D points acquired by Kinect using a hand model. Their methods use both appearance and temporal information to track hand components over time. However, @cite_14 suffered from the multiple-hand tracking issue, since it attempted to follow only a single object, which is unsuitable for practical applications. Even though @cite_4 showed significant improvements over @cite_14 by using a detailed mesh personalized to each user, a calibration step is required for each new user to transform a poorly-fitting template model into a personalized tracking one.
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_16",
"@cite_10",
"@cite_17"
],
"mid": [
"",
"2023633446",
"2423984454",
"2114663654",
"2469784314",
"2099800354",
"2144232362"
],
"abstract": [
"",
"We present a new real-time hand tracking system based on a single depth camera. The system can accurately reconstruct complex hand poses across a variety of subjects. It also allows for robust tracking, rapidly recovering from any temporary failures. Most uniquely, our tracker is highly flexible, dramatically improving upon previous approaches which have focused on front-facing close-range scenarios. This flexibility opens up new possibilities for human-computer interaction with examples including tracking at distances from tens of centimeters through to several meters (for controlling the TV at a distance), supporting tracking using a moving depth camera (for mobile scenarios), and arbitrary camera placements (for VR headsets). These features are achieved through a new pipeline that combines a multi-layered discriminative reinitialization strategy for per-frame pose estimation, followed by a generative model-fitting stage. We provide extensive technical details and a detailed qualitative and quantitative analysis.",
"We present a fast, practical method for personalizing a hand shape basis to an individual user's detailed hand shape using only a small set of depth images. To achieve this, we minimize an energy based on a sum of render-and-compare cost functions called the golden energy. However, this energy is only piecewise continuous, due to pixels crossing occlusion boundaries, and is therefore not obviously amenable to efficient gradient-based optimization. A key insight is that the energy is the combination of a smooth low-frequency function with a high-frequency, low-amplitude, piecewisecontinuous function. A central finite difference approximation with a suitable step size can therefore jump over the discontinuities to obtain a good approximation to the energy's low-frequency behavior, allowing efficient gradient-based optimization. Experimental results quantitatively demonstrate for the first time that detailed personalized models improve the accuracy of hand tracking and achieve competitive results in both tracking and model registration.",
"Articulated hand-tracking systems have been widely used in virtual reality but are rarely deployed in consumer applications due to their price and complexity. In this paper, we propose an easy-to-use and inexpensive system that facilitates 3-D articulated user-input using the hands. Our approach uses a single camera to track a hand wearing an ordinary cloth glove that is imprinted with a custom pattern. The pattern is designed to simplify the pose estimation problem, allowing us to employ a nearest-neighbor approach to track hands at interactive rates. We describe several proof-of-concept applications enabled by our system that we hope will provide a foundation for new interactions in modeling, animation control and augmented reality.",
"Fully articulated hand tracking promises to enable fundamentally new interactions with virtual and augmented worlds, but the limited accuracy and efficiency of current systems has prevented widespread adoption. Today's dominant paradigm uses machine learning for initialization and recovery followed by iterative model-fitting optimization to achieve a detailed pose fit. We follow this paradigm, but make several changes to the model-fitting, namely using: (1) a more discriminative objective function; (2) a smooth-surface model that provides gradients for non-linear optimization; and (3) joint optimization over both the model pose and the correspondences between observed data points and the model surface. While each of these changes may actually increase the cost per fitting iteration, we find a compensating decrease in the number of iterations. Further, the wide basin of convergence means that fewer starting points are needed for successful model fitting. Our system runs in real-time on CPU only, which frees up the commonly over-burdened GPU for experience designers. The hand tracker is efficient enough to run on low-power devices such as tablets. We can track up to several meters from the camera to provide a large working volume for interaction, even using the noisy data from current-generation depth cameras. Quantitative assessments on standard datasets show that the new approach exceeds the state of the art in accuracy. Qualitative results take the form of live recordings of a range of interactive experiences enabled by this new approach.",
"Digits is a wrist-worn sensor that recovers the full 3D pose of the user's hand. This enables a variety of freehand interactions on the move. The system targets mobile settings, and is specifically designed to be low-power and easily reproducible using only off-the-shelf hardware. The electronics are self-contained on the user's wrist, but optically image the entirety of the user's hand. This data is processed using a new pipeline that robustly samples key parts of the hand, such as the tips and lower regions of each finger. These sparse samples are fed into new kinematic models that leverage the biomechanical constraints of the hand to recover the 3D pose of the user's hand. The proposed system works without the need for full instrumentation of the hand (for example using data gloves), additional sensors in the environment, or depth cameras which are currently prohibitive for mobile scenarios due to power and form-factor considerations. We demonstrate the utility of Digits for a variety of application scenarios, including 3D spatial interaction with mobile devices, eyes-free interaction on-the-move, and gaming. We conclude with a quantitative and qualitative evaluation of our system, and discussion of strengths, limitations and future work.",
"Hand movement data acquisition is used in many engineering applications ranging from the analysis of gestures to the biomedical sciences. Glove-based systems represent one of the most important efforts aimed at acquiring hand movement data. While they have been around for over three decades, they keep attracting the interest of researchers from increasingly diverse fields. This paper surveys such glove systems and their applications. It also analyzes the characteristics of the devices, provides a road map of the evolution of the technology, and discusses limitations of current technology and trends at the frontiers of research. A foremost goal of this paper is to provide readers who are new to the area with a basis for understanding glove systems technology and how it can be applied, while offering specialists an updated picture of the breadth of applications in several engineering and biomedical sciences areas."
]
}
|
1901.03465
|
2909206252
|
Hand segmentation and fingertip detection play an indispensable role in hand gesture-based human-machine interaction systems. In this study, we propose a method to discriminate hand components and to locate fingertips in RGB-D images. The system consists of three main steps: hand detection using RGB images providing regions which are considered as promising areas for further processing, hand segmentation, and fingertip detection using depth image and our modified SegNet, a single lightweight architecture that can process two independent tasks at the same time. The experimental results show that our system is a promising method for hand segmentation and fingertip detection which achieves a comparable performance while model complexity is suitable for real-time applications.
|
To address the fingertip detection issue, researchers have focused on two main approaches in terms of input signals: the first uses only RGB images, while the other considers depth images as well. The latter generally achieves better performance than the former, since it uses both RGB values and depth signals from the camera. Regarding the first approach, researchers have proposed hand image processing methods based on background subtraction and skin color detection @cite_2 @cite_9 . However, this approach can be affected by the illumination and variations in the background of the sampling environment; that is, the authors assumed that the images were captured by stationary cameras in a steady scene. The benefit of this assumption is that simple computer vision techniques such as background subtraction can be employed. The other approach for hand image processing, which uses RGB-D images, was applied in @cite_1 @cite_6 . In lieu of using a hand detector on RGB images, the authors located fingertips by assuming that the hand must be at the shortest distance from the camera, given the depth information.
|
{
"cite_N": [
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_2"
],
"mid": [
"2025193300",
"2139916749",
"2005255079",
"2080154195"
],
"abstract": [
"This paper presents a low-cost hand gesture human-computer interaction system for remote controlling of TV and Set-Top-Box (STB). The proposed system adds only a little to the hardware cost as just a webcam is used, and can run on mainstream and even low-end TV and STB without any software and hardware upgrading.",
"Gesture recognition has been a research focus with the popularity of depth sensing devices. In this paper, we propose a new fingertip detection method based on a novel definition of fingers. This method consists of two steps. Firstly, finger bases are detected and estimated as prior information. Secondly, the finger regions and fingertips are located. In the second module, the point cloud of the hand is represented as a graph to obtain all geodesic paths originating from the palm center. If one path travels through a finger base, then its terminal point is defined as a finger point. The fingertips are determined within these finger points by utilizing geodesic distances. To our knowledge, such a definition has never been applied in gesture recognition before, and its performance surpasses the definition of geodesic maxima. Experimental results demonstrate the effectiveness of our method even when the hand is not parallel to the camera. Compared with the state-of-the-art approach, our method shows much less error.",
"We propose a real-time finger writing character recognition system using depth information. This system allows user to input characters by writing freely in the air with the Kinect. During the writing process, it is reasonable to assume that the finger and hand are always holding in front of torso. Firstly, we compute the depth histogram of human body and use a switch mixture Gaussian model to characterize it. Since the hand is closer to camera, a model-based threshold can segment the hand-related region out. Then, we employ an unsupervised clustering algorithm, K-means, to classify the segmented region into two parts, the finger-hand part and hand-arm part. By identifying the arm direction, we can determine the finger-hand cluster and locate the fingertip as the farthest point from the other cluster. We collected over 8000 frames writing-in-the-air sequences including two different subjects writing numbers, strokes, pattern, English and Chinese characters from two different distances. From our experiments, the proposed algorithm can provide robust and accurate fingertip detection, and achieve encouraging character recognition result.",
"Human-Computer Interaction (HCI) exists ubiquitously in our daily lives. It is usually achieved by using a physical controller such as a mouse, keyboard or touch screen. It hinders Natural User Interface (NUI) as there is a strong barrier between the user and computer. There are various hand tracking systems available on the market, but they are complex and expensive. In this paper, we present the design and development of a robust marker-less hand finger tracking and gesture recognition system using low-cost hardware. We propose a simple but efficient method that allows robust and fast hand tracking despite complex background and motion blur. Our system is able to translate the detected hands or gestures into different functional inputs and interfaces with other applications via several methods. It enables intuitive HCI and interactive motion gaming. We also developed sample applications that can utilize the inputs from the hand tracking system. Our results show that an intuitive HCI and motion gaming system can be achieved with minimum hardware requirements."
]
}
|
1901.03465
|
2909206252
|
Hand segmentation and fingertip detection play an indispensable role in hand gesture-based human-machine interaction systems. In this study, we propose a method to discriminate hand components and to locate fingertips in RGB-D images. The system consists of three main steps: hand detection using RGB images providing regions which are considered as promising areas for further processing, hand segmentation, and fingertip detection using depth image and our modified SegNet, a single lightweight architecture that can process two independent tasks at the same time. The experimental results show that our system is a promising method for hand segmentation and fingertip detection which achieves a comparable performance while model complexity is suitable for real-time applications.
|
In the past few years, deep convolutional neural networks have achieved state-of-the-art performance on numerous computer vision problems. Due to the availability of powerful hardware and public datasets, training deep neural networks is not as difficult or restrictive as it was previously @cite_12 . In this paper, we propose a method for hand segmentation and fingertip detection using RGB-D images and deep neural networks. Our system works well in various sampling scenarios, including dynamic and stationary environments, while the processing speed is up to 15 fps with GPU support. In addition, a calibration step is not required for each user, and our method also addresses the multiple-hand processing problem. Moreover, another contribution of this paper is a modified version of SegNet for semantic segmentation @cite_13 . Instead of using two different SegNets for the two tasks of hand component segmentation and fingertip detection, our multi-task SegNet, a single lightweight architecture whose number of parameters is reduced by @math , can process the two tasks at the same time. Last but not least, the modified model's performance is similar to that of the original architecture.
|
{
"cite_N": [
"@cite_13",
"@cite_12"
],
"mid": [
"2963881378",
"2962850098"
],
"abstract": [
"We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] , DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet/ .",
"Encouraged by the recent progress in pedestrian detection, we investigate the gap between current state-of-the-art methods and the \"perfect single frame detector\". We enable our analysis by creating a human baseline for pedestrian detection (over the Caltech dataset), and by manually clustering the recurrent errors of a top detector. Our results characterise both localisation and background-versus-foreground errors. To address localisation errors we study the impact of training annotation noise on the detector performance, and show that we can improve even with a small portion of sanitised training data. To address background/foreground discrimination, we study convnets for pedestrian detection, and discuss which factors affect their performance. Other than our in-depth analysis, we report top performance on the Caltech dataset, and provide a new sanitised set of training and test annotations."
]
}
|
1901.03447
|
2910555006
|
This paper addresses the problem of interpolating visual textures. We formulate the problem of texture interpolation by requiring (1) by-example controllability and (2) realistic and smooth interpolation among an arbitrary number of texture samples. To solve it we propose a neural network trained simultaneously on a reconstruction task and a generation task, which can project texture examples onto a latent space where they can be linearly interpolated and reprojected back onto the image domain, thus ensuring both intuitive control and realistic results. We show several additional applications including texture brushing and texture dissolve, and show our method outperforms a number of baselines according to a comprehensive suite of metrics as well as a user study.
|
Recently, generative adversarial networks (GANs) @cite_21 @cite_7 @cite_46 @cite_2 have shown improved realism in image synthesis and translation tasks @cite_32 @cite_42 @cite_41 . GANs have also been used directly for texture synthesis @cite_51 @cite_30 @cite_4 ; however, they were limited to the single texture they were trained on. A recent approach dubbed PSGAN @cite_57 learns to synthesize a collection of textures present in a single photograph, making it more general and applicable to texture interpolation; it does not, however, allow for user control by specifying which textures are synthesized.
|
{
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_7",
"@cite_41",
"@cite_21",
"@cite_42",
"@cite_32",
"@cite_57",
"@cite_2",
"@cite_46",
"@cite_51"
],
"mid": [
"",
"2801495938",
"",
"",
"2099471712",
"",
"2962793481",
"2962719787",
"",
"",
"2339754110"
],
"abstract": [
"",
"This paper proposes a new GAN-based approach for example-based non-stationary texture synthesis. It can cope with challenging textures, which, to our knowledge, no other existing method can handle.",
"",
"",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"This paper introduces a novel approach to texture synthesis based on generative adversarial networks (GAN) (Goodfellow et al., 2014), and call this technique Periodic Spatial GAN (PS-GAN). The PSGAN has several novel abilities which surpass the current state of the art in texture synthesis. First, we can learn multiple textures, periodic or non-periodic, from datasets of one or more complex large images. Second, we show that the image generation with PS-GANs has properties of a texture manifold: we can smoothly interpolate between samples in the structured noise space and generate novel samples, which lie perceptually between the textures of the original dataset. We make multiple experiments which show that PSGANs can flexibly handle diverse texture and image data sources, and the method is highly scalable and can generate output images of arbitrary large size.",
"",
"",
"This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative networks for efficient texture synthesis. While deep neural network approaches have recently demonstrated remarkable results in terms of synthesis quality, they still come at considerable computational costs (minutes of run-time for low-res images). Our paper addresses this efficiency issue. Instead of a numerical deconvolution in previous work, we precompute a feed-forward, strided convolutional network that captures the feature statistics of Markovian patches and is able to directly generate outputs of arbitrary dimensions. Such network can directly decode brown noise to realistic texture, or photos to artistic paintings. With adversarial training, we obtain quality comparable to recent neural texture synthesis methods. As no optimization is required at generation time, our run-time performance (0.25 M pixel images at 25 Hz) surpasses previous neural texture synthesizers by a significant margin (at least 500 times faster). We apply this idea to texture synthesis, style transfer, and video stylization."
]
}
|
1901.03447
|
2910555006
|
This paper addresses the problem of interpolating visual textures. We formulate the problem of texture interpolation by requiring (1) by-example controllability and (2) realistic and smooth interpolation among an arbitrary number of texture samples. To solve it we propose a neural network trained simultaneously on a reconstruction task and a generation task, which can project texture examples onto a latent space where they can be linearly interpolated and reprojected back onto the image domain, thus ensuring both intuitive control and realistic results. We show several additional applications including texture brushing and texture dissolve, and show our method outperforms a number of baselines according to a comprehensive suite of metrics as well as a user study.
|
Finally, some neural-based image stylization approaches @cite_44 @cite_51 @cite_59 @cite_47 , based on separating images into content and style layers, have shown that by stylizing a noise content image they can effectively synthesize texture @cite_0 . By spatially varying the style layer, texture interpolation may thus be achieved.
|
{
"cite_N": [
"@cite_44",
"@cite_0",
"@cite_59",
"@cite_47",
"@cite_51"
],
"mid": [
"2475287302",
"2964193438",
"",
"2962772087",
"2339754110"
],
"abstract": [
"Rendering the semantic content of an image in different styles is a difficult image processing task. Arguably, a major limiting factor for previous approaches has been the lack of image representations that explicitly represent semantic information and, thus, allow to separate image content from style. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images. The algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous wellknown artworks. Our results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.",
"Here we introduce a new model of natural textures based on the feature spaces of convolutional neural networks optimised for object recognition. Samples from the model are of high perceptual quality demonstrating the generative power of neural networks trained in a purely discriminative fashion. Within the model, textures are represented by the correlations between feature maps in several layers of the network. We show that across layers the texture representations increasingly capture the statistical properties of natural images while making object information more and more explicit. The model provides a new tool to generate stimuli for neuroscience and might offer insights into the deep representations learned by convolutional neural networks.",
"",
"Universal style transfer aims to transfer arbitrary visual styles to content images. Existing feed-forward based methods, while enjoying the inference efficiency, are mainly limited by inability of generalizing to unseen styles or compromised visual quality. In this paper, we present a simple yet effective method that tackles these limitations without training on any pre-defined styles. The key ingredient of our method is a pair of feature transforms, whitening and coloring, that are embedded to an image reconstruction network. The whitening and coloring transforms reflect direct matching of feature covariance of the content image to a given style image, which shares similar spirits with the optimization of Gram matrix based cost in neural style transfer. We demonstrate the effectiveness of our algorithm by generating high-quality stylized images with comparisons to a number of recent methods. We also analyze our method by visualizing the whitened features and synthesizing textures by simple feature coloring.",
"This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative networks for efficient texture synthesis. While deep neural network approaches have recently demonstrated remarkable results in terms of synthesis quality, they still come at considerable computational costs (minutes of run-time for low-res images). Our paper addresses this efficiency issue. Instead of a numerical deconvolution in previous work, we precompute a feed-forward, strided convolutional network that captures the feature statistics of Markovian patches and is able to directly generate outputs of arbitrary dimensions. Such network can directly decode brown noise to realistic texture, or photos to artistic paintings. With adversarial training, we obtain quality comparable to recent neural texture synthesis methods. As no optimization is required at generation time, our run-time performance (0.25 M pixel images at 25 Hz) surpasses previous neural texture synthesizers by a significant margin (at least 500 times faster). We apply this idea to texture synthesis, style transfer, and video stylization."
]
}
|
1901.03107
|
2910284584
|
In this paper, we deal with the problem of temporal action localization for a large-scale untrimmed cricket videos dataset. Our action of interest for cricket videos is a cricket stroke played by a batsman, which is usually covered by cameras placed at the stands of the cricket ground at both ends of the cricket pitch. After applying a sequence of preprocessing steps, we have 73 million frames for 1110 videos in the dataset at constant frame rate and resolution. The method of localization is a generalized one which applies a trained random forest model for CUTs detection (using summed-up grayscale histogram difference features) and two linear SVM camera models (CAM1 and CAM2) for first frame detection, trained on HOG features of CAM1 and CAM2 video shots. CAM1 and CAM2 are assumed to be part of the cricket stroke. At the predicted boundary positions, the HOG features of the first frames are computed and a simple algorithm was used to combine the positively predicted camera shots. In order to make the process as generic as possible, we did not consider any domain-specific knowledge, such as tracking or specific shape and motion features. The detailed analysis of our methodology is provided along with the metrics used for evaluation of individual models, and the final predicted segments. We achieved a weighted mean TIoU of 0.5097 over a small sample of the test set.
|
The problem of action recognition in videos has picked up pace with the onset of deep neural networks @cite_35 @cite_33 @cite_9 @cite_32 @cite_20 @cite_24 . These works modify the architecture of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) and train them on video data. As a result, they produce state-of-the-art results at the cost of an increased number of parameters and longer training time.
|
{
"cite_N": [
"@cite_35",
"@cite_33",
"@cite_9",
"@cite_32",
"@cite_24",
"@cite_20"
],
"mid": [
"2016053056",
"1522734439",
"2952186347",
"",
"1923404803",
"1985912834"
],
"abstract": [
"Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Encouraged by these results, we provide an extensive empirical evaluation of CNNs on large-scale video classification using a new dataset of 1 million YouTube videos belonging to 487 classes. We study multiple approaches for extending the connectivity of a CNN in time domain to take advantage of local spatio-temporal information and suggest a multiresolution, foveated architecture as a promising way of speeding up the training. Our best spatio-temporal networks display significant performance improvements compared to strong feature-based baselines (55.3% to 63.9%), but only a surprisingly modest improvement compared to single-frame models (59.3% to 60.9%). We further study the generalization performance of our best model by retraining the top layers on the UCF-101 Action Recognition dataset and observe significant performance improvements compared to the UCF-101 baseline model (63.3% up from 43.9%).",
"We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets, 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets, and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8% accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.",
"We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.",
"",
"Convolutional neural networks (CNNs) have been extensively applied for image recognition problems giving state-of-the-art results on recognition, detection, segmentation and retrieval. In this work we propose and evaluate several deep neural network architectures to combine image information across a video over longer time periods than previously attempted. We propose two methods capable of handling full length videos. The first method explores various convolutional temporal feature pooling architectures, examining the various design choices which need to be made when adapting a CNN for this task. The second proposed method explicitly models the video as an ordered sequence of frames. For this purpose we employ a recurrent neural network that uses Long Short-Term Memory (LSTM) cells which are connected to the output of the underlying CNN. Our best networks exhibit significant performance improvements over previously published results on the Sports 1 million dataset (73.1% vs. 60.9%) and the UCF-101 datasets with (88.6% vs. 88.0%) and without additional optical flow information (82.6% vs. 73.0%).",
"Human activity understanding with 3D depth sensors has received increasing attention in multimedia processing and interactions. This work targets on developing a novel deep model for automatic activity recognition from RGB-D videos. We represent each human activity as an ensemble of cubic-like video segments, and learn to discover the temporal structures for a category of activities, i.e. how the activities to be decomposed in terms of classification. Our model can be regarded as a structured deep architecture, as it extends the convolutional neural networks (CNNs) by incorporating structure alternatives. Specifically, we build the network consisting of 3D convolutions and max-pooling operators over the video segments, and introduce the latent variables in each convolutional layer manipulating the activation of neurons. Our model thus advances existing approaches in two aspects: (i) it acts directly on the raw inputs (grayscale-depth data) to conduct recognition instead of relying on hand-crafted features, and (ii) the model structure can be dynamically adjusted accounting for the temporal variations of human activities, i.e. the network configuration is allowed to be partially activated during inference. For model training, we propose an EM-type optimization method that iteratively (i) discovers the latent structure by determining the decomposed actions for each training example, and (ii) learns the network parameters by using the back-propagation algorithm. Our approach is validated in challenging scenarios, and outperforms state-of-the-art methods. A large human activity database of RGB-D videos is presented in addition."
]
}
|
1901.03107
|
2910284584
|
In this paper, we deal with the problem of temporal action localization for a large-scale untrimmed cricket videos dataset. Our action of interest for cricket videos is a cricket stroke played by a batsman, which is usually covered by cameras placed at the stands of the cricket ground at both ends of the cricket pitch. After applying a sequence of preprocessing steps, we have 73 million frames for 1110 videos in the dataset at constant frame rate and resolution. The method of localization is a generalized one which applies a trained random forest model for CUTs detection (using summed-up grayscale histogram difference features) and two linear SVM camera models (CAM1 and CAM2) for first frame detection, trained on HOG features of CAM1 and CAM2 video shots. CAM1 and CAM2 are assumed to be part of the cricket stroke. At the predicted boundary positions, the HOG features of the first frames are computed and a simple algorithm was used to combine the positively predicted camera shots. In order to make the process as generic as possible, we did not consider any domain-specific knowledge, such as tracking or specific shape and motion features. The detailed analysis of our methodology is provided along with the metrics used for evaluation of individual models, and the final predicted segments. We achieved a weighted mean TIoU of 0.5097 over a small sample of the test set.
|
The tasks of classification, tracking, segmentation and temporal localization are quite inter-dependent and may involve similar approaches and features. In some works, the problem of temporal action localization has also been tackled using an end-to-end learning network @cite_29 @cite_13 . Segmenting the object of interest and tracking it across the sequence of frames has been done in works such as @cite_25 @cite_0 @cite_18 .
|
{
"cite_N": [
"@cite_18",
"@cite_29",
"@cite_0",
"@cite_13",
"@cite_25"
],
"mid": [
"2476839805",
"2950971447",
"2018068650",
"2179401333",
"2950966695"
],
"abstract": [
"This paper considers the problem of localizing actions in videos as sequences of bounding boxes. The objective is to generate action proposals that are likely to include the action of interest, ideally achieving high recall with few proposals. Our contributions are threefold. First, inspired by selective search for object proposals, we introduce an approach to generate action proposals from spatiotemporal super-voxels in an unsupervised manner, we call them Tubelets. Second, along with the static features from individual frames our approach advantageously exploits motion. We introduce independent motion evidence as a feature to characterize how the action deviates from the background and explicitly incorporate such motion information in various stages of the proposal generation. Finally, we introduce spatiotemporal refinement of Tubelets, for more precise localization of actions, and pruning to keep the number of Tubelets limited. We demonstrate the suitability of our approach by extensive experiments for action proposal quality and action localization on three public datasets: UCF Sports, MSR-II and UCF101. For action proposal quality, our unsupervised proposals beat all other existing approaches on the three datasets. For action localization, we show top performance on both the trimmed videos of UCF Sports and UCF101 as well as the untrimmed videos of MSR-II.",
"Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition, which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains the state-of-the-art performance on the datasets of HMDB51 ( @math ) and UCF101 ( @math ). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices.",
"This paper considers the problem of action localization, where the objective is to determine when and where certain actions appear. We introduce a sampling strategy to produce 2D+t sequences of bounding boxes, called tubelets. Compared to state-of-the-art alternatives, this drastically reduces the number of hypotheses that are likely to include the action of interest. Our method is inspired by a recent technique introduced in the context of image localization. Beyond considering this technique for the first time for videos, we revisit this strategy for 2D+t sequences obtained from super-voxels. Our sampling strategy advantageously exploits a criterion that reflects how action related motion deviates from background motion. We demonstrate the interest of our approach by extensive experiments on two public datasets: UCF Sports and MSR-II. Our approach significantly outperforms the state-of-the-art on both datasets, while restricting the search of actions to a fraction of possible bounding box sequences.",
"In this work we introduce a fully end-to-end approach for action detection in videos that learns to directly predict the temporal bounds of actions. Our intuition is that the process of detecting actions is naturally one of observation and refinement: observing moments in video, and refining hypotheses about when an action is occurring. Based on this insight, we formulate our model as a recurrent neural network-based agent that interacts with a video over time. The agent observes video frames and decides both where to look next and when to emit a prediction. Since backpropagation is not adequate in this non-differentiable setting, we use REINFORCE to learn the agent's decision policy. Our model achieves state-of-the-art results on the THUMOS'14 and ActivityNet datasets while observing only a fraction (2% or less) of the video frames.",
"We propose an effective approach for spatio-temporal action localization in realistic videos. The approach first detects proposals at the frame-level and scores them with a combination of static and motion CNN features. It then tracks high-scoring proposals throughout the video using a tracking-by-detection approach. Our tracker relies simultaneously on instance-level and class-level detectors. The tracks are scored using a spatio-temporal motion histogram, a descriptor at the track level, in combination with the CNN features. Finally, we perform temporal localization of the action using a sliding-window approach at the track level. We present experimental results for spatio-temporal localization on the UCF-Sports, J-HMDB and UCF-101 action localization datasets, where our approach outperforms the state of the art with a margin of 15%, 7% and 12% respectively in mAP."
]
}
|
1901.03107
|
2910284584
|
In this paper, we deal with the problem of temporal action localization for a large-scale untrimmed cricket videos dataset. Our action of interest for cricket videos is a cricket stroke played by a batsman, which is usually covered by cameras placed at the stands of the cricket ground at both ends of the cricket pitch. After applying a sequence of preprocessing steps, we have 73 million frames for 1110 videos in the dataset at constant frame rate and resolution. The method of localization is a generalized one which applies a trained random forest model for CUTs detection (using summed-up grayscale histogram difference features) and two linear SVM camera models (CAM1 and CAM2) for first frame detection, trained on HOG features of CAM1 and CAM2 video shots. CAM1 and CAM2 are assumed to be part of the cricket stroke. At the predicted boundary positions, the HOG features of the first frames are computed and a simple algorithm was used to combine the positively predicted camera shots. In order to make the process as generic as possible, we did not consider any domain-specific knowledge, such as tracking or specific shape and motion features. The detailed analysis of our methodology is provided along with the metrics used for evaluation of individual models, and the final predicted segments. We achieved a weighted mean TIoU of 0.5097 over a small sample of the test set.
|
Some of the above approaches need pre-trained models that can be fine-tuned on their own problem-specific datasets, while others use large benchmark datasets for training purposes. Applying such techniques to a sporting event requires a lot of hand-annotated training data, which is hard to get or may include a lot of noise. Automated ways of creating datasets may involve using a third-party API, like the YouTube Data API (as done in @cite_35 ), or extraction using the text meta-data of videos, which may not be accurate at all. @cite_31 proposed a method to extract action videos based on their tags. Deciding the relevancy of a tag is, in itself, a research problem. For these reasons, content-based action extraction from videos is a good choice for automatic construction of a large-scale action dataset. @cite_15 provide a survey of content-based methods for video extraction.
|
{
"cite_N": [
"@cite_35",
"@cite_31",
"@cite_15"
],
"mid": [
"2016053056",
"2054337388",
""
],
"abstract": [
"Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Encouraged by these results, we provide an extensive empirical evaluation of CNNs on large-scale video classification using a new dataset of 1 million YouTube videos belonging to 487 classes. We study multiple approaches for extending the connectivity of a CNN in time domain to take advantage of local spatio-temporal information and suggest a multiresolution, foveated architecture as a promising way of speeding up the training. Our best spatio-temporal networks display significant performance improvements compared to strong feature-based baselines (55.3% to 63.9%), but only a surprisingly modest improvement compared to single-frame models (59.3% to 60.9%). We further study the generalization performance of our best model by retraining the top layers on the UCF-101 Action Recognition dataset and observe significant performance improvements compared to the UCF-101 baseline model (63.3% up from 43.9%).",
"Video sharing websites have recently become a tremendous video source, which is easily accessible without any costs. This has encouraged researchers in the action recognition field to construct action database exploiting Web sources. However Web sources are generally too noisy to be used directly as a recognition database. Thus building action database from Web sources has required extensive human efforts on manual selection of video parts related to specified actions. In this paper, we introduce a novel method to automatically extract video shots related to given action keywords from Web videos according to their metadata and visual features. First, we select relevant videos among tagged Web videos based on the relevance between their tags and the given keyword. After segmenting selected videos into shots, we rank these shots exploiting their visual features in order to obtain shots of interest as top ranked shots. Especially, we propose to adopt Web images and human pose matching method in shot ranking step and show that this application helps to boost more relevant shots to the top. This unsupervised method of ours only requires the provision of action keywords such as ''surf wave'' or ''bake bread'' at the beginning. We have made large-scale experiments on various kinds of human actions as well as non-human actions and obtained promising results.",
""
]
}
|
1901.03107
|
2910284584
|
In this paper, we deal with the problem of temporal action localization for a large-scale untrimmed cricket videos dataset. Our action of interest for cricket videos is a cricket stroke played by a batsman, which is usually covered by cameras placed at the stands of the cricket ground at both ends of the cricket pitch. After applying a sequence of preprocessing steps, we have 73 million frames for 1110 videos in the dataset at constant frame rate and resolution. The method of localization is a generalized one which applies a trained random forest model for CUTs detection (using summed-up grayscale histogram difference features) and two linear SVM camera models (CAM1 and CAM2) for first frame detection, trained on HOG features of CAM1 and CAM2 video shots. CAM1 and CAM2 are assumed to be part of the cricket stroke. At the predicted boundary positions, the HOG features of the first frames are computed and a simple algorithm was used to combine the positively predicted camera shots. In order to make the process as generic as possible, we did not consider any domain-specific knowledge, such as tracking or specific shape and motion features. The detailed analysis of our methodology is provided along with the metrics used for evaluation of individual models, and the final predicted segments. We achieved a weighted mean TIoU of 0.5097 over a small sample of the test set.
|
Coming up with a purely content-based, domain-specific event extraction for untrimmed cricket telecast videos has not been attempted by many. @cite_14 tried to annotate the videos by mapping the segments to the text-commentary using dynamic programming alignment, which may not always be available or may be noisy. Moreover, their dataset is quite small compared to what we are trying to achieve. Some other works have also looked at extraction of cricketing events, but they have not considered it at such a scale as ours, or have not tried to generalize their solutions. @cite_8 and some similar works have tried to classify frames based on the ground, pitch, or players present in them, or came up with a rule-based solution to classify motion features. None of them have analyzed their results on an entirely new set of cricket videos.
|
{
"cite_N": [
"@cite_14",
"@cite_8"
],
"mid": [
"2176302750",
"2546646060"
],
"abstract": [
"The recognition of human activities is one of the key problems in video understanding. Action recognition is challenging even for specific categories of videos, such as sports, that contain only a small set of actions. Interestingly, sports videos are accompanied by detailed commentaries available online, which could be used to perform action annotation in a weakly-supervised setting. For the specific case of Cricket videos, we address the challenge of temporal segmentation and annotation of actions with semantic descriptions. Our solution consists of two stages. In the first stage, the video is segmented into \"scenes\", by utilizing the scene category information extracted from text-commentary. The second stage consists of classifying video-shots as well as the phrases in the textual description into various categories. The relevant phrases are then suitably mapped to the video-shots. The novel aspect of this work is the fine temporal scale at which semantic information is assigned to the video. As a result of our approach, we enable retrieval of specific actions that last only a few seconds, from several hours of video. This solution yields a large number of labeled exemplars, with no manual effort, that could be used by machine learning algorithms to learn complex actions.",
"Cricket broadcast video analysis has had difficulty identifying aspects of the content such as the type of batting stroke or the direction of the field played towards. Here we construct a composite feature combining Optical flow analysis along with camera view analysis to model the type of shots played. The work first presents an improved camera shot analysis based on learning parameters from a small supervision set. This splits the broadcast video into shots which are combined into balls and, the segment where the batsman is playing the stroke is identified. After that optical flow analysis is used to determine the direction of the stroke with an accuracy of 80 percent."
]
}
|
1901.03107
|
2910284584
|
In this paper, we deal with the problem of temporal action localization for a large-scale untrimmed cricket videos dataset. Our action of interest for cricket videos is a cricket stroke played by a batsman, which is usually covered by cameras placed at the stands of the cricket ground at both ends of the cricket pitch. After applying a sequence of preprocessing steps, we have 73 million frames for 1110 videos in the dataset at constant frame rate and resolution. The method of localization is a generalized one which applies a trained random forest model for CUTs detection (using summed-up grayscale histogram difference features) and two linear SVM camera models (CAM1 and CAM2) for first-frame detection, trained on HOG features of CAM1 and CAM2 video shots. CAM1 and CAM2 are assumed to be part of the cricket stroke. At the predicted boundary positions, the HOG features of the first frames are computed, and a simple algorithm is used to combine the positively predicted camera shots. In order to make the process as generic as possible, we did not consider any domain-specific knowledge, such as tracking or specific shape and motion features. A detailed analysis of our methodology is provided along with the metrics used for evaluation of individual models and the final predicted segments. We achieved a weighted mean TIoU of 0.5097 over a small sample of the test set.
|
Temporal localization using deep neural networks has been quite successful recently @cite_34 @cite_2 . There are, however, other works that do not use deep neural networks, such as @cite_17 , which uses an unsupervised spectral clustering approach to find similar action types and localize them. Our approach also does not use deep neural networks, as our objective is to come up with a dataset that is large enough to train CNNs with millions of parameters. Even pre-trained networks need a sufficiently large amount of labeled data for fine-tuning. We have done the labeling for only a small set of highlight videos (1 GB of World Cup T20 2016) and bootstrapped simple machine learning models trained only on grayscale histogram differences and HOG @cite_3 features.
|
{
"cite_N": [
"@cite_34",
"@cite_17",
"@cite_3",
"@cite_2"
],
"mid": [
"2964214371",
"2777542469",
"2161969291",
"2597958930"
],
"abstract": [
"We address temporal action localization in untrimmed long videos. This is important because videos in real applications are usually unconstrained and contain multiple action instances plus video content of background scenes or other activities. To address this challenging issue, we exploit the effectiveness of deep networks in temporal action localization via three segment-based 3D ConvNets: (1) a proposal network identifies candidate segments in a long video that may contain actions, (2) a classification network learns one-vs-all action classification model to serve as initialization for the localization network, and (3) a localization network fine-tunes the learned classification network to localize each action instance. We propose a novel loss function for the localization network to explicitly consider temporal overlap and achieve high temporal localization accuracy. In the end, only the proposal network and the localization network are used during prediction. On two largescale benchmarks, our approach achieves significantly superior performances compared with other state-of-the-art systems: mAP increases from 1.7 to 7.4 on MEXaction2 and increases from 15.0 to 19.0 on THUMOS 2014.",
"This paper is the first to address the problem of unsupervised action localization in videos. Given unlabeled data without bounding box annotations, we propose a novel approach that: 1) Discovers action class labels and 2) Spatio-temporally localizes actions in videos. It begins by computing local video features to apply spectral clustering on a set of unlabeled training videos. For each cluster of videos, an undirected graph is constructed to extract a dominant set, which are known for high internal homogeneity and in-homogeneity between vertices outside it. Next, a discriminative clustering approach is applied, by training a classifier for each cluster, to iteratively select videos from the non-dominant set and obtain complete video action classes. Once classes are discovered, training videos within each cluster are selected to perform automatic spatio-temporal annotations, by first over-segmenting videos in each discovered class into supervoxels and constructing a directed graph to apply a variant of knapsack problem with temporal constraints. Knapsack optimization jointly collects a subset of supervoxels, by enforcing the annotated action to be spatio-temporally connected and its volume to be the size of an actor. These annotations are used to train SVM action classifiers. During testing, actions are localized using a similar Knapsack approach, where supervoxels are grouped together and SVM, learned using videos from discovered action classes, is used to recognize these actions. We evaluate our approach on UCF-Sports, Sub-JHMDB, JHMDB, THUMOS13 and UCF101 datasets. Our experiments suggest that despite using no action class labels and no bounding box annotations, we are able to get competitive results to the state-of-the-art supervised methods.",
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.",
"Temporal Action Proposal (TAP) generation is an important problem, as fast and accurate extraction of semantically important (e.g. human actions) segments from untrimmed videos is an important step for large-scale video analysis. We propose a novel Temporal Unit Regression Network (TURN) model. There are two salient aspects of TURN: (1) TURN jointly predicts action proposals and refines the temporal boundaries by temporal coordinate regression: (2) Fast computation is enabled by unit feature reuse: a long untrimmed video is decomposed into video units, which are reused as basic building blocks of temporal proposals. TURN outperforms the previous state-of-the-art methods under average recall (AR) by a large margin on THUMOS-14 and ActivityNet datasets, and runs at over 880 frames per second (FPS) on a TITAN X GPU. We further apply TURN as a proposal generation stage for existing temporal action localization pipelines, it outperforms state-of-the-art performance on THUMOS-14 and ActivityNet."
]
}
|
1901.03136
|
2908935617
|
More than ever, technical inventions are the symbol of our society's advance. Patents guarantee their creators protection against infringement. For an invention to be patentable, its novelty and inventiveness have to be assessed. Therefore, a search for published work that describes similar inventions to a given patent application needs to be performed. Currently, this so-called search for prior art is executed with semi-automatically composed keyword queries, which is not only time consuming but also prone to errors. In particular, errors may systematically arise from the fact that different keywords for the same technical concepts may exist across disciplines. In this paper, a novel approach is proposed, where the full text of a given patent application is compared to existing patents using machine learning and natural language processing techniques to automatically detect inventions that are similar to the one described in the submitted document. Various state-of-the-art approaches for feature extraction and document comparison are evaluated. In addition, the quality of the current search process is assessed based on ratings of a domain expert. The evaluation results show that our automated approach, besides accelerating the search process, also improves the search results for prior art with respect to their quality.
|
Most research concerned with facilitating and improving the search for a patent's prior art has focused on automatically composing and extending the search queries. For example, a manually formulated query can be improved by automatically including synonyms for the keywords using a thesaurus @cite_34 @cite_22 @cite_44 @cite_59 @cite_41 . A potential drawback of such an approach, however, is that the thesaurus itself has to be manually curated and extended @cite_76 . Another line of research focuses on pseudo-relevance feedback, where, given an initial search, the first @math search results are used to identify additional keywords with which the original query can be extended @cite_65 @cite_39 @cite_49 . Similarly, past queries @cite_21 or metadata such as citations can be used to augment the search query @cite_6 @cite_0 @cite_43 . A recent study has also examined the possibility of using a language model @cite_9 @cite_16 @cite_26 to automatically identify relevant words in the search results that can be used to extend the query @cite_46 .
|
{
"cite_N": [
"@cite_26",
"@cite_22",
"@cite_41",
"@cite_46",
"@cite_9",
"@cite_21",
"@cite_65",
"@cite_6",
"@cite_39",
"@cite_44",
"@cite_0",
"@cite_43",
"@cite_16",
"@cite_59",
"@cite_49",
"@cite_76",
"@cite_34"
],
"mid": [
"2141599568",
"349863593",
"2123613952",
"2531645065",
"1614298861",
"2060340204",
"94683886",
"2092465960",
"2123675809",
"",
"2090719489",
"2044413982",
"2153579005",
"1509974181",
"2113365997",
"",
"2009299552"
],
"abstract": [
"Continuous space language models have recently demonstrated outstanding results across a variety of tasks. In this paper, we examine the vector-space word representations that are implicitly learned by the input-layer weights. We find that these representations are surprisingly good at capturing syntactic and semantic regularities in language, and that each relationship is characterized by a relation-specific vector offset. This allows vector-oriented reasoning based on the offsets between words. For example, the male female relationship is automatically learned, and with the induced vector representations, “King Man + Woman” results in a vector very close to “Queen.” We demonstrate that the word vectors capture syntactic regularities by means of syntactic analogy questions (provided with this paper), and are able to correctly answer almost 40 of the questions. We demonstrate that the word vectors capture semantic regularities by using the vector offset method to answer SemEval-2012 Task 2 questions. Remarkably, this method outperforms the best previous systems.",
"In the patent domain Boolean retrieval is particularly common. But despite the importance of Boolean retrieval, there is not much work in current research assisting patent experts in formulating such queries. Currently, these approaches are mostly limited to the usage of standard dictionaries, such as WordNet, to provide synonymous expansion terms. In this paper we present a new approach to support patent searchers in the query generation process. We extract a lexical database, which we call PatNet, from real query sessions of patent examiners of the United Patent and Trademark Office (USPTO). PatNet provides several types of synonym relations. Further, we apply several query term expansion strategies to improve the precision measures of PatNet in suggesting expansion terms. Experiments based on real query sessions of patent examiners show a drastic increase in precision, when considering support of the synonym relations, US patent classes, and word senses.",
"Since patent documents are important technical resources, effective patent retrieval has become more and more crucial. Unlike common information retrieval, patent retrieval is a recall-oriented retrieval, and patent query inputs are usually long. However, current patent retrieval approaches cannot effectively capture user query intents and obtain good expansion terms, which lead to low retrieval effectiveness. To address this issue, this paper proposes a novel semantic query expansion-based patent retrieval approach according to patent-specific characteristics. Firstly, patent domain features are extracted by using a domain-dependent term frequency scheme. Based on domain features, query inputs are analyzed to determine query domains. Furthermore, query domain matching is employed to generate candidate expansion terms, and semantic-based similarity computation is adopted to select expansion terms. Experiment results show that our approach achieves better retrieval performance than other state-of-art approaches.",
"Query expansion is a well-known method for improving the performance of information retrieval systems. Pseudo-relevance feedback (PRF)-based query expansion is a type of query expansion approach that assumes the top-ranked retrieved documents are relevant. The addition of all the terms of PRF documents is not important or appropriate for expanding the original user query. Hence, the selection of proper expansion term is very important for improving retrieval system performance. Various individual query expansion term selection methods have been widely investigated for improving system performance. Every individual expansion term selection method has its own weaknesses and strengths. In order to minimize the weaknesses and utilizing the strengths of the individual method, we used multiple terms selection methods together. First, this paper explored the possibility of improving overall system performance by using individual query expansion terms selection methods. Further, ranks-aggregating method n...",
"",
"In the patent domain significant efforts are invested to assist researchers in formulating better queries, preferably via automated query expansion. Currently, automatic query expansion in patent search is mostly limited to computing co-occurring terms for the searchable features of the invention. Additional query terms are extracted automatically from patent documents based on entropy measures. Learning synonyms in the patent domain for automatic query expansion has been a difficult task. No dedicated sources providing synonyms for the patent domain, such as patent domain specific lexica or thesauri, are available. In this paper we focus on the highly professional search setting of patent examiners. In particular, we use query logs to learn synonyms for the patent domain. For automatic query expansion, we create term networks based on the query logs specifically for several USPTO patent classes. Experiments show good performance in automatic query expansion using these automatically generated term networks. Specifically, with a larger number of query logs for a specific patent US class available the performance of the learned term networks increases.",
"Pseudo-relevance feedback (PRF) is an effective approach in Information Retrieval but unfortunately many experiments have shown that PRF is ineffective in patent retrieval. This is because the quality of initial results in the patent retrieval is poor and therefore estimating a relevance model via PRF often hurts the retrieval performance due to off-topic terms. We propose a learning to rank framework for estimating the effectiveness of a patent document in terms of its performance in PRF. Specifically, the knowledge of effective feedback documents on past queries is used to estimate effective feedback documents for new queries. This is achieved by introducing features correlated with feedback document effectiveness. We use patent-specific contents to define such features. We then apply regression to predict document effectiveness given the proposed features. We evaluated the effectiveness of the proposed method on the patent prior art search collection CLEF-IP 2010. Our experimental results show significantly improved retrieval accuracy over a PRF baseline which expands the query using all top-ranked documents.",
"This paper proposes a method to combine text-based and citation-based retrieval methods in the invalidity patent search. Using the NTCIR-6 test collection including eight years of USPTO patents, we show the effectiveness of our method experimentally.",
"Queries in patent prior art search are full patent applications and much longer than standard ad hoc search and web search topics. Standard information retrieval (IR) techniques are not entirely effective for patent prior art search because of ambiguous terms in these massive queries. Reducing patent queries by extracting key terms has been shown to be ineffective mainly because it is not clear what the focus of the query is. An optimal query reduction algorithm must thus seek to retain the useful terms for retrieval favouring recall of relevant patents, but remove terms which impair IR effectiveness. We propose a new query reduction technique decomposing a patent application into constituent text segments and computing the Language Modeling (LM) similarities by calculating the probability of generating each segment from the top ranked documents. We reduce a patent query by removing the least similar segments from the query, hypothesising that removal of these segments can increase the precision of retrieval, while still retaining the useful context to achieve high recall. Experiments on the patent prior art search collection CLEF-IP 2010 show that the proposed method outperforms standard pseudo-relevance feedback (PRF) and a naive method of query reduction based on removal of unit frequency terms (UFTs).",
"",
"Patent prior art search is a type of search in the patent domain where documents are searched for that describe the work previously carried out related to a patent application. The goal of this search is to check whether the idea in the patent application is novel. Vocabulary mismatch is one of the main problems of patent retrieval which results in low retrievability of similar documents for a given patent application. In this paper we show how the term distribution of the cited documents in an initially retrieved ranked list can be used to address the vocabulary mismatch. We propose a method for query modeling estimation which utilizes the citation links in a pseudo relevance feedback set. We first build a topic dependent citation graph, starting from the initially retrieved set of feedback documents and utilizing citation links of feedback documents to expand the set. We identify the important documents in the topic dependent citation graph using a citation analysis measure. We then use the term distribution of the documents in the citation graph to estimate a query model by identifying the distinguishing terms and their respective weights. We then use these terms to expand our original query. We use CLEF-IP 2011 collection to evaluate the effectiveness of our query modeling approach for prior art search. We also study the influence of different parameters on the performance of the proposed method. The experimental results demonstrate that the proposed approach significantly improves the recall over a state-of-the-art baseline which uses the link-based structure of the citation graph but not the term distribution of the cited documents.",
"Prior art search or recommending citations for a patent application is a challenging task. Many approaches have been proposed and shown to be useful for prior art search. However, most of these methods do not consider the network structure for integrating and diffusion of different kinds of information present among tied patents in the citation network. In this paper, we propose a method based on a time-aware random walk on a weighted network of patent citations, the weights of which are characterized by contextual similarity relations between two nodes on the network. The goal of the random walker is to find influential documents in the citation network of a query patent, which can serve as candidates for drawing query terms and bigrams for query refinement. The experimental results on CLEF-IP datasets (CLEF-IP 2010 and CLEF-IP 2011) show the effectiveness of encoding contextual similarities (common classification codes, common inventor, and common applicant) between nodes in the citation network. Our proposed approach can achieve significantly better results in terms of recall and Mean Average Precision rates compared to strong baselines of prior art search.",
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.",
"This paper presents the experiments and results of DCU in CLEF-IP 2009. Our work applied standard information retrieval (IR) techniques to patent search. Different experiments tested various methods for the patent retrieval, including query formulation, structured index, weighted fields, document filtering, and blind relevance feedback. Some methods did not show expected good retrieval effectiveness such as blind relevance feedback, other experiments showed acceptable performance. Query formulation was the key to achieving better retrieval effectiveness, and this was performed through assigning higher weights to certain document fields. Further experiments showed that for longer queries, better results are achieved but at the expense of additional computations. For the best runs, the retrieval effectiveness is still lower than for IR applications for other domains, illustrating the difficulty of patent search. The official results have shown that among fifteen participants we achieved the seventh and the fourth ranks from the mean average precision (MAP) and recall point of view, respectively.",
"In this paper, we investigate the influence of term selection on retrieval performance on the CLEF-IP prior art test collection, using the Description section of the patent query with Language Model (LM) and BM25 scoring functions. We find that an oracular relevance feedback system that extracts terms from the judged relevant documents far outperforms the baseline and performs twice as well on MAP as the best competitor in CLEF-IP 2010. We find a very clear term selection value threshold for use when choosing terms. We also noticed that most of the useful feedback terms are actually present in the original query and hypothesized that the baseline system could be substantially improved by removing negative query terms. We tried four simple automated approaches to identify negative terms for query reduction but we were unable to notably improve on the baseline performance with any of them. However, we show that a simple, minimal interactive relevance feedback approach where terms are selected from only the first retrieved relevant document outperforms the best result from CLEF-IP 2010 suggesting the promise of interactive methods for term selection in patent prior art search.",
"",
"Patent retrieval is a recall-oriented search task where the objective is to find all possible relevant documents. Queries in patent retrieval are typically very long since they take the form of a patent claim or even a full patent application in the case of prior-art patent search. Nevertheless, there is generally a significant mismatch between the query and the relevant documents, often leading to low retrieval effectiveness. Some previous work has tried to address this mismatch through the application of query expansion (QE) techniques which have generally showed effectiveness for many other retrieval tasks. However, results of QE on patent search have been found to be very disappointing. We present a review of previous investigations of QE in patent retrieval, and explore some of these techniques on a prior-art patent search task. In addition, a novel method for QE using automatically generated synonyms set is presented. While previous QE techniques fail to improve over baseline retrieval, our new approach show statistically better retrieval precision over the baseline, although not for recall. In addition, it proves to be significantly more efficient than existing techniques. An extensive analysis to the results is presented which seeks to better understand situations where these QE techniques succeed or fail."
]
}
|
1901.03136
|
2908935617
|
More than ever, technical inventions are the symbol of our society's advance. Patents guarantee their creators protection against infringement. For an invention to be patentable, its novelty and inventiveness have to be assessed. Therefore, a search for published work that describes similar inventions to a given patent application needs to be performed. Currently, this so-called search for prior art is executed with semi-automatically composed keyword queries, which is not only time consuming but also prone to errors. In particular, errors may systematically arise from the fact that different keywords for the same technical concepts may exist across disciplines. In this paper, a novel approach is proposed, where the full text of a given patent application is compared to existing patents using machine learning and natural language processing techniques to automatically detect inventions that are similar to the one described in the submitted document. Various state-of-the-art approaches for feature extraction and document comparison are evaluated. In addition, the quality of the current search process is assessed based on ratings of a domain expert. The evaluation results show that our automated approach, besides accelerating the search process, also improves the search results for prior art with respect to their quality.
|
Approaches for automatically adapting and extending queries still require the patent examiner to manually formulate the initial search query. To make this step obsolete, heuristics can be used to automatically extract keywords from a given patent application @cite_56 @cite_2 @cite_29 , or a bag-of-words (BOW) approach can be used to transform the entire text of a patent into a list of words that can then be used to search for its prior art @cite_67 @cite_52 @cite_18 . Oftentimes, partial patent applications, such as an extended abstract, may already suffice to conduct the search @cite_52 . The search results can also be further refined with a graph-based ranking model @cite_4 or by using the patents' categories to filter the results @cite_60 . Different prior art search approaches have previously been discussed and benchmarked within the CLEF project, see e.g. @cite_54 and @cite_73 .
|
{
"cite_N": [
"@cite_67",
"@cite_18",
"@cite_4",
"@cite_60",
"@cite_29",
"@cite_54",
"@cite_52",
"@cite_56",
"@cite_2",
"@cite_73"
],
"mid": [
"",
"2031612684",
"1525595230",
"2396479859",
"2143175105",
"",
"1967684050",
"2295485705",
"73861561",
"195748530"
],
"abstract": [
"",
"Searching for prior-art patents is an essential step for the patent examiner to validate or invalidate a patent application. In this paper, we consider the whole patent as the query, which reduces the burden on the user, and also makes many more potential search features available. We explore how to automatically transform the query patent into an effective search query, especially focusing on the effect of different patent fields. Experiments show that the background summary of a patent is the most useful source of terms for generating a query, even though most previous work used the patent claims.",
"In this paper, the authors introduce TextRank, a graph-based ranking model for text processing, and show how this model can be successfully used in natural language applications.",
"In this paper we describe experiments conducted for CLEF-IP 2011 Prior Art Retrieval track. We examined the impact of 1) using key phrase extraction to generate queries from input patent and 2) the use of citation network and (International Patent Classification) IPC class vector in ranking patents. Variations of a popular key phrase extraction technique were explored for extracting and scoring terms of query patent. These terms are used as queries to retrieve similar patents. In the second approach, we use a two stage retrieval model to find similar patents. Each patent is represented as an IPC class vector. Citation network of patents is used to propagate these vectors from a node (patent) to its neighbors (cited patents). Similar patents are found by comparing query vector with vectors of patents in the corpus. Text based search is used to re-rank this solution set to improve precision. Two-stage system is used to retrieve and rank patents. Finally, we also extract and add citations present within the text of a query patent to the result set. Adding these citations (present in query patent text) to the results shows significant improvement in Mean Average Precision (MAP).",
"Invalidity search poses different challenges when compared to conventional Information Retrieval problems. Presently, the success of invalidity search relies on the queries created from a patent application by the patent examiner. Since a lot of time is spent in constructing relevant queries, automatically creating them from a patent would save the examiner a lot of effort. In this paper, we address the problem of automatically creating queries from an input patent. An optimal query can be formed by extracting important keywords or phrases from a patent by using Key Phrase Extraction (KPE) techniques. Several KPE algorithms have been proposed in the literature but their performance on query construction for patents has not yet been explored. We systematically evaluate and analyze the performance of queries created by using state-of-the-art KPE techniques for invalidity search task. Our experiments show that queries formed by KPE approaches perform better than those formed by selecting phrases based on tf or tf-idf scores.",
"",
"Patents are used by legal entities to legally protect their inventions and represent a multi-billion dollar industry of licensing and litigation. In 2014, 326,033 patent applications were approved in the US alone -- a number that has doubled in the past 15 years and which makes prior art search a daunting, but necessary task in the patent application process. In this work, we seek to investigate the efficacy of prior art search strategies from the perspective of the inventor who wishes to assess the patentability of their ideas prior to writing a full application. While much of the literature inspired by the evaluation framework of the CLEF-IP competition has aimed to assist patent examiners in assessing prior art for complete patent applications, less of this work has focused on patent search with queries representing partial applications. In the (partial) patent search setting, a query is often much longer than in other standard IR tasks, e.g., the description section may contain hundreds or even thousands of words. While the length of such queries may suggest query reduction strategies to remove irrelevant terms, intentional obfuscation and general language used in patents suggests that it may help to expand queries with additionally relevant terms. To assess the trade-offs among all of these pre-application prior art search strategies, we comparatively evaluate a variety of partial application search and query reformulation methods. Among numerous findings, querying with a full description, perhaps in conjunction with generic (non-patent specific) query reduction methods, is recommended for best performance. However, we also find that querying with an abstract represents the best trade-off in terms of writing effort vs. retrieval efficacy (i.e., querying with the description sections only lead to marginal improvements) and that for such relatively short queries, generic query expansion methods help.",
"",
"A tracking control system for controlling a relative positional relation between an optical pick-up and a rotating optical disc is provided. A tracking error signal is supplied to a low pass filter, to a lead phase compensating circuit and also to a zero cross comparator which supplies its output to a pulse generating circuit. During a normal tracking mode, in which the optical pick-up is maintained in alignment with a recording track of the rotating disc, an output from the low pass filter and an output from the lead phase compensating circuit are added to define a feed-back signal which is then supplied to the optical pick-up. On the other hand, during a track jump control mode, in which the optical pick-up is caused to jump to the next adjacent recording track, with the lead compensating circuit maintained in an inhibited state, an acceleration pulse is generated from the pulse generating circuit and then a deceleration pulse opposite in polarity is generated from the pulse generating circuit in response to an output from the zero cross comparator, wherein the output of the low pass filter is added with the acceleration and deceleration pulses to define a feed-back signal to be applied to the optical pick-up.",
"The first Clef-Ip test collection was made available in 2009 to support research in IR methods in the intellectual property domain; only one type of retrieval task Prior Art Search was given to the participants. Since then the test collection has been extended with both more content and varied types of tasks, reflecting various specific parts of patent experts' workflows. In 2013 we organized two tasks --- Passage Retrieval Starting from Claims and Structure Recognition --- on which we report in this work."
]
}
|
1901.03136
|
2908935617
|
More than ever, technical inventions are the symbol of our society's advance. Patents guarantee their creators protection against infringement. For an invention being patentable, its novelty and inventiveness have to be assessed. Therefore, a search for published work that describes similar inventions to a given patent application needs to be performed. Currently, this so-called search for prior art is executed with semi-automatically composed keyword queries, which is not only time consuming, but also prone to errors. In particular, errors may systematically arise by the fact that different keywords for the same technical concepts may exist across disciplines. In this paper, a novel approach is proposed, where the full text of a given patent application is compared to existing patents using machine learning and natural language processing techniques to automatically detect inventions that are similar to the one described in the submitted document. Various state-of-the-art approaches for feature extraction and document comparison are evaluated. In addition to that, the quality of the current search process is assessed based on ratings of a domain expert. The evaluation results show that our automated approach, besides accelerating the search process, also improves the search results for prior art with respect to their quality.
|
Calculating the similarity between texts is at the heart of a wide range of information retrieval tasks, such as search engine development, question answering, document clustering, or corpus visualization. Approaches for computing text similarities can be divided into similarity measures relying on word similarities and those based on document feature vectors @cite_23 .
|
{
"cite_N": [
"@cite_23"
],
"mid": [
"2171313960"
],
"abstract": [
"ABSTRACT Measuring the similarity between words, sentences, paragraphs and documents is an important component in various tasks such as information retrieval, document clustering, word-sense disambiguation, automatic essay scoring, short answer grading, machine translation and text summarization. This survey discusses the existing works on text similarity through partitioning them into three approaches; String-based, Corpus-based and Knowledge-based similarities. Furthermore, samples of combination between these similarities are presented. General Terms Text Mining, Natural Language Processing. Keywords BasedText Similarity, Semantic Similarity, String-Based Similarity, Corpus-Based Similarity, Knowledge-Based Similarity. NeedlemanWunsch 1. INTRODUCTION Text similarity measures play an increasingly important role in text related research and applications in tasks Nsuch as information retrieval, text classification, document clustering, topic detection, topic tracking, questions generation, question answering, essay scoring, short answer scoring, machine translation, text summarization and others. Finding similarity between words is a fundamental part of text similarity which is then used as a primary stage for sentence, paragraph and document similarities. Words can be similar in two ways lexically and semantically. Words are similar lexically if they have a similar character sequence. Words are similar semantically if they have the same thing, are opposite of each other, used in the same way, used in the same context and one is a type of another. DistanceLexical similarity is introduced in this survey though different String-Based algorithms, Semantic similarity is introduced through Corpus-Based and Knowledge-Based algorithms. String-Based measures operate on string sequences and character composition. A string metric is a metric that measures similarity or dissimilarity (distance) between two text strings for approximate string matching or comparison. 
Corpus-Based similarity is a semantic similarity measure that determines the similarity between words according to information gained from large corpora. Knowledge-Based similarity is a semantic similarity measure that determines the degree of similarity between words using information derived from semantic networks. The most popular for each type will be presented briefly. This paper is organized as follows: Section two presents String-Based algorithms by partitioning them into two types character-based and term-based measures. Sections three and four introduce Corpus-Based and knowledge-Based algorithms respectively. Samples of combinations between similarity algorithms are introduced in section five and finally section six presents conclusion of the survey."
]
}
|
1901.03136
|
2908935617
|
More than ever, technical inventions are the symbol of our society's advance. Patents guarantee their creators protection against infringement. For an invention being patentable, its novelty and inventiveness have to be assessed. Therefore, a search for published work that describes similar inventions to a given patent application needs to be performed. Currently, this so-called search for prior art is executed with semi-automatically composed keyword queries, which is not only time consuming, but also prone to errors. In particular, errors may systematically arise by the fact that different keywords for the same technical concepts may exist across disciplines. In this paper, a novel approach is proposed, where the full text of a given patent application is compared to existing patents using machine learning and natural language processing techniques to automatically detect inventions that are similar to the one described in the submitted document. Various state-of-the-art approaches for feature extraction and document comparison are evaluated. In addition to that, the quality of the current search process is assessed based on ratings of a domain expert. The evaluation results show that our automated approach, besides accelerating the search process, also improves the search results for prior art with respect to their quality.
|
Full text similarity measures have previously been used to improve search results for MEDLINE articles, where a two step approach using the cosine similarity measure between vectors in combination with a sentence alignment algorithm yielded superior results compared to the boolean search strategy used by PubMed @cite_19 . The Science Concierge @cite_63 computes the similarities between papers' abstracts to provide content based recommendations, however it still requires an initial keyword search to retrieve articles of interest. The PubVis web application by Horn @cite_72 , developed for visually exploring scientific corpora, also provides recommendations for similar articles given a submitted abstract by measuring overlapping terms in the document feature vectors. While full text similarity search approaches have shown potential in domains such as scientific literature, only a few studies have explored this approach for the much harder task of retrieving prior art for a new patent application @cite_50 , where much less overlap between text documents is to be expected due to the usage of very abstract and general terms when describing new inventions. Specifically, document representations created using recently developed neural network language models such as @cite_9 @cite_16 @cite_14 or @cite_69 were not yet evaluated on patent documents.
|
{
"cite_N": [
"@cite_69",
"@cite_14",
"@cite_9",
"@cite_19",
"@cite_72",
"@cite_50",
"@cite_63",
"@cite_16"
],
"mid": [
"2131744502",
"2963941405",
"1614298861",
"2099781321",
"2677552389",
"2142189579",
"2341468445",
"2153579005"
],
"abstract": [
"Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperforms bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.",
"",
"",
"Motivation: The most widely used literature search techniques, such as those offered by NCBI's PubMed system, require significant effort on the part of the searcher, and inexperienced searchers do not use these systems as effectively as experienced users. Improved literature search engines can save researchers time and effort by making it easier to locate the most important and relevant literature. Results: We have created and optimized a new, hybrid search system for Medline that takes natural text as input and then delivers results with high precision and recall. The combination of a fast, low-sensitivity weighted keyword-based first pass algorithm to cast a wide net to gather an initial set of literature, followed by a unique sentence-alignment based similarity algorithm to rank order those results was developed that is sensitive, fast and easy to use. Several text similarity search algorithms, both standard and novel, were implemented and tested in order to determine which obtained the best results in information retrieval exercises. Availability: Literature searching algorithms are implemented in a system called eTBLAST, freely accessible over the web at http: invention.swmed.edu. A variety of other derivative systems and visualization tools provides the user with an enhanced experience and additional capabilities. Contact: Harold.Garner@UTSouthwestern.edu",
"With an exponentially growing number of scientific papers published each year, advanced tools for exploring and discovering publications of interest are becoming indispensable. To empower users beyond a simple keyword search provided e.g. by Google Scholar, we present the novel web application PubVis. Powered by a variety of machine learning techniques, it combines essential features to help researchers find the content most relevant to them. An interactive visualization of a large collection of scientific publications provides an overview of the field and encourages the user to explore articles beyond a narrow research focus. This is augmented by personalized content based article recommendations as well as an advanced full text search to discover relevant references. The open sourced implementation of the app can be easily set up and run locally on a desktop computer to provide access to content tailored to the specific needs of individual users. Additionally, a PubVis demo with access to a collection of 10,000 papers can be tested online.",
"Since the huge database of patent documents is continuously increasing, the issue of classifying, updating and retrieving patent documents turned into an acute necessity. Therefore, we investigate the efficiency of applying Latent Semantic Indexing, an automatic indexing method of information retrieval, to some classes of patent documents from the United States Patent Classification System. We present some experiments that provide the optimal number of dimensions for the Latent Semantic Space and we compare the performance of Latent Semantic Indexing (LSI) to the Vector Space Model (VSM) technique applied to real life text documents, namely, patent documents. However, we do not strongly recommend the LSI as an improved alternative method to the VSM, since the results are not significantly better.",
"Finding relevant publications is important for scientists who have to cope with exponentially increasing numbers of scholarly material. Algorithms can help with this task as they help for music, movie, and product recommendations. However, we know little about the performance of these algorithms with scholarly material. Here, we develop an algorithm, and an accompanying Python library, that implements a recommendation system based on the content of articles. Design principles are to adapt to new content, provide near-real time suggestions, and be open source. We tested the library on 15K posters from the Society of Neuroscience Conference 2015. Human curated topics are used to cross validate parameters in the algorithm and produce a similarity metric that maximally correlates with human judgments. We show that our algorithm significantly outperformed suggestions based on keywords. The work presented here promises to make the exploration of scholarly material faster and more accurate.",
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible."
]
}
|
1901.02586
|
2910498695
|
We study the class of holomorphic and isometric submersions between finite-type Teichmüller spaces. We prove that, with potential exceptions coming from low-genus phenomena, any such map is a forgetful map @math obtained by filling in punctures. This generalizes a classical result of Royden and Earle-Kra asserting that biholomorphisms between finite-type Teichmüller spaces arise from mapping classes. As a key step in the argument, we prove that any @math -linear embedding @math between spaces of integrable quadratic differentials is, up to scale, pull-back by a holomorphic map. We accomplish this step by adapting methods developed by Markovic to study isometries of infinite-type Teichmüller spaces.
|
We mention also a result of Antonakoudis-Aramayona-Souto @cite_9 stating that any holomorphic map @math between moduli spaces is forgetful, as long as @math and @math . One can see this as a parallel of our result, with our metric constraint replaced by an equivariance condition.
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2736922977"
],
"abstract": [
"We prove that every non-constant holomorphic mapeM& em&⊂g,p& sub&→ eM& em&⊂& g',p'& sub& between moduli spaces of Riemann surfaces is a forgetful map, provided that g ≥ 6 and g' ≤ 2g-2."
]
}
|
1901.02586
|
2910498695
|
We study the class of holomorphic and isometric submersions between finite-type Teichmüller spaces. We prove that, with potential exceptions coming from low-genus phenomena, any such map is a forgetful map @math obtained by filling in punctures. This generalizes a classical result of Royden and Earle-Kra asserting that biholomorphisms between finite-type Teichmüller spaces arise from mapping classes. As a key step in the argument, we prove that any @math -linear embedding @math between spaces of integrable quadratic differentials is, up to scale, pull-back by a holomorphic map. We accomplish this step by adapting methods developed by Markovic to study isometries of infinite-type Teichmüller spaces.
|
S. Antonakoudis @cite_14 was the first to study isometric submersions in the context of Teichmüller theory. He proved that there is no holomorphic and Kobayashi-isometric submersion between a finite-dimensional Teichmüller space and a bounded symmetric domain, provided each is of complex dimension at least two.
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"2145814138"
],
"abstract": [
"We study isometric maps between Teichmuller spaces and bounded symmetric domains in their Kobayashi metric. We prove that every totally geodesic isometry from a disk to Teichmuller space is either holomorphic or anti-holomorphic; in particular, it is a Teichmuller disk. However, we prove that in dimensions two or more there are no holomorphic isometric immersions between Teichmuller spaces and bounded symmetric domains and also prove a similar result for isometric submersions."
]
}
|
1901.02586
|
2910498695
|
We study the class of holomorphic and isometric submersions between finite-type Teichmüller spaces. We prove that, with potential exceptions coming from low-genus phenomena, any such map is a forgetful map @math obtained by filling in punctures. This generalizes a classical result of Royden and Earle-Kra asserting that biholomorphisms between finite-type Teichmüller spaces arise from mapping classes. As a key step in the argument, we prove that any @math -linear embedding @math between spaces of integrable quadratic differentials is, up to scale, pull-back by a holomorphic map. We accomplish this step by adapting methods developed by Markovic to study isometries of infinite-type Teichmüller spaces.
|
The classification of holomorphic isometric submersions between bounded symmetric domains is an interesting problem. See the paper of Knese @cite_4 for the classification of holomorphic Kobayashi-isometric submersions from the polydisk @math to the disk @math . For Teichmüller-theoretic applications of this class of functions on the polydisk, see @cite_16 and @cite_5 .
|
{
"cite_N": [
"@cite_5",
"@cite_16",
"@cite_4"
],
"mid": [
"",
"2586093215",
"2085898155"
],
"abstract": [
"",
"We study the family of holomorphic maps from the polydisk to the disk which restrict to the identity on the diagonal. In particular, we analyze the asymptotics of the orbit of such a map under the conjugation action of a unipotent subgroup of @math . We discuss an application our results to the study of the Caratheodory metric on Teichmuller space.",
"We prove a generalization of the infinitesimal portion of the classical Schwarz lemma for functions from the polydisk to the disk. In particular, we describe the functions which play the role of automorphisms of the disk in this context-they turn out to be rational inner functions in the Schur-Agler class of the polydisk with an added symmetry constraint. In addition, some sufficient conditions are given for a function to be of this type."
]
}
|
1901.02757
|
2909934232
|
Various forms of representations may arise in the many layers embedded in deep neural networks (DNNs). Of these, where can we find the most compact representation? We propose to use a pruning framework to answer this question: How compact can each layer be compressed, without losing performance? Most of the existing DNN compression methods do not consider the relative compressibility of the individual layers. They uniformly apply a single target sparsity to all layers or adapt layer sparsity using heuristics and additional training. We propose a principled method that automatically determines the sparsity of individual layers derived from the importance of each layer. To do this, we consider a metric to measure the importance of each layer based on the layer-wise capacity. Given the trained model and the total target sparsity, we first evaluate the importance of each layer from the model. From the evaluated importance, we compute the layer-wise sparsity of each layer. The proposed method can be applied to any DNN architecture and can be combined with any pruning method that takes the total target sparsity as a parameter. To validate the proposed method, we carried out an image classification task with two types of DNN architectures on two benchmark datasets and used three pruning methods for compression. In case of VGG-16 model with weight pruning on the ImageNet dataset, we achieved up to 75% (17.5% on average) better top-5 accuracy than the baseline under the same total target sparsity. Furthermore, we analyzed where the maximum compression can occur in the network. This kind of analysis can help us identify the most compact representation within a deep neural network.
|
Pruning is a simple but efficient method for DNN model compression. \cite{zhu2017prune} showed that making large DNN models sparse by pruning can outperform small-dense DNN models trained from scratch. There are many pruning methods, varying in the granularity of pruning from weight pruning @cite_4 @cite_19 to channel pruning @cite_7 @cite_21 @cite_9 @cite_14 @cite_3 . These approaches mainly focused on how to select the redundant weights/filters in the model, rather than considering how many weights/filters need to be pruned in each layer.
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_21",
"@cite_3",
"@cite_19"
],
"mid": [
"2963287528",
"2963674932",
"2963363373",
"2515385951",
"2962851801",
"2786054724",
"2119144962"
],
"abstract": [
"We propose a new formulation for pruning convolutional kernels in neural networks to enable efficient inference. We interleave greedy criteria-based pruning with fine-tuning by backpropagation-a computationally efficient procedure that maintains good generalization in the pruned network. We propose a new criterion based on Taylor expansion that approximates the change in the cost function induced by pruning network parameters. We focus on transfer learning, where large pretrained networks are adapted to specialized tasks. The proposed criterion demonstrates superior performance compared to other criteria, e.g. the norm of kernel weights or feature map activation, for pruning large CNNs after adaptation to fine-grained classification tasks (Birds-200 and Flowers-102) relaying only on the first order gradient information. We also show that pruning can lead to more than 10x theoretical reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier. Finally, we show results for the large-scale ImageNet dataset to emphasize the flexibility of our approach.",
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.",
"In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks. Given a trained CNN model, we propose an iterative two-step algorithm to effectively prune each layer, by a LASSO regression based channel selection and least square reconstruction. We further generalize this algorithm to multi-layer and multi-branch cases. Our method reduces the accumulated error and enhance the compatibility with various architectures. Our pruned VGG-16 achieves the state-of-the-art results by 5× speed-up along with only 0.3 increase of error. More importantly, our method is able to accelerate modern networks like ResNet, Xception and suffers only 1.4 , 1.0 accuracy loss under 2× speedup respectively, which is significant.",
"The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34 and ResNet-110 by up to 38 on CIFAR10 while regaining close to the original accuracy by retraining the networks.",
"The deployment of deep convolutional neural networks (CNNs) in many real world applications is largely hindered by their high computational cost. In this paper, we propose a novel learning scheme for CNNs to simultaneously 1) reduce the model size; 2) decrease the run-time memory footprint; and 3) lower the number of computing operations, without compromising accuracy. This is achieved by enforcing channel-level sparsity in the network in a simple but effective way. Different from many existing approaches, the proposed method directly applies to modern CNN architectures, introduces minimum overhead to the training process, and requires no special software hardware accelerators for the resulting models. We call our approach network slimming, which takes wide and large networks as input models, but during training insignificant channels are automatically identified and pruned afterwards, yielding thin and compact models with comparable accuracy. We empirically demonstrate the effectiveness of our approach with several state-of-the-art CNN models, including VGGNet, ResNet and DenseNet, on various image classification datasets. For VGGNet, a multi-pass version of network slimming gives a 20× reduction in model size and a 5× reduction in computing operations.",
"Model pruning has become a useful technique that improves the computational efficiency of deep learning, making it possible to deploy solutions on resource-limited scenarios. A widely-used practice in relevant work assumes that a smaller-norm parameter or feature plays a less informative role at the inference time. In this paper, we propose a channel pruning technique for accelerating the computations of deep convolutional neural networks (CNNs), which does not critically rely on this assumption. Instead, it focuses on direct simplification of the channel-to-channel computation graph of a CNN without the need of performing a computational difficult and not always useful task of making high-dimensional tensors of CNN structured sparse. Our approach takes two stages: the first being to adopt an end-to-end stochastic training method that eventually forces the outputs of some channels being constant, and the second being to prune those constant channels from the original neural network by adjusting the biases of their impacting layers such that the resulting compact model can be quickly fine-tuned. Our approach is mathematically appealing from an optimization perspective and easy to reproduce. We experimented our approach through several image learning benchmarks and demonstrate its interesting aspects and the competitive performance.",
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency."
]
}
|
1901.02645
|
2908984097
|
Aggregating extra features of novel modality brings great advantages for building robust pedestrian detector under adverse illumination conditions. However, misaligned imagery still persists in multispectral scenario and will depress the performance of detector in a non-trivial way. In this paper, we first present and explore the cross-modality disparity problem in multispectral pedestrian detection, providing insights into the utilization of multimodal inputs. Then, to further address this issue, we propose a novel framework including a region feature alignment module and the region of interest (RoI) jittering training strategy. Moreover, dense, high-quality, and modality-independent color-thermal annotation pairs are provided to scrub the large-scale KAIST dataset to benefit future multispectral detection research. Extensive experiments demonstrate that the proposed approach improves the robustness of detector with a large margin and achieves state-of-the-art performance with high efficiency. Code and data will be publicly available.
|
As an essential step for various applications (robotics, autonomous driving, and video surveillance), pedestrian detection has attracted great attention from the computer vision community. Over the years, extensive features and algorithms have been proposed, including both traditional detectors (ICF @cite_36 , ACF @cite_22 , LDCF @cite_31 , Checkerboard @cite_1 ) and the recently dominant CNN-based detectors @cite_4 @cite_12 @cite_48 @cite_15 @cite_16 @cite_20 .
|
{
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_15",
"@cite_36",
"@cite_48",
"@cite_1",
"@cite_31",
"@cite_16",
"@cite_20",
"@cite_12"
],
"mid": [
"2497039038",
"2125556102",
"",
"2159386181",
"",
"2074777933",
"2170101770",
"",
"",
""
],
"abstract": [
"Detecting pedestrian has been arguably addressed as a special topic beyond general object detection. Although recent deep learning object detectors such as Fast Faster R-CNN have shown excellent performance for general object detection, they have limited success for detecting pedestrian, and previous leading pedestrian detectors were in general hybrid methods combining hand-crafted and deep convolutional features. In this paper, we investigate issues involving Faster R-CNN for pedestrian detection. We discover that the Region Proposal Network (RPN) in Faster R-CNN indeed performs well as a stand-alone pedestrian detector, but surprisingly, the downstream classifier degrades the results. We argue that two reasons account for the unsatisfactory accuracy: (i) insufficient resolution of feature maps for handling small instances, and (ii) lack of any bootstrapping strategy for mining hard negative examples. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection, using an RPN followed by boosted forests on shared, high-resolution convolutional feature maps. We comprehensively evaluate this method on several benchmarks (Caltech, INRIA, ETH, and KITTI), presenting competitive accuracy and good speed. Code will be made publicly available.",
"Multi-resolution image features may be approximated via extrapolation from nearby scales, rather than being computed explicitly. This fundamental insight allows us to design object detection algorithms that are as accurate, and considerably faster, than the state-of-the-art. The computational bottleneck of many modern detectors is the computation of features at every scale of a finely-sampled image pyramid. Our key insight is that one may compute finely sampled feature pyramids at a fraction of the cost, without sacrificing performance: for a broad family of features we find that features computed at octave-spaced scale intervals are sufficient to approximate features on a finely-sampled pyramid. Extrapolation is inexpensive as compared to direct feature computation. As a result, our approximation yields considerable speedups with negligible loss in detection accuracy. We modify three diverse visual recognition systems to use fast feature pyramids and show results on both pedestrian detection (measured on the Caltech, INRIA, TUD-Brussels and ETH data sets) and general object detection (measured on the PASCAL VOC). The approach is general and is widely applicable to vision algorithms requiring fine-grained multi-scale analysis. Our approximation is valid for images with broad spectra (most natural images) and fails for images with narrow band-pass spectra (e.g., periodic textures).",
"",
"We study the performance of ‘integral channel features’ for image classification tasks, focusing in particular on pedestrian detection. The general idea behind integral channel features is that multiple registered image channels are computed using linear and non-linear transformations of the input image, and then features such as local sums, histograms, and Haar features and their various generalizations are efficiently computed using integral images. Such features have been used in recent literature for a variety of tasks – indeed, variations appear to have been invented independently multiple times. Although integral channel features have proven effective, little effort has been devoted to analyzing or optimizing the features themselves. In this work we present a unified view of the relevant work in this area and perform a detailed experimental evaluation. We demonstrate that when designed properly, integral channel features not only outperform other features including histogram of oriented gradient (HOG), they also (1) naturally integrate heterogeneous sources of information, (2) have few parameters and are insensitive to exact parameter settings, (3) allow for more accurate spatial localization during detection, and (4) result in fast detectors when coupled with cascade classifiers.",
"",
"This paper starts from the observation that multiple top performing pedestrian detectors can be modelled by using an intermediate layer filtering low-level features in combination with a boosted decision forest. Based on this observation we propose a unifying framework and experimentally explore different filter families. We report extensive results enabling a systematic analysis. Using filtered channel features we obtain top performance on the challenging Caltech and KITTI datasets, while using only HOG+LUV as low-level features. When adding optical flow features we further improve detection quality and report the best known results on the Caltech dataset, reaching 93 recall at 1 FPPI.",
"Even with the advent of more sophisticated, data-hungry methods, boosted decision trees remain extraordinarily successful for fast rigid object detection, achieving top accuracy on numerous datasets. While effective, most boosted detectors use decision trees with orthogonal (single feature) splits, and the topology of the resulting decision boundary may not be well matched to the natural topology of the data. Given highly correlated data, decision trees with oblique (multiple feature) splits can be effective. Use of oblique splits, however, comes at considerable computational expense. Inspired by recent work on discriminative decorrelation of HOG features, we instead propose an efficient feature transform that removes correlations in local neighborhoods. The result is an overcomplete but locally decorrelated representation ideally suited for use with orthogonal decision trees. In fact, orthogonal trees with our locally decorrelated features outperform oblique trees trained over the original features at a fraction of the computational cost. The overall improvement in accuracy is dramatic: on the Caltech Pedestrian Dataset, we reduce false positives nearly tenfold over the previous state-of-the-art.",
"",
"",
""
]
}
|
1901.02645
|
2908984097
|
Aggregating extra features of novel modality brings great advantages for building robust pedestrian detector under adverse illumination conditions. However, misaligned imagery still persists in multispectral scenario and will depress the performance of detector in a non-trivial way. In this paper, we first present and explore the cross-modality disparity problem in multispectral pedestrian detection, providing insights into the utilization of multimodal inputs. Then, to further address this issue, we propose a novel framework including a region feature alignment module and the region of interest (RoI) jittering training strategy. Moreover, dense, high-quality, and modality-independent color-thermal annotation pairs are provided to scrub the large-scale KAIST dataset to benefit future multispectral detection research. Extensive experiments demonstrate that the proposed approach improves the robustness of detector with a large margin and achieves state-of-the-art performance with high efficiency. Code and data will be publicly available.
|
Most existing methods operate under the modality alignment assumption, commonly using fusion strategies (element-wise summation, concatenation, and Network-in-Network @cite_32 ) that directly fuse features of different modalities at corresponding pixel positions. However, the cross-modality disparity problem is pervasive in multispectral scenarios and adversely impacts detection performance, yet it remains understudied.
|
{
"cite_N": [
"@cite_32"
],
"mid": [
"2769924742"
],
"abstract": [
"Detecting individual pedestrians in a crowd remains a challenging problem since the pedestrians often gather together and occlude each other in real-world scenarios. In this paper, we first explore how a state-of-the-art pedestrian detector is harmed by crowd occlusion via experimentation, providing insights into the crowd occlusion problem. Then, we propose a novel bounding box regression loss specifically designed for crowd scenes, termed repulsion loss. This loss is driven by two motivations: the attraction by target, and the repulsion by other surrounding objects. The repulsion term prevents the proposal from shifting to surrounding objects thus leading to more crowd-robust localization. Our detector trained by repulsion loss outperforms all the state-of-the-art methods with a significant improvement in occlusion cases."
]
}
|
1901.02523
|
2909636570
|
The posterior matching scheme, for feedback encoding of a message point lying on the unit interval over memoryless channels, maximizes mutual information for an arbitrary number of channel uses. However, it in general does not always achieve any positive rate; so far, elaborate analyses have been required to show that it achieves any positive rate below capacity. More recent efforts have introduced a random "dither" shared by the encoder and decoder to the problem formulation, to simplify analyses and guarantee that the randomized scheme achieves any rate below capacity. Motivated by applications (e.g. human-computer interfaces) where (a) common randomness shared by the encoder and decoder may not be feasible and (b) the message point lies in a higher dimensional space, we focus here on the original formulation without common randomness, and use optimal transport theory to generalize the scheme for a message point in a higher dimensional space. By defining a stricter, almost sure, notion of message decoding, we use classical probabilistic techniques (e.g. change of measure and martingale convergence) to establish succinct necessary and sufficient conditions on when the message point can be recovered from infinite observations: Birkhoff ergodicity of a random process sequentially generated by the encoder. We also show a surprising "all or nothing" result: the same ergodicity condition is necessary and sufficient to achieve any rate below capacity. We provide applications of this message point framework in human-computer interfaces and multi-antenna communications.
|
Horstein first introduced a problem of this framework for the binary symmetric channel (BSC) @cite_58 , where the message point lies on the @math interval. In this work, Horstein showed that the median of the posterior distribution is a sufficient statistic for the decoder to provide the encoder, whereupon the subsequent channel input signals whether the message point is larger or smaller than this threshold. Subsequently, Schalkwijk and Kailath @cite_35 @cite_17 considered signaling a message point on the @math interval over the additive white Gaussian noise (AWGN) channel with feedback. There, they showed a close connection with estimation theory, where the minimum mean square error (MMSE) estimate of the message given all the observations plays a key role in the feedback encoding scheme. They also showed that not only can capacity be achieved, but a doubly exponential error exponent is attainable as well.
|
{
"cite_N": [
"@cite_35",
"@cite_58",
"@cite_17"
],
"mid": [
"2157875630",
"2040877654",
"2096307573"
],
"abstract": [
"In some communication problems, it is a good assumption that the channel consists of an additive white Gaussian noise forward link and an essentially noiseless feedback link. In this paper, we study channels where no bandwidth constraint is placed on the transmitted signals. Such channels arise in space communications. It is known that the availability of the feedback link cannot increase the channel capacity of the noisy forward link, but it can considerably reduce the coding effort required to achieve a given level of performance. We present a coding scheme that exploits the feedback to achieve considerable reductions in coding and decoding complexity and delay over what would be needed for comparable performance with the best known (simplex) codes for the one-way channel. Our scheme, which was motivated by the Robbins-Monro stochastic approximation technique, can also be used over channels where the additive noise is not Gaussian but is still independent from instant to instant. An extension of the scheme for channels with limited signal bandwidth is presented in a companion paper (Part II).",
"The presence of a feedback channel makes possible a variety of sequential transmission procedures, each of which can be classified as either a block-transmission or a continuous-transmission scheme according to the way in which information is encoded for transmission over a noisy forward channel. A sequential continuous-transmission system employing a binary symmetric forward channel (but which is suitable for use with any discrete memoryless forward channel) and a noiseless feedback channel is described. Its error exponent is shown to be substantially greater than the optimum block-code error exponent at all transmission rates less than channel capacity. The average value and the first-order probability distribution of the effective constraint length, found by simulating the system on an IBM 709 computer, are also given.",
"In Part I of this paper, we presented a scheme for effectively exploiting a noiseless feedback link associated with an additive white Gaussian noise channel with no signal bandwidth constraints. We now extend the scheme for this channel, which we shall call the wideband (WB) scheme, to a band-limited (BL) channel with signal bandwidth restricted to (-W, W). Our feedback scheme achieves the well-known channel capacity, C = W \log(1 + P_{av} (N_0 W)), for this system and, in fact, is apparently the first deterministic procedure for doing this. We evaluate the fairly simple exact error probability for our scheme and find that it provides considerable improvements over the best-known results (which are lower bounds on the performance of sphere-packed codes) for the one-way channel. We also study the degradation in performance of our scheme when there is noise in the feedback link."
]
}
|
1901.02523
|
2909636570
|
The posterior matching scheme, for feedback encoding of a message point lying on the unit interval over memoryless channels, maximizes mutual information for an arbitrary number of channel uses. However, it in general does not always achieve any positive rate; so far, elaborate analyses have been required to show that it achieves any positive rate below capacity. More recent efforts have introduced a random "dither" shared by the encoder and decoder to the problem formulation, to simplify analyses and guarantee that the randomized scheme achieves any rate below capacity. Motivated by applications (e.g. human-computer interfaces) where (a) common randomness shared by the encoder and decoder may not be feasible and (b) the message point lies in a higher dimensional space, we focus here on the original formulation without common randomness, and use optimal transport theory to generalize the scheme for a message point in a higher dimensional space. By defining a stricter, almost sure, notion of message decoding, we use classical probabilistic techniques (e.g. change of measure and martingale convergence) to establish succinct necessary and sufficient conditions on when the message point can be recovered from infinite observations: Birkhoff ergodicity of a random process sequentially generated by the encoder. We also show a surprising "all or nothing" result: the same ergodicity condition is necessary and sufficient to achieve any rate below capacity. We provide applications of this message point framework in human-computer interfaces and multi-antenna communications.
|
Shayevitz and Feder introduced the posterior matching scheme for the message point lying on the @math interval in @cite_57 that is applicable to any memoryless channel. It includes the encoding schemes for the BSC and AWGN channel by Horstein and Schalkwijk-Kailath as special cases and provides the first rigorous proof that the Horstein scheme achieves capacity on the BSC.
|
{
"cite_N": [
"@cite_57"
],
"mid": [
"2011914761"
],
"abstract": [
"In this paper, we introduce a fundamental principle for optimal communication over general memoryless channels in the presence of noiseless feedback, termed posterior matching. Using this principle, we devise a (simple, sequential) generic feedback transmission scheme suitable for a large class of memoryless channels and input distributions, achieving any rate below the corresponding mutual information. This provides a unified framework for optimal feedback communication in which the Horstein scheme (BSC) and the Schalkwijk-Kailath scheme (AWGN channel) are special cases. Thus, as a corollary, we prove that the Horstein scheme indeed attains the BSC capacity, settling a longstanding conjecture. We further provide closed form expressions for the error probability of the scheme over a range of rates, and derive the achievable rates in a mismatch setting where the scheme is designed according to the wrong channel model. Several illustrative examples of the posterior matching scheme for specific channels are given, and the corresponding error probability expressions are evaluated. The proof techniques employed utilize novel relations between information rates and contraction properties of iterated function systems."
]
}
|
1901.02523
|
2909636570
|
The posterior matching scheme, for feedback encoding of a message point lying on the unit interval over memoryless channels, maximizes mutual information for an arbitrary number of channel uses. However, it in general does not always achieve any positive rate; so far, elaborate analyses have been required to show that it achieves any positive rate below capacity. More recent efforts have introduced a random "dither" shared by the encoder and decoder to the problem formulation, to simplify analyses and guarantee that the randomized scheme achieves any rate below capacity. Motivated by applications (e.g. human-computer interfaces) where (a) common randomness shared by the encoder and decoder may not be feasible and (b) the message point lies in a higher dimensional space, we focus here on the original formulation without common randomness, and use optimal transport theory to generalize the scheme for a message point in a higher dimensional space. By defining a stricter, almost sure, notion of message decoding, we use classical probabilistic techniques (e.g. change of measure and martingale convergence) to establish succinct necessary and sufficient conditions on when the message point can be recovered from infinite observations: Birkhoff ergodicity of a random process sequentially generated by the encoder. We also show a surprising "all or nothing" result: the same ergodicity condition is necessary and sufficient to achieve any rate below capacity. We provide applications of this message point framework in human-computer interfaces and multi-antenna communications.
|
Applications and variations of this formulation are manifold. The Horstein scheme, combined with arithmetic coding of a sequence of symbols in an ordered symbolic alphabet to represent any sequence via enumerative source encoding @cite_56 as a message point on the @math line, has been used in brain-computer interfaces to specify a sentence or a smooth path @cite_18 , to navigate mobile robots @cite_14 , and to remotely teleoperate an unmanned aircraft @cite_8 . Many active learning problems borrow principles from posterior matching, albeit with different formulations or performance criteria; generalized 20 questions for target search @cite_26 @cite_24 and generalized binary search for function search @cite_49 @cite_37 serve as examples. Naghshvar and Javidi considered variable-length encoding with feedback and utilized principles from stochastic control and posterior matching to establish non-asymptotic upper bounds on expected code length and to construct deterministic one-phase coding schemes that achieve capacity and attain optimal error exponents @cite_9 . More general variable-length formulations of active hypothesis testing, where the statistics of the measurement may depend on the query or are controllable, have been developed in @cite_10 @cite_4 @cite_23 @cite_51 . Feedback coding over channels with memory using principles from posterior matching has been explored in @cite_3 @cite_50 @cite_59 @cite_7 .
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_26",
"@cite_37",
"@cite_4",
"@cite_8",
"@cite_7",
"@cite_9",
"@cite_3",
"@cite_56",
"@cite_24",
"@cite_49",
"@cite_23",
"@cite_51",
"@cite_50",
"@cite_59",
"@cite_10"
],
"mid": [
"",
"2168790074",
"2062626064",
"1965313644",
"2963912378",
"2010661952",
"2963656208",
"1992720861",
"2163608699",
"2034323860",
"2725485065",
"2106447856",
"2540076116",
"2744584070",
"2081237665",
"2509220592",
"1990503235"
],
"abstract": [
"",
"This paper presents an interface for navigating a mobile robot that moves at a fixed speed in a planar workspace, with noisy binary inputs that are obtained asynchronously at low bit-rates from a human user through an electroencephalograph (EEG). The approach is to construct an ordered symbolic language for smooth planar curves and to use these curves as desired paths for a mobile robot. The underlying problem is then to design a communication protocol by which the user can, with vanishing error probability, specify a string in this language using a sequence of inputs. Such a protocol, provided by tools from information theory, relies on a human user's ability to compare smooth curves, just like they can compare strings of text. We demonstrate our interface by performing experiments in which twenty subjects fly a simulated aircraft at a fixed speed and altitude with input only from EEG. Experimental results show that the majority of subjects are able to specify desired paths despite a wide range of errors made in decoding EEG signals.",
"We consider the problem of 20 questions with noise for multiple players under the minimum entropy criterion in the setting of stochastic search, with application to target localization. Each player yields a noisy response to a binary query governed by a certain error probability. First, we propose a sequential policy for constructing questions that queries each player in sequence and refines the posterior of the target location. Second, we consider a joint policy that asks all players questions in parallel at each time instant and characterize the structure of the optimal policy for constructing the sequence of questions. This generalizes the single player probabilistic bisection method for stochastic search problems. Third, we prove an equivalence between the two schemes showing that, despite the fact that the sequential scheme has access to a more refined filtration, the joint scheme performs just as well on average. Fourth, we establish convergence rates of the mean-square error and derive error exponents. Finally, we obtain an extension to the case of unknown error probabilities. This framework provides a mathematical model for incorporating a human in the loop for active machine learning systems.",
"This paper investigates the problem of determining a binary-valued function through a sequence of strategically selected queries. The focus is an algorithm called Generalized Binary Search (GBS). GBS is a well-known greedy algorithm for determining a binary-valued function through a sequence of strategically selected queries. At each step, a query is selected that most evenly splits the hypotheses under consideration into two disjoint subsets, a natural generalization of the idea underlying classic binary search. This paper develops novel incoherence and geometric conditions under which GBS achieves the information-theoretically optimal query complexity; i.e., given a collection of N hypotheses, GBS terminates with the correct function after no more than a constant times logN queries. Furthermore, a noise-tolerant version of GBS is developed that also achieves the optimal query complexity. These results are applied to learning halfspaces, a problem arising routinely in image processing and machine learning.",
"Consider a target moving at a constant velocity on a unit-circumference circle, starting at an arbitrary location. To acquire the target, any region of the circle can be probed to obtain a noisy measurement of the target’s presence, where the noise level increases with the size of the probed region. We are interested in the expected time required to find the target to within some given resolution and error probability. For a known velocity and a given reliability, we provide an asymptotical characterization of the optimal tradeoff between time and resolution. Considering an asymptotically diminishing error probability, we derive the maximal targeting rate, and show that in contrast to the well-studied case of constant measurement noise, measurement dependent noise incurs a multiplicative gap in the maximal targeting rate between adaptive and non-adaptive search strategies. Moreover, for all rates below this maximal rate, our adaptive strategy attains the optimal rate-reliability tradeoff. We further show that accounting for a target moving at an unknown fixed velocity, the optimal non-adaptive search strategy incurs a factor of at least two in the maximal targeting rate.",
"This article presents a new approach to designing brain–computer interfaces (BCIs) that explicitly accounts for both the uncertainty of neural signals and the important role of sensory feedback. This approach views a BCI as the means by which users communicate intent to an external device and models intent as a string in an ordered symbolic language. This abstraction allows the problem of designing a BCI to be reformulated as the problem of designing a reliable communication protocol using tools from feedback information theory. Here, this protocol is given by a posterior matching scheme. This scheme is not only provably optimal but also easily understood and implemented by a human user. Experimental validation is provided by an interface for text entry and an interface for tracing smooth planar curves, where input is taken in each case from an electroencephalograph during left- and right-hand motor imagery.",
"The reliability function of memoryless channels with noiseless feedback and variable-length coding has been found to be a linear function of the average rate in the classic work of Burnashev. In this work we consider unifilar channels with noiseless feedback and study specific transmission schemes, the performance of which provides lower bounds for the channel reliability function. In unifilar channels the channel state evolves in a deterministic fashion based on the previous state, input, and output, and is known to the transmitter but is unknown to the receiver. We consider a two-stage transmission scheme. In the first stage, both transmitter and receiver summarize their common information in an M-dimensional vector with elements in the state space of the unifilar channel and an M-dimensional probability mass function, with M being the number of messages. The second stage, which is entered when one of the messages is sufficiently reliable, is resolving a binary hypothesis testing problem. The analysis assumes the presence of some common randomness shared by the transmitter and receiver, and is based on the study of the log-likelihood ratio of the transmitted message posterior belief, and in particular on the study of its multistep drift. Simulation results confirm that the bounds are tight compared to the upper bounds derived in a companion paper.",
"This paper considers the problem of variable-length coding over a discrete memoryless channel with noiseless feedback. This paper provides a stochastic control view of the problem whose solution is analyzed via a newly proposed symmetrized divergence, termed extrinsic Jensen–Shannon (EJS) divergence. It is shown that strictly positive lower bounds on EJS divergence provide nonasymptotic upper bounds on the expected code length. This paper presents strictly positive lower bounds on EJS divergence, and hence nonasymptotic upper bounds on the expected code length, for the following two coding schemes: 1) variable-length posterior matching and 2) MaxEJS coding scheme that is based on a greedy maximization of the EJS divergence. As an asymptotic corollary of the main results, this paper also provides a rate–reliability test. Variable-length coding schemes that satisfy the condition(s) of the test for parameters @math and @math are guaranteed to achieve a rate @math and an error exponent @math . The results are specialized for posterior matching and MaxEJS to obtain deterministic one-phase coding schemes achieving capacity and optimal error exponent. For the special case of symmetric binary-input channels, simpler deterministic schemes of optimal performance are proposed and analyzed.",
"For a memoryless channel, although feedback cannot increase capacity, it can reduce the complexity and or improve the error performance of a communication system. Recently, Shayevitz and Feder proposed the posterior matching scheme (PMS) which is a simple recursive transmission scheme that achieves the capacity of memoryless channels with feedback. Furthermore, Coleman provided a Lyapunov function approach to prove capacity achievability of the PMS. In this paper, we investigate a capacity-achieving PMS for the case of finite-state channels (FSCs). We first derive a single-letter expression for the capacity of the FSC with delayed output and state feedback by formulating the problem in a stochastic control framework. The resulting capacity expression can be evaluated using dynamic programming. We then propose a simple recursive PMS-like transmission scheme. To prove capacity achievability of the proposed PMS, we identify an appropriate Markov chain induced by the PMS.",
"Let S be a given subset of binary n-sequences. We provide an explicit scheme for calculating the index of any sequence in S according to its position in the lexicographic ordering of S . A simple inverse algorithm is also given. Particularly nice formulas arise when S is the set of all n -sequences of weight k and also when S is the set of all sequences having a given empirical Markov property. Schalkwijk and Lynch have investigated the former case. The envisioned use of this indexing scheme is to transmit or store the index rather than the sequence, thus resulting in a data compression of ( ) n .",
"Abstract This paper considers the problem of adaptively searching for an unknown target using multiple agents connected through a time-varying network topology. Agents are equipped with sensors capable of fast information processing, and we propose a decentralized collaborative algorithm for controlling their search given noisy observations. Specifically, we propose decentralized extensions of the adaptive query-based search strategy that combines elements from the 20 questions approach and social learning. Under standard assumptions on the time-varying network dynamics, we prove convergence to correct consensus on the value of the parameter as the number of iterations go to infinity. The convergence analysis takes a novel approach using martingale-based techniques combined with spectral graph theory. Our results establish that stability and consistency can be maintained even with one-way updating and randomized pairwise averaging, thus providing a scalable low complexity method with performance guarantees. We illustrate the effectiveness of our algorithm for random network topologies.",
"This paper analyzes the potential advantages and theoretical challenges of \"active learning\" algorithms. Active learning involves sequential sampling procedures that use information gleaned from previous samples in order to focus the sampling and accelerate the learning process relative to \"passive learning\" algorithms, which are based on nonadaptive (usually random) samples. There are a number of empirical and theoretical results suggesting that in certain situations active learning can be significantly more effective than passive learning. However, the fact that active learning algorithms are feedback systems makes their theoretical analysis very challenging. This paper aims to shed light on achievable limits in active learning. Using minimax analysis techniques, we study the achievable rates of classification error convergence for broad classes of distributions characterized by decision boundary regularity and noise conditions. The results clearly indicate the conditions under which one can expect significant gains through active learning. Furthermore, we show that the learning rates derived are tight for \"boundary fragment\" classes in d-dimensional feature spaces when the feature marginal density is bounded from above and below.",
"Consider a target search problem on a unit interval where at any given time an agent can choose a region to probe into for the presence of the target in that region. The measurement noise is assumed to be increasing with the size of the search region the agent chooses. In this paper, a single-phase sequential and adaptive search algorithm is proposed and shown to achieve the best possible targeting rate and error exponent among all adaptive search algorithms. The proposed algorithm simply adopts a low complexity sorting operation on the posterior of the target and then pick up locations with larger posterior until the probability that the search region contains the target is closest to half.",
"This paper considers the problem of searching for the unknown location of a target among a finite number of possible locations by probing multiple locations simultaneously. Outcome of each search measurement is corrupted by Gaussian noise whose intensity is proportional to the number of locations probed. We characterize a non-asymptotic lower bound on adaptivity gain; i.e. reduction in the expected number of measurements under an adaptive search strategies over the non-adaptive search strategies. Then we investigate the adaptivity gain in two complementary asymptotic regimes: one where the total search area is kept fixed but the location width is shrinking or the search resolution is increasing, and the other where each location width is fixed but the total search area is growing. Interestingly, adaptivity gain grows in distinctly different manner in these two regimes. In particular, adaptivity gains are significant in the later regime when the total search space grows; implying adaptivity is far more critical when either total search area or the noise intensity is large.",
"The capacity of unifilar finite-state channels with feedback has been recently derived in the form of a single-letter expression and it has been evaluated analytically for a number of channels of interest, such as the trapdoor channel and the Ising channel with feedback. In this paper, we investigate transmission schemes for this class of channels. These schemes are inspired by the posterior matching scheme (PMS) introduced for memoryless channels with feedback. The transmission scheme is proven to achieve zero rate and is conjectured to achieve channel capacity.",
"Shayevitz and Feder proposed a capacity-achieving sequential transmission scheme for memoryless channels called posterior matching (PM). The proof of capacity achievability of PM is involved and requires invertibility of the PM kernel (also referred to as one-step invertibility). Recent work by the same authors provided a simpler proof but still requires PM kernel invertibility.",
"Consider a decision maker who is responsible to dynamically collect observations so as to enhance his information about an underlying phenomena of interest in a speedy manner while accounting for the penalty of wrong declaration. Due to the sequential nature of the problem, the decision maker relies on his current information state to adaptively select the most informative'' sensing action among the available ones. In this paper, using results in dynamic programming, lower bounds for the optimal total cost are established. The lower bounds characterize the fundamental limits on the maximum achievable information acquisition rate and the optimal reliability. Moreover, upper bounds are obtained via an analysis of two heuristic policies for dynamic selection of actions. It is shown that the first proposed heuristic achieves asymptotic optimality, where the notion of asymptotic optimality, due to Chernoff, implies that the relative difference between the total cost achieved by the proposed policy and the optimal total cost approaches zero as the penalty of wrong declaration (hence the number of collected samples) increases. The second heuristic is shown to achieve asymptotic optimality only in a limited setting such as the problem of a noisy dynamic search. However, by considering the dependency on the number of hypotheses, under a technical condition, this second heuristic is shown to achieve a nonzero information acquisition rate, establishing a lower bound for the maximum achievable rate and error exponent. In the case of a noisy dynamic search with size-independent noise, the obtained nonzero rate and error exponent are shown to be maximum."
]
}
|
1901.02523
|
2909636570
|
The posterior matching scheme, for feedback encoding of a message point lying on the unit interval over memoryless channels, maximizes mutual information for an arbitrary number of channel uses. However, it does not in general achieve a positive rate; so far, elaborate analyses have been required to show that it achieves any positive rate below capacity. More recent efforts have introduced a random "dither" shared by the encoder and decoder to the problem formulation, to simplify analyses and guarantee that the randomized scheme achieves any rate below capacity. Motivated by applications (e.g. human-computer interfaces) where (a) common randomness shared by the encoder and decoder may not be feasible and (b) the message point lies in a higher dimensional space, we focus here on the original formulation without common randomness, and use optimal transport theory to generalize the scheme for a message point in a higher dimensional space. By defining a stricter, almost sure, notion of message decoding, we use classical probabilistic techniques (e.g. change of measure and martingale convergence) to establish succinct necessary and sufficient conditions on when the message point can be recovered from infinite observations: Birkhoff ergodicity of a random process sequentially generated by the encoder. We also show a surprising "all or nothing" result: the same ergodicity condition is necessary and sufficient to achieve any rate below capacity. We provide applications of this message point framework in human-computer interfaces and multi-antenna communications.
|
Although the PM scheme maximizes mutual information, in some situations the posterior @math never converges to a point mass at @math , implying that no positive rate is achievable. Example 11 of @cite_57 shows that the breakdown of reliability, or of achieving capacity, is not solely a property of the channel: for the same channel, variants of the original scheme involving measure-preserving transformations of the input can ameliorate these issues. Sufficient conditions for reliability (Lemma 13) and for achieving capacity (Theorem 4) were originally established in @cite_57 , involving 'regularity' and uniformly bounded max-to-min ratio assumptions, along with elaborate fixed-point analysis. This has led researchers to slightly alter the problem formulation and encoding schemes so as to more simply confirm guarantees on subsequent performance. Li and El Gamal @cite_27 considered a non-sequential, fixed-rate, fixed-block-length feedback coding scheme for discrete memoryless channels (DMCs) when @math , and introduced a random dither known to both the encoder and decoder to provide a simple proof of achieving capacity.
|
{
"cite_N": [
"@cite_57",
"@cite_27"
],
"mid": [
"2011914761",
"1975231568"
],
"abstract": [
"In this paper, we introduce a fundamental principle for optimal communication over general memoryless channels in the presence of noiseless feedback, termed posterior matching. Using this principle, we devise a (simple, sequential) generic feedback transmission scheme suitable for a large class of memoryless channels and input distributions, achieving any rate below the corresponding mutual information. This provides a unified framework for optimal feedback communication in which the Horstein scheme (BSC) and the Schalkwijk-Kailath scheme (AWGN channel) are special cases. Thus, as a corollary, we prove that the Horstein scheme indeed attains the BSC capacity, settling a longstanding conjecture. We further provide closed form expressions for the error probability of the scheme over a range of rates, and derive the achievable rates in a mismatch setting where the scheme is designed according to the wrong channel model. Several illustrative examples of the posterior matching scheme for specific channels are given, and the corresponding error probability expressions are evaluated. The proof techniques employed utilize novel relations between information rates and contraction properties of iterated function systems.",
"Existing fixed-length feedback communication schemes are either specialized to particular channels (Schalkwijk–Kailath, Horstein), or apply to general channels but either have high coding complexity (block feedback schemes) or are difficult to analyze (posterior matching). This paper introduces a new fixed-length feedback coding scheme which achieves the capacity for all discrete memoryless channels, has an error exponent that approaches the sphere packing bound as the rate approaches the capacity, and has @math coding complexity. These benefits are achieved by judiciously combining features from previous schemes with new randomization technique and encoding decoding rule. These new features make the analysis of the error probability for the new scheme easier than for posterior matching."
]
}
|
1901.02523
|
2909636570
|
The posterior matching scheme, for feedback encoding of a message point lying on the unit interval over memoryless channels, maximizes mutual information for an arbitrary number of channel uses. However, it does not in general achieve a positive rate; so far, elaborate analyses have been required to show that it achieves any positive rate below capacity. More recent efforts have introduced a random "dither" shared by the encoder and decoder to the problem formulation, to simplify analyses and guarantee that the randomized scheme achieves any rate below capacity. Motivated by applications (e.g. human-computer interfaces) where (a) common randomness shared by the encoder and decoder may not be feasible and (b) the message point lies in a higher dimensional space, we focus here on the original formulation without common randomness, and use optimal transport theory to generalize the scheme for a message point in a higher dimensional space. By defining a stricter, almost sure, notion of message decoding, we use classical probabilistic techniques (e.g. change of measure and martingale convergence) to establish succinct necessary and sufficient conditions on when the message point can be recovered from infinite observations: Birkhoff ergodicity of a random process sequentially generated by the encoder. We also show a surprising "all or nothing" result: the same ergodicity condition is necessary and sufficient to achieve any rate below capacity. We provide applications of this message point framework in human-computer interfaces and multi-antenna communications.
|
Also, Shayevitz and Feder @cite_6 examined a randomized variant of the PM scheme with a random dither and provided a much simpler proof of optimality over general memoryless channels when @math . That is, in contrast to the proofs above, which are restricted to non-sequential variants with a fixed number of messages and apply only to DMCs, Shayevitz and Feder @cite_6 used a random dither to prove optimality of the sequential, horizon-free posterior matching scheme over general memoryless channels. Settings where the encoder and decoder share a common source of randomness by way of a dither may, however, be undesirable in some situations (e.g. when considering human involvement as described above).
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2964122424"
],
"abstract": [
"Posterior matching (PM) is a sequential horizon-free feedback communication scheme introduced by the authors, who also provided a rather involved optimality proof, showing that it achieves capacity for a large class of memoryless channels. considered a non-sequential variation of PM with a fixed number of messages and a random decision-time, and gave a simpler proof establishing its optimality via a novel extrinsic Jensen–Shannon divergence argument. Another simpler optimality proof was given by Li and El Gamal, who considered a fixed-rate fixed block-length variation of PM with an additional randomization. Both these works also provided error exponent bounds. However, their simpler achievability proofs apply only to discrete memoryless channels, and are restricted to a non-sequential setup with a fixed number of messages. In this paper, we provide a short and transparent proof for the optimality of the fully sequential randomized horizon-free PM scheme over general memoryless channels. Borrowing the key randomization idea of Li and El Gamal, our proof is based on analyzing the random walk behavior of the shrinking posterior intervals induced by a reversed iterated function system decoder."
]
}
|
1901.02523
|
2909636570
|
The posterior matching scheme, for feedback encoding of a message point lying on the unit interval over memoryless channels, maximizes mutual information for an arbitrary number of channel uses. However, it does not in general achieve a positive rate; so far, elaborate analyses have been required to show that it achieves any positive rate below capacity. More recent efforts have introduced a random "dither" shared by the encoder and decoder to the problem formulation, to simplify analyses and guarantee that the randomized scheme achieves any rate below capacity. Motivated by applications (e.g. human-computer interfaces) where (a) common randomness shared by the encoder and decoder may not be feasible and (b) the message point lies in a higher dimensional space, we focus here on the original formulation without common randomness, and use optimal transport theory to generalize the scheme for a message point in a higher dimensional space. By defining a stricter, almost sure, notion of message decoding, we use classical probabilistic techniques (e.g. change of measure and martingale convergence) to establish succinct necessary and sufficient conditions on when the message point can be recovered from infinite observations: Birkhoff ergodicity of a random process sequentially generated by the encoder. We also show a surprising "all or nothing" result: the same ergodicity condition is necessary and sufficient to achieve any rate below capacity. We provide applications of this message point framework in human-computer interfaces and multi-antenna communications.
|
Recently, we considered the case where the message point lies in a higher-dimensional space (e.g. @math for some arbitrary @math ) and constructed a feedback encoding scheme using optimal transport theory @cite_19 . Inspired by the time-invariant dynamical-systems structure of the original PM scheme (without shared randomness at the encoder), and motivated by communication applications where humans play a role, below we develop appropriate notions of reliability and rate achievability, involving almost sure convergence, in the multidimensional message point setting. This allows us to use classical probability tools, including change of measure and martingale convergence (recently employed by Van Handel to establish rigorous results on filter stability in hidden Markov models @cite_31 @cite_33 ), to derive succinct necessary and sufficient conditions for the generalized PM scheme to attain optimal performance. This yields a clear characterization of the optimality of the original PM scheme in arbitrary dimensions in terms of Birkhoff ergodicity of an appropriate random process, without requiring the use of a dither.
|
{
"cite_N": [
"@cite_19",
"@cite_31",
"@cite_33"
],
"mid": [
"2074430484",
"2070178449",
"2010995253"
],
"abstract": [
"This paper re-visits Shayevitz & Feder's recent ‘Posterior Matching Scheme’, an explicit, dynamical system encoder for communication with feedback that treats the message as a point on the [0,1] line and achieves capacity on memo-ryless channels. It has two key properties that ensure that it maximizes mutual information at each step: (a) the encoder sequentially hands the decoder what is missing; and (b) the next input has the desired statistics. Motivated by brain-machine interface applications and multi-antenna communications, we consider developing dynamical system feedback encoders for scenarios when the message point lies in higher dimensions. We develop a necessary and sufficient condition — the Jacobian equation — for any dynamical system encoder that maximizes mutual information. In general, there are many solutions to this equation. We connect this to the Monge-Kantorovich Optimal Transportation Problem, which provides a framework to identify a unique solution suiting a specific purpose. We provide two examplar capacity-achieving solutions — for different purposes — for the multi-antenna Gaussian channel with feedback. This insight further elucidates an interesting relationship between interactive decision theory problems and the theory of optimal transportation.",
"We consider a discrete time hidden Markov model where the signal is a stationary Markov chain. When conditioned on the observations, the signal is a Markov chain in a random environment under the conditional measure. It is shown that this conditional signal is weakly ergodic when the signal is ergodic and the observations are nondegenerate. This permits a delicate exchange of the intersection and supremum of σ-fields, which is key for the stability of the nonlinear filter and partially resolves a long-standing gap in the proof of a result of Kunita [J. Multivariate Anal. 1 (1971) 365―393]. A similar result is obtained also in the continuous time setting. The proofs are based on an ergodic theorem for Markov chains in random environments in a general state space.",
"The nonlinear filter for an ergodic signal observed in white noise is said to achieve maximal accuracy if the stationary filtering error vanishes as the signal to noise ratio diverges. We give a general characterization of the maximal accuracy property in terms of various systems theoretic notions. When the signal state space is a finite set explicit necessary and sufficient conditions are obtained, while the linear Gaussian case reduces to a classic result of Kwakernaak and Sivan [IEEE Trans. Automatic Control, AC-17 (1972), pp. 79-86]."
]
}
|
1901.02579
|
2909364378
|
Recent advances in computer vision have made it possible to automatically assess from videos the manipulation skills of humans in performing a task, which breeds many important applications in domains such as health rehabilitation and manufacturing. Previous methods of video-based skill assessment did not consider the attention mechanism humans use in assessing videos, limiting their performance as only a small part of video regions is informative for skill assessment. Our motivation here is to estimate attention in videos that helps to focus on critically important video regions for better skill assessment. In particular, we propose a novel RNN-based spatial attention model that considers accumulated attention state from previous frames as well as high-level knowledge about the progress of an undergoing task. We evaluate our approach on a newly collected dataset of infant grasping task and four existing datasets of hand manipulation tasks. Experiment results demonstrate that state-of-the-art performance can be achieved by considering attention in automatic skill assessment.
|
A number of previous methods regard the skill level only in a coarse manner: in @cite_18 , the level of skill is split only into novice and expert. These works @cite_11 @cite_18 determine skill labels from participants' previous experience, not from their performance in individual videos. In this work, we aim to rank the skill in each video instead of classifying the videos as expert or novice.
|
{
"cite_N": [
"@cite_18",
"@cite_11"
],
"mid": [
"2398889210",
"2553098005"
],
"abstract": [
"We present an automated framework for visual assessment of the expertise level of surgeons using the OSATS Objective Structured Assessment of Technical Skills criteria. Video analysis techniques for extracting motion quality via frequency coefficients are introduced. The framework is tested on videos of medical students with different expertise levels performing basic surgical tasks in a surgical training lab setting. We demonstrate that transforming the sequential time data into frequency components effectively extracts the useful information differentiating between different skill levels of the surgeons. The results show significant performance improvements using DFT and DCT coefficients over known state-of-the-art techniques.",
"The correct execution of well-defined movements plays a crucial role in physical rehabilitation and sports. While there is an extensive number of well-established approaches for human action recognition, the task of assessing the quality of actions and providing feedback for correcting inaccurate movements has remained an open issue in the literature. We present a learning-based method for efficiently providing feedback on a set of training movements captured by a depth sensor. We propose a novel recursive neural network that uses growing self-organization for the efficient learning of body motion sequences. The quality of actions is then computed in terms of how much a performed movement matches the correct continuation of a learned sequence. The proposed system provides visual assistance to the person performing an exercise by displaying real-time feedback, thus enabling the user to correct inaccurate postures and motion intensity. We evaluate our approach with a data set containing 3 powerlifting exercises performed by 17 athletes. Experimental results show that our novel architecture outperforms our previous approach for the correct prediction of routines and the detection of mistakes both in a single- and multiple-subject scenario."
]
}
|
1901.02579
|
2909364378
|
Recent advances in computer vision have made it possible to automatically assess from videos the manipulation skills of humans in performing a task, which breeds many important applications in domains such as health rehabilitation and manufacturing. Previous methods of video-based skill assessment did not consider the attention mechanism humans use in assessing videos, limiting their performance as only a small part of video regions is informative for skill assessment. Our motivation here is to estimate attention in videos that helps to focus on critically important video regions for better skill assessment. In particular, we propose a novel RNN-based spatial attention model that considers accumulated attention state from previous frames as well as high-level knowledge about the progress of an undergoing task. We evaluate our approach on a newly collected dataset of infant grasping task and four existing datasets of hand manipulation tasks. Experiment results demonstrate that state-of-the-art performance can be achieved by considering attention in automatic skill assessment.
|
Perhaps the work most similar to ours is @cite_28 , in which a two-stream pairwise deep-ranking model with a newly designed loss function is used for skill assessment. However, their method is purely bottom-up, without using high-level information related to the task or skill to guide the bottom-up feed-forward process. This may degrade performance, since too much redundant information is observed. In this work, we use an attention mechanism to guide the bottom-up feed-forward process. Our attention is learned both from low-level information globally extracted from each frame and from skill-related information accumulated over previous observations, which allows dynamically generating spatial attention maps to guide the bottom-up features, thus achieving better performance.
|
{
"cite_N": [
"@cite_28"
],
"mid": [
"2604794854"
],
"abstract": [
"This paper presents a method for assessing skill of performance from video, for a variety of tasks, ranging from drawing to surgery and rolling dough. We formulate the problem as pairwise and overall ranking of video collections, and propose a supervised deep ranking model to learn discriminative features between pairs of videos exhibiting different amounts of skill. We utilise a two-stream Temporal Segment Network to capture both the type and quality of motions and the evolving task state. Results demonstrate our method is applicable to a variety of tasks, with the percentage of correctly ordered pairs of videos ranging from 70 to 82 for four datasets. We demonstrate the robustness of our approach via sensitivity analysis of its parameters. We see this work as effort toward the automated and objective organisation of how-to videos and overall, generic skill determination in video."
]
}
|
1901.02579
|
2909364378
|
Recent advances in computer vision have made it possible to automatically assess from videos the manipulation skills of humans in performing a task, which breeds many important applications in domains such as health rehabilitation and manufacturing. Previous methods of video-based skill assessment did not consider the attention mechanism humans use in assessing videos, limiting their performance as only a small part of video regions is informative for skill assessment. Our motivation here is to estimate attention in videos that helps to focus on critically important video regions for better skill assessment. In particular, we propose a novel RNN-based spatial attention model that considers accumulated attention state from previous frames as well as high-level knowledge about the progress of an undergoing task. We evaluate our approach on a newly collected dataset of infant grasping task and four existing datasets of hand manipulation tasks. Experiment results demonstrate that state-of-the-art performance can be achieved by considering attention in automatic skill assessment.
|
Evidence from the human perception process shows the importance of the attention mechanism @cite_12 , which uses top-down information to guide the bottom-up feed-forward process @cite_32 . Recently, tentative efforts have been made toward applying attention in deep neural networks for various tasks such as image recognition @cite_29 @cite_32 @cite_49 , visual question answering @cite_21 @cite_50 , image and video captioning @cite_37 @cite_3 @cite_39 @cite_24 , and visual attention prediction @cite_6 @cite_42 @cite_23 @cite_22 .
|
{
"cite_N": [
"@cite_37",
"@cite_22",
"@cite_29",
"@cite_21",
"@cite_42",
"@cite_32",
"@cite_3",
"@cite_39",
"@cite_24",
"@cite_6",
"@cite_49",
"@cite_50",
"@cite_23",
"@cite_12"
],
"mid": [
"2745461083",
"2907184450",
"",
"2883092128",
"2616247523",
"2963495494",
"",
"2895420168",
"2739107216",
"2769991960",
"2884585870",
"",
"2795307598",
"2951527505"
],
"abstract": [
"Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. In this work, we propose a combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions. This is the natural basis for attention to be considered. Within our approach, the bottom-up mechanism (based on Faster R-CNN) proposes image regions, each with an associated feature vector, while the top-down mechanism determines feature weightings. Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr SPICE BLEU-4 scores of 117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of the method, applying the same approach to VQA we obtain first place in the 2017 VQA Challenge.",
"In this work, we address two coupled tasks of gaze prediction and action recognition in egocentric videos by exploring their mutual context. Our assumption is that in the procedure of performing a manipulation task, what a person is doing determines where the person is looking at, and the gaze point reveals gaze and non-gaze regions which contain important and complementary information about the undergoing action. We propose a novel mutual context network (MCN) that jointly learns action-dependent gaze prediction and gaze-guided action recognition in an end-to-end manner. Experiments on public egocentric video datasets demonstrate that our MCN achieves state-of-the-art performance of both gaze prediction and action recognition.",
"",
"Abstract We conduct large-scale studies on ‘human attention’ in Visual Question Answering (VQA) to understand where humans choose to look to answer questions about images. We design and test multiple game-inspired novel attention-annotation interfaces that require the subject to sharpen regions of a blurred image to answer a question. Thus, we introduce the VQA-HAT (Human ATtention) dataset. We evaluate attention maps generated by state-of-the-art VQA models against human attention both qualitatively (via visualizations) and quantitatively (via rank-order correlation). Our experiments show that current attention models in VQA do not seem to be looking at the same regions as humans. Finally, we train VQA models with explicit attention supervision, and find that it improves VQA performance.",
"We propose a technique for producing \"visual explanations\" for decisions from a large class of CNN-based models, making them more transparent. Our approach - Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept, flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, GradCAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multimodal inputs (e.g. VQA) or reinforcement learning, without any architectural changes or re-training. We combine GradCAM with fine-grained visualizations to create a high-resolution class-discriminative visualization and apply it to off-the-shelf image classification, captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into their failure modes (showing that seemingly unreasonable predictions have reasonable explanations), (b) are robust to adversarial images, (c) outperform previous methods on weakly-supervised localization, (d) are more faithful to the underlying model and (e) help achieve generalization by identifying dataset bias. For captioning and VQA, our visualizations show that even non-attention based models can localize inputs. Finally, we conduct human studies to measure if GradCAM explanations help users establish trust in predictions from deep networks and show that GradCAM helps untrained users successfully discern a \"stronger\" deep network from a \"weaker\" one. Our code is available at this https URL A demo and a video of the demo can be found at this http URL and youtu.be COjUB9Izk6E.",
"In this work, we propose Residual Attention Network, a convolutional neural network using attention mechanism which can incorporate with state-of-art feed forward network architecture in an end-to-end training fashion. Our Residual Attention Network is built by stacking Attention Modules which generate attention-aware features. The attention-aware features from different modules change adaptively as layers going deeper. Inside each Attention Module, bottom-up top-down feedforward structure is used to unfold the feedforward and feedback attention process into a single feedforward process. Importantly, we propose attention residual learning to train very deep Residual Attention Networks which can be easily scaled up to hundreds of layers. Extensive analyses are conducted on CIFAR-10 and CIFAR-100 datasets to verify the effectiveness of every module mentioned above. Our Residual Attention Network achieves state-of-the-art object recognition performance on three benchmark datasets including CIFAR-10 (3.90% error), CIFAR-100 (20.45% error) and ImageNet (4.8% single model and single crop, top-5 error). Note that, our method achieves 0.6% top-1 accuracy improvement with 46% trunk depth and 69% forward FLOPs comparing to ResNet-200. The experiment also demonstrates that our network is robust against noisy labels.",
"",
"Visual attention has shown usefulness in image captioning, with the goal of enabling a caption model to selectively focus on regions of interest. Existing models typically rely on top-down language information and learn attention implicitly by optimizing the captioning objectives. While somewhat effective, the learned top-down attention can fail to focus on correct regions of interest without direct supervision of attention. Inspired by the human visual system which is driven by not only the task-specific top-down signals but also the visual stimuli, we in this work propose to use both types of attention for image captioning. In particular, we highlight the complementary nature of the two types of attention and develop a model (Boosted Attention) to integrate them for image captioning. We validate the proposed approach with state-of-the-art performance across various evaluation metrics.",
"Recent progress in using long short-term memory (LSTM) for image captioning has motivated the exploration of their applications for video captioning. By taking a video as a sequence of features, an LSTM model is trained on video-sentence pairs and learns to associate a video to a sentence. However, most existing methods compress an entire video shot or frame into a static representation, without considering attention mechanism which allows for selecting salient features. Furthermore, existing approaches usually model the translating error, but ignore the correlations between sentence semantics and visual content. To tackle these issues, we propose a novel end-to-end framework named aLSTMs, an attention-based LSTM model with semantic consistency, to transfer videos to natural sentences. This framework integrates attention mechanism with LSTM to capture salient structures of video, and explores the correlation between multimodal representations (i.e., words and visual content) for generating sentences with rich semantic content. Specifically, we first propose an attention mechanism that uses the dynamic weighted sum of local two-dimensional convolutional neural network representations. Then, an LSTM decoder takes these visual features at time @math and the word-embedding feature at time @math @math 1 to generate important words. Finally, we use multimodal embedding to map the visual and sentence features into a joint space to guarantee the semantic consistence of the sentence description and the video visual content. Experiments on the benchmark datasets demonstrate that our method using single feature can achieve competitive or even better results than the state-of-the-art baselines for video captioning in both BLEU and METEOR.",
"This work aims to develop a computer-vision technique for understanding objects jointly attended by a group of people during social interactions. As a key tool to discover such objects of joint attention, we rely on a collection of wearable eye-tracking cameras that provide a first-person video of interaction scenes and points-of-gaze data of interacting parties. Technically, we propose a hierarchical conditional random field-based model that can 1) localize events of joint attention temporally and 2) segment objects of joint attention spatially. We show that by alternating these two procedures, objects of joint attention can be discovered reliably even from cluttered scenes and noisy points-of-gaze data. Experimental results demonstrate that our approach outperforms several state-of-the-art methods for co-segmentation and joint attention discovery.",
"We propose Convolutional Block Attention Module (CBAM), a simple yet effective attention module for feed-forward convolutional neural networks. Given an intermediate feature map, our module sequentially infers attention maps along two separate dimensions, channel and spatial, then the attention maps are multiplied to the input feature map for adaptive feature refinement. Because CBAM is a lightweight and general module, it can be integrated into any CNN architectures seamlessly with negligible overheads and is end-to-end trainable along with base CNNs. We validate our CBAM through extensive experiments on ImageNet-1K, MS COCO detection, and VOC 2007 detection datasets. Our experiments show consistent improvements in classification and detection performances with various models, demonstrating the wide applicability of CBAM. The code and models will be publicly available.",
"",
"We present a new computational model for gaze prediction in egocentric videos by exploring patterns in temporal shift of gaze fixations (attention transition) that are dependent on egocentric manipulation tasks. Our assumption is that the high-level context of how a task is completed in a certain way has a strong influence on attention transition and should be modeled for gaze prediction in natural dynamic scenes. Specifically, we propose a hybrid model based on deep neural networks which integrates task-dependent attention transition with bottom-up saliency prediction. In particular, the task-dependent attention transition is learned with a recurrent neural network to exploit the temporal context of gaze fixations, e.g. looking at a cup after moving gaze away from a grasped bottle. Experiments on public egocentric activity datasets show that our model significantly outperforms state-of-the-art gaze prediction methods and is able to learn meaningful transition of human attention.",
"Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so."
]
}
|
1901.02579
|
2909364378
|
Recent advances in computer vision have made it possible to automatically assess from videos the manipulation skills of humans in performing a task, which breeds many important applications in domains such as health rehabilitation and manufacturing. Previous methods of video-based skill assessment did not consider the attention mechanism humans use in assessing videos, limiting their performance as only a small part of video regions is informative for skill assessment. Our motivation here is to estimate attention in videos that helps to focus on critically important video regions for better skill assessment. In particular, we propose a novel RNN-based spatial attention model that considers accumulated attention state from previous frames as well as high-level knowledge about the progress of an undergoing task. We evaluate our approach on a newly collected dataset of infant grasping task and four existing datasets of hand manipulation tasks. Experiment results demonstrate that state-of-the-art performance can be achieved by considering attention in automatic skill assessment.
|
In video representation, the majority of works utilize the attention mechanism in action recognition @cite_35 @cite_8 @cite_46 @cite_25 and action localization @cite_43 . In @cite_14 , an end-to-end spatiotemporal attention model is used to recognize actions from skeleton data. They use an LSTM-like structure and construct joint selection gates and frame selection gates to model spatial and temporal attention. Girdhar et al. @cite_52 propose an attentional pooling method based on second-order pooling for action recognition. Both saliency-based and class-specific attention are considered in their model; however, the attention is learned statically from each frame, so no temporal information between frames is considered. @cite_45 incorporates a pose-based attention mechanism into recurrent networks to learn complex temporal motion structures for action recognition. Although the temporal relationship of attention in context is modeled by a recurrent structure, the model cannot generalize to situations where the pose is unavailable from appearance.
|
{
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_8",
"@cite_52",
"@cite_43",
"@cite_45",
"@cite_46",
"@cite_25"
],
"mid": [
"2736334449",
"",
"2963699792",
"2964233791",
"2772853459",
"2779380177",
"2608988379",
"2962889000"
],
"abstract": [
"Long Short-Term Memory (LSTM) networks have shown superior performance in 3D human action recognition due to their power in modeling the dynamics and dependencies in sequential data. Since not all joints are informative for action analysis and the irrelevant joints often bring a lot of noise, we need to pay more attention to the informative ones. However, original LSTM does not have strong attention capability. Hence we propose a new class of LSTM network, Global Context-Aware Attention LSTM (GCA-LSTM), for 3D action recognition, which is able to selectively focus on the informative joints in the action sequence with the assistance of global contextual information. In order to achieve a reliable attention representation for the action sequence, we further propose a recurrent attention mechanism for our GCA-LSTM network, in which the attention performance is improved iteratively. Experiments show that our end-to-end network can reliably focus on the most informative joints in each frame of the skeleton sequence. Moreover, our network yields state-of-the-art performance on three challenging datasets for 3D action recognition.",
"",
"Human actions often involve complex interactions across several inter-related objects in the scene. However, existing approaches to fine-grained video understanding or visual relationship detection often rely on single object representation or pairwise object relationships. Furthermore, learning interactions across multiple objects in hundreds of frames for video is computationally infeasible and performance may suffer since a large combinatorial space has to be modeled. In this paper, we propose to efficiently learn higher-order interactions between arbitrary subgroups of objects for fine-grained video understanding. We demonstrate that modeling object interactions significantly improves accuracy for both action recognition and video captioning, while saving more than 3-times the computation over traditional pairwise relationships. The proposed method is validated on two large-scale datasets: Kinetics and ActivityNet Captions. Our SINet and SINet-Caption achieve state-of-the-art performances on both datasets even though the videos are sampled at a maximum of 1 FPS. To the best of our knowledge, this is the first work modeling object interactions on open domain large-scale video datasets, and we additionally model higher-order object interactions which improves the performance with low computational costs.",
"We introduce a simple yet surprisingly powerful model to incorporate attention in action recognition and human object interaction tasks. Our proposed attention module can be trained with or without extra supervision, and gives a sizable boost in accuracy while keeping the network size and computational cost nearly the same. It leads to significant improvements over state of the art base architecture on three standard action recognition benchmarks across still images and videos, and establishes new state of the art on MPII dataset with 12.5% relative improvement. We also perform an extensive analysis of our attention module both empirically and analytically. In terms of the latter, we introduce a novel derivation of bottom-up and top-down attention as low-rank approximations of bilinear pooling methods (typically used for fine-grained classification). From this perspective, our attention formulation suggests a novel characterization of action recognition as a fine-grained recognition problem.",
"We present a method for utilizing weakly supervised data for action localization in videos. We focus on sports video analysis, where videos contain scenes of multiple people. Weak supervision gathered from sports website is provided in the form of an action taking place in a video clip, without specification of the person performing the action. Since many frames of a clip can be ambiguous, a novel temporal attention approach is designed to select the most distinctive frames in which to apply the weak supervision. Empirical results demonstrate that leveraging weak supervision can build upon purely supervised localization methods, and utilizing temporal attention further improves localization accuracy.",
"Recent studies demonstrate the effectiveness of Recurrent Neural Networks (RNNs) for action recognition in videos. However, previous works mainly utilize video-level category as supervision to train RNNs, which may prohibit RNNs to learn complex motion structures along time. In this paper, we propose a recurrent pose-attention network (RPAN) to address this challenge, where we introduce a novel pose-attention mechanism to adaptively learn pose-related features at every time-step action prediction of RNNs. More specifically, we make three main contributions in this paper. Firstly, unlike previous works on pose-related action recognition, our RPAN is an end-toend recurrent network which can exploit important spatialtemporal evolutions of human pose to assist action recognition in a unified framework. Secondly, instead of learning individual human-joint features separately, our poseattention mechanism learns robust human-part features by sharing attention parameters partially on the semanticallyrelated human joints. These human-part features are then fed into the human-part pooling layer to construct a highlydiscriminative pose-related representation for temporal action modeling. Thirdly, one important byproduct of our RPAN is pose estimation in videos, which can be used for coarse pose annotation in action videos. We evaluate the proposed RPAN quantitatively and qualitatively on two popular benchmarks, i.e., Sub-JHMDB and PennAction. Experimental results show that RPAN outperforms the recent state-of-the-art methods on these challenging datasets.",
"In this work, we introduce a new video representation for action classification that aggregates local convolutional features across the entire spatio-temporal extent of the video. We do so by integrating state-of-the-art two-stream networks [42] with learnable spatio-temporal feature aggregation [6]. The resulting architecture is end-to-end trainable for whole-video classification. We investigate different strategies for pooling across space and time and combining signals from the different streams. We find that: (i) it is important to pool jointly across space and time, but (ii) appearance and motion streams are best aggregated into their own separate representations. Finally, we show that our representation outperforms the two-stream base architecture by a large margin (13% relative) as well as outperforms other baselines with comparable base architectures on HMDB51, UCF101, and Charades video classification benchmarks.",
"Abstract Recurrent Neural Networks (RNNs) have been widely used in natural language processing and computer vision. Amongst them, the Hierarchical Multi-scale RNN (HM-RNN), a recently proposed multi-scale hierarchical RNN, can automatically learn the hierarchical temporal structure from data. In this paper, we extend the work to solve the computer vision task of action recognition. However, in sequence-to-sequence models like RNN, it is normally very hard to discover the relationships between inputs and outputs given static inputs. As a solution, the attention mechanism can be applied to extract the relevant information from the inputs thus facilitating the modeling of the input–output relationships. Based on these considerations, we propose a novel attention network, namely Hierarchical Multi-scale Attention Network (HM-AN), by incorporating the attention mechanism into the HM-RNN and applying it to action recognition. A newly proposed gradient estimation method for stochastic neurons, namely Gumbel-softmax, is exploited to implement the temporal boundary detectors and the stochastic hard attention mechanism. To reduce the negative effect of the temperature sensitivity of the Gumbel-softmax, an adaptive temperature training method is applied to improve the system performance. The experimental results demonstrate the improved effect of HM-AN over LSTM with attention on the vision task. Through visualization of what has been learnt by the network, it can be observed that both the attention regions of the images and the hierarchical temporal structure can be captured by a HM-AN."
]
}
|
1901.02579
|
2909364378
|
Recent advances in computer vision have made it possible to automatically assess from videos the manipulation skills of humans in performing a task, which breeds many important applications in domains such as health rehabilitation and manufacturing. Previous methods of video-based skill assessment did not consider the attention mechanism humans use in assessing videos, limiting their performance as only a small part of video regions is informative for skill assessment. Our motivation here is to estimate attention in videos that helps to focus on critically important video regions for better skill assessment. In particular, we propose a novel RNN-based spatial attention model that considers accumulated attention state from previous frames as well as high-level knowledge about the progress of an undergoing task. We evaluate our approach on a newly collected dataset of infant grasping task and four existing datasets of hand manipulation tasks. Experiment results demonstrate that state-of-the-art performance can be achieved by considering attention in automatic skill assessment.
|
The most widely used ranking formulation is pairwise ranking. It was originally designed to learn search engine retrieval functions from click-through data; however, it has been adopted in other ranking applications such as relative attributes in images @cite_26 . Pairwise ranking aims to minimize the number of incorrectly ordered pairs of elements by training a binary classifier to decide which element in a pair should be ranked higher. This formulation was first used by Joachims in RankSVM @cite_7 , where a linear SVM is used to learn a ranking. Ranking was first integrated into deep learning frameworks in @cite_48 . @cite_2 uses a pairwise deep ranking model for highlight detection in egocentric videos. In this work, we use pairwise deep ranking as the training scheme, not only to ease the difficulty in ground truth labeling but also to provide augmented data (pairs) for training our model.
|
{
"cite_N": [
"@cite_48",
"@cite_26",
"@cite_7",
"@cite_2"
],
"mid": [
"2143331230",
"",
"2047221353",
"2467794422"
],
"abstract": [
"We investigate using gradient descent methods for learning ranking functions; we propose a simple probabilistic cost function, and we introduce RankNet, an implementation of these ideas using a neural network to model the underlying ranking function. We present test results on toy data and on data from a commercial internet search engine.",
"",
"This paper presents an approach to automatically optimizing the retrieval quality of search engines using clickthrough data. Intuitively, a good information retrieval system should present relevant documents high in the ranking, with less relevant documents following below. While previous approaches to learning retrieval functions from examples exist, they typically require training data generated from relevance judgments by experts. This makes them difficult and expensive to apply. The goal of this paper is to develop a method that utilizes clickthrough data for training, namely the query-log of the search engine in connection with the log of links the users clicked on in the presented ranking. Such clickthrough data is available in abundance and can be recorded at very low cost. Taking a Support Vector Machine (SVM) approach, this paper presents a method for learning retrieval functions. From a theoretical perspective, this method is shown to be well-founded in a risk minimization framework. Furthermore, it is shown to be feasible even for large sets of queries and features. The theoretical results are verified in a controlled experiment. It shows that the method can effectively adapt the retrieval function of a meta-search engine to a particular group of users, outperforming Google in terms of retrieval quality after only a couple of hundred training examples.",
"The emergence of wearable devices such as portable cameras and smart glasses makes it possible to record life logging first-person videos. Browsing such long unstructured videos is time-consuming and tedious. This paper studies the discovery of moments of user's major or special interest (i.e., highlights) in a video, for generating the summarization of first-person videos. Specifically, we propose a novel pairwise deep ranking model that employs deep learning techniques to learn the relationship between high-light and non-highlight video segments. A two-stream network structure by representing video segments from complementary information on appearance of video frames and temporal dynamics across frames is developed for video highlight detection. Given a long personal video, equipped with the highlight detection model, a highlight score is assigned to each segment. The obtained highlight segments are applied for summarization in two ways: video time-lapse and video skimming. The former plays the highlight (non-highlight) segments at low (high) speed rates, while the latter assembles the sequence of segments with the highest scores. On 100 hours of first-person videos for 15 unique sports categories, our highlight detection achieves the improvement over the state-of-the-art RankSVM method by 10.5% in terms of accuracy. Moreover, our approaches produce video summary with better quality by a user study from 35 human subjects."
]
}
|
1901.02675
|
2911224109
|
Do we know what the different filters of a face network represent? Can we use this filter information to train other tasks without transfer learning? For instance, can age, head pose, emotion and other face related tasks be learned from face recognition network without transfer learning? Understanding the role of these filters allows us to transfer knowledge across tasks and take advantage of large data sets in related tasks. Given a pretrained network, we can infer which tasks the network generalizes for and the best way to transfer the information to a new task. We demonstrate a computationally inexpensive algorithm to reuse the filters of a face network for a task it was not trained for. Our analysis proves these attributes can be extracted with an accuracy comparable to what is obtained with transfer learning, but 10 times faster. We show that the information about other tasks is present in relatively small number of filters. We use these insights to do task specific pruning of a pretrained network. Our method gives significant compression ratios with reduction in size of 95% and computational reduction of 60%.
|
Other efforts attempt to interpret how individual neurons or groups of neurons work. Notably, Raghu et al. @cite_23 proposed a method to determine the true dimensionality of a layer and found that it is much smaller than the number of neurons in that layer. Morcos et al. @cite_16 studied the effect of single neurons on generalization performance by removing neurons from a neural network one by one. Alain and Bengio @cite_19 developed intuition about the trained model by using linear classifiers that take the hidden units of a given intermediate layer as discriminating features. This allows the user to visualize the state of the model at multiple steps of training. Bau et al. @cite_2 proposed a method for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. These methods focus on interpreting the latent features of deep networks trained on a single task. In contrast, our work focuses on analyzing how the latent features contain information about external tasks which the network did not encounter during training, and how to reuse these features for these tasks.
|
{
"cite_N": [
"@cite_19",
"@cite_16",
"@cite_23",
"@cite_2"
],
"mid": [
"2952502547",
"2963420658",
"2767204723",
"2963749936"
],
"abstract": [
"Neural network models have a reputation for being black boxes. We propose to monitor the features at every layer of a model and measure how suitable they are for classification. We use linear classifiers, which we refer to as \"probes\", trained entirely independently of the model itself. This helps us better understand the roles and dynamics of the intermediate layers. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems. We apply this technique to the popular models Inception v3 and Resnet-50. Among other things, we observe experimentally that the linear separability of features increase monotonically along the depth of the model.",
"Despite their ability to memorize large datasets, deep neural networks often achieve good generalization performance. However, the differences between the learned solutions of networks which generalize and those which do not remain unclear. Additionally, the tuning properties of single directions (defined as the activation of a single unit or some linear combination of units in response to some input) have been highlighted, but their importance has not been evaluated. Here, we connect these lines of inquiry to demonstrate that a network’s reliance on single directions is a good predictor of its generalization performance, across networks trained on datasets with different fractions of corrupted labels, across ensembles of networks trained on datasets with unmodified labels, across different hyper- parameters, and over the course of training. While dropout only regularizes this quantity up to a point, batch normalization implicitly discourages single direction reliance, in part by decreasing the class selectivity of individual units. Finally, we find that class selectivity is a poor predictor of task importance, suggesting not only that networks which generalize well minimize their dependence on individual units by reducing their selectivity, but also that individually selective units may not be necessary for strong network performance.",
"We propose a new technique, Singular Vector Canonical Correlation Analysis (SVCCA), a tool for quickly comparing two representations in a way that is both invariant to affine transform (allowing comparison between different layers and networks) and fast to compute (allowing more comparisons to be calculated than with previous methods). We deploy this tool to measure the intrinsic dimensionality of layers, showing in some cases needless over-parameterization; to probe learning dynamics throughout training, finding that networks converge to final representations from the bottom up; to show where class-specific information in networks is formed; and to suggest new training regimes that simultaneously save computation and overfit less.",
"We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a data set of concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are labeled across a broad range of visual concepts including objects, parts, scenes, textures, materials, and colors. We use the proposed method to test the hypothesis that interpretability is an axis-independent property of the representation space, then we apply the method to compare the latent representations of various networks when trained to solve different classification problems. We further analyze the effect of training iterations, compare networks trained with different initializations, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power."
]
}
|
1907.05274
|
2960314297
|
Autonomous vehicles (AVs) have progressed rapidly with advancements in computer vision algorithms. The deep convolutional neural network, the main contributor to this advancement, has boosted classification accuracy dramatically. However, the discovery of adversarial examples reveals a generalization gap between datasets and the real world. Furthermore, affine transformations may also confuse computer-vision-based object detectors. Degradation of the perception system is undesirable for safety-critical systems such as autonomous vehicles. In this paper, a deep learning system is proposed: Affine Disentangled GAN (ADIS-GAN), which is robust against affine transformations and adversarial attacks. It is demonstrated that conventional data augmentation for affine transformations and adversarial attacks are orthogonal, while ADIS-GAN can handle both attacks at the same time. Useful information such as the image rotation angle and scaling factor is also generated by ADIS-GAN. On the MNIST dataset, ADIS-GAN achieves over 98 percent classification accuracy within 30 degrees of rotation, and over 90 percent classification accuracy against FGSM and PGD adversarial attacks.
|
In @cite_20 , it was shown that simple affine transformations (e.g., rotations) can cause deep CNNs to misclassify images. The images captured by the perception system could undergo similar affine distortions during normal driving scenarios, for example when vehicles pass water puddles (see Fig. 2) or drive on rural roads. Both adversarial attacks and affine distortions need to be addressed before deep CNN vision systems can be integrated at large scale into safety-related applications such as autonomous vehicles.
|
{
"cite_N": [
"@cite_20"
],
"mid": [
"2773726006"
],
"abstract": [
"Recent work has shown that neural network-based vision classifiers exhibit a significant vulnerability to misclassifications caused by imperceptible but adversarial perturbations of their inputs. These perturbations, however, are purely pixel-wise and built out of loss function gradients of either the attacked model or its surrogate. As a result, they tend to be contrived and look pretty artificial. This might suggest that such vulnerability to slight input perturbations can only arise in a truly adversarial setting and thus is unlikely to be an issue in more \"natural\" contexts. In this paper, we provide evidence that such belief might be incorrect. We demonstrate that significantly simpler, and more likely to occur naturally, transformations of the input - namely, rotations and translations alone, suffice to significantly degrade the classification performance of neural network-based vision models across a spectrum of datasets. This remains to be the case even when these models are trained using appropriate data augmentation. Finding such \"fooling\" transformations does not require having any special access to the model - just trying out a small number of random rotation and translation combinations already has a significant effect. These findings suggest that our current neural network-based vision models might not be as reliable as we tend to assume. Finally, we consider a new class of perturbations that combines rotations and translations with the standard pixel-wise attacks. We observe that these two types of input transformations are, in a sense, orthogonal to each other. Their effect on the performance of the model seems to be additive, while robustness to one type does not seem to affect the robustness to the other type. This suggests that this combined class of transformations is a more complete notion of similarity in the context of adversarial robustness of vision models."
]
}
|
1907.05274
|
2960314297
|
Autonomous vehicles (AVs) have progressed rapidly with advancements in computer vision algorithms. The deep convolutional neural network, the main contributor to this advancement, has boosted classification accuracy dramatically. However, the discovery of adversarial examples reveals a generalization gap between datasets and the real world. Furthermore, affine transformations may also confuse computer-vision-based object detectors. Degradation of the perception system is undesirable for safety-critical systems such as autonomous vehicles. In this paper, a deep learning system is proposed: Affine Disentangled GAN (ADIS-GAN), which is robust against affine transformations and adversarial attacks. It is demonstrated that conventional data augmentation for affine transformations and adversarial attacks are orthogonal, while ADIS-GAN can handle both attacks at the same time. Useful information such as the image rotation angle and scaling factor is also generated by ADIS-GAN. On the MNIST dataset, ADIS-GAN achieves over 98 percent classification accuracy within 30 degrees of rotation, and over 90 percent classification accuracy against FGSM and PGD adversarial attacks.
|
GAN @cite_19 has been widely studied and utilized since its invention. It is a generative model that captures high-dimensional data distributions through an adversarial process. It can generate images that resemble the training images, such as handwritten digits, animals, and vehicles. Deep Convolutional GAN @cite_24 introduces convolution into the GAN structure by inserting deconvolution layers in the generator network. Bi-directional GAN @cite_26 further provides a pathway to convert data from image space back to latent space with an additional encoder network. InfoGAN @cite_9 utilizes a disentangled representation that separates features and noise in the latent space; the separated features can represent categorical and continuous attributes of the training images. In @cite_2 , the issue of inductive bias in disentangled representations is discussed. In @cite_15 , the concept of symmetry groups is introduced to define disentanglement behaviour. DefenceGAN @cite_11 uses a GAN as a defence method against adversarial attacks.
|
{
"cite_N": [
"@cite_26",
"@cite_9",
"@cite_24",
"@cite_19",
"@cite_2",
"@cite_15",
"@cite_11"
],
"mid": [
"2099471712",
"2434741482",
"2963684088",
"",
"2950662112",
"2516574342",
"2787496614"
],
"abstract": [
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound to the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence/absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.",
"Abstract: In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
"",
"The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms. In this paper, we provide a sober look at recent progress in the field and challenge some common assumptions. We first theoretically show that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data. Then, we train more than 12000 models covering most prominent methods and evaluation metrics in a reproducible large-scale experimental study on seven different data sets. We observe that while the different methods successfully enforce properties \"encouraged\" by the corresponding losses, well-disentangled models seemingly cannot be identified without supervision. Furthermore, increased disentanglement does not seem to lead to a decreased sample complexity of learning for downstream tasks. Our results suggest that future work on disentanglement learning should be explicit about the role of inductive biases and (implicit) supervision, investigate concrete benefits of enforcing disentanglement of the learned representations, and consider a reproducible experimental setup covering several data sets.",
"Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input @math and any target classification @math , it is possible to find a new input @math that is similar to @math but classified as @math . This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from @math to @math . In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with @math probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.",
"In recent years, deep neural network approaches have been widely adopted for machine learning tasks, including classification. However, they were shown to be vulnerable to adversarial perturbations: carefully crafted small perturbations can cause misclassification of legitimate images. We propose Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against such attacks. Defense-GAN is trained to model the distribution of unperturbed images. At inference time, it finds a close output to a given image which does not contain the adversarial changes. This output is then fed to the classifier. Our proposed method can be used with any classification model and does not modify the classifier structure or training procedure. It can also be used as a defense against any attack as it does not assume knowledge of the process for generating the adversarial examples. We empirically show that Defense-GAN is consistently effective against different attack methods and improves on existing defense strategies. Our code has been made publicly available at this https URL"
]
}
|