| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1711.02816 | 2760746198 | This paper proposes a novel deep architecture to address multi-label image recognition, a fundamental and practical task towards general visual understanding. Current solutions for this task usually rely on an extra step of extracting hypothesis regions (i.e., region proposals), resulting in redundant computation and sub-optimal performance. In this work, we achieve interpretable and contextualized multi-label image classification by developing a recurrent memorized-attention module. This module consists of two alternately performed components: i) a spatial transformer layer to locate attentional regions from the convolutional feature maps in a region-proposal-free way and ii) an LSTM (Long Short-Term Memory) sub-network to sequentially predict semantic labeling scores on the located regions while capturing the global dependencies of these regions. The LSTM also outputs the parameters for computing the spatial transformer. On large-scale benchmarks of multi-label image classification (e.g., MS-COCO and PASCAL VOC 07), our approach demonstrates superior performance over existing state-of-the-art methods in both accuracy and efficiency. | Attention models have recently been applied to various computer vision tasks, including image classification @cite_13 @cite_17 , saliency detection @cite_18 , and image captioning @cite_33 . Most of these works use a recurrent neural network to produce sequential attention, and optimize their models with reinforcement learning techniques. The works @cite_13 @cite_17 formulate a recurrent attention model and apply it to digit classification tasks, in which the images are low-resolution with a clean background, using a small attention network. The model is non-differentiable and is trained with reinforcement learning to learn task-specific policies. 
@cite_32 propose a differentiable spatial transformer module that can be used to extract attentional regions under any spatial transformation, including scaling, rotation, translation, and cropping. Moreover, it can be easily integrated into a neural network and optimized using the standard back-propagation algorithm, without reinforcement learning. | {
"cite_N": [
"@cite_18",
"@cite_33",
"@cite_32",
"@cite_13",
"@cite_17"
],
"mid": [
"2342171291",
"2950178297",
"2951005624",
"2951527505",
""
],
"abstract": [
"Convolutional-deconvolution networks can be adopted to perform end-to-end saliency detection. But, they do not work well with objects of multiple scales. To overcome such a limitation, in this work, we propose a recurrent attentional convolutional-deconvolution network (RACDNN). Using spatial transformer and recurrent network units, RACDNN is able to iteratively attend to selected image sub-regions to perform saliency refinement progressively. Besides tackling the scale problem, RACDNN can also learn context-aware features from past iterations to enhance saliency refinement in future iterations. Experiments on several challenging saliency detection datasets validate the effectiveness of RACDNN, and show that RACDNN outperforms state-of-the-art saliency detection methods.",
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.",
"Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.",
"Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.",
""
]
} |
1711.02279 | 2767561560 | Encrypted database systems provide a great method for protecting sensitive data in untrusted infrastructures. These systems are built using either special-purpose cryptographic algorithms that support operations over encrypted data, or by leveraging trusted computing co-processors. Strong cryptographic algorithms usually result in high performance overheads (e.g., public-key encryptions, garbled circuits), while weaker algorithms (e.g., order-preserving encryption) result in large leakage profiles. On the other hand, some encrypted database systems (e.g., Cipherbase, TrustedDB) leverage non-standard trusted computing devices, and are designed to work around their specific architectural limitations. In this work we build StealthDB -- an encrypted database system from Intel SGX. Our system can run on any newer generation Intel CPU. StealthDB has a very small trusted computing base, scales to large datasets, requires no DBMS changes, and provides strong security guarantees at steady state and during query execution. | TrustedDB @cite_30 also uses a secure co-processor to perform query processing. In their design, large portions of the DBMS engine (the query parser and the query processor) are executed inside the TEE. We explore this design in and conclude that it is not ideal when working with SGX. | {
"cite_N": [
"@cite_30"
],
"mid": [
"2042012257"
],
"abstract": [
"TrustedDB is an outsourced database prototype that allows clients to execute SQL queries with privacy and under regulatory compliance constraints without having to trust the service provider. TrustedDB achieves this by leveraging server-hosted tamper-proof trusted hardware in critical query processing stages. TrustedDB does not limit the query expressiveness of supported queries. And, despite the cost overhead and performance limitations of trusted hardware, the costs per query are orders of magnitude lower than any (existing or) potential future software-only mechanisms. TrustedDB is built and runs on actual hardware, and its performance and costs are evaluated here."
]
} |
1711.02279 | 2767561560 | Encrypted database systems provide a great method for protecting sensitive data in untrusted infrastructures. These systems are built using either special-purpose cryptographic algorithms that support operations over encrypted data, or by leveraging trusted computing co-processors. Strong cryptographic algorithms usually result in high performance overheads (e.g., public-key encryptions, garbled circuits), while weaker algorithms (e.g., order-preserving encryption) result in large leakage profiles. On the other hand, some encrypted database systems (e.g., Cipherbase, TrustedDB) leverage non-standard trusted computing devices, and are designed to work around their specific architectural limitations. In this work we build StealthDB -- an encrypted database system from Intel SGX. Our system can run on any newer generation Intel CPU. StealthDB has a very small trusted computing base, scales to large datasets, requires no DBMS changes, and provides strong security guarantees at steady state and during query execution. | A few works have studied how to build encrypted databases with SGX. The VC3 system proposes an architecture for analytical MapReduce jobs in cloud settings @cite_2 . Opaque studies how to leverage SGX to secure distributed analytical workloads in Spark systems @cite_3 . A concurrent work of ours, ObliDB @cite_16 , builds an oblivious database supporting both transactional and analytical workloads. However, their solution involves extensive changes to the underlying DBMS engine. HardIDX investigates how to perform index searches over B-trees in an enclave @cite_11 . They consider two design choices: a first design, in which the entire B-tree is loaded into the enclave, decrypted, and processed in cleartext, and a second design, in which parts of the B-tree are loaded on demand during query processing. 
Their conclusions are similar to ours (), where we show that databases operating over large datasets scale better when the amount of code and data in an enclave is kept small. Overall, @cite_11 only prototypes index searches, whereas we architect a complete encrypted database system. | {
"cite_N": [
"@cite_11",
"@cite_16",
"@cite_3",
"@cite_2"
],
"mid": [
"2599541228",
"2762536102",
"2604861932",
"1569778844"
],
"abstract": [
"Software-based approaches for search over encrypted data are still either challenged by lack of proper, low-leakage encryption or slow performance. Existing hardware-based approaches do not scale well due to hardware limitations and software designs that are not specifically tailored to the hardware architecture, and are rarely well analyzed for their security (e.g., the impact of side channels). Additionally, existing hardware-based solutions often have a large code footprint in the trusted environment susceptible to software compromises. In this paper we present HardIDX: a hardware-based approach, leveraging Intel’s SGX, for search over encrypted data. It implements only the security critical core, i.e., the search functionality, in the trusted environment and resorts to untrusted software for the remainder. HardIDX is deployable as a highly performant encrypted database index: it is logarithmic in the size of the index and searches are performed within a few milliseconds. We formally model and prove the security of our scheme showing that its leakage is equivalent to the best known searchable encryption schemes.",
"We present ObliDB, a secure SQL database for the public cloud that supports both transactional and analytics workloads and protects against access pattern leakage. With databases being a critical component in many applications, there is significant interest in outsourcing them securely. Hardware enclaves offer a strong practical foundation towards this goal by providing encryption and secure execution, but they still suffer from access pattern leaks that can reveal a great deal of information. The naive way to address this issue--using generic Oblivious RAM (ORAM) primitives beneath a database--adds prohibitive overhead. Instead, ObliDB co-designs both its data structures (e.g., oblivious B+ trees) and query operators to accelerate SQL processing, giving up to 329x speedup over naive ORAM. On analytics workloads, ObliDB ranges from competitive to 19x faster than systems designed only for analytics, such as Opaque, and comes within 2.6x of Spark SQL. Moreover, ObliDB also supports point queries, insertions, and deletions with latencies of 1-10ms, making it usable for transactional workloads too. To our knowledge, ObliDB is the first oblivious database that supports both transactional and analytic workloads.",
"Many systems run rich analytics on sensitive data in the cloud, but are prone to data breaches. Hardware enclaves promise data confidentiality and secure execution of arbitrary computation, yet still suffer from access pattern leakage. We propose Opaque, a distributed data analytics platform supporting a wide range of queries while providing strong security guarantees. Opaque introduces new distributed oblivious relational operators that hide access patterns, and new query planning techniques to optimize these new operators. Opaque is implemented on Spark SQL with few changes to the underlying system. Opaque provides data encryption, authentication and computation verification with a performance ranging from 52% faster to 3.3x slower as compared to vanilla Spark SQL; obliviousness comes with a 1.6-46x overhead. Opaque provides an improvement of three orders of magnitude over state-of-the-art oblivious protocols, and our query optimization techniques improve performance by 2-5x.",
"We present VC3, the first system that allows users to run distributed MapReduce computations in the cloud while keeping their code and data secret, and ensuring the correctness and completeness of their results. VC3 runs on unmodified Hadoop, but crucially keeps Hadoop, the operating system and the hypervisor out of the TCB, thus, confidentiality and integrity are preserved even if these large components are compromised. VC3 relies on SGX processors to isolate memory regions on individual computers, and to deploy new protocols that secure distributed MapReduce computations. VC3 optionally enforces region self-integrity invariants for all MapReduce code running within isolated regions, to prevent attacks due to unsafe memory reads and writes. Experimental results on common benchmarks show that VC3 performs well compared with unprotected Hadoop: VC3's average runtime overhead is negligible for its base security guarantees, 4.5% with write integrity and 8% with read/write integrity."
]
} |
1711.02279 | 2767561560 | Encrypted database systems provide a great method for protecting sensitive data in untrusted infrastructures. These systems are built using either special-purpose cryptographic algorithms that support operations over encrypted data, or by leveraging trusted computing co-processors. Strong cryptographic algorithms usually result in high performance overheads (e.g., public-key encryptions, garbled circuits), while weaker algorithms (e.g., order-preserving encryption) result in large leakage profiles. On the other hand, some encrypted database systems (e.g., Cipherbase, TrustedDB) leverage non-standard trusted computing devices, and are designed to work around their specific architectural limitations. In this work we build StealthDB -- an encrypted database system from Intel SGX. Our system can run on any newer generation Intel CPU. StealthDB has a very small trusted computing base, scales to large datasets, requires no DBMS changes, and provides strong security guarantees at steady state and during query execution. | A number of works study how to load unmodified applications into enclaves @cite_7 @cite_27 @cite_39 @cite_38 . These approaches work well for applications that process small amounts of data, but do not scale well to larger workloads due to SGX limitations. Moreover, increasing the complexity of the codebase inside an enclave aggravates the security risks associated with SGX @cite_36 . | {
"cite_N": [
"@cite_38",
"@cite_7",
"@cite_36",
"@cite_39",
"@cite_27"
],
"mid": [
"2575735093",
"1852007091",
"2727025244",
"2164399967",
"2561955909"
],
"abstract": [
"Users of modern data-processing services such as tax preparation or genomic screening are forced to trust them with data that the users wish to keep secret. Ryoan protects secret data while it is processed by services that the data owner does not trust. Accomplishing this goal in a distributed setting is difficult because the user has no control over the service providers or the computational platform. Confining code to prevent it from leaking secrets is notoriously difficult, but Ryoan benefits from new hardware and a request-oriented data model. Ryoan provides a distributed sandbox, leveraging hardware enclaves (e.g., Intel's software guard extensions (SGX) [15]) to protect sandbox instances from potentially malicious computing platforms. The protected sandbox instances confine untrusted data-processing modules to prevent leakage of the user's input data. Ryoan is designed for a request-oriented data model, where confined modules only process input once and do not persist state about the input. We present the design and prototype implementation of Ryoan and evaluate it on a series of challenging problems including email filtering, health analysis, image processing and machine translation.",
"Today's cloud computing infrastructure requires substantial trust. Cloud users rely on both the provider's staff and its globally-distributed software hardware platform not to expose any of their private data. We introduce the notion of shielded execution, which protects the confidentiality and integrity of a program and its data from the platform on which it runs (i.e., the cloud operator's OS, VM and firmware). Our prototype, Haven, is the first system to achieve shielded execution of unmodified legacy applications, including SQL Server and Apache, on a commodity OS (Windows) and commodity hardware. Haven leverages the hardware protection of Intel SGX to defend against privileged code and physical attacks such as memory probes, but also addresses the dual challenges of executing unmodified legacy binaries and protecting them from a malicious host. This work motivated recent changes in the SGX specification.",
"Intel Software Guard Extensions (SGX) is a hardware-based Trusted Execution Environment (TEE) that is widely seen as a promising solution to traditional security threats. While SGX promises strong protection to bug-free software, decades of experience show that we have to expect vulnerabilities in any non-trivial application. In a traditional environment, such vulnerabilities often allow attackers to take complete control of vulnerable systems. Efforts to evaluate the security of SGX have focused on side-channels. So far, neither a practical attack against a vulnerability in enclave code nor a proof-of-concept attack scenario has been demonstrated. Thus, a fundamental question remains: What are the consequences and dangers of having a memory corruption vulnerability in enclave code? To answer this question, we comprehensively analyze exploitation techniques against vulnerabilities inside enclaves. We demonstrate a practical exploitation technique, called Dark-ROP, which can completely disarm the security guarantees of SGX. Dark-ROP exploits a memory corruption vulnerability in the enclave software through return-oriented programming (ROP). However, Dark-ROP differs significantly from traditional ROP attacks because the target code runs under solid hardware protection. We overcome the problem of exploiting SGX-specific properties and obstacles by formulating a novel ROP attack scheme against SGX under practical assumptions. Specifically, we build several oracles that inform the attacker about the status of enclave execution. This enables him to launch the ROP attack while both code and data are hidden. In addition, we exfiltrate the enclave's code and data into a shadow application to fully control the execution environment. This shadow application emulates the enclave under the complete control of the attacker, using the enclave (through ROP calls) only to perform SGX operations such as reading the enclave's SGX crypto keys.",
"Library OSes are a promising approach for applications to efficiently obtain the benefits of virtual machines, including security isolation, host platform compatibility, and migration. Library OSes refactor a traditional OS kernel into an application library, avoiding overheads incurred by duplicate functionality. When compared to running a single application on an OS kernel in a VM, recent library OSes reduce the memory footprint by an order-of-magnitude. Previous library OS (libOS) research has focused on single-process applications, yet many Unix applications, such as network servers and shell scripts, span multiple processes. Key design challenges for a multi-process libOS include management of shared state and minimal expansion of the security isolation boundary. This paper presents Graphene, a library OS that seamlessly and efficiently executes both single and multi-process applications, generally with low memory and performance overheads. Graphene broadens the libOS paradigm to support secure, multi-process APIs, such as copy-on-write fork, signals, and System V IPC. Multiple libOS instances coordinate over pipe-like byte streams to implement a consistent, distributed POSIX abstraction. These coordination streams provide a simple vantage point to enforce security isolation.",
"In multi-tenant environments, Linux containers managed by Docker or Kubernetes have a lower resource footprint, faster startup times, and higher I/O performance compared to virtual machines (VMs) on hypervisors. Yet their weaker isolation guarantees, enforced through software kernel mechanisms, make it easier for attackers to compromise the confidentiality and integrity of application data within containers. We describe SCONE, a secure container mechanism for Docker that uses the SGX trusted execution support of Intel CPUs to protect container processes from outside attacks. The design of SCONE leads to (i) a small trusted computing base (TCB) and (ii) a low performance overhead: SCONE offers a secure C standard library interface that transparently encrypts/decrypts I/O data; to reduce the performance impact of thread synchronization and system calls within SGX enclaves, SCONE supports user-level threading and asynchronous system calls. Our evaluation shows that it protects unmodified applications with SGX, achieving 0.6×-1.2× of native throughput."
]
} |
1711.02279 | 2767561560 | Encrypted database systems provide a great method for protecting sensitive data in untrusted infrastructures. These systems are built using either special-purpose cryptographic algorithms that support operations over encrypted data, or by leveraging trusted computing co-processors. Strong cryptographic algorithms usually result in high performance overheads (e.g., public-key encryptions, garbled circuits), while weaker algorithms (e.g., order-preserving encryption) result in large leakage profiles. On the other hand, some encrypted database systems (e.g., Cipherbase, TrustedDB) leverage non-standard trusted computing devices, and are designed to work around their specific architectural limitations. In this work we build StealthDB -- an encrypted database system from Intel SGX. Our system can run on any newer generation Intel CPU. StealthDB has a very small trusted computing base, scales to large datasets, requires no DBMS changes, and provides strong security guarantees at steady state and during query execution. | OSPIR-OXT @cite_8 @cite_41 @cite_24 , SisoSPIR @cite_1 , and BLIND SEER @cite_34 build encrypted database systems from scratch, with provable security guarantees for a subset of database functionality, based on different cryptographic tools. There is also a multitude of other works that improve the security or specific functionalities of a database, but these have not been implemented or cannot be integrated with an existing database. A recent SoK paper provides a great summary of the state-of-the-art research on encrypted database systems @cite_31 . Fully homomorphic encryption @cite_21 is another powerful cryptographic primitive, which enables an untrusted party to perform arbitrary computations on encrypted data without learning any information about the underlying data. However, the current constructions are very far from being practical @cite_9 . 
In general, while the theoretical security of systems built on cryptographic methods can be high, the security of the overall system relies on a multitude of factors: correct implementations of non-trivial cryptographic algorithms, metadata contents, the DBMS structure and the relationships stored in its data structures, information in log files, etc. | {
"cite_N": [
"@cite_8",
"@cite_41",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_24",
"@cite_31",
"@cite_34"
],
"mid": [
"1502708590",
"2086763678",
"236632755",
"2031533839",
"2408284914",
"2294724888",
"2592789682",
"1996453724"
],
"abstract": [
"This work presents the design and analysis of the first searchable symmetric encryption (SSE) protocol that supports conjunctive search and general Boolean queries on outsourced symmetrically- encrypted data and that scales to very large databases and arbitrarily-structured data including free text search. To date, work in this area has focused mainly on single-keyword search. For the case of conjunctive search, prior SSE constructions required work linear in the total number of documents in the database and provided good privacy only for structured attribute-value data, rendering these solutions too slow and inflexible for large practical databases.",
"We design and implement dynamic symmetric searchable encryption schemes that efficiently and privately search server-held encrypted databases with tens of billions of record-keyword pairs. Our basic theoretical construction supports single-keyword searches and offers asymptotically optimal server index size, fully parallel searching, and minimal leakage. Our implementation effort brought to the fore several factors ignored by earlier coarse-grained theoretical performance analyses, including low-level space utilization, I/O parallelism and goodput. We accordingly introduce several optimizations to our theoretically optimal construction that model the prototype's characteristics designed to overcome these factors. All of our schemes and optimizations are proven secure and the information leaked to the untrusted server is precisely quantified. We evaluate the performance of our prototype using two very large datasets: a synthesized census database with 100 million records and hundreds of keywords per record and a multi-million webpage collection that includes Wikipedia as a subset. Moreover, we report on an implementation that uses the dynamic SSE schemes developed here as the basis for supporting recent SSE advances, including complex search queries (e.g., Boolean queries) and richer operational settings (e.g., query delegation), in the above terabyte-scale databases.",
"HElib is a software library that implements homomorphic encryption (HE), specifically the Brakerski-Gentry-Vaikuntanathan (BGV) scheme, focusing on effective use of the Smart-Vercauteren ciphertext packing techniques and the Gentry-Halevi-Smart optimizations. The underlying cryptosystem serves as the equivalent of a “hardware platform” for HElib, in that it defines a set of operations that can be applied homomorphically, and specifies their cost. This “platform” is a SIMD environment (somewhat similar to Intel SSE and the like), but with unique cost metrics and parameters. In this report we describe some of the algorithms and optimization techniques that are used in HElib for data movement, linear algebra, and other operations over this “platform.”",
"We propose a fully homomorphic encryption scheme -- i.e., a scheme that allows one to evaluate circuits over encrypted data without being able to decrypt. Our solution comes in three steps. First, we provide a general result -- that, to construct an encryption scheme that permits evaluation of arbitrary circuits, it suffices to construct an encryption scheme that can evaluate (slightly augmented versions of) its own decryption circuit; we call a scheme that can evaluate its (augmented) decryption circuit bootstrappable. Next, we describe a public key encryption scheme using ideal lattices that is almost bootstrappable. Lattice-based cryptosystems typically have decryption algorithms with low circuit complexity, often dominated by an inner product computation that is in NC1. Also, ideal lattices provide both additive and multiplicative homomorphisms (modulo a public-key ideal in a polynomial ring that is represented as a lattice), as needed to evaluate general circuits. Unfortunately, our initial scheme is not quite bootstrappable -- i.e., the depth that the scheme can correctly evaluate can be logarithmic in the lattice dimension, just like the depth of the decryption circuit, but the latter is greater than the former. In the final step, we show how to modify the scheme to reduce the depth of the decryption circuit, and thereby obtain a bootstrappable encryption scheme, without reducing the depth that the scheme can evaluate. Abstractly, we accomplish this by enabling the encrypter to start the decryption process, leaving less work for the decrypter, much like the server leaves less work for the decrypter in a server-aided cryptosystem.",
"With the growing popularity of remote storage, the ability to outsource a large private database yet be able to search on this encrypted data is critical. Searchable symmetric encryption (SSE) is a practical method of encrypting data so that natural operations such as searching can be performed on this data. It can be viewed as an efficient private-key alternative to powerful tools such as fully homomorphic encryption, oblivious RAM, or secure multiparty computation. The main drawbacks of existing SSE schemes are the limited types of search available to them and their leakage. In this paper, we present a construction of a private outsourced database in the two-server model (e.g., two cloud services) which can be thought of as an SSE scheme on a B-tree that allows for a wide variety of search features such as range queries, substring queries, and more. Our solution can hide all leakage due to access patterns ("metadata") between queries and features a tunable parameter that provides a smooth tradeoff between privacy and efficiency. This allows us to implement a solution that supports databases which are terabytes in size and contain millions of records with only a @math slowdown compared to MySQL when the query result size is around 10% of the database, though the fixed costs dominate smaller queries, resulting in over @math relative slowdown (under 1 s actual). In addition, our solution also provides a mechanism for allowing data owners to set filters that prevent prohibited queries from returning any results, without revealing the filtering terms. Finally, we also present the benchmarks of our prototype implementation.",
"We extend the searchable symmetric encryption (SSE) protocol of [, Crypto’13] adding support for range, substring, wildcard, and phrase queries, in addition to the Boolean queries supported in the original protocol. Our techniques apply to the basic single-client scenario underlying the common SSE setting as well as to the more complex Multi-Client and Outsourced Symmetric PIR extensions of [, CCS’13]. We provide performance information based on our prototype implementation, showing the practicality and scalability of our techniques to very large databases, thus extending the performance results of [, NDSS’14] to these rich and comprehensive query types.",
"Protected database search systems cryptographically isolate the roles of reading from, writing to, and administering the database. This separation limits unnecessary administrator access and protects data in the case of system breaches. Since protected search was introduced in 2000, the area has grown rapidly; systems are offered by academia, start-ups, and established companies. However, there is no single best protected search system or set of techniques. Design of such systems is a balancing act between security, functionality, performance, and usability. This challenge is made more difficult by ongoing database specialization, as some users will want the functionality of SQL, NoSQL, or NewSQL databases. This database evolution will continue, and the protected search community should be able to quickly provide functionality consistent with newly invented databases. At the same time, the community must accurately and clearly characterize the tradeoffs between different approaches. To address these challenges, we provide the following contributions: 1) An identification of the important primitive operations across database paradigms. We find there are a small number of base operations that can be used and combined to support a large number of database paradigms. 2) An evaluation of the current state of protected search systems in implementing these base operations. This evaluation describes the main approaches and tradeoffs for each base operation. Furthermore, it puts protected search in the context of unprotected search, identifying key gaps in functionality. 3) An analysis of attacks against protected search for different base queries. 4) A roadmap and tools for transforming a protected search system into a protected database, including an open-source performance evaluation platform and initial user opinions of protected search.",
"Query privacy in secure DBMS is an important feature, although rarely formally considered outside the theoretical community. Because of the high overheads of guaranteeing privacy in complex queries, almost all previous works addressing practical applications consider limited queries (e.g., just keyword search), or provide a weak guarantee of privacy. In this work, we address a major open problem in private DB: efficient sublinear search for arbitrary Boolean queries. We consider scalable DBMS with provable security for all parties, including protection of the data from both server (who stores encrypted data) and client (who searches it), as well as protection of the query, and access control for the query. We design, build, and evaluate the performance of a rich DBMS system, suitable for real-world deployment on today's medium- to large-scale DBs. On a modern server, we are able to query a formula over a 10TB, 100M-record DB, with 70 searchable index terms per DB row, in time comparable to (insecure) MySQL (many practical queries can be privately executed with work 1.2-3 times slower than MySQL, although some queries are costlier). We support a rich query set, including searching on arbitrary Boolean formulas on keywords and ranges, support for stemming, and free keyword searches over text fields. We identify and permit a reasonable and controlled amount of leakage, proving that no further leakage is possible. In particular, we allow leakage of some search pattern information, but protect the query and data, provide a high level of privacy for individual terms in the executed search formula, and hide the difference between a query that returned no results and a query that returned a very small result set. We also support private and complex access policies, integrated in the search process so that a query with an empty result set and a query that fails the policy are hard to tell apart."
]
} |
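Stepping back from the specific systems quoted above: the primitive they all extend is an encrypted index that a server can query by opaque token. The sketch below is a deliberately minimal illustration of that idea (deterministic HMAC tags over keywords), not an implementation of any cited scheme; unlike those constructions it makes no attempt to hide the search pattern, and all names and data here are hypothetical.

```python
import hmac
import hashlib
import os

def tag(key: bytes, word: str) -> bytes:
    # Deterministic keyword tag: the server can match tags without
    # learning the underlying keyword (it sees only search-pattern leakage).
    return hmac.new(key, word.encode(), hashlib.sha256).digest()

def build_index(key: bytes, docs: dict) -> dict:
    """Client-side: map each keyword tag to the documents containing it."""
    index = {}
    for doc_id, text in docs.items():
        for word in set(text.lower().split()):
            index.setdefault(tag(key, word), []).append(doc_id)
    return index

def search(index: dict, trapdoor: bytes) -> list:
    # Server-side: a pure lookup on opaque tags.
    return index.get(trapdoor, [])

key = os.urandom(32)
docs = {"d1": "range queries on encrypted data", "d2": "substring search"}
idx = build_index(key, docs)
print(search(idx, tag(key, "encrypted")))  # client computes the trapdoor locally
```

The richer query types discussed above (ranges, substrings, Boolean formulas) can be seen as structured generalizations of this exact-match lookup.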
1711.02608 | 2767338551 | Huge volumes of textual information have been produced every single day. In order to organize and understand such large datasets, summarization techniques have become popular in recent years. These techniques aim at finding relevant, concise and non-redundant content in such big data. While network methods have been adopted to model texts in some scenarios, a systematic evaluation of multilayer network models in the multi-document summarization task has been limited to a few studies. Here, we evaluate the performance of a multilayer-based method to select the most relevant sentences in the context of an extractive multi-document summarization (MDS) task. In the adopted model, nodes represent sentences and edges are created based on the number of shared words between sentences. Differently from previous studies in multi-document summarization, we make a distinction between edges linking sentences from different documents (inter-layer) and those connecting sentences from the same document (intra-layer). As a proof of principle, our results reveal that such a distinction between intra- and inter-layer edges in a multilayered representation is able to improve the quality of the generated summaries. This piece of information could be used to improve current statistical methods and related textual models. | Several works have addressed the task of extractive summarization based on complex network tools and methods. For example, in the work of @cite_22 , the MDS task was applied to documents in Brazilian Portuguese. The authors first extracted all sentences from the cluster of documents and then modelled them as a single network. After the pre-processing stage, sentences were represented as nodes, which were linked by traditional similarity measurements. In order to select the best-ranked sentences, the authors used simple network measurements, including degree, clustering coefficient and average shortest path length.
A simple heuristic to avoid redundant sentences in the generated summaries was also applied. The results showed that the proposed method yielded competitive results, which were close to the best statistical systems available for the Portuguese language. Even though this work addressed the MDS with a graph-based approach, no distinction between intra- and inter-layer edges was considered. | {
"cite_N": [
"@cite_22"
],
"mid": [
"199114572"
],
"abstract": [
"In this work we investigate the use of graphs for multi-document summarization. We adapt the traditional Relationship Map approach to the multi-document scenario and, in a hybrid approach, we consider adding CST (Cross-document Structure Theory) relations to this adapted model. We also investigate some measures derived from graphs and complex networks for sentence selection. We show that the superficial graph-based methods are promising for the task. More importantly, some of them perform almost as well as a deep approach."
]
} |
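The pipeline described in this row, with sentences as nodes, edges from shared words, and ranking by simple network measurements, can be sketched in a few lines. This is an illustrative reimplementation under stated assumptions (whitespace tokenization, degree as the only ranking metric), not the authors' code.

```python
from itertools import combinations

def build_graph(sentences, min_shared=1):
    """Link two sentences if they share at least `min_shared` words."""
    words = [set(s.lower().split()) for s in sentences]
    adj = {i: set() for i in range(len(sentences))}
    for i, j in combinations(range(len(sentences)), 2):
        if len(words[i] & words[j]) >= min_shared:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def rank_by_degree(adj):
    # Degree is the simplest of the measurements mentioned in the row
    # (degree, clustering coefficient, average shortest path length).
    return sorted(adj, key=lambda n: len(adj[n]), reverse=True)

sents = ["the cat sat on the mat",
         "a dog sat on the rug",
         "quantum physics is hard"]
adj = build_graph(sents, min_shared=2)
print(rank_by_degree(adj))
```

A redundancy heuristic like the one mentioned in the row would then skip a top-ranked sentence if it is too similar to one already selected.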
1711.02608 | 2767338551 | Huge volumes of textual information have been produced every single day. In order to organize and understand such large datasets, summarization techniques have become popular in recent years. These techniques aim at finding relevant, concise and non-redundant content in such big data. While network methods have been adopted to model texts in some scenarios, a systematic evaluation of multilayer network models in the multi-document summarization task has been limited to a few studies. Here, we evaluate the performance of a multilayer-based method to select the most relevant sentences in the context of an extractive multi-document summarization (MDS) task. In the adopted model, nodes represent sentences and edges are created based on the number of shared words between sentences. Differently from previous studies in multi-document summarization, we make a distinction between edges linking sentences from different documents (inter-layer) and those connecting sentences from the same document (intra-layer). As a proof of principle, our results reveal that such a distinction between intra- and inter-layer edges in a multilayered representation is able to improve the quality of the generated summaries. This piece of information could be used to improve current statistical methods and related textual models. | @cite_3 represented sentences as nodes, which are linked if they share significant lemmatized nouns. The authors applied static complex network metrics to identify the relevant sentences to compose the extract. A summarization system based on a voting scheme was used to combine the summaries generated by the different measurements. Some systems achieved good results, comparable to the top single-document summarizers for Brazilian Portuguese. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2118364625"
],
"abstract": [
"Automatic summarization of texts is now crucial for several information retrieval tasks owing to the huge amount of information available in digital media, which has increased the demand for simple, language-independent extractive summarization strategies. In this paper, we employ concepts and metrics of complex networks to select sentences for an extractive summary. The graph or network representing one piece of text consists of nodes corresponding to sentences, while edges connect sentences that share common meaningful nouns. Because various metrics could be used, we developed a set of 14 summarizers, generically referred to as CN-Summ, employing network concepts such as node degree, length of shortest paths, d-rings and k-cores. An additional summarizer was created which selects the highest ranked sentences in the 14 systems, as in a voting system. When applied to a corpus of Brazilian Portuguese texts, some CN-Summ versions performed better than summarizers that do not employ deep linguistic knowledge, with results comparable to state-of-the-art summarizers based on expensive linguistic resources. The use of complex networks to represent texts appears therefore as suitable for automatic summarization, consistent with the belief that the metrics of such networks may capture important text features."
]
} |
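The CN-Summ voting system described above, a meta-summarizer that keeps the sentences ranked highest across the individual network-based summarizers, can be illustrated as follows. The three rankers and their outputs are invented for the example.

```python
from collections import Counter

def vote(rankings, k):
    """Combine several summarizers: each ranker votes for its top-k
    sentences, and the most-voted sentences form the extract
    (ties broken by sentence index)."""
    votes = Counter()
    for ranking in rankings:
        votes.update(ranking[:k])
    return sorted(sorted(votes), key=votes.__getitem__, reverse=True)[:k]

# Three hypothetical rankers over sentence ids 0..4:
by_degree   = [2, 0, 1, 3, 4]
by_kcore    = [2, 1, 0, 4, 3]
by_shortest = [0, 2, 3, 1, 4]
print(vote([by_degree, by_kcore, by_shortest], k=2))
```

In the paper the vote is taken over 14 systems built from measures such as degree, shortest paths, d-rings and k-cores; the mechanics are the same.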
1711.02608 | 2767338551 | Huge volumes of textual information have been produced every single day. In order to organize and understand such large datasets, summarization techniques have become popular in recent years. These techniques aim at finding relevant, concise and non-redundant content in such big data. While network methods have been adopted to model texts in some scenarios, a systematic evaluation of multilayer network models in the multi-document summarization task has been limited to a few studies. Here, we evaluate the performance of a multilayer-based method to select the most relevant sentences in the context of an extractive multi-document summarization (MDS) task. In the adopted model, nodes represent sentences and edges are created based on the number of shared words between sentences. Differently from previous studies in multi-document summarization, we make a distinction between edges linking sentences from different documents (inter-layer) and those connecting sentences from the same document (intra-layer). As a proof of principle, our results reveal that such a distinction between intra- and inter-layer edges in a multilayered representation is able to improve the quality of the generated summaries. This piece of information could be used to improve current statistical methods and related textual models. | Leite and Rino @cite_23 used multiple features to automatically select the best attributes from single-layer complex networks and other linguistic features. More specifically, the authors combined @math linguistic features and @math network-based measurements. For extract generation, Leite and Rino used machine learning to classify each sentence as present or not present in the summary. An evaluation on a corpus of Portuguese texts confirmed that the proposed network methods can be combined with linguistic features so as to improve the characterization of textual documents. | {
"cite_N": [
"@cite_23"
],
"mid": [
"1598469373"
],
"abstract": [
"In this paper we explore multiple features for extractive automatic summarization using machine learning. They account for SuPor-2 features, a supervised summarizer for Brazilian Portuguese, and graph-based features mirroring complex networks measures. Four different classifiers and automatic feature selection are explored. ROUGE is used for assessment of single-document summarization of news texts."
]
} |
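The supervised setup in this row, representing each sentence by a mix of network and linguistic features and learning a keep/drop decision, can be sketched with a tiny perceptron. The features, training pairs and classifier below are placeholders for illustration; the paper explores several real classifiers with automatic feature selection.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Tiny linear classifier deciding 'in summary' (1) vs 'not' (0)
    from per-sentence features; a stand-in for the classifiers
    explored in the cited work."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Hypothetical features: (normalized degree, relative position in document);
# label: whether a human summarizer kept the sentence.
data = [((0.9, 1.0), 1), ((0.8, 0.9), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0)]
w, b = train_perceptron(data)
score = lambda x: sum(wi * xi for wi, xi in zip(w, x)) + b
print(score((0.85, 0.95)) > 0)
```

At extraction time, sentences classified positive (or with the highest scores) are concatenated up to the target summary length.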
1711.02608 | 2767338551 | Huge volumes of textual information have been produced every single day. In order to organize and understand such large datasets, summarization techniques have become popular in recent years. These techniques aim at finding relevant, concise and non-redundant content in such big data. While network methods have been adopted to model texts in some scenarios, a systematic evaluation of multilayer network models in the multi-document summarization task has been limited to a few studies. Here, we evaluate the performance of a multilayer-based method to select the most relevant sentences in the context of an extractive multi-document summarization (MDS) task. In the adopted model, nodes represent sentences and edges are created based on the number of shared words between sentences. Differently from previous studies in multi-document summarization, we make a distinction between edges linking sentences from different documents (inter-layer) and those connecting sentences from the same document (intra-layer). As a proof of principle, our results reveal that such a distinction between intra- and inter-layer edges in a multilayered representation is able to improve the quality of the generated summaries. This piece of information could be used to improve current statistical methods and related textual models. | In the work of Erkan and Radev @cite_8 , sentences are represented as nodes, and the bag-of-words model is used to represent them as vectors. A connection between two nodes is established if the cosine similarity between the sentence vectors is higher than a predefined threshold. To rank the sentences, the authors used degree centrality and eigenvector-based metrics. Competitive results were reported, even when applied to noisy data. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2110693578"
],
"abstract": [
"We introduce a stochastic graph-based method for computing relative importance of textual units for Natural Language Processing. We test the technique on the problem of Text Summarization (TS). Extractive TS relies on the concept of sentence salience to identify the most important sentences in a document or set of documents. Salience is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We consider a new approach, LexRank, for computing sentence importance based on the concept of eigenvector centrality in a graph representation of sentences. In this model, a connectivity matrix based on intra-sentence cosine similarity is used as the adjacency matrix of the graph representation of sentences. Our system, based on LexRank ranked in first place in more than one task in the recent DUC 2004 evaluation. In this paper we present a detailed analysis of our approach and apply it to a larger data set including data from earlier DUC evaluations. We discuss several methods to compute centrality using the similarity graph. The results show that degree-based methods (including LexRank) outperform both centroid-based methods and other systems participating in DUC in most of the cases. Furthermore, the LexRank with threshold method outperforms the other degree-based techniques including continuous LexRank. We also show that our approach is quite insensitive to the noise in the data that may result from an imperfect topical clustering of documents."
]
} |
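A minimal sketch of the LexRank-style construction described in this row: bag-of-words sentence vectors, a cosine-similarity threshold to create edges, and degree centrality as the salience score. The threshold value and toy sentences are illustrative assumptions, and real LexRank uses tf-idf weights rather than raw counts.

```python
import math
from collections import Counter
from itertools import combinations

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def degree_centrality(sentences, threshold=0.3):
    """Connect sentences whose cosine similarity exceeds `threshold`,
    then score each sentence by its degree."""
    bows = [Counter(s.lower().split()) for s in sentences]
    deg = [0] * len(sentences)
    for i, j in combinations(range(len(sentences)), 2):
        if cosine(bows[i], bows[j]) > threshold:
            deg[i] += 1
            deg[j] += 1
    return deg

sents = ["stocks fell sharply on monday",
         "on monday stocks fell",
         "the weather was pleasant"]
print(degree_centrality(sents))
```

The eigenvector-based variant (continuous LexRank) replaces the 0/1 edges with similarity weights and computes the stationary distribution of the resulting graph.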
1711.02608 | 2767338551 | Huge volumes of textual information have been produced every single day. In order to organize and understand such large datasets, summarization techniques have become popular in recent years. These techniques aim at finding relevant, concise and non-redundant content in such big data. While network methods have been adopted to model texts in some scenarios, a systematic evaluation of multilayer network models in the multi-document summarization task has been limited to a few studies. Here, we evaluate the performance of a multilayer-based method to select the most relevant sentences in the context of an extractive multi-document summarization (MDS) task. In the adopted model, nodes represent sentences and edges are created based on the number of shared words between sentences. Differently from previous studies in multi-document summarization, we make a distinction between edges linking sentences from different documents (inter-layer) and those connecting sentences from the same document (intra-layer). As a proof of principle, our results reveal that such a distinction between intra- and inter-layer edges in a multilayered representation is able to improve the quality of the generated summaries. This piece of information could be used to improve current statistical methods and related textual models. | In the work of Mihalcea @cite_9 , similarly to other studies, sentences are nodes and edges represent the lexical similarity between sentences. The author used ranking algorithms originally designed for Web pages to select the most informative sentences, namely Google's PageRank @cite_1 and HITS @cite_21 . Mihalcea @cite_9 considered three network types: undirected, forward (edges following the natural reading flow) and backward (edges going from the current to the previous sentence). The systems were evaluated on the English DUC-2002 corpus @cite_2 .
The best performance was achieved with the HITS algorithm for English texts. Conversely, for the Portuguese scenario, the PageRank algorithm yielded the best performance. | {
"cite_N": [
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_2"
],
"mid": [
"2122497354",
"2138621811",
"1854214752",
""
],
"abstract": [
"We demonstrate TextRank -- a system for unsupervised extractive summarization that relies on the application of iterative graph-based ranking algorithms to graphs encoding the cohesive structure of a text. An important characteristic of the system is that it does not rely on any language-specific knowledge resources or any manually constructed training data, and thus it is highly portable to new languages or domains.",
"The network structure of a hyperlinked environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. We develop a set of algorithmic tools for extracting information from the link structures of such environments, and report on experiments that demonstrate their effectiveness in a variety of contexts on the World Wide Web. The central issue we address within our framework is the distillation of broad search topics, through the discovery of “authoritative” information sources on such topics. We propose and test an algorithmic formulation of the notion of authority, based on the relationship between a set of relevant authoritative pages and the set of “hub pages” that join them together in the link structure. Our formulation has connections to the eigenvectors of certain matrices associated with the link graph; these connections in turn motivate additional heuristics for link-based analysis.",
"The importance of a Web page is an inherently subjective matter, which depends on the reader's interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation.",
""
]
} |
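The ranking step shared by the TextRank/PageRank line of work in this row is a power iteration over the sentence graph. Below is a generic sketch for an unweighted, undirected graph (stored with both directions of each edge), not the cited implementations, which also handle edge weights and the forward/backward directed variants.

```python
def pagerank(adj, d=0.85, iters=50):
    """Plain power-iteration PageRank: each node repeatedly shares a
    damped fraction of its score equally among its neighbors."""
    n = len(adj)
    pr = [1.0 / n] * n
    for _ in range(iters):
        new = [(1 - d) / n] * n
        for u, nbrs in adj.items():
            if not nbrs:
                continue
            share = d * pr[u] / len(nbrs)
            for v in nbrs:
                new[v] += share
        pr = new
    return pr

# Toy sentence graph: edges 0-1, 0-2, 1-2, 2-3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
scores = pagerank(adj)
print(max(range(4), key=scores.__getitem__))
```

Sentences are then sorted by score and the top ones extracted; HITS works analogously but maintains separate hub and authority scores per node.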
1711.02525 | 2767221003 | Bed-making is a universal home task that can be challenging for senior citizens due to reaching motions. Automating bed-making poses multiple technical challenges, such as perception in unstructured environments, deformable object manipulation, obstacle avoidance and sequential decision making. We explore how DART, an LfD algorithm for learning robust policies, can be applied to automating bed-making without fiducial markers with a Toyota Human Support Robot (HSR). By gathering human demonstrations for grasping the sheet and failure detection, we can learn deep neural network policies that leverage pre-trained YOLO features to automate the task. Experiments with a 1/2-scale twin bed and distractors placed on the bed suggest that policies learned on 50 demonstrations with DART achieve 96 % sheet coverage, which is over 200 % better than a corner-detector baseline using contour detection. | Manipulation of cloth has been explored in a variety of contexts, including laundry folding and surgical robotics. The authors of @cite_12 used an algorithm based on identifying and tensioning corners to enable a home robot to fold laundry. The work in @cite_2 proposed a technique that used third-person human demonstrations to learn how to fold a towel in isolation. The authors of @cite_5 examined the problem of bringing clothing into an arbitrary position and proposed using a deformable object simulator to plan motions. Shibata @cite_8 proposed to fold a towel without using registration, by examining humans performing the action and designing a robust folding strategy given the known initial position of the towel. | {
"cite_N": [
"@cite_5",
"@cite_8",
"@cite_12",
"@cite_2"
],
"mid": [
"2105537999",
"1966561894",
"2089630413",
"2137549137"
],
"abstract": [
"We consider the problem of autonomously bringing an article of clothing into a desired configuration using a general-purpose two-armed robot. We propose a hidden Markov model (HMM) for estimating the identity of the article and tracking the article's configuration throughout a specific sequence of manipulations and observations. At the end of this sequence, the article's configuration is known, though not necessarily desired. The estimated identity and configuration of the article are then used to plan a second sequence of manipulations that brings the article into the desired configuration. We propose a relaxation of a strain-limiting finite element model for cloth simulation that can be solved via convex optimization; this serves as the basis of the transition and observation models of the HMM. The observation model uses simple perceptual cues consisting of the height of the article when held by a single gripper and the silhouette of the article when held by two grippers. The model accurately estimates the identity and configuration of clothing articles, enabling our procedure to autonomously bring a variety of articles into desired configurations that are useful for other tasks, such as folding.",
"In our research, we aim to construct a house-keeping robot system which folds laundry, and we propose a way of folding clothes without a vision sensing device. To realize this system, we prepare an appropriate fixed folding motion. We analyzed cloth-folding motions by humans and found a fixed folding motion pattern for the robot. Furthermore, we generated a trajectory of the folding motion which is applicable to different sizes of towels. We confirmed the effectiveness of our method through experiments without the use of a vision sensing device.",
"We present a novel vision-based grasp point detection algorithm that can reliably detect the corners of a piece of cloth, using only geometric cues that are robust to variation in texture. Furthermore, we demonstrate the effectiveness of our algorithm in the context of folding a towel using a general-purpose two-armed mobile robotic platform without the use of specialized end-effectors or tools. The robot begins by picking up a randomly dropped towel from a table, goes through a sequence of vision-based re-grasps and manipulations—partially in the air, partially on the table—and finally stacks the folded towel in a target location. The reliability and robustness of our algorithm enables for the first time a robot with general purpose manipulators to reliably and fully-autonomously fold previously unseen towels, demonstrating success on all 50 out of 50 single-towel trials as well as on a pile of 5 towels.",
"Research on robotic manipulation has primarily focused on grasping rigid objects using a single manipulator. It is however evident that in order to be truly pervasive, service robots will need to handle deformable objects, possibly with two arms. In this paper we tackle the problem of using cooperative manipulators to perform towel folding tasks. Differently from other approaches, our method executes what we call a momentum fold - a swinging motion that exploits the dynamics of the object being manipulated. We propose a new learning algorithm that combines imitation and reinforcement learning. Human demonstrations are used to reduce the search space of the reinforcement learning algorithm, which then quickly converges to its final solution. The strengths of the algorithm come from its efficient processing, fast learning capabilities, absence of a deformable object model, and applicability to other problems exhibiting temporally incoherent parameter spaces. A wide range of experiments were performed on a robotic platform, demonstrating the algorithm's capability and practicality."
]
} |
1711.02525 | 2767221003 | Bed-making is a universal home task that can be challenging for senior citizens due to reaching motions. Automating bed-making poses multiple technical challenges, such as perception in unstructured environments, deformable object manipulation, obstacle avoidance and sequential decision making. We explore how DART, an LfD algorithm for learning robust policies, can be applied to automating bed-making without fiducial markers with a Toyota Human Support Robot (HSR). By gathering human demonstrations for grasping the sheet and failure detection, we can learn deep neural network policies that leverage pre-trained YOLO features to automate the task. Experiments with a 1/2-scale twin bed and distractors placed on the bed suggest that policies learned on 50 demonstrations with DART achieve 96 % sheet coverage, which is over 200 % better than a corner-detector baseline using contour detection. | In the surgical setting, cutting of cloth has been considered because it is similar to cutting tissue. The authors of @cite_19 examined cutting a circle out of surgical gauze by leveraging expert demonstrations. However, this approach suffered in reliability due to imprecision in the tensioning policy on the cloth. The work in @cite_16 examined learning a more robust tensioning policy in simulation using state-of-the-art deep RL algorithms. | {
"cite_N": [
"@cite_19",
"@cite_16"
],
"mid": [
"1584687743",
"2736750746"
],
"abstract": [
"Automating repetitive surgical subtasks such as suturing, cutting and debridement can reduce surgeon fatigue and procedure times and facilitate supervised tele-surgery. Programming is difficult because human tissue is deformable and highly specular. Using the da Vinci Research Kit (DVRK) robotic surgical assistant, we explore a “Learning By Observation” (LBO) approach where we identify, segment, and parameterize motion sequences and sensor conditions to build a finite state machine (FSM) for each subtask. The robot then executes the FSM repeatedly to tune parameters and if necessary update the FSM structure. We evaluate the approach on two surgical subtasks: debridement of 3D Viscoelastic Tissue Phantoms (3d-DVTP), in which small target fragments are removed from a 3D viscoelastic tissue phantom; and Pattern Cutting of 2D Orthotropic Tissue Phantoms (2d-PCOTP), a step in the standard Fundamentals of Laparoscopic Surgery training suite, in which a specified circular area must be cut from a sheet of orthotropic tissue phantom. We describe the approach and physical experiments with repeatability of 96 % for 50 trials of the 3d-DVTP subtask and 70 % for 20 trials of the 2d-PCOTP subtask. A video is available at: http://j.mp/Robot-Surgery-Video-Oct-2014.",
"In the Fundamentals of Laparoscopic Surgery (FLS) standard medical training regimen, the Pattern Cutting task requires residents to demonstrate proficiency by maneuvering two tools, surgical scissors and tissue gripper, to accurately cut a circular pattern on surgical gauze suspended at the corners. Accuracy of cutting depends on tensioning, wherein the gripper pinches a point on the gauze in R3 and pulls to induce and maintain tension in the material as cutting proceeds. An automated tensioning policy maps the current state of the gauze to output a direction of pulling as an action. The optimal tensioning policy depends on both the choice of pinch point and cutting trajectory. We explore the problem of learning a tensioning policy conditioned on specific cutting trajectories. Every timestep, we allow the gripper to react to the deformation of the gauze and progress of the cutting trajectory with a translation unit vector along an allowable set of directions. As deformation is difficult to analytically model and explicitly observe, we leverage deep reinforcement learning with direct policy search methods to learn tensioning policies using a finite-element simulator and then transfer them to a physical system. We compare the Deep RL tensioning policies with fixed and analytic (opposing the error vector with a fixed pinch point) policies on a set of 17 open and closed curved contours in simulation and 4 patterns in physical experiments with the da Vinci Research Kit (dVRK). Our simulation results suggest that learning to tension with Deep RL can significantly improve performance and robustness to noise and external forces."
]
} |
1711.02525 | 2767221003 | Bed-making is a universal home task that can be challenging for senior citizens due to reaching motions. Automating bed-making poses multiple technical challenges, such as perception in unstructured environments, deformable object manipulation, obstacle avoidance and sequential decision making. We explore how DART, an LfD algorithm for learning robust policies, can be applied to automating bed-making without fiducial markers with a Toyota Human Support Robot (HSR). By gathering human demonstrations for grasping the sheet and failure detection, we can learn deep neural network policies that leverage pre-trained YOLO features to automate the task. Experiments with a 1/2-scale twin bed and distractors placed on the bed suggest that policies learned on 50 demonstrations with DART achieve 96 % sheet coverage, which is over 200 % better than a corner-detector baseline using contour detection. | @cite_3 proposed DAgger, an on-policy method in which the supervisor iteratively provides corrective feedback on the robot's behavior. This alleviates the problem of compounding errors, since the robot is trained to identify and fix small errors after they occur. However, this can be problematic for bed-making because the robot needs to physically execute potentially highly sub-optimal actions, which may lead to collisions. Recently, it was shown that another way to correct for covariate shift is to inject small noise levels into the supervisor's policy to simulate error during data collection @cite_1 . A technique known as DART was proposed to optimize this noise distribution. We demonstrate that by using DART, we can achieve robust bed-making and collect data at the same level of performance as the supervisor. | {
"cite_N": [
"@cite_1",
"@cite_3"
],
"mid": [
"2952996256",
"1931877416"
],
"abstract": [
"One approach to Imitation Learning is Behavior Cloning, in which a robot observes a supervisor and infers a control policy. A known problem with this \"off-policy\" approach is that the robot's errors compound when drifting away from the supervisor's demonstrations. On-policy techniques alleviate this by iteratively collecting corrective actions for the current robot policy. However, these techniques can be tedious for human supervisors, add significant computation burden, and may visit dangerous states during training. We propose an off-policy approach that injects noise into the supervisor's policy while demonstrating. This forces the supervisor to demonstrate how to recover from errors. We propose a new algorithm, DART (Disturbances for Augmenting Robot Trajectories), that collects demonstrations with injected noise, and optimizes the noise level to approximate the error of the robot's trained policy during data collection. We compare DART with DAgger and Behavior Cloning in two domains: in simulation with an algorithmic supervisor on the MuJoCo tasks (Walker, Humanoid, Hopper, Half-Cheetah) and in physical experiments with human supervisors training a Toyota HSR robot to perform grasping in clutter. For high-dimensional tasks like Humanoid, DART can be up to @math faster in computation time and only decreases the supervisor's cumulative reward by @math during training, whereas DAgger executes policies that have @math less cumulative reward than the supervisor. On the grasping in clutter task, DART obtains on average a @math performance increase over Behavior Cloning.",
"Sequential prediction problems such as imitation learning, where future observations depend on previous predictions (actions), violate the common i.i.d. assumptions made in statistical learning. This leads to poor performance in theory and often in practice. Some recent approaches provide stronger guarantees in this setting, but remain somewhat unsatisfactory as they train either non-stationary or stochastic policies and require a large number of iterations. In this paper, we propose a new iterative algorithm, which trains a stationary deterministic policy, that can be seen as a no regret algorithm in an online learning setting. We show that any such no regret algorithm, combined with additional reduction assumptions, must find a policy with good performance under the distribution of observations it induces in such sequential settings. We demonstrate that this new approach outperforms previous approaches on two challenging imitation learning problems and a benchmark sequence labeling problem."
]
} |
1711.02056 | 2768013476 | We consider a single cell uplink in which many devices randomly transmit a data payload to the base station. Given a fixed latency constraint per device, we propose a time and frequency slotted random access scheme with retransmissions, which when necessary, are Chase combined at the receiver. We analyze the proposed setting at constant SNR. We characterize the scaling of random access throughput versus the latency, by optimizing the number of retransmissions and the number of frequency bins. For infinite block length (IBL), we conclude that at low SNR devices should completely share the time and frequency resources. For high SNR, however, the number of frequency bins should scale with altered load, and the slot duration for each retransmission is determined by the outage tolerance. Since infinite packet sizes are not possible, we extend our model to the finite block length (FBL) regime and characterize the gap versus the IBL regime. We also provide some new results for FBL capacity to bound the probability of outage. The proposed random access model gives an upper bound for the total uplink traffic that can be successfully decoded for a single receive antenna given the latency constraint, and provides insights for 5G cellular system design. | Because ideal interference cancellation is not realizable in practice, especially for a large number of devices, reference @cite_10 characterizes the throughput of a suboptimal but more practical random access system where both the time and frequency domains are slotted. The receiver uses conventional single-user detection which demodulates a desired user's data stream by treating other users' interfering signals as noise, and as a simplifying assumption, the analysis uses Shannon capacity to approximate the SINR threshold for a failed transmission. This approximation leads to an optimistic bound on the throughput, and it becomes exact in the limit of infinite coding block lengths. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2740650368"
],
"abstract": [
"We consider a single cell wireless uplink in which randomly arriving devices transmit their payload to a receiver. Given SNR per user, payload size per device, a fixed latency constraint T, total available bandwidth W, i.e., total symbol resources is given by N = TW. The total bandwidth W is evenly partitioned into B bins. Each time slot of duration T is split into a maximum number of retransmission attempts M. Hence, the N resources are partitioned into N/(MB) resources per bin per retransmission. We characterize the maximum average rate or number of Poisson arrivals that can successfully complete the random access procedure such that the probability of outage is sufficiently small. We analyze the proposed setting for i) noise-limited regime and ii) interference-limited regime. We show that in the noise-limited regime the devices share the resources, and in the interference-limited regime, the resources split such that devices do not experience any interference. We then incorporate Rayleigh fading to model the channel power gain distribution. Although the variability of the channel causes a drop in the number of arrivals that can successfully complete the random access phase, similar scaling results extend to the Rayleigh fading case."
]
} |
1711.02056 | 2768013476 | We consider a single cell uplink in which many devices randomly transmit a data payload to the base station. Given a fixed latency constraint per device, we propose a time and frequency slotted random access scheme with retransmissions, which when necessary, are Chase combined at the receiver. We analyze the proposed setting at constant SNR. We characterize the scaling of random access throughput versus the latency, by optimizing the number of retransmissions and the number of frequency bins. For infinite block length (IBL), we conclude that at low SNR devices should completely share the time and frequency resources. For high SNR, however, the number of frequency bins should scale with altered load, and the slot duration for each retransmission is determined by the outage tolerance. Since infinite packet sizes are not possible, we extend our model to the finite block length (FBL) regime and characterize the gap versus the IBL regime. We also provide some new results for FBL capacity to bound the probability of outage. The proposed random access model gives an upper bound for the total uplink traffic that can be successfully decoded for a single receive antenna given the latency constraint, and provides insights for 5G cellular system design. | The current paper reviews and extends the results in @cite_10 by incorporating recent characterizations of capacity under finite block length transmissions @cite_13 @cite_15 @cite_16 . This extension is especially relevant for MTC applications where the payload size could be as small as a few hundred bits. A related random access framework for finite block length transmissions is discussed in @cite_8 , and a similar framework for downlink URLLC is studied in @cite_4 . | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_10",
"@cite_15",
"@cite_16",
"@cite_13"
],
"mid": [
"2950919160",
"2262889796",
"2740650368",
"",
"",
"2106864314"
],
"abstract": [
"5G wireless networks are expected to support Ultra Reliable Low Latency Communications (URLLC) traffic which requires very low packet delays ( < 1 msec.) and extremely high reliability ( @math 99.999 ). In this paper we focus on the design of a wireless system supporting downlink URLLC traffic. Using a queuing network based model for the wireless system we characterize the effect of various design choices on the maximum URLLC load it can support, including: 1) system parameters such as the bandwidth, link SINR, and QoS requirements; 2) resource allocation schemes in Orthogonal Frequency Division Multiple Access (OFDMA) based systems; and 3) Hybrid Automatic Repeat Request (HARQ) schemes. Key contributions of this paper which are of practical interest are: 1) study of how the minimum required system bandwidth to support a given URLLC load scales with associated QoS constraints; 2) characterization of optimal OFDMA resource allocation schemes which maximize the admissible URLLC load; and 3) optimization of a repetition code based HARQ scheme which approximates Chase HARQ combining.",
"Most of the recent advances in the design of high-speed wireless systems are based on information-theoretic principles that demonstrate how to efficiently transmit long data packets. However, the upcoming wireless systems, notably the fifth-generation (5G) system, will need to support novel traffic types that use short packets. For example, short packets represent the most common form of traffic generated by sensors and other devices involved in machine-to-machine (M2M) communications. Furthermore, there are emerging applications in which small packets are expected to carry critical information that should be received with low latency and ultrahigh reliability. Current wireless systems are not designed to support short-packet transmissions. For example, the design of current systems relies on the assumption that the metadata (control information) is of negligible size compared to the actual information payload. Hence, transmitting metadata using heuristic methods does not affect the overall system performance. However, when the packets are short, metadata may be of the same size as the payload, and the conventional methods to transmit it may be highly suboptimal. In this paper, we review recent advances in information theory, which provide the theoretical principles that govern the transmission of short packets. We then apply these principles to three exemplary scenarios (the two-way channel, the downlink broadcast channel, and the uplink random access channel), thereby illustrating how the transmission of control information can be optimized when the packets are short. The insights brought by these examples suggest that new principles are needed for the design of wireless protocols supporting short packets. These principles will have a direct impact on the system design.",
"We consider a single cell wireless uplink in which randomly arriving devices transmit their payload to a receiver. Given SNR per user, payload size per device, a fixed latency constraint T, total available bandwidth W, i.e., total symbol resources is given by N = TW. The total bandwidth W is evenly partitioned into B bins. Each time slot of duration T is split into a maximum number of retransmission attempts M. Hence, the N resources are partitioned into N/(MB) resources per bin per retransmission. We characterize the maximum average rate or number of Poisson arrivals that can successfully complete the random access procedure such that the probability of outage is sufficiently small. We analyze the proposed setting for i) noise-limited regime and ii) interference-limited regime. We show that in the noise-limited regime the devices share the resources, and in the interference-limited regime, the resources split such that devices do not experience any interference. We then incorporate Rayleigh fading to model the channel power gain distribution. Although the variability of the channel causes a drop in the number of arrivals that can successfully complete the random access phase, similar scaling results extend to the Rayleigh fading case.",
"",
"",
"This paper investigates the maximal channel coding rate achievable at a given blocklength and error probability. For general classes of channels new achievability and converse bounds are given, which are tighter than existing bounds for wide ranges of parameters of interest, and lead to tight approximations of the maximal achievable rate for blocklengths n as short as 100. It is also shown analytically that the maximal rate achievable with error probability ε is closely approximated by C − √(V/n) Q⁻¹(ε), where C is the capacity, V is a characteristic of the channel referred to as channel dispersion, and Q is the complementary Gaussian cumulative distribution function."
]
} |
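The finite-blocklength normal approximation quoted in the abstract above, R ≈ C − √(V/n) Q⁻¹(ε), is easy to evaluate numerically. The sketch below is illustrative only (not from the paper): it assumes a real AWGN channel, for which the capacity and dispersion in bits are C = ½log₂(1+SNR) and V = (SNR(SNR+2)/(2(SNR+1)²))·log₂(e)², and uses only the Python standard library.

```python
# Illustrative sketch (not from the cited paper): evaluating the normal
# approximation R(n, eps) ~ C - sqrt(V/n) * Q^{-1}(eps) for a real AWGN channel.
import math
from statistics import NormalDist

def normal_approx_rate(snr: float, n: int, eps: float) -> float:
    """Approximate maximal coding rate (bits per channel use) at blocklength n
    and block error probability eps, for a real AWGN channel at the given SNR."""
    capacity = 0.5 * math.log2(1.0 + snr)
    # AWGN channel dispersion in bits^2 (assumed form for this sketch)
    dispersion = (snr * (snr + 2.0)) / (2.0 * (snr + 1.0) ** 2) * math.log2(math.e) ** 2
    # Q^{-1}(eps) = Phi^{-1}(1 - eps), with Phi the standard normal CDF
    q_inv = NormalDist().inv_cdf(1.0 - eps)
    return capacity - math.sqrt(dispersion / n) * q_inv

# The backoff from capacity (0.5 bits at SNR = 1) shrinks as blocklength grows:
for n in (100, 1000, 10000):
    print(n, round(normal_approx_rate(snr=1.0, n=n, eps=1e-3), 4))
```

This makes concrete why the gap between the finite block length and infinite block length regimes discussed in the paper vanishes only slowly, at rate O(1/√n).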
1711.02044 | 2767532440 | In a rechargeable wireless sensor network, the data packets are generated by sensor nodes at a specific data rate, and transmitted to a base station. Moreover, the base station transfers power to the nodes by using Wireless Power Transfer (WPT) to extend their battery life. However, inadequately scheduling WPT and data collection causes some of the nodes to drain their battery and have their data buffer overflow, while the other nodes waste their harvested energy, which is more than they need to transmit their packets. In this paper, we investigate a novel optimal scheduling strategy, called EHMDP, aiming to minimize data packet loss from a network of sensor nodes in terms of the nodes' energy consumption and data queue state information. The scheduling problem is first formulated by a centralized MDP model, assuming that the complete states of each node are well known by the base station. This presents the upper bound of the data that can be collected in a rechargeable wireless sensor network. Next, we relax the assumption of the availability of full state information so that the data transmission and WPT can be semi-decentralized. The simulation results show that, in terms of network throughput and packet loss rate, the proposed algorithm significantly improves the network performance. | To achieve a balance between throughput and data collection fairness, a scheduling strategy is proposed to allocate the power transfer among users and the proportion of the time between energy harvests @cite_13 . In @cite_14 , data rate selection in RWSN is studied, which considers fairness of data rate and WPT duration among the nodes. Moreover, a set of algorithms is developed to obtain an optimal rate assignment with the Water-Filling-Framework under different routing types. A WPT system is studied to balance a tradeoff between spectral and energy efficiencies by jointly designing beamforming for WPT and data transmission @cite_20 . Moreover, both WPT and data transmission beamforming are adaptive to channel fading by exploiting the benefits of channel state information. | {
"cite_N": [
"@cite_14",
"@cite_13",
"@cite_20"
],
"mid": [
"2149933233",
"2027622574",
""
],
"abstract": [
"This paper considers max-min fair rate allocation and routing in energy harvesting networks where fairness is required among both the nodes and the time slots. Unlike most previous work on fairness, we focus on multihop topologies and consider different routing methods. We assume a predictable energy profile and focus on the design of efficient and optimal algorithms that can serve as benchmarks for distributed and approximate algorithms. We first develop an algorithm that obtains a max-min fair rate assignment for any given (time-variable or time-invariable) unsplittable routing or a routing tree. For time-invariable unsplittable routing, we also develop an algorithm that finds routes that maximize the minimum rate assigned to any node in any slot. For fractional routing, we study the joint routing and rate assignment problem. We develop an algorithm for the time-invariable case with constant rates. We show that the time-variable case is at least as hard as the 2-commodity feasible flow problem and design an FPTAS to combat the high running time. Finally, we show that finding a max-min fair unsplittable routing or a routing tree is NP-hard, even for a time horizon of a single slot. Our analysis provides insights into the problem structure and can be applied to other related fairness problems.",
"This paper considers the allocation of time slots in a frame, as well as power and rate to multiple receivers on an energy harvesting downlink. Energy arrival times that will occur within the frame are known at the beginning of the frame. The goal is to optimize throughput in a proportionally fair way, taking into account the inherent differences of channel quality among users. Analysis of structural characteristics of the problem reveals that it can be formulated as a biconvex optimization problem, and that it has multiple optima. Due to the biconvex nature of the problem, a Block Coordinate Descent (BCD) based optimization algorithm that converges to an optimal solution is presented. However, finding the optimal allocation with BCD entails a computational complexity that increases sharply in terms of the number of users or slots. Therefore, certain structural characteristics of the optimal power-time allocation policy are derived. Building on those, two simple and computationally scalable heuristics, PTF and ProNTO are proposed. Simulation results suggest that PTF and ProNTO can closely track the performance of BCD which achieves a good balance between total throughput and fairness.",
""
]
} |
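The water-filling framework mentioned in the related-work text above is a standard power-allocation technique; the following is a generic sketch of the idea (not the specific algorithm of the cited work). It assumes channel gains g_i and a total power budget P, allocating p_i = max(0, μ − 1/g_i) with the water level μ found by bisection.

```python
# Generic water-filling sketch (illustrative; not the cited paper's algorithm):
# allocate total power P over channels with gains g_i to maximize
# sum_i log(1 + p_i * g_i). The optimum is p_i = max(0, mu - 1/g_i), with the
# water level mu chosen by bisection so the allocated powers sum to P.
def water_filling(gains, total_power, iters=100):
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    for _ in range(iters):
        mu = (lo + hi) / 2.0
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu  # water level too high, tighten from above
        else:
            lo = mu
    mu = (lo + hi) / 2.0
    return [max(0.0, mu - 1.0 / g) for g in gains]

powers = water_filling(gains=[2.0, 1.0, 0.25], total_power=2.0)
print([round(p, 3) for p in powers])  # stronger channels receive more power
```

In this example the weakest channel (gain 0.25) falls below the water level and receives no power at all, which is the qualitative behavior the rate-assignment work above exploits.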
1711.02044 | 2767532440 | In a rechargeable wireless sensor network, the data packets are generated by sensor nodes at a specific data rate, and transmitted to a base station. Moreover, the base station transfers power to the nodes by using Wireless Power Transfer (WPT) to extend their battery life. However, inadequately scheduling WPT and data collection causes some of the nodes to drain their battery and have their data buffer overflow, while the other nodes waste their harvested energy, which is more than they need to transmit their packets. In this paper, we investigate a novel optimal scheduling strategy, called EHMDP, aiming to minimize data packet loss from a network of sensor nodes in terms of the nodes' energy consumption and data queue state information. The scheduling problem is first formulated by a centralized MDP model, assuming that the complete states of each node are well known by the base station. This presents the upper bound of the data that can be collected in a rechargeable wireless sensor network. Next, we relax the assumption of the availability of full state information so that the data transmission and WPT can be semi-decentralized. The simulation results show that, in terms of network throughput and packet loss rate, the proposed algorithm significantly improves the network performance. | MAC protocol design for scheduling data transmission and power transfer is given in the literature @cite_36 @cite_30 @cite_22 . It was noted that the power transfer varies depending on the surrounding environment. A MAC protocol based on a fair polling scheme is studied to achieve data collection fairness @cite_36 . Moreover, each node contends for the channel adaptively according to its energy harvesting rate. In @cite_30 , an RF-MAC protocol is studied to jointly schedule power transmitters and energy harvesting rates in RWSN. RF-MAC focuses on the amount of energy delivered to the nodes, while eliminating disruption to data communication. In @cite_22 , multiple WPT transmitters are grouped into different sets based on an estimate of their separation distance from the node to reduce the impact of destructive interference. Moreover, a MAC protocol is presented to transfer power to the nodes on request. Unfortunately, the MAC protocols in the literature schedule WPT and data communication with the objective of interference elimination, which is different from the problem in this work. | {
"cite_N": [
"@cite_36",
"@cite_22",
"@cite_30"
],
"mid": [
"1511092558",
"2078384676",
"1988755608"
],
"abstract": [
"Energy harvesting wireless sensor network (EHWSNs) are being actively studied in order to solve the problems faced by battery-operated WSNs, namely the cost for battery replacement and impacts on the environment. In EH-WSNs, each node harvests ambient energy, such as light, heat, vibration, and uses it for sensing, computations, and wireless communications, where the amount of harvested energy of each node varies depending on their surrounding environments. MAC protocols for EH-WSNs need to be designed to achieve high throughput and fairness; however, the conventional MAC protocols proposed for EH-WSNs do not adapt to the harvesting rate of each node, resulting in poor fairness. In this paper, we propose a fair MAC protocol based on polling scheme for EH-WSNs. The proposed scheme adjusts contention probability of each node according to the individual harvesting rate, thereby increasing throughput achieved by nodes with low harvesting rate. We evaluate throughput and fairness of the proposed fair polling scheme by computer simulations, and show that the proposed scheme can improve fairness with little degradation of the overall network throughput.",
"Wireless transfer of energy will help realize perennially operating sensors, where dedicated transmitters replenish the residual node battery level through directed radio frequency (RF) waves. However, as this radiative transfer is in-band, it directly impacts data communication in the network, requiring a fresh perspective on medium access control (MAC) protocol design for appropriately sharing the channel for these two critical functions. Through an experimental study, we first demonstrate how the placement, the chosen frequency, and number of the RF energy transmitters affect the sensor charging time. These studies are then used to design a MAC protocol called RF-MAC that optimizes energy delivery to desirous sensor nodes on request. To the best of our knowledge, this is the first distributed MAC protocol for RF energy harvesting sensors, and through a combination of experimentation and simulation studies, we demonstrate 112% average network throughput improvement over the modified unslotted CSMA MAC protocol.",
"Wireless charging through directed radio frequency (RF) waves is an emerging technology that can be used to replenish the battery of a sensor node, albeit at the cost of data communication in the network. This tradeoff between energy transfer and communication functions requires a fresh perspective on medium access control (MAC) protocol design for appropriately sharing the channel. Through an experimental study, we demonstrate how the placement, the chosen frequency, and number of the RF energy transmitters impact the sensor charging time. These studies are then used to design a MAC protocol called RF-MAC that optimizes energy delivery to sensor nodes, while minimizing disruption to data communication. In the course of the protocol design, we describe mechanisms for (i) setting the maximum energy charging threshold, (ii) selecting specific transmitters based on the collective impact on charging time, (iii) requesting and granting energy transfer requests, and (iv) evaluating the respective priorities of data communication and energy transfer. To the best of our knowledge, this is the first distributed MAC protocol for RF energy harvesting sensors, and through a combination of experimentation and simulation studies, we observe 300% maximum network throughput improvement over the classical modified unslotted CSMA MAC protocol."
]
} |
1711.02217 | 2767455318 | In this work, we propose a new segmentation algorithm for images containing convex objects present in multiple shapes with a high degree of overlap. The proposed algorithm is carried out in two steps: first we identify the visible contours, segment them using concave points, and finally group the segments belonging to the same object. The next step is to assign a shape identity to these grouped contour segments. For images containing objects in multiple shapes, we begin by identifying shape classes of the contours, followed by assigning a shape entity to these classes. We provide a comprehensive experimentation of our algorithm on two crystal image datasets. One dataset comprises images containing objects in multiple shapes overlapping each other, and the other dataset contains standard images with objects present in a single shape. We test our algorithm against two baselines, with our proposed algorithm outperforming both the baselines. | Certain approaches use dynamic programming for this task @cite_17 : they determine the optimal boundaries for the hidden objects by defining a path of the highest average intensity along the boundaries. Graph-cut methods segment objects by creating a graph with pixels treated as nodes and the differences between their intensities as edge weights, followed by finding the minimum cut of the graph @cite_12 @cite_10 . Both of these approaches require prominent gradients at the boundaries to be effective in segmenting objects. | {
"cite_N": [
"@cite_10",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"2121947440",
"1968056609"
],
"abstract": [
"",
"We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging.",
"Background Understanding the cellular and molecular basis of tissue development and function requires analysis of individual cells while in their tissue context. Methods We developed software to find the optimum border around each cell (segmentation) from two-dimensional microscopic images of intact tissue. Samples were labeled with a fluorescent cell surface marker so that cell borders were brighter than elsewhere. The optimum border around each cell was defined as the border with an average intensity per unit length greater than any other possible border around that cell, and was calculated using the gray-weighted distance transform. Algorithm initiation requiring the user to mark two points per cell, one approximately in the center and the other on the border, ensured virtually 100% correct segmentation. Thereafter segmentation was automatic. Results The method was highly robust, because intermittent labeling of the cell borders, diffuse borders, and spurious signals away from the border do not significantly affect the optimum path. Computer-generated cells with increasing levels of added noise showed that the approach was accurate provided the cell could be detected visually. Conclusions We have developed a highly robust algorithm for segmenting images of surface-labeled cells, enabling accurate and quantitative analysis of individual cells in tissue. © 2005 International Society for Analytical Cytology"
]
} |
1711.02217 | 2767455318 | In this work, we propose a new segmentation algorithm for images containing convex objects present in multiple shapes with a high degree of overlap. The proposed algorithm is carried out in two steps: first we identify the visible contours, segment them using concave points, and finally group the segments belonging to the same object. The next step is to assign a shape identity to these grouped contour segments. For images containing objects in multiple shapes, we begin by identifying shape classes of the contours, followed by assigning a shape entity to these classes. We provide a comprehensive experimentation of our algorithm on two crystal image datasets. One dataset comprises images containing objects in multiple shapes overlapping each other, and the other dataset contains standard images with objects present in a single shape. We test our algorithm against two baselines, with our proposed algorithm outperforming both the baselines. | One of the popular approaches for segmentation of overlapping objects is the watershed algorithm @cite_13 , which in its classical form often results in over-segmentation. A potential solution to over-segmentation is region merging, which does solve the problem, but the method's efficiency varies depending upon the object size and the concentration of objects in the image. Marker-controlled watershed @cite_15 @cite_4 is also used for this purpose, but its efficiency depends highly on the accurate identification of the markers. | {
"cite_N": [
"@cite_15",
"@cite_13",
"@cite_4"
],
"mid": [
"2054821087",
"2124260943",
"2151538727"
],
"abstract": [
"Cluster division is a critical issue in fluorescence microscopy-based analytical cytology when preparation protocols do not provide appropriate separation of objects. Overlooking clustered nuclei and analyzing only isolated nuclei may dramatically increase analysis time or affect the statistical validation of the results. Automatic segmentation of clustered nuclei requires the implementation of specific image segmentation tools. Most algorithms are inspired by one of the two following strategies: 1) cluster division by the detection of internuclei gradients; or 2) division by definition of domains of influence (geometrical approach). Both strategies lead to completely different implementations, and usually algorithms based on a single view strategy fail to correctly segment most clustered nuclei, or perform well just for a specific type of sample. An algorithm based on morphological watersheds has been implemented and tested on the segmentation of microscopic nuclei clusters. This algorithm provides a tool that can be used for the implementation of both gradient- and domain-based algorithms, and, more importantly, for the implementation of mixed (gradient- and shape-based) algorithms. Using this algorithm, almost 90% of the test clusters were correctly segmented in peripheral blood and bone marrow preparations. The algorithm was valid for both types of samples, using the appropriate markers and transformations. Cytometry 28:289–297, 1997. © 1997 Wiley-Liss, Inc.",
"A fast and flexible algorithm for computing watersheds in digital gray-scale images is introduced. A review of watersheds and related notions is first presented, and the major methods to determine watersheds are discussed. The algorithm is based on an immersion process analogy, in which the flooding of the water in the picture is efficiently simulated using a queue of pixels. It is described in detail and provided in a pseudo-C language. The accuracy of this algorithm is proven to be superior to that of the existing implementations, and it is shown that its adaptation to any kind of digital grid and its generalization to n-dimensional images (and even to graphs) are straightforward. The algorithm is reported to be faster than any other watershed algorithm. Applications of this algorithm with regard to picture segmentation are presented for magnetic resonance (MR) imagery and for digital elevation models. An example of 3-D watershed is also provided.",
"It is important to observe and study cancer cells' cycle progression in order to better understand drug effects on cancer cells. Time-lapse microscopy imaging serves as an important method to measure the cycle progression of individual cells in a large population. Since manual analysis is unreasonably time consuming for the large volumes of time-lapse image data, automated image analysis is proposed. Existing approaches dealing with time-lapse image data are rather limited and often give inaccurate analysis results, especially in segmenting and tracking individual cells in a cell population. In this paper, we present a new approach to segment and track cell nuclei in time-lapse fluorescence image sequence. First, we propose a novel marker-controlled watershed based on mathematical morphology, which can effectively segment clustered cells with less oversegmentation. To further segment undersegmented cells or to merge oversegmented cells, context information among neighboring frames is employed, which is proved to be an effective strategy. Then, we design a tracking method based on modified mean shift algorithm, in which several kernels with adaptive scale, shape, and direction are designed. Finally, we combine mean-shift and Kalman filter to achieve a more robust cell nuclei tracking method than existing ones. Experimental results show that our method can obtain 98.8% segmentation accuracy, 97.4% cell division tracking accuracy, and 97.6% cell tracking accuracy."
]
} |
1711.02217 | 2767455318 | In this work, we propose a new segmentation algorithm for images containing convex objects present in multiple shapes with a high degree of overlap. The proposed algorithm is carried out in two steps: first we identify the visible contours, segment them using concave points, and finally group the segments belonging to the same object. The next step is to assign a shape identity to these grouped contour segments. For images containing objects in multiple shapes, we begin by identifying shape classes of the contours, followed by assigning a shape entity to these classes. We provide a comprehensive experimentation of our algorithm on two crystal image datasets. One dataset comprises images containing objects in multiple shapes overlapping each other, and the other dataset contains standard images with objects present in a single shape. We test our algorithm against two baselines, with our proposed algorithm outperforming both the baselines. | @cite_9 formulates the problem as a combination of level set functions and uses similarity transforms to predict the hidden parts of partially visible objects. The drawback of this method is that its performance depends on initialization and it is computationally expensive. @cite_8 proposes a morphological extension to level-set-based methods, introducing morphological operations into traditional snake algorithms. This method outperforms earlier active contour methods as the curve evolution is simpler and faster. It however fails to converge when a large number of closely spaced overlapping objects are present in the image. In @cite_18 , the problem of segmenting overlapping elliptical bubbles is solved by segmenting the extracted contours into curves. These curves are further grouped into sets belonging to the same bubble, followed by ellipse fitting. This approach works well for images having similarly shaped objects, but fails for images containing multi-shaped overlapping objects. | {
"cite_N": [
"@cite_9",
"@cite_18",
"@cite_8"
],
"mid": [
"2075778519",
"1966394605",
""
],
"abstract": [
"We address the problem of segmenting multiple similar objects by optimizing a Chan-Vese-like [1] functional with respect to a mixture of level set functions. We solve the variational formulation under this model allowing for similarity transforms. This allows shape priors to be enforced even in the presence of mutual occlusion, lifting the limitation in [2]. We show numerical results on example images to demonstrate the promise of our approach.",
"Direct imaging technology is an effective and convenient method for the estimation of bubble size distribution (BSD). However, overlapping bubble has an influence on BSD when gas holdup is more than 1 . In this paper, we present a new method of overlapping elliptical bubble recognition to determine bubble size. The method mainly includes two steps: contour segmentation and segment grouping. Contour segmentation is on the assumption that the concave points in the dominant point sequence are always the connecting points, and segment grouping is mainly based on the average distance deviation criterion. Both simulated images and real bubble images are used to evaluate this new method. The results show that it is effective in the recognition of overlapping elliptical bubbles and have a potential in other elliptical object recognition. In the last, two methods are used for BSD estimation. It is found that the bubble size (such d10 or d32) estimated by the ignore method is slightly smaller than that estimated by the recognition method.",
""
]
} |
1711.02217 | 2767455318 | In this work, we propose a new segmentation algorithm for images containing convex objects present in multiple shapes with a high degree of overlap. The proposed algorithm is carried out in two steps, first we identify the visible contours, segment them using concave points and finally group the segments belonging to the same object. The next step is to assign a shape identity to these grouped contour segments. For images containing objects in multiple shapes we begin first by identifying shape classes of the contours followed by assigning a shape entity to these classes. We provide a comprehensive experimentation of our algorithm on two crystal image datasets. One dataset comprises of images containing objects in multiple shapes overlapping each other and the other dataset contains standard images with objects present in a single shape. We test our algorithm against two baselines, with our proposed algorithm outperforming both the baselines. | Our proposed segmentation algorithm is inspired by the concave-point based overlapping object segmentation method of @cite_16 . In @cite_16 , the idea of grouping the contours belonging to the same object present in an overlap cluster is carried out by fitting ellipses over the segmented contours. The grouped segments are further fitted with ellipses to separate overlapping objects. The efficiency of this approach declines when applied to images with multi-shaped objects. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2252294251"
],
"abstract": [
"This paper presents a novel method for the segmentation of partially overlapping nanoparticles with a convex shape in silhouette images. The proposed method involves two main steps: contour evidence extraction and contour estimation. Contour evidence extraction starts with contour segmentation where contour segments are recovered from a binarized image by detecting concave points. After this, contour segments which belong to the same object are grouped by utilizing properties of fitted ellipses. Finally, the contour estimation is implemented through a non-linear ellipse fitting problem in which partially observed objects are modeled in the form of ellipse-shape objects. The experiments on a dataset consisting of nanoparticles demonstrate that the proposed method outperforms two current state-of-art approaches in overlapping nanoparticles segmentation. The method relies only on edge information and can be applied to any segmentation problems where the objects are partially overlapping and have an approximately elliptical shape, such as cell segmentation."
]
} |
1711.02216 | 2767472575 | To enable autonomous robotic manipulation in unstructured environments, we present SegICP-DSR, a real-time, dense, semantic scene reconstruction and pose estimation algorithm that achieves mm-level pose accuracy and standard deviation (7.9 mm, σ = 7.6 mm and 1.7 deg, σ = 0.7 deg) and successfully identified the object pose in 97% of test cases. This represents a 29% increase in accuracy, and a 14% increase in success rate compared to SegICP in cluttered, unstructured environments. The performance increase of SegICP-DSR arises from (1) improved deep semantic segmentation under adversarial training, (2) precise automated calibration of the camera intrinsic and extrinsic parameters, (3) viewpoint specific ray-casting of the model geometry, and (4) dense semantic ElasticFusion point clouds for registration. We benchmark the performance of SegICP-DSR on thousands of pose-annotated video frames and demonstrate its accuracy and efficacy on two tight tolerance grasping and insertion tasks using a KUKA LBR iiwa robotic arm. | Accurate pose estimation in cluttered, unstructured environments is challenging, and many successful manipulation techniques (e.g. visual servoing) bypass the problem entirely. Several recent manipulation approaches have shown great success by avoiding an explicit representation of object pose and directly mapping raw sensor observations to motor behavior @cite_3 @cite_21 @cite_39 . However, these approaches do not provide the semantic expressiveness required for symbolic task planners @cite_16 , which are convenient for complex, long-horizon, multi-step manipulation tasks. Furthermore, object recognition and pose estimation provide an elegant, compact representation of high-dimensional sensor input. In this regard, we focus primarily on approaches that first solve the object-identification and pose-estimation problem and subsequently tackle manipulation. | {
"cite_N": [
"@cite_16",
"@cite_21",
"@cite_3",
"@cite_39"
],
"mid": [
"2070683816",
"2414685554",
"2964161785",
""
],
"abstract": [
"We describe an integrated strategy for planning, perception, state estimation and action in complex mobile manipulation domains based on planning in the belief space of probability distributions over states using hierarchical goal regression (pre-image back-chaining). We develop a vocabulary of logical expressions that describe sets of belief states, which are goals and subgoals in the planning process. We show that a relatively small set of symbolic operators can give rise to task-oriented perception in support of the manipulation goals. An implementation of this method is demonstrated in simulation and on a real PR2 robot, showing robust, flexible solution of mobile manipulation problems with multiple objects and substantial uncertainty.",
"This paper presents the Dexterity Network (Dex-Net) 1.0, a dataset of 3D object models and a sampling-based planning algorithm to explore how Cloud Robotics can be used for robust grasp planning. The algorithm uses a Multi- Armed Bandit model with correlated rewards to leverage prior grasps and 3D object models in a growing dataset that currently includes over 10,000 unique 3D object models and 2.5 million parallel-jaw grasps. Each grasp includes an estimate of the probability of force closure under uncertainty in object and gripper pose and friction. Dex-Net 1.0 uses Multi-View Convolutional Neural Networks (MV-CNNs), a new deep learning method for 3D object classification, to provide a similarity metric between objects, and the Google Cloud Platform to simultaneously run up to 1,500 virtual cores, reducing experiment runtime by up to three orders of magnitude. Experiments suggest that correlated bandit techniques can use a cloud-based network of object models to significantly reduce the number of samples required for robust grasp planning. We report on system sensitivity to variations in similarity metrics and in uncertainty in pose and friction. Code and updated information is available at http: berkeleyautomation.github.io dex-net .",
"Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.",
""
]
} |
1711.02026 | 2767389462 | Full-duplex (FD) has emerged as a disruptive communications paradigm for enhancing the achievable spectral efficiency (SE), thanks to the recent major breakthroughs in self-interference (SI) mitigation. The FD versus half-duplex (HD) SE gain, in cellular networks, is however largely limited by the mutual-interference (MI) between the downlink (DL) and the uplink (UL). A potential remedy for tackling the MI bottleneck is through cooperative communications. This paper provides a stochastic design and analysis of FD enabled cloud radio access network (C-RAN) under the Poisson point process (PPP)-based abstraction model of multi-antenna radio units (RUs) and user equipments (UEs). We consider different disjoint and user-centric approaches towards the formation of finite clusters in the C-RAN. Contrary to most existing studies, we explicitly take into consideration non-isotropic fading channel conditions and finite-capacity fronthaul links. Accordingly, upper-bound expressions for the C-RAN DL and UL SEs, involving the statistics of all intended and interfering signals, are derived. The performance of the FD C-RAN is investigated through the proposed theoretical framework and Monte-Carlo (MC) simulations. The results indicate that significant FD versus HD C-RAN SE gains can be achieved, particularly in the presence of sufficient-capacity fronthaul links and advanced interference cancellation capabilities. | Several studies on the performance of FD enabled cooperative wireless systems have also been reported in the literature. In @cite_14 , an information-theoretic analysis of C-RAN with FD RUs based on the classical Wyner cellular model was provided. In particular, the authors investigated the FD C-RAN DL and UL SEs (versus single-cell processing), considering capacity-limited fronthaul links, successive interference cancellation (SIC) capability at the user equipment (UE) side, and perfect SI cancellation capability at the RU side. 
In addition, in @cite_45 , the authors considered an FD enabled multi-cell network MIMO paradigm, and utilized spatial interference-alignment (IA) to tackle the MI from the UL operation on the DL performance. In particular, the scaling of the multiplexing gain of FD versus HD operation in multi-cell network MIMO was characterized in closed form. On the other hand, the authors in @cite_0 considered a C-RAN scenario in which a single FD UE simultaneously communicates with randomly-deployed HD multi-antenna RUs in the DL and UL directions. The results indicated that with appropriate beamforming and RU association, significant FD versus HD SE gains can be achieved for this particular case, provided that the residual SI power is low. | {
"cite_N": [
"@cite_0",
"@cite_14",
"@cite_45"
],
"mid": [
"",
"2019048245",
"1527785909"
],
"abstract": [
"",
"The conventional design of cellular systems prescribes the separation of uplink and downlink transmissions via time-division or frequency-division duplex. Recent advances in analog and digital domain self-interference interference cancellation challenge the need for this arrangement and open up the possibility to operate base stations, especially low-power ones, in a full-duplex mode. As a means to cope with the resulting downlink-to-uplink interference among base stations, this letter investigates the impact of the Cloud Radio Access Network (C-RAN) architecture. The analysis follows an information theoretic approach based on the classical Wyner model. The analytical results herein confirm the significant potential advantages of the C-RAN architecture in the presence of full-duplex base stations, as long as sufficient fronthaul capacity is available and appropriate mobile station scheduling, or successive interference cancellation at the mobile stations, is implemented.",
"We investigate the open problem of characterizing the multiplexing gain offered by FD in a network of M cells (compared to the gain of two available on a single link). While self-interference cancellation is fundamental in realizing full duplex (FD) capability, the more challenging problem in a network-wide deployment of FD communication is a new form of uplink-downlink interference, namely UDI, caused by transmission of uplink clients on the downlink reception of other clients operating in the same frequency band during FD. We leverage spatial interference alignment (IA) as an effective approach to address UDI and characterize the scalability of the FD's multiplexing gain (in terms of M) by providing a closed-form expression. To the best of our knowledge, this is the first characterization of FD's multiplexing gain in a multi-cell network. We also provide an IA construction that can achieve the best scaling possible. Further, we extend our results to practical settings with limited number of clients and limited information sharing between access points."
]
} |
1711.01761 | 2767327871 | We study a new aggregation operator for gradients coming from a mini-batch for stochastic gradient (SG) methods that allows a significant speed-up in the case of sparse optimization problems. We call this method AdaBatch and it only requires a few lines of code change compared to regular mini-batch SGD algorithms. We provide a theoretical insight to understand how this new class of algorithms is performing and show that it is equivalent to an implicit per-coordinate rescaling of the gradients, similarly to what Adagrad methods can do. In theory and in practice, this new aggregation allows to keep the same sample efficiency of SG methods while increasing the batch size. Experimentally, we also show that in the case of smooth convex optimization, our procedure can even obtain a better loss when increasing the batch size for a fixed number of samples. We then apply this new algorithm to obtain a parallelizable stochastic gradient method that is synchronous but allows speed-up on par with Hogwild! methods as convergence does not deteriorate with the increase of the batch size. The same approach can be used to make mini-batch provably efficient for variance-reduced SG methods such as SVRG. | The effect of delay for constant step-size stochastic gradient descent has been studied by @cite_16 . Allowing for delay will remove the need for synchronization and thus limit the overhead when parallelizing. The main result of @cite_16 concludes that there are two different regimes. During the first phase, delay will not help convergence, although once the asymptotic terms are dominating, a theoretical linear speedup with the number of workers is recovered. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2949198759"
],
"abstract": [
"Online learning algorithms have impressive convergence properties when it comes to risk minimization and convex games on very large problems. However, they are inherently sequential in their design which prevents them from taking advantage of modern multi-core architectures. In this paper we prove that online learning with delayed updates converges well, thereby facilitating parallel online learning."
]
} |
1711.01761 | 2767327871 | We study a new aggregation operator for gradients coming from a mini-batch for stochastic gradient (SG) methods that allows a significant speed-up in the case of sparse optimization problems. We call this method AdaBatch and it only requires a few lines of code change compared to regular mini-batch SGD algorithms. We provide a theoretical insight to understand how this new class of algorithms is performing and show that it is equivalent to an implicit per-coordinate rescaling of the gradients, similarly to what Adagrad methods can do. In theory and in practice, this new aggregation allows to keep the same sample efficiency of SG methods while increasing the batch size. Experimentally, we also show that in the case of smooth convex optimization, our procedure can even obtain a better loss when increasing the batch size for a fixed number of samples. We then apply this new algorithm to obtain a parallelizable stochastic gradient method that is synchronous but allows speed-up on par with Hogwild! methods as convergence does not deteriorate with the increase of the batch size. The same approach can be used to make mini-batch provably efficient for variance-reduced SG methods such as SVRG. | Mini-batching is a popular alternative for parallelizing or distributing SGD. In @cite_25 , the reduction of the variance of the gradient estimate is used to prove improvement in convergence. Our theoretical and practical results show that in the case of sparse learning, mini-batches do not offer an improvement during the first stage of optimization. We believe our merging rule is a simple modification of mini-batch SGD that can considerably improve convergence speed compared to regular mini-batch SGD. | {
"cite_N": [
"@cite_25"
],
"mid": [
"2130062883"
],
"abstract": [
"Online prediction methods are typically presented as serial algorithms running on a single processor. However, in the age of web-scale prediction problems, it is increasingly common to encounter situations where a single processor cannot keep up with the high rate at which inputs arrive. In this work, we present the distributed mini-batch algorithm, a method of converting many serial gradient-based online prediction algorithms into distributed algorithms. We prove a regret bound for this method that is asymptotically optimal for smooth convex loss functions and stochastic inputs. Moreover, our analysis explicitly takes into account communication latencies between nodes in the distributed environment. We show how our method can be used to solve the closely-related distributed stochastic optimization problem, achieving an asymptotically linear speed-up over multiple processors. Finally, we demonstrate the merits of our approach on a web-scale online prediction problem."
]
} |
1711.01575 | 2767382337 | We present a method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by fooling a special domain critic network. However, a drawback of this approach is that the critic simply labels the generated features as in-domain or not, without considering the boundaries between classes. This can lead to ambiguous features being generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), to encourage the generator to output more discriminative features for the target domain. Our key idea is to replace the critic with one that detects non-discriminative features, using dropout on the classifier network. The generator then learns to avoid these areas of the feature space and thus creates better features. We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvement over the state of the art. We also show that our approach can be used to train Generative Adversarial Networks for semi-supervised learning. | Domain Adaptation. Recent unsupervised domain adaptation (UDA) methods for visual data aim to align the feature distributions of the source and target domains ( @cite_4 @cite_34 @cite_13 @cite_20 @cite_2 @cite_7 @cite_22 ). Such methods are motivated by theoretical results stating that minimizing the divergence between domains will lower the upper bound of the error on the target domain ( @cite_35 ). Many works in deep learning utilize the technique of distribution matching in hidden layers of a network such as a CNN ( @cite_13 @cite_20 @cite_2 ).
However, unlike our method, they measure the domain divergence based on the hidden features of the network without considering the relationship between the network's decision boundary and the target features. | {
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_2",
"@cite_34",
"@cite_13",
"@cite_20"
],
"mid": [
"2104094955",
"",
"2408201877",
"",
"2951670162",
"2467286621",
"1565327149",
"1731081199"
],
"abstract": [
"Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. Often, however, we have plentiful labeled training data from a source domain but wish to learn a classifier which performs well on a target domain with a different distribution and little or no labeled training data. In this work we investigate two questions. First, under what conditions can a classifier trained from source data be expected to perform well on target data? Second, given a small amount of labeled target data, how should we combine it during training with the large amount of labeled source data to achieve the lowest target error at test time? We address the first question by bounding a classifier's target error in terms of its source error and the divergence between the two domains. We give a classifier-induced divergence measure that can be estimated from finite, unlabeled samples from the domains. Under the assumption that there exists some hypothesis that performs well in both domains, we show that this quantity together with the empirical source error characterize the target error of a source-trained classifier. We answer the second question by bounding the target error of a model which minimizes a convex combination of the empirical source and target errors. Previous theoretical work has considered minimizing just the source error, just the target error, or weighting instances from the two domains equally. We show how to choose the optimal combination of source and target error as a function of the divergence, the sample sizes of both domains, and the complexity of the hypothesis class. The resulting bound generalizes the previously studied cases and is always at least as tight as a bound which considers minimizing only the target error or an equal weighting of source and target errors.",
"",
"Deep networks have been successfully applied to learn transferable features for adapting models from a source domain to a different target domain. In this paper, we present joint adaptation networks (JAN), which learn a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion. Adversarial training strategy is adopted to maximize JMMD such that the distributions of the source and target domains are made more distinguishable. Learning can be performed by stochastic gradient descent with the gradients computed by back-propagation in linear-time. Experiments testify that our model yields state of the art results on standard datasets.",
"",
"Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.",
"Deep neural networks are able to learn powerful representations from large quantities of labeled input data, however they cannot always generalize well across changes in input distributions. Domain adaptation algorithms have been proposed to compensate for the degradation in performance due to domain shift. In this paper, we address the case when the target domain is unlabeled, requiring unsupervised adaptation. CORAL is a \"frustratingly easy\" unsupervised domain adaptation method that aligns the second-order statistics of the source and target distributions with a linear transformation. Here, we extend CORAL to learn a nonlinear transformation that aligns correlations of layer activations in deep neural networks (Deep CORAL). Experiments on standard benchmark datasets show state-of-the-art performance.",
"Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task.",
"We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for descriptor learning task in the context of person re-identification application."
]
} |
1711.01575 | 2767382337 | We present a method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by fooling a special domain critic network. However, a drawback of this approach is that the critic simply labels the generated features as in-domain or not, without considering the boundaries between classes. This can lead to ambiguous features being generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), to encourage the generator to output more discriminative features for the target domain. Our key idea is to replace the critic with one that detects non-discriminative features, using dropout on the classifier network. The generator then learns to avoid these areas of the feature space and thus creates better features. We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvement over the state of the art. We also show that our approach can be used to train Generative Adversarial Networks for semi-supervised learning. | In ( @cite_1 ), entropy minimization is just one part of the overall approach. To compare our ADR approach to entropy minimization directly, we use a new baseline method. Though, to our knowledge, this method has not been proposed in any previous work, it is easily achieved by modifying a method proposed by ( @cite_32 ). The generator tries to minimize the entropy of the target samples whereas the critic tries to maximize it. The entropy is directly measured by the output of the classifier. This baseline is similar to our approach in that the goal of the method is to achieve low-density separation. | {
"cite_N": [
"@cite_1",
"@cite_32"
],
"mid": [
"2950361018",
"2178768799"
],
"abstract": [
"The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. In this paper, we propose a new approach to domain adaptation in deep networks that can simultaneously learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain. We relax a shared-classifier assumption made by previous methods and assume that the source classifier and target classifier differ by a residual function. We enable classifier adaptation by plugging several layers into the deep network to explicitly learn the residual function with reference to the target classifier. We embed features of multiple layers into reproducing kernel Hilbert spaces (RKHSs) and match feature distributions for feature adaptation. The adaptation behaviors can be achieved in most feed-forward models by extending them with new residual layers and loss functions, which can be trained efficiently using standard back-propagation. Empirical evidence exhibits that the approach outperforms state of art methods on standard domain adaptation datasets.",
"In this paper we present a method for learning a discriminative classifier from unlabeled or partially labeled data. Our approach is based on an objective function that trades-off mutual information between observed examples and their predicted categorical class distribution, against robustness of the classifier to an adversarial generative model. The resulting algorithm can either be interpreted as a natural generalization of the generative adversarial networks (GAN) framework or as an extension of the regularized information maximization (RIM) framework to robust classification against an optimal adversary. We empirically evaluate our method - which we dub categorical generative adversarial networks (or CatGAN) - on synthetic data as well as on challenging image classification tasks, demonstrating the robustness of the learned classifiers. We further qualitatively assess the fidelity of samples generated by the adversarial generator that is learned alongside the discriminative classifier, and identify links between the CatGAN objective and discriminative clustering algorithms (such as RIM)."
]
} |
1711.01567 | 2767353373 | This paper describes a general, scalable, end-to-end framework that uses the generative adversarial network (GAN) objective to enable robust speech recognition. Encoders trained with the proposed approach enjoy improved invariance by learning to map noisy audio to the same embedding space as that of clean audio. Unlike previous methods, the new framework does not rely on domain expertise or simplifying assumptions as are often needed in signal processing, and directly encourages robustness in a data-driven way. We show the new approach improves simulated far-field speech recognition of vanilla sequence-to-sequence models without specialized front-ends or preprocessing. | The vast majority of work in robust ASR deals with reverberation and ambient noise; @cite_15 provides an extensive survey of this effort. One of the most effective approaches to handling this variability is to devise a strong front-end, such as weighted prediction error (WPE) speech dereverberation @cite_13 @cite_4 , and to train the resulting neural network on realistically augmented data @cite_7 @cite_8 .
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_15",
"@cite_13"
],
"mid": [
"1983812858",
"2622203030",
"2533523411",
"2951660826",
"2164502538"
],
"abstract": [
"The performance of many microphone array processing techniques deteriorates in the presence of reverberation. To provide a widely applicable solution to this longstanding problem, this paper generalizes existing dereverberation methods using subband-domain multi-channel linear prediction filters so that the resultant generalized algorithm can blindly shorten a multiple-input multiple-output (MIMO) room impulse response between a set of unknown number of sources and a microphone array. Unlike existing dereverberation methods, the presented algorithm is developed without assuming specific acoustic conditions, and provides a firm theoretical underpinning for the applicability of the subband-domain multi-channel linear prediction methods. The generalization is achieved by using a new cost function for estimating the prediction filter and an efficient optimization algorithm. The proposed generalized algorithm makes it easier to understand the common background underlying different dereverberation methods and future technical development. Indeed, this paper also derives two alternative dereverberation methods from the proposed algorithm, which are advantageous in terms of computational complexity. Experimental results are reported, showing that the proposed generalized algorithm effectively achieves blind MIMO impulse response shortening especially in a mid-to-high frequency range.",
"",
"Conversational speech recognition has served as a flagship speech recognition task since the release of the Switchboard corpus in the 1990s. In this paper, we measure the human error rate on the widely used NIST 2000 test set, and find that our latest automated system has reached human parity. The error rate of professional transcribers is 5.9 for the Switchboard portion of the data, in which newly acquainted pairs of people discuss an assigned topic, and 11.3 for the CallHome portion where friends and family members have open-ended conversations. In both cases, our automated system establishes a new state of the art, and edges past the human benchmark, achieving error rates of 5.8 and 11.0 , respectively. The key to our system's performance is the use of various convolutional and LSTM acoustic model architectures, combined with a novel spatial smoothing method and lattice-free MMI acoustic training, multiple recurrent neural network language modeling approaches, and a systematic use of system combination.",
"Eliminating the negative effect of non-stationary environmental noise is a long-standing research topic for automatic speech recognition that stills remains an important challenge. Data-driven supervised approaches, including ones based on deep neural networks, have recently emerged as potential alternatives to traditional unsupervised approaches and with sufficient training, can alleviate the shortcomings of the unsupervised methods in various real-life acoustic environments. In this light, we review recently developed, representative deep learning approaches for tackling non-stationary additive and convolutional degradation of speech with the aim of providing guidelines for those involved in the development of environmentally robust speech recognition systems. We separately discuss single- and multi-channel techniques developed for the front-end and back-end of speech recognition systems, as well as joint front-end and back-end training frameworks.",
"This paper proposes a statistical model-based speech dereverberation approach that can cancel the late reverberation of a reverberant speech signal captured by distant microphones without prior knowledge of the room impulse responses. With this approach, the generative model of the captured signal is composed of a source process, which is assumed to be a Gaussian process with a time-varying variance, and an observation process modeled by a delayed linear prediction (DLP). The optimization objective for the dereverberation problem is derived to be the sum of the squared prediction errors normalized by the source variances; hence, this approach is referred to as variance-normalized delayed linear prediction (NDLP). Inheriting the characteristic of DLP, NDLP can robustly estimate an inverse system for late reverberation in the presence of noise without greatly distorting a direct speech signal. In addition, owing to the use of variance normalization, NDLP allows us to improve the dereverberation result especially with relatively short (of the order of a few seconds) observations. Furthermore, NDLP can be implemented in a computationally efficient manner in the time-frequency domain. Experimental results demonstrate the effectiveness and efficiency of the proposed approach in comparison with two existing approaches."
]
} |
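The realistic data augmentation mentioned in the related-work passage above typically mixes clean utterances with noise at a chosen signal-to-noise ratio. A minimal sketch of such mixing (the function name and scaling formula are illustrative, not taken from the cited papers):

```python
import numpy as np

def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix a noise signal into clean speech at a target SNR (in dB)."""
    noise = np.resize(noise, clean.shape)  # tile/trim noise to match length
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    # Scale the noise so that 10*log10(p_clean / p_scaled_noise) == snr_db.
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise
```

Training on copies of a corpus mixed at several SNRs (e.g., 0-20 dB) is a common way to expose an acoustic model to additive-noise variability.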
1711.01567 | 2767353373 | This paper describes a general, scalable, end-to-end framework that uses the generative adversarial network (GAN) objective to enable robust speech recognition. Encoders trained with the proposed approach enjoy improved invariance by learning to map noisy audio to the same embedding space as that of clean audio. Unlike previous methods, the new framework does not rely on domain expertise or simplifying assumptions as are often needed in signal processing, and directly encourages robustness in a data-driven way. We show the new approach improves simulated far-field speech recognition of vanilla sequence-to-sequence models without specialized front-ends or preprocessing. | For speech, @cite_1 proposes a GAN-based speech enhancement method called SEGAN, but without the end goal of speech recognition. SEGAN operates on raw speech samples and is hence computationally impractical for large-scale experiments.
"cite_N": [
"@cite_1"
],
"mid": [
"2600556233"
],
"abstract": [
"Current speech enhancement techniques operate on the spectral domain and or exploit some higher-level feature. The majority of them tackle a limited number of noise conditions and rely on first-order statistics. To circumvent these issues, deep networks are being increasingly used, thanks to their ability to learn complex functions from large example sets. In this work, we propose the use of generative adversarial networks for speech enhancement. In contrast to current techniques, we operate at the waveform level, training the model end-to-end, and incorporate 28 speakers and 40 different noise conditions into the same model, such that model parameters are shared across them. We evaluate the proposed model using an independent, unseen test set with two speakers and 20 alternative noise conditions. The enhanced samples confirm the viability of the proposed model, and both objective and subjective evaluations confirm the effectiveness of it. With that, we open the exploration of generative architectures for speech enhancement, which may progressively incorporate further speech-centric design choices to improve their performance."
]
} |
1711.01684 | 2767330327 | In the past several decades, many authorship attribution studies have used computational methods to determine the authors of disputed texts. Disputed authorship is a common problem in Classics, since little information about ancient documents has survived the centuries. Many scholars have questioned the authenticity of the final chapter of Xenophon's Cyropaedia, a 4th century B.C. historical text. In this study, we use N-grams frequency vectors with a cosine similarity function and word frequency vectors with Naive Bayes Classifiers (NBC) and Support Vector Machines (SVM) to analyze the authorship of the Cyropaedia. Although the N-gram analysis shows that the epilogue of the Cyropaedia differs slightly from the rest of the work, comparing the analysis of Xenophon with analyses of Aristotle and Plato suggests that this difference is not significant. Both NBC and SVM analyses of word frequencies show that the final chapter of the Cyropaedia is closely related to the other chapters of the Cyropaedia. Therefore, this analysis suggests that the disputed chapter was written by Xenophon. This information can help scholars better understand the Cyropaedia and also demonstrates the usefulness of applying modern authorship analysis techniques to classical literature. | A 1982 study of the Corpus Lysiacum did use word frequencies as one of the features to distinguish texts attributed to Lysias. This method used chi-squared tests to provide a distance measure. Chi-squared tests assume that features are independent, an assumption that rarely holds for grammatical features. The study of the Corpus Lysiacum additionally involved parsing the text by hand, which is extremely time-consuming and error-prone @cite_2 .
"cite_N": [
"@cite_2"
],
"mid": [
"2038870135"
],
"abstract": [
"Thirty-five speeches, fragments and abstracts of speeches' comprise the corpus that has survived from a body of over four hundred attributed in ancient times to the Greek orator Lysias2. All three of the genres of oratory recognised by ancient critics are represented deliberative, epideictic and forensic. Most belong to the last category, being products of the profession which he practized during the latter part of his life, composing speeches for litigants to deliver in their own persons, as they were required to do in Athenian law-courts. Lysias, a Syracusan whose father Cephalus settled in Athens before the Peloponnesian War, conducted his own case against Eratosthenes, who had murdered his brother and despoiled his estate as one of the Thirty Tyrants who ruled Athens briefly in 404 3 B.C. The speech which he wrote for this prosecution survives as 12 in the Corpus (Oxford Text numbering, which is used in this article), and was probably the only speech that he made on his own behalf. But his name occurs"
]
} |
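The chi-squared distance measure described in the related-work passage above can be sketched over relative word frequencies as follows (a simplified illustration, not the exact procedure of the 1982 study):

```python
from collections import Counter

def word_freqs(text: str) -> Counter:
    """Relative frequency of each word in a text."""
    words = text.lower().split()
    total = len(words)
    return Counter({w: c / total for w, c in Counter(words).items()})

def chi_squared_distance(text_a: str, text_b: str) -> float:
    """Chi-squared distance between the word-frequency profiles of
    two texts (smaller = more similar vocabulary usage)."""
    fa, fb = word_freqs(text_a), word_freqs(text_b)
    vocab = set(fa) | set(fb)  # Counter returns 0.0 for absent words
    return sum((fa[w] - fb[w]) ** 2 / (fa[w] + fb[w]) for w in vocab)
```

A smaller distance suggests more similar vocabulary profiles; the feature-independence assumption criticized in the passage is exactly what this per-word sum glosses over.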
1711.01587 | 2735323847 | Similarity search is essential to many important applications and often involves searching at scale on high-dimensional data based on their similarity to a query. In biometric applications, recent vulnerability studies have shown that adversarial machine learning can compromise biometric recognition systems by exploiting the biometric similarity information. Existing methods for biometric privacy protection are in general based on pairwise matching of secured biometric templates and have inherent limitations in search efficiency and scalability. In this paper, we propose an inference-based framework for privacy-preserving similarity search in Hamming space. Our approach builds on an obfuscated distance measure that can conceal Hamming distance in a dynamic interval. Such a mechanism enables us to systematically design statistically reliable methods for retrieving most likely candidates without knowing the exact distance values. We further propose to apply Montgomery multiplication for generating search indexes that can withstand adversarial similarity analysis, and show that information leakage in randomized Montgomery domains can be made negligibly small. Our experiments on public biometric datasets demonstrate that the inference-based approach can achieve a search accuracy close to the best performance possible with secure computation methods, but the associated cost is reduced by orders of magnitude compared to cryptographic primitives. | Biometric privacy research has largely focused on enabling one-to-one matching without revealing the biometric features that characterize an individual, known as biometric template protection @cite_12 @cite_38 @cite_22 . Depending on how the protected reference is generated and matched, template protection schemes can be classified into bio-cryptosystems and feature transformations @cite_12 .
Bio-cryptosystems generate error-correction codewords for an indirect matching of biometric templates @cite_38 . They typically yield a yes/no decision for verification on a one-to-one basis, which is not suitable for returning candidates ordered by matching degree. Feature transformation methods apply non-invertible functions to biometric templates @cite_22 . The protected references are usually distance-preserving, so that matching can be performed directly in the transformed space.
"cite_N": [
"@cite_38",
"@cite_22",
"@cite_12"
],
"mid": [
"2105727875",
"1490122681",
"1584467222"
],
"abstract": [
"Biometrics are an important and widely used class of methods for identity verification and access control. Biometrics are attractive because they are inherent properties of an individual. They need not be remembered like passwords and are not easily lost or forged like identifying documents. At the same time, biometrics are fundamentally noisy and irreplaceable. There are always slight variations among the measurements of a given biometric, and, unlike passwords or identification numbers, biometrics are derived from physical characteristics that cannot easily be changed. The proliferation of biometric usage raises critical privacy and security concerns that, due to the noisy nature of biometrics, cannot be addressed using standard cryptographic methods. In this article, we present an overview of secure biometrics, also referred to as biometric template protection, an emerging class of methods that address these concerns.",
"Recent years have seen an exponential growth in the use of various biometric technologies for trusted automatic recognition of humans. With the rapid adaptation of biometric systems, there is a growing concern that biometric technologies may compromise the privacy and anonymity of individuals. Unlike credit cards and passwords, which can be revoked and reissued when compromised, biometrics are permanently associated with a user and cannot be replaced. To prevent the theft of biometric patterns, it is desirable to modify them through revocable and noninvertible transformations to produce cancelable biometric templates. In this article, we provide an overview of various cancelable biometric schemes for biometric template protection. We discuss the merits and drawbacks of available cancelable biometric systems and identify promising avenues of research in this rapidly evolving field.",
"Biometric recognition is an integral component of modern identity management and access control systems. Due to the strong and permanent link between individuals and their biometric traits, exposure of enrolled users? biometric information to adversaries can seriously compromise biometric system security and user privacy. Numerous techniques have been proposed for biometric template protection over the last 20 years. While these techniques are theoretically sound, they seldom guarantee the desired noninvertibility, revocability, and nonlinkability properties without significantly degrading the recognition performance. The objective of this work is to analyze the factors contributing to this performance divide and highlight promising research directions to bridge this gap. The design of invariant biometric representations remains a fundamental problem, despite recent attempts to address this issue through feature adaptation schemes. The difficulty in estimating the statistical distribution of biometric features not only hinders the development of better template protection algorithms but also diminishes the ability to quantify the noninvertibility and nonlinkability of existing algorithms. Finally, achieving nonlinkability without the use of external secrets (e.g., passwords) continues to be a challenging proposition. Further research on the above issues is required to cross the chasm between theory and practice in biometric ?template protection."
]
} |
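The randomized Montgomery domains mentioned in the abstract above build on Montgomery modular multiplication; a textbook REDC sketch follows (parameter sizes chosen for illustration only, not the paper's randomized construction):

```python
def montgomery_params(n: int, r_bits: int):
    """Precompute R = 2**r_bits and n' with n * n' == -1 (mod R).
    Requires gcd(R, n) = 1, i.e., n must be odd."""
    r = 1 << r_bits
    n_inv = pow(n, -1, r)  # modular inverse of n mod R (Python 3.8+)
    return r, (-n_inv) % r

def montgomery_mul(a_bar: int, b_bar: int, n: int, r_bits: int, n_prime: int) -> int:
    """REDC(a_bar * b_bar): multiply two values in Montgomery form,
    returning (a_bar * b_bar * R^-1) mod n without an explicit division by n."""
    r_mask = (1 << r_bits) - 1
    t = a_bar * b_bar
    m = ((t & r_mask) * n_prime) & r_mask  # m = t * n' mod R
    u = (t + m * n) >> r_bits              # (t + m*n) / R, exact by construction
    return u - n if u >= n else u
```

To enter the Montgomery domain, map x to x_bar = (x * R) mod n; to leave it, multiply by 1 in Montgomery form, which applies the R^-1 factor.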
1711.01587 | 2735323847 | Similarity search is essential to many important applications and often involves searching at scale on high-dimensional data based on their similarity to a query. In biometric applications, recent vulnerability studies have shown that adversarial machine learning can compromise biometric recognition systems by exploiting the biometric similarity information. Existing methods for biometric privacy protection are in general based on pairwise matching of secured biometric templates and have inherent limitations in search efficiency and scalability. In this paper, we propose an inference-based framework for privacy-preserving similarity search in Hamming space. Our approach builds on an obfuscated distance measure that can conceal Hamming distance in a dynamic interval. Such a mechanism enables us to systematically design statistically reliable methods for retrieving most likely candidates without knowing the exact distance values. We further propose to apply Montgomery multiplication for generating search indexes that can withstand adversarial similarity analysis, and show that information leakage in randomized Montgomery domains can be made negligibly small. Our experiments on public biometric datasets demonstrate that the inference-based approach can achieve a search accuracy close to the best performance possible with secure computation methods, but the associated cost is reduced by orders of magnitude compared to cryptographic primitives. | Beyond search in the encrypted domain, the concept of search with reduced reference has been proposed in privacy-preserving content-based information retrieval to protect the original content and accelerate the search simultaneously @cite_7 @cite_9 @cite_8 . The basic idea therein is to enforce @math -anonymity or @math -diversity properties by raising the ambiguity level of a data collection @cite_45 @cite_1 .
Techniques of this paradigm are mostly based on randomized embedding @cite_0 , and in particular locality-sensitive hashing (LSH) @cite_9 @cite_11 @cite_3 @cite_36 . LSH performs approximate NN search by hashing similar items into the same bucket, and distant items into different buckets, each with high probability. However, LSH by itself does not guarantee privacy @cite_0 @cite_3 @cite_36 : it requires all parties involved in a search to use the same random keys when generating hash codes. Moreover, to achieve good search precision, LSH-based algorithms usually require a large number of random hash functions, which may increase privacy risks @cite_39 .
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_36",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_39",
"@cite_0",
"@cite_45",
"@cite_11"
],
"mid": [
"",
"2462806717",
"2533461851",
"2019586918",
"2058181105",
"1975179452",
"",
"2162511804",
"1538715284",
"2541558529"
],
"abstract": [
"",
"This work proposes a privacy-protection framework for an important application called outsourced media search . This scenario involves a data owner, a client, and an untrusted server, where the owner outsources a search service to the server. Due to lack of trust, the privacy of the client and the owner should be protected. The framework relies on multimedia hashing and symmetric encryption. It requires involved parties to participate in a privacy-enhancing protocol. Additional processing steps are carried out by the owner and the client: (i) before outsourcing low-level media features to the server, the owner has to one-way hash them, and partially encrypt each hash-value; (ii) the client completes the similarity search by re-ranking the most similar candidates received from the server. One-way hashing and encryption add ambiguity to data and make it difficult for the server to infer contents from database items and queries, so the privacy of both the owner and the client is enforced. The proposed framework realizes trade-offs among strength of privacy enforcement, quality of search, and complexity, because the information loss can be tuned during hashing and encryption. Extensive experiments demonstrate the effectiveness and the flexibility of the framework.",
"We present a novel method to securely determine whether two signals are similar to each other, and apply it to approximate nearest neighbor clustering. The proposed method relies on a locality sensitive hashing scheme based on a secure binary embedding, computed using quantized random projections. Hashes extracted from the signals preserve information about the distance between the signals, provided this distance is small enough. If the distance between the signals is larger than a threshold, then no information about the distance is revealed. Theoretical and experimental justification is provided for this property. Further, when the randomized embedding parameters are unknown, then the mutual information between the hashes of any two signals decays to zero exponentially fast as a function of the l 2 distance between the signals. Taking advantage of this property, we suggest that these binary hashes can be used to perform privacy-preserving nearest neighbor search with significantly lower complexity compared to protocols which use the actual signals.",
"We propose a privacy protection framework for large-scale content-based information retrieval. It offers two layers of protection. First, robust hash values are used as queries to prevent revealing original content or features. Second, the client can choose to omit certain bits in a hash value to further increase the ambiguity for the server. Due to the reduced information, it is computationally difficult for the server to know the client’s interest. The server has to return the hash values of all possible candidates to the client. The client performs a search within the candidate list to find the best match. Since only hash values are exchanged between the client and the server, the privacy of both parties is protected. We introduce the concept oftunable privacy, where the privacy protection level can be adjusted according to a policy. It is realized through hash-based piecewise inverted indexing. The idea is to divide a feature vector into pieces and index each piece with a subhash value. Each subhash value is associated with an inverted index list. The framework has been extensively tested using a large image database. We have evaluated both retrieval performance and privacy-preserving performance for a particular content identification application. Two different constructions of robust hash algorithms are used. One is based on random projections; the other is based on the discrete wavelet transform. Both algorithms exhibit satisfactory performance in comparison with state-of-the-art retrieval schemes. The results show that the privacy enhancement slightly improves the retrieval performance. We consider the majority voting attack for estimating the query category and identification. Experiment results show that this attack is a threat when there are near-duplicates, but the success rate decreases with the number of omitted bits and the number of distinct items.",
"In some domains, the need for data privacy and data sharing conflict. Data obfuscation addresses this dilemma by extending several existing technologies and defining obfuscation properties that quantify the technologies' usefulness and privacy preservation.",
"The Locality Sensitive Hashing (LSH) technique of scalably finding nearest-neighbors can be adapted to enable discovering similar users while preserving their privacy. The key idea is to compute the user profile on the end-user device, apply LSH on the local profile, and use the LSH cluster identifier as the interest group identifier of a user. By properties of LSH, the interest group comprises other users with similar interests. The collective behavior of the members of the interest group is anonymously collected at some aggregation node to generate recommendations for the group members. The quality of recommendation depends on the efficiency of the LSH clustering algorithm, i.e. its capability of gathering similar users. In contrast, with conventional usage of LSH (for scalability and not privacy), in our framework one can not perform a linear search over the cluster members to identify the nearest neighbors and to prune away false positives. A good clustering quality is therefore of functional importance for our system. We report in this work how changing the nature of LSH inputs, which in our case corresponds to the user profile representations, impacts the performance of LSH-based clustering and the final quality of recommendations. We present extensive performance evaluations of the LSH-based privacypreserving recommender system using two large datasets of MovieLens ratings and Delicious bookmarks, respectively.",
"",
"Comparing two signals is one of the most essential and prevalent tasks in signal processing. A large number of applications fundamentally rely on determining the answers to the following two questions: 1) How should two signals be compared? 2) Given a set of signals and a query signal, which signals are the nearest neighbors (NNs) of the query signal, i.e., which signals in the database are most similar to the query signal? The NN search problem is defined as follows: Given a set S containing points in a metric space M, and a query point x !M, find the point in S that is closest to x. The problem can be extended to K-NN, i.e., determining the K signals nearest to x. In this context, the points in question are signals, such as images, videos, or other waveforms. The qualifier closest refers to a distance metric, such as the Euclidean distance or Manhattan distance between pairs of points in S. Finding the NN of the query point should be at most linear in the database size and is a well-studied problem in conventional NN settings.",
"The Handbook of Database Security: Applications & Trends, an edited volume by renowned researchers within data security, provides an up-to-date overview of data security models, techniques, and architectures in a variety of data management applications and settings. This edited volume represents the most comprehensive work on numerous data security aspects published in the last ten years. The Handbook of Database Security: Applications & Trends places a particular focus on data-centric security aspects that reach beyond traditional and well-studied data security aspects in databases. It also covers security in advanced database systems, data privacy and trust management, and data outsourcing, and outlines directions for future research in these fields. The Handbook of Database Security: Applications & Trends is designed for a professional audience composed of practitioners and researchers in industry and academia as a reference book. This volume is also suitable for advanced-level students in computer science interested in the state-of-the-art in data security.",
"In many situations, such as in biometric applications, there is need to encrypt and “hide” data, while simultaneously permitting restricted computations on them. We present a method to securely determine the 2 distance between two signals if they are close enough. This method relies on a locality sensitive hashing scheme based on a secure modular embedding, computed using quantized random projections, being a generalization of previous work in the area. Secure Modular Hashes (SMH) extracted from the signals preserve information about the distance between the signals, hiding other characteristic from the signals. Theoretical properties state that the described scheme provides a mechanism to threshold how much information to reveal, and is also information theoretically secure above this threshold. Finally, experimental results reveal that distances computed from SMH vectors can effectively replace the actual Euclidean distances with minimal degradation."
]
} |
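The bucketing behavior of LSH described above can be illustrated with sign random projections (hyperplane hashing for angular similarity — one standard LSH family, not necessarily the variant used in the cited systems):

```python
import numpy as np

def lsh_hash(x: np.ndarray, planes: np.ndarray) -> int:
    """Map a vector to a bucket id: one bit per random hyperplane,
    set when the vector lies on the positive side of that plane."""
    bits = (planes @ x) >= 0
    return int(sum(1 << i for i, b in enumerate(bits) if b))

rng = np.random.default_rng(42)
planes = rng.standard_normal((16, 64))  # 16 hash bits for 64-d vectors
```

Nearby vectors (small angle) flip few bits and therefore tend to land in the same bucket. Note that, as the passage stresses, all parties must share the same `planes` (the random key) for their hash codes to be comparable.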
1711.01587 | 2735323847 | Similarity search is essential to many important applications and often involves searching at scale on high-dimensional data based on their similarity to a query. In biometric applications, recent vulnerability studies have shown that adversarial machine learning can compromise biometric recognition systems by exploiting the biometric similarity information. Existing methods for biometric privacy protection are in general based on pairwise matching of secured biometric templates and have inherent limitations in search efficiency and scalability. In this paper, we propose an inference-based framework for privacy-preserving similarity search in Hamming space. Our approach builds on an obfuscated distance measure that can conceal Hamming distance in a dynamic interval. Such a mechanism enables us to systematically design statistically reliable methods for retrieving most likely candidates without knowing the exact distance values. We further propose to apply Montgomery multiplication for generating search indexes that can withstand adversarial similarity analysis, and show that information leakage in randomized Montgomery domains can be made negligibly small. Our experiments on public biometric datasets demonstrate that the inference-based approach can achieve a search accuracy close to the best performance possible with secure computation methods, but the associated cost is reduced by orders of magnitude compared to cryptographic primitives. | Privacy-enhanced variants of LSH have recently been proposed by combining LSH with cryptographic or information-theoretic protocols @cite_0 . The privacy protection framework proposed in @cite_9 generates a partial query instance by omitting certain bits in one or more sub-hash values, increasing the ambiguity of the query information seen by the server. The hash values of retrieved candidates are returned to the client for refinement.
The framework is extended in @cite_8 , where partial encryption is performed on the hash code of each item to prevent an untrustworthy server from precisely linking queries and database records. In particular, the server uses the unencrypted part of each item for approximate indexing and search, while the client uses the encrypted part for re-ranking the candidates received from the server. To limit the number of candidates sent to the client, the server performs a preliminary ranking based on the partial distances computed from the unencrypted part. The trade-off between privacy and utility of a search at the server side is therefore controlled by the number of unencrypted bits. We call this approach "LSH + partial distance" for brevity and will use it as one baseline approach for performance comparison.
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_8"
],
"mid": [
"2162511804",
"2019586918",
"2462806717"
],
"abstract": [
"Comparing two signals is one of the most essential and prevalent tasks in signal processing. A large number of applications fundamentally rely on determining the answers to the following two questions: 1) How should two signals be compared? 2) Given a set of signals and a query signal, which signals are the nearest neighbors (NNs) of the query signal, i.e., which signals in the database are most similar to the query signal? The NN search problem is defined as follows: Given a set S containing points in a metric space M, and a query point x !M, find the point in S that is closest to x. The problem can be extended to K-NN, i.e., determining the K signals nearest to x. In this context, the points in question are signals, such as images, videos, or other waveforms. The qualifier closest refers to a distance metric, such as the Euclidean distance or Manhattan distance between pairs of points in S. Finding the NN of the query point should be at most linear in the database size and is a well-studied problem in conventional NN settings.",
"We propose a privacy protection framework for large-scale content-based information retrieval. It offers two layers of protection. First, robust hash values are used as queries to prevent revealing original content or features. Second, the client can choose to omit certain bits in a hash value to further increase the ambiguity for the server. Due to the reduced information, it is computationally difficult for the server to know the client’s interest. The server has to return the hash values of all possible candidates to the client. The client performs a search within the candidate list to find the best match. Since only hash values are exchanged between the client and the server, the privacy of both parties is protected. We introduce the concept of tunable privacy, where the privacy protection level can be adjusted according to a policy. It is realized through hash-based piecewise inverted indexing. The idea is to divide a feature vector into pieces and index each piece with a subhash value. Each subhash value is associated with an inverted index list. The framework has been extensively tested using a large image database. We have evaluated both retrieval performance and privacy-preserving performance for a particular content identification application. Two different constructions of robust hash algorithms are used. One is based on random projections; the other is based on the discrete wavelet transform. Both algorithms exhibit satisfactory performance in comparison with state-of-the-art retrieval schemes. The results show that the privacy enhancement slightly improves the retrieval performance. We consider the majority voting attack for estimating the query category and identification. Experiment results show that this attack is a threat when there are near-duplicates, but the success rate decreases with the number of omitted bits and the number of distinct items.",
"This work proposes a privacy-protection framework for an important application called outsourced media search . This scenario involves a data owner, a client, and an untrusted server, where the owner outsources a search service to the server. Due to lack of trust, the privacy of the client and the owner should be protected. The framework relies on multimedia hashing and symmetric encryption. It requires involved parties to participate in a privacy-enhancing protocol. Additional processing steps are carried out by the owner and the client: (i) before outsourcing low-level media features to the server, the owner has to one-way hash them, and partially encrypt each hash-value; (ii) the client completes the similarity search by re-ranking the most similar candidates received from the server. One-way hashing and encryption add ambiguity to data and make it difficult for the server to infer contents from database items and queries, so the privacy of both the owner and the client is enforced. The proposed framework realizes trade-offs among strength of privacy enforcement, quality of search, and complexity, because the information loss can be tuned during hashing and encryption. Extensive experiments demonstrate the effectiveness and the flexibility of the framework."
]
} |
1711.01845 | 2950291962 | Efficiently exploiting GPUs is increasingly essential in scientific computing, as many current and upcoming supercomputers are built using them. To facilitate this, there are a number of programming approaches, such as CUDA, OpenACC and OpenMP 4, supporting different programming languages (mainly C/C++ and Fortran). There are also several compiler suites (clang, nvcc, PGI, XL) each supporting different combinations of languages. In this study, we take a detailed look at some of the currently available options, and carry out a comprehensive analysis and comparison using computational loops and applications from the domain of unstructured mesh computations. Beyond runtimes and performance metrics (GB/s), we explore factors that influence performance such as register counts, occupancy, usage of different memory types, instruction counts, and algorithmic differences. Results of this work show how clang's CUDA compiler frequently outperforms NVIDIA's nvcc, performance issues with directive-based approaches on complex kernels, and OpenMP 4 support maturing in clang and XL; currently around 10% slower than CUDA. | Work by Ledur et al. compares a few simple test cases such as Mandelbrot and N-Queens implemented with CUDA and OpenACC (PGI) @cite_14 . Herdman et al. @cite_20 take a larger stencil code written in C, and study CUDA, OpenCL and OpenACC implementations, but offer no detailed insights into the differences. Work by Hoshino et al. @cite_16 offers a detailed look at CUDA and OpenACC variants of a CFD code and some smaller benchmarks written in C, and shows a few language-specific optimizations, but the analysis stops at the measured runtime. Norman et al.
@cite_9 compare CUDA Fortran and OpenACC versions of an atmospheric model, CAM-SE, which offers some details about code generated by the PGI and Cray compilers, and identifies a number of key differences that let CUDA outperform OpenACC, thanks to lower-level optimizations, such as the use of shared memory. Kuan et al. @cite_11 also compare runtimes of CUDA and OpenACC implementations of the same statistical algorithm (phylogenetic inference). Gong et al. @cite_15 compare CUDA Fortran and OpenACC implementations of Nekbone, and scale up to 16k GPUs on Titan, but offer no detailed study of performance differences. | {
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_15",
"@cite_16",
"@cite_20",
"@cite_11"
],
"mid": [
"",
"2068658185",
"2502318050",
"2018408367",
"2056862683",
""
],
"abstract": [
"",
"The porting of a key kernel in the tracer advection routines of the Community Atmosphere Model – Spectral Element (CAM-SE) to use Graphics Processing Units (GPUs) using OpenACC is considered in comparison to an existing CUDA FORTRAN port. The development of the OpenACC kernel for GPUs was substantially simpler than that of the CUDA port. Also, OpenACC performance was about 1.5× slower than the optimized CUDA version. Particular focus is given to compiler maturity regarding OpenACC implementation for modern FORTRAN, and it is found that the Cray implementation is currently more mature than the PGI implementation. Still, for the case that ran successfully on PGI, the PGI OpenACC runtime was slightly faster than Cray. The results show encouraging performance for OpenACC implementation compared to CUDA while also exposing some issues that may be necessary before the implementations are suitable for porting all of CAM-SE. Most notable are that GPU shared memory should be used by future OpenACC implementations and that derived type support should be expanded.",
"We present a hybrid GPU implementation and performance analysis of Nekbone, which represents one of the core kernels of the incompressible Navier-Stokes solver Nek5000. The implementation is based on OpenACC and CUDA Fortran for local parallelization of the compute-intensive matrix-matrix multiplication part, which significantly minimizes the modification of the existing CPU code while extending the simulation capability of the code to GPU architectures. Our discussion includes the GPU results of OpenACC interoperating with CUDA Fortran and the gather-scatter operations with GPUDirect communication. We demonstrate performance of up to 552 Tflops on 16,384 GPUs of the OLCF Cray XK7 Titan.",
"OpenACC is a new accelerator programming interface that provides a set of OpenMP-like loop directives for the programming of accelerators in an implicit and portable way. It allows the programmer to express the offloading of data and computations to accelerators, such that the porting process for legacy CPU-based applications can be significantly simplified. This paper focuses on the performance aspects of OpenACC using two micro benchmarks and one real-world computational fluid dynamics application. Both evaluations show that in general OpenACC performance is approximately 50% lower than CUDA. However, for some applications it can reach up to 98% with careful manual optimizations. The results also indicate several limitations of the OpenACC specification that hamper full use of the GPU hardware resources, resulting in a significant performance gap when compared to a fully tuned CUDA code. The lack of a programming interface for the shared memory in particular results in as much as three times lower performance.",
"Hardware accelerators such as GPGPUs are becoming increasingly common in HPC platforms and their use is widely recognised as being one of the most promising approaches for reaching exascale levels of performance. Large HPC centres, such as AWE, have made huge investments in maintaining their existing scientific software codebases, the vast majority of which were not designed to effectively utilise accelerator devices. Consequently, HPC centres will have to decide how to develop their existing applications to take best advantage of future HPC system architectures. Given limited development and financial resources, it is unlikely that all potential approaches will be evaluated for each application. We are interested in how this decision making can be improved, and this work seeks to directly evaluate three candidate technologies-OpenACC, OpenCL and CUDA-in terms of performance, programmer productivity, and portability using a recently developed Lagrangian-Eulerian explicit hydrodynamics mini-application. We find that OpenACC is an extremely viable programming model for accelerator devices, improving programmer productivity and achieving better performance than OpenCL and CUDA.",
""
]
} |
1711.01845 | 2950291962 | Efficiently exploiting GPUs is increasingly essential in scientific computing, as many current and upcoming supercomputers are built using them. To facilitate this, there are a number of programming approaches, such as CUDA, OpenACC and OpenMP 4, supporting different programming languages (mainly C/C++ and Fortran). There are also several compiler suites (clang, nvcc, PGI, XL) each supporting different combinations of languages. In this study, we take a detailed look at some of the currently available options, and carry out a comprehensive analysis and comparison using computational loops and applications from the domain of unstructured mesh computations. Beyond runtimes and performance metrics (GB/s), we explore factors that influence performance such as register counts, occupancy, usage of different memory types, instruction counts, and algorithmic differences. Results of this work show how clang's CUDA compiler frequently outperforms NVIDIA's nvcc, performance issues with directive-based approaches on complex kernels, and OpenMP 4 support maturing in clang and XL; currently around 10% slower than CUDA. | Support in compilers for OpenMP 4 and GPU offloading is relatively new @cite_24 , and there are only a handful of papers evaluating their performance: Martineau et al. @cite_4 present some runtimes of basic computational loops in C compiled with Cray and clang, and comparisons with CUDA. Karlin et al. @cite_19 port three CORAL benchmark codes to OpenMP 4.5 (C), compile them with clang, and compare them with CUDA implementations - the analysis is focused on runtimes and register pressure. Hart et al. @cite_22 compare OpenMP 4.5 with Cray to OpenACC on Nekbone; however, the analysis there is also restricted to runtimes, with the focus more on programmability. We are not aware of academic papers studying the performance of CUDA Fortran or OpenMP 4 in the IBM XL compilers aside from early results in our own previous work @cite_10 .
There is also very little work on comparing the performance of CUDA code compiled with nvcc and clang. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_24",
"@cite_19",
"@cite_10"
],
"mid": [
"2522451327",
"2224081020",
"",
"2521614036",
"2529731748"
],
"abstract": [
"In this paper we investigate the current compiler technologies supporting OpenMP 4.x features targeting a range of devices, in particular, the Cray compiler 8.5.0 targeting an Intel Xeon Broadwell and NVIDIA K20x, IBM’s OpenMP 4.5 Clang branch (clang-ykt) targeting an NVIDIA K20x, the Intel compiler 16 targeting an Intel Xeon Phi Knights Landing, and GCC 6.1 targeting an AMD APU. We outline the mechanisms that they use to map the OpenMP model onto their target architectures, and conduct performance testing with a number of representative data parallel kernels. Following this we present a discussion about the current state of play in terms of performance portability and propose some straightforward guidelines for writing performance portable code, derived from our observations. At the time of writing, developers will likely have to rely on the pre-processor for certain kernels to achieve functional portability, but we expect that future homogenisation of required directives between compilers and architectures is feasible.",
"In this paper we describe the process of porting the NekBone mini-application to run on a Cray XC30 hybrid supercomputer using OpenMP device constructs, as introduced in version 4.0 of the OpenMP standard and implemented in a pre-release version of the Cray Compilation Environment (CCE) compiler. We document the process of porting and show how the performance evolves during the addition on the 66 constructs needed to accelerate the application. In doing so, we provide a user-centric introduction to the device constructs and an overview of the approach needed to port a parallel application using these. Some contrasts with OpenACC are also drawn to aid those wishing to either implement both programming models or to migrate from one to the other.",
"",
"Many application developers need code that runs efficiently on multiple architectures, but cannot afford to maintain architecturally specific codes. With the addition of target directives to support offload accelerators, OpenMP now has the machinery to support performance portable code development. In this paper, we describe application ports of Kripke, Cardioid, and LULESH to OpenMP 4.5 and discuss our successes and failures. Challenges encountered include how OpenMP interacts with C++ including classes with virtual methods and lambda functions. Also, the lack of deep copy support in OpenMP increased code complexity. Finally, GPUs' inability to handle virtual function calls required code restructuring. Despite these challenges we demonstrate OpenMP obtains performance within 10% of hand-written CUDA for memory bandwidth bound kernels in LULESH. In addition, we show with a minor change to the OpenMP standard that register usage for OpenMP code can be reduced by up to 10%.",
"This paper discusses the performance of IBM’s Power8 CPUs, on a number of skeleton, financial and CFD benchmarks and applications. Implicitly, the performance of the software toolchain is also tested - the bare-bones Little-Endian Ubuntu, the GNU 5.3 and the XL 14.1.3 compilers and OpenMP runtimes. First, we aim to establish some roofline numbers on bandwidth and compute throughput, then move on to benchmark explicit and implicit one-/three-factor Black-Scholes computations, and CFD applications based on the OP2 and OPS frameworks, such as the Airfoil and BookLeaf unstructured-mesh codes, and the CloverLeaf 2D/3D structured mesh simulations. These applications all exhibit different characteristics in terms of computations, communications, memory access patterns, etc. Finally we briefly discuss performance of an industrial CFD code, Rolls-Royce Hydra, and we show initial results from IBM’s CUDA Fortran compiler. Both absolute and relative performance metrics are computed and compared to NVIDIA GPUs and Intel Xeon CPUs."
]
} |
1711.01714 | 2767703455 | Video understanding has attracted much research attention especially since the recent availability of large-scale video benchmarks. In this paper, we address the problem of multi-label video classification. We first observe that there exists a significant knowledge gap between how machines and humans learn. That is, while current machine learning approaches including deep neural networks largely focus on the representations of the given data, humans often look beyond the data at hand and leverage external knowledge to make better decisions. Towards narrowing the gap, we propose to incorporate external knowledge graphs into video classification. In particular, we unify traditional "knowledgeless" machine learning models and knowledge graphs in a novel end-to-end framework. The framework is flexible to work with most existing video classification algorithms including state-of-the-art deep models. Finally, we conduct extensive experiments on the largest public video dataset YouTube-8M. The results are promising across the board, improving mean average precision by up to 2.9%. | Video understanding has been an active research area in computer vision. Significant progress has been made especially since the release of large-scale benchmarks such as Sports-1M @cite_28 , YFCC-100M @cite_32 and YouTube-8M @cite_9 . | {
"cite_N": [
"@cite_28",
"@cite_9",
"@cite_32"
],
"mid": [
"2308045930",
"2524365899",
"1544092585"
],
"abstract": [
"",
"Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of 8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.",
"We present the Yahoo Flickr Creative Commons 100 Million Dataset (YFCC100M), the largest public multimedia collection that has ever been released. The dataset contains a total of 100 million media objects, of which approximately 99.2 million are photos and 0.8 million are videos, all of which carry a Creative Commons license. Each media object in the dataset is represented by several pieces of metadata, e.g. Flickr identifier, owner name, camera, title, tags, geo, media source. The collection provides a comprehensive snapshot of how photos and videos were taken, described, and shared over the years, from the inception of Flickr in 2004 until early 2014. In this article we explain the rationale behind its creation, as well as the implications the dataset has for science, research, engineering, and development. We further present several new challenges in multimedia research that can now be expanded upon with our dataset."
]
} |
1711.01714 | 2767703455 | Video understanding has attracted much research attention especially since the recent availability of large-scale video benchmarks. In this paper, we address the problem of multi-label video classification. We first observe that there exists a significant knowledge gap between how machines and humans learn. That is, while current machine learning approaches including deep neural networks largely focus on the representations of the given data, humans often look beyond the data at hand and leverage external knowledge to make better decisions. Towards narrowing the gap, we propose to incorporate external knowledge graphs into video classification. In particular, we unify traditional "knowledgeless" machine learning models and knowledge graphs in a novel end-to-end framework. The framework is flexible to work with most existing video classification algorithms including state-of-the-art deep models. Finally, we conduct extensive experiments on the largest public video dataset YouTube-8M. The results are promising across the board, improving mean average precision by up to 2.9%. | Finally, knowledge graphs are a popular choice for representing external knowledge, capturing both concepts and their pairwise relationships. The use of knowledge graphs has already demonstrated various degrees of success in machine learning applications including Web search and social media @cite_3 . Quite a number of large-scale knowledge graphs are available commercially or in open source; these are generally constructed based on human curation @cite_11 , crowdsourcing @cite_8 @cite_12 , and distillation from semi-structured @cite_17 @cite_25 or unstructured data @cite_27 @cite_7 . The details of knowledge graph construction are beyond the scope of this work. | {
"cite_N": [
"@cite_11",
"@cite_7",
"@cite_8",
"@cite_3",
"@cite_27",
"@cite_25",
"@cite_12",
"@cite_17"
],
"mid": [
"2107658650",
"2014016001",
"2016089260",
"2016753842",
"1512387364",
"102708294",
"2277195237",
"2022166150"
],
"abstract": [
"Since 1984, a person-century of effort has gone into building CYC, a universal schema of roughly 10^5 general concepts spanning human reality. Most of the time has been spent codifying knowledge about these concepts; approximately 10^6 commonsense axioms have been handcrafted for and entered into CYC's knowledge base, and millions more have been inferred and cached by CYC. This article examines the fundamental assumptions of doing such a large-scale project, reviews the technical lessons learned by the developers, and surveys the range of applications that are or soon will be enabled by the technology.",
"While tuple extraction for a given relation has been an active research area, its dual problem of pattern search -- to find and rank patterns in a principled way -- has not been studied explicitly. In this paper, we propose and address the problem of pattern search, in addition to tuple extraction. As our objectives, we stress reusability for pattern search and scalability of tuple extraction, such that our approach can be applied to very large corpora like the Web. As the key foundation, we propose a conceptual model PRDualRank to capture the notion of precision and recall for both tuples and patterns in a principled way, leading to the "rediscovery" of the Pattern-Relation Duality -- the formal quantification of the reinforcement between patterns and tuples with the metrics of precision and recall. We also develop a concrete framework for PRDualRank, guided by the principles of a perfect sampling process over a complete corpus. Finally, we evaluated our framework over the real Web. Experiments show that on all three target relations our principled approach greatly outperforms the previous state-of-the-art system in both effectiveness and efficiency. In particular, we improved optimal F-score by up to 64%.",
"ConceptNet is a freely available commonsense knowledge base and natural-language-processing tool-kit which supports many practical textual-reasoning tasks over real-world documents including topic-gisting, analogy-making, and other context oriented inferences. The knowledge base is a semantic network presently consisting of over 1.6 million assertions of commonsense knowledge encompassing the spatial, physical, social, temporal, and psychological aspects of everyday life. ConceptNet is generated automatically from the 700 000 sentences of the Open Mind Common Sense Project — a World Wide Web based collaboration with over 14 000 authors.",
"Recent years have witnessed a proliferation of large-scale knowledge bases, including Wikipedia, Freebase, YAGO, Microsoft's Satori, and Google's Knowledge Graph. To increase the scale even further, we need to explore automatic methods for constructing knowledge bases. Previous approaches have primarily focused on text-based extraction, which can be very noisy. Here we introduce Knowledge Vault, a Web-scale probabilistic knowledge base that combines extractions from Web content (obtained via analysis of text, tabular data, page structure, and human annotations) with prior knowledge derived from existing knowledge repositories. We employ supervised machine learning methods for fusing these distinct information sources. The Knowledge Vault is substantially bigger than any previously published structured knowledge repository, and features a probabilistic inference system that computes calibrated probabilities of fact correctness. We report the results of multiple studies that explore the relative utility of the different information sources and extraction methods.",
"We consider here the problem of building a never-ending language learner; that is, an intelligent computer agent that runs forever and that each day must (1) extract, or read, information from the web to populate a growing structured knowledge base, and (2) learn to perform this task better than on the previous day. In particular, we propose an approach and a set of design principles for such an agent, describe a partial implementation of such a system that has already learned to extract a knowledge base containing over 242,000 beliefs with an estimated precision of 74% after running for 67 days, and discuss lessons learned from this preliminary attempt to build a never-ending learning agent.",
"DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human-andmachine-consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data.",
"Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked \"What vehicle is the person riding?\", computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) to answer correctly that \"the person is riding a horse-drawn carriage.\" In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 108K images where each image has an average of @math 35 objects, @math 26 attributes, and @math 21 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and questions answer pairs to WordNet synsets. Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answer pairs.",
"We present YAGO, a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains more than 1 million entities and 5 million facts. This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as HASONEPRIZE). The facts have been automatically extracted from Wikipedia and unified with WordNet, using a carefully designed combination of rule-based and heuristic methods described in this paper. The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about individuals like persons, organizations, products, etc. with their semantic relationships - and in quantity by increasing the number of facts by more than an order of magnitude. Our empirical evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. Finally, we show how YAGO can be further extended by state-of-the-art information extraction techniques."
]
} |
1711.01861 | 2753890080 | Mechanistic models of single-neuron dynamics have been extensively studied in computational neuroscience. However, identifying which models can quantitatively reproduce empirically measured data has been challenging. We propose to overcome this limitation by using likelihood-free inference approaches (also known as Approximate Bayesian Computation, ABC) to perform full Bayesian inference on single-neuron models. Our approach builds on recent advances in ABC by learning a neural network which maps features of the observed data to the posterior distribution over parameters. We learn a Bayesian mixture-density network approximating the posterior over multiple rounds of adaptively chosen simulations. Furthermore, we propose an efficient approach for handling missing features and parameter settings for which the simulator fails, as well as a strategy for automatically learning relevant features using recurrent neural networks. On synthetic data, our approach efficiently estimates posterior distributions and recovers ground-truth parameters. On in-vitro recordings of membrane voltages, we recover multivariate posteriors over biophysical parameters, which yield model-predicted voltage traces that accurately match empirical data. Our approach will enable neuroscientists to perform Bayesian inference on complex neuron models without having to design model-specific algorithms, closing the gap between mechanistic and statistical approaches to single-neuron modelling. | Given experimental data @math (e.g. intracellular voltage measurements of a single neuron, or extracellular recordings from a neural population), a model @math parameterised by @math (e.g. biophysical parameters, or connectivity strengths in a network simulation) and a prior distribution @math , our goal is to perform statistical inference, i.e. to find the posterior distribution @math . 
We assume that the model @math is only defined through a simulator @cite_27 @cite_46 : we can generate samples @math from it, but not evaluate @math (or its gradients) explicitly. In neural modelling, many models are defined through specification of a dynamical system with external or intrinsic noise sources or even through a black-box simulator (e.g. using the NEURON software @cite_5 ). | {
"cite_N": [
"@cite_46",
"@cite_5",
"@cite_27"
],
"mid": [
"",
"2078588555",
"114923250"
],
"abstract": [
"",
"Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input–output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.",
"A prescribed statistical model is a parametric specification of the distribution of a random vector, whilst an implicit statistical model is one defined at a more fundamental level in terms of a generating stochastic mechanism. This paper develops methods of inference which can be used for implicit statistical models whose distribution theory is intractable. The kernel method of probability density estimation is advocated for estimating a log-likelihood from simulations of such a model. The development and testing of an algorithm for maximizing this estimated log-likelihood function is described. An illustrative example involving a stochastic model for quantal response assays is given. Possible applications of the maximization algorithm to ad hoc methods of parameter estimation are noted briefly, and illustrated by an example involving a model for the spatial pattern of displaced amacrine cells in the retina of a rabbit."
]
} |
1711.01861 | 2753890080 | Mechanistic models of single-neuron dynamics have been extensively studied in computational neuroscience. However, identifying which models can quantitatively reproduce empirically measured data has been challenging. We propose to overcome this limitation by using likelihood-free inference approaches (also known as Approximate Bayesian Computation, ABC) to perform full Bayesian inference on single-neuron models. Our approach builds on recent advances in ABC by learning a neural network which maps features of the observed data to the posterior distribution over parameters. We learn a Bayesian mixture-density network approximating the posterior over multiple rounds of adaptively chosen simulations. Furthermore, we propose an efficient approach for handling missing features and parameter settings for which the simulator fails, as well as a strategy for automatically learning relevant features using recurrent neural networks. On synthetic data, our approach efficiently estimates posterior distributions and recovers ground-truth parameters. On in-vitro recordings of membrane voltages, we recover multivariate posteriors over biophysical parameters, which yield model-predicted voltage traces that accurately match empirical data. Our approach will enable neuroscientists to perform Bayesian inference on complex neuron models without having to design model-specific algorithms, closing the gap between mechanistic and statistical approaches to single-neuron modelling. | In addition, and in line with parameter-fitting approaches in neuroscience and most ABC techniques @cite_27 @cite_46 @cite_2 , we are often interested in capturing summary statistics of the experimental data (e.g. firing rate, spike-latency, resting potential of a neuron). Therefore, we can think of @math as resulting from applying a feature function @math to the raw simulator output @math , @math , with @math . | {
"cite_N": [
"@cite_46",
"@cite_27",
"@cite_2"
],
"mid": [
"",
"114923250",
"2951949147"
],
"abstract": [
"",
"A prescribed statistical model is a parametric specification of the distribution of a random vector, whilst an implicit statistical model is one defined at a more fundamental level in terms of a generating stochastic mechanism. This paper develops methods of inference which can be used for implicit statistical models whose distribution theory is intractable. The kernel method of probability density estimation is advocated for estimating a log-likelihood from simulations of such a model. The development and testing of an algorithm for maximizing this estimated log-likelihood function is described. An illustrative example involving a stochastic model for quantal response assays is given. Possible applications of the maximization algorithm to ad hoc methods of parameter estimation are noted briefly, and illustrated by an example involving a model for the spatial pattern of displaced amacrine cells in the retina of a rabbit.",
"Scientists often express their understanding of the world through a computationally demanding simulation program. Analyzing the posterior distribution of the parameters given observations (the inverse problem) can be extremely challenging. The Approximate Bayesian Computation (ABC) framework is the standard statistical tool to handle these likelihood free problems, but they require a very large number of simulations. In this work we develop two new ABC sampling algorithms that significantly reduce the number of simulations necessary for posterior inference. Both algorithms use confidence estimates for the accept probability in the Metropolis Hastings step to adaptively choose the number of necessary simulations. Our GPS-ABC algorithm stores the information obtained from every simulation in a Gaussian process which acts as a surrogate function for the simulated statistics. Experiments on a challenging realistic biological problem illustrate the potential of these algorithms."
]
} |
1711.01861 | 2753890080 | Mechanistic models of single-neuron dynamics have been extensively studied in computational neuroscience. However, identifying which models can quantitatively reproduce empirically measured data has been challenging. We propose to overcome this limitation by using likelihood-free inference approaches (also known as Approximate Bayesian Computation, ABC) to perform full Bayesian inference on single-neuron models. Our approach builds on recent advances in ABC by learning a neural network which maps features of the observed data to the posterior distribution over parameters. We learn a Bayesian mixture-density network approximating the posterior over multiple rounds of adaptively chosen simulations. Furthermore, we propose an efficient approach for handling missing features and parameter settings for which the simulator fails, as well as a strategy for automatically learning relevant features using recurrent neural networks. On synthetic data, our approach efficiently estimates posterior distributions and recovers ground-truth parameters. On in-vitro recordings of membrane voltages, we recover multivariate posteriors over biophysical parameters, which yield model-predicted voltage traces that accurately match empirical data. Our approach will enable neuroscientists to perform Bayesian inference on complex neuron models without having to design model-specific algorithms, closing the gap between mechanistic and statistical approaches to single-neuron modelling. | Classical ABC algorithms simulate from multiple parameters, and reject parameter sets which yield data that are not within a specified distance from the empirically observed features. In their basic form, proposals are drawn from the prior ( rejection-ABC' @cite_51 ). More efficient variants make use of a Markov-Chain Monte-Carlo @cite_12 @cite_52 or Sequential Monte-Carlo (SMC) samplers @cite_24 @cite_30 . 
Sampling-based ABC approaches require the design of a distance metric on summary features, as well as a rejection criterion @math , and are exact only in the limit of small @math (i.e. many rejections) @cite_21 , implying strong trade-offs between accuracy and scalability. In SMC-ABC, importance sampling is used to sequentially sample from more accurate posteriors while @math is gradually decreased. | {
"cite_N": [
"@cite_30",
"@cite_21",
"@cite_52",
"@cite_24",
"@cite_51",
"@cite_12"
],
"mid": [
"2051823273",
"2963387352",
"",
"2067392831",
"2034795216",
"2045973738"
],
"abstract": [
"Methods of approximate Bayesian computation (ABC) are increasingly used for analysis of complex models. A major challenge for ABC is overcoming the often inherent problem of high rejection rates in the accept/reject methods based on prior predictive sampling. A number of recent developments aim to address this with extensions based on sequential Monte Carlo (SMC) strategies. We build on this here, introducing an ABC SMC method that uses data-based adaptive weights. This easily implemented and computationally trivial extension of ABC SMC can very substantially improve acceptance rates, as is demonstrated in a series of examples with simulated and real data sets, including a currently topical example from dynamic modelling in systems biology applications.",
"Approximate Bayesian computation (ABC) methods are used to approximate posterior distributions using simulation rather than likelihood calculations. We introduce Gaussian process (GP) accelerated ABC, which we show can significantly reduce the number of simulations required. As computational resource is usually the main determinant of accuracy in ABC, GP-accelerated methods can thus enable more accurate inference in some models. GP models of the unknown log-likelihood function are used to exploit continuity and smoothness, reducing the required computation. We use a sequence of models that increase in accuracy, using intermediate models to rule out regions of the parameter space as implausible. The methods will not be suitable for all problems, but when they can be used, can result in significant computational savings. For the Ricker model, we are able to achieve accurate approximations to the posterior distribution using a factor of 100 fewer simulator evaluations than comparable Monte Carlo approaches, and for a population genetics model we are able to approximate the exact posterior for the first time.",
"",
"Sequential techniques can enhance the efficiency of the approximate Bayesian computation algorithm, as in 's (2007) partial rejection control version. While this method is based upon the theoretical works of Del (2006), the application to approximate Bayesian computation results in a bias in the approximation to the posterior. An alternative version based on genuine importance sampling arguments bypasses this difficulty, in connection with the population Monte Carlo method of (2004), and it includes an automatic scaling of the forward kernel. When applied to a population genetics example, it compares favourably with two other versions of the approximate algorithm. Copyright 2009, Oxford University Press.",
"We use variation at a set of eight human Y chromosome microsatellite loci to investigate the demographic history of the Y chromosome. Instead of assuming a population of constant size, as in most of the previous work on the Y chromosome, we consider a model which permits a period of recent population growth. We show that for most of the populations in our sample this model fits the data far better than a model with no growth. We estimate the demographic parameters of this model for each population and also the time to the most recent common ancestor. Since there is some uncertainty about the details of the microsatellite mutation process, we consider several plausible mutation schemes and estimate the variance in mutation size simultaneously with the demographic parameters of interest. Our finding of a recent common ancestor (probably in the last 120,000 years), coupled with a strong signal of demographic expansion in all populations, suggests either a recent human expansion from a small ancestral population, or natural selection acting on the Y chromosome.",
"Many stochastic simulation approaches for generating observations from a posterior distribution depend on knowing a likelihood function. However, for many complex probability models, such likelihoods are either impossible or computationally prohibitive to obtain. Here we present a Markov chain Monte Carlo method for generating observations from a posterior distribution without the use of likelihoods. It can also be used in frequentist applications, in particular for maximum-likelihood estimation. The approach is illustrated by an example of ancestral inference in population genetics. A number of open problems are highlighted in the discussion."
]
} |
1711.01467 | 2752386593 | We introduce a simple yet surprisingly powerful model to incorporate attention in action recognition and human object interaction tasks. Our proposed attention module can be trained with or without extra supervision, and gives a sizable boost in accuracy while keeping the network size and computational cost nearly the same. It leads to significant improvements over state of the art base architecture on three standard action recognition benchmarks across still images and videos, and establishes new state of the art on MPII dataset with 12.5% relative improvement. We also perform an extensive analysis of our attention module both empirically and analytically. In terms of the latter, we introduce a novel derivation of bottom-up and top-down attention as low-rank approximations of bilinear pooling methods (typically used for fine-grained classification). From this perspective, our attention formulation suggests a novel characterization of action recognition as a fine-grained recognition problem. | Human action recognition is a well studied problem with various standard benchmarks spanning across still images @cite_43 @cite_22 @cite_15 @cite_46 @cite_26 and videos @cite_32 @cite_56 @cite_7 @cite_18 . The newer image based datasets such as HICO @cite_43 and MPII @cite_15 are large and highly diverse, containing 600 and 393 classes respectively. In contrast, collecting such diverse video based action datasets is hard, and hence existing popular benchmarks like UCF101 @cite_18 or HMDB51 @cite_56 contain only 101 and 51 categories each. This in turn has led to much higher baseline performance on videos, e.g. @math 32 | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_22",
"@cite_7",
"@cite_15",
"@cite_32",
"@cite_56",
"@cite_43",
"@cite_46"
],
"mid": [
"24089286",
"2038765747",
"2031489346",
"2337252826",
"1511568086",
"2619947201",
"2126579184",
"2214124602",
"1953661235"
],
"abstract": [
"We introduce UCF101 which is currently the largest dataset of human actions. It consists of 101 action classes, over 13k clips and 27 hours of video data. The database consists of realistic user uploaded videos containing camera motion and cluttered background. Additionally, we provide baseline action recognition results on this new dataset using standard bag of words approach with overall performance of 44.5%. To the best of our knowledge, UCF101 is currently the most challenging dataset of actions due to its large number of classes, large number of clips and also unconstrained nature of such clips.",
"In this work, we propose to use attributes and parts for recognizing human actions in still images. We define action attributes as the verbs that describe the properties of human actions, while the parts of actions are objects and poselets that are closely related to the actions. We jointly model the attributes and parts by learning a set of sparse bases that are shown to carry much semantic meaning. Then, the attributes and parts of an action image can be reconstructed from sparse coefficients with respect to the learned bases. This dual sparsity provides theoretical guarantee of our bases learning and feature reconstruction approach. On the PASCAL action dataset and a new “Stanford 40 Actions” dataset, we show that our method extracts meaningful high-order interactions between attributes and parts in human actions while achieving state-of-the-art classification performance.",
"The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.",
"Computer vision has a great potential to help our daily lives by searching for lost keys, watering flowers or reminding us to take a pill. To succeed with such tasks, computer vision methods need to be trained from real and diverse examples of our daily dynamic scenes. While most of such scenes are not particularly exciting, they typically do not appear on YouTube, in movies or TV broadcasts. So how do we collect sufficiently many diverse but boring samples representing our lives? We propose a novel Hollywood in Homes approach to collect such data. Instead of shooting videos in the lab, we ensure diversity by distributing and crowdsourcing the whole process of video creation from script writing to video recording and annotation. Following this procedure we collect a new dataset, Charades, with hundreds of people recording videos in their own homes, acting out casual everyday activities. The dataset is composed of 9,848 annotated videos with an average length of 30 s, showing activities of 267 people from three continents. Each video is annotated by multiple free-text descriptions, action labels, action intervals and classes of interacted objects. In total, Charades provides 27,847 video descriptions, 66,500 temporally localized intervals for 157 action classes and 41,104 labels for 46 object classes. Using this rich data, we evaluate and provide baseline results for several tasks including action recognition and automatic description generation. We believe that the realism, diversity, and casual nature of this dataset will present unique challenges and new opportunities for computer vision community.",
"Holistic methods based on dense trajectories [29, 30] are currently the de facto standard for recognition of human activities in video. Whether holistic representations will sustain or will be superseded by higher level video encoding in terms of body pose and motion is the subject of an ongoing debate [12]. In this paper we aim to clarify the underlying factors responsible for good performance of holistic and pose-based representations. To that end we build on our recent dataset [2] leveraging the existing taxonomy of human activities. This dataset includes 24,920 video snippets covering 410 human activities in total. Our analysis reveals that holistic and pose-based methods are highly complementary, and their performance varies significantly depending on the activity. We find that holistic methods are mostly affected by the number and speed of trajectories, whereas pose-based methods are mostly influenced by viewpoint of the person. We observe striking performance differences across activities: for certain activities results with pose-based features are more than twice as accurate compared to holistic features, and vice versa. The best performing approach in our comparison is based on the combination of holistic and pose-based approaches, which again underlines their complementarity.",
"We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some baseline performance figures for neural network architectures trained and tested for human action classification on this dataset. We also carry out a preliminary analysis of whether imbalance in the dataset leads to bias in the classifiers.",
"With nearly one billion online videos viewed everyday, an emerging new frontier in computer vision research is recognition and search in video. While much effort has been devoted to the collection and annotation of large scalable static image datasets containing thousands of image categories, human action datasets lag far behind. Current action recognition databases contain on the order of ten different action categories collected under fairly controlled conditions. State-of-the-art performance on these datasets is now near ceiling and thus there is a need for the design and creation of new benchmarks. To address this issue we collected the largest action video database to-date with 51 action categories, which in total contain around 7,000 manually annotated clips extracted from a variety of sources ranging from digitized movies to YouTube. We use this database to evaluate the performance of two representative computer vision systems for action recognition and explore the robustness of these methods under various conditions such as camera motion, viewpoint, video quality and occlusion.",
"We introduce a new benchmark \"Humans Interacting with Common Objects\" (HICO) for recognizing human-object interactions (HOI). We demonstrate the key features of HICO: a diverse set of interactions with common object categories, a list of well-defined, sense-based HOI categories, and an exhaustive labeling of co-occurring interactions with an object category in each image. We perform an in-depth analysis of representative current approaches and show that DNNs enjoy a significant edge. In addition, we show that semantic knowledge can significantly improve HOI recognition, especially for uncommon categories.",
"Which common human actions and interactions are recognizable in monocular still images? Which involve objects and or other people? How many is a person performing at a time? We address these questions by exploring the actions and interactions that are detectable in the images of the MS COCO dataset. We make two main contributions. First, a list of 140 common ‘visual actions’, obtained by analyzing the largest online verb lexicon currently available for English (VerbNet) and human sentences used to describe images in MS COCO. Second, a complete set of annotations for those ‘visual actions’, composed of subject-object and associated verb, which we call COCO-a (a for ‘actions’). COCO-a is larger than existing action datasets in terms of number instances of actions, and is unique because it is data-driven, rather than experimenter-biased. Other unique features are that it is exhaustive, and that all subjects and objects are localized. A statistical analysis of the accuracy of our annotations and of each action, interaction and subject-object combination is provided."
]
} |
1711.01467 | 2752386593 | We introduce a simple yet surprisingly powerful model to incorporate attention in action recognition and human object interaction tasks. Our proposed attention module can be trained with or without extra supervision, and gives a sizable boost in accuracy while keeping the network size and computational cost nearly the same. It leads to significant improvements over state of the art base architecture on three standard action recognition benchmarks across still images and videos, and establishes new state of the art on MPII dataset with 12.5% relative improvement. We also perform an extensive analysis of our attention module both empirically and analytically. In terms of the latter, we introduce a novel derivation of bottom-up and top-down attention as low-rank approximations of bilinear pooling methods (typically used for fine-grained classification). From this perspective, our attention formulation suggests a novel characterization of action recognition as a fine-grained recognition problem. | Hard attention: Previous works in image based action recognition have shown impressive performance by incorporating evidence from the human, context and pose keypoint bounding boxes @cite_27 @cite_29 @cite_17 . Gkioxari et al. @cite_29 modified the R-CNN pipeline to propose R*CNN, where they choose an auxiliary box to encode context apart from the human bounding box. Mallya and Lazebnik @cite_17 improve upon it by using the full image as the context and using multiple instance learning (MIL) to reason over all humans present in the image to predict an action label for the image. Our approach gets rid of the bounding box detection step and improves over both these methods by automatically learning to attend to the most informative parts of the image for the task. | {
"cite_N": [
"@cite_27",
"@cite_29",
"@cite_17"
],
"mid": [
"1744759976",
"2950209802",
"2339712187"
],
"abstract": [
"This work targets human action recognition in video. While recent methods typically represent actions by statistics of local video features, here we argue for the importance of a representation derived from human pose. To this end we propose a new Pose-based Convolutional Neural Network descriptor (P-CNN) for action recognition. The descriptor aggregates motion and appearance information along tracks of human body parts. We investigate different schemes of temporal aggregation and experiment with P-CNN features obtained both for automatically estimated and manually annotated human poses. We evaluate our method on the recent and challenging JHMDB and MPII Cooking datasets. For both datasets our method shows consistent improvement over the state of the art.",
"There are multiple cues in an image which reveal what action a person is performing. For example, a jogger has a pose that is characteristic for jogging, but the scene (e.g. road, trail) and the presence of other joggers can be an additional source of information. In this work, we exploit the simple observation that actions are accompanied by contextual cues to build a strong action recognition system. We adapt RCNN to use more than one region for classification while still maintaining the ability to localize the action. We call our system R*CNN. The action-specific models and the feature maps are trained jointly, allowing for action specific representations to emerge. R*CNN achieves 90.2% mean AP on the PASCAL VOC Action dataset, outperforming all other approaches in the field by a significant margin. Last, we show that R*CNN is not limited to action recognition. In particular, R*CNN can also be used to tackle fine-grained tasks such as attribute classification. We validate this claim by reporting state-of-the-art performance on the Berkeley Attributes of People dataset.",
"This paper proposes deep convolutional network models that utilize local and global context to make human activity label predictions in still images, achieving state-of-the-art performance on two recent datasets with hundreds of labels each. We use multiple instance learning to handle the lack of supervision on the level of individual person instances, and weighted loss to handle unbalanced training data. Further, we show how specialized features trained on these datasets can be used to improve accuracy on the Visual Question Answering (VQA) task, in the form of multiple choice fill-in-the-blank questions (Visual Madlibs). Specifically, we tackle two types of questions on person activity and person-object relationship and show improvements over generic features trained on the ImageNet classification task"
]
} |
1711.01467 | 2752386593 | We introduce a simple yet surprisingly powerful model to incorporate attention in action recognition and human object interaction tasks. Our proposed attention module can be trained with or without extra supervision, and gives a sizable boost in accuracy while keeping the network size and computational cost nearly the same. It leads to significant improvements over state of the art base architecture on three standard action recognition benchmarks across still images and videos, and establishes new state of the art on MPII dataset with 12.5% relative improvement. We also perform an extensive analysis of our attention module both empirically and analytically. In terms of the latter, we introduce a novel derivation of bottom-up and top-down attention as low-rank approximations of bilinear pooling methods (typically used for fine-grained classification). From this perspective, our attention formulation suggests a novel characterization of action recognition as a fine-grained recognition problem. | Soft attention: There has been relatively little work that explores unconstrained 'soft' attention for action recognition, with the exception of @cite_4 @cite_10 for spatio-temporal and @cite_28 for temporal attention. Importantly, all these consider a video setting, where an LSTM network predicts a spatial attention map for the current frame. Our method, however, uses a single frame to both predict and apply spatial attention, making it amenable to both single image and video based use cases. @cite_10 also uses pose keypoints labeled in 3D videos to drive attention to parts of the body. In contrast, we learn an unconstrained attention model that frequently learns to look around the human body for objects that make it easier to classify the action. | {
"cite_N": [
"@cite_28",
"@cite_10",
"@cite_4"
],
"mid": [
"2551975789",
"2950568498",
""
],
"abstract": [
"By extracting spatial and temporal characteristics in one network, the two-stream ConvNets can achieve the state-of-the-art performance in action recognition. However, such a framework typically suffers from the separately processing of spatial and temporal information between the two standalone streams and is hard to capture long-term temporal dependence of an action. More importantly, it is incapable of finding the salient portions of an action, say, the frames that are the most discriminative to identify the action. To address these problems, a Joint Network based Attention (JNA) is proposed in this study. We find that the fully-connected fusion, branch selection and spatial attention mechanism are totally infeasible for action recognition. Thus in our joint network, the spatial and temporal branches share some information during the training stage. We also introduce an attention mechanism on the temporal domain to capture the long-term dependence meanwhile finding the salient portions. Extensive experiments are conducted on two benchmark datasets, UCF101 and HMDB51. Experimental results show that our method can improve the action recognition performance significantly and achieves the state-of-the-art results on both datasets.",
"Human action recognition is an important task in computer vision. Extracting discriminative spatial and temporal features to model the spatial and temporal evolutions of different actions plays a key role in accomplishing this task. In this work, we propose an end-to-end spatial and temporal attention model for human action recognition from skeleton data. We build our model on top of the Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM), which learns to selectively focus on discriminative joints of skeleton within each frame of the inputs and pays different levels of attention to the outputs of different frames. Furthermore, to ensure effective training of the network, we propose a regularized cross-entropy loss to drive the model learning process and develop a joint training strategy accordingly. Experimental results demonstrate the effectiveness of the proposed model,both on the small human action recognition data set of SBU and the currently largest NTU dataset.",
""
]
} |
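The temporal attention mechanism described in the action-recognition abstracts above reduces to scoring each frame, normalizing the scores with a softmax over time, and pooling the frame features by those weights. A minimal numpy sketch of that idea (the scoring vector `w`, the shapes, and the random inputs are illustrative assumptions, not taken from either paper):

```python
import numpy as np

def temporal_attention_pool(frame_feats, w):
    """Pool per-frame features into one clip feature via softmax attention.

    frame_feats: (T, D) array, one D-dim feature per frame.
    w:           (D,)  scoring vector (a stand-in for a learned layer).
    Returns the (D,) pooled feature and the (T,) attention weights.
    """
    scores = frame_feats @ w                       # one relevance score per frame
    scores = scores - scores.max()                 # shift for numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # softmax over the time axis
    return alpha @ frame_feats, alpha

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4))                    # 8 frames, 4-dim features
pooled, alpha = temporal_attention_pool(feats, rng.normal(size=4))
```

Since the weights sum to one, the pooled feature is a convex combination of the frame features, and the "salient portions" are simply the frames with the largest entries of `alpha`.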
1711.01427 | 2767316455 | In recent years, neural networks have proven to be effective in Chinese word segmentation. However, this promising performance relies on large-scale training data. Neural networks with conventional architectures cannot achieve the desired results in low-resource datasets due to the lack of labelled training data. In this paper, we propose a deep stacking framework to improve the performance on word segmentation tasks with insufficient data by integrating datasets from diverse domains. Our framework consists of two parts, domain-based models and deep stacking networks. The domain-based models are used to learn knowledge from different datasets. The deep stacking networks are designed to integrate domain-based models. To reduce model conflicts, we innovatively add communication paths among models and design various structures of deep stacking networks, including Gaussian-based Stacking Networks, Concatenate-based Stacking Networks, Sequence-based Stacking Networks and Tree-based Stacking Networks. We conduct experiments on six low-resource datasets from various domains. Our proposed framework shows significant performance improvements on all datasets compared with several strong baselines. | Our work focuses on domain adaptation and semi-supervised learning for neural word segmentation in Chinese social media. A number of recent works attempted to extract features automatically by using neural networks @cite_19 @cite_26 @cite_2 @cite_10 @cite_16 @cite_14 . @cite_26 used CNN to capture local information within a fixed size window and proposed a tensor framework to capture the information of previous tags. Ma and Hinrichs @cite_2 proposed an embedding matching approach to CWS, which took advantage of distributed representations. The training and prediction algorithms had linear-time complexity. @cite_10 proposed gated recursive neural networks to model feature combinations of context characters.
This gating mechanism was used in Cai and Zhao @cite_13 work. @cite_16 proposed a neural model for word-based Chinese word segmentation, rather than traditional character-based CWS, by replacing the manually designed discrete features with neural features in a word-based segmentation framework. | {
"cite_N": [
"@cite_13",
"@cite_26",
"@cite_14",
"@cite_19",
"@cite_2",
"@cite_16",
"@cite_10"
],
"mid": [
"2436788615",
"2252225757",
"2507296208",
"2251811146",
"2250799792",
"2516334389",
"2251362855"
],
"abstract": [
"Most previous approaches to Chinese word segmentation formalize this problem as a character-based sequence labeling task where only contextual information within fixed sized local windows and simple interactions between adjacent tags can be captured. In this paper, we propose a novel neural framework which thoroughly eliminates context windows and can utilize complete segmentation history. Our model employs a gated combination neural network over characters to produce distributed representations of word candidates, which are then given to a long short-term memory (LSTM) language scoring model. Experiments on the benchmark datasets show that without the help of feature engineering as most existing approaches, our models achieve competitive or better performances with previous state-of-the-art methods.",
"Recently, neural network models for natural language processing tasks have been increasingly focused on for their ability to alleviate the burden of manual feature engineering. In this paper, we propose a novel neural network model for Chinese word segmentation called Max-Margin Tensor Neural Network (MMTNN). By exploiting tag embeddings and tensor-based transformation, MMTNN has the ability to model complicated interactions between tags and context characters. Furthermore, a new tensor factorization approach is proposed to speed up the model and avoid overfitting. Experiments on the benchmark dataset show that our model achieves better performances than previous neural network models and that our model can achieve a competitive performance with minimal feature engineering. Despite Chinese word segmentation being a specific case, MMTNN can be easily generalized and applied to other sequence labeling tasks.",
"Recently, many neural network models have been applied to Chinese word segmentation. However, such models focus more on collecting local information while long distance dependencies are not well learned. To integrate local features with long distance dependencies, we propose a dependency-based gated recursive neural network. Local features are first collected by bi-directional long short term memory network, then combined and refined to long distance dependencies via gated recursive neural network. Experimental results show that our model is a competitive model for Chinese word segmentation.",
"This study explores the feasibility of performing Chinese word segmentation (CWS) and POS tagging by deep learning. We try to avoid task-specific feature engineering, and use deep layers of neural networks to discover relevant features to the tasks. We leverage large-scale unlabeled data to improve internal representation of Chinese characters, and use these improved representations to enhance supervised word segmentation and POS tagging models. Our networks achieved close to state-of-the-art performance with minimal computational cost. We also describe a perceptron-style algorithm for training the neural networks, as an alternative to maximum-likelihood method, to speed up the training process and make the learning algorithm easier to be implemented.",
"This paper proposes an embedding matching approach to Chinese word segmentation, which generalizes the traditional sequence labeling framework and takes advantage of distributed representations. The training and prediction algorithms have linear-time complexity. Based on the proposed model, a greedy segmenter is developed and evaluated on benchmark corpora. Experiments show that our greedy segmenter achieves improved results over previous neural network-based word segmenters, and its performance is competitive with state-of-the-art methods, despite its simple feature set and the absence of external resources for training.",
"",
"Recently, neural network models for natural language processing tasks have been increasingly focused on for their ability of alleviating the burden of manual feature engineering. However, the previous neural models cannot extract the complicated feature compositions as the traditional methods with discrete features. In this paper, we propose a gated recursive neural network (GRNN) for Chinese word segmentation, which contains reset and update gates to incorporate the complicated combinations of the context characters. Since GRNN is relative deep, we also use a supervised layer-wise training method to avoid the problem of gradient diffusion. Experiments on the benchmark datasets show that our model outperforms the previous neural network models as well as the state-of-the-art methods."
]
} |
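The "Concatenate-based Stacking" idea in the record above — run several domain-based models over the same characters, concatenate their per-character outputs, and let a stacking layer combine them into one tag distribution — can be sketched in a few lines. The toy linear models, dimensions, and random weights below are invented for illustration and are not the paper's architecture:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def concat_stack(char_feats, domain_models, W_stack, b_stack):
    """Combine per-character tag distributions from several domain models.

    char_feats:    (T, D) features for T characters.
    domain_models: list of (W, b) pairs, each producing (T, K) tag scores.
    W_stack:       (len(domain_models) * K, K) stacking weights.
    """
    outs = [softmax(char_feats @ W + b) for W, b in domain_models]
    stacked_in = np.concatenate(outs, axis=-1)       # (T, M*K) joint input
    return softmax(stacked_in @ W_stack + b_stack)   # final (T, K) distribution

rng = np.random.default_rng(2)
T, D, K = 6, 8, 4                                    # K=4 tags, e.g. B/M/E/S
models = [(rng.normal(size=(D, K)), rng.normal(size=K)) for _ in range(3)]
probs = concat_stack(rng.normal(size=(T, D)), models,
                     rng.normal(size=(3 * K, K)), rng.normal(size=K))
```

The stacking layer sees all domain models' opinions at once, which is what lets it arbitrate between domains instead of trusting any single low-resource model.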
1711.01131 | 2765737382 | Sparse variational approximations allow for principled and scalable inference in Gaussian Process (GP) models. In settings where several GPs are part of the generative model, these GPs are a posteriori coupled. For many applications such as regression where predictive accuracy is the quantity of interest, this coupling is not crucial. However, if one is interested in posterior uncertainty, it cannot be ignored. A key element of variational inference schemes is the choice of the approximate posterior parameterization. When the number of latent variables is large, mean field (MF) methods provide fast and accurate posterior means while more structured posteriors lead to inference algorithms of greater computational complexity. Here, we extend previous sparse GP approximations and propose a novel parameterization of variational posteriors in the multi-GP setting allowing for fast and scalable inference capturing posterior dependencies. | Variational inference for the CGP setting has so far only used the mean-field approximation as described in @cite_12 . When posterior dependencies are a quantity of interest, a natural approach is to increase the complexity of the variational posterior to capture these dependencies. This often results in a prohibitive increase in the complexity of the inference. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2339762427"
],
"abstract": [
"Gaussian process models are flexible, Bayesian non-parametric approaches to regression. Properties of multivariate Gaussians mean that they can be combined linearly in the manner of additive models and via a link function (like in generalized linear models) to handle non-Gaussian data. However, the link function formalism is restrictive, link functions are always invertible and must convert a parameter of interest to a linear combination of the underlying processes. There are many likelihoods and models where a non-linear combination is more appropriate. We term these more general models Chained Gaussian Processes: the transformation of the GPs to the likelihood parameters will not generally be invertible, and that implies that linearisation would only be possible with multiple (localized) links, i.e. a chain. We develop an approximate inference procedure for Chained GPs that is scalable and applicable to any factorized likelihood. We demonstrate the approximation on a range of likelihood functions."
]
} |
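The cost of the mean-field approximation discussed in this record can be seen with two correlated latent Gaussians: matching the marginal variances while dropping the off-diagonal covariance changes any quantity that depends on the joint posterior. A toy numpy illustration (the covariance numbers are made up for the example):

```python
import numpy as np

# Full joint posterior over two latent function values, positively correlated
Sigma_full = np.array([[1.0, 0.8],
                       [0.8, 1.0]])

# Mean-field posterior: same marginal variances, dependencies discarded
Sigma_mf = np.diag(np.diag(Sigma_full))

# Posterior variance of the derived quantity f1 + f2 under each approximation
a = np.ones(2)
var_full = a @ Sigma_full @ a   # 3.6: covariance terms included
var_mf = a @ Sigma_mf @ a       # 2.0: correlations ignored, uncertainty misstated
```

The posterior means can agree exactly while the uncertainty of derived quantities differs substantially, which is why the record stresses that the coupling "cannot be ignored" when posterior uncertainty is of interest.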
1711.00811 | 2766808838 | Deep neural networks are surprisingly efficient at solving practical tasks, but the theory behind this phenomenon is only starting to catch up with the practice. Numerous works show that depth is the key to this efficiency. A certain class of deep convolutional networks -- namely those that correspond to the Hierarchical Tucker (HT) tensor decomposition -- has been proven to have exponentially higher expressive power than shallow networks. I.e. a shallow network of exponential width is required to realize the same score function as computed by the deep architecture. In this paper, we prove the expressive power theorem (an exponential lower bound on the width of the equivalent shallow network) for a class of recurrent neural networks -- ones that correspond to the Tensor Train (TT) decomposition. This means that even processing an image patch by patch with an RNN can be exponentially more efficient than a (shallow) convolutional network with one hidden layer. Using theoretical results on the relation between the tensor decompositions we compare expressive powers of the HT- and TT-Networks. We also implement the recurrent TT-Networks and provide numerical evidence of their expressivity. | A large body of work is devoted to analyzing the theoretical properties of neural networks ( @cite_16 @cite_28 @cite_32 ). Recent studies focus on depth efficiency ( @cite_33 @cite_0 @cite_20 @cite_9 ), in most cases providing worst-case guaranties such as bounds between deep and shallow networks width. Two works are especially relevant since they analyze depth efficiency from the viewpoint of tensor decompositions: expressive power of the Hierarchical Tucker decomposition ( @cite_18 ) and its generalization to handle activation functions such as ReLU ( @cite_8 ). However, all of the works above focus on feedforward networks, while we tackle recurrent architectures. 
The only other work that tackles expressivity of RNNs is the concurrent work that applies the TT-decomposition to explicitly modeling high-order interactions of the previous hidden states and analyses the expressive power of the resulting architecture. This work, although very related to ours, analyses a different class of recurrent models. | {
"cite_N": [
"@cite_18",
"@cite_33",
"@cite_8",
"@cite_28",
"@cite_9",
"@cite_32",
"@cite_0",
"@cite_16",
"@cite_20"
],
"mid": [
"1852909287",
"2433379750",
"2551156993",
"2137983211",
"",
"",
"",
"2103496339",
""
],
"abstract": [
"It has long been conjectured that hypotheses spaces suitable for data that is compositional in nature, such as text or images, may be more efficiently represented with deep hierarchical networks than with shallow ones. Despite the vast empirical evidence supporting this belief, theoretical justifications to date are limited. In particular, they do not account for the locality, sharing and pooling constructs of convolutional networks, the most successful deep learning architecture to date. In this work we derive a deep network architecture based on arithmetic circuits that inherently employs locality, sharing and pooling. An equivalence between the networks and hierarchical tensor factorizations is established. We show that a shallow network corresponds to CP (rank-1) decomposition, whereas a deep network corresponds to Hierarchical Tucker decomposition. Using tools from measure theory and matrix algebra, we prove that besides a negligible set, all functions that can be implemented by a deep network of polynomial size, require exponential size in order to be realized (or even approximated) by a shallow network. Since log-space computation transforms our networks into SimNets, the result applies directly to a deep learning architecture demonstrating promising empirical performance. The construction and theory developed in this paper shed new light on various practices and ideas employed by the deep learning community.",
"We propose a new approach to the problem of neural network expressivity, which seeks to characterize how structural properties of a neural network family affect the functions it is able to compute. Our approach is based on an interrelated set of measures of expressivity, unified by the novel notion of trajectory length, which measures how the output of a network changes as the input sweeps along a one-dimensional path. Our findings can be summarized as follows: (1) The complexity of the computed function grows exponentially with depth. (2) All weights are not equal: trained networks are more sensitive to their lower (initial) layer weights. (3) Regularizing on trajectory length (trajectory regularization) is a simpler alternative to batch normalization, with the same performance.",
"Tensor networks are approximations of high-order tensors which are efficient to work with and have been very successful for physics and mathematics applications. We demonstrate how algorithms for optimizing tensor networks can be adapted to supervised learning tasks by using matrix product states (tensor trains) to parameterize non-linear kernel learning models. For the MNIST data set we obtain less than 1% test set classification error. We discuss an interpretation of the additional structure imparted by the tensor network to the learned model.",
"Abstract This paper rigorously establishes that standard multilayer feedforward networks with as few as one hidden layer using arbitrary squashing functions are capable of approximating any Borel measurable function from one finite dimensional space to another to any desired degree of accuracy, provided sufficiently many hidden units are available. In this sense, multilayer feedforward networks are a class of universal approximators.",
"",
"",
"",
"In this paper we demonstrate that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function ofn real variables with support in the unit hypercube; only mild conditions are imposed on the univariate function. Our results settle an open question about representability in the class of single hidden layer neural networks. In particular, we show that arbitrary decision regions can be arbitrarily well approximated by continuous feedforward neural networks with only a single internal, hidden layer and any continuous sigmoidal nonlinearity. The paper discusses approximation properties of other possible types of nonlinearities that might be implemented by artificial neural networks.",
""
]
} |
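The Tensor Train (TT) decomposition underlying the record above factorizes a d-way tensor into a chain of 3-way cores, one per mode. A compact numpy sketch of the standard TT-SVD construction via sequential SVDs (the truncation rank and tensor sizes are illustrative; this is not the paper's own code):

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Factor a d-way tensor into TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
    shape = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(shape[0], -1)            # unfold along the first mode
    for k in range(len(shape) - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r_new = min(max_rank, len(S))             # truncate to the TT-rank cap
        cores.append(U[:, :r_new].reshape(rank, shape[k], r_new))
        mat = (S[:r_new, None] * Vt[:r_new]).reshape(r_new * shape[k + 1], -1)
        rank = r_new
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])

T = np.random.default_rng(0).normal(size=(3, 4, 5))
T_hat = tt_reconstruct(tt_svd(T, max_rank=20))    # cap above true ranks: exact
```

With `max_rank` below the true ranks, the same routine returns a truncated low-rank TT approximation; that compactness relative to a single flat factorization is the quantity the expressive-power bounds in the record compare.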
1711.00931 | 2964334533 | The SPARC TSO weak memory model is defined axiomatically, with a non-compositional formulation that makes modular reasoning about programs difficult. Our denotational approach uses pomsets to provide a compositional semantics capturing exactly the behaviours permitted by SPARC TSO. Our approach facilitates the study of SPARC TSO and supports modular analysis of program behaviour. | Other approaches to semantics for weak memory models mostly use execution graphs and operational semantics. Execution graphs @cite_3 @cite_9 serve to describe the executional behaviour of an entire program, an inherently non-modular approach. We see our denotational framework as offering an alternative basis for program analysis, compositional and modular by design. Boudol and Petri @cite_4 gave an operational semantics framework for weak memory models that uses buffered states. Jagadeesan et al. @cite_5 adapted a fully abstract, trace-based semantics by Brookes @cite_0 to give a fully abstract denotational semantics for TSO. | {
"cite_N": [
"@cite_4",
"@cite_9",
"@cite_3",
"@cite_0",
"@cite_5"
],
"mid": [
"2104245532",
"2152885346",
"2138074470",
"2147218830",
"75349309"
],
"abstract": [
"Memory models define an interface between programs written in some language and their implementation, determining which behaviour the memory (and thus a program) is allowed to have in a given model. A minimal guarantee memory models should provide to the programmer is that well-synchronized, that is, data-race free code has a standard semantics. Traditionally, memory models are defined axiomatically, setting constraints on the order in which memory operations are allowed to occur, and the programming language semantics is implicit as determining some of these constraints. In this work we propose a new approach to formalizing a memory model in which the model itself is part of a weak operational semantics for a (possibly concurrent) programming language. We formalize in this way a model that allows write operations to the store to be buffered. This enables us to derive the ordering constraints from the weak semantics of programs, and to prove, at the programming language level, that the weak semantics implements the usual interleaving semantics for data-race free programs, hence in particular that it implements the usual semantics for sequential code.",
"Currently multi-threaded C or C++ programs combine a single-threaded programming language with a separate threads library. This is not entirely sound [7]. We describe an effort, currently nearing completion, to address these issues by explicitly providing semantics for threads in the next revision of the C++ standard. Our approach is similar to that recently followed by Java [25], in that, at least for a well-defined and interesting subset of the language, we give sequentially consistent semantics to programs that do not contain data races. Nonetheless, a number of our decisions are often surprising even to those familiar with the Java effort: We (mostly) insist on sequential consistency for race-free programs, in spite of implementation issues that came to light after the Java work. We give no semantics to programs with data races. There are no benign C++ data races. We use weaker semantics for trylock than existing languages or libraries, allowing us to promise sequential consistency with an intuitive race definition, even for programs with trylock. This paper describes the simple model we would like to be able to provide for C++ threads programmers, and explain how this, together with some practical, but often under-appreciated implementation constraints, drives us towards the above decisions.",
"Shared-memory concurrency in C and C++ is pervasive in systems programming, but has long been poorly defined. This motivated an ongoing shared effort by the standards committees to specify concurrent behaviour in the next versions of both languages. They aim to provide strong guarantees for race-free programs, together with new (but subtle) relaxed-memory atomic primitives for high-performance concurrent code. However, the current draft standards, while the result of careful deliberation, are not yet clear and rigorous definitions, and harbour substantial problems in their details. In this paper we establish a mathematical (yet readable) semantics for C++ concurrency. We aim to capture the intent of the current ('Final Committee') Draft as closely as possible, but discuss changes that fix many of its problems. We prove that a proposed x86 implementation of the concurrency primitives is correct with respect to the x86-TSO model, and describe our Cppmem tool for exploring the semantics of examples, using code generated from our Isabelle/HOL definitions. Having already motivated changes to the draft standard, this work will aid discussion of any further changes, provide a correctness condition for compilers, and give a much-needed basis for analysis and verification of concurrent C and C++ programs.",
"Gives a new denotational semantics for a shared variable parallel programming language and proves full abstraction. The semantics gives identical meanings to commands if and only if they induce the same partial correctness behavior in all program contexts. The meaning of a command is a set of transition traces, which record the ways in which a command may interact with and be affected by its environment. It is shown how to modify the semantics to incorporate new program constructs, to allow for different levels of granularity or atomicity, and to model fair infinite computation, in each case achieving full abstraction with respect to an appropriate notion of program behavior.",
"We revisit the Brookes [1996] semantics for a shared variable parallel programming language in the context of the Total Store Ordering TSO relaxed memory model. We describe a denotational semantics that is fully abstract for Brookes' language and also sound for the new commands that are specific to TSO. Our description supports the folklore sentiment about the simplicity of the TSO memory model."
]
} |
1711.00851 | 2766462876 | We propose a method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations (on the training data; for previously unseen examples, the approach will be guaranteed to detect all adversarial examples, though it may flag some non-adversarial examples as well). The basic idea of the approach is to consider a convex outer approximation of the set of activations reachable through a norm-bounded perturbation, and we develop a robust optimization procedure that minimizes the worst case loss over this outer region (via a linear program). Crucially, we show that the dual problem to this linear program can be represented itself as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss. The end result is that by executing a few more forward and backward passes through a slightly modified version of the original network (though possibly with much larger batch sizes), we can learn a classifier that is provably robust to any norm-bounded adversarial attack. We illustrate the approach on a toy 2D robust classification task, and on a simple convolutional architecture applied to MNIST, where we produce a classifier that provably has less than 8.4 test error for any adversarial attack with bounded @math norm less than @math . This represents the largest verified network that we are aware of, and we discuss future challenges in scaling the approach to much larger domains. | In addition to general work in adversarial attacks and defenses, our work relates most closely to several ongoing thrusts in adversarial examples. First, there is a great deal of ongoing work using exact (combinatorial) solvers to verify properties of neural networks, including robustness to adversarial attacks. 
These typically employ either Satisfiability Modulo Theories (SMT) solvers @cite_6 @cite_0 @cite_13 @cite_7 or integer programming approaches @cite_2 @cite_4 @cite_14 . Of particular note is the PLANET solver @cite_13 , which also uses linear ReLU relaxations, though it employs them just as a sub-step in a larger combinatorial solver. The obvious advantage of these approaches is that they are able to reason about the adversarial polytope, but because they are fundamentally combinatorial in nature, it seems prohibitively difficult to scale them even to medium-sized networks such as those we study here. In addition, unlike in the work we present here, the verification procedures are too computationally costly to be integrated easily to a robust training procedure. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_6",
"@cite_0",
"@cite_2",
"@cite_13"
],
"mid": [
"",
"2768915615",
"2787708942",
"1673923490",
"2950147618",
"2721006554",
"134960717"
],
"abstract": [
"",
"Neural networks have demonstrated considerable success in a wide variety of real-world problems. However, the presence of adversarial examples - slightly perturbed inputs that are misclassified with high confidence - limits our ability to guarantee performance for these networks in safety-critical applications. We demonstrate that, for networks that are piecewise affine (for example, deep networks with ReLU and maxpool units), proving no adversarial example exists - or finding the closest example if one does exist - can be naturally formulated as solving a mixed integer program. Solves for a fully-connected MNIST classifier with three hidden layers can be completed an order of magnitude faster than those of the best existing approach. To address the concern that adversarial examples are irrelevant because pixel-wise attacks are unlikely to happen in natural images, we search for adversaries over a natural class of perturbations written as convolutions with an adversarial blurring kernel. When searching over blurred images, we find that as opposed to pixelwise attacks, some misclassifications are impossible. Even more interestingly, a small fraction of input images are provably robust to blurs: every blurred version of the input is classified with the same, correct label.",
"We identify obfuscated gradients as a phenomenon that leads to a false sense of security in defenses against adversarial examples. While defenses that cause obfuscated gradients appear to defeat optimization-based attacks, we find defenses relying on this effect can be circumvented. For each of the three types of obfuscated gradients we discover, we describe indicators of defenses exhibiting this effect and develop attack techniques to overcome it. In a case study, examining all defenses accepted to ICLR 2018, we find obfuscated gradients are a common occurrence, with 7 of 8 defenses relying on obfuscated gradients. Using our new attack techniques, we successfully circumvent all 7 of them.",
"Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.",
"Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.",
"We study the reachability problem for systems implemented as feed-forward neural networks whose activation function is implemented via ReLU functions. We draw a correspondence between establishing whether some arbitrary output can ever be outputed by a neural system and linear problems characterising a neural system of interest. We present a methodology to solve cases of practical interest by means of a state-of-the-art linear programs solver. We evaluate the technique presented by discussing the experimental results obtained by analysing reachability properties for a number of benchmarks in the literature.",
"Human-centered computing is an emerging research field that aims to understand human behavior and integrate users and their social context with computer systems. One of the most recent, challenging and appealing applications in this framework consists in sensing human body motion using smartphones to gather context information about people actions. In this context, we describe in this work an Activity Recognition database, built from the recordings of 30 subjects doing Activities of Daily Living (ADL) while carrying a waist-mounted smartphone with embedded inertial sensors, which is released to public domain on a well-known on-line repository. Results, obtained on the dataset by exploiting a multiclass Support Vector Machine (SVM), are also acknowledged."
]
} |
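The record above is built on propagating a norm-bounded input set through a ReLU network and bounding the worst case. Its LP-based relaxation is tighter than what fits in a few lines, but the simpler interval (elementwise) relaxation below conveys the same idea of sound outer bounds; the two-layer network, radius, and random weights are arbitrary choices made for this sketch, not the paper's construction:

```python
import numpy as np

def interval_bounds(weights, biases, x, eps):
    """Sound elementwise output bounds for every input in the
    l-infinity ball of radius eps around x (interval relaxation)."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(zip(weights, biases)):
        mid = (lo + hi) / 2.0
        rad = (hi - lo) / 2.0
        center = W @ mid + b
        spread = np.abs(W) @ rad              # worst-case growth per unit
        lo, hi = center - spread, center + spread
        if i < len(weights) - 1:              # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

rng = np.random.default_rng(1)
Ws = [rng.normal(size=(5, 3)), rng.normal(size=(2, 5))]
bs = [rng.normal(size=5), rng.normal(size=2)]
x0, eps = rng.normal(size=3), 0.1
lo, hi = interval_bounds(Ws, bs, x0, eps)
```

Every perturbed input lands inside `[lo, hi]`, so if such a bound already separates the true-class logit from all others, the point is certifiably robust; the paper's dual-LP bound plays the same role but is far less loose than this interval version.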
1711.00851 | 2766462876 | We propose a method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations (on the training data; for previously unseen examples, the approach will be guaranteed to detect all adversarial examples, though it may flag some non-adversarial examples as well). The basic idea of the approach is to consider a convex outer approximation of the set of activations reachable through a norm-bounded perturbation, and we develop a robust optimization procedure that minimizes the worst case loss over this outer region (via a linear program). Crucially, we show that the dual problem to this linear program can be represented itself as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss. The end result is that by executing a few more forward and backward passes through a slightly modified version of the original network (though possibly with much larger batch sizes), we can learn a classifier that is provably robust to any norm-bounded adversarial attack. We illustrate the approach on a toy 2D robust classification task, and on a simple convolutional architecture applied to MNIST, where we produce a classifier that provably has less than 8.4 test error for any adversarial attack with bounded @math norm less than @math . This represents the largest verified network that we are aware of, and we discuss future challenges in scaling the approach to much larger domains. | The next line of related work are methods for computing bounds on the possible perturbation regions of deep networks. 
For example, Parseval networks @cite_1 attempt to achieve some degree of adversarial robustness by regularizing the @math operator norm of the weight matrices (keeping the network non-expansive in the @math norm); similarly, the work by shows how to limit the possible layerwise norm expansions in a variety of different layer types. In this work, we study similar "layerwise" bounds, and show that they are typically substantially (by many orders of magnitude) worse than the outer bounds we present. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2610190180"
],
"abstract": [
"We introduce Parseval networks, a form of deep neural networks in which the Lipschitz constant of linear, convolutional and aggregation layers is constrained to be smaller than 1. Parseval networks are empirically and theoretically motivated by an analysis of the robustness of the predictions made by deep neural networks when their input is subject to an adversarial perturbation. The most important feature of Parseval networks is to maintain weight matrices of linear and convolutional layers to be (approximately) Parseval tight frames, which are extensions of orthogonal matrices to non-square matrices. We describe how these constraints can be maintained efficiently during SGD. We show that Parseval networks match the state-of-the-art in terms of accuracy on CIFAR-10 100 and Street View House Numbers (SVHN) while being more robust than their vanilla counterpart against adversarial examples. Incidentally, Parseval networks also tend to train faster and make a better usage of the full capacity of the networks."
]
} |
1711.01006 | 2766730959 | While neural machine translation (NMT) has become the new paradigm, the parameter optimization requires large-scale parallel data which is scarce in many domains and language pairs. In this paper, we address a new translation scenario in which there only exists monolingual corpora and phrase pairs. We propose a new method towards translation with partially aligned sentence pairs which are derived from the phrase pairs and monolingual corpora. To make full use of the partially aligned corpora, we adapt the conventional NMT training method in two aspects. On one hand, different generation strategies are designed for aligned and unaligned target words. On the other hand, a different objective function is designed to model the partially aligned parts. The experiments demonstrate that our method can achieve a relatively good result in such a translation scenario, and tiny bitexts can boost translation quality to a large extent. | Most existing work in neural machine translation focuses on integrating SMT strategies @cite_10 @cite_24 @cite_19 @cite_3 , handling rare words @cite_1 @cite_23 @cite_8 and designing better frameworks @cite_11 @cite_5 @cite_13 . As for translation scenarios, training NMT models under different scenarios has drawn intensive attention in recent years. Several effective methods have been proposed to deal with these scenarios. We divide the related work into three categories: | {
"cite_N": [
"@cite_13",
"@cite_8",
"@cite_1",
"@cite_3",
"@cite_24",
"@cite_19",
"@cite_23",
"@cite_5",
"@cite_10",
"@cite_11"
],
"mid": [
"2534200568",
"",
"2577335011",
"2195405088",
"2608870981",
"2952659248",
"1816313093",
"2949335953",
"2566564022",
"2291126447"
],
"abstract": [
"",
"",
"Neural Machine translation has shown promising results in recent years. In order to control the computational complexity, NMT has to employ a small vocabulary, and massive rare words outside the vocabulary are all replaced with a single unk symbol. Besides the inability to translate rare words, this kind of simple approach leads to much increased ambiguity of the sentences since meaningless unks break the structure of sentences, and thus hurts the translation and reordering of the in-vocabulary words. To tackle this problem, we propose a novel substitution-translation-restoration method. In substitution step, the rare words in a testing sentence are replaced with similar in-vocabulary words based on a similarity model learnt from monolingual data. In translation and restoration steps, the sentence will be translated with a model trained on new bilingual data with rare words replaced, and finally the translations of the replaced words will be substituted by that of original ones. Experiments on Chinese-to-English translation demonstrate that our proposed method can achieve more than 4 BLEU points over the attention-based NMT. When compared to the recently proposed method handling rare words in NMT, our method can also obtain an improvement by nearly 3 BLEU points.",
"We propose minimum risk training for end-to-end neural machine translation. Unlike conventional maximum likelihood estimation, minimum risk training is capable of optimizing model parameters directly with respect to arbitrary evaluation metrics, which are not necessarily differentiable. Experiments show that our approach achieves significant improvements over maximum likelihood estimation on a state-of-the-art neural machine translation system across various languages pairs. Transparent to architectures, our approach can be applied to more neural networks and potentially benefit more NLP tasks.",
"Neural machine translation (NMT) becomes a new approach to machine translation and generates much more fluent results compared to statistical machine translation (SMT). However, SMT is usually better than NMT in translation adequacy. It is therefore a promising direction to combine the advantages of both NMT and SMT. In this paper, we propose a neural system combination framework leveraging multi-source NMT, which takes as input the outputs of NMT and SMT systems and produces the final translation. Extensive experiments on the Chinese-to-English translation task show that our model archives significant improvement by 5.3 BLEU points over the best single system output and 3.4 BLEU points over the state-of-the-art traditional system combination methods.",
"Neural Machine Translation (NMT) is a new approach to machine translation that has made great progress in recent years. However, recent studies show that NMT generally produces fluent but inadequate translations ( 2016b; 2016a; 2016; 2017). This is in contrast to conventional Statistical Machine Translation (SMT), which usually yields adequate but non-fluent translations. It is natural, therefore, to leverage the advantages of both models for better translations, and in this work we propose to incorporate SMT model into NMT framework. More specifically, at each decoding step, SMT offers additional recommendations of generated words based on the decoding information from NMT (e.g., the generated partial translation and attention history). Then we employ an auxiliary classifier to score the SMT recommendations and a gating function to combine the SMT recommendations with NMT generations, both of which are jointly trained within the NMT architecture in an end-to-end manner. Experimental results on Chinese-English translation show that the proposed approach achieves significant and consistent improvements over state-of-the-art NMT and SMT systems on multiple NIST test sets.",
"Neural machine translation (NMT) models typically operate with a fixed vocabulary, but translation is an open-vocabulary problem. Previous work addresses the translation of out-of-vocabulary words by backing off to a dictionary. In this paper, we introduce a simpler and more effective approach, making the NMT model capable of open-vocabulary translation by encoding rare and unknown words as sequences of subword units. This is based on the intuition that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations). We discuss the suitability of different word segmentation techniques, including simple character n-gram models and a segmentation based on the byte pair encoding compression algorithm, and empirically show that subword models improve over a back-off dictionary baseline for the WMT 15 translation tasks English-German and English-Russian by 1.1 and 1.3 BLEU, respectively.",
"An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches over the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems which already incorporate known techniques such as dropout. Our ensemble model using different attention architectures has established a new state-of-the-art result in the WMT'15 English to German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker.",
"Neural machine translation (NMT) conducts end-to-end translation with a source language encoder and a target language decoder, making promising translation performance. However, as a newly emerged approach, the method has some limitations. An NMT system usually has to apply a vocabulary of certain size to avoid the time-consuming training and decoding, thus it causes a serious out-of-vocabulary problem. Furthermore, the decoder lacks a mechanism to guarantee all the source words to be translated and usually favors short translations, resulting in fluent but inadequate translations. In order to solve the above problems, we incorporate statistical machine translation (SMT) features, such as a translation model and an n-gram language model, with the NMT model under the log-linear framework. Our experiments show that the proposed method significantly improves the translation quality of the state-of-the-art NMT system on Chinese-to-English translation tasks. Our method produces a gain of up to 2.33 BLEU score on NIST open test sets.",
""
]
} |
1711.00967 | 2752551804 | We introduce the Dynamic Influence Network (DIN), a novel visual analytics technique for representing and analyzing rule-based models of protein-protein interaction networks. Rule-based modeling has proved instrumental in developing biological models that are concise, comprehensible, easily extensible, and that mitigate the combinatorial complexity of multi-state and multi-component biological molecules. Our technique visualizes the dynamics of these rules as they evolve over time. Using the data produced by KaSim, an open source stochastic simulator of rule-based models written in the Kappa language, DINs provide a node-link diagram that represents the influence that each rule has on the other rules. That is, rather than representing individual biological components or types, we instead represent the rules about them (as nodes) and the current influence of these rules (as links). Using our interactive DIN-Viz software tool, researchers are able to query this dynamic network to find meaningful patterns about biological processes, and to identify salient aspects of complex rule-based models. To evaluate the effectiveness of our approach, we investigate a simulation of a circadian clock model that illustrates the oscillatory behavior of the KaiC protein phosphorylation cycle. | While the majority of pathway visualization tools apply to reaction-based modeling, our project introduces a visual analysis technique for rule-based models. In addition to the Kappa language described above, the BNGL language @cite_32 @cite_58 is also widely used by systems biologists. While the differences between these languages are minimal, one benefit of Kappa, according to @cite_56 , is that tools in the Kappa ecosystem make use of formal methods to aid in information discovery and in debugging models. | {
"cite_N": [
"@cite_58",
"@cite_32",
"@cite_56"
],
"mid": [
"2303100661",
"2175715045",
"2395803168"
],
"abstract": [
"Summary : BioNetGen is an open-source software package for rule-based modeling of complex biochemical systems. Version 2.2 of the software introduces numerous new features for both model specification and simulation. Here, we report on these additions, discussing how they facilitate the construction, simulation and analysis of larger and more complex models than previously possible. Availability and Implementation : Stable BioNetGen releases (Linux, Mac OS X and Windows), with documentation, are available at http: bionetgen.org . Source code is available at http: github.com RuleWorld bionetgen . Contact: bionetgen.help@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online.",
"Rule-based modeling involves the representation of molecules as structured objects and molecular interactions as rules for transforming the attributes of these objects. The approach is notable in that it allows one to systematically incorporate site-specific details about proteinprotein interactions into a model for the dynamics of a signal-transduction system, but the method has other applications as well, such as following the fates of individual carbon atoms in metabolic reactions. The consequences of protein-protein interactions are difficult to specify and track with a conventional modeling approach because of the large number of protein phosphoforms and protein complexes that these interactions potentially generate. Here, we focus on how a rule-based model is specified in the BioNetGen language (BNGL) and how a model specification is analyzed using the BioNetGen software tool. We also discuss new developments in rule-based modeling that should enable the construction and analyses of comprehensive models for signal transduction pathways and similarly large-scale models for other biochemical systems.",
""
]
} |
1711.00967 | 2752551804 | We introduce the Dynamic Influence Network (DIN), a novel visual analytics technique for representing and analyzing rule-based models of protein-protein interaction networks. Rule-based modeling has proved instrumental in developing biological models that are concise, comprehensible, easily extensible, and that mitigate the combinatorial complexity of multi-state and multi-component biological molecules. Our technique visualizes the dynamics of these rules as they evolve over time. Using the data produced by KaSim, an open source stochastic simulator of rule-based models written in the Kappa language, DINs provide a node-link diagram that represents the influence that each rule has on the other rules. That is, rather than representing individual biological components or types, we instead represent the rules about them (as nodes) and the current influence of these rules (as links). Using our interactive DIN-Viz software tool, researchers are able to query this dynamic network to find meaningful patterns about biological processes, and to identify salient aspects of complex rule-based models. To evaluate the effectiveness of our approach, we investigate a simulation of a circadian clock model that illustrates the oscillatory behavior of the KaiC protein phosphorylation cycle. | A modeling framework called PySB aims to make it easier to build mathematical models of biochemical systems as Python programs @cite_12 . In their approach, models are not only created using programs; these models are themselves executable programs. PySB transforms the Python code into either BNGL or Kappa rules, and provides methods that make it easier to create macros that encode recurrent biochemical patterns and to define complex networks as reusable modules. Pedersen et al. @cite_47 also introduce a modular extension to Kappa that provides a means for writing modular rule-based models. | {
"cite_N": [
"@cite_47",
"@cite_12"
],
"mid": [
"591497760",
"2097240335"
],
"abstract": [
"Rule-based languages such as Kappa excel in their support for handling the combinatorial complexities prevalent in many biological systems, including signalling pathways. But Kappa provides little structure for organising rules, and large models can therefore be hard to read and maintain. This paper introduces a high-level, modular extension of Kappa called LBS-κ. We demonstrate the constructs of the language through examples and three case studies: a chemotaxis switch ring, a MAPK cascade, and an insulin signalling pathway. We then provide a formal definition of LBS-κ through an abstract syntax and a translation to plain Kappa. The translation is implemented in a compiler tool which is available as a web application. We finally demonstrate how to increase the expressivity of LBS-κ through embedded scripts in a general-purpose programming language, a technique which we view as generally applicable to other domain specific languages.",
"Mathematical equations are fundamental to modeling biological networks, but as networks get large and revisions frequent, it becomes difficult to manage equations directly or to combine previously developed models. Multiple simultaneous efforts to create graphical standards, rulebased languages, and integrated software workbenches aim to simplify biological modeling but none fully meets the need for transparent, extensible, and reusable models. In this paper we describe PySB, an approach in which models are not only created using programs, they are programs. PySB draws on programmatic modeling concepts from little b and ProMot, the rule-based languages BioNetGen and Kappa and the growing library of Python numerical tools. Central to PySB is a library of macros encoding familiar biochemical actions such as binding, catalysis, and polymerization, making it possible to use a high-level, action-oriented vocabulary to construct detailed models. As Python programs, PySB models leverage tools and practices from the opensource software community, substantially advancing our ability to distribute and manage the work of testing biochemical hypotheses. We illustrate these ideas using new and previously published models of apoptosis."
]
} |
1711.00967 | 2752551804 | We introduce the Dynamic Influence Network (DIN), a novel visual analytics technique for representing and analyzing rule-based models of protein-protein interaction networks. Rule-based modeling has proved instrumental in developing biological models that are concise, comprehensible, easily extensible, and that mitigate the combinatorial complexity of multi-state and multi-component biological molecules. Our technique visualizes the dynamics of these rules as they evolve over time. Using the data produced by KaSim, an open source stochastic simulator of rule-based models written in the Kappa language, DINs provide a node-link diagram that represents the influence that each rule has on the other rules. That is, rather than representing individual biological components or types, we instead represent the rules about them (as nodes) and the current influence of these rules (as links). Using our interactive DIN-Viz software tool, researchers are able to query this dynamic network to find meaningful patterns about biological processes, and to identify salient aspects of complex rule-based models. To evaluate the effectiveness of our approach, we investigate a simulation of a circadian clock model that illustrates the oscillatory behavior of the KaiC protein phosphorylation cycle. | Efforts to effectively visualize graphs with nodes or edges that represent temporal data or that have a topology that evolves over time are cataloged by @cite_37 . They survey the landscape of dynamic graph visualization, categorizing projects primarily in terms of how they represent time, that is, whether they use animation or a static timeline to show the evolution of networks. These categories are then further parcellated according to which layout strategies they utilize and how they address particular problems inherent in dynamic datasets.
Inspired by @cite_57 , who investigate animated "network movies" for a range of sociological datasets, our tool features an animated node-link diagram whose layout is determined by clusters of influence (measured by how likely rules are to fire at the same time), either on a per-frame basis or within a user-selected time window. An interactive timeline is used to navigate through time, and more detailed information about selected nodes is presented for the currently selected time period. | {
"cite_N": [
"@cite_57",
"@cite_37"
],
"mid": [
"21202130",
"2287322623"
],
"abstract": [
"Increased interest in longitudinal social networks and the recognition that visualization fosters theoretical insight create a need for dynamic network visualizations, or network “movies.” This article confronts theoretical questions surrounding the temporal representations of social networks and technical questions about how best to link network change to changes in the graphical representation. The authors divide network movies into (1) static flip books, where node position remains constant but edges cumulate over time, and (2) dynamic movies, where nodes move as a function of changes in relations. Flip books are particularly useful in contexts where relations are sparse. For more connected networks, movies are often more appropriate. Three empirical examples demonstrate the advantages of different movie styles. A new software program for creating network movies is discussed in the appendix.",
"Dynamic graph visualization focuses on the challenge of representing the evolution of relationships between entities in readable, scalable and effective diagrams. This work surveys the growing number of approaches in this discipline. We derive a hierarchical taxonomy of techniques by systematically categorizing and tagging publications. While static graph visualizations are often divided into node-link and matrix representations, we identify the representation of time as the major distinguishing feature for dynamic graph visualizations: either graphs are represented as animated diagrams or as static charts based on a timeline. Evaluations of animated approaches focus on dynamic stability for preserving the viewer's mental map or, in general, compare animated diagrams to timeline-based ones. A bibliographic analysis provides insights into the organization and development of the field and its community. Finally, we identify and discuss challenges for future research. We also provide feedback from experts, collected with a questionnaire, which gives a broad perspective of these challenges and the current state of the field."
]
} |
1711.00967 | 2752551804 | We introduce the Dynamic Influence Network (DIN), a novel visual analytics technique for representing and analyzing rule-based models of protein-protein interaction networks. Rule-based modeling has proved instrumental in developing biological models that are concise, comprehensible, easily extensible, and that mitigate the combinatorial complexity of multi-state and multi-component biological molecules. Our technique visualizes the dynamics of these rules as they evolve over time. Using the data produced by KaSim, an open source stochastic simulator of rule-based models written in the Kappa language, DINs provide a node-link diagram that represents the influence that each rule has on the other rules. That is, rather than representing individual biological components or types, we instead represent the rules about them (as nodes) and the current influence of these rules (as links). Using our interactive DIN-Viz software tool, researchers are able to query this dynamic network to find meaningful patterns about biological processes, and to identify salient aspects of complex rule-based models. To evaluate the effectiveness of our approach, we investigate a simulation of a circadian clock model that illustrates the oscillatory behavior of the KaiC protein phosphorylation cycle. | Interesting recent approaches to visualizing dynamic data include Archambault and Purchase's work on dynamic attribute cascades @cite_21 , the map-based visualization of @cite_31 , and the GraphDiaries @cite_19 and Matrix Cube @cite_35 techniques. However, since a main goal of our visualization was to emphasize the relationship of rules to other rules, we elected to use a visual representation that made it easier to apply visual encodings to the links between nodes (see DIN_Section ). Techniques by @cite_24 @cite_60 , @cite_65 , and @cite_6 present multiple synchronized representations of a dynamic brain network to provide additional insight into the community dynamics within the network.
Our tool also presents auxiliary representations to support the analysis of dynamic data, providing detail on demand for selected nodes. | {
"cite_N": [
"@cite_35",
"@cite_60",
"@cite_21",
"@cite_65",
"@cite_6",
"@cite_24",
"@cite_19",
"@cite_31"
],
"mid": [
"2039163945",
"2577494539",
"1964245472",
"2605987350",
"2284714407",
"2307372197",
"2166953159",
"2042665732"
],
"abstract": [
"Designing visualizations of dynamic networks is challenging, both because the data sets tend to be complex and because the tasks associated with them are often cognitively demand- ing. We introduce the Matrix Cube, a novel visual representation and navigation model for dynamic networks, inspired by the way people comprehend and manipulate physical cubes. Users can change their perspective on the data by rotating or decomposing the 3D cube. These manipulations can produce a range of different 2D visualizations that emphasize specific aspects of the dynamic network suited to particular analysis tasks. We describe Matrix Cubes and the interactions that can be performed on them in the Cubix system. We then show how two domain experts, an astronomer and a neurologist, used Cubix to explore and report on their own network data.",
"",
"Cascades appear in many applications, including biological graphs and social media analysis. In a cascade, a dynamic attribute propagates through a graph, following its edges. We present the results of a formal user study that tests the effectiveness of different types of cascade visualisations on node-link diagrams for the task of judging cascade spread. Overall, we found that a small multiples presentation was significantly faster than animation with no significant difference in terms of error rate. Participants generally preferred animation over small multiples and a hierarchical layout to a force-directed layout. Considering each presentation method separately, when comparing force-directed layouts to hierarchical layouts, hierarchical layouts were found to be significantly faster for both presentation methods and significantly more accurate for animation. Representing the history of the cascade had no significant effect. Thus, for our task, this experiment supports the use of a small multiples interf...",
"The effective application of spatio-temporal network models to neuroimaging data is an emerging challenge in the field of neuroscience, and could help scientists to better understand the behavior of the brain across a range of different experiments. One of the main problems with deriving spatiotemporal networks is that it is difficulty to provide a clear view of computed results. In this paper, we introduce an interactive visualization tool for spatio-temporal networks computed from neuroimaging datasets. Our tool allows the user to change parameters interactively, at run-time, and to compare multiple versions of the results directly. Furthermore, we describe our approach to rapidly calculating spatio-temporal networks to support visual analytics tasks for neuroimaging data.",
"This paper describes novel methods for constructing the intrinsic geometry of the human brain connectome using dimensionality-reduction techniques. We posit that the high-dimensional, complex geometry that represents this intrinsic topology can be mathematically embedded into lower dimensions using coupling patterns encoded in the corresponding brain connectivity graphs. We tested both linear and nonlinear dimensionality-reduction techniques using the diffusion-weighted structural connectome data acquired from a sample of healthy subjects. Results supported the nonlinearity of brain connectivity data, as linear reduction techniques such as the multidimensional scaling yielded inferior lower-dimensional embeddings. To further validate our results, we demonstrated that for tractography-derived structural connectome more influential regions such as rich-club members of the brain are more centrally mapped or embedded. Further, abnormal brain connectivity can be visually understood by inspecting the altered geometry of these three-dimensional (3D) embeddings that represent the topology of the human brain, as illustrated using simulated lesion studies of both targeted and random removal. Last, in order to visualize brain’s intrinsic topology we have developed software that is compatible with virtual reality technologies, thus allowing researchers to collaboratively and interactively explore and manipulate brain connectome data.",
"",
"Identifying, tracking and understanding changes in dynamic networks are complex and cognitively demanding tasks. We present GraphDiaries, a visual interface designed to improve support for these tasks in any node-link based graph visualization system. GraphDiaries relies on animated transitions that highlight changes in the network between time steps, thus helping users identify and understand those changes. To better understand the tasks related to the exploration of dynamic networks, we first introduce a task taxonomy, that informs the design of GraphDiaries, presented afterwards. We then report on a user study, based on representative tasks identified through the taxonomy, and that compares GraphDiaries to existing techniques for temporal navigation in dynamic networks, showing that it outperforms them in terms of both task time and errors for several of these tasks.",
"Maps offer a familiar way to present geographic data (continents, countries), and additional information (topography, geology), can be displayed with the help of contours and heat-map overlays. In this paper, we consider visualizing large-scale dynamic relational data by taking advantage of the geographic map metaphor. We describe a map-based visualization system which uses animation to convey dynamics in large data sets, and which aims to preserve the viewer's mental map while also offering readable views at all times. Our system is fully functional and has been used to visualize user traffic on the Internet radio station last.fm, as well as TV-viewing patterns from an IPTV service. All map images in this paper are available in high-resolution at [CHECK END OF SENTENCE] as are several movies illustrating the dynamic visualization."
]
} |
1711.00967 | 2752551804 | We introduce the Dynamic Influence Network (DIN), a novel visual analytics technique for representing and analyzing rule-based models of protein-protein interaction networks. Rule-based modeling has proved instrumental in developing biological models that are concise, comprehensible, easily extensible, and that mitigate the combinatorial complexity of multi-state and multi-component biological molecules. Our technique visualizes the dynamics of these rules as they evolve over time. Using the data produced by KaSim, an open source stochastic simulator of rule-based models written in the Kappa language, DINs provide a node-link diagram that represents the influence that each rule has on the other rules. That is, rather than representing individual biological components or types, we instead represent the rules about them (as nodes) and the current influence of these rules (as links). Using our interactive DIN-Viz software tool, researchers are able to query this dynamic network to find meaningful patterns about biological processes, and to identify salient aspects of complex rule-based models. To evaluate the effectiveness of our approach, we investigate a simulation of a circadian clock model that illustrates the oscillatory behavior of the KaiC protein phosphorylation cycle. | @cite_39 provide a thorough overview of different approaches to grouping data within graphs. A taxonomy of methods categorizes groups as juxtaposed, embedded, superimposed, or encoded using visual node attributes. Our technique utilizes superimposition, providing colored clusters as a way to show group membership of nodes that are similar, as well as visual node attributes, enabling a user to apply coloring to indicate a secondary grouping of nodes. 
@cite_48 present a network layout in which each node represents a cluster, and contains a time plot providing an overview of the temporal trend of the cluster, as well as a secondary view that shows time series data describing changes to a selected cluster. Our software tool also allows the user to examine more detailed information about temporal trends within the network. | {
"cite_N": [
"@cite_48",
"@cite_39"
],
"mid": [
"2026662997",
"2468929674"
],
"abstract": [
"The visual analysis of dynamic networks is a challenging task. In this paper, we introduce a new approach supporting the discovery of substructures sharing a similar trend over time by combining computation, visualization and interaction. With existing techniques, their discovery would be a tedious endeavor because of the number of nodes, edges as well as time points to be compared. First, on the basis of the supergraph, we therefore group nodes and edges according to their associated attributes that are changing over time. Second, the supergraph is visualized to provide an overview of the groups of nodes and edges with similar behavior over time in terms of their associated attributes. Third, we provide specific interactions to explore and refine the temporal clustering, allowing the user to further steer the analysis of the dynamic network. We demonstrate our approach by the visual analysis of a large wireless mesh network.",
"Graph visualizations encode relationships between objects. Abstracting the objects into group structures provides an overview of the data. Groups can be disjoint or overlapping, and might be organized hierarchically. However, the underlying graph still needs to be represented for analyzing the data in more depth. This work surveys research in visualizing group structures as part of graph diagrams. A particular focus is the explicit visual encoding of groups, rather than only using graph layout to indicate groups implicitly. We introduce a taxonomy of visualization techniques structuring the field into four main categories: visual node attributes vary properties of the node representation to encode the grouping, juxtaposed approaches use two separate visualizations, superimposed techniques work with two aligned visual layers, and embedded visualizations tightly integrate group and graph representation. The derived taxonomies for group structure and visualization types are also applied to group visualizations of edges. We survey group-only, group–node, group–edge and group–network tasks that are described in the literature as use cases of group visualizations. We discuss results from evaluations of existing visualization techniques as well as main areas of application. Finally, we report future challenges based on interviews we conducted with leading researchers of the field."
]
} |
1711.00967 | 2752551804 | We introduce the Dynamic Influence Network (DIN), a novel visual analytics technique for representing and analyzing rule-based models of protein-protein interaction networks. Rule-based modeling has proved instrumental in developing biological models that are concise, comprehensible, easily extensible, and that mitigate the combinatorial complexity of multi-state and multi-component biological molecules. Our technique visualizes the dynamics of these rules as they evolve over time. Using the data produced by KaSim, an open source stochastic simulator of rule-based models written in the Kappa language, DINs provide a node-link diagram that represents the influence that each rule has on the other rules. That is, rather than representing individual biological components or types, we instead represent the rules about them (as nodes) and the current influence of these rules (as links). Using our interactive DIN-Viz software tool, researchers are able to query this dynamic network to find meaningful patterns about biological processes, and to identify salient aspects of complex rule-based models. To evaluate the effectiveness of our approach, we investigate a simulation of a circadian clock model that illustrates the oscillatory behavior of the KaiC protein phosphorylation cycle. | Elmqvist and Tsigas @cite_11 introduce the Growing Squares and Growing Polygons techniques to explore causality, finding that they are significantly more useful than static graphs or Hasse diagrams for reasoning about systems. In these techniques, differently sized shapes are used to indicate information flow in a system of interacting processes, filling with different colors to represent the changing influence of the different processes. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2012703196"
],
"abstract": [
"Causality visualization is an important tool for many scientific domains that involve complex interactions between multiple entities (examples include parallel and distributed systems in computer science). However, traditional visualization techniques such as Hasse diagrams are not well-suited to large system executions, and users often have difficulties answering even basic questions using them, or have to spend inordinate amounts of time to do so. In this paper, we present the Growing Squares and Growing Polygons methods, two sibling visualization techniques that were designed to solve this problem by providing efficient 2D causality visualization through the use of color, texture, and animation. Both techniques have abandoned the traditional linear timeline and instead map the time parameter to the size of geometrical primitives representing the processes; in the Growing Squares case, each process is a color-coded square that receives color influences from other process squares as messages reach it; in the Growing Polygons case, each process is instead an n-sided polygon consisting of triangular sectors showing color-coded influences from the other processes. We have performed user studies of both techniques, comparing them with Hasse diagrams, and they have been shown to be significantly more efficient than old techniques, both in terms of objective performance as well as the subjective opinion of the test subjects (the Growing Squares technique is, however, only significantly more efficient for small systems)."
]
} |
1711.00967 | 2752551804 | We introduce the Dynamic Influence Network (DIN), a novel visual analytics technique for representing and analyzing rule-based models of protein-protein interaction networks. Rule-based modeling has proved instrumental in developing biological models that are concise, comprehensible, easily extensible, and that mitigate the combinatorial complexity of multi-state and multi-component biological molecules. Our technique visualizes the dynamics of these rules as they evolve over time. Using the data produced by KaSim, an open source stochastic simulator of rule-based models written in the Kappa language, DINs provide a node-link diagram that represents the influence that each rule has on the other rules. That is, rather than representing individual biological components or types, we instead represent the rules about them (as nodes) and the current influence of these rules (as links). Using our interactive DIN-Viz software tool, researchers are able to query this dynamic network to find meaningful patterns about biological processes, and to identify salient aspects of complex rule-based models. To evaluate the effectiveness of our approach, we investigate a simulation of a circadian clock model that illustrates the oscillatory behavior of the KaiC protein phosphorylation cycle. | @cite_16 explore the use of visual causal vectors to indicate causal relationships between data elements, and Bartram and Yao @cite_43 utilize animated causal overlays in order to highlight causal flows and to indicate the relative strength of the causal effect. @cite_20 also find that the use of animation is superior in terms of both accuracy and speed in comparison to static representations for facilitating comprehension of complex causal relations. | {
"cite_N": [
"@cite_43",
"@cite_16",
"@cite_20"
],
"mid": [
"2040355224",
"2190890972",
"2150014546"
],
"abstract": [
"Most approaches to representing causality, such as the common causal graph, require a separate and static view, but in many cases it is useful to add the dimension of causality to the context of an existing visualization. Building on research from perceptual psychology that shows the perception of causality is a low-level visual event derived from certain types of motion, we are investigating how to add animated causal representations, called visual causal vectors, onto other visualizations. We refer to these as causal overlays. Our initial experimental results show this approach has great potential but that extra cues are needed to elicit the perception of causality when the motions are overlaid on other graphical objects. In this paper we describe the approach and report on a study that examined two issues of this technique: how to accurately convey the causal flow and how to represent the strength of the causal effect.",
"",
"Michotte's theory of ampliation suggests that causal relationships are perceived by objects animated under appropriate spatiotemporal conditions. We extend the theory of ampliation and propose that the immediate perception of complex causal relations is also dependent on a set of structural and temporal rules. We designed animated representations, based on Michotte's rules, for showing complex causal relationships or causal semantics. In this paper we describe a set of animations for showing semantics such as causal amplification, causal strength, causal dampening, and causal multiplicity. In a two part study we compared the effectiveness of both the static and animated representations. The first study (N=44) asked participants to recall passages that were previously displayed using both types of representations. Participants were 8 more accurate in recalling causal semantics when they were presented using animations instead of static graphs. In the second study (N=112) we evaluated the intuitiveness of the representations. Our results showed that while users were as accurate with the static graphs as with the animations, they were 9 faster in matching the correct causal statements in the animated condition. Overall our results show that animated diagrams that are designed based on perceptual rules such as those proposed by Michotte have the potential to facilitate comprehension of complex causal relations."
]
} |
1711.00967 | 2752551804 | We introduce the Dynamic Influence Network (DIN), a novel visual analytics technique for representing and analyzing rule-based models of protein-protein interaction networks. Rule-based modeling has proved instrumental in developing biological models that are concise, comprehensible, easily extensible, and that mitigate the combinatorial complexity of multi-state and multi-component biological molecules. Our technique visualizes the dynamics of these rules as they evolve over time. Using the data produced by KaSim, an open source stochastic simulator of rule-based models written in the Kappa language, DINs provide a node-link diagram that represents the influence that each rule has on the other rules. That is, rather than representing individual biological components or types, we instead represent the rules about them (as nodes) and the current influence of these rules (as links). Using our interactive DIN-Viz software tool, researchers are able to query this dynamic network to find meaningful patterns about biological processes, and to identify salient aspects of complex rule-based models. To evaluate the effectiveness of our approach, we investigate a simulation of a circadian clock model that illustrates the oscillatory behavior of the KaiC protein phosphorylation cycle. | @cite_46 survey biological pathway visualization projects and introduce a taxonomy of visualization tasks for the analysis of biological pathway data. Relevant tasks are organized in a high-level categorization as tasks, tasks, and tasks. Interestingly, simulations of rule-based models are not explicitly discussed. However, this taxonomy does describe tasks that are relevant to visualizations of rule-based models, and our approach in particular, such as @cite_17 @cite_15 @cite_28 , @cite_54 @cite_53 @cite_5 @cite_41 @cite_40 , and @cite_7 @cite_42 @cite_14 . | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_15",
"@cite_28",
"@cite_41",
"@cite_54",
"@cite_53",
"@cite_42",
"@cite_40",
"@cite_5",
"@cite_46",
"@cite_17"
],
"mid": [
"2162143298",
"2135306251",
"2508189011",
"1997294932",
"2166365305",
"2754042509",
"",
"196785825",
"",
"2158190097",
"2588258456",
"1630068686"
],
"abstract": [
"Motivation: Prior biological knowledge greatly facilitates the meaningful interpretation of gene-expression data. Causal networks constructed from individual relationships curated from the literature are particularly suited for this task, since they create mechanistic hypotheses that explain the expression changes observed in datasets. Results: We present and discuss a suite of algorithms and tools for inferring and scoring regulator networks upstream of gene-expression data based on a large-scale causal network derived from the Ingenuity Knowledge Base. We extend the method to predict downstream effects on biological functions and diseases and demonstrate the validity of our approach by applying it to example datasets. Availability: The causal analytics tools ‘Upstream Regulator Analysis’, ‘Mechanistic Networks’, ‘Causal Network Analysis’ and ‘Downstream Effects Analysis’ are implemented and available within Ingenuity Pathway Analysis (IPA, http: www.ingenuity.com). Supplementary information: Supplementary material is available at Bioinformatics online.",
"Jointly analyzing biological pathway maps and experimental data is critical for understanding how biological processes work in different conditions and why different samples exhibit certain characteristics. This joint analysis, however, poses a significant challenge for visualization. Current techniques are either well suited to visualize large amounts of pathway node attributes, or to represent the topology of the pathway well, but do not accomplish both at the same time. To address this we introduce enRoute, a technique that enables analysts to specify a path of interest in a pathway, extract this path into a separate, linked view, and show detailed experimental data associated with the nodes of this extracted path right next to it. This juxtaposition of the extracted path and the experimental data allows analysts to simultaneously investigate large amounts of potentially heterogeneous data, thereby solving the problem of joint analysis of topology and node attributes. As this approach does not modify the layout of pathway maps, it is compatible with arbitrary graph layouts, including those of hand-crafted, image-based pathway maps. We demonstrate the technique in context of pathways from the KEGG and the Wikipathways databases. We apply experimental data from two public databases, the Cancer Cell Line Encyclopedia (CCLE) and The Cancer Genome Atlas (TCGA) that both contain a wide variety of genomic datasets for a large number of samples. In addition, we make use of a smaller dataset of hepatocellular carcinoma and common xenograft models. To verify the utility of enRoute, domain experts conducted two case studies where they explore data from the CCLE and the hepatocellular carcinoma datasets in the context of relevant pathways.",
"Node-link diagrams are widely used for visualizing relational data in a wide range of fields. However, in many situations it is useful to provide set membership information for elements in networks. We present BranchingSets, an interactive visualization technique that uses visual encodings similar to Kelp Diagrams in order to augment traditional node-link diagrams with information about the categories that both nodes and links belong to. BranchingSets introduces novel user-driven methods to procedurally navigate the graph topology and to interactively inspect complex, hierarchical data associated with individual nodes. Results indicate that users find the technique engaging and easy to use. This is further confirmed by a quantitative study that compares the effectiveness of the visual encodings used in BranchingSets to other techniques for displaying set membership within node-link diagrams, finding our technique more accurate and more efficient for facilitating interactive queries on networks containing nodes that belong to multiple sets.",
"Background The interpretation of the results from genome-scale experiments is a challenging and important problem in contemporary biomedical research. Biological networks that integrate experimental results with existing knowledge from biomedical databases and published literature can provide a rich resource and powerful basis for hypothesizing about mechanistic explanations for observed gene-phenotype relationships. However, the size and density of such networks often impede their efficient exploration and understanding.",
"Biological pathway maps are highly relevant tools for many tasks in molecular biology. They reduce the complexity of the overall biological network by partitioning it into smaller manageable parts. While this reduction of complexity is their biggest strength, it is, at the same time, their biggest weakness. By removing what is deemed not important for the primary function of the pathway, biologists lose the ability to follow and understand cross-talks between pathways. Considering these cross-talks is, however, critical in many analysis scenarios, such as judging effects of drugs. In this paper we introduce Entourage, a novel visualization technique that provides contextual information lost due to the artificial partitioning of the biological network, but at the same time limits the presented information to what is relevant to the analyst's task. We use one pathway map as the focus of an analysis and allow a larger set of contextual pathways. For these context pathways we only show the contextual subsets, i.e., the parts of the graph that are relevant to a selection. Entourage suggests related pathways based on similarities and highlights parts of a pathway that are interesting in terms of mapped experimental data. We visualize interdependencies between pathways using stubs of visual links, which we found effective yet not obtrusive. By combining this approach with visualization of experimental data, we can provide domain experts with a highly valuable tool. We demonstrate the utility of Entourage with case studies conducted with a biochemist who researches the effects of drugs on pathways. We show that the technique is well suited to investigate interdependencies between pathways and to analyze, understand, and predict the effect that drugs have on different cell types.",
"Systems biologists and cancer researchers require interactive visualization tools that enable them to more easily navigate and discover patterns at different levels of the biological hierarchy of signaling pathways. Furthermore, biologists are often interested in understanding and exploring the causal biochemical links between processes. When exploring the literature of particular biological pathways or specific proteins within those pathways, biologists find it useful to know the contexts in which biochemical links are active and, importantly, to be aware of potential conflicts when different experiments introduce alternative interpretations of the function of a pathway or biochemical reaction. We introduce BioLinker, a interactive visualization system that helps users to perform bottom-up exploration of complex protein interaction networks. Five interconnected views provide the user with a range of ways to explore pathway data, including views that show potential conflicts within pathway databases and publications and that highlight contextual information about individual proteins. Additionally, we discuss system details to show how our system manages the large amount of protein interactions extracted from the literature of biological pathways.",
"",
"Knowledge of immune system and host-pathogen pathways can inform development of targeted therapies and molecular diagnostics based on a mechanistic understanding of disease pathogenesis and the host response. We investigated the feasibility of rapid target discovery for novel broad-spectrum molecular therapeutics through comprehensive systems biology modeling and analysis of pathogen and host-response pathways and mechanisms. We developed a system to identify and prioritize candidate host targets based on strength of mechanistic evidence characterizing the role of the target in pathogenesis and tractability desiderata that include optimal delivery of new indications through potential repurposing of existing compounds or therapeutics. Empirical validation of predicted targets in cellular and mouse model systems documented an effective target prediction rate of 34 , suggesting that such computational discovery approaches should be part of target discovery efforts in operational clinical or biodefense research initiatives. We describe our target discovery methodology, technical implementation, and experimental results. Our work demonstrates the potential for in silico pathway models to enable rapid, systematic identification and prioritization of novel targets against existing or emerging biological threats, thus accelerating drug discovery and medical countermeasures research.",
"",
"Background Biological networks have a growing importance for the interpretation of high-throughput “omics” data. Integrative network analysis makes use of statistical and combinatorial methods to extract smaller subnetwork modules, and performs enrichment analysis to annotate the modules with ontology terms or other available knowledge. This process results in an annotated module, which retains the original network structure and includes enrichment information as a set system. A major bottleneck is a lack of tools that allow exploring both network structure of extracted modules and its annotations.",
"Background Understanding complicated networks of interactions and chemical components is essential to solving contemporary problems in modern biology, especially in domains such as cancer and systems research. In these domains, biological pathway data is used to represent chains of interactions that occur within a given biological process. Visual representations can help researchers understand, interact with, and reason about these complex pathways in a number of ways. At the same time, these datasets offer unique challenges for visualization, due to their complexity and heterogeneity.",
"Background Biologists make use of pathway visualization tools for a range of tasks, including investigating inter-pathway connectivity and retrieving details about biological entities and interactions. Some of these tasks require an understanding of the hierarchical nature of elements within the pathway or the ability to make comparisons between multiple pathways. We introduce a technique inspired by LineSets that enables biologists to fulfill these tasks more effectively."
]
} |
1711.00967 | 2752551804 | We introduce the Dynamic Influence Network (DIN), a novel visual analytics technique for representing and analyzing rule-based models of protein-protein interaction networks. Rule-based modeling has proved instrumental in developing biological models that are concise, comprehensible, easily extensible, and that mitigate the combinatorial complexity of multi-state and multi-component biological molecules. Our technique visualizes the dynamics of these rules as they evolve over time. Using the data produced by KaSim, an open source stochastic simulator of rule-based models written in the Kappa language, DINs provide a node-link diagram that represents the influence that each rule has on the other rules. That is, rather than representing individual biological components or types, we instead represent the rules about them (as nodes) and the current influence of these rules (as links). Using our interactive DIN-Viz software tool, researchers are able to query this dynamic network to find meaningful patterns about biological processes, and to identify salient aspects of complex rule-based models. To evaluate the effectiveness of our approach, we investigate a simulation of a circadian clock model that illustrates the oscillatory behavior of the KaiC protein phosphorylation cycle. | A range of projects introduce techniques for visualizing rule-based models of protein-protein interaction networks. @cite_1 discuss the use of stories to visually represent stochastic trajectories for user-defined observables in order to tell a "story" that summarizes how a given event type can be obtained. @cite_63 introduce molecular interaction maps that explicitly define the topology of rule-based networks, and that can be used for simulating the interaction of molecular rules.
Inspired by these efforts, as well as by the earlier process diagrams of @cite_36 , @cite_45 provide guidelines for visualizing and annotating rule-based models using interactive extended contact maps that represent a cell signaling system so that it is both visual and executable. | {
"cite_N": [
"@cite_36",
"@cite_45",
"@cite_1",
"@cite_63"
],
"mid": [
"2045478141",
"2132709248",
"2144063122",
"2167804446"
],
"abstract": [
"With the increased interest in understanding biological networks, such as protein-protein interaction networks and gene regulatory networks, methods for representing and communicating such networks in both human- and machine-readable form have become increasingly important. Although there has been significant progress in machine-readable representation of networks, as exemplified by the Systems Biology Mark-up Language (SBML) ( http: www.sbml.org ) issues in human-readable representation have been largely ignored. This article discusses human-readable diagrammatic representations and proposes a set of notations that enhances the formality and richness of the information represented. The process diagram is a fully state transition–based diagram that can be translated into machine-readable forms such as SBML in a straightforward way. It is supported by CellDesigner, a diagrammatic network editing software ( http: www.celldesigner.org ), and has been used to represent a variety of networks of various sizes (from only a few components to several hundred components).",
"Rule-based modeling provides a means to represent cell signaling systems in a way that captures site-specific details of molecular interactions. For rule-based models to be more widely understood and (re)used, conventions for model visualization and annotation are needed. We have developed the concepts of an extended contact map and a model guide for illustrating and annotating rule-based models. An extended contact map represents the scope of a model by providing an illustration of each molecule, molecular component, direct physical interaction, post-translational modification, and enzyme–substrate relationship considered in a model. A map can also illustrate allosteric effects, structural relationships among molecular components, and compartmental locations of molecules. A model guide associates elements of a contact map with annotation and elements of an underlying model, which may be fully or partially specified. A guide can also serve to document the biological knowledge upon which a model is based. We provide examples of a map and guide for a published rule-based model that characterizes early events in IgE receptor (FceRI) signaling. We also provide examples of how to visualize a variety of processes that are common in cell signaling systems but not considered in the example model, such as ubiquitination. An extended contact map and an associated guide can document knowledge of a cell signaling system in a form that is visual as well as executable. As a tool for model annotation, a map and guide can communicate the content of a model clearly and with precision, even for large models.",
"Modelling is becoming a necessity in studying biological signalling pathways, because the combinatorial complexity of such systems rapidly overwhelms intuitive and qualitative forms of reasoning. Yet, this same combinatorial explosion makes the traditional modelling paradigm based on systems of differential equations impractical. In contrast, agentbased or concurrent languages, such as ? [1,2,3] or the closely related BioNetGen language [4,5,6,7,8,9,10], describe biological interactions in terms of rules, thereby avoiding the combinatorial explosion besetting differential equations. Rules are expressed in an intuitive graphical form that transparently represents biological knowledge. In this way, rules become a natural unit of model building, modification, and discussion. We illustrate this with a sizeable example obtained from refactoring two models of EGF receptor signalling that are based on differential equations [11,12]. An exciting aspect of the agent-based approach is that it naturally lends itself to the identification and analysis of the causal structures that deeply shape the dynamical, and perhaps even evolutionary, characteristics of complex distributed biological systems. In particular, one can adapt the notions of causality and conflict, familiar from concurrency theory, to ?, our representation language of choice. Using the EGF receptor model as an example, we show how causality enables the formalization of the colloquial concept of pathway and, perhaps more surprisingly, how conflict can be used to dissect the signalling dynamics to obtain a qualitative handle on the range of system behaviours. By taming the combinatorial explosion, and exposing the causal structures and key kinetic junctures in a model, agent- and rule-based representations hold promise for making modelling more powerful, more perspicuous, and of appeal to a wider audience.",
"To help us understand how bioregulatory networks operate, we need a standard notation for diagrams analogous to electronic circuit diagrams. Such diagrams must surmount the difficulties posed by complex patterns of protein modifications and multiprotein complexes. To meet that challenge, we have designed the molecular interaction map (MIM) notation (http: discover.nci.nih.gov mim ). Here we show the advantages of the MIM notation for three important types of diagrams: (1) explicit diagrams that define specific pathway models for computer simulation; (2) heuristic maps that organize the available information about molecular interactions and encompass the possible processes or pathways; and (3) diagrams of combinatorially complex models. We focus on signaling from the epidermal growth factor receptor family (EGFR, ErbB), a network that reflects the major challenges of representing in a compact manner the combinatorial complexity of multimolecular complexes. By comparing MIMs with other diagrams of this network that have recently been published, we show the utility of the MIM notation. These comparisons may help cell and systems biologists adopt a graphical language that is unambiguous and generally understood."
]
} |
1711.00967 | 2752551804 | We introduce the Dynamic Influence Network (DIN), a novel visual analytics technique for representing and analyzing rule-based models of protein-protein interaction networks. Rule-based modeling has proved instrumental in developing biological models that are concise, comprehensible, easily extensible, and that mitigate the combinatorial complexity of multi-state and multi-component biological molecules. Our technique visualizes the dynamics of these rules as they evolve over time. Using the data produced by KaSim, an open source stochastic simulator of rule-based models written in the Kappa language, DINs provide a node-link diagram that represents the influence that each rule has on the other rules. That is, rather than representing individual biological components or types, we instead represent the rules about them (as nodes) and the current influence of these rules (as links). Using our interactive DIN-Viz software tool, researchers are able to query this dynamic network to find meaningful patterns about biological processes, and to identify salient aspects of complex rule-based models. To evaluate the effectiveness of our approach, we investigate a simulation of a circadian clock model that illustrates the oscillatory behavior of the KaiC protein phosphorylation cycle. | @cite_49 provides a framework for editing and exploring rule-based systems of intracellular biochemistry, such as . In this framework, the primary visualization provides an interactive contact map in which molecules are rendered as large gray nodes and domain states are positioned inside these nodes. Rules are represented as links between specific sites within the nodes, and nested isocontours are used to define a compartment hierarchy of elements related to particular components of the cell. A secondary visualization shows the relations between the reaction rules that describe the behavior of a system.
That is, similar to our approach, the influence graph shows if a rule activates or inhibits another rule. However, our approach further emphasizes how these influences can change over time, facilitating the analysis of the dynamics of the system. | {
"cite_N": [
"@cite_49"
],
"mid": [
"2086928685"
],
"abstract": [
"Background Rule-based modeling (RBM) is a powerful and increasingly popular approach to modeling cell signaling networks. However, novel visual tools are needed in order to make RBM accessible to a broad range of users, to make specification of models less error prone, and to improve workflows."
]
} |
1711.00967 | 2752551804 | We introduce the Dynamic Influence Network (DIN), a novel visual analytics technique for representing and analyzing rule-based models of protein-protein interaction networks. Rule-based modeling has proved instrumental in developing biological models that are concise, comprehensible, easily extensible, and that mitigate the combinatorial complexity of multi-state and multi-component biological molecules. Our technique visualizes the dynamics of these rules as they evolve over time. Using the data produced by KaSim, an open source stochastic simulator of rule-based models written in the Kappa language, DINs provide a node-link diagram that represents the influence that each rule has on the other rules. That is, rather than representing individual biological components or types, we instead represent the rules about them (as nodes) and the current influence of these rules (as links). Using our interactive DIN-Viz software tool, researchers are able to query this dynamic network to find meaningful patterns about biological processes, and to identify salient aspects of complex rule-based models. To evaluate the effectiveness of our approach, we investigate a simulation of a circadian clock model that illustrates the oscillatory behavior of the KaiC protein phosphorylation cycle. | @cite_29 provide an overview of graphical modeling software tools for representing or simulating reaction-based models, detailing usability and perceptual issues that they can introduce. They find that creating graphical languages with many glyphs reduces ambiguity, but at the cost of introducing visual clutter and making the visual layout unappealing to users. They advocate for a minimalist approach that presents only the visual elements necessary for a particular analysis. Our tool also uses a smaller visual language for representing rules and clusters of rules. | {
"cite_N": [
"@cite_29"
],
"mid": [
"2011257065"
],
"abstract": [
"Modeling biological systems to understand their mechanistic behavior is an important activity in molecular systems biology. Mathematical modeling typically requires deep mathematical or computing knowledge, and this limits the spread of modeling tools among biologists. Graphical modeling languages have been introduced to minimize this limit. Here, we survey the main graphical formalisms (supported by software tools) available to model biological systems with a primary focus on their usability, within the framework of modeling reaction pathways with two-dimensional (2D) (possibly nested) compartments. Considering the main characteristics of the surveyed formalisms, we synthesise a new proposal (Style) and report the results of an online survey conducted among biologists to assess usability of available graphical formalisms. We consider this proposal a guideline developed from what we learned in the survey, which can inform development of graphical formalisms to model reaction pathways in 2D space."
]
} |
1711.00967 | 2752551804 | We introduce the Dynamic Influence Network (DIN), a novel visual analytics technique for representing and analyzing rule-based models of protein-protein interaction networks. Rule-based modeling has proved instrumental in developing biological models that are concise, comprehensible, easily extensible, and that mitigate the combinatorial complexity of multi-state and multi-component biological molecules. Our technique visualizes the dynamics of these rules as they evolve over time. Using the data produced by KaSim, an open source stochastic simulator of rule-based models written in the Kappa language, DINs provide a node-link diagram that represents the influence that each rule has on the other rules. That is, rather than representing individual biological components or types, we instead represent the rules about them (as nodes) and the current influence of these rules (as links). Using our interactive DIN-Viz software tool, researchers are able to query this dynamic network to find meaningful patterns about biological processes, and to identify salient aspects of complex rule-based models. To evaluate the effectiveness of our approach, we investigate a simulation of a circadian clock model that illustrates the oscillatory behavior of the KaiC protein phosphorylation cycle. | introduce @cite_50 , a web-based framework that uses to run simulations of rules. The visualization output is a simple chart that shows the population of predefined "observables" within the system, that is, biological agents (e.g., proteins or protein complexes) that are affected by the rules. This visualization provides an overview of the system, but does not indicate specifically which rules are responsible for these changes in observables, nor provide insight into how the activity of rules affects other rules. Our tool provides this type of visual output in a secondary panel, providing an alternative perspective of the system. | {
"cite_N": [
"@cite_50"
],
"mid": [
"2130496721"
],
"abstract": [
"Summary: A host of formal, textual languages for modelling cellular processes have recently emerged, but their simulation tools often require an installation process which can pose a barrier for use. Bio Simulators is a framework for easy online deployment of simulators, providing a uniform web-based user interface to a diverse pool of tools. The framework is demonstrated through two plugins based on the KaSim Kappa simulator, one running directly in the browser and another running in the cloud. Availability: Web tool: bsims.azurewebsites.net. KaSim client side simulator: github.com NicolasOury KaSimJS. KaSim cloud simulator:"
]
} |
1711.00529 | 2765483525 | This paper introduces a new web-based software tool for annotating text, Text Annotation Graphs, or TAG. It provides functionality for representing complex relationships between words and word phrases that are not available in other software tools, including the ability to define and visualize relationships between the relationships themselves (semantic hypergraphs). Additionally, we include an approach to representing text annotations in which annotation subgraphs, or semantic summaries, are used to show relationships outside of the sequential context of the text itself. Users can use these subgraphs to quickly find similar structures within the current document or external annotated documents. Initially, TAG was developed to support information extraction tasks on a large database of biomedical articles. However, our software is flexible enough to support a wide range of annotation tasks for any domain. Examples are provided that showcase TAG's capabilities on morphological parsing and event extraction tasks. | is inspired by the rapid annotation tool @cite_1 . is widely used for representing syntactic structure, but can also represent semantic events, and has been applied to a range of domain-specific NLP tasks, including biomedical data @cite_9 . supports a range of useful features that improve the overall efficiency of manual annotation tasks. However, does not support the ability to draw links between links, which makes it difficult to represent relations linking several predicate-less relations, a feature necessary to completely describe complex events. Complex relations have previously been explored by the authors in a range of different visualization projects that represent hierarchically-nested and/or clustered data derived from the machine reading of scientific texts describing biochemical events @cite_11 @cite_3 @cite_5 @cite_4 @cite_7 @cite_14 @cite_2 @cite_17 . | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_1",
"@cite_17",
"@cite_3",
"@cite_2",
"@cite_5",
"@cite_11"
],
"mid": [
"2588258456",
"2754042509",
"",
"",
"8550301",
"2508189011",
"",
"1630068686",
"2754144376",
"2126619073"
],
"abstract": [
"Background Understanding complicated networks of interactions and chemical components is essential to solving contemporary problems in modern biology, especially in domains such as cancer and systems research. In these domains, biological pathway data is used to represent chains of interactions that occur within a given biological process. Visual representations can help researchers understand, interact with, and reason about these complex pathways in a number of ways. At the same time, these datasets offer unique challenges for visualization, due to their complexity and heterogeneity.",
"Systems biologists and cancer researchers require interactive visualization tools that enable them to more easily navigate and discover patterns at different levels of the biological hierarchy of signaling pathways. Furthermore, biologists are often interested in understanding and exploring the causal biochemical links between processes. When exploring the literature of particular biological pathways or specific proteins within those pathways, biologists find it useful to know the contexts in which biochemical links are active and, importantly, to be aware of potential conflicts when different experiments introduce alternative interpretations of the function of a pathway or biochemical reaction. We introduce BioLinker, a interactive visualization system that helps users to perform bottom-up exploration of complex protein interaction networks. Five interconnected views provide the user with a range of ways to explore pathway data, including views that show potential conflicts within pathway databases and publications and that highlight contextual information about individual proteins. Additionally, we discuss system details to show how our system manages the large amount of protein interactions extracted from the literature of biological pathways.",
"",
"",
"We introduce the brat rapid annotation tool (BRAT), an intuitive web-based tool for text annotation supported by Natural Language Processing (NLP) technology. BRAT has been developed for rich structured annotation for a variety of NLP tasks and aims to support manual curation efforts and increase annotator productivity using NLP techniques. We discuss several case studies of real-world annotation projects using pre-release versions of BRAT and present an evaluation of annotation assisted by semantic class disambiguation on a multicategory entity mention annotation task, showing a 15 decrease in total annotation time. BRAT is available under an open-source license from: http: brat.nlplab.org",
"Node-link diagrams are widely used for visualizing relational data in a wide range of fields. However, in many situations it is useful to provide set membership information for elements in networks. We present BranchingSets, an interactive visualization technique that uses visual encodings similar to Kelp Diagrams in order to augment traditional node-link diagrams with information about the categories that both nodes and links belong to. BranchingSets introduces novel user-driven methods to procedurally navigate the graph topology and to interactively inspect complex, hierarchical data associated with individual nodes. Results indicate that users find the technique engaging and easy to use. This is further confirmed by a quantitative study that compares the effectiveness of the visual encodings used in BranchingSets to other techniques for displaying set membership within node-link diagrams, finding our technique more accurate and more efficient for facilitating interactive queries on networks containing nodes that belong to multiple sets.",
"",
"Background Biologists make use of pathway visualization tools for a range of tasks, including investigating inter-pathway connectivity and retrieving details about biological entities and interactions. Some of these tasks require an understanding of the hierarchical nature of elements within the pathway or the ability to make comparisons between multiple pathways. We introduce a technique inspired by LineSets that enables biologists to fulfill these tasks more effectively.",
"This paper introduces CactusTree, a novel visualization technique for representing hierarchical datasets. We introduce details about the construction of CactusTrees and describe how they can be used to represent nested data and relationships between elements in the data. We explain how our design decisions were informed by tasks common to a range of scientific domains. A key contribution of this article is the introduction of descriptive features that can be used to characterize trees in terms of their structural and connective qualities.",
"Background Molecular and systems biologists are tasked with the comprehension and analysis of incredibly complex networks of biochemical interactions, called pathways, that occur within a cell. Through interviews with domain experts, we identified four common tasks that require an understanding of the causality within pathways, that is, the downstream and upstream relationships between proteins and biochemical reactions, including: visualizing downstream consequences of perturbing a protein; finding the shortest path between two proteins; detecting feedback loops within the pathway; and identifying common downstream elements from two or more proteins."
]
} |
1711.00529 | 2765483525 | This paper introduces a new web-based software tool for annotating text, Text Annotation Graphs, or TAG. It provides functionality for representing complex relationships between words and word phrases that are not available in other software tools, including the ability to define and visualize relationships between the relationships themselves (semantic hypergraphs). Additionally, we include an approach to representing text annotations in which annotation subgraphs, or semantic summaries, are used to show relationships outside of the sequential context of the text itself. Users can use these subgraphs to quickly find similar structures within the current document or external annotated documents. Initially, TAG was developed to support information extraction tasks on a large database of biomedical articles. However, our software is flexible enough to support a wide range of annotation tasks for any domain. Examples are provided that showcase TAG's capabilities on morphological parsing and event extraction tasks. | is a flexible tool that supports multiple annotation layers, and includes features to facilitate quality control, annotator management, and curation @cite_8 @cite_16 . The visual representation is similar to , and the interface focuses mainly on resolving disagreeing annotations between users. includes a variety of built-in annotation layers, such as dependency relations, co-reference chains, and lemma forms, but in annotation expressiveness is limited in that it is not possible to create nested arcs that link to other arcs. | {
"cite_N": [
"@cite_16",
"@cite_8"
],
"mid": [
"2775590885",
"2251026067"
],
"abstract": [
"We introduce the third major release of WebAnno, a generic web-based annotation tool for distributed teams. New features in this release focus on semantic annotation tasks (e.g. semantic role labelling or event annotation) and allow the tight integration of semantic annotations with syntactic annotations. In particular, we introduce the concept of slot features, a novel constraint mechanism that allows modelling the interaction between semantic and syntactic annotations, as well as a new annotation user interface. The new features were developed and used in an annotation project for semantic roles on German texts. The paper briefly introduces this project and reports on experiences performing annotations with the new tool. On a comparative evaluation, our tool reaches significant speedups over WebAnno 2 for a semantic annotation task.",
"In this paper, we present a flexible approach to the efficient and exhaustive manual annotation of text documents. For this purpose, we extend WebAnno (, 2013) an open-source web-based annotation tool. 1 While it was previously limited to specific annotation layers, our extension allows adding and configuring an arbitrary number of layers through a web-based UI. These layers can be annotated separately or simultaneously, and support most types of linguistic annotations such as spans, semantic classes, dependency relations, lexical chains, and morphology. Further, we tightly integrate a generic machine learning component for automatic annotation suggestions of span annotations. In two case studies, we show that automatic annotation suggestions, combined with our split-pane UI concept, significantly reduces annotation time."
]
} |
1711.00520 | 2766406951 | Prosodic modeling is a core problem in speech synthesis. The key challenge is producing desirable prosody from textual input containing only phonetic information. In this preliminary study, we introduce the concept of "style tokens" in Tacotron, a recently proposed end-to-end neural speech synthesis model. Using style tokens, we aim to extract independent prosodic styles from training data. We show that without annotation data or an explicit supervision signal, our approach can automatically learn a variety of prosodic variations in a purely data-driven way. Importantly, each style token corresponds to a fixed style factor regardless of the given text sequence. As a result, we can control the prosodic style of synthetic speech in a somewhat predictable and globally consistent way. | Prosody and speaking style modeling have been studied since the era of HMM-based TTS research. For example, @cite_11 proposes a system that first clusters the training set, and then performs HMM-based cluster-adaptive training. @cite_3 proposes estimating the transformation matrix for a set of predefined style vectors. | {
"cite_N": [
"@cite_3",
"@cite_11"
],
"mid": [
"2039800941",
"2156146072"
],
"abstract": [
"This paper describes a technique for controlling the degree of expressivity of a desired emotional expression and or speaking style of synthesized speech in an HMM-based speech synthesis framework. With this technique, multiple emotional expressions and speaking styles of speech are modeled in a single model by using a multiple-regression hidden semi-Markov model (MRHSMM). A set of control parameters, called the style vector, is defined, and each speech synthesis unit is modeled by using the MRHSMM, in which mean parameters of the state output and duration distributions are expressed by multiple-regression of the style vector. In the synthesis stage, the mean parameters of the synthesis units are modified by transforming an arbitrarily given style vector that corresponds to a point in a low-dimensional space, called style space, each of whose coordinates represents a certain specific speaking style or emotion of speech. The results of subjective evaluation tests show that style and its intensity can be controlled by changing the style vector.",
"Current text-to-speech synthesis (TTS) systems are often perceived as lacking expressiveness, limiting the ability to fully convey information. This paper describes initial investigations into improving expressiveness for statistical speech synthesis systems. Rather than using hand-crafted definitions of expressive classes, an unsupervised clustering approach is described which is scalable to large quantities of training data. To incorporate this “expression cluster” information into an HMM-TTS system two approaches are described: cluster questions in the decision tree construction; and average expression speech synthesis (AESS) using cluster-based linear transform adaptation. The performance of the approaches was evaluated on audiobook data in which the reader exhibits a wide range of expressiveness. A subjective listening test showed that synthesising with AESS results in speech that better reflects the expressiveness of human speech than a baseline expression-independent system."
]
} |
1711.00714 | 2766013736 | Insights into social phenomenon can be gleaned from trends and patterns in corpora of documents associated with that phenomenon. Recent years have witnessed the use of computational techniques, mostly based on keywords, to analyze large corpora for these purposes. In this paper, we extend these techniques to incorporate semantic features. We introduce Doris, an interactive exploration tool that combines semantic features with information retrieval techniques to enable exploration of document corpora corresponding to the social phenomenon. We discuss the semantic techniques and describe an implementation on a corpus of United States (US) presidential speeches. We illustrate, with examples, how the ability to combine syntactic and semantic features in a visualization helps researchers more easily gain insights into the underlying phenomenon. | Though the use of computational techniques for analyzing text corpora in the context of social science research is relatively new, there is already a rich and growing body of work that uses various techniques drawn from the information retrieval community in order to better understand social and political phenomena. The work by Shen, Aiden, Norvig, et al. @cite_10 , which performed a quantitative analysis of the unigrams and bigrams in millions of digitized books, though very simple in its analysis, was very influential. It illustrated how even simple techniques, when applied across very large corpora, can provide interesting insights. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2019096529"
],
"abstract": [
"We constructed a corpus of digitized texts containing about 4 of all books ever printed. Analysis of this corpus enables us to investigate cultural trends quantitatively. We survey the vast terrain of ‘culturomics,’ focusing on linguistic and cultural phenomena that were reflected in the English language between 1800 and 2000. We show how this approach can provide insights about fields as diverse as lexicography, the evolution of grammar, collective memory, the adoption of technology, the pursuit of fame, censorship, and historical epidemiology. Culturomics extends the boundaries of rigorous quantitative inquiry to a wide array of new phenomena spanning the social sciences and the humanities."
]
} |
1711.00714 | 2766013736 | Insights into social phenomenon can be gleaned from trends and patterns in corpora of documents associated with that phenomenon. Recent years have witnessed the use of computational techniques, mostly based on keywords, to analyze large corpora for these purposes. In this paper, we extend these techniques to incorporate semantic features. We introduce Doris, an interactive exploration tool that combines semantic features with information retrieval techniques to enable exploration of document corpora corresponding to the social phenomenon. We discuss the semantic techniques and describe an implementation on a corpus of United States (US) presidential speeches. We illustrate, with examples, how the ability to combine syntactic and semantic features in a visualization helps researchers more easily gain insights into the underlying phenomenon. | Baker, Gabrielatos, et al. @cite_9 examine a 140 million word corpus of British news articles concerning refugees and immigration using techniques usually associated with corpus linguistics. They study the extent to which methods normally associated with corpus linguistics can be effectively used by critical discourse analysts. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2133518763"
],
"abstract": [
"This article discusses the extent to which methods normally associated with corpus linguistics can be effectively used by critical discourse analysts. Our research is based on the analysis of a 140-million-word corpus of British news articles about refugees, asylum seekers, immigrants and migrants (collectively RASIM). We discuss how processes such as collocation and concordance analysis were able to identify common categories of representation of RASIM as well as directing analysts to representative texts in order to carry out qualitative analysis. The article suggests a framework for adopting corpus approaches in critical discourse analysis."
]
} |
1711.00714 | 2766013736 | Insights into social phenomenon can be gleaned from trends and patterns in corpora of documents associated with that phenomenon. Recent years have witnessed the use of computational techniques, mostly based on keywords, to analyze large corpora for these purposes. In this paper, we extend these techniques to incorporate semantic features. We introduce Doris, an interactive exploration tool that combines semantic features with information retrieval techniques to enable exploration of document corpora corresponding to the social phenomenon. We discuss the semantic techniques and describe an implementation on a corpus of United States (US) presidential speeches. We illustrate, with examples, how the ability to combine syntactic and semantic features in a visualization helps researchers more easily gain insights into the underlying phenomenon. | In @cite_5 , Grimmer uses a statistical topic model on press releases from the House of Representatives from 2005 to 2010 to demonstrate the shift in portrayed representation due to electoral pressure. The author shows how members of the House change rhetoric, specifically in terms of taking credit, due to political pressure. Hillard, Purpura and Wilkerson @cite_8 examine over 300,000 congressional bill titles (that researchers have assigned topics to) and use supervised learning algorithms to allocate topics. The authors show a successful method of classifying large sets of data computationally. Hopkins and King @cite_1 develop a method that gives estimates of category proportions for large sets of data. Using data sets that include relevant political opinions, the authors focus on document category proportions rather than absolute counts of individual categories. | {
"cite_N": [
"@cite_5",
"@cite_1",
"@cite_8"
],
"mid": [
"",
"2171060319",
"2032158738"
],
"abstract": [
"",
"The increasing availability of digitized text presents enormous opportunities for social scientists. Yet hand coding many blogs, speeches, government records, newspapers, or other sources of unstructured text is infeasible. Although computer scientists have methods for automated content analysis, most are optimized to classify individual documents, whereas social scientists instead want generalizations about the population of documents, such as the proportion in a given category. Unfortunately, even a method with a high percent of individual documents correctly classified can be hugely biased when estimating category proportions. By directly optimizing for this social science goal, we develop a method that gives approximately unbiased estimates of category proportions even when the optimal classifier performs poorly. We illustrate with diverse data sets, including the daily expressed opinions of thousands of people about the U.S. presidency. We also make available software that implements our methods and large corpora of text for further analysis.",
"ABSTRACT Social scientists interested in mixed-methods research have traditionally turned to human annotators to classify the documents or events used in their analyses. The rapid growth of digitized government documents in recent years presents new opportunities for research but also new challenges. With more and more data coming online, relying on human annotators becomes prohibitively expensive for many tasks. For researchers interested in saving time and money while maintaining confidence in their results, we show how a particular supervised learning system can provide estimates of the class of each document (or event). This system maintains high classification accuracy and provides accurate estimates of document proportions, while achieving reliability levels associated with human efforts. We estimate that it lowers the costs of classifying large numbers of complex documents by 80 or more."
]
} |
1711.00714 | 2766013736 | Insights into social phenomenon can be gleaned from trends and patterns in corpora of documents associated with that phenomenon. Recent years have witnessed the use of computational techniques, mostly based on keywords, to analyze large corpora for these purposes. In this paper, we extend these techniques to incorporate semantic features. We introduce Doris, an interactive exploration tool that combines semantic features with information retrieval techniques to enable exploration of document corpora corresponding to the social phenomenon. We discuss the semantic techniques and describe an implementation on a corpus of United States (US) presidential speeches. We illustrate, with examples, how the ability to combine syntactic and semantic features in a visualization helps researchers more easily gain insights into the underlying phenomenon. | Laver, Benoit, et al. @cite_14 present a unique way of determining political stances using a language-blind scoring technique. The authors introduce uncertainty measures, allowing researchers to make better observations. Thomas and Pang @cite_19 use a corpus of U.S. congressional floor debates to attempt to determine support for or opposition to certain issues. The authors use Support Vector Machines and exploit the conversational nature of the data to create a classification framework. | {
"cite_N": [
"@cite_19",
"@cite_14"
],
"mid": [
"1967807490",
"2009659525"
],
"abstract": [
"We investigate whether one can determine from the transcripts of U.S. Congressional floor debates whether the speeches represent support of or opposition to proposed legislation. To address this problem, we exploit the fact that these speeches occur as part of a discussion; this allows us to use sources of information regarding relationships between discourse segments, such as whether a given utterance indicates agreement with the opinion expressed by another. We find that the incorporation of such information yields substantial improvements over classifying speeches in isolation.",
"We present a new way of extracting policy positions from political texts that treats texts not as discourses to be understood and interpreted but rather, as data in the form of words. We compare this approach to previous methods of text analysis and use it to replicate published estimates of the policy positions of political parties in Britain and Ireland, on both economic and social policy dimensions. We “export” the method to a non-English-language environment, analyzing the policy positions of German parties, including the PDS as it entered the former West German party system. Finally, we extend its application beyond the analysis of party manifestos, to the estimation of political positions from legislative speeches. Our “language-blind” word scoring technique successfully replicates published policy estimates without the substantial costs of time and labor that these require. Furthermore, unlike in any previous method for extracting policy positions from political texts, we provide uncertainty measures for our estimates, allowing analysts to make informed judgments of the extent to which differences between two estimated policy positions can be viewed as significant or merely as products of measurement error."
]
} |
1711.00714 | 2766013736 | Insights into social phenomenon can be gleaned from trends and patterns in corpora of documents associated with that phenomenon. Recent years have witnessed the use of computational techniques, mostly based on keywords, to analyze large corpora for these purposes. In this paper, we extend these techniques to incorporate semantic features. We introduce Doris, an interactive exploration tool that combines semantic features with information retrieval techniques to enable exploration of document corpora corresponding to the social phenomenon. We discuss the semantic techniques and describe an implementation on a corpus of United States (US) presidential speeches. We illustrate, with examples, how the ability to combine syntactic and semantic features in a visualization helps researchers more easily gain insights into the underlying phenomenon. | In addition to the work involving the use of computational techniques in the social sciences, we draw on work on Topic Models @cite_2 and word embeddings @cite_12 . Our interface is influenced by the work of Freeman and Gelernter @cite_11 , in which they introduced the idea of temporal presentation of a set of documents. Bergman, Beyth-Marom, et al. @cite_0 adapted this work to the context of search interfaces. Recent years have seen the adoption of this class of interfaces in widely used software systems, including the Apple Mac interface. | {
"cite_N": [
"@cite_0",
"@cite_11",
"@cite_12",
"@cite_2"
],
"mid": [
"2003482363",
"2047997333",
"2141599568",
"1880262756"
],
"abstract": [
"Traditionally users access their personal files mainly by using folder navigation. We evaluate whether recent improvements in desktop search have changed this fundamental aspect of Personal Information Management (PIM). We tested this in two studies using the same questionnaire: (a) The Windows Study, a longitudinal comparison of Google Desktop and Windows XP Search Companion, and (b) The Mac Study, a large scale comparison of Mac Spotlight and Sherlock. There were few effects for improved search. First, regardless of search engine, there was a strong navigation preference: on average, users estimated that they used navigation for 56-68% of file retrieval events but searched for only 4-15% of events. Second, the effect of improving the quality of the search engine on search usage was limited and inconsistent. Third, search was used mainly as a last resort when users could not remember file location. Finally, there was no evidence that using improved desktop search engines leads people to change their filing habits to become less reliant on hierarchical file organization. We conclude by offering theoretical explanations for navigation preference, relating to differences between PIM and Internet retrieval, and suggest alternative design directions for PIM systems.",
"Conventional software systems, such as those based on the “desktop metaphor,” are ill-equipped to manage the electronic information and events of the typical computer user. We introduce a new metaphor, Lifestreams, for dynamically organizing a user's personal workspace. Lifestreams uses a simple organizational metaphor, a time-ordered stream of documents, as an underlying storage system. Stream filters are used to organize, monitor and summarize information for the user. Combined, they provide a system that subsumes many separate desktop applications. This paper describes the Lifestreams model and our prototype system.",
"Continuous space language models have recently demonstrated outstanding results across a variety of tasks. In this paper, we examine the vector-space word representations that are implicitly learned by the input-layer weights. We find that these representations are surprisingly good at capturing syntactic and semantic regularities in language, and that each relationship is characterized by a relation-specific vector offset. This allows vector-oriented reasoning based on the offsets between words. For example, the male/female relationship is automatically learned, and with the induced vector representations, “King - Man + Woman” results in a vector very close to “Queen.” We demonstrate that the word vectors capture syntactic regularities by means of syntactic analogy questions (provided with this paper), and are able to correctly answer almost 40% of the questions. We demonstrate that the word vectors capture semantic regularities by using the vector offset method to answer SemEval-2012 Task 2 questions. Remarkably, this method outperforms the best previous systems.",
"We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model."
]
} |
1711.00499 | 2767128008 | Computational stereo is one of the classical problems in computer vision. Numerous algorithms and solutions have been reported in recent years focusing on developing methods for computing similarity, aggregating it to obtain spatial support and finally optimizing an energy function to find the final disparity. In this paper, we focus on the feature extraction component of stereo matching architecture and we show standard CNNs operation can be used to improve the quality of the features used to find point correspondences. Furthermore, we propose a simple space aggregation that hugely simplifies the correlation learning problem. Our results on benchmark data are compelling and show promising potential even without refining the solution. | More recently, @cite_14 expanded on Zbontar's work and proposed a way to obtain disparity values for all possible displacements without manually pairing patch candidates. In other words, a wider image is passed though one of the branches of the siamese architecture and the computed features are correlated with the ones extracted from the target patch. This allows the computation of matching costs for all disparities with one-pass of the CNN. This work also shows that the inner product is a fast and effective way to compute feature correlation. Again, because inference for each pixel is made independently, hand-crafted feature regularization is used to smooth the results. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2440384215"
],
"abstract": [
"In the past year, convolutional neural networks have been shown to perform extremely well for stereo estimation. However, current architectures rely on siamese networks which exploit concatenation followed by further processing layers, requiring a minute of GPU computation per image pair. In contrast, in this paper we propose a matching network which is able to produce very accurate results in less than a second of GPU computation. Towards this goal, we exploit a product layer which simply computes the inner product between the two representations of a siamese architecture. We train our network by treating the problem as multi-class classification, where the classes are all possible disparities. This allows us to get calibrated scores, which result in much better matching performance when compared to existing approaches."
]
} |
1711.00499 | 2767128008 | Computational stereo is one of the classical problems in computer vision. Numerous algorithms and solutions have been reported in recent years focusing on developing methods for computing similarity, aggregating it to obtain spatial support and finally optimizing an energy function to find the final disparity. In this paper, we focus on the feature extraction component of stereo matching architecture and we show standard CNNs operation can be used to improve the quality of the features used to find point correspondences. Furthermore, we propose a simple space aggregation that hugely simplifies the correlation learning problem. Our results on benchmark data are compelling and show promising potential even without refining the solution. | The work presented here is most similar to the one developed by @cite_14 but with two major contributions. First, we show that the loss of the detail from pooling operations can be compensated with deconvolution operations if these are applied in the feature space, before computing correlation. This allows to hugely increase the global receptive field of the feature extractors, resulting in a more robust matching even before spatial regularization. Second, we show that a simple feature aggregation can be used to simplify the learning problem, resulting in effective, more easily learned, data driven correlation metric. To reiterate, because we are just proposing to improve the feature extraction step, our aim is not to beat the current state-of-the-art for full stereo matching pipelines. Our contribution provides a very effective and fast stereo matching network that can easily be further improved by plugging it to most current CNN stereo matching models. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2440384215"
],
"abstract": [
"In the past year, convolutional neural networks have been shown to perform extremely well for stereo estimation. However, current architectures rely on siamese networks which exploit concatenation followed by further processing layers, requiring a minute of GPU computation per image pair. In contrast, in this paper we propose a matching network which is able to produce very accurate results in less than a second of GPU computation. Towards this goal, we exploit a product layer which simply computes the inner product between the two representations of a siamese architecture. We train our network by treating the problem as multi-class classification, where the classes are all possible disparities. This allows us to get calibrated scores, which result in much better matching performance when compared to existing approaches."
]
} |
1711.00583 | 2767183759 | There is an emerging trend to leverage noisy image datasets in many visual recognition tasks. However, the label noise among the datasets severely degenerates the performance of deep learning approaches. Recently, one mainstream is to introduce the latent label to handle label noise, which has shown promising improvement in the network designs. Nevertheless, the mismatch between latent labels and noisy labels still affects the predictions in such methods. To address this issue, we propose a quality embedding model, which explicitly introduces a quality variable to represent the trustworthiness of noisy labels. Our key idea is to identify the mismatch between the latent and noisy labels by embedding the quality variables into different subspaces, which effectively minimizes the noise effect. At the same time, the high-quality labels is still able to be applied for training. To instantiate the model, we further propose a Contrastive-Additive Noise network (CAN), which consists of two important layers: (1) the contrastive layer estimates the quality variable in the embedding space to reduce noise effect; and (2) the additive layer aggregates the prior predictions and noisy labels as the posterior to train the classifier. Moreover, to tackle the optimization difficulty, we deduce an SGD algorithm with the reparameterization tricks, which makes our method scalable to big data. We conduct the experimental evaluation of the proposed method over a range of noisy image datasets. Comprehensive results have demonstrated CAN outperforms the state-of-the-art deep learning approaches. | Social websites and crowdsourcing platforms provide us an effective way to gather a large amount of low-cost annotations for images. However, in the visual recognition tasks such as image classification, the noise among labels shall severely degenerate the performance of classification models @cite_45 . 
To exploit the great value of noisy labels, several noise-aware deep learning methods have been proposed for the image classification task. Here, we briefly review these related works. | {
"cite_N": [
"@cite_45"
],
"mid": [
"1994550352"
],
"abstract": [
"Machine learning techniques often have to deal with noisy data, which may affect the accuracy of the resulting data models. Therefore, effectively dealing with noise is a key aspect in supervised learning to obtain reliable models from data. Although several authors have studied the effect of noise for some particular learners, comparisons of its effect among different learners are lacking. In this paper, we address this issue by systematically comparing how different degrees of noise affect four supervised learners that belong to different paradigms. Specifically, we consider the Naive Bayes probabilistic classifier, the C4.5 decision tree, the IBk instance-based learner and the SMO support vector machine. We have selected four methods which enable us to contrast different learning paradigms, and which are considered to be four of the top ten algorithms in data mining ( 2007). We test them on a collection of data sets that are perturbed with noise in the input attributes and noise in the output class. As an initial hypothesis, we assign the techniques to two groups, NB with C4.5 and IBk with SMO, based on their proposed sensitivity to noise, the first group being the least sensitive. The analysis enables us to extract key observations about the effect of different types and degrees of noise on these learning techniques. In general, we find that Naive Bayes appears as the most robust algorithm, and SMO the least, relative to the other two techniques. However, we find that the underlying empirical behavior of the techniques is more complex, and varies depending on the noise type and the specific data set being processed. In general, noise in the training data set is found to give the most difficulty to the learners."
]
} |
1711.00583 | 2767183759 | There is an emerging trend to leverage noisy image datasets in many visual recognition tasks. However, the label noise among the datasets severely degenerates the performance of deep learning approaches. Recently, one mainstream is to introduce the latent label to handle label noise, which has shown promising improvement in the network designs. Nevertheless, the mismatch between latent labels and noisy labels still affects the predictions in such methods. To address this issue, we propose a quality embedding model, which explicitly introduces a quality variable to represent the trustworthiness of noisy labels. Our key idea is to identify the mismatch between the latent and noisy labels by embedding the quality variables into different subspaces, which effectively minimizes the noise effect. At the same time, the high-quality labels is still able to be applied for training. To instantiate the model, we further propose a Contrastive-Additive Noise network (CAN), which consists of two important layers: (1) the contrastive layer estimates the quality variable in the embedding space to reduce noise effect; and (2) the additive layer aggregates the prior predictions and noisy labels as the posterior to train the classifier. Moreover, to tackle the optimization difficulty, we deduce an SGD algorithm with the reparameterization tricks, which makes our method scalable to big data. We conduct the experimental evaluation of the proposed method over a range of noisy image datasets. Comprehensive results have demonstrated CAN outperforms the state-of-the-art deep learning approaches. | This line of research aims at designing a robust loss function to alleviate noise effect. For instance, Joulin @cite_46 weight the cross-entropy loss with the sample number to balance the emphasis of noise in positive and negative instances. Izadinia @cite_31 estimate a global ratio of positive samples to weaken the supervision in the loss function. 
Reed @cite_21 consider the consistency of predictions in similar images and apply bootstrap to the loss function. They substitute the noisy label with a weight combination of the noisy label and the prediction to encourage the consistent output. Recently, Li @cite_10 re-weight the noisy label with a soft label learned from side information. They train a teacher network with the clean dataset to compute the soft label by leveraging the knowledge graph. The soft label is then combined with the noisy label in the loss function to pilot student model's learning. Andreas @cite_32 rectify labels in the cross-entropy loss with a label-correction network trained on the extra clean dataset. While these methods are concerned with modifying the labels in the loss function by re-weighting or rectification, our approach also models the auxiliary trustworthiness of noisy image labels to reduce the noise effect on training. | {
"cite_N": [
"@cite_31",
"@cite_21",
"@cite_32",
"@cite_46",
"@cite_10"
],
"mid": [
"",
"2962762541",
"2952927437",
"2100031962",
"2949157943"
],
"abstract": [
"",
"Current state-of-the-art deep learning systems for visual object recognition and detection use purely supervised training with regularization such as dropout to avoid overfitting. The performance depends critically on the amount of labeled examples, and in current practice the labels are assumed to be unambiguous and accurate. However, this assumption often does not hold; e.g. in recognition, class labels may be missing; in detection, objects in the image may not be localized; and in general, the labeling may be subjective. In this work we propose a generic way to handle noisy and incomplete labeling by augmenting the prediction objective with a notion of consistency. We consider a prediction consistent if the same prediction is made given similar percepts, where the notion of similarity is between deep network features computed from the input data. In experiments we demonstrate that our approach yields substantial robustness to label noise on several datasets. On MNIST handwritten digits, we show that our model is robust to label corruption. On the Toronto Face Database, we show that our model handles well the case of subjective labels in emotion recognition, achieving state-of-the-art results, and can also benefit from unlabeled face images with no modification to our method. On the ILSVRC2014 detection challenge data, we show that our approach extends to very deep networks, high resolution images and structured outputs, and results in improved scalable detection.",
"We present an approach to effectively use millions of images with noisy annotations in conjunction with a small subset of cleanly-annotated images to learn powerful image representations. One common approach to combine clean and noisy data is to first pre-train a network using the large noisy dataset and then fine-tune with the clean dataset. We show this approach does not fully leverage the information contained in the clean set. Thus, we demonstrate how to use the clean annotations to reduce the noise in the large dataset before fine-tuning the network using both the clean set and the full set with reduced noise. The approach comprises a multi-task network that jointly learns to clean noisy annotations and to accurately classify images. We evaluate our approach on the recently released Open Images dataset, containing 9 million images, multiple annotations per image and over 6000 unique classes. For the small clean set of annotations we use a quarter of the validation set with 40k images. Our results demonstrate that the proposed approach clearly outperforms direct fine-tuning across all major categories of classes in the Open Image dataset. Further, our approach is particularly effective for a large number of classes with medium level of noise in annotations (20-80% false positive annotations).",
"Convolutional networks trained on large supervised datasets produce visual features which form the basis for the state-of-the-art in many computer-vision problems. Further improvements of these visual features will likely require even larger manually labeled data sets, which severely limits the pace at which progress can be made. In this paper, we explore the potential of leveraging massive, weakly-labeled image collections for learning good visual features. We train convolutional networks on a dataset of 100 million Flickr photos and comments, and show that these networks produce features that perform well in a range of vision problems. We also show that the networks appropriately capture word similarity and learn correspondences between different languages.",
"The ability of learning from noisy labels is very useful in many visual recognition tasks, as a vast amount of data with noisy labels are relatively easy to obtain. Traditionally, the label noises have been treated as statistical outliers, and approaches such as importance re-weighting and bootstrap have been proposed to alleviate the problem. According to our observation, the real-world noisy labels exhibit multi-mode characteristics as the true labels, rather than behaving like independent random outliers. In this work, we propose a unified distillation framework to use side information, including a small clean dataset and label relations in knowledge graph, to \"hedge the risk\" of learning from noisy labels. Furthermore, unlike the traditional approaches evaluated based on simulated label noises, we propose a suite of new benchmark datasets, in Sports, Species and Artifacts domains, to evaluate the task of learning from noisy labels in the practical setting. The empirical study demonstrates the effectiveness of our proposed method in all the domains."
]
} |
1711.00614 | 2765308781 | The detection of anomalous executions is valuable for reducing potential hazards in assistive manipulation. Multimodal sensory signals can be helpful for detecting a wide range of anomalies. However, the fusion of high-dimensional and heterogeneous modalities is a challenging problem. We introduce a long short-term memory based variational autoencoder (LSTM-VAE) that fuses signals and reconstructs their expected distribution. We also introduce an LSTM-VAE-based detector using a reconstruction-based anomaly score and a state-based threshold. For evaluations with 1,555 robot-assisted feeding executions including 12 representative types of anomalies, our detector had a higher area under the receiver operating characteristic curve (AUC) of 0.8710 than 5 other baseline detectors from the literature. We also show the multimodal fusion through the LSTM-VAE is effective by comparing our detector with 17 raw sensory signals versus 4 hand-engineered features. | Anomaly detection is known as novelty, outlier, or event detections in other domains @cite_31 . In robotics, it has been used to detect the failure of manipulation tasks: bin picking @cite_6 , bottle opening @cite_16 , etc. Many classic machine learning approaches have also been used: support vector machine (SVM) @cite_0 @cite_25 , self-organizing map (SOM) @cite_1 , k-nearest neighbors (kNN) @cite_5 , etc. To detect anomalies from time-series signals, researchers have also used hidden Markov models @cite_12 or Kalman filters @cite_4 . | {
"cite_N": [
"@cite_4",
"@cite_1",
"@cite_6",
"@cite_0",
"@cite_5",
"@cite_31",
"@cite_16",
"@cite_25",
"@cite_12"
],
"mid": [
"1516013621",
"1975126483",
"2159639291",
"2012831112",
"2399921532",
"2017583040",
"2296055201",
"",
"2409205391"
],
"abstract": [
"In this paper, we propose an estimation method of human joint movements from measured EMG signals for assistive robot control. We focus on how to estimate joint movements using multiple EMG electrodes even under sensor failure situations. In real world applications, EMG sensor electrodes might become disconnected or detached from skin surfaces. If we consider EMG-based robot control for assistive robots, such sensor failures lead to significant errors in the estimation of user joint movements. To cope with these sensor failures, we propose a state estimation model that takes uncertain observations into account. Sensor channel anomalies are found by checking the covariance of the EMG signals measured by multiple EMG electrodes. To validate the proposed control framework, we artificially disconnect an EMG electrode or detach one side of an EMG probe from the skin surface during elbow joint movement estimation. We show proper control of a one-DOF exoskeleton robot based on the estimated joint torque using our proposed method even when one EMG electrode has a sensor problem; a standard method with no tolerability against uncertain observations was unable to deal with these fault situations. Furthermore, the errors of the estimated joint torque with our proposed method were smaller than the standard method or a method with a conventional sensor fault detection algorithm.",
"Autonomous mobile robots are designed to behave appropriately in changing real-world environments without human intervention. In order to satisfy the requirements of autonomy, robots have to cope with unknown settings and several issues of uncertainties in dynamic, unstructured and complex environments. A first step is to provide a robot with cognitive capabilities and the ability of self-examination to detect behavioral abnormalities. Unfortunately, most existing anomaly detection systems are neither suitable for the domain of robotic behavior nor flexible enough or even well generalizable. In the following article, we introduce a novel anomaly detection framework based on spatial-temporal models for robotic behaviors which is generally applicable for e.g., plan execution monitoring. The introduced framework combines the methodology of Kohonen’s Self-organizing Maps (SOMs) and Probabilistic Graphical Models (PGM) exploiting all advantages of both concepts. The underlying methods of the framework are discussed briefly, whereas the data-driven training of the spatial-temporal model and the reasoning process are described in detail. Finally, the framework is evaluated with different scenarios to emphasize its potential and its high level of generalization and flexibility in robotic application.",
"Iteration is often sufficient for a simple hand to accomplish complex tasks, at the cost of an increase in the expected time to completion. In this paper, we minimize that overhead time by allowing a simple hand to abort early and retry as soon as it realizes that the task is likely to fail. We present two key contributions. First, we learn a probabilistic model of the relationship between the likelihood of success of a grasp and its grasp signature—the trace of the state of the hand along the entire grasp motion. Second, we model the iterative process of early abort and retry as a Markov chain and optimize the expected time to completion of the grasping task by effectively thresholding the likelihood of success. Experiments with our simple hand prototype tasked with grasping and singulating parts from a bin show that early abort and retry significantly increases efficiency.",
"This paper addresses failure detection in automated parts assembly, using the force signature captured during the contact phase of the assembly process. We use a supervised learning approach, specifically a Support Vector Machine (SVM), to distinguish between successful and failed assemblies. This paper describes our implementation and experimental results obtained with an electronic assembly application. We also analyze the tradeoff between system accuracy and number of training examples. We show that a less expensive sensor (a single-axis load cell instead of a six-axis force torque sensor) provides enough information to detect failure. Finally, we use Principal Component Analysis (PCA) to compress the force signature and as a result reduce the number of examples required to train the system.",
"This paper addresses an application of anomaly detection from subsequences of time series (STS) to autonomous robots’ behaviors. An important aspect of mining sequential data is selecting the temporal parameters, such as the subsequence length and the degree of smoothing. For example in the task at hand, the patterns of the robot’s velocity, which is one of its fundamental features, vary significantly subject to the interval for measuring the displacement. Selecting the time scale and resolution is difficult in unsupervised settings, and is often more critical than the choice of the method. In this paper, we propose an ensemble framework for aggregating anomaly detection from different perspectives, i.e., settings of user-defined, temporal parameters. In the proposed framework, each behavior is labeled whether it is an anomaly in multiple settings. The set of labels are used as meta-features of the respective behaviors. Cluster analysis in a meta-feature space partitions anomalous behaviors pertained to a specific range of parameters. The framework also includes a scalable implementation of the instance-based anomaly detection. We evaluate the proposed framework by ROC analysis, in comparison to conventional ensemble methods for anomaly detection.",
"Safety is one of the key issues in the use of robots, especially when human–robot interaction is targeted. Although unforeseen environment situations, such as collisions or unexpected user interaction, can be handled with specially tailored control algorithms, hard- or software failures typically lead to situations where too large torques are controlled, which cause an emergency state: hitting an end stop, exceeding a torque, and so on—which often halts the robot when it is too late. No sufficiently fast and reliable methods exist which can early detect faults in the abundance of sensor and controller data. This is especially difficult since, in most cases, no anomaly data are available. In this paper we introduce a new robot anomaly detection system (RADS) which can cope with abundant data in which no or very little anomaly information is present.",
"One of the main challenges in autonomous manipulation is to generate appropriate multi-modal reference trajectories that enable feedback controllers to compute control commands that compensate for unmodeled perturbations and therefore to achieve the task at hand. We propose a data-driven approach to incrementally acquire reference signals from experience and decide online when and to which successive behavior to switch, ensuring successful task execution. We reformulate this online decision making problem as a pair of related classification problems. Both process the current sensor readings, composed from multiple sensor modalities, in real-time (at 30 Hz). Our approach exploits that movement generation can dictate sensor feedback. Thus, enforcing stereotypical behavior will yield stereotypical sensory events which can be accumulated and stored along with the movement plan. Such movement primitives, augmented with sensor experience, are called Associative Skill Memories (ASMs). Sensor experience consists of (real) sensors, including haptic, auditory information and visual information, as well as additional (virtual) features. We show that our approach can be used to teach dexterous tasks, e.g. a bimanual manipulation task on a real platform that requires precise manipulation of relatively small objects. Task execution is robust against perturbation and sensor noise, because our method decides online whether or not to switch to alternative ASMs due to unexpected sensory signals.",
"",
"Online detection of anomalous execution can be valuable for robot manipulation, enabling robots to operate more safely, determine when a behavior is inappropriate, and otherwise exhibit more common sense. By using multiple complementary sensory modalities, robots could potentially detect a wider variety of anomalies, such as anomalous contact or a loud utterance by a human. However, task variability and the potential for false positives make online anomaly detection challenging, especially for long-duration manipulation behaviors. In this paper, we provide evidence for the value of multimodal execution monitoring and the use of a detection threshold that varies based on the progress of execution. Using a data-driven approach, we train an execution monitor that runs in parallel to a manipulation behavior. Like previous methods for anomaly detection, our method trains a hidden Markov model (HMM) using multimodal observations from non-anomalous executions. In contrast to prior work, our system also uses a detection threshold that changes based on the execution progress. We evaluated our approach with haptic, visual, auditory, and kinematic sensing during a variety of manipulation tasks performed by a PR2 robot. The tasks included pushing doors closed, operating switches, and assisting able-bodied participants with eating yogurt. In our evaluations, our anomaly detection method performed substantially better with multimodal monitoring than single modality monitoring. It also resulted in more desirable ROC curves when compared with other detection threshold methods from the literature, obtaining higher true positive rates for comparable false positive rates."
]
} |
1711.00614 | 2765308781 | The detection of anomalous executions is valuable for reducing potential hazards in assistive manipulation. Multimodal sensory signals can be helpful for detecting a wide range of anomalies. However, the fusion of high-dimensional and heterogeneous modalities is a challenging problem. We introduce a long short-term memory based variational autoencoder (LSTM-VAE) that fuses signals and reconstructs their expected distribution. We also introduce an LSTM-VAE-based detector using a reconstruction-based anomaly score and a state-based threshold. For evaluations with 1,555 robot-assisted feeding executions including 12 representative types of anomalies, our detector had a higher area under the receiver operating characteristic curve (AUC) of 0.8710 than 5 other baseline detectors from the literature. We also show the multimodal fusion through the LSTM-VAE is effective by comparing our detector with 17 raw sensory signals versus 4 hand-engineered features. | Researchers have often reduced the dimension of high-dimensional inputs using principal component analysis (PCA) before applying probabilistic or distance-based detection @cite_0 @cite_14 . However, the compressed representations of outliers (i.e., anomalous data) may be inliers in latent space. Instead, we use a reconstruction-based method that recovers inputs from its compressed representation so that it can measure reconstruction error with the anomaly score. An AE is a representative reconstruction approach that is a connected network with an encoder and a decoder @cite_20 . It has also been applied for reconstructing time-series data using a sliding time-window @cite_15 . However, the window method does not represent dependencies between nearby windows and a window may not include an anomaly. | {
"cite_N": [
"@cite_0",
"@cite_15",
"@cite_14",
"@cite_20"
],
"mid": [
"2012831112",
"2088335308",
"",
"2100495367"
],
"abstract": [
"This paper addresses failure detection in automated parts assembly, using the force signature captured during the contact phase of the assembly process. We use a supervised learning approach, specifically a Support Vector Machine (SVM), to distinguish between successful and failed assemblies. This paper describes our implementation and experimental results obtained with an electronic assembly application. We also analyze the tradeoff between system accuracy and number of training examples. We show that a less expensive sensor (a single-axis load cell instead of a six-axis force torque sensor) provides enough information to detect failure. Finally, we use Principal Component Analysis (PCA) to compress the force signature and as a result reduce the number of examples required to train the system.",
"For humans to accurately understand the world around them, multimodal integration is essential because it enhances perceptual precision and reduces ambiguity. Computational models replicating such human ability may contribute to the practical use of robots in daily human living environments; however, primarily because of scalability problems that conventional machine learning algorithms suffer from, sensory-motor information processing in robotic applications has typically been achieved via modal-dependent processes. In this paper, we propose a novel computational framework enabling the integration of sensory-motor time-series data and the self-organization of multimodal fused representations based on a deep learning approach. To evaluate our proposed model, we conducted two behavior-learning experiments utilizing a humanoid robot; the experiments consisted of object manipulation and bell-ringing tasks. From our experimental results, we show that large amounts of sensory-motor information, including raw RGB images, sound spectrums, and joint angles, are directly fused to generate higher-level multimodal representations. Further, we demonstrated that our proposed framework realizes the following three functions: (1) cross-modal memory retrieval utilizing the information complementation capability of the deep autoencoder; (2) noise-robust behavior recognition utilizing the generalization capability of multimodal features; and (3) multimodal causality acquisition and sensory-motor prediction based on the acquired causality. Novel computational framework for sensory-motor integration learning. Cross-modal memory retrieval utilizing a deep autoencoder. Noise-robust behavior recognition utilizing acquired multimodal features. Multimodal causality acquisition and sensory-motor prediction.",
"",
"High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data."
]
} |
1711.00614 | 2765308781 | The detection of anomalous executions is valuable for reducing potential hazards in assistive manipulation. Multimodal sensory signals can be helpful for detecting a wide range of anomalies. However, the fusion of high-dimensional and heterogeneous modalities is a challenging problem. We introduce a long short-term memory based variational autoencoder (LSTM-VAE) that fuses signals and reconstructs their expected distribution. We also introduce an LSTM-VAE-based detector using a reconstruction-based anomaly score and a state-based threshold. For evaluations with 1,555 robot-assisted feeding executions including 12 representative types of anomalies, our detector had a higher area under the receiver operating characteristic curve (AUC) of 0.8710 than 5 other baseline detectors from the literature. We also show the multimodal fusion through the LSTM-VAE is effective by comparing our detector with 17 raw sensory signals versus 4 hand-engineered features. | Another relevant approach is a variational autoencoder (VAE) @cite_27 . Unlike the AE, a VAE models the underlying probability distribution of observations using variational inference (VI). Bayer and Osendorfer used VI to learn the underlying distribution of sequences and introduced stochastic recurrent networks @cite_7 . used their work to detect robot anomalies using unimodal signals @cite_28 . Our work also uses variational inference, but we do not predict and instead only reconstruct data using an LSTM-based autoencoder for multimodal anomaly detection. | {
"cite_N": [
"@cite_28",
"@cite_27",
"@cite_7"
],
"mid": [
"2286533962",
"",
"1884859883"
],
"abstract": [
"Approximate variational inference has shown to be a powerful tool for modeling unknown complex probability distributions. Recent advances in the field allow us to learn probabilistic models of sequences that actively exploit spatial and temporal structure. We apply a Stochastic Recurrent Network (STORN) to learn robot time series data. Our evaluation demonstrates that we can robustly detect anomalies both off- and on-line.",
"",
"Leveraging advances in variational inference, we propose to enhance recurrent neural networks with latent variables, resulting in Stochastic Recurrent Networks (STORNs). The model i) can be trained with stochastic gradient methods, ii) allows structured and multi-modal conditionals at each time step, iii) features a reliable estimator of the marginal likelihood and iv) is a generalisation of deterministic recurrent neural networks. We evaluate the method on four polyphonic musical data sets and motion capture data."
]
} |
1711.00405 | 2767037717 | Consider a network design application where we wish to lay down a minimum-cost spanning tree in a given graph; however, we only have stochastic information about the edge costs. To learn the precise cost of any edge, we have to conduct a study that incurs a price. Our goal is to find a spanning tree while minimizing the disutility, which is the sum of the tree cost and the total price that we spend on the studies. In a different application, each edge gives a stochastic reward value. Our goal is to find a spanning tree while maximizing the utility, which is the tree reward minus the prices that we pay. Situations such as the above two often arise in practice where we wish to find a good solution to an optimization problem, but we start with only some partial knowledge about the parameters of the problem. The missing information can be found only after paying a probing price, which we call the price of information. What strategy should we adopt to optimize our expected utility disutility? A classical example of the above setting is Weitzman's "Pandora's box" problem where we are given probability distributions on values of @math independent random variables. The goal is to choose a single variable with a large value, but we can find the actual outcomes only after paying a price. Our work is a generalization of this model to other combinatorial optimization problems such as matching, set cover, facility location, and prize-collecting Steiner tree. We give a technique that reduces such problems to their non-price counterparts, and use it to design exact approximation algorithms to optimize our utility disutility. Our techniques extend to situations where there are additional constraints on what parameters can be probed or when we can simultaneously probe a subset of the parameters. | The Pandora's box solution can be written as a special case of the Gittins index theorem @cite_19 . 
@cite_22 consider a minimization variant of the Gittins index theorem when there is no discounting. Another very relevant paper is that of @cite_13 ; although their results are aimed at designing auctions, their proof of the Pandora's box problem inspired this work. | {
"cite_N": [
"@cite_19",
"@cite_13",
"@cite_22"
],
"mid": [
"",
"2304126241",
"1987077169"
],
"abstract": [
"",
"When exploring acquisition targets, firms typically begin with the possibilities offering greatest option value and work their way down, as prescribed by optimal search theory. Yet the market designs economists have often prescribed, involving simultaneous or ascending prices, stymie this process. As a result they may be arbitrarily inefficient when one accounts for the costs bidders must invest to learn their value for acquiring different items. We present a model that incorporates such costs, and a simple descending price procedure that we prove robustly approximates the fully optimal sequential search process quite generally. Our results exploit a novel characterization of Weitzman's \"Pandora's Box\" problem in terms of option pricing theory that connects seamlessly with recent techniques from algorithmic mechanism design.",
"We analyze and solve a game in which a player chooses which of several Markov chains to advance, with the object of minimizing the expected time (or cost) for one of the chains to reach a target state. The solution entails computing (in polynomial time) a function @math ---a variety of \"Gittins index\"---on the states of the individual chains, the minimization of which produces an optimal strategy. It turns out that @math is a useful cousin of the expected hitting time of a Markov chain but is defined, for example, even for random walks on infinite graphs. We derive the basic properties of @math and consider its values in some natural situations."
]
} |
1711.00715 | 2765234359 | The emergence of "Fake News" and misinformation via online news and social media has spurred an interest in computational tools to combat this phenomenon. In this paper we present a new "Related Fact Checks" service, which can help a reader critically evaluate an article and make a judgment on its veracity by bringing up fact checks that are relevant to the article. We describe the core technical problems that need to be solved in building a "Related Fact Checks" service, and present results from an evaluation of an implementation. | Much of the work on the spread of fake news has focused on its dissemination through social media. Shao, et al @cite_28 show how bots on social networks have contributed greatly to the spread of fake news. Jin and Dougherty @cite_19 apply epidemiological models to information diffusion on Twitter. Their paper is the first to employ the SEIZ model to Twitter data and shows the success of this method in capturing the spread of information on Twitter. Tacchini and Ballarin @cite_32 show that Facebook posts can be determined to be hoaxes or real based on the number of likes. The authors use two classification techniques; one is based on logistic regression while the other is based on a novel adaptation of boolean crowdsourcing algorithms. Gupta, Lamba, et al @cite_9 show how a small number of users were responsible for a large number of retweets of fake images of Hurricane Sandy. Gupta, et al @cite_17 used regression analysis to identify the important features which predict credibility. The authors used machine learning to create an algorithm which ranked tweets based on the credibility of sources. | {
"cite_N": [
"@cite_28",
"@cite_9",
"@cite_32",
"@cite_19",
"@cite_17"
],
"mid": [
"",
"1796766288",
"2607906563",
"2164082612",
"1973668723"
],
"abstract": [
"",
"In today's world, online social media plays a vital role during real world events, especially crisis events. There are both positive and negative effects of social media coverage of events, it can be used by authorities for effective disaster management or by malicious entities to spread rumors and fake news. The aim of this paper, is to highlight the role of Twitter, during Hurricane Sandy (2012) to spread fake images about the disaster. We identified 10,350 unique tweets containing fake images that were circulated on Twitter, during Hurricane Sandy. We performed a characterization analysis, to understand the temporal, social reputation and influence patterns for the spread of fake images. Eighty six percent of tweets spreading the fake images were retweets, hence very few were original tweets. Our results showed that top thirty users out of 10,215 users (0.3%) resulted in 90% of the retweets of fake images; also network links such as follower relationships of Twitter, contributed very less (only 11%) to the spread of these fake photos URLs. Next, we used classification models, to distinguish fake images from real images of Hurricane Sandy. Best results were obtained from Decision Tree classifier, we got 97% accuracy in predicting fake images from real. Also, tweet based features were very effective in distinguishing fake images tweets from real, while the performance of user based features was very poor. Our results, showed that, automated techniques can be used in identifying real images from fake images posted on Twitter.",
"In recent years, the reliability of information on the Internet has emerged as a crucial issue of modern society. Social network sites (SNSs) have revolutionized the way in which information is spread by allowing users to freely share content. As a consequence, SNSs are also increasingly used as vectors for the diffusion of misinformation and hoaxes. The amount of disseminated information and the rapidity of its diffusion make it practically impossible to assess reliability in a timely manner, highlighting the need for automatic hoax detection systems. As a contribution towards this objective, we show that Facebook posts can be classified with high accuracy as hoaxes or non-hoaxes on the basis of the users who \"liked\" them. We present two classification techniques, one based on logistic regression, the other on a novel adaptation of boolean crowdsourcing algorithms. On a dataset consisting of 15,500 Facebook posts and 909,236 users, we obtain classification accuracies exceeding 99% even when the training set contains less than 1% of the posts. We further show that our techniques are robust: they work even when we restrict our attention to the users who like both hoax and non-hoax posts. These results suggest that mapping the diffusion pattern of information can be a useful component of automatic hoax detection systems.",
"Characterizing information diffusion on social platforms like Twitter enables us to understand the properties of underlying media and model communication patterns. As Twitter gains in popularity, it has also become a venue to broadcast rumors and misinformation. We use epidemiological models to characterize information cascades in twitter resulting from both news and rumors. Specifically, we use the SEIZ enhanced epidemic model that explicitly recognizes skeptics to characterize eight events across the world and spanning a range of event types. We demonstrate that our approach is accurate at capturing diffusion in these events. Our approach can be fruitfully combined with other strategies that use content modeling and graph theoretic features to detect (and possibly disrupt) rumors.",
"Twitter has evolved from being a conversation or opinion sharing medium among friends into a platform to share and disseminate information about current events. Events in the real world create a corresponding spur of posts (tweets) on Twitter. Not all content posted on Twitter is trustworthy or useful in providing information about the event. In this paper, we analyzed the credibility of information in tweets corresponding to fourteen high impact news events of 2011 around the globe. From the data we analyzed, on average 30% of total tweets posted about an event contained situational information about the event while 14% was spam. Only 17% of the total tweets posted about the event contained situational awareness information that was credible. Using regression analysis, we identified the important content and sourced based features, which can predict the credibility of information in a tweet. Prominent content based features were number of unique characters, swear words, pronouns, and emoticons in a tweet, and user based features like the number of followers and length of username. We adopted a supervised machine learning and relevance feedback approach using the above features, to rank tweets according to their credibility score. The performance of our ranking algorithm significantly enhanced when we applied re-ranking strategy. Results show that extraction of credible information from Twitter can be automated with high confidence."
]
} |
1711.00715 | 2765234359 | The emergence of "Fake News" and misinformation via online news and social media has spurred an interest in computational tools to combat this phenomenon. In this paper we present a new "Related Fact Checks" service, which can help a reader critically evaluate an article and make a judgment on its veracity by bringing up fact checks that are relevant to the article. We describe the core technical problems that need to be solved in building a "Related Fact Checks" service, and present results from an evaluation of an implementation. | Most previous work in Information Retrieval is based on features corresponding to the frequency of occurrence of terms n-grams in a document and in the corpus. Kurland and Lee @cite_33 introduced the idea of using the structure of the corpus to help with ad-hoc queries. In their work, they used clustering to identify the structure in the corpus. Long-running themes are one kind of structure that may occur in corpora of news articles. Here, we use Topic Modeling for identifying these themes, and then use these themes to retrieve relevant fact checks. We believe that this approach has wide applicability in Information Retrieval, beyond fake news and fact checks. | {
"cite_N": [
"@cite_33"
],
"mid": [
"2027445772"
],
"abstract": [
"Most previous work on the recently developed language-modeling approach to information retrieval focuses on document-specific characteristics, and therefore does not take into account the structure of the surrounding corpus. We propose a novel algorithmic framework in which information provided by document-based language models is enhanced by the incorporation of information drawn from clusters of similar documents. Using this framework, we develop a suite of new algorithms. Even the simplest typically outperforms the standard language-modeling approach in precision and recall, and our new interpolation algorithm posts statistically significant improvements for both metrics over all three corpora tested."
]
} |
1711.00715 | 2765234359 | The emergence of "Fake News" and misinformation via online news and social media has spurred an interest in computational tools to combat this phenomenon. In this paper we present a new "Related Fact Checks" service, which can help a reader critically evaluate an article and make a judgment on its veracity by bringing up fact checks that are relevant to the article. We describe the core technical problems that need to be solved in building a "Related Fact Checks" service, and present results from an evaluation of an implementation. | Recent studies in the field of Political Science suggest that exposing readers to fact checks has a substantial positive impact in the long run. Hill @cite_14 finds empirical evidence that voters do gradually change their opinions when presented with the facts. Similarly, Peterson @cite_24 finds that voters with more information are less likely to vote along party lines. @cite_15 investigate characteristics of readers who believe in fake news and find an inverse correlation between critical thinking abilities and the likelihood of believing in fake news. Finally, @cite_8 study the perceived trustworthiness of fact check services themselves and conclude that fact checking services need to increase their transparency by disclosing their methodologies and funding sources. | {
"cite_N": [
"@cite_24",
"@cite_15",
"@cite_14",
"@cite_8"
],
"mid": [
"2750223216",
"2745461309",
"2746088002",
"2748697670"
],
"abstract": [
"Voters are often highly dependent on partisanship to structure their preferences toward political candidates and policy proposals. What conditions enable partisan cues to “dominate” public opinion? Here I theorize that variation in voters’ reliance on partisanship results, in part, from the opportunities their environment provides to learn about politics. A conjoint experiment and an observational study of voting in congressional elections both support the expectation that more detailed information environments reduce the role of partisanship in candidate choice. These findings clarify previously unexplained cross-study variation in party cue effects. They also challenge competing claims that partisan cues inhibit responsiveness to such a degree that voters fail to use other information or that high-information environments increase voter reliance on partisanship.",
"",
"Although many studies suggest that voters learn about political facts with prejudice toward their preexisting beliefs, none have fully characterized all inputs to Bayes’ Rule, leaving uncertainty about the magnitude of bias. This paper evaluates political learning by first highlighting the importance of careful measures of each input and then presenting a statistical model and experiment that measure the magnitude of departure from Bayesian learning. Subjects learn as cautious Bayesians, updating their beliefs at about 73% of perfect application of Bayes’ Rule. They are also modestly biased. For information consistent with prior beliefs, subject learning is not statistically distinguishable from perfect Bayesian. Inconsistent information, however, corresponds to learning less than perfect. Despite bias, beliefs do not polarize. With small monetary incentives for accuracy, aggregate beliefs converge toward common truth. Cautious Bayesian learning appears to be a reasonable model of how citizens process pol...",
"Even when checked by fact checkers, facts are often still open to preexisting bias and doubt."
]
} |
1711.00113 | 2767024947 | Normal-form bisimilarity is a simple, easy-to-use behavioral equivalence that relates terms in lambda-calculi by decomposing their normal forms into bisimilar subterms. Besides, they allow for powerful up-to techniques, such as bisimulation up to context, which simplify bisimulation proofs even further. However, proving soundness of these relations becomes complicated in the presence of eta-expansion and usually relies on ad hoc proof methods which depend on the language. In this paper we propose a more systematic proof method to show that an extensional normal-form bisimilarity along with its corresponding bisimulation up to context are sound. We illustrate our technique with three calculi: the call-by-value lambda-calculus, the call-by-value lambda-calculus with the delimited-control operators shift and reset, and the call-by-value lambda-calculus with the abortive control operators call/cc and abort. In the first two cases, there was previously no sound bisimulation up to context validating the eta-law, whereas no theory of normal-form bisimulations for the calculus of abortive control has been presented before. Our results have been fully formalized in the Coq proof assistant. | Normal-form bisimilarity was first introduced by Sangiorgi @cite_15 and has since been defined for many variants of the @math -calculus, considering @math -expansion @cite_28 @cite_9 @cite_24 @cite_12 @cite_27 @cite_11 @cite_5 @cite_3 or not @cite_16 @cite_29 . In this section we focus on the articles treating the @math -law, and in particular on the congruence and soundness proofs presented therein. | {
"cite_N": [
"@cite_28",
"@cite_9",
"@cite_29",
"@cite_3",
"@cite_24",
"@cite_27",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_12",
"@cite_11"
],
"mid": [
"1537995368",
"1712407203",
"2009226222",
"2086744387",
"",
"1491719272",
"45878689",
"2158386061",
"",
"2100747331",
"2152787740"
],
"abstract": [
"On the basis of an operational bisimulation account of Bohm tree equivalence, a novel operationally-based development of the Bohm tree theory is presented, including an elementary congruence proof for Bohm tree equivalence. The approach is also applied to other sensible and lazy tree theories. Finally, a syntactic proof principle, called bisimulation up to context, is derived from the congruence proofs. It is used to give a simple syntactic proof of the least fixed point property of fixed point combinators. The paper surveys notions of bisimulation and trees for sensible λ-theories based on reduction to head normal forms as well as for lazy λ-theories based on weak head normal forms.",
"This paper describes two new bisimulation equivalences for the pure untyped call-by-value λ-calculus, called enf bisimilarity and enf bisimilarity up to η. They are based on eager reduction of terms to eager normal form (enf), analogously to co-inductive bisimulation characterizations of Levy-Longo tree equivalence and Bohm tree equivalence (up to η). We argue that enf bisimilarity is the call-by-value analogue of Levy-Longo tree equivalence. Enf bisimilarity (up to η) is the congruence on source terms induced by the call-by-value CPS transform and Bohm tree equivalence (up to η) on target terms. Enf bisimilarity and enf bisimilarity up to η enjoy powerful bisimulation proof principles which, among other things, can be used to establish a retraction theorem for the call-by-value CPS transform.",
"This paper presents co-inductive operational theories of program refinement and equivalence, called whnf similarity and whnf simulation equivalence, for the λ-calculus extended with McCarthy's ambiguous choice operator amb. The associated whnf simulation co-induction proof principle is useful for establishing non-trivial equivalences and refinement relationships between programs. Whnf similarity is a pre-congruence and whnf simulation equivalence is a congruence and a conservative extension of the Levy-Longo tree theory for the pure λ-calculus.",
"Delimited continuations are more expressive than traditional abortive continuations and they apparently require a framework beyond traditional continuation-passing style (CPS). We show that this is not the case: standard CPS is sufficient to explain the common control operators for delimited continuations. We demonstrate this fact and present an implementation as a Scheme library. We then investigate a typed account of delimited continuations that makes explicit where control effects can occur. This results in a monadic framework for typed and encapsulated delimited continuations, which we design and implement as a Haskell library.",
"",
"Normal form bisimulation is a powerful theory of program equivalence, originally developed to characterize Levy-Longo tree equivalence and Boehm tree equivalence. It has been adapted to a range of untyped, higher-order calculi, but types have presented a difficulty. In this paper, we present an account of normal form bisimulation for types, including recursive types. We develop our theory for a continuation-passing style calculus, Jump-With-Argument (JWA), where normal form bisimilarity takes a very simple form. We give a novel congruence proof, based on insights from game semantics. A notable feature is the seamless treatment of eta-expansion. We demonstrate the normal form bisimulation proof principle by using it to establish a syntactic minimal invariance result and the uniqueness of the fixed point operator at each type.",
"We define a notion of normal form bisimulation for the untyped call-by-value λ-calculus extended with the delimited-control operators shift and reset. Normal form bisimilarities are simple, easy-to-use behavioral equivalences which relate terms without having to test them within all contexts (like contextual equivalence), or by applying them to function arguments (like applicative bisimilarity). We prove that the normal form bisimilarity for shift and reset is sound but not complete w.r.t. contextual equivalence and we define up-to techniques that aim at simplifying bisimulation proofs. Finally, we illustrate the simplicity of the techniques we develop by proving several equivalences on terms.",
"The use of lambda calculus in richer settings, possibly involving parallelism, is examined in terms of its effect on the equivalence between lambda terms, focusing on S. Abramsky's (Ph.D thesis, Univ. of London, 1987) lazy lambda calculus. First, the lambda calculus is studied within a process calculus by examining the equivalence induced by R. Milner's (1992) encoding into the π-calculus. Exact operational and denotational characterizations for this equivalence are given. Second, Abramsky's applicative bisimulation is examined when the lambda calculus is augmented with (well-formed) operators, i.e. symbols equipped with reduction rules describing their behavior. Then, maximal discrimination is obtained when all operators are considered; it is shown that this discrimination coincides with the one given by the above equivalence and that the adoption of certain nondeterministic operators is sufficient and necessary to induce it.",
"",
"We present a new co-inductive syntactic theory, eager normal form bisimilarity, for the untyped call-by-value lambda calculus extended with continuations and mutable references. We demonstrate that the associated bisimulation proof principle is easy to use and that it is a powerful tool for proving equivalences between recursive imperative higher-order programs. The theory is modular in the sense that eager normal form bisimilarity for each of the calculi extended with continuations and/or mutable references is a fully abstract extension of eager normal form bisimilarity for its sub-calculi. For each calculus, we prove that eager normal form bisimilarity is a congruence and is sound with respect to contextual equivalence. Furthermore, for the calculus with both continuations and mutable references, we show that eager normal form bisimilarity is complete: it coincides with contextual equivalence.",
"This paper presents a new bisimulation theory for parametric polymorphism which enables straightforward co-inductive proofs of program equivalences involving existential types. The theory is an instance of typed normal form bisimulation and demonstrates the power of this recent framework for modeling typed lambda calculi as labelled transition systems. We develop our theory for a continuation-passing style calculus, Jump-With-Argument, where normal form bisimulation takes a simple form. We equip the calculus with both existential and recursive types. An \"ultimate pattern matching theorem\" enables us to define bisimilarity and we show it to be a congruence. We apply our theory to proving program equivalences, type isomorphisms and genericity."
]
} |
1711.00113 | 2767024947 | Normal-form bisimilarity is a simple, easy-to-use behavioral equivalence that relates terms in lambda-calculi by decomposing their normal forms into bisimilar subterms. Besides, they allow for powerful up-to techniques, such as bisimulation up to context, which simplify bisimulation proofs even further. However, proving soundness of these relations becomes complicated in the presence of eta-expansion and usually relies on ad hoc proof methods which depend on the language. In this paper we propose a more systematic proof method to show that an extensional normal-form bisimilarity along with its corresponding bisimulation up to context are sound. We illustrate our technique with three calculi: the call-by-value lambda-calculus, the call-by-value lambda-calculus with the delimited-control operators shift and reset, and the call-by-value lambda-calculus with the abortive control operators call/cc and abort. In the first two cases, there was previously no sound bisimulation up to context validating the eta-law, whereas no theory of normal-form bisimulations for the calculus of abortive control has been presented before. Our results have been fully formalized in the Coq proof assistant. | @cite_28 , Lassen defines several equivalences for the call-by-name @math -calculus, depending on the chosen semantics. He defines and for the semantics based on reduction to head normal form (where @math -expansion applies to any term @math , not only to a value as in the call-by-value @math -calculus), and based on reduction to weak head normal form. (It does not make sense to consider a , since it would be unsound, e.g., it would relate a non-terminating term @math with a normal form @math .) The paper also defines a bisimulation up to context for each bisimilarity. | {
"cite_N": [
"@cite_28"
],
"mid": [
"1537995368"
],
"abstract": [
"On the basis of an operational bisimulation account of Böhm tree equivalence, a novel operationally-based development of the Böhm tree theory is presented, including an elementary congruence proof for Böhm tree equivalence. The approach is also applied to other sensible and lazy tree theories. Finally, a syntactic proof principle, called bisimulation up to context, is derived from the congruence proofs. It is used to give a simple syntactic proof of the least fixed point property of fixed point combinators. The paper surveys notions of bisimulation and trees for sensible λ-theories based on reduction to head normal forms as well as for lazy λ-theories based on weak head normal forms."
]
} |
1711.00113 | 2767024947 | Normal-form bisimilarity is a simple, easy-to-use behavioral equivalence that relates terms in lambda-calculi by decomposing their normal forms into bisimilar subterms. Besides, they allow for powerful up-to techniques, such as bisimulation up to context, which simplify bisimulation proofs even further. However, proving soundness of these relations becomes complicated in the presence of eta-expansion and usually relies on ad hoc proof methods which depend on the language. In this paper we propose a more systematic proof method to show that an extensional normal-form bisimilarity along with its corresponding bisimulation up to context are sound. We illustrate our technique with three calculi: the call-by-value lambda-calculus, the call-by-value lambda-calculus with the delimited-control operators shift and reset, and the call-by-value lambda-calculus with the abortive control operators call/cc and abort. In the first two cases, there was previously no sound bisimulation up to context validating the eta-law, whereas no theory of normal-form bisimulations for the calculus of abortive control has been presented before. Our results have been fully formalized in the Coq proof assistant. | In @cite_9 , Lassen claims that “It is also possible to prove congruence of enf bisimilarity and enf bisimilarity up to @math directly like the congruence proofs for other normal form bisimilarities (tree equivalences) in @cite_28 , although the congruence proofs (...) require non-trivial changes to the relational substitutive context closure operation in op.cit. (...) Moreover, from the direct congruence proofs, we can derive bisimulation “up to context” proof principles like those for other normal form bisimilarities in op.cit.” To our knowledge, such a proof is not published anywhere; we tried to carry out the congruence proof by following this comment, but we do not know how to conclude in the case of enf bisimilarity up to @math .
We discuss what the problem is at the end of the proof of Lemma . | {
"cite_N": [
"@cite_28",
"@cite_9"
],
"mid": [
"1537995368",
"1712407203"
],
"abstract": [
"On the basis of an operational bisimulation account of Böhm tree equivalence, a novel operationally-based development of the Böhm tree theory is presented, including an elementary congruence proof for Böhm tree equivalence. The approach is also applied to other sensible and lazy tree theories. Finally, a syntactic proof principle, called bisimulation up to context, is derived from the congruence proofs. It is used to give a simple syntactic proof of the least fixed point property of fixed point combinators. The paper surveys notions of bisimulation and trees for sensible λ-theories based on reduction to head normal forms as well as for lazy λ-theories based on weak head normal forms.",
"This paper describes two new bisimulation equivalences for the pure untyped call-by-value λ-calculus, called enf bisimilarity and enf bisimilarity up to η. They are based on eager reduction of terms to eager normal form (enf), analogously to co-inductive bisimulation characterizations of Lévy-Longo tree equivalence and Böhm tree equivalence (up to η). We argue that enf bisimilarity is the call-by-value analogue of Lévy-Longo tree equivalence. Enf bisimilarity (up to η) is the congruence on source terms induced by the call-by-value CPS transform and Böhm tree equivalence (up to η) on target terms. Enf bisimilarity and enf bisimilarity up to η enjoy powerful bisimulation proof principles which, among other things, can be used to establish a retraction theorem for the call-by-value CPS transform."
]
} |
1711.00113 | 2767024947 | Normal-form bisimilarity is a simple, easy-to-use behavioral equivalence that relates terms in lambda-calculi by decomposing their normal forms into bisimilar subterms. Besides, they allow for powerful up-to techniques, such as bisimulation up to context, which simplify bisimulation proofs even further. However, proving soundness of these relations becomes complicated in the presence of eta-expansion and usually relies on ad hoc proof methods which depend on the language. In this paper we propose a more systematic proof method to show that an extensional normal-form bisimilarity along with its corresponding bisimulation up to context are sound. We illustrate our technique with three calculi: the call-by-value lambda-calculus, the call-by-value lambda-calculus with the delimited-control operators shift and reset, and the call-by-value lambda-calculus with the abortive control operators call/cc and abort. In the first two cases, there was previously no sound bisimulation up to context validating the eta-law, whereas no theory of normal-form bisimulations for the calculus of abortive control has been presented before. Our results have been fully formalized in the Coq proof assistant. | Støvring and Lassen @cite_12 define extensional enf bisimilarities for three calculi: @math (continuations), @math (mutable state), and @math (continuations and mutable state). The congruence proof is rather convoluted and is done in two stages: first, prove congruence of a non-extensional bisimilarity using the nested induction of @cite_28 , then extend the result to the extensional bisimilarity by a syntactic translation that takes advantage of an infinite @math -expansion combinator. The paper does not mention bisimulation up to context. | {
"cite_N": [
"@cite_28",
"@cite_12"
],
"mid": [
"1537995368",
"2100747331"
],
"abstract": [
"On the basis of an operational bisimulation account of Böhm tree equivalence, a novel operationally-based development of the Böhm tree theory is presented, including an elementary congruence proof for Böhm tree equivalence. The approach is also applied to other sensible and lazy tree theories. Finally, a syntactic proof principle, called bisimulation up to context, is derived from the congruence proofs. It is used to give a simple syntactic proof of the least fixed point property of fixed point combinators. The paper surveys notions of bisimulation and trees for sensible λ-theories based on reduction to head normal forms as well as for lazy λ-theories based on weak head normal forms.",
"We present a new co-inductive syntactic theory, eager normal form bisimilarity, for the untyped call-by-value lambda calculus extended with continuations and mutable references. We demonstrate that the associated bisimulation proof principle is easy to use and that it is a powerful tool for proving equivalences between recursive imperative higher-order programs. The theory is modular in the sense that eager normal form bisimilarity for each of the calculi extended with continuations and/or mutable references is a fully abstract extension of eager normal form bisimilarity for its sub-calculi. For each calculus, we prove that eager normal form bisimilarity is a congruence and is sound with respect to contextual equivalence. Furthermore, for the calculus with both continuations and mutable references, we show that eager normal form bisimilarity is complete: it coincides with contextual equivalence."
]
} |
1711.00113 | 2767024947 | Normal-form bisimilarity is a simple, easy-to-use behavioral equivalence that relates terms in lambda-calculi by decomposing their normal forms into bisimilar subterms. Besides, they allow for powerful up-to techniques, such as bisimulation up to context, which simplify bisimulation proofs even further. However, proving soundness of these relations becomes complicated in the presence of eta-expansion and usually relies on ad hoc proof methods which depend on the language. In this paper we propose a more systematic proof method to show that an extensional normal-form bisimilarity along with its corresponding bisimulation up to context are sound. We illustrate our technique with three calculi: the call-by-value lambda-calculus, the call-by-value lambda-calculus with the delimited-control operators shift and reset, and the call-by-value lambda-calculus with the abortive control operators call/cc and abort. In the first two cases, there was previously no sound bisimulation up to context validating the eta-law, whereas no theory of normal-form bisimulations for the calculus of abortive control has been presented before. Our results have been fully formalized in the Coq proof assistant. | Lassen and Levy @cite_27 @cite_11 define a normal-form bisimilarity for a CPS calculus called JWA equipped with a rich type system (including product, sum, recursive types; @cite_11 adds existential types). The bisimilarity respects the @math -law, and the congruence proof is done in terms of game semantics notions. Again, these papers do not mention bisimulation up to context. | {
"cite_N": [
"@cite_27",
"@cite_11"
],
"mid": [
"1491719272",
"2152787740"
],
"abstract": [
"Normal form bisimulation is a powerful theory of program equivalence, originally developed to characterize Lévy-Longo tree equivalence and Böhm tree equivalence. It has been adapted to a range of untyped, higher-order calculi, but types have presented a difficulty. In this paper, we present an account of normal form bisimulation for types, including recursive types. We develop our theory for a continuation-passing style calculus, Jump-With-Argument (JWA), where normal form bisimilarity takes a very simple form. We give a novel congruence proof, based on insights from game semantics. A notable feature is the seamless treatment of eta-expansion. We demonstrate the normal form bisimulation proof principle by using it to establish a syntactic minimal invariance result and the uniqueness of the fixed point operator at each type.",
"This paper presents a new bisimulation theory for parametric polymorphism which enables straightforward co-inductive proofs of program equivalences involving existential types. The theory is an instance of typed normal form bisimulation and demonstrates the power of this recent framework for modeling typed lambda calculi as labelled transition systems. We develop our theory for a continuation-passing style calculus, Jump-With-Argument, where normal form bisimulation takes a simple form. We equip the calculus with both existential and recursive types. An \"ultimate pattern matching theorem\" enables us to define bisimilarity and we show it to be a congruence. We apply our theory to proving program equivalences, type isomorphisms and genericity."
]
} |
1711.00113 | 2767024947 | Normal-form bisimilarity is a simple, easy-to-use behavioral equivalence that relates terms in lambda-calculi by decomposing their normal forms into bisimilar subterms. Besides, they allow for powerful up-to techniques, such as bisimulation up to context, which simplify bisimulation proofs even further. However, proving soundness of these relations becomes complicated in the presence of eta-expansion and usually relies on ad hoc proof methods which depend on the language. In this paper we propose a more systematic proof method to show that an extensional normal-form bisimilarity along with its corresponding bisimulation up to context are sound. We illustrate our technique with three calculi: the call-by-value lambda-calculus, the call-by-value lambda-calculus with the delimited-control operators shift and reset, and the call-by-value lambda-calculus with the abortive control operators call/cc and abort. In the first two cases, there was previously no sound bisimulation up to context validating the eta-law, whereas no theory of normal-form bisimulations for the calculus of abortive control has been presented before. Our results have been fully formalized in the Coq proof assistant. | In a previous work @cite_5 , we define extensional enf bisimilarities and bisimulations up to context for a call-by-value @math -calculus with delimited-control operators. The (unpublished) congruence and soundness proofs follow Lassen @cite_28 , but are incorrect: one case in the induction, which turns out to be problematic, has been forgotten. In @cite_3 we fix the congruence proof of the extensional bisimilarity, by doing a nested induction on a different notion of closure than Lassen. This approach fails when proving soundness of a bisimulation up to context, and therefore bisimulation up to context does not respect the @math -law in @cite_3 . | {
"cite_N": [
"@cite_28",
"@cite_5",
"@cite_3"
],
"mid": [
"1537995368",
"45878689",
"2086744387"
],
"abstract": [
"On the basis of an operational bisimulation account of Böhm tree equivalence, a novel operationally-based development of the Böhm tree theory is presented, including an elementary congruence proof for Böhm tree equivalence. The approach is also applied to other sensible and lazy tree theories. Finally, a syntactic proof principle, called bisimulation up to context, is derived from the congruence proofs. It is used to give a simple syntactic proof of the least fixed point property of fixed point combinators. The paper surveys notions of bisimulation and trees for sensible λ-theories based on reduction to head normal forms as well as for lazy λ-theories based on weak head normal forms.",
"We define a notion of normal form bisimilarity for the untyped call-by-value λ-calculus extended with the delimited-control operators shift and reset. Normal form bisimilarities are simple, easy-to-use behavioral equivalences which relate terms without having to test them within all contexts (like contextual equivalence), or by applying them to function arguments (like applicative bisimilarity). We prove that the normal form bisimilarity for shift and reset is sound but not complete w.r.t. contextual equivalence and we define up-to techniques that aim at simplifying bisimulation proofs. Finally, we illustrate the simplicity of the techniques we develop by proving several equivalences on terms.",
"Delimited continuations are more expressive than traditional abortive continuations and they apparently require a framework beyond traditional continuation-passing style (CPS). We show that this is not the case: standard CPS is sufficient to explain the common control operators for delimited continuations. We demonstrate this fact and present an implementation as a Scheme library. We then investigate a typed account of delimited continuations that makes explicit where control effects can occur. This results in a monadic framework for typed and encapsulated delimited continuations, which we design and implement as a Haskell library."
]
} |
1711.00248 | 2765693025 | Image-based clothing retrieval is receiving increasing interest with the growth of online shopping. In practice, users may often have a desired piece of clothing in mind (e.g., either having seen it before on the street or requiring certain specific clothing attributes) but may be unable to supply an image as a query. We model this problem as a new type of image retrieval task in which the target image resides only in the user's mind (called "mental image retrieval" hereafter). Because of the absence of an explicit query image, we propose to solve this problem through relevance feedback. Specifically, a new Bayesian formulation is proposed that simultaneously models the retrieval target and its high-level representation in the mind of the user (called the "user metric" hereafter) as posterior distributions of pre-fetched shop images and heterogeneous features extracted from multiple clothing attributes, respectively. Requiring only clicks as user feedback, the proposed algorithm is able to account for the variability in human decision-making. Experiments with real users demonstrate the effectiveness of the proposed algorithm. | Relevance feedback (RF) was initially developed for use in document retrieval @cite_7 and was introduced into content-based image retrieval (CBIR) during the 1990s @cite_5 . Since that time, RF algorithms have been shown to enable drastic performance boosts in retrieval systems @cite_15 @cite_54 @cite_42 @cite_37 @cite_41 @cite_43 @cite_27 @cite_55 @cite_38 @cite_26 and attribute learning @cite_28 @cite_24 . | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_26",
"@cite_7",
"@cite_41",
"@cite_28",
"@cite_54",
"@cite_42",
"@cite_55",
"@cite_24",
"@cite_43",
"@cite_27",
"@cite_5",
"@cite_15"
],
"mid": [
"2057145265",
"2137587644",
"2160550980",
"",
"1973693867",
"2033365921",
"2110329341",
"2097329790",
"2113817964",
"",
"2119263246",
"2162721653",
"1786967778",
"2120486209"
],
"abstract": [
"This paper proposes a new content based image retrieval (CBIR) system combined with relevance feedback and the online feature selection procedures. A measure of inconsistency from relevance feedback is explicitly used as a new semantic criterion to guide the feature selection. By integrating the user feedback information, the feature selection is able to bridge the gap between low-level visual features and high-level semantic information, leading to the improved image retrieval accuracy. Experimental results show that the proposed method obtains higher retrieval accuracy than a commonly used approach.",
"Relevance feedback is a powerful technique for image retrieval and has been an active research direction for the past few years. Various ad hoc parameter estimation techniques have been proposed for relevance feedback. In addition, methods that perform optimization on multilevel image content model have been formulated. However, these methods only perform relevance feedback on low-level image features and fail to address the images' semantic content. In this paper, we propose a relevance feedback framework to take advantage of the semantic contents of images in addition to low-level features. By forming a semantic network on top of the keyword association on the images, we are able to accurately deduce and utilize the images' semantic contents for retrieval purposes. We also propose a ranking measure that is suitable for our framework. The accuracy and effectiveness of our method is demonstrated with experimental results on real-world image collections.",
"Nowadays, content-based image retrieval (CBIR) is the mainstay of image retrieval systems. To be more profitable, relevance feedback techniques were incorporated into CBIR such that more precise results can be obtained by taking user's feedbacks into account. However, existing relevance feedback-based CBIR methods usually request a number of iterative feedbacks to produce refined search results, especially in a large-scale image database. This is impractical and inefficient in real applications. In this paper, we propose a novel method, Navigation-Pattern-based Relevance Feedback (NPRF), to achieve the high efficiency and effectiveness of CBIR in coping with the large-scale image data. In terms of efficiency, the iterations of feedback are reduced substantially by using the navigation patterns discovered from the user query log. In terms of effectiveness, our proposed search algorithm NPRFSearch makes use of the discovered navigation patterns and three kinds of query refinement strategies, Query Point Movement (QPM), Query Reweighting (QR), and Query Expansion (QEX), to converge the search space toward the user's intention effectively. By using NPRF method, high quality of image retrieval on RF can be achieved in a small number of feedbacks. The experimental results reveal that NPRF outperforms other existing methods significantly in terms of precision, coverage, and number of feedbacks.",
"",
"We analyze the nature of the relevance feedback problem in a continuous representation space in the context of content-based image retrieval. Emphasis is put on exploring the uniqueness of the problem and comparing the assumptions, implementations, and merits of various solutions in the literature. An attempt is made to compile a list of critical issues to consider when designing a relevance feedback algorithm. With a comprehensive review as the main portion, this paper also offers some novel solutions and perspectives throughout the discussion.",
"We propose a novel mode of feedback for image search, where a user describes which properties of exemplar images should be adjusted in order to more closely match his her mental model of the image(s) sought. For example, perusing image results for a query “black shoes”, the user might state, “Show me shoe images like these, but sportier.” Offline, our approach first learns a set of ranking functions, each of which predicts the relative strength of a nameable attribute in an image (‘sportiness’, ‘furriness’, etc.). At query time, the system presents an initial set of reference images, and the user selects among them to provide relative attribute feedback. Using the resulting constraints in the multi-dimensional attribute space, our method updates its relevance function and re-ranks the pool of images. This procedure iterates using the accumulated constraints until the top ranked images are acceptably close to the user's envisioned target. In this way, our approach allows a user to efficiently “whittle away” irrelevant portions of the visual feature space, using semantic language to precisely communicate her preferences to the system. We demonstrate the technique for refining image search for people, products, and scenes, and show it outperforms traditional binary relevance feedback in terms of search speed and accuracy.",
"In recent years, a variety of relevance feedback (RF) schemes have been developed to improve the performance of content-based image retrieval (CBIR). Given user feedback information, the key to a RF scheme is how to select a subset of image features to construct a suitable dissimilarity measure. Among various RF schemes, biased discriminant analysis (BDA) based RF is one of the most promising. It is based on the observation that all positive samples are alike, while in general each negative sample is negative in its own way. However, to use BDA, the small sample size (SSS) problem is a big challenge, as users tend to give a small number of feedback samples. To explore solutions to this issue, this paper proposes a direct kernel BDA (DKBDA), which is less sensitive to SSS. An incremental DKBDA (IDKBDA) is also developed to speed up the analysis. Experimental results are reported on a real-world image collection to demonstrate that the proposed methods outperform the traditional kernel BDA (KBDA) and the support vector machine (SVM) based RF algorithms",
"This paper describes Pinview, a content-based image retrieval system that exploits implicit relevance feedback during a search session. Pinview contains several novel methods that infer the intent of the user. From relevance feedback, such as eye movements or clicks, and visual features of images Pinview learns a similarity metric between images which depends on the current interests of the user. It then retrieves images with a specialized reinforcement learning algorithm that balances the tradeoff between exploring new images and exploiting the already inferred interests of the user. In practice, we have integrated Pinview to the content-based image retrieval system PicSOM, in order to apply it to real-world image databases. Preliminary experiments show that eye movements provide a rich input modality from which it is possible to learn the interests of the user.",
"With many potential multimedia applications, content-based image retrieval (CBIR) has recently gained more attention for image management and Web search. A wide variety of relevance feedback (RF) algorithms have been developed in recent years to improve the performance of CBIR systems. These RF algorithms capture user's preferences and bridge the semantic gap. However, there is still a big room to further the RF performance, because the popular RF algorithms ignore the manifold structure of image low-level visual features. In this paper, we propose the biased discriminative Euclidean embedding (BDEE) which parameterises samples in the original high-dimensional ambient space to discover the intrinsic coordinate of image low-level visual features. BDEE precisely models both the intraclass geometry and interclass discrimination and never meets the undersampled problem. To consider unlabelled samples, a manifold regularization-based item is introduced and combined with BDEE to form the semi-supervised BDEE, or semi-BDEE for short. To justify the effectiveness of the proposed BDEE and semi-BDEE, we compare them against the conventional RF algorithms and show a significant improvement in terms of accuracy and stability based on a subset of the Corel image gallery.",
"",
"The paper proposes an adaptive retrieval approach based on the concept of relevance-feedback, which establishes a link between high-level concepts and low-level features, using the user's feedback not only to assign proper weights to the features, but also to dynamically select them within a large collection of parameters. The target is to identify a set of relevant features according to a user query while at the same time maintaining a small sized feature vector to attain better matching and lower complexity. To this end, the image description is modified during each retrieval by removing the least significant features and better specifying the most significant ones. The feature adaptation is based on a hierarchical approach. The weights are then adjusted based on previously retrieved relevant and irrelevant images without further user-feedback. The algorithm is not fixed to a given feature set. It can be used with different hierarchical feature sets, provided that the hierarchical structure is defined a priori. Results achieved on different image databases and two completely different feature sets show that the proposed algorithm outperforms previously proposed methods. Further, it is experimentally demonstrated that it approaches the results obtained by state-of-the-art feature-selection techniques having complete knowledge of the data set.",
"Understanding the subjective meaning of a visual query, by converting it into numerical parameters that can be extracted and compared by a computer, is the paramount challenge in the field of intelligent image retrieval, also referred to as the “semantic gap” problem. In this paper, an innovative approach is proposed that combines a relevance feedback (RF) approach with an evolutionary stochastic algorithm, called particle swarm optimizer (PSO), as a way to grasp user's semantics through optimized iterative learning. The retrieval uses human interaction to achieve a twofold goal: 1) to guide the swarm particles in the exploration of the solution space towards the cluster of relevant images; 2) to dynamically modify the feature space by appropriately weighting the descriptive features according to the users' perception of relevance. Extensive simulations showed that the proposed technique outperforms traditional deterministic RF approaches of the same class, thanks to its stochastic nature, which allows a better exploration of complex, nonlinear, and highly-dimensional solution spaces.",
"Visual impression may differ with each person. User-friendly interfaces for image database systems require special retrieval methods which can adapt to the visual impression of each user. Algorithms for learning personal visual impressions of visual objects are described. The algorithms are based on multivariate data analysis methods. These algorithms provide a model on visual perception process of each user from a small set of training examples. This model is referred to as a personal index to retrieve desired images for the user. These algorithms were implemented and examined in a graphical symbol database system called TRADEMARK and a full color image database called ART MUSEUM.",
"In content-based image retrieval, understanding the user's needs is a challenging task that requires integrating him in the process of retrieval. Relevance feedback (RF) has proven to be an effective tool for taking the user's judgement into account. In this paper, we present a new RF framework based on a feature selection algorithm that nicely combines the advantages of a probabilistic formulation with those of using both the positive example (PE) and the negative example (NE). Through interaction with the user, our algorithm learns the importance he assigns to image features, and then applies the results obtained to define similarity measures that correspond better to his judgement. The use of the NE allows images undesired by the user to be discarded, thereby improving retrieval accuracy. As for the probabilistic formulation of the problem, it presents a multitude of advantages and opens the door to more modeling possibilities that achieve a good feature selection. It makes it possible to cluster the query data into classes, choose the probability law that best models each class, model missing data, and support queries with multiple PE and or NE classes. The basic principle of our algorithm is to assign more importance to features with a high likelihood and those which distinguish well between PE classes and NE classes. The proposed algorithm was validated separately and in image retrieval context, and the experiments show that it performs a good feature selection and contributes to improving retrieval effectiveness."
]
} |
1711.00248 | 2765693025 | Image-based clothing retrieval is receiving increasing interest with the growth of online shopping. In practice, users may often have a desired piece of clothing in mind (e.g., either having seen it before on the street or requiring certain specific clothing attributes) but may be unable to supply an image as a query. We model this problem as a new type of image retrieval task in which the target image resides only in the user's mind (called "mental image retrieval" hereafter). Because of the absence of an explicit query image, we propose to solve this problem through relevance feedback. Specifically, a new Bayesian formulation is proposed that simultaneously models the retrieval target and its high-level representation in the mind of the user (called the "user metric" hereafter) as posterior distributions of pre-fetched shop images and heterogeneous features extracted from multiple clothing attributes, respectively. Requiring only clicks as user feedback, the proposed algorithm is able to account for the variability in human decision-making. Experiments with real users demonstrate the effectiveness of the proposed algorithm. | In the context of feature weighting in RF, Rui @cite_2 proposed a re-weighting approach in which image feature vectors are converted into weighted-term vectors in MARS. Another solution is to move the query point toward the contour of the user's preference in feature space, as is done, for example, in the famous Rocchio algorithm @cite_7 . The FA-RF method @cite_43 uses two iterative techniques to exploit relevance information: query refinement and feature re-weighting. Recently, Jiang @cite_46 proposed a weighting scheme based on multiple modalities for zero-example video retrieval, in which logistic regression is applied given binary feedback. | {
"cite_N": [
"@cite_43",
"@cite_46",
"@cite_7",
"@cite_2"
],
"mid": [
"2119263246",
"2013075750",
"",
"1654865708"
],
"abstract": [
"The paper proposes an adaptive retrieval approach based on the concept of relevance-feedback, which establishes a link between high-level concepts and low-level features, using the user's feedback not only to assign proper weights to the features, but also to dynamically select them within a large collection of parameters. The target is to identify a set of relevant features according to a user query while at the same time maintaining a small sized feature vector to attain better matching and lower complexity. To this end, the image description is modified during each retrieval by removing the least significant features and better specifying the most significant ones. The feature adaptation is based on a hierarchical approach. The weights are then adjusted based on previously retrieved relevant and irrelevant images without further user-feedback. The algorithm is not fixed to a given feature set. It can be used with different hierarchical feature sets, provided that the hierarchical structure is defined a priori. Results achieved on different image databases and two completely different feature sets show that the proposed algorithm outperforms previously proposed methods. Further, it is experimentally demonstrated that it approaches the results obtained by state-of-the-art feature-selection techniques having complete knowledge of the data set.",
"We propose a novel method MultiModal Pseudo Relevance Feedback (MMPRF) for event search in video, which requires no search examples from the user. Pseudo Relevance Feedback has shown great potential in retrieval tasks, but previous works are limited to unimodal tasks with only a single ranked list. To tackle the event search task which is inherently multimodal, our proposed MMPRF takes advantage of multiple modalities and multiple ranked lists to enhance event search performance in a principled way. The approach is unique in that it leverages not only semantic features, but also non-semantic low-level features for event search in the absence of training data. Evaluated on the TRECVID MEDTest dataset, the approach improves the baseline by up to 158 in terms of the mean average precision. It also significantly contributes to CMU Team's final submission in TRECVID-13 Multimedia Event Detection.",
"",
"Technology advances in the areas of image processing (IP) and information retrieval (IR) have evolved separately for a long time. However, successful content-based image retrieval systems require the integration of the two. There is an urgent need to develop integration mechanisms to link the image retrieval model to text retrieval model, such that the well established text retrieval techniques can be utilized. Approaches of converting image feature vectors (IF domain) to weighted-term vectors (IR domain) are proposed in this paper. Furthermore, the relevance feedback technique from the IR domain is used in content-based image retrieval to demonstrate the effectiveness of this conversion. Experimental results show that the image retrieval precision increases considerably by using the proposed integration approach."
]
} |
1711.00248 | 2765693025 | Image-based clothing retrieval is receiving increasing interest with the growth of online shopping. In practice, users may often have a desired piece of clothing in mind (e.g., either having seen it before on the street or requiring certain specific clothing attributes) but may be unable to supply an image as a query. We model this problem as a new type of image retrieval task in which the target image resides only in the user's mind (called "mental image retrieval" hereafter). Because of the absence of an explicit query image, we propose to solve this problem through relevance feedback. Specifically, a new Bayesian formulation is proposed that simultaneously models the retrieval target and its high-level representation in the mind of the user (called the "user metric" hereafter) as posterior distributions of pre-fetched shop images and heterogeneous features extracted from multiple clothing attributes, respectively. Requiring only clicks as user feedback, the proposed algorithm is able to account for the variability in human decision-making. Experiments with real users demonstrate the effectiveness of the proposed algorithm. | Mental image retrieval, i.e., searching images without any explicit query, was pioneered by Cox @cite_52 . They proposed a Bayesian framework based on iterative relevance feedback to retrieve a specific image in the database (target search). Fang and Geman @cite_51 proposed an efficient display algorithm that requires only one click from the user per iteration, and applied it to mental face retrieval. Afterwards, Ferecatu @cite_59 extended the framework to category search instead of target search. The application was scaled to large-scale datasets by Suditu and Fleuret @cite_33 @cite_34 , who adopted a hierarchical and expandable adaptive trace algorithm that benefits from an adaptive exploration-exploitation trade-off. 
Similar to the idea of mental image retrieval, Auer @cite_40 maintained the weights of images by giving less relevant images a constant discount at each iteration. | {
"cite_N": [
"@cite_33",
"@cite_52",
"@cite_40",
"@cite_59",
"@cite_34",
"@cite_51"
],
"mid": [
"2113021635",
"2155099190",
"170770863",
"2168609675",
"2095610590",
"1558084014"
],
"abstract": [
"It has been shown repeatedly that iterative relevance feedback is a very efficient solution for content-based image retrieval. However, no existing system scales gracefully to hundreds of thousands or millions of images.",
"Presents the theory, design principles, implementation and performance results of PicHunter, a prototype content-based image retrieval (CBIR) system. In addition, this document presents the rationale, design and results of psychophysical experiments that were conducted to address some key issues that arose during PicHunter's development. The PicHunter project makes four primary contributions to research on CBIR. First, PicHunter represents a simple instance of a general Bayesian framework which we describe for using relevance feedback to direct a search. With an explicit model of what users would do, given the target image they want, PicHunter uses Bayes's rule to predict the target they want, given their actions. This is done via a probability distribution over possible image targets, rather than by refining a query. Second, an entropy-minimizing display algorithm is described that attempts to maximize the information obtained from a user at each iteration of the search. Third, PicHunter makes use of hidden annotation rather than a possibly inaccurate inconsistent annotation structure that the user must learn and make queries in. Finally, PicHunter introduces two experimental paradigms to quantitatively evaluate the performance of the system, and psychophysical experiments are presented that support the theoretical claims.",
"We investigate models for content-based image retrieval with relevance feedback, in particular focusing on the exploration-exploitation dilemma. We propose quantitative models for the user behavior and investigate implications of these models. Three search algorithms for efficient searches based on the user models are proposed and evaluated. In the first model a user queries a database for the most (or a sufficiently) relevant image. The user gives feedback to the system by selecting the most relevant image from a number of images presented by the system. In the second model we consider a filtering task where relevant images should be extracted from a database and presented to the user. The feedback of the user is a binary classification of each presented image as relevant or irrelevant. While these models are related, they differ significantly in the kind of feedback provided by the user. This requires very different mechanisms to trade off exploration (finding out what the user wants) and exploitation (serving images which the system believes relevant for the user).",
"Traditional image retrieval methods require a \"query image\" to initiate a search for members of an image category. However, when the image database is unstructured, and when the category is semantic and resides only in the mind of the user, there is no obvious way to begin (the \"page zero \" problem). We propose a new mathematical framework for relevance feedback based on mental matching and starting from a random sample of images. At each iteration the user declares which of several displayed images is closest to his category; performance is measured by the number of iterations necessary to display an instance. Our core contribution is a Bayesian formulation which scales to large databases with no semantic annotation. The two key components are a response model which accounts for the user's subjective perception of similarity and a display algorithm which seeks to maximize the flow of information. Experiments with real users and a database with 20,000 images demonstrate the efficiency of the search process.",
"Content-based image retrieval systems have to cope with two different regimes: understanding broadly the categories of interest to the user, and refining the search in this or these categories to converge to specific images among them. Here, in contrast with other types of retrieval systems, these two regimes are of great importance since the search initialization is hardly optimal (i.e. the page-zero problem) and the relevance feedback must tolerate the semantic gap of the image's visual features. We present a new approach that encompasses these two regimes, and infers from the user actions a seamless transition between them. Starting from a query-free approach meant to solve the page-zero problem, we propose an adaptive exploration exploitation trade-off that transforms the original framework into a versatile retrieval framework with full searching capabilities. Our approach is compared to the state-of-the-art it extends by conducting user evaluations on a collection of 60,000 images from the ImageNet database.",
"We propose a relevance feedback system for retrieving a mental face picture from a large image database. This scenario differs from standard image retrieval since the target image exists only in the mind of the user, who responds to a sequence of machine-generated queries designed to display the person in mind as quickly as possible. At each iteration the user declares which of several displayed faces is “closest” to his target. The central limiting factor is the “semantic gap” between the standard intensity-based features which index the images in the database and the higher-level representation in the mind of the user which drives his answers. We explore a Bayesian, information-theoretic framework for choosing which images to display and for modeling the response of the user. The challenge is to account for psycho-visual factors and sources of variability in human decision-making. We present experiments with real users which illustrate and validate the proposed algorithms."
]
} |
1711.00088 | 2767128461 | We describe a novel architecture for semantic image retrieval---in particular, retrieval of instances of visual situations. Visual situations are concepts such as "a boxing match," "walking the dog," "a crowd waiting for a bus," or "a game of ping-pong," whose instantiations in images are linked more by their common spatial and semantic structure than by low-level visual similarity. Given a query situation description, our architecture---called Situate---learns models capturing the visual features of expected objects as well as the expected spatial configuration of relationships among objects. Given a new image, Situate uses these models in an attempt to ground (i.e., to create a bounding box locating) each expected component of the situation in the image via an active search procedure. Situate uses the resulting grounding to compute a score indicating the degree to which the new image is judged to contain an instance of the situation. Such scores can be used to rank images in a collection as part of a retrieval system. In the preliminary study described here, we demonstrate the promise of this system by comparing Situate's performance with that of two baseline methods, as well as with a related semantic image-retrieval system based on "scene graphs." | Here we describe some of the recent approaches most closely related to Situate's goals and architecture. Closely related to our work is the approach of @cite_27 for semantic image retrieval via "scene graphs." We describe this method in and compare its performance to that of Situate in . | {
"cite_N": [
"@cite_27"
],
"mid": [
"2077069816"
],
"abstract": [
"This paper develops a novel framework for semantic image retrieval based on the notion of a scene graph. Our scene graphs represent objects (“man”, “boat”), attributes of objects (“boat is white”) and relationships between objects (“man standing on boat”). We use these scene graphs as queries to retrieve semantically related images. To this end, we design a conditional random field model that reasons about possible groundings of scene graphs to test images. The likelihoods of these groundings are used as ranking scores for retrieval. We introduce a novel dataset of 5,000 human-generated scene graphs grounded to images and use this dataset to evaluate our method for image retrieval. In particular, we evaluate retrieval using full scene graphs and small scene subgraphs, and show that our method outperforms retrieval methods that use only objects or low-level image features. In addition, we show that our full model can be used to improve object localization compared to baseline methods."
]
} |
1711.00088 | 2767128461 | We describe a novel architecture for semantic image retrieval---in particular, retrieval of instances of visual situations. Visual situations are concepts such as "a boxing match," "walking the dog," "a crowd waiting for a bus," or "a game of ping-pong," whose instantiations in images are linked more by their common spatial and semantic structure than by low-level visual similarity. Given a query situation description, our architecture---called Situate---learns models capturing the visual features of expected objects as well as the expected spatial configuration of relationships among objects. Given a new image, Situate uses these models in an attempt to ground (i.e., to create a bounding box locating) each expected component of the situation in the image via an active search procedure. Situate uses the resulting grounding to compute a score indicating the degree to which the new image is judged to contain an instance of the situation. Such scores can be used to rank images in a collection as part of a retrieval system. In the preliminary study described here, we demonstrate the promise of this system by comparing Situate's performance with that of two baseline methods, as well as with a related semantic image-retrieval system based on "scene graphs." | Our situation-retrieval task shares motivation but contrasts with the well-known tasks of "event recognition" or "action recognition" in still images (e.g., @cite_4 @cite_7 @cite_0 ). These latter tasks consist of classifying images into one of several event or action categories, without the requirement of localizing objects or relationships. A related task, dubbed "Situation Recognition" in @cite_18 , requires a system to, given an image, predict the most salient verb, along with its subject and object ("semantic roles" @cite_1 ). | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_7",
"@cite_1",
"@cite_0"
],
"mid": [
"2423576022",
"1993991024",
"",
"1551928752",
"2951409066"
],
"abstract": [
"This paper introduces situation recognition, the problem of producing a concise summary of the situation an image depicts including: (1) the main activity (e.g., clipping), (2) the participating actors, objects, substances, and locations (e.g., man, shears, sheep, wool, and field) and most importantly (3) the roles these participants play in the activity (e.g., the man is clipping, the shears are his tool, the wool is being clipped from the sheep, and the clipping is in a field). We use FrameNet, a verb and role lexicon developed by linguists, to define a large space of possible situations and collect a large-scale dataset containing over 500 activities, 1,700 roles, 11,000 objects, 125,000 images, and 200,000 unique situations. We also introduce structured prediction baselines and show that, in activity-centric images, situation-driven prediction of objects and activities outperforms independent object and activity recognition.",
"Abstract Recently still image-based human action recognition has become an active research topic in computer vision and pattern recognition. It focuses on identifying a person׳s action or behavior from a single image. Unlike the traditional action recognition approaches where videos or image sequences are used, a still image contains no temporal information for action characterization. Thus the prevailing spatiotemporal features for video-based action analysis are not appropriate for still image-based action recognition. It is more challenging to perform still image-based action recognition than the video-based one, given the limited source of information as well as the cluttered background for images collected from the Internet. On the other hand, a large number of still images exist over the Internet. Therefore it is demanding to develop robust and efficient methods for still image-based action recognition to understand the web images better for image retrieval or search. Based on the emerging research in recent years, it is time to review the existing approaches to still image-based action recognition and inspire more efforts to advance the field of research. We present a detailed overview of the state-of-the-art methods for still image-based action recognition, and categorize and describe various high-level cues and low-level features for action analysis in still images. All related databases are introduced with details. Finally, we give our views and thoughts for future research.",
"",
"In this paper we introduce the problem of Visual Semantic Role Labeling: given an image we want to detect people doing actions and localize the objects of interaction. Classical approaches to action recognition either study the task of action classification at the image or video clip level or at best produce a bounding box around the person doing the action. We believe such an output is inadequate and a complete understanding can only come when we are able to associate objects in the scene to the different semantic roles of the action. To enable progress towards this goal, we annotate a dataset of 16K people instances in 10K images with actions they are doing and associate objects in the scene with different semantic roles for each action. Finally, we provide a set of baseline algorithms for this task and analyze error modes providing directions for future work.",
"Event recognition from still images is of great importance for image understanding. However, compared with event recognition in videos, there are much fewer research works on event recognition in images. This paper addresses the issue of event recognition from images and proposes an effective method with deep neural networks. Specifically, we design a new architecture, called Object-Scene Convolutional Neural Network (OS-CNN). This architecture is decomposed into object net and scene net, which extract useful information for event understanding from the perspective of objects and scene context, respectively. Meanwhile, we investigate different network architectures for OS-CNN design, and adapt the deep (AlexNet) and very-deep (GoogLeNet) networks to the task of event recognition. Furthermore, we find that the deep and very-deep networks are complementary to each other. Finally, based on the proposed OS-CNN and comparative study of different network architectures, we come up with a solution of five-stream CNN for the track of cultural event recognition at the ChaLearn Looking at People (LAP) challenge 2015. Our method obtains the performance of 85.5 and ranks the @math place in this challenge."
]
} |
1711.00088 | 2767128461 | We describe a novel architecture for semantic image retrieval---in particular, retrieval of instances of visual situations. Visual situations are concepts such as "a boxing match," "walking the dog," "a crowd waiting for a bus," or "a game of ping-pong," whose instantiations in images are linked more by their common spatial and semantic structure than by low-level visual similarity. Given a query situation description, our architecture---called Situate---learns models capturing the visual features of expected objects as well as the expected spatial configuration of relationships among objects. Given a new image, Situate uses these models in an attempt to ground (i.e., to create a bounding box locating) each expected component of the situation in the image via an active search procedure. Situate uses the resulting grounding to compute a score indicating the degree to which the new image is judged to contain an instance of the situation. Such scores can be used to rank images in a collection as part of a retrieval system. In the preliminary study described here, we demonstrate the promise of this system by comparing Situate's performance with that of two baseline methods, as well as with a related semantic image-retrieval system based on "scene graphs." | Our task also contrasts with recent work on automatic caption generation for images (e.g., @cite_11 ), in which image content is statistically associated with a language generator. The goal of caption-generation systems is to generate a description of any input image. Even the versions with "attention" (e.g., @cite_30 ), which are able to highlight diffuse areas corresponding roughly to relevant objects, are not generally able to recognize and locate all important objects, relationships, and actions, or more generally to recognize abstract situations. | {
"cite_N": [
"@cite_30",
"@cite_11"
],
"mid": [
"2950178297",
"2463955103"
],
"abstract": [
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.",
"Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. Finally, given the recent surge of interest in this task, a competition was organized in 2015 using the newly released COCO dataset. We describe and analyze the various improvements we applied to our own baseline and show the resulting performance in the competition, which we won ex-aequo with a team from Microsoft Research."
]
} |
1711.00088 | 2767128461 | We describe a novel architecture for semantic image retrieval---in particular, retrieval of instances of visual situations. Visual situations are concepts such as "a boxing match," "walking the dog," "a crowd waiting for a bus," or "a game of ping-pong," whose instantiations in images are linked more by their common spatial and semantic structure than by low-level visual similarity. Given a query situation description, our architecture---called Situate---learns models capturing the visual features of expected objects as well as the expected spatial configuration of relationships among objects. Given a new image, Situate uses these models in an attempt to ground (i.e., to create a bounding box locating) each expected component of the situation in the image via an active search procedure. Situate uses the resulting grounding to compute a score indicating the degree to which the new image is judged to contain an instance of the situation. Such scores can be used to rank images in a collection as part of a retrieval system. In the preliminary study described here, we demonstrate the promise of this system by comparing Situate's performance with that of two baseline methods, as well as with a related semantic image-retrieval system based on "scene graphs." | While the literature cited above does not include active detection methods such as Situate that involve feedback, there has also been considerable work on active object detection (e.g., @cite_6 @cite_2 ), often in the context of active perception in robots @cite_26 and modeling visual attention @cite_5 @cite_29 . More recently, several groups have framed active object detection as a Markov decision process and used reinforcement learning to learn a search policy (e.g., @cite_25 ). | {
"cite_N": [
"@cite_26",
"@cite_29",
"@cite_6",
"@cite_2",
"@cite_5",
"@cite_25"
],
"mid": [
"",
"2951527505",
"2135440260",
"1930392762",
"1484210532",
"2179488730"
],
"abstract": [
"",
"Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.",
"The dominant visual search paradigm for object class detection is sliding windows. Although simple and effective, it is also wasteful, unnatural and rigidly hardwired. We propose strategies to search for objects which intelligently explore the space of windows by making sequential observations at locations decided based on previous observations. Our strategies adapt to the class being searched and to the content of a particular test image, exploiting context as the statistical relation between the appearance of a window and its location relative to the object, as observed in the training set. In addition to being more elegant than sliding windows, we demonstrate experimentally on the PASCAL VOC 2010 dataset that our strategies evaluate two orders of magnitude fewer windows while achieving higher object detection performance.",
"Object class detectors typically apply a window classifier to all the windows in a large set, either in a sliding window manner or using object proposals. In this paper, we develop an active search strategy that sequentially chooses the next window to evaluate based on all the information gathered before. This results in a substantial reduction in the number of classifier evaluations and in a more elegant approach in general. Our search strategy is guided by two forces. First, we exploit context as the statistical relation between the appearance of a window and its location relative to the object, as observed in the training set. This enables to jump across distant regions in the image (e.g. observing a sky region suggests that cars might be far below) and is done efficiently in a Random Forest framework. Second, we exploit the score of the classifier to attract the search to promising areas surrounding a highly scored window, and to keep away from areas near low scored ones. Our search strategy can be applied on top of any classifier as it treats it as a black-box. In experiments with R-CNN on the challenging SUN2012 dataset, our method matches the detection accuracy of evaluating all windows independently, while evaluating 9× fewer windows.",
"We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.",
"We present an active detection model for localizing objects in scenes. The model is class-specific and allows an agent to focus attention on candidate regions for identifying the correct location of a target object. This agent learns to deform a bounding box using simple transformation actions, with the goal of determining the most specific location of target objects following top-down reasoning. The proposed localization agent is trained using deep reinforcement learning, and evaluated on the Pascal VOC 2007 dataset. We show that agents guided by the proposed model are able to localize a single instance of an object after analyzing only between 11 and 25 regions in an image, and obtain the best detection results among systems that do not use object proposals for object localization."
]
} |
1711.00300 | 2766937060 | Inference based techniques are one of the major approaches to analyzing DNS data and detecting malicious domains. The key idea of inference techniques is to first define associations between domains based on features extracted from DNS data. Then, an inference algorithm is deployed to infer potential malicious domains based on their direct or indirect associations with known malicious ones. The way associations are defined is key to the effectiveness of an inference technique. It is desirable for them to be both accurate (i.e., avoid falsely associating domains with no meaningful connections) and to have good coverage (i.e., identify all associations between domains with meaningful connections). Due to the limited scope of information provided by DNS data, it becomes a challenge to design an association scheme that achieves both high accuracy and good coverage. In this paper, we propose a new association scheme to identify domains controlled by the same entity. Our key idea is an in-depth analysis of active DNS data to accurately separate public IPs from dedicated ones, which enables us to build high-quality associations between domains. Our scheme identifies many meaningful connections between domains that are discarded by existing state-of-the-art approaches. Our experimental results show that the proposed association scheme not only significantly improves the domain coverage compared to existing approaches but also achieves better detection accuracy. The existing path-based inference algorithm is specifically designed for DNS data analysis. It is effective but computationally expensive. As a solution, we investigate the effectiveness of combining our association scheme with the generic belief propagation algorithm. Through comprehensive experiments, we show that this approach offers significant efficiency and scalability improvements with only a minor negative impact on detection accuracy. 
| There is a vast body of research devoted to detecting malicious domains via static analysis. Such work can be classified into two categories: host-based and network-based. Briefly, host-based approaches rely on detecting malware signatures in programs running on end hosts @cite_49 @cite_35 , whereas network-based approaches rely on detecting specific patterns and fingerprints by monitoring network traffic @cite_15 @cite_22 . Since our approach is network-based, we compare and contrast the most relevant network-based proposals that rely on DNS data for malicious domain detection. Network-based approaches can be further divided into classification based (e.g., @cite_36 @cite_31 @cite_3 @cite_25 @cite_33 @cite_9 ) and inference based approaches (e.g., @cite_22 @cite_53 @cite_59 @cite_2 ). While classification based approaches primarily rely on local network and host information, inference based approaches exploit the global relationships among domains along with local information in order to better detect malicious domains. Our work falls in the latter category. Below, we compare and contrast our work with these two categories. | {
"cite_N": [
"@cite_35",
"@cite_31",
"@cite_22",
"@cite_33",
"@cite_36",
"@cite_9",
"@cite_53",
"@cite_2",
"@cite_3",
"@cite_49",
"@cite_59",
"@cite_15",
"@cite_25"
],
"mid": [
"1851403712",
"",
"2396855773",
"155384935",
"2057364987",
"2082550445",
"1567574980",
"2166128942",
"2528572867",
"",
"1983776999",
"2401054255",
"2565766771"
],
"abstract": [
"Malicious software - so called malware - poses a major threat to the security of computer systems. The amount and diversity of its variants render classic security defenses ineffective, such that millions of hosts in the Internet are infected with malware in the form of computer viruses, Internet worms and Trojan horses. While obfuscation and polymorphism employed by malware largely impede detection at file level, the dynamic analysis of malware binaries during run-time provides an instrument for characterizing and defending against the threat of malicious software. In this article, we propose a framework for the automatic analysis of malware behavior using machine learning. The framework allows for automatically identifying novel classes of malware with similar behavior (clustering) and assigning unknown malware to these discovered classes (classification). Based on both, clustering and classification, we propose an incremental approach for behavior-based analysis, capable of processing the behavior of thousands of malware binaries on a daily basis. The incremental analysis significantly reduces the run-time overhead of current analysis methods, while providing accurate discovery and discrimination of novel malware variants.",
"",
"Malicious domains are key components to a variety of cyber attacks. Several recent techniques are proposed to identify malicious domains through analysis of DNS data. The general approach is to build classifiers based on DNS-related local domain features. One potential problem is that many local features, e.g., domain name patterns and temporal patterns, tend to be not robust. Attackers could easily alter these features to evade detection without affecting much their attack capabilities. In this paper, we take a complementary approach. Instead of focusing on local features, we propose to discover and analyze global associations among domains. The key challenges are (1) to build meaningful associations among domains; and (2) to use these associations to reason about the potential maliciousness of domains. For the first challenge, we take advantage of the modus operandi of attackers. To avoid detection, malicious domains exhibit dynamic behavior by, for example, frequently changing the malicious domain-IP resolutions and creating new domains. This makes it very likely for attackers to reuse resources. It is indeed commonly observed that over a period of time multiple malicious domains are hosted on the same IPs and multiple IPs host the same malicious domains, which creates intrinsic association among them. For the second challenge, we develop a graph-based inference technique over associated domains. Our approach is based on the intuition that a domain having strong associations with known malicious domains is likely to be malicious. Carefully established associations enable the discovery of a large set of new malicious domains using a very small set of previously known malicious ones. Our experiments over a public passive DNS database show that the proposed technique can achieve high true positive rates (over 95%) while maintaining low false positive rates (less than 0.5%). 
Further, even with a small set of known malicious domains (a couple of hundreds), our technique can discover a large set of potential malicious domains (in the scale of up to tens of thousands).",
"The Domain Name System (DNS) is an essential protocol used by both legitimate Internet applications and cyber attacks. For example, botnets rely on DNS to support agile command and control infrastructures. An effective way to disrupt these attacks is to place malicious domains on a \"blocklist\" (or \"blacklist\") or to add a filtering rule in a firewall or network intrusion detection system. To evade such security countermeasures, attackers have used DNS agility, e.g., by using new domains daily to evade static blacklists and firewalls. In this paper we propose Notos, a dynamic reputation system for DNS. The premise of this system is that malicious, agile use of DNS has unique characteristics and can be distinguished from legitimate, professionally provisioned DNS services. Notos uses passive DNS query data and analyzes the network and zone features of domains. It builds models of known legitimate domains and malicious domains, and uses these models to compute a reputation score for a new domain indicative of whether the domain is malicious or legitimate. We have evaluated Notos in a large ISP's network with DNS traffic from 1.4 million users. Our results show that Notos can identify malicious domains with high accuracy (true positive rate of 96.8%) and low false positive rate (0.38%), and can identify these domains weeks or even months before they appear in public blacklists.",
"Network-wide activity is when one computer (the originator) touches many others (the targets). Motives for activity may be benign (mailing lists, CDNs, and research scanning), malicious (spammers and scanners for security vulnerabilities), or perhaps indeterminate (ad trackers). Knowledge of malicious activity may help anticipate attacks, and understanding benign activity may set a baseline or characterize growth. This paper identifies DNS backscatter as a new source of information about network-wide activity. Backscatter is the reverse DNS queries caused when targets or middleboxes automatically look up the domain name of the originator. Queries are visible to the authoritative DNS servers that handle reverse DNS. While the fraction of backscatter they see depends on the server's location in the DNS hierarchy, we show that activity that touches many targets appear even in sampled observations. We use information about the queriers to classify originator activity using machine-learning. Our algorithm has reasonable precision (70-80%) as shown by data from three different organizations operating DNS servers at the root or country-level. Using this technique we examine nine months of activity from one authority to identify trends in scanning, identifying bursts corresponding to Heartbleed and broad and continuous scanning of ssh.",
"A wide range of malicious activities rely on the domain name service (DNS) to manage their large, distributed networks of infected machines. As a consequence, the monitoring and analysis of DNS queries has recently been proposed as one of the most promising techniques to detect and blacklist domains involved in malicious activities (e.g., phishing, spam, botnets command-and-control, etc.). EXPOSURE is a system we designed to detect such domains in real time, by applying 15 unique features grouped in four categories. We conducted a controlled experiment with a large, real-world dataset consisting of billions of DNS requests. The extremely positive results obtained in the tests convinced us to implement our techniques and deploy it as a free, online service. In this article, we present the Exposure system and describe the results and lessons learned from 17 months of its operation. Over this amount of time, the service detected over 100K malicious domains. The statistics about the time of usage, number of queries, and target IP addresses of each domain are also published on a daily basis on the service Web page.",
"HTTP is a popular channel for malware to communicate with malicious servers (e.g., Command & Control, drive-by download, drop-zone), as well as to attack benign servers. By utilizing HTTP requests, malware easily disguises itself under a large amount of benign HTTP traffic. Thus, identifying malicious HTTP activities is challenging. We leverage an insight that cyber criminals are increasingly using dynamic malicious infrastructures with multiple servers to be efficient and anonymous in (i) malware distribution (using redirectors and exploit servers), (ii) control (using C&C servers) and (iii) monetization (using payment servers), and (iv) being robust against server takedowns (using multiple backups for each type of servers). Instead of focusing on detecting individual malicious domains, we propose a complementary approach to identify a group of closely related servers that are potentially involved in the same malware campaign, which we term as Associated Server Herd (ASH). Our solution, SMASH (Systematic Mining of Associated Server Herds), utilizes an unsupervised framework to infer malware ASHs by systematically mining the relations among all servers from multiple dimensions. We build a prototype system of SMASH and evaluate it with traces from a large ISP. The result shows that SMASH successfully infers a large number of previously undetected malicious servers and possible zero-day attacks, with low false positives. We believe the inferred ASHs provide a better global view of the attack campaign that may not be easily captured by detecting only individual servers.",
"The increasing sophistication of malicious software calls for new defensive techniques that are harder to evade, and are capable of protecting users against novel threats. We present AESOP, a scalable algorithm that identifies malicious executable files by applying Aesop's moral that \"a man is known by the company he keeps.\" We use a large dataset voluntarily contributed by the members of Norton Community Watch, consisting of partial lists of the files that exist on their machines, to identify close relationships between files that often appear together on machines. AESOP leverages locality-sensitive hashing to measure the strength of these inter-file relationships to construct a graph, on which it performs large scale inference by propagating information from the labeled files (as benign or malicious) to the preponderance of unlabeled files. AESOP attained early labeling of 99% of benign files and 79% of malicious files, over a week before they are labeled by the state-of-the-art techniques, with a 0.9961 true positive rate at flagging malware, at 0.0001 false positive rate.",
"Many malware families utilize domain generation algorithms (DGAs) to establish command and control (C&C) connections. While there are many methods to pseudorandomly generate domains, we focus in this paper on detecting (and generating) domains on a per-domain basis which provides a simple and flexible means to detect known DGA families. Recent machine learning approaches to DGA detection have been successful on fairly simplistic DGAs, many of which produce names of fixed length. However, models trained on limited datasets are somewhat blind to new DGA variants. In this paper, we leverage the concept of generative adversarial networks to construct a deep learning based DGA that is designed to intentionally bypass a deep learning based detector. In a series of adversarial rounds, the generator learns to generate domain names that are increasingly more difficult to detect. In turn, a detector model updates its parameters to compensate for the adversarially generated domains. We test the hypothesis of whether adversarially generated domains may be used to augment training sets in order to harden other machine learning models against yet-to-be-observed DGAs. We detail solutions to several challenges in training this character-based generative adversarial network. In particular, our deep learning architecture begins as a domain name auto-encoder (encoder + decoder) trained on domains in the Alexa one million. Then the encoder and decoder are reassembled competitively in a generative adversarial network (detector + generator), with novel neural architectures and training strategies to improve convergence.",
"",
"Enterprises routinely collect terabytes of security relevant data, e.g., network logs and application logs, for several reasons such as cheaper storage, forensic analysis, and regulatory compliance. Analyzing these big data sets to identify actionable security information and hence to improve enterprise security, however, is a relatively unexplored area. In this paper, we introduce a system to detect malicious domains accessed by an enterprise’s hosts from the enterprise’s HTTP proxy logs. Specifically, we model the detection problem as a graph inference problem: we construct a host-domain graph from proxy logs, seed the graph with minimal ground truth information, and then use belief propagation to estimate the marginal probability of a domain being malicious. Our experiments on data collected at a global enterprise show that our approach scales well, achieves high detection rates with low false positive rates, and identifies previously unknown malicious domains when compared with state-of-the-art systems. Since malware infections inside an enterprise spread primarily via malware domain accesses, our approach can be used to detect and prevent malware infections.",
"In recent years Internet miscreants have been leveraging the DNS to build malicious network infrastructures for malware command and control. In this paper we propose a novel detection system called Kopis for detecting malware-related domain names. Kopis passively monitors DNS traffic at the upper levels of the DNS hierarchy, and is able to accurately detect malware domains by analyzing global DNS query resolution patterns. Compared to previous DNS reputation systems such as Notos [3] and Exposure [4], which rely on monitoring traffic from local recursive DNS servers, Kopis offers a new vantage point and introduces new traffic features specifically chosen to leverage the global visibility obtained by monitoring network traffic at the upper DNS hierarchy. Unlike previous work Kopis enables DNS operators to independently (i.e., without the need of data from other networks) detect malware domains within their authority, so that action can be taken to stop the abuse. Moreover, unlike previous work, Kopis can detect malware domains even when no IP reputation information is available. We developed a proof-of-concept version of Kopis, and experimented with eight months of real-world data. Our experimental results show that Kopis can achieve high detection rates (e.g., 98.4%) and low false positive rates (e.g., 0.3% or 0.5%). In addition Kopis is able to detect new malware domains days or even weeks before they appear in public blacklists and security forums, and allowed us to discover the rise of a previously unknown DDoS botnet based in China.",
"Botnets play major roles in a vast number of threats to network security, such as DDoS attacks, generation of spam emails, information theft. Detecting Botnets is a difficult task in due to the complexity and performance issues when analyzing the huge amount of data from real large-scale networks. In major Botnet malware, the use of Domain Generation Algorithms allows to decrease possibility to be detected using white list - blacklist scheme and thus DGA Botnets have higher survival. This paper proposes a DGA Botnet detection scheme based on DNS traffic analysis which utilizes semantic measures such as entropy, meaning the level of the domain, frequency of n-gram appearances and Mahalanobis distance for domain classification. The proposed method is an improvement of Phoenix botnet detection mechanism, where in the classification phase, the modified Mahalanobis distance is used instead of the original for classification. The clustering phase is based on modified k-means algorithm for archiving better effectiveness. The effectiveness of the proposed method was measured and compared with Phoenix, Linguistic and SVM Light methods. The experimental results show the accuracy of proposed Botnet detection scheme ranges from 90% to 99.97% depending on Botnet type."
]
} |
1711.00300 | 2766937060 | Inference based techniques are one of the major approaches to analyzing DNS data and detecting malicious domains. The key idea of inference techniques is to first define associations between domains based on features extracted from DNS data. Then, an inference algorithm is deployed to infer potential malicious domains based on their direct or indirect associations with known malicious ones. The way associations are defined is key to the effectiveness of an inference technique. It is desirable for them to be both accurate (i.e., avoid falsely associating domains with no meaningful connections) and to have good coverage (i.e., identify all associations between domains with meaningful connections). Due to the limited scope of information provided by DNS data, it becomes a challenge to design an association scheme that achieves both high accuracy and good coverage. In this paper, we propose a new association scheme to identify domains controlled by the same entity. Our key idea is an in-depth analysis of active DNS data to accurately separate public IPs from dedicated ones, which enables us to build high-quality associations between domains. Our scheme identifies many meaningful connections between domains that are discarded by existing state-of-the-art approaches. Our experimental results show that the proposed association scheme not only significantly improves the domain coverage compared to existing approaches but also achieves better detection accuracy. The existing path-based inference algorithm is specifically designed for DNS data analysis. It is effective but computationally expensive. As a solution, we investigate the effectiveness of combining our association scheme with the generic belief propagation algorithm. Through comprehensive experiments, we show that this approach offers significant efficiency and scalability improvements with only a minor negative impact on detection accuracy. 
| Many approaches @cite_36 @cite_31 @cite_3 @cite_25 , including Notos @cite_33 and EXPOSURE @cite_9 , identify malicious domains by building a classifier using local features extracted from passive DNS data along with other network information such as WHOIS records @cite_20 . Such approaches are effective as long as the local features used in the classification are not manipulated. However, it has been shown @cite_16 that many local features, such as TTL based features and patterns in domain names, are easy to manipulate, rendering such techniques less effective. Moreover, these approaches perform best when one has access to sensitive individual DNS queries, which are difficult to obtain. On the other hand, inference based approaches like ours can detect malicious domains with high accuracy using only aggregate DNS data, which is relatively easier to obtain. | {
"cite_N": [
"@cite_33",
"@cite_36",
"@cite_9",
"@cite_3",
"@cite_31",
"@cite_16",
"@cite_25",
"@cite_20"
],
"mid": [
"155384935",
"2057364987",
"2082550445",
"2528572867",
"",
"1527646163",
"2565766771",
"2073193131"
],
"abstract": [
"The Domain Name System (DNS) is an essential protocol used by both legitimate Internet applications and cyber attacks. For example, botnets rely on DNS to support agile command and control infrastructures. An effective way to disrupt these attacks is to place malicious domains on a \"blocklist\" (or \"blacklist\") or to add a filtering rule in a firewall or network intrusion detection system. To evade such security countermeasures, attackers have used DNS agility, e.g., by using new domains daily to evade static blacklists and firewalls. In this paper we propose Notos, a dynamic reputation system for DNS. The premise of this system is that malicious, agile use of DNS has unique characteristics and can be distinguished from legitimate, professionally provisioned DNS services. Notos uses passive DNS query data and analyzes the network and zone features of domains. It builds models of known legitimate domains and malicious domains, and uses these models to compute a reputation score for a new domain indicative of whether the domain is malicious or legitimate. We have evaluated Notos in a large ISP's network with DNS traffic from 1.4 million users. Our results show that Notos can identify malicious domains with high accuracy (true positive rate of 96.8%) and low false positive rate (0.38%), and can identify these domains weeks or even months before they appear in public blacklists.",
"Network-wide activity is when one computer (the originator) touches many others (the targets). Motives for activity may be benign (mailing lists, CDNs, and research scanning), malicious (spammers and scanners for security vulnerabilities), or perhaps indeterminate (ad trackers). Knowledge of malicious activity may help anticipate attacks, and understanding benign activity may set a baseline or characterize growth. This paper identifies DNS backscatter as a new source of information about network-wide activity. Backscatter is the reverse DNS queries caused when targets or middleboxes automatically look up the domain name of the originator. Queries are visible to the authoritative DNS servers that handle reverse DNS. While the fraction of backscatter they see depends on the server's location in the DNS hierarchy, we show that activity that touches many targets appear even in sampled observations. We use information about the queriers to classify originator activity using machine-learning. Our algorithm has reasonable precision (70-80%) as shown by data from three different organizations operating DNS servers at the root or country-level. Using this technique we examine nine months of activity from one authority to identify trends in scanning, identifying bursts corresponding to Heartbleed and broad and continuous scanning of ssh.",
"A wide range of malicious activities rely on the domain name service (DNS) to manage their large, distributed networks of infected machines. As a consequence, the monitoring and analysis of DNS queries has recently been proposed as one of the most promising techniques to detect and blacklist domains involved in malicious activities (e.g., phishing, spam, botnets command-and-control, etc.). EXPOSURE is a system we designed to detect such domains in real time, by applying 15 unique features grouped in four categories. We conducted a controlled experiment with a large, real-world dataset consisting of billions of DNS requests. The extremely positive results obtained in the tests convinced us to implement our techniques and deploy it as a free, online service. In this article, we present the Exposure system and describe the results and lessons learned from 17 months of its operation. Over this amount of time, the service detected over 100K malicious domains. The statistics about the time of usage, number of queries, and target IP addresses of each domain are also published on a daily basis on the service Web page.",
"Many malware families utilize domain generation algorithms (DGAs) to establish command and control (C&C) connections. While there are many methods to pseudorandomly generate domains, we focus in this paper on detecting (and generating) domains on a per-domain basis which provides a simple and flexible means to detect known DGA families. Recent machine learning approaches to DGA detection have been successful on fairly simplistic DGAs, many of which produce names of fixed length. However, models trained on limited datasets are somewhat blind to new DGA variants. In this paper, we leverage the concept of generative adversarial networks to construct a deep learning based DGA that is designed to intentionally bypass a deep learning based detector. In a series of adversarial rounds, the generator learns to generate domain names that are increasingly more difficult to detect. In turn, a detector model updates its parameters to compensate for the adversarially generated domains. We test the hypothesis of whether adversarially generated domains may be used to augment training sets in order to harden other machine learning models against yet-to-be-observed DGAs. We detail solutions to several challenges in training this character-based generative adversarial network. In particular, our deep learning architecture begins as a domain name auto-encoder (encoder + decoder) trained on domains in the Alexa one million. Then the encoder and decoder are reassembled competitively in a generative adversarial network (detector + generator), with novel neural architectures and training strategies to improve convergence.",
"",
"Automated bot/botnet detection is a difficult problem given the high level of attacker power. We propose a systematic approach for evaluating the evadability of detection methods. An evasion tactic has two associated costs: implementation complexity and effect on botnet utility. An evasion tactic's implementation complexity is based on the ease with which bot writers can incrementally modify current bots to evade detection. Modifying a bot in order to evade a detection method may result in a less useful botnet; to explore this, we identify aspects of botnets that impact their revenue-generating capability. For concreteness, we survey some leading automated bot/botnet detection methods, identify evasion tactics for each, and assess the costs of these tactics. We also reconsider assumptions about botnet control that underly many botnet detection methods.",
"Botnets play major roles in a vast number of threats to network security, such as DDoS attacks, generation of spam emails, information theft. Detecting Botnets is a difficult task in due to the complexity and performance issues when analyzing the huge amount of data from real large-scale networks. In major Botnet malware, the use of Domain Generation Algorithms allows to decrease possibility to be detected using white list - blacklist scheme and thus DGA Botnets have higher survival. This paper proposes a DGA Botnet detection scheme based on DNS traffic analysis which utilizes semantic measures such as entropy, meaning the level of the domain, frequency of n-gram appearances and Mahalanobis distance for domain classification. The proposed method is an improvement of Phoenix botnet detection mechanism, where in the classification phase, the modified Mahalanobis distance is used instead of the original for classification. The clustering phase is based on modified k-means algorithm for archiving better effectiveness. The effectiveness of the proposed method was measured and compared with Phoenix, Linguistic and SVM Light methods. The experimental results show the accuracy of proposed Botnet detection scheme ranges from 90% to 99.97% depending on Botnet type.",
"WHOIS is a long-established protocol for querying information about the 280M+ registered domain names on the Internet. Unfortunately, while such records are accessible in a \"human-readable\" format, they do not follow any consistent schema and thus are challenging to analyze at scale. Existing approaches, which rely on manual crafting of parsing rules and per-registrar templates, are inherently limited in coverage and fragile to ongoing changes in data representations. In this paper, we develop a statistical model for parsing WHOIS records that learns from labeled examples. Our model is a conditional random field (CRF) with a small number of hidden states, a large number of domain-specific features, and parameters that are estimated by efficient dynamic-programming procedures for probabilistic inference. We show that this approach can achieve extremely high accuracy (well over 99%) using modest amounts of labeled training data, that it is robust to minor changes in schema, and that it can adapt to new schema variants by incorporating just a handful of additional examples. Finally, using our parser, we conduct an exhaustive survey of the registration patterns found in 102M com domains."
]
} |
1711.00300 | 2766937060 | Inference based techniques are one of the major approaches to analyzing DNS data and detecting malicious domains. The key idea of inference techniques is to first define associations between domains based on features extracted from DNS data. Then, an inference algorithm is deployed to infer potential malicious domains based on their direct or indirect associations with known malicious ones. The way associations are defined is key to the effectiveness of an inference technique. It is desirable for them to be both accurate (i.e., avoid falsely associating domains with no meaningful connections) and to have good coverage (i.e., identify all associations between domains with meaningful connections). Due to the limited scope of information provided by DNS data, it becomes a challenge to design an association scheme that achieves both high accuracy and good coverage. In this paper, we propose a new association scheme to identify domains controlled by the same entity. Our key idea is an in-depth analysis of active DNS data to accurately separate public IPs from dedicated ones, which enables us to build high-quality associations between domains. Our scheme identifies many meaningful connections between domains that are discarded by existing state-of-the-art approaches. Our experimental results show that the proposed association scheme not only significantly improves the domain coverage compared to existing approaches but also achieves better detection accuracy. The existing path-based inference algorithm is specifically designed for DNS data analysis. It is effective but computationally expensive. As a solution, we investigate the effectiveness of combining our association scheme with the generic belief propagation algorithm. Through comprehensive experiments, we show that this approach offers significant efficiency and scalability improvements with only a minor negative impact on detection accuracy. 
| Inference based approaches have been proposed to complement classification based approaches by considering not only local network features but also the associations among domains. We already discussed the related work by @cite_59 and @cite_22 in and . | {
"cite_N": [
"@cite_22",
"@cite_59"
],
"mid": [
"2396855773",
"1983776999"
],
"abstract": [
"Malicious domains are key components to a variety of cyber attacks. Several recent techniques are proposed to identify malicious domains through analysis of DNS data. The general approach is to build classifiers based on DNS-related local domain features. One potential problem is that many local features, e.g., domain name patterns and temporal patterns, tend not to be robust. Attackers could easily alter these features to evade detection without much affecting their attack capabilities. In this paper, we take a complementary approach. Instead of focusing on local features, we propose to discover and analyze global associations among domains. The key challenges are (1) to build meaningful associations among domains; and (2) to use these associations to reason about the potential maliciousness of domains. For the first challenge, we take advantage of the modus operandi of attackers. To avoid detection, malicious domains exhibit dynamic behavior by, for example, frequently changing the malicious domain-IP resolutions and creating new domains. This makes it very likely for attackers to reuse resources. It is indeed commonly observed that over a period of time multiple malicious domains are hosted on the same IPs and multiple IPs host the same malicious domains, which creates intrinsic association among them. For the second challenge, we develop a graph-based inference technique over associated domains. Our approach is based on the intuition that a domain having strong associations with known malicious domains is likely to be malicious. Carefully established associations enable the discovery of a large set of new malicious domains using a very small set of previously known malicious ones. Our experiments over a public passive DNS database show that the proposed technique can achieve high true positive rates (over 95%) while maintaining low false positive rates (less than 0.5%). 
Further, even with a small set of known malicious domains (a couple of hundreds), our technique can discover a large set of potential malicious domains (in the scale of up to tens of thousands).",
"Enterprises routinely collect terabytes of security relevant data, e.g., network logs and application logs, for several reasons such as cheaper storage, forensic analysis, and regulatory compliance. Analyzing these big data sets to identify actionable security information and hence to improve enterprise security, however, is a relatively unexplored area. In this paper, we introduce a system to detect malicious domains accessed by an enterprise’s hosts from the enterprise’s HTTP proxy logs. Specifically, we model the detection problem as a graph inference problem: we construct a host-domain graph from proxy logs, seed the graph with minimal ground truth information, and then use belief propagation to estimate the marginal probability of a domain being malicious. Our experiments on data collected at a global enterprise show that our approach scales well, achieves high detection rates with low false positive rates, and identifies previously unknown malicious domains when compared with state-of-the-art systems. Since malware infections inside an enterprise spread primarily via malware domain accesses, our approach can be used to detect and prevent malware infections."
]
} |
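Both abstracts in the row above cast malicious-domain detection as graph inference: seed a host-domain (or domain-IP) bipartite graph with a few known labels, then run belief propagation so maliciousness flows across shared infrastructure. A minimal loopy-BP sketch over a toy domain-IP graph is below; the homophily potential, priors, and edge list are illustrative assumptions, not either paper's actual message schedule or parameters.

```python
# Hedged sketch: loopy belief propagation on a bipartite domain-IP graph.
# Labels are (benign, malicious); neighbors tend to share a label.
from collections import defaultdict

# Toy resolutions: domains connected to the IPs they resolved to.
edges = [("evil.example", "1.2.3.4"), ("shady.example", "1.2.3.4"),
         ("good.example", "5.6.7.8")]

# Priors over (benign, malicious); seed domains carry strong beliefs.
priors = defaultdict(lambda: (0.5, 0.5))
priors["evil.example"] = (0.05, 0.95)   # known-bad seed
priors["good.example"] = (0.95, 0.05)   # known-good seed

EPS = 0.1  # homophily edge potential: same label with weight 1 - EPS
POT = [[1 - EPS, EPS], [EPS, 1 - EPS]]

nodes = {n for e in edges for n in e}
nbrs = defaultdict(set)
for d, ip in edges:
    nbrs[d].add(ip); nbrs[ip].add(d)

# msgs[(src, dst)] = src's current message to dst over the two labels
msgs = {(a, b): (1.0, 1.0) for a in nodes for b in nbrs[a]}

def normalize(v):
    s = sum(v); return tuple(x / s for x in v)

for _ in range(20):  # iterate messages toward convergence
    new = {}
    for (src, dst) in msgs:
        # product of src's prior and incoming messages, excluding dst
        b = list(priors[src])
        for n in nbrs[src]:
            if n != dst:
                b[0] *= msgs[(n, src)][0]; b[1] *= msgs[(n, src)][1]
        # marginalize over src's label through the edge potential
        out = (b[0] * POT[0][0] + b[1] * POT[1][0],
               b[0] * POT[0][1] + b[1] * POT[1][1])
        new[(src, dst)] = normalize(out)
    msgs = new

def belief(n):
    b = list(priors[n])
    for m in nbrs[n]:
        b[0] *= msgs[(m, n)][0]; b[1] *= msgs[(m, n)][1]
    return normalize(b)

# shady.example shares an IP with the known-bad seed, so its
# malicious belief rises above 0.5 even though it has no local label.
print(belief("shady.example"))
```

This is the intuition the abstracts describe: an unlabeled domain co-hosted with a known malicious one inherits a high malicious marginal, while domains on clean infrastructure keep their benign prior.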
1711.00300 | 2766937060 | Inference based techniques are one of the major approaches to analyzing DNS data and detecting malicious domains. The key idea of inference techniques is to first define associations between domains based on features extracted from DNS data. Then, an inference algorithm is deployed to infer potential malicious domains based on their direct/indirect associations with known malicious ones. The way associations are defined is key to the effectiveness of an inference technique. It is desirable to be both accurate (i.e., avoid falsely associating domains with no meaningful connections) and with good coverage (i.e., identify all associations between domains with meaningful connections). Due to the limited scope of information provided by DNS data, it becomes a challenge to design an association scheme that achieves both high accuracy and good coverage. In this paper, we propose a new association scheme to identify domains controlled by the same entity. Our key idea is an in-depth analysis of active DNS data to accurately separate public IPs from dedicated ones, which enables us to build high-quality associations between domains. Our scheme identifies many meaningful connections between domains that are discarded by existing state-of-the-art approaches. Our experimental results show that the proposed association scheme not only significantly improves the domain coverage compared to existing approaches but also achieves better detection accuracy. The existing path-based inference algorithm is specifically designed for DNS data analysis. It is effective but computationally expensive. As a solution, we investigate the effectiveness of combining our association scheme with the generic belief propagation algorithm. Through comprehensive experiments, we show that this approach offers significant efficiency and scalability improvements with only a minor negative impact on detection accuracy. 
| @cite_42 proposed a similar approach based on BP, but they utilized domain-IP associations in addition to domain-host associations in order to build the graph. Active DNS data used in our study can also be modeled as a bipartite graph, and BP can then be applied to that graph. However, we observe that the accuracy of the inference is unacceptably low, as the associations in the Active DNS data are much weaker than those in DNS query logs. | {
"cite_N": [
"@cite_42"
],
"mid": [
"2110983102"
],
"abstract": [
"Malware remains a major threat to nowadays Internet. In this paper, we propose a DNS graph mining-based malware detection approach. A DNS graph is composed of DNS nodes, which represent server IPs, client IPs, and queried domain names in the process of DNS resolution. After the graph construction, we next transform the problem of malware detection to the graph mining task of inferring graph nodes' reputation scores using the belief propagation algorithm. The nodes with lower reputation scores are inferred as those infected by malware with higher probability. For demonstration, we evaluate the proposed malware detection approach with real-world dataset. Our real-world dataset is collected from campus DNS servers for three months and we built a DNS graph consisting of 19,340,820 vertices and 24,277,564 edges. On the graph, we achieve a true positive rate of 80.63% with a false positive rate of 0.023%. With a false positive rate of 1.20%, the true positive rate was improved to 95.66%. We detected 88,592 hosts infected by malware or C&C servers, accounting for 5.47% of all hosts. Meanwhile, 117,971 domains are considered to be related to malicious activities, accounting for 1.5% of all domains. The results indicate that our method is efficient and effective in detecting malware."
]
} |
1711.00300 | 2766937060 | Inference based techniques are one of the major approaches to analyzing DNS data and detecting malicious domains. The key idea of inference techniques is to first define associations between domains based on features extracted from DNS data. Then, an inference algorithm is deployed to infer potential malicious domains based on their direct/indirect associations with known malicious ones. The way associations are defined is key to the effectiveness of an inference technique. It is desirable to be both accurate (i.e., avoid falsely associating domains with no meaningful connections) and with good coverage (i.e., identify all associations between domains with meaningful connections). Due to the limited scope of information provided by DNS data, it becomes a challenge to design an association scheme that achieves both high accuracy and good coverage. In this paper, we propose a new association scheme to identify domains controlled by the same entity. Our key idea is an in-depth analysis of active DNS data to accurately separate public IPs from dedicated ones, which enables us to build high-quality associations between domains. Our scheme identifies many meaningful connections between domains that are discarded by existing state-of-the-art approaches. Our experimental results show that the proposed association scheme not only significantly improves the domain coverage compared to existing approaches but also achieves better detection accuracy. The existing path-based inference algorithm is specifically designed for DNS data analysis. It is effective but computationally expensive. As a solution, we investigate the effectiveness of combining our association scheme with the generic belief propagation algorithm. Through comprehensive experiments, we show that this approach offers significant efficiency and scalability improvements with only a minor negative impact on detection accuracy. 
| SMASH @cite_53 is an unsupervised approach to infer groups of related servers involved in malware campaigns. It focuses on server side communication patterns extracted from HTTP traffic to systematically mine relations among servers from multiple dimensions. SMASH is novel in proposing a mechanism that utilizes connections among malicious servers to detect malware campaigns, in contrast to classification schemes that solely use individual server features. Our approach is similar to SMASH in establishing server associations as bases for identifying new malicious servers, but complements SMASH by utilizing active DNS data instead of HTTP traffic, which offers privacy benefits: active DNS data is a publicly available database and has no privacy or security liability associated with it. Additionally, instead of using second-level domain names, our approach establishes associations among fully qualified domain names. This relaxes the assumption in SMASH that servers with the same second-level domain belong to the same organization and hence, our approach detects malicious dynamic DNS servers. | {
"cite_N": [
"@cite_53"
],
"mid": [
"1567574980"
],
"abstract": [
"HTTP is a popular channel for malware to communicate with malicious servers (e.g., Command and Control, drive-by download, drop-zone), as well as to attack benign servers. By utilizing HTTP requests, malware easily disguises itself under a large amount of benign HTTP traffic. Thus, identifying malicious HTTP activities is challenging. We leverage an insight that cyber criminals are increasingly using dynamic malicious infrastructures with multiple servers to be efficient and anonymous in (i) malware distribution (using redirectors and exploit servers), (ii) control (using C&C servers), (iii) monetization (using payment servers), and (iv) being robust against server takedowns (using multiple backups for each type of servers). Instead of focusing on detecting individual malicious domains, we propose a complementary approach to identify a group of closely related servers that are potentially involved in the same malware campaign, which we term as Associated Server Herd (ASH). Our solution, SMASH (Systematic Mining of Associated Server Herds), utilizes an unsupervised framework to infer malware ASHs by systematically mining the relations among all servers from multiple dimensions. We build a prototype system of SMASH and evaluate it with traces from a large ISP. The result shows that SMASH successfully infers a large number of previously undetected malicious servers and possible zero-day attacks, with low false positives. We believe the inferred ASHs provide a better global view of the attack campaign that may not be easily captured by detecting only individual servers."
]
} |
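The main paper's abstract hinges on separating public IPs (shared hosting, parking services) from dedicated ones, and building domain-domain associations only through dedicated IPs. A minimal sketch of that idea is below, using a hypothetical distinct-domains-per-IP cutoff; the threshold and resolution data are illustrative assumptions, not the paper's actual separation procedure.

```python
# Hedged sketch: associate fully qualified domain names (FQDNs) that share a
# "dedicated" IP, where an IP hosting many distinct domains is treated as
# "public" and discarded. Threshold and data are toy values.
from collections import defaultdict
from itertools import combinations

resolutions = [  # (FQDN, IP) pairs, e.g. from active DNS
    ("a.dyn.example", "9.9.9.9"), ("b.dyn.example", "9.9.9.9"),
    ("site1.example", "8.8.8.8"), ("site2.example", "8.8.8.8"),
    ("site3.example", "8.8.8.8"), ("site4.example", "8.8.8.8"),
]

PUBLIC_IP_THRESHOLD = 3  # hypothetical cutoff on distinct domains per IP

domains_on_ip = defaultdict(set)
for fqdn, ip in resolutions:
    domains_on_ip[ip].add(fqdn)

associations = set()
for ip, fqdns in domains_on_ip.items():
    if len(fqdns) <= PUBLIC_IP_THRESHOLD:  # keep only dedicated IPs
        for d1, d2 in combinations(sorted(fqdns), 2):
            associations.add((d1, d2))

# 9.9.9.9 hosts two domains -> they become associated;
# 8.8.8.8 hosts four -> treated as public, so no associations through it.
print(associations)
```

Working at the FQDN level, as in the snippet, is what lets the paper's scheme avoid SMASH's assumption that all servers under one second-level domain belong to the same organization.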
1711.00300 | 2766937060 | Inference based techniques are one of the major approaches to analyzing DNS data and detecting malicious domains. The key idea of inference techniques is to first define associations between domains based on features extracted from DNS data. Then, an inference algorithm is deployed to infer potential malicious domains based on their direct/indirect associations with known malicious ones. The way associations are defined is key to the effectiveness of an inference technique. It is desirable to be both accurate (i.e., avoid falsely associating domains with no meaningful connections) and with good coverage (i.e., identify all associations between domains with meaningful connections). Due to the limited scope of information provided by DNS data, it becomes a challenge to design an association scheme that achieves both high accuracy and good coverage. In this paper, we propose a new association scheme to identify domains controlled by the same entity. Our key idea is an in-depth analysis of active DNS data to accurately separate public IPs from dedicated ones, which enables us to build high-quality associations between domains. Our scheme identifies many meaningful connections between domains that are discarded by existing state-of-the-art approaches. Our experimental results show that the proposed association scheme not only significantly improves the domain coverage compared to existing approaches but also achieves better detection accuracy. The existing path-based inference algorithm is specifically designed for DNS data analysis. It is effective but computationally expensive. As a solution, we investigate the effectiveness of combining our association scheme with the generic belief propagation algorithm. Through comprehensive experiments, we show that this approach offers significant efficiency and scalability improvements with only a minor negative impact on detection accuracy. 
| Very recently, @cite_38 analyze and detect bulletproof hosting (BPH) services, which provide Internet miscreants with infrastructure that is resilient to complaints of illicit activities, on legitimate hosting providers. They shed light on how BPH services have moved from self-managed monolithic infrastructure to sub-allocations within third-party hosting services in order to evade reputation based detection such as BGP ranking and ASwatch @cite_29 . They detect malicious sub-allocations within ASs as opposed to malicious ASs @cite_39 @cite_43 . They rely on two key datasets: a Whois dataset, which is used to spot sub-allocations, and a passive DNS dataset, which is used to extract signals indicating malicious behavior. While their approach detects a very specific subset of malicious IPs, our approach is designed to detect any malicious domains in the wild that behave similarly to known malicious domains. We believe that their detection accuracy could be improved by adopting our techniques of detecting malicious domains. | {
"cite_N": [
"@cite_38",
"@cite_29",
"@cite_43",
"@cite_39"
],
"mid": [
"2695177217",
"1984918390",
"",
"2134928993"
],
"abstract": [
"BulletProof Hosting (BPH) services provide criminal actors with technical infrastructure that is resilient to complaints of illicit activities, which serves as a basic building block for streamlining numerous types of attacks. Anecdotal reports have highlighted an emerging trend of these BPH services reselling infrastructure from lower end service providers (hosting ISPs, cloud hosting, and CDNs) instead of from monolithic BPH providers. This has rendered many of the prior methods of detecting BPH less effective, since instead of the infrastructure being highly concentrated within a few malicious Autonomous Systems (ASes) it is now agile and dispersed across a larger set of providers that have a mixture of benign and malicious clients. In this paper, we present the first systematic study on this new trend of BPH services. By collecting and analyzing a large amount of data (25 snapshots of the entire Whois IPv4 address space, 1.5 TB of passive DNS data, and longitudinal data from several blacklist feeds), we are able to identify a set of new features that uniquely characterizes BPH on sub-allocations and that are costly to evade. Based upon these features, we train a classifier for detecting malicious sub-allocated network blocks, achieving 98% recall and a 1.5% false discovery rate according to our evaluation. Using a conservatively trained version of our classifier, we scan the whole IPv4 address space and detect 39K malicious network blocks. This allows us to perform a large-scale study of the BPH service ecosystem, which sheds light on this underground business strategy, including patterns of network blocks being recycled and malicious clients being migrated to different network blocks, in an effort to evade IP address based blacklisting. Our study highlights the trend of agile BPH services and points to potential methods of detecting and mitigating this emerging threat.",
"Presented on November 18, 2016 at 12:00 p.m. in the Klaus Advanced Computing Building, Room 1116W.",
"",
"This paper presents a large scale longitudinal study of the spatial and temporal features of malicious source addresses. The basis of our study is a 402-day trace of over 7 billion Internet intrusion attempts provided by DShield.org, which includes 160 million unique source addresses. Specifically, we focus on spatial distributions and temporal characteristics of malicious sources. First, we find that one out of 27 hosts is potentially a scanning source among the 2^32 IPv4 addresses. We then show that malicious sources have a persistent, non-uniform spatial distribution. That is, more than 80% of the sources send packets from the same 20% of the IPv4 address space over time. We also find that 7.3% of malicious source addresses are unroutable, and that some source addresses are correlated. Next, we show that most sources have a short lifetime. 57.9% of the source addresses appear only once in the trace, and 90% of source addresses appear less than 5 times. These results have implications for both attacks and defenses."
]
} |