| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1110.4493
|
2951532727
|
We introduce the first grammar-compressed representation of a sequence that supports searches in time that depends only logarithmically on the size of the grammar. Given a text @math that is represented by a (context-free) grammar of @math (terminal and nonterminal) symbols and size @math (measured as the sum of the lengths of the right hands of the rules), a basic grammar-based representation of @math takes @math bits of space. Our representation requires @math bits of space, for any @math . It can find the positions of the @math occurrences of a pattern of length @math in @math in @math time, and extract any substring of length @math of @math in time @math , where @math is the height of the grammar tree.
|
More recently, a self-index based on LZ77 compression has been developed @cite_15 . Given a parsing of @math into @math phrases, the self-index uses @math bits of space, and searches in time @math , where @math is the nesting of the parsing. Extraction requires @math time. Experiments on repetitive collections @cite_17 @cite_23 show that the grammar-based compressor @cite_21 can be competitive with the best classical self-index adapted to repetitive collections @cite_33 but, at least in that particular implementation, it is not competitive with the LZ77-based self-index @cite_15 .
|
{
"cite_N": [
"@cite_33",
"@cite_21",
"@cite_23",
"@cite_15",
"@cite_17"
],
"mid": [
"2147217460",
"1570532020",
"2142590039",
"1528475610",
"2041824945"
],
"abstract": [
"A repetitive sequence collection is one where portions of a base sequence of length n are repeated many times with small variations, forming a collection of total length N. Examples of such collections are version control data and genome sequences of individuals, where the differences can be expressed by lists of basic edit operations. Flexible and efficient data analysis on such a typically huge collection is plausible using suffix trees. However, the suffix tree occupies O(N log N) bits, which very soon inhibits in-memory analyses. Recent advances in full-text self-indexing reduce the space of the suffix tree to O(N log σ) bits, where σ is the alphabet size. In practice, the space reduction is more than 10-fold, for example on the suffix tree of the Human Genome. However, this reduction factor remains constant when more sequences are added to the collection. We develop a new family of self-indexes suited for the repetitive sequence collection setting. Their expected space requirement depends only on the length n of the base sequence and the number s of variations in its repeated copies. That is, the space reduction factor is no longer constant, but depends on N/n. We believe the structures developed in this work will provide a fundamental basis for storage and retrieval of individual genomes as they become available due to rapid progress in the sequencing technologies.",
"Straight-line programs (SLPs) offer powerful text compression by representing a text T[1,u] in terms of a restricted context-free grammar of n rules, so that T can be recovered in O(u) time. However, the problem of operating the grammar in compressed form has not been studied much. We present a grammar representation whose size is of the same order of that of a plain SLP representation, and can answer other queries apart from expanding nonterminals. This can be of independent interest. We then extend it to achieve the first grammar representation able of extracting text substrings, and of searching the text for patterns, in time o(n). We also give byproducts on representing binary relations.",
"We introduce new compressed inverted indexes for highly repetitive document collections. They are based on run-length, Lempel-Ziv, or grammar-based compression of the differential inverted lists, instead of gap-encoding them as is the usual practice. We show that our compression methods significantly reduce the space achieved by classical compression, at the price of moderate slowdowns. Moreover, many of our methods are universal, that is, they do not need to know the versioning structure of the collection. We also introduce compressed self-indexes in the comparison. We show that techniques can compress much further, using a small fraction of the space required by our new inverted indexes, yet they are orders of magnitude slower.",
"We introduce the first self-index based on the Lempel-Ziv 1977 compression format (LZ77). It is particularly competitive for highly repetitive text collections such as sequence databases of genomes of related species, software repositories, versioned document collections, and temporal text databases. Such collections are extremely compressible but classical self-indexes fail to capture that source of compressibility. Our self-index takes in practice a few times the space of the text compressed with LZ77 (as little as 2.5 times), extracts 1-2 million characters of the text per second, and finds patterns at a rate of 10-50 microseconds per occurrence. It is smaller (up to one half) than the best current self-index for repetitive collections, and faster in many cases.",
"The study of compressed storage schemes for highly repetitive sequence collections has been recently boosted by the availability of cheaper sequencing technologies and the flood of data they promise to generate. Such a storage scheme may range from the simple goal of retrieving whole individual sequences to the more advanced one of providing fast searches in the collection. In this paper we study alternatives to implement a particularly popular index, namely, the one able of finding all the positions in the collection of substrings of fixed length ( @math -grams). We introduce two novel techniques and show they constitute practical alternatives to handle this scenario. They excel particularly in two cases: when @math is small (up to 6), and when the collection is extremely repetitive (less than 0.01 mutations)."
]
}
|
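The row above compares self-indexes parameterized by the number of phrases @math in an LZ77 parsing. As a hedged illustration of where that phrase count comes from, here is a naive greedy LZ77 factorization; the function name and the non-self-referential variant are our choices for exposition, not the cited implementation, which uses far faster suffix-structure-based parsers.

```python
def lz77_parse(text):
    """Greedy LZ77 factorization: each phrase is the longest prefix of the
    remaining suffix that occurs strictly earlier in the text, plus one
    fresh character. Naive O(n^2) scan for illustration only."""
    phrases = []
    i, n = 0, len(text)
    while i < n:
        best_len, best_src = 0, 0
        # search every earlier start position for the longest match
        for j in range(i):
            l = 0
            # j + l < i keeps the copy source strictly before position i
            while i + l < n and text[j + l] == text[i + l] and j + l < i:
                l += 1
            if l > best_len:
                best_len, best_src = l, j
        nxt = text[i + best_len] if i + best_len < n else ''
        phrases.append((best_src, best_len, nxt))
        i += best_len + 1
    return phrases
```

The number of triples returned is the @math appearing in the space and time bounds above; highly repetitive texts yield very few phrases.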
1110.4493
|
2951532727
|
We introduce the first grammar-compressed representation of a sequence that supports searches in time that depends only logarithmically on the size of the grammar. Given a text @math that is represented by a (context-free) grammar of @math (terminal and nonterminal) symbols and size @math (measured as the sum of the lengths of the right hands of the rules), a basic grammar-based representation of @math takes @math bits of space. Our representation requires @math bits of space, for any @math . It can find the positions of the @math occurrences of a pattern of length @math in @math in @math time, and extract any substring of length @math of @math in time @math , where @math is the height of the grammar tree.
|
In the rest of the paper we describe how this structure operates. First, we preprocess the grammar to enforce several invariants useful to ensure our time complexities. Then we use a data structure for labeled binary relations @cite_21 to find the "primary" occurrences of @math , that is, those formed when concatenating symbols in the right hand of a rule. To get rid of the factor @math in this part of the search, we introduce a new technique to extract the first @math symbols of the expansion of any nonterminal in time @math . To find the "secondary" occurrences (i.e., those that are found as the result of the nonterminal containing primary occurrences being mentioned elsewhere), we use a pruned representation of the parse tree of @math . This tree is traversed upwards for each secondary occurrence to report. The grammar invariants introduced ensure that those traversals amortize to a constant number of steps per occurrence reported. In this way we get rid of the factor @math on the secondary occurrences too.
|
{
"cite_N": [
"@cite_21"
],
"mid": [
"1570532020"
],
"abstract": [
"Straight-line programs (SLPs) offer powerful text compression by representing a text T[1,u] in terms of a restricted context-free grammar of n rules, so that T can be recovered in O(u) time. However, the problem of operating the grammar in compressed form has not been studied much. We present a grammar representation whose size is of the same order of that of a plain SLP representation, and can answer other queries apart from expanding nonterminals. This can be of independent interest. We then extend it to achieve the first grammar representation able of extracting text substrings, and of searching the text for patterns, in time o(n). We also give byproducts on representing binary relations."
]
}
|
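The related_work cell above hinges on extracting a prefix of a nonterminal's expansion. A minimal sketch of that operation on a plain straight-line program follows; the dictionary encoding and function name are ours, and this simple stack walk costs on the order of the prefix length times the grammar height, whereas the cited structure adds preprocessing to remove the height factor.

```python
def expand_prefix(rules, symbol, k):
    """Return the first k characters of the expansion of `symbol` in an SLP.
    `rules` maps each nonterminal to the list of symbols on its right-hand
    side; terminals are plain one-character strings."""
    out = []
    stack = [symbol]
    while stack and len(out) < k:
        s = stack.pop()
        if s in rules:
            # nonterminal: push children reversed so the leftmost pops first
            stack.extend(reversed(rules[s]))
        else:
            # terminal: emit one character of the expansion
            out.append(s)
    return ''.join(out)
```

For example, with S → AB, A → aB, B → bc, the expansion of S is "abcbc", and asking for its first three characters touches only the leftmost part of the parse tree.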
1110.4719
|
1778569101
|
This paper introduces the SEQ BIN meta-constraint with a polytime algorithm achieving generalized arc-consistency according to some properties. SEQ BIN can be used for encoding counting constraints such as CHANGE, SMOOTH or INCREASING NVALUE. For some of these constraints and some of their variants GAC can be enforced with a time and space complexity linear in the sum of domain sizes, which improves or equals the best known results of the literature.
|
Finally, some techniques can be compared to our generic GAC algorithm, namely a GAC algorithm in @math for [page 57] Hellsten04 , where @math is the total number of values in the domains of @math . Moreover, our GAC algorithm generalizes, to a class of counting constraints, the ad-hoc GAC algorithm of @cite_5 , without degrading time and space complexity in the case where represents .
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"1570494132"
],
"abstract": [
"This paper introduces the Increasing_Nvalue constraint, which restricts the number of distinct values assigned to a sequence of variables so that each variable in the sequence is less than or equal to its successor. This constraint is a specialization of the Nvalue constraint, motivated by symmetry breaking. Propagating the Nvalue constraint is known as an NP-hard problem. However, we show that the chain of non strict inequalities on the variables makes the problem polynomial. We propose an algorithm achieving generalized arc-consistency in O(ΣDi) time, where ΣDi is the sum of domain sizes. This algorithm is an improvement of filtering algorithms obtained by the automaton-based or the Slide-based reformulations. We evaluate our constraint on a resource allocation problem."
]
}
|
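Generalized arc-consistency (GAC), the property the row above is about, has a simple if exponential specification: a value survives in a domain iff some full satisfying assignment uses it. The brute-force sketch below is our own illustration of that definition; the cited algorithms achieve the same pruning for Increasing_Nvalue in time linear in the sum of domain sizes.

```python
from itertools import product

def gac_filter(domains, constraint):
    """Prune each domain to the values supported by at least one satisfying
    assignment. Exponential enumeration: a specification of GAC, not a
    practical propagator."""
    supported = [set() for _ in domains]
    for assignment in product(*domains):
        if constraint(assignment):
            for i, v in enumerate(assignment):
                supported[i].add(v)
    return [sorted(d & s) for d, s in zip(map(set, domains), supported)]

def increasing_nvalue(target):
    """Chain x1 <= x2 <= ... with exactly `target` distinct values."""
    def check(xs):
        return all(a <= b for a, b in zip(xs, xs[1:])) and len(set(xs)) == target
    return check
```

With domains [{1,2}, {1,2,3}, {2,3}] and one distinct value required, only the all-2 assignment survives, so every domain is pruned to {2}.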
1110.3563
|
1611855301
|
Finding a good clustering of vertices in a network, where vertices in the same cluster are more tightly connected than those in different clusters, is a useful, important, and well-studied task. Many clustering algorithms scale well; however, they are not designed to operate upon internet-scale networks with billions of nodes or more. We study one of the fastest and most memory-efficient algorithms possible: clustering based on the connected components in a random edge-induced subgraph. When defining the cost of a clustering to be its distance from such a random clustering, we show that this surprisingly simple algorithm gives a solution that is within an expected factor of two or three of optimal with either of two natural distance functions. In fact, this approximation guarantee works for any problem where there is a probability distribution on clusterings. We then examine the behavior of this algorithm in the context of social network trust inference.
|
There are many trust inference algorithms that take advantage of given trust values and the structure of a social network, including Advogato @cite_26 , Appleseed @cite_37 , Sunny @cite_11 , and Moletrust @cite_27 . These algorithms use trust that is assigned on a continuous scale (e.g. 1-10). Trust can also be treated as a probability. This approach has been used in a number of algorithms, including @cite_35 @cite_31 @cite_0 @cite_32 . The difficulty of generating these probabilities, using influence as a proxy for trust, was addressed in @cite_19 . In our research, we work with probabilities that are given, but those derived from other methods could also be used in our algorithms.
|
{
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_26",
"@cite_32",
"@cite_0",
"@cite_19",
"@cite_27",
"@cite_31",
"@cite_11"
],
"mid": [
"1531246938",
"2101902057",
"1923357748",
"2104661673",
"1922376695",
"2073926352",
"2510391126",
"2142633542",
"2017992786"
],
"abstract": [
"In open settings, the participants are autonomous and there is no central authority to ensure the felicity of their interactions. When agents interact in such settings, each relies upon being able to model the trustworthiness of the agents with whom it interacts. Fundamentally, such models must consider the past behavior of the other parties in order to predict their future behavior. Further, it is sensible for the agents to share information via referrals to trustworthy agents. Much progress has recently been made on probabilistic trust models including those that support the aggregation of information from multiple sources. However, current models do not support trust updates, leaving updates to be handled in an ad hoc manner. This paper proposes a trust representation that combines probabilities and certainty (defined as a function of a probability-certainty density function). Further, it offers a trust update mechanism to estimate the trustworthiness of referrers. This paper describes a testbed that goes beyond existing testbeds to enable the evaluation of a composite probability-certainty model. It then evaluates the proposed trust model showing that the trust model can (a) estimate trustworthiness of damping and capricious agents correctly, (b) update trust values of referrers accurately, and (c) resolve the conflicts in referral networks by certainty discounting.",
"Semantic Web endeavors have mainly focused on issues pertaining to knowledge representation and ontology design. However, besides understanding information metadata stated by subjects, knowing about their credibility becomes equally crucial. Hence, trust and trust metrics, conceived as computational means to evaluate trust relationships between individuals, come into play. Our major contributions to semantic Web trust management are twofold. First, we introduce our classification scheme for trust metrics along various axes and discuss advantages and drawbacks of existing approaches for semantic Web scenarios. Hereby, we devise our advocacy for local group trust metrics, guiding us to the second part which presents Appleseed, our novel proposal for local group trust computation. Compelling in its simplicity, Appleseed borrows many ideas from spreading activation models in psychology and relates their concepts to trust evaluation in an intuitive fashion.",
"This paper investigates the role of trust metrics in attack-resistant public key certification. We present an analytical framework for understanding the effectiveness of trust metrics in resisting attacks, including a characterization of the space of possible attacks. Within this framework, we establish the theoretical best case for a trust metric. Finally, we present a practical trust metric based on network flow that meets this theoretical bound.",
"Trust propagation is the principle by which new trust relationships can be derived from pre-existing trust relationship. Trust transitivity is the most explicit form of trust propagation, meaning for example that if Alice trusts Bob, and Bob trusts Claire, then by transitivity, Alice will also trust Claire. This assumes that Bob recommends Claire to Alice. Trust fusion is also an important element in trust propagation, meaning that Alice can combine Bob's recommendation with her own personal experience in dealing with Claire, or with other recommendations about Claire, in order to derive a more reliable measure of trust in Claire. These simple principles, which are essential for human interaction in business and everyday life, manifests itself in many different forms. This paper investigates possible formal models that can be implemented using belief reasoning based on subjective logic. With good formal models, the principles of trust propagation can be ported to online communities of people, organisations and software agents, with the purpose of enhancing the quality of those communities.",
"This research aims to develop a model of trust and reputation that will ensure good interactions amongst software agents in large scale open systems in particular. The following are key drivers for our model: (1) agents may be self-interested and may provide false accounts of experiences with other agents if it is beneficial for them to do so; (2) agents will need to interact with other agents with which they have no past experience. Against this background, we have developed TRAVOS (Trust and Reputation model for Agent-based Virtual OrganisationS) which models an agent's trust in an interaction partner. Specifically, trust is calculated using probability theory taking account of past interactions between agents. When there is a lack of personal experience between agents, the model draws upon reputation information gathered from third parties. In this latter case, we pay particular attention to handling the possibility that reputation information may be inaccurate.",
"Recently, there has been tremendous interest in the phenomenon of influence propagation in social networks. The studies in this area assume they have as input to their problems a social graph with edges labeled with probabilities of influence between users. However, the question of where these probabilities come from or how they can be computed from real social network data has been largely ignored until now. Thus it is interesting to ask whether from a social graph and a log of actions by its users, one can build models of influence. This is the main problem attacked in this paper. In addition to proposing models and algorithms for learning the model parameters and for testing the learned models to make predictions, we also develop techniques for predicting the time by which a user may be expected to perform an action. We validate our ideas and techniques using the Flickr data set consisting of a social graph with 1.3M nodes, 40M edges, and an action log consisting of 35M tuples referring to 300K distinct actions. Beyond showing that there is genuine influence happening in a real social network, we show that our techniques have excellent prediction performance.",
"Recommender Systems (RS) suggest to users items they will like based on their past opinions. Collaborative Filtering (CF) is the most used technique and works by recommending to the active user items appreciated by similar users. However the sparseness of user profiles often prevent the computation of user similarity. Moreover CF doesn’t take into account the reliability of the other users. In this paper 1 we present a real world application, namely moleskiing.it, in which both of these conditions are critic to deliver personalized recommendations. A blog oriented architecture collects user experiences on ski mountaineering and their opinions on other users. Exploitation of Trust Metrics allows to present only relevant and reliable information according to the user’s personal point of view of other authors trustworthiness. Differently from the notion of authority, we claim that trustworthiness is a user centered notion that requires the computation of personalized metrics. We also present an open information exchange architecture that makes use of Semantic Web formats to guarantee interoperability between ski mountaineering communities. 1 This work is based on an earlier work: Trust-enhanced Recommender System",
"The World Wide Web has transformed into an environment where users both produce and consume information. In order to judge the validity of information, it is important to know how trustworthy its creator is. Since no individual can have direct knowledge of more than a small fraction of information authors, methods for inferring trust are needed. We propose a new trust inference scheme based on the idea that a trust network can be viewed as a random graph, and a chain of trust as a path in that graph. In addition to having an intuitive interpretation, our algorithm has several advantages, noteworthy among which is the creation of an inferred trust-metric space where the shorter the distance between two people, the higher their trust. Metric spaces have rigorous algorithms for clustering, visualization, and related problems, any of which is directly applicable to our results.",
"In this article, we describe a new approach that gives an explicit probabilistic interpretation for social networks. In particular, we focus on the observation that many existing Web-based trust-inference algorithms conflate the notions of “trust” and “confidence,” and treat the amalgamation of the two concepts to compute the trust value associated with a social relationship. Unfortunately, the result of such an algorithm that merges trust and confidence is not a trust value, but rather a new variable in the inference process. Thus, it is hard to evaluate the outputs of such an algorithm in the context of trust inference. This article first describes a formal probabilistic network model for social networks that allows us to address that issue. Then we describe SUNNY, a new trust inference algorithm that uses probabilistic sampling to separately estimate trust information and our confidence in the trust estimate and use the two values in order to compute an estimate of trust based on only those information sources with the highest confidence estimates. We present an experimental evaluation of SUNNY. In our experiments, SUNNY produced more accurate trust estimates than the well-known trust inference algorithm TidalTrust, demonstrating its effectiveness. Finally, we discuss the implications these results will have on systems designed for personalizing content and making recommendations."
]
}
|
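Several of the cited works treat trust as a probability, and @cite_31 in particular views a trust network as a random graph in which a chain of trust is a path. A Monte-Carlo sketch of that reading follows; the function and parameter names are ours, and this is an illustration of the random-graph viewpoint, not any specific published algorithm.

```python
import random

def sampled_trust(nodes, edges, source, sink, trials=2000, seed=1):
    """Estimate the probability that `sink` is reachable from `source` when
    each trust edge (u, v, p) is kept independently with probability p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # sample one realization of the random graph
        adj = {u: [] for u in nodes}
        for u, v, p in edges:
            if rng.random() < p:
                adj[u].append(v)
        # BFS from source in the sampled subgraph
        seen, frontier = {source}, [source]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v)
                        nxt.append(v)
            frontier = nxt
        hits += sink in seen
    return hits / trials
```

The estimate converges to the true reachability probability as the number of trials grows; shorter, higher-probability paths dominate, matching the intuition that closer nodes are more trusted.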
1110.3563
|
1611855301
|
Finding a good clustering of vertices in a network, where vertices in the same cluster are more tightly connected than those in different clusters, is a useful, important, and well-studied task. Many clustering algorithms scale well; however, they are not designed to operate upon internet-scale networks with billions of nodes or more. We study one of the fastest and most memory-efficient algorithms possible: clustering based on the connected components in a random edge-induced subgraph. When defining the cost of a clustering to be its distance from such a random clustering, we show that this surprisingly simple algorithm gives a solution that is within an expected factor of two or three of optimal with either of two natural distance functions. In fact, this approximation guarantee works for any problem where there is a probability distribution on clusterings. We then examine the behavior of this algorithm in the context of social network trust inference.
|
The results of these algorithms have a wide range of applications. Recommender systems are a common application, where computed trust values are used in place of traditional user similarity measures to compute recommendations (e.g. @cite_28 @cite_20 @cite_3 ). In @cite_2 , the authors present a technique for using trust to estimate the of information that is presented, which in turn has applications for assessing information quality, particularly on the Semantic Web. More specific applications of that idea include using trust for semantic web service composition @cite_7 .
|
{
"cite_N": [
"@cite_7",
"@cite_28",
"@cite_3",
"@cite_2",
"@cite_20"
],
"mid": [
"1562666771",
"1980202081",
"1601015633",
"2159296364",
"2024811782"
],
"abstract": [
"This paper describes how to generate compositions of semantic Web services using social trust information from user ratings of the services. We present a taxonomy of features, such as interoperability, availability, privacy, security, and others. We describe a way to compute social trust in OWL-S style semantic Web services. Our formalism exploits the users' ratings of the services and execution characteristics of those services. We describe our service-composition algorithm, called Trusty, that is based on this formalism. We discuss the formal properties of Trusty and our implementation of the algorithm. We present our experiments in which we compared Trusty with SHOP2, a well-known AI planning algorithm that has been successfully used for OWL-S style service composition. Our results demonstrate that Trusty generates more trustworthy compositions than SHOP2.",
"Recommender systems have proven to be an important response to the information overload problem, by providing users with more proactive and personalized information services. And collaborative filtering techniques have proven to be an vital component of many such recommender systems as they facilitate the generation of high-quality recom-mendations by leveraging the preferences of communities of similar users. In this paper we suggest that the traditional emphasis on user similarity may be overstated. We argue that additional factors have an important role to play in guiding recommendation. Specifically we propose that the trustworthiness of users must be an important consideration. We present two computational models of trust and show how they can be readily incorporated into standard collaborative filtering frameworks in a variety of ways. We also show how these trust models can lead to improved predictive accuracy during recommendation.",
"Social networks are growing in number and size, with hundreds of millions of user accounts among them. One added benefit of these networks is that they allow users to encode more information about their relationships than just stating who they know. In this work, we are particularly interested in trust relationships, and how they can be used in designing interfaces. In this paper, we present FilmTrust, a website that uses trust in web-based social networks to create predictive movie recommendations. Using the FilmTrust system as a foundation, we show that these recommendations are more accurate than other techniques when the user's opinions about a film are divergent from the average. We discuss this technique both as an application of social network analysis, as well as how it suggests other analyses that can be performed to help improve collaborative filtering algorithms of all types.",
"We consider a set of views stating possibly conflicting facts. Negative facts in the views may come, e.g., from functional dependencies in the underlying database schema. We want to predict the truth values of the facts. Beyond simple methods such as voting (typically rather accurate), we explore techniques based on \"corroboration\", i.e., taking into account trust in the views. We introduce three fixpoint algorithms corresponding to different levels of complexity of an underlying probabilistic model. They all estimate both truth values of facts and trust in the views. We present experimental studies on synthetic and real-world data. This analysis illustrates how and in which context these methods improve corroboration results over baseline methods. We believe that corroboration can serve in a wide range of applications such as source selection in the semantic Web, data quality assessment or semantic annotation cleaning in social networks. This work sets the bases for a wide range of techniques for solving these more complex problems.",
"Recommender Systems (RS) suggests to users items they will like based on their past opinions. Collaborative Filtering (CF) is the most used technique to assess user similarity between users but very often the sparseness of user profiles prevents the computation. Moreover CF doesn't take into account the reliability of the other users. In this paper we present a real world application, namely moleskiing.it, in which both of these conditions are critic to deliver personalized recommendations. A blog oriented architecture collects user experiences on ski mountaineering and their opinions on other users. Exploitation of Trust Metrics allows to present only relevant and reliable information according to the user's personal point of view of other authors trustworthiness. Differently from the notion of authority, we claim that trustworthiness is a user centered notion that requires the computation of personalized metrics. We also present an open information exchange architecture that makes use of Semantic Web formats to guarantee interoperability between ski mountaineering communities."
]
}
|
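The clustering algorithm the abstract studies is essentially one line: keep each edge with its probability and return the connected components of what survives. A union-find sketch of that procedure, with names and representation of our choosing:

```python
import random

def random_subgraph_clustering(n, edges, seed=0):
    """Cluster vertices 0..n-1 by the connected components of a random
    edge-induced subgraph: keep each edge (u, v, p) independently with
    probability p, then union the endpoints of surviving edges."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):
        # union-find root lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v, p in edges:
        if rng.random() < p:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
    clusters = {}
    for x in range(n):
        clusters.setdefault(find(x), []).append(x)
    return sorted(clusters.values())
```

One pass over the edges and near-constant work per edge make this about as fast and memory-light as a clustering algorithm can be, which is the point of the paper's choice.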
1110.3563
|
1611855301
|
Finding a good clustering of vertices in a network, where vertices in the same cluster are more tightly connected than those in different clusters, is a useful, important, and well-studied task. Many clustering algorithms scale well; however, they are not designed to operate upon internet-scale networks with billions of nodes or more. We study one of the fastest and most memory-efficient algorithms possible: clustering based on the connected components in a random edge-induced subgraph. When defining the cost of a clustering to be its distance from such a random clustering, we show that this surprisingly simple algorithm gives a solution that is within an expected factor of two or three of optimal with either of two natural distance functions. In fact, this approximation guarantee works for any problem where there is a probability distribution on clusterings. We then examine the behavior of this algorithm in the context of social network trust inference.
|
When each data point to be clustered consists of a vector of numerical values, one common technique is to choose a distance function between the elements (Euclidean, L1-norm, etc.) and look for clusters which minimize some optimization function. Examples of these algorithms include k-means @cite_33 (which minimizes the mean squared-distance of elements from their cluster centers), and k-centers @cite_9 (which minimizes the maximum distance from any point to the center of a cluster). Typically, approximation algorithms, which find solutions close to optimal, are used because it is impractical to compute the optimal clustering for these problems. For a more extensive overview of various clustering algorithms, see @cite_34 .
|
{
"cite_N": [
"@cite_9",
"@cite_34",
"@cite_33"
],
"mid": [
"2073583237",
"2153233077",
"1977556410"
],
"abstract": [
"In this paper we present a 2-approximation algorithm for the k-center problem with triangle inequality. This result is “best possible” since for any δ < 2 the existence of δ-approximation algorithm would imply that P = NP. It should be noted that no δ-approximation algorithm, for any constant δ, has been reported to date. Linear programming duality theory provides interesting insight to the problem and enables us to derive, in O|E| log |E| time, a solution with value no more than twice the k-center optimal value. A by-product of the analysis is an O|E| algorithm that identifies a dominating set in G2, the square of a graph G, the size of which is no larger than the size of the minimum dominating set in the graph G. The key combinatorial object used is called a strong stable set, and we prove the NP-completeness of the corresponding decision problem.",
"Data analysis plays an indispensable role for understanding various phenomena. Cluster analysis, primitive exploration with little or no prior knowledge, consists of research developed across a wide variety of communities. The diversity, on one hand, equips us with many tools. On the other hand, the profusion of options causes confusion. We survey clustering algorithms for data sets appearing in statistics, computer science, and machine learning, and illustrate their applications in some benchmark data sets, the traveling salesman problem, and bioinformatics, a new field attracting intensive efforts. Several tightly related topics, proximity measure, and cluster validation, are also discussed.",
""
]
}
|
1110.3563
|
1611855301
|
Finding a good clustering of vertices in a network, where vertices in the same cluster are more tightly connected than those in different clusters, is a useful, important, and well-studied task. Many clustering algorithms scale well, however they are not designed to operate upon internet-scale networks with billions of nodes or more. We study one of the fastest and most memory efficient algorithms possible - clustering based on the connected components in a random edge-induced subgraph. When defining the cost of a clustering to be its distance from such a random clustering, we show that this surprisingly simple algorithm gives a solution that is within an expected factor of two or three of optimal with either of two natural distance functions. In fact, this approximation guarantee works for any problem where there is a probability distribution on clusterings. We then examine the behavior of this algorithm in the context of social network trust inference.
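The procedure this abstract describes (take the connected components of a random edge-induced subgraph as the clustering) can be sketched in a few lines. The sampling probability `p` is a free parameter here, and the union-find structure is a standard implementation choice, not something taken from the paper:

```python
import random

def random_subgraph_clustering(n, edges, p, seed=0):
    """Cluster vertices 0..n-1 as the connected components of a random
    edge-induced subgraph: each edge survives independently with probability p."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):
        # Union-find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        if rng.random() < p:        # keep this edge in the sampled subgraph
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv     # merge the two components
    # Label every vertex by the root of its component.
    return [find(x) for x in range(n)]
```

A single pass over the edge list with near-constant-time union-find operations is what makes this one of the fastest and most memory-efficient clustering procedures possible, as the abstract emphasizes.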
|
Frequently we cluster networks in order to find inherent communities in the data. The authors of @cite_36 perform an extensive study on the best communities of different sizes in many large social networks. They use conductance (or the normalized cut metric @cite_17 ), defined as the ratio of edges between the community and the outside world to edges within the community, as a measure of community strength. For all of the networks they examine, regardless of size, maximum community conductance drops off considerably for community sizes greater than one hundred. This result suggests that there may be no clusterings of large social networks which help us understand the network's structure. However, even if clustering such networks does not reveal anything important about them, it may still be useful for obtaining better application-specific results or efficiency.
|
{
"cite_N": [
"@cite_36",
"@cite_17"
],
"mid": [
"2146591355",
"2140000690"
],
"abstract": [
"A large body of work has been devoted to defining and identifying clusters or communities in social and information networks, i.e., in graphs in which the nodes represent underlying social entities and the edges represent some sort of interaction between pairs of nodes. Most such research begins with the premise that a community or a cluster should be thought of as a set of nodes that has more and or better connections between its members than to the remainder of the network. In this paper, we explore from a novel perspective several questions related to identifying meaningful communities in large social and information networks, and we come to several striking conclusions. Rather than defining a procedure to extract sets of nodes from a graph and then attempting to interpret these sets as \"real\" communities, we employ approximation algorithms for the graph-partitioning problem to characterize as a function of size the statistical and structural properties of partitions of graphs that could plausibly be i...",
"We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We have applied this approach to segmenting static images and found results very encouraging."
]
}
|
1110.1980
|
2953307965
|
There has been much recent work on the revenue-raising properties of truthful mechanisms for selling goods to selfish bidders. Typically the revenue of a mechanism is compared against a benchmark (such as, the maximum revenue obtainable by an omniscient seller selling at a fixed price to at least two customers), with a view to understanding how much lower the mechanism's revenue is than the benchmark, in the worst case. We study this issue in the context of lotteries , where the seller may sell a probability of winning an item. We are interested in two general issues. Firstly, we aim at using the true optimum revenue as benchmark for our auctions. Secondly, we study the extent to which the expressive power resulting from lotteries, helps to improve the worst-case ratio. We study this in the well-known context of digital goods , where the production cost is zero. We show that in this scenario, collusion-resistant lotteries (these are lotteries for which no coalition of bidders exchanging side payments has an advantage in lying) are as powerful as truthful ones.
|
This work is motivated by the results in @cite_12 , whose authors show how lotteries help in maximizing revenue when designing envy-free prices. Here we address a similar type of question and aim at obtaining analogous results for incentive-compatible lotteries.
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"1968517895"
],
"abstract": [
"Randomized mechanisms, which map a set of bids to a probability distribution over outcomes rather than a single outcome, are an important but ill-understood area of computational mechanism design. We investigate the role of randomized outcomes (henceforth, \"lotteries\") in the context of a fundamental and archetypical multi-parameter mechanism design problem: selling heterogeneous items to unit-demand bidders. To what extent can a seller improve her revenue by pricing lotteries rather than items, and does this modification of the problem affect its computational tractability? Our results show that the answers to these questions hinge on whether consumers can purchase only one lottery (the buy-one model) or purchase any set of lotteries and receive an independent sample from each (the buy-many model). In the buy-one model, there is a polynomial-time algorithm to compute the revenue-maximizing envy-free prices (thus overcoming the inapproximability of the corresponding item pricing problem) and the revenue of the optimal lottery system can exceed the revenue of the optimal item pricing by an unbounded factor as long as the number of item types is at least 4. In the buy-many model with n item types, the profit achieved by lottery pricing can exceed item pricing by a factor of Θ(log n) but not more, and optimal lottery pricing cannot be approximated within a factor of O(n^ε) for some ε > 0, unless NP ⊆ ∩_{Δ>0} BPTIME(2^{O(n^Δ)}). Our lower bounds rely on a mixture of geometric and algebraic techniques, whereas the upper bounds use a novel rounding scheme to transform a mechanism with randomized outcomes into one with deterministic outcomes while losing only a bounded amount of revenue."
]
}
|
1110.1980
|
2953307965
|
There has been much recent work on the revenue-raising properties of truthful mechanisms for selling goods to selfish bidders. Typically the revenue of a mechanism is compared against a benchmark (such as, the maximum revenue obtainable by an omniscient seller selling at a fixed price to at least two customers), with a view to understanding how much lower the mechanism's revenue is than the benchmark, in the worst case. We study this issue in the context of lotteries , where the seller may sell a probability of winning an item. We are interested in two general issues. Firstly, we aim at using the true optimum revenue as benchmark for our auctions. Secondly, we study the extent to which the expressive power resulting from lotteries, helps to improve the worst-case ratio. We study this in the well-known context of digital goods , where the production cost is zero. We show that in this scenario, collusion-resistant lotteries (these are lotteries for which no coalition of bidders exchanging side payments has an advantage in lying) are as powerful as truthful ones.
|
Truthful lotteries defined above naturally relate to the truthful auctions for digital goods considered in @cite_7 . The authors of @cite_7 show that no deterministic truthful auction can guarantee a reasonable approximation of @math and therefore focus on universally truthful auctions. However, they also show that these auctions fail to guarantee any constant approximation of @math (cf. Lemma 3.5 in @cite_7 ) and therefore the benchmark of interest becomes @math . They define an interesting auction called Random Sampling Optimal Price (RSOP, for short) and prove that RSOP gives a (quite weak) constant approximation of @math ; they also conjecture the right constant to be @math . Better bounds are then proved in @cite_2 @cite_1 ; the latter work proves the conjecture when the number of winners is at least 6 and in general for two-valued domains.
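A minimal sketch of the RSOP auction as described above, specialized to digital goods (function names are ours; tie-breaking and the corner case of an empty half are handled in the obvious way, which need not match the cited analysis exactly):

```python
import random

def optimal_uniform_price(bids):
    """Best single sale price for a bid set: at price v, every bidder
    with bid >= v buys one copy (digital goods, zero production cost)."""
    best_price, best_revenue = 0.0, 0.0
    for v in bids:
        revenue = v * sum(1 for b in bids if b >= v)
        if revenue > best_revenue:
            best_price, best_revenue = v, revenue
    return best_price

def rsop(bids, seed=0):
    """Random Sampling Optimal Price: split the bidders into two halves
    uniformly at random, compute the optimal uniform price on each half,
    and offer that price to the bidders in the other half."""
    rng = random.Random(seed)
    half_a, half_b = [], []
    for bid in bids:
        (half_a if rng.random() < 0.5 else half_b).append(bid)
    price_for_b = optimal_uniform_price(half_a)
    price_for_a = optimal_uniform_price(half_b)
    return (sum(price_for_a for bid in half_a if bid >= price_for_a)
            + sum(price_for_b for bid in half_b if bid >= price_for_b))
```

Because each bidder faces a price computed only from the other half's bids, no bidder can influence the price they are offered, which is what makes the auction truthful.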
|
{
"cite_N": [
"@cite_1",
"@cite_7",
"@cite_2"
],
"mid": [
"2177723277",
"",
"1531887947"
],
"abstract": [
"In the context of auctions for digital goods, an interesting Random Sampling Optimal Price auction (RSOP) has been proposed by Goldberg, Hartline and Wright; this leads to a truthful mechanism. Since random sampling is a popular approach for auctions that aims to maximize the seller's revenue, this method has been analyzed further by Feige, Flaxman, Hartline and Kleinberg, who have shown that it is 15-competitive in the worst case -- which is substantially better than the previously proved bounds but still far from the conjectured competitive ratio of 4. In this paper, we prove that RSOP is indeed 4-competitive for a large class of instances in which the number λ of bidders receiving the item at the optimal uniform price, is at least 6. We also show that it is 4.68 competitive for the small class of remaining instances thus leaving a negligible gap between the lower and upper bound. Furthermore, we develop a robust version of RSOP -- one in which the seller's revenue is, with high probability, not much below its mean -- when the above parameter λ grows large. We employ a mix of probabilistic techniques and dynamic programming to compute these bounds.",
"",
"We give a simple analysis of the competitive ratio of the random sampling auction from [10]. The random sampling auction was first shown to be worst-case competitive in [9] (with a bound of 7600 on its competitive ratio); our analysis improves the bound to 15. In support of the conjecture that random sampling auction is in fact 4-competitive, we show that on the equal revenue input, where any sale price gives the same revenue, random sampling is exactly a factor of four from optimal."
]
}
|
1110.1980
|
2953307965
|
There has been much recent work on the revenue-raising properties of truthful mechanisms for selling goods to selfish bidders. Typically the revenue of a mechanism is compared against a benchmark (such as, the maximum revenue obtainable by an omniscient seller selling at a fixed price to at least two customers), with a view to understanding how much lower the mechanism's revenue is than the benchmark, in the worst case. We study this issue in the context of lotteries , where the seller may sell a probability of winning an item. We are interested in two general issues. Firstly, we aim at using the true optimum revenue as benchmark for our auctions. Secondly, we study the extent to which the expressive power resulting from lotteries, helps to improve the worst-case ratio. We study this in the well-known context of digital goods , where the production cost is zero. We show that in this scenario, collusion-resistant lotteries (these are lotteries for which no coalition of bidders exchanging side payments has an advantage in lying) are as powerful as truthful ones.
|
Hart and Nisan @cite_5 use a model similar to ours (risk-neutral bidders and lottery offers) in their study of the optimal revenue when selling multiple items.
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"2169604085"
],
"abstract": [
"Myerson's classic result provides a full description of how a seller can maximize revenue when selling a single item. We address the question of revenue maximization in the simplest possible multi-item setting: two items and a single buyer who has independently distributed values for the items, and an additive valuation. In general, the revenue achievable from selling two independent items may be strictly higher than the sum of the revenues obtainable by selling each of them separately. In fact, the structure of optimal (i.e., revenue-maximizing) mechanisms for two items even in this simple setting is not understood. In this paper we obtain approximate revenue optimization results using two simple auctions: that of selling the items separately, and that of selling them as a single bundle. Our main results (which are of a \"direct sum\" variety, and apply to any distributions) are as follows. Selling the items separately guarantees at least half the revenue of the optimal auction; for identically distributed items, this becomes at least 73% of the optimal revenue. For the case of k > 2 items, we show that selling separately guarantees at least a c/log^2(k) fraction of the optimal revenue; for identically distributed items, the bundling auction yields at least a c/log(k) fraction of the optimal revenue."
]
}
|
1110.1980
|
2953307965
|
There has been much recent work on the revenue-raising properties of truthful mechanisms for selling goods to selfish bidders. Typically the revenue of a mechanism is compared against a benchmark (such as, the maximum revenue obtainable by an omniscient seller selling at a fixed price to at least two customers), with a view to understanding how much lower the mechanism's revenue is than the benchmark, in the worst case. We study this issue in the context of lotteries , where the seller may sell a probability of winning an item. We are interested in two general issues. Firstly, we aim at using the true optimum revenue as benchmark for our auctions. Secondly, we study the extent to which the expressive power resulting from lotteries, helps to improve the worst-case ratio. We study this in the well-known context of digital goods , where the production cost is zero. We show that in this scenario, collusion-resistant lotteries (these are lotteries for which no coalition of bidders exchanging side payments has an advantage in lying) are as powerful as truthful ones.
|
Other benchmarks are defined in the literature to compare the revenue of incentive-compatible auctions; see, e.g., @cite_4 . To the best of our knowledge, our work is the first in which revenue is compared to the true optimum. However, in certain combinatorial settings, such as the one considered in e.g. @cite_10 , an upper bound on revenue, such as social welfare, is used as benchmark for revenue maximization.
|
{
"cite_N": [
"@cite_10",
"@cite_4"
],
"mid": [
"2031502124",
"2404718321"
],
"abstract": [
"We consider the problem of pricing n items to maximize revenue when faced with a series of unknown buyers with complex preferences, and show that a simple pricing scheme achieves surprisingly strong guarantees. We show that in the unlimited supply setting, a random single price achieves expected revenue within a logarithmic factor of the total social welfare for customers with general valuation functions, which may not even necessarily be monotone. This generalizes work of Guruswami et al. [18], who show a logarithmic factor for only the special cases of single-minded and unit-demand customers. In the limited supply setting, we show that for subadditive valuations, a random single price achieves revenue within a factor of 2^(O(√(log n log log n))) of the total social welfare, i.e., the optimal revenue the seller could hope to extract even if the seller could price each bundle differently for every buyer. This is the best approximation known for any item pricing scheme for subadditive (or even submodular) valuations, even using multiple prices. We complement this result with a lower bound showing a sequence of subadditive (in fact, XOS) buyers for which any single price has approximation ratio 2^(Ω(log^(1/4) n)), thus showing that single price schemes cannot achieve a polylogarithmic ratio. This lower bound demonstrates a clear distinction between revenue maximization and social welfare maximization in this setting, for which [12,10] show that a fixed price achieves a logarithmic approximation in the case of XOS [12], and more generally subadditive [10], customers. We also consider the multi-unit case examined by [1111] in the context of social welfare, and show that so long as no buyer requires more than a 1 − ε fraction of the items, a random single price now does in fact achieve revenue within an O(log n) factor of the maximum social welfare.",
"We consider the problem of maximizing revenue in prior-free auctions for general single parameter settings. The setting is modeled by an arbitrary downward-closed set system, which captures many special cases such as single item, digital goods and single-minded combinatorial auctions. We relax the truthfulness requirement by the solution concept of Nash equilibria. Implementation by Nash equilibria is a natural and relevant framework in many applications of computer science, where auctions are run repeatedly and bidders can observe others’ strategies, but the auctioneer needs to design a mechanism in advance and cannot use any information on the bidders’ private valuations. We introduce a worst-case revenue benchmark which generalizes the second price of single item auction and the F2 benchmark, introduced by , for digital goods. We design a mechanism whose Nash equilibria obtains at least a constant factor of this benchmark and prove that no truthful mechanisms can achieve a constant approximation."
]
}
|
1110.1864
|
2010415008
|
We show that there are Turing complete computably enumerable sets of arbitrarily low non-trivial initial segment prefix-free complexity. In particular, given any computably enumerable set @math with non-trivial prefix-free initial segment complexity, there exists a Turing complete computably enumerable set @math with complexity strictly less than the complexity of @math . On the other hand it is known that sets with trivial initial segment prefix-free complexity are not Turing complete. Moreover we give a generalization of this result for any finite collection of computably enumerable sets @math with non-trivial initial segment prefix-free complexity. An application of this gives a negative answer to a question from [Section 11.12] rodenisbook and MRmerstcdhdtd which asked for minimal pairs in the structure of the c.e. reals ordered by their initial segment prefix-free complexity. Further consequences concern various notions of degrees of randomness. For example, the Solovay degrees and the @math -degrees of computably enumerable reals and computably enumerable sets are not elementarily equivalent. Also, the degrees of randomness based on plain and prefix-free complexity are not elementarily equivalent; the same holds for their @math and @math substructures.
|
However there are some differences between @math and @math , the most important being that in @math we usually work with oracle computations while in @math we only work with descriptions. It is quite remarkable that the triviality notion with respect to @math coincides with the triviality notion with respect to @math . As soon as we consider sequences of non-zero @math -degrees or @math -degrees, the study of the two structures becomes less uniform. A comparison of the arguments about the non-existence of minimal pairs of @math -degrees in this paper with the corresponding arguments in @cite_12 that refer to the @math degrees shows that they follow a similar structure, yet various aspects need to be addressed individually. We discuss the high level view of these arguments in Section .
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2072474105"
],
"abstract": [
"Given two infinite binary sequences A, B we say that B can compress at least as well as A if the prefix-free Kolmogorov complexity relative to B of any binary string is at most as much as the prefix-free Kolmogorov complexity relative to A, modulo a constant. This relation, introduced in Nies (2005) [14] and denoted by A ≤_LK B, is a measure of relative compressing power of oracles, in the same way that Turing reducibility is a measure of relative information. The equivalence classes induced by ≤_LK are called LK degrees (or degrees of compressibility) and there is a least degree containing the oracles which can only compress as much as a computable oracle, also called the ‘low for K’ sets. A well-known result from Nies (2005) [14] states that these coincide with the K-trivial sets, which are the ones whose initial segments have minimal prefix-free Kolmogorov complexity. We show that with respect to ≤_LK, given any non-trivial Δ^0_2 sets X, Y there is a computably enumerable set A which is not K-trivial and it is below X, Y. This shows that the local structures of Σ^0_1 and Δ^0_2 Turing degrees are not elementarily equivalent to the corresponding local structures in the LK degrees. It also shows that there is no pair of sets computable from the halting problem which forms a minimal pair in the LK degrees; this is sharp in terms of the jump, as it is known that there are sets computable from 0″ which form a minimal pair in the LK degrees. We also show that the structure of LK degrees below the LK degree of the halting problem is not elementarily equivalent to the Δ^0_2 or Σ^0_1 structures of LK degrees. The proofs introduce a new technique of permitting below a Δ^0_2 set that is not K-trivial, which is likely to have wider applications."
]
}
|
1110.1391
|
2142768229
|
Machine transliteration is a method for automatically converting words in one language into phonetically equivalent ones in another language. Machine transliteration plays an important role in natural language applications such as information retrieval and machine translation, especially for handling proper nouns and technical terms. Four machine transliteration models - grapheme-based transliteration model, phoneme-based transliteration model, hybrid transliteration model, and correspondence-based transliteration model - have been proposed by several researchers. To date, however, there has been little research on a framework in which multiple transliteration models can operate simultaneously. Furthermore, there has been no comparison of the four models within the same framework and using the same data. We addressed these problems by 1) modeling the four models within the same framework, 2) comparing them under the same conditions, and 3) developing a way to improve machine transliteration through this comparison. Our comparison showed that the hybrid and correspondence-based models were the most effective and that the four models can be used in a complementary manner to improve machine transliteration performance.
|
Machine transliteration has received significant research attention in recent years. In most cases, the source language and target language have been English and an Asian language, respectively -- for example, English to Japanese @cite_2 , English to Chinese @cite_18 @cite_31 , and English to Korean @cite_19 @cite_24 @cite_37 @cite_40 @cite_27 @cite_13 @cite_8 @cite_34 @cite_35 . In this section, we review previous work related to the four transliteration models.
|
{
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_37",
"@cite_8",
"@cite_24",
"@cite_19",
"@cite_40",
"@cite_27",
"@cite_2",
"@cite_31",
"@cite_34",
"@cite_13"
],
"mid": [
"1983824829",
"2115513413",
"1591338396",
"2075241709",
"101941176",
"128271544",
"",
"2054648860",
"",
"2019614587",
"2907748619",
"115924709"
],
"abstract": [
"There is increasing concern about English-Korean (E-K) transliteration recently. In the previous works, direct converting methods from English alphabets to Korean alphabets were a main research topic. In this paper, we present an E-K transliteration model using pronunciation and contextual rules. Unlike the previous works, our method uses phonetic information such as phoneme and its context. We also use word formation information such as English words of Greek origin. With them, our method shows a significant performance increase of about 31% in word accuracy.",
"We have developed a technique for automatic transliteration of named entities for English-Chinese cross-language spoken document retrieval (CL-SDR). Our retrieval system integrates machine translation, speech recognition and information retrieval technologies. An English news story forms a textual query that is automatically translated into Chinese words, which are mapped into Mandarin syllables by pronunciation dictionary lookup. Mandarin radio news broadcasts form spoken documents that are indexed by word and syllable recognition. The information retrieval engine performs matching in both word and syllable scales. The English queries contain many named entities that tend to be out-of-vocabulary words for machine translation and speech recognition, and are omitted in retrieval. Names are often transliterated across languages and are generally important for retrieval. We present a technique that takes in a name spelling and automatically generates a phonetic cognate in terms of Chinese syllables to be used in retrieval. Experiments show consistent retrieval performance improvement by including the use of named entities in this way.",
"Many foreign words and English words appear in Korean texts, especially in the areas of science and engineering. We recognize two issues related to foreign words, which should be addressed for information retrieval (IR). First, since foreign words are introduced dynamically and not always found in a dictionary, they cause problems in morphological analysis required for indexing. Second, although a foreign word and its origin in the source language like English refer to the same concept, they are erroneously treated as independent index terms. As a way of alleviating the first problem we developed an algorithm that first identifies a phrase containing a foreign word and then extracts the foreign word part from the phrase based on statistical information. For the second problem, we present our method for back-transliteration of a foreign word to its English origin. Finally we report our evaluation results for each of the algorithms and experimental results for their impact on IR effectiveness.",
"We present in this paper the method of English-to-Korean (E-K) transliteration and back-transliteration. In Korean technical documents, many English words are transliterated into Korean words in various forms in diverse ways. As English words and Korean transliterations are usually technical terms and proper nouns, it is hard to find a transliteration and its variations in a dictionary. Therefore an automatic transliteration system is needed to find the transliterations of English words without manual intervention. To explain E-K transliteration phenomena, we use phoneme chunks that do not have a length limit. By applying phoneme chunks, we combine different length information with ease. The E-K transliteration method has three steps. In the first, we make a phoneme network that shows all possible transliterations of the given word. In the second step, we apply phoneme chunks, extracted from training data, to calculate the reliability of each possible transliteration. Then we obtain probable transliterations of the given English word.",
"Pharmaceutical compositions free from pathogenic microorganisms which could be harmful to warm-blooded animals, are provided containing, as an active ingredient, an antimicrobially effective amount of at least one compound of the formula (I) in which R represents hydrogen, -CO-R1 or -SO2-R2, wherein R1 represents optionally substituted alkyl, alkenyl or alkinyl, optionally substituted phenyl, optionally substituted phenoxyalkyl, phenylalkyl, cycloalkyl, alkylamino, dialkylamino or optionally substituted phenylamino, R2 represents alkyl or optionally substituted phenyl, A represents a keto group or a -CH(OH)- grouping, X represents hydrogen or an -OR grouping, wherein R is as defined above, X1 represents alkyl or optionally substituted phenyl, Y represents -CH- or a nitrogen atom, Z represents halogen, alkyl, halogenoalkyl, cycloalkyl, alkoxy, alkythio, alkoxycarbonyl, optionally substituted phenyl, optionally substituted phenoxy, optionally substituted phenylalkyl, amino, cyano or nitro and n represents 0 or an integer from 1 to 5, or a salt thereof, in admixture with a sterile pharmaceutical carrier, such as a solid or liquefied gaseous diluent, or with a liquid diluent other than a solvent of molecular weight less than 200 (preferably 300) except in the presence of a surface-active agent. The compositions of the invention are useful as antimycotic agents. Also included in the invention is the provision of the compositions of the invention in unit dosage form as well as the provision of methods of treatment wherein the compositions of the invention are administered to warm-blooded animals.",
"In Korean technical documents, many English words are transliterated into Korean in various ways. Most of these words are technical terms and proper nouns that are frequently used as query terms in information retrieval systems. As the communication with foreigners increases, an automatic transliteration system is needed to find the various transliterations for the cross lingual information systems, especially for the proper nouns and technical terms which are not registered in the dictionary. In this paper, we present a language independent Statistical Transliteration Model (STM) that learns rules automatically from word-aligned pairs in order to generate transliteration variations. For the transliteration from English to Korean, we compared two methods based on STM: the pivot method and the direct method. In the pivot method, the transliteration is done in two steps: converting English words into pronunciation symbols by using the STM and then converting these symbols into Korean words by using the Korean standard conversion rule. In the direct method, English words are directly converted to Korean words by using the STM without intermediate steps. After comparing the performance of the two methods, we propose a hybrid method that is more effective to generate various transliterations and consequently to retrieve more relevant documents.",
"",
"Automatic transliteration problem is to transcribe foreign words in one's own alphabet. Machine generated transliteration can be useful in various applications such as indexing in an information retrieval system and pronunciation synthesis in a text-to-speech system. In this paper we present a model for statistical English-to-Korean transliteration that generates transliteration candidates with probability. The model is designed to utilize various information sources by extending a conventional Markov window. Also, an efficient and accurate method for alignment and syllabification of pronunciation units is described. The experimental results show a recall of 0.939 for trained words and 0.875 for untrained words when the best 10 candidates are considered.",
"",
"Most foreign names are transliterated into Chinese, Japanese or Korean with approximate phonetic equivalents. The transliteration is usually achieved through intermediate phonemic mapping. This paper presents a new framework that allows direct orthographical mapping (DOM) between two different languages, through a joint source-channel model, also called n-gram transliteration model (TM). With the n-gram TM model, we automate the orthographic alignment process to derive the aligned transliteration units from a bilingual dictionary. The n-gram TM under the DOM framework greatly reduces system development effort and provides a quantum leap in improvement in transliteration accuracy over that of other state-of-the-art machine learning algorithms. The modeling framework is validated through several experiments for English-Chinese language pair.",
"",
"Automatic transliteration and back-transliteration across languages with drastically different alphabets and phoneme inventories such as English Korean, English Japanese, English Arabic, English Chinese, etc, have practical importance in machine translation, crosslingual information retrieval, and automatic bilingual dictionary compilation, etc. In this paper, a bi-directional and to some extent language independent methodology for English Korean transliteration and back-transliteration is described. Our method is composed of character alignment and decision tree learning. We induce transliteration rules for each English alphabet and back-transliteration rules for each Korean alphabet. For the training of decision trees we need a large set of labeled examples of transliteration and back-transliteration. However, this kind of resource is generally not available. Our character alignment algorithm is capable of highly accurately aligning English word and Korean transliteration in a desired way."
]
}
|
1110.1391
|
2142768229
|
Machine transliteration is a method for automatically converting words in one language into phonetically equivalent ones in another language. Machine transliteration plays an important role in natural language applications such as information retrieval and machine translation, especially for handling proper nouns and technical terms. Four machine transliteration models - grapheme-based transliteration model, phoneme-based transliteration model, hybrid transliteration model, and correspondence-based transliteration model - have been proposed by several researchers. To date, however, there has been little research on a framework in which multiple transliteration models can operate simultaneously. Furthermore, there has been no comparison of the four models within the same framework and using the same data. We addressed these problems by 1) modeling the four models within the same framework, 2) comparing them under the same conditions, and 3) developing a way to improve machine transliteration through this comparison. Our comparison showed that the hybrid and correspondence-based models were the most effective and that the four models can be used in a complementary manner to improve machine transliteration performance.
|
Conceptually, the @math is direct orthographical mapping from source graphemes to target graphemes. Several transliteration methods based on this model have been proposed, such as those based on a source-channel model @cite_19 @cite_40 @cite_37 @cite_24 , a decision tree @cite_13 @cite_34 , a transliteration network @cite_8 @cite_2 , and a joint source-channel model @cite_31 .
|
{
"cite_N": [
"@cite_37",
"@cite_8",
"@cite_24",
"@cite_19",
"@cite_40",
"@cite_2",
"@cite_31",
"@cite_34",
"@cite_13"
],
"mid": [
"1591338396",
"2075241709",
"101941176",
"128271544",
"",
"",
"2019614587",
"2907748619",
"115924709"
],
"abstract": [
"Many foreign words and English words appear in Korean texts, especially in the areas of science and engineering. We recognize two issues related to foreign words, which should be addressed for information retrieval (IR). First, since foreign words are introduced dynamically and not always found in a dictionary, they cause problems in morphological analysis required for indexing. Second, although a foreign word and its origin in the source language like English refer to the same concept, they are erroneously treated as independent index terms. As a way of alleviating the first problem we developed an algorithm that first identifies a phrase containing a foreign word and then extracts the foreign word part from the phrase based on statistical information. For the second problem, we present our method for back-transliteration of a foreign word to its English origin. Finally we report our evaluation results for each of the algorithms and experimental results for their impact on IR effectiveness.",
"We present in this paper the method of English-to-Korean (E-K) transliteration and back-transliteration. In Korean technical documents, many English words are transliterated into Korean words in various forms in diverse ways. As English words and Korean transliterations are usually technical terms and proper nouns, it is hard to find a transliteration and its variations in a dictionary. Therefore an automatic transliteration system is needed to find the transliterations of English words without manual intervention. To explain E-K transliteration phenomena, we use phoneme chunks that do not have a length limit. By applying phoneme chunks, we combine different length information with ease. The E-K transliteration method has three steps. In the first, we make a phoneme network that shows all possible transliterations of the given word. In the second step, we apply phoneme chunks, extracted from training data, to calculate the reliability of each possible transliteration. Then we obtain probable transliterations of the given English word.",
"Pharmaceutical compositions free from pathogenic microorganisms which could be harmful to warm-blooded animals, are provided containing, as an active ingredient, an antimicrobially effective amount of at least one compound of the formula (I) in which R represents hydrogen, -CO-R1 or -SO2-R2, wherein R1 represents optionally substituted alkyl, alkenyl or alkinyl, optionally substituted phenyl, optionally substituted phenoxyalkyl, phenylalkyl, cycloalkyl, alkylamino, dialkylamino or optionally substituted phenylamino, R2 represents alkyl or optionally substituted phenyl, A represents a keto group or a -CH(OH)- grouping, X represents hydrogen or an -OR grouping, wherein R is as defined above, X1 represents alkyl or optionally substituted phenyl, Y represents -CH- or a nitrogen atom, Z represents halogen, alkyl, halogenoalkyl, cycloalkyl, alkoxy, alkythio, alkoxycarbonyl, optionally substituted phenyl, optionally substituted phenoxy, optionally substituted phenylalkyl, amino, cyano or nitro and n represents 0 or an integer from 1 to 5, or a salt thereof, in admixture with a sterile pharmaceutical carrier, such as a solid or liquefied gaseous diluent, or with a liquid diluent other than a solvent of molecular weight less than 200 (preferably 300) except in the presence of a surface-active agent. The compositions of the invention are useful as antimycotic agents. Also included in the invention is the provision of the compositions of the invention in unit dosage form as well as the provision of methods of treatment wherein the compositions of the invention are administered to warm-blooded animals.",
"In Korean technical documents, many English words are transliterated into Korean in various ways. Most of these words are technical terms and proper nouns that are frequently used as query terms in information retrieval systems. As the communication with foreigners increases, an automatic transliteration system is needed to find the various transliterations for the cross lingual information systems, especially for the proper nouns and technical terms which are not registered in the dictionary. In this paper, we present a language independent Statistical Transliteration Model (STM) that learns rules automatically from word-aligned pairs in order to generate transliteration variations. For the transliteration from English to Korean, we compared two methods based on STM: the pivot method and the direct method. In the pivot method, the transliteration is done in two steps: converting English words into pronunciation symbols by using the STM and then converting these symbols into Korean words by using the Korean standard conversion rule. In the direct method, English words are directly converted to Korean words by using the STM without intermediate steps. After comparing the performance of the two methods, we propose a hybrid method that is more effective to generate various transliterations and consequently to retrieve more relevant documents.",
"",
"",
"Most foreign names are transliterated into Chinese, Japanese or Korean with approximate phonetic equivalents. The transliteration is usually achieved through intermediate phonemic mapping. This paper presents a new framework that allows direct orthographical mapping (DOM) between two different languages, through a joint source-channel model, also called n-gram transliteration model (TM). With the n-gram TM model, we automate the orthographic alignment process to derive the aligned transliteration units from a bilingual dictionary. The n-gram TM under the DOM framework greatly reduces system development effort and provides a quantum leap in improvement in transliteration accuracy over that of other state-of-the-art machine learning algorithms. The modeling framework is validated through several experiments for English-Chinese language pair.",
"",
"Automatic transliteration and back-transliteration across languages with drastically different alphabets and phoneme inventories such as English Korean, English Japanese, English Arabic, English Chinese, etc, have practical importance in machine translation, crosslingual information retrieval, and automatic bilingual dictionary compilation, etc. In this paper, a bi-directional and to some extent language independent methodology for English Korean transliteration and back-transliteration is described. Our method is composed of character alignment and decision tree learning. We induce transliteration rules for each English alphabet and back-transliteration rules for each Korean alphabet. For the training of decision trees we need a large set of labeled examples of transliteration and back-transliteration. However, this kind of resource is generally not available. Our character alignment algorithm is capable of highly accurately aligning English word and Korean transliteration in a desired way."
]
}
|
1110.1391
|
2142768229
|
Machine transliteration is a method for automatically converting words in one language into phonetically equivalent ones in another language. Machine transliteration plays an important role in natural language applications such as information retrieval and machine translation, especially for handling proper nouns and technical terms. Four machine transliteration models - grapheme-based transliteration model, phoneme-based transliteration model, hybrid transliteration model, and correspondence-based transliteration model - have been proposed by several researchers. To date, however, there has been little research on a framework in which multiple transliteration models can operate simultaneously. Furthermore, there has been no comparison of the four models within the same framework and using the same data. We addressed these problems by 1) modeling the four models within the same framework, 2) comparing them under the same conditions, and 3) developing a way to improve machine transliteration through this comparison. Our comparison showed that the hybrid and correspondence-based models were the most effective and that the four models can be used in a complementary manner to improve machine transliteration performance.
|
Knight and Graehl knight97 modeled Japanese-to-English transliteration with weighted finite state transducers (WFSTs) by combining several parameters including romaji-to-phoneme, phoneme-to-English, English word probabilities, and so on. A similar model was developed for Arabic-to-English transliteration @cite_4 . Meng meng01 proposed an English-to-Chinese transliteration method based on English grapheme-to-phoneme conversion, cross-lingual phonological rules, mapping rules between English phonemes and Chinese phonemes, and Chinese syllable-based and character-based language models. Jung jung00 modeled English-to-Korean transliteration with an extended Markov window. The method transforms an English word into English pronunciation by using a pronunciation dictionary. Then it segments the English phonemes into chunks of English phonemes; each chunk corresponds to a Korean grapheme as defined by handcrafted rules. Finally, it automatically transforms each chunk of English phonemes into Korean graphemes by using an extended Markov window.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2056724340"
],
"abstract": [
"It is challenging to translate names and technical terms from English into Arabic. Translation is usually done phonetically: different alphabets and sound inventories force various compromises. For example, Peter Streams may come out as [Abstract contained text which could not be captured.] bytr strymz. This process is called transliteration. We address here the reverse problem: given a foreign name or loanword in Arabic text, we want to recover the original in Roman script. For example, an input like [Abstract contained text which could not be captured.] bytr strymz should yield an output like Peter Streams. Arabic presents special challenges due to unwritten vowels and phonetic-context effects. We present results and examples of use in an Arabic-to-English machine translator."
]
}
|
1110.1391
|
2142768229
|
Machine transliteration is a method for automatically converting words in one language into phonetically equivalent ones in another language. Machine transliteration plays an important role in natural language applications such as information retrieval and machine translation, especially for handling proper nouns and technical terms. Four machine transliteration models - grapheme-based transliteration model, phoneme-based transliteration model, hybrid transliteration model, and correspondence-based transliteration model - have been proposed by several researchers. To date, however, there has been little research on a framework in which multiple transliteration models can operate simultaneously. Furthermore, there has been no comparison of the four models within the same framework and using the same data. We addressed these problems by 1) modeling the four models within the same framework, 2) comparing them under the same conditions, and 3) developing a way to improve machine transliteration through this comparison. Our comparison showed that the hybrid and correspondence-based models were the most effective and that the four models can be used in a complementary manner to improve machine transliteration performance.
|
Attempts to use both source graphemes and source phonemes in machine transliteration led to the correspondence-based transliteration model ( @math ) @cite_35 and the hybrid transliteration model ( @math ) @cite_40 @cite_1 @cite_9 . The former makes use of the correspondence between a source grapheme and a source phoneme when it produces target language graphemes; the latter simply combines @math and @math through linear interpolation. Note that the @math combines the grapheme-based transliteration probability ( @math ) and the phoneme-based transliteration probability ( @math ) using linear interpolation.
|
{
"cite_N": [
"@cite_35",
"@cite_40",
"@cite_9",
"@cite_1"
],
"mid": [
"1983824829",
"",
"1547150220",
"2150028966"
],
"abstract": [
"There is increasing concern about English-Korean (E-K) transliteration recently. In the previous works, direct converting methods from English alphabets to Korean alphabets were a main research topic. In this paper, we present an E-K transliteration model using pronunciation and contextual rules. Unlike the previous works, our method uses phonetic information such as phoneme and its context. We also use word formation information such as English words of Greek origin. With them, our method shows a significant performance increase of about 31% in word accuracy.",
"",
"Transliterating words and names from one language to another is a frequent and highly productive phenomenon. Transliteration is information losing since important distinctions are not preserved in the process. Hence, automatically converting transliterated words back into their original form is a real challenge. However, due to wide applicability in MT and CLIR, it is a computationally interesting problem. Previously proposed back-transliteration methods are based either on phoneme modeling or grapheme modeling across languages. In this paper, we propose a new method, combining the two models in order to enhance the back-transliterations of words transliterated in Japanese. Our experiments show that the resulting system outperforms single-model systems.",
"Named entity phrases are some of the most difficult phrases to translate because new phrases can appear from nowhere, and because many are domain specific, not to be found in bilingual dictionaries. We present a novel algorithm for translating named entity phrases using easily obtainable monolingual and bilingual resources. We report on the application and evaluation of this algorithm in translating Arabic named entities to English. We also compare our results with the results obtained from human translations and a commercial system for the same task."
]
}
|
1110.1391
|
2142768229
|
Machine transliteration is a method for automatically converting words in one language into phonetically equivalent ones in another language. Machine transliteration plays an important role in natural language applications such as information retrieval and machine translation, especially for handling proper nouns and technical terms. Four machine transliteration models - grapheme-based transliteration model, phoneme-based transliteration model, hybrid transliteration model, and correspondence-based transliteration model - have been proposed by several researchers. To date, however, there has been little research on a framework in which multiple transliteration models can operate simultaneously. Furthermore, there has been no comparison of the four models within the same framework and using the same data. We addressed these problems by 1) modeling the four models within the same framework, 2) comparing them under the same conditions, and 3) developing a way to improve machine transliteration through this comparison. Our comparison showed that the hybrid and correspondence-based models were the most effective and that the four models can be used in a complementary manner to improve machine transliteration performance.
|
Several researchers @cite_40 @cite_1 @cite_9 have proposed hybrid model-based transliteration methods. They model @math and @math with WFSTs or a source-channel model and combine @math and @math through linear interpolation. In their @math , several parameters are considered, such as the probability, probability, and probability. In their @math , the probability is mainly considered. The main disadvantage of the hybrid model is that the dependence between the source grapheme and source phoneme is not taken into consideration in the combining process; in contrast, Oh and Choi's approach @cite_35 considers this dependence by using the correspondence between the source grapheme and phoneme.
|
{
"cite_N": [
"@cite_9",
"@cite_40",
"@cite_1",
"@cite_35"
],
"mid": [
"1547150220",
"",
"2150028966",
"1983824829"
],
"abstract": [
"Transliterating words and names from one language to another is a frequent and highly productive phenomenon. Transliteration is information losing since important distinctions are not preserved in the process. Hence, automatically converting transliterated words back into their original form is a real challenge. However, due to wide applicability in MT and CLIR, it is a computationally interesting problem. Previously proposed back-transliteration methods are based either on phoneme modeling or grapheme modeling across languages. In this paper, we propose a new method, combining the two models in order to enhance the back-transliterations of words transliterated in Japanese. Our experiments show that the resulting system outperforms single-model systems.",
"",
"Named entity phrases are some of the most difficult phrases to translate because new phrases can appear from nowhere, and because many are domain specific, not to be found in bilingual dictionaries. We present a novel algorithm for translating named entity phrases using easily obtainable monolingual and bilingual resources. We report on the application and evaluation of this algorithm in translating Arabic named entities to English. We also compare our results with the results obtained from human translations and a commercial system for the same task.",
"There is increasing concern about English-Korean (E-K) transliteration recently. In the previous works, direct converting methods from English alphabets to Korean alphabets were a main research topic. In this paper, we present an E-K transliteration model using pronunciation and contextual rules. Unlike the previous works, our method uses phonetic information such as phoneme and its context. We also use word formation information such as English words of Greek origin. With them, our method shows a significant performance increase of about 31% in word accuracy."
]
}
|
1110.1112
|
1597205557
|
Click-through data has been used in various ways in Web search such as estimating relevance between documents and queries. Since only search snippets are perceived by users before issuing any clicks, the relevance induced by clicks is usually called perceived relevance, which has proven to be quite useful for Web search. While there is plenty of click data for popular queries, very little information is available for unpopular tail ones. These tail queries take a large portion of the search volume but search accuracy for these queries is usually unsatisfactory due to data sparseness such as limited click information. In this paper, we study the problem of modeling perceived relevance for queries without click-through data. Instead of relying on users' click data, we carefully design a set of snippet features and use them to approximately capture the perceived relevance. We study the effectiveness of this set of snippet features in two settings: (1) predicting perceived relevance and (2) enhancing search engine ranking. Experimental results show that our proposed model is effective to predict the relative perceived relevance of Web search results. Furthermore, our proposed snippet features are effective to improve search accuracy for longer tail queries without click-through data.
|
The notion of the long tail was first coined in @cite_20 and has been observed in many diverse applications like e-commerce and Web search @cite_13 . Our work is more related to the long tail study in Web search. For example, @cite_23 compared head queries and tail queries in terms of search accuracy and users' search behaviors. @cite_14 proposed robust algorithms for rare query classification. @cite_28 studied the advertisability of tail queries in sponsored search and proposed a word-based approach for efficient online computation. @cite_31 studied query suggestions for rare queries but their approaches still assume that there is click information to leverage. In contrast, our work is on directly improving the search accuracy, which is the most important aspect of a search engine, for tail queries without any click-through data.
|
{
"cite_N": [
"@cite_14",
"@cite_28",
"@cite_23",
"@cite_31",
"@cite_13",
"@cite_20"
],
"mid": [
"2098876286",
"2137865506",
"2087573683",
"2098326081",
"2148117599",
""
],
"abstract": [
"We propose a methodology for building a practical robust query classification system that can identify thousands of query classes with reasonable accuracy, while dealing in real-time with the query volume of a commercial web search engine. We use a blind feedback technique: given a query, we determine its topic by classifying the web search results retrieved by the query. Motivated by the needs of search advertising, we primarily focus on rare queries, which are the hardest from the point of view of machine learning, yet in aggregation account for a considerable fraction of search engine traffic. Empirical evaluation confirms that our methodology yields a considerably higher classification accuracy than previously reported. We believe that the proposed methodology will lead to better matching of online ads to rare queries and overall to a better user experience.",
"Sponsored search is one of the major sources of revenue for search engines on the World Wide Web. It has been observed that while showing ads for every query maximizes short-term revenue, irrelevant ads lead to poor user experience and less revenue in the long-term. Hence, it is in search engines' interest to place ads only for queries that are likely to attract ad-clicks. Many algorithms for estimating query advertisability exist in literature, but most of these methods have been proposed for and tested on the frequent or \"head\" queries. Since query frequencies on search engine are known to be distributed as a power-law, this leaves a huge fraction of the queries uncovered. In this paper we focus on the more challenging problem of estimating query advertisability for infrequent or \"tail\" queries. These require fundamentally different methods than head queries: for e.g., tail queries are almost all unique and require the estimation method to be online and inexpensive. We show that previously proposed methods do not apply to tail queries, and when modified for our scenario they do not work well. Further, we give a simple, yet effective, approach, which estimates query advertisability using only the words present in the queries. We evaluate our approach on a real-world dataset consisting of search engine queries and user clicks. Our results show that our simple approach outperforms a more complex one based on regularized regression.",
"A large fraction of queries submitted to Web search engines occur very infrequently. We describe search log studies aimed at elucidating behaviors associated with rare and common queries. We present several analyses and discuss research directions.",
"Query suggestion has been an effective approach to help users narrow down to the information they need. However, most of existing studies focused on only popular head queries. Since rare queries possess much less information (e.g., clicks) than popular queries in the query logs, it is much more difficult to efficiently suggest relevant queries to a rare query. In this paper, we propose an optimal rare query suggestion framework by leveraging implicit feedbacks from users in the query logs. Our model resembles the principle of pseudo-relevance feedback which assumes that top-returned results by search engines are relevant. However, we argue that the clicked URLs and skipped URLs contain different levels of information and thus should be treated differently. Hence, our framework optimally combines both the click and skip information from users and uses a random walk model to optimize the query correlation. Our model specifically optimizes two parameters: (1) the restarting (jumping) rate of random walk, and (2) the combination ratio of click and skip information. Unlike the Rocchio algorithm, our learning process does not involve the content of the URLs but simply leverages the click and skip counts in the query-URL bipartite graphs. Consequently, our model is capable of scaling up to the need of commercial search engines. Experimental results on one-month query logs from a large commercial search engine with over 40 million rare queries demonstrate the superiority of our framework, with statistical significance, over the traditional random walk models and pseudo-relevance feedback models.",
"The success of \"infinite-inventory\" retailers such as Amazon.com and Netflix has been ascribed to a \"long tail\" phenomenon. To wit, while the majority of their inventory is not in high demand, in aggregate these \"worst sellers,\" unavailable at limited-inventory competitors, generate a significant fraction of total revenue. The long tail phenomenon, however, is in principle consistent with two fundamentally different theories. The first, and more popular hypothesis, is that a majority of consumers consistently follow the crowds and only a minority have any interest in niche content; the second hypothesis is that everyone is a bit eccentric, consuming both popular and specialty products. Based on examining extensive data on user preferences for movies, music, Web search, and Web browsing, we find overwhelming support for the latter theory. However, the observed eccentricity is much less than what is predicted by a fully random model whereby every consumer makes his product choices independently and proportional to product popularity; so consumers do indeed exhibit at least some a priori propensity toward either the popular or the exotic. Our findings thus suggest an additional factor in the success of infinite-inventory retailers, namely, that tail availability may boost head sales by offering consumers the convenience of \"one-stop shopping\" for both their mainstream and niche interests. This hypothesis is further supported by our theoretical analysis that presents a simple model in which shared inventory stores, such as Amazon Marketplace, gain a clear advantage by satisfying tail demand, helping to explain the emergence and increasing popularity of such retail arrangements. Hence, we believe that the return-on-investment (ROI) of niche products goes beyond direct revenue, extending to second-order gains associated with increased consumer satisfaction and repeat patronage. More generally, our findings call into question the conventional wisdom that specialty products only appeal to a minority of consumers.",
""
]
}
|
1110.1112
|
1597205557
|
Click-through data has been used in various ways in Web search such as estimating relevance between documents and queries. Since only search snippets are perceived by users before issuing any clicks, the relevance induced by clicks is usually called perceived relevance, which has proven to be quite useful for Web search. While there is plenty of click data for popular queries, very little information is available for unpopular tail ones. These tail queries take a large portion of the search volume but search accuracy for these queries is usually unsatisfactory due to data sparseness such as limited click information. In this paper, we study the problem of modeling perceived relevance for queries without click-through data. Instead of relying on users' click data, we carefully design a set of snippet features and use them to approximately capture the perceived relevance. We study the effectiveness of this set of snippet features in two settings: (1) predicting perceived relevance and (2) enhancing search engine ranking. Experimental results show that our proposed model is effective to predict the relative perceived relevance of Web search results. Furthermore, our proposed snippet features are effective to improve search accuracy for longer tail queries without click-through data.
|
In the past, snippets have been used for many different purposes such as query classification @cite_14 and measuring query similarity @cite_29 . In particular, our work is related to @cite_15 . In @cite_15 , some snippet features such as the overlap between the words in the title and in the query are used, together with user behavior and click-through features. The main finding of their study is that click features are the most useful for general queries. In our work, we focus on tail queries, which do not have any click information. We define a more comprehensive set of snippet features and discuss different application scenarios to efficiently leverage these snippet features.
|
{
"cite_N": [
"@cite_29",
"@cite_14",
"@cite_15"
],
"mid": [
"2161443453",
"2098876286",
"2099391294"
],
"abstract": [
"Determining the similarity of short text snippets, such as search queries, works poorly with traditional document similarity measures (e.g., cosine), since there are often few, if any, terms in common between two short text snippets. We address this problem by introducing a novel method for measuring the similarity between short text snippets (even those without any overlapping terms) by leveraging web search results to provide greater context for the short texts. In this paper, we define such a similarity kernel function, mathematically analyze some of its properties, and provide examples of its efficacy. We also show the use of this kernel function in a large-scale system for suggesting related queries to search engine users.",
"We propose a methodology for building a practical robust query classification system that can identify thousands of query classes with reasonable accuracy, while dealing in real-time with the query volume of a commercial web search engine. We use a blind feedback technique: given a query, we determine its topic by classifying the web search results retrieved by the query. Motivated by the needs of search advertising, we primarily focus on rare queries, which are the hardest from the point of view of machine learning, yet in aggregation account for a considerable fraction of search engine traffic. Empirical evaluation confirms that our methodology yields a considerably higher classification accuracy than previously reported. We believe that the proposed methodology will lead to better matching of online ads to rare queries and overall to a better user experience.",
"Evaluating user preferences of web search results is crucial for search engine development, deployment, and maintenance. We present a real-world study of modeling the behavior of web search users to predict web search result preferences. Accurate modeling and interpretation of user behavior has important applications to ranking, click spam detection, web search personalization, and other tasks. Our key insight to improving robustness of interpreting implicit feedback is to model query-dependent deviations from the expected \"noisy\" user behavior. We show that our model of clickthrough interpretation improves prediction accuracy over state-of-the-art clickthrough methods. We generalize our approach to model user behavior beyond clickthrough, which results in higher preference prediction accuracy than models based on clickthrough information alone. We report results of a large-scale experimental evaluation that show substantial improvements over published implicit feedback interpretation methods."
]
}
|
1110.1112
|
1597205557
|
Click-through data has been used in various ways in Web search, such as estimating relevance between documents and queries. Since only search snippets are perceived by users before issuing any clicks, the relevance induced by clicks is usually called perceived relevance, which has proven to be quite useful for Web search. While there is plenty of click data for popular queries, very little information is available for unpopular tail ones. These tail queries take up a large portion of the search volume, but search accuracy for them is usually unsatisfactory due to data sparseness such as limited click information. In this paper, we study the problem of modeling perceived relevance for queries without click-through data. Instead of relying on users' click data, we carefully design a set of snippet features and use them to approximately capture the perceived relevance. We study the effectiveness of this set of snippet features in two settings: (1) predicting perceived relevance and (2) enhancing search engine ranking. Experimental results show that our proposed model is effective at predicting the relative perceived relevance of Web search results. Furthermore, our proposed snippet features are effective at improving search accuracy for long-tail queries without click-through data.
|
Our work is also related to click prediction works @cite_36 @cite_26 @cite_17 . @cite_36 used an existing hierarchy to propagate clicks to rare events. @cite_17 used past clicks to predict future clicks. @cite_26 proposed a feature-based method of predicting the click-through rate for new ads. To the best of our knowledge, little work has been done on predicting click-based perceived relevance for tail queries in Web search. Furthermore, compared with @cite_26 , which only uses query-dependent features, we explore a more comprehensive feature set with both query-dependent and query-independent features.
|
{
"cite_N": [
"@cite_36",
"@cite_26",
"@cite_17"
],
"mid": [
"2071488943",
"2090883204",
"2026784708"
],
"abstract": [
"We consider the problem of estimating occurrence rates of rare eventsfor extremely sparse data, using pre-existing hierarchies to perform inference at multiple resolutions. In particular, we focus on the problem of estimating click rates for (webpage, advertisement) pairs (called impressions) where both the pages and the ads are classified into hierarchies that capture broad contextual information at different levels of granularity. Typically the click rates are low and the coverage of the hierarchies is sparse. To overcome these difficulties we devise a sampling method whereby we analyze aspecially chosen sample of pages in the training set, and then estimate click rates using a two-stage model. The first stage imputes the number of (webpage, ad) pairs at all resolutions of the hierarchy to adjust for the sampling bias. The second stage estimates clickrates at all resolutions after incorporating correlations among sibling nodes through a tree-structured Markov model. Both models are scalable and suited to large scale data mining applications. On a real-world dataset consisting of 1 2 billion impressions, we demonstrate that even with 95 negative (non-clicked) events in the training set, our method can effectively discriminate extremely rare events in terms of their click propensity.",
"Search engine advertising has become a significant element of the Web browsing experience. Choosing the right ads for the query and the order in which they are displayed greatly affects the probability that a user will see and click on each ad. This ranking has a strong impact on the revenue the search engine receives from the ads. Further, showing the user an ad that they prefer to click on improves user satisfaction. For these reasons, it is important to be able to accurately estimate the click-through rate of ads in the system. For ads that have been displayed repeatedly, this is empirically measurable, but for new ads, other means must be used. We show that we can use features of ads, terms, and advertisers to learn a model that accurately predicts the click-though rate for new ads. We also show that using our model improves the convergence and performance of an advertising system. As a result, our model increases both revenue and user satisfaction.",
"Search engine click logs provide an invaluable source of relevance information but this information is biased because we ignore which documents from the result list the users have actually seen before and after they clicked. Otherwise, we could estimate document relevance by simple counting. In this paper, we propose a set of assumptions on user browsing behavior that allows the estimation of the probability that a document is seen, thereby providing an unbiased estimate of document relevance. To train, test and compare our model to the best alternatives described in the Literature, we gather a large set of real data and proceed to an extensive cross-validation experiment. Our solution outperforms very significantly all previous models. As a side effect, we gain insight into the browsing behavior of users and we can compare it to the conclusions of an eye-tracking experiments by [12]. In particular, our findings confirm that a user almost always see the document directly after a clicked document. They also explain why documents situated just after a very relevant document are clicked more often."
]
}
|
1110.0585
|
2949505065
|
In machine learning and computer vision, input images are often filtered to increase data discriminability. In some situations, however, one may wish to purposely decrease discriminability of one classification task (a "distractor" task), while simultaneously preserving information relevant to another (the task-of-interest): For example, it may be important to mask the identity of persons contained in face images before submitting them to a crowdsourcing site (e.g., Mechanical Turk) when labeling them for certain facial attributes. Another example is inter-dataset generalization: when training on a dataset with a particular covariance structure among multiple attributes, it may be useful to suppress one attribute while preserving another so that a trained classifier does not learn spurious correlations between attributes. In this paper we present an algorithm that finds optimal filters to give high discriminability to one task while simultaneously giving low discriminability to a distractor task. We present results showing the effectiveness of the proposed technique on both simulated data and natural face images.
|
For the application of generalizing to datasets with different image statistics, our work is related to the problem of covariate shift @cite_6 and the field of transfer learning @cite_10 . The method proposed in our paper is useful when dataset differences are known a priori -- the learned filter helps to overcome covariate shift by altering the underlying images themselves.
|
{
"cite_N": [
"@cite_10",
"@cite_6"
],
"mid": [
"2165698076",
"2034368206"
],
"abstract": [
"A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"Abstract A class of predictive densities is derived by weighting the observed samples in maximizing the log-likelihood function. This approach is effective in cases such as sample surveys or design of experiments, where the observed covariate follows a different distribution than that in the whole population. Under misspecification of the parametric model, the optimal choice of the weight function is asymptotically shown to be the ratio of the density function of the covariate in the population to that in the observations. This is the pseudo-maximum likelihood estimation of sample surveys. The optimality is defined by the expected Kullback–Leibler loss, and the optimal weight is obtained by considering the importance sampling identity. Under correct specification of the model, however, the ordinary maximum likelihood estimate (i.e. the uniform weight) is shown to be optimal asymptotically. For moderate sample size, the situation is in between the two extreme cases, and the weight function is selected by minimizing a variant of the information criterion derived as an estimate of the expected loss. The method is also applied to a weighted version of the Bayesian predictive density. Numerical examples as well as Monte-Carlo simulations are shown for polynomial regression. A connection with the robust parametric estimation is discussed."
]
}
|
1110.0207
|
1994990845
|
XML Schema is the language used to define the structure of messages exchanged between OGC-based web service clients and providers. The size of these schemas has been growing over time, reaching a state that makes their understanding and effective application a hard task. A first step to cope with this situation is to provide different ways to measure the complexity of the schemas. In this regard, we present in this paper an analysis of the complexity of XML schemas in OGC web services. We use a group of metrics found in the literature and introduce new metrics to measure the size and/or complexity of these schemas. The use of adequate metrics allows us to quantify the complexity, quality and other properties of the schemas, which can be very useful in different scenarios.
|
The literature on measuring XML schema complexity has grown in the last few years, based mainly on adapting metrics for assessing the complexity of software systems or XML documents @cite_21 @cite_2 @cite_13 . To the best of our knowledge, the most relevant attempt on this topic is presented in @cite_12 . There, a comprehensive set of metrics is defined and applied to a large corpus of real-world XML schemas. Based on the resulting metrics, the authors define a categorization of a set of schema files according to their size. Another relevant study is @cite_15 , which defines eleven metrics to measure the quality and complexity of XML Schemas.
|
{
"cite_N": [
"@cite_21",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_12"
],
"mid": [
"2137580272",
"1964962870",
"97127447",
"1986243933",
""
],
"abstract": [
"XML has emerged as the language for exchanging data on the web and has attracted considerable interest both in industry and in academia. Nevertheless, to date, little is known about the XML documents published on the web. This paper presents a comprehensive analysis of a sample of about 200,000 XML documents on the web, and is the first study of its kind. We study the distribution of XML documents across the web in several ways; moreover, we provided a detailed characterization of the structure of real XML documents. Our results provide valuable input to the design of algorithms, tools and systems that use XML in one form or another.",
"This paper describes a graph-theoretic complexity measure and illustrates how it can be used to manage and control program complexity. The paper first explains how the graph-theory concepts apply and gives an intuitive explanation of the graph concepts in programming terms. The control graphs of several actual Fortran programs are then presented to illustrate the correlation between intuitive complexity and the graph-theoretic complexity. Several properties of the graph-theoretic complexity are then proved which show, for example, that complexity is independent of physical size (adding or subtracting functional statements leaves complexity unchanged) and complexity depends only on the decision structure of a program.",
"Despite the ubiquity of XML, research in metrics for XML documents is scarce. This paper proposes and discusses eleven metrics to measure the quality and complexity of XML Schema and conforming XML documents. To provide an easy view of these metrics, two composite indices have been defined to measure quality and complexity. An open source metric analyzer tool for XML Schema has been developed. The tool can easily be extended to add new metrics and alter the composition of the indices to best fit the requirements of a given application.",
"The eXtensible Markup Language (XML) is a recommendation of the World Wide Web Consortium (W3C). It is a public format and has been widely adopted as a means of interchanging information among computer programs. With XML documents being typically large, we need to have ways of improving their ease of use and maintainability by keeping their complexity low. This research focused on different ways of determining the complexity of XML documents based on various syntactic and structural aspects of these documents. An XML document represents a generic tree. XML documents are pre-order traversal of equivalent XML trees. One of the important findings was that documents with higher nesting levels had more weights and could therefore be viewed as being more complicated as compared to the documents with lower nesting levels. Another important finding was related to document type definitions (DTDs). DTDs can be expressed as regular expressions providing means for calculating quantitative values.",
""
]
}
|
1110.0207
|
1994990845
|
XML Schema is the language used to define the structure of messages exchanged between OGC-based web service clients and providers. The size of these schemas has been growing over time, reaching a state that makes their understanding and effective application a hard task. A first step to cope with this situation is to provide different ways to measure the complexity of the schemas. In this regard, we present in this paper an analysis of the complexity of XML schemas in OGC web services. We use a group of metrics found in the literature and introduce new metrics to measure the size and/or complexity of these schemas. The use of adequate metrics allows us to quantify the complexity, quality and other properties of the schemas, which can be very useful in different scenarios.
|
Finally, regarding schema complexity, the authors of @cite_5 present a set of schema metrics in the context of schema mapping. A combined metric is defined based on simpler metrics considering schema size, the use of different schema features, and naming strategies. The combined metric is evaluated in the context of business document standards.
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"1989875612"
],
"abstract": [
"Exchanging structured business documents is inevitable for successful collaboration in electronic commerce. A prerequisite, for fostering the interoperability between business partners utilizing different business document standards, is a mapping between different standards. However, the effort involved in creating those mappings is hard to estimate. For example, the complexity of standardized formats is one crucial aspect affecting the effort of the mapping process. Therefore, a notion of complexity is desirable for both, manual as well as automatic mapping processes. For this reason we develop an initial set of metrics, based on well established metrics for XML Schema, allowing to analyze the complexity of business document standards. Having such metrics at hand allows estimating the complexity and hence the mapping effort of a business document standard, prior to the actual mapping process. We demonstrate the complexity metrics on three different business document standards from the electronic commerce domain."
]
}
|
1109.5559
|
1641168109
|
We present results from our cosmological N-body simulation which consisted of 2048x2048x2048 particles and ran distributed across three supercomputers throughout Europe. The run, which was performed as the concluding phase of the Gravitational Billion Body Problem DEISA project, integrated a 30 Mpc box of dark matter using an optimized Tree Particle Mesh N-body integrator. We ran the simulation up to the present day (z=0), and obtained an efficiency of about 0.93 over 2048 cores compared to a single supercomputer run. In addition, we share our experiences on using multiple supercomputers for high performance computing and provide several recommendations for future projects.
|
There are several other projects that have run high performance computing applications across multiple supercomputers. These include simulations of a galaxy collision @cite_20 , a materials science problem @cite_1 , as well as an analysis application for arthropod evolution @cite_22 . A larger number of groups have performed distributed computing across sites of PCs rather than supercomputers (e.g., @cite_4 @cite_7 @cite_15 ). Several software tools have been developed to facilitate high performance computing across sites of PCs (e.g., @cite_3 @cite_5 @cite_18 @cite_9 @cite_12 ) and within volatile computing environments @cite_17 . The recently launched MAPPER EU-FP7 project @cite_8 seeks to run multiscale applications across a distributed supercomputing environment, where individual subcodes periodically exchange information and (in some cases) run concurrently on different supercomputing architectures.
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_5",
"@cite_15",
"@cite_12",
"@cite_20",
"@cite_17"
],
"mid": [
"1825216778",
"2024343295",
"2169736303",
"2059715946",
"",
"2004261242",
"204986382",
"2077783617",
"1996254287",
"2153626282",
"2088186136",
"1985967703",
"2008170189"
],
"abstract": [
"A large number of MPI implementations are currently available, each of which emphasize different aspects of high-performance computing or are intended to solve a specific research problem. The result is a myriad of incompatible MPI implementations, all of which require separate installation, and the combination of which present significant logistical challenges for end users. Building upon prior research, and influenced by experience gained from the code bases of the LAM MPI, LA-MPI, and FT-MPI projects, Open MPI is an all-new, production-quality MPI-2 implementation that is fundamentally centered around component concepts. Open MPI provides a unique combination of novel features previously unavailable in an open-source, production-quality implementation of MPI. Its component architecture provides both a stable platform for third-party research as well as enabling the run-time composition of independent software add-ons. This paper presents a high-level overview the goals, design, and implementation of Open MPI.",
"We discuss the performance of direct summation codes used in the simulation of astrophysical stellar systems on highly distributed architectures. These codes compute the gravitational interaction among stars in an exact way and have an O(N^2) scaling with the number of particles. They can be applied to a variety of astrophysical problems, like the evolution of star clusters, the dynamics of black holes, the formation of planetary systems, and cosmological simulations. The simulation of realistic star clusters with sufficiently high accuracy cannot be performed on a single workstation but may be possible on parallel computers or grids. We have implemented two parallel schemes for a direct N-body code and we study their performance on general purpose parallel computers and large computational grids. We present the results of timing analyzes conducted on the different architectures and compare them with the predictions from theoretical models. We conclude that the simulation of star clusters with up to a million particles will be possible on large distributed computers in the next decade. Simulating entire galaxies however will in addition require new hybrid methods to speedup the calculation.",
"Maximum likelihood analysis is a powerful technique for inferring evolutionary histories from genetic sequence data. During the fall of 2003, an international team of computer scientists, biologists, and computer centers created a global grid to analyze the evolution of hexapods (arthropods with six legs). We created a global grid of computers using systems located in eight countries, spread across six continents (every continent but Antarctica). This work was done as part of the SC03 HPC challenge, and this project was given an HPC challenge award for the \"most distributed application\". More importantly, the creation of this computing grid enabled investigation of important questions regarding the evolution of arthropods - research that would not have otherwise been undertaken. Grid computing thus leads directly to new scientific insights.",
"This paper argues that computational grids can be used for far more types of applications than just trivially parallel ones. Algorithmic optimizations like latency-hiding and exploiting locality can be used effectively to obtain high performance on grids, despite the relatively slow wide-area networks that connect the grid resources. Moreover, the bandwidth of wide-area networks increases rapidly, allowing even some applications that are extremely communication intensive to run on a grid, provided the underlying algorithms are latency-tolerant. We illustrate large-scale parallel computing on grids with three example applications that search large state spaces: transposition-driven search, retrograde analysis, and model checking. We present several performance results on a state-of-the-art computer science grid (DAS-3) with a dedicated optical network.",
"",
"Large scale supercomputing applications typically run on clusters using vendor message passing libraries, limiting the application to the availability of memory and CPU resources on that single machine. The ability to run inter-cluster parallel code is attractive since it allows the consolidation of multiple large scale resources for computational simulations not possible on a single machine, and it also allows the conglomeration of small subsets of CPU cores for rapid turnaround, for example, in the case of high-availability computing. MPIg is a grid-enabled implementation of the Message Passing Interface (MPI), extending the MPICH implementation of MPI to use Globus Toolkit services such as resource allocation and authentication. To achieve co-availability of resources, HARC, the Highly-Available Resource Co-allocator, is used. Here we examine two applications using MPIg: LAMMPS (Large-scale Atomic Molecular Massively Parallel Simulator), is used with a replica exchange molecular dynamics approach to enhance binding affinity calculations in HIV drug research, and HemeLB, which is a lattice-Boltzmann solver designed to address fluid flow in geometries such as the human cerebral vascular system. The cross-site scalability of both these applications is tested and compared to single-machine performance. In HemeLB, communication costs are hidden by effectively overlapping non-blocking communication with computation, essentially scaling linearly across multiple sites, and LAMMPS scales almost as well when run between two significantly geographically separated sites as it does at a single site.",
"The advanced networking department at Sandia National Laboratories has used the annual Supercomputing conference sponsored by the IEEE and ACM for the past several years as a forum to demonstrate and focus communication and networking developments. At Supercomputing 96, for the first time, Sandia National Laboratories, Los Alamos National Laboratory, and Lawrence Livermore National Laboratory combined their Supercomputing 96 activities within a single research booth under the ASO banner. Sandia provided the network design and coordinated the networking activities within the booth. At Supercomputing 96, Sandia elected: to demonstrate wide area network connected Massively Parallel Processors, to demonstrate the functionality and capability of Sandia s new edge architecture, to demonstrate inter-continental collaboration tools, and to demonstrate ATM video capabilities. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia s overall strategies in ATM networking.",
"Application development for distributed-computing \"Grids\" can benefit from tools that variously hide or enable application-level management of critical aspects of the heterogeneous environment. As part of an investigation of these issues, we have developed MPICH-G2, a Grid-enabled implementation of the Message Passing Interface (MPI) that allows a user to run MPI programs across multiple computers, at the same or different sites, using the same commands that would be used on a parallel computer. This library extends the Argonne MPICH implementation of MPI to use services provided by the Globus Toolkit for authentication, authorization, resource allocation, executable staging, and I O, as well as for process creation, monitoring, and control. Various performance-critical operations, including startup and collective operations, are configured to exploit network topology information. The library also exploits MPI constructs for performance management; for example, the MPI communicator construct is used for application-level discovery of, and adaptation to, both network topology and network quality-of-service mechanisms. We describe the MPICH-G2 design and implementation, present performance results, and review application experiences, including record-setting distributed simulations.",
"Distributed computing is a means to overcome the limitations of single computing systems. In this paper we describe how clusters of heterogeneous supercomputers can be used to run a single application or a set of applications. We concentrate on the communication problem in such a configuration and present a software library called PACX-MPI that was developed to allow a single system image from the point of view of an MPI programmer. We describe the concepts that have been implemented for heterogeneous clusters of this type and give a description of real applications using this library.",
"The concept of topology-aware grid applications is derived from parallelized computational models of complex systems that are executed on heterogeneous resources, either because they require specialized hardware for certain calculations, or because their parallelization is flexible enough to exploit such resources. Here we describe two such applications, a multi-body simulation of stellar evolution, and an evolutionary algorithm that is used for reverse-engineering gene regulatory networks. We then describe the topology-aware middleware we have developed to facilitate the modeling-implementing-executing'' cycle of complex systems applications. The developed middleware allows topology-aware simulations to run on geographically distributed clusters with or without firewalls between them. Additionally, we describe advanced coallocation and scheduling techniques that take into account the applications topologies. Results are given based on running the topology-aware applications on the Grid'5000 infrastructure.",
"While computational grids with multiple batch systems (batch grids) have been used for efficient executions of loosely-coupled and workflow-based parallel applications, they can also be powerful infrastructures for executing long-running multi-component parallel applications. In this paper, we have constructed a generic middleware framework for executing long-running multi-component applications with execution times much greater than execution time limits of batch queues. Our framework coordinates the distribution, execution, migration and restart of the components of the application on the multiple queues, where the component jobs of the different queues can have different queue waiting and startup times. We have used our framework with a foremost long-running multi-component application for climate modeling, the Community Climate System Model (CCSM). We have performed real multiple-site CCSM runs for 6.5 days of wallclock time spanning three sites with four queues and emulated external workloads. Our experiments indicate that multi-site executions can lead to good throughput of application execution.",
"As a part of the Supercomputing '95 High Performance Computing Challenge, the authors have used the re sources of three of the National Science Foundation NSF supercomputing centers as a single, distributed, heteroge neous metacomputer to carry out the largest simulation of colliding galaxies yet attempted. The metacomputer con sisted of the following machines: TMC CM-5, Cray C90 T3D, IBM SP-2, and SGI Power Challenge. This paper describes the scalable parallel numerical algorithms used to carry out the hybrid N-body gas dynamical simulation, as well as the inter-MPP communication system the authors developed to connect them. The system, called sclib, is simple yet flexible, and consists of a network of communicating objects, or agents. Each communication agent is a process that resides on the front-end processor of a parallel supercomputer and brokers the interactions between the various components of the distributed com putation. The design is independent of host architecture and execution model yet takes advantage of scheduling resources local to the compute servers. As such, it serves as a useful prototype for future research in distributed, heterogeneous, high-performance computing.",
"Parallel computing on volatile distributed resources requires schedulers that consider job and resource characteristics. We study unconventional computing environments containing devices spread throughout a single large organization. The devices are not necessarily typical general purpose machines; instead, they could be processors dedicated to special purpose tasks (for example printing and document processing), but capable of being leveraged for distributed computations. Harvesting their idle cycles can simultaneously help resources cooperate to perform their primary task and enable additional functionality and services. A new burstiness metric characterizes the volatility of the high-priority native tasks. A burstiness-aware scheduling heuristic opportunistically introduces grid jobs (a lower priority workload class) to avoid the higher-priority native applications, and effectively harvests idle cycles. Simulations based on real workload traces indicate that this approach improves makespan by an average of 18.3% over random scheduling, and comes within 7.6% of the theoretical upper bound."
]
}
|
1109.5931
|
2950300858
|
We reinterpret some online greedy algorithms for a class of nonlinear "load-balancing" problems as solving a mathematical program online. For example, we consider the problem of assigning jobs to (unrelated) machines to minimize the sum of the alpha-th powers of the loads plus assignment costs (the online Generalized Assignment Problem); or choosing paths to connect terminal pairs to minimize the alpha-th powers of the edge loads (online routing with speed-scalable routers). We give analyses of these online algorithms using the dual of the primal program as a lower bound for the optimal algorithm, much in the spirit of online primal-dual results for linear problems. We then observe that a wide class of uni-processor speed scaling problems (with essentially arbitrary scheduling objectives) can be viewed as such load balancing problems with linear assignment costs. This connection gives new algorithms for problems that had resisted solutions using the dominant potential function approaches used in the speed scaling literature, as well as alternate, cleaner proofs for other known results.
|
There are two main lines of speed scaling research that fit within the framework we consider here. The first is the problem of minimizing the energy used subject to deadline feasibility constraints. @cite_15 proposed two online algorithms, Average Rate (AVR) and Optimal Available (OA), and showed that AVR is @math -competitive by reasoning directly about the optimal schedule. @cite_4 introduced the use of potential functions for analyzing online scheduling problems, and showed that OA and another algorithm are @math -competitive. @cite_13 gave a potential function analysis to show that AVR is @math -competitive. @cite_27 introduced a new algorithm and gave a potential function analysis to show that it has a better competitive ratio than OA or AVR for smallish @math .
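The AVR heuristic discussed above admits a compact sketch: each job contributes a constant density (its work divided by the length of its release-to-deadline window) throughout that window, and AVR runs at the sum of the densities of the currently active jobs. The following is a minimal illustration, not code from the cited papers; the function names and the numeric energy approximation are ours:

```python
def avr_speed(jobs, t):
    """Speed of the Average Rate (AVR) heuristic at time t.

    Each job (release r, deadline d, work w) contributes its density
    w / (d - r) throughout [r, d); AVR runs at the sum of the densities
    of the jobs active at time t.
    """
    return sum(w / (d - r) for (r, d, w) in jobs if r <= t < d)


def avr_energy(jobs, horizon, alpha=2.0, dt=0.01):
    """Approximate energy used by AVR under power P(s) = s**alpha,
    via a simple Riemann sum over [0, horizon)."""
    steps = int(horizon / dt)
    return sum(avr_speed(jobs, i * dt) ** alpha * dt for i in range(steps))
```

For example, with jobs (0, 2, 2) and (1, 3, 1), AVR runs at speed 1 on [0, 1), 1.5 on [1, 2), and 0.5 on [2, 3), so at alpha = 2 its energy is roughly 1 + 2.25 + 0.25 = 3.5.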
|
{
"cite_N": [
"@cite_13",
"@cite_27",
"@cite_15",
"@cite_4"
],
"mid": [
"2159187451",
"",
"2099961254",
"2138779116"
],
"abstract": [
"Speed scaling is a power management technique that involves dynamically changing the speed of a processor. This gives rise to dual-objective scheduling problems, where the operating system both wants to conserve energy and optimize some Quality of Service (QoS) measure of the resulting schedule. Yao, Demers, and Shenker (Proc. IEEE Symp. Foundations of Computer Science, pp. 374–382, 1995) considered the problem where the QoS constraint is deadline feasibility and the objective is to minimize the energy used. They proposed an online speed scaling algorithm Average Rate (AVR) that runs each job at a constant speed between its release and its deadline. They showed that the competitive ratio of AVR is at most (2α)^α / 2 if a processor running at speed s uses power s^α. We show the competitive ratio of AVR is at least ((2−δ)α)^α / 2, where δ is a function of α that approaches zero as α approaches infinity. This shows that the competitive analysis of AVR by Yao, Demers, and Shenker is essentially tight, at least for large α. We also give an alternative proof that the competitive ratio of AVR is at most (2α)^α / 2 using a potential function argument. We believe that this analysis is significantly simpler and more elementary than the original analysis of AVR in (Proc. IEEE Symp. Foundations of Computer Science, pp. 374–382, 1995).",
"",
"The energy usage of computer systems is becoming an important consideration, especially for battery-operated systems. Various methods for reducing energy consumption have been investigated, both at the circuit level and at the operating systems level. In this paper, we propose a simple model of job scheduling aimed at capturing some key aspects of energy minimization. In this model, each job is to be executed between its arrival time and deadline by a single processor with variable speed, under the assumption that energy usage per unit time, P, is a convex function of the processor speed s. We give an off-line algorithm that computes, for any set of jobs, a minimum-energy schedule. We then consider some on-line algorithms and their competitive performance for the power function P(s) = s^p where p ≥ 2. It is shown that one natural heuristic, called the Average Rate heuristic, uses at most a constant times the minimum energy required. The analysis involves bounding the largest eigenvalue in matrices of a special type.",
"Speed scaling is a power management technique that involves dynamically changing the speed of a processor. We study policies for setting the speed of the processor for both of the goals of minimizing the energy used and the maximum temperature attained. The theoretical study of speed scaling policies to manage energy was initiated in a seminal paper by [1995], and we adopt their setting. We assume that the power required to run at speed s is P(s) = s^α for some constant α > 1. We assume a collection of tasks, each with a release time, a deadline, and an arbitrary amount of work that must be done between the release time and the deadline. [1995] gave an offline greedy algorithm YDS to compute the minimum energy schedule. They further proposed two online algorithms Average Rate (AVR) and Optimal Available (OA), and showed that AVR is 2^(α−1) α^α-competitive with respect to energy. We provide a tight α^α bound on the competitive ratio of OA with respect to energy. We initiate the study of speed scaling to manage temperature. We assume that the environment has a fixed ambient temperature and that the device cools according to Newton's law of cooling. We observe that the maximum temperature can be approximated within a factor of two by the maximum energy used over any interval of length 1/b, where b is the cooling parameter of the device. We define a speed scaling policy to be cooling-oblivious if it is simultaneously constant-competitive with respect to temperature for all cooling parameters. We then observe that cooling-oblivious algorithms are also constant-competitive with respect to energy, maximum speed and maximum power. We show that YDS is a cooling-oblivious algorithm. In contrast, we show that the online algorithms OA and AVR are not cooling-oblivious. We then propose a new online algorithm that we call BKP. We show that BKP is cooling-oblivious. We further show that BKP is e-competitive with respect to the maximum speed, and that no deterministic online algorithm can have a better competitive ratio. BKP also has a lower competitive ratio for energy than OA for α ≥ 5. Finally, we show that the optimal temperature schedule can be computed offline in polynomial-time using the Ellipsoid algorithm."
]
}
|
1109.5931
|
2950300858
|
We reinterpret some online greedy algorithms for a class of nonlinear "load-balancing" problems as solving a mathematical program online. For example, we consider the problem of assigning jobs to (unrelated) machines to minimize the sum of the alpha-th powers of the loads plus assignment costs (the online Generalized Assignment Problem); or choosing paths to connect terminal pairs to minimize the alpha-th powers of the edge loads (online routing with speed-scalable routers). We give analyses of these online algorithms using the dual of the primal program as a lower bound for the optimal algorithm, much in the spirit of online primal-dual results for linear problems. We then observe that a wide class of uni-processor speed scaling problems (with essentially arbitrary scheduling objectives) can be viewed as such load balancing problems with linear assignment costs. This connection gives new algorithms for problems that had resisted solutions using the dominant potential function approaches used in the speed scaling literature, as well as alternate, cleaner proofs for other known results.
|
An extensive survey tutorial on the online primal-dual technique for linear problems, as well as the history of the development of this technique, can be found in @cite_12 .
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2142339200"
],
"abstract": [
"The primal—dual method is a powerful algorithmic technique that has proved to be extremely useful for a wide variety of problems in the area of approximation algorithms for NP-hard problems. The method has its origins in the realm of exact algorithms, e.g., for matching and network flow. In the area of approximation algorithms, the primal—dual method has emerged as an important unifying design methodology, starting from the seminal work of Goemans and Williamson [60] We show in this survey how to extend the primal—dual method to the setting of online algorithms, and show its applicability to a wide variety of fundamental problems. Among the online problems that we consider here are the weighted caching problem, generalized caching, the set-cover problem, several graph optimization problems, routing, load balancing, and the problem of allocating ad-auctions. We also show that classic online problems such as the ski rental problem and the dynamic TCP-acknowledgement problem can be solved optimally using a simple primal—dual approach. The primal—dual method has several advantages over existing methods. First, it provides a general recipe for the design and analysis of online algorithms. The linear programming formulation helps detecting the difficulties of the online problem, and the analysis of the competitive ratio is direct, without a potential function appearing \"out of nowhere.\" Finally, since the analysis is done via duality, the competitiveness of the online algorithm is with respect to an optimal fractional solution, which can be advantageous in certain scenarios."
]
}
|
1109.6046
|
1788236294
|
The ever increasing popularity of Facebook and other Online Social Networks has left a wealth of personal and private data on the web, aggregated and readily accessible for broad and automatic retrieval. Protection from both undesired recipients as well as harvesting through crawlers is implemented by simple access control at the provider, configured by manual authorization through the publishing user. Several studies demonstrate that standard settings directly cause an unnoticed over-sharing and that the users have trouble understanding and configuring adequate settings. Using the three simple principles of color coding, ease of access, and application of common practices, we developed a new privacy interface that increases the usability significantly. The results of our user study underlines the extent of the initial problem and documents that our interface enables faster, more precise authorisation and leads to increased intelligibility.
|
Improving security in OSNs is a widely discussed issue in the literature. Numerous approaches have been published, most of which assume not only malicious users but also a malicious provider. They range from splitting the profile in centralized OSNs into atomic parts, encrypting each part separately, and distributing keys to authorized recipients @cite_10 @cite_11 , to completely distributed P2P OSNs such as PeerSoN @cite_1 or Safebook @cite_4 . Common to these approaches is the attempt to enhance the service's infrastructure or even to develop an entirely new social network service. All of them are based on encryption or decentralized storage of private content. Distrusting the service provider, they consequently aim at implementing distributed access control and confidential data storage. Assuming benign service providers, however, the usability of the interfaces emerges as the prevalent challenge for privacy.
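The profile-splitting idea attributed to @cite_10 @cite_11 above can be sketched as follows: each atomic profile field is encrypted under its own key, so a field's key can be handed only to its authorized recipients. This toy sketch is ours, not from the cited works; it uses an iterated-SHA-256 XOR keystream purely for illustration and is not a vetted cipher:

```python
import hashlib
import os


def _keystream(key, n):
    # Toy keystream via iterated SHA-256 (illustration only,
    # NOT a cryptographically vetted stream cipher).
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]


def encrypt_profile(profile):
    """Split a profile dict into atomic fields and encrypt each field
    under its own random key; keys can then be distributed per recipient."""
    enc, keys = {}, {}
    for field, value in profile.items():
        key = os.urandom(32)
        data = value.encode()
        enc[field] = bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))
        keys[field] = key
    return enc, keys


def decrypt_field(enc, keys, field):
    """Recover one field given its ciphertext and per-field key."""
    data = enc[field]
    return bytes(a ^ b for a, b in zip(data, _keystream(keys[field], len(data)))).decode()
```

A recipient authorized only for, say, the `name` field receives `keys["name"]` and can decrypt nothing else, which is the access-control property these schemes aim for.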
|
{
"cite_N": [
"@cite_1",
"@cite_10",
"@cite_4",
"@cite_11"
],
"mid": [
"1984599464",
"2146097977",
"",
"1921245731"
],
"abstract": [
"To address privacy concerns over Online Social Networks (OSNs), we propose a distributed, peer-to-peer approach coupled with encryption. Moreover, extending this distributed approach by direct data exchange between user devices removes the strict Internet-connectivity requirements of web-based OSNs. In order to verify the feasibility of this approach, we designed a two-tiered architecture and protocols that recreate the core features of OSNs in a decentralized way. This paper focuses on the description of the prototype built for the P2P infrastructure for social networks, as a first step without the encryption part, and shares early experiences from the prototype and insights gained since first outlining the challenges and possibilities of decentralized alternatives to OSNs.",
"Increasingly, Internet users trade privacy for service. Facebook, Google, and others mine personal information to target advertising. This paper presents a preliminary and partial answer to the general question \"Can users retain their privacy while still benefiting from these web services?\". We propose NOYB, a novel approach that provides privacy while preserving some of the functionality provided by online services. We apply our approach to the Facebook online social networking website. Through a proof-of-concept implementation we demonstrate that NOYB is practical and incrementally deployable, requires no changes to or cooperation from an existing online service, and indeed can be non-trivial for the online service to detect.",
"",
"The publication of private data in user profiles in a both secure and private way is a rising problem and of special interest in, e.g., online social networks that become more and more popular. Current approaches, especially for decentralized networks, often do not address this issue or impose large storage overhead. In this paper, we present a cryptographic approach to Private Profile Management that is seen as a building block for applications in which users maintain their own profiles, publish and retrieve data, and authorize other users to access different portions of data in their profiles. In this course, we provide: (i) formalization of confidentiality and unlinkability as two main security and privacy goals for the data which is kept in profiles and users who are authorized to retrieve this data, and (ii) specification, analysis, and comparison of two private profile management schemes based on different encryption techniques."
]
}
|
1109.5153
|
2951480135
|
We consider asynchronous multiprocessor systems where processes communicate by accessing shared memory. Exchange of information among processes in such a multiprocessor necessitates costly memory accesses called remote memory references (RMRs), which generate communication on the interconnect joining processors and main memory. In this paper we compare two popular shared memory architecture models, namely the cache-coherent (CC) and distributed shared memory (DSM) models, in terms of their power for solving synchronization problems efficiently with respect to RMRs. The particular problem we consider entails one process sending a "signal" to a subset of other processes. We show that a variant of this problem can be solved very efficiently with respect to RMRs in the CC model, but not so in the DSM model, even when we consider amortized RMR complexity. To our knowledge, this is the first separation in terms of amortized RMR complexity between the CC and DSM models. It is also the first separation in terms of RMR complexity (for asynchronous systems) that does not rely in any way on wait-freedom---the requirement that a process makes progress in a bounded number of its own steps.
|
Mutual exclusion has been studied not only in asynchronous systems, but also in semi-synchronous systems, where consecutive steps by the same process occur at most @math time units apart for some @math @cite_13 . In one class of such systems, every process knows @math , and processes have the ability to delay their own execution by at least @math time units in order to force others to make progress. Given reads, writes and comparison primitives, ME can be solved in such systems using @math RMRs in the DSM model, but in the CC model @math RMRs are needed in the worst case @cite_5 . To our knowledge, this is the first result that separates the CC and DSM models in terms of RMR complexity for solving a fundamental synchronization problem. (In this context we ignore complexity bounds for LFCU systems because they are not representative of the more common variants of the CC model.)
|
{
"cite_N": [
"@cite_5",
"@cite_13"
],
"mid": [
"1510877565",
"2087801709"
],
"abstract": [
"We consider the time complexity of shared-memory mutual exclusion algorithms based on reads, writes, and comparison primitives under the remote-memory-reference (RMR) time measure. For asynchronous systems, a lower bound of Ω(log N / log log N) RMRs per critical-section entry has been established in previous work, where N is the number of processes. In this paper, we show that lower RMR time complexity is attainable in semi-synchronous systems in which processes may execute delay statements. When assessing the time complexity of delay-based algorithms, the question of whether delays should be counted arises. We consider both possibilities. Also of relevance is whether delay durations are upper-bounded. (They are lower-bounded by definition.) Again, we consider both possibilities. For each of these possibilities, we present an algorithm with either Θ(1) or Θ(log log N) time complexity. For the cases in which a Θ(log log N) algorithm is given, we establish matching Ω(log log N) lower bounds.",
"In 1986, Michel Raynal published a comprehensive survey of algorithms for mutual exclusion [72]. In this paper, we survey major research trends since 1986 in work on shared-memory mutual exclusion."
]
}
|
1109.5034
|
2114512051
|
The goal of this work is the identification of humans based on motion data in the form of natural hand gestures. In this paper, the identification problem is formulated as classification with classes corresponding to persons' identities, based on recorded signals of performed gestures. The identification performance is examined with a database of twenty-two natural hand gestures recorded with two types of hardware and three state-of-the-art classifiers: Linear Discriminant Analysis (LDA), Support Vector Machines (SVM) and k-Nearest Neighbour (k-NN). Results show that natural hand gestures allow for an effective human classification.
|
Hand data gathering techniques can be divided into device-based, where mechanical or optical sensors attached to a glove allow for measurement of finger flex, hand position and acceleration, e.g. @cite_23 , and vision-based, where hands are tracked using data from optical sensors, e.g. @cite_10 . A survey of glove-based systems for motion data gathering and their applications can be found in @cite_6 , while @cite_11 provides a comprehensive analysis of the integration of various sensors into gesture recognition systems.
|
{
"cite_N": [
"@cite_10",
"@cite_6",
"@cite_23",
"@cite_11"
],
"mid": [
"",
"2144232362",
"1992889084",
"2079788562"
],
"abstract": [
"",
"Hand movement data acquisition is used in many engineering applications ranging from the analysis of gestures to the biomedical sciences. Glove-based systems represent one of the most important efforts aimed at acquiring hand movement data. While they have been around for over three decades, they keep attracting the interest of researchers from increasingly diverse fields. This paper surveys such glove systems and their applications. It also analyzes the characteristics of the devices, provides a road map of the evolution of the technology, and discusses limitations of current technology and trends at the frontiers of research. A foremost goal of this paper is to provide readers who are new to the area with a basis for understanding glove systems technology and how it can be applied, while offering specialists an updated picture of the breadth of applications in several engineering and biomedical sciences areas.",
"This paper describes a novel hand gesture recognition system that utilizes both multi-channel surface electromyogram (EMG) sensors and 3D accelerometer (ACC) to realize user-friendly interaction between human and computers. Signal segments of meaningful gestures are determined from the continuous EMG signal inputs. Multi-stream Hidden Markov Models consisting of EMG and ACC streams are utilized as decision fusion method to recognize hand gestures. This paper also presents a virtual Rubik's Cube game that is controlled by the hand gestures and is used for evaluating the performance of our hand gesture recognition system. For a set of 18 kinds of gestures, each trained with 10 repetitions, the average recognition accuracy was about 91.7 in real application. The proposed method facilitates intelligent and natural control based on gesture interaction.",
"A gesture recognition system (GRS) is comprised of a gesture, gesture-capture device (sensor), tracking algorithm (for motion capture), feature extraction, and classification algorithm. With the impending movement toward natural communication with mechanical and software systems, it is important to examine the first apparatus that separates the human communicator and the device being controlled. Although there are numerous reviews of GRSs, a comprehensive analysis of the integration of sensors into GRSs and their impact on system performance is lacking in the professional literature. Thus, we have undertaken this effort. Determination of the sensor stimulus, context of use, and sensor platform are major preliminary design issues in GRSs. Thus, these three components form the basic structure of our taxonomy. We emphasize the relationship between these critical components and the design of the GRS in terms of its architectural functions and computational requirements. In this treatise, we consider sensors that are capable of capturing dynamic and static arm and hand gestures. Although we discuss various sensor types, our main focus is on visual sensors as we expect these to become the sensor of choice in the foreseeable future. We delineate the challenges ahead for their increased effectiveness in this application domain. We note as a special challenge, the development of sensors that take over many of the functions the GRS designer struggles with today. We believe our contribution, in this first survey on sensors for GRSs, can give valuable insights into this important research and development topic, and encourage advanced research directions and new approaches."
]
}
|
1109.5034
|
2114512051
|
The goal of this work is the identification of humans based on motion data in the form of natural hand gestures. In this paper, the identification problem is formulated as classification with classes corresponding to persons' identities, based on recorded signals of performed gestures. The identification performance is examined with a database of twenty-two natural hand gestures recorded with two types of hardware and three state-of-the-art classifiers: Linear Discriminant Analysis (LDA), Support Vector Machines (SVM) and k-Nearest Neighbour (k-NN). Results show that natural hand gestures allow for an effective human classification.
|
While non-invasive vision-based methods for gathering hand movement data are popular, device-based techniques receive attention due to the widespread use of motion sensors in mobile devices. For example, @cite_20 presents a high-performance, two-stage recognition algorithm for acceleration signals that has been adopted in Samsung cell phones.
|
{
"cite_N": [
"@cite_20"
],
"mid": [
"1552665206"
],
"abstract": [
"As many functionalities like cameras and MP3 players are converged to cell phones, more intuitive interaction methods are essential beyond tiny keypads. In this paper, we present gesture-based interactions and their two-stage recognition algorithm. Acceleration signals are generated from accelerometer. At the first stage, they are hierarchically modelled and matched as basic component and their relationships by Bayesian networks. At the second stage, they are further classified by SVMs for resolving confusing pairs. Our system showed enough recognition performance for commercialization; with 100 novice users, the average recognition rate was 96.9% on 11 gestures (digits 1-9, O, X). The algorithms have been adopted in the world-first gesture-recognizing Samsung cell phones since 2005."
]
}
|
1109.5034
|
2114512051
|
The goal of this work is the identification of humans based on motion data in the form of natural hand gestures. In this paper, the identification problem is formulated as classification with classes corresponding to persons' identities, based on recorded signals of performed gestures. The identification performance is examined with a database of twenty-two natural hand gestures recorded with two types of hardware and three state-of-the-art classifiers: Linear Discriminant Analysis (LDA), Support Vector Machines (SVM) and k-Nearest Neighbour (k-NN). Results show that natural hand gestures allow for an effective human classification.
|
Extracted features may describe not only the motion of the hands but also their estimated pose. A review of the literature on hand pose estimation is provided in @cite_21 . A gesture model can be created using multiple approaches, including Hidden Markov Models, e.g. @cite_22 , or Dynamic Bayesian Networks, e.g. @cite_25 . Application domains for hand gesture recognition include sign language recognition, e.g. @cite_3 , robotic and computer interaction, e.g. @cite_29 , computer games, e.g. @cite_32 , and virtual reality applications, e.g. @cite_25 .
|
{
"cite_N": [
"@cite_22",
"@cite_29",
"@cite_21",
"@cite_32",
"@cite_3",
"@cite_25"
],
"mid": [
"2168392347",
"2145452192",
"",
"2097880181",
"2096976709",
"2124619563"
],
"abstract": [
"Gesture recognition pertains to recognizing meaningful expressions of motion by a human, involving the hands, arms, face, head, and or body. It is of utmost importance in designing an intelligent and efficient human-computer interface. The applications of gesture recognition are manifold, ranging from sign language through medical rehabilitation to virtual reality. In this paper, we provide a survey on gesture recognition with particular emphasis on hand gestures and facial expressions. Applications involving hidden Markov models, particle filtering and condensation, finite-state machines, optical flow, skin color, and connectionist models are discussed in detail. Existing challenges and future research possibilities are also highlighted",
"An intelligent robot requires natural interaction with humans. Visual interpretation of gestures can be useful in accomplishing natural human-robot interaction (HRl). Previous HRI researches were focused on issues such as hand gesture, sign language, and command gesture recognition. However, automatic recognition of whole body gestures is required in order to operate HRI naturally. This can be a challenging problem because describing and modeling meaningful gesture patterns from whole body gestures are complex tasks. This paper presents a new method for spotting and recognizing whole body key gestures at the same time on a mobile robot. Our method is simultaneously used with other HRI approaches such as speech recognition, face recognition, and so forth. In this regard, both of execution speed and recognition performance should be considered. For efficient and natural operation, we used several approaches at each step of gesture recognition; learning and extraction of articulated joint information, representing gesture as a sequence of clusters, spotting and recognizing a gesture with HMM. In addition, we constructed a large gesture database, with which we verified our method. As a result, our method is successfully included and operated in a mobile robot.",
"",
"User interaction is an essential feature in the design of an interactive game. Most existing games receive inputs from users via conventional devices such as keyboard, mouse, joystick and paddle. More recent games make use of infrared beams from user's devices, the stylus from touch screen, or pressure-sensing pads to provide rich contextual sensing and interactions. In this paper, we propose the use of hand gestures as the basis for users to directly interact with game objects that are rendered across a flat plasma or LCD display. It forms a new paradigm of interaction in which the physical movements of hands in the form of hand gestures are coordinated along with the virtual objects in the game. Thus, the user effectively becomes an “input device”. We make use of a low-cost web camera that is mounted over the gaming screen display to provide image-feed to the hand tracking and gesture recognition system, called Germane, which employs the hull-point analysis algorithm for gesture recognition. A working prototype of Germane has been developed to validate its operations on several common gestures. Performance evaluation results of Germane are also presented.",
"This chapter covers the key aspects of sign-language recognition (SLR), starting with a brief introduction to the motivations and requirements, followed by a precis of sign linguistics and their impact on the field. The types of data available and the relative merits are explored allowing examination of the features which can be extracted. Classifying the manual aspects of sign (similar to gestures) is then discussed from a tracking and non-tracking viewpoint before summarising some of the approaches to the non-manual aspects of sign languages. Methods for combining the sign classification results into full SLR are given showing the progression towards speech recognition techniques and the further adaptations required for the sign specific case. Finally the current frontiers are discussed and the recent research presented. This covers the task of continuous sign recognition, the work towards true signer independence, how to effectively combine the different modalities of sign, making use of the current linguistic research and adapting to larger more noisy data sets.",
"The recognition of hand gestures is a challenging task for the high degrees of freedom of hand motion. We develop a virtual reality based driving training system of Self-Propelled Gun (SPG). For this system, a DataGlove with 18 sensors is employed to perform some driving tasks such as pressing switches, manipulating steering wheel, changing gears, etc. To accomplish these tasks, some hand gestures must be defined from the DataGlove sensors data. A feedforward neural network can represent an arbitrary functional mapping so it is possible to map raw data directly to the required hand gestures. This paper uses BP neural network to recognize the hand patterns which exist in the raw sensor data of the DataGlove. A pattern set of 300 hand gestures is used to train and test the neural network. The recognition system achieves good performance. It can be effectively used in our virtual reality training system of SPG to perform various manipulating tasks in a more fast, precise, and natural way."
]
}
|
1109.5034
|
2114512051
|
The goal of this work is the identification of humans based on motion data in the form of natural hand gestures. In this paper, the identification problem is formulated as classification with classes corresponding to persons' identities, based on recorded signals of performed gestures. The identification performance is examined with a database of twenty-two natural hand gestures recorded with two types of hardware and three state-of-the-art classifiers: Linear Discriminant Analysis (LDA), Support Vector Machines (SVM), and k-Nearest Neighbour (k-NN). Results show that natural hand gestures allow for effective human classification.
|
A relatively new application of HCI elements is biometric technology aimed at recognising a person based on their physiological or behavioural characteristics. A survey of behavioural biometrics is provided in @cite_16 , where the authors examine the types of features used to describe human behaviour and compare accuracy rates for verifying users with different behavioural biometric approaches. Simple gesture recognition may be applied for authentication on mobile devices: e.g., in @cite_19 the authors present a study of a light-weight user authentication system using an accelerometer, while a multi-touch gesture-based authentication system is presented in @cite_28 . Typically, however, instead of hand motion, more reliable features such as hand layout @cite_8 or body gait @cite_35 are employed.
|
{
"cite_N": [
"@cite_35",
"@cite_8",
"@cite_28",
"@cite_19",
"@cite_16"
],
"mid": [
"1572940804",
"",
"2163582782",
"2147780311",
"2152395371"
],
"abstract": [
"Human gait is an attractive modality for recognizing people at a distance. In this paper we adopt an appearance-based approach to the problem of gait recognition. The width of the outer contour of the binarized silhouette of a walking person is chosen as the basic image feature. Different gait features are extracted from the width vector, such as the downsampled, smoothed width vectors, the velocity profile, etc., and sequences of such temporally ordered feature vectors are used for representing a person's gait. We use the dynamic time-warping (DTW) approach for matching so that non-linear time normalization may be used to deal with the naturally occurring changes in walking speed. The performance of the proposed method is tested using different gait databases.",
"",
"In this paper, we present a novel multi-touch gesture-based authentication technique. We take advantage of the multi-touch surface to combine biometric techniques with gestural input. We defined a comprehensive set of five-finger touch gestures, based upon classifying movement characteristics of the center of the palm and fingertips, and tested them in a user study combining biometric data collection with usability questions. Using pattern recognition techniques, we built a classifier to recognize unique biometric gesture characteristics of an individual. We achieved a 90% accuracy rate with single gestures, and saw significant improvement when multiple gestures were performed in sequence. We found user ratings of a gesture's desirable characteristics (ease, pleasure, excitement) correlated with a gesture's actual biometric recognition rate - that is to say, user ratings aligned well with gestural security, in contrast to typical text-based passwords. Based on these results, we conclude that multi-touch gestures show great promise as an authentication mechanism.",
"The proliferation of accelerometers on consumer electronics has brought an opportunity for interaction based on gestures or physical manipulation of the devices. We present uWave, an efficient recognition algorithm for such interaction using a single three-axis accelerometer. Unlike statistical methods, uWave requires a single training sample for each gesture pattern and allows users to employ personalized gestures and physical manipulations. We evaluate uWave using a large gesture library with over 4000 samples collected from eight users over an elongated period of time for a gesture vocabulary with eight gesture patterns identified by Nokia research. It shows that uWave achieves 98.6% accuracy, competitive with statistical methods that require significantly more training samples. Our evaluation data set is the largest and most extensive in published studies, to the best of our knowledge. We also present applications of uWave in gesture-based user authentication and interaction with three-dimensional mobile user interfaces using user created gestures.",
"This study is a survey and classification of the state-of-the-art in behavioural biometrics which is based on skills, style, preference, knowledge, motor-skills or strategy used by people while accomplishing different everyday tasks such as driving an automobile, talking on the phone or using a computer. The authors examine current research in the field and analyse the types of features used to describe different types of behaviour. After comparing accuracy rates for verification of users using different behavioural biometric approaches, researchers address privacy issues which arise or might arise in the future with the use of behavioural biometrics."
]
}
|
1109.5034
|
2114512051
|
The goal of this work is the identification of humans based on motion data in the form of natural hand gestures. In this paper, the identification problem is formulated as classification with classes corresponding to persons' identities, based on recorded signals of performed gestures. The identification performance is examined with a database of twenty-two natural hand gestures recorded with two types of hardware and three state-of-the-art classifiers: Linear Discriminant Analysis (LDA), Support Vector Machines (SVM), and k-Nearest Neighbour (k-NN). Results show that natural hand gestures allow for effective human classification.
|
Despite their limitations, linear classifiers @cite_26 have proved to produce good results for many applications, including face recognition @cite_34 and speech detection @cite_33 . In @cite_9 , LDA is used to estimate consistent parameters for modelling three standard types of violin bow strokes. The authors show that such gestures can be effectively represented in a two-dimensional space. In @cite_36 , the LDA classifier was compared with neural networks (NN) and focused time delay neural networks (TDNN) for gesture recognition based on data from a 3-axis accelerometer. LDA gave results similar to the NN approach, while the TDNN technique, though computationally more complex, achieved better performance. An analysis of the LDA and PCA algorithms, with a discussion of their performance for the purpose of object recognition, is provided in @cite_1 . SVM and k-NN classifiers were used in @cite_18 for the purpose of visual category recognition. A comparison of the effectiveness of these methods in the classification of human gait patterns is provided in @cite_4 .
|
{
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_4",
"@cite_33",
"@cite_36",
"@cite_9",
"@cite_1",
"@cite_34"
],
"mid": [
"2165828254",
"",
"2182033924",
"2117321034",
"2110910614",
"1837844861",
"2134262590",
"2122992400"
],
"abstract": [
"We consider visual category recognition in the framework of measuring similarities, or equivalently perceptual distances, to prototype examples of categories. This approach is quite flexible, and permits recognition based on color, texture, and particularly shape, in a homogeneous framework. While nearest neighbor classifiers are natural in this setting, they suffer from the problem of high variance (in the bias-variance decomposition) in the case of limited sampling. Alternatively, one could use support vector machines, but they involve time-consuming optimization and computation of pairwise distances. We propose a hybrid of these two methods which deals naturally with the multiclass setting, has reasonable computational complexity both in training and at run time, and yields excellent results in practice. The basic idea is to find close neighbors to a query sample and train a local support vector machine that preserves the distance function on the collection of neighbors. Our method can be applied to large, multiclass data sets for which it outperforms nearest neighbor and support vector machines, and remains efficient when the problem becomes intractable for support vector machines. A wide variety of distance functions can be used, and our experiments show state-of-the-art performance on a number of benchmark data sets for shape and texture classification (MNIST, USPS, CUReT) and object recognition (Caltech-101). On Caltech-101 we achieved a correct classification rate of 59.05% (±0.56%) at 15 training images per class, and 66.23% (±0.48%) at 30 training images.",
"",
"Information fusion offers a promising solution to the development of a high performance classification system. In this paper multiple gait components such as spatial, temporal and wavelet are fused for enhancing the classification rate. Initially background modeling is done from a video sequence and the foreground moving objects in the individual frames are segmented using the background subtraction algorithm. Then gait representing features are extracted for training and testing the multi-class k-Nearest Neighbor (kNN) models and multi-class support vector machine (SVM) models. We have successfully achieved our objective with only two gait cycles and our experimental results demonstrate that the classification ability of SVM is better than kNN. The proposed system is evaluated using side view videos of the NLPR database.",
"In speech recognition, speech non-speech detection must be robust to noise. In the paper, a method for speech non-speech detection using linear discriminant analysis (LDA) applied to mel frequency cepstrum coefficients (MFCC) is presented. The energy is the most discriminant parameter between noise and speech. But with this single parameter, the speech non-speech detection system detects too many noise segments. The LDA applied to MFCC and the associated test reduces the detection of noise segments. This new algorithm is compared to the one based on signal to noise ratio (Mauuary and Monne, 1993).",
"We used Fisher linear discriminant analysis (LDA), static neural networks (NN), and focused time delay neural networks (TDNN) for gesture recognition. Gestures were collected in the form of acceleration signals along three axes from six participants. A sports watch containing a 3-axis accelerometer was worn by the users, who performed four gestures. Each gesture was performed for ten seconds, at the speed of one gesture per second. User-dependent and user-independent k-fold cross validations were carried out to measure the classifier performance. Using first and second order statistical descriptors of acceleration signals from validation datasets, LDA and NN classifiers were able to recognize the gestures at an average rate of 86% and 97% (user-dependent) and 89% and 85% (user-independent), respectively. TDNNs proved to be the best, achieving near perfect classification rates both for user-dependent and user-independent scenarios, while operating directly on the acceleration signals, alleviating the need for explicit feature extraction.",
"We developed an ”augmented violin”, i.e. an acoustic instrument with added gesture capture capabilities to control electronic processes. We report here the gesture analysis we performed on three different bow strokes, Détaché, Martelé and Spiccato, using this augmented violin. Different features based on velocity and acceleration were considered. A linear discriminant analysis was performed to estimate the minimum number of pertinent features necessary to model these bow stroke classes. We found that the maximum and minimum accelerations of a given stroke were efficient to parameterize the different bow stroke types, as well as differences in dynamics playing. Recognition rates were estimated using a kNN method with various training sets. We finally discuss that bow stroke recognition allows one to relate the gesture data to music notation, while a continuous bow stroke parameterization can be related to continuous sound characteristics.",
"In the context of the appearance-based paradigm for object recognition, it is generally believed that algorithms based on LDA (linear discriminant analysis) are superior to those based on PCA (principal components analysis). In this communication, we show that this is not always the case. We present our case first by using intuitively plausible arguments and, then, by showing actual results on a face database. Our overall conclusion is that when the training data set is small, PCA can outperform LDA and, also, that PCA is less sensitive to different training data sets.",
"Linear discriminant analysis (LDA) is a popular feature extraction technique for face recognition. However, It often suffers from the small sample size problem when dealing with the high dimensional face data. Fisherface and null space LDA (N-LDA) are two conventional approaches to address this problem. But in many cases, these LDA classifiers are overfitted to the training set and discard some useful discriminative information. In this paper, by analyzing different overfitting problems for the two kinds of LDA classifiers, we propose an approach using random subspace and bagging to improve them respectively. By random sampling on feature vector and training samples, multiple stabilized Fisherface and N-LDA classifiers are constructed. The two kinds of complementary classifiers are integrated using a fusion rule, so nearly all the discriminative information is preserved. We also apply this approach to the integration of multiple features. A robust face recognition system integrating shape, texture and Gabor responses is finally developed."
]
}
|
1109.4156
|
2951814478
|
Given an undirected graph @math with @math edges, @math vertices, and non-negative edge weights, and given an integer @math , we show that for some universal constant @math , a @math -approximate distance oracle for @math of size @math can be constructed in @math time and can answer queries in @math time. We also give an oracle which is faster for smaller @math . Our results break the quadratic preprocessing time bound of Baswana and Kavitha for all @math and improve the @math time bound of Thorup and Zwick except for very sparse graphs and small @math . When @math and @math , our oracle is optimal w.r.t. stretch, size, preprocessing time, and query time, assuming a widely believed girth conjecture by Erdős.
|
A problem related to distance oracles is that of finding spanners. We have already mentioned the linear-time algorithm of Baswana and Sen @cite_0 to find a spanner of stretch @math . There has also been interest in so-called @math -spanners, where @math and @math are real numbers. Such a spanner @math of a graph @math ensures that for all vertices @math and @math , @math . In other words, @math allows an additive stretch in addition to a multiplicative stretch. Thorup and Zwick @cite_3 showed the existence of @math -spanners of size @math for any constant @math , where @math for some constant @math . A @math -spanner of size @math was presented by Dor, Halperin, and Zwick @cite_9 . The size was later improved slightly by Elkin and Peleg to @math @cite_10 . Baswana, Kavitha, Mehlhorn, and Pettie @cite_14 gave a spanner of size @math which has additive stretch @math and no multiplicative stretch. This is currently the smallest known spanner with constant additive stretch and no multiplicative stretch.
|
{
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_3",
"@cite_0",
"@cite_10"
],
"mid": [
"",
"2156047991",
"2033698604",
"2092938419",
"2042333226"
],
"abstract": [
"",
"Let G=(V,E) be an unweighted undirected graph on n vertices. A simple argument shows that computing all distances in G with an additive one-sided error of at most 1 is as hard as Boolean matrix multiplication. Building on recent work of [SIAM J. Comput., 28 (1999), pp. 1167--1181], we describe an @math -time algorithm APASP2 for computing all distances in G with an additive one-sided error of at most 2. Algorithm APASP2 is simple, easy to implement, and faster than the fastest known matrix-multiplication algorithm. Furthermore, for every even k>2, we describe an @math -time algorithm APASPk for computing all distances in G with an additive one-sided error of at most k. We also give an @math -time algorithm @math for producing stretch 3 estimated distances in an unweighted and undirected graph on n vertices. No constant stretch factor was previously achieved in @math time. We say that a weighted graph F=(V,E') k-emulates an unweighted graph G=(V,E) if for every @math we have @math . We show that every unweighted graph on n vertices has a 2-emulator with @math edges and a 4-emulator with @math edges. These results are asymptotically tight. Finally, we show that any weighted undirected graph on n vertices has a 3-spanner with @math edges and that such a 3-spanner can be built in @math time. We also describe an @math -time algorithm for estimating all distances in a weighted undirected graph on n vertices with a stretch factor of at most 3.",
"Let k ≥ 2 be an integer. We show that any undirected and unweighted graph G = (V, E) on n vertices has a subgraph G' = (V, E') with O(kn^(1+1/k)) edges such that for any two vertices u, v ∈ V, if Δ_G(u, v) = d, then Δ_G'(u, v) = d + O(d^(1-1/(k-1))). Furthermore, we show that such subgraphs can be constructed in O(mn^(1/k)) time, where m and n are the number of edges and vertices in the original graph. We also show that it is possible to construct a weighted graph G* = (V, E*) with O(kn^(1+1/(2k-1))) edges such that for every u, v ∈ V, if Δ_G(u, v) = d, then d ≤ Δ_G*(u, v) = d + O(d^(1-1/(k-1))). These are the first such results with additive error terms of the form o(d), i.e., additive error terms that are sublinear in the distance being approximated.",
"Let G = (V,E) be an undirected weighted graph on |V| = n vertices and |E| = m edges. A t-spanner of the graph G, for any t ≥ 1, is a subgraph (V,ES), ES ⊆ E, such that the distance between any pair of vertices in the subgraph is at most t times the distance between them in the graph G. Computing a t-spanner of minimum size (number of edges) has been a widely studied and well-motivated problem in computer science. In this paper we present the first linear time randomized algorithm that computes a t-spanner of a given weighted graph. Moreover, the size of the t-spanner computed essentially matches the worst case lower bound implied by a 43-year old girth lower bound conjecture made independently by Erdős, Bollobás, and Bondy & Simonovits. Our algorithm uses a novel clustering approach that avoids any distance computation altogether. This feature is somewhat surprising since all the previously existing algorithms employ computation of some sort of local or global distance information, which involves growing either breadth first search trees up to Θ(t) levels or full shortest path trees on a large fraction of vertices. The truly local approach of our algorithm also leads to equally simple and efficient algorithms for computing spanners in other important computational environments like distributed, parallel, and external memory. © 2006 Wiley Periodicals, Inc. Random Struct. Alg., 2007 Preliminary version of this work appeared in the 30th International Colloquium on Automata, Languages and Programming, pages 384–396, 2003.",
"An @math -spanner of a graph G is a subgraph H such that @math for every pair of vertices u,w, where dist_G'(u,w) denotes the distance between two vertices u and w in G'. It is known that every graph G has a polynomially constructible @math -spanner (also known as a multiplicative @math -spanner) of size @math for every integer @math , and a polynomially constructible (1,2)-spanner (also known as an additive 2-spanner) of size @math . This paper explores hybrid spanner constructions (involving both multiplicative and additive factors) for general graphs and shows that the multiplicative factor can be made arbitrarily close to 1 while keeping the spanner size arbitrarily close to O(n), at the cost of allowing the additive term to be a sufficiently large constant. More formally, we show that for any constant @math there exists a constant @math such that for every @math -vertex graph G there is an efficiently constructible @math -spanner of size @math ."
]
}
|
1109.2613
|
2594199870
|
The throughput benefits of random linear network codes have been studied extensively for wirelined and wireless erasure networks. It is often assumed that all nodes within a network perform coding operations. In energy-constrained systems, however, coding subgraphs should be chosen to control the number of coding nodes while maintaining throughput. In this paper, we explore the strategic use of network coding in the wireless packet erasure relay channel according to both throughput and energy metrics. In the relay channel, a single source communicates to a single sink through the aid of a half-duplex relay. The fluid flow model is used to describe the case where both the source and the relay are coding, and Markov chain models are proposed to describe packet evolution if only the source or only the relay is coding. In addition to transmission energy, we take into account coding and reception energies. We show that coding at the relay alone while operating in a rateless fashion is neither throughput nor energy efficient. Given a set of system parameters, our analysis determines the optimal amount of time the relay should participate in the transmission, and where coding should be performed.
|
The use of RLNC in wireless erasure networks under packetized operations was first studied in @cite_1 , and extended to a scheduling framework in @cite_17 . Other schemes that employ network coding in a relay setup include the MORE protocol @cite_15 , which performs RLNC at the source only to reduce the amount of coordination required by multiple relay nodes, and the COPE protocol, which employs RLNC at the relay only in a 2-way relay channel to improve reliability, taking advantage of opportunistic listening and coding @cite_29 . A network coding based cooperative multicast scheme was also proposed in @cite_6 to show that significant throughput gains can be achieved when network coding is performed at the relay only; one assumption in this work is that feedback is available to the source from both the destination and the relay after each packet reception. In practical systems, feedback can be costly in terms of both throughput and energy, depending on the underlying hardware architecture @cite_11 .
|
{
"cite_N": [
"@cite_11",
"@cite_29",
"@cite_1",
"@cite_6",
"@cite_15",
"@cite_17"
],
"mid": [
"2950944549",
"2163728264",
"2953360229",
"2147198176",
"2127350146",
"2098674643"
],
"abstract": [
"A network coding scheme for practical implementations of wireless body area networks is presented, with the objective of providing reliability under low-energy constraints. We propose a simple network layer protocol for star networks, adapting redundancy based on both transmission and reception energies for data and control packets, as well as channel conditions. Our numerical results show that even for small networks, the amount of energy reduction achievable can range from 29% to 87%, as the receiving energy per control packet increases from equal to much larger than the transmitting energy per data packet. The achievable gains increase as a) more nodes are added to the network, and/or b) the channels seen by different sensor nodes become more asymmetric.",
"This paper proposes COPE, a new architecture for wireless mesh networks. In addition to forwarding packets, routers mix (i.e., code) packets from different sources to increase the information content of each transmission. We show that intelligently mixing packets increases network throughput. Our design is rooted in the theory of network coding. Prior work on network coding is mainly theoretical and focuses on multicast traffic. This paper aims to bridge theory with practice; it addresses the common case of unicast traffic, dynamic and potentially bursty flows, and practical issues facing the integration of network coding in the current network stack. We evaluate our design on a 20-node wireless network, and discuss the results of the first testbed deployment of wireless network coding. The results show that COPE largely increases network throughput. The gains vary from a few percent to several folds depending on the traffic pattern, congestion level, and transport protocol.",
"We present a capacity-achieving coding scheme for unicast or multicast over lossy packet networks. In the scheme, intermediate nodes perform additional coding yet do not decode nor even wait for a block of packets before sending out coded packets. Rather, whenever they have a transmission opportunity, they send out coded packets formed from random linear combinations of previously received packets. All coding and decoding operations have polynomial complexity. We show that the scheme is capacity-achieving as long as packets received on a link arrive according to a process that has an average rate. Thus, packet losses on a link may exhibit correlation in time or with losses on other links. In the special case of Poisson traffic with i.i.d. losses, we give error exponents that quantify the rate of decay of the probability of error with coding delay. Our analysis of the scheme shows that it is not only capacity-achieving, but that the propagation of packets carrying \"innovative\" information follows the propagation of jobs through a queueing network, and therefore fluid flow models yield good approximations. We consider networks with both lossy point-to-point and broadcast links, allowing us to model both wireline and wireless packet networks.",
"We first consider a topology consisting of one source, two destinations and one relay. For such a topology, it is shown that a network coding based cooperative (NCBC) multicast scheme can achieve a diversity order of two. In this paper, we discuss and analyze NCBC in a systematic way as well as compare its performance with two other multicast protocols. The throughput, delay and queue length for each protocol are evaluated. In addition, we present an optimal scheme to maximize throughput subject to delay and queue length constraints. Numerical results will demonstrate that network coding can bring significant gains in terms of throughput.",
"Opportunistic routing is a recent technique that achieves high throughput in the face of lossy wireless links. The current opportunistic routing protocol, ExOR, ties the MAC with routing, imposing a strict schedule on routers' access to the medium. Although the scheduler delivers opportunistic gains, it misses some of the inherent features of the 802.11 MAC. For example, it prevents spatial reuse and thus may underutilize the wireless medium. It also eliminates the layering abstraction, making the protocol less amenable to extensions to alternate traffic types such as multicast. This paper presents MORE, a MAC-independent opportunistic routing protocol. MORE randomly mixes packets before forwarding them. This randomness ensures that routers that hear the same transmission do not forward the same packets. Thus, MORE needs no special scheduler to coordinate routers and can run directly on top of 802.11. Experimental results from a 20-node wireless testbed show that MORE's median unicast throughput is 22% higher than ExOR, and the gains rise to 45% over ExOR when there is a chance of spatial reuse. For multicast, MORE's gains increase with the number of destinations, and are 35-200% greater than ExOR.",
"Consider network coded multicast traffic over a wireless network in the bandwidth limited regime. We formulate the joint medium access and subgraph optimization problem by means of a graphical conflict model. The nature of network coded flows is not captured by classical link-based scheduling and therefore requires a novel approach based on conflicting hyperarcs. By means of simulations, we evaluate the performance of our algorithm and conclude that it significantly outperforms existing scheduling techniques."
]
}
|
1109.2613
|
2594199870
|
The throughput benefits of random linear network codes have been studied extensively for wirelined and wireless erasure networks. It is often assumed that all nodes within a network perform coding operations. In energy-constrained systems, however, coding subgraphs should be chosen to control the number of coding nodes while maintaining throughput. In this paper, we explore the strategic use of network coding in the wireless packet erasure relay channel according to both throughput and energy metrics. In the relay channel, a single source communicates to a single sink through the aid of a half-duplex relay. The fluid flow model is used to describe the case where both the source and the relay are coding, and Markov chain models are proposed to describe packet evolution if only the source or only the relay is coding. In addition to transmission energy, we take into account coding and reception energies. We show that coding at the relay alone while operating in a rateless fashion is neither throughput nor energy efficient. Given a set of system parameters, our analysis determines the optimal amount of time the relay should participate in the transmission, and where coding should be performed.
|
In this paper, we explore rateless transmissions, where the acknowledgement for successful reception is sent only once by the destination, when the transmission of all available data is completed. As described in the introduction, we also take into account the energy spent on reception and packet processing in addition to the energy required to transmit packets. Furthermore, we assume that a sufficiently large field is used for network coding operations, such that transmissions of non-innovative packets from the source can be neglected. In terms of energy use, we make the simple assumption that coding energy stays constant as field size increases, and show that the decision to code depends on the dominating energy term (transmission, reception, or code generation). The tradeoff between the energy budget for the transmission of linearly dependent packets when the field size is small and the energy budget for code generation when the field size is large is discussed in @cite_4 .
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"180044563"
],
"abstract": [
"In the last few years, Network Coding (NC) has been shown to provide several advantages, both in theory and in practice. However, its applicability to battery-operated systems under strict power constraints has not been proven yet, since most implementations are based on high-end CPUs and GPUs. This work represents the first effort to bridge NC theory with real-world, low-power applications. In this paper, we provide a detailed analysis on the energy consumption of NC, based on VLSI design measurements, and an approach for specifying optimal algorithmic parameters, such as field size, minimizing the required energy for both transmission and coding of data. Our custom, energy-aware NC accelerator proves the feasibility of incorporating NC into modern, low-power systems; the proposed architecture achieves a coding throughput of 80 MB/s (60 MB/s), while consuming 22 uW (12.5 mW) for the encoding (decoding) process."
]
}
|
1109.3240
|
2171720987
|
In many real-world networks, nodes have class labels, attributes, or variables that affect the network's topology. If the topology of the network is known but the labels of the nodes are hidden, we would like to select a small subset of nodes such that, if we knew their labels, we could accurately predict the labels of all the other nodes. We develop an active learning algorithm for this problem which uses information-theoretic techniques to choose which nodes to explore. We test our algorithm on networks from three different domains: a social network, a network of English words that appear adjacently in a novel, and a marine food web. Our algorithm makes no initial assumptions about how the groups connect, and performs well even when faced with quite general types of network structure. In particular, we do not assume that nodes of the same class are more likely to be connected to each other---only that they connect to the rest of the network in similar ways.
|
The idea of designing experiments by maximizing the mutual information between the variable we learn next and the joint distribution of the other variables, or equivalently the expected amount of information we gain about the joint distribution, has a long history in statistics, artificial intelligence, and machine learning, e.g. MacKay @cite_37 and Guo and Greiner @cite_39 . Indeed, it goes back to the work of Lindley @cite_21 in the 1950s. However, to our knowledge, this is the first time it has been coupled with a generative model to discover hidden variables in networks.
|
{
"cite_N": [
"@cite_37",
"@cite_21",
"@cite_39"
],
"mid": [
"2115305054",
"2076580309",
"120286951"
],
"abstract": [
"Learning can be made more efficient if we can actively select particularly salient data points. Within a Bayesian learning framework, objective functions are discussed that measure the expected informativeness of candidate measurements. Three alternative specifications of what we want to gain information about lead to three different criteria for data selection. All these criteria depend on the assumption that the hypothesis space is correct, which may prove to be their main weakness.",
"",
"An \"active learning system\" will sequentially decide which unlabeled instance to label, with the goal of efficiently gathering the information necessary to produce a good classifier. Some such systems greedily select the next instance based only on properties of that instance and the few currently labeled points -- e.g., selecting the one closest to the current classification boundary. Unfortunately, these approaches ignore the valuable information contained in the other unlabeled instances, which can help identify a good classifier much faster. For the previous approaches that do exploit this unlabeled data, this information is mostly used in a conservative way. One common property of the approaches in the literature is that the active learner sticks to one single query selection criterion in the whole process. We propose a system, MM+M, that selects the query instance that is able to provide the maximum conditional mutual information about the labels of the unlabeled instances, given the labeled data, in an optimistic way. This approach implicitly exploits the discriminative partition information contained in the unlabeled data. Instead of using one selection criterion, MM+M also employs a simple on-line method that changes its selection rule when it encounters an \"unexpected label\". Our empirical results demonstrate that this new approach works effectively."
]
}
|
1109.3240
|
2171720987
|
In many real-world networks, nodes have class labels, attributes, or variables that affect the network's topology. If the topology of the network is known but the labels of the nodes are hidden, we would like to select a small subset of nodes such that, if we knew their labels, we could accurately predict the labels of all the other nodes. We develop an active learning algorithm for this problem which uses information-theoretic techniques to choose which nodes to explore. We test our algorithm on networks from three different domains: a social network, a network of English words that appear adjacently in a novel, and a marine food web. Our algorithm makes no initial assumptions about how the groups connect, and performs well even when faced with quite general types of network structure. In particular, we do not assume that nodes of the same class are more likely to be connected to each other---only that they connect to the rest of the network in similar ways.
|
Additional works by Goldberg, Zhu, and Wright @cite_29 and by Tong and Jin @cite_22 also perform semi-supervised learning on graphs, and handle the disassortative case. However, they work in a setting where they know, for each link, whether its endpoints should have the same or different labels, such as when one writer quotes another with pejorative words. In contrast, we work in a setting where we have no such information: only the topology is available to us, and there are no signs on the edges telling us whether we should propagate similar or dissimilar labels.
|
{
"cite_N": [
"@cite_29",
"@cite_22"
],
"mid": [
"2287655724",
"119705851"
],
"abstract": [
"Supervised and semi-supervised data mining techniques require labeled data. However, labeling examples is costly for many real-world applications. To address this problem, active learning techniques have been developed to guide the labeling process in an effort to minimize the amount of labeled data without sacrificing much from the quality of the learned models. Yet, most of the active learning methods to date have remained relatively agnostic to the rich structure offered by network data, often ignoring the relationships between the nodes of a network. On the other hand, the relational learning community has shown that the relationships can be very informative for various prediction tasks. In this paper, we propose different ways of adapting existing active learning work to network data while utilizing links to select better examples to label.",
"Recent studies have shown that graph-based approaches are effective for semi-supervised learning. The key idea behind many graph-based approaches is to enforce the consistency between the class assignment of unlabeled examples and the pairwise similarity between examples. One major limitation with most graph-based approaches is that they are unable to explore dissimilarity or negative similarity. This is because the dissimilar relation is not transitive, and therefore is difficult to propagate. Furthermore, negative similarity could result in unbounded energy functions, which makes most graph-based algorithms inapplicable. In this paper, we propose a new graph-based approach, termed \"mixed label propagation\", which is able to effectively explore both similarity and dissimilarity simultaneously. In particular, the new framework determines the assignment of class labels by (1) minimizing the energy function associated with positive similarity, and (2) maximizing the energy function associated with negative similarity. Our empirical study with collaborative filtering shows promising performance of the proposed approach."
]
}
|
1109.2935
|
2950910970
|
Over the past five years, graphics processing units (GPUs) have had a transformational effect on numerical lattice quantum chromodynamics (LQCD) calculations in nuclear and particle physics. While GPUs have been applied with great success to the post-Monte Carlo "analysis" phase which accounts for a substantial fraction of the workload in a typical LQCD calculation, the initial Monte Carlo "gauge field generation" phase requires capability-level supercomputing, corresponding to O(100) GPUs or more. Such strong scaling has not been previously achieved. In this contribution, we demonstrate that using a multi-dimensional parallelization strategy and a domain-decomposed preconditioner allows us to scale into this regime. We present results for two popular discretizations of the Dirac operator, Wilson-clover and improved staggered, employing up to 256 GPUs on the Edge cluster at Lawrence Livermore National Laboratory.
|
Most work to date has concerned single-GPU LQCD implementations; beyond the multi-GPU parallelization of QUDA @cite_12 @cite_24 and the work in @cite_4 , which targets a multi-GPU implementation of the overlap formulation, little has been reported in the literature, though we are aware of other implementations which are in production @cite_3 . Domain-decomposition algorithms were first introduced to LQCD in @cite_7 , through an implementation of the Schwarz Alternating Procedure preconditioner, which is a multiplicative Schwarz preconditioner. Closer to the work presented here is @cite_15 , where a restricted additive Schwarz preconditioner was implemented for a GPU cluster. However, the work reported in @cite_15 was carried out on a rather small cluster containing only 4 nodes connected with Gigabit Ethernet. The work presented here aims to scale to O(100) GPUs using a QDR InfiniBand interconnect.
|
{
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_3",
"@cite_24",
"@cite_15",
"@cite_12"
],
"mid": [
"2953175396",
"2074846139",
"",
"",
"2964130252",
"2295191825"
],
"abstract": [
"Lattice QCD calculations were one of the first applications to show the potential of GPUs in the area of high performance computing. Our interest is to find ways to effectively use GPUs for lattice calculations using the overlap operator. The large memory footprint of these codes requires the use of multiple GPUs in parallel. In this paper we show the methods we used to implement this operator efficiently. We run our codes both on a GPU cluster and a CPU cluster with similar interconnects. We find that to match performance the CPU cluster requires 20-30 times more CPU cores than GPUs.",
"Abstract Efficient algorithms for the solution of partial differential equations on parallel computers are often based on domain decomposition methods. Schwarz preconditioners combined with standard Krylov space solvers are widely used in this context, and such a combination is shown here to perform very well in the case of the Wilson–Dirac equation in lattice QCD. In particular, with respect to even-odd preconditioned solvers, the communication overhead is significantly reduced, which allows the computational work to be distributed over a large number of processors with only small parallelization losses.",
"",
"",
"Parallel GPGPU computing for lattice QCD simulations has a bottleneck in GPU-to-GPU data communication due to the lack of a direct data-exchange facility. In this work we investigate the performance of a quark solver using the restricted additive Schwarz (RAS) preconditioner on a low-cost GPU cluster. We expect that the RAS preconditioner with appropriate domain decomposition and task distribution reduces the communication bottleneck. The GPU cluster we constructed is composed of four PC boxes; two GPU cards are attached to each box, so we have eight GPU cards in total. The compute nodes are connected with rather slow but low-cost Gigabit Ethernet. We include the RAS preconditioner in the single-precision part of the mixed-precision nested-BiCGStab algorithm, and the single-precision task is distributed to the multiple GPUs. The benchmarking is done with the O(a)-improved Wilson quark on a randomly generated gauge configuration of size 32^4. We observe a factor-of-two improvement in the solver performance with the RAS preconditioner compared to that without it, and find that the improvement mainly comes from the reduction of the communication bottleneck, as we expected.",
"Graphics Processing Units (GPUs) are having a transformational effect on numerical lattice quantum chromodynamics (LQCD) calculations of importance in nuclear and particle physics. The QUDA library provides a package of mixed precision sparse matrix linear solvers for LQCD applications, supporting single GPUs based on NVIDIA's Compute Unified Device Architecture (CUDA). This library, interfaced to the QDP++ Chroma framework for LQCD calculations, is currently in production use on the \"9g\" cluster at the Jefferson Laboratory, enabling unprecedented price/performance for a range of problems in LQCD. Nevertheless, memory constraints on current GPU devices limit the problem sizes that can be tackled. In this contribution we describe the parallelization of the QUDA library onto multiple GPUs using MPI, including strategies for the overlapping of communication and computation. We report on both weak and strong scaling for up to 32 GPUs interconnected by InfiniBand, on which we sustain in excess of 4 Tflops."
]
}
|
1109.2265
|
1539963681
|
We determine conditions on q for the nonexistence of deep holes of the standard Reed-Solomon code of dimension k over F_q generated by polynomials of degree k+d. Our conditions rely on the existence of q-rational points with nonzero, pairwise-distinct coordinates of a certain family of hypersurfaces defined over F_q. We show that the hypersurfaces under consideration are invariant under the action of the symmetric group of permutations of the coordinates. This allows us to obtain critical information concerning the singular locus of these hypersurfaces, from which the existence of q-rational points is established.
|
As explained before, in @cite_15 the nonexistence of deep holes of the standard Reed--Solomon code @math is reduced to the existence of @math --rational points, namely points whose coordinates belong to @math , with nonzero, pairwise--distinct coordinates of the hypersurfaces @math defined by the family of polynomials @math of ), where @math runs through the set of polynomials @math as in ). The authors prove that all the hypersurfaces @math are absolutely irreducible. This enables them to apply the explicit version of the Lang--Weil estimate of @cite_3 in order to obtain sufficient conditions for the nonexistence of deep holes of Reed--Solomon codes. More precisely, the following result is obtained.
|
{
"cite_N": [
"@cite_15",
"@cite_3"
],
"mid": [
"1565759886",
"2022841156"
],
"abstract": [
"For generalized Reed-Solomon codes, it has been proved [7] that the problem of determining if a received word is a deep hole is co-NP-complete. The reduction relies on the fact that the evaluation set of the code can be exponential in the length of the code - a property that practical codes do not usually possess. In this paper, we first present a much simpler proof of the same result. We then consider the problem for standard Reed-Solomon codes, i.e. the evaluation set consists of all the nonzero elements in the field. We reduce the problem of identifying deep holes to deciding whether an absolutely irreducible hypersurface over a finite field contains a rational point whose coordinates are pairwise distinct and nonzero. By applying Cafure-Matera estimation of rational points on algebraic varieties, we prove that the received vector (f(α))_{α∈F_p} for the Reed-Solomon code [p-1, k]_p, k < p^{1/4-ε}, cannot be a deep hole, whenever f(x) is a polynomial of degree k+d for 1 ≤ d ≤ p^{3/13-ε}.",
"We show explicit estimates on the number of q-rational points of an F_q-definable affine absolutely irreducible variety V ⊆ F_q^n. Our estimates for a hypersurface significantly improve previous estimates of W. Schmidt and M.-D. Huang and Y.-C. Wong, while in the case of a variety our estimates improve those of S. Ghorpade and G. Lachaud in several important cases. Our proofs rely on elementary methods of effective elimination theory and suitable effective versions of the first Bertini theorem."
]
}
|
1109.2265
|
1539963681
|
We determine conditions on q for the nonexistence of deep holes of the standard Reed-Solomon code of dimension k over F_q generated by polynomials of degree k+d. Our conditions rely on the existence of q-rational points with nonzero, pairwise-distinct coordinates of a certain family of hypersurfaces defined over F_q. We show that the hypersurfaces under consideration are invariant under the action of the symmetric group of permutations of the coordinates. This allows us to obtain critical information concerning the singular locus of these hypersurfaces, from which the existence of q-rational points is established.
|
In @cite_2 the existence of deep holes is reconsidered. Using the Weil estimate for certain character sums as in @cite_10 , the authors obtain the following result.
|
{
"cite_N": [
"@cite_10",
"@cite_2"
],
"mid": [
"2050777560",
"2077712820"
],
"abstract": [
"Weil's character sum estimate is used to study the problem of constructing generators for the multiplicative group of a finite field. An application to the distribution of irreducible polynomials is given, which confirms an asymptotic version of a conjecture of Hansen-Mullen.",
"The complexity of decoding the standard Reed-Solomon code is a well known open problem in coding theory. The main problem is to compute the error distance of a received word. Using the Weil bound for character sum estimate, we show that the error distance can be determined precisely when the degree of the received word is small. As an application of our method, we give a significant improvement of the recent bound of Cheng-Murray on non-existence of deep holes (words with maximal error distance)."
]
}
|
1109.2434
|
2950836901
|
Answer set programming (ASP) is a form of declarative programming that allows to succinctly formulate and efficiently solve complex problems. An intuitive extension of this formalism is communicating ASP, in which multiple ASP programs collaborate to solve the problem at hand. However, the expressiveness of communicating ASP has not been thoroughly studied. In this paper, we present a systematic study of the additional expressiveness offered by allowing ASP programs to communicate. First, we consider a simple form of communication where programs are only allowed to ask questions to each other. For the most part, we deliberately only consider simple programs, i.e. programs for which computing the answer sets is in P. We find that the problem of deciding whether a literal is in some answer set of a communicating ASP program using simple communication is NP-hard. In other words: we move up a step in the polynomial hierarchy due to the ability of these simple ASP programs to communicate and collaborate. Second, we modify the communication mechanism to also allow us to focus on a sequence of communicating programs, where each program in the sequence may successively remove some of the remaining models. This mimics a network of leaders, where the first leader has the first say and may remove models that he or she finds unsatisfactory. Using this particular communication mechanism allows us to capture the entire polynomial hierarchy. This means, in particular, that communicating ASP could be used to solve problems that are above the second level of the polynomial hierarchy, such as some forms of abductive reasoning as well as PSPACE-complete problems such as STRIPS planning.
|
Two other important works in the area of multi-agent ASP are @cite_24 and @cite_13 . In both @cite_24 and @cite_13 a multi-agent system is developed in which multiple agents (component programs) can communicate with each other. Most importantly from the point of view of our work, both approaches use ASP and have agents that are quite expressive in their own right. Indeed, in @cite_24 each agent is an Ordered Choice Logic Program (OCLP) @cite_21 and in @cite_13 each agent uses the extended answer set semantics.
|
{
"cite_N": [
"@cite_24",
"@cite_21",
"@cite_13"
],
"mid": [
"2171432826",
"58578906",
""
],
"abstract": [
"Multi-agent systems (MAS) can take many forms depending on the characteristics of the agents populating them. Amongst the more demanding properties with respect to the design and implementation of multi-agent systems is how these agents may individually reason and communicate about their knowledge and beliefs, with a view to cooperation and collaboration. In this paper, we present a deductive reasoning multi-agent platform using an extension of answer set programming (ASP). We show that it is capable of dealing with the specification and implementation of the system's architecture, communication and the individual agent's reasoning capacities. Agents are represented as Ordered Choice Logic Programs (OCLP) as a way of modelling their knowledge and reasoning capacities, with communication between the agents regulated by uni-directional channels transporting information based on their answer sets. In the implementation of our system we combine the extensibility of the JADE framework with the flexibility of the OCT front-end to the Smodels answer set solver. The power of this approach is demonstrated by a multi-agent system reasoning about equilibria of extensive games with perfect information.",
"Ordered Choice Logic Programming (OCLP) allows for preference-based decision-making with multiple alternatives and without the burden of any form of negation. This complete absence of negation does not weaken the language as both forms (classical and as-failure) can be intuitively simulated in the language. The semantics of the language is based on the preference between alternatives, yielding both a skeptical and a credulous approach. In this paper we discuss the theoretical basis for the implementation of an OCLP front-end for answer set solvers that can compute both semantics in an efficient manner. Both the basic algorithm and the proposed optimizations can be used in general and are not tailored towards any particular answer set solver.",
""
]
}
|
1109.2434
|
2950836901
|
Answer set programming (ASP) is a form of declarative programming that allows to succinctly formulate and efficiently solve complex problems. An intuitive extension of this formalism is communicating ASP, in which multiple ASP programs collaborate to solve the problem at hand. However, the expressiveness of communicating ASP has not been thoroughly studied. In this paper, we present a systematic study of the additional expressiveness offered by allowing ASP programs to communicate. First, we consider a simple form of communication where programs are only allowed to ask questions to each other. For the most part, we deliberately only consider simple programs, i.e. programs for which computing the answer sets is in P. We find that the problem of deciding whether a literal is in some answer set of a communicating ASP program using simple communication is NP-hard. In other words: we move up a step in the polynomial hierarchy due to the ability of these simple ASP programs to communicate and collaborate. Second, we modify the communication mechanism to also allow us to focus on a sequence of communicating programs, where each program in the sequence may successively remove some of the remaining models. This mimics a network of leaders, where the first leader has the first say and may remove models that he or she finds unsatisfactory. Using this particular communication mechanism allows us to capture the entire polynomial hierarchy. This means, in particular, that communicating ASP could be used to solve problems that are above the second level of the polynomial hierarchy, such as some forms of abductive reasoning as well as PSPACE-complete problems such as STRIPS planning.
|
We also mention @cite_1 , where recursive modular non-monotonic logic programs (MLP) under the ASP semantics are considered. The main difference between MLP and our work is that our communication mechanism is parameter-less: the truth of a situated literal does not depend on parameters passed by the situated literal to the target component program. Our approach is clearly different and we cannot readily mimic the behaviour of the networks presented in @cite_1 . Our expressiveness results therefore do not directly apply to MLPs.
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"1511906506"
],
"abstract": [
"Recently, enabling modularity aspects in Answer Set Programming (ASP) has gained increasing interest to ease the composition of program parts to an overall program. In this paper, we focus on modular nonmonotonic logic programs (MLP) under the answer set semantics, whose modules may have contextually dependent input provided by other modules. Moreover, (mutually) recursive module calls are allowed. We define a model-theoretic semantics for this extended setting, show that many desired properties of ordinary logic programming generalize to our modular ASP, and determine the computational complexity of the new formalism. We investigate the relationship of modular programs to disjunctive logic programs with well-defined input output interface (DLP-functions) and show that they can be embedded into MLPs."
]
}
|
1109.1990
|
2950602671
|
Using the @math -norm to regularize the estimation of the parameter vector of a linear model leads to an unstable estimator when covariates are highly correlated. In this paper, we introduce a new penalty function which takes into account the correlation of the design matrix to stabilize the estimation. This norm, called the trace Lasso, uses the trace norm, which is a convex surrogate of the rank, of the selected covariates as the criterion of model complexity. We analyze the properties of our norm, describe an optimization algorithm based on reweighted least-squares, and illustrate the behavior of this norm on synthetic data, showing that it is more adapted to strong correlations than competing methods such as the elastic net.
|
Hence, it is natural to penalize linear models by the number of variables used by the model. Unfortunately, this criterion, sometimes denoted by @math ( @math -penalty), is not convex and solving the problem in Eq. ) is generally NP-hard @cite_17 . Thus, a convex relaxation for this problem was introduced, replacing the size of the selected subset by the @math -norm of @math . This estimator is known as the Lasso @cite_14 in the statistics community and basis pursuit @cite_18 in signal processing. It was later shown that under some assumptions, the two problems were in fact equivalent (see for example @cite_4 and references therein).
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_17"
],
"mid": [
"1986931325",
"2135046866",
"2129131372",
"2021302824"
],
"abstract": [
"The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), Matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an \"optimal\" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, in abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.",
"SUMMARY We propose a new method for estimation in linear models. The 'lasso' minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly 0 and hence gives interpretable models. Our simulation studies suggest that the lasso enjoys some of the favourable properties of both subset selection and ridge regression. It produces interpretable models like subset selection and exhibits the stability of ridge regression. There is also an interesting relationship with recent work in adaptive function estimation by Donoho and Johnstone. The lasso idea is quite general and can be applied in a variety of statistical models: extensions to generalized regression models and tree-based models are briefly described.",
"This paper considers a natural error correcting problem with real valued input/output. We wish to recover an input vector f ∈ R^n from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to recover f exactly from the data y? We prove that under suitable conditions on the coding matrix A, the input f is the unique solution to the ℓ_1-minimization problem (‖x‖_{ℓ_1} := Σ_i |x_i|) min_{g ∈ R^n} ‖y - Ag‖_{ℓ_1} provided that the support of the vector of errors is not too large, ‖e‖_{ℓ_0} := |{i : e_i ≠ 0}| ≤ ρ·m for some ρ > 0. In short, f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced in this paper improve on our earlier work. Finally, underlying the success of ℓ_1 is a crucial property we call the uniform uncertainty principle that we shall describe in detail.",
"The problem of optimally approximating a function with a linear expansion over a redundant dictionary of waveforms is NP-hard. The greedy matching pursuit algorithm and its orthogonalized variant produce suboptimal function expansions by iteratively choosing dictionary waveforms that best match the function’s structures. A matching pursuit provides a means of quickly computing compact, adaptive function approximations."
]
}
|
1109.1990
|
2950602671
|
Using the @math -norm to regularize the estimation of the parameter vector of a linear model leads to an unstable estimator when covariates are highly correlated. In this paper, we introduce a new penalty function which takes into account the correlation of the design matrix to stabilize the estimation. This norm, called the trace Lasso, uses the trace norm, which is a convex surrogate of the rank, of the selected covariates as the criterion of model complexity. We analyze the properties of our norm, describe an optimization algorithm based on reweighted least-squares, and illustrate the behavior of this norm on synthetic data, showing that it is more adapted to strong correlations than competing methods such as the elastic net.
|
When two predictors are highly correlated, the Lasso has a very unstable behavior: it may only select the variable that is the most correlated with the residual. On the other hand, the Tikhonov regularization tends to shrink coefficients of correlated variables together, leading to a very stable behavior. In order to get the best of both worlds, stability and variable selection, Zou and Hastie introduced the elastic net @cite_2 , which is the sum of the @math -norm and squared @math -norm. Unfortunately, this estimator needs two regularization parameters and is not adaptive to the precise correlation structure of the data. Some authors also proposed to use pairwise correlations between predictors to interpolate more adaptively between the @math -norm and squared @math -norm, by introducing the pairwise elastic net @cite_3 (see comparisons with our approach in ).
|
{
"cite_N": [
"@cite_3",
"@cite_2"
],
"mid": [
"2148172612",
"2122825543"
],
"abstract": [
"A new approach to regression regularization called the Pairwise Elastic Net is proposed. Like the Elastic Net, it simultaneously performs automatic variable selection and continuous shrinkage. In addition, the Pairwise Elastic Net encourages the grouping of strongly correlated predictors based on a pairwise similarity measure. We give examples of how the approach can be used to achieve the objectives of Ridge regression, the Lasso, the Elastic Net, and Group Lasso. Finally, we present a coordinate descent algorithm to solve the Pairwise Elastic Net.",
"Summary. We propose the elastic net, a new regularization and variable selection method. Real world data and a simulation study show that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation. In addition, the elastic net encourages a grouping effect, where strongly correlated predictors tend to be in or out of the model together.The elastic net is particularly useful when the number of predictors (p) is much bigger than the number of observations (n). By contrast, the lasso is not a very satisfactory variable selection method in the"
]
}
|
1109.2148
|
2164781796
|
Logical hidden Markov models (LOHMMs) upgrade traditional hidden Markov models to deal with sequences of structured symbols in the form of logical atoms, rather than flat characters. This note formally introduces LOHMMs and presents solutions to the three central inference problems for LOHMMs: evaluation, most likely hidden state sequence and parameter estimation. The resulting representation and algorithms are experimentally evaluated on problems from the domain of bioinformatics.
|
Hierarchical HMMs @cite_4 , factorial HMMs , and HMMs based on tree automata @cite_35 decompose the state variables into smaller units. In hierarchical HMMs states themselves can be HMMs, in factorial HMMs they can be factored into @math state variables which depend on one another only through the observation, and in tree-based HMMs the represented probability distributions are defined over tree structures. The key difference from LOHMMs is that these approaches do not employ the logical concept of unification. Unification is essential because it allows us to introduce abstract transitions, which do not consist of more detailed states. As our experimental evidence shows, sharing information among abstract states by means of unification can lead to more accurate model estimation. The same holds for relational Markov models (RMMs) @cite_26 , to which LOHMMs are most closely related. In RMMs, states can be of different types, with each type described by a different set of variables. The domain of each variable can be hierarchically structured. The main differences between LOHMMs and RMMs are that RMMs support neither variable binding, nor unification, nor hidden states.
|
{
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_4"
],
"mid": [
"1547553618",
"599484449",
"1636244751"
],
"abstract": [
"In the traditional setting, text categorization is formulated as a concept learning problem where each instance is a single isolated document. However, this perspective is not appropriate in the case of many digital libraries that offer as contents scanned and optically read books or magazines. In this paper, we propose a more general formulation of text categorization, allowing documents to be organized as sequences of pages. We introduce a novel hybrid system specifically designed for multi-page text documents. The architecture relies on hidden Markov models whose emissions are bag-of-words resulting from a multinomial word event model, as in the generative portion of the Naive Bayes classifier. The rationale behind our proposal is that taking into account contextual information provided by the whole page sequence can help disambiguation and improves single page classification accuracy. Our results on two datasets of scanned journals from the Making of America collection confirm the importance of using whole page sequences. The empirical evaluation indicates that the error rate (as obtained by running the Naive Bayes classifier on isolated pages) can be significantly reduced if contextual information is incorporated.",
"",
"We introduce, analyze and demonstrate a recursive hierarchical generalization of the widely used hidden Markov models, which we name Hierarchical Hidden Markov Models (HHMM). Our model is motivated by the complex multi-scale structure which appears in many natural sequences, particularly in language, handwriting and speech. We seek a systematic unsupervised approach to the modeling of such structures. By extending the standard Baum-Welch (forward-backward) algorithm, we derive an efficient procedure for estimating the model parameters from unlabeled data. We then use the trained model for automatic hierarchical parsing of observation sequences. We describe two applications of our model and its parameter estimation procedure. In the first application we show how to construct hierarchical models of natural English text. In these models different levels of the hierarchy correspond to structures on different length scales in the text. In the second application we demonstrate how HHMMs can be used to automatically identify repeated strokes that represent combination of letters in cursive handwriting."
]
}
|
1109.2148
|
2164781796
|
Logical hidden Markov models (LOHMMs) upgrade traditional hidden Markov models to deal with sequences of structured symbols in the form of logical atoms, rather than flat characters. This note formally introduces LOHMMs and presents solutions to the three central inference problems for LOHMMs: evaluation, most likely hidden state sequence and parameter estimation. The resulting representation and algorithms are experimentally evaluated on problems from the domain of bioinformatics.
|
An alternative to learning PCFGs from strings only is to learn from more structured data such as skeletons, which are derivation trees with the nonterminal labels removed @cite_15 . Skeletons are exactly the set of trees accepted by skeletal tree automata (STA). Informally, an STA, when given a tree as input, processes the tree bottom up, assigning a state to each node based on the states of that node's children. The STA accepts a tree iff it assigns a final state to the root of the tree. Due to this automata-based characterization of the skeletons of derivation trees, the problem of learning (P)CFGs can be reduced to the problem of learning an STA. In particular, STA techniques have been adapted to learning tree grammars and (P)CFGs efficiently.
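The bottom-up acceptance procedure of an STA can be sketched in a few lines. This is a minimal illustration of a deterministic STA; the encoding of skeletons as nested tuples (internal nodes are unlabeled) and the helper name `run_sta` are our assumptions, not from @cite_15:

```python
def run_sta(tree, leaf_states, transitions, final_states):
    """Run a deterministic skeletal tree automaton bottom-up.

    tree: a leaf symbol (str) or a tuple of subtrees; internal nodes
          carry no label, as in skeletons.
    leaf_states: dict mapping leaf symbol -> state.
    transitions: dict mapping a tuple of child states -> state.
    Returns True iff the state assigned to the root is final.
    """
    def state(t):
        if isinstance(t, str):          # leaf: look up its symbol
            return leaf_states[t]
        # internal node: state depends only on the children's states
        return transitions[tuple(state(c) for c in t)]
    return state(tree) in final_states

# Toy STA accepting skeletons of binary bracketings over the terminal 'a'.
leaf_states = {'a': 'qa'}
transitions = {('qa', 'qa'): 'qa'}
final_states = {'qa'}
print(run_sta(('a', ('a', 'a')), leaf_states, transitions, final_states))  # True
```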
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"2022460695"
],
"abstract": [
"In this paper we explore the idea of characterizing sentences by the shapes of their structural descriptions only; for example, in the case of context free grammars, by the shapes of the derivation trees only. Such structural descriptions will be called skeletons. A skeleton exhibits all of the grouping structure (phrase structure) of the sentence without naming the syntactic categories used in the description. The inclusion of syntactic categories as variables is primarily a question of economy of description. Every context free grammar is strongly equivalent to a skeletal grammar, in a sense made precise in the paper. Besides clarifying the role of skeletons in mathematical linguistics, we show that skeletal automata provide a characterization of local sets, remedying a “defect” in the usual tree automata theory. We extend the method of skeletal structural descriptions to other forms of tree describing systems. We also suggest a theoretical basis for grammatical inference based on grouping structure only."
]
}
|
1109.2148
|
2164781796
|
Logical hidden Markov models (LOHMMs) upgrade traditional hidden Markov models to deal with sequences of structured symbols in the form of logical atoms, rather than flat characters. This note formally introduces LOHMMs and presents solutions to the three central inference problems for LOHMMs: evaluation, most likely hidden state sequence and parameter estimation. The resulting representation and algorithms are experimentally evaluated on problems from the domain of bioinformatics.
|
In the second type of approaches, most attention has been devoted to developing highly expressive formalisms, such as PCUP @cite_38 , PCLP @cite_28 , SLPs @cite_39 , PLPs @cite_30 , RBNs @cite_1 , PRMs @cite_9 , PRISM @cite_2 , BLPs @cite_10 @cite_7 , and DPRMs @cite_5 . LOHMMs can be seen as an attempt towards downgrading such highly expressive frameworks. Indeed, applying the main idea underlying LOHMMs to non-regular probabilistic grammars, i.e., replacing flat symbols with atoms, yields -- in principle -- stochastic logic programs @cite_39 . As a consequence, LOHMMs represent an interesting position on the expressiveness scale. Whereas they retain most of the essential logical features of the more expressive formalisms, they seem easier to understand, adapt, and learn. This is akin to many contemporary considerations in inductive logic programming and multi-relational data mining.
|
{
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_7",
"@cite_28",
"@cite_9",
"@cite_1",
"@cite_39",
"@cite_2",
"@cite_5",
"@cite_10"
],
"mid": [
"2000805332",
"173692888",
"",
"1515383272",
"2803681287",
"1607205948",
"2583189786",
"2101335378",
"330951725",
"1791364091"
],
"abstract": [
"We define a language for representing context-sensitive probabilistic knowledge. A knowledge base consists of a set of universally quantified probability sentences that include context constraints, which allow inference to be focused on only the relevant portions of the probabilistic knowledge. We provide a declarative semantics for our language. We present a query answering procedure that takes a query Q and a set of evidence E and constructs a Bayesian network to compute P(Q¦E). The posterior probability is then computed using any of a number of Bayesian network inference algorithms. We use the declarative semantics to prove the query procedure sound and complete. We use concepts from logic programming to justify our approach.",
"",
"",
"We present a probabilistic model for constraint-based grammars and a method for estimating the parameters of such models from incomplete, i.e., unparsed data. Whereas methods exist to estimate the parameters of probabilistic context-free grammars from incomplete data (Baum 1970), so far for probabilistic grammars involving context-dependencies only parameter estimation techniques from complete, i.e., fully parsed data have been presented (Abney 1997). However, complete-data estimation requires labor-intensive, error-prone, and grammar-specific hand-annotating of large language corpora. We present a log-linear probability model for constraint logic programming, and a general algorithm to estimate the parameters of such models from incomplete data by extending the estimation algorithm of Della-Pietra, Della-Pietra, and Lafferty (1997) to incomplete data settings.",
"A large portion of real-world data is stored in commercial relational database systems. In contrast, most statistical learning methods work only with \"flat\" data representations. Thus, to apply these methods, we are forced to convert our data into a flat form, thereby losing much of the relational structure present in our database. This paper builds on the recent work on probabilistic relational models (PRMs), and describes how to learn them from databases. PRMs allow the properties of an object to depend probabilistically both on other properties of that object and on properties of related objects. Although PRMs are significantly more expressive than standard models, such as Bayesian networks, we show how to extend well-known statistical methods for learning Bayesian networks to learn these models. We describe both parameter estimation and structure learning -- the automatic induction of the dependency structure in a model. Moreover, we show how the learning procedure can exploit standard database retrieval techniques for efficient learning from large datasets. We present experimental results on both real and synthetic relational databases.",
"A new method is developed to represent probabilistic relations on multiple random events. Where previously knowledge bases containing probabilistic rules were used for this purpose, here a probability distribution over the relations is directly represented by a Bayesian network. By using a powerful way of specifying conditional probability distributions in these networks, the resulting formalism is more expressive than the previous ones. Particularly, it provides for constraints on equalities of events, and it allows to define complex, nested combination functions.",
"Probabilistic inductive logic programming, sometimes also called statistical relational learning, addresses one of the central questions of artificial intelligence: the integration of probabilistic reasoning with first order logic representations and machine learning. A rich variety of different formalisms and learning techniques have been developed. In the present paper, we start from inductive logic programming and sketch how it can be extended with probabilistic methods. More precisely, we outline three classical settings for inductive logic programming, namely learning from entailment, learning from interpretations, and learning from proofs or traces, and show how they can be used to learn different types of probabilistic representations.",
"We propose a logical mathematical framework for statistical parameter learning of parameterized logic programs, i.e. definite clause programs containing probabilistic facts with a parameterized distribution. It extends the traditional least Herbrand model semantics in logic programming to distribution semantics, possible world semantics with a probability distribution which is unconditionally applicable to arbitrary logic programs including ones for HMMs, PCFGs and Bayesian networks. We also propose a new EM algorithm, the graphical EM algorithm, that runs for a class of parameterized logic programs representing sequential decision processes where each decision is exclusive and independent. It runs on a new data structure called support graphs describing the logical relationship between observations and their explanations, and learns parameters by computing inside and outside probability generalized for logic programs. The complexity analysis shows that when combined with OLDT search for all explanations for observations, the graphical EM algorithm, despite its generality, has the same time complexity as existing EM algorithms, i.e. the Baum-Welch algorithm for HMMs, the Inside-Outside algorithm for PCFGs, and the one for singly connected Bayesian networks that have been developed independently in each research field. Learning experiments with PCFGs using two corpora of moderate size indicate that the graphical EM algorithm can significantly outperform the Inside-Outside algorithm.",
"Intelligent agents must function in an uncertain world, containing multiple objects and relations that change over time. Unfortunately, no representation is currently available that can handle all these issues, while allowing for principled and efficient inference. This paper addresses this need by introducing dynamic probabilistic relational models (DPRMs). DPRMs are an extension of dynamic Bayesian networks (DBNs) where each time slice (and its dependences on previous slices) is represented by a probabilistic relational model (PRM). Particle filtering, the standard method for inference in DBNs, has severe limitations when applied to DPRMs, but we are able to greatly improve its performance through a form of relational Rao-Blackwellisation. Further gains in efficiency arc obtained through the use of abstraction trees, a novel data structure. We successfully apply DPRMs to execution monitoring and fault diagnosis of an assembly plan, in which a complex product is gradually constructed from subparts.",
"Recently, new representation languages that integrate first order logic with Bayesian networks have been developed. Bayesian logic programs are one of these languages. In this paper, we present results on combining Inductive Logic Programming (ILP) with Bayesian networks to learn both the qualitative and the quantitative components of Bayesian logic programs. More precisely, we show how to combine the ILP setting learning from interpretations with score-based techniques for learning Bayesian networks. Thus, the paper positively answers Koller and Pfeffer's question, whether techniques from ILP could help to learn the logical component of first order probabilistic models."
]
}
|
1109.2034
|
2950940477
|
Recurrent neural networks (RNNs) in combination with a pooling operator and the neighbourhood components analysis (NCA) objective function are able to detect the characterizing dynamics of sequences and embed them into a fixed-length vector space of arbitrary dimensionality. Subsequently, the resulting features are meaningful and can be used for visualization or nearest neighbour classification in linear time. This kind of metric learning for sequential data enables the use of algorithms tailored towards fixed length vector spaces such as R^n.
|
A fully unsupervised approach is to use the parameters estimated by a system identification method (e.g., a linear dynamical system) as features. Recent work includes @cite_8 , in which a system based on complex-valued linear dynamical models successfully clusters motion capture data.
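A minimal sketch of this idea, under a far simpler model than the CLDS of @cite_8: fit an AR(1) model to each sequence by least squares and use its coefficients as a fixed-length feature vector (the helper name `ar1_features` is hypothetical), assuming NumPy:

```python
import numpy as np

def ar1_features(seq):
    """Fit x[t] ~ a * x[t-1] + b by least squares and use (a, b) as a
    fixed-length feature vector for the whole sequence."""
    x = np.asarray(seq, dtype=float)
    X = np.column_stack([x[:-1], np.ones(len(x) - 1)])
    coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
    return coef  # (a, b)

# Two decaying sequences get similar features; a growing one differs.
decay1 = [1.0 * 0.5**t for t in range(20)]
decay2 = [2.0 * 0.5**t for t in range(20)]
grow   = [1.0 * 1.1**t for t in range(20)]
f1, f2, f3 = map(ar1_features, (decay1, decay2, grow))
print(np.allclose(f1[0], 0.5), np.allclose(f3[0], 1.1))  # True True
```

The resulting vectors live in a fixed-dimensional space, so nearest-neighbour classification or clustering applies directly, which is the motivation given in the surrounding text.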
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"2181643798"
],
"abstract": [
"Given a motion capture sequence, how to identify the category of the motion? Classifying human motions is a critical task in motion editing and synthesizing, for which manual labeling is clearly inefficient for large databases. Here we study the general problem of time series clustering. We propose a novel method of clustering time series that can (a) learn joint temporal dynamics in the data; (b) handle time lags; and (c) produce interpretable features. We achieve this by developing complex-valued linear dynamical systems (CLDS), which include real-valued Kalman filters as a special case; our advantage is that the transition matrix is simpler (just diagonal), and the transmission one easier to interpret. We then present Complex-Fit, a novel EM algorithm to learn the parameters for the general model and its special case for clustering. Our approach produces significant improvement in clustering quality, 1.5 to 5 times better than well-known competitors on real motion capture sequences."
]
}
|
1109.1021
|
1998360044
|
Collaborative spectrum sensing is vulnerable to data falsification attacks, where malicious secondary users (attackers) submit manipulated sensing reports to mislead the fusion center's decision on spectrum occupancy. This paper considers a challenging attack scenario, where multiple attackers cooperatively maximize their aggregate spectrum utilization. Without attack-prevention mechanisms, we show that honest secondary users (SUs) are unable to opportunistically transmit over the licensed spectrum and may even get penalized for collisions caused by attackers. To prevent such attacks, we propose two attack-prevention mechanisms with direct and indirect punishments. Our key idea is to identify collisions with the primary user (PU) that should not happen if all SUs follow the fusion center's decision. Unlike prior work, the proposed simple mechanisms do not require the fusion center to identify and exclude attackers. The direct punishment can effectively prevent all attackers from behaving maliciously. The indirect punishment is easier to implement and can prevent attacks when the attackers care enough about their long-term reward.
|
There has been a growing interest in attack-resilient collaborative spectrum sensing in CRNs (e.g., @cite_16 @cite_9 @cite_3 @cite_17 @cite_15 ). Liu @cite_12 studied the problem of detecting unauthorized usage of a primary licensed spectrum. In this work, the path-loss effect is exploited to detect anomalous spectrum usage, and a machine-learning technique is proposed to solve the general case. Chen @cite_9 focused on a passive approach with robust signal processing, and investigated the robustness of various data-fusion techniques against sensing-targeted attacks. Kaligineedi @cite_16 presented outlier-detection schemes to identify abnormal sensing reports. Min @cite_3 proposed a mechanism for detecting and filtering out abnormal sensing reports by exploiting shadow-fading correlation in received primary signal strengths among nearby SUs. Fatemieh @cite_15 used outlier measurements inside each SU cell and collaboration among neighboring cells to identify cells with a significant number of malicious nodes. Li @cite_17 detected possible abnormalities based on SUs' sensing-report histories.
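As a toy illustration of the outlier-filtering idea that several of the cited schemes share (this is not the mechanism of any specific cited work; the function name, threshold, and decision rule are our assumptions), a fusion center might discard reports far from the median before fusing:

```python
import statistics

def robust_fusion(reports, max_dev=0.3):
    """Toy attack-tolerant fusion: discard sensing reports deviating
    from the median by more than max_dev, average the rest, and
    declare the channel busy iff the average exceeds 0.5.

    reports: per-SU occupancy estimates in [0, 1].
    """
    med = statistics.median(reports)
    kept = [r for r in reports if abs(r - med) <= max_dev]
    return sum(kept) / len(kept) > 0.5

# Three honest SUs report a free channel; one attacker reports it busy.
print(robust_fusion([0.1, 0.15, 0.2, 0.95]))  # False: attacker filtered out
```

With a plain average and no filtering, the same attacker report would pull the fused estimate upward, which is the vulnerability the paragraph describes.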
|
{
"cite_N": [
"@cite_9",
"@cite_3",
"@cite_15",
"@cite_16",
"@cite_12",
"@cite_17"
],
"mid": [
"2107069798",
"2096368748",
"2078514270",
"2135889430",
"2100139208",
"2164710023"
],
"abstract": [
"Distributed spectrum sensing (DSS) enables a Cognitive Radio (CR) network to reliably detect licensed users and avoid causing interference to licensed communications. The data fusion technique is a key component of DSS. We discuss the Byzantine failure problem in the context of data fusion, which may be caused by either malfunctioning sensing terminals or Spectrum Sensing Data Falsification (SSDF) attacks. In either case, incorrect spectrum sensing data will be reported to a data collector which can lead to the distortion of data fusion outputs. We investigate various data fusion techniques, focusing on their robustness against Byzantine failures. In contrast to existing data fusion techniques that use a fixed number of samples, we propose a new technique that uses a variable number of samples. The proposed technique, which we call Weighted Sequential Probability Ratio Test (WSPRT), introduces a reputation-based mechanism to the Sequential Probability Ratio Test (SPRT). We evaluate WSPRT by comparing it with a variety of data fusion techniques under various network operating conditions. Our simulation results indicate that WSPRT is the most robust against the Byzantine failure problem among the data fusion techniques that were considered.",
"Accurate sensing of the spectrum condition is of crucial importance to the mitigation of the spectrum scarcity problem in dynamic spectrum access (DSA) networks. Specifically, distributed sensing has been recognized as a viable means to enhance the incumbent signal detection by exploiting the diversity of sensors. However, it is challenging to make such distributed sensing secure due mainly to the unique features of DSA networks—openness of a low-layer protocol stack in SDR devices and non-existence of communications between primary and secondary devices. To address this challenge, we propose attack-tolerant distributed sensing protocol (ADSP), under which sensors in close proximity are grouped into a cluster, and sensors in a cluster cooperatively safeguard distributed sensing. The heart of ADSP is a novel shadow fading correlation-based filter tailored to anomaly detection, by which the fusion center pre-filters abnormal sensor reports via cross-validation. By realizing this correlation filter, ADSP minimizes the impact of an attack on the performance of distributed sensing, while incurring minimal processing and communications overheads. The efficacy of our scheme is validated on a realistic two-dimensional shadow-fading field, which accurately approximates real-world shadowing environments. Our extensive simulation-based evaluation shows that ADSP significantly reduces the impact of attacks on incumbent detection performance.",
"Collaborative Sensing is an important enabling technique for realizing opportunistic spectrum access in white space (cognitive radio) networks. We consider the security ramifications of crowdsourcing of spectrum sensing in presence of malicious users that report false measurements. We propose viewing the area of interest as a grid of square cells and using it to identify and disregard false measurements. The proposed mechanism is based on identifying outlier measurements inside each cell, as well as corroboration among neighboring cells in a hierarchical structure to identify cells with significant number of malicious nodes. We provide a framework for taking into consideration inherent uncertainties, such as loss due to distance and shadowing, to reduce the likelihood of inaccurate classification of legitimate measurements as outliers. We use simulations to evaluate the effectiveness of the proposed approach against attackers with varying degrees of sophistication. The results show that depending on the attacker-type and location parameters, in the worst case we can nullify the effect of up to 41% of attacker nodes in a particular region. This figure is as high as 100% for a large subset of scenarios.",
"The most important task for a cognitive radio (CR) system is to identify the primary licensed users over a wide range of spectrum. Cooperation among spectrum sensing devices has been shown to offer various benefits including decrease in sensitivity requirements of the individual sensing devices. However, it has been shown in the literature that the performance of cooperative sensing schemes can be severely degraded due to presence of malicious users sending false sensing data. In this paper, we present techniques to identify such malicious users and mitigate their harmful effect on the performance of the cooperative sensing system.",
"Dynamic spectrum access has been proposed as a means to share scarce radio resources, and requires devices to follow protocols that use resources in a proper, disciplined manner. For a cognitive radio network to achieve this goal, spectrum policies and the ability to enforce them are necessary. Detection of an unauthorized (anomalous) usage is one of the critical issues in spectrum etiquette enforcement. In this paper, we present a network structure for dynamic spectrum access and formulate the anomalous usage detection problem using statistical significance testing. The detection problem is classified into two subproblems. For the case where no authorized signal is present, we describe the existing cooperative sensing schemes and investigate the impact of signal path loss on their performance. For the case where an authorized signal is present, we propose three methods that detect anomalous transmissions by making use of the characteristics of radio propagation. Analytical models are formulated for two special cases and, due to the intractability of the general problem, we present an algorithm using machine learning techniques to solve the general case. Our simulation results show that our approaches can effectively detect unauthorized spectrum usage with high detection rate and low false positive rate.",
"Collaborative spectrum sensing is subject to the attack of malicious secondary user(s), which may send false reports. Therefore, it is necessary to detect potential attacker(s) and then exclude the attacker's report for spectrum sensing. Many existing attacker-detection schemes are based on the knowledge of the attacker's strategy and thus apply the Bayesian attacker detection. However, in practical cognitive radio systems the data fusion center typically does not know the attacker's strategy. To alleviate the problem of the unknown strategy of attacker(s), an abnormality-detection approach, based on the abnormality detection in data mining, is proposed. The performance of the attacker detection in the single-attacker scenario is analyzed explicitly. For the case in which the attacker does not know the reports of honest secondary users (called independent attack), it is shown that the attacker can always be detected as the number of spectrum sensing rounds tends to infinity. For the case in which the attacker knows all the reports of other secondary users, based on which the attacker sends its report (called dependent attack), an approach for the attacker to perfectly avoid being detected is found, provided that the attacker has perfect information about the miss-detection and false-alarm probabilities. This motivates cognitive radio networks to protect the reports of secondary users. The performance of attacker detection in the general case of multiple attackers is demonstrated using numerical simulations."
]
}
|
1109.1552
|
2950581827
|
The problem of opportunistic spectrum access in cognitive radio networks has been recently formulated as a non-Bayesian restless multi-armed bandit problem. In this problem, there are N arms (corresponding to channels) and one player (corresponding to a secondary user). The state of each arm evolves as a finite-state Markov chain with unknown parameters. At each time slot, the player can select K < N arms to play and receives state-dependent rewards (corresponding to the throughput obtained given the activity of primary users). The objective is to maximize the expected total rewards (i.e., total throughput) obtained over multiple plays. The performance of an algorithm for such a multi-armed bandit problem is measured in terms of regret, defined as the difference in expected reward compared to a model-aware genie who always plays the best K arms. In this paper, we propose a new continuous exploration and exploitation (CEE) algorithm for this problem. When no information is available about the dynamics of the arms, CEE is the first algorithm to guarantee near-logarithmic regret uniformly over time. When some bounds corresponding to the stationary state distributions and the state-dependent rewards are known, we show that CEE can be easily modified to achieve logarithmic regret over time. In contrast, prior algorithms require additional information concerning bounds on the second eigenvalues of the transition matrices in order to guarantee logarithmic regret. Finally, we show through numerical simulations that CEE is more efficient than prior algorithms.
|
One important variant of the classic multi-armed bandit problem is the Bayesian MAB. In this case, probabilistic knowledge about the problem and system is required. Gittins and Jones presented a simple approach for the rested bandit problem, in which one arm is activated at each time and only the activated arm changes state as a known Markov process @cite_1 . The optimal policy is to play the arm with the highest Gittins index. The restless bandit problem was posed by Whittle in 1988 @cite_7 , in which all the arms can change state. The optimal solution for this problem has been shown to be PSPACE-hard by Papadimitriou and Tsitsiklis @cite_6 . Whittle proposed an index policy which is optimal under certain conditions @cite_10 . This policy can offer near-optimal performance numerically; however, its existence and optimality are not guaranteed in general. The restless bandit problem has no general solution, though it may be solved in special cases. For instance, when each channel is modeled as an identical two-state Markov chain, the myopic policy is provably optimal if the number of channels is no more than three or the channels are positively correlated @cite_3 @cite_8 .
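The myopic policy for identical two-state (Gilbert-Elliot) channels admits a short sketch: propagate each channel's belief of being "good" through the Markov transition, then sense the channel with the largest updated belief. The function name and the particular parameter values below are illustrative, not from the cited works:

```python
def myopic_select(beliefs, p11, p01):
    """One step of the myopic policy for identical two-state
    (Gilbert-Elliot) channels.

    beliefs: current probability that each channel is 'good'.
    p11: P(good -> good), p01: P(bad -> good).
    Returns the index of the channel to sense and the updated beliefs.
    """
    # Belief update through the Markov transition of each channel.
    updated = [w * p11 + (1.0 - w) * p01 for w in beliefs]
    # Myopic choice: maximize the immediate one-step expected reward.
    best = max(range(len(updated)), key=lambda i: updated[i])
    return best, updated

# Positively correlated channels (p11 > p01), where myopic is known optimal.
best, updated = myopic_select([0.8, 0.3, 0.5], p11=0.9, p01=0.2)
print(best)  # 0
```

When p11 > p01 the update is monotone in the belief, so the policy reduces to always sensing the channel currently believed most likely to be good, consistent with the round-robin-like structure noted in the cited abstracts.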
|
{
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_3",
"@cite_6",
"@cite_10"
],
"mid": [
"2056921512",
"2029199203",
"2141515329",
"2171671264",
"2105556121",
""
],
"abstract": [
"We consider a population of n projects which in general continue to evolve whether in operation or not (although by different rules). It is desired to choose the projects in operation at each instant of time so as to maximise the expected rate of reward, under a constraint upon the expected number of projects in operation. The Lagrange multiplier associated with this constraint defines an index which reduces to the Gittins index when projects not being operated are static. If one is constrained to operate m projects exactly then arguments are advanced to support the conjecture that, for m and n large in constant ratio, the policy of operating the m projects of largest current index is nearly optimal. The index is evaluated for some particular projects.",
"This paper considers opportunistic communication over multiple channels where the state (“good” or “bad”) of each channel evolves as independent and identically distributed (i.i.d.) Markov processes. A user, with limited channel sensing capability, chooses one channel to sense and decides whether to use the channel (based on the sensing result) in each time slot. A reward is obtained whenever the user senses and accesses a “good” channel. The objective is to design a channel selection policy that maximizes the expected total (discounted or average) reward accrued over a finite or infinite horizon. This problem can be cast as a partially observed Markov decision process (POMDP) or a restless multiarmed bandit process, to which optimal solutions are often intractable. This paper shows that a myopic policy that maximizes the immediate one-step reward is optimal when the state transitions are positively correlated over time. When the state transitions are negatively correlated, we show that the same policy is optimal when the number of channels is limited to two or three, while presenting a counterexample for the case of four channels. This result finds applications in opportunistic transmission scheduling in a fading environment, cognitive radio networks for spectrum overlay, and resource-constrained jamming and antijamming.",
"In this paper, we consider a class of restless multiarmed bandit processes (RMABs) that arises in dynamic multichannel access, user server scheduling, and optimal activation in multiagent systems. For this class of RMABs, we establish the indexability and obtain Whittle index in closed form for both discounted and average reward criteria. These results lead to a direct implementation of Whittle index policy with remarkably low complexity. When arms are stochastically identical, we show that Whittle index policy is optimal under certain conditions. Furthermore, it has a semiuniversal structure that obviates the need to know the Markov transition probabilities. The optimality and the semiuniversal structure result from the equivalence between Whittle index policy and the myopic policy established in this work. For nonidentical arms, we develop efficient algorithms for computing a performance upper bound given by Lagrangian relaxation. The tightness of the upper bound and the near-optimal performance of Whittle index policy are illustrated with simulation examples.",
"We consider a multi-channel opportunistic communication system where the states of these channels evolve as independent and statistically identical Markov chains (the Gilbert-Elliot channel model). A user chooses one channel to sense and access in each slot and collects a reward determined by the state of the chosen channel. The problem is to design a sensing policy for channel selection to maximize the average reward, which can be formulated as a multi-arm restless bandit process. In this paper, we study the structure, optimality, and performance of the myopic sensing policy. We show that the myopic sensing policy has a simple robust structure that reduces channel selection to a round-robin procedure and obviates the need for knowing the channel transition probabilities. The optimality of this simple policy is established for the two-channel case and conjectured for the general case based on numerical results. The performance of the myopic sensing policy is analyzed, which, based on the optimality of myopic sensing, characterizes the maximum throughput of a multi-channel opportunistic communication system and its scaling behavior with respect to the number of channels. These results apply to cognitive radio networks, opportunistic transmission in fading environments, downlink scheduling in centralized networks, and resource-constrained jamming and anti-jamming.",
"We show that several well-known optimization problems related to the optimal control of queues are provably intractable, independently of any unproven conjecture such as P ≠ NP. In particular, we show that several versions of the problem of optimally controlling a simple network of queues with simple arrival and service distributions and multiple customer classes is complete for exponential time. This is perhaps the first such intractability result for a well-known optimization problem. We also show that the restless bandit problem (the generalization of the multi-armed bandit problem to the case in which the unselected processes are not quiescent) is complete for polynomial space.",
""
]
}
|
1109.1552
|
2950581827
|
The problem of opportunistic spectrum access in cognitive radio networks has been recently formulated as a non-Bayesian restless multi-armed bandit problem. In this problem, there are N arms (corresponding to channels) and one player (corresponding to a secondary user). The state of each arm evolves as a finite-state Markov chain with unknown parameters. At each time slot, the player can select K < N arms to play and receives state-dependent rewards (corresponding to the throughput obtained given the activity of primary users). The objective is to maximize the expected total rewards (i.e., total throughput) obtained over multiple plays. The performance of an algorithm for such a multi-armed bandit problem is measured in terms of regret, defined as the difference in expected reward compared to a model-aware genie who always plays the best K arms. In this paper, we propose a new continuous exploration and exploitation (CEE) algorithm for this problem. When no information is available about the dynamics of the arms, CEE is the first algorithm to guarantee near-logarithmic regret uniformly over time. When some bounds corresponding to the stationary state distributions and the state-dependent rewards are known, we show that CEE can be easily modified to achieve logarithmic regret over time. In contrast, prior algorithms require additional information concerning bounds on the second eigenvalues of the transition matrices in order to guarantee logarithmic regret. Finally, we show through numerical simulations that CEE is more efficient than prior algorithms.
|
There have been a few recent attempts to solve the restless multi-armed bandit problem under unknown models. In @cite_2 , Tekin and Liu use a weaker definition of regret and propose a policy (RCA) that achieves logarithmic regret when certain knowledge about the system is available. However, the algorithm exploits only part of the observed data and leaves room for performance improvement. In @cite_0 , Haoyang Liu proposed a policy, referred to as RUCB, that achieves logarithmic regret over time when certain system parameters are known. The regret they adopt is the same as in @cite_2 . They also extend the RUCB policy to achieve near-logarithmic regret over time when no knowledge about the system is available. Conclusions on multi-arm selections are given in @cite_11 . However, they only give the upper bound on the regret at the end of a certain time point referred to as . When no information about the system is known, their analysis gives the upper bound on the regret over time only asymptotically, not uniformly.
|
{
"cite_N": [
"@cite_0",
"@cite_11",
"@cite_2"
],
"mid": [
"2148250692",
"2157146750",
""
],
"abstract": [
"We consider the restless multi-armed bandit (RMAB) problem with unknown dynamics. At each time, a player chooses K out of N (N > K) arms to play. The state of each arm determines the reward when the arm is played and transits according to Markovian rules no matter the arm is engaged or passive. The Markovian dynamics of the arms are unknown to the player. The objective is to maximize the long-term expected reward by designing an optimal arm selection policy. The performance of a policy is measured by regret, defined as the reward loss with respect to the case where the player knows which K arms are the most rewarding and always plays these K best arms. We construct a policy, referred to as Restless Upper Confidence Bound (RUCB), that achieves a regret with logarithmic order of time when an arbitrary nontrivial bound on certain system parameters is known. When no knowledge about the system is available, we extend the RUCB policy to achieve a regret arbitrarily close to the logarithmic order. In both cases, the system achieves the maximum average reward offered by the K best arms. Potential applications of these results include cognitive radio networks, opportunistic communications in unknown fading environments, and financial investment.",
"We consider decentralized restless multi-armed bandit problems with unknown dynamics and multiple players. The reward state of each arm transits according to an unknown Markovian rule when it is played and evolves according to an arbitrary unknown random process when it is passive. Players activating the same arm at the same time collide and suffer from reward loss. The objective is to maximize the long-term reward by designing a decentralized arm selection policy to address unknown reward models and collisions among players. A decentralized policy is constructed that achieves a regret with logarithmic order. The result finds applications in communication networks, financial investment, and industrial engineering.",
""
]
}
|
1109.1552
|
2950581827
|
The problem of opportunistic spectrum access in cognitive radio networks has been recently formulated as a non-Bayesian restless multi-armed bandit problem. In this problem, there are N arms (corresponding to channels) and one player (corresponding to a secondary user). The state of each arm evolves as a finite-state Markov chain with unknown parameters. At each time slot, the player can select K < N arms to play and receives state-dependent rewards (corresponding to the throughput obtained given the activity of primary users). The objective is to maximize the expected total rewards (i.e., total throughput) obtained over multiple plays. The performance of an algorithm for such a multi-armed bandit problem is measured in terms of regret, defined as the difference in expected reward compared to a model-aware genie who always plays the best K arms. In this paper, we propose a new continuous exploration and exploitation (CEE) algorithm for this problem. When no information is available about the dynamics of the arms, CEE is the first algorithm to guarantee near-logarithmic regret uniformly over time. When some bounds corresponding to the stationary state distributions and the state-dependent rewards are known, we show that CEE can be easily modified to achieve logarithmic regret over time. In contrast, prior algorithms require additional information concerning bounds on the second eigenvalues of the transition matrices in order to guarantee logarithmic regret. Finally, we show through numerical simulations that CEE is more efficient than prior algorithms.
|
In our previous work @cite_4 , we adopted a stronger definition of regret, defined as the reward loss with respect to the optimal policy. Our policy achieves near-logarithmic regret without knowledge of the system. It applies to special cases of the RMAB, in particular the same scenario as in @cite_3 and @cite_8 .
|
{
"cite_N": [
"@cite_4",
"@cite_3",
"@cite_8"
],
"mid": [
"2962764550",
"2171671264",
"2029199203"
],
"abstract": [
"In the classic Bayesian restless multi-armed bandit (RMAB) problem, there are N arms, with rewards on all arms evolving at each time as Markov chains with known parameters. A player seeks to activate K ≥ 1 arms at each time in order to maximize the expected total reward obtained over multiple plays. RMAB is a challenging problem that is known to be PSPACE-hard in general. We consider in this work the even harder non-Bayesian RMAB, in which the parameters of the Markov chain are assumed to be unknown a priori. We develop an original approach to this problem that is applicable when the corresponding Bayesian problem has the structure that, depending on the known parameter values, the optimal solution is one of a prescribed finite set of policies. In such settings, we propose to learn the optimal policy for the non-Bayesian RMAB by employing a suitable meta-policy which treats each policy from this finite set as an arm in a different non-Bayesian multi-armed bandit problem for which a single-arm selection policy is optimal. We demonstrate this approach by developing a novel sensing policy for opportunistic spectrum access over unknown dynamic channels. We prove that our policy achieves near-logarithmic regret (the difference in expected reward compared to a model-aware genie), which leads to the same average reward that can be achieved by the optimal policy under a known model. This is the first such result in the literature for a non-Bayesian RMAB.",
"We consider a multi-channel opportunistic communication system where the states of these channels evolve as independent and statistically identical Markov chains (the Gilbert-Elliot channel model). A user chooses one channel to sense and access in each slot and collects a reward determined by the state of the chosen channel. The problem is to design a sensing policy for channel selection to maximize the average reward, which can be formulated as a multi-arm restless bandit process. In this paper, we study the structure, optimality, and performance of the myopic sensing policy. We show that the myopic sensing policy has a simple robust structure that reduces channel selection to a round-robin procedure and obviates the need for knowing the channel transition probabilities. The optimality of this simple policy is established for the two-channel case and conjectured for the general case based on numerical results. The performance of the myopic sensing policy is analyzed, which, based on the optimality of myopic sensing, characterizes the maximum throughput of a multi-channel opportunistic communication system and its scaling behavior with respect to the number of channels. These results apply to cognitive radio networks, opportunistic transmission in fading environments, downlink scheduling in centralized networks, and resource-constrained jamming and anti-jamming.",
"This paper considers opportunistic communication over multiple channels where the state (“good” or “bad”) of each channel evolves as independent and identically distributed (i.i.d.) Markov processes. A user, with limited channel sensing capability, chooses one channel to sense and decides whether to use the channel (based on the sensing result) in each time slot. A reward is obtained whenever the user senses and accesses a “good” channel. The objective is to design a channel selection policy that maximizes the expected total (discounted or average) reward accrued over a finite or infinite horizon. This problem can be cast as a partially observed Markov decision process (POMDP) or a restless multiarmed bandit process, to which optimal solutions are often intractable. This paper shows that a myopic policy that maximizes the immediate one-step reward is optimal when the state transitions are positively correlated over time. When the state transitions are negatively correlated, we show that the same policy is optimal when the number of channels is limited to two or three, while presenting a counterexample for the case of four channels. This result finds applications in opportunistic transmission scheduling in a fading environment, cognitive radio networks for spectrum overlay, and resource-constrained jamming and antijamming."
]
}
|
1109.1175
|
2143039717
|
Recent advances in 3D imaging technologies give rise to databases of human shapes, from which statistical shape models can be built. These statistical models represent prior knowledge of the human shape and enable us to solve shape reconstruction problems from partial information. Generating human shape from traditional anthropometric measurements is such a problem, since these 1D measurements encode 3D shape information. Combined with a statistical shape model, these easy-to-obtain measurements can be leveraged to create 3D human shapes. However, existing methods limit the creation of the shapes to the space spanned by the database and thus require a large amount of training data. In this paper, we introduce a technique that extrapolates the statistically inferred shape to fit the measurement data using non-linear optimization. This method ensures that the generated shape is both human-like and satisfies the measurement conditions. We demonstrate the effectiveness of the method and compare it to existing approaches through extensive experiments, using both synthetic data and real human measurements.
|
Wang @cite_26 uses a parameterized database of human shapes consisting of feature patches to find a new human shape based on a given set of measurements. Feature patches are initialized to smooth patches that are subsequently refined to capture geometric details of the body shape. While this refinement does not maintain the smoothness of the patches, the final results are visually smooth but do not contain realistic localized shape details. The approach finds models in the database with measurements similar to the given set of measurements and computes the new human shape as a linear combination of these models. @cite_21 use a similar approach, but the models are represented using a layer-based representation.
|
{
"cite_N": [
"@cite_26",
"@cite_21"
],
"mid": [
"2020466163",
"1569618975"
],
"abstract": [
"Abstract This paper presents a novel feature based parameterization approach of human bodies from the unorganized cloud points and the parametric design method for generating new models based on the parameterization. The parameterization consists of two phases. First, the semantic feature extraction technique is applied to construct the feature wireframe of a human body from laser scanned 3D unorganized points. Secondly, the symmetric detail mesh surface of the human body is modeled. Gregory patches are utilized to generate G 1 continuous mesh surface interpolating the curves on feature wireframe. After that, a voxel-based algorithm adds details on the smooth G 1 continuous surface by the cloud points. Finally, the mesh surface is adjusted to become symmetric. Compared to other template fitting based approaches, the parameterization approach introduced in this paper is more efficient. The parametric design approach synthesizes parameterized sample models to a new human body according to user input sizing dimensions. It is based on a numerical optimization process. The strategy of choosing samples for synthesis is also introduced. Human bodies according to a wide range of dimensions can be generated by our approach. Different from the mathematical interpolation function based human body synthesis methods, the models generated in our method have the approximation errors minimized. All mannequins constructed by our approach have consistent feature patches, which benefits the design automation of customized clothes around human bodies a lot.",
"Personalize mannequin design are the basic element in virtual garment simulation and visualization. This paper addresses the problem of mannequin construction from scanned point cloud and regenerates their shapes by inputting variant dimension. Layer-based simplification and parameterization algorithm is presented to preserve the mannequin's features as far as possible. The regenerated models are watertight and used for clothes design."
]
}
|
1109.1175
|
2143039717
|
Recent advances in 3D imaging technologies give rise to databases of human shapes, from which statistical shape models can be built. These statistical models represent prior knowledge of the human shape and enable us to solve shape reconstruction problems from partial information. Generating human shape from traditional anthropometric measurements is such a problem, since these 1D measurements encode 3D shape information. Combined with a statistical shape model, these easy-to-obtain measurements can be leveraged to create 3D human shapes. However, existing methods limit the creation of the shapes to the space spanned by the database and thus require a large amount of training data. In this paper, we introduce a technique that extrapolates the statistically inferred shape to fit the measurement data using non-linear optimization. This method ensures that the generated shape is both human-like and satisfies the measurement conditions. We demonstrate the effectiveness of the method and compare it to existing approaches through extensive experiments, using both synthetic data and real human measurements.
|
@cite_0 @cite_23 compute a new triangular mesh based on a given set of measurements. Starting from a parameterized database of human bodies in similar poses, the approach performs Principal Component Analysis (PCA) of the data. This yields one PCA weight for each training shape. The training database is used to learn a linear mapping from the set of measurements measured on the training data to the PCA space. This mapping is called feature analysis. Feature analysis can be used to compute a new PCA weight based on a new set of measurements, and the learned PCA model allows computing a new triangular mesh from this PCA weight. @cite_27 perform feature analysis on a database of human shapes consisting of feature patches to find a new model consisting of smooth patches.
|
{
"cite_N": [
"@cite_0",
"@cite_27",
"@cite_23"
],
"mid": [
"2099011563",
"2048454024",
"2154332168"
],
"abstract": [
"We develop a novel method for fitting high-resolution template meshes to detailed human body range scans with sparse 3D markers. We formulate an optimization problem in which the degrees of freedom are an affine transformation at each template vertex. The objective function is a weighted combination of three measures: proximity of transformed vertices to the range data, similarity between neighboring transformations, and proximity of sparse markers at corresponding locations on the template and target surface. We solve for the transformations with a non-linear optimizer, run at two resolutions to speed convergence. We demonstrate reconstruction and consistent parameterization of 250 human body models. With this parameterized set, we explore a variety of applications for human body modeling, including: morphing, texture transfer, statistical analysis of shape, model fitting from sparse markers, feature analysis to modify multiple correlated parameters (such as the weight and height of an individual), and transfer of surface detail and animation controls from a template to fitted models.",
"This paper presents an exemplar-based method to provide intuitive way for users to generate 3D human body shape from semantic parameters. In our approach, human models and their semantic parameters are correlated as a single linear system of equations. When users input a new set of semantic parameters, a new 3D human body will be synthesized from the exemplar human bodies in the database. This approach involves simpler computation compared to non-linear methods while maintaining quality outputs. A semantic parametric design in interactive speed can be implemented easily. Furthermore, a new method is developed to quickly predict whether the parameter values is reasonable or not, with the training models in the human body database. The reconstructed human bodies in this way will all have the same topology (i.e., mesh connectivity), which facilitates the freeform design automation of human-centric products.",
"In this paper, we demonstrate a system for synthesizing high-resolution, realistic 3D human body shapes according to user-specified anthropometric parameters. We begin with a corpus of whole-body 3D laser range scans of 250 different people. For each scan, we warp a common template mesh to fit each scanned shape, thereby creating a one-to-one vertex correspondence between each of the example body shapes. Once we have a common surface representation for each example, we then use principal component analysis to reduce the data storage requirements. The final step is to relate the variation of body shape with concrete parameters, such as body circumferences, point-to-point measurements, etc. These parameters can then be used as \"sliders\" to synthesize new individuals with the required attributes, or to edit the attributes of scanned individuals."
]
}
|
1109.1175
|
2143039717
|
Recent advances in 3D imaging technologies give rise to databases of human shapes, from which statistical shape models can be built. These statistical models represent prior knowledge of the human shape and enable us to solve shape reconstruction problems from partial information. Generating human shape from traditional anthropometric measurements is such a problem, since these 1D measurements encode 3D shape information. Combined with a statistical shape model, these easy-to-obtain measurements can be leveraged to create 3D human shapes. However, existing methods limit the creation of the shapes to the space spanned by the database and thus require a large amount of training data. In this paper, we introduce a technique that extrapolates the statistically inferred shape to fit the measurement data using non-linear optimization. This method ensures that the generated shape is both human-like and satisfies the measurement conditions. We demonstrate the effectiveness of the method and compare it to existing approaches through extensive experiments, using both synthetic data and real human measurements.
|
Seo and Magnenat-Thalmann @cite_12 represent the human body as a triangular mesh with an associated skeleton model. As with the approach of , this approach reduces the dimensionality of the data using PCA. This yields a set of PCA weights. The approach learns a mapping from the set of measurements measured on the training data to the PCA space using an interpolating radial basis function (RBF) with a Gaussian kernel @cite_1 . As in the approach of , this mapping produces a shape based on a new set of measurements. @cite_5 apply the two previously reviewed approaches to a new representation of human body shapes that is posture invariant. Their method simultaneously models body pose and shape.
|
{
"cite_N": [
"@cite_5",
"@cite_1",
"@cite_12"
],
"mid": [
"1993846356",
"334078648",
"2005645152"
],
"abstract": [
"A circuit for controlling a display panel identifying malfunctions in an engine generator receives a plurality of electrical signals from the engine generator, each of which identifies a particular trouble. The electrical signal may be produced by closing a switch. It is caused to operate a latch that lights a light associated with the particular malfunction. Indications of other malfunctions are suppressed until the circuit is reset. A manual reset tests all lights and then leaves them off ready to respond. A power-up reset does not test lights but leaves all lights off ready to respond. The circuit is rendered especially appropriate for military use by hardening against radiation and against pulses of electromagnetic interference.",
"Preliminaries: Size Measures and Shape Coordinates. Preliminaries: Planar Procrustes Analysis. Shape Space and Distance. General Procrustes Methods. Shape Models for Two Dimensional Data. Tangent Space Inference. Size--and--Shape. Distributions for Higher Dimensions. Deformations and Describing Shape Change. Shape in Images. Additional Topics. References and Author Index. Index.",
"In this paper, we present an automatic, runtime modeler for modeling realistic, animatable human bodies. A user can generate a new model or modify an existing one simply by inputting a number of sizing parameters.We approach the problem by forming deformation functions that are devoted to the generation of appropriate shape and proportion of the body geometry by taking the parameters as input. Starting from a number of 3D scanned data of human body models as examples, we derive these functions by using radial basis interpolation. A prerequisite of such formulation is to have correspondence among example models in the database. We obtain the correspondence by fitting a template onto each scanned data. Throughout the paper, body geometry is considered to have two distinct entities, namely rigid and elastic component of the deformation. The rigid deformation is represented by the corresponding joint parameters, which will determine the linear approximation of the physique. The elastic deformation is essentially vertex displacements, which, when added to the rigid deformation, depicts the detail shape of the body.Having these interpolators formulated, the runtime modeling can be reduced to the function evaluation and application of the evaluated results to the template model. We demonstrate our method by applying different parameters to generate a wide range of different body models."
]
}
|
1109.1175
|
2143039717
|
Recent advances in 3D imaging technologies give rise to databases of human shapes, from which statistical shape models can be built. These statistical models represent prior knowledge of the human shape and enable us to solve shape reconstruction problems from partial information. Generating human shape from traditional anthropometric measurements is such a problem, since these 1D measurements encode 3D shape information. Combined with a statistical shape model, these easy-to-obtain measurements can be leveraged to create 3D human shapes. However, existing methods limit the creation of the shapes to the space spanned by the database and thus require a large amount of training data. In this paper, we introduce a technique that extrapolates the statistically inferred shape to fit the measurement data using non-linear optimization. This method ensures that the generated shape is both human-like and satisfies the measurement conditions. We demonstrate the effectiveness of the method and compare it to existing approaches through extensive experiments, using both synthetic data and real human measurements.
|
Recently, Baek and Lee @cite_19 presented a technique that uses hierarchical clustering to build a statistical model of the training data. The approach proceeds by clustering the training database and by performing a multi-cluster analysis of the training data. To predict a body shape based on a set of input measurements, the approach finds the shape within the learned shape space that best describes the measurements using an optimization of shape parameters. It is shown experimentally that accurate and visually pleasing body shapes are estimated when the input body sizes are inside the shape space spanned by the training data. This approach is conceptually similar to the first optimization step of our algorithm. Hence, we expect that this approach cannot model shape variations that are outside of the shape space spanned by the training data.
|
{
"cite_N": [
"@cite_19"
],
"mid": [
"2077557812"
],
"abstract": [
"The objective of this study is the development of a novel parametric human body shape modeling framework for integration into various product design applications. Our modeling framework is comprised of three phases of database construction, statistical analysis, and model generation. During the database construction phase, a 3D whole body scan data of 250 subjects are obtained, and their data structures are processed so as to be suitable for statistical analysis. Using those preprocessed scan data, the characteristics of the human body shape variation and their correlations with several items of body sizes are investigated in the statistical analysis phase. The correlations obtained from such analysis allow us to develop an interactive modeling interface, which takes the body sizes as inputs and returns a corresponding body shape model as an output. Using this interface, we develop a parametric human body shape modeling system and generate body shape models based on the input body sizes. In our experiment, our modeler produced reasonable results having not only a high level of accuracy but also fine visual fidelity. Compared to other parametric human modeling approaches, our method contributes to the related field by introducing a novel method for correlating body shape and body sizes and by establishing an improved parameter optimization technique for the model generation process."
]
}
|
1109.1175
|
2143039717
|
Recent advances in 3D imaging technologies give rise to databases of human shapes, from which statistical shape models can be built. These statistical models represent prior knowledge of the human shape and enable us to solve shape reconstruction problems from partial information. Generating human shape from traditional anthropometric measurements is such a problem, since these 1D measurements encode 3D shape information. Combined with a statistical shape model, these easy-to-obtain measurements can be leveraged to create 3D human shapes. However, existing methods limit the creation of the shapes to the space spanned by the database and thus require a large amount of training data. In this paper, we introduce a technique that extrapolates the statistically inferred shape to fit the measurement data using non-linear optimization. This method ensures that the generated shape is both human-like and satisfies the measurement conditions. We demonstrate the effectiveness of the method and compare it to existing approaches through extensive experiments, using both synthetic data and real human measurements.
|
* Estimating 3D Body or Face Shapes from Markers. @cite_17 aim to estimate a 3D human body shape based on a sparse set of marker positions, a technique that is useful when motion capture data is available. Their SCAPE model represents the human body as a triangular mesh with an associated skeleton model. Like the approaches discussed above, this one reduces the dimensionality of the data using PCA, which yields a set of PCA weights. Given a set of marker positions located on the body, the authors compute a new triangular mesh by adjusting the PCA weights to solve a non-linear optimization problem. Since this method searches for a solution in the learned PCA space, it cannot capture local variations that are not present in the training database.
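The fitting step can be sketched as follows. The actual method solves a non-linear optimization over the SCAPE parameters; on a toy linear PCA model the same idea reduces to least squares over the observed marker coordinates (all data below is synthetic, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training database": 50 body shapes, each flattened to 30 coordinates.
meshes = rng.normal(size=(50, 30))
mean = meshes.mean(axis=0)

# PCA via SVD of the centered data; keep k principal directions.
k = 5
_, _, Vt = np.linalg.svd(meshes - mean, full_matrices=False)
basis = Vt[:k]                                  # (k, 30)

# Sparse markers: only 8 of the 30 coordinates are observed.
observed = rng.choice(30, size=8, replace=False)
target = mean + basis.T @ rng.normal(size=k)    # ground-truth shape in PCA span
markers = target[observed]

# Solve for PCA weights w minimizing ||(mean + basis^T w)[observed] - markers||^2.
A = basis[:, observed].T                        # (8, k)
w, *_ = np.linalg.lstsq(A, markers - mean[observed], rcond=None)

reconstruction = mean + basis.T @ w
```

Because the toy target lies in the learned PCA span and there are more markers than weights, the least-squares fit recovers the full shape, which mirrors the paper's observation that the result is confined to the learned space.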
|
{
"cite_N": [
"@cite_17"
],
"mid": [
"1989191365"
],
"abstract": [
"We introduce the SCAPE method (Shape Completion and Animation for PEople)---a data-driven method for building a human shape model that spans variation in both subject shape and pose. The method is based on a representation that incorporates both articulated and non-rigid deformations. We learn a pose deformation model that derives the non-rigid surface deformation as a function of the pose of the articulated skeleton. We also learn a separate model of variation based on body shape. Our two models can be combined to produce 3D surface models with realistic muscle deformation for different people in different poses, when neither appear in the training set. We show how the model can be used for shape completion --- generating a complete surface mesh given a limited set of markers specifying the target shape. We present applications of shape completion to partial view completion and motion capture animation. In particular, our method is capable of constructing a high-quality animated surface model of a moving person, with realistic muscle deformation, using just a single static scan and a marker motion capture sequence of the person."
]
}
|
1109.1175
|
2143039717
|
Recent advances in 3D imaging technologies give rise to databases of human shapes, from which statistical shape models can be built. These statistical models represent prior knowledge of the human shape and enable us to solve shape reconstruction problems from partial information. Generating human shape from traditional anthropometric measurements is such a problem, since these 1D measurements encode 3D shape information. Combined with a statistical shape model, these easy-to-obtain measurements can be leveraged to create 3D human shapes. However, existing methods limit the creation of the shapes to the space spanned by the database and thus require a large amount of training data. In this paper, we introduce a technique that extrapolates the statistically inferred shape to fit the measurement data using non-linear optimization. This method ensures that the generated shape is both human-like and satisfies the measurement conditions. We demonstrate the effectiveness of the method and compare it to existing approaches through extensive experiments, using both synthetic data and real human measurements.
|
Blanz and Vetter @cite_15 estimate a 3D face shape from a single input image in neutral expression. They start by building a parameterized database of textured 3D faces and performing PCA on the shape and texture data. Given an input image, the learned PCA space is searched to find the textured shape (and parameters related to rendering the model) that best explains the input image.
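The morphable-model idea, i.e. new faces as statistically constrained linear combinations of prototypes, can be sketched on toy vectors (a simplified illustration with synthetic data, not Blanz and Vetter's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy prototype set: 10 example "faces", each a flattened shape vector.
prototypes = rng.normal(size=(10, 12))

# A new face as a convex combination of the prototypes.
alpha = rng.random(10)
alpha /= alpha.sum()                 # barycentric weights summing to 1
new_face = alpha @ prototypes

# The statistics of the examples regularize toward "likely" faces:
mean = prototypes.mean(axis=0)
cov = np.cov(prototypes, rowvar=False) + 1e-6 * np.eye(12)  # ridge: 10 samples in 12-D
# Mahalanobis distance penalizes unnatural shapes (lower = more face-like).
d = (new_face - mean) @ np.linalg.inv(cov) @ (new_face - mean)
```

In the paper, the same prior is what "regulates the naturalness of modeled faces"; here it is just a quadratic penalty on the deviation from the sample mean.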
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"2237250383"
],
"abstract": [
"In this paper, a new technique for modeling textured 3D faces is introduced. 3D faces can either be generated automatically from one or more photographs, or modeled directly through an intuitive user interface. Users are assisted in two key problems of computer aided face modeling. First, new face images or new 3D face models can be registered automatically by computing dense one-to-one correspondence to an internal face model. Second, the approach regulates the naturalness of modeled faces avoiding faces with an “unlikely” appearance. Starting from an example set of 3D face models, we derive a morphable face model by transforming the shape and texture of the examples into a vector space representation. New faces and expressions can be modeled by forming linear combinations of the prototypes. Shape and texture constraints derived from the statistics of our example faces are used to guide manual modeling or automated matching algorithms. We show 3D face reconstructions from single images and their applications for photo-realistic image manipulations. We also demonstrate face manipulations according to complex parameters such as gender, fullness of a face or its distinctiveness."
]
}
|
1109.1175
|
2143039717
|
Recent advances in 3D imaging technologies give rise to databases of human shapes, from which statistical shape models can be built. These statistical models represent prior knowledge of the human shape and enable us to solve shape reconstruction problems from partial information. Generating human shape from traditional anthropometric measurements is such a problem, since these 1D measurements encode 3D shape information. Combined with a statistical shape model, these easy-to-obtain measurements can be leveraged to create 3D human shapes. However, existing methods limit the creation of the shapes to the space spanned by the database and thus require a large amount of training data. In this paper, we introduce a technique that extrapolates the statistically inferred shape to fit the measurement data using non-linear optimization. This method ensures that the generated shape is both human-like and satisfies the measurement conditions. We demonstrate the effectiveness of the method and compare it to existing approaches through extensive experiments, using both synthetic data and real human measurements.
|
@cite_14 estimate a body shape from two images of a human in a fixed posture. Starting from a parameterized database of human meshes in similar poses, the approach performs PCA of the 3D data. Given the two images, the learned PCA space is searched to find a set of PCA weights that corresponds to a 3D shape that matches the input images well.
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"1553670630"
],
"abstract": [
"We present a data-driven shape model for reconstructing human body models from one or more 2D photos. One of the key tasks in reconstructing the 3D model from image data is shape recovery, a task done until now in utterly geometric way, in the domain of human body modeling. In contrast, we adopt a data-driven, parameterized deformable model that is acquired from a collection of range scans of real human body. The key idea is to complement the image-based reconstruction method by leveraging the quality shape and statistic information accumulated from multiple shapes of range-scanned people. In the presence of ambiguity either from the noise or missing views, our technique has a bias towards representing as much as possible the previously acquired ‘knowledge' on the shape geometry. Texture coordinates are then generated by projecting the modified deformable model onto the front and back images. Our technique has shown to reconstruct successfully human body models from minimum number images, even from a single image input."
]
}
|
1109.1175
|
2143039717
|
Recent advances in 3D imaging technologies give rise to databases of human shapes, from which statistical shape models can be built. These statistical models represent prior knowledge of the human shape and enable us to solve shape reconstruction problems from partial information. Generating human shape from traditional anthropometric measurements is such a problem, since these 1D measurements encode 3D shape information. Combined with a statistical shape model, these easy-to-obtain measurements can be leveraged to create 3D human shapes. However, existing methods limit the creation of the shapes to the space spanned by the database and thus require a large amount of training data. In this paper, we introduce a technique that extrapolates the statistically inferred shape to fit the measurement data using non-linear optimization. This method ensures that the generated shape is both human-like and satisfies the measurement conditions. We demonstrate the effectiveness of the method and compare it to existing approaches through extensive experiments, using both synthetic data and real human measurements.
|
Chen and Cipolla @cite_24 aim to estimate the human body shape in a fixed pose based on a given silhouette. Starting from a parameterized database of human meshes in similar poses and a set of corresponding silhouettes, the approach performs PCA of the 3D and 2D data separately. The approach then computes a mapping from the PCA space of the silhouette data to the PCA space of the 3D data using a Shared Gaussian Process Latent Variable Model (SGPLVM) @cite_7 . Given a new silhouette, the approach maps the silhouette into silhouette PCA space and uses the SGPLVM to map to the PCA space of the 3D meshes. @cite_4 use a similar approach to estimate the pose of a human body based on a given silhouette.
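The SGPLVM links the two latent spaces nonlinearly; as a rough, simplified stand-in, the same pipeline can be mimicked with PCA on each modality plus a linear least-squares map between the latent spaces (all data below is synthetic and exactly low-rank, so the linear map suffices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Paired toy data: 40 (silhouette, mesh) pairs driven by a shared
# low-dimensional cause, mimicking the shared-latent-space assumption.
z = rng.normal(size=(40, 3))                # hidden shared factors
silhouettes = z @ rng.normal(size=(3, 20))  # 20-D silhouette features
meshes = z @ rng.normal(size=(3, 60))       # 60-D mesh coordinates

def pca(X, k):
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

mu_s, B_s = pca(silhouettes, 3)
mu_m, B_m = pca(meshes, 3)

# Project both modalities into their PCA spaces ...
Zs = (silhouettes - mu_s) @ B_s.T
Zm = (meshes - mu_m) @ B_m.T

# ... and fit a map silhouette-latent -> mesh-latent
# (the paper uses an SGPLVM here; least squares is a linear analogue).
W, *_ = np.linalg.lstsq(Zs, Zm, rcond=None)

# Map a new silhouette to a mesh estimate.
new_sil = silhouettes[0]
pred_mesh = mu_m + ((new_sil - mu_s) @ B_s.T) @ W @ B_m
```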
|
{
"cite_N": [
"@cite_24",
"@cite_4",
"@cite_7"
],
"mid": [
"2030575031",
"1541825479",
"2103510282"
],
"abstract": [
"In this paper, we aim to reconstruct free-from 3D models from a single view by learning the prior knowledge of a specific class of objects. Instead of heuristically proposing specific regularities and defining parametric models as previous research, our shape prior is learned directly from existing 3D models under a framework based on the Gaussian Process Latent Variable Model (GPLVM). The major contributions of the paper include: 1) a probabilistic framework for prior-based reconstruction we propose, which requires no heuristic of the object, and can be easily generalized to handle various categories of 3D objects, and 2) an attempt at automatic reconstruction of more complex 3D shapes, like human bodies, from 2D silhouettes only. Qualitative and quantitative experimental results on both synthetic and real data demonstrate the efficacy of our new approach.",
"We describe a method for recovering 3D human body pose from silhouettes. Our model is based on learning a latent space using the Gaussian Process Latent Variable Model (GP-LVM) [1] encapsulating both pose and silhouette features Our method is generative, this allows us to model the ambiguities of a silhouette representation in a principled way. We learn a dynamical model over the latent space which allows us to disambiguate between ambiguous silhouettes by temporal consistency. The model has only two free parameters and has several advantages over both regression approaches and other generative methods. In addition to the application shown in this paper the suggested model is easily extended to multiple observation spaces without constraints on type.",
"We propose an algorithm that uses Gaussian process regression to learn common hidden structure shared between corresponding sets of heterogenous observations. The observation spaces are linked via a single, reduced-dimensionality latent variable space. We present results from two datasets demonstrating the algorithms's ability to synthesize novel data from learned correspondences. We first show that the method can learn the nonlinear mapping between corresponding views of objects, filling in missing data as needed to synthesize novel views. We then show that the method can learn a mapping between human degrees of freedom and robotic degrees of freedom for a humanoid robot, allowing robotic imitation of human poses from motion capture data."
]
}
|
1109.1175
|
2143039717
|
Recent advances in 3D imaging technologies give rise to databases of human shapes, from which statistical shape models can be built. These statistical models represent prior knowledge of the human shape and enable us to solve shape reconstruction problems from partial information. Generating human shape from traditional anthropometric measurements is such a problem, since these 1D measurements encode 3D shape information. Combined with a statistical shape model, these easy-to-obtain measurements can be leveraged to create 3D human shapes. However, existing methods limit the creation of the shapes to the space spanned by the database and thus require a large amount of training data. In this paper, we introduce a technique that extrapolates the statistically inferred shape to fit the measurement data using non-linear optimization. This method ensures that the generated shape is both human-like and satisfies the measurement conditions. We demonstrate the effectiveness of the method and compare it to existing approaches through extensive experiments, using both synthetic data and real human measurements.
|
@cite_25 estimate both the shape and pose of a human body shape from a single photograph with a set of markers to be identified by the user. The approach is based on the SCAPE model. When adjusting the PCA weights, the shape is deformed to best match the image in terms of a shape-from-shading energy. @cite_3 estimate both the shape and pose of a human body from a photograph of a dressed person. This approach requires a manual segmentation of the background and the human in the image.
|
{
"cite_N": [
"@cite_25",
"@cite_3"
],
"mid": [
"2545173102",
"1992475172"
],
"abstract": [
"We describe a solution to the challenging problem of estimating human body shape from a single photograph or painting. Our approach computes shape and pose parameters of a 3D human body model directly from monocular image cues and advances the state of the art in several directions. First, given a user-supplied estimate of the subject's height and a few clicked points on the body we estimate an initial 3D articulated body pose and shape. Second, using this initial guess we generate a tri-map of regions inside, outside and on the boundary of the human, which is used to segment the image using graph cuts. Third, we learn a low-dimensional linear model of human shape in which variations due to height are concentrated along a single dimension, enabling height-constrained estimation of body shape. Fourth, we formulate the problem of parametric human shape from shading. We estimate the body pose, shape and reflectance as well as the scene lighting that produces a synthesized body that robustly matches the image evidence. Quantitative experiments demonstrate how smooth shading provides powerful constraints on human shape. We further demonstrate a novel application in which we extract 3D human models from archival photographs and paintings.",
"In this paper we propose a multilinear model of human pose and body shape which is estimated from a database of registered 3D body scans in different poses. The model is generated by factorizing the measurements into pose and shape dependent components. By combining it with an ICP based registration method, we are able to estimate pose and body shape of dressed subjects from single images. If several images of the subject are available, shape and poses can be optimized simultaneously for all input images. Additionally, while estimating pose and shape, we use the model as a virtual calibration pattern and also recover the parameters of the perspective camera model the images were created with."
]
}
|
1109.0758
|
1539727242
|
In this paper, we propose a probabilistic generative model, called unified model, which naturally unifies the ideas of social influence, collaborative filtering and content-based methods for item recommendation. To address the issue of hidden social influence, we devise new algorithms to learn the model parameters of our proposal based on expectation maximization (EM). In addition to a single-machine version of our EM algorithm, we further devise a parallelized implementation on the Map-Reduce framework to process two large-scale datasets we collect. Moreover, we show that the social influence obtained from our generative models can be used for group recommendation. Finally, we conduct comprehensive experiments using the datasets crawled from last.fm and whrrl.com to validate our ideas. Experimental results show that the generative models with social influence significantly outperform those without incorporating social influence. The unified generative model proposed in this paper obtains the best performance. Moreover, our study on social influence finds that users in whrrl.com are more likely to get influenced by friends than those in last.fm. The experimental results also confirm that our social influence based group recommendation algorithm outperforms the state-of-the-art algorithms for group recommendation.
|
. Under the context of social networks, social friendship has been shown to be beneficial for recommendation @cite_15 @cite_18 @cite_19 @cite_33 @cite_4 @cite_16 @cite_22 . However, prior works in this area are mostly based on ad hoc heuristics, and how a user is influenced by friends in the item selection process remains vague. For example, @cite_16 linearly combines social influence with conventional collaborative filtering; @cite_33 @cite_4 employ the random walk approach @cite_7 to incorporate social network information into the process of item recommendation; while @cite_15 @cite_18 @cite_19 explore social friendship via matrix factorization techniques, where social influence is integrated by simple linear combination @cite_18 @cite_19 or as a regularization term @cite_15 .
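The regularization-term variant can be sketched as gradient descent on a toy factorization model: a squared error on observed ratings, weight decay, and a social term pulling each user's latent factors toward their friends' (hypothetical data and hyperparameters; real systems use more elaborate losses and trust weights):

```python
import numpy as np

rng = np.random.default_rng(3)
n_users, n_items, k = 6, 8, 2

# Sparse rating matrix (0 = unobserved) and symmetric friend pairs.
R = rng.integers(1, 6, size=(n_users, n_items)) * (rng.random((n_users, n_items)) < 0.5)
friends = [(0, 1), (1, 2), (3, 4)]
mask = R > 0

U = 0.1 * rng.normal(size=(n_users, k))   # user latent factors
V = 0.1 * rng.normal(size=(n_items, k))   # item latent factors
lam, beta, lr = 0.05, 0.1, 0.02           # weight decay, social term, step size

def loss(U, V):
    return float(np.sum((mask * (R - U @ V.T)) ** 2))

initial = loss(U, V)
for _ in range(500):
    E = mask * (R - U @ V.T)              # errors on observed ratings only
    gU = -E @ V + lam * U
    gV = -E.T @ U + lam * V
    # Social regularization: pull each user's factors toward their friends'.
    for i, j in friends:
        gU[i] += beta * (U[i] - U[j])
        gU[j] += beta * (U[j] - U[i])
    U -= lr * gU
    V -= lr * gV
final = loss(U, V)
```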
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_33",
"@cite_22",
"@cite_7",
"@cite_19",
"@cite_15",
"@cite_16"
],
"mid": [
"1980127182",
"1976320242",
"2084527756",
"",
"2133299088",
"2093219534",
"2144487656",
"2087692915"
],
"abstract": [
"With the exponential growth of Web contents, Recommender System has become indispensable for discovering new information that might interest Web users. Despite their success in the industry, traditional recommender systems suffer from several problems. First, the sparseness of the user-item matrix seriously affects the recommendation quality. Second, traditional recommender systems ignore the connections among users, which loses the opportunity to provide more accurate and personalized recommendations. In this paper, aiming at providing more realistic and accurate recommendations, we propose a factor analysis-based optimization framework to incorporate the user trust and distrust relationships into the recommender systems. The contributions of this paper are three-fold: (1) We elaborate how user distrust information can benefit the recommender systems. (2) In terms of the trust relations, distinct from previous trust-aware recommender systems which are based on some heuristics, we systematically interpret how to constrain the objective function with trust regularization. (3) The experimental results show that the distrust relations among users are as important as the trust relations. The complexity analysis shows our method scales linearly with the number of observations, while the empirical analysis on a large Epinions dataset proves that our approaches perform better than the state-of-the-art approaches.",
"Social network systems, like last.fm, play a significant role in Web 2.0, containing large amounts of multimedia-enriched data that are enhanced both by explicit user-provided annotations and implicit aggregated feedback describing the personal preferences of each user. It is also a common tendency for these systems to encourage the creation of virtual networks among their users by allowing them to establish bonds of friendship and thus provide a novel and direct medium for the exchange of data. We investigate the role of these additional relationships in developing a track recommendation system. Taking into account both the social annotation and friendships inherent in the social graph established among users, items and tags, we created a collaborative recommendation system that effectively adapts to the personal information needs of each user. We adopt the generic framework of Random Walk with Restarts in order to provide with a more natural and efficient way to represent social networks. In this work we collected a representative enough portion of the music social network last.fm, capturing explicitly expressed bonds of friendship of the user as well as social tags. We performed a series of comparison experiments between the Random Walk with Restarts model and a user-based collaborative filtering method using the Pearson Correlation similarity. The results show that the graph model system benefits from the additional information embedded in social knowledge. In addition, the graph model outperforms the standard collaborative filtering method.",
"Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. In addition, these methods do not know how confident they are in their recommendations. Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods.",
"",
"How closely related are two nodes in a graph? How to compute this score quickly, on huge, disk-resident, real graphs? Random walk with restart (RWR) provides a good relevance score between two nodes in a weighted graph, and it has been successfully used in numerous settings, like automatic captioning of images, generalizations to the \"connection subgraphs\", personalized PageRank, and many more. However, the straightforward implementations of RWR do not scale for large graphs, requiring either quadratic space and cubic pre-computation time, or slow response time on queries. We propose fast solutions to this problem. The heart of our approach is to exploit two important properties shared by many real graphs: (a) linear correlations and (b) block- wise, community-like structure. We exploit the linearity by using low-rank matrix approximation, and the community structure by graph partitioning, followed by the Sherman- Morrison lemma for matrix inversion. Experimental results on the Corel image and the DBLP dabasets demonstrate that our proposed methods achieve significant savings over the straightforward implementations: they can save several orders of magnitude in pre-computation and storage cost, and they achieve up to 150x speed up with 90 + quality preservation.",
"As an indispensable technique in the field of Information Filtering, Recommender System has been well studied and developed both in academia and in industry recently. However, most of current recommender systems suffer the following problems: (1) The large-scale and sparse data of the user-item matrix seriously affect the recommendation quality. As a result, most of the recommender systems cannot easily deal with users who have made very few ratings. (2) The traditional recommender systems assume that all the users are independent and identically distributed; this assumption ignores the connections among users, which is not consistent with the real world recommendations. Aiming at modeling recommender systems more accurately and realistically, we propose a novel probabilistic factor analysis framework, which naturally fuses the users' tastes and their trusted friends' favors together. In this framework, we coin the term Social Trust Ensemble to represent the formulation of the social trust restrictions on the recommender systems. The complexity analysis indicates that our approach can be applied to very large datasets since it scales linearly with the number of observations, while the experimental results show that our method performs better than the state-of-the-art approaches.",
"Although Recommender Systems have been comprehensively analyzed in the past decade, the study of social-based recommender systems just started. In this paper, aiming at providing a general method for improving recommender systems by incorporating social network information, we propose a matrix factorization framework with social regularization. The contributions of this paper are four-fold: (1) We elaborate how social network information can benefit recommender systems; (2) We interpret the differences between social-based recommender systems and trust-aware recommender systems; (3) We coin the term Social Regularization to represent the social constraints on recommender systems, and we systematically illustrate how to design a matrix factorization objective function with social regularization; and (4) The proposed method is quite general, which can be easily extended to incorporate other contextual information, like social tags, etc. The empirical analysis on two large datasets demonstrates that our approaches outperform other state-of-the-art methods.",
"In this paper, we aim to provide a point-of-interests (POI) recommendation service for the rapid growing location-based social networks (LBSNs), e.g., Foursquare, Whrrl, etc. Our idea is to explore user preference, social influence and geographical influence for POI recommendations. In addition to deriving user preference based on user-based collaborative filtering and exploring social influence from friends, we put a special emphasis on geographical influence due to the spatial clustering phenomenon exhibited in user check-in activities of LBSNs. We argue that the geographical influence among POIs plays an important role in user check-in behaviors and model it by power law distribution. Accordingly, we develop a collaborative recommendation algorithm based on geographical influence based on naive Bayesian. Furthermore, we propose a unified POI recommendation framework, which fuses user preference to a POI with social influence and geographical influence. Finally, we conduct a comprehensive performance evaluation over two large-scale datasets collected from Foursquare and Whrrl. Experimental results with these real datasets show that the unified collaborative recommendation approach significantly outperforms a wide spectrum of alternative recommendation approaches."
]
}
|
1109.0758
|
1539727242
|
In this paper, we propose a probabilistic generative model, called unified model, which naturally unifies the ideas of social influence, collaborative filtering and content-based methods for item recommendation. To address the issue of hidden social influence, we devise new algorithms to learn the model parameters of our proposal based on expectation maximization (EM). In addition to a single-machine version of our EM algorithm, we further devise a parallelized implementation on the Map-Reduce framework to process two large-scale datasets we collect. Moreover, we show that the social influence obtained from our generative models can be used for group recommendation. Finally, we conduct comprehensive experiments using the datasets crawled from last.fm and whrrl.com to validate our ideas. Experimental results show that the generative models with social influence significantly outperform those without incorporating social influence. The unified generative model proposed in this paper obtains the best performance. Moreover, our study on social influence finds that users in whrrl.com are more likely to get influenced by friends than those in last.fm. The experimental results also confirm that our social influence based group recommendation algorithm outperforms the state-of-the-art algorithms for group recommendation.
|
. To explore how to utilize social influence for group recommendation, we provide an in-depth study and comparison of group recommendation techniques. Group recommendation has been designed for various domains such as web news pages @cite_27 , tourism @cite_26 , music @cite_31 @cite_14 , and TV programs and movies @cite_21 @cite_24 . In summary, two main approaches have been proposed for group recommendation @cite_12 . The first one creates an aggregated profile for a group based on its group members and then makes recommendations based on the aggregated group profile @cite_31 @cite_24 . The second approach aggregates the recommendation results from individual members into a single group recommendation list. In other words, recommendations (i.e., ranked item lists) for individual members are created independently and then aggregated into a joint group recommendation list @cite_1 , where the aggregation functions could be based on average or least misery strategies @cite_9 . Different from these proposed methods, our approach regenerates the process of how group members would balance their own preferences against those of other members to reach the final decision. Evaluation on real datasets demonstrates that the proposed method using social influence achieves a significant improvement over the traditional methods.
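The average and least-misery aggregation functions can be illustrated on a hypothetical matrix of predicted ratings (3 group members, 3 candidate items):

```python
import numpy as np

# Predicted ratings: rows are group members, columns are candidate items.
scores = np.array([
    [5.0, 4.0, 2.0],   # member A
    [5.0, 4.0, 5.0],   # member B
    [3.0, 4.0, 5.0],   # member C
])

average = scores.mean(axis=0)          # additive utilitarian strategy
least_misery = scores.min(axis=0)      # least misery strategy

best_avg = int(np.argmax(average))     # item 0: highest mean rating
best_lm = int(np.argmax(least_misery)) # item 1: nobody is miserable
```

Item 0 maximizes total satisfaction but leaves member C relatively unhappy, whereas item 1 wins under least misery because every member rates it at least 4; this is exactly the fairness-versus-utility trade-off studied in @cite_9 .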
|
{
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_24",
"@cite_27",
"@cite_31",
"@cite_12"
],
"mid": [
"2035525745",
"1603727626",
"2140715253",
"1527499847",
"",
"",
"22367269",
"2009686152",
""
],
"abstract": [
"Flytrap is a group music environment that knows its users' musical tastes and can automatically construct a soundtrack that tries to please everyone in the room. The system works by paying attention to what music people listen to on their computers. Users of the system have radio frequency ID badges that let the system know when they are nearby. Using the preference information it has gathered from watching its users, and knowledge of how genres of music interrelate, how artists have influenced each other, and what kinds of transitions between songs people tend to make, the 'virtual DJ' finds a compromise and chooses a song. The system tries to satisfy the tastes of people in the room, but it also makes a playlist that fits its own notion of what should come next. Once it has chosen a song, music is automatically broadcast over the network and played on the closest machine.",
"Group recommender systems introduce a whole set of new challenges for recommender systems research. The notion of generating a set of recommendations that will satisfy a group of users with potentially competing interests is challenging in itself. In addition to this we must consider how to record and combine the preferences of many different users as they engage in simultaneous recommendation dialogs. In this paper we introduce a group recommender system that is designed to provide assistance to a group of friends trying the plan a skiing vacation. The system uses the DiamondTouch interactive tabletop to allow up to 4 users to simultaneously engage in parallel recommendation sessions and we describe how personal and shared profiles and interaction spaces can be managed to generate sets of recommendations for the individual as well as the group.",
"Watching television tends to be a social activity. So, adaptive television needs to adapt to groups of users rather than to individual users. In this paper, we discuss different strategies for combining individual user models to adapt to groups, some of which are inspired by Social Choice Theory. In a first experiment, we explore how humans select a sequence of items for a group to watch, based on data about the individuals' preferences. The results show that humans use some of the strategies such as the Average Strategy (a.k.a. Additive Utilitarian), the Average Without Misery Strategy and the Least Misery Strategy, and care about fairness and avoiding individual misery. In a second experiment, we investigate how satisfied people believe they would be with sequences chosen by different strategies, and how their satisfaction corresponds with that predicted by a number of satisfaction functions. The results show that subjects use normalization, deduct misery, and use the ratings in a non-linear way. One of the satisfaction functions produced reasonable, though not completely correct predictions. According to our subjects, the sequences produced by five strategies give satisfaction to all individuals in the group. The results also show that subjects put more emphasis than expected on showing the best rated item to each individual (at a cost of misery for another individual), and that the ratings of the first and last items in the sequence are especially important. In a final experiment, we explore the influence viewing an item can have on the ratings of other items. This is important for deciding the order in which to present items. The results show an effect of both mood and topical relatedness.",
"We present PolyLens, a new collaborative filtering recommender system designed to recommend items for groups of users, rather than for individuals. A group recommender is more appropriate and useful for domains in which several people participate in a single activity, as is often the case with movies and restaurants. We present an analysis of the primary design issues for group recommenders, including questions about the nature of groups, the rights of group members, social value functions for groups, and interfaces for displaying group recommendations. We then report on our PolyLens prototype and the lessons we learned from usage logs and surveys from a nine-month trial that included 819 users We found that users not only valued group recommendations, but were willing to yield some privacy to get the benefits of group recommendations Users valued an extension to the group recommender system that enabled them to invite non-members to participate, via email",
"",
"",
"This paper overviews methods and techniques useful for building group-adaptive systems and presents an experience of building a system that provides news adapted to different groups of users in a public space. Starting from an analysis of the limits of group modelling strategies and of the problems in moving from a user modelling to a group modelling approach in the adaptation of system interaction, we suggest an update of the probabilistic group model to improve the interaction of groups of users with devices devoted to showing news. In particular, we analyze how to build a group model useful for improving the adaptation of a system that provides news on a video wall in a public space attended by a group of users with common interests.",
"Environmental factors affecting shared spaces are typically designed to appeal to the broadest audiences they are expected to serve, ignoring the preferences of the people actually inhabiting the environment at any given time. Examples of such factors include the lighting, temperature, decor or music in the common areas of an office building. We have designed and deployed MUSICFX, a group preference arbitration system that allows the members of a fitness center to influence, but not directly control, the selection of music in a fitness center. We present a number of empirical results from our work with this intelligent environment: the results of a poll of fitness center members, a quantitative evaluation of the performance of a group preference arbitrator in a shared environment, and some interesting anecdotes about members’ experiences with the system.",
""
]
}
|
1109.0730
|
2953110906
|
The performance of Orthogonal Matching Pursuit (OMP) for variable selection is analyzed for random designs. When contrasted with the deterministic case, since the performance is here measured after averaging over the distribution of the design matrix, one can have far less stringent sparsity constraints on the coefficient vector. We demonstrate that for exact sparse vectors, the performance of the OMP is similar to known results on the Lasso algorithm [ (2009) 2183--2202]. Moreover, variable selection under a more relaxed sparsity assumption on the coefficient vector, whereby one has only control on the @math norm of the smaller coefficients, is also analyzed. As a consequence of these results, we also show that the coefficient estimate satisfies strong oracle type inequalities.
|
As mentioned earlier, we are interested in variable selection in the high-dimensional setting. Apart from iterative schemes, another popular approach is the convex relaxation scheme Lasso @cite_1 . In order to motivate our interest in random design matrices, we describe existing results on variable selection, using both methods, with deterministic as well as random design matrices. For convenience, we concentrate on implications of these results assuming the simplest sparsity constraint on @math , namely that @math has only a few non-zero entries. In particular, we assume that |S_0( )| = k, where @math . In other words, attention is restricted to all @math -sparse vectors, that is, those that have exactly @math non-zero entries. For convenience, we drop the dependence on @math and denote @math as @math whenever there is no ambiguity. The simplest goal then is to recover @math exactly, under the additional assumption that all @math , for @math , have magnitude at least @math , where @math . Denote by @math the set of coefficient vectors satisfying this assumption.
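The greedy recovery procedure discussed above can be made concrete with a minimal sketch of OMP itself (an illustrative NumPy implementation with made-up dimensions, not the exact algorithm or regime analyzed in the paper): at each of the @math steps, the column most correlated with the current residual joins the support, and the fit is refreshed by least squares.

```python
import numpy as np

def omp(X, y, k):
    """Greedy Orthogonal Matching Pursuit: pick k columns of X to explain y."""
    n, p = X.shape
    support, residual = [], y.copy()
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(X.T @ residual)))
        support.append(j)
        # re-fit least squares on the columns selected so far
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ coef
    beta = np.zeros(p)
    beta[support] = coef
    return beta, sorted(support)

rng = np.random.default_rng(3)
n, p, k = 100, 256, 5
X = rng.standard_normal((n, p)) / np.sqrt(n)

# exactly k-sparse coefficient vector with entries of magnitude at least 1
beta = np.zeros(p)
true_support = rng.choice(p, k, replace=False)
beta[true_support] = rng.choice([-1.0, 1.0], k) * (1 + rng.random(k))
y = X @ beta                      # noiseless, for a clean recovery check

beta_hat, support_hat = omp(X, y, k)
print("support recovered:", support_hat == sorted(true_support.tolist()))
```

In the noiseless regime with @math well below the phase-transition level, this greedy selection typically recovers the support exactly.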
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"2135046866"
],
"abstract": [
"SUMMARY We propose a new method for estimation in linear models. The 'lasso' minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly 0 and hence gives interpretable models. Our simulation studies suggest that the lasso enjoys some of the favourable properties of both subset selection and ridge regression. It produces interpretable models like subset selection and exhibits the stability of ridge regression. There is also an interesting relationship with recent work in adaptive function estimation by Donoho and Johnstone. The lasso idea is quite general and can be applied in a variety of statistical models: extensions to generalized regression models and tree-based models are briefly described."
]
}
|
1109.0730
|
2953110906
|
The performance of Orthogonal Matching Pursuit (OMP) for variable selection is analyzed for random designs. When contrasted with the deterministic case, since the performance is here measured after averaging over the distribution of the design matrix, one can have far less stringent sparsity constraints on the coefficient vector. We demonstrate that for exact sparse vectors, the performance of the OMP is similar to known results on the Lasso algorithm [ (2009) 2183--2202]. Moreover, variable selection under a more relaxed sparsity assumption on the coefficient vector, whereby one has only control on the @math norm of the smaller coefficients, is also analyzed. As a consequence of these results, we also show that the coefficient estimate satisfies strong oracle type inequalities.
|
A common sufficient condition on @math for this type of recovery is the mutual coherence condition, which requires that the inner product between distinct columns be small. In particular, letting @math , for all @math , it is assumed that \mu(X) = \max_{j \neq j'} \frac{1}{n} |X_j^\top X_{j'}| is @math . Another related criterion is the irrepresentable condition @cite_14 , @cite_23 , which assumes, for every subset @math of size @math , that \| (X_T^\top X_T)^{-1} X_T^\top X_j \|_1 < 1, for all j \in J - T . Here @math denotes the @math norm.
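Both quantities are cheap to evaluate numerically. The sketch below (illustrative only; the dimensions and the fixed support are arbitrary choices, not the paper's setup) computes the mutual coherence of a column-normalized Gaussian design and the irrepresentable quantity for one support:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 200, 50, 3

# i.i.d. Gaussian design with columns normalized so that X_j^T X_j = n
X = rng.standard_normal((n, p))
X *= np.sqrt(n) / np.linalg.norm(X, axis=0)

# Mutual coherence: max over j != j' of |X_j^T X_{j'}| / n
G = X.T @ X / n
mu = np.max(np.abs(G - np.diag(np.diag(G))))

# Irrepresentable quantity for the support T = {0, ..., k-1}:
# max over j not in T of || (X_T^T X_T)^{-1} X_T^T X_j ||_1, required to be < 1
XT = X[:, :k]
M = np.linalg.solve(XT.T @ XT, XT.T @ X[:, k:])   # shape (k, p - k)
irr = np.max(np.abs(M).sum(axis=0))

print(f"coherence mu(X)         = {mu:.3f}")
print(f"irrepresentable maximum = {irr:.3f}  (< 1 means the condition holds for T)")
```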
|
{
"cite_N": [
"@cite_14",
"@cite_23"
],
"mid": [
"2116148865",
"2150940164"
],
"abstract": [
"This article presents new results on using a greedy algorithm, orthogonal matching pursuit (OMP), to solve the sparse approximation problem over redundant dictionaries. It provides a sufficient condition under which both OMP and Donoho's basis pursuit (BP) paradigm can recover the optimal representation of an exactly sparse signal. It leverages this theory to show that both OMP and BP succeed for every sparse input signal from a wide class of dictionaries. These quasi-incoherent dictionaries offer a natural generalization of incoherent dictionaries, and the cumulative coherence function is introduced to quantify the level of incoherence. This analysis unifies all the recent results on BP and extends them to OMP. Furthermore, the paper develops a sufficient condition under which OMP can identify atoms from an optimal approximation of a nonsparse signal. From there, it argues that OMP is an approximation algorithm for the sparse problem over a quasi-incoherent dictionary. That is, for every input signal, OMP calculates a sparse approximant whose error is only a small factor worse than the minimal error that can be attained with the same number of terms.",
"Sparsity or parsimony of statistical models is crucial for their proper interpretations, as in sciences and social sciences. Model selection is a commonly used method to find such models, but usually involves a computationally heavy combinatorial search. Lasso (Tibshirani, 1996) is now being used as a computationally feasible alternative to model selection. Therefore it is important to study Lasso for model selection purposes. In this paper, we prove that a single condition, which we call the Irrepresentable Condition, is almost necessary and sufficient for Lasso to select the true model both in the classical fixed p setting and in the large p setting as the sample size n gets large. Based on these results, sufficient conditions that are verifiable in practice are given to relate to previous works and help applications of Lasso for feature selection and sparse representation. This Irrepresentable Condition, which depends mainly on the covariance of the predictor variables, states that Lasso selects the true model consistently if and (almost) only if the predictors that are not in the true model are \"irrepresentable\" (in a sense to be clarified) by predictors that are in the true model. Furthermore, simulations are carried out to provide insights and understanding of this result."
]
}
|
1109.0730
|
2953110906
|
The performance of Orthogonal Matching Pursuit (OMP) for variable selection is analyzed for random designs. When contrasted with the deterministic case, since the performance is here measured after averaging over the distribution of the design matrix, one can have far less stringent sparsity constraints on the coefficient vector. We demonstrate that for exact sparse vectors, the performance of the OMP is similar to known results on the Lasso algorithm [ (2009) 2183--2202]. Moreover, variable selection under a more relaxed sparsity assumption on the coefficient vector, whereby one has only control on the @math norm of the smaller coefficients, is also analyzed. As a consequence of these results, we also show that the coefficient estimate satisfies strong oracle type inequalities.
|
Observe that if @math is small, it gives strong guarantees on support recovery, since it ensures that any @math , with @math , can be recovered with high probability. However, it imposes severe constraints on the @math matrix. As an example, when the entries of @math are i.i.d. Gaussian, the coherence @math is around @math . Correspondingly, for the coherence condition to hold, @math needs to be @math . In other words, the sparsity @math should be @math , which is rather strong, since ideally one would like @math to be of the same order as @math . Similar requirements are needed for the irrepresentable condition to hold. Recovery using the irrepresentable condition has been shown for the Lasso in @cite_23 , @cite_8 , and for the OMP in @cite_12 , @cite_2 . Indeed, it has been observed, in @cite_23 for the Lasso and in @cite_12 for the OMP, that a similar condition is also necessary if one wants exact recovery of the support while keeping @math small.
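The claimed decay of the coherence for i.i.d. Gaussian designs is easy to observe empirically. A small illustrative check (dimensions arbitrary) compares the measured coherence against the sqrt(2 log p / n) scale:

```python
import numpy as np

rng = np.random.default_rng(1)

def coherence(n, p):
    """Mutual coherence of an n x p design with unit-norm i.i.d. Gaussian columns."""
    X = rng.standard_normal((n, p))
    X /= np.linalg.norm(X, axis=0)
    G = np.abs(X.T @ X)
    np.fill_diagonal(G, 0.0)
    return G.max()

# As n grows (with p proportional to n), the coherence shrinks like sqrt(log p / n)
for n in (100, 400, 1600):
    p = 2 * n
    print(f"n={n:5d}  p={p:5d}  mu={coherence(n, p):.3f}  "
          f"sqrt(2 log p / n)={np.sqrt(2 * np.log(p) / n):.3f}")
```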
|
{
"cite_N": [
"@cite_2",
"@cite_12",
"@cite_23",
"@cite_8"
],
"mid": [
"",
"2147329339",
"2150940164",
"2127300249"
],
"abstract": [
"",
"This paper studies the feature selection problem using a greedy least squares regression algorithm. We show that under a certain irrepresentable condition on the design matrix (but independent of the sparse target), the greedy algorithm can select features consistently when the sample size approaches infinity. The condition is identical to a corresponding condition for Lasso. Moreover, under a sparse eigenvalue condition, the greedy algorithm can reliably identify features as long as each nonzero coefficient is larger than a constant times the noise level. In comparison, Lasso may require the coefficients to be larger than O(√s) times the noise level in the worst case, where s is the number of nonzero coefficients.",
"Sparsity or parsimony of statistical models is crucial for their proper interpretations, as in sciences and social sciences. Model selection is a commonly used method to find such models, but usually involves a computationally heavy combinatorial search. Lasso (Tibshirani, 1996) is now being used as a computationally feasible alternative to model selection. Therefore it is important to study Lasso for model selection purposes. In this paper, we prove that a single condition, which we call the Irrepresentable Condition, is almost necessary and sufficient for Lasso to select the true model both in the classical fixed p setting and in the large p setting as the sample size n gets large. Based on these results, sufficient conditions that are verifiable in practice are given to relate to previous works and help applications of Lasso for feature selection and sparse representation. This Irrepresentable Condition, which depends mainly on the covariance of the predictor variables, states that Lasso selects the true model consistently if and (almost) only if the predictors that are not in the true model are \"irrepresentable\" (in a sense to be clarified) by predictors that are in the true model. Furthermore, simulations are carried out to provide insights and understanding of this result.",
"The problem of consistently estimating the sparsity pattern of a vector β* ∈ R^p based on observations contaminated by noise arises in various contexts, including signal denoising, sparse approximation, compressed sensing, and model selection. We analyze the behavior of l1-constrained quadratic programming (QP), also referred to as the Lasso, for recovering the sparsity pattern. Our main result is to establish precise conditions on the problem dimension p, the number k of nonzero elements in β*, and the number of observations n that are necessary and sufficient for sparsity pattern recovery using the Lasso. We first analyze the case of observations made using deterministic design matrices and sub-Gaussian additive noise, and provide sufficient conditions for support recovery and l∞-error bounds, as well as results showing the necessity of incoherence and bounds on the minimum value. We then turn to the case of random designs, in which each row of the design is drawn from a N(0, Σ) ensemble. For a broad class of Gaussian ensembles satisfying mutual incoherence conditions, we compute explicit values of thresholds 0 < θl ≤ θu < +∞ with the following properties: for any δ > 0, if n > 2(θu + δ)k log(p - k), then the Lasso succeeds in recovering the sparsity pattern with probability converging to one for large problems, whereas for n < 2(θl - δ)k log(p - k), the probability of successful recovery converges to zero. For the special case of the uniform Gaussian ensemble (Σ = I p×p), we show that θl = θu = 1, so that the precise threshold n = 2k log(p - k) is exactly determined."
]
}
|
1109.0318
|
2015697017
|
Source localization by matched-field processing (MFP) generally involves solving a number of computationally intensive partial differential equations. This paper introduces a technique that mitigates this computational workload by “compressing” these computations. Drawing on key concepts from the recently developed field of compressed sensing, it shows how a low-dimensional proxy for the Green’s function can be constructed by backpropagating a small set of random receiver vectors. Then the source can be located by performing a number of “short” correlations between this proxy and the projection of the recorded acoustic data in the compressed space. Numerical experiments in a Pekeris ocean waveguide are presented that demonstrate that this compressed version of MFP is as effective as traditional MFP even when the compression is significant. The results are particularly promising in the broadband regime where using as few as two random backpropagations per frequency performs almost as well as the traditiona...
|
In this paper, we effectively demonstrate how classical localization procedures under a least-squares framework, such as matched-field processing (MFP), may be solved in a reduced-dimensional space even without a priori knowledge of the ``best'' dimension-reducing transform. This property has been shown in similar forms in the mainstream canon of Compressed Sensing (CS) literature. The authors have described a number of useful variations on the theme of CS @cite_8 , including a matched filtering detector. They have also described the ``smashed filter'', which is designed primarily for classification between a finite number of sets, but could easily be extended to parametric estimation. Wakin has also established some rigorous results on parameter estimation that relate the recovery properties of a general compressive estimation problem to the properties of the manifold that these parameters induce. This work could be used to analyze this problem via its manifold parameters @cite_14 .
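A toy version of this idea, with made-up dimensions and random unit vectors standing in for the Green's-function replicas (this is an inner-product-preservation sketch, not the paper's actual waveguide model), shows why the correlation peak survives random compression:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, n_candidates = 4096, 128, 20   # full dim, compressed dim, candidate locations

# Unit-norm "replica" vectors, stand-ins for Green's functions at candidate locations
replicas = rng.standard_normal((n_candidates, n))
replicas /= np.linalg.norm(replicas, axis=1, keepdims=True)

# Recorded field: the replica of source #7 plus a little measurement noise
data = replicas[7] + 0.1 / np.sqrt(n) * rng.standard_normal(n)

# Random compression matrix: inner products are approximately preserved
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

full = replicas @ data                           # full matched-field correlations
compressed = (replicas @ Phi.T) @ (Phi @ data)   # "short" correlations in proxy space

print("full MFP peak index:      ", int(np.argmax(np.abs(full))))
print("compressed MFP peak index:", int(np.argmax(np.abs(compressed))))
```

Even with a 32-fold compression, the peak of the correlation surface lands on the same candidate, which is the essential behaviour the compressed-MFP scheme relies on.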
|
{
"cite_N": [
"@cite_14",
"@cite_8"
],
"mid": [
"1771150291",
"2120961178"
],
"abstract": [
"A field known as Compressive Sensing (CS) has recently emerged to help address the growing challenges of capturing and processing high-dimensional signals and data sets. CS exploits the surprising fact that the information contained in a sparse signal can be preserved in a small number of compressive (or random) linear measurements of that signal. Strong theoretical guarantees have been established on the accuracy to which sparse or near-sparse signals can be recovered from noisy compressive measurements. In this paper, we address similar questions in the context of a different modeling framework. Instead of sparse models, we focus on the broad class of manifold models, which can arise in both parametric and non-parametric signal families. Building upon recent results concerning the stable embeddings of manifolds within the measurement space, we establish both deterministic and probabilistic instance-optimal bounds in @math for manifold-based signal recovery and parameter estimation from noisy compressive measurements. In line with analogous results for sparsity-based CS, we conclude that much stronger bounds are possible in the probabilistic setting. Our work supports the growing empirical evidence that manifold-based models can be used with high accuracy in compressive signal processing.",
"The recently introduced theory of compressive sensing enables the recovery of sparse or compressible signals from a small set of nonadaptive, linear measurements. If properly chosen, the number of measurements can be much smaller than the number of Nyquist-rate samples. Interestingly, it has been shown that random projections are a near-optimal measurement scheme. This has inspired the design of hardware systems that directly implement random measurement protocols. However, despite the intense focus of the community on signal recovery, many (if not most) signal processing problems do not require full signal recovery. In this paper, we take some first steps in the direction of solving inference problems-such as detection, classification, or estimation-and filtering problems using only compressive measurements and without ever reconstructing the signals involved. We provide theoretical bounds along with experimental results."
]
}
|
1109.0766
|
2951299285
|
By exploiting multipath fading channels as a source of common randomness, physical layer (PHY) based key generation protocols allow two terminals with correlated observations to generate secret keys with information-theoretical security. The state of the art, however, still suffers from major limitations, e.g., low key generation rate, lower entropy of key bits and a high reliance on node mobility. In this paper, a novel cooperative key generation protocol is developed to facilitate high-rate key generation in narrowband fading channels, where two keying nodes extract the phase randomness of the fading channel with the aid of relay node(s). For the first time, we explicitly consider the effect of estimation methods on the extraction of secret key bits from the underlying fading channels and focus on a popular statistical method--maximum likelihood estimation (MLE). The performance of the cooperative key generation scheme is extensively evaluated theoretically. We successfully establish both a theoretical upper bound on the maximum secret key rate from mutual information of correlated random sources and a more practical upper bound from Cramer-Rao bound (CRB) in estimation theory. Numerical examples and simulation studies are also presented to demonstrate the performance of the cooperative key generation system. The results show that the key rate can be improved by a couple of orders of magnitude compared to the existing approaches.
|
The PHY based key generation can be traced back to the original information-theoretic formulation of secure communication due to @cite_0 . Building on information theory, @cite_7 @cite_8 @cite_20 characterized the fundamental bounds and showed the feasibility of generating keys using an external random source, namely the channel impulse response. To the best of our knowledge, the first key generation scheme suitable for wireless networks was proposed in @cite_6 . In @cite_6 , the differential phase between two frequency tones is encoded for key generation, and error control coding techniques are used to enhance the reliability of key generation. Similar to @cite_6 , a technique for extracting secret keys from random phase in an OFDM system through channel estimation and quantization was recently proposed in @cite_21 . That paper characterized the probability of generating the same bit vector at the two nodes as a function of the signal-to-interference-and-noise ratio (SINR) and the number of quantization levels.
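The phase-quantization step used by such schemes can be sketched in a few lines (illustrative parameters only; real systems add error control coding on top of the raw quantized bits, as in @cite_6 ):

```python
import numpy as np

rng = np.random.default_rng(2)
n_channels = 64          # independent fading coefficients (e.g. OFDM tones)
bits = 2                 # bits extracted per phase -> 2**bits uniform sectors
noise_std = 0.05         # phase-estimation noise at each terminal (radians)

# Common random phase, uniform on [0, 2*pi), observed noisily by both terminals
true_phase = rng.uniform(0.0, 2 * np.pi, n_channels)
phase_a = (true_phase + noise_std * rng.standard_normal(n_channels)) % (2 * np.pi)
phase_b = (true_phase + noise_std * rng.standard_normal(n_channels)) % (2 * np.pi)

def quantize(phase, bits):
    """Map each phase in [0, 2*pi) to the index of a uniform sector on the circle."""
    levels = 2 ** bits
    idx = np.floor(phase / (2 * np.pi) * levels).astype(int)
    return np.clip(idx, 0, levels - 1)   # guard against floating-point edge cases

key_a = quantize(phase_a, bits)
key_b = quantize(phase_b, bits)
mismatch = float(np.mean(key_a != key_b))
print(f"symbol mismatch rate before reconciliation: {mismatch:.3f}")
```

Mismatches concentrate near the sector boundaries, which is exactly what the reconciliation stage (error control coding) is there to repair.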
|
{
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_21",
"@cite_6",
"@cite_0",
"@cite_20"
],
"mid": [
"1517444618",
"2159247478",
"2131580663",
"2085428487",
"2109394932",
"2108777864"
],
"abstract": [
"All information-theoretically secure key agreement protocols (e.g. based on quantum cryptography or on noisy channels) described in the literature are secure only against passive adversaries in the sense that they assume the existence of an authenticated public channel. The goal of this paper is to investigate information-theoretic security even against active adversaries with complete control over the communication channel connecting the two parties who want to agree on a secret key. Several impossibility results are proved and some scenarios are characterized in which secret-key agreement secure against active adversaries is possible. In particular, when each of the parties, including the adversary, can observe a sequence of random variables that are correlated between the parties, the rate at which key agreement against active adversaries is possible is characterized completely: it is either 0 or equal to the rate achievable against passive adversaries, and the condition for distinguishing between the two cases is given.",
"This is the first part of a three-part paper on secret-key agreement secure against active adversaries. In all three parts, we address the question whether two parties, knowing some correlated pieces of information X and Y, respectively, can generate a string S about which an adversary, knowing some information Z and having read and write access to the communication channel used by the legitimate partners, is almost completely ignorant. Whether such key agreement is possible, and if yes at which rate, is an inherent property of the joint probability distribution PXYZ. In this part, we first prove a number of general impossibility results. We then consider the important special case where the legitimate partners as well as the adversary have access to the outcomes of many independent repetitions of a fixed tripartite random experiment. In this case, the result characterizing the possibility of secret-key agreement secure against active adversaries is of all-or-nothing nature: either a secret key can be generated at the same rate as in the (well-studied) passive-adversary case, or such secret-key agreement is completely impossible. The exact condition characterizing the two cases is presented.",
"Secure wireless communications is a challenging problem due to the shared nature of the wireless medium. Most existing security protocols apply cryptographic techniques for bit scrambling at the application layer by exploiting a shared secret key between pairs of communicating nodes. However, more recent research argues that multipath propagation - a salient feature of wireless channels - provides a physical resource for secure communications. In this context, we propose a protocol that exploits the inherent randomness in multipath wireless channels for generating secret keys through channel estimation and quantization. Our approach is particularly attractive in wideband channels which exhibit a large number of statistically independent degrees of freedom (DoF), thereby enabling the generation of large, more-secure, keys. We show that the resulting keys are distinct for distinct pairwise links with a probability that increases exponentially with the key-size channel DoF. We also characterize the probability that the two users sharing a common link generate the same key. This characterization is used to analyze the energy consumption in successful acquisition of a secret key by the two users. For a given key size, our results show that there is an optimum transmit power, and an optimum quantization strategy, that minimizes the energy consumption. The proposed approach to secret key generation through channel quantization also obviates the problem of key pre-distribution inherent to many existing cryptographic approaches.",
"Abstract Hassan, A. A., Stark, W. E., Hershey, J. E., and Chennakeshu, S., Cryptographic Key Agreement for Mobile Radio, Digital Signal Processing 6 (1996), 207–212. The problem of establishing a mutually held secret cryptographic key using a radio channel is addressed. The performance of a particular key distribution system is evaluated for a practical mobile radio communications system. The performance measure taken is probabilistic, and different from the Shannon measure of perfect secrecy. In particular, it is shown that by using a channel decoder, the probability of two users establishing a secret key is close to one, while the probability of an adversary generating the same key is close to zero. The number of possible keys is large enough that exhaustive search is impractical.",
"THE problems of cryptography and secrecy systems furnish an interesting application of communication theory.1 In this paper a theory of secrecy systems is developed. The approach is on a theoretical level and is intended to complement the treatment found in standard works on cryptography.2 There, a detailed study is made of the many standard types of codes and ciphers, and of the ways of breaking them. We will be more concerned with the general mathematical structure and properties of secrecy systems.",
"As the first part of a study of problems involving common randomness at distance locations, information-theoretic models of secret sharing (generating a common random key at two terminals, without letting an eavesdropper obtain information about this key) are considered. The concept of key-capacity is defined. Single-letter formulas of key-capacity are obtained for several models, and bounds to key-capacity are derived for other models. >"
]
}
|
1109.0766
|
2951299285
|
By exploiting multipath fading channels as a source of common randomness, physical layer (PHY) based key generation protocols allow two terminals with correlated observations to generate secret keys with information-theoretical security. The state of the art, however, still suffers from major limitations, e.g., low key generation rate, lower entropy of key bits and a high reliance on node mobility. In this paper, a novel cooperative key generation protocol is developed to facilitate high-rate key generation in narrowband fading channels, where two keying nodes extract the phase randomness of the fading channel with the aid of relay node(s). For the first time, we explicitly consider the effect of estimation methods on the extraction of secret key bits from the underlying fading channels and focus on a popular statistical method--maximum likelihood estimation (MLE). The performance of the cooperative key generation scheme is extensively evaluated theoretically. We successfully establish both a theoretical upper bound on the maximum secret key rate from mutual information of correlated random sources and a more practical upper bound from Cramer-Rao bound (CRB) in estimation theory. Numerical examples and simulation studies are also presented to demonstrate the performance of the cooperative key generation system. The results show that the key rate can be improved by a couple of orders of magnitude compared to the existing approaches.
|
Due to noise, interference and other factors in the key generation process, discrepancies may exist between the generated bit streams. Variants of this problem have been extensively explored under the names information reconciliation, privacy amplification and fuzzy extractors. @cite_5 proposed the first protocol to solve the information-theoretic key agreement problem between two parties that initially possess only correlated weak secrets. Key agreement was shown to be theoretically feasible when the information that the two bit strings contain about each other exceeds the information that the eavesdropper has about them. @cite_14 used error-correcting techniques to design a protocol that is computationally efficient for different distance metrics. Based on these results, @cite_15 proposed a protocol that is efficient for both parties and has both lower round complexity and lower entropy loss. Recently, @cite_9 proposed a two-round key agreement protocol for the same setting as @cite_15 .
|
{
"cite_N": [
"@cite_5",
"@cite_15",
"@cite_14",
"@cite_9"
],
"mid": [
"1585844421",
"2097079400",
"2070616660",
"2140805804"
],
"abstract": [
"A completely insecure communication channel can only be transformed into an unconditionally secure channel if some information-theoretic primitive is given to start from. All previous approaches to realizing such authenticity and privacy from weak primitives were symmetric in the sense that security for both parties was achieved. We show that asymmetric information-theoretic security can, however, be obtained at a substantially lower price than two-way security|like in the computational-security setting, as the example of public-key cryptography demonstrates. In addition to this, we show that also an unconditionally secure bidirectional channel can be obtained under weaker conditions than previously known. One consequence of these results is that the assumption usually made in the context of quantum key distribution that the two parties share a short key initially is unnecessarily strong.",
"We consider information-theoretic key agreement between two parties sharing somewhat different versions of a secret w that has relatively little entropy. Such key agreement, also known as information reconciliation and privacy amplification over unsecured channels, was shown to be theoretically feasible by Renner and Wolf (Eurocrypt 2004), although no protocol that runs in polynomial time was described. We propose a protocol that is not only polynomial-time, but actually practical, requiring only a few seconds on consumer-grade computers. Our protocol can be seen as an interactive version of robust fuzzy extractors (, Crypto 2006). While robust fuzzy extractors, due to their noninteractive nature, require w to have entropy at least half its length, we have no such constraint. In fact, unlike in prior solutions, in our solution the entropy loss is essentially unrelated to the length or the entropy of w , and depends only on the security parameter.",
"Consider two parties holding samples from correlated distributions @math and @math , respectively, where these samples are within distance @math of each other in some metric space. The parties wish to agree on a close-to-uniformly distributed secret key @math by sending a single message over an insecure channel controlled by an all-powerful adversary who may read and modify anything sent over the channel. We consider both the keyless case, where the parties share no additional secret information, and the keyed case, where the parties share a long-term secret @math that they can use to generate a sequence of session keys @math using multiple pairs @math . The former has applications to, e.g., biometric authentication, while the latter arises in, e.g., the bounded-storage model with errors. We show solutions that improve upon previous work in several respects. The best prior solution for the keyless case with no errors (i.e., @math ) requires the min-entropy of @math to exceed @math , where @math is the bit length of @math . Our solution applies whenever the min-entropy of @math exceeds the minimal threshold @math , and yields a longer key.",
"We study the question of basing symmetric key cryptography on weak secrets. In this setting, Alice and Bob share an n-bit secret W, which might not be uniformly random, but the adversary has at least k bits of uncertainty about it (formalized using conditional min-entropy). Since standard symmetric-key primitives require uniformly random secret keys, we would like to construct an authenticated key agreement protocol in which Alice and Bob use W to agree on a nearly uniform key R, by communicating over a public channel controlled by an active adversary Eve. We study this question in the information theoretic setting where the attacker is computationally unbounded. We show that single-round (i.e. one message) protocols do not work when k ≤ n/2, and require poor parameters even when n/2 < k ≪ n. On the other hand, for arbitrary values of k, we design a communication efficient two-round (challenge-response) protocol extracting nearly k random bits. This dramatically improves the previous construction of Renner and Wolf [32], which requires Θ(λ + log(n)) rounds where λ is the security parameter. Our solution takes a new approach by studying and constructing "non-malleable" seeded randomness extractors -- if an attacker sees a random seed X and comes up with an arbitrarily related seed X', then we bound the relationship between R = Ext(W;X) and R' = Ext(W;X'). We also extend our two-round key agreement protocol to the "fuzzy" setting, where Alice and Bob share "close" (but not equal) secrets WA and WB, and to the Bounded Retrieval Model (BRM) where the size of the secret W is huge."
]
}
|
1109.0827
|
2950713510
|
A decode and forward protocol based Trellis Coded Modulation (TCM) scheme for the half-duplex relay channel, in a Rayleigh fading environment, is presented. The proposed scheme can achieve any spectral efficiency greater than or equal to one bit per channel use (bpcu). A near-ML decoder for the suggested TCM scheme is proposed. It is shown that the high SNR performance of this near-ML decoder approaches the performance of the optimal ML decoder. The high SNR performance of this near-ML decoder is independent of the strength of the Source-Relay link and approaches the performance of the optimal ML decoder with an ideal Source-Relay link. Based on the derived Pair-wise Error Probability (PEP) bounds, design criteria to maximize the diversity and coding gains are obtained. Simulation results show a large gain in SNR for the proposed TCM scheme over uncoded communication as well as the direct transmission without the relay. Also, it is shown that even for the uncoded transmission scheme, the choice of the labelling scheme (mapping from bits to complex symbols) used at the source and the relay significantly impacts the BER vs SNR performance. We provide a good labelling scheme for @math -PSK signal set, where @math is an integer.
|
Achievable rates and upper bounds on the capacity of the discrete memoryless relay channel, for both the full-duplex and the more practical half-duplex case, were obtained in @cite_19 , @cite_14 . These results were extended to the half-duplex Gaussian relay channel in @cite_8 . Optimal power allocation strategies to maximize the achievable rate for the Rayleigh fading relay channel were investigated in @cite_20 . Several protocols, such as Amplify-and-Forward (AF), Decode-and-Forward (DF), and Compress-and-Forward (CF), have been proposed @cite_6 . Power allocation strategies for the Non-Orthogonal DF (NODF) protocol were discussed in @cite_21 . Non-orthogonal relay protocols offer higher spectral efficiency than orthogonal relay protocols @cite_11 , @cite_2 , @cite_0 . Hence, the TCM scheme proposed in this paper is based on the NODF protocol.
|
{
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_21",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_2",
"@cite_20",
"@cite_11"
],
"mid": [
"",
"1555881611",
"",
"",
"",
"2167447263",
"2136209931",
"2121916661",
"2102934991"
],
"abstract": [
"",
"We consider the communication problem in a multi-hop relay network where the intermediate relay nodes cannot transmit and receive at the same time. The motivation for this assumption comes from the fact that current radios operate in TDD mode when the transmitting and receiving frequencies are the same. We label such a node radio as a cheap radio and the corresponding node of the network as a cheap node. In this paper we derive the capacities of the degraded cheap relay channel and the multi-hop network with cheap nodes. The proofs of the achievability parts in the coding theorems are presented based on jointly typical sequences, while the proofs of the converses are derived from the direct application of the upper bounds derived in (7).",
"",
"",
"",
"A relay channel consists of an input x_1, a relay output y_1, a channel output y, and a relay sender x_2 (whose transmission is allowed to depend on the past symbols y_1). The dependence of the received symbols upon the inputs is given by p(y,y_1|x_1,x_2). The channel is assumed to be memoryless. In this paper the following capacity theorems are proved. 1) If y is a degraded form of y_1, then C = max_{p(x_1,x_2)} min{ I(X_1,X_2;Y), I(X_1;Y_1|X_2) }. 2) If y_1 is a degraded form of y, then C = max_{p(x_1)} max_{x_2} I(X_1;Y|x_2). 3) If p(y,y_1|x_1,x_2) is an arbitrary relay channel with feedback from (y,y_1) to both x_1 and x_2, then C = max_{p(x_1,x_2)} min{ I(X_1,X_2;Y), I(X_1;Y,Y_1|X_2) }. 4) For a general relay channel, C ≤ max_{p(x_1,x_2)} min{ I(X_1,X_2;Y), I(X_1;Y,Y_1|X_2) }. Superposition block Markov encoding is used to show achievability of C, and converses are established. The capacities of the Gaussian relay channel and certain discrete relay channels are evaluated. Finally, an achievable lower bound to the capacity of the general relay channel is established.",
"We propose novel cooperative transmission protocols for delay-limited coherent fading channels consisting of N (half-duplex and single-antenna) partners and one cell site. In our work, we differentiate between the relay, cooperative broadcast (down-link), and cooperative multiple-access (CMA) (up-link) channels. The proposed protocols are evaluated using the Zheng-Tse diversity-multiplexing tradeoff. For the relay channel, we investigate two classes of cooperation schemes; namely, amplify and forward (AF) protocols and decode and forward (DF) protocols. For the first class, we establish an upper bound on the achievable diversity-multiplexing tradeoff with a single relay. We then construct a new AF protocol that achieves this upper bound. The proposed algorithm is then extended to the general case with (N-1) relays where it is shown to outperform the space-time coded protocol of Laneman and Wornell without requiring decoding and re-encoding at the relays. For the class of DF protocols, we develop a dynamic decode and forward (DDF) protocol that achieves the optimal tradeoff for multiplexing gains 0 ≤ r ≤ 1/N. Furthermore, with a single relay, the DDF protocol is shown to dominate the class of AF protocols for all multiplexing gains. The superiority of the DDF protocol is shown to be more significant in the cooperative broadcast channel. The situation is reversed in the CMA channel where we propose a new AF protocol that achieves the optimal tradeoff for all multiplexing gains. A distinguishing feature of the proposed protocols in the three scenarios is that they do not rely on orthogonal subspaces, allowing for a more efficient use of resources. In fact, using our results one can argue that the suboptimality of previously proposed protocols stems from their use of orthogonal subspaces rather than the half-duplex constraint.",
"We consider three-node wireless relay channels in a Rayleigh-fading environment. Assuming transmitter channel state information (CSI), we study upper bounds and lower bounds on the outage capacity and the ergodic capacity. Our studies take into account practical constraints on the transmission/reception duplexing at the relay node and on the synchronization between the source node and the relay node. We also explore power allocation. Compared to the direct transmission and traditional multihop protocols, our results reveal that optimum relay channel signaling can significantly outperform multihop protocols, and that power allocation has a significant impact on the performance.",
"Cooperative diversity is a transmission technique, where multiple terminals pool their resources to form a virtual antenna array that realizes spatial diversity gain in a distributed fashion. In this paper, we examine the basic building block of cooperative diversity systems, a simple fading relay channel where the source, destination, and relay terminals are each equipped with single antenna transceivers. We consider three different time-division multiple-access-based cooperative protocols that vary the degree of broadcasting and receive collision. The relay terminal operates in either the amplify-and-forward (AF) or decode-and-forward (DF) modes. For each protocol, we study the ergodic and outage capacity behavior (assuming Gaussian code books) under the AF and DF modes of relaying. We analyze the spatial diversity performance of the various protocols and find that full spatial diversity (second-order in this case) is achieved by certain protocols provided that appropriate power control is employed. Our analysis unifies previous results reported in the literature and establishes the superiority (both from a capacity, as well as a diversity point-of-view) of a new protocol proposed in this paper. The second part of the paper is devoted to (distributed) space-time code design for fading relay channels operating in the AF mode. We show that the corresponding code design criteria consist of the traditional rank and determinant criteria for the case of colocated antennas, as well as appropriate power control rules. Consequently space-time codes designed for the case of colocated multiantenna channels can be used to realize cooperative diversity provided that appropriate power control is employed."
]
}
|
1109.0882
|
2950929673
|
Object detection is a fundamental step for automated video analysis in many vision applications. Object detection in a video is usually performed by object detectors or background subtraction techniques. Often, an object detector requires manually labeled examples to train a binary classifier, while background subtraction needs a training sequence that contains no objects to build a background model. To automate the analysis, object detection without a separate training phase becomes a critical task. People have tried to tackle this task by using motion information. But existing motion-based methods are usually limited when coping with complex scenarios such as nonrigid motion and dynamic background. In this paper, we show that the above challenges can be addressed in a unified framework named DEtecting Contiguous Outliers in the LOw-rank Representation (DECOLOR). This formulation integrates object detection and background learning into a single process of optimization, which can be efficiently solved by an alternating algorithm. We explain the relations between DECOLOR and other sparsity-based methods. Experiments on both simulated data and real sequences demonstrate that DECOLOR outperforms the state-of-the-art approaches and that it can work effectively on a wide range of complex scenarios.
|
A common approach for motion segmentation is to partition the dense optical-flow field @cite_42 . This is usually achieved by decomposing the image into different motion layers @cite_37 @cite_16 @cite_40 . The assumption is that the optical-flow field should be smooth within each motion layer, and that sharp motion changes occur only at layer boundaries. Dense optical flow and motion boundaries are computed in an alternating manner @cite_40 , usually implemented in a level set framework. A similar scheme was later applied to dynamic texture segmentation @cite_41 @cite_43 @cite_38 . While these methods can achieve high accuracy, accurate motion analysis is itself a challenging task due to the difficulties raised by the aperture problem, occlusion, video noise, etc. @cite_48 . Moreover, most motion segmentation methods require the object contours to be initialized and the number of foreground objects to be specified @cite_40 .
|
{
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_41",
"@cite_48",
"@cite_42",
"@cite_43",
"@cite_40",
"@cite_16"
],
"mid": [
"2024995982",
"1527344285",
"2037769578",
"",
"",
"2156328381",
"2114190460",
"2143033699"
],
"abstract": [
"Motion estimation is usually based on the brightness constancy assumption. This assumption holds well for rigid objects with a Lambertian surface, but it is less appropriate for fluid and gaseous materials. For these materials an alternative assumption is required. This work examines three possible alternatives: gradient constancy, color constancy and brightness conservation (under this assumption the brightness of an object can diffuse to its neighborhood). Brightness conservation and color constancy are found to be adequate models. We propose a method for detecting regions of dynamic texture in image sequences. Accurate segmentation into regions of static and dynamic texture is achieved using a level set scheme. The level set function separates each image into regions that obey brightness constancy and regions that obey the alternative assumption. We show that the method can be simplified to obtain a less robust but fast algorithm, capable of real-time performance. Experimental results demonstrate accurate segmentation by the full level set scheme, as well as by the simplified method. The experiments included challenging image sequences, in which color or geometry cues by themselves would be insufficient.",
"",
"A novel video representation, the layered dynamic texture (LDT), is proposed. The LDT is a generative model, which represents a video as a collection of stochastic layers of different appearance and dynamics. Each layer is modeled as a temporal texture sampled from a different linear dynamical system. The LDT model includes these systems, a collection of hidden layer assignment variables (which control the assignment of pixels to layers), and a Markov random field prior on these variables (which encourages smooth segmentations). An EM algorithm is derived for maximum-likelihood estimation of the model parameters from a training video. It is shown that exact inference is intractable, a problem which is addressed by the introduction of two approximate inference procedures: a Gibbs sampler and a computationally efficient variational approximation. The trade-off between the quality of the two approximations and their complexity is studied experimentally. The ability of the LDT to segment videos into layers of coherent appearance and dynamics is also evaluated, on both synthetic and natural videos. These experiments show that the model possesses an ability to group regions of globally homogeneous, but locally heterogeneous, stochastic dynamics currently unparalleled in the literature.",
"",
"",
"We address the problem of segmenting a sequence of images of natural scenes into disjoint regions that are characterized by constant spatio-temporal statistics. We model the spatio-temporal dynamics in each region by Gauss-Markov models, and infer the model parameters as well as the boundary of the regions in a variational optimization framework. Numerical results demonstrate that - in contrast to purely texture-based segmentation schemes - our method is effective in segmenting regions that differ in their dynamics even when spatial statistics are identical.",
"We present a novel variational approach for segmenting the image plane into a set of regions of parametric motion on the basis of two consecutive frames from an image sequence. Our model is based on a conditional probability for the spatio-temporal image gradient, given a particular velocity model, and on a geometric prior on the estimated motion field favoring motion boundaries of minimal length. Exploiting the Bayesian framework, we derive a cost functional which depends on parametric motion models for each of a set of regions and on the boundary separating these regions. The resulting functional can be interpreted as an extension of the Mumford-Shah functional from intensity segmentation to motion segmentation. In contrast to most alternative approaches, the problems of segmentation and motion estimation are jointly solved by continuous minimization of a single functional. Minimizing this functional with respect to its dynamic variables results in an eigenvalue problem for the motion parameters and in a gradient descent evolution for the motion discontinuity set. We propose two different representations of this motion boundary: an explicit spline-based implementation which can be applied to the motion-based tracking of a single moving object, and an implicit multiphase level set implementation which allows for the segmentation of an arbitrary number of multiply connected moving objects. Numerical results both for simulated ground truth experiments and for real-world sequences demonstrate the capacity of our approach to segment objects based exclusively on their relative motion.",
"We suggest a variational method for the joint estimation of optic flow and the segmentation of the image into regions of similar motion. It makes use of the level set framework following the idea of motion competition, which is extended to non-parametric motion. Moreover, we automatically determine an appropriate initialization and the number of regions by means of recursive two-phase splits with higher order region models. The method is further extended to the spatiotemporal setting and the use of additional cues like the gray value or color for the segmentation. It need not fear a quantitative comparison to pure optic flow estimation techniques: For the popular Yosemite sequence with clouds we obtain the currently most accurate result. We further uncover a mistake in the ground truth. Coarsely correcting this, we get an average angular error below 1 degree."
]
}
|
1109.0882
|
2950929673
|
Object detection is a fundamental step for automated video analysis in many vision applications. Object detection in a video is usually performed by object detectors or background subtraction techniques. Often, an object detector requires manually labeled examples to train a binary classifier, while background subtraction needs a training sequence that contains no objects to build a background model. To automate the analysis, object detection without a separate training phase becomes a critical task. People have tried to tackle this task by using motion information. But existing motion-based methods are usually limited when coping with complex scenarios such as nonrigid motion and dynamic background. In this paper, we show that the above challenges can be addressed in a unified framework named DEtecting Contiguous Outliers in the LOw-rank Representation (DECOLOR). This formulation integrates object detection and background learning into a single process of optimization, which can be efficiently solved by an alternating algorithm. We explain the relations between DECOLOR and other sparsity-based methods. Experiments on both simulated data and real sequences demonstrate that DECOLOR outperforms the state-of-the-art approaches and that it can work effectively on a wide range of complex scenarios.
|
An alternative approach to motion segmentation tries to segment the objects by analyzing point trajectories @cite_36 @cite_15 @cite_21 @cite_60 . Sparse feature points are first detected and tracked throughout the video, and then separated into several clusters via subspace clustering @cite_58 or spectral clustering @cite_60 . The formulation is mathematically elegant and can handle large camera motion. However, these methods require point trajectories as input and output only a segmentation of sparse points. Their performance relies on the quality of point tracking, and postprocessing is needed to obtain a dense segmentation @cite_44 . Also, they are limited when dealing with noisy data and nonrigid motion @cite_58 .
|
{
"cite_N": [
"@cite_60",
"@cite_36",
"@cite_21",
"@cite_44",
"@cite_15",
"@cite_58"
],
"mid": [
"1496571393",
"1905817386",
"2536706996",
"2129822853",
"2052311585",
""
],
"abstract": [
"Unsupervised learning requires a grouping step that defines which data belong together. A natural way of grouping in images is the segmentation of objects or parts of objects. While pure bottom-up segmentation from static cues is well known to be ambiguous at the object level, the story changes as soon as objects move. In this paper, we present a method that uses long term point trajectories based on dense optical flow. Defining pair-wise distances between these trajectories allows to cluster them, which results in temporally consistent segmentations of moving objects in a video shot. In contrast to multi-body factorization, points and even whole objects may appear or disappear during the shot. We provide a benchmark dataset and an evaluation method for this so far uncovered setting.",
"We present an analytic solution to the problem of estimating multiple 2-D and 3-D motion models from two-view correspondences or optical flow. The key to our approach is to view the estimation of multiple motion models as the estimation of a single multibody motion model. This is possible thanks to two important algebraic facts. First, we show that all the image measurements, regardless of their associated motion model, can be fit with a real or complex polynomial. Second, we show that the parameters of the motion model associated with an image measurement can be obtained from the derivatives of the polynomial at the measurement. This leads to a novel motion segmentation algorithm that applies to most of the two-view motion models adopted in computer vision. Our experiments show that the proposed algorithm outperforms existing algebraic methods in terms of efficiency and robustness, and provides a good initialization for iterative techniques, such as EM, which is strongly dependent on correct initialization.",
"Background subtraction algorithms define the background as parts of a scene that are at rest. Traditionally, these algorithms assume a stationary camera, and identify moving objects by detecting areas in a video that change over time. In this paper, we extend the concept of ‘subtracting’ areas at rest to apply to video captured from a freely moving camera. We do not assume that the background is well-approximated by a plane or that the camera center remains stationary during motion. The method operates entirely using 2D image measurements without requiring an explicit 3D reconstruction of the scene. A sparse model of background is built by robustly estimating a compact trajectory basis from trajectories of salient features across the video, and the background is ‘subtracted’ by removing trajectories that lie within the space spanned by the basis. Foreground and background appearance models are then built, and an optimal pixel-wise foreground background labeling is obtained by efficiently maximizing a posterior function.",
"Point trajectories have emerged as a powerful means to obtain high quality and fully unsupervised segmentation of objects in video shots. They can exploit the long term motion difference between objects, but they tend to be sparse due to computational reasons and the difficulty in estimating motion in homogeneous areas. In this paper we introduce a variational method to obtain dense segmentations from such sparse trajectory clusters. Information is propagated with a hierarchical, nonlinear diffusion process that runs in the continuous domain but takes superpixels into account. We show that this process raises the density from 3% to 100% and even increases the average precision of labels.",
"Over the past few years, several methods for segmenting a scene containing multiple rigidly moving objects have been proposed. However, most existing methods have been tested on a handful of sequences only, and each method has been often tested on a different set of sequences. Therefore, the comparison of different methods has been fairly limited. In this paper, we compare four 3D motion segmentation algorithms for affine cameras on a benchmark of 155 motion sequences of checkerboard, traffic, and articulated scenes.",
""
]
}
|
1109.0882
|
2950929673
|
Object detection is a fundamental step for automated video analysis in many vision applications. Object detection in a video is usually performed by object detectors or background subtraction techniques. Often, an object detector requires manually labeled examples to train a binary classifier, while background subtraction needs a training sequence that contains no objects to build a background model. To automate the analysis, object detection without a separate training phase becomes a critical task. People have tried to tackle this task by using motion information. But existing motion-based methods are usually limited when coping with complex scenarios such as nonrigid motion and dynamic background. In this paper, we show that the above challenges can be addressed in a unified framework named DEtecting Contiguous Outliers in the LOw-rank Representation (DECOLOR). This formulation integrates object detection and background learning into a single process of optimization, which can be efficiently solved by an alternating algorithm. We explain the relations between DECOLOR and other sparsity-based methods. Experiments on both simulated data and real sequences demonstrate that DECOLOR outperforms the state-of-the-art approaches and that it can work effectively on a wide range of complex scenarios.
|
In background subtraction, the general assumption is that a background model can be obtained from a training sequence that does not contain foreground objects. Moreover, it usually assumes that the video is captured by a static camera @cite_35 . Thus, foreground objects can be detected by checking the difference between the testing frame and the background model built previously.
|
{
"cite_N": [
"@cite_35"
],
"mid": [
"2121274305"
],
"abstract": [
"Background subtraction is a widely used approach for detecting moving objects from static cameras. Many different methods have been proposed over the recent years and both the novice and the expert can be confused about their benefits and limitations. In order to overcome this problem, this paper provides a review of the main methods and an original categorisation based on speed, memory requirements and accuracy. Such a review can effectively guide the designer to select the most suitable method for a given application in a principled way. Methods reviewed include parametric and non-parametric background density estimates and spatial correlation approaches."
]
}
|
1109.0882
|
2950929673
|
Object detection is a fundamental step for automated video analysis in many vision applications. Object detection in a video is usually performed by object detectors or background subtraction techniques. Often, an object detector requires manually labeled examples to train a binary classifier, while background subtraction needs a training sequence that contains no objects to build a background model. To automate the analysis, object detection without a separate training phase becomes a critical task. People have tried to tackle this task by using motion information. But existing motion-based methods are usually limited when coping with complex scenarios such as nonrigid motion and dynamic background. In this paper, we show that the above challenges can be addressed in a unified framework named DEtecting Contiguous Outliers in the LOw-rank Representation (DECOLOR). This formulation integrates object detection and background learning into a single process of optimization, which can be efficiently solved by an alternating algorithm. We explain the relations between DECOLOR and other sparsity-based methods. Experiments on both simulated data and real sequences demonstrate that DECOLOR outperforms the state-of-the-art approaches and that it can work effectively on a wide range of complex scenarios.
|
Considerable work has been done on background modeling, i.e., building a proper representation of the background scene. Typical methods include the single Gaussian distribution @cite_23 , Mixture of Gaussians @cite_5 , kernel density estimation @cite_11 @cite_59 , block correlation @cite_61 , the codebook model @cite_26 , hidden Markov models @cite_31 @cite_4 , and linear autoregressive models @cite_49 @cite_46 @cite_57 .
|
{
"cite_N": [
"@cite_61",
"@cite_26",
"@cite_4",
"@cite_46",
"@cite_57",
"@cite_59",
"@cite_23",
"@cite_49",
"@cite_5",
"@cite_31",
"@cite_11"
],
"mid": [
"",
"2158604775",
"1487980058",
"2096309046",
"",
"2122423951",
"2140235142",
"",
"2102625004",
"",
"1499877760"
],
"abstract": [
"",
"We present a real-time algorithm for foreground-background segmentation. Sample background values at each pixel are quantized into codebooks which represent a compressed form of background model for a long image sequence. This allows us to capture structural background variation due to periodic-like motion over a long period of time under limited memory. The codebook representation is efficient in memory and speed compared with other background modeling techniques. Our method can handle scenes containing moving backgrounds or illumination variations, and it achieves robust detection for different types of videos. We compared our method with other multimode modeling techniques. In addition to the basic algorithm, two features improving the algorithm are presented: layered modeling detection and adaptive codebook updating. For performance evaluation, we have applied perturbation detection rate analysis to four background subtraction algorithms and two videos of different types of scenes.",
"A new probabilistic background model based on a Hidden Markov Model is presented. The hidden states of the model enable discrimination between foreground, background and shadow. This model functions as a low level process for a car tracker. A particle filter is employed as a stochastic filter for the car tracker. The use of a particle filter allows the incorporation of the information from the low level process via importance sampling. A novel observation density for the particle filter which models the statistical dependence of neighboring pixels based on a Markov random field is presented. The effectiveness of both the low level process and the observation likelihood are demonstrated.",
"Background modeling and subtraction is a core component in motion analysis. The central idea behind such a module is to create a probabilistic representation of the static scene that is compared with the current input to perform subtraction. Such an approach is efficient when the scene to be modeled refers to a static structure with limited perturbation. In this paper, we address the problem of modeling dynamic scenes where the assumption of a static background is not valid. Waving trees, beaches, escalators, natural scenes with rain or snow are examples. Inspired by the work proposed by (2003), we propose an on-line auto-regressive model to capture and predict the behavior of such scenes. Towards detection of events we introduce a new metric that is based on a state-driven comparison between the prediction and the actual frame. Promising results demonstrate the potentials of the proposed framework.",
"",
"Background modeling is an important component of many vision systems. Existing work in the area has mostly addressed scenes that consist of static or quasi-static structures. When the scene exhibits a persistent dynamic behavior in time, such an assumption is violated and detection performance deteriorates. In this paper, we propose a new method for the modeling and subtraction of such scenes. Towards the modeling of the dynamic characteristics, optical flow is computed and utilized as a feature in a higher dimensional space. Inherent ambiguities in the computation of features are addressed by using a data-dependent bandwidth for density estimation using kernels. Extensive experiments demonstrate the utility and performance of the proposed approach.",
"Pfinder is a real-time system for tracking people and interpreting their behavior. It runs at 10 Hz on a standard SGI Indy computer, and has performed reliably on thousands of people in many different physical locations. The system uses a multiclass statistical model of color and shape to obtain a 2D representation of head and hands in a wide range of viewing conditions. Pfinder has been successfully used in a wide range of applications including wireless interfaces, video databases, and low-bandwidth coding.",
"",
"A common method for real-time segmentation of moving regions in image sequences involves \"background subtraction\", or thresholding the error between an estimate of the image without moving objects and the current image. The numerous approaches to this problem differ in the type of background model used and the procedure used to update the model. This paper discusses modeling each pixel as a mixture of Gaussians and using an on-line approximation to update the model. The Gaussian, distributions of the adaptive mixture model are then evaluated to determine which are most likely to result from a background process. Each pixel is classified based on whether the Gaussian distribution which represents it most effectively is considered part of the background model. This results in a stable, real-time outdoor tracker which reliably deals with lighting changes, repetitive motions from clutter, and long-term scene changes. This system has been run almost continuously for 16 months, 24 hours a day, through rain and snow.",
"",
"Background subtraction is a method typically used to segment moving regions in image sequences taken from a static camera by comparing each new frame to a model of the scene background. We present a novel non-parametric background model and a background subtraction approach. The model can handle situations where the background of the scene is cluttered and not completely static but contains small motions such as tree branches and bushes. The model estimates the probability of observing pixel intensity values based on a sample of intensity values for each pixel. The model adapts quickly to changes in the scene which enables very sensitive detection of moving targets. We also show how the model can use color information to suppress detection of shadows. The implementation of the model runs in real-time for both gray level and color imagery. Evaluation shows that this approach achieves very sensitive detection with very low false alarm rates."
]
}
|
1109.0882
|
2950929673
|
Object detection is a fundamental step for automated video analysis in many vision applications. Object detection in a video is usually performed by object detectors or background subtraction techniques. Often, an object detector requires manually labeled examples to train a binary classifier, while background subtraction needs a training sequence that contains no objects to build a background model. To automate the analysis, object detection without a separate training phase becomes a critical task. People have tried to tackle this task by using motion information. But existing motion-based methods are usually limited when coping with complex scenarios such as nonrigid motion and dynamic background. In this paper, we show that above challenges can be addressed in a unified framework named DEtecting Contiguous Outliers in the LOw-rank Representation (DECOLOR). This formulation integrates object detection and background learning into a single process of optimization, which can be solved by an alternating algorithm efficiently. We explain the relations between DECOLOR and other sparsity-based methods. Experiments on both simulated data and real sequences demonstrate that DECOLOR outperforms the state-of-the-art approaches and it can work effectively on a wide range of complex scenarios.
|
Learning with sparsity has drawn a lot of attention in recent machine learning and computer vision research @cite_47 , and several methods based on sparse representation have been developed for background modeling. One pioneering work is the model of @cite_18 , where principal component analysis (PCA) is performed on a training sequence. When a new frame arrives, it is projected onto the subspace spanned by the principal components, and the residues indicate the presence of new objects. An alternative approach that can operate sequentially is sparse signal recovery @cite_12 @cite_6 @cite_30 . Background subtraction is formulated as a regression problem under the assumption that a newly arrived frame should be sparsely representable as a linear combination of preceding frames, except for the foreground parts. These models capture the correlation between video frames, so they can naturally handle global variations in the background such as illumination changes and dynamic textures.
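To make the regression formulation concrete, here is a minimal sketch (an illustration, not the implementation of any of the cited methods): a new frame is regressed onto a matrix of preceding frames with an l1 penalty, solved by plain ISTA, and the large residuals are taken as foreground. The function names, penalty weight, and threshold are illustrative assumptions.

```python
import numpy as np

def ista_lasso(A, y, lam=0.1, step=None, iters=200):
    """Minimize 0.5*||A w - y||^2 + lam*||w||_1 via ISTA (proximal gradient)."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ w - y)                    # gradient of the smooth least-squares term
        w = w - step * g
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft-thresholding prox
    return w

def subtract_background(prev_frames, frame, lam=0.1, tau=0.5):
    """prev_frames: (n_pixels, n_frames) matrix whose columns are past frames.
    Regress the new frame on the past frames; pixels with a large residual
    are declared foreground."""
    w = ista_lasso(prev_frames, frame, lam=lam)
    residual = frame - prev_frames @ w
    return np.abs(residual) > tau
```

On a frame that equals the background plus a bright patch, the recovered mask flags only the patch: the background is explained by the regression, and the patch survives in the residual.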
|
{
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_6",
"@cite_47",
"@cite_12"
],
"mid": [
"",
"2115213191",
"2538773003",
"2069959554",
"2109842006"
],
"abstract": [
"",
"We describe a real-time computer vision and machine learning system for modeling and recognizing human behaviors in a visual surveillance task. The system deals in particularly with detecting when interactions between people occur and classifying the type of interaction. Examples of interesting interaction behaviors include following another person, altering one's path to meet another, and so forth. Our system combines top-down with bottom-up information in a closed feedback loop, with both components employing a statistical Bayesian approach. We propose and compare two different state-based learning architectures, namely, HMMs and CHMMs for modeling behaviors and interactions. Finally, a synthetic \"Alife-style\" training system is used to develop flexible prior models for recognizing human interactions. We demonstrate the ability to use these a priori models to accurately classify real human behaviors and interactions with no additional tuning or training.",
"This paper investigates a new learning formulation called dynamic group sparsity. It is a natural extension of the standard sparsity concept in compressive sensing, and is motivated by the observation that in some practical sparse data the nonzero coefficients are often not random but tend to be clustered. Intuitively, better results can be achieved in these cases by reasonably utilizing both clustering and sparsity priors. Motivated by this idea, we have developed a new greedy sparse recovery algorithm, which prunes data residues in the iterative process according to both sparsity and group clustering priors rather than only sparsity as in previous methods. The proposed algorithm can recover stably sparse data with clustering trends using far fewer measurements and computations than current state-of-the-art algorithms with provable guarantees. Moreover, our algorithm can adaptively learn the dynamic group structure and the sparsity number if they are not available in the practical applications. We have applied the algorithm to sparse recovery and background subtraction in videos. Numerous experiments with improved performance over previous methods further validate our theoretical proofs and the effectiveness of the proposed algorithm.",
"Techniques from sparse signal representation are beginning to see significant impact in computer vision, often on nontraditional applications where the goal is not just to obtain a compact high-fidelity representation of the observed signal, but also to extract semantic information. The choice of dictionary plays a key role in bridging this gap: unconventional dictionaries consisting of, or learned from, the training samples themselves provide the key to obtaining state-of-the-art results and to attaching semantic meaning to sparse signal representations. Understanding the good performance of such unconventional dictionaries in turn demands new algorithmic and analytical techniques. This review paper highlights a few representative examples of how the interaction between sparse signal representation and computer vision can enrich both fields, and raises a number of open questions for further study.",
"The Lasso [28] is an attractive technique for regularization and variable selection for high-dimensional data, where the number of predictor variables p is potentially much larger than the number of samples n. However, it was recently discovered [23, 38, 39] that the sparsity pattern of the Lasso estimator can only be asymptotically identical to the true sparsity pattern if the design matrix satisfies the so-called irrepresentable condition. The latter condition can easily be violated in applications due to the presence of highly correlated variables. Here we examine the behavior of the Lasso estimators if the irrepresentable condition is relaxed. Even though the Lasso cannot recover the correct sparsity pattern, we show that the estimator is still consistent in the ℓ2-norm sense for fixed designs under conditions on (a) the number sn of non-zero components of the vector βn and (b) the minimal singular values of the design matrices that are induced by selecting of order sn variables. The results are extended to vectors in weak ℓq-balls with 0 < q < 1."
]
}
|
1109.0882
|
2950929673
|
Object detection is a fundamental step for automated video analysis in many vision applications. Object detection in a video is usually performed by object detectors or background subtraction techniques. Often, an object detector requires manually labeled examples to train a binary classifier, while background subtraction needs a training sequence that contains no objects to build a background model. To automate the analysis, object detection without a separate training phase becomes a critical task. People have tried to tackle this task by using motion information. But existing motion-based methods are usually limited when coping with complex scenarios such as nonrigid motion and dynamic background. In this paper, we show that above challenges can be addressed in a unified framework named DEtecting Contiguous Outliers in the LOw-rank Representation (DECOLOR). This formulation integrates object detection and background learning into a single process of optimization, which can be solved by an alternating algorithm efficiently. We explain the relations between DECOLOR and other sparsity-based methods. Experiments on both simulated data and real sequences demonstrate that DECOLOR outperforms the state-of-the-art approaches and it can work effectively on a wide range of complex scenarios.
|
Background subtraction methods mentioned above rarely consider the scenario where objects appear at the start and remain in the scene (i.e., the training sequence is not available). Only a few works consider the problem of background initialization @cite_13 @cite_22 . Most of them seek, for each pixel independently, a stable interval inside which the intensity is relatively smooth. Pixels during such intervals are regarded as background, and the background scene is estimated from these intervals. The validity of this approach relies on the assumption of a static background, so it is limited when processing dynamic backgrounds or videos captured by a moving camera.
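A minimal per-pixel sketch of the stable-interval idea (an illustration, not the algorithm of either cited paper): for each pixel, find the longest run of frames whose consecutive intensity differences stay below a threshold, and estimate the background as the median over that run. The threshold `eps` and the brute-force scan are assumptions made for clarity.

```python
import numpy as np

def init_background(frames, eps=10.0):
    """frames: (T, H, W) intensity stack that may contain foreground objects.
    For each pixel, locate the longest interval of frames whose consecutive
    differences stay below eps, and take the median over that interval."""
    T = frames.shape[0]
    stable = np.abs(np.diff(frames, axis=0)) < eps   # (T-1, H, W): frame t close to t+1
    H, W = frames.shape[1:]
    bg = np.empty((H, W), dtype=frames.dtype)
    for i in range(H):
        for j in range(W):
            best_len, best_start, run_start = 0, 0, 0
            for t in range(T - 1):
                if not stable[t, i, j]:
                    run_start = t + 1                # run broken; restart after t
                elif t + 2 - run_start > best_len:   # run covers frames run_start..t+1
                    best_len, best_start = t + 2 - run_start, run_start
            bg[i, j] = np.median(frames[best_start:best_start + max(best_len, 1), i, j])
    return bg
```

A pixel occluded by a foreground object for a minority of the frames is still recovered correctly, because its longest stable run lies in the background portion of the sequence.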
|
{
"cite_N": [
"@cite_13",
"@cite_22"
],
"mid": [
"2118859514",
"1530138209"
],
"abstract": [
"Many motion detection and tracking algorithms rely on the process of background subtraction, a technique which detects changes from a model of the background scene. We present a new algorithm for the purpose of background model initialization. The algorithm takes as input a video sequence in which moving objects are present, and outputs a statistical background model describing the static parts of the scene. Multiple hypotheses of the background value at each pixel are generated by locating periods of stable intensity in the sequence. The likelihood of each hypothesis is then evaluated using optical flow information from the neighborhood around the pixel, and the most likely hypothesis is chosen to represent the background. Our results are compared with those of several standard background modeling techniques using surveillance video of humans in indoor environments.",
"In many visual tracking and surveillance systems, it is important to initialize a background model using a training video sequence which may include foreground objects. In such a case, robust statistical methods are required to handle random occurrences of foreground objects (i.e., outliers), as well as general image noise. The robust statistical method Median has been employed for initializing the background model. However, the Median can tolerate up to only 50% outliers, which cannot satisfy the requirements of some complicated environments. In this paper, we propose a novel robust method for the background initialization. The proposed method can tolerate more than 50% of foreground pixels and noise. We give quantitative evaluations on a number of video sequences and compare our proposed method with five other methods. Experiments show that our method can achieve very promising results in background initialization, including applications in video segmentation, visual tracking and surveillance."
]
}
|
1109.0312
|
2951132861
|
We describe fully retroactive dynamic data structures for approximate range reporting and approximate nearest neighbor reporting. We show how to maintain, for any positive constant @math , a set of @math points in @math indexed by time such that we can perform insertions or deletions at any point in the timeline in @math amortized time. We support, for any small constant @math , @math -approximate range reporting queries at any point in the timeline in @math time, where @math is the output size. We also show how to answer @math -approximate nearest neighbor queries for any point in the past or present in @math time.
|
Blelloch @cite_16 and Giora and Kaplan @cite_40 consider the problem of maintaining a fully retroactive dictionary that supports successor or predecessor queries. Both base their data structures on a structure by Mortensen @cite_22 , which answers fully retroactive one-dimensional range reporting queries, although Mortensen framed the problem in terms of two-dimensional orthogonal line segment intersection reporting. In this application, the @math -axis is viewed as a timeline for a retroactive data structure for 1-dimensional points. The insertion of a segment @math corresponds to the addition of an insertion of @math at time @math and a deletion of @math at time @math . Likewise, the removal of such a segment corresponds to the removal of these two operations from the timeline. For this one-dimensional retroactive data structuring problem, both Blelloch and Giora and Kaplan give data structures that support queries and updates in @math time. Dickerson et al. @cite_42 describe a retroactive data structure for maintaining the lower envelope of a set of parabolic arcs and give an application of this structure to the problem of cloning a Voronoi diagram from a server that answers nearest-neighbor queries.
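The segment-to-timeline mapping can be sketched naively: a point x inserted at time t1 and deleted at time t2 becomes the horizontal segment (x, [t1, t2)), and a range-reporting query at time t stabs the segments alive at t. The sketch below costs O(n) per query, purely to illustrate the reduction; the cited structures achieve polylogarithmic bounds.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    x: float       # the 1-D point value
    t_ins: float   # time the point was inserted
    t_del: float   # time it was deleted (float('inf') if never)

class RetroactiveRangeReport:
    """Naive fully retroactive 1-D range reporting: each point's lifetime is a
    horizontal segment in the (time, value) plane, and a query at time t
    reports the segments it stabs."""
    def __init__(self):
        self.segs = []

    def insert(self, x, t_ins, t_del=float('inf')):
        # Retroactively perform insert(x) at time t_ins (undone at t_del).
        self.segs.append(Segment(x, t_ins, t_del))

    def retro_delete(self, x, t_ins):
        # Retroactively remove the insert(x) that happened at time t_ins.
        self.segs = [s for s in self.segs if not (s.x == x and s.t_ins == t_ins)]

    def report(self, lo, hi, t):
        # All points present at time t whose value lies in [lo, hi].
        return sorted(s.x for s in self.segs
                      if s.t_ins <= t < s.t_del and lo <= s.x <= hi)
```

Removing a segment (a retroactive deletion of an insert) changes the answer of every later query that the segment used to cover, which is exactly the behavior the cited structures must support efficiently.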
|
{
"cite_N": [
"@cite_40",
"@cite_16",
"@cite_42",
"@cite_22"
],
"mid": [
"2624264009",
"1998117630",
"2951132123",
"2003523151"
],
"abstract": [
"In this paper we consider the dynamic vertical ray shooting problem, that is the task of maintaining a dynamic set S of n non intersecting horizontal line segments in the plane subject to a query that reports the first segment in S intersecting a vertical ray from a query point. We develop a linear-size structure that supports queries, insertions and deletions in O(log n) worst-case time. Our structure works in the comparison model and uses a RAM.",
"We describe an asymptotically optimal data-structure for dynamic point location for horizontal segments. For n line-segments, queries take O(log n) time, updates take O(log n) amortized time and the data structure uses O(n) space. This is the first structure for the problem that is optimal in space and time (modulo the possibility of removing amortization). We also describe dynamic data structures for orthogonal range reporting and orthogonal intersection reporting. In both data structures for n points (segments) updates take O(log n) amortized time, queries take O(log n+k log n log log n) time, and the structures use O(n) space, where k is the size of the output. The model of computation is the unit cost RAM.",
"We address the problem of replicating a Voronoi diagram @math of a planar point set @math by making proximity queries, which are of three possible (in decreasing order of information content): 1. the exact location of the nearest site(s) in @math ; 2. the distance to and label(s) of the nearest site(s) in @math ; 3. a unique label for every nearest site in @math . We provide algorithms showing how queries of Type 1 and Type 2 allow an exact cloning of @math with @math queries and @math processing time. We also prove that queries of Type 3 can never exactly clone @math , but we show that with @math queries we can construct an @math -approximate cloning of @math . In addition to showing the limits of nearest-neighbor database security, our methods also provide one of the first natural algorithmic applications of retroactive data structures.",
"We consider the two-dimensional fully-dynamic orthogonal range reporting problem and the two-dimensional fully-dynamic orthogonal line segment intersection reporting problem in the comparison model. We show that if n is the number of stored elements, then these problems can be solved in worst-case time Θ(log n) plus time proportional to the size of the output per operation."
]
}
|
1108.6312
|
2951260765
|
Consider several source nodes communicating across a wireless network to a destination node with the help of several layers of relay nodes. Recent work by has approximated the capacity of this network up to an additive gap. The communication scheme achieving this capacity approximation is based on compress-and-forward, resulting in noise accumulation as the messages traverse the network. As a consequence, the approximation gap increases linearly with the network depth. This paper develops a computation alignment strategy that can approach the capacity of a class of layered, time-varying wireless relay networks up to an approximation gap that is independent of the network depth. This strategy is based on the compute-and-forward framework, which enables relays to decode deterministic functions of the transmitted messages. Alone, compute-and-forward is insufficient to approach the capacity as it incurs a penalty for approximating the wireless channel with complex-valued coefficients by a channel with integer coefficients. Here, this penalty is circumvented by carefully matching channel realizations across time slots to create integer-valued effective channels that are well-suited to compute-and-forward. Unlike prior constant gap results, the approximation gap obtained in this paper also depends closely on the fading statistics, which are assumed to be i.i.d. Rayleigh.
|
For networks, channel-network separation is not always optimal: higher rates can be achieved using more sophisticated relaying techniques such as compress-and-forward (see, e.g., @cite_4 @cite_7 @cite_17 @cite_8 @cite_6 ), amplify-and-forward (see, e.g., @cite_35 @cite_19 @cite_37 @cite_30 @cite_11 ), and compute-and-forward (see, e.g., @cite_0 @cite_2 @cite_22 @cite_25 ). While for certain classes of deterministic networks the unicast and multicast capacity regions are known @cite_27 @cite_34 @cite_8 , in the general, noisy case, these problems remain open. Recent progress has been made by focusing on finding capacity approximations @cite_33 @cite_8 @cite_21 @cite_18 @cite_23 .
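The compute-and-forward idea can be illustrated with a finite-field toy example (a stand-in for the nested-lattice machinery in the cited work): each relay decodes an integer linear combination of the source messages rather than the messages themselves, and the destination recovers the messages by inverting the integer coefficient matrix modulo a prime. All parameters here are hypothetical.

```python
p = 257  # a prime field standing in for the lattice codebook's message space

def relay_decode(coeffs, messages):
    # A relay decodes an integer linear combination of the messages mod p,
    # not the individual messages.
    return sum(a * w for a, w in zip(coeffs, messages)) % p

def destination_recover(A, combos):
    # Invert the 2x2 integer coefficient matrix mod p to recover both messages.
    (a, b), (c, d) = A
    det = (a * d - b * c) % p
    det_inv = pow(det, p - 2, p)   # Fermat inverse; requires det != 0 mod p
    u, v = combos
    w1 = (det_inv * (d * u - b * v)) % p
    w2 = (det_inv * (a * v - c * u)) % p
    return w1, w2
```

As long as the relays' coefficient vectors are linearly independent mod p, two decoded combinations suffice to recover two messages, with no per-hop noise accumulation.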
|
{
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_22",
"@cite_2",
"@cite_18",
"@cite_4",
"@cite_8",
"@cite_21",
"@cite_23",
"@cite_17",
"@cite_37",
"@cite_7",
"@cite_6",
"@cite_19",
"@cite_27",
"@cite_34",
"@cite_25",
"@cite_33",
"@cite_0",
"@cite_11"
],
"mid": [
"2153077717",
"2154164662",
"2057929374",
"2569741146",
"2091462336",
"2167447263",
"",
"2151027523",
"2548811864",
"2148893247",
"2138203492",
"2098567664",
"2132100261",
"",
"1616017690",
"2051479030",
"2952459054",
"2157989362",
"2005196269",
"2476428932"
],
"abstract": [
"A wireless network with fading and a single source-destination pair is considered. The information reaches the destination via multiple hops through a sequence of layers of single-antenna relays. At high signal-to-noise ratio (SNR), the simple amplify-and-forward strategy is shown to be optimal in terms of degrees of freedom, because it achieves the degrees of freedom equal to a point-to-point multiple-input multiple-output (MIMO) system. Hence, the lack of coordination in relay nodes does not reduce the achievable degrees of freedom. The performance of this amplify-and-forward strategy degrades with increasing network size. This phenomenon is analyzed by finding the tradeoffs between network size, rate, and diversity. A lower bound on the diversity-multiplexing tradeoff for concatenation of multiple random Gaussian matrices is obtained. Also, it is shown that achievable network size in the outage formulation (short codes) is a lot smaller than the ergodic formulation (long codes).",
"We introduce the real, discrete-time Gaussian parallel relay network. This simple network is theoretically important in the context of network information theory. We present upper and lower bounds to capacity and explain where they coincide.",
"In this paper, a Gaussian two-way relay channel, where two source nodes exchange messages with each other through a relay, is considered. We assume that all nodes operate in full-duplex mode and there is no direct channel between the source nodes. We propose an achievable scheme composed of nested lattice codes for the uplink and structured binning for the downlink. Unlike conventional nested lattice codes, our codes utilize two different shaping lattices for source nodes based on a three-stage lattice partition chain, which is a key ingredient for producing the best gap-to-capacity results to date. Specifically, for all channel parameters, the achievable rate region of our scheme is within 1/2 bit from the capacity region for each user and its sum rate is within log 3/2 bit from the sum capacity.",
"The problem of designing physical-layer network coding (PNC) schemes via nested lattices is considered. Building on the compute-and-forward (C&F) relaying strategy of Nazer and Gastpar, who demonstrated its asymptotic gain using information-theoretic tools, an algebraic approach is taken to show its potential in practical, nonasymptotic, settings. A general framework is developed for studying nested-lattice-based PNC schemes-called lattice network coding (LNC) schemes for short-by making a direct connection between C&F and module theory. In particular, a generic LNC scheme is presented that makes no assumptions on the underlying nested lattice code. C&F is reinterpreted in this framework, and several generalized constructions of LNC schemes are given. The generic LNC scheme naturally leads to a linear network coding channel over modules, based on which noncoherent network coding can be achieved. Next, performance complexity tradeoffs of LNC schemes are studied, with a particular focus on hypercube-shaped LNC schemes. The error probability of this class of LNC schemes is largely determined by the minimum intercoset distances of the underlying nested lattice code. Several illustrative hypercube-shaped LNC schemes are designed based on Constructions A and D, showing that nominal coding gains of 3 to 7.5 dB can be obtained with reasonable decoding complexity. Finally, the possibility of decoding multiple linear combinations is considered and related to the shortest independent vectors problem. A notion of dominant solutions is developed together with a suitable lattice-reduction-based algorithm.",
"In this paper, we study a Gaussian relay-interference network, in which relay (helper) nodes are to facilitate competing information flows between different source-destination pairs. We focus on two-stage relay-interference networks where there are weak cross links, causing the networks to behave like a chain of Z Gaussian channels. Our main result is an approximate characterization of the capacity region for such ZZ and ZS networks. We propose a new interference management scheme, termed interference neutralization, which is implemented using structured lattice codes. This scheme allows for over-the-air interference removal, without the transmitters having complete access to the interfering signals. This scheme in conjunction with a new network decomposition technique provides the approximate characterization. Our analysis of these Gaussian networks is based on insights gained from an exact characterization of the corresponding linear deterministic model.",
"A relay channel consists of an input x_1, a relay output y_1, a channel output y, and a relay sender x_2 (whose transmission is allowed to depend on the past symbols y_1). The dependence of the received symbols upon the inputs is given by p(y,y_1|x_1,x_2). The channel is assumed to be memoryless. In this paper the following capacity theorems are proved. 1) If y is a degraded form of y_1, then C = max_{p(x_1,x_2)} min{ I(X_1,X_2;Y), I(X_1;Y_1|X_2) }. 2) If y_1 is a degraded form of y, then C = max_{p(x_1)} max_{x_2} I(X_1;Y|x_2). 3) If p(y,y_1|x_1,x_2) is an arbitrary relay channel with feedback from (y,y_1) to both x_1 and x_2, then C = max_{p(x_1,x_2)} min{ I(X_1,X_2;Y), I(X_1;Y,Y_1|X_2) }. 4) For a general relay channel, C <= max_{p(x_1,x_2)} min{ I(X_1,X_2;Y), I(X_1;Y,Y_1|X_2) }. Superposition block Markov encoding is used to show achievability of C, and converses are established. The capacities of the Gaussian relay channel and certain discrete relay channels are evaluated. Finally, an achievable lower bound to the capacity of the general relay channel is established.",
"",
"Recently, Etkin, Tse, and Wang found the capacity region of the two-user Gaussian interference channel to within 1 bit/s/Hz. A natural goal is to apply this approach to the Gaussian interference channel with an arbitrary number of users. We make progress towards this goal by finding the capacity region of the many-to-one and one-to-many Gaussian interference channels to within a constant number of bits. The result makes use of a deterministic model to provide insight into the Gaussian channel. The deterministic model makes explicit the dimension of signal level. A central theme emerges: the use of lattice codes for alignment of interfering signals on the signal level.",
"We consider the Gaussian “diamond” or parallel relay network, in which a source node transmits a message to a destination node with the help of N relays. Even for the symmetric setting, in which the channel gains to the relays are identical and the channel gains from the relays are identical, the capacity of this channel is unknown in general. The best known capacity approximation is up to an additive gap of order N bits and up to a multiplicative gap of order N2, with both gaps independent of the channel gains. In this paper, we approximate the capacity of the symmetric Gaussian N-relay diamond network up to an additive gap of 1.8 bits and up to a multiplicative gap of a factor 14. Both gaps are independent of the channel gains and, unlike the best previously known result, are also independent of the number of relays N in the network. Achievability is based on bursty amplify-and-forward, showing that this simple scheme is uniformly approximately optimal, both in the low-rate as well as in the high-rate regimes. The upper bound on capacity is based on a careful evaluation of the cut-set bound. We also present approximation results for the asymmetric Gaussian N-relay diamond network. In particular, we show that bursty amplify-and-forward combined with optimal relay selection achieves a rate within a factor O(log4(N)) of capacity with preconstant in the order notation independent of the channel gains.",
"In this work, new achievable rates are derived for the uplink channel of a cellular network with joint multicell processing (MCP), where unlike previous results, the ideal backhaul network has finite capacity per cell. Namely, the cell sites are linked to the central joint processor via lossless links with finite capacity. The new rates are based on compress-and-forward schemes combined with local decoding. Further, the cellular network is abstracted by symmetric models, which render analytical treatment plausible. For this family of idealistic models, achievable rates are presented for both Gaussian and fading channels. The rates are given in closed form for the classical Wyner model and the soft-handover model. These rates are then demonstrated to be rather close to the optimal unlimited backhaul joint processing rates, even for modest backhaul capacities, supporting the potential gain offered by the joint MCP approach. Particular attention is also given to the low-signal-to-noise ratio (SNR) characterization of these rates through which the effect of the limited backhaul network is explicitly revealed. In addition, the rate at which the backhaul capacity should scale in order to maintain the original high-SNR characterization of an unlimited backhaul capacity system is found.",
"The capacity of a particular large Gaussian relay network is determined in the limit as the number of relays tends to infinity. Upper bounds are derived from cut-set arguments, and lower bounds follow from an argument involving uncoded transmission. It is shown that in cases of interest, upper and lower bounds coincide in the limit as the number of relays tends to infinity. Hence, this paper provides a new example where a simple cut-set upper bound is achievable, and one more example where uncoded transmission achieves optimal performance. The findings are illustrated by geometric interpretations. The techniques developed in this paper are then applied to a sensor network situation. This is a network joint source-channel coding problem, and it is well known that the source-channel separation theorem does not extend to this case. The present paper extends this insight by providing an example where separating source from channel coding does not only lead to suboptimal performance-it leads to an exponential penalty in performance scaling behavior (as a function of the number of nodes). Finally, the techniques developed in this paper are extended to include certain models of ad hoc wireless networks, where a capacity scaling law can be established: When all nodes act purely as relays for a single source-destination pair, capacity grows with the logarithm of the number of nodes.",
"Coding strategies that exploit node cooperation are developed for relay networks. Two basic schemes are studied: the relays decode-and-forward the source message to the destination, or they compress-and-forward their channel outputs to the destination. The decode-and-forward scheme is a variant of multihopping, but in addition to having the relays successively decode the message, the transmitters cooperate and each receiver uses several or all of its past channel output blocks to decode. For the compress-and-forward scheme, the relays take advantage of the statistical dependence between their channel outputs and the destination's channel output. The strategies are applied to wireless channels, and it is shown that decode-and-forward achieves the ergodic capacity with phase fading if phase information is available only locally, and if the relays are near the source node. The ergodic capacity coincides with the rate of a distributed antenna array with full cooperation even though the transmitting antennas are not colocated. The capacity results generalize broadly, including to multiantenna transmission with Rayleigh fading, single-bounce fading, certain quasi-static fading problems, cases where partial channel knowledge is available at the transmitters, and cases where local user cooperation is permitted. The results further extend to multisource and multidestination networks such as multiaccess and broadcast relay channels.",
"A noisy network coding scheme for communicating messages between multiple sources and destinations over a general noisy network is presented. For multi-message multicast networks, the scheme naturally generalizes network coding over noiseless networks by Ahlswede, Cai, Li, and Yeung, and compress-forward coding for the relay channel by Cover and El Gamal to discrete memoryless and Gaussian networks. The scheme also extends the results on coding for wireless relay networks and deterministic networks by Avestimehr, Diggavi, and Tse, and coding for wireless erasure networks by Dana, Gowaikar, Palanki, Hassibi, and Effros. The scheme involves lossy compression by the relay as in the compress-forward coding scheme for the relay channel. However, unlike previous compress-forward schemes in which independent messages are sent over multiple blocks, the same message is sent multiple times using independent codebooks as in the network coding scheme for cyclic networks. Furthermore, the relays do not use Wyner-Ziv binning as in previous compress-forward schemes, and each decoder performs simultaneous decoding of the received signals from all the blocks without uniquely decoding the compression indices. A consequence of this new scheme is that achievability is proved simply and more generally without resorting to time expansion to extend results for acyclic networks to networks with cycles. The noisy network coding scheme is then extended to general multi-message networks by combining it with decoding techniques for the interference channel. For the Gaussian multicast network, noisy network coding improves the previously established gap to the cutset bound. We also demonstrate through two popular Gaussian network examples that noisy network coding can outperform conventional compress-forward, amplify-forward, and hash-forward coding schemes.",
"",
"A centrifugating device for biological liquids, e.g. blood, in which a rotatable container carries a specially shaped seal that surrounds and bears on a fixed assembly with a minimum area of interface between the fixed and rotating parts. This seal is disposed outside the path of the liquid to be treated. The fixed assembly, in turn, is releasably carried by a bracket, the bracket being selectively longitudinally extensible as well as selectively adjustably swingable about a vertical axis of oscillation eccentric to the centrifuge, thereby to permit exact positioning of the fixed assembly coaxially of the rotatable container. The parts are so simple and inexpensive in construction that at least some of them can be used once and thrown away. Moreover, the fixed assembly is easily insertable in sealed relationship in any of a variety of containers, by the simplest of manual assembly and disassembly operations.",
"The multicast capacity is determined for networks that have deterministic channels with broadcasting at the transmitters and no interference at the receivers. The multicast capacity is shown to have a cut-set interpretation. It is further shown that one cannot always layer channel and network coding in such networks. The proof of the latter result partially generalizes to discrete memoryless broadcast channels and is used to bound the common rate for problems where one achieves a cut bound on throughput.",
"We analyze the asymptotic behavior of compute-and-forward relay networks in the regime of high signal-to-noise ratios. We consider a section of such a network consisting of K transmitters and K relays. The aim of the relays is to reliably decode an invertible function of the messages sent by the transmitters. An upper bound on the capacity of this system can be obtained by allowing full cooperation among the transmitters and among the relays, transforming the network into a K times K multiple-input multiple-output (MIMO) channel. The number of degrees of freedom of compute-and-forward is hence at most K. In this paper, we analyze the degrees of freedom achieved by the lattice coding implementation of compute-and-forward proposed recently by Nazer and Gastpar. We show that this lattice implementation achieves at most 2 (1+1 K) 2 degrees of freedom, thus exhibiting a very different asymptotic behavior than the MIMO upper bound. This raises the question if this gap of the lattice implementation to the MIMO upper bound is inherent to compute-and-forward in general. We answer this question in the negative by proposing a novel compute-and-forward implementation achieving K degrees of freedom.",
"The capacity of the two-user Gaussian interference channel has been open for 30 years. The understanding on this problem has been limited. The best known achievable region is due to Han and Kobayashi but its characterization is very complicated. It is also not known how tight the existing outer bounds are. In this work, we show that the existing outer bounds can in fact be arbitrarily loose in some parameter ranges, and by deriving new outer bounds, we show that a very simple and explicit Han-Kobayashi type scheme can achieve to within a single bit per second per hertz (bit s Hz) of the capacity for all values of the channel parameters. We also show that the scheme is asymptotically optimal at certain high signal-to-noise ratio (SNR) regimes. Using our results, we provide a natural generalization of the point-to-point classical notion of degrees of freedom to interference-limited scenarios.",
"Interference is usually viewed as an obstacle to communication in wireless networks. This paper proposes a new strategy, compute-and-forward, that exploits interference to obtain significantly higher rates between users in a network. The key idea is that relays should decode linear functions of transmitted messages according to their observed channel coefficients rather than ignoring the interference as noise. After decoding these linear equations, the relays simply send them towards the destinations, which given enough equations, can recover their desired messages. The underlying codes are based on nested lattices whose algebraic structure ensures that integer combinations of codewords can be decoded reliably. Encoders map messages from a finite field to a lattice and decoders recover equations of lattice points which are then mapped back to equations over the finite field. This scheme is applicable even if the transmitters lack channel state information.",
"United States. Defense Advanced Research Projects Agency. Information Theory for Mobile Ad-Hoc Networks Program (grant 1105741-1-TFIND)"
]
}
|
1108.6312
|
2951260765
|
Consider several source nodes communicating across a wireless network to a destination node with the help of several layers of relay nodes. Recent work by has approximated the capacity of this network up to an additive gap. The communication scheme achieving this capacity approximation is based on compress-and-forward, resulting in noise accumulation as the messages traverse the network. As a consequence, the approximation gap increases linearly with the network depth. This paper develops a computation alignment strategy that can approach the capacity of a class of layered, time-varying wireless relay networks up to an approximation gap that is independent of the network depth. This strategy is based on the compute-and-forward framework, which enables relays to decode deterministic functions of the transmitted messages. Alone, compute-and-forward is insufficient to approach the capacity as it incurs a penalty for approximating the wireless channel with complex-valued coefficients by a channel with integer coefficients. Here, this penalty is circumvented by carefully matching channel realizations across time slots to create integer-valued effective channels that are well-suited to compute-and-forward. Unlike prior constant gap results, the approximation gap obtained in this paper also depends closely on the fading statistics, which are assumed to be i.i.d. Rayleigh.
|
As mentioned above, our approach combines signal alignment with lattice coding techniques. Signal alignment for interference management has proved useful especially for the Gaussian interference channel @cite_20 @cite_1 @cite_21 @cite_24 @cite_26 @cite_28 . Recent work has developed alignment schemes that attain the maximum degrees of freedom of certain Gaussian multi-hop networks @cite_16 @cite_14 @cite_13 @cite_29 .
|
{
"cite_N": [
"@cite_26",
"@cite_14",
"@cite_28",
"@cite_29",
"@cite_21",
"@cite_1",
"@cite_24",
"@cite_16",
"@cite_13",
"@cite_20"
],
"mid": [
"2007864949",
"2949683706",
"2130172876",
"2953225045",
"2151027523",
"2950813095",
"2101431202",
"",
"2949078183",
"2136608712"
],
"abstract": [
"This paper develops a new communication strategy, ergodic interference alignment, for the K-user interference channel with time-varying fading. At any particular time, each receiver will see a superposition of the transmitted signals plus noise. The standard approach to such a scenario results in each transmitter-receiver pair achieving a rate proportional to 1 K its interference-free ergodic capacity. However, given two well-chosen time indices, the channel coefficients from interfering users can be made to exactly cancel. By adding up these two observations, each receiver can obtain its desired signal without any interference. If the channel gains have independent, uniform phases, this technique allows each user to achieve at least 1 2 its interference-free ergodic capacity at any signal-to-noise ratio. Prior interference alignment techniques were only able to attain this performance as the signal-to-noise ratio tended to infinity. Extensions are given for the case where each receiver wants a message from more than one transmitter as well as the “X channel” case (with two receivers) where each transmitter has an independent message for each receiver. Finally, it is shown how to generalize this strategy beyond Gaussian channel models. For a class of finite field interference channels, this approach yields the ergodic capacity region.",
"We show that the 2x2x2 interference channel, i.e., the multihop interference channel formed by concatenation of two 2-user interference channels achieves the min-cut outer bound value of 2 DoF, for almost all values of channel coefficients, for both time-varying or fixed channel coefficients. The key to this result is a new idea, called aligned interference neutralization, that provides a way to align interference terms over each hop in a manner that allows them to be cancelled over the air at the last hop.",
"In this paper, we develop the machinery of real interference alignment. This machinery is extremely powerful in achieving the sum degrees of freedom (DoF) of single antenna systems. The scheme of real interference alignment is based on designing single-layer and multilayer constellations used for modulating information messages at the transmitters. We show that constellations can be aligned in a similar fashion as that of vectors in multiple antenna systems and space can be broken up into fractional dimensions. The performance analysis of the signaling scheme makes use of a recent result in the field of Diophantine approximation, which states that the convergence part of the Khintchine-Groshev theorem holds for points on nondegenerate manifolds. Using real interference alignment, we obtain the sum DoF of two model channels, namely the Gaussian interference channel (IC) and the X channel. It is proved that the sum DoF of the K-user IC is (K 2) for almost all channel parameters. We also prove that the sum DoF of the X-channel with K transmitters and M receivers is (K M K + M - 1) for almost all channel parameters.",
"We study the sum capacity of multiple unicasts in wired and wireless multihop networks. With 2 source nodes and 2 sink nodes, there are a total of 4 independent unicast sessions (messages), one from each source to each sink node (this setting is also known as an X network). For wired networks with arbitrary connectivity, the sum capacity is achieved simply by routing. For wireless networks, we explore the degrees of freedom (DoF) of multihop X networks with a layered structure, allowing arbitrary number of hops, and arbitrary connectivity within each hop. For the case when there are no more than two relay nodes in each layer, the DoF can only take values 1, 4 3, 3 2 or 2, based on the connectivity of the network, for almost all values of channel coefficients. When there are arbitrary number of relays in each layer, the DoF can also take the value 5 3 . Achievability schemes incorporate linear forwarding, interference alignment and aligned interference neutralization principles. Information theoretic converse arguments specialized for the connectivity of the network are constructed based on the intuition from linear dimension counting arguments.",
"Recently, Etkin, Tse, and Wang found the capacity region of the two-user Gaussian interference channel to within 1 bit s Hz. A natural goal is to apply this approach to the Gaussian interference channel with an arbitrary number of users. We make progress towards this goal by finding the capacity region of the many-to-one and one-to-many Gaussian interference channels to within a constant number of bits. The result makes use of a deterministic model to provide insight into the Gaussian channel. The deterministic model makes explicit the dimension of signal level. A central theme emerges: the use of lattice codes for alignment of interfering signals on the signal level.",
"While the best known outerbound for the K user interference channel states that there cannot be more than K 2 degrees of freedom, it has been conjectured that in general the constant interference channel with any number of users has only one degree of freedom. In this paper, we explore the spatial degrees of freedom per orthogonal time and frequency dimension for the K user wireless interference channel where the channel coefficients take distinct values across frequency slots but are fixed in time. We answer five closely related questions. First, we show that K 2 degrees of freedom can be achieved by channel design, i.e. if the nodes are allowed to choose the best constant, finite and nonzero channel coefficient values. Second, we show that if channel coefficients can not be controlled by the nodes but are selected by nature, i.e., randomly drawn from a continuous distribution, the total number of spatial degrees of freedom for the K user interference channel is almost surely K 2 per orthogonal time and frequency dimension. Thus, only half the spatial degrees of freedom are lost due to distributed processing of transmitted and received signals on the interference channel. Third, we show that interference alignment and zero forcing suffice to achieve all the degrees of freedom in all cases. Fourth, we show that the degrees of freedom @math directly lead to an @math capacity characterization of the form @math for the multiple access channel, the broadcast channel, the 2 user interference channel, the 2 user MIMO X channel and the 3 user interference channel with M>1 antennas at each node. Fifth, we characterize the degree of freedom benefits from cognitive sharing of messages on the 3 user interference channel.",
"The paper studies a class of three user Gaussian interference channels. A new layered lattice coding scheme is introduced as a transmission strategy. The use of lattice codes allows for an ldquoalignmentrdquo of the interference observed at each receiver. The layered lattice coding is shown to achieve more than one degree of freedom for a class of interference channels and also achieves rates which are better than the rates obtained using the Han-Kobayashi coding scheme.",
"",
"We consider two-source two-destination (i.e., two-unicast) multi-hop wireless networks that have a layered structure with arbitrary connectivity. We show that, if the channel gains are chosen independently according to continuous distributions, then, with probability 1, two-unicast layered Gaussian networks can only have 1, 3 2 or 2 sum degrees-of-freedom (unless both source-destination pairs are disconnected, in which case no degrees-of-freedom can be achieved). We provide sufficient and necessary conditions for each case based on network connectivity and a new notion of source-destination paths with manageable interference. Our achievability scheme is based on forwarding the received signals at all nodes, except for a small fraction of them in at most two key layers. Hence, we effectively create a \"condensed network\" that has at most four layers (including the sources layer and the destinations layer). We design the transmission strategies based on the structure of this condensed network. The converse results are obtained by developing information-theoretic inequalities that capture the structures of the network connectivity. Finally, we extend this result and characterize the full degrees-of-freedom region of two-unicast layered wireless networks.",
"In a multiple-antenna system with two transmitters and two receivers, a scenario of data communication, known as the X channel, is studied in which each receiver receives data from both transmitters. In this scenario, it is assumed that each transmitter is unaware of the other transmitter's data (noncooperative scenario). This system can be considered as a combination of two broadcast channels (from the transmitters' points of view) and two multiple-access channels (from the receivers' points of view). Taking advantage of both perspectives, two signaling schemes for such a scenario are developed. In these schemes, some linear filters are employed at the transmitters and at the receivers which decompose the system into either two noninterfering multiple-antenna broadcast subchannels or two noninterfering multiple-antenna multiple-access subchannels. The main objective in the design of the filters is to exploit the structure of the channel matrices to achieve the highest multiplexing gain (MG). It is shown that the proposed noncooperative signaling schemes outperform other known noncooperative schemes in terms of the achievable MG. In particular, it is shown that in some specific cases, the achieved MG is the same as the MG of the system if full cooperation is provided either between the transmitters or between the receivers. In the second part of the paper, it is shown that by using mixed design schemes, rather than decomposition schemes, and taking the statistical properties of the interference terms into account, the power offset of the system can be improved. The power offset represents the horizontal shift in the curve of the sum-rate versus the total power in decibels."
]
}
|
1108.6312
|
2951260765
|
Consider several source nodes communicating across a wireless network to a destination node with the help of several layers of relay nodes. Recent work by has approximated the capacity of this network up to an additive gap. The communication scheme achieving this capacity approximation is based on compress-and-forward, resulting in noise accumulation as the messages traverse the network. As a consequence, the approximation gap increases linearly with the network depth. This paper develops a computation alignment strategy that can approach the capacity of a class of layered, time-varying wireless relay networks up to an approximation gap that is independent of the network depth. This strategy is based on the compute-and-forward framework, which enables relays to decode deterministic functions of the transmitted messages. Alone, compute-and-forward is insufficient to approach the capacity as it incurs a penalty for approximating the wireless channel with complex-valued coefficients by a channel with integer coefficients. Here, this penalty is circumvented by carefully matching channel realizations across time slots to create integer-valued effective channels that are well-suited to compute-and-forward. Unlike prior constant gap results, the approximation gap obtained in this paper also depends closely on the fading statistics, which are assumed to be i.i.d. Rayleigh.
|
Lattice codes provide an elegant framework for many classical Gaussian multi-terminal problems @cite_5 @cite_9 . Beyond this role, it has recently been shown that they have a central part to play in approaching the capacity of networks that include some form of interference @cite_0 @cite_3 @cite_21 @cite_2 @cite_22 @cite_39 @cite_24 .
|
{
"cite_N": [
"@cite_22",
"@cite_9",
"@cite_21",
"@cite_3",
"@cite_39",
"@cite_0",
"@cite_24",
"@cite_2",
"@cite_5"
],
"mid": [
"2057929374",
"2018300428",
"2151027523",
"2121714289",
"1505858209",
"2005196269",
"2101431202",
"2569741146",
"2111992817"
],
"abstract": [
"In this paper, a Gaussian two-way relay channel, where two source nodes exchange messages with each other through a relay, is considered. We assume that all nodes operate in full-duplex mode and there is no direct channel between the source nodes. We propose an achievable scheme composed of nested lattice codes for the uplink and structured binning for the downlink. Unlike conventional nested lattice codes, our codes utilize two different shaping lattices for source nodes based on a three-stage lattice partition chain, which is a key ingredient for producing the best gap-to-capacity results to date. Specifically, for all channel parameters, the achievable rate region of our scheme is within 1 2 bit from the capacity region for each user and its sum rate is within log3 2 bit from the sum capacity.",
"As bees and crystals (and people selling oranges in the market) know it for many years, lattices provide efficient structures for packing, covering, quantization and channel coding. In the recent years, interesting links were found between lattices and coding schemes for multi-terminal networks. This tutorial paper covers close to 20 years of my research in the area; of enjoying the beauty of lattice codes, and discovering their power in dithered quantization, dirty paper coding, Wyner-Ziv DPCM, modulo-lattice modulation, distributed interference cancelation, and more.",
"Recently, Etkin, Tse, and Wang found the capacity region of the two-user Gaussian interference channel to within 1 bit s Hz. A natural goal is to apply this approach to the Gaussian interference channel with an arbitrary number of users. We make progress towards this goal by finding the capacity region of the many-to-one and one-to-many Gaussian interference channels to within a constant number of bits. The result makes use of a deterministic model to provide insight into the Gaussian channel. The deterministic model makes explicit the dimension of signal level. A central theme emerges: the use of lattice codes for alignment of interfering signals on the signal level.",
"In Costa's dirty-paper channel, Gaussian random binning is able to eliminate the effect of interference which is known at the transmitter, and thus achieve capacity. We examine a generalization of the dirty-paper problem to a multiple access channel (MAC) setup, where structured (lattice-based) binning seems to be necessary to achieve capacity. In the dirty-MAC, two additive interference signals are present, one known to each transmitter but none to the receiver. The achievable rates using Costa's Gaussian binning vanish if both interference signals are strong. In contrast, it is shown that lattice-strategies (“lattice precoding”) can achieve positive rates, independent of the interference power. Furthermore, in some cases-which depend on the noise variance and power constraints-high-dimensional lattice strategies are in fact optimal. In particular, they are optimal in the limit of high SNR-where the capacity region of the dirty MAC with strong interference approaches that of a clean MAC whose power is governed by the minimum of the users' powers rather than their sum. The rate gap at high SNR between lattice-strategies and optimum (rather than Gaussian) random binning is conjectured to be 1 2 log2(πe 6) ≈ 0.254 bit. Thus, the doubly dirty MAC is another instance of a network setting, like the Korner-Marton problem, where (linear) structured coding is potentially better than random binning.",
"In this paper, we consider a class of single-source multicast relay networks. We assume that all outgoing channels of a node in the network to its neighbors are orthogonal while the incoming signals from its neighbors can interfere with each other. We first focus on Gaussian relay networks with interference and find an achievable rate using a lattice coding scheme. We show that the achievable rate of our scheme is within a constant bit gap from the information theoretic cut-set bound, where the constant depends only on the network topology, but not on the transmit power, noise variance, and channel gains. This is similar to a recent result by Avestimehr, Diggavi, and Tse, who showed an approximate capacity characterization for general Gaussian relay networks. However, our achievability uses a structured code instead of a random one. Using the idea used in the Gaussian case, we also consider a linear finite-field symmetric network with interference and characterize its capacity using a linear coding scheme.",
"Interference is usually viewed as an obstacle to communication in wireless networks. This paper proposes a new strategy, compute-and-forward, that exploits interference to obtain significantly higher rates between users in a network. The key idea is that relays should decode linear functions of transmitted messages according to their observed channel coefficients rather than ignoring the interference as noise. After decoding these linear equations, the relays simply send them towards the destinations, which given enough equations, can recover their desired messages. The underlying codes are based on nested lattices whose algebraic structure ensures that integer combinations of codewords can be decoded reliably. Encoders map messages from a finite field to a lattice and decoders recover equations of lattice points which are then mapped back to equations over the finite field. This scheme is applicable even if the transmitters lack channel state information.",
"The paper studies a class of three user Gaussian interference channels. A new layered lattice coding scheme is introduced as a transmission strategy. The use of lattice codes allows for an ldquoalignmentrdquo of the interference observed at each receiver. The layered lattice coding is shown to achieve more than one degree of freedom for a class of interference channels and also achieves rates which are better than the rates obtained using the Han-Kobayashi coding scheme.",
"The problem of designing physical-layer network coding (PNC) schemes via nested lattices is considered. Building on the compute-and-forward (C&F) relaying strategy of Nazer and Gastpar, who demonstrated its asymptotic gain using information-theoretic tools, an algebraic approach is taken to show its potential in practical, nonasymptotic, settings. A general framework is developed for studying nested-lattice-based PNC schemes-called lattice network coding (LNC) schemes for short-by making a direct connection between C&F and module theory. In particular, a generic LNC scheme is presented that makes no assumptions on the underlying nested lattice code. C&F is reinterpreted in this framework, and several generalized constructions of LNC schemes are given. The generic LNC scheme naturally leads to a linear network coding channel over modules, based on which noncoherent network coding can be achieved. Next, performance complexity tradeoffs of LNC schemes are studied, with a particular focus on hypercube-shaped LNC schemes. The error probability of this class of LNC schemes is largely determined by the minimum intercoset distances of the underlying nested lattice code. Several illustrative hypercube-shaped LNC schemes are designed based on Constructions A and D, showing that nominal coding gains of 3 to 7.5 dB can be obtained with reasonable decoding complexity. Finally, the possibility of decoding multiple linear combinations is considered and related to the shortest independent vectors problem. A notion of dominant solutions is developed together with a suitable lattice-reduction-based algorithm.",
"Network information theory promises high gains over simple point-to-point communication techniques, at the cost of higher complexity. However, lack of structured coding schemes limited the practical application of these concepts so far. One of the basic elements of a network code is the binning scheme. Wyner (1974, 1978) and other researchers proposed various forms of coset codes for efficient binning, yet these schemes were applicable only for lossless source (or noiseless channel) network coding. To extend the algebraic binning approach to lossy source (or noisy channel) network coding, previous work proposed the idea of nested codes, or more specifically, nested parity-check codes for the binary case and nested lattices in the continuous case. These ideas connect network information theory with the rich areas of linear codes and lattice codes, and have strong potential for practical applications. We review these developments and explore their tight relation to concepts such as combined shaping and precoding, coding for memories with defects, and digital watermarking. We also propose a few novel applications adhering to a unified approach."
]
}
|
1108.5248
|
2120462184
|
Representation languages for coalitional games are a key research area in algorithmic game theory. There is an inherent tradeoff between how general a language is, allowing it to capture more elaborate games, and how hard it is computationally to optimize and solve such games. One prominent such language is the simple yet expressive Weighted Graph Games (WGGs) representation [14], which maintains knowledge about synergies between agents in the form of an edge weighted graph. We consider the problem of finding the optimal coalition structure in WGGs. The agents in such games are vertices in a graph, and the value of a coalition is the sum of the weights of the edges present between coalition members. The optimal coalition structure is a partition of the agents into coalitions that maximizes the sum of utilities obtained by the coalitions. We show that finding the optimal coalition structure is not only hard for general graphs, but is also intractable for restricted families such as planar graphs, which are amenable to many other combinatorial problems. We then provide algorithms with constant factor approximations for planar, minor-free and bounded degree graphs.
|
Much work in algorithmic game theory has been dedicated to team formation, cooperative game representations and methods for finding optimal teams game theoretic solutions. Several papers describe representations of cooperative domains based on combinatorial structures @cite_26 @cite_20 @cite_23 @cite_24 @cite_5 , with a survey in @cite_7 . A detailed presentation of such languages is given in @cite_1 @cite_22 . Generation of the optimal coalition structure has received much attention @cite_9 @cite_10 @cite_21 @cite_3 due to its applications, such as vehicle routing and multi-sensor networks. An early approach @cite_9 focused on overlapping coalitions and gave a loose approximation algorithm. Another early approach @cite_10 has a worst case complexity of @math , whereas dynamic programming approaches @cite_30 have a worst case guarantee of @math . Such algorithms were examined empirically in @cite_25 .
|
{
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_22",
"@cite_7",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_24",
"@cite_23",
"@cite_5",
"@cite_10",
"@cite_25",
"@cite_20"
],
"mid": [
"2015062753",
"2046116913",
"1528676759",
"1503525425",
"1971441460",
"1788505167",
"2102152964",
"1501141062",
"1530744370",
"1545756310",
"2005690414",
"2156887976",
"2033201011",
""
],
"abstract": [
"The complete set partitioning (CSP) problem is a special case of the set partitioning problem where the coefficient matrix has 2^m − 1 columns, each column being a binary representation of a unique integer between 1 and 2^m − 1, m ≥ 1. It has wide applications in the area of corporate tax structuring in operations research. In this paper we propose a dynamic programming approach to solve the CSP problem, which has time complexity O(3^m), where n = 2^m − 1 is the size of the problem space.",
"We study from a complexity theoretic standpoint the various solution concepts arising in cooperative game theory. We use as a vehicle for this study a game in which the players are nodes of a graph with weights on the edges, and the value of a coalition is determined by the total weight of the edges contained in it. The Shapley value is always easy to compute. The core is easy to characterize when the game is convex, and is intractable (NP-complete) otherwise. Similar results are shown for the kernel, the nucleolus, the e-core, and the bargaining set. As for the von Neumann-Morgenstern solution, we point out that its existence may not even be decidable. Many of these results generalize to the case in which the game is presented by a hypergraph with edges of size k > 2.",
"This exciting and pioneering new overview of multiagent systems, which are online systems composed of multiple interacting intelligent agents, i.e., online trading, offers a newly seen computer science perspective on multiagent systems, while integrating ideas from operations research, game theory, economics, logic, and even philosophy and linguistics. The authors emphasize foundations to create a broad and rigorous treatment of their subject, with thorough presentations of distributed problem solving, game theory, multiagent communication and learning, social choice, mechanism design, auctions, cooperative game theory, and modal logics of knowledge and belief. For each topic, basic concepts are introduced, examples are given, proofs of key results are offered, and algorithmic considerations are examined. An appendix covers background material in probability theory, classical logic, Markov decision processes and mathematical programming. Written by two of the leading researchers of this engaging field, this book will surely serve as THE reference for researchers in the fastest-growing area of computer science, and be used as a text for advanced undergraduate or graduate courses.",
"Preface. 1. Structures. 2. Linear optimization methods. 3. Discrete convex analysis. 4. Computational complexity. 5. Restricted games by partition systems. 6. Restricted games by union stable systems. 7. Values for games on convex geometries. 8. Values for games on matroids. 9. The core, the selectope and the Weber set. 10. Simple games on closure spaces. 11. Voting power. 12. Computing values with Mathematica. Bibliography. Index.",
"Task execution in multi-agent environments may require cooperation among agents. Given a set of agents and a set of tasks which they have to satisfy, we consider situations where each task should be attached to a group of agents that will perform the task. Task allocation to groups of agents is necessary when tasks cannot be performed by a single agent. However it may also be beneficial when groups perform more efficiently with respect to the single agents' performance. In this paper we present several solutions to the problem of task allocation among autonomous agents, and suggest that the agents form coalitions in order to perform tasks or improve the efficiency of their performance. We present efficient distributed algorithms with low ratio bounds and with low computational complexities. These properties are proven theoretically and supported by simulations and an implementation in an agent system. Our methods are based on both the algorithmic aspects of combinatorics and approximation algorithms for NP-hard problems. We first present an approach to agent coalition formation where each agent must be a member of only one coalition. Next, we present the domain of overlapping coalitions. We proceed with a discussion of the domain where tasks may have a precedence order. Finally, we discuss the case of implementation in an open, dynamic agent system. For each case we provide an algorithm that will lead agents to the formation of coalitions, where each coalition is assigned a task. Our algorithms are any-time algorithms, they are simple, efficient and easy to implement.",
"A key problem when forming effective coalitions of autonomous agents is determining the best groupings, or the optimal coalition structure, to select to achieve some goal. To this end, we present a novel, anytime algorithm for this task that is significantly faster than current solutions. Specifically, we empirically show that we are able to find solutions that are optimal in 0.082% of the time taken by the state of the art dynamic programming algorithm (for 27 agents), using much less memory (O(2^n) instead of O(3^n) for n agents). Moreover, our algorithm is the first to be able to find solutions for more than 17 agents in reasonable time (less than 90 minutes for 27 agents, as opposed to around 2 months for the best previous solution).",
"The theory of cooperative games provides a rich mathematical framework with which to understand the interactions between self-interested agents in settings where they can benefit from cooperation, and where binding agreements between agents can be made. Our aim in this talk is to describe the issues that arise when we consider cooperative game theory through a computational lens. We begin by introducing basic concepts from cooperative game theory, and in particular the key solution concepts: the core and the Shapley value. We then introduce the key issues that arise if one is to consider the cooperative games in a computational setting: in particular, the issue of representing games, and the computational complexity of cooperative solution concepts.",
"Forming effective coalitions is a major research challenge in the field of multi-agent systems. Central to this endeavour is the problem of partitioning the set of agents into exhaustive and disjoint coalitions such that the social welfare is maximized. This coalition structure generation problem is extremely challenging due to the exponential number of partitions that need to be examined. Specifically, given n agents, there are O(n^n) possible partitions. To date, the only algorithm that can find an optimal solution in O(3^n) is the Dynamic Programming (DP) algorithm. However, one of the main limitations of DP is that it requires a significant amount of memory. In this paper, we devise an Improved Dynamic Programming algorithm (IDP) that is proved to perform fewer operations than DP (e.g. 38.7% of the operations given 25 agents), and is shown to use only 33.3% of the memory in the best case, and 66.6% in the worst.",
"Coalition formation is an important capability of automated negotiation among self-interested agents. In order for coalitions to be stable, a key question that must be answered is how the gains from cooperation are to be distributed. Recent research has revealed that traditional solution concepts, such as the Shapley value, core, least core, and nucleolus, are vulnerable to various manipulations in open anonymous environments such as the Internet. These manipulations include submitting false names, collusion, and hiding some skills. To address this, a solution concept called the anonymity-proof core, which is robust against such manipulations, was developed. However, the representation size of the outcome function in the anonymity-proof core (and similar concepts) requires space exponential in the number of agents skills. This paper proposes a compact representation of the outcome function, given that the characteristic function is represented using a recently introduced compact language that explicitly specifies only coalitions that introduce synergy. This compact representation scheme can successfully express the outcome function in the anonymity-proof core. Furthermore, this paper develops a new solution concept, the anonymity-proof nucleolus, that is also expressible in this compact representation. We show that the anonymity-proof nucleolus always exists, is unique, and is in the anonymity-proof core (if the latter is nonempty). and assigns the same value to symmetric skills.",
"We consider optimizing the coalition structure in Coalitional Skill Games (CSGs), a succinct representation of coalitional games (Bachrach and Rosenschein 2008). In CSGs, the value of a coalition depends on the tasks its members can achieve. The tasks require various skills to complete them, and agents may have different skill sets. The optimal coalition structure is a partition of the agents into coalitions that maximizes the sum of utilities obtained by the coalitions. We show that CSGs can represent any characteristic function, and consider optimal coalition structure generation in this representation. We provide hardness results, showing that in general CSGs, as well as in very restricted versions of them, computing the optimal coalition structure is hard. On the positive side, we show that the problem can be reformulated as constraint satisfaction on a hypergraph, and present an algorithm that finds the optimal coalition structure in polynomial time for instances with bounded tree-width and number of tasks.",
"Preference aggregation is used in a variety of multiagent applications, and as a result, voting theory has become an important topic in multiagent system research. However, power indices (which reflect how much \"real power\" a voter has in a weighted voting system) have received relatively little attention, although they have long been studied in political science and economics. We consider a particular multiagent domain, a threshold network flow game. Agents control the edges of a graph; a coalition wins if it can send a flow that exceeds a given threshold from a source vertex to a target vertex. The relative power of each edge agent reflects its significance in enabling such a flow, and in real-world networks could be used, for example, to allocate resources for maintaining parts of the network. We examine the computational complexity of calculating two prominent power indices, the Banzhaf index and the Shapley-Shubik index, in this network flow domain. We also consider the complexity of calculating the core in this domain. The core can be used to allocate, in a stable manner, the gains of the coalition that is established. We show that calculating the Shapley-Shubik index in this network flow domain is NP-hard, and that calculating the Banzhaf index is #P-complete. Despite these negative results, we show that for some restricted network flow domains there exists a polynomial algorithm for calculating agents' Banzhaf power indices. We also show that computing the core in this game can be performed in polynomial time.",
"Coalition formation is a key topic in multiagent systems. One may prefer a coalition structure that maximizes the sum of the values of the coalitions, but often the number of coalition structures is too large to allow exhaustive search for the optimal one. Furthermore, finding the optimal coalition structure is NP-complete. But then, can the coalition structure found via a partial search be guaranteed to be within a bound from optimum? We show that none of the previous coalition structure generation algorithms can establish any bound because they search fewer nodes than a threshold that we show necessary for establishing a bound. We present an algorithm that establishes a tight bound within this minimal amount of search, and show that any other algorithm would have to search strictly more. The fraction of nodes needed to be searched approaches zero as the number of agents grows. If additional time remains, our anytime algorithm searches further, and establishes a progressively lower tight bound. Surprisingly, just searching one more node drops the bound in half. As desired, our algorithm lowers the bound rapidly early on, and exhibits diminishing returns to computation. It also significantly outperforms its obvious contenders. Finally, we show how to distribute the desired search across self-interested manipulative agents. © 1999 Elsevier Science B.V. All rights reserved.",
"Coalition formation is a key topic in multiagent systems. One would prefer a coalition structure that maximizes the sum of the values of the coalitions, but often the number of coalition structures is too large to allow for exhaustive search for the optimal one. We present experimental results for three anytime algorithms that search the space of coalition structures. We show that, in the average case, all three algorithms do much better than the recently established theoretical worst case results in (1999a). We also show that no one algorithm is dominant. Each algorithm's performance is influenced by the particular instance distribution, with each algorithm outperforming the others for different instances. We present a possible explanation for the behaviour of the algorithms and support our hypothesis with data collected from a controlled experimental run.",
""
]
}
|
1108.5248
|
2120462184
|
Representation languages for coalitional games are a key research area in algorithmic game theory. There is an inherent tradeoff between how general a language is, allowing it to capture more elaborate games, and how hard it is computationally to optimize and solve such games. One prominent such language is the simple yet expressive Weighted Graph Games (WGGs) representation [14], which maintains knowledge about synergies between agents in the form of an edge-weighted graph. We consider the problem of finding the optimal coalition structure in WGGs. The agents in such games are vertices in a graph, and the value of a coalition is the sum of the weights of the edges present between coalition members. The optimal coalition structure is a partition of the agents into coalitions that maximizes the sum of utilities obtained by the coalitions. We show that finding the optimal coalition structure is not only hard for general graphs, but is also intractable for restricted families such as planar graphs, which are amenable to many other combinatorial problems. We then provide algorithms with constant factor approximations for planar, minor-free and bounded degree graphs.
|
Arguably, the state-of-the-art method is presented in @cite_3 . It has a worst-case runtime of @math and offers no polynomial runtime guarantees, but in practice it is faster than the above methods. All these methods assume a black box that computes the value of a coalition, while we rely on a specific representation. Another approach solves the coalition structure generation problem @cite_23 , but relies on a different representation. A fixed-parameter tractable approach was proposed for typed games @cite_34 (the running time is exponential in the number of agent types). However, in graph games the number of agent types is unbounded, so this approach is intractable. In contrast to the above approaches, we provide polynomial algorithms and sufficient conditions that guarantee various approximation guarantees for WGGs @cite_26 .
|
{
"cite_N": [
"@cite_34",
"@cite_23",
"@cite_3",
"@cite_26"
],
"mid": [
"2950275611",
"1545756310",
"1501141062",
"2046116913"
],
"abstract": [
"We revisit the coalition structure generation problem in which the goal is to partition the players into exhaustive and disjoint coalitions so as to maximize the social welfare. One of our key results is a general polynomial-time algorithm to solve the problem for all coalitional games provided that player types are known and the number of player types is bounded by a constant. As a corollary, we obtain a polynomial-time algorithm to compute an optimal partition for weighted voting games with a constant number of weight values and for coalitional skill games with a constant number of skills. We also consider well-studied and well-motivated coalitional games defined compactly on combinatorial domains. For these games, we characterize the complexity of computing an optimal coalition structure by presenting polynomial-time algorithms, approximation algorithms, or NP-hardness and inapproximability lower bounds.",
"We consider optimizing the coalition structure in Coalitional Skill Games (CSGs), a succinct representation of coalitional games (Bachrach and Rosenschein 2008). In CSGs, the value of a coalition depends on the tasks its members can achieve. The tasks require various skills to complete them, and agents may have different skill sets. The optimal coalition structure is a partition of the agents into coalitions that maximizes the sum of utilities obtained by the coalitions. We show that CSGs can represent any characteristic function, and consider optimal coalition structure generation in this representation. We provide hardness results, showing that in general CSGs, as well as in very restricted versions of them, computing the optimal coalition structure is hard. On the positive side, we show that the problem can be reformulated as constraint satisfaction on a hypergraph, and present an algorithm that finds the optimal coalition structure in polynomial time for instances with bounded tree-width and number of tasks.",
"Forming effective coalitions is a major research challenge in the field of multi-agent systems. Central to this endeavour is the problem of partitioning the set of agents into exhaustive and disjoint coalitions such that the social welfare is maximized. This coalition structure generation problem is extremely challenging due to the exponential number of partitions that need to be examined. Specifically, given n agents, there are O(n^n) possible partitions. To date, the only algorithm that can find an optimal solution in O(3^n) is the Dynamic Programming (DP) algorithm. However, one of the main limitations of DP is that it requires a significant amount of memory. In this paper, we devise an Improved Dynamic Programming algorithm (IDP) that is proved to perform fewer operations than DP (e.g. 38.7% of the operations given 25 agents), and is shown to use only 33.3% of the memory in the best case, and 66.6% in the worst.",
"We study from a complexity theoretic standpoint the various solution concepts arising in cooperative game theory. We use as a vehicle for this study a game in which the players are nodes of a graph with weights on the edges, and the value of a coalition is determined by the total weight of the edges contained in it. The Shapley value is always easy to compute. The core is easy to characterize when the game is convex, and is intractable (NP-complete) otherwise. Similar results are shown for the kernel, the nucleolus, the e-core, and the bargaining set. As for the von Neumann-Morgenstern solution, we point out that its existence may not even be decidable. Many of these results generalize to the case in which the game is presented by a hypergraph with edges of size k > 2."
]
}
|
1108.5248
|
2120462184
|
Representation languages for coalitional games are a key research area in algorithmic game theory. There is an inherent tradeoff between how general a language is, allowing it to capture more elaborate games, and how hard it is computationally to optimize and solve such games. One prominent such language is the simple yet expressive Weighted Graph Games (WGGs) representation [14], which maintains knowledge about synergies between agents in the form of an edge-weighted graph. We consider the problem of finding the optimal coalition structure in WGGs. The agents in such games are vertices in a graph, and the value of a coalition is the sum of the weights of the edges present between coalition members. The optimal coalition structure is a partition of the agents into coalitions that maximizes the sum of utilities obtained by the coalitions. We show that finding the optimal coalition structure is not only hard for general graphs, but is also intractable for restricted families such as planar graphs, which are amenable to many other combinatorial problems. We then provide algorithms with constant factor approximations for planar, minor-free and bounded degree graphs.
|
This paper ignored the game theoretic problem of coalitional stability. While the structures we find do maximize the welfare, they do so in a potentially unstable manner. When agents are selfish, and only care about their own utility, the coalition structure may be broken when some agents decide to form a different coalition, improving their own utility at the expense of others. It would be interesting to examine questions relating to solution concepts such as the core, the nucleolus or the cost of stability @cite_32 @cite_6 @cite_17 @cite_0 .
|
{
"cite_N": [
"@cite_0",
"@cite_32",
"@cite_6",
"@cite_17"
],
"mid": [
"2115537584",
"",
"2125363721",
"1540727845"
],
"abstract": [
"A key question in cooperative game theory is that of coalitional stability, usually captured by the notion of the core --the set of outcomes such that no subgroup of players has an incentive to deviate. However, some coalitional games have empty cores, and any outcome in such a game is unstable. In this paper, we investigate the possibility of stabilizing a coalitional game by using external payments. We consider a scenario where an external party, which is interested in having the players work together, offers a supplemental payment to the grand coalition (or, more generally, a particular coalition structure). This payment is conditional on players not deviating from their coalition(s). The sum of this payment plus the actual gains of the coalition(s) may then be divided among the agents so as to promote stability. We define the cost of stability (CoS) as the minimal external payment that stabilizes the game. We provide general bounds on the cost of stability in several classes of games, and explore its algorithmic properties. To develop a better intuition for the concepts we introduce, we provide a detailed algorithmic study of the cost of stability in weighted voting games, a simple but expressive class of games which can model decision-making in political bodies, and cooperation in multiagent settings. Finally, we extend our model and results to games with coalition structures.",
"",
"Abstract : In RM 23, a proof was given that the nucleolus is continuous as a function of the characteristic function. This proof is not correct; the author, at least, does not know how to complete it. In the paper a correct proof for this fact is given. The proof is based on an alternative definition of the nucleolus, which is of some interest in its own right. (Author)",
"One key question in cooperative game theory is that of coalitional stability. A coalition in such games is stable when no subset of the agents in it has a rational incentive to leave the coalition. Finding a division of the gains of the coalition (an imputation) lies at the heart of many cooperative game theory solution concepts, the most prominent of which is the core. However, some coalitional games have empty cores, and any imputation in such a game is unstable. We investigate the possibility of stabilizing the coalitional structure using external payments. In this case, a supplemental payment is offered to the grand coalition by an external party which is interested in having the members of the coalition work together. The sum of this payment plus the gains of the coalition, called the coalition's \"adjusted gains\", may be divided among the members of the coalition in a stable manner. We call a division of the adjusted gains a super-imputation, and define the cost of stability (CoS) as the minimal sum of payments that stabilizes the coalition. We examine the cost of stability in weighted voting games, where each agent has a weight, and a coalition is successful if the sum of its weights exceeds a given threshold. Such games offer a simple model of decision making in political bodies, and of cooperation in multiagent settings. We show that it is coNP-complete to test whether a super-imputation is stable, but show that if either the weights or payments of agents are bounded then there exists a polynomial algorithm for this problem. We provide a polynomial approximation algorithm for computing the cost of stability."
]
}
|
1108.5717
|
110681318
|
Statistical relational learning techniques have been successfully applied in a wide range of relational domains. In most of these applications, the human designers capitalized on their background knowledge by following a trial-and-error trajectory, where relational features are manually defined by a human engineer, parameters are learned for those features on the training data, the resulting model is validated, and the cycle repeats as the engineer adjusts the set of features. This paper seeks to streamline application development in large relational domains by introducing a light-weight approach that efficiently evaluates relational features on pieces of the relational graph that are streamed to it one at a time. We evaluate our approach on two social media tasks and demonstrate that it leads to more accurate models that are learned faster.
|
Structure learning and feature selection are important problems that have been widely studied in both relational and i.i.d. settings. Most feature selection approaches, e.g., @cite_4 , have been developed for non-streaming classification settings. One recent exception is the work of @cite_20 , who study a classification task where the features arrive in a stream, while the data set is fixed. In contrast, here we explore the setting where the pool of features is fixed, but the data arrives as a stream.
|
{
"cite_N": [
"@cite_4",
"@cite_20"
],
"mid": [
"2119479037",
"1949281989"
],
"abstract": [
"Variable and feature selection have become the focus of much research in areas of application for which datasets with tens or hundreds of thousands of variables are available. These areas include text processing of internet documents, gene expression array analysis, and combinatorial chemistry. The objective of variable selection is three-fold: improving the prediction performance of the predictors, providing faster and more cost-effective predictors, and providing a better understanding of the underlying process that generated the data. The contributions of this special issue cover a wide range of aspects of such problems: providing a better definition of the objective function, feature construction, feature ranking, multivariate feature selection, efficient search methods, and feature validity assessment methods.",
"We study an interesting and challenging problem, online streaming feature selection, in which the size of the feature set is unknown, and not all features are available for learning while leaving the number of observations constant. In this problem, the candidate features arrive one at a time, and the learner's task is to select a \"best so far\" set of features from streaming features. Standard feature selection methods cannot perform well in this scenario. Thus, we present a novel framework based on feature relevance. Under this framework, a promising alternative method, Online Streaming Feature Selection (OSFS), is presented to online select strongly relevant and non-redundant features. In addition to OSFS, a faster Fast-OSFS algorithm is proposed to further improve the selection efficiency. Experimental results show that our algorithms achieve more compactness and better accuracy than existing streaming feature selection algorithms on various datasets."
]
}
|
1108.5281
|
1632754975
|
There are various interesting semantics (extensions) designed for argumentation frameworks. They make it possible to assign a meaning, e.g., to odd-length cycles. Our main motivation is to transfer the semantics proposed by Baroni, Giacomin and Guida for argumentation frameworks with odd-length cycles to logic programs with odd-length cycles through default negation. The developed construction is even stronger. For a given logic program an argumentation framework is defined. The construction enables the transfer of each semantics of the resulting argumentation framework to a semantics of the given logic program. Weak points of the construction are discussed and some future continuations of this approach are outlined.
|
Relations between the "classic" argumentation semantics and the corresponding semantic views on logic programs are studied in @cite_6 . Of course, the problem of odd cycles is not tackled in that paper. Our future goal is a detailed comparison of the constructions of @cite_6 and ours.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2039020269"
],
"abstract": [
"Abstract We present an abstract framework for default reasoning, which includes Theorist, default logic, logic programming, autoepistemic logic, non-monotonic modal logics, and certain instances of circumscription as special cases. The framework can be understood as a generalisation of Theorist. The generalisation allows any theory formulated in a monotonic logic to be extended by a defeasible set of assumptions. An assumption can be defeated (or “attacked”) if its “contrary” can be proved, possibly with the aid of other conflicting assumptions. We show that, given such a framework, the standard semantics of most logics for default reasoning can be understood as sanctioning a set of assumptions, as an extension of a given theory, if and only if the set of assumptions is conflict-free (in the sense that it does not attack itself) and it attacks every assumption not in the set. We propose a more liberal, argumentation-theoretic semantics, based upon the notion of admissible extension in logic programming. We regard a set of assumptions, in general, as admissible if and only if it is conflict-free and defends itself (by attacking) every set of assumptions which attacks it. We identify conditions for the existence of extensions and for the equivalence of different semantics."
]
}
|
1108.5281
|
1632754975
|
There are various interesting semantics (extensions) designed for argumentation frameworks. They make it possible to assign a meaning, e.g., to odd-length cycles. Our main motivation is to transfer the semantics proposed by Baroni, Giacomin and Guida for argumentation frameworks with odd-length cycles to logic programs with odd-length cycles through default negation. The developed construction is even stronger. For a given logic program an argumentation framework is defined. The construction enables the transfer of each semantics of the resulting argumentation framework to a semantics of the given logic program. Weak points of the construction are discussed and some future continuations of this approach are outlined.
|
The correspondence between complete extensions in abstract argumentation and 3-valued stable models in logic programming was studied in @cite_0 .
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2091124158"
],
"abstract": [
"In this paper, we prove the correspondence between complete extensions in abstract argumentation and 3-valued stable models in logic programming. This result is in line with earlier work of [6] that identified the correspondence between the grounded extension in abstract argumentation and the well-founded model in logic programming, as well as between the stable extensions in abstract argumentation and the stable models in logic programming."
]
}
|
1108.5281
|
1632754975
|
There are various interesting semantics (extensions) designed for argumentation frameworks. They make it possible to assign a meaning, e.g., to odd-length cycles. Our main motivation is to transfer the semantics proposed by Baroni, Giacomin and Guida for argumentation frameworks with odd-length cycles to logic programs with odd-length cycles through default negation. The developed construction is even stronger. For a given logic program an argumentation framework is defined. The construction enables the transfer of each semantics of the resulting argumentation framework to a semantics of the given logic program. Weak points of the construction are discussed and some future continuations of this approach are outlined.
|
The project "New Methods for Analyzing, Comparing, and Solving Argumentation Problems" (see, e.g., @cite_7 @cite_11 @cite_15 ) focuses on implementations of argumentation frameworks in Answer-Set Programming, but fundamental theoretical questions are also addressed, including the study of CF2 semantics. Within the project, ASPARTIX, an Answer Set Programming Argumentation Reasoning Tool, has been developed.
|
{
"cite_N": [
"@cite_15",
"@cite_7",
"@cite_11"
],
"mid": [
"66062313",
"1570778371",
"2037950528"
],
"abstract": [
"Abstract argumentation frameworks nowadays provide the most popular formalization of argumentation on a conceptual level. Numerous semantics for this paradigm have been proposed, whereby cf2 semantics has shown to nicely solve particular problems concerned with odd-length cycles in such frameworks. In order to compare different semantics not only on a theoretical basis, it is necessary to provide systems which implement them within a uniform platform. Answer-Set Programming (ASP) turned out to be a promising direction for this aim, since it not only allows for a concise representation of concepts inherent to argumentation semantics, but also offers sophisticated off-the-shelf solvers which can be used as core computation engines. In fact, many argumentation semantics have meanwhile been encoded within the ASP paradigm, but not all relevant semantics, among them cf2 semantics, have yet been considered. The contributions of this work are thus twofold. Due to the particular nature of cf2 semantics, we first provide an alternative characterization which, roughly speaking, avoids the recursive computation of sub-frameworks. Then, we provide the concrete ASP-encodings, which are incorporated within the ASPARTIX system, a platform which already implements a wide range of semantics for abstract argumentation.",
"The system ASPARTIX is a tool for computing acceptable extensions for a broad range of formalizations of Dung's argumentation framework and generalizations thereof. ASPARTIX relies on a fixed disjunctive datalog program which takes an instance of an argumentation framework as input, and uses the answer-set solver DLV for computing the type of extension specified by the user.",
"Answer-set programming (ASP) has emerged as a declarative programming paradigm where problems are encoded as logic programs, such that the so-called answer sets of these programs represent the solutions of the encoded problem. The efficiency of the latest ASP solvers reached a state that makes them applicable for problems of practical importance. Consequently, problems from many different areas, including diagnosis, data integration, and graph theory, have been successfully tackled via ASP. In this work, we present such ASP-encodings for problems associated to abstract argumentation frameworks (AFs) and generalisations thereof. Our encodings are formulated as fixed queries, such that the input is the only part depending on the actual AF to process. We illustrate the functioning of this approach, which is underlying a new argumentation system called ASPARTIX in detail and show its adequacy in terms of computational complexity."
]
}
|
1108.5281
|
1632754975
|
There are various interesting semantics (extensions) designed for argumentation frameworks. They make it possible to assign a meaning, e.g., to odd-length cycles. Our main motivation is to transfer the semantics proposed by Baroni, Giacomin and Guida for argumentation frameworks with odd-length cycles to logic programs with odd-length cycles through default negation. The developed construction is even stronger: for a given logic program an argumentation framework is defined, and the construction enables each semantics of the resulting argumentation framework to be transferred to a semantics of the given logic program. Weak points of the construction are discussed and some future continuations of this approach are outlined.
|
The Mexican group @cite_12 @cite_1 @cite_17 @cite_2 @cite_3 @cite_4 also contributes to research on the relations between logic programming and argumentation frameworks. Their attention is devoted to characterizations of argumentation semantics in terms of logic programming semantics. A characterization of CF2 is also provided, in terms of answer set models or of the stratified argumentation semantics, which is based on stratified minimal models of logic programs.
|
{
"cite_N": [
"@cite_4",
"@cite_1",
"@cite_3",
"@cite_2",
"@cite_12",
"@cite_17"
],
"mid": [
"1795525183",
"2130003285",
"2399019821",
"2395246457",
"2178852070",
"1533980545"
],
"abstract": [
"Extension-based argumentation semantics have shown to be a suitable approach for performing practical reasoning. Since extension-based argumentation semantics were formalized in terms of relationships between atomic arguments, it has been shown that extension-based argumentation semantics based on admissible sets such as stable semantics can be characterized in terms of answer sets. In this paper, we present an approach for characterizing SCC-recursive semantics in terms of answer set models. In particular, we will show a characterization of CF2 in terms of answer set models. This result suggests that not only extension-based argumentation semantics based on admissible sets can be characterized in terms of answer sets; but also extension-based argumentation semantics based on Strongly Connected Components can be characterized in terms of answer sets.",
"Given an argumentation framework AF, we introduce a mapping function that constructs a disjunctive logic program P, such that the preferred extensions of AF correspond to the stable models of P, after intersecting each stable model with the relevant atoms. The given mapping function is of polynomial size w.r.t. AF. In particular, we identify that there is a direct relationship between the minimal models of a propositional formula and the preferred extensions of an argumentation framework by working on representing the defeated arguments. Then we show how to infer the preferred extensions of an argumentation framework by using UNSAT algorithms and disjunctive stable model solvers. The relevance of this result is that we define a direct relationship between one of the most satisfactory argumentation semantics and one of the most successful approach of nonmonotonic reasoning i.e., logic programming with the stable model semantics.",
"Extension-based argumentation semantics have been shown to be a suitable approach for performing practical reasoning. Since extension-based argumentation semantics were formalized in terms of relationships between atomic arguments, it has been shown that extension-based argumentation semantics (such as the grounded semantics and stable semantics) can be characterized by logic programming semantics with negation as failure. Recently, it has been shown that argumentation semantics such as the preferred semantics and the CF2 semantics can be characterized in terms of logic programming semantics. In this paper, we make a short overview w.r.t. recent results in the close relationship between extension-based semantics and logic programming semantics with negation as failure. We also show that there is enough evidence to believe that the use of declarative approaches based on logic programming semantics with negation as failure is a practical approach for performing practical reasoning following an argumentation reasoning approach.",
"It is well-known, in the area of argumentation theory, that there is a direct relationship between extension-based argumentation semantics and logic programming semantics with negation as failure. One of the main implication of this relationship is that one can explore the implementation of argumentation engines by considering logic programming solvers. Recently, it was proved that the argumentation semantics CF2 can be characterized by the stratified minimal model semantics (MM). The stratified minimal model semantics is also a recently introduced logic programming semantics which is based on a recursive construction and minimal models. In this paper, we introduce a solver based on MINISAT algorithm for inferring the logic programming semantics MM∗. As one of the applications of the MM solver, we will argue that this solver is a suitable tool for computing the argumentation semantics CF2.",
"A polyamide resin composition comprises melamine cyanurate with or without a copper compound, an alkali metal halide, a tin compound, a bisamide compound or a bisureido compound.",
"Extension-based argumentation semantics is a successful approach for performing non-monotonic reasoning based on argumentation theory. An interesting property of some extension-based argumentation semantics is that these semantics can be characterized in terms of logic programming semantics. In this paper, we present novel results in this topic. In particular, we show that one can induce an argumentation semantics (that we call Stratified Argumentation Semantics) based on a logic programming semantics that is based on stratified minimal models. We show that the stratified argumentation semantics overcome some problems of extension-based argumentation semantics based on admissible sets and we show that it coincides with the argumentation semantics CF2."
]
}
|
1108.5890
|
2016860055
|
In this paper we present a cooperative medium access control (MAC) protocol that is designed for a physical layer that can decode interfering transmissions in distributed wireless networks. The proposed protocol pro-actively enforces two independent packet transmissions to interfere in a controlled and cooperative manner. The protocol ensures that when a node desires to transmit a unicast packet, regardless of the destination, it coordinates with minimal overhead with relay nodes in order to concurrently transmit over the wireless channel with a third node. The relay is responsible for allowing packets from the two selected nodes to interfere only when the desired packets can be decoded at the appropriate destinations and increase the sum-rate of the cooperative transmission. In case this is not feasible, classic cooperative or direct transmission is adopted. To enable distributed, uncoordinated, and adaptive operation of the protocol, a relay selection mechanism is introduced so that the optimal relay is selected dynamically and depending on the channel conditions. The most important advantage of the protocol is that interfering transmissions can originate from completely independent unicast transmissions from two senders. We present simulation results that validate the efficacy of our proposed scheme in terms of throughput and delay.
|
When we consider MAC issues in scenarios where ANC is employed, even fewer works exist. One of the most interesting is the work by Boppana and Shea, who proposed the overlapped CSMA protocol @cite_3 . The main task of that protocol is to estimate the level of secondary interfering transmission that a primary transmission can sustain, given perfect knowledge of the signal that will cause the interference. The protocol requires significant signaling overhead in order to propagate RTS/CTS messages at least two hops and notify the secondary sender whether it is allowed to proceed. Nevertheless, primary and secondary transmissions do not interfere with each other. The work by Zhang @cite_11 proposed a similar idea. Very recently, Khabbazian @cite_12 proposed the design of a probabilistic MAC based on ANC, but only at a theoretical level.
|
{
"cite_N": [
"@cite_12",
"@cite_3",
"@cite_11"
],
"mid": [
"2049685192",
"2171557100",
"2018528670"
],
"abstract": [
"Most medium access control (MAC) mechanisms discard collided packets and consider interference harmful. Recent work on Analog Network Coding (ANC) suggests a different approach, in which multiple interfering transmissions are strategically scheduled. Receiving nodes collect the results of collisions and then use a decoding process, such as ZigZag decoding, to extract the packets involved in the collisions. In this paper, we present an algebraic representation of collisions and describe a general approach to recovering collisions using ANC. To study the effects of using ANC on the performance of MAC layers, we develop an ANC-based MAC algorithm, CMAC, and analyze its performance in terms of probabilistic latency guarantees for local packet delivery. Specifically, we prove that CMAC implements an abstract MAC layer service, as defined in [14, 13]. This study shows that ANC can significantly improve the performance of the abstract MAC layer service compared to conventional probabilistic transmission approaches. We illustrate how this improvement in the MAC layer can translate into faster higher-level algorithms, by analyzing the time complexity of a multi-message network-wide broadcast algorithm that uses CMAC.",
"In wireless ad hoc networks (WANets), multihop routing may result in a radio knowing the content of transmissions of nearby radios. This knowledge can be used to improve spatial reuse in the network, thereby enhancing network throughput. Consider two radios, Alice and Bob, that are neighbors in a WANet not employing spread-spectrum multiple access. Suppose that Alice transmits a packet to Bob for which Bob is not the final destination. Later, Bob forwards that packet on to the destination. Any transmission by Bob not intended for Alice usually causes interference that prevents Alice from receiving a packet from any of her neighbors. However, if Bob is transmitting a packet that he previously received from Alice, then Alice knows the content of the interfering packet, and this knowledge can allow Alice to receive a packet from one of her neighbors during Bob's transmission. In this paper, we develop overlapped transmission techniques based on this idea and analyze several factors affecting their performance. We then develop a MAC protocol based on the IEEE 802.11 standard to support overlapped transmission in a WANet. The resulting overlapped CSMA (OCSMA) protocol improves spatial reuse and end-to-end throughput in several scenarios.",
"In this paper, we consider using simultaneous Multiple Packet Transmission (MPT) to improve the downlink performance of wireless networks. With MPT, the sender can send two compatible packets simultaneously to two distinct receivers and can double the throughput in the ideal case. We formalize the problem of finding a schedule to send out buffered packets in minimum time as finding a maximum matching problem in a graph. Since maximum matching algorithms are relatively complex and may not meet the timing requirements of real-time applications, we give a fast approximation algorithm that is capable of finding a matching at least 3/4 of the size of a maximum matching in O(|E|) time, where |E| is the number of edges in the graph. We also give analytical bounds for maximum allowable arrival rate, which measures the speedup of the downlink after enhanced with MPT, and our results show that the maximum arrival rate increases significantly even with a very small compatibility probability. We also use an approximate analytical model and simulations to study the average packet delay, and our results show that packet delay can be greatly reduced even with a very small compatibility probability."
]
}
|
1108.4572
|
2949193553
|
When designing a product that needs to fit the human shape, designers often use a small set of 3D models, called design models, either in physical or digital form, as representative shapes to cover the shape variabilities of the population for which the products are designed. Until recently, the process of creating these models has been an art involving manual interaction and empirical guesswork. The availability of the 3D anthropometric databases provides an opportunity to create design models optimally. In this paper, we propose a novel way to use 3D anthropometric databases to generate design models that represent a given population for design applications such as the sizing of garments and gear. We generate the representative shapes by solving a covering problem in a parameter space. Well-known techniques in computational geometry are used to solve this problem. We demonstrate the method using examples in designing glasses and helmets.
|
@cite_15 used the traditional sparse anthropometric measurements to automatically create an apparel sizing system that has good fit. For a fixed set of measurements on the body, a fixed number of sizes, and a fixed percentage of the population that needs to be fitted by the sizing system, they optimize the fit of the sizing system. The fit for a human body is defined by a weighted distance function between the measurements on the body and the measurements used for the sizing system. Optimizing the fit amounts to solving a non-linear optimization system, which is hard to solve, and suboptimal solutions may be obtained. This approach can operate in multi-dimensional spaces. It is not straightforward to obtain design models from the resulting sizing system, because computing a design model from a sparse set of measurements is an under-constrained problem.
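The weighted-distance notion of fit used there can be sketched concretely. In this hypothetical example, each body and each size is a short vector of measurements, and a body is assigned to the size minimizing the weighted distance; the cited method additionally optimizes the size vectors themselves, which is the hard non-linear part and is not reproduced here.

```python
# Hypothetical sketch of fit as a weighted distance between body measurements
# and the measurements of a size; all numbers below are made up.
import math

def fit(body, size, weights):
    """Weighted Euclidean distance between a body and a size."""
    return math.sqrt(sum(w * (b - s) ** 2 for b, s, w in zip(body, size, weights)))

def assign_sizes(bodies, sizes, weights):
    """Assign each body to the size that minimizes its weighted distance."""
    return [min(range(len(sizes)), key=lambda i: fit(b, sizes[i], weights))
            for b in bodies]

# Two measurements per body (say, chest and waist in cm), two candidate sizes.
bodies = [(96.0, 80.0), (104.0, 92.0), (99.0, 84.0)]
sizes = [(98.0, 82.0), (106.0, 94.0)]
weights = (1.0, 0.5)
print(assign_sizes(bodies, sizes, weights))  # → [0, 1, 0]
```

Weighting lets a designer emphasize the measurements where a misfit matters most for the specific garment.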
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"2157100308"
],
"abstract": [
"A novel approach for the construction of apparel sizing systems is formulated. As a first step to this process, efficient sizing systems are defined based on a mathematical model of garment fit. Nonlinear optimisation techniques are then used to derive a set of possible sizing systems using multidimensional information from anthropometric data. The method is illustrated by developing a sizing system designed for a dress shirt of a military uniform using anthropometric data from the US Army. Results of this analysis show that endogenous size assignment and selection of disaccommodated individuals, together with relaxation of the requirement of a ‘stepwise’ size structure, results in substantial improvements in fit over an existing sizing system. The proposed methodology enables the development of sizing systems that can either increase accommodation of the population, reduce the number of sizes in the system, or improve overall fit in accommodated individuals."
]
}
|
1108.4572
|
2949193553
|
When designing a product that needs to fit the human shape, designers often use a small set of 3D models, called design models, either in physical or digital form, as representative shapes to cover the shape variabilities of the population for which the products are designed. Until recently, the process of creating these models has been an art involving manual interaction and empirical guesswork. The availability of the 3D anthropometric databases provides an opportunity to create design models optimally. In this paper, we propose a novel way to use 3D anthropometric databases to generate design models that represent a given population for design applications such as the sizing of garments and gear. We generate the representative shapes by solving a covering problem in a parameter space. Well-known techniques in computational geometry are used to solve this problem. We demonstrate the method using examples in designing glasses and helmets.
|
Mochimaru and Kouchi @cite_13 represented each model in a database of human shapes using a set of manually placed landmark positions. They proposed an approach to find representative three-dimensional body shapes. The approach first reduces the dimensionality of the data using multi-dimensional scaling, and then uses Principal Component Analysis (PCA) to find representative shapes. Mochimaru and Kouchi showed that this approach is suitable to find representative shapes of a human foot. While this approach is fully automatic, it assumes that the distortion introduced by multi-dimensional scaling is small. While this may be true for low-dimensional data, it is not in general true when three-dimensional measurements of high resolution are considered. Hence, there is no guarantee that the design models optimally represent the population.
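As a rough illustration of the dimensionality-reduction step, the sketch below computes the dominant principal direction of a few made-up landmark vectors by power iteration; the actual method embeds FFD-based inter-shape distances with multi-dimensional scaling first, which is omitted here.

```python
# Pure-Python sketch: dominant principal direction of landmark vectors via
# power iteration. The landmark data is invented for illustration.

def mean(vs):
    n = len(vs)
    return [sum(v[i] for v in vs) / n for i in range(len(vs[0]))]

def principal_direction(vs, iters=200):
    m = mean(vs)
    centered = [[x - mi for x, mi in zip(v, m)] for v in vs]
    d = [1.0] * len(m)  # starting vector for power iteration
    for _ in range(iters):
        # Apply the covariance matrix implicitly: C d = sum over samples (c . d) c
        nd = [0.0] * len(m)
        for c in centered:
            s = sum(ci * di for ci, di in zip(c, d))
            nd = [a + s * ci for a, ci in zip(nd, c)]
        norm = sum(x * x for x in nd) ** 0.5
        d = [x / norm for x in nd]
    return d

# Four "shapes", each reduced to two landmark coordinates lying near y = 2x,
# so the dominant direction is approximately (1, 2) / sqrt(5).
shapes = [[1.0, 2.0], [2.0, 4.1], [3.0, 5.9], [4.0, 8.0]]
d = principal_direction(shapes)
```

Forms at chosen scores along such a direction would then serve as the representative shapes.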
|
{
"cite_N": [
"@cite_13"
],
"mid": [
"1995061439"
],
"abstract": [
"A method of calculating representative forms from a given set of forms was developed, in which surface data is modeled by polygons based on landmarks. Inter-individual distances are defined as distortions in FFD control points. By calculating inter-individual distances for all possible pairs of forms, the distribution of the 3D forms in m-dimensional space is obtained using MDS. Each MDS dimension represents an independent shape factor. Forms with specific MDS scores, such as (0.5,0,0,0), (1,0,0,0) in standard deviation units, are calculated as weighted averages of all actual forms. An FFD transformation grid is calculated that represents the systematic form transformation along an MDS dimension. Forms with different scores for the first or second MDS dimensions only and average scores (=0) for the other MDS dimensions are calculated using these transformation grids."
]
}
|
1108.4572
|
2949193553
|
When designing a product that needs to fit the human shape, designers often use a small set of 3D models, called design models, either in physical or digital form, as representative shapes to cover the shape variabilities of the population for which the products are designed. Until recently, the process of creating these models has been an art involving manual interaction and empirical guesswork. The availability of the 3D anthropometric databases provides an opportunity to create design models optimally. In this paper, we propose a novel way to use 3D anthropometric databases to generate design models that represent a given population for design applications such as the sizing of garments and gear. We generate the representative shapes by solving a covering problem in a parameter space. Well-known techniques in computational geometry are used to solve this problem. We demonstrate the method using examples in designing glasses and helmets.
|
We propose a fully automatic method to compute design models that represent a given 3D anthropometric database well. Since the approach computes the design models automatically, it can operate in high-dimensional spaces. Unlike the method by @cite_15 , our method does not rely on solving a non-linear optimization system. Instead, we model the fit explicitly using a set of tolerances (one tolerance along each dimension) that specifies by how much the garment or gear can be adjusted along each dimension. These tolerances depend directly on the design and the materials that are used in a specific application. In this way, our method finds optimal design models for a specific task such as helmet design. To our knowledge, this is the first method that simultaneously considers optimal fit accommodation and design models.
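The tolerance-based covering idea can be sketched as a tiny set-cover instance: a design model accommodates a body when every parameter lies within the per-dimension tolerance, and models are chosen greedily until the population is covered. All parameter values and tolerances below are invented for illustration.

```python
# Toy sketch of tolerance-based covering: a model accommodates a body when
# every parameter is within the per-dimension tolerance. Values are made up.

def covers(model, body, tol):
    return all(abs(m - b) <= t for m, b, t in zip(model, body, tol))

def greedy_cover(bodies, candidates, tol):
    uncovered = set(range(len(bodies)))
    chosen = []
    while uncovered:
        # Pick the candidate covering the most still-uncovered bodies.
        best = max(candidates,
                   key=lambda c: sum(covers(c, bodies[i], tol) for i in uncovered))
        hit = {i for i in uncovered if covers(best, bodies[i], tol)}
        if not hit:
            break  # remaining bodies cannot be accommodated by any candidate
        chosen.append(best)
        uncovered -= hit
    return chosen, uncovered

bodies = [(54.0,), (55.5,), (58.0,), (59.0,)]  # e.g., head circumference in cm
candidates = [(55.0,), (58.5,)]                # candidate helmet sizes
chosen, missed = greedy_cover(bodies, candidates, tol=(1.5,))
```

Here the tolerance plays exactly the role described in the text: widening it (a more adjustable design) lets fewer design models cover the same population.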
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"2157100308"
],
"abstract": [
"A novel approach for the construction of apparel sizing systems is formulated. As a first step to this process, efficient sizing systems are defined based on a mathematical model of garment fit. Nonlinear optimisation techniques are then used to derive a set of possible sizing systems using multidimensional information from anthropometric data. The method is illustrated by developing a sizing system designed for a dress shirt of a military uniform using anthropometric data from the US Army. Results of this analysis show that endogenous size assignment and selection of disaccommodated individuals, together with relaxation of the requirement of a ‘stepwise’ size structure, results in substantial improvements in fit over an existing sizing system. The proposed methodology enables the development of sizing systems that can either increase accommodation of the population, reduce the number of sizes in the system, or improve overall fit in accommodated individuals."
]
}
|
1108.4983
|
263868461
|
We consider the monotone submodular k-set packing problem in the context of the more general problem of maximizing a monotone submodular function in a k-exchange system. These systems, introduced by [Feldman,2011], generalize the matroid k-parity problem in a wide class of matroids and capture many other combinatorial optimization problems. We give a deterministic, non-oblivious local search algorithm that attains an approximation ratio of (k + 3)/2 + epsilon for the problem of maximizing a monotone submodular function in a k-exchange system, improving on the best known result of k + epsilon, and answering an open question posed by
|
In the case of an arbitrary single matroid constraint, a @math approximation for monotone submodular maximization has been attained. This result is tight, provided that @math @cite_11 . In the case of @math simultaneous matroid constraints, an early result of Fisher, Nemhauser, and Wolsey @cite_6 shows that the standard greedy algorithm attains a @math approximation for monotone submodular maximization. They state further that the result can be generalized to @math -systems (a full proof appears in @cite_5 ). More recently, Lee, Sviridenko, and Vondrák @cite_12 improved this result to give a @math approximation for monotone submodular maximization over @math arbitrary matroid constraints, via a simple, oblivious local search algorithm. A similar analysis was used by @cite_16 to show that oblivious local search attains a @math approximation for the class of @math -exchange systems (here, again, @math ). For the more general class of @math -systems, @cite_0 give a @math approximation, where @math is the best known approximation ratio for unconstrained non-monotone submodular maximization.
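The greedy rule analyzed by Fisher, Nemhauser, and Wolsey is simple enough to sketch directly. Below is a minimal Python illustration, specialized to a single cardinality constraint (the simplest independence system); the coverage objective and the sets it is built from are invented for the example.

```python
# Minimal sketch of the standard greedy for monotone submodular maximization,
# specialized to a cardinality constraint. The coverage objective below is a
# made-up example of a monotone submodular function.

def greedy_submodular(ground_set, f, k):
    """Pick up to k elements, each time adding the largest marginal gain."""
    S = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for e in sorted(ground_set - S):
            gain = f(S | {e}) - f(S)  # marginal value of adding e to S
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no element has positive marginal gain
            break
        S.add(best)
    return S

# Coverage objective: f(S) = number of items covered by the chosen sets.
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 6}}
f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
chosen = greedy_submodular(set(sets), f, 2)  # → {"a", "c"}, covering all 6 items
```

Under @math matroid constraints the same rule is applied while keeping the chosen set independent in every matroid, which is where the @math guarantee of @cite_6 comes from.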
|
{
"cite_N": [
"@cite_6",
"@cite_0",
"@cite_5",
"@cite_16",
"@cite_12",
"@cite_11"
],
"mid": [
"2757107770",
"1503396090",
"2621717961",
"1883792619",
"2802297243",
"2143996311"
],
"abstract": [
"Let N be a finite set and z be a real-valued function defined on the set of subsets of N that satisfies z(S) + z(T) ≥ z(S ∪ T) + z(S ∩ T) for all S, T ⊆ N. Such a function is called submodular. We consider the problem max {z(S) : S ⊆ N, |S| ≤ K, z submodular}. Several hard combinatorial optimization problems can be posed in this framework. For example, the problem of finding a maximum weight independent set in a matroid, when the elements of the matroid are colored and the elements of the independent set can have no more than K colors, is in this class. The uncapacitated location problem is a special case of this matroid optimization problem. We analyze greedy and local improvement heuristics and a linear programming relaxation for this problem. Our results are worst case bounds on the quality of the approximations. For example, when z(S) is nondecreasing and z(∅) = 0, we show that a “greedy” heuristic always produces a solution whose value is at least 1 - [(K - 1)/K]^K times the optimal value. This bound can be achieved for each K and has a limiting value of (e - 1)/e, where e is the base of the natural logarithm.",
"Constrained submodular maximization problems have long been studied, most recently in the context of auctions and computational advertising, with near-optimal results known under a variety of constraints when the submodular function is monotone. In this paper, we give constant approximation algorithms for the non-monotone case that work for p-independence systems (which generalize constraints given by the intersection of p matroids that had been studied previously), where the running time is poly(n, p). Our algorithms and analyses are simple, and essentially reduce non-monotone maximization to multiple runs of the greedy algorithm previously used in the monotone case. We extend these ideas to give a simple greedy-based constant factor algorithms for non-monotone submodular maximization subject to a knapsack constraint, and for (online) secretary setting (where elements arrive one at a time in random order and the algorithm must make irrevocable decisions) subject to uniform matroid or a partition matroid constraint. Finally, we give an O(log k) approximation in the secretary setting subject to a general matroid constraint of rank k.",
"",
"Submodular maximization and set systems play a major role in combinatorial optimization. It is long known that the greedy algorithm provides a 1/(k + 1)-approximation for maximizing a monotone submodular function over a k-system. For the special case of k-matroid intersection, a local search approach was recently shown to provide an improved approximation of 1/(k + δ) for arbitrary δ > 0. Unfortunately, many fundamental optimization problems are represented by a k-system which is not a k-intersection. An interesting question is whether the local search approach can be extended to include such problems. We answer this question affirmatively. Motivated by the b-matching and k-set packing problems, as well as the more general matroid k-parity problem, we introduce a new class of set systems called k-exchange systems, that includes k-set packing, b-matching, matroid k-parity in strongly base orderable matroids, and additional combinatorial optimization problems such as: independent set in (k+1)-claw free graphs, asymmetric TSP, job interval selection with identical lengths and frequency allocation on lines. We give a natural local search algorithm which improves upon the current greedy approximation, for this new class of independence systems. Unlike known local search algorithms for similar problems, we use counting arguments to bound the performance of our algorithm. Moreover, we consider additional objective functions and provide improved approximations for them as well. In the case of linear objective functions, we give a non-oblivious local search algorithm, that improves upon existing local search approaches for matroid k-parity.",
"Submodular function maximization is a central problem in combinatorial optimization, generalizing many important NP-hard problems including max cut in digraphs, graphs, and hypergraphs; certain constraint satisfaction problems; maximum entropy sampling; and maximum facility location problems. Our main result is that for any k ≥ 2 and any e > 0, there is a natural local search algorithm that has approximation guarantee of 1/(k + e) for the problem of maximizing a monotone submodular function subject to k matroid constraints. This improves upon the 1/(k + 1)-approximation of Fisher, Nemhauser, and Wolsey obtained in 1978 [Fisher, M., G. Nemhauser, L. Wolsey. 1978. An analysis of approximations for maximizing submodular set functions---II. Math. Programming Stud. 8 73--87]. Also, our analysis can be applied to the problem of maximizing a linear objective function and even a general nonmonotone submodular function subject to k matroid constraints. We show that, in these cases, the approximation guarantees of our algorithms are 1/(k - 1 + e) and 1/(k + 1 + 1/(k - 1) + e), respectively. Our analyses are based on two new exchange properties for matroids. One is a generalization of the classical Rota exchange property for matroid bases, and another is an exchange property for two matroids based on the structure of matroid intersection.",
"Given a collection F of subsets of S = {1, …, n}, set cover is the problem of selecting as few as possible subsets from F such that their union covers S, and max k-cover is the problem of selecting k subsets from F such that their union has maximum cardinality. Both these problems are NP-hard. We prove that (1 - o(1)) ln n is a threshold below which set cover cannot be approximated efficiently, unless NP has slightly superpolynomial time algorithms. This closes the gap (up to low-order terms) between the ratio of approximation achievable by the greedy algorithm (which is (1 - o(1)) ln n), and previous results of Lund and Yannakakis, that showed hardness of approximation within a ratio of (log_2 n)/2 ≈ 0.72 ln n. For max k-cover, we show an approximation threshold of (1 - 1/e) (up to low-order terms), under the assumption that P ≠ NP."
]
}
|
1108.4983
|
263868461
|
We consider the monotone submodular k-set packing problem in the context of the more general problem of maximizing a monotone submodular function in a k-exchange system. These systems, introduced by [Feldman, 2011], generalize the matroid k-parity problem in a wide class of matroids and capture many other combinatorial optimization problems. We give a deterministic, non-oblivious local search algorithm that attains an approximation ratio of (k + 3)/2 + epsilon for the problem of maximizing a monotone submodular function in a k-exchange system, improving on the best known result of k + epsilon, and answering an open question posed by
|
In the case of unconstrained non-monotone submodular maximization, Feige, Mirrokni, and Vondrák @cite_1 gave a randomized @math approximation, which was iteratively improved by Gharan and Vondrák @cite_13 and then Feldman, Naor, and Schwartz @cite_14 to @math . For non-monotone maximization subject to @math matroid constraints, Lee, Sviridenko, and Vondrák @cite_8 gave a @math approximation, and later improved this @cite_12 to a @math approximation. Again, the latter result is obtained by a standard local search algorithm. @cite_16 apply similar techniques to yield a @math approximation for non-monotone submodular maximization in the general class of @math -exchange systems.
|
{
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_1",
"@cite_16",
"@cite_13",
"@cite_12"
],
"mid": [
"1542515633",
"2026338082",
"2157316274",
"1883792619",
"1542719709",
"2802297243"
],
"abstract": [
"Consider a suboptimal solution S for a maximization problem. Suppose S's value is small compared to an optimal solution OPT to the problem, yet S is structurally similar to OPT. A natural question in this setting is whether there is a way of improving S based solely on this information. In this paper we introduce the Structural Continuous Greedy Algorithm, answering this question affirmatively in the setting of the Nonmonotone Submodular Maximization Problem. We improve on the best approximation factor known for this problem. In the Nonmonotone Submodular Maximization Problem we are given a non-negative submodular function f, and the objective is to find a subset maximizing f. Our method yields an 0.42-approximation for this problem, improving on the current best approximation factor of 0.41 given by Gharan and Vondrak [5]. On the other hand, [4] showed a lower bound of 0.5 for this problem.",
"Submodular function maximization is a central problem in combinatorial optimization, generalizing many important problems including Max Cut in directed/undirected graphs and in hypergraphs, certain constraint satisfaction problems, maximum entropy sampling, and maximum facility location problems. Unlike submodular minimization, submodular maximization is NP-hard. In this paper, we give the first constant-factor approximation algorithm for maximizing any non-negative submodular function subject to multiple matroid or knapsack constraints. We emphasize that our results are for non-monotone submodular functions. In particular, for any constant k, we present a (1/(k+2+1/k+e))-approximation for the submodular maximization problem under k matroid constraints, and a (1/5-e)-approximation algorithm for this problem subject to k knapsack constraints (e>0 is any constant). We improve the approximation guarantee of our algorithm to 1/(k+1+1/(k-1)+e) for k≥2 partition matroid constraints. This idea also gives a (1/(k+e))-approximation for maximizing a monotone submodular function subject to k≥2 partition matroids, which improves over the previously best known guarantee of 1/(k+1).",
"Submodular maximization generalizes many important problems including Max Cut in directed/undirected graphs and hypergraphs, certain constraint satisfaction problems and maximum facility location problems. Unlike the problem of minimizing submodular functions, the problem of maximizing submodular functions is NP-hard.",
"Submodular maximization and set systems play a major role in combinatorial optimization. It is long known that the greedy algorithm provides a 1/(k + 1)-approximation for maximizing a monotone submodular function over a k-system. For the special case of k-matroid intersection, a local search approach was recently shown to provide an improved approximation of 1/(k + δ) for arbitrary δ > 0. Unfortunately, many fundamental optimization problems are represented by a k-system which is not a k-intersection. An interesting question is whether the local search approach can be extended to include such problems. We answer this question affirmatively. Motivated by the b-matching and k-set packing problems, as well as the more general matroid k-parity problem, we introduce a new class of set systems called k-exchange systems, that includes k-set packing, b-matching, matroid k-parity in strongly base orderable matroids, and additional combinatorial optimization problems such as: independent set in (k+1)-claw free graphs, asymmetric TSP, job interval selection with identical lengths and frequency allocation on lines. We give a natural local search algorithm which improves upon the current greedy approximation, for this new class of independence systems. Unlike known local search algorithms for similar problems, we use counting arguments to bound the performance of our algorithm. Moreover, we consider additional objective functions and provide improved approximations for them as well. In the case of linear objective functions, we give a non-oblivious local search algorithm that improves upon existing local search approaches for matroid k-parity.",
"We consider the problem of maximizing a nonnegative (possibly non-monotone) submodular set function with or without constraints. [9] showed a 2/5-approximation for the unconstrained problem and also proved that no approximation better than 1/2 is possible in the value oracle model. Constant-factor approximation has been also known for submodular maximization subject to a matroid independence constraint (a factor of 0.309 [33]) and for submodular maximization subject to a matroid base constraint, provided that the fractional base packing number ν is bounded away from 1 (a 1/4-approximation assuming that ν ≥ 2 [33]). In this paper, we propose a new algorithm for submodular maximization which is based on the idea of simulated annealing. We prove that this algorithm achieves improved approximation for two problems: a 0.41-approximation for unconstrained submodular maximization, and a 0.325-approximation for submodular maximization subject to a matroid independence constraint. On the hardness side, we show that in the value oracle model it is impossible to achieve a 0.478-approximation for submodular maximization subject to a matroid independence constraint, or a 0.394-approximation subject to a matroid base constraint in matroids with two disjoint bases. Even for the special case of cardinality constraint, we prove it is impossible to achieve a 0.491-approximation. (Previously it was conceivable that a 1/2-approximation exists for these problems.) It is still an open question whether a 1/2-approximation is possible for unconstrained submodular maximization.",
"Submodular function maximization is a central problem in combinatorial optimization, generalizing many important NP-hard problems including max cut in digraphs, graphs, and hypergraphs; certain constraint satisfaction problems; maximum entropy sampling; and maximum facility location problems. Our main result is that for any k ≥ 2 and any e > 0, there is a natural local search algorithm that has approximation guarantee of 1/(k + e) for the problem of maximizing a monotone submodular function subject to k matroid constraints. This improves upon the 1/(k + 1)-approximation of Fisher, Nemhauser, and Wolsey obtained in 1978 [Fisher, M., G. Nemhauser, L. Wolsey. 1978. An analysis of approximations for maximizing submodular set functions---II. Math. Programming Stud. 8 73--87]. Also, our analysis can be applied to the problem of maximizing a linear objective function and even a general nonmonotone submodular function subject to k matroid constraints. We show that, in these cases, the approximation guarantees of our algorithms are 1/(k-1 + e) and 1/(k + 1 + 1/(k-1) + e), respectively. Our analyses are based on two new exchange properties for matroids. One is a generalization of the classical Rota exchange property for matroid bases, and another is an exchange property for two matroids based on the structure of matroid intersection."
]
}
|
1108.4675
|
1978444629
|
A classic experiment by Milgram shows that individuals can route messages along short paths in social networks, given only simple categorical information about recipients (such as "he is a prominent lawyer in Boston" or "she is a Freshman sociology major at Harvard"). That is, these networks have very short paths between pairs of nodes (the so-called small-world phenomenon); moreover, participants are able to route messages along these paths even though each person is only aware of a small part of the network topology. Some sociologists conjecture that participants in such scenarios use a greedy routing strategy in which they forward messages to acquaintances that have more categories in common with the recipient than they do, and similar strategies have recently been proposed for routing messages in dynamic ad-hoc networks of mobile devices. In this paper, we introduce a network property called membership dimension, which characterizes the cognitive load required to maintain relationships between participants and categories in a social network. We show that any connected network has a system of categories that will support greedy routing, but that these categories can be made to have small membership dimension if and only if the underlying network exhibits the small-world phenomenon.
|
Geometric greedy routing @cite_18 @cite_3 uses geographic location rather than categorical data to route messages. In this method, vertices have coordinates in a geometric metric space and messages are routed to any neighbor that is closer to the target's coordinates. Greedy routing may not succeed in certain geometric networks, so a number of techniques have been developed to assist such greedy routing schemes when they fail @cite_22 @cite_8 @cite_13 . Introduced by @cite_14 , virtual coordinates can overcome the shortcomings of real-world coordinates and allow simple greedy forwarding to function without the assistance of fallback algorithms. This approach has been explored by other researchers @cite_4 @cite_0 @cite_20 @cite_21 , who study various network properties that allow for greedy routing to succeed. Several researchers also study the existence of greedy-routing strategies @cite_19 @cite_25 @cite_23 @cite_16 , where the number of bits needed to represent the coordinates of each vertex is polylogarithmic in the size of the network; this notion of succinctness for geometric greedy routing is closely analogous to our definition of the membership dimension for categorical greedy routing.
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_8",
"@cite_21",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_23",
"@cite_16",
"@cite_13",
"@cite_25",
"@cite_20"
],
"mid": [
"1518780012",
"2152809170",
"2169273227",
"2156689181",
"2101963262",
"2152948207",
"1550444125",
"2168630584",
"2109977785",
"1542976050",
"1572848511",
"2085463780",
"",
"1993306206"
],
"abstract": [
"Abstract : Digital packet networking technology is spreading rapidly into the commercial sector. Currently, most networks are isolated local area networks. This isolation is counterproductive. Within the next twenty years it should be possible to connect these networks to one another via a vast inter network. A metropolitan inter network must be capable of connecting many thousand networks and a national one several million. It is difficult to extend current inter networking technology to this scale. Problems include routing and host mobility. This report addressed these problems by developing an algorithm that retains robustness and has desirable commercial characteristics.",
"For many years, scalable routing for wireless communication systems was a compelling but elusive goal. Recently, several routing algorithms that exploit geographic information (e.g. GPSR) have been proposed to achieve this goal. These algorithms refer to nodes by their location, not address, and use those coordinates to route greedily, when possible, towards the destination. However, there are many situations where location information is not available at the nodes, and so geographic methods cannot be used. In this paper we define a scalable coordinate-based routing algorithm that does not rely on location information, and thus can be used in a wide variety of ad hoc and sensornet environments.",
"We conjecture that any planar 3-connected graph can be embedded in the plane in such a way that for any nodes s and t, there is a path from s to t such that the Euclidean distance to t decreases monotonically along the path. A consequence of this conjecture would be that in any ad hoc network containing such a graph as a spanning subgraph, two-dimensional virtual coordinates for the nodes can be found for which the method of purely greedy geographic routing is guaranteed to work. We discuss this conjecture and its equivalent forms, show that its hypothesis is as weak as possible, and show a result delimiting the applicability of our approach: any 3-connected K3,3-free graph has a planar 3-connected spanning subgraph. We also present two alternative versions of greedy routing on virtual coordinates that provably work. Using Steinitz's theorem we show that any 3-connected planar graph can be embedded in three dimensions so that greedy routing works, albeit with a modified notion of distance; we present experimental evidence that this scheme can be implemented effectively in practice. We also present a simple but provably robust version of greedy routing that works for any graph with a 3-connected planar spanning subgraph.",
"We consider routing problems in ad hoc wireless networks modeled as unit graphs in which nodes are points in the plane and two nodes can communicate if the distance between them is less than some fixed unit. We describe the first distributed algorithms for routing that do not require duplication of packets or memory at the nodes and yet guarantee that a packet is delivered to its destination. These algorithms can be extended to yield algorithms for broadcasting and geocasting that do not require packet duplication. A byproduct of our results is a simple distributed protocol for extracting a planar subgraph of a unit graph. We also present simulation results on the performance of our algorithms.",
"We present Greedy Perimeter Stateless Routing (GPSR), a novel routing protocol for wireless datagram networks that uses the positions of routers and a packet's destination to make packet forwarding decisions. GPSR makes greedy forwarding decisions using only information about a router's immediate neighbors in the network topology. When a packet reaches a region where greedy forwarding is impossible, the algorithm recovers by routing around the perimeter of the region. By keeping state only about the local topology, GPSR scales better in per-router state than shortest-path and ad-hoc routing protocols as the number of network destinations increases. Under mobility's frequent topology changes, GPSR can use local topology information to find correct new routes quickly. We describe the GPSR protocol, and use extensive simulation of mobile wireless networks to compare its performance with that of Dynamic Source Routing. Our simulations demonstrate GPSR's scalability on densely deployed wireless networks.",
"We propose a scalable and reliable point-to-point routing algorithm for ad hoc wireless networks and sensor-nets. Our algorithm assigns to each node of the network a virtual coordinate in the hyperbolic plane, and performs greedy geographic routing with respect to these virtual coordinates. Unlike other proposed greedy routing algorithms based on virtual coordinates, our embedding guarantees that the greedy algorithm is always successful in finding a route to the destination, if such a route exists. We describe a distributed algorithm for computing each node's virtual coordinates in the hyperbolic plane, and for greedily routing packets to a destination point in the hyperbolic plane. (This destination may be the address of another node of the network, or it may be an address associated to a piece of content in a Distributed Hash Table. In the latter case we prove that the greedy routing strategy makes a consistent choice of the node responsible for the address, irrespective of the source address of the request.) We evaluate the resulting algorithm in terms of both path stretch and node congestion.",
"Suppose that a traveler arrives in the City of Toronto, and wants to walk to the famous CN-Tower, one of the tallest free-standing structures in the world. Assume now that our visitor, lacking a map of Toronto, is standing at a crossing from which he can see the CN-tower, and several streets S1, . . . , Sm that he can choose to start his walk. A natural (and most likely safe) assumption is that our visitor must choose to walk first along the road that points closest in the direction of the CN-tower, see Figure 1. A close look at maps of numerous cities around the world shows us that the previous way to explore a new and unknown city will in general yield walks that will be close enough to the optimal ones to travel from one location to another. In mathematical terms, we can model the map of many cities by geometric graphs in which street intersections are represented by the vertices of our graphs, and streets by straight line segments. Compass routing on geometric networks, in its most elemental form, yields the following algorithm:",
"Geographic Routing is a family of routing algorithms that uses geographic point locations as addresses for the purposes of routing. Such routing algorithms have proven to be both simple to implement and heuristically effective when applied to wireless sensor networks. Greedy Routing is a natural abstraction of this model in which nodes are assigned virtual coordinates in a metric space, and these coordinates are used to perform point-to-point routing. Here we resolve a conjecture of Papadimitriou and Ratajczak that every 3-connected planar graph admits a greedy embedding into the Euclidean plane. This immediately implies that all 3-connected graphs that exclude K 3,3 as a minor admit a greedy embedding into the Euclidean plane. We also prove a combinatorial condition that guarantees nonembeddability. We use this result to construct graphs that can be greedily embedded into the Euclidean plane, but for which no spanning tree admits such an embedding.",
"In this paper, we presented a fully distributed algorithm to compute a planar subgraph of the underlying wireless connectivity graph. We considered the idealized unit disk graph model in which nodes are assumed to be connected if and only if nodes are within their transmission range. The main contribution of this work is a fully distributed algorithm to extract the connected, planar graph for routing in the wireless networks. The communication cost of the proposed algorithm is O(d log d) bits, where d is the degree of a node. In addition, this paper also presented a geometric routing algorithm. The algorithm is fully distributed and nodes know only the position of other nodes and can communicate with neighboring nodes in their transmission range.",
"We describe a method for producing a greedy embedding of any n-vertex simple graph G in the hyperbolic plane, so that a message M between any pair of vertices may be routed by having each vertex that receives M pass it to a neighbor that is closer to M's destination. Our algorithm produces succinct drawings, where vertex positions are represented using O(log n) bits and distance comparisons may be performed efficiently using these representations.",
"We show that greedy geometric routing schemes exist for the Euclidean metric in R^2, for 3-connected planar graphs, with coordinates that can be represented succinctly, that is, with O(log n) bits, where n is the number of vertices in the graph.",
"All too often a seemingly insurmountable divide between theory and practice can be witnessed. In this paper we try to contribute to narrowing this gap in the field of ad-hoc routing. In particular we consider two aspects: We propose a new geometric routing algorithm which is outstandingly efficient on practical average-case networks, however is also in theory asymptotically worst-case optimal. On the other hand we are able to drop the formerly necessary assumption that the distance between network nodes may not fall below a constant value, an assumption that cannot be maintained for practical networks. Abandoning this assumption we identify from a theoretical point of view two fundamentally different classes of cost metrics for routing in ad-hoc networks.",
"",
"Note: Special Issue on Selected Papers from GD '08; doi:10.7155/jgaa.00197."
]
}
|
1108.4675
|
1978444629
|
A classic experiment by Milgram shows that individuals can route messages along short paths in social networks, given only simple categorical information about recipients (such as "he is a prominent lawyer in Boston" or "she is a Freshman sociology major at Harvard"). That is, these networks have very short paths between pairs of nodes (the so-called small-world phenomenon); moreover, participants are able to route messages along these paths even though each person is only aware of a small part of the network topology. Some sociologists conjecture that participants in such scenarios use a greedy routing strategy in which they forward messages to acquaintances that have more categories in common with the recipient than they do, and similar strategies have recently been proposed for routing messages in dynamic ad-hoc networks of mobile devices. In this paper, we introduce a network property called membership dimension, which characterizes the cognitive load required to maintain relationships between participants and categories in a social network. We show that any connected network has a system of categories that will support greedy routing, but that these categories can be made to have small membership dimension if and only if the underlying network exhibits the small-world phenomenon.
|
Recent work by Mei et al. @cite_12 studies category-based greedy routing as a heuristic for performing routing in dynamic delay-tolerant networks. Mei et al. assume that the network nodes have been organized into pre-defined categories based on the users' interests. Experiments suggest that using these categories for greedy routing is superior to routing heuristics based on location or simple random choices. One can interpret the categorical greedy routing techniques of Mei et al. and of this paper as being geometric routing schemes using virtual coordinates, where the coordinates represent category memberships. In this interpretation, the membership dimension of an embedding corresponds to the number of nonzero coordinates of each node, and our results show that such greedy routing schemes can be done succinctly in graphs with small diameter.
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2126343340"
],
"abstract": [
"In this paper we describe SANE, the first forwarding mechanism that combines the advantages of both social-aware and stateless approaches in pocket switched network routing. SANE is based on the observation (that we validate on real-world traces) that individuals with similar interests tend to meet more often. In our approach, individuals (network members) are characterized by their interest profile, a compact representation of their interests. Through extensive experiments, we show the superiority of social-aware, stateless forwarding over existing stateful, social-aware and stateless, social-oblivious forwarding. An important byproduct of our interest-based approach is that it easily enables innovative routing primitives, such as interest-casting. An interest-casting protocol is also described, and extensively evaluated through experiments based on both real-world and synthetic mobility traces."
]
}
|
1108.4675
|
1978444629
|
A classic experiment by Milgram shows that individuals can route messages along short paths in social networks, given only simple categorical information about recipients (such as "he is a prominent lawyer in Boston" or "she is a Freshman sociology major at Harvard"). That is, these networks have very short paths between pairs of nodes (the so-called small-world phenomenon); moreover, participants are able to route messages along these paths even though each person is only aware of a small part of the network topology. Some sociologists conjecture that participants in such scenarios use a greedy routing strategy in which they forward messages to acquaintances that have more categories in common with the recipient than they do, and similar strategies have recently been proposed for routing messages in dynamic ad-hoc networks of mobile devices. In this paper, we introduce a network property called membership dimension, which characterizes the cognitive load required to maintain relationships between participants and categories in a social network. We show that any connected network has a system of categories that will support greedy routing, but that these categories can be made to have small membership dimension if and only if the underlying network exhibits the small-world phenomenon.
|
Similarly to the work of this paper, Kleinberg @cite_1 studies the small-world phenomenon from an algorithmic perspective. However, his approach is orthogonal to ours: He focuses on location rather than categorical information as the critical factor for the ability to find short routes efficiently, and constructs a random network based on that information, whereas our approach takes the network as a given and studies the kinds of categorical structures needed to support category-based greedy routing.
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"2128678576"
],
"abstract": [
"Long a matter of folklore, the 'small-world phenomenon' -- the principle that we are all linked by short chains of acquaintances -- was inaugurated as an area of experimental study in the social sciences through the pioneering work of Stanley Milgram in the 1960s. This work was among the first to make the phenomenon quantitative, allowing people to speak of the 'six degrees of separation' between any two people in the United States. Since then, a number of network models have been proposed as frameworks in which to study the problem analytically. One of the most refined of these models was formulated in recent work of Watts and Strogatz; their framework provided compelling evidence that the small-world phenomenon is pervasive in a range of networks arising in nature and technology, and a fundamental ingredient in the evolution of the World Wide Web. But existing models are insufficient to explain the striking algorithmic component of Milgram's original findings: that individuals using local information are collectively very effective at actually constructing short paths between two points in a social network. Although recently proposed network models are rich in short paths, we prove that no decentralized algorithm, operating with local information only, can construct short paths in these networks with non-negligible probability. We then define an infinite family of network models that naturally generalizes the Watts-Strogatz model, and show that for one of these models, there is a decentralized algorithm capable of finding short paths with high probability. More generally, we provide a strong characterization of this family of network models, showing that there is in fact a unique model within the family for which decentralized algorithms are effective."
]
}
|
1108.3329
|
1748959283
|
We present a generalization of the well-known problem of learning k-juntas in R^n, and a novel tensor algorithm for unraveling the structure of high-dimensional distributions. Our algorithm can be viewed as a higher-order extension of Principal Component Analysis (PCA). Our motivating problem is learning a labeling function in R^n, which is determined by an unknown k-dimensional subspace. This problem of learning a k-subspace junta is a common generalization of learning a k-junta (a function of k coordinates in R^n) and learning intersections of k halfspaces. In this context, we introduce an irrelevant noisy attributes model where the distribution over the "relevant" k-dimensional subspace is independent of the distribution over the (n-k)-dimensional "irrelevant" subspace orthogonal to it. We give a spectral tensor algorithm which identifies the relevant subspace, and thereby learns k-subspace juntas under some additional assumptions. We do this by exploiting the structure of local optima of higher moment tensors over the unit sphere; PCA finds the global optima of the second moment tensor (covariance matrix). Our main result is that when the distribution in the irrelevant (n-k)-dimensional subspace is any Gaussian, the complexity of our algorithm is T(k, ε) + poly(n), where T is the complexity of learning the concept in k dimensions, and the polynomial is a function of the k-dimensional concept class being learned. This substantially generalizes existing results on learning low-dimensional concepts.
|
There have been a number of extensions of PCA to tensors @cite_1 analogous to SVD, although no method is known to have polynomial complexity. One approach is to view PCA as an optimization problem: the top eigenvector is the solution to a matrix optimization problem, where @math is the covariance matrix. A higher moment method instead optimizes the linear form defined by the tensors of higher moments. Unlike the bilinear case, finding the global maximum of a multilinear form is hard. For @math , it is NP-hard to approximate the optimum to better than factor @math @cite_9 , and the best known approximation factor is roughly @math . Several local search methods have been proposed for this problem as well @cite_26 .
|
{
"cite_N": [
"@cite_9",
"@cite_26",
"@cite_1"
],
"mid": [
"",
"2070028074",
"2024165284"
],
"abstract": [
"",
"Recent work on eigenvalues and eigenvectors for tensors of order @math has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form @math subject to @math , which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.",
"This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or @math -way array. Decompositions of higher-order tensors (i.e., @math -way arrays with @math ) have applications in psycho-metrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors."
]
}
|
1108.3706
|
2055956547
|
In this paper, we identify and analyze the requirements to design a new routing link metric for wireless multihop networks. Considering these requirements, when a link metric is proposed, then both the design and implementation of the link metric with a routing protocol become easy. Secondly, the underlying network issues can easily be tackled. Thirdly, an appreciable performance of the network is guaranteed. Along with the existing implementation of three link metrics Expected Transmission Count (ETX), Minimum Delay (MD), and Minimum Loss (ML), we implement inverse ETX; invETX with Optimized Link State Routing (OLSR) using NS-2.34. The simulation results show how the computational burden of a metric degrades the performance of the respective protocol and how a metric
|
After analyzing reactive and proactive protocols, Yang @cite_11 proposed that proactive protocols implementing the hop-by-hop routing technique, such as the Destination-Sequenced Distance Vector (DSDV) @cite_16 and Optimized Link State Routing (OLSR) @cite_14 protocols, are the best choice for mesh networks. They also inspected the design requirements for routing link metrics for mesh networks and related them to the routing techniques and routing protocols. In that chapter, four design requirements for link metrics have been suggested: stability, minimum hop count, polynomial complexity of the routing algorithm, and loop-freeness. However, the focus has only been on mesh networks. Secondly, all the work is restricted to these four requirements. There are several other requirements that may help to achieve global optimization, for example, 'computational overhead', which might be an outcome of the mathematical complexity introduced in the link metric or of an attempt to design a multi-dimensional metric to tackle multiple issues simultaneously.
|
{
"cite_N": [
"@cite_14",
"@cite_16",
"@cite_11"
],
"mid": [
"1549535141",
"2124651399",
"196672294"
],
"abstract": [
"This document describes the Optimized Link State Routing (OLSR) protocol for mobile ad hoc networks. The protocol is an optimization of the classical link state algorithm tailored to the requirements of a mobile wireless LAN. The key concept used in the protocol is that of multipoint relays (MPRs). MPRs are selected nodes which forward broadcast messages during the flooding process. This technique substantially reduces the message overhead as compared to a classical flooding mechanism, where every node retransmits each message when it receives the first copy of the message. In OLSR, link state information is generated only by nodes elected as MPRs. Thus, a second optimization is achieved by minimizing the number of control messages flooded in the network. As a third optimization, an MPR node may choose to report only links between itself and its MPR selectors. Hence, contrary to the classic link state algorithm, partial link state information is distributed in the network. This information is then used for route calculation. OLSR provides optimal routes (in terms of number of hops). The protocol is particularly suitable for large and dense networks as the technique of MPRs works well in this context.",
"An ad-hoc network is the cooperative engagement of a collection of Mobile Hosts without the required intervention of any centralized Access Point. In this paper we present an innovative design for the operation of such ad-hoc networks. The basic idea of the design is to operate each Mobile Host as a specialized router, which periodically advertises its view of the interconnection topology with other Mobile Hosts within the network. This amounts to a new sort of routing protocol. We have investigated modifications to the basic Bellman-Ford routing mechanisms, as specified by RIP [5], to make it suitable for a dynamic and self-starting network mechanism as is required by users wishing to utilize ad hoc networks. Our modifications address some of the previous objections to the use of Bellman-Ford, related to the poor looping properties of such algorithms in the face of broken links and the resulting time dependent nature of the interconnection topology describing the links between the Mobile Hosts. Finally, we describe the ways in which the basic network-layer routing can be modified to provide MAC-layer support for ad-hoc networks.",
"Designing routing metrics is critical for performance in wireless mesh networks. The unique characteristics of mesh networks, such as static nodes and the shared nature of the wireless medium, invalidate existing solutions from both wired and wireless networks and impose unique requirements on designing routing metrics for mesh networks. In this paper, we focus on identifying these requirements. We first analyze the possible types of routing protocols that can be used and show that proactive hop-by-hop routing protocols are the most appropriate for mesh networks. Then, we examine the requirements for designing routing metrics according to the characteristics of mesh networks and the type of routing protocols used. Finally, we study several existing routing metrics, including hop count, ETX, ETT, WCETT and MIC in terms of their ability to satisfy these requirements. Our simulation results of the performance of these metrics confirm our analysis of these metrics."
]
}
|