| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1406.2795
|
2951883319
|
We study the problem of rendezvous of two mobile agents starting at distinct locations in an unknown graph. The agents have distinct labels and walk in synchronous steps. However, the graph is unlabelled and the agents have no means of marking the nodes of the graph and cannot communicate with or see each other until they meet at a node. When the graph is very large, we want the time to rendezvous to be independent of the graph size and to depend only on the initial distance between the agents and some local parameters, such as the degree of the vertices and the size of the agent's label. It is well known that even for simple graphs of degree @math, the rendezvous time can be exponential in @math in the worst case. In this paper, we introduce a new version of the rendezvous problem where the agents are equipped with a device that measures the distance to the other agent after every step. We show that these agents are able to rendezvous in any unknown graph, in time polynomial in all the local parameters, such as the degree of the nodes, the initial distance @math and the size of the smaller of the two agent labels @math. Our algorithm has a time complexity of @math, and we show an almost matching lower bound of @math on the time complexity of any rendezvous algorithm in our scenario. Further, this lower bound extends existing lower bounds for the general rendezvous problem without distance awareness.
|
There have been several studies on the minimum capabilities needed by the agents to solve rendezvous. For example, the minimum memory required by an agent to solve rendezvous is known to be @math for arbitrary graphs. @cite_3 have provided a memory-optimal algorithm for rendezvous, and there are studies on the tradeoff between time and space requirements for rendezvous @cite_5. In some papers, additional capabilities are assumed for the agents to overcome other limitations: e.g., global vision is assumed to overcome memory limitations @cite_9, or the capability to mark nodes using tokens @cite_15 or whiteboards @cite_16 is often used to break symmetry. The model used in this paper can be seen as a special case of the oracle model for computation @cite_17, where the agent is allowed to query an oracle that has global knowledge of the environment. However, in our case, since the only queries are distance queries, the oracle can be implemented without complete knowledge of the graph topology.
|
{
"cite_N": [
"@cite_9",
"@cite_3",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_17"
],
"mid": [
"2144182788",
"1972775782",
"1526420573",
"1981643246",
"1543762072",
""
],
"abstract": [
"We consider the problem of gathering identical, memoryless, mobile robots in one node of an anonymous unoriented ring. Robots start from different nodes of the ring. They operate in Look-Compute-Move cycles and have to end up in the same node. In one cycle, a robot takes a snapshot of the current configuration (Look), makes a decision to stay idle or to move to one of its adjacent nodes (Compute), and in the latter case makes an instantaneous move to this neighbor (Move). Cycles are performed asynchronously for each robot. For an odd number of robots we prove that gathering is feasible if and only if the initial configuration is not periodic, and we provide a gathering algorithm for any such configuration. For an even number of robots we decide the feasibility of gathering except for one type of symmetric initial configurations, and provide gathering algorithms for initial configurations proved to be gatherable.",
"Two identical (anonymous) mobile agents start from arbitrary nodes in an a priori unknown graph and move synchronously from node to node with the goal of meeting. This rendezvous problem has been thoroughly studied, both for anonymous and for labeled agents, along with another basic task, that of exploring graphs by mobile agents. The rendezvous problem is known to be not easier than graph exploration. A well-known recent result on exploration, due to Reingold, states that deterministic exploration of arbitrary graphs can be performed in log-space, i.e., using an agent equipped with O(log n) bits of memory, where n is the size of the graph. In this paper we study the size of memory of mobile agents that permits us to solve the rendezvous problem deterministically. Our main result establishes the minimum size of the memory of anonymous agents that guarantees deterministic rendezvous when it is feasible. We show that this minimum size is Θ(log n), where n is the size of the graph, regardless of the delay between the starting times of the agents. More precisely, we construct identical agents equipped with Θ(log n) memory bits that solve the rendezvous problem in all graphs with at most n nodes, if they start with any delay τ, and we prove a matching lower bound Ω(log n) on the number of memory bits needed to accomplish rendezvous, even for simultaneous start. In fact, this lower bound is achieved already on the class of rings. This shows a significant contrast between rendezvous and exploration: e.g., while exploration of rings (without stopping) can be done using constant memory, rendezvous, even with simultaneous start, requires logarithmic memory. As a by-product of our techniques introduced to obtain log-space rendezvous we get the first algorithm to find a quotient graph of a given unlabeled graph in polynomial time, by means of a mobile agent moving around the graph.",
"We study the size of memory of mobile agents that permits solving the rendezvous problem deterministically, i.e., the task of meeting at some node, for two identical agents moving from node to node along the edges of an unknown anonymous connected graph. The rendezvous problem is unsolvable in the class of arbitrary connected graphs, as witnessed by the example of the cycle. Hence we restrict attention to rendezvous in trees, where rendezvous is feasible if and only if the initial positions of the agents are not symmetric. We prove that the minimum memory size guaranteeing rendezvous in all trees of size at most n is Θ(log n) bits. The upper bound is provided by an algorithm for abstract state machines accomplishing rendezvous in all trees, using O(log n) bits of memory in trees of size at most n. The lower bound is a consequence of the need to distinguish between up to n − 1 links incident to a node. Thus, in the second part of the paper, we focus on the potential existence of pairs of finite agents (i.e., finite automata) capable of accomplishing rendezvous in all bounded degree trees. We show that, as opposed to what has been proved for the graph exploration problem, there are no finite agents capable of accomplishing rendezvous in all bounded degree trees.",
"Mobile agent computing is being used in fields as diverse as artificial intelligence, computational economics and robotics. Agents' ability to adapt dynamically and execute asynchronously and autonomously brings potential advantages in terms of fault-tolerance, flexibility and simplicity. This monograph focuses on studying mobile agents as modelled in distributed systems research and in particular within the framework of research performed in the distributed algorithms community. It studies the fundamental question of how to achieve rendezvous, the gathering of two or more agents at the same node of a network. Like leader election, such an operation is a useful subroutine in more general computations that may require the agents to synchronize, share information, divide up chores, etc. The work provides an introduction to the algorithmic issues raised by the rendezvous problem in the distributed computing setting. For the most part our investigation concentrates on the simplest case of two agents attempting to rendezvous on a ring network. Other situations including multiple agents, faulty nodes and other topologies are also examined. An extensive bibliography provides many pointers to related work not covered in the text. The presentation has a distinctly algorithmic, rigorous, distributed computing flavor and most results should be easily accessible to advanced undergraduate and graduate students in computer science and mathematics departments. Table of Contents: Models for Mobile Agent Computing; Deterministic Rendezvous in a Ring; Multiple Agent Rendezvous in a Ring; Randomized Rendezvous in a Ring; Other Models; Other Topologies",
"We study the problem of gathering at the same location two mobile agents that are dispersed in an unknown and unlabeled environment. This problem, called Rendezvous, is a fundamental task in distributed coordination among autonomous entities. Most previous studies on the subject model the environment as an undirected graph, and the solution techniques rely heavily on the fact that an agent can backtrack on any edge it traverses. However, such an assumption may not hold for certain scenarios, for instance a road network containing one-way streets. Thus, we consider the case of strongly connected directed graphs and present the first deterministic solution for rendezvous of two anonymous (identical) agents moving in such a digraph. Our algorithm achieves rendezvous with detection for any solvable instance of the problem, without any prior knowledge about the digraph, not even its size.",
""
]
}
|
1406.3002
|
1830858207
|
Author(s): Kaczmarek, T; Kobsa, A; Sy, R; Tsudik, G | Abstract: User errors while performing security-critical tasks can lead to undesirable or even disastrous consequences. One major factor influencing mistakes and failures is complexity of such tasks, which has been studied extensively in prior research. Another important issue which has hardly received any attention is the impact of both accidental and intended distractions on users performing security-critical tasks. In particular, it is unclear whether, and to what extent, unexpected sensory cues (e.g., auditory or visual) can influence user behavior and/or trigger mistakes. Better understanding of the effects of intended distractions will help clarify their role in adversarial models. As part of the research effort described in this paper, we administered a range of naturally occurring -- yet unexpected -- sounds while study participants attempted to perform a security-critical task. We found that, although these auditory cues lowered participants' failure rates, they had no discernible effect on their task completion times. To this end, we overview some relevant literature that explains these somewhat counter-intuitive findings. Conducting a thorough and meaningful study on user errors requires a large number of participants, since errors are typically infrequent and should not be instigated more than once per subject. To reduce the effort of running numerous subjects, we developed a novel experimental setup that was fully automated and unattended. We discuss our experience with this setup and highlight the pros and cons of generalizing its usage.
|
To the best of our knowledge, there has been no prior usability study utilizing a fully automated (unattended) experimental setup with a video-projected experimenter. However, there have been precedents with virtually attended experiments and unattended online surveys in many fields, most notably psychology. There is a sizable body of work supporting the validity and precision of such unattended online experiments, as compared to more traditional attended experiments in a lab setting. In particular, @cite_33 found no significant difference in psychometric data collected from an attended experiment in a lab setting and its online, virtually attended counterpart. This is further supported by @cite_8 in the comparison of data collected from unattended online and attended offline questionnaires. Guidelines for drawing the best possible sample from the intended population are provided by Birnbaum @cite_17. Finally, Lazem and Gracanin @cite_9 replicated two classical social psychology experiments in which the experimenter and the three participants were represented as avatars in Second Life instead of being physically co-present. The outcomes were very similar.
|
{
"cite_N": [
"@cite_9",
"@cite_17",
"@cite_33",
"@cite_8"
],
"mid": [
"2064311940",
"2112744711",
"",
"2135944616"
],
"abstract": [
"Social traps are examples of social dilemma situations, where an individual acts for personal advantage in a way that damages the group as a whole. Traps can be avoided, nevertheless, by proper cooperation between the group members. A laboratory analog of social traps was implemented by Brechner in the 1970s. We built a Second Life analog of Brechner's experiment to explore social traps and how coordination takes place in a 3D virtual world. While some of the groups that were not allowed to communicate succeeded in avoiding the trap, communication had a significant effect on how the participants regulated their resource. We observed very similar response patterns compared to the original experiment. That, in turn, shows great potential for using virtual worlds as collaborative tools. We also analyzed the social traps experiment using game theory, and the results of the experiments match the game theory predictions.",
"Advantages and disadvantages of Web and lab research are reviewed. Via the World Wide Web, one can efficiently recruit large, heterogeneous samples quickly, recruit specialized samples (people with rare characteristics), and standardize procedures, making studies easy to replicate. Alternative programming techniques (procedures for data collection) are compared, including client-side as opposed to server-side programming. Web studies have methodological problems; for example, higher rates of drop out and of repeated participation. Web studies must be thoroughly analyzed and tested before launching on-line. Many studies compared data obtained in Web versus lab. These two methods usually reach the same conclusions; however, there are significant differences between college students tested in the lab and people recruited and tested via the Internet. Reasons that Web researchers are enthusiastic about the potential of the new methods are discussed.",
"",
"The Internet can be an effective medium for the posting, exchange, and collection of information in psychology-related research and data. The relative ease and inexpensiveness of creating and maintaining Web-based applications, associated with the simplicity of use via the graphic-user interface format of form-based surveys, can establish a new research frontier for the social and behavioral sciences. To explore the possible use of Internet tools in psychological research, this study compared Web-based assessment techniques with traditional paper-based methods of different measures of Internet attitudes and behaviors in an Italian sample. The collected data were analyzed to identify both differences between the two samples and in the psychometric characteristics of the questionnaires. Even if we found significant differences between the two samples in the Internet attitudes and behaviors, no relevant differences were found in the psychometric properties of the different questionnaires. This result, simila..."
]
}
|
1406.3002
|
1830858207
|
Author(s): Kaczmarek, T; Kobsa, A; Sy, R; Tsudik, G | Abstract: User errors while performing security-critical tasks can lead to undesirable or even disastrous consequences. One major factor influencing mistakes and failures is complexity of such tasks, which has been studied extensively in prior research. Another important issue which has hardly received any attention is the impact of both accidental and intended distractions on users performing security-critical tasks. In particular, it is unclear whether, and to what extent, unexpected sensory cues (e.g., auditory or visual) can influence user behavior and/or trigger mistakes. Better understanding of the effects of intended distractions will help clarify their role in adversarial models. As part of the research effort described in this paper, we administered a range of naturally occurring -- yet unexpected -- sounds while study participants attempted to perform a security-critical task. We found that, although these auditory cues lowered participants' failure rates, they had no discernible effect on their task completion times. To this end, we overview some relevant literature that explains these somewhat counter-intuitive findings. Conducting a thorough and meaningful study on user errors requires a large number of participants, since errors are typically infrequent and should not be instigated more than once per subject. To reduce the effort of running numerous subjects, we developed a novel experimental setup that was fully automated and unattended. We discuss our experience with this setup and highlight the pros and cons of generalizing its usage.
|
For example, Short Authentication String (SAS) protocols require a user to compare two short strings, of about 20 bits each @cite_14 . Since accurate task completion was found to be relatively difficult for human users, alternative protocols were developed.
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"2517548216"
],
"abstract": [
"Solutions for an easy and secure setup of a wireless connection between two devices are urgently needed for WLAN, Wireless USB, Bluetooth and similar standards for short range wireless communication. All such key exchange protocols employ data authentication as an unavoidable subtask. As a solution, we propose an asymptotically optimal protocol family for data authentication that uses short manually authenticated out-of-band messages. Compared to previous articles by Vaudenay and Pasini, the results of this paper are more general and based on weaker security assumptions. In addition to providing security proofs for our protocols, we also focus on implementation details and propose practically secure and efficient sub-primitives for applications.",
]
}
|
1406.3002
|
1830858207
|
Author(s): Kaczmarek, T; Kobsa, A; Sy, R; Tsudik, G | Abstract: User errors while performing security-critical tasks can lead to undesirable or even disastrous consequences. One major factor influencing mistakes and failures is complexity of such tasks, which has been studied extensively in prior research. Another important issue which has hardly received any attention is the impact of both accidental and intended distractions on users performing security-critical tasks. In particular, it is unclear whether, and to what extent, unexpected sensory cues (e.g., auditory or visual) can influence user behavior and/or trigger mistakes. Better understanding of the effects of intended distractions will help clarify their role in adversarial models. As part of the research effort described in this paper, we administered a range of naturally occurring -- yet unexpected -- sounds while study participants attempted to perform a security-critical task. We found that, although these auditory cues lowered participants' failure rates, they had no discernible effect on their task completion times. To this end, we overview some relevant literature that explains these somewhat counter-intuitive findings. Conducting a thorough and meaningful study on user errors requires a large number of participants, since errors are typically infrequent and should not be instigated more than once per subject. To reduce the effort of running numerous subjects, we developed a novel experimental setup that was fully automated and unattended. We discuss our experience with this setup and highlight the pros and cons of generalizing its usage.
|
The first usability study of pairing techniques was carried out by @cite_21. That study determined that the most accurate way to compare a pair of SASs was the "compare and confirm" method, wherein the user would be presented an SAS by both of the machines they are trying to pair, and would be asked to confirm whether or not the two SASs match.
|
{
"cite_N": [
"@cite_21"
],
"mid": [
"2110495618"
],
"abstract": [
"Setting up security associations between end-user devices is a challenging task when it needs to be done by ordinary users. The increasing popularity of powerful personal electronics with wireless communication abilities has made the problem more urgent than ever before. During the last few years, several solutions have appeared in the research literature. Several standardization bodies have also been working on improved setup procedures. All these protocols provide certain level of security, but several new questions arise, such as \"how to implement this protocol so that it is easy to use?\" and \"is it still secure when used by a non-technical person?\" In this paper, we attempt to answer these questions by carrying out a comparative usability evaluation of selected methods to derive some insights into the usability and security of these methods as well as strategies for implementing them."
]
}
|
1406.3002
|
1830858207
|
Author(s): Kaczmarek, T; Kobsa, A; Sy, R; Tsudik, G | Abstract: User errors while performing security-critical tasks can lead to undesirable or even disastrous consequences. One major factor influencing mistakes and failures is complexity of such tasks, which has been studied extensively in prior research. Another important issue which has hardly received any attention is the impact of both accidental and intended distractions on users performing security-critical tasks. In particular, it is unclear whether, and to what extent, unexpected sensory cues (e.g., auditory or visual) can influence user behavior and/or trigger mistakes. Better understanding of the effects of intended distractions will help clarify their role in adversarial models. As part of the research effort described in this paper, we administered a range of naturally occurring -- yet unexpected -- sounds while study participants attempted to perform a security-critical task. We found that, although these auditory cues lowered participants' failure rates, they had no discernible effect on their task completion times. To this end, we overview some relevant literature that explains these somewhat counter-intuitive findings. Conducting a thorough and meaningful study on user errors requires a large number of participants, since errors are typically infrequent and should not be instigated more than once per subject. To reduce the effort of running numerous subjects, we developed a novel experimental setup that was fully automated and unattended. We discuss our experience with this setup and highlight the pros and cons of generalizing its usage.
|
@cite_29 introduced an authentication technique that utilized a "Mad-Lib"-type structure, where participating devices, based on the protocol outcome, compose a nonsensical phrase out of several short English words. The human user is then tasked with determining whether the two devices came up with matching phrases. This technique was found to be easier for non-specialist users to complete.
|
{
"cite_N": [
"@cite_29"
],
"mid": [
"2106044541"
],
"abstract": [
"Secure pairing of electronic devices is an important issue that must be addressed in many contexts. In the absence of prior security context, the need to involve the user in the pairing process is a prominent challenge. In this paper, we investigate the use of the audio channel for human-assisted device pairing. First we assume a common (insecure) wireless channel between devices. We then obviate the assumption of a pre-existing common channel with a single-channel device pairing approach only based on audio. Both approaches are applicable to a wide range of devices and place light burden on the user."
]
}
|
1406.3002
|
1830858207
|
Author(s): Kaczmarek, T; Kobsa, A; Sy, R; Tsudik, G | Abstract: User errors while performing security-critical tasks can lead to undesirable or even disastrous consequences. One major factor influencing mistakes and failures is complexity of such tasks, which has been studied extensively in prior research. Another important issue which has hardly received any attention is the impact of both accidental and intended distractions on users performing security-critical tasks. In particular, it is unclear whether, and to what extent, unexpected sensory cues (e.g., auditory or visual) can influence user behavior and/or trigger mistakes. Better understanding of the effects of intended distractions will help clarify their role in adversarial models. As part of the research effort described in this paper, we administered a range of naturally occurring -- yet unexpected -- sounds while study participants attempted to perform a security-critical task. We found that, although these auditory cues lowered participants' failure rates, they had no discernible effect on their task completion times. To this end, we overview some relevant literature that explains these somewhat counter-intuitive findings. Conducting a thorough and meaningful study on user errors requires a large number of participants, since errors are typically infrequent and should not be instigated more than once per subject. To reduce the effort of running numerous subjects, we developed a novel experimental setup that was fully automated and unattended. We discuss our experience with this setup and highlight the pros and cons of generalizing its usage.
|
@cite_7 reported on a comprehensive comparative usability study of eleven major secure device pairing methods, measuring task performance times, task completion rates, perceived security and perceived usability. The main outcome was the grouping of the investigated methods into three clusters, following a principal components analysis.
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2148165512"
],
"abstract": [
"Secure Device Pairing is the bootstrapping of secure communication between two previously unassociated devices over a wireless channel. The human-imperceptible nature of wireless communication, lack of any prior security context, and absence of a common trust infrastructure open the door for Man-in-the-Middle (aka Evil Twin) attacks. A number of methods have been proposed to mitigate these attacks, each requiring user assistance in authenticating information exchanged over the wireless channel via some human-perceptible auxiliary channels, e.g., visual, acoustic or tactile. In this paper, we present results of the first comprehensive and comparative study of eleven notable secure device pairing methods. Usability measures include: task performance times, ratings on System Usability Scale (SUS), task completion rates, and perceived security. Study subjects were controlled for age, gender and prior experience with device pairing. We present overall results and identify problematic methods for certain classes of users as well as methods best-suited for various device configurations."
]
}
|
1406.3002
|
1830858207
|
Author(s): Kaczmarek, T; Kobsa, A; Sy, R; Tsudik, G | Abstract: User errors while performing security-critical tasks can lead to undesirable or even disastrous consequences. One major factor influencing mistakes and failures is complexity of such tasks, which has been studied extensively in prior research. Another important issue which has hardly received any attention is the impact of both accidental and intended distractions on users performing security-critical tasks. In particular, it is unclear whether, and to what extent, unexpected sensory cues (e.g., auditory or visual) can influence user behavior and/or trigger mistakes. Better understanding of the effects of intended distractions will help clarify their role in adversarial models. As part of the research effort described in this paper, we administered a range of naturally occurring -- yet unexpected -- sounds while study participants attempted to perform a security-critical task. We found that, although these auditory cues lowered participants' failure rates, they had no discernible effect on their task completion times. To this end, we overview some relevant literature that explains these somewhat counter-intuitive findings. Conducting a thorough and meaningful study on user errors requires a large number of participants, since errors are typically infrequent and should not be instigated more than once per subject. To reduce the effort of running numerous subjects, we developed a novel experimental setup that was fully automated and unattended. We discuss our experience with this setup and highlight the pros and cons of generalizing its usage.
|
@cite_12 examined the usability of device pairing in a group setting, where up to 6 users tried to connect their devices to one another, and found that group effort decreased the expected rate of both security and non-security failures. However, an inherent insecurity of "conformity" was also identified, wherein users would deliberately lie about an observed string in order to "fit in" with the majority opinion of the group.
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2144454907"
],
"abstract": [
"Initiating and bootstrapping secure, yet low-cost, ad-hoc transactions is an important challenge that needs to be overcome if the promise of mobile and pervasive computing is to be fulfilled. For example, mobile payment applications would benefit from the ability to pair devices securely without resorting to conventional mechanisms such as shared secrets, a Public Key Infrastructure (PKI), or trusted third parties. A number of methods have been proposed for doing this based on the use of a secondary out-of-band (OOB) channel that either authenticates information passed over the normal communication channel or otherwise establishes an authenticated shared secret which can be used for subsequent secure communication. A key element of the success of these methods is dependent on the performance and effectiveness of the OOB channel, which usually depends on people performing certain critical tasks correctly. In this paper, we present the results of a comparative usability study on methods that propose using humans to implement the OOB channel and argue that most of these proposals fail to take into account factors that may seriously harm the security and usability of a protocol. Our work builds on previous research in the usability of pairing methods and the accompanying recommendations for designing user interfaces that minimise human mistakes. Our findings show that the traditional methods of comparing and typing short strings into mobile devices are still preferable despite claims that new methods are more usable and secure, and that user interface design alone is not sufficient in mitigating human mistakes in OOB channels."
]
}
|
1406.3002
|
1830858207
|
Author(s): Kaczmarek, T; Kobsa, A; Sy, R; Tsudik, G | Abstract: User errors while performing security-critical tasks can lead to undesirable or even disastrous consequences. One major factor influencing mistakes and failures is complexity of such tasks, which has been studied extensively in prior research. Another important issue which has hardly received any attention is the impact of both accidental and intended distractions on users performing security-critical tasks. In particular, it is unclear whether, and to what extent, unexpected sensory cues (e.g., auditory or visual) can influence user behavior and/or trigger mistakes. Better understanding of the effects of intended distractions will help clarify their role in adversarial models. As part of the research effort described in this paper, we administered a range of naturally occurring -- yet unexpected -- sounds while study participants attempted to perform a security-critical task. We found that, although these auditory cues lowered participants' failure rates, they had no discernible effect on their task completion times. To this end, we overview some relevant literature that explains these somewhat counter-intuitive findings. Conducting a thorough and meaningful study on user errors requires a large number of participants, since errors are typically infrequent and should not be instigated more than once per subject. To reduce the effort of running numerous subjects, we developed a novel experimental setup that was fully automated and unattended. We discuss our experience with this setup and highlight the pros and cons of generalizing its usage.
|
@cite_30 also examined the pairing of multiple devices in a group setting with groups of 4 or 6 members. They found that groups are nearly immune to insertion attacks, where an adversary will pretend to be a member of the group, and thus change the expected SAS for all members. They also found that groups are particularly vulnerable to a modified man-in-the-middle attack where a single member of the group is given false information, and instead of rejecting their incorrect SAS, they conform to the positive result the rest of the group reports.
|
{
"cite_N": [
"@cite_30"
],
"mid": [
"2096768518"
],
"abstract": [
"A fairly common modern setting entails users, each in possession of a personal wireless device, wanting to communicate securely, via their devices. If these users (and their devices) have no prior association, a new security context must be established. In order to prevent potential attacks, the initial context (association) establishment process must involve only the intended devices and their users. A number of methods for initial secure association of two devices have been proposed; their usability factors have been explored and compared extensively. However, a more challenging problem of initial secure association of a group of devices (and users) has not received much attention. Although a few secure group association methods have been proposed, their usability aspects have not been studied, especially, in a comparative manner. This paper discusses desirable features and evaluation criteria for secure group association, identifies suitable methods and presents a comparative usability study. Results show that some simple methods (e.g., peer- or leader-based number comparisons) are quite attractive for small groups, being fast, reasonably secure and well-received by users."
]
}
|
1406.3002
|
1830858207
|
Author(s): Kaczmarek, T; Kobsa, A; Sy, R; Tsudik, G | Abstract: User errors while performing security-critical tasks can lead to undesirable or even disastrous consequences. One major factor influencing mistakes and failures is complexity of such tasks, which has been studied extensively in prior research. Another important issue which hardly received any attention is the impact of both accidental and intended distractions on users performing security-critical tasks. In particular, it is unclear whether, and to what extent, unexpected sensory cues (e.g., auditory or visual) can influence user behavior and/or trigger mistakes. Better understanding of the effects of intended distractions will help clarify their role in adversarial models. As part of the research effort described in this paper, we administered a range of naturally occurring -- yet unexpected -- sounds while study participants attempted to perform a security-critical task. We found that, although these auditory cues lowered participants' failure rates, they had no discernible effect on their task completion times. To this end, we overview some relevant literature that explains these somewhat counter-intuitive findings. Conducting a thorough and meaningful study on user errors requires a large number of participants, since errors are typically infrequent and should not be instigated more than once per subject. To reduce the effort of running numerous subjects, we developed a novel experimental setup that was fully automated and unattended. We discuss our experience with this setup and highlight the pros and cons of generalizing its usage.
|
@cite_2 found that performance on out-of-band tasks in secure device pairing could be improved through the addition of a score metric on the user's performance, resulting in a considerable reduction in both safe and fatal errors.
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"15257636"
],
"abstract": [
"We explore the use of extrinsic motivation to improve the state of user-centered security mechanisms. Specifically, we study applications of scores as user incentives in the context of secure device pairing. We develop a scoring functionality that can be integrated with traditional pairing approaches. We then report on a usability study that we performed to evaluate the effect of scoring on the performance of users in comparison operations. Our results demonstrate that individuals are likely to commit fewer errors and show more acceptance when working with the scoring based pairing approach. Framing pairing as a game and providing feedback to users in the form of a score is an efficient way to improve pairing security, particularly among users such as children who may not be aware of the consequences of their decisions while performing security tasks."
]
}
|
1406.3002
|
1830858207
|
Author(s): Kaczmarek, T; Kobsa, A; Sy, R; Tsudik, G | Abstract: User errors while performing security-critical tasks can lead to undesirable or even disastrous consequences. One major factor influencing mistakes and failures is complexity of such tasks, which has been studied extensively in prior research. Another important issue which hardly received any attention is the impact of both accidental and intended distractions on users performing security-critical tasks. In particular, it is unclear whether, and to what extent, unexpected sensory cues (e.g., auditory or visual) can influence user behavior and/or trigger mistakes. Better understanding of the effects of intended distractions will help clarify their role in adversarial models. As part of the research effort described in this paper, we administered a range of naturally occurring -- yet unexpected -- sounds while study participants attempted to perform a security-critical task. We found that, although these auditory cues lowered participants' failure rates, they had no discernible effect on their task completion times. To this end, we overview some relevant literature that explains these somewhat counter-intuitive findings. Conducting a thorough and meaningful study on user errors requires a large number of participants, since errors are typically infrequent and should not be instigated more than once per subject. To reduce the effort of running numerous subjects, we developed a novel experimental setup that was fully automated and unattended. We discuss our experience with this setup and highlight the pros and cons of generalizing its usage.
|
Furthermore, O'Malley and Poplawsky @cite_3 showed that noise can affect behavioral selectivity. This means that while noise may not have a consistent positive or negative impact on task completion in all cases, noise may consistently have a negative effect on tasks that require the subject to detect signals in their periphery, and noise may have a consistent positive effect on task completion when the subject has to focus on signals coming from the center of their field of attention. This suggests that, regardless of task complexity, the addition of noise may narrow a subject's area of attention.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2194198555"
],
"abstract": [
"The literature on the effects of noise on monitoring performance shows a disappointing lack of consistency in results. The hypothesis of the present study was that task classification in terms of demands made on the observer should reconcile conflicting findings so that generalizations could be made. Therefore, a study was made of the effects of intermittent or variable noise on vigilance experiments with similar task demands. Twenty-one sensory vigilance studies, selected from 98 visual performance experiments, were analyzed in detail. It appeared that, even when studies possess similar task characteristics, they are hard to compare due to the many types and varieties of the noise variables involved and the measures of performance used. Contradictory results remain. It was concluded that we know nothing about the effects of variable noise on sustained attention, despite the importance of this kind of noise for everyday life. Using this detailed analysis as an illustration, it was suggested that disparate..."
]
}
|
1406.2895
|
2950117417
|
We present a system for identifying humans by their walking sounds. This problem is also known as acoustic gait recognition. The goal of the system is to analyse sounds emitted by walking persons (mostly the step sounds) and identify those persons. These sounds are characterised by the gait pattern and are influenced by the movements of the arms and legs, but also depend on the type of shoe. We extract cepstral features from the recorded audio signals and use hidden Markov models for dynamic classification. A cyclic model topology is employed to represent individual gait cycles. This topology allows to model and detect individual steps, leading to very promising identification rates. For experimental validation, we use the publicly available TUM GAID database, which is a large gait recognition database containing 3050 recordings of 305 subjects in three variations. In the best setup, an identification rate of 65.5% is achieved out of 155 subjects. This is a relative improvement of almost 30% compared to our previous work, which used various audio features and support vector machines.
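As a sketch of the cyclic topology idea described above (the state count and self-transition probability here are illustrative assumptions, not the paper's settings), a left-to-right transition matrix whose last state loops back to the first lets each pass through the chain model one gait cycle:

```python
import numpy as np

def cyclic_hmm_transitions(n_states, p_stay=0.6):
    """Left-to-right HMM transition matrix whose last state loops back to
    the first, so each traversal of the chain models one gait cycle (step)."""
    A = np.zeros((n_states, n_states))
    for i in range(n_states):
        A[i, i] = p_stay                       # remain in the current state
        A[i, (i + 1) % n_states] = 1 - p_stay  # advance; last state wraps to 0
    return A

A = cyclic_hmm_transitions(5)
```

The loop-back edge `A[n-1, 0]` is what distinguishes this from a plain left-to-right model: it lets a single HMM segment a recording into repeated steps.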
|
The most-widespread approach for video-based gait recognition is the Gait Energy Image (GEI) @cite_14 , which is a simple silhouette-based approach. It can be combined with face recognition @cite_0 or with depth information @cite_17 . Furthermore, model-based approaches have been proposed for visual gait recognition @cite_16 . Besides using video or audio information, other methods to identify walking persons include using acoustic Doppler sonar @cite_1 or pressure sensors in the floor @cite_4 .
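The silhouette-averaging idea behind the GEI can be sketched as follows; the silhouette extraction, alignment, and size normalisation steps of the cited method are omitted here:

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Pixel-wise mean of aligned binary silhouettes over a gait cycle:
    values near 1 mark static body parts, while intermediate values trace
    the motion envelope of the arms and legs."""
    S = np.asarray(silhouettes, dtype=float)  # shape (T, H, W), entries in {0, 1}
    return S.mean(axis=0)

# Two tiny 2x2 toy "silhouettes" standing in for a real frame sequence.
frames = [np.array([[1, 0], [1, 1]]), np.array([[1, 1], [1, 0]])]
gei = gait_energy_image(frames)
```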
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_1",
"@cite_0",
"@cite_16",
"@cite_17"
],
"mid": [
"2126680226",
"",
"2159138006",
"1988862246",
"2115203491",
"2126952237"
],
"abstract": [
"In this paper, we propose a new spatio-temporal gait representation, called Gait Energy Image (GEI), to characterize human walking properties for individual recognition by gait. To address the problem of the lack of training templates, we also propose a novel approach for human recognition by combining statistical gait features from real and synthetic templates. We directly compute the real templates from training silhouette sequences, while we generate the synthetic templates from training sequences by simulating silhouette distortion. We use a statistical approach for learning effective features from real and synthetic templates. We compare the proposed GEI-based gait recognition approach with other gait recognition approaches on USF HumanID Database. Experimental results show that the proposed GEI is an effective and efficient gait representation for individual recognition, and the proposed approach achieves highly competitive performance with respect to the published gait recognition approaches",
"",
"A person's gait is a characteristic that might be employed to identify him/her automatically. Conventionally, automatic systems for gait-based identification of subjects employ video and image processing to characterize gait. In this paper we present an Acoustic Doppler Sensor (ADS) based technique for the characterization of gait. The ADS is a very inexpensive sensor that can be built using off-the-shelf components, for under $20 USD at today's prices. We show that remarkably good gait recognition is possible with the ADS sensor.",
"This paper presents advances on the Human ID Gait Challenge. Our method is based on combining an improved gait recognition method with an adapted low resolution face recognition method. For this, we experiment with a new automated segmentation technique based on alpha-matting. This allows better construction of feature images used for gait recognition. The same segmentation is also used as a basis for finding and recognizing low-resolution facial profile images in the same database. Both, gait and face recognition methods show results comparable to the state of the art. Next, the two approaches are fused (which to our knowledge, has not yet been done for the Human ID Gait Challenge). With this fusion gain, we show significant performance improvement. Moreover, we reach the highest recognition rates and the largest absolute number of correct detections to date.",
"Gait enjoys advantages over other biometrics in that it can be perceived from a distance and is difficult to disguise. Current approaches are mostly statistical and concentrate on walking only. By analysing leg motion we show how we can recognise people not only by the walking gait, but also by the running gait. This is achieved by either of two new modelling approaches which employ coupled oscillators and the biomechanics of human locomotion as the underlying concepts. These models give a plausible method for data reduction by providing estimates of the inclination of the thigh and of the leg, from the image data. Both approaches derive a phase-weighted Fourier description gait signature by automated non-invasive means. One approach is completely automated whereas the other requires specification of a single parameter to distinguish between walking and running. Results show that both gaits are potential biometrics, with running being more potent. By its basis in evidence gathering, this new technique can tolerate noise and low resolution.",
"Using gait recognition methods, people can be identified by the way they walk. The most successful and efficient of these methods are based on the Gait Energy Image (GEI). In this paper, we extend the traditional Gait Energy Image by including depth information. First, GEI is extended by calculating the required silhouettes using depth data. We then formulate a completely new feature, which we call the Depth Gradient Histogram Energy Image (DGHEI). We compare the improved depth-GEI and the new DGHEI to the traditional GEI. We do this using a new gait database which was recorded with the Kinect sensor. On this database we show significant performance gain of DGHEI."
]
}
|
1406.2504
|
1939143390
|
Many applications require recovering a matrix of minimal rank within an affine constraint set, with matrix completion a notable special case. Because the problem is NP-hard in general, it is common to replace the matrix rank with the nuclear norm, which acts as a convenient convex surrogate. While elegant theoretical conditions elucidate when this replacement is likely to be successful, they are highly restrictive and convex algorithms fail when the ambient rank is too high or when the constraint set is poorly structured. Non-convex alternatives fare somewhat better when carefully tuned; however, convergence to locally optimal solutions remains a continuing source of failure. Against this backdrop we derive a deceptively simple and parameter-free probabilistic PCA-like algorithm that is capable, over a wide battery of empirical tests, of successful recovery even at the theoretical limit where the number of measurements equal the degrees of freedom in the unknown low-rank matrix. Somewhat surprisingly, this is possible even when the affine constraint set is highly ill-conditioned. While proving general recovery guarantees remains evasive for non-convex algorithms, Bayesian-inspired or otherwise, we nonetheless show conditions whereby the underlying cost function has a unique stationary point located at the global optimum; no existing cost function we are aware of satisfies this same property. We conclude with a simple computer vision application involving image rectification and a standard collaborative filtering benchmark.
|
In the non-convex regime, effective optimization strategies attempt to at least locally minimize the affine rank minimization objective, often exceeding the performance of the convex nuclear norm. For example, @cite_12 derives a family of iterative reweighted least squares (IRLS) algorithms applied to @math with @math as tuning parameters. A related penalty also considered is @math , which maintains an intimate connection with rank given that where @math is a standard indicator function. Consequently, when @math is small, @math behaves much like a scaled and translated version of the rank, albeit with nonzero gradients away from zero.
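Since the exact penalty is elided above, the sketch below uses the smoothed log-det surrogate sum_i log(sigma_i^2 + gamma) = log det(X^T X + gamma I), a standard choice in this IRLS line of work; for small gamma it separates low-rank from full-rank matrices much like a scaled and translated rank:

```python
import numpy as np

def smoothed_rank_surrogate(X, gamma):
    """sum_i log(sigma_i(X)^2 + gamma): each zero singular value contributes
    log(gamma), so for small gamma the sum behaves like a scaled and
    translated version of rank(X), with nonzero gradients away from zero."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(np.log(s**2 + gamma)))

rng = np.random.default_rng(0)
low_rank = rng.standard_normal((10, 2)) @ rng.standard_normal((2, 10))  # rank 2
full_rank = rng.standard_normal((10, 10))
# Compare at equal Frobenius norm so only the spectrum shape differs.
low_rank /= np.linalg.norm(low_rank)
full_rank /= np.linalg.norm(full_rank)
```

The rank-2 matrix's eight zero singular values each contribute log(gamma), so its surrogate value sits far below that of the full-rank matrix once gamma is small.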
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2136912397"
],
"abstract": [
"The problem of minimizing the rank of a matrix subject to affine constraints has applications in several areas including machine learning, and is known to be NP-hard. A tractable relaxation for this problem is nuclear norm (or trace norm) minimization, which is guaranteed to find the minimum rank matrix under suitable assumptions. In this paper, we propose a family of Iterative Reweighted Least Squares algorithms IRLS-p (with 0 ≤ p ≤ 1), as a computationally efficient way to improve over the performance of nuclear norm minimization. The algorithms can be viewed as (locally) minimizing certain smooth approximations to the rank function. When p = 1, we give theoretical guarantees similar to those for nuclear norm minimization, that is, recovery of low-rank matrices under certain assumptions on the operator defining the constraints. For p < 1, IRLS-p shows better empirical performance in terms of recovering low-rank matrices than nuclear norm minimization. We provide an efficient implementation for IRLS-p, and also present a related family of algorithms, sIRLS-p. These algorithms exhibit competitive run times and improved recovery when compared to existing algorithms for random instances of the matrix completion problem, as well as on the MovieLens movie recommendation data set."
]
}
|
1406.2504
|
1939143390
|
Many applications require recovering a matrix of minimal rank within an affine constraint set, with matrix completion a notable special case. Because the problem is NP-hard in general, it is common to replace the matrix rank with the nuclear norm, which acts as a convenient convex surrogate. While elegant theoretical conditions elucidate when this replacement is likely to be successful, they are highly restrictive and convex algorithms fail when the ambient rank is too high or when the constraint set is poorly structured. Non-convex alternatives fare somewhat better when carefully tuned; however, convergence to locally optimal solutions remains a continuing source of failure. Against this backdrop we derive a deceptively simple and parameter-free probabilistic PCA-like algorithm that is capable, over a wide battery of empirical tests, of successful recovery even at the theoretical limit where the number of measurements equal the degrees of freedom in the unknown low-rank matrix. Somewhat surprisingly, this is possible even when the affine constraint set is highly ill-conditioned. While proving general recovery guarantees remains evasive for non-convex algorithms, Bayesian-inspired or otherwise, we nonetheless show conditions whereby the underlying cost function has a unique stationary point located at the global optimum; no existing cost function we are aware of satisfies this same property. We conclude with a simple computer vision application involving image rectification and a standard collaborative filtering benchmark.
|
The IRLS0 algorithm from @cite_12 represents the best-performing special case of the above, where @math is minimized using a homotopy continuation scheme merged with IRLS. Here a fixed @math is replaced with a decreasing sequence @math , the rationale being that when @math is large, the cost function is relatively smooth and devoid of local minima. As the iterations @math progress, @math is reduced, and the cost behaves more like the matrix rank function. However, because now we are more likely to be within a reasonably good basin of attraction, spurious local minima are more easily avoided. The downside of this procedure is that it requires a pre-defined heuristic for reducing @math , and this schedule may be problem specific. Moreover, there is no guarantee that a global solution will ever be found.
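The continuation schedule can be illustrated as follows. Note this is a plain gradient scheme on the smoothed objective ||P_Omega(X - M)||_F^2 + lam * sum_i log(sigma_i^2 + gamma), chosen for brevity rather than the authors' exact sIRLS update, and the step size, lam, and geometric decay rate are illustrative assumptions:

```python
import numpy as np

def irls0_style_completion(M, mask, gamma0=1.0, decay=0.85, lam=0.01,
                           step=0.1, outer=25, inner=20):
    """Gradient descent on a smoothed rank surrogate with a decreasing gamma
    schedule: early (large-gamma) iterations see a smooth landscape, later
    ones a progressively closer approximation to the matrix rank."""
    X = mask * M            # start from the observed entries
    gamma = gamma0
    n = M.shape[1]
    for _ in range(outer):
        for _ in range(inner):
            # gradient of log det(X^T X + gamma I) is 2 X (X^T X + gamma I)^{-1}
            g_pen = 2.0 * X @ np.linalg.inv(X.T @ X + gamma * np.eye(n))
            g_fit = 2.0 * mask * (X - M)
            X = X - step * (g_fit + lam * g_pen)
        gamma *= decay      # continuation: tighten the rank approximation
    return X

rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(8), rng.standard_normal(8))  # rank-1 target
mask = rng.random((8, 8)) < 0.8                               # ~80% observed
X_hat = irls0_style_completion(M, mask)
```

The schedule (here simply `gamma *= decay` per outer loop) is exactly the heuristic the text refers to: there is no principled stopping rule for gamma, and the right decay rate may be problem specific.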
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2136912397"
],
"abstract": [
"The problem of minimizing the rank of a matrix subject to affine constraints has applications in several areas including machine learning, and is known to be NP-hard. A tractable relaxation for this problem is nuclear norm (or trace norm) minimization, which is guaranteed to find the minimum rank matrix under suitable assumptions. In this paper, we propose a family of Iterative Reweighted Least Squares algorithms IRLS-p (with 0 ≤ p ≤ 1), as a computationally efficient way to improve over the performance of nuclear norm minimization. The algorithms can be viewed as (locally) minimizing certain smooth approximations to the rank function. When p = 1, we give theoretical guarantees similar to those for nuclear norm minimization, that is, recovery of low-rank matrices under certain assumptions on the operator defining the constraints. For p < 1, IRLS-p shows better empirical performance in terms of recovering low-rank matrices than nuclear norm minimization. We provide an efficient implementation for IRLS-p, and also present a related family of algorithms, sIRLS-p. These algorithms exhibit competitive run times and improved recovery when compared to existing algorithms for random instances of the matrix completion problem, as well as on the MovieLens movie recommendation data set."
]
}
|
1406.2504
|
1939143390
|
Many applications require recovering a matrix of minimal rank within an affine constraint set, with matrix completion a notable special case. Because the problem is NP-hard in general, it is common to replace the matrix rank with the nuclear norm, which acts as a convenient convex surrogate. While elegant theoretical conditions elucidate when this replacement is likely to be successful, they are highly restrictive and convex algorithms fail when the ambient rank is too high or when the constraint set is poorly structured. Non-convex alternatives fare somewhat better when carefully tuned; however, convergence to locally optimal solutions remains a continuing source of failure. Against this backdrop we derive a deceptively simple and parameter-free probabilistic PCA-like algorithm that is capable, over a wide battery of empirical tests, of successful recovery even at the theoretical limit where the number of measurements equal the degrees of freedom in the unknown low-rank matrix. Somewhat surprisingly, this is possible even when the affine constraint set is highly ill-conditioned. While proving general recovery guarantees remains evasive for non-convex algorithms, Bayesian-inspired or otherwise, we nonetheless show conditions whereby the underlying cost function has a unique stationary point located at the global optimum; no existing cost function we are aware of satisfies this same property. We conclude with a simple computer vision application involving image rectification and a standard collaborative filtering benchmark.
|
In a related vein, @cite_8 derives a family of iteratively reweighted nuclear norm (IRNN) algorithms that can be applied to virtually any concave non-decreasing function @math , even when @math is non-smooth, unlike IRLS. For effective performance, however, the authors suggest a continuation strategy similar to IRLS0. Moreover, additional tuning parameters are required for different classes of functions @math and it remains unclear which choices are optimal. While the reported results are substantially better than when using the convex nuclear norm, in our experiments IRLS0 seems to perform slightly better, possibly because the quadratic least squares inner loop is less aggressive in the initial stages of optimization than weighted nuclear norm minimization, leading to a better overall trajectory. Regardless, all of these affine rank minimization algorithms fail well before the theoretical recovery limit is reached, when the number of observations @math equals the number of degrees of freedom in the low-rank matrix we wish to recover. Specifically, for an @math , rank @math matrix, the number of degrees of freedom is given by @math , hence @math is the best-case boundary. In practice, if @math is ill-conditioned or degenerate, the achievable limit may be more modest.
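The weighted singular value thresholding (WSVT) inner step can be sketched for a simple denoising objective 0.5*||X - Y||_F^2; the log penalty, lam, and eps below are illustrative assumptions. IRNN's key observation is that the weights are the gradient of the concave penalty at the current singular values, so larger singular values are penalised less:

```python
import numpy as np

def wsvt(Y, w):
    """Weighted singular value thresholding: closed-form prox of
    sum_i w_i * sigma_i when w is non-decreasing (largest singular value
    gets the smallest weight, matching a concave penalty's gradient)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - w, 0.0)) @ Vt

def irnn_step(X, Y, lam=0.05, eps=0.01):
    """One IRNN iteration for min 0.5||X-Y||^2 + lam * sum log(sigma_i + eps):
    re-weight by the penalty's gradient lam/(sigma_i + eps), then threshold."""
    s = np.linalg.svd(X, compute_uv=False)
    return wsvt(Y, lam / (s + eps))

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 8))  # rank-2 signal
Y = M + 0.01 * rng.standard_normal((8, 8))                     # noisy observation
X = Y
for _ in range(3):
    X = irnn_step(X, Y)
```

Small (noise-level) singular values receive large weights and are zeroed outright, while the dominant signal directions are barely shrunk, which is how the scheme improves on uniform nuclear-norm shrinkage.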
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"2075547019"
],
"abstract": [
"As surrogate functions of ℓ0-norm, many nonconvex penalty functions have been proposed to enhance the sparse vector recovery. It is easy to extend these nonconvex penalty functions on singular values of a matrix to enhance low-rank matrix recovery. However, different from convex optimization, solving the nonconvex low-rank minimization problem is much more challenging than the nonconvex sparse minimization problem. We observe that all the existing nonconvex penalty functions are concave and monotonically increasing on [0, ∞). Thus their gradients are decreasing functions. Based on this property, we propose an Iteratively Reweighted Nuclear Norm (IRNN) algorithm to solve the nonconvex nonsmooth low-rank minimization problem. IRNN iteratively solves a Weighted Singular Value Thresholding (WSVT) problem. By setting the weight vector as the gradient of the concave penalty function, the WSVT problem has a closed form solution. In theory, we prove that IRNN decreases the objective function value monotonically, and any limit point is a stationary point. Extensive experiments on both synthetic data and real images demonstrate that IRNN enhances the low-rank matrix recovery compared with state-of-the-art convex algorithms."
]
}
|
1406.2504
|
1939143390
|
Many applications require recovering a matrix of minimal rank within an affine constraint set, with matrix completion a notable special case. Because the problem is NP-hard in general, it is common to replace the matrix rank with the nuclear norm, which acts as a convenient convex surrogate. While elegant theoretical conditions elucidate when this replacement is likely to be successful, they are highly restrictive and convex algorithms fail when the ambient rank is too high or when the constraint set is poorly structured. Non-convex alternatives fare somewhat better when carefully tuned; however, convergence to locally optimal solutions remains a continuing source of failure. Against this backdrop we derive a deceptively simple and parameter-free probabilistic PCA-like algorithm that is capable, over a wide battery of empirical tests, of successful recovery even at the theoretical limit where the number of measurements equal the degrees of freedom in the unknown low-rank matrix. Somewhat surprisingly, this is possible even when the affine constraint set is highly ill-conditioned. While proving general recovery guarantees remains evasive for non-convex algorithms, Bayesian-inspired or otherwise, we nonetheless show conditions whereby the underlying cost function has a unique stationary point located at the global optimum; no existing cost function we are aware of satisfies this same property. We conclude with a simple computer vision application involving image rectification and a standard collaborative filtering benchmark.
|
A third approach relies on replacing the convex nuclear norm with a truncated non-convex surrogate @cite_11 . While some competitive results for image inpainting via matrix completion are shown, in practice the proposed algorithm has many parameters to be tuned via cross-validation. Moreover, recent comparisons contained in @cite_8 show that default settings perform relatively poorly.
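The truncated surrogate itself is simple to state: it keeps only the tail of the spectrum, so it vanishes exactly on matrices of rank at most r (a minimal sketch; the TNNR optimisation procedures of the cited paper are not reproduced here):

```python
import numpy as np

def truncated_nuclear_norm(X, r):
    """||X||_r = nuclear norm minus the sum of the r largest singular values,
    i.e. the sum of the tail singular values sigma_{r+1}, sigma_{r+2}, ..."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s[r:]))

rng = np.random.default_rng(0)
low = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 6))  # rank 2
full = rng.standard_normal((6, 6))
```

Unlike the nuclear norm, which shrinks every singular value, this surrogate leaves the top r untouched; with r = 0 it reduces to the ordinary nuclear norm.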
|
{
"cite_N": [
"@cite_8",
"@cite_11"
],
"mid": [
"2075547019",
"1969698720"
],
"abstract": [
"As surrogate functions of ℓ0-norm, many nonconvex penalty functions have been proposed to enhance the sparse vector recovery. It is easy to extend these nonconvex penalty functions on singular values of a matrix to enhance low-rank matrix recovery. However, different from convex optimization, solving the nonconvex low-rank minimization problem is much more challenging than the nonconvex sparse minimization problem. We observe that all the existing nonconvex penalty functions are concave and monotonically increasing on [0, ∞). Thus their gradients are decreasing functions. Based on this property, we propose an Iteratively Reweighted Nuclear Norm (IRNN) algorithm to solve the nonconvex nonsmooth low-rank minimization problem. IRNN iteratively solves a Weighted Singular Value Thresholding (WSVT) problem. By setting the weight vector as the gradient of the concave penalty function, the WSVT problem has a closed form solution. In theory, we prove that IRNN decreases the objective function value monotonically, and any limit point is a stationary point. Extensive experiments on both synthetic data and real images demonstrate that IRNN enhances the low-rank matrix recovery compared with state-of-the-art convex algorithms.",
"Recovering a large matrix from a small subset of its entries is a challenging problem arising in many real applications, such as image inpainting and recommender systems. Many existing approaches formulate this problem as a general low-rank matrix approximation problem. Since the rank operator is nonconvex and discontinuous, most of the recent theoretical studies use the nuclear norm as a convex relaxation. One major limitation of the existing approaches based on nuclear norm minimization is that all the singular values are simultaneously minimized, and thus the rank may not be well approximated in practice. In this paper, we propose to achieve a better approximation to the rank of matrix by truncated nuclear norm, which is given by the nuclear norm subtracted by the sum of the largest few singular values. In addition, we develop a novel matrix completion algorithm by minimizing the Truncated Nuclear Norm. We further develop three efficient iterative procedures, TNNR-ADMM, TNNR-APGL, and TNNR-ADMMAP, to solve the optimization problem. TNNR-ADMM utilizes the alternating direction method of multipliers (ADMM), while TNNR-APGL applies the accelerated proximal gradient line search method (APGL) for the final optimization. For TNNR-ADMMAP, we make use of an adaptive penalty according to a novel update rule for ADMM to achieve a faster convergence rate. Our empirical study shows encouraging results of the proposed algorithms in comparison to the state-of-the-art matrix completion algorithms on both synthetic and real visual datasets."
]
}
|
1406.2504
|
1939143390
|
Many applications require recovering a matrix of minimal rank within an affine constraint set, with matrix completion a notable special case. Because the problem is NP-hard in general, it is common to replace the matrix rank with the nuclear norm, which acts as a convenient convex surrogate. While elegant theoretical conditions elucidate when this replacement is likely to be successful, they are highly restrictive and convex algorithms fail when the ambient rank is too high or when the constraint set is poorly structured. Non-convex alternatives fare somewhat better when carefully tuned; however, convergence to locally optimal solutions remains a continuing source of failure. Against this backdrop we derive a deceptively simple and parameter-free probabilistic PCA-like algorithm that is capable, over a wide battery of empirical tests, of successful recovery even at the theoretical limit where the number of measurements equal the degrees of freedom in the unknown low-rank matrix. Somewhat surprisingly, this is possible even when the affine constraint set is highly ill-conditioned. While proving general recovery guarantees remains evasive for non-convex algorithms, Bayesian-inspired or otherwise, we nonetheless show conditions whereby the underlying cost function has a unique stationary point located at the global optimum; no existing cost function we are aware of satisfies this same property. We conclude with a simple computer vision application involving image rectification and a standard collaborative filtering benchmark.
|
Finally, a somewhat different class of non-convex algorithms can be derived using a straightforward application of alternating minimization @cite_14 . The basic idea is to assume @math for some low-rank matrices @math and @math and then solve via coordinate descent. The downside of this approach is that it requires that @math and @math be parameterized with the correct rank. In contrast, our emphasis here is on algorithms that require no prior knowledge whatsoever regarding the true rank. Regardless, experimental results suggest that even when the correct rank is provided, these algorithms still cannot match the performance of our proposal. Moreover, from a generalization standpoint, these rank-aware variants are not suitable for embedding in a larger system with multiple low-rank components to estimate, since it is typically not feasible to simultaneously tune multiple rank parameters. Our method, by contrast, can be naturally extended for this purpose.
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"2952489973"
],
"abstract": [
"Alternating minimization represents a widely applicable and empirically successful approach for finding low-rank matrices that best fit the given data. For example, for the problem of low-rank matrix completion, this method is believed to be one of the most accurate and efficient, and formed a major component of the winning entry in the Netflix Challenge. In the alternating minimization approach, the low-rank target matrix is written in a bi-linear form, i.e. @math ; the algorithm then alternates between finding the best @math and the best @math . Typically, each alternating step in isolation is convex and tractable. However the overall problem becomes non-convex and there has been almost no theoretical understanding of when this approach yields a good result. In this paper we present the first theoretical analysis of the performance of alternating minimization for matrix completion, and the related problem of matrix sensing. For both these problems, celebrated recent results have shown that they become well-posed and tractable once certain (now standard) conditions are imposed on the problem. We show that alternating minimization also succeeds under similar conditions. Moreover, compared to existing results, our paper shows that alternating minimization guarantees faster (in particular, geometric) convergence to the true matrix, while allowing a simpler analysis."
]
}
|
1406.2504
|
1939143390
|
Many applications require recovering a matrix of minimal rank within an affine constraint set, with matrix completion a notable special case. Because the problem is NP-hard in general, it is common to replace the matrix rank with the nuclear norm, which acts as a convenient convex surrogate. While elegant theoretical conditions elucidate when this replacement is likely to be successful, they are highly restrictive and convex algorithms fail when the ambient rank is too high or when the constraint set is poorly structured. Non-convex alternatives fare somewhat better when carefully tuned; however, convergence to locally optimal solutions remains a continuing source of failure. Against this backdrop we derive a deceptively simple and parameter-free probabilistic PCA-like algorithm that is capable, over a wide battery of empirical tests, of successful recovery even at the theoretical limit where the number of measurements equals the degrees of freedom in the unknown low-rank matrix. Somewhat surprisingly, this is possible even when the affine constraint set is highly ill-conditioned. While proving general recovery guarantees remains elusive for non-convex algorithms, Bayesian-inspired or otherwise, we nonetheless show conditions whereby the underlying cost function has a unique stationary point located at the global optimum; no existing cost function we are aware of satisfies this same property. We conclude with a simple computer vision application involving image rectification and a standard collaborative filtering benchmark.
|
From a probabilistic perspective, previous work has applied Bayesian formalisms to rank minimization problems, although not specifically within an affine constraint set. For example, @cite_10 @cite_15 @cite_21 derive robust PCA algorithms built upon the linear summation of a rank penalty and an element-wise sparsity penalty. In particular, @cite_15 applies an MCMC sampling approach for posterior inference, but the resulting iterations are neither scalable, amenable to detailed analysis, nor readily adaptable to affine constraints. In contrast, @cite_10 applies a similar probabilistic model but performs inference using a variational mean-field approximation. While the special case of matrix completion is considered, from an empirical standpoint its estimation accuracy is not competitive with the state-of-the-art non-convex algorithms mentioned above. Finally, without the element-wise sparsity component intrinsic to robust PCA (which is not our focus here), @cite_21 simply collapses to a regular PCA model with a closed-form solution, so the challenges faced in solving ) do not apply. Consequently, general affine constraints really are a key differentiating factor.
|
{
"cite_N": [
"@cite_15",
"@cite_21",
"@cite_10"
],
"mid": [
"2141159272",
"2171170856",
"2153742431"
],
"abstract": [
"A hierarchical Bayesian model is considered for decomposing a matrix into low-rank and sparse components, assuming the observed matrix is a superposition of the two. The matrix is assumed noisy, with unknown and possibly non-stationary noise statistics. The Bayesian framework infers an approximate representation for the noise statistics while simultaneously inferring the low-rank and sparse-outlier contributions; the model is robust to a broad range of noise levels, without having to change model hyperparameter settings. In addition, the Bayesian framework allows exploitation of additional structure in the matrix. For example, in video applications each row (or column) corresponds to a video frame, and we introduce a Markov dependency between consecutive rows in the matrix (corresponding to consecutive frames in the video). The properties of this Markov process are also inferred based on the observed matrix, while simultaneously denoising and recovering the low-rank and sparse components. We compare the Bayesian model to a state-of-the-art optimization-based implementation of robust PCA; considering several examples, we demonstrate competitive performance of the proposed model.",
"In many applications that require matrix solutions of minimal rank, the underlying cost function is non-convex leading to an intractable, NP-hard optimization problem. Consequently, the convex nuclear norm is frequently used as a surrogate penalty term for matrix rank. The problem is that in many practical scenarios there is no longer any guarantee that we can correctly estimate generative low-rank matrices of interest, theoretical special cases notwithstanding. Consequently, this paper proposes an alternative empirical Bayesian procedure built upon a variational approximation that, unlike the nuclear norm, retains the same globally minimizing point estimate as the rank function under many useful constraints. However, locally minimizing solutions are largely smoothed away via marginalization, allowing the algorithm to succeed when standard convex relaxations completely fail. While the proposed methodology is generally applicable to a wide range of low-rank applications, we focus our attention on the robust principal component analysis problem (RPCA), which involves estimating an unknown low-rank matrix with unknown sparse corruptions. Theoretical and empirical evidence are presented to show that our method is potentially superior to related MAP-based approaches, for which the convex principal component pursuit (PCP) algorithm (, 2011) can be viewed as a special case.",
"Recovery of low-rank matrices has recently seen significant activity in many areas of science and engineering, motivated by recent theoretical results for exact reconstruction guarantees and interesting practical applications. A number of methods have been developed for this recovery problem. However, a principled method for choosing the unknown target rank is generally not provided. In this paper, we present novel recovery algorithms for estimating low-rank matrices in matrix completion and robust principal component analysis based on sparse Bayesian learning (SBL) principles. Starting from a matrix factorization formulation and enforcing the low-rank constraint in the estimates as a sparsity constraint, we develop an approach that is very effective in determining the correct rank while providing high recovery performance. We provide connections with existing methods in other similar problems and empirical results and comparisons with current state-of-the-art methods that illustrate the effectiveness of this approach."
]
}
|
1406.2504
|
1939143390
|
Many applications require recovering a matrix of minimal rank within an affine constraint set, with matrix completion a notable special case. Because the problem is NP-hard in general, it is common to replace the matrix rank with the nuclear norm, which acts as a convenient convex surrogate. While elegant theoretical conditions elucidate when this replacement is likely to be successful, they are highly restrictive and convex algorithms fail when the ambient rank is too high or when the constraint set is poorly structured. Non-convex alternatives fare somewhat better when carefully tuned; however, convergence to locally optimal solutions remains a continuing source of failure. Against this backdrop we derive a deceptively simple and parameter-free probabilistic PCA-like algorithm that is capable, over a wide battery of empirical tests, of successful recovery even at the theoretical limit where the number of measurements equals the degrees of freedom in the unknown low-rank matrix. Somewhat surprisingly, this is possible even when the affine constraint set is highly ill-conditioned. While proving general recovery guarantees remains elusive for non-convex algorithms, Bayesian-inspired or otherwise, we nonetheless show conditions whereby the underlying cost function has a unique stationary point located at the global optimum; no existing cost function we are aware of satisfies this same property. We conclude with a simple computer vision application involving image rectification and a standard collaborative filtering benchmark.
|
From a motivational angle, the basic probabilistic model with which we begin our development can be interpreted as a carefully re-parameterized generalization of the probabilistic PCA model from @cite_25 . This will ultimately lead to a non-convex algorithm devoid of the heuristic tuning strategies mentioned above, but nonetheless still uniformly superior in terms of estimation accuracy. We emphasize that, although we employ a Bayesian entry point for our algorithmic strategy, final justification of the model will be based entirely on properties of the cost function that emerges, rather than any putative belief in the actual validity of the assumed prior distributions or likelihood function. This is quite unlike the vast majority of existing Bayesian approaches.
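As a point of reference, the probabilistic PCA model of @cite_25 admits a closed-form maximum-likelihood solution via an eigendecomposition of the sample covariance. The sketch below illustrates that standard baseline only (the function name is ours; this is the textbook PPCA estimate, not the re-parameterized generalization developed in the paper):

```python
import numpy as np

def ppca_ml(X, q):
    """Closed-form maximum-likelihood probabilistic PCA (standard estimate).

    X : (n, d) data matrix; q : latent dimension.
    Returns (W, sigma2): factor loadings and isotropic noise variance.
    """
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / len(X)                      # sample covariance
    evals, evecs = np.linalg.eigh(S)
    evals, evecs = evals[::-1], evecs[:, ::-1]  # sort descending
    sigma2 = evals[q:].mean()                   # avg variance in discarded directions
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return W, sigma2
```

Here `sigma2` absorbs the average variance of the discarded directions, which is the probabilistic ingredient that distinguishes PPCA from ordinary PCA and the natural starting point for the re-parameterization described above.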
|
{
"cite_N": [
"@cite_25"
],
"mid": [
"2125027820"
],
"abstract": [
"Principal component analysis (PCA) is a ubiquitous technique for data analysis and processing, but one which is not based upon a probability model. In this paper we demonstrate how the principal axes of a set of observed data vectors may be determined through maximum-likelihood estimation of parameters in a latent variable model closely related to factor analysis. We consider the properties of the associated likelihood function, giving an EM algorithm for estimating the principal subspace iteratively, and discuss the advantages conveyed by the definition of a probability density function for PCA."
]
}
|
1406.2504
|
1939143390
|
Many applications require recovering a matrix of minimal rank within an affine constraint set, with matrix completion a notable special case. Because the problem is NP-hard in general, it is common to replace the matrix rank with the nuclear norm, which acts as a convenient convex surrogate. While elegant theoretical conditions elucidate when this replacement is likely to be successful, they are highly restrictive and convex algorithms fail when the ambient rank is too high or when the constraint set is poorly structured. Non-convex alternatives fare somewhat better when carefully tuned; however, convergence to locally optimal solutions remains a continuing source of failure. Against this backdrop we derive a deceptively simple and parameter-free probabilistic PCA-like algorithm that is capable, over a wide battery of empirical tests, of successful recovery even at the theoretical limit where the number of measurements equals the degrees of freedom in the unknown low-rank matrix. Somewhat surprisingly, this is possible even when the affine constraint set is highly ill-conditioned. While proving general recovery guarantees remains elusive for non-convex algorithms, Bayesian-inspired or otherwise, we nonetheless show conditions whereby the underlying cost function has a unique stationary point located at the global optimum; no existing cost function we are aware of satisfies this same property. We conclude with a simple computer vision application involving image rectification and a standard collaborative filtering benchmark.
|
Turning to analytical issues, a number of celebrated theoretical results dictate conditions whereby substitution of the rank function with the convex nuclear norm in ) is nonetheless guaranteed to still produce the minimal rank solution. For example, if @math is a Gaussian iid measurement ensemble and @math represents the optimal solution to ) with @math , then with high probability as the problem dimensions grow large, the minimum nuclear norm feasible solution will equal @math if the number of measurements @math satisfies @math @cite_2 .
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2167077875"
],
"abstract": [
"In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered includes those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases from many technical fields such as sparse vectors (signal processing, statistics) and low-rank matrices (control, statistics), as well as several others including sums of a few permutation matrices (ranked elections, multiobject tracking), low-rank tensors (computer vision, neuroscience), orthogonal matrices (machine learning), and atomic measures (system identification). The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial structure of the atomic norm ball carries a number of favorable properties that are useful for recovering simple models, and an analysis of the underlying convex geometry provides sharp estimates of the number of generic measurements required for exact and robust recovery of models from partial information. These estimates are based on computing the Gaussian widths of tangent cones to the atomic norm ball. When the atomic set has algebraic structure the resulting optimization problems can be solved or approximated via semidefinite programming. The quality of these approximations affects the number of measurements required for recovery, and this tradeoff is characterized via some examples. Thus this work extends the catalog of simple models (beyond sparse vectors and low-rank matrices) that can be recovered from limited linear information via tractable convex programming."
]
}
|
1406.2516
|
2949398319
|
Unlike telephone operators, which pay termination fees to reach the users of another network, Internet Content Providers (CPs) do not pay the Internet Service Providers (ISPs) of users they reach. While the consequent cross subsidization to CPs has nurtured content innovations at the edge of the Internet, it reduces the investment incentives for the access ISPs to expand capacity. As potential charges for terminating CPs' traffic are criticized under the net neutrality debate, we propose to allow CPs to voluntarily subsidize the usage-based fees induced by their content traffic for end-users. We model the regulated subsidization competition among CPs under a neutral network and show how deregulation of subsidization could increase an access ISP's utilization and revenue, strengthening its investment incentives. Although the competition might harm certain CPs, we find that the main cause comes from high access prices rather than the existence of subsidization. Our results suggest that subsidization competition will increase the competitiveness and welfare of the Internet content market; however, regulators might need to regulate access prices if the access ISP market is not competitive enough. We envision that subsidization competition could become a viable model for the future Internet.
|
Historically, the Internet adopted flat-rate prices @cite_31 @cite_25 for simplicity. Economists @cite_20 and computer scientists @cite_19 @cite_38 @cite_11 advocated usage-based pricing, which was shown to provide congestion control @cite_7 , quality of service @cite_15 and economic efficiency @cite_19 , and is adopted by mobile providers @cite_42 . Subsidization competition assumes the adoption of usage-based pricing by access ISPs, under which the consumers' usage-based charges could be subsidized by the CPs.
|
{
"cite_N": [
"@cite_38",
"@cite_7",
"@cite_15",
"@cite_42",
"@cite_19",
"@cite_31",
"@cite_25",
"@cite_20",
"@cite_11"
],
"mid": [
"",
"2099797210",
"",
"",
"2810707749",
"2105903448",
"2152629569",
"2945631165",
""
],
"abstract": [
"",
"Usage-based pricing of offered traffic to a data network can be an effective technique for congestion control. To gain insight into the benefits usage-based pricing offers, the authors propose and study a simple model in which many users wish to transmit packets to a single-server queue. Based on the announced price per packet and the available quality of service (QoS) (e.g., mean delay), each user independently decides whether or not to transmit. Given statistical assumptions about the incoming traffic streams and the QoS as a function of offered load, the equilibrium relationship between price and QoS is determined by a fixed-point equation. The relationships among price, QoS, revenue, and server capacity are illustrated numerically, assuming a particular type of random user population. These examples indicate that adjusting the price to maximize revenue results in an efficient use of service capacity with an associated small mean delay.",
"",
"",
"Operators of multi-service networks require simple charging schemes with which they can fairly recover costs from their users and effectively allocate network resources. This paper studies an approach for computing such charges from simple measurements (time and volume), and relating these to bounds of the effective bandwidth. To achieve economic efficiency, it is necessary that usage-based charging schemes capture the relative amount of resources used by connections. Based on this criteria, we evaluate our approach for real traffic consisting of Internet Wide Area Network traces and MPEG-1 compressed video. Its incentive compatibility is shown with an example involving deterministic multiplexing, and the effect of pricing on a network's equilibrium is investigated for deterministic and statistical multiplexing. Finally, we investigate the incentives for traffic shaping provided by the approach.",
"There are repeating patterns in the histories of communication technologies, including ordinary mail, the telegraph, the telephone, and the Internet. In particular, the typical story for each service is that quality rises, prices decrease, and usage increases to produce increased total revenues. At the same time, prices become simpler.The historical analogies of this paper suggest that the Internet will evolve in a similar way, towards simplicity. The schemes that aim to provide differentiated service levels and sophisticated pricing schemes are unlikely to be widely adopted.Price and quality differentiation are valuable tools that can provide higher revenues and increase utilization efficiency of a network, and thus in general increase social welfare. Such measures, most noticeable in airline pricing, are spreading to many services and products, especially high-tech ones. However, it appears that as communication services become less expensive and are used more frequently, those arguments lose out to customers' desire for simplicity.Flat rates are the simplest form of pricing. Although they have generally been regarded as irrational, and economically and socially undesirable, they have serious advantages. Consumers like them, and are willing to pay extra for them. Further, flat rates are extremely effective in stimulating usage, which is of advantage in a rapidly growing service like the Internet.",
"We study revenue-maximizing pricing by a service provider in a communication network and compare revenues from simple pricing rules to the maximum revenues that are feasible. In particular, we focus on flat entry fees as the simplest pricing rule. We provide a lower bound for the ratio between the revenue from this pricing rule and maximum revenue, which we refer to as the price of simplicity. We characterize what types of environments lead to a low price of simplicity and show that in a range of environments, the loss of revenue from using simple entry fees is small. We then study the price of simplicity for a simple non-linear pricing (price discrimination) scheme based on the Paris Metro Pricing. The service provider creates different service classes and charges differential entry fees for these classes. We show that the gain from this type of price discrimination is small, particularly in environments in which the simple entry fee pricing leads to a low price of simplicity.",
"This is a list of Frequently Asked Questions about usage-based pricing of the Internet. We argue that usage-based pricing is likely to come sooner or later and that some serious thought should be devoted to devising a sensible system of usage-based pricing.",
""
]
}
|
1406.1923
|
2949318254
|
In this paper, we address the problem of broadcasting in a wireless network under a novel communication model: the swamping communication model. In this model, nodes communicate only with those nodes at geometric distance greater than @math and at most @math from them. Communication between nearby nodes under this model can be very time consuming, as the length of the path between two nodes within distance @math is only bounded above by the diameter @math , in many cases. For the @math -node lattice networks, we present algorithms of optimal time complexity, respectively @math for the lattice line and @math for the two-dimensional lattice. We also consider networks of unknown topology of diameter @math and of a parameter @math (granularity). More specifically, we consider networks with @math the minimum distance between any two nodes and @math . We present broadcast algorithms for networks of nodes placed on the line and on the plane with respective time complexities @math and @math , where @math .
|
The fundamental questions of network reliability have received much attention in the context of wired networks, under the assumption that components fail randomly and independently (cf., e.g. @cite_14 @cite_23 @cite_18 @cite_12 and the survey @cite_6 ). On the other hand, empirical work has shown that positive correlation of faults is a more reasonable assumption for networks @cite_22 @cite_3 @cite_11 . In particular, in @cite_11 , the authors provide empirical evidence that data packet losses are spatially correlated in networks. Moreover, in @cite_22 , the authors state that the environment provides many phenomena that may lead to spatially correlated faults. More recently, in @cite_1 , a gap was demonstrated between the fault-tolerance of networks when faults occur independently as opposed to when they occur with positive correlation. To the best of our knowledge, this was the first paper to provide analytic results concerning network fault-tolerant communication in the presence of positively correlated faults for arbitrary networks.
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_22",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_23",
"@cite_12",
"@cite_11"
],
"mid": [
"2031578616",
"2045940022",
"2107905431",
"1522267679",
"2063105299",
"",
"",
"2071898078",
"2105269208"
],
"abstract": [
"We consider the problem of broadcasting in an n-node hypercube whose links and nodes fail independently with given probabilities p and q, respectively. Information, originally held in the source, has to reach all other fault-free nodes. Messages may be directly transmitted to adjacent nodes only, and every node may communicate with at most one neighbour in a unit of time. A message can be transmitted only if both communicating neighbours and the link joining them are fault-free. For parameters p and q satisfying (1 − p)(1 − q) ≥ 0.99 (e.g. p = q = 0.5), we give an algorithm working in time O(log n) and broadcasting source information to all fault-free nodes with probability exceeding 1 − cn^(−e) for some positive constants e, c depending on p and q but not depending on n.",
"Abstract We consider the problem of broadcasting in an n -vertex graph a message that originates from a given vertex, in the presence of random edge faults. If the number of edge faults is at most proportional to the total number of edges, there are networks for which the broadcast can be done in time O(log n ), with high probability.",
"Previously proposed sensor network data dissemination schemes require periodic low-rate flooding of data in order to allow recovery from failure. We consider constructing two kinds of multipaths to enable energy efficient recovery from failure of the shortest path between source and sink. Disjoint multipath has been studied in the literature. We propose a novel braided multipath scheme, which results in several partially disjoint multipath schemes. We find that braided multipaths are a viable alternative for energy-efficient recovery from isolated and patterned failures.",
"The aim of this paper is to study communication in networks where nodes fail in a random dependent way. In order to capture fault dependencies, we introduce the neighborhood fault model, where damaging events, called spots, occur randomly and independently with probability p at nodes of a network, and cause faults in the given node and all of its neighbors. Faults at distance at most 2 become dependent in this model and are positively correlated. We investigate the impact of spot probability on feasibility and time of communication in the fault-free part of the network. We show a network which supports fast communication with high probability, if p = 1/(c log n). We also show that communication is not feasible with high probability in most classes of networks, for constant spot probabilities. For smaller spot probabilities, high probability communication is supported even by bounded degree networks. It is shown that the torus supports communication with high probability when p decreases faster than 1/n^(1/2), and does not when p ∈ 1/O(n^(1/2)). Furthermore, a network built of tori is designed, with the same fault-tolerance properties and additionally supporting fast communication. We show, however, that networks of degree bounded by a constant d do not support communication with high probability, if p ∈ 1/O(n^(1/d)). While communication in networks with independent faults was widely studied, this is the first analytic paper which investigates network communication for random dependent faults.",
"Broadcasting and gossiping are fundamental tasks in network communication. In broadcasting, or one-to-all communication, information originally held in one node of the network (called the source) must be transmitted to all other nodes. In gossiping, or all-to-all communication, every node holds a message which has to be transmitted to all other nodes. As communication networks grow in size, they become increasingly vulnerable to component failures. Thus, capabilities for fault-tolerant broadcasting and gossiping gain importance. The present paper is a survey of the fast-growing area of research investigating these capabilities. We focus on two most important efficiency measures of broadcasting and gossiping algorithms: running time and number of elementary transmissions required by the communication process. We emphasize the unifying thread in most results from the research in fault-tolerant communication: the trade-offs between efficiency of communication schemes and their fault-tolerance. © 1996 John Wiley & Sons, Inc.",
"",
"",
"We construct and analyze a fast broadcasting algorithm working in the presence of Byzantine component faults. Such faults are particularly difficult to deal with, as faulty components may behave arbitrarily (even maliciously) as transmitters, by either blocking, rerouting, or altering transmitted messages in a way most detrimental to the broadcasting process. We assume that links and nodes of a communication network are subject to Byzantine failures, and that faults are distributed randomly and independently, with link failure probability p and node failure probability q, these parameters being constant and satisfying the inequality (1 − p)^2(1 − q) > 1/2. A broadcasting algorithm, working in an n-node network, is called almost safe if the probability of its correctness is at least 1 − 1/n, for sufficiently large n. Thus the robustness of the algorithm grows with the size of the network. Our main result is the design and analysis of an almost safe broadcasting algorithm working in time O(log^2 n) and using O(n log n) messages in n-node networks. Under a stronger assumption on failure probability parameters, namely (1 − p)^2(1 − q)^2 > 1/2, our algorithm can be modified to work in time O(log^2 n log log n), also using O(n log n) messages. The novelty of our algorithm is that it can cope with the most difficult type of faults, potentially affecting all components of the network (both its links and nodes), and that it is simultaneously robust and efficient.",
"The success of multicast applications such as Internet teleconferencing illustrates the tremendous potential of applications built upon wide-area multicast communication services. A critical issue for such multicast applications and the higher layer protocols required to support them is the manner in which packet losses occur within the multicast network. We present and analyze packet loss data collected on multicast-capable hosts at 17 geographically distinct locations in Europe and the US and connected via the MBone. We experimentally and quantitatively examine the spatial and temporal correlation in packet loss among participants in a multicast session. Our results show that there is some spatial correlation in loss among the multicast sites. However, the shared loss in the backbone of the MBone is, for the most part, low. We find a fairly significant amount of burst loss (consecutive losses) at most sites. In every dataset, at least one receiver experienced a long loss burst greater than 8 seconds (100 consecutive packets). A predominance of solitary loss was observed in all cases, but periodic losses of length approximately 0.6 seconds and at 30 second intervals were seen by some receivers."
]
}
|
1406.1923
|
2949318254
|
In this paper, we address the problem of broadcasting in a wireless network under a novel communication model: the swamping communication model. In this model, nodes communicate only with those nodes at geometric distance greater than @math and at most @math from them. Communication between nearby nodes under this model can be very time consuming, as the length of the path between two nodes within distance @math is only bounded above by the diameter @math , in many cases. For the @math -node lattice networks, we present algorithms of optimal time complexity, respectively @math for the lattice line and @math for the two-dimensional lattice. We also consider networks of unknown topology of diameter @math and of a parameter @math (granularity). More specifically, we consider networks with @math the minimum distance between any two nodes and @math . We present broadcast algorithms for networks of nodes placed on the line and on the plane with respective time complexities @math and @math , where @math .
|
The question of communication in networks of unknown topology has been widely studied in recent years. In @cite_17 , the authors state that broadcasting algorithms which function in unknown GRNs also function in the resulting fault-free connected components of faulty GRNs. A basic performance criterion of broadcasting algorithms is the time necessary for the algorithm to terminate; in synchronous networks, this time is measured as the number of communication rounds. For networks whose fault-free part has a diameter @math , @math is a trivial lower bound on broadcast time, but optimal running time is a function of the information available to the algorithms (cf., e.g., @cite_5 ). For instance, in @cite_5 , an algorithm was obtained which accomplishes broadcast in arbitrary GRNs in time @math under the assumption that nodes have a large amount of knowledge about the network, i.e. given that all nodes have a knowledge radius larger than @math , the largest communication radius. The authors also show that algorithms broadcasting in time @math are asymptotically optimal, for unknown GRNs when nodes communicate spontaneously and either can detect collisions or have knowledge of node locations at some positive distance @math , arbitrarily small.
|
{
"cite_N": [
"@cite_5",
"@cite_17"
],
"mid": [
"2061572891",
"2042997796"
],
"abstract": [
"We consider deterministic broadcasting in geometric radio networks (GRN) whose nodes know only a limited part of the network. Nodes of a GRN are situated in the plane and each of them is equipped with a transmitter of some range r. A signal from this node can reach all nodes at distance at most r from it but if a node u is situated within the range of two nodes transmitting simultaneously, then a collision occurs at u and u cannot get any message. Each node knows the part of the network within knowledge radius s from it, i.e., it knows the positions, labels and ranges of all nodes at distance at most s. The aim of this paper is to study the impact of knowledge radius s on the time of deterministic broadcasting in a GRN with n nodes and eccentricity D of the source. Our results show sharp contrasts between the efficiency of broadcasting in geometric radio networks as compared to broadcasting in arbitrary graphs. They also show quantitatively the impact of various types of knowledge available to nodes on broadcasting time in GRN. Efficiency of broadcasting is influenced by knowledge radius, knowledge of individual positions when knowledge radius is zero, and awareness of collisions.",
"We study the completion time of broadcast operations on static ad hoc wireless networks in presence of unpredictable and dynamical faults.Concerning oblivious fault-tolerant distributed protocols, we provide an Ω(Dn) lower bound where n is the number of nodes of the network and D is the source eccentricity in the fault-free part of the network. Rather surprisingly, this lower bound implies that the simple Round Robin protocol, working in O(Dn) time, is an optimal fault-tolerant oblivious protocol. Then, we demonstrate that networks of o(n log n) maximum in-degree admit faster oblivious protocols. Indeed, we derive an oblivious protocol having O(D min n, Δ log n ) completion time on any network of maximum in-degree Δ.Finally, we address the question whether adaptive protocols can be faster than oblivious ones. We show that the answer is negative at least in the general setting: we indeed prove an Ω(Dn) lower bound when D = Θ(√n). This clearly implies that no (adaptive) protocol can achieve, in general, o(Dn) completion time."
]
}
|
1406.1923
|
2949318254
|
In this paper, we address the problem of broadcasting in a wireless network under a novel communication model: the swamping communication model. In this model, nodes communicate only with those nodes at geometric distance greater than @math and at most @math from them. Communication between nearby nodes under this model can be very time consuming, as the length of the path between two nodes within distance @math is only bounded above by the diameter @math , in many cases. For the @math -node lattice networks, we present algorithms of optimal time complexity, respectively @math for the lattice line and @math for the two-dimensional lattice. We also consider networks of unknown topology of diameter @math and of a parameter @math ( granularity ). More specifically, we consider networks with @math the minimum distance between any two nodes and @math . We present broadcast algorithms for networks of nodes placed on the line and on the plane with respective time complexities @math and @math , where @math .
|
More recently, in @cite_2 , it was shown that the time of broadcast depends on the network diameter @math and the smallest geometric distance @math (denoted @math in their paper) between any two nodes. Under the conditional wake-up model, where nodes start transmitting only after hearing a first message, the authors proposed an algorithm that completes broadcasting in time @math . They also proved that, in this context, every broadcasting algorithm requires @math time. Under the spontaneous wake-up model, where nodes may transmit from the beginning of the communication process, the authors combined two sub-optimal algorithms into one algorithm, which completes broadcasting in optimal time @math . The results in @cite_2 hold under the assumption that nodes can communicate with other nearby nodes. We, on the other hand, consider the communication model where nodes are prevented from communicating with other nodes nearby.
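The interleaving of two sub-optimal algorithms mentioned above can be sketched abstractly: executing one step of each algorithm per round finishes as soon as the faster one does, within twice the smaller running time. `FixedTimeAlgo` is an illustrative stand-in, not either of the cited broadcasting algorithms:

```python
class FixedTimeAlgo:
    """Stand-in for an algorithm that completes after t steps."""
    def __init__(self, t):
        self.remaining = t

    def step(self):
        self.remaining -= 1
        return self.remaining <= 0  # True once finished

def interleaved_steps(t_a, t_b):
    """Alternate single steps of two algorithms; stop when either one
    completes. Total steps used: at most 2 * min(t_a, t_b)."""
    a, b = FixedTimeAlgo(t_a), FixedTimeAlgo(t_b)
    steps = 0
    while True:
        steps += 1
        if a.step():
            return steps
        steps += 1
        if b.step():
            return steps

print(interleaved_steps(100, 7))  # 14 = 2 * min(100, 7)
```

Up to the constant factor 2, the combined algorithm therefore inherits the better of the two asymptotic running times.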
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2088895311"
],
"abstract": [
"The paper considers broadcasting in radio networks, modeled as unit disk graphs (UDG). Such networks occur in wireless communication between sites (e.g., stations or sensors) situated in a terrain. Network stations are represented by points in the Euclidean plane, where a station is connected to all stations at distance at most 1 from it. A message transmitted by a station reaches all its neighbors, but a station hears a message (receives the message correctly) only if exactly one of its neighbors transmits at a given time step. One station of the network, called the source, has a message which has to be disseminated to all other stations. Stations are unaware of the network topology. Two broadcasting models are considered. In the conditional wake up model, the stations other than the source are initially idle and cannot transmit until they hear a message for the first time. In the spontaneous wake up model, all stations are awake (and may transmit messages) from the beginning. It turns out that broadcasting time depends on two parameters of the UDG network, namely, its diameter D and its granularity g, which is the inverse of the minimum distance between any two stations. We present a deterministic broadcasting algorithm which works in time O (D g) under the conditional wake up model and prove that broadcasting in this model cannot be accomplished by any deterministic algorithm in time better than ( (D g ) ) . For the spontaneous wake up model, we design two deterministic broadcasting algorithms: the first works in time O (D + g 2) and the second in time O (D log g). While neither of these algorithms alone is optimal for all parameter values, we prove that the algorithm obtained by interleaving their steps, and thus working in time ( O ( D + g^2, D g ) ), turns out to be optimal by establishing a matching lower bound."
]
}
|
1406.1923
|
2949318254
|
In this paper, we address the problem of broadcasting in a wireless network under a novel communication model: the swamping communication model. In this model, nodes communicate only with those nodes at geometric distance greater than @math and at most @math from them. Communication between nearby nodes under this model can be very time consuming, as the length of the path between two nodes within distance @math is only bounded above by the diameter @math , in many cases. For the @math -node lattice networks, we present algorithms of optimal time complexity, respectively @math for the lattice line and @math for the two-dimensional lattice. We also consider networks of unknown topology of diameter @math and of a parameter @math ( granularity ). More specifically, we consider networks with @math the minimum distance between any two nodes and @math . We present broadcast algorithms for networks of nodes placed on the line and on the plane with respective time complexities @math and @math , where @math .
|
In @cite_21 , under the conditional wakeup model, @math was shown to be the tight lower bound on broadcasting time. However, for networks where node locations are restricted to the vertices of a grid of squares of size @math , the authors proposed an @math -time broadcasting algorithm, thus showing that the broadcast time is not always linearly dependent on @math .
|
{
"cite_N": [
"@cite_21"
],
"mid": [
"1984466682"
],
"abstract": [
"The paper studies broadcasting in radio networks whose stations are represented by points in the Euclidean plane. In any given time step, a station can either receive or transmit. A message transmitted from station (v) is delivered to every station (u) at distance at most (1) from (v), but (u) successfully hears the message if and only if (v) is the only station at distance at most (1) from (u) that transmitted in this time step. A designated source station has a message that should be disseminated throughout the network. All stations other than the source are initially idle and wake up upon the first time they hear the source message. It is shown in [11] that the time complexity of broadcasting depends on two parameters of the network, namely, its diameter (in hops) (D) and a lower bound (d) on the Euclidean distance between any two stations. The inverse of (d) is called the granularity of the network, denoted by (g). Specifically, the authors of [11] present a broadcasting algorithm that works in time (O (D g)) and prove that every broadcasting algorithm requires Ω (D √g) time. In this paper, we distinguish between the arbitrary deployment setting, originally studied in [11], in which stations can be placed everywhere in the plane, and the new grid deployment setting, in which stations are only allowed to be placed on a (d)-spaced grid. Does the latter (more restricted) setting provides any speedup in broadcasting time complexity? Although the (O (D g)) broadcasting algorithm of [11] works under the (original) arbitrary deployment setting, it turns out that the Ω (D √g) lower bound remains valid under the grid deployment setting. Still, the above question is left unanswered. The current paper answers this question affirmatively by presenting a provable separation between the two deployment settings. 
We establish a tight lower bound on the time complexity of broadcasting under the arbitrary deployment setting proving that broadcasting cannot be completed in less than Ω ( D √g) time. For the grid deployment setting, we develop a broadcasting algorithm that runs in time O ( D g5 6 log g), thus breaking the linear dependency on (g)."
]
}
|
1406.1923
|
2949318254
|
In this paper, we address the problem of broadcasting in a wireless network under a novel communication model: the swamping communication model. In this model, nodes communicate only with those nodes at geometric distance greater than @math and at most @math from them. Communication between nearby nodes under this model can be very time consuming, as the length of the path between two nodes within distance @math is only bounded above by the diameter @math , in many cases. For the @math -node lattice networks, we present algorithms of optimal time complexity, respectively @math for the lattice line and @math for the two-dimensional lattice. We also consider networks of unknown topology of diameter @math and of a parameter @math ( granularity ). More specifically, we consider networks with @math the minimum distance between any two nodes and @math . We present broadcast algorithms for networks of nodes placed on the line and on the plane with respective time complexities @math and @math , where @math .
|
In @cite_7 , the problem of broadcasting in unknown topology networks was studied in the setting where nodes do not perceive their location accurately and do not know the minimum distance @math between them. Under the spontaneous wake up model, the authors gave a broadcasting algorithm that maintains optimal time complexity @math in these conditions, given an upper bound @math on the inaccuracy of node location perception; beyond this upper bound on inaccuracy, the authors showed that broadcasting is impossible. The solution proposed in @cite_7 uses the election of ambassadors that represent a large number of nodes and communicate information to regions of the graph in range. In contrast, we show the impossibility of using this mechanism in the presence of swamping.
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"1736159137"
],
"abstract": [
"We study broadcasting time in radio networks, modeled as unit disk graphs (UDG). showed that broadcasting time depends on two parameters of the UDG network, namely, its diameter D(in hops) and its granularityg. The latter is the inverse of the densitydof the network which is the minimum Euclidean distance between any two stations. They proved that the minimum broadcasting time is @math , assuming that each node knows the density of the network and knows exactly its own position in the plane. In many situations these assumptions are unrealistic. Does removing them influence broadcasting time? The aim of this paper is to answer this question, hence we assume that density is unknown and nodes perceive their position with some unknown error margin i¾?. It turns out that this combination of missing and inaccurate information substantially changes the problem: the main new challenge becomes fast broadcasting in sparse networks (with constant density), when optimal time is O(D). Nevertheless, under our very weak scenario, we construct a broadcasting algorithm that maintains optimal time @math for all networks with at least 2 nodes, of diameter Dand granularity g, if each node perceives its position with error margin i¾?= i¾?d, for any (unknown) constant i¾?< 1 2. Rather surprisingly, the minimum time of an algorithm stopping if the source is alone, turns out to be i¾?(D+ g2). Thus, the mere stopping requirement for the special case of the lonely source causes an exponential increase in broadcasting time, for networks of any density and any small diameter. Finally, broadcasting is impossible if i¾?i¾? d 2."
]
}
|
1406.1923
|
2949318254
|
In this paper, we address the problem of broadcasting in a wireless network under a novel communication model: the swamping communication model. In this model, nodes communicate only with those nodes at geometric distance greater than @math and at most @math from them. Communication between nearby nodes under this model can be very time consuming, as the length of the path between two nodes within distance @math is only bounded above by the diameter @math , in many cases. For the @math -node lattice networks, we present algorithms of optimal time complexity, respectively @math for the lattice line and @math for the two-dimensional lattice. We also consider networks of unknown topology of diameter @math and of a parameter @math ( granularity ). More specifically, we consider networks with @math the minimum distance between any two nodes and @math . We present broadcast algorithms for networks of nodes placed on the line and on the plane with respective time complexities @math and @math , where @math .
|
In 2003, Kuhn, Wattenhofer and Zollinger @cite_8 introduced a variant of the UDG model handling transmissions and interference separately, named the Quasi Unit Disk Graph (Q-UDG) model. In this model, two concentric discs are associated with each station, the smaller representing its communication range and the larger representing its interference range. In our work, we consider a very different situation: as in traditional radio communication models, interference and communication ranges are equal; contrary to previous work, we add the swamping range, a self-interference range, which must be smaller than the communication range.
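The Q-UDG edge rule can be stated as a tiny predicate. The sketch below uses `qudg_edge_status` as a hypothetical helper name, and its handling of the boundary cases (distance exactly d or exactly 1) follows one common convention that may differ from the cited paper:

```python
import math

def qudg_edge_status(p, q, d):
    """Quasi unit disk graph edge rule with parameter 0 < d <= 1:
    close pairs are always connected, distant pairs never are, and the
    intermediate band is left unspecified (adversarial choice)."""
    dist = math.dist(p, q)
    if dist <= d:
        return "edge"
    if dist > 1.0:
        return "no edge"
    return "unspecified"

print(qudg_edge_status((0, 0), (0.3, 0), 0.5))  # edge
print(qudg_edge_status((0, 0), (0.8, 0), 0.5))  # unspecified
print(qudg_edge_status((0, 0), (1.2, 0), 0.5))  # no edge
```

The "unspecified" band is what separates Q-UDG from the plain UDG model, where every pair within distance 1 is connected.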
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"2065248207"
],
"abstract": [
"In this paper we study a model for ad-hoc networks close enough to reality as to represent existing networks, being at the same time concise enough to promote strong theoretical results. The Quasi Unit Disk Graph model contains all edges shorter than a parameter d between 0 and 1 and no edges longer than 1.We show that .in comparison to the cost known on Unit Disk Graphs .the complexity results in this model contain the additional factor 1 d2. We prove that in Quasi Unit Disk Graphs flooding is an asymptotically message-optimal routing technique, provide a geometric routing algorithm being more efficient above all in dense networks, and show that classic geometric routing is possible with the same performance guarantees as for Unit Disk Graphs if d = 1 v2."
]
}
|
1406.2235
|
2115292639
|
Collaborative filtering is used to recommend items to a user without requiring a knowledge of the item itself and tends to outperform other techniques. However, collaborative filtering suffers from the cold-start problem, which occurs when an item has not yet been rated or a user has not rated any items. Incorporating additional information, such as item or user descriptions, into collaborative filtering can address the cold-start problem. In this paper, we present a neural network model with latent input variables (latent neural network or LNN) as a hybrid collaborative filtering technique that addresses the cold-start problem. LNN outperforms a broad selection of content-based filters (which make recommendations based on item descriptions) and other hybrid approaches while maintaining the accuracy of state-of-the-art collaborative filtering techniques.
|
Pure collaborative filtering (CF) techniques are not able to handle the cold-start problem for items or users. As a result, several hybrid methods have been developed that incorporate item and/or user descriptions into collaborative filtering approaches. The most common, as surveyed by Burke @cite_14 , involves using separate CBF and CF techniques and then combining their outputs (e.g., a weighted average, combining the output from both techniques, or switching depending on the context) or using the output from one technique as input to another. Content-boosted collaborative filtering @cite_4 uses CBF to fill in the missing values in the ratings matrix, and the dense ratings matrix is then passed to a collaborative filtering method (in their implementation, a neighbor-based CF). Other work addresses the cold-start problem by building user or item descriptions for later use in a recommendation system @cite_20 .
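The content-boosted pipeline described above (densify the ratings matrix with a content-based predictor, then apply neighbor-based CF) can be sketched as follows. The feature-similarity content model and the cosine-weighted CF step are illustrative stand-ins, not the cited implementation:

```python
import numpy as np

def densify_with_content(R, F):
    """Step 1 of content-boosted CF: fill missing ratings (np.nan) with a
    content-based estimate, here a feature-similarity-weighted mean of the
    user's own ratings. R: users x items, F: items x features."""
    sim = F @ F.T  # item-item feature similarity
    dense = R.copy()
    for u in range(R.shape[0]):
        rated = ~np.isnan(R[u])
        for i in np.where(~rated)[0]:
            w = sim[i, rated]
            dense[u, i] = w @ R[u, rated] / max(w.sum(), 1e-9)
    return dense

def predict(R, F, user, item):
    """Step 2: user-based CF (cosine-weighted average) on the dense matrix."""
    dense = densify_with_content(R, F)
    norms = np.linalg.norm(dense, axis=1)
    sims = dense @ dense[user] / (norms * norms[user] + 1e-9)
    sims[user] = 0.0  # exclude the target user from the neighbourhood
    return sims @ dense[:, item] / max(sims.sum(), 1e-9)

# Item 1 is unrated by user 0; both items share identical features.
R = np.array([[5.0, np.nan], [4.0, 4.0], [2.0, 1.0]])
F = np.array([[1.0, 0.0], [1.0, 0.0]])
print(round(float(predict(R, F, 0, 1)), 2))
```

Because the matrix is dense after step 1, cold-start items still receive a prediction, which is the point of the hybrid design.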
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_20"
],
"mid": [
"281665770",
"2168118654",
"2018571751"
],
"abstract": [
"Recommender systems represent user preferences for the purpose of suggesting items to purchase or examine. They have become fundamental applications in electronic commerce and information access, providing suggestions that effectively prune large information spaces so that users are directed toward those items that best meet their needs and preferences. A variety of techniques have been proposed for performing recommendation, including content-based, collaborative, knowledge-based and other techniques. To improve performance, these methods have sometimes been combined in hybrid recommenders. This paper surveys the landscape of actual and possible hybrid recommenders, and introduces a novel hybrid, EntreeC, a system that combines knowledge-based recommendation and collaborative filtering to recommend restaurants. Further, we show that semantic ratings obtained from the knowledge-based part of the system enhance the effectiveness of collaborative filtering.",
"One of the potential advantages of multiple classifier systems is an increased robustness to noise and other imperfections in data. Previous experiments on classification noise have shown that bagging is fairly robust but that boosting is quite sensitive. Decorate is a recently introduced ensemble method that constructs diverse committees using artificial data. It has been shown to generally outperform both boosting and bagging when training data is limited. This paper compares the sensitivity of bagging, boosting, and Decorate to three types of imperfect data: missing features, classification noise, and feature noise. For missing data, Decorate is the most robust. For classification noise, bagging and Decorate are both robust, with bagging being slightly better than Decorate, while boosting is quite sensitive. For feature noise, all of the ensemble methods increase the resilience of the base classifier.",
"A key challenge in recommender system research is how to effectively profile new users, a problem generally known as cold-start recommendation. Recently the idea of progressively querying user responses through an initial interview process has been proposed as a useful new user preference elicitation strategy. In this paper, we present functional matrix factorization (fMF), a novel cold-start recommendation method that solves the problem of initial interview construction within the context of learning user and item profiles. Specifically, fMF constructs a decision tree for the initial interview with each node being an interview question, enabling the recommender to query a user adaptively according to her prior responses. More importantly, we associate latent profiles for each node of the tree --- in effect restricting the latent profiles to be a function of possible answers to the interview questions --- which allows the profiles to be gradually refined through the interview process based on user responses. We develop an iterative optimization algorithm that alternates between decision tree construction and latent profiles extraction as well as a regularization scheme that takes into account of the tree structure. Experimental results on three benchmark recommendation data sets demonstrate that the proposed fMF algorithm significantly outperforms existing methods for cold-start recommendation."
]
}
|
1406.2400
|
1635918143
|
This paper presents a currently bilingual but potentially multilingual FrameNet-based grammar library implemented in Grammatical Framework. The contribution of this paper is two-fold. First, it offers a methodological approach to automatically generate the grammar based on semantico-syntactic valence patterns extracted from FrameNet-annotated corpora. Second, it provides a proof of concept for two use cases illustrating how the acquired multilingual grammar can be exploited in different CNL applications in the domains of arts and tourism.
|
The main difference between this work and the previous approaches to CNL grammars is that we present an effort to exploit a robust and well established semantic model in the grammar development. Our approach can be compared with the work on multilingual verbalisation of modular ontologies using GF and lemon, the Lexicon Model for Ontologies @cite_15 . We use additional lexical information about syntactic arguments for building the concrete syntax.
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"45390577"
],
"abstract": [
"This paper presents an approach to multilingual ontology verbalisation of controlled language based on the Grammatical Framework (GF) and the lemon model. It addresses specific challenges that arise when classes are used to create a consensus-based conceptual framework, in which many parties individually contribute instances. The approach is presented alongside a concrete case, in which ontologies are used to capture business processes by linguistically untrained stakeholders across business disciplines. GF is used to create multilingual grammars that enable transparent multilingual verbalisation. Capturing the instance labels in lemon lexicons reduces the need for GF engineering to the class level: The lemon lexicons with the labels of the instances are converted into GF grammars based on a mapping described in this paper. The grammars are modularised in accordance with the ontology modularisation and can deal with the different styles of label choosing that occur in practice."
]
}
|
1406.2400
|
1635918143
|
This paper presents a currently bilingual but potentially multilingual FrameNet-based grammar library implemented in Grammatical Framework. The contribution of this paper is two-fold. First, it offers a methodological approach to automatically generate the grammar based on semantico-syntactic valence patterns extracted from FrameNet-annotated corpora. Second, it provides a proof of concept for two use cases illustrating how the acquired multilingual grammar can be exploited in different CNL applications in the domains of arts and tourism.
|
The grounding of NLG using the frame semantics theory has been addressed in the work on text-to-scene generation @cite_13 and in the work on text generation for navigational tasks @cite_7 . In that research, the content of frames is utilized through alignment between the frame-semantic structure and the domain-semantic representation. Discourse is supported by applying aggregation and pronominalization techniques. In the CH use case, we also show how an application which utilizes the FN-based grammar can become more discourse-oriented; something that is necessary in actual NLG applications and that has been demonstrated for the CH domain in GF before @cite_16 . In our current approach, the semantic representation of the domain and the linguistic structures of the grammar are based on FN-annotated data.
|
{
"cite_N": [
"@cite_16",
"@cite_13",
"@cite_7"
],
"mid": [
"",
"1592946230",
"2136642445"
],
"abstract": [
"",
"This paper introduces Vignette Semantics, a lexical semantic theory based on Frame Semantics that represents conceptual and graphical relations. We also describe a lexical resource that implements this theory, VigNet, and its application in text-to-scene generation.",
"Route directions are natural language (NL) statements that specify, for a given navigational task and an automatically computed route representation, a sequence of actions to be followed by the user to reach his or her goal. A corpus-based approach to generate route directions involves (i) the selection of elements along the route that need to be mentioned, and (ii) the induction of a mapping from route elements to linguistic structures that can be used as a basis for NL generation. This paper presents an Expectation-Maximization (EM) based algorithm that aligns geographical route representations with semantically annotated NL directions, as a basis for the above tasks. We formulate one basic and two extended models, the latter capturing special properties of the route direction task. Although our current data set is small, both extended models achieve better results than the simple model and a random baseline. The best results are achieved by a combination of both extensions, which outperform the random baseline and the simple model by more than an order of magnitude."
]
}
|
1406.2400
|
1635918143
|
This paper presents a currently bilingual but potentially multilingual FrameNet-based grammar library implemented in Grammatical Framework. The contribution of this paper is two-fold. First, it offers a methodological approach to automatically generate the grammar based on semantico-syntactic valence patterns extracted from FrameNet-annotated corpora. Second, it provides a proof of concept for two use cases illustrating how the acquired multilingual grammar can be exploited in different CNL applications in the domains of arts and tourism.
|
As suggested before @cite_17 , a FN-like approach can be used to deal with polysemy in CNL texts. Although we consider lexicalisation alternatives and restrictions for LUs and FEs, we do not address the problem of selectional restrictions and word sense disambiguation in general.
|
{
"cite_N": [
"@cite_17"
],
"mid": [
"1579696959"
],
"abstract": [
"Computational semantics and logic-based controlled natural languages (CNL) do not address systematically the word sense disambiguation problem of content words, i.e., they tend to interpret only some functional words that are crucial for construction of discourse representation structures. We show that micro-ontologies and multi-word units allow integration of the rich and polysemous multi-domain background knowledge into CNL thus providing interpretation for the content words. The proposed approach is demonstrated by extending the Attempto Controlled English (ACE) with polysemous and procedural constructs resulting in a more natural CNL named PAO covering narrative multi-domain texts."
]
}
|
1406.2008
|
2950113508
|
We introduce a variant of the deterministic rendezvous problem for a pair of heterogeneous agents operating in an undirected graph, which differ in the time they require to traverse particular edges of the graph. Each agent knows the complete topology of the graph and the initial positions of both agents. The agent also knows its own traversal times for all of the edges of the graph, but is unaware of the corresponding traversal times for the other agent. The goal of the agents is to meet on an edge or a node of the graph. In this scenario, we study the time required by the agents to meet, compared to the meeting time @math in the offline scenario in which the agents have complete knowledge about each other's speed characteristics. When no additional assumptions are made, we show that rendezvous in our model can be achieved after time @math in a @math -node graph, and that in some cases this time is essentially the best possible. However, we prove that the rendezvous time can be reduced to @math when the agents are allowed to exchange @math bits of information at the start of the rendezvous process. We then show that under some natural assumption about the traversal times of edges, the hardness of the heterogeneous rendezvous problem can be substantially decreased, both in terms of time required for rendezvous without communication, and the communication complexity of achieving rendezvous in time @math .
|
The rendezvous problem has been thoroughly studied in the literature in different contexts. In a general setting, the rendezvous problem was first mentioned in @cite_13 . Authors investigating rendezvous (cf. @cite_14 for an extensive survey) considered either the geometric scenario (rendezvous in an interval of the real line, see, e.g., @cite_21 @cite_20 @cite_7 , or in the plane, see, e.g., @cite_12 @cite_22 ) or the graph scenario (see, e.g., @cite_8 @cite_3 @cite_15 ). A natural extension of the rendezvous problem is that of gathering @cite_19 @cite_9 @cite_0 @cite_6 , in which more than two agents have to meet at one location.
|
{
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_21",
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_15",
"@cite_13",
"@cite_12",
"@cite_20"
],
"mid": [
"1501957312",
"2014192080",
"2034224690",
"1827234103",
"",
"1971467559",
"1526420573",
"1975690872",
"2007267487",
"2010017329",
"2138881680",
"",
"2096182510",
"2030068303"
],
"abstract": [
"Search Theory is one of the original disciplines within the field of Operations Research. It deals with the problem faced by a Searcher who wishes to minimize the time required to find a hidden object, or “target. ” The Searcher chooses a path in the “search space” and finds the target when he is sufficiently close to it. Traditionally, the target is assumed to have no motives of its own regarding when it is found; it is simply stationary and hidden according to a known distribution (e. g. , oil), or its motion is determined stochastically by known rules (e. g. , a fox in a forest). The problems dealt with in this book assume, on the contrary, that the “target” is an independent player of equal status to the Searcher, who cares about when he is found. We consider two possible motives of the target, and divide the book accordingly. Book I considers the zero-sum game that results when the target (here called the Hider) does not want to be found. Such problems have been called Search Games (with the “ze- sum” qualifier understood). Book II considers the opposite motive of the target, namely, that he wants to be found. In this case the Searcher and the Hider can be thought of as a team of agents (simply called Player I and Player II) with identical aims, and the coordination problem they jointly face is called the Rendezvous Search Problem.",
"We consider rendezvous problems in which two players move on the plane and wish to cooperate to minimise their first meeting time. We begin by considering the case where both players are placed such that the vector difference is chosen equiprobably from a finite set. We also consider a situation in which they know they are a distanced apart, but they do not know the direction of the other player. Finally, we give some results for the case in which player 1 knows the initial position of player 2, while player 2 is given information only on the initial distance of player 1.",
"We present two new results for the asymmetric rendezvous problem on the line. We first show that it is never optimal for one player to be stationary during the entire search period in the two-player rendezvous. Then we consider the meeting time of n-players in the worst case and show that it has an asymptotic behavior of n = 2 + O(log n).",
"Two mobile agents having distinct identifiers and located in nodes of an unknown anonymous connected graph, have to meet at some node of the graph. We present fast deterministic algorithms for this rendezvous problem.",
"",
"Two players A and B are randomly placed on a line. The distribution of the distance between them is unknown except that the expected initial distance of the (two) players does not exceed some constant @math The players can move with maximal velocity 1 and would like to meet one another as soon as possible. Most of the paper deals with the asymmetric rendezvous in which each player can use a different trajectory. We find rendezvous trajectories which are efficient against all probability distributions in the above class. (It turns out that our trajectories do not depend on the value of @math ) We also obtain the minimax trajectory of player A if player B just waits for him. This trajectory oscillates with a geometrically increasing amplitude. It guarantees an expected meeting time not exceeding @math We show that, if player B also moves, then the expected meeting time can be reduced to @math The expected meeting time can be further reduced if the players use mixed strategies. We show that if player B rests, then the optimal strategy of player A is a mixture of geometric trajectories. It guarantees an expected meeting time not exceeding @math This value can be reduced even more (below @math ) if player B also moves according to a (correlated) mixed strategy. We also obtain a bound for the expected meeting time of the corresponding symmetric rendezvous problem.",
"We study the size of memory of mobile agents that permits to solve deterministically the rendezvous problem, i.e., the task of meeting at some node, for two identical agents moving from node to node along the edges of an unknown anonymous connected graph. The rendezvous problem is unsolvable in the class of arbitrary connected graphs, as witnessed by the example of the cycle. Hence we restrict attention to rendezvous in trees, where rendezvous is feasible if and only if the initial positions of the agents are not symmetric. We prove that the minimum memory size guaranteeing rendezvous in all trees of size at most nis i¾?(logn) bits. The upper bound is provided by an algorithm for abstract state machines accomplishing rendezvous in all trees, and using O(logn) bits of memory in trees of size at most n. The lower bound is a consequence of the need to distinguish between up to ni¾? 1 links incident to a node. Thus, in the second part of the paper, we focus on the potential existence of pairs of finiteagents (i.e., finite automata) capable of accomplishing rendezvous in all bounded degreetrees. We show that, as opposed to what has been proved for the graph exploration problem, there are no finite agents capable of accomplishing rendezvous in all bounded degree trees.",
"If two searchers are searching for a stationary target and wish to minimize the expected time until both searchers and the lost target are reunited, there is a trade off between searching for the target and checking back to see if the other searcher has already found the target. This note solves a non-linear optimization problem to find the optimal search strategy for this problem.",
"Suppose that @math players are placed randomly on the real line at consecutive integers, and faced in random directions. Each player has maximum speed one, cannot see the others, and doesn't know his relative position. What is the minimum time @math required to ensure that all the players can meet together at a single point, regardless of their initial placement? We prove that @math , @math , and @math is asymptotic to @math We also consider a variant of the problem which requires players who meet to stick together, and find in this case that three players require @math time units to ensure a meeting. This paper is thus a minimax version of the rendezvous search problem, which has hitherto been studied only in terms of minimizing the expected meeting time.",
"In this paper we study the problem of gathering a collection of identical oblivious mobile robots in the same location of the plane. Previous investigations have focused mostly on the unlimited visibility setting, where each robot can always see all the others regardless of their distance.In the more difficult and realistic setting where the robots have limited visibility, the existing algorithmic results are only for convergence (towards a common point, without ever reaching it) and only for semi-synchronous environments, where robots' movements are assumed to be performed instantaneously.In contrast, we study this problem in a totally asynchronous setting, where robots' actions, computations, and movements require a finite but otherwise unpredictable amount of time. We present a protocol that allows anonymous oblivious robots with limited visibility to gather in the same location in finite time, provided they have orientation (i.e., agreement on a coordinate system).Our result indicates that, with respect to gathering, orientation is at least as powerful as instantaneous movements.",
"A set of k mobile agents with distinct identifiers and located in nodes of an unknown anonymous connected network, have to meet at some node. We show that this gathering problem is no harder than its special case for k=2, called the rendezvous problem, and design deterministic protocols solving the rendezvous problem with arbitrary startups in rings and in general networks. The measure of performance is the number of steps since the startup of the last agent until the rendezvous is achieved. For rings we design an oblivious protocol with cost O([email protected]?), where n is the size of the network and @? is the minimum label of participating agents. This result is asymptotically optimal due to the lower bound showed by [A. Dessmark, P. Fraigniaud, D. Kowalski, A. Pelc, Deterministic rendezvous in graphs, Algorithmica 46 (2006) 69-96]. For general networks we show a protocol with cost polynomial in n and [email protected]?, independent of the maximum difference @t of startup times, which answers in the affirmative the open question by [A. Dessmark, P. Fraigniaud, D. Kowalski, A. Pelc, Deterministic rendezvous in graphs, Algorithmica 46 (2006) 69-96].",
"",
"",
"Leaving marks at the starting points in a rendezvous search problem may provide the players with important information. Many of the standard rendezvous search problems are investigated under this new framework which we call markstart rendezvous search. Somewhat surprisingly, the relative difficulties of analysing problems in the two scenarios differ from problem to problem. Symmetric rendezvous on the line seems to be more tractable in the new setting whereas asymmetric rendezvous on the line when the initial distance is chosen by means of a convex distribution appears easier to analyse in the original setting. Results are also obtained for markstart rendezvous on complete graphs and on the line when the players' initial distance is given by an unknown probability distribution. © 2001 John Wiley & Sons, Inc. Naval Research Logistics 48: 722–731, 2001"
]
}
|
1406.2008
|
2950113508
|
We introduce a variant of the deterministic rendezvous problem for a pair of heterogeneous agents operating in an undirected graph, which differ in the time they require to traverse particular edges of the graph. Each agent knows the complete topology of the graph and the initial positions of both agents. The agent also knows its own traversal times for all of the edges of the graph, but is unaware of the corresponding traversal times for the other agent. The goal of the agents is to meet on an edge or at a node of the graph. In this scenario, we study the time required by the agents to meet, compared to the meeting time @math in the offline scenario in which the agents have complete knowledge about each other's speed characteristics. When no additional assumptions are made, we show that rendezvous in our model can be achieved after time @math in a @math -node graph, and that this time is, in some cases, essentially the best possible. However, we prove that the rendezvous time can be reduced to @math when the agents are allowed to exchange @math bits of information at the start of the rendezvous process. We then show that under some natural assumption about the traversal times of edges, the hardness of the heterogeneous rendezvous problem can be substantially decreased, both in terms of the time required for rendezvous without communication, and the communication complexity of achieving rendezvous in time @math .
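When agents may wait at nodes, the offline benchmark for meeting at a node has a simple characterization: the earliest time both agents can stand on node w is the larger of their individual travel times to w, and the best meeting node minimizes that maximum. The sketch below illustrates this relaxation only; the graph, the names, and the node-meeting/waiting assumptions are ours, not the paper's algorithm.

```python
# Offline node-meeting time for two agents with individual edge traversal
# times, assuming each agent may wait at a node once it arrives.
import heapq

def dijkstra(adj, src):
    # adj: {u: [(v, w), ...]} with this agent's traversal time w per edge
    dist = {u: float("inf") for u in adj}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def node_meeting_time(adj1, adj2, s1, s2):
    d1, d2 = dijkstra(adj1, s1), dijkstra(adj2, s2)
    # earliest time both agents can simultaneously occupy some node
    return min(max(d1[w], d2[w]) for w in adj1)

# Path a-b-c: agent 1 is fast (1 per edge), agent 2 is slow (3 per edge).
adj1 = {"a": [("b", 1)], "b": [("a", 1), ("c", 1)], "c": [("b", 1)]}
adj2 = {"a": [("b", 3)], "b": [("a", 3), ("c", 3)], "c": [("b", 3)]}
print(node_meeting_time(adj1, adj2, "a", "c"))  # 2.0: agent 1 walks to c, agent 2 waits
```

Meeting inside an edge, which the model above also permits, needs more care than this node-only relaxation.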
|
Scenarios with agents having different capabilities have also been studied. In @cite_4 the authors considered multiple colliding robots with different velocities traveling along a ring, with the goal of determining their initial positions and velocities. Mobile agents with different speeds were also studied in the context of patrolling a boundary, see e.g. @cite_18 @cite_27 . In @cite_5 , agents capable of traveling in two different modes with different maximal speeds were considered in the context of searching a line segment. We also note that speed, although very natural, is not the only attribute that can differentiate the agents. For example, the authors in @cite_11 studied robots with different ranges or, in other words, with different battery sizes limiting the distance that a robot can travel.
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_27",
"@cite_5",
"@cite_11"
],
"mid": [
"1728603878",
"1981392728",
"2035206572",
"1833254390",
"1459833043"
],
"abstract": [
"A set of k mobile agents are placed on the boundary of a simply connected planar object represented by a cycle of unit length. Each agent has its own predefined maximal speed, and is capable of moving around this boundary without exceeding its maximal speed. The agents are required to protect the boundary from an intruder which attempts to penetrate to the interior of the object through a point of the boundary, unknown to the agents. The intruder needs some time interval of length τ to accomplish the intrusion. Will the intruder be able to penetrate into the object, or is there an algorithm allowing the agents to move perpetually along the boundary, so that no point of the boundary remains unprotected for a time period τ? Such a problem may be solved by designing an algorithm which defines the motion of agents so as to minimize the idle time I, i.e., the longest time interval during which any fixed boundary point remains unvisited by some agent, with the obvious goal of achieving I < τ. Depending on the type of the environment, this problem is known as either boundary patrolling or fence patrolling in the robotics literature. The most common heuristics adopted in the past include the cyclic strategy, where agents move in one direction around the cycle covering the environment, and the partition strategy, in which the environment is partitioned into sections patrolled separately by individual agents. This paper is, to our knowledge, the first study of the fundamental problem of boundary patrolling by agents with distinct maximal speeds. In this scenario, we give special attention to the performance of the cyclic strategy and the partition strategy. We propose general bounds and methods for analyzing these strategies, obtaining exact results for cases with 2, 3, and 4 agents. 
We show that there are cases when the cyclic strategy is optimal, cases when the partition strategy is optimal and, perhaps more surprisingly, novel, alternative methods have to be used to achieve optimality.",
"We study the localization problem in the ring: a collection of @math n anonymous mobile robots are deployed in a continuous ring of perimeter one. All robots start moving at the same time with arbitrary velocities, starting in clockwise or counterclockwise direction around the ring. The robots bounce against each other according to the principles of conservation of energy and momentum. The task of each robot is to find out, in finite time, the initial position and the initial velocity of every deployed robot. The only way that robots perceive the information about the environment is by colliding with their neighbors; robots have no control of their walks or velocities moreover any type of communication among them is not possible. The configuration of initial positions of robots and their speeds is considered feasible, if there is a finite time, after which every robot starting at this configuration knows initial positions and velocities of all other robots. It was conjectured in (2012) that if the principles of conservation of energy and momentum were assumed and the robots had arbitrary velocities, the localization problem might be solvable. We prove that this conjecture is false. We show that if @math v0,v1,?,vn-1 are the velocities of a given robot configuration @math S, then @math S is feasible if and only if @math vi?v¯ for all @math 0≤i≤n-1, where @math v¯=v0+?+vn-1n. To figure out the initial positions of all robots no more than @math 2min0≤i≤n-1|vi-v¯| time is required.",
"Suppose we want to patrol a fence (line segment) using k mobile agents with speeds v 1, …, v k so that every point on the fence is visited by an agent at least once in every unit time period. conjectured that the maximum length of the fence that can be patrolled is (v 1 + … + v k ) 2, which is achieved by the simple strategy where each agent i moves back and forth in a segment of length v i 2. We disprove this conjecture by a counterexample involving k = 6 agents. We also show that the conjecture is true for k ≤ 3.",
"We introduce and study a new problem concerning the exploration of a geometric domain by mobile robots. Consider a line segment [0,I] and a set of n mobile robots r 1,r 2,…, r n placed at one of its endpoints. Each robot has a searching speed s i and a walking speed w i , where s i < w i . We assume that every robot is aware of the number of robots of the collection and their corresponding speeds.",
"We consider mobile agents of limited energy, which have to collaboratively deliver data from specified sources of a network to a central repository. Every move consumes energy that is proportional to the travelled distance. Thus, every agent is limited in the total distance it can travel. We ask whether there is a schedule of agents’ movements that accomplishes the delivery. We provide hardness results, as well as exact, approximation, and resource-augmented algorithms for several variants of the problem. Among others, we show that the decision problem is NP-hard already for a single source, and we present a 2-approximation algorithm for the problem of finding the minimum energy that can be assigned to each agent such that the agents can deliver the data."
]
}
|
1406.2031
|
2951329458
|
Detecting objects becomes difficult when we need to deal with large shape deformation, occlusion and low resolution. We propose a novel approach to i) handle large deformations and partial occlusions in animals (as examples of highly deformable objects), ii) describe them in terms of body parts, and iii) detect them when their body parts are hard to detect (e.g., animals depicted at low resolution). We represent the holistic object and body parts separately and use a fully connected model to arrange templates for the holistic object and body parts. Our model automatically decouples the holistic object or body parts from the model when they are hard to detect. This enables us to represent a large number of holistic object and body part combinations to better deal with different "detectability" patterns caused by deformations, occlusion and/or low resolution. We apply our method to the six animal categories in the PASCAL VOC dataset and show that our method significantly improves the state of the art (by 4.1 AP) and provides a richer representation for objects. During training we use annotations for body parts (e.g., head, torso, etc.), making use of a new dataset of fully annotated object parts for PASCAL VOC 2010, which provides a mask for each part.
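The decoupling idea above, dropping the root or any body part when its evidence is weak, can be illustrated with a toy scoring rule. This is a hedged sketch: the per-template switch-off bias and all the numbers are invented, not the paper's learned formulation.

```python
# Toy "switchable templates" scorer: each template (root or body part)
# contributes its appearance score only if that beats a fixed switch-off
# bias; otherwise the model decouples it and pays the bias instead.

def detection_score(template_scores, switch_off_bias=-0.5):
    used = {name: s >= switch_off_bias for name, s in template_scores.items()}
    total = sum(s if used[name] else switch_off_bias
                for name, s in template_scores.items())
    return total, used

# Low-resolution animal: part detectors fire weakly, so they are switched
# off and the holistic "root" template carries the detection.
scores = {"root": 1.0, "head": -2.0, "torso": 0.5, "legs": -1.0}
total, used = detection_score(scores)
print(total)  # 1.0 - 0.5 + 0.5 - 0.5 = 0.5
print(sorted(n for n in used if used[n]))  # ['root', 'torso']
```

In the real model the selection would interact with deformation and geometry terms; here each template is switched independently for clarity.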
|
There is a considerable body of work on part-based object detectors. Some of these methods represent an object with a holistic object ("root") template attached to a fixed number of parts @cite_0 @cite_22 @cite_30 . The main disadvantage of these methods is that they are not robust against occlusion. @cite_4 propose a DPM that allows missing parts. Our method is different from theirs as we consider of parts instead of , and our model is more flexible as we can switch off the "root" as well. We also show significantly better results. @cite_12 propose a grammar model to handle a variable number of parts. The difference between our method and theirs is that we consider occlusion of body parts, while they model occlusion for latent parts of the model. In @cite_11 , the authors propose a multi-resolution model to better detect small objects, where the parts are switched off at small scales. In contrast, we do not explicitly incorporate size into the model and let the model choose whether the parts are useful to describe an object or not.
|
{
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_22",
"@cite_0",
"@cite_12",
"@cite_11"
],
"mid": [
"1856540134",
"2122686738",
"2141357020",
"2168356304",
"2153185908",
"1560380655"
],
"abstract": [
"This paper presents a new object representation, Active Mask Hierarchies (AMH), for object detection. In this representation, an object is described using a mixture of hierarchical trees where the nodes represent the object and its parts in pyramid form. To account for shape variations at a range of scales, a dictionary of masks with varied shape patterns are attached to the nodes at different layers. The shape masks are \"active\" in that they enable parts to move with different displacements. The masks in this active hierarchy are associated with histograms of words (HOWs) and oriented gradients (HOGs) to enable rich appearance representation of both structured (eg, cat face) and textured (eg, cat body) image regions. Learning the hierarchical model is a latent SVM problem which can be solved by the incremental concave-convex procedure (iCCCP). The resulting system is comparable with the state-of-the-art methods when evaluated on the challenging public PASCAL 2007 and 2009 datasets.",
"We propose a framework for large scale learning and annotation of structured models. The system interleaves interactive labeling (where the current model is used to semi-automate the labeling of a new example) and online learning (where a newly labeled example is used to update the current model parameters). This framework is scalable to large datasets and complex image models and is shown to have excellent theoretical and practical properties in terms of train time, optimality guarantees, and bounds on the amount of annotation effort per image. We apply this framework to part-based detection, and introduce a novel algorithm for interactive labeling of deformable part models. The labeling tool updates and displays in real-time the maximum likelihood location of all parts as the user clicks and drags the location of one or more parts. We demonstrate that the system can be used to efficiently and robustly train part and pose detectors on the CUB Birds-200-a challenging dataset of birds in unconstrained pose and environment.",
"We present a latent hierarchical structural learning method for object detection. An object is represented by a mixture of hierarchical tree models where the nodes represent object parts. The nodes can move spatially to allow both local and global shape deformations. The models can be trained discriminatively using latent structural SVM learning, where the latent variables are the node positions and the mixture component. But current learning methods are slow, due to the large number of parameters and latent variables, and have been restricted to hierarchies with two layers. In this paper we describe an incremental concave-convex procedure (iCCCP) which allows us to learn both two and three layer models efficiently. We show that iCCCP leads to a simple training algorithm which avoids complex multi-stage layer-wise training, careful part selection, and achieves good performance without requiring elaborate initialization. We perform object detection using our learnt models and obtain performance comparable with state-of-the-art methods when evaluated on challenging public PASCAL datasets. We demonstrate the advantages of three layer hierarchies – outperforming 's two layer models on all 20 classes.",
"We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.",
"Compositional models provide an elegant formalism for representing the visual appearance of highly variable objects. While such models are appealing from a theoretical point of view, it has been difficult to demonstrate that they lead to performance advantages on challenging datasets. Here we develop a grammar model for person detection and show that it outperforms previous high-performance systems on the PASCAL benchmark. Our model represents people using a hierarchy of deformable parts, variable structure and an explicit model of occlusion for partially visible objects. To train the model, we introduce a new discriminative framework for learning structured prediction models from weakly-labeled data.",
"Most current approaches to recognition aim to be scale-invariant. However, the cues available for recognizing a 300 pixel tall object are qualitatively different from those for recognizing a 3 pixel tall object. We argue that for sensors with finite resolution, one should instead use scale-variant, or multiresolution representations that adapt in complexity to the size of a putative detection window. We describe a multiresolution model that acts as a deformable part-based model when scoring large instances and a rigid template with scoring small instances. We also examine the interplay of resolution and context, and demonstrate that context is most helpful for detecting low-resolution instances when local models are limited in discriminative power. We demonstrate impressive results on the Caltech Pedestrian benchmark, which contains object instances at a wide range of scales. Whereas recent state-of-the-art methods demonstrate missed detection rates of 86 -37 at 1 false-positive-per-image, our multiresolution model reduces the rate to 29 ."
]
}
|
1406.2031
|
2951329458
|
Detecting objects becomes difficult when we need to deal with large shape deformation, occlusion and low resolution. We propose a novel approach to i) handle large deformations and partial occlusions in animals (as examples of highly deformable objects), ii) describe them in terms of body parts, and iii) detect them when their body parts are hard to detect (e.g., animals depicted at low resolution). We represent the holistic object and body parts separately and use a fully connected model to arrange templates for the holistic object and body parts. Our model automatically decouples the holistic object or body parts from the model when they are hard to detect. This enables us to represent a large number of holistic object and body part combinations to better deal with different "detectability" patterns caused by deformations, occlusion and/or low resolution. We apply our method to the six animal categories in the PASCAL VOC dataset and show that our method significantly improves the state of the art (by 4.1 AP) and provides a richer representation for objects. During training we use annotations for body parts (e.g., head, torso, etc.), making use of a new dataset of fully annotated object parts for PASCAL VOC 2010, which provides a mask for each part.
|
Recent work on human pose estimation @cite_8 @cite_7 @cite_5 @cite_25 typically does not include a holistic object (root) template in the model, since these methods must cope with highly variable poses. However, detecting the individual parts can be hard when objects are small, so a root model can be helpful; our model therefore adaptively switches the root on or off.
|
{
"cite_N": [
"@cite_5",
"@cite_25",
"@cite_7",
"@cite_8"
],
"mid": [
"2159185518",
"1540144755",
"2135533529",
""
],
"abstract": [
"We address the problem of estimating human pose in a single image using a part based approach. Pose accuracy is directly affected by the accuracy of the part detectors but more accurate detectors are likely to be also more computationally expensive. We propose to use multiple, heterogeneous part detectors with varying accuracy and computation requirements, ordered in a hierarchy, to achieve more accurate and efficient pose estimation. For inference, we propose an algorithm to localize articulated objects by exploiting an ordered hierarchy of detectors with increasing accuracy. The inference uses branch and bound method to search for each part and use kinematics from neighboring parts to guide the branching behavior and compute bounds on the best part estimate. We demonstrate our approach on a publicly available People dataset and outperform the state-of-art methods. Our inference is 3 times faster than one based on using a single, highly accurate detector.",
"We address the problem of articulated human pose estimation by learning a coarse-to-fine cascade of pictorial structure models. While the fine-level state-space of poses of individual parts is too large to permit the use of rich appearance models, most possibilities can be ruled out by efficient structured models at a coarser scale. We propose to learn a sequence of structured models at different pose resolutions, where coarse models filter the pose space for the next level via their max-marginals. The cascade is trained to prune as much as possible while preserving true poses for the final level pictorial structure model. The final level uses much more expensive segmentation, contour and shape features in the model for the remaining filtered set of candidates. We evaluate our framework on the challenging Buffy and PASCAL human pose datasets, improving the state-of-the-art.",
"We investigate the task of 2D articulated human pose estimation in unconstrained still images. This is extremely challenging because of variation in pose, anatomy, clothing, and imaging conditions. Current methods use simple models of body part appearance and plausible configurations due to limitations of available training data and constraints on computational expense. We show that such models severely limit accuracy. Building on the successful pictorial structure model (PSM) we propose richer models of both appearance and pose, using state-of-the-art discriminative classifiers without introducing unacceptable computational expense. We introduce a new annotated database of challenging consumer images, an order of magnitude larger than currently available datasets, and demonstrate over 50 relative improvement in pose estimation accuracy over a stateof-the-art method.",
""
]
}
|
1406.2282
|
2953280859
|
Human pose estimation is a key step to action recognition. We propose a method of estimating 3D human poses from a single image, which works in conjunction with an existing 2D pose joint detector. 3D pose estimation is challenging because multiple 3D poses may correspond to the same 2D pose after projection due to the lack of depth information. Moreover, current 2D pose estimators are usually inaccurate, which may cause errors in the 3D estimation. We address the challenges in three ways: (i) We represent a 3D pose as a linear combination of a sparse set of bases learned from 3D human skeletons. (ii) We enforce limb length constraints to eliminate anthropomorphically implausible skeletons. (iii) We estimate a 3D pose by minimizing the @math -norm error between the projection of the 3D pose and the corresponding 2D detection. The @math -norm loss term is robust to inaccurate 2D joint estimations. We use the alternating direction method (ADM) to solve the optimization problem efficiently. Our approach outperforms the state of the art on three benchmark datasets.
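Step (iii) above, fitting basis coefficients by minimizing a robust projection error, can be sketched numerically. The following is an illustrative toy with made-up bases and dimensions, using plain gradient descent on a Huber-smoothed L1 loss rather than the paper's ADM solver.

```python
# Fit coefficients c so that the projected pose sum_k c_k * B_k matches the
# observed 2D coordinates, under a smoothed L1 (Huber) loss that is robust
# to a few inaccurate joint detections.

def huber_grad(r, delta=0.1):
    # derivative of the Huber loss: linear near 0, sign(r) in the L1 regime
    return r / delta if abs(r) <= delta else (1.0 if r > 0 else -1.0)

def fit(bases_2d, x2d, iters=2000, lr=0.01):
    # bases_2d[k][j]: j-th projected coordinate of basis k
    k, m = len(bases_2d), len(x2d)
    c = [0.0] * k
    for _ in range(iters):
        resid = [sum(c[t] * bases_2d[t][j] for t in range(k)) - x2d[j]
                 for j in range(m)]
        for t in range(k):
            g = sum(huber_grad(resid[j]) * bases_2d[t][j] for j in range(m))
            c[t] -= lr * g
    return c

# Two bases over 4 projected coordinates; target built from c = [0.7, -0.2].
B = [[1.0, 0.0, 1.0, 0.5], [0.0, 1.0, -0.5, 1.0]]
x = [0.7 * B[0][j] - 0.2 * B[1][j] for j in range(4)]
c = fit(B, x)
print(round(c[0], 2), round(c[1], 2))  # ≈ 0.7 -0.2
```

The real method additionally enforces sparsity of c and the limb length constraints of step (ii), which this sketch omits.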
|
Existing work on 3D pose estimation can be classified into four categories according to their inputs to the system, e.g. the image, image features, camera parameters, etc. The first class @cite_8 @cite_0 takes camera parameters as inputs. For example, @cite_8 represent a 3D pose by a skeleton model and parameterize the body parts by truncated cones. They estimate the rotation angles of body parts by minimizing the silhouette discrepancy between the model projections and the input image by applying Markov Chain Monte Carlo (MCMC). Simo-Serra et al. @cite_0 represent a 3D pose by a set of joint locations. They automatically estimate the 2D pose, model each joint by a Gaussian distribution, and propagate the uncertainty to 3D pose space. They sample a set of 3D skeletons from the space and learn an SVM to determine the most feasible one.
|
{
"cite_N": [
"@cite_0",
"@cite_8"
],
"mid": [
"2088196373",
"2140670136"
],
"abstract": [
"Markerless 3D human pose detection from a single image is a severely underconstrained problem because different 3D poses can have similar image projections. In order to handle this ambiguity, current approaches rely on prior shape models that can only be correctly adjusted if 2D image features are accurately detected. Unfortunately, although current 2D part detector algorithms have shown promising results, they are not yet accurate enough to guarantee a complete disambiguation of the 3D inferred shape. In this paper, we introduce a novel approach for estimating 3D human pose even when observations are noisy. We propose a stochastic sampling strategy to propagate the noise from the image plane to the shape space. This provides a set of ambiguous 3D shapes, which are virtually undistinguishable from their image projections. Disambiguation is then achieved by imposing kinematic constraints that guarantee the resulting pose resembles a 3D human shape. We validate the method on a variety of situations in which state-of-the-art 2D detectors yield either inaccurate estimations or partly miss some of the body parts.",
"This paper addresses the problem of estimating human body pose in static images. This problem is challenging due to the high dimensional state space of body poses, the presence of pose ambiguity, and the need to segment the human body in an image. We use an image generative approach by modeling the human kinematics, the shape and the clothing probabilistically. These models are used for deriving a good likelihood measure to evaluate samples in the solution, space. We adopt a data-driven MCMC framework for searching the solution space efficiently. Our observation data include the face, head-shoulders contour, skin color blobs, and ridges; and they provide evidences on the positions of the head, shoulders and limbs. To translate these inferences into pose hypotheses, we introduce the use of 'proposal maps', which is an efficient way of consolidating the evidence and generating 3D pose candidates during the MCMC search. As experimental results show, the proposed technique estimates the human 3D pose accurately on various test images."
]
}
|
1406.2282
|
2953280859
|
Human pose estimation is a key step to action recognition. We propose a method of estimating 3D human poses from a single image, which works in conjunction with an existing 2D pose joint detector. 3D pose estimation is challenging because multiple 3D poses may correspond to the same 2D pose after projection due to the lack of depth information. Moreover, current 2D pose estimators are usually inaccurate which may cause errors in the 3D estimation. We address the challenges in three ways: (i) We represent a 3D pose as a linear combination of a sparse set of bases learned from 3D human skeletons. (ii) We enforce limb length constraints to eliminate anthropomorphically implausible skeletons. (iii) We estimate a 3D pose by minimizing the @math -norm error between the projection of the 3D pose and the corresponding 2D detection. The @math -norm loss term is robust to inaccurate 2D joint estimations. We use the alternating direction method (ADM) to solve the optimization problem efficiently. Our approach outperforms the state-of-the-arts on three benchmark datasets.
|
The second class @cite_3 @cite_20 requires manually labeled 2D joint locations in a video as input. @cite_3 first apply structure from motion to estimate the camera parameters and the 3D pose of the rigid torso, and then require human input to resolve the depth ambiguities for non-torso joints. @cite_20 propose "rigid body constraints", e.g. that the pelvis and the left and right hip joints form a rigid structure, so the distance between any two joints on that structure remains unchanged across time. They estimate the 3D poses by minimizing the discrepancy between the projection of the 3D poses and the 2D joint detections without violating these constraints.
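The "rigid body constraints" above amount to requiring constant pairwise distances among a designated subset of joints across all frames. A hedged sketch of checking that condition (the helper name and toy data are ours, for illustration only):

```python
import numpy as np

def rigid_violation(poses3d, idx):
    """Max deviation of pairwise distances among joints `idx`
    (e.g. pelvis and the two hips) relative to the first frame."""
    P = poses3d[:, idx, :]                                      # F x k x 3
    D = np.linalg.norm(P[:, :, None] - P[:, None, :], axis=-1)  # F x k x k
    return np.abs(D - D[0]).max()

# Two frames of a rigid triangle; the second frame is a pure translation,
# so all pairwise distances are preserved and the violation is ~0.
tri = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
poses = np.stack([tri, tri + np.array([0.3, 0.1, -0.2])])
v = rigid_violation(poses, [0, 1, 2])
```

In an optimization setting, this quantity would be driven to zero (or added as a hard constraint) while fitting the projected 3D joints to the 2D detections.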
|
{
"cite_N": [
"@cite_20",
"@cite_3"
],
"mid": [
"2143482322",
"1491968272"
],
"abstract": [
"This paper introduces an efficient algorithm that reconstructs 3D human poses as well as camera parameters from a small number of 2D point correspondences obtained from uncalibrated monocular images. This problem is challenging because 2D image constraints (e.g. 2D point correspondences) are often not sufficient to determine 3D poses of an articulated object. The key idea of this paper is to identify a set of new constraints and use them to eliminate the ambiguity of 3D pose reconstruction. We also develop an optimization process to simultaneously reconstruct both human poses and camera parameters from various forms of reconstruction constraints. We demonstrate the power and effectiveness of our system by evaluating the performance of the algorithm on both real and synthetic data. We show the algorithm can accurately reconstruct 3D poses and camera parameters from a wide variety of real images, including internet photos and key frames extracted from monocular video sequences.",
"This paper explores a method, first proposed by Wei and Chai [1], for estimating 3D human pose from several frames of uncalibrated 2D point correspondences containing projected body joint locations. In their work Wei and Chai boldly claimed that, through the introduction of rigid constraints to the torso and hip, camera scales, bone lengths and absolute depths could be estimated from a finite number of frames (i.e. ≥ 5). In this paper we show this claim to be false, demonstrating in principle one can never estimate these parameters in a finite number of frames. Further, we demonstrate their approach is only valid for rigid sub-structures of the body (e.g. torso). Based on this analysis we propose a novel approach using deterministic structure from motion based on assumptions of rigidity in the body's torso. Our approach provides notably more accurate estimates and is substantially faster than Wei and Chai's approach, and unlike the original, can be solved as a deterministic least-squares problem."
]
}
|
1406.2282
|
2953280859
|
Human pose estimation is a key step to action recognition. We propose a method of estimating 3D human poses from a single image, which works in conjunction with an existing 2D pose joint detector. 3D pose estimation is challenging because multiple 3D poses may correspond to the same 2D pose after projection due to the lack of depth information. Moreover, current 2D pose estimators are usually inaccurate which may cause errors in the 3D estimation. We address the challenges in three ways: (i) We represent a 3D pose as a linear combination of a sparse set of bases learned from 3D human skeletons. (ii) We enforce limb length constraints to eliminate anthropomorphically implausible skeletons. (iii) We estimate a 3D pose by minimizing the @math -norm error between the projection of the 3D pose and the corresponding 2D detection. The @math -norm loss term is robust to inaccurate 2D joint estimations. We use the alternating direction method (ADM) to solve the optimization problem efficiently. Our approach outperforms the state-of-the-arts on three benchmark datasets.
|
The third class @cite_7 @cite_16 requires manually labeled 2D joints in one image. Taylor @cite_7 assumes that the limb lengths are known and calculates the relative depths of the limbs by considering foreshortening; human input is required to resolve the depth ambiguity at each joint. @cite_16 represent a 3D pose by a linear combination of a set of bases. They split the training data into classes, apply PCA to each class, and combine the principal components as bases. They greedily add the most correlated basis into the model and estimate the coefficients by minimizing an @math -norm error between the projection of the 3D pose and the 2D pose. They enforce a constraint on the sum of the limb lengths, which is only a weak constraint. This work @cite_16 achieves state-of-the-art performance but relies on manually labeled 2D joint locations.
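Taylor's foreshortening argument can be sketched as follows: under scaled orthographic projection with scale s, a limb of known 3D length L whose image segment has length l satisfies dZ^2 = L^2 - (l/s)^2, which determines the relative depth between its endpoints up to a sign (the ambiguity the user resolves). An illustrative implementation under that assumption (names are ours):

```python
import math

def relative_depth(limb_len, image_len, scale=1.0):
    """|dZ| between the two endpoints of a limb under scaled orthography;
    the sign is ambiguous and must be resolved externally (e.g. by a user)."""
    d2 = limb_len ** 2 - (image_len / scale) ** 2
    if d2 < 0:
        raise ValueError("image length exceeds limb length / scale")
    return math.sqrt(d2)

# A 0.5 m limb whose projection measures 0.3 m (scale 1) is foreshortened:
dz = relative_depth(0.5, 0.3)  # ~ 0.4
```

The shorter the projection relative to the limb, the more the limb points toward or away from the camera.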
|
{
"cite_N": [
"@cite_16",
"@cite_7"
],
"mid": [
"2155196764",
"2018854916"
],
"abstract": [
"Reconstructing an arbitrary configuration of 3D points from their projection in an image is an ill-posed problem. When the points hold semantic meaning, such as anatomical landmarks on a body, human observers can often infer a plausible 3D configuration, drawing on extensive visual memory. We present an activity-independent method to recover the 3D configuration of a human figure from 2D locations of anatomical landmarks in a single image, leveraging a large motion capture corpus as a proxy for visual memory. Our method solves for anthropometrically regular body pose and explicitly estimates the camera via a matching pursuit algorithm operating on the image projections. Anthropometric regularity (i.e., that limbs obey known proportions) is a highly informative prior, but directly applying such constraints is intractable. Instead, we enforce a necessary condition on the sum of squared limb-lengths that can be solved for in closed form to discourage implausible configurations in 3D. We evaluate performance on a wide variety of human poses captured from different viewpoints and show generalization to novel 3D configurations and robustness to missing data.",
"This paper investigates the problem of recovering information about the configuration of an articulated object, such as a human figure, from point correspondences in a single image. Unlike previous approaches, the proposed reconstruction method does not assume that the imagery was acquired with a calibrated camera. An analysis is presented which demonstrates that there is a family of solutions to this reconstruction problem parameterized by a single variable. A simple and effective algorithm is proposed for recovering the entire set of solutions by considering the foreshortening of the segments of the model in the image. Results obtained by applying this algorithm to real images are presented."
]
}
|
1406.2282
|
2953280859
|
Human pose estimation is a key step to action recognition. We propose a method of estimating 3D human poses from a single image, which works in conjunction with an existing 2D pose joint detector. 3D pose estimation is challenging because multiple 3D poses may correspond to the same 2D pose after projection due to the lack of depth information. Moreover, current 2D pose estimators are usually inaccurate which may cause errors in the 3D estimation. We address the challenges in three ways: (i) We represent a 3D pose as a linear combination of a sparse set of bases learned from 3D human skeletons. (ii) We enforce limb length constraints to eliminate anthropomorphically implausible skeletons. (iii) We estimate a 3D pose by minimizing the @math -norm error between the projection of the 3D pose and the corresponding 2D detection. The @math -norm loss term is robust to inaccurate 2D joint estimations. We use the alternating direction method (ADM) to solve the optimization problem efficiently. Our approach outperforms the state-of-the-arts on three benchmark datasets.
|
The fourth class @cite_10 @cite_9 requires only a single image or image features (e.g. silhouettes). For example, @cite_10 match a test image to stored exemplars using shape context descriptors and transfer the matched 2D pose to the test image. They lift the 2D pose to 3D using the method proposed in @cite_7 . @cite_9 propose to learn view-based silhouette manifolds and a mapping function from the manifold to 3D poses. These approaches do not explicitly estimate camera parameters, but require a large amount of training data captured from various viewpoints.
|
{
"cite_N": [
"@cite_9",
"@cite_10",
"@cite_7"
],
"mid": [
"2152386463",
"",
"2018854916"
],
"abstract": [
"We aim to infer 3D body pose directly from human silhouettes. Given a visual input (silhouette), the objective is to recover the intrinsic body configuration, recover the viewpoint, reconstruct the input and detect any spatial or temporal outliers. In order to recover intrinsic body configuration (pose) from the visual input (silhouette), we explicitly learn view-based representations of activity manifolds as well as learn mapping functions between such central representations and both the visual input space and the 3D body pose space. The body pose can be recovered in a closed form in two steps by projecting the visual input to the learned representations of the activity manifold, i.e., finding the point on the learned manifold representation corresponding to the visual input, followed by interpolating 3D pose.",
"",
"This paper investigates the problem of recovering information about the configuration of an articulated object, such as a human figure, from point correspondences in a single image. Unlike previous approaches, the proposed reconstruction method does not assume that the imagery was acquired with a calibrated camera. An analysis is presented which demonstrates that there is a family of solutions to this reconstruction problem parameterized by a single variable. A simple and effective algorithm is proposed for recovering the entire set of solutions by considering the foreshortening of the segments of the model in the image. Results obtained by applying this algorithm to real images are presented."
]
}
|
1406.2199
|
2952186347
|
We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.
|
Video recognition research has been largely driven by the advances in image recognition methods, which were often adapted and extended to deal with video data. A large family of video action recognition methods is based on shallow high-dimensional encodings of local spatio-temporal features. For instance, the algorithm of @cite_19 consists in detecting sparse spatio-temporal interest points, which are then described using local spatio-temporal features: Histogram of Oriented Gradients (HOG) @cite_20 and Histogram of Optical Flow (HOF). The features are then encoded into the Bag of Features (BoF) representation, which is pooled over several spatio-temporal grids (similarly to spatial pyramid pooling) and combined with an SVM classifier. In a later work @cite_14 , it was shown that dense sampling of local features outperforms sparse interest points.
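The quantisation-and-pooling step of the BoF pipeline described above can be sketched as follows: each local descriptor is assigned to its nearest visual word and the assignments are pooled into a normalised histogram, which then feeds an SVM. A simplified illustration (a random codebook stands in for k-means-trained visual words):

```python
import numpy as np

def bof_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and
    return an L1-normalised occurrence histogram."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                    # nearest-word index per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
codebook = rng.normal(size=(4, 8))   # 4 visual words, 8-D descriptors
descs = rng.normal(size=(50, 8))     # e.g. HOG/HOF patches from one video
h = bof_histogram(descs, codebook)
```

Pooling over several spatio-temporal grids, as in @cite_19 , simply repeats this histogramming within each grid cell and concatenates the results.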
|
{
"cite_N": [
"@cite_19",
"@cite_14",
"@cite_20"
],
"mid": [
"2142194269",
"1993229407",
""
],
"abstract": [
"The aim of this paper is to address recognition of natural human actions in diverse and realistic video settings. This challenging but important subject has mostly been ignored in the past due to several problems one of which is the lack of realistic and annotated video datasets. Our first contribution is to address this limitation and to investigate the use of movie scripts for automatic annotation of human actions in videos. We evaluate alternative methods for action retrieval from scripts and show benefits of a text-based classifier. Using the retrieved action samples for visual learning, we next turn to the problem of action classification in video. We present a new method for video classification that builds upon and extends several recent ideas including local space-time features, space-time pyramids and multi-channel non-linear SVMs. The method is shown to improve state-of-the-art results on the standard KTH action dataset by achieving 91.8 accuracy. Given the inherent problem of noisy labels in automatic annotation, we particularly investigate and show high tolerance of our method to annotation errors in the training set. We finally apply the method to learning and classifying challenging action classes in movies and show promising results.",
"Local space-time features have recently become a popular video representation for action recognition. Several methods for feature localization and description have been proposed in the literature and promising recognition results were demonstrated for a number of action classes. The comparison of existing methods, however, is often limited given the different experimental settings used. The purpose of this paper is to evaluate and compare previously proposed space-time features in a common experimental setup. In particular, we consider four different feature detectors and six local feature descriptors and use a standard bag-of-features SVM approach for action recognition. We investigate the performance of these methods on a total of 25 action classes distributed over three datasets with varying difficulty. Among interesting conclusions, we demonstrate that regular sampling of space-time features consistently outperforms all tested space-time interest point detectors for human actions in realistic settings. We also demonstrate a consistent ranking for the majority of methods over different datasets and discuss their advantages and limitations.",
""
]
}
|
1406.2199
|
2952186347
|
We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.
|
Instead of computing local video features over spatio-temporal cuboids, state-of-the-art shallow video representations @cite_7 @cite_22 @cite_21 make use of dense point trajectories. The approach, first introduced in @cite_23 , consists in adjusting local descriptor support regions, so that they follow dense trajectories, computed using optical flow. The best performance in the trajectory-based pipeline was achieved by the Motion Boundary Histogram (MBH) @cite_5 , which is a gradient-based feature, separately computed on the horizontal and vertical components of optical flow. A combination of several features was shown to further boost the accuracy. Recent improvements of trajectory-based hand-crafted representations include compensation of global (camera) motion @cite_12 @cite_18 @cite_7 , and the use of the Fisher vector encoding @cite_13 (in @cite_7 ) or its deeper variant @cite_17 (in @cite_21 ).
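MBH, as described above, is a gradient-orientation histogram computed on each optical-flow component separately, which is why it suppresses constant (camera-induced) flow: a uniform translation has zero spatial gradient. A simplified single-component sketch (no cell grid or trajectory alignment; parameter choices are illustrative):

```python
import numpy as np

def mbh(flow_component, nbins=8):
    """Orientation histogram of the spatial gradient of ONE optical-flow
    component (MBH is computed separately for the x and y flow components)."""
    gy, gx = np.gradient(flow_component)          # spatial gradient of the flow
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)
    bins = (ang / (2 * np.pi) * nbins).astype(int) % nbins
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=nbins)
    return hist / (hist.sum() + 1e-12)

# Synthetic horizontal-flow field that grows linearly down the image:
flow_x = np.add.outer(np.linspace(0, 1, 16), np.zeros(16))
h = mbh(flow_x)
```

For this linear ramp every pixel's flow gradient points the same way, so the histogram mass concentrates in a single orientation bin.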
|
{
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_7",
"@cite_21",
"@cite_23",
"@cite_5",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"2126579184",
"2951552696",
"2105101328",
"",
"2126574503",
"",
"1606858007",
"1996904744",
""
],
"abstract": [
"With nearly one billion online videos viewed everyday, an emerging new frontier in computer vision research is recognition and search in video. While much effort has been devoted to the collection and annotation of large scalable static image datasets containing thousands of image categories, human action datasets lag far behind. Current action recognition databases contain on the order of ten different action categories collected under fairly controlled conditions. State-of-the-art performance on these datasets is now near ceiling and thus there is a need for the design and creation of new benchmarks. To address this issue we collected the largest action video database to-date with 51 action categories, which in total contain around 7,000 manually annotated clips extracted from a variety of sources ranging from digitized movies to YouTube. We use this database to evaluate the performance of two representative computer vision systems for action recognition and explore the robustness of these methods under various conditions such as camera motion, viewpoint, video quality and occlusion.",
"Video based action recognition is one of the important and challenging problems in computer vision research. Bag of Visual Words model (BoVW) with local features has become the most popular method and obtained the state-of-the-art performance on several realistic datasets, such as the HMDB51, UCF50, and UCF101. BoVW is a general pipeline to construct a global representation from a set of local features, which is mainly composed of five steps: (i) feature extraction, (ii) feature pre-processing, (iii) codebook generation, (iv) feature encoding, and (v) pooling and normalization. Many efforts have been made in each step independently in different scenarios and their effect on action recognition is still unknown. Meanwhile, video data exhibits different views of visual pattern, such as static appearance and motion dynamics. Multiple descriptors are usually extracted to represent these different views. Many feature fusion methods have been developed in other areas and their influence on action recognition has never been investigated before. This paper aims to provide a comprehensive study of all steps in BoVW and different fusion methods, and uncover some good practice to produce a state-of-the-art action recognition system. Specifically, we explore two kinds of local features, ten kinds of encoding methods, eight kinds of pooling and normalization strategies, and three kinds of fusion methods. We conclude that every step is crucial for contributing to the final recognition rate. Furthermore, based on our comprehensive study, we propose a simple yet effective representation, called hybrid representation, by exploring the complementarity of different BoVW frameworks and local descriptors. Using this representation, we obtain the state-of-the-art on the three challenging datasets: HMDB51 (61.1 ), UCF50 (92.3 ), and UCF101 (87.9 ).",
"Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.",
"",
"Feature trajectories have shown to be efficient for representing videos. Typically, they are extracted using the KLT tracker or matching SIFT descriptors between frames. However, the quality as well as quantity of these trajectories is often not sufficient. Inspired by the recent success of dense sampling in image classification, we propose an approach to describe videos by dense trajectories. We sample dense points from each frame and track them based on displacement information from a dense optical flow field. Given a state-of-the-art optical flow algorithm, our trajectories are robust to fast irregular motions as well as shot boundaries. Additionally, dense trajectories cover the motion information in videos well. We, also, investigate how to design descriptors to encode the trajectory information. We introduce a novel descriptor based on motion boundary histograms, which is robust to camera motion. This descriptor consistently outperforms other state-of-the-art descriptors, in particular in uncontrolled realistic videos. We evaluate our video description in the context of action classification with a bag-of-features approach. Experimental results show a significant improvement over the state of the art on four datasets of varying difficulty, i.e. KTH, YouTube, Hollywood2 and UCF sports.",
"",
"The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9 to 58.3 . Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets.",
"Several recent works on action recognition have attested the importance of explicitly integrating motion characteristics in the video description. This paper establishes that adequately decomposing visual motion into dominant and residual motions, both in the extraction of the space-time trajectories and for the computation of descriptors, significantly improves action recognition algorithms. Then, we design a new motion descriptor, the DCS descriptor, based on differential motion scalar quantities, divergence, curl and shear features. It captures additional information on the local motion patterns enhancing results. Finally, applying the recent VLAD coding technique proposed in image retrieval provides a substantial improvement for action recognition. Our three contributions are complementary and lead to outperform all reported results by a significant margin on three challenging datasets, namely Hollywood 2, HMDB51 and Olympic Sports.",
""
]
}
|
1406.2139
|
2045057203
|
Representing videos by densely extracted local space-time features has recently become a popular approach for analysing actions. In this study, the authors tackle the problem of categorising human actions by devising bag of words (BoWs) models based on covariance matrices of spatiotemporal features, with the features formed from histograms of optical flow. Since covariance matrices form a special type of Riemannian manifold, the space of symmetric positive definite (SPD) matrices, non-Euclidean geometry should be taken into account while discriminating between covariance matrices. To this end, the authors propose to embed SPD manifolds to Euclidean spaces via a diffeomorphism and extend the BoW approach to its Riemannian version. The proposed BoW approach takes into account the manifold geometry of SPD matrices during the generation of the codebook and histograms. Experiments on challenging human action datasets show that the proposed method obtains notable improvements in discrimination accuracy, in comparison with several state-of-the-art methods.
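The key step in the abstract above, embedding SPD covariance matrices into a Euclidean space via a diffeomorphism, is commonly realised with the matrix logarithm (the log-Euclidean mapping). A sketch under that assumption (function names and the regulariser are ours):

```python
import numpy as np

def cov_descriptor(features):
    """Covariance of per-point feature vectors (rows = observations),
    regularised to stay safely inside the SPD cone."""
    return np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])

def log_euclidean(C):
    """Matrix logarithm of an SPD matrix via eigendecomposition -- a
    diffeomorphism that flattens the SPD manifold into a vector space."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T   # V diag(log w) V^T

feats = np.random.default_rng(1).normal(size=(200, 5))
L = log_euclidean(cov_descriptor(feats))
```

After this mapping, ordinary Euclidean distances between (vectorised) log-matrices respect the manifold geometry well enough to build codebooks and histograms as in standard BoW.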
|
Human action recognition has been addressed extensively in the computer vision community from various perspectives. Some methods rely on global descriptors; two examples are the methods proposed by Ali and Shah @cite_44 and Razzaghi et al. @cite_47 . In @cite_44 , a set of optical-flow-based kinematic features is extracted, and kinematic models are computed by applying principal component analysis to the volumes of kinematic features. Razzaghi et al. @cite_47 represent human motion by a spatio-temporal volume and propose a new affine-invariant descriptor based on a function of spherical harmonics. A downside of global representations is their reliance on localisation of the region of interest, and hence they are sensitive to viewpoint change, noise, and occlusion @cite_49 .
|
{
"cite_N": [
"@cite_44",
"@cite_47",
"@cite_49"
],
"mid": [
"2172207578",
"1979158214",
"1993014362"
],
"abstract": [
"We propose a set of kinematic features that are derived from the optical flow for human action recognition in videos. The set of kinematic features includes divergence, vorticity, symmetric and antisymmetric flow fields, second and third principal invariants of flow gradient and rate of strain tensor, and third principal invariant of rate of rotation tensor. Each kinematic feature, when computed from the optical flow of a sequence of images, gives rise to a spatiotemporal pattern. It is then assumed that the representative dynamics of the optical flow are captured by these spatiotemporal patterns in the form of dominant kinematic trends or kinematic modes. These kinematic modes are computed by performing principal component analysis (PCA) on the spatiotemporal volumes of the kinematic features. For classification, we propose the use of multiple instance learning (MIL) in which each action video is represented by a bag of kinematic modes. Each video is then embedded into a kinematic-mode-based feature space and the coordinates of the video in that space are used for classification using the nearest neighbor algorithm. The qualitative and quantitative results are reported on the benchmark data sets.",
"The aim of this paper is to introduce a new descriptor for the spatio-temporal volume (STV). Human motion is completely represented by STV (action volume) which is constructed over successive frames by stacking human silhouettes in consecutive frames. Action volume comprehensively contains spatial and temporal information about an action. The main contribution of this paper is to propose a new affine invariant action volume descriptor based on a function of spherical harmonic coefficients. This means, it is invariant under rotation, non-uniform scaling and translation. In the 3D shape analysis literature, there have been a few attempts to use coefficients of spherical harmonics to describe a 3D shape. However, those descriptors are not affine invariant and they are only rotation invariant. In addition, the proposed approach employs a parametric form of spherical harmonics that handles genus zero surfaces regardless of whether they are stellar or not. Another contribution of this paper is the way that action volume is constructed. We applied the proposed descriptor to the KTH, Weizmann, IXMAS and Robust datasets and compared the performance of our algorithm to competing methods available in the literature. The results of our experiments show that our method has a comparable performance to the most successful and recent existing algorithms.",
"We propose a new action and gesture recognition method based on spatio-temporal covariance descriptors and a weighted Riemannian locality preserving projection approach that takes into account the curved space formed by the descriptors. The weighted projection is then exploited during boosting to create a final multiclass classification algorithm that employs the most useful spatio-temporal regions. We also show how the descriptors can be computed quickly through the use of integral video representations. Experiments on the UCF sport, CK+ facial expression and Cambridge hand gesture datasets indicate superior performance of the proposed method compared to several recent state-of-the-art techniques. The proposed method is robust and does not require additional processing of the videos, such as foreground detection, interest-point detection or tracking."
]
}
|
1406.2139
|
2045057203
|
Representing videos by densely extracted local space-time features has recently become a popular approach for analysing actions. In this study, the authors tackle the problem of categorising human actions by devising bag of words (BoWs) models based on covariance matrices of spatiotemporal features, with the features formed from histograms of optical flow. Since covariance matrices form a special type of Riemannian manifold, the space of symmetric positive definite (SPD) matrices, non-Euclidean geometry should be taken into account while discriminating between covariance matrices. To this end, the authors propose to embed SPD manifolds to Euclidean spaces via a diffeomorphism and extend the BoW approach to its Riemannian version. The proposed BoW approach takes into account the manifold geometry of SPD matrices during the generation of the codebook and histograms. Experiments on challenging human action datasets show that the proposed method obtains notable improvements in discrimination accuracy, in comparison with several state-of-the-art methods.
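The diffeomorphism used above to embed SPD matrices into a Euclidean space can be sketched with the standard log-Euclidean map (matrix logarithm via eigendecomposition). This is a minimal illustration of that family of embeddings, not the authors' exact pipeline; the function name and the upper-triangle vectorisation are assumptions.

```python
import numpy as np

def spd_log_embedding(cov):
    """Map an SPD matrix to Euclidean space via the matrix logarithm
    (log-Euclidean diffeomorphism), returning a flat vector."""
    cov = (cov + cov.T) / 2.0                 # guard against numerical asymmetry
    eigval, eigvec = np.linalg.eigh(cov)
    log_cov = eigvec @ np.diag(np.log(eigval)) @ eigvec.T
    iu = np.triu_indices(cov.shape[0])        # upper triangle suffices: log_cov is symmetric
    return log_cov[iu]

# Example: covariance descriptor of random feature vectors.
rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 4))
cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(4)  # small ridge keeps it SPD
vec = spd_log_embedding(cov)
```

After this mapping, ordinary Euclidean distances and codebook clustering can be applied to the vectors, which is the point of taking the manifold geometry into account before building the BoW model.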
|
To address the abovementioned issues, videos of actions can also be represented through sets of local features, either in a sparse @cite_24 @cite_8 or dense @cite_12 @cite_42 manner. Sparse feature detectors (also referred to as interest point detectors) abstract video information by maximising saliency functions at every point in order to extract salient spatio-temporal patches. Examples are Harris3D @cite_23 and Cuboid @cite_8 detectors. Laptev and Lindeberg @cite_23 extract interest points at multiple scales using a 3D Harris corner detector and subsequently process the extracted points for modelling actions. The Cuboid detector proposed by Dollar et al. @cite_8 extracts salient points based on temporal Gabor filters. It is especially designed to extract space-time points with local periodic motions.
|
{
"cite_N": [
"@cite_8",
"@cite_42",
"@cite_24",
"@cite_23",
"@cite_12"
],
"mid": [
"2533739470",
"2068611653",
"2034328688",
"2020163092",
"1993229407"
],
"abstract": [
"A common trend in object recognition is to detect and leverage the use of sparse, informative feature points. The use of such features makes the problem more manageable while providing increased robustness to noise and pose variation. In this work we develop an extension of these ideas to the spatio-temporal case. For this purpose, we show that the direct 3D counterparts to commonly used 2D interest point detectors are inadequate, and we propose an alternative. Anchoring off of these interest points, we devise a recognition algorithm based on spatio-temporally windowed data. We present recognition results on a variety of datasets including both human and rodent behavior.",
"This paper introduces a video representation based on dense trajectories and motion boundary descriptors. Trajectories capture the local motion information of the video. A dense representation guarantees a good coverage of foreground motion as well as of the surrounding context. A state-of-the-art optical flow algorithm enables a robust and efficient extraction of dense trajectories. As descriptors we extract features aligned with the trajectories to characterize shape (point coordinates), appearance (histograms of oriented gradients) and motion (histograms of optical flow). Additionally, we introduce a descriptor based on motion boundary histograms (MBH) which rely on differential optical flow. The MBH descriptor shows to consistently outperform other state-of-the-art descriptors, in particular on real-world videos that contain a significant amount of camera motion. We evaluate our video representation in the context of action classification on nine datasets, namely KTH, YouTube, Hollywood2, UCF sports, IXMAS, UIUC, Olympic Sports, UCF50 and HMDB51. On all datasets our approach outperforms current state-of-the-art results.",
"Local space-time features capture local events in video and can be adapted to the size, the frequency and the velocity of moving patterns. In this paper, we demonstrate how such features can be used for recognizing complex motion patterns. We construct video representations in terms of local space-time features and integrate such representations with SVM classification schemes for recognition. For the purpose of evaluation we introduce a new video database containing 2391 sequences of six human actions performed by 25 people in four different scenarios. The presented results of action recognition justify the proposed method and demonstrate its advantage compared to other relative approaches for action recognition.",
"Local image features or interest points provide compact and abstract representations of patterns in an image. In this paper, we extend the notion of spatial interest points into the spatio-temporal domain and show how the resulting features often reflect interesting events that can be used for a compact representation of video data as well as for interpretation of spatio-temporal events. To detect spatio-temporal events, we build on the idea of the Harris and Forstner interest point operators and detect local structures in space-time where the image values have significant local variations in both space and time. We estimate the spatio-temporal extents of the detected events by maximizing a normalized spatio-temporal Laplacian operator over spatial and temporal scales. To represent the detected events, we then compute local, spatio-temporal, scale-invariant N-jets and classify each event with respect to its jet descriptor. For the problem of human motion analysis, we illustrate how a video representation in terms of local space-time features allows for detection of walking people in scenes with occlusions and dynamic cluttered backgrounds.",
"Local space-time features have recently become a popular video representation for action recognition. Several methods for feature localization and description have been proposed in the literature and promising recognition results were demonstrated for a number of action classes. The comparison of existing methods, however, is often limited given the different experimental settings used. The purpose of this paper is to evaluate and compare previously proposed space-time features in a common experimental setup. In particular, we consider four different feature detectors and six local feature descriptors and use a standard bag-of-features SVM approach for action recognition. We investigate the performance of these methods on a total of 25 action classes distributed over three datasets with varying difficulty. Among interesting conclusions, we demonstrate that regular sampling of space-time features consistently outperforms all tested space-time interest point detectors for human actions in realistic settings. We also demonstrate a consistent ranking for the majority of methods over different datasets and discuss their advantages and limitations."
]
}
|
1406.2139
|
2045057203
|
Representing videos by densely extracted local space-time features has recently become a popular approach for analysing actions. In this study, the authors tackle the problem of categorising human actions by devising bag of words (BoWs) models based on covariance matrices of spatiotemporal features, with the features formed from histograms of optical flow. Since covariance matrices form a special type of Riemannian manifold, the space of symmetric positive definite (SPD) matrices, non-Euclidean geometry should be taken into account while discriminating between covariance matrices. To this end, the authors propose to embed SPD manifolds to Euclidean spaces via a diffeomorphism and extend the BoW approach to its Riemannian version. The proposed BoW approach takes into account the manifold geometry of SPD matrices during the generation of the codebook and histograms. Experiments on challenging human action datasets show that the proposed method obtains notable improvements in discrimination accuracy, in comparison with several state-of-the-art methods.
|
Wang et al. @cite_12 demonstrate that dense sampling approaches consistently outperform space-time interest point based methods for human action categorisation. A dense sampling at regular positions in space and time guarantees good coverage of foreground motions as well as of surrounding context. To characterise local patterns (motion, appearance, or shape), the descriptors divide small 3D volumes into a grid of @math cells and for each cell the related information is accumulated. Examples are HOG and HOF @cite_34 , HOG3D @cite_33 , and 3D SIFT @cite_4 .
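The grid-of-cells accumulation behind descriptors such as HOF can be sketched as follows. The cell layout, bin count, and L2 normalisation here are illustrative assumptions, not the exact settings of the cited descriptors.

```python
import numpy as np

def hof_descriptor(flow, cells=(2, 2, 2), bins=8):
    """Histogram-of-optical-flow descriptor for a small 3D volume.

    flow: array of shape (T, H, W, 2) holding (dx, dy) per pixel.
    The volume is split into a cells[0] x cells[1] x cells[2] grid;
    per cell, an orientation histogram weighted by flow magnitude is
    accumulated, then all histograms are concatenated and normalised.
    """
    T, H, W, _ = flow.shape
    mag = np.linalg.norm(flow, axis=-1)
    ang = np.arctan2(flow[..., 1], flow[..., 0])          # in [-pi, pi)
    bin_idx = ((ang + np.pi) / (2 * np.pi) * bins).astype(int) % bins
    desc = []
    for t in np.array_split(np.arange(T), cells[0]):
        for y in np.array_split(np.arange(H), cells[1]):
            for x in np.array_split(np.arange(W), cells[2]):
                m = mag[np.ix_(t, y, x)]
                b = bin_idx[np.ix_(t, y, x)]
                desc.append(np.bincount(b.ravel(), weights=m.ravel(),
                                        minlength=bins))
    desc = np.concatenate(desc)
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc
```

A uniform rightward flow, for instance, places all of its mass in a single orientation bin of every cell, which makes the descriptor easy to sanity-check.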
|
{
"cite_N": [
"@cite_34",
"@cite_4",
"@cite_33",
"@cite_12"
],
"mid": [
"2142194269",
"2108333036",
"2024868105",
"1993229407"
],
"abstract": [
"The aim of this paper is to address recognition of natural human actions in diverse and realistic video settings. This challenging but important subject has mostly been ignored in the past due to several problems one of which is the lack of realistic and annotated video datasets. Our first contribution is to address this limitation and to investigate the use of movie scripts for automatic annotation of human actions in videos. We evaluate alternative methods for action retrieval from scripts and show benefits of a text-based classifier. Using the retrieved action samples for visual learning, we next turn to the problem of action classification in video. We present a new method for video classification that builds upon and extends several recent ideas including local space-time features, space-time pyramids and multi-channel non-linear SVMs. The method is shown to improve state-of-the-art results on the standard KTH action dataset by achieving 91.8 accuracy. Given the inherent problem of noisy labels in automatic annotation, we particularly investigate and show high tolerance of our method to annotation errors in the training set. We finally apply the method to learning and classifying challenging action classes in movies and show promising results.",
"In this paper we introduce a 3-dimensional (3D) SIFT descriptor for video or 3D imagery such as MRI data. We also show how this new descriptor is able to better represent the 3D nature of video data in the application of action recognition. This paper will show how 3D SIFT is able to outperform previously used description methods in an elegant and efficient manner. We use a bag of words approach to represent videos, and present a method to discover relationships between spatio-temporal words in order to better describe the video data.",
"In this work, we present a novel local descriptor for video sequences. The proposed descriptor is based on histograms of oriented 3D spatio-temporal gradients. Our contribution is four-fold. (i) To compute 3D gradients for arbitrary scales, we develop a memory-efficient algorithm based on integral videos. (ii) We propose a generic 3D orientation quantization which is based on regular polyhedrons. (iii) We perform an in-depth evaluation of all descriptor parameters and optimize them for action recognition. (iv) We apply our descriptor to various action datasets (KTH, Weizmann, Hollywood) and show that we outperform the state-of-the-art.",
"Local space-time features have recently become a popular video representation for action recognition. Several methods for feature localization and description have been proposed in the literature and promising recognition results were demonstrated for a number of action classes. The comparison of existing methods, however, is often limited given the different experimental settings used. The purpose of this paper is to evaluate and compare previously proposed space-time features in a common experimental setup. In particular, we consider four different feature detectors and six local feature descriptors and use a standard bag-of-features SVM approach for action recognition. We investigate the performance of these methods on a total of 25 action classes distributed over three datasets with varying difficulty. Among interesting conclusions, we demonstrate that regular sampling of space-time features consistently outperforms all tested space-time interest point detectors for human actions in realistic settings. We also demonstrate a consistent ranking for the majority of methods over different datasets and discuss their advantages and limitations."
]
}
|
1406.2139
|
2045057203
|
Representing videos by densely extracted local space-time features has recently become a popular approach for analysing actions. In this study, the authors tackle the problem of categorising human actions by devising bag of words (BoWs) models based on covariance matrices of spatiotemporal features, with the features formed from histograms of optical flow. Since covariance matrices form a special type of Riemannian manifold, the space of symmetric positive definite (SPD) matrices, non-Euclidean geometry should be taken into account while discriminating between covariance matrices. To this end, the authors propose to embed SPD manifolds to Euclidean spaces via a diffeomorphism and extend the BoW approach to its Riemannian version. The proposed BoW approach takes into account the manifold geometry of SPD matrices during the generation of the codebook and histograms. Experiments on challenging human action datasets show that the proposed method obtains notable improvements in discrimination accuracy, in comparison with several state-of-the-art methods.
|
An alternative line of research proposes to track given spatial points over time and capture related information. Messing et al. @cite_37 track Harris3D @cite_23 interest points with a KLT tracker @cite_26 and extract velocity history information. To improve performance, other useful features such as appearance and location are taken into account in a generative mixture model. Recently, Wang et al. @cite_42 show promising results by tracking densely sampled points and extracting aligned shape, appearance, and motion features. They also introduce Motion Boundary Histograms (MBH) based on differential optical flow.
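Trajectory formation by following dense flow fields can be sketched in a toy form. This sketch samples the flow at the nearest pixel and assumes the flow fields are given; real dense-trajectory implementations additionally median-filter the flow and prune static or erratic tracks.

```python
import numpy as np

def track_points(flows, points):
    """Follow points through a sequence of dense flow fields.

    flows: array (T, H, W, 2) of per-frame displacements (dx, dy).
    points: array-like (N, 2) of initial (x, y) positions.
    Returns trajectories of shape (T + 1, N, 2).
    """
    T, H, W, _ = flows.shape
    traj = [np.asarray(points, dtype=float)]
    for t in range(T):
        p = traj[-1]
        # Sample the flow at the nearest pixel to each point.
        xi = np.clip(np.round(p[:, 0]).astype(int), 0, W - 1)
        yi = np.clip(np.round(p[:, 1]).astype(int), 0, H - 1)
        traj.append(p + flows[t, yi, xi])
    return np.stack(traj)
```

Descriptors (shape, HOG, HOF, MBH) are then computed in volumes aligned with each trajectory rather than at fixed grid positions.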
|
{
"cite_N": [
"@cite_37",
"@cite_26",
"@cite_42",
"@cite_23"
],
"mid": [
"2166294429",
"2118877769",
"2068611653",
"2020163092"
],
"abstract": [
"We present an activity recognition feature inspired by human psychophysical performance. This feature is based on the velocity history of tracked keypoints. We present a generative mixture model for video sequences using this feature, and show that it performs comparably to local spatio-temporal features on the KTH activity recognition dataset. In addition, we contribute a new activity recognition dataset, focusing on activities of daily living, with high resolution video sequences of complex actions. We demonstrate the superiority of our velocity history feature on high resolution video sequences of complicated activities. Further, we show how the velocity history feature can be extended, both with a more sophisticated latent velocity model, and by combining the velocity history feature with other useful information, like appearance, position, and high level semantic information. Our approach performs comparably to established and state of the art methods on the KTH dataset, and significantly outperforms all other methods on our challenging new dataset.",
"Image registration finds a variety of applications in computer vision. Unfortunately, traditional image registration techniques tend to be costly. We present a new image registration technique that makes use of the spatial intensity gradient of the images to find a good match using a type of Newton-Raphson iteration. Our technique is faster because it examines far fewer potential matches between the images than existing techniques. Furthermore, this registration technique can be generalized to handle rotation, scaling and shearing. We show how our technique can be adapted for use in a stereo vision system.",
"This paper introduces a video representation based on dense trajectories and motion boundary descriptors. Trajectories capture the local motion information of the video. A dense representation guarantees a good coverage of foreground motion as well as of the surrounding context. A state-of-the-art optical flow algorithm enables a robust and efficient extraction of dense trajectories. As descriptors we extract features aligned with the trajectories to characterize shape (point coordinates), appearance (histograms of oriented gradients) and motion (histograms of optical flow). Additionally, we introduce a descriptor based on motion boundary histograms (MBH) which rely on differential optical flow. The MBH descriptor shows to consistently outperform other state-of-the-art descriptors, in particular on real-world videos that contain a significant amount of camera motion. We evaluate our video representation in the context of action classification on nine datasets, namely KTH, YouTube, Hollywood2, UCF sports, IXMAS, UIUC, Olympic Sports, UCF50 and HMDB51. On all datasets our approach outperforms current state-of-the-art results.",
"Local image features or interest points provide compact and abstract representations of patterns in an image. In this paper, we extend the notion of spatial interest points into the spatio-temporal domain and show how the resulting features often reflect interesting events that can be used for a compact representation of video data as well as for interpretation of spatio-temporal events. To detect spatio-temporal events, we build on the idea of the Harris and Forstner interest point operators and detect local structures in space-time where the image values have significant local variations in both space and time. We estimate the spatio-temporal extents of the detected events by maximizing a normalized spatio-temporal Laplacian operator over spatial and temporal scales. To represent the detected events, we then compute local, spatio-temporal, scale-invariant N-jets and classify each event with respect to its jet descriptor. For the problem of human motion analysis, we illustrate how a video representation in terms of local space-time features allows for detection of walking people in scenes with occlusions and dynamic cluttered backgrounds."
]
}
|
1406.1532
|
2128927665
|
Bobbin lace is a fibre art form in which intricate and delicate patterns are created by braiding together many threads. We present an overview of the making of bobbin lace, illustrated with a simple, traditional bookmark design. We briefly summarize research on the topology of textiles and braid theory, which form a base for the current work. We then define a new mathematical model that supports the enumeration and generation of bobbin lace patterns using an intelligent combinatorial search, and which is capable of producing patterns that have never been seen before. Finally, we apply our new patterns to an original bookmark design and propose future areas for exploration.
|
The application of mathematical modeling to fibre arts is a fairly new area of research, with most of the focus on weaving and knitting. In the introduction to @cite_10 , Belcastro and Yackel give a nice overview of its history. As far as we are aware, there have not been any publications specific to the mathematical exploration of bobbin lace. However, more general work on the topology of threads in textiles @cite_37 is directly applicable. Similarly, Artin's theory of braids @cite_22 , while only remotely inspired by textiles and lace, gives a precise way of describing an alternating braid which is key to the structure of bobbin lace. We must also acknowledge the work of modern lacemakers who have systematically explored ways to create new lace grounds.
|
{
"cite_N": [
"@cite_37",
"@cite_10",
"@cite_22"
],
"mid": [
"2161847190",
"",
"2410271282"
],
"abstract": [
"This paper proposes a new systematic approach for the description and classification of textile structures based on topological principles. It is shown that textile structures can be considered as a specific case of knots or links and can be represented by diagrams on a torus. This enables modern methods of knot theory to be applied to the study of the topology of textiles. The basics of knot theory are briefly introduced. Some specific matters relating to the application of these methods to textiles are discussed, including enumeration of textile structures and topological invariants of doubly-periodic structures.",
"",
"The theory of braids shows the interplay of two disciplines of pure mathematics: topology, used in the definition of braids, and the theory of groups, used in their treatment. The fundamentals of the theory can be understood without too much technical knowledge. It originated from a much older problem in pure mathematics: the classification of knots. Much progress has been achieved in this field; but all the progress seems only to emphasize the extreme difficulty of the problem. Today we are still very far from a complete solution. In view of this fact it is advisable to study objects that are in some fashion similar to knots, yet simple enough so as to make a complete classification possible. Braids are such objects. In order to develop the theory of braids we first explain what we call a weaving pattern of order ( being an ordinary integral number which is taken to be 5 in Fig. 1). Let L1 and L2 be two parallel straight lines in space with given orientation in the same sense (indicated by arrows). If P is a point on L1, Q a point on L2, we shall sometimes join P and Q by a curve c. In our drawings we can only indicate the projection of c onto the plane containing L"
]
}
|
1406.1532
|
2128927665
|
Bobbin lace is a fibre art form in which intricate and delicate patterns are created by braiding together many threads. We present an overview of the making of bobbin lace, illustrated with a simple, traditional bookmark design. We briefly summarize research on the topology of textiles and braid theory, which form a base for the current work. We then define a new mathematical model that supports the enumeration and generation of bobbin lace patterns using an intelligent combinatorial search, and which is capable of producing patterns that have never been seen before. Finally, we apply our new patterns to an original bookmark design and propose future areas for exploration.
|
Grishanov, Meshkov and Omelchenko have examined the structure of machine-made textiles and classified these textiles using the ambient isotopy invariant of knots @cite_37 . Textiles are typically made by repeating an arrangement of fibers in a periodic manner to cover an indefinitely large area. The arrangement of fibers can be represented as a period parallelogram which is translated in two non-parallel directions to create an edge-to-edge tiling of the plane @cite_31 . Periodic repetition in textiles is a stronger property than just simple translation of a wallpaper decoration: fibers that terminate at the edges of the parallelogram must connect with fibers of adjacent copies. This property, which Grishanov et al. call doubly periodic, can be visualized by joining opposite edges of the period parallelogram to form a torus (see Figure ). When wrapped around a torus, the fibers connect, forming a knot or a link as shown in Figure . The toroidal representation also reduces a pattern description from infinite to finite size without loss of information, a key idea which we will revisit when describing our own model in Section .
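The identification of opposite edges of the period parallelogram can be made concrete with modular coordinates. A minimal sketch, assuming period vectors u and v span the parallelogram (the function name is hypothetical):

```python
import numpy as np

def to_torus(points, u, v):
    """Reduce planar points to the period parallelogram spanned by u and v.

    Writing each point as a*u + b*v and taking (a mod 1, b mod 1) gives
    torus coordinates, so a fiber leaving one edge of the parallelogram
    re-enters at the matching point on the opposite edge.
    """
    basis = np.column_stack([u, v])                       # 2x2 period matrix
    coeffs = np.linalg.solve(basis, np.asarray(points, float).T).T
    return np.mod(coeffs, 1.0)                            # coordinates in [0, 1)^2
```

This is exactly the finite, lossless representation the toroidal view provides: any point of the infinite periodic pattern maps to a unique point of the unit square with opposite edges identified.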
|
{
"cite_N": [
"@cite_37",
"@cite_31"
],
"mid": [
"2161847190",
"1977183350"
],
"abstract": [
"This paper proposes a new systematic approach for the description and classification of textile structures based on topological principles. It is shown that textile structures can be considered as a specific case of knots or links and can be represented by diagrams on a torus. This enables modern methods of knot theory to be applied to the study of the topology of textiles. The basics of knot theory are briefly introduced. Some specific matters relating to the application of these methods to textiles are discussed, including enumeration of textile structures and topological invariants of doubly-periodic structures.",
"\"Remarkable...It will surely remain the unique reference in this area for many years to come.\" Roger Penrose , Nature \"...an outstanding achievement in mathematical education.\" Bulletin of The London Mathematical Society \"I am enormously impressed...Will be the definitive reference on tiling theory for many decades. Not only does the book bring together older results that have not been brought together before, but it contains a wealth of new material...I know of no comparable book.\" Martin Gardner"
]
}
|
1406.1532
|
2128927665
|
Bobbin lace is a fibre art form in which intricate and delicate patterns are created by braiding together many threads. We present an overview of the making of bobbin lace, illustrated with a simple, traditional bookmark design. We briefly summarize research on the topology of textiles and braid theory, which form a base for the current work. We then define a new mathematical model that supports the enumeration and generation of bobbin lace patterns using an intelligent combinatorial search, and which is capable of producing patterns that have never been seen before. Finally, we apply our new patterns to an original bookmark design and propose future areas for exploration.
|
In the 2D projection, strand positions are labelled @math to @math from left to right. Using standard braid notation @cite_1 , @math represents a strand in position @math crossing over its neighbour to the right. Similarly, @math represents a strand in position @math crossing under its neighbour to the right. We will use this standard notation to represent the basic cross and twist actions of bobbin lace. As mentioned in the introduction, bobbin lace actions are performed on four threads or two pairs of threads at a time. A mathematically idealized thread with no thickness can be equated to a strand. If we label the pairs from left to right, the two adjacent pairs @math and @math correspond to the four threads in positions @math , @math , @math and @math where @math . The cross action is represented by @math and the twist action is represented by @math . From this generalized description, we see that @math will only occur for odd values of @math and @math will only occur for even values of @math .
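Ignoring over/under information, each generator simply swaps two adjacent strand positions, so a braid word determines a permutation of the strands. A sketch follows; since the concrete generator indices are masked above (@math), the assignments cross = sigma_2 and twist = sigma_1^{-1} sigma_3^{-1} on four threads are conventional assumptions, not taken from the text.

```python
def apply_braid_word(word, n):
    """Apply a braid word to n strands, tracking only which strand
    ends in which position (the underlying permutation).

    word: list of signed nonzero integers; +i is the generator sigma_i
    (strand in position i crosses over its right neighbour), -i is its
    inverse (crosses under). Both signs swap positions i and i+1.
    """
    strands = list(range(1, n + 1))
    for g in word:
        i = abs(g) - 1                      # 0-based position
        assert 0 <= i < n - 1, "generator out of range"
        strands[i], strands[i + 1] = strands[i + 1], strands[i]
    return strands

# Assumed encodings on four threads (two pairs):
cross = [2]          # middle two threads exchange
twist = [-1, -3]     # each pair twists its own two threads
```

Composing words then models a sequence of bobbin lace actions, e.g. `apply_braid_word(cross + twist, 4)`.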
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"1606676009"
],
"abstract": [
"1. Introduction & Foundations. 2. The Braid Group. 3. World Problem. 4. Special types of braids. 5. Quotient groups of the braid group. 6. Isotopy of braids. 7. Homotopy braid theory. 8. From knots to braids. 9. Markov's theorem. 10. Knot invariants. 11. Braid groups on surfaces. 12. Algebraic equations. Appendix I: Group theory. Appendix II: Topology. Appendix III: Symplectic group. Appendix IV. Appendix V. Bibliography. Index."
]
}
|
1406.1532
|
2128927665
|
Bobbin lace is a fibre art form in which intricate and delicate patterns are created by braiding together many threads. We present an overview of the making of bobbin lace, illustrated with a simple, traditional bookmark design. We briefly summarize research on the topology of textiles and braid theory, which form a base for the current work. We then define a new mathematical model that supports the enumeration and generation of bobbin lace patterns using an intelligent combinatorial search, and which is capable of producing patterns that have never been seen before. Finally, we apply our new patterns to an original bookmark design and propose future areas for exploration.
|
An alternating braid is a braid in which each strand alternates going over and under the strands that it crosses (see Figures c and d). Alternating braids are characterized by the property that the @math generators for even positions have the opposite sign (superscript) from the @math generators for odd positions @cite_1 . Given the generator representation for bobbin lace actions, we infer that any combination of cross and twist will result in an alternating braid.
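The sign-parity characterization can be checked mechanically. A sketch, assuming braid words are encoded as signed generator indices (+i for sigma_i, -i for its inverse):

```python
def is_alternating(word):
    """Check the alternating-braid sign pattern: all generators at odd
    positions share one sign, all generators at even positions share
    the other.

    word: list of signed nonzero integers encoding a braid word.
    """
    odd_signs = {g > 0 for g in word if abs(g) % 2 == 1}
    even_signs = {g > 0 for g in word if abs(g) % 2 == 0}
    # Each parity class must be internally consistent...
    if len(odd_signs) > 1 or len(even_signs) > 1:
        return False
    # ...and when both classes appear, their signs must differ.
    if odd_signs and even_signs:
        return odd_signs != even_signs
    return True
```

Under the conventional encodings of cross and twist (positive even-index generators, negative odd-index generators), every word built from those two actions passes this check, which is the inference drawn above.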
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"1606676009"
],
"abstract": [
"1. Introduction & Foundations. 2. The Braid Group. 3. World Problem. 4. Special types of braids. 5. Quotient groups of the braid group. 6. Isotopy of braids. 7. Homotopy braid theory. 8. From knots to braids. 9. Markov's theorem. 10. Knot invariants. 11. Braid groups on surfaces. 12. Algebraic equations. Appendix I: Group theory. Appendix II: Topology. Appendix III: Symplectic group. Appendix IV. Appendix V. Bibliography. Index."
]
}
|
1406.1818
|
1564794196
|
In this paper, we consider resource allocation optimization problem in cellular networks for different types of users running multiple applications simultaneously. In our proposed model, each user application is assigned a utility function that represents the application type running on the user equipment (UE). The network operators assign a subscription weight to each UE based on its subscription. Each UE assigns an application weight to each of its applications based on the instantaneous usage percentage of the application. Additionally, UEs with higher priority assign applications target rates to their applications. Our objective is to allocate the resources optimally among the UEs and their applications from a single evolved node B (eNodeB) based on a utility proportional fairness policy with priority to realtime application users. A minimum quality of service (QoS) is guaranteed to each UE application based on the UE subscription weight, the UE application weight and the UE application target rate. We propose a two-stage rate allocation algorithm to allocate the eNodeB resources among users and their applications. Finally, we present simulation results for the performance of our rate allocation algorithm.
|
In @cite_1 , @cite_6 and @cite_8 , the authors present an optimal rate allocation algorithm for users connected to a single carrier. The optimal rates are achieved by formulating the rate allocation optimization problem in a convex optimization framework. In @cite_8 , the authors considered a resource allocation optimization problem with service-offering differentiation and application-status differentiation. In their system model, each subscriber is running multiple applications simultaneously, and the rate allocation is performed in two stages. A rate allocation approach with carrier aggregation is presented in @cite_2 , where the authors use a two-stage algorithm to allocate the resources of two carriers optimally among the UEs. In @cite_5 , the authors present a resource allocation optimization problem for two groups of users: commercial and public safety users. The algorithm gives priority to the public safety users when allocating the eNodeB resources.
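For the special case of logarithmic utilities, the two-stage structure has a simple closed form: maximising sum_i w_i log r_i subject to sum_i r_i = R gives r_i = w_i R / sum(w). The sketch below is illustrative only and is not the cited papers' general utility-proportional-fairness algorithm, which handles sigmoidal utilities and target rates as well.

```python
def proportional_fair(weights, total):
    """Weighted proportional-fair split: the maximiser of
    sum_i w_i * log(r_i) under sum_i r_i = total is
    r_i = w_i * total / sum(w)."""
    s = float(sum(weights))
    return [w * total / s for w in weights]

def two_stage_allocation(subscription_weights, app_weights_per_ue, capacity):
    """Stage 1: split eNodeB capacity among UEs by subscription weight.
    Stage 2: each UE splits its rate among its applications by usage
    weight."""
    ue_rates = proportional_fair(subscription_weights, capacity)
    return [proportional_fair(aw, r)
            for aw, r in zip(app_weights_per_ue, ue_rates)]
```

For example, a UE with twice the subscription weight of another receives twice the rate in stage 1, and each UE's applications then share that rate in proportion to their usage weights.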
|
{
"cite_N": [
"@cite_8",
"@cite_1",
"@cite_6",
"@cite_2",
"@cite_5"
],
"mid": [
"2074956630",
"2013722958",
"2005007320",
"1968719883",
"2073839497"
],
"abstract": [
"In this paper, we consider resource allocation optimization problem in fourth generation long term evolution (4G-LTE) with users running multiple applications. Each mobile user can run both delay-tolerant and real-time applications. In every user equipment (UE), each application has an application-status differentiation from other applications depending on its instantaneous usage percentage. In addition, the network operators provide subscriber differentiation by assigning each UE a subscription weight relative to its subscription. The objective is to optimally allocate the resources with a utility proportional fairness policy. We propose an algorithm to allocate the resources in two-stages. In the first-stage, the UEs collaborate with the evolved node B (eNodeB) that allocates the optimal rates to users according to that policy. In the second-stage, each user allocates its assigned rate internally to its applications according to their usage percentage. We prove that the two-stage resource allocation algorithm allocates the optimal rates without eNodeB knowledge of the UEs utilities. Finally, numerical results on the performance of the proposed algorithm are presented.",
"In this paper, we introduce an approach for resource allocation of elastic and inelastic adaptive real-time traffic in the fourth generation long term evolution (4G-LTE) system. In our model, we use logarithmic and sigmoidal-like utility functions to represent the users' applications running on different user equipments (UEs). We present a resource allocation optimization problem with utility proportional fairness policy, where the fairness among users is in utility percentage (i.e. user satisfaction with the service) of the corresponding applications. Our objective is to allocate the resources to the users with priority given to the adaptive real-time application users. In addition, a minimum resource allocation for users with elastic and inelastic traffic should be guaranteed. Our goal is that every user subscribing for the mobile service should have a minimum quality-of-service (QoS) with a priority criterion. We prove that our resource allocation optimization problem is convex and therefore the optimal solution is tractable. We present a distributed algorithm to allocate evolved NodeB (eNodeB) resources optimally with a priority criterion. Finally, we present simulation results for the performance of our rate allocation algorithm.",
"In this paper, we consider a resource allocation optimization problem in the fourth generation long-term evolution (4G-LTE) with elastic and inelastic real-time traffic. Mobile users are running either delay-tolerant or real-time applications. The users' applications are approximated by logarithmic or sigmoidal-like utility functions. Our objective is to allocate resources according to the utility proportional fairness policy. Prior utility proportional fairness resource allocation algorithms fail to converge for high-traffic situations. We present a robust algorithm that resolves the drawbacks of prior algorithms for the utility proportional fairness policy. Our robust optimal algorithm allocates the optimal rates for both high-traffic and low-traffic situations. It prevents fluctuation in the resource allocation process. In addition, we show that our algorithm provides traffic-dependent pricing for network providers. This pricing could be used to flatten the network traffic and decrease the cost per bandwidth for the users. Finally, numerical results are presented on the performance of the proposed algorithm.",
"In this paper, we consider a resource allocation optimization problem with carrier aggregation in fourth generation long term evolution (4G-LTE). In our proposed model, each user equipment (UE) is assigned a utility function that represents the application type running on the UE. Our objective is to allocate the resources from two carriers to each user based on its application that is represented by the utility function assigned to that user. We consider two groups of users, one with elastic traffic and the other with inelastic traffic. Each user is guaranteed a minimum resource allocation. In addition, a priority resource allocation is given to the UEs running adaptive real time applications. We prove that the optimal rate allocated to each UE by the single carrier resource allocation optimization problem is equivalent to the aggregated optimal rates allocated to the same user by the primary and secondary carriers when their total resources is equivalent to the single carrier resources. Our goal is to guarantee a minimum quality of service (QoS) that varies based on the user application type. We present a carrier aggregation rate allocation algorithm to allocate two carriers resources optimally among users. Finally we present simulation results with the carrier aggregation rate allocation algorithm.",
"In this paper, we consider resource allocation optimization problem in fourth generation long term evolution (4G-LTE) for public safety and commercial users running elastic or inelastic traffic. Each mobile user can run delay-tolerant or real-time applications. In our proposed model, each user equipment (UE) is assigned a utility function that represents the application type running on the UE. Our objective is to allocate the resources from a single evolved node B (eNodeB) to each user based on the user application that is represented by the utility function assigned to that user. We consider two groups of users, one represents public safety users with elastic or inelastic traffic and the other represents commercial users with elastic or inelastic traffic. The public safety group is given priority over the commercial group and within each group the inelastic traffic is prioritized over the elastic traffic. Our goal is to guarantee a minimum quality of service (QoS) that varies based on the user type, the user application type and the application target rate. A rate allocation algorithm is presented to allocate the eNodeB resources optimally among public safety and commercial users. Finally, the simulation results are presented on the performance of the proposed rate allocation algorithm."
]
}
|
1406.1284
|
1599739241
|
Author(s): Azimdoost, B; Westphal, C; Sadjadpour, HR | Abstract: We are studying some fundamental properties of the interface between control and data planes in Information-Centric Networks. We try to evaluate the traffic between these two planes based on allowing a minimum level of acceptable distortion in the network state representation in the control plane. We apply our framework to content distribution, and see how we can compute the overhead of maintaining the location of content in the control plane. This is of importance to evaluate content-oriented network architectures: we identify scenarios where the cost of updating the control plane for content routing overwhelms the benefit of fetching a nearby copy. We also show how to minimize the cost of this overhead when associating costs to peering traffic and to internal traffic for operator-driven CDNs.
|
One impetus to study the relationship between the control layer and the network layer comes from the increased network state complexity from trying to route directly to content. Request-routing mechanisms have been in place for a while @cite_3 and proposals @cite_23 have been suggested to share information between different CDNs, in essence enabling the control planes of two domains to interact. Many architectures have been proposed that are oriented around content @cite_4 @cite_21 @cite_6 @cite_8 @cite_11 @cite_18 and some have raised concerns about the scalability of properly identifying the location of up to @math pieces of content @cite_12 . Our model presents a mathematical foundation to study the pros and cons of such architectures.
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_8",
"@cite_21",
"@cite_3",
"@cite_6",
"@cite_23",
"@cite_12",
"@cite_11"
],
"mid": [
"2109983959",
"1668625574",
"2367132056",
"2168903090",
"1653792747",
"2014952121",
"",
"2133961349",
""
],
"abstract": [
"The information-centric networking (ICN) concept is a significant common approach of several future Internet research activities. The approach leverages in-network caching, multiparty communication through replication, and interaction models decoupling senders and receivers. The goal is to provide a network infrastructure service that is better suited to today's use (in particular, content distribution and mobility) and more resilient to disruptions and failures. The ICN approach is being explored by a number of research projects. We compare and discuss design choices and features of proposed ICN architectures, focusing on the following main components: named data objects, naming and security, API, routing and transport, and caching. We also discuss the advantages of the ICN approach in general.",
"The primary use of the Internet is content distribution -- the delivery of web pages, audio, and video to client applications -- yet the Internet was never architected for scalable content delivery. The result has been a proliferation of proprietary protocols and ad hoc mechanisms to meet growing content demand. In this paper, we describe a content routing design based on name-based routing as part of an explicit Internet content layer. We claim that this content routing is a natural extension of current Internet directory and routing systems, allows efficient content location, and can be implemented to scale with the Internet.",
"",
"The Internet has evolved greatly from its original incarnation. For instance, the vast majority of current Internet usage is data retrieval and service access, whereas the architecture was designed around host-to-host applications such as telnet and ftp. Moreover, the original Internet was a purely transparent carrier of packets, but now the various network stakeholders use middleboxes to improve security and accelerate applications. To adapt to these changes, we propose the Data-Oriented Network Architecture (DONA), which involves a clean-slate redesign of Internet naming and name resolution.",
"This document presents a summary of Request-Routing techniques that are used to direct client requests to surrogates based on various policies and a possible set of metrics. The document covers techniques that were commonly used in the industry on or before December 2000. In this memo, the term Request-Routing represents techniques that are commonly called content routing or content redirection. In principle, Request-Routing techniques can be classified under: DNS Request-Routing, Transport-layer Request-Routing, and Application-layer Request-Routing.",
"Network use has evolved to be dominated by content distribution and retrieval, while networking technology still speaks only of connections between hosts. Accessing content and services requires mapping from the what that users care about to the network's where. We present Content-Centric Networking (CCN) which treats content as a primitive - decoupling location from identity, security and access, and retrieving content by name. Using new approaches to routing named content, derived heavily from IP, we can simultaneously achieve scalability, security and performance. We implemented our architecture's basic features and demonstrate resilience and performance with secure file downloads and VoIP calls.",
"",
"There have been many recent papers on data-oriented or content-centric network architectures. Despite the voluminous literature, surprisingly little clarity is emerging as most papers focus on what differentiates them from other proposals. We begin this paper by identifying the existing commonalities and important differences in these designs, and then discuss some remaining research issues. After our review, we emerge skeptical (but open-minded) about the value of this approach to networking.",
""
]
}
|
1406.0924
|
2950067095
|
We describe a framework for defining high-order image models that can be used in a variety of applications. The approach involves modeling local patterns in a multiscale representation of an image. Local properties of a coarsened image reflect non-local properties of the original image. In the case of binary images local properties are defined by the binary patterns observed over small neighborhoods around each pixel. With the multiscale representation we capture the frequency of patterns observed at different scales of resolution. This framework leads to expressive priors that depend on a relatively small number of parameters. For inference and learning we use an MCMC method for block sampling with very large blocks. We evaluate the approach with two example applications. One involves contour detection. The other involves binary segmentation.
|
FRAME models @cite_8 and more recently Fields of Experts (FoE) @cite_10 defined high-order energy models using the response of linear filters. FoP models are closely related. The detection of 3x3 patterns at different resolutions corresponds to using filters of increasing size. In FoP we have a fixed set of pre-defined non-linear filters that detect common patterns at different resolutions. This avoids filter learning, which leads to a non-convex optimization problem in FoE.
|
{
"cite_N": [
"@cite_10",
"@cite_8"
],
"mid": [
"2130184048",
"2116877738"
],
"abstract": [
"We develop a framework for learning generic, expressive image priors that capture the statistics of natural scenes and can be used for a variety of machine vision tasks. The approach provides a practical method for learning high-order Markov random field (MRF) models with potential functions that extend over large pixel neighborhoods. These clique potentials are modeled using the Product-of-Experts framework that uses non-linear functions of many linear filter responses. In contrast to previous MRF approaches all parameters, including the linear filters themselves, are learned from training data. We demonstrate the capabilities of this Field-of-Experts model with two example applications, image denoising and image inpainting, which are implemented using a simple, approximate inference scheme. While the model is trained on a generic image database and is not tuned toward a specific application, we obtain results that compete with specialized techniques.",
"This paper proposes a novel framework for labelling problems which is able to combine multiple segmentations in a principled manner. Our method is based on higher order conditional random fields and uses potentials defined on sets of pixels (image segments) generated using unsupervised segmentation algorithms. These potentials enforce label consistency in image regions and can be seen as a strict generalization of the commonly used pairwise contrast sensitive smoothness potentials. The higher order potential functions used in our framework take the form of the robust Pn model. This enables the use of powerful graph cut based move making algorithms for performing inference in the framework [14 ]. We test our method on the problem of multi-class object segmentation by augmenting the conventional CRF used for object segmentation with higher order potentials defined on image regions. Experiments on challenging data sets show that integration of higher order potentials quantitatively and qualitatively improves results leading to much better definition of object boundaries. We believe that this method can be used to yield similar improvements for many other labelling problems."
]
}
|
1406.0924
|
2950067095
|
We describe a framework for defining high-order image models that can be used in a variety of applications. The approach involves modeling local patterns in a multiscale representation of an image. Local properties of a coarsened image reflect non-local properties of the original image. In the case of binary images local properties are defined by the binary patterns observed over small neighborhoods around each pixel. With the multiscale representation we capture the frequency of patterns observed at different scales of resolution. This framework leads to expressive priors that depend on a relatively small number of parameters. For inference and learning we use an MCMC method for block sampling with very large blocks. We evaluate the approach with two example applications. One involves contour detection. The other involves binary segmentation.
|
The work in @cite_23 defined a variety of multiresolution models for images based on a quad-tree representation. The quad-tree leads to models that support efficient learning and inference via dynamic programming, but such models also suffer from artifacts due to the underlying tree-structure. The work in @cite_3 defined binary image priors using deep Boltzmann machines. Those models are based on a hierarchy of hidden variables that is related to our multiscale representation. However in our case the multiscale representation is a deterministic function of the image and does not involve extra hidden variables as in @cite_3 . The approach we take to define a multiscale model is similar to @cite_18 , where local properties of subsampled signals were used to model curves.
|
{
"cite_N": [
"@cite_18",
"@cite_3",
"@cite_23"
],
"mid": [
"2125310690",
"2105180511",
"2139549194"
],
"abstract": [
"We describe a new hierarchical representation for two-dimensional objects that captures shape information at multiple levels of resolution. This representation is based on a hierarchical description of an object's boundary and can be used in an elastic matching framework, both for comparing pairs of objects and for detecting objects in cluttered images. In contrast to classical elastic models, our representation explicitly captures global shape information. This leads to richer geometric models and more accurate recognition results. Our experiments demonstrate classification results that are significantly better than the current state-of-the-art in several shape datasets. We also show initial experiments in matching shapes to cluttered images.",
"A good model of object shape is essential in applications such as segmentation, object detection, inpainting and graphics. For example, when performing segmentation, local constraints on the shape can help where the object boundary is noisy or unclear, and global constraints can resolve ambiguities where background clutter looks similar to part of the object. In general, the stronger the model of shape, the more performance is improved. In this paper, we use a type of Deep Boltzmann Machine [22] that we call a Shape Boltzmann Machine (ShapeBM) for the task of modeling binary shape images. We show that the ShapeBM characterizes a strong model of shape, in that samples from the model look realistic and it can generalize to generate samples that differ from training examples. We find that the ShapeBM learns distributions that are qualitatively and quantitatively better than existing models for this task.",
"Reviews a significant component of the rich field of statistical multiresolution (MR) modeling and processing. These MR methods have found application and permeated the literature of a widely scattered set of disciplines, and one of our principal objectives is to present a single, coherent picture of this framework. A second goal is to describe how this topic fits into the even larger field of MR methods and concepts-in particular, making ties to topics such as wavelets and multigrid methods. A third goal is to provide several alternate viewpoints for this body of work, as the methods and concepts we describe intersect with a number of other fields. The principal focus of our presentation is the class of MR Markov processes defined on pyramidally organized trees. The attractiveness of these models stems from both the very efficient algorithms they admit and their expressive power and broad applicability. We show how a variety of methods and models relate to this framework including models for self-similar and 1/f processes. We also illustrate how these methods have been used in practice."
]
}
|
1406.0238
|
1594004154
|
A distributed randomized block coordinate descent method for minimizing a convex function of a huge number of variables is proposed. The complexity of the method is analyzed under the assumption that the smooth part of the objective function is partially block separable. The number of iterations required is bounded by a function of the error and the degree of separability, which extends the results in Richtárik and Takáč (Parallel Coordinate Descent Methods for Big Data Optimization, Mathematical Programming, DOI:10.1007/s10107-015-0901-6) to a distributed environment. Several approaches to the distribution and synchronization of the computation across a cluster of multi-core computers are described and promising computational results are provided.
|
Before we proceed, we give a brief overview of some existing literature on coordinate descent methods. For further references, we refer the reader to @cite_13 @cite_7 @cite_24 .
|
{
"cite_N": [
"@cite_24",
"@cite_13",
"@cite_7"
],
"mid": [
"2951253949",
"1603765807",
"2032395696"
],
"abstract": [
"We propose a new stochastic coordinate descent method for minimizing the sum of convex functions each of which depends on a small number of coordinates only. Our method (APPROX) is simultaneously Accelerated, Parallel and PROXimal; this is the first time such a method is proposed. In the special case when the number of processors is equal to the number of coordinates, the method converges at the rate @math , where @math is the iteration counter, @math is an average degree of separability of the loss function, @math is the average of Lipschitz constants associated with the coordinates and individual functions in the sum, and @math is the distance of the initial point from the minimizer. We show that the method can be implemented without the need to perform full-dimensional vector operations, which is the major bottleneck of existing accelerated coordinate descent methods. The fact that the method depends on the average degree of separability, and not on the maximum degree of separability, can be attributed to the use of new safe large stepsizes, leading to improved expected separable overapproximation (ESO). These are of independent interest and can be utilized in all existing parallel stochastic coordinate descent algorithms based on the concept of ESO.",
"",
"In this work we show that randomized (block) coordinate descent methods can be accelerated by parallelization when applied to the problem of minimizing the sum of a partially separable smooth convex function and a simple separable convex function. The theoretical speedup, as compared to the serial method, and referring to the number of iterations needed to approximately solve the problem with high probability, is a simple expression depending on the number of parallel processors and a natural and easily computable measure of separability of the smooth component of the objective function. In the worst case, when no degree of separability is present, there may be no speedup; in the best case, when the problem is separable, the speedup is equal to the number of processors. Our analysis also works in the mode when the number of blocks being updated at each iteration is random, which allows for modeling situations with busy or unreliable processors. We show that our algorithm is able to solve a LASSO problem involving a matrix with 20 billion nonzeros in 2 h on a large memory node with 24 cores."
]
}
|
1406.0238
|
1594004154
|
A distributed randomized block coordinate descent method for minimizing a convex function of a huge number of variables is proposed. The complexity of the method is analyzed under the assumption that the smooth part of the objective function is partially block separable. The number of iterations required is bounded by a function of the error and the degree of separability, which extends the results in Richtárik and Takáč (Parallel Coordinate Descent Methods for Big Data Optimization, Mathematical Programming, DOI:10.1007/s10107-015-0901-6) to a distributed environment. Several approaches to the distribution and synchronization of the computation across a cluster of multi-core computers are described and promising computational results are provided.
|
Block-coordinate descent. Block-coordinate descent is a simple iterative optimization strategy, where two subsequent iterates differ only in a single block of coordinates. In a very common special case, each block consists of a single coordinate. The choice of the block can be deterministic, e.g., cyclic ( @cite_31 ), greedy ( @cite_18 ), or randomized. Recent theoretical guarantees for randomized coordinate-descent algorithms can be found in @cite_43 @cite_16 @cite_12 @cite_42 @cite_19 @cite_34 . Coordinate descent algorithms are also closely related to coordinate relaxation, linear and non-linear Gauss-Seidel methods, subspace correction, and domain decomposition (see @cite_39 for references). For classical references on non-randomized variants, we refer to the work of Tseng @cite_17 @cite_36 @cite_4 @cite_9 .
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_36",
"@cite_9",
"@cite_42",
"@cite_34",
"@cite_39",
"@cite_43",
"@cite_19",
"@cite_31",
"@cite_16",
"@cite_12",
"@cite_17"
],
"mid": [
"1737373496",
"",
"",
"",
"2075660001",
"2963541115",
"",
"2095984592",
"2560624268",
"1514006156",
"2117686388",
"2099119387",
"2039050532"
],
"abstract": [
"In this work we propose solving huge-scale instances of the truss topology design problem with coordinate descent methods. We develop four efficient codes: serial and parallel implementations of randomized and greedy rules for the selection of the variable(s) (potential bar(s)) to be updated in the next iteration. Both serial methods enjoy an O(n/k) iteration complexity guarantee, where n is the number of potential bars and k the iteration counter. Our parallel implementations, written in CUDA and running on a graphical processing unit (GPU), are capable of speedups of up to two orders of magnitude when compared to their serial counterparts. Numerical experiments were performed on instances with up to 30 million potential bars.",
"",
"",
"",
"In this paper we analyze the randomized block-coordinate descent (RBCD) methods proposed in Nesterov (SIAM J Optim 22(2):341–362, 2012), Richtárik and Takáč (Math Program 144(1–2):1–38, 2014) for minimizing the sum of a smooth convex function and a block-separable convex function, and derive improved bounds on their convergence rates. In particular, we extend Nesterov's technique developed in Nesterov (SIAM J Optim 22(2):341–362, 2012) for analyzing the RBCD method for minimizing a smooth convex function over a block-separable closed convex set to the aforementioned more general problem and obtain a sharper expected-value type of convergence rate than the one implied in Richtárik and Takáč (Math Program 144(1–2):1–38, 2014). As a result, we also obtain a better high-probability type of iteration complexity. In addition, for unconstrained smooth convex minimization, we develop a new technique called randomized estimate sequence to analyze the accelerated RBCD method proposed by Nesterov (SIAM J Optim 22(2):341–362, 2012) and establish a sharper expected-value type of convergence rate than the one given in Nesterov (SIAM J Optim 22(2):341–362, 2012).",
"In this paper we show how to accelerate randomized coordinate descent methods and achieve faster convergence rates without paying per-iteration costs in asymptotic running time. In particular, we show how to generalize and efficiently implement a method proposed by Nesterov, giving faster asymptotic running times for various algorithms that use standard coordinate descent as a black box. In addition to providing a proof of convergence for this new general method, we show that it is numerically stable, efficiently implementable, and in certain regimes, asymptotically optimal. To highlight the power of this algorithm, we show how it can be used to create faster linear system solvers in several regimes: - We show how this method achieves a faster asymptotic runtime than conjugate gradient for solving a broad class of symmetric positive definite systems of equations. - We improve the convergence guarantees for Kaczmarz methods, a popular technique for image reconstruction and solving overdetermined systems of equations, by accelerating an algorithm of Strohmer and Vershynin. - We achieve the best known running time for solving Symmetric Diagonally Dominant (SDD) systems of equations in the unit-cost RAM model, obtaining a running time of O(m log^{3/2} n (log log n)^{1/2} log((log n)/ε)) by accelerating a recent solver. Beyond the independent interest of these solvers, we believe they highlight the versatility of the approach of this paper and we hope that they will open the door for further algorithmic improvements in the future.",
"",
"In this paper we propose new methods for solving huge-scale optimization problems. For problems of this size, even the simplest full-dimensional vector operations are very expensive. Hence, we propose to apply an optimization technique based on random partial update of decision variables. For these methods, we prove the global estimates for the rate of convergence. Surprisingly enough, for certain classes of objective functions, our results are better than the standard worst-case bounds for deterministic algorithms. We present constrained and unconstrained versions of the method, and its accelerated variant. Our numerical test confirms a high efficiency of this technique on problems of very big size.",
"",
"Cyclic coordinate descent is a classic optimization method that has witnessed a resurgence of interest in machine learning. Reasons for this include its simplicity, speed and stability, as well as its competitive performance on @math regularized smooth optimization problems. Surprisingly, very little is known about its finite time convergence behavior on these problems. Most existing results either just prove convergence or provide asymptotic rates. We fill this gap in the literature by proving @math convergence rates (where @math is the iteration counter) for two variants of cyclic coordinate descent under an isotonicity assumption. Our analysis proceeds by comparing the objective values attained by the two variants with each other, as well as with the gradient descent algorithm. We show that the iterates generated by the cyclic coordinate descent methods remain better than those of gradient descent uniformly over time.",
"In this paper we develop a randomized block-coordinate descent method for minimizing the sum of a smooth and a simple nonsmooth block-separable convex function and prove that it obtains an ε-accurate solution with probability at least 1 − ρ in at most O((n/ε) log(1/ρ)) iterations, where n is the number of blocks. This extends recent results of Nesterov (SIAM J Optim 22(2): 341–362, 2012), which cover the smooth case, to composite minimization, while at the same time improving the complexity by the factor of 4 and removing ε from the logarithmic term. More importantly, in contrast with the aforementioned work in which the author achieves the results by applying the method to a regularized version of the objective function with an unknown scaling factor, we show that this is not necessary, thus achieving first true iteration complexity bounds. For strongly convex functions the method converges linearly. In the smooth case we also allow for arbitrary probability vectors and non-Euclidean norms. Finally, we demonstrate numerically that the algorithm is able to solve huge-scale l1-regularized least squares problems with a billion variables.",
"We study the performance of a family of randomized parallel coordinate descent methods for minimizing the sum of a nonsmooth and separable convex functions. The problem class includes as a special case L1-regularized L1 regression and the minimization of the exponential loss (\"AdaBoost problem\"). We assume the input data defining the loss function is contained in a sparse @math matrix @math with at most @math nonzeros in each row. Our methods need @math iterations to find an approximate solution with high probability, where @math is the number of processors and @math for the fastest variant. The notation hides dependence on quantities such as the required accuracy and confidence levels and the distance of the starting iterate from an optimal point. Since @math is a decreasing function of @math , the method needs fewer iterations when more processors are used. Certain variants of our algorithms perform on average only @math arithmetic operations during a single iteration per processor and, because @math decreases when @math does, fewer iterations are needed for sparser problems.",
"We consider the problem of minimizing the sum of a smooth function and a separable convex function. This problem includes as special cases bound-constrained optimization and smooth optimization with l1-regularization. We propose a (block) coordinate gradient descent method for solving this class of nonsmooth separable problems. We establish global convergence and, under a local Lipschitzian error bound assumption, linear convergence for this method. The local Lipschitzian error bound holds under assumptions analogous to those for constrained smooth optimization, e.g., the convex function is polyhedral and the smooth function is (nonconvex) quadratic or is the composition of a strongly convex function with a linear mapping. We report numerical experience with solving the l1-regularization of unconstrained optimization problems from in ACM Trans. Math. Softw. 7, 17–41, 1981 and from the CUTEr set (Gould and Orban in ACM Trans. Math. Softw. 29, 373–394, 2003). Comparison with L-BFGS-B and MINOS, applied to a reformulation of the l1-regularized problem as a bound-constrained optimization problem, is also reported."
]
}
|
1406.0238
|
1594004154
|
A distributed randomized block coordinate descent method for minimizing a convex function of a huge number of variables is proposed. The complexity of the method is analyzed under the assumption that the smooth part of the objective function is partially block separable. The number of iterations required is bounded by a function of the error and the degree of separability, which extends the results in Richtárik and Takáč (Parallel Coordinate Descent Methods for Big Data Optimization, Mathematical Programming, DOI:10.1007/s10107-015-0901-6) to a distributed environment. Several approaches to the distribution and synchronization of the computation across a cluster of multi-core computers are described and promising computational results are provided.
|
Parallel block-coordinate descent. Clearly, one can parallelize coordinate descent by updating several blocks in parallel. The related complexity issues were studied by a number of authors. Richtárik and Takáč studied a broad class of parallel methods for the same problem we study in this paper, and introduced the concept of ESO @cite_7 . The complexity was improved by @cite_37 . An efficient accelerated version was introduced by Fercoq and Richtárik @cite_24 and an inexact version was studied in @cite_2 . An asynchronous variant was studied by @cite_27 . A non-uniform sampling scheme and a method for dealing with non-smooth functions were described in @cite_5 and @cite_12 , respectively. Further related work can be found in @cite_32 @cite_40 @cite_41 @cite_28 .
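As a rough illustration of the scheme these parallel methods analyze, here is a minimal sketch of randomized "parallel" coordinate descent on a toy LASSO objective (the function names, the toy problem, and the synchronous residual snapshot are illustrative assumptions, not code from any of the cited papers):

```python
import random

def soft_threshold(v, t):
    # proximal operator of t * |.| (shrinks v toward zero by t)
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def parallel_cd_lasso(A, b, lam, tau, iters, seed=0):
    """Randomized 'parallel' coordinate descent sketch for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Each iteration samples tau coordinates uniformly at random and
    updates them from the same residual snapshot, mimicking tau
    processors applying their updates in one synchronous round."""
    rng = random.Random(seed)
    m, n = len(A), len(A[0])
    x = [0.0] * n
    # coordinate Lipschitz constants L_j = ||A[:, j]||^2
    L = [sum(A[i][j] ** 2 for i in range(m)) for j in range(n)]
    for _ in range(iters):
        # residual snapshot shared by all 'processors' this round
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        for j in rng.sample(range(n), tau):  # random coordinate subset
            g = sum(A[i][j] * r[i] for i in range(m))  # partial gradient
            x[j] = soft_threshold(x[j] - g / L[j], lam / L[j])
    return x
```

With an identity data matrix, one full round reduces to soft-thresholding the right-hand side coordinate-wise, which makes the behaviour easy to check by hand.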
|
{
"cite_N": [
"@cite_37",
"@cite_7",
"@cite_41",
"@cite_28",
"@cite_32",
"@cite_24",
"@cite_27",
"@cite_40",
"@cite_2",
"@cite_5",
"@cite_12"
],
"mid": [
"1854151777",
"2032395696",
"1512309675",
"2138243089",
"2963707635",
"2951253949",
"2100007248",
"2035233604",
"2950930039",
"2164075197",
"2099119387"
],
"abstract": [
"In this work we study the parallel coordinate descent method (PCDM) proposed by Richtárik and Takáč [26] for minimizing a regularized convex function. We adopt elements from the work of Xiao and Lu [39], and combine them with several new insights, to obtain sharper iteration complexity results for PCDM than those presented in [26]. Moreover, we show that PCDM is monotonic in expectation, which was not confirmed in [26], and we also derive the first high probability iteration complexity result where the initial level set is unbounded.",
"In this work we show that randomized (block) coordinate descent methods can be accelerated by parallelization when applied to the problem of minimizing the sum of a partially separable smooth convex function and a simple separable convex function. The theoretical speedup, as compared to the serial method, and referring to the number of iterations needed to approximately solve the problem with high probability, is a simple expression depending on the number of parallel processors and a natural and easily computable measure of separability of the smooth component of the objective function. In the worst case, when no degree of separability is present, there may be no speedup; in the best case, when the problem is separable, the speedup is equal to the number of processors. Our analysis also works in the mode when the number of blocks being updated at each iteration is random, which allows for modeling situations with busy or unreliable processors. We show that our algorithm is able to solve a LASSO problem involving a matrix with 20 billion nonzeros in 2 h on a large memory node with 24 cores.",
"Uniform sampling of training data has been commonly used in traditional stochastic optimization algorithms such as Proximal Stochastic Gradient Descent (prox-SGD) and Proximal Stochastic Dual Coordinate Ascent (prox-SDCA). Although uniform sampling can guarantee that the sampled stochastic quantity is an unbiased estimate of the corresponding true quantity, the resulting estimator may have a rather high variance, which negatively affects the convergence of the underlying optimization procedure. In this paper we study stochastic optimization with importance sampling, which improves the convergence rate by reducing the stochastic variance. Specifically, we study prox-SGD (actually, stochastic mirror descent) with importance sampling and prox-SDCA with importance sampling. For prox-SGD, instead of adopting uniform sampling throughout the training process, the proposed algorithm employs importance sampling to minimize the variance of the stochastic gradient. For prox-SDCA, the proposed importance sampling scheme aims to achieve higher expected dual value at each dual coordinate ascent step. We provide extensive theoretical analysis to show that the convergence rates with the proposed importance sampling methods can be significantly improved under suitable conditions both for prox-SGD and for prox-SDCA. Experiments are provided to verify the theoretical analysis.",
"Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art performance on a variety of machine learning tasks. Several researchers have recently proposed schemes to parallelize SGD, but all require performance-destroying memory locking and synchronization. This work aims to show using novel theoretical analysis, algorithms, and implementation that SGD can be implemented without any locking. We present an update scheme called HOGWILD! which allows processors access to shared memory with the possibility of overwriting each other's work. We show that when the associated optimization problem is sparse, meaning most gradient updates only modify small parts of the decision variable, then HOGWILD! achieves a nearly optimal rate of convergence. We demonstrate experimentally that HOGWILD! outperforms alternative schemes that use locking by an order of magnitude.",
"Large-scale l1-regularized loss minimization problems arise in high-dimensional applications such as compressed sensing and high-dimensional supervised learning, including classification and regression problems. High-performance algorithms and implementations are critical to efficiently solving these problems. Building upon previous work on coordinate descent algorithms for l1-regularized problems, we introduce a novel family of algorithms called block-greedy coordinate descent that includes, as special cases, several existing algorithms such as SCD, Greedy CD, Shotgun, and Thread-Greedy. We give a unified convergence analysis for the family of block-greedy algorithms. The analysis suggests that block-greedy coordinate descent can better exploit parallelism if features are clustered so that the maximum inner product between features in different blocks is small. Our theoretical convergence analysis is supported with experimental results using data from diverse real-world applications. We hope that algorithmic approaches and convergence analysis we provide will not only advance the field, but will also encourage researchers to systematically explore the design space of algorithms for solving large-scale l1-regularization problems.",
"We propose a new stochastic coordinate descent method for minimizing the sum of convex functions each of which depends on a small number of coordinates only. Our method (APPROX) is simultaneously Accelerated, Parallel and PROXimal; this is the first time such a method is proposed. In the special case when the number of processors is equal to the number of coordinates, the method converges at the rate @math , where @math is the iteration counter, @math is an average degree of separability of the loss function, @math is the average of Lipschitz constants associated with the coordinates and individual functions in the sum, and @math is the distance of the initial point from the minimizer. We show that the method can be implemented without the need to perform full-dimensional vector operations, which is the major bottleneck of existing accelerated coordinate descent methods. The fact that the method depends on the average degree of separability, and not on the maximum degree of separability, can be attributed to the use of new safe large stepsizes, leading to improved expected separable overapproximation (ESO). These are of independent interest and can be utilized in all existing parallel stochastic coordinate descent algorithms based on the concept of ESO.",
"We describe an asynchronous parallel stochastic coordinate descent algorithm for minimizing smooth unconstrained or separably constrained functions. The method achieves a linear convergence rate on functions that satisfy an essential strong convexity property and a sublinear rate (1/K) on general convex functions. Near-linear speedup on a multicore system can be expected if the number of processors is O(n^{1/2}) in unconstrained optimization and O(n^{1/4}) in the separable-constrained case, where n is the number of variables. We describe results from implementation on 40-core processors.",
"In this paper, we study decomposition methods based on separable approximations for minimizing the augmented Lagrangian. In particular, we study and compare the diagonal quadratic approximation method DQAM of Mulvey and Ruszczynski [A diagonal quadratic approximation method for large scale linear programs, Oper. Res. Lett. 12 1992, pp. 205–215] and the parallel coordinate descent method PCDM of Richtarik and Takac [Parallel coordinate descent methods for big data optimization. Technical report, November 2012. arXiv:1212.0873]. We show that the two methods are equivalent for feasibility problems up to the selection of a step-size parameter. Furthermore, we prove an improved complexity bound for PCDM under strong convexity, and show that this bound is at least 8L′ Lω−12 times better than the best known bound for DQAM, where ω is the degree of partial separability and L’ and L are the maximum and average of the block Lipschitz constants of the gradient of the quadratic penalty appearing in the augmented Lagrangian.",
"In this paper we consider the problem of minimizing a convex function using a randomized block coordinate descent method. One of the key steps at each iteration of the algorithm is determining the update to a block of variables. Existing algorithms assume that in order to compute the update, a particular subproblem is solved exactly. In his work we relax this requirement, and allow for the subproblem to be solved inexactly, leading to an inexact block coordinate descent method. Our approach incorporates the best known results for exact updates as a special case. Moreover, these theoretical guarantees are complemented by practical considerations: the use of iterative techniques to determine the update as well as the use of preconditioning for further acceleration.",
"We propose and analyze a new parallel coordinate descent method—NSync—in which at each iteration a random subset of coordinates is updated, in parallel, allowing for the subsets to be chosen using an arbitrary probability law. This is the first method of this type. We derive convergence rates under a strong convexity assumption, and comment on how to assign probabilities to the sets to optimize the bound. The complexity and practical performance of the method can outperform its uniform variant by an order of magnitude. Surprisingly, the strategy of updating a single randomly selected coordinate per iteration—with optimal probabilities—may require less iterations, both in theory and practice, than the strategy of updating all coordinates at every iteration.",
"We study the performance of a family of randomized parallel coordinate descent methods for minimizing the sum of a nonsmooth and separable convex functions. The problem class includes as a special case L1-regularized L1 regression and the minimization of the exponential loss (\"AdaBoost problem\"). We assume the input data defining the loss function is contained in a sparse @math matrix @math with at most @math nonzeros in each row. Our methods need @math iterations to find an approximate solution with high probability, where @math is the number of processors and @math for the fastest variant. The notation hides dependence on quantities such as the required accuracy and confidence levels and the distance of the starting iterate from an optimal point. Since @math is a decreasing function of @math , the method needs fewer iterations when more processors are used. Certain variants of our algorithms perform on average only @math arithmetic operations during a single iteration per processor and, because @math decreases when @math does, fewer iterations are needed for sparser problems."
]
}
|
1406.0238
|
1594004154
|
A distributed randomized block coordinate descent method for minimizing a convex function of a huge number of variables is proposed. The complexity of the method is analyzed under the assumption that the smooth part of the objective function is partially block separable. The number of iterations required is bounded by a function of the error and the degree of separability, which extends the results in Richtárik and Takáč (Parallel Coordinate Descent Methods for Big Data Optimization, Mathematical Programming, DOI:10.1007/s10107-015-0901-6) to a distributed environment. Several approaches to the distribution and synchronization of the computation across a cluster of multi-core computers are described and promising computational results are provided.
|
Distributed block-coordinate descent. Distributed coordinate descent was first proposed by Bertsekas and Tsitsiklis @cite_13 . The literature on this topic was rather sparse, c.f. @cite_29 , until the research presented in this paper raised interest, which led to the analyses of Richtárik and Takáč @cite_26 and @cite_35 . These papers do not consider blocks, and specialise our results to convex functions admitting a quadratic upper bound. In the machine-learning community, distributed algorithms have been studied for particular problems, e.g., training of support vector machines @cite_0 . Google @cite_21 developed a library called PSVM, where parallel row-based incomplete Cholesky factorization is employed in an interior-point method. A MapReduce-based distributed algorithm for SVM was found to be effective in automatic image annotation @cite_6 . Nevertheless, none of these papers use coordinate descent.
|
{
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_29",
"@cite_21",
"@cite_6",
"@cite_0",
"@cite_13"
],
"mid": [
"2964252705",
"2949085567",
"2165966284",
"2099262739",
"1963611305",
"2540030413",
"1603765807"
],
"abstract": [
"We propose an efficient distributed randomized coordinate descent method for minimizing regularized non-strongly convex loss functions. The method attains the optimal O(1/k) convergence rate, where k is the iteration counter. The core of the work is the theoretical study of stepsize parameters. We have implemented the method on Archer—the largest supercomputer in the UK—and show that the method is capable of solving a (synthetic) LASSO optimization problem with 50 billion variables.",
"In this paper we develop and analyze Hydra: HYbriD cooRdinAte descent method for solving loss minimization problems with big data. We initially partition the coordinates (features) and assign each partition to a different node of a cluster. At every iteration, each node picks a random subset of the coordinates from those it owns, independently from the other computers, and in parallel computes and applies updates to the selected coordinates based on a simple closed-form formula. We give bounds on the number of iterations sufficient to approximately solve the problem with high probability, and show how it depends on the data and on the partitioning. We perform numerical experiments with a LASSO instance described by a 3TB matrix.",
"In many applications, data appear with a huge number of instances as well as features. Linear Support Vector Machines (SVM) is one of the most popular tools to deal with such large-scale sparse data. This paper presents a novel dual coordinate descent method for linear SVM with L1- and L2-loss functions. The proposed method is simple and reaches an ε-accurate solution in O(log(1/ε)) iterations. Experiments indicate that our method is much faster than state of the art solvers such as Pegasos, TRON, SVMperf, and a recent primal coordinate descent implementation.",
"Support Vector Machines (SVMs) suffer from a widely recognized scalability problem in both memory use and computational time. To improve scalability, we have developed a parallel SVM algorithm (PSVM), which reduces memory use through performing a row-based, approximate matrix factorization, and which loads only essential data to each machine to perform parallel computation. Let n denote the number of training instances, p the reduced matrix dimension after factorization (p is significantly smaller than n), and m the number of machines. PSVM reduces the memory requirement from O(n^2) to O(np/m), and improves computation time to O(np^2/m). Empirical study shows PSVM to be effective. PSVM Open Source is available for download at http://code.google.com/p/psvm/.",
"Machine learning techniques have facilitated image retrieval by automatically classifying and annotating images with keywords. Among them Support Vector Machines (SVMs) have been used extensively due to their generalization properties. However, SVM training is notably a computationally intensive process especially when the training dataset is large. This paper presents MRSMO, a MapReduce based distributed SVM algorithm for automatic image annotation. The performance of the MRSMO algorithm is evaluated in an experimental environment. By partitioning the training dataset into smaller subsets and optimizing the partitioned subsets across a cluster of computers, the MRSMO algorithm reduces the training time significantly while maintaining a high level of accuracy in both binary and multiclass classifications.",
"Support Vector Machine (SVM) is an efficient data mining approach for data classification. However, SVM algorithm requires very large memory requirement and computational time to deal with very large dataset. To reduce the computational time during the process of training the SVM, a combination of distributed and parallel computing method, CoDLib have been proposed. Instead of using a single machine for parallel computing, multiple machines in a cluster are used. Message Passing Interface (MPI) is used in the communication between machines in the cluster. The original dataset is split and distributed to the respective machines. Experiments results shows a great speed up on the training of the MNIST dataset where training time has been significantly reduced compared with standard LIBSVM without affecting the quality of the SVM.",
""
]
}
|
1406.0370
|
1839698358
|
Prototyping is an important part of research and development of tangible user interfaces (TUIs). On the way from the idea to a working prototype, new hardware prototypes usually have to be crafted repeatedly in numerous iterations. This brings us to think about virtual prototypes that exhibit the same functionality as a real TUI, but reduce the amount of time and resources that have to be spent. Building upon existing open-source software - the middleware Robot Operating System (ROS) and the 3D simulator Gazebo - we have created a toolkit that can be used for developing and testing fully functional implementations of a tangible user interface as a virtual device. The entire interaction between the TUI and other hardware and software components is controlled by the middleware, while the human interaction with the TUI can be explored using the 3D simulator and 3D input/output technologies. We argue that by simulating parts of the hardware-software co-design process, the overall development effort can be reduced.
|
The @cite_14 toolkit enables building tangible interfaces using computer vision, electronic tags, and bar codes. Owners of a Vicon Tracking system can make use of the rapid prototyping workbench @cite_17 . It allows designing functional interfaces on 3D physical objects of any shape. Other commonly used hardware toolkits are Arduino (http://www.arduino.cc, last accessed May 6, 2014), Phidgets (http://www.phidgets.com, last accessed May 6, 2014), or iStuff @cite_11 .
|
{
"cite_N": [
"@cite_14",
"@cite_11",
"@cite_17"
],
"mid": [
"2160288944",
"2161969471",
"2003941864"
],
"abstract": [
"Tangible user interfaces (TUIs) augment the physical world by integrating digital information with everyday physical objects. Currently, building these UIs requires \"getting down and dirty\" with input technologies such as computer vision. Consequently, only a small cadre of technology experts can currently build these UIs. Based on a literature review and structured interviews with nine TUI researchers, we created Papier-Mâche, a toolkit for building tangible interfaces using computer vision, electronic tags, and barcodes. Papier-Mache introduces a high-level event model for working with these technologies that facilitates technology portability. For example, an application can be prototyped with computer vision and deployed with RFID. We present an evaluation of our toolkit with six class projects and a user study with seven programmers, finding the input abstractions, technology portability, and monitoring window to be highly effective.",
"The iStuff toolkit of physical devices, and the flexible software infrastructure to support it, were designed to simplify the exploration of novel interaction techniques in the post-desktop era of multiple users, devices, systems and applications collaborating in an interactive environment. The toolkit leverages an existing interactive workspace in-frastructure, making it lightweight and platform independent. The supporting software framework includes a dynamically configurable intermediary to simplify the mapping of devices to applications. We describe the iStuff architecture and provide several examples of iStuff, organized into a design space of ubiquitous computing interaction components. The main contribution is a physical toolkit for distributed, heterogeneous environments with run-time retargetable device data flow. We conclude with some insights and experiences derived from using this toolkit and framework to prototype experimental interaction techniques for ubiquitous computing environments.",
"This paper introduces DisplayObjects, a rapid prototyping workbench that allows functional interfaces to be projected onto real 3D physical prototypes. DisplayObjects uses a Vicon motion capture system to track the location of physical models. 3D software renditions of the 3D physical model are then texture-mapped with interactive behavior and projected back onto the physical model to allow real- time interactions with the object. We discuss the implementation of the system, as well as a selection of one and two-handed interaction techniques for DisplayObjects. We conclude with a design case that comments on some of the early design experiences with the system."
]
}
|
1406.0609
|
1499135110
|
Nowadays, a computing cluster in a typical data center can easily consist of hundreds of thousands of commodity servers, making component machine failures the norm rather than the exception. A parallel processing job can be delayed substantially as long as one of its many tasks is being assigned to a failing machine. To tackle this so-called straggler problem, most parallel processing frameworks such as MapReduce have adopted various strategies under which the system may speculatively launch additional copies of the same task if its progress is abnormally slow or simply because extra idling resource is available. In this paper, we focus on the design of speculative execution schemes for a parallel processing cluster under different loading conditions. For the lightly loaded case, we analyze and propose two optimization-based schemes, namely, the Smart Cloning Algorithm (SCA) which is based on maximizing the job utility and the Straggler Detection Algorithm (SDA) which minimizes the overall resource consumption of a job. We also derive the workload threshold under which SCA or SDA should be used for speculative execution. Our simulation results show both SCA and SDA can reduce the job flowtime by nearly 60% compared to the speculative execution strategy of Microsoft Mantri. For the heavily loaded case, we propose the Enhanced Speculative Execution (ESE) algorithm which is an extension of the Microsoft Mantri scheme. We show that the ESE algorithm can beat the Mantri baseline scheme by 18% in terms of job flowtime while consuming the same amount of resource.
|
To accurately and promptly identify stragglers, @cite_11 proposes a Smart Speculative Execution strategy and @cite_18 presents an Enhanced Self-Adaptive MapReduce Scheduling Algorithm, respectively. The main ideas of @cite_11 include: i) use an exponentially weighted moving average to predict the process speed and compute the remaining time of a task, and ii) determine which task to back up based on the load of the cluster using a cost-benefit model. Recently, @cite_14 proposes to mitigate the straggler problem by cloning every small job, thereby avoiding the extra delay caused by the straggler monitoring and detection process. When the majority of the jobs in the system are small, the cloned copies only consume a small amount of additional resource.
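The EWMA-based remaining-time estimate and backup-task selection described above can be sketched as follows (the class and function names, the smoothing factor, and the median-based slowness threshold are illustrative assumptions, not details taken from @cite_11 ):

```python
class TaskTracker:
    """EWMA-smoothed progress tracker for one task (illustrative sketch)."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha    # EWMA smoothing factor (assumed value)
        self.rate = None      # smoothed progress rate, fraction per second
        self.progress = 0.0   # fraction of the task completed, in [0, 1]

    def update(self, new_progress, dt):
        inst = (new_progress - self.progress) / dt  # instantaneous rate
        self.rate = inst if self.rate is None else (
            self.alpha * inst + (1 - self.alpha) * self.rate)
        self.progress = new_progress

    def remaining_time(self):
        # predicted time to finish at the current smoothed rate
        if not self.rate or self.rate <= 0:
            return float("inf")
        return (1.0 - self.progress) / self.rate


def pick_backup_candidates(trackers, slow_factor=2.0):
    # Back up tasks whose estimated remaining time exceeds slow_factor
    # times the median remaining time (a simple stand-in for a full
    # cost-benefit model).
    times = sorted(t.remaining_time() for t in trackers)
    median = times[len(times) // 2]
    return [i for i, t in enumerate(trackers)
            if t.remaining_time() > slow_factor * median]
```

For example, three tasks polled after 10 seconds at 50%, 50%, and 10% progress yield remaining-time estimates of roughly 10, 10, and 90 seconds, so only the third task is flagged for a backup copy.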
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_11"
],
"mid": [
"2140424185",
"1903497807",
"2040722314"
],
"abstract": [
"MapReduce is a programming model and an associated implementation for processing and generating large data sets. Hadoop is an open-source implementation of Map Reduce, enjoying wide adoption, and is used not only for batch jobs but also for short jobs where low response time is critical. However, Hadoop's performance is currently limited by its default task scheduler, which implicitly assumes that cluster nodes are homogeneous and tasks make progress linearly, and uses these assumptions to decide when to speculatively re-execute tasks that appear to be stragglers. In practice, the homogeneity assumption does not always hold. Longest Approximate Time to End (LATE) is a Map Reduce scheduling algorithm that takes heterogeneous environments into consideration. It, however, adopts a static method to compute the progress of tasks. As a result neither Hadoop default nor LATE schedulers perform well in a heterogeneous environment. Self-adaptive Map Reduce Scheduling Algorithm (SAMR) uses historical information to adjust stage weights of map and reduce tasks when estimating task execution times. However, SAMR does not consider the fact that for different types of jobs their map and reduce stage weights may be different. Even for the same type of jobs, different datasets may lead to different weights. To this end, we propose ESAMR: an Enhanced Self-Adaptive Map Reduce scheduling algorithm to improve the speculative re-execution of slow tasks in Map Reduce. In ESAMR, in order to identify slow tasks accurately, we differentiate historical stage weights information on each node and divide them into k clusters using a k-means clustering algorithm and when executing a job's tasks on a node, ESAMR classifies the tasks into one of the clusters and uses the cluster's weights to estimate the execution time of the job's tasks on the node. 
Experimental results show that among the aforementioned algorithms, ESAMR leads to the smallest error in task execution time estimation and identifies slow tasks most accurately.",
"Small jobs, that are typically run for interactive data analyses in datacenters, continue to be plagued by disproportionately long-running tasks called stragglers. In the production clusters at Facebook and Microsoft Bing, even after applying state-of-the-art straggler mitigation techniques, these latency sensitive jobs have stragglers that are on average 8 times slower than the median task in that job. Such stragglers increase the average job duration by 47%. This is because current mitigation techniques all involve an element of waiting and speculation. We instead propose full cloning of small jobs, avoiding waiting and speculation altogether. Cloning of small jobs only marginally increases utilization because workloads show that while the majority of jobs are small, they only consume a small fraction of the resources. The main challenge of cloning is, however, that extra clones can cause contention for intermediate data. We use a technique, delay assignment, which efficiently avoids such contention. Evaluation of our system, Dolly, using production workloads shows that the small jobs speed up by 34% to 46% after state-of-the-art mitigation techniques have been applied, using just 5% extra resources for cloning.",
"MapReduce is a widely used parallel computing framework for large scale data processing. The two major performance metrics in MapReduce are job execution time and cluster throughput. They can be seriously impacted by straggler machines: machines on which tasks take an unusually long time to finish. Speculative execution is a common approach for dealing with the straggler problem by simply backing up those slow running tasks on alternative machines. Multiple speculative execution strategies have been proposed, but they have some pitfalls: (i) Use average progress rate to identify slow tasks while in reality the progress rate can be unstable and misleading, (ii) Cannot appropriately handle the situation when there exists data skew among the tasks, (iii) Do not consider whether backup tasks can finish earlier when choosing backup worker nodes. In this paper, we first present a detailed analysis of scenarios where existing strategies cannot work well. Then we develop a new strategy, maximum cost performance (MCP), which improves the effectiveness of speculative execution significantly. To accurately and promptly identify stragglers, we provide the following methods in MCP: (i) Use both the progress rate and the process bandwidth within a phase to select slow tasks, (ii) Use exponentially weighted moving average (EWMA) to predict process speed and calculate a task's remaining time, (iii) Determine which task to backup based on the load of a cluster using a cost-benefit model. To choose proper worker nodes for backup tasks, we take both data locality and data skew into consideration. We evaluate MCP in a cluster of 101 virtual machines running a variety of applications on 30 physical servers. Experiment results show that MCP can run jobs up to 39 percent faster and improve the cluster throughput by up to 44 percent compared to Hadoop-0.21."
]
}
|
1406.0085
|
1490305410
|
A wide range of multi-agent coordination problems including reference tracking and disturbance rejection requirements can be formulated as a cooperative output regulation problem. The general framework captures typical problems such as output synchronization, leader-follower synchronization, and many more. In the present paper, we propose a novel distributed regulator for groups of identical and non-identical linear agents. We consider global external signals affecting all agents and local external signals affecting only individual agents in the group. Both signal types may contain references and disturbances. Our main contribution is a novel coupling among the agents based on their transient state components or estimates thereof in the output feedback case. This coupling achieves transient synchronization in order to improve the cooperative behavior of the group in transient phases and guarantee a desired decay rate of the synchronization error. This leads to a cooperative reaction of the group on local disturbances acting on individual agents. The effectiveness of the proposed distributed regulator is illustrated by a vehicle platooning example and a coordination example for a group of four non-identical 3-DoF helicopter models.
|
Reference tracking and disturbance rejection problems can be formulated in the general framework of output regulation, which was developed in the 1970s, @cite_20 @cite_35 . The basic setup consists of a so-called exosystem, an autonomous system that generates all external signals (references as well as disturbances) acting on the plant, and a description of the plant. The signal generated by the exosystem is referred to as the exogenous signal. The tracking and regulation requirements are formulated in terms of a regulation error depending on the plant state and the external signals. The objective is to find a control law, also called a regulator, which ensures internal stability of the plant and asymptotic convergence to zero of the regulation error for all initial conditions. For the details of output regulation theory, the reader is referred to the books @cite_15 , @cite_32 , and @cite_24 .
|
{
"cite_N": [
"@cite_35",
"@cite_32",
"@cite_24",
"@cite_15",
"@cite_20"
],
"mid": [
"2057895073",
"2149811200",
"296003671",
"1518846109",
"2126185507"
],
"abstract": [
"Necessary structural criteria are obtained for linear multivariable regulators which retain loop stability and output regulation in the presence of small perturbations, of specified types, in system parameters. It is shown that structural stability thus defined requires feedback of the regulated variable, together with a suitably reduplicated model, internal to the feedback loop, of the dynamic structure of the exogenous reference and disturbance signals which the regulator is required to process. Necessity of these structural features constitutes the ‘internal model principle’.",
"Control Theory for Linear Systems deals with the mathematical theory of feedback control of linear systems. It treats a wide range of control synthesis problems for linear state space systems with inputs and outputs. The book provides a treatment of these problems using state space methods, often with a geometric flavour. Its subject matter ranges from controllability and observability, stabilization, disturbance decoupling, and tracking and regulation, to linear quadratic regulation, @math and @math control, and robust stabilization. Each chapter of the book contains a series of exercises, intended to increase the reader's understanding of the material. Often, these exercises generalize and extend the material treated in the regular text.",
"Preface 1. Linear output regulation 2. Introduction to nonlinear systems 3. Nonlinear output regulation 4. Approximation method for the nonlinear output regulation 5. Nonlinear robust output regulation 6. From output regulation to stabilization 7. Global robust output regulation 8. Output regulation for singular nonlinear systems 9. Output regulation for discrete-time nonlinear systems Notes and references Appendices Bibliography Index.",
"1 The problem of output regulation.- 1.1 Introduction.- 1.2 Problem statement.- 1.3 Output regulation via full information.- 1.4 Output regulation via error feedback.- 1.5 A particular case.- 1.6 Well-posedness and robustness.- 1.7 The construction of a robust regulator.- 2 Disturbance attenuation via H?-methods.- 2.1 Introduction.- 2.2 Problem statement.- 2.3 A characterization of the L2-gain of a linear system.- 2.4 Disturbance attenuation via full information.- 2.5 Disturbance attenuation via measured feedback.- 3 Full information regulators.- 3.1 Problem statement.- 3.2 Time-dependent control strategies.- 3.3 Examples.- 3.4 Time-independent control strategies.- 3.5 The local case.- 4 Nonlinear observers.- 4.1 Problem statement.- 4.2 Time-dependent observers.- 4.3 Error feedback regulators.- 4.4 Examples.- 5 Nonlinear H?-techniques.- 5.1 Introduction.- 5.2 Construction of the saddle-point.- 5.3 The local scenario.- 5.4 Disturbance attenuation via linearization.- A Matrix equations.- A.l Linear matrix equations.- A.2 Algebraic Riccati equations.- B Invariant manifolds.- B.l Existence theorem.- B.2 Outflowing manifolds.- B.3 Asymptotic phase.- B.5 A special case.- B.6 Dichotomies and Lyapunov functions.- C Hamilton-Jacobi-Bellman-Isaacs equation.- C.l Introduction.- C.2 Method of characteristics.- C.3 The equation of Isaacs.- C.4 The Hamiltonian version of Isaacs' equation.",
"For the multivariable control system described by [ x = Ax + Bu, y = Cx, z = Dx, ] necessary and sufficient conditions are found for the existence of state feedback @math such that @math and @math as @math . It is assumed that @math is A-invariant, or equivalently that a dynamic observer is utilized. A constructive version of the existence conditions is obtained under the further assumption that auxiliary integrating elements can be introduced by way of dynamic compensation, and a bound is given for the number of such elements required."
]
}
|
1406.0455
|
2141542880
|
The majority of recommender systems are designed to recommend items (such as movies and products) to users. We focus on the problem of recommending buyers to sellers which comes with new challenges: (1) constraints on the number of recommendations buyers are part of before they become overwhelmed, (2) constraints on the number of recommendations sellers receive within their budget, and (3) constraints on the set of buyers that sellers want to receive (e.g., no more than two people from the same household). We propose the following critical problems of recommending buyers to sellers: Constrained Recommendation (C-REC) capturing the first two challenges, and Conflict-Aware Constrained Recommendation (CAC-REC) capturing all three challenges at the same time. We show that C-REC can be modeled using linear programming and can be efficiently solved using modern solvers. On the other hand, we show that CAC-REC is NP-hard. We propose two approximate algorithms to solve CAC-REC and show that they achieve close to optimal solutions via comprehensive experiments using real-world datasets.
|
RBS is related to the task of constraint-based recommendation (cf. @cite_7 @cite_10 @cite_8 @cite_16 @cite_21 @cite_4 @cite_2 ).
|
{
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_21",
"@cite_2",
"@cite_16",
"@cite_10"
],
"mid": [
"2010127467",
"2032717442",
"2034464625",
"568287058",
"2005653178",
"1963547452",
"2168975651"
],
"abstract": [
"Classical recommender systems provide users with a list of recommendations where each recommendation consists of a single item, e.g., a book or DVD. However, several applications can benefit from a system capable of recommending packages of items, in the form of sets. Sample applications include travel planning with a limited budget (price or time) and twitter users wanting to select worthwhile tweeters to follow given that they can deal with only a bounded number of tweets. In these contexts, there is a need for a system that can recommend top-k packages for the user to choose from. Motivated by these applications, we consider composite recommendations, where each recommendation comprises a set of items. Each item has both a value (rating) and a cost associated with it, and the user specifies a maximum total cost (budget) for any recommended set of items. Our composite recommender system has access to one or more component recommender system, focusing on different domains, as well as to information sources which can provide the cost associated with each item. Because the problem of generating the top recommendation (package) is NP-complete, we devise several approximation algorithms for generating top-k packages as recommendations. We analyze their efficiency as well as approximation quality. Finally, using two real and two synthetic data sets, we subject our algorithms to thorough experimentation and empirical analysis. Our findings attest to the efficiency and quality of our approximation algorithms for top-k packages compared to exact algorithms.",
"Recommender systems support users in identifying products and services in e-commerce and other information-rich environments. Recommendation problems have a long history as a successful AI application area, with substantial interest beginning in the mid-1990s, and increasing with the subsequent rise of e-commerce. Recommender systems research long focused on recommending only simple products such as movies or books; constraint-based recommendation now receives increasing attention due to the capability of recommending complex products and services. In this paper, we first introduce a taxonomy of recommendation knowledge sources and algorithmic approaches. We then go on to discuss the most prevalent techniques of constraint-based recommendation and outline open research issues.",
"Automatic review assignment can significantly improve the productivity of many people such as conference organizers, journal editors and grant administrators. Most previous works have set the problem up as using a paper as a query to independently \"retrieve\" a set of reviewers that should review the paper. A more appropriate formulation of the problem would be to simultaneously optimize the assignments of all the papers to an entire committee of reviewers under constraints such as the review quota. In this paper, we solve the problem of committee review assignment with multi-aspect expertise matching by casting it as an integer linear programming problem. The proposed algorithm can naturally accommodate any probabilistic or deterministic method for modeling multiple aspects to automate committee review assignments. Evaluation using an existing data set shows that the proposed algorithm is effective for committee review assignments based on multi-aspect expertise matching.",
"",
"We study the problem of making recommendations when the objects to be recommended must also satisfy constraints or requirements. In particular, we focus on course recommendations: the courses taken by a student must satisfy requirements (e.g., take two out of a set of five math courses) in order for the student to graduate. Our work is done in the context of the CourseRank system, used by students to plan their academic program at Stanford University. Our goal is to recommend to these students courses that not only help satisfy constraints, but that are also desirable (e.g., popular or taken by similar students). We develop increasingly expressive models for course requirements, and present a variety of schemes for both checking if the requirements are satisfied, and for making recommendations that take into account the requirements. We show that some types of requirements are inherently expensive to check, and we present exact, as well as heuristic techniques, for those cases. Although our work is specific to course requirements, it provides insights into the design of recommendation systems in the presence of complex constraints found in other applications.",
"Introduction and Preliminaries. Problems, Algorithms, and Complexity. LINEAR ALGEBRA. Linear Algebra and Complexity. LATTICES AND LINEAR DIOPHANTINE EQUATIONS. Theory of Lattices and Linear Diophantine Equations. Algorithms for Linear Diophantine Equations. Diophantine Approximation and Basis Reduction. POLYHEDRA, LINEAR INEQUALITIES, AND LINEAR PROGRAMMING. Fundamental Concepts and Results on Polyhedra, Linear Inequalities, and Linear Programming. The Structure of Polyhedra. Polarity, and Blocking and Anti--Blocking Polyhedra. Sizes and the Theoretical Complexity of Linear Inequalities and Linear Programming. The Simplex Method. Primal--Dual, Elimination, and Relaxation Methods. Khachiyana s Method for Linear Programming. The Ellipsoid Method for Polyhedra More Generally. Further Polynomiality Results in Linear Programming. INTEGER LINEAR PROGRAMMING. Introduction to Integer Linear Programming. Estimates in Integer Linear Programming. The Complexity of Integer Linear Programming. Totally Unimodular Matrices: Fundamental Properties and Examples. Recognizing Total Unimodularity. Further Theory Related to Total Unimodularity. Integral Polyhedra and Total Dual Integrality. Cutting Planes. Further Methods in Integer Linear Programming. References. Indexes.",
"Recommender Systems (RS) suggest useful and interesting items to users in order to increase user satisfaction and online conversion rates. They typically exploit explicit or implicit user feedback such as ratings, buying records or clickstream data and apply statistical methods to derive recommendations. This paper focuses on explicitly formulated customer requirements as the sole type of user feedback. Its contribution lies in comparing different techniques such as knowledge- and utility-based methods, collaborative filtering, association rule mining as well as hybrid variants when user models consist solely of explicit customer requirements. We examine how this type of user feedback can be exploited for personalization in e-commerce scenarios. Furthermore, examples of actual online shops are developed where such contextual user information is available, demonstrating how more efficient RS configurations can be implemented. Results indicate that, especially for new users, explicit customer requirements are a useful source of feedback for personalization and hybrid configurations of collaborative and knowledge-based techniques achieve best results."
]
}
|
1406.0455
|
2141542880
|
The majority of recommender systems are designed to recommend items (such as movies and products) to users. We focus on the problem of recommending buyers to sellers which comes with new challenges: (1) constraints on the number of recommendations buyers are part of before they become overwhelmed, (2) constraints on the number of recommendations sellers receive within their budget, and (3) constraints on the set of buyers that sellers want to receive (e.g., no more than two people from the same household). We propose the following critical problems of recommending buyers to sellers: Constrained Recommendation (C-REC) capturing the first two challenges, and Conflict-Aware Constrained Recommendation (CAC-REC) capturing all three challenges at the same time. We show that C-REC can be modeled using linear programming and can be efficiently solved using modern solvers. On the other hand, we show that CAC-REC is NP-hard. We propose two approximate algorithms to solve CAC-REC and show that they achieve close to optimal solutions via comprehensive experiments using real-world datasets.
|
Karimzadehgan et al. in @cite_0 @cite_8 @cite_16 study the problem of optimizing the review assignments of scientific papers. Similarly to C-REC, they employ constraints on the quota of papers each reviewer is assigned. Differently from C-REC, however, in their optimization setup, the matching of reviewers with a paper is based on matching multiple aspects of expertise. Taylor in @cite_21 also considers the paper review assignment problem; the difference from C-REC is that it does not consider an ordering on the reviewers and papers.
|
{
"cite_N": [
"@cite_0",
"@cite_16",
"@cite_21",
"@cite_8"
],
"mid": [
"2126793736",
"1963547452",
"568287058",
"2034464625"
],
"abstract": [
"Review assignment is a common task that many people such as conference organizers, journal editors, and grant administrators would have to do routinely. As a computational problem, it involves matching a set of candidate reviewers with a paper or proposal to be reviewed. A common deficiency of all existing work on solving this problem is that they do not consider the multiple aspects of topics or expertise and all match the entire document to be reviewed with the overall expertise of a reviewer. As a result, if a document contains multiple subtopics, which often happens, existing methods would not attempt to assign reviewers to cover all the subtopics; instead, it is quite possible that all the assigned reviewers would cover the major subtopic quite well, but not covering any other subtopic. In this paper, we study how to model multiple aspects of expertise and assign reviewers so that they together can cover all subtopics in the document well. We propose three general strategies for solving this problem and propose new evaluation measures for this task. We also create a multi-aspect review assignment test set using ACM SIGIR publications. Experiment results on this data set show that the proposed methods are effective for assigning reviewers to cover all topical aspects of a document.",
"Introduction and Preliminaries. Problems, Algorithms, and Complexity. LINEAR ALGEBRA. Linear Algebra and Complexity. LATTICES AND LINEAR DIOPHANTINE EQUATIONS. Theory of Lattices and Linear Diophantine Equations. Algorithms for Linear Diophantine Equations. Diophantine Approximation and Basis Reduction. POLYHEDRA, LINEAR INEQUALITIES, AND LINEAR PROGRAMMING. Fundamental Concepts and Results on Polyhedra, Linear Inequalities, and Linear Programming. The Structure of Polyhedra. Polarity, and Blocking and Anti--Blocking Polyhedra. Sizes and the Theoretical Complexity of Linear Inequalities and Linear Programming. The Simplex Method. Primal--Dual, Elimination, and Relaxation Methods. Khachiyana s Method for Linear Programming. The Ellipsoid Method for Polyhedra More Generally. Further Polynomiality Results in Linear Programming. INTEGER LINEAR PROGRAMMING. Introduction to Integer Linear Programming. Estimates in Integer Linear Programming. The Complexity of Integer Linear Programming. Totally Unimodular Matrices: Fundamental Properties and Examples. Recognizing Total Unimodularity. Further Theory Related to Total Unimodularity. Integral Polyhedra and Total Dual Integrality. Cutting Planes. Further Methods in Integer Linear Programming. References. Indexes.",
"",
"Automatic review assignment can significantly improve the productivity of many people such as conference organizers, journal editors and grant administrators. Most previous works have set the problem up as using a paper as a query to independently \"retrieve\" a set of reviewers that should review the paper. A more appropriate formulation of the problem would be to simultaneously optimize the assignments of all the papers to an entire committee of reviewers under constraints such as the review quota. In this paper, we solve the problem of committee review assignment with multi-aspect expertise matching by casting it as an integer linear programming problem. The proposed algorithm can naturally accommodate any probabilistic or deterministic method for modeling multiple aspects to automate committee review assignments. Evaluation using an existing data set shows that the proposed algorithm is effective for committee review assignments based on multi-aspect expertise matching."
]
}
|
1406.0455
|
2141542880
|
The majority of recommender systems are designed to recommend items (such as movies and products) to users. We focus on the problem of recommending buyers to sellers which comes with new challenges: (1) constraints on the number of recommendations buyers are part of before they become overwhelmed, (2) constraints on the number of recommendations sellers receive within their budget, and (3) constraints on the set of buyers that sellers want to receive (e.g., no more than two people from the same household). We propose the following critical problems of recommending buyers to sellers: Constrained Recommendation (C-REC) capturing the first two challenges, and Conflict-Aware Constrained Recommendation (CAC-REC) capturing all three challenges at the same time. We show that C-REC can be modeled using linear programming and can be efficiently solved using modern solvers. On the other hand, we show that CAC-REC is NP-hard. We propose two approximate algorithms to solve CAC-REC and show that they achieve close to optimal solutions via comprehensive experiments using real-world datasets.
|
Xie, Lakshmanan, and Wood in @cite_4 study the problem of composite recommendations, where each recommendation comprises a set of items. They also consider constraints including the number of items that can be recommended to a user. Their objective, however, is to minimize the cost of a recommended set of items when each item has a price to be paid.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2010127467"
],
"abstract": [
"Classical recommender systems provide users with a list of recommendations where each recommendation consists of a single item, e.g., a book or DVD. However, several applications can benefit from a system capable of recommending packages of items, in the form of sets. Sample applications include travel planning with a limited budget (price or time) and twitter users wanting to select worthwhile tweeters to follow given that they can deal with only a bounded number of tweets. In these contexts, there is a need for a system that can recommend top-k packages for the user to choose from. Motivated by these applications, we consider composite recommendations, where each recommendation comprises a set of items. Each item has both a value (rating) and a cost associated with it, and the user specifies a maximum total cost (budget) for any recommended set of items. Our composite recommender system has access to one or more component recommender system, focusing on different domains, as well as to information sources which can provide the cost associated with each item. Because the problem of generating the top recommendation (package) is NP-complete, we devise several approximation algorithms for generating top-k packages as recommendations. We analyze their efficiency as well as approximation quality. Finally, using two real and two synthetic data sets, we subject our algorithms to thorough experimentation and empirical analysis. Our findings attest to the efficiency and quality of our approximation algorithms for top-k packages compared to exact algorithms."
]
}
|
1406.0455
|
2141542880
|
The majority of recommender systems are designed to recommend items (such as movies and products) to users. We focus on the problem of recommending buyers to sellers which comes with new challenges: (1) constraints on the number of recommendations buyers are part of before they become overwhelmed, (2) constraints on the number of recommendations sellers receive within their budget, and (3) constraints on the set of buyers that sellers want to receive (e.g., no more than two people from the same household). We propose the following critical problems of recommending buyers to sellers: Constrained Recommendation (C-REC) capturing the first two challenges, and Conflict-Aware Constrained Recommendation (CAC-REC) capturing all three challenges at the same time. We show that C-REC can be modeled using linear programming and can be efficiently solved using modern solvers. On the other hand, we show that CAC-REC is NP-hard. We propose two approximate algorithms to solve CAC-REC and show that they achieve close to optimal solutions via comprehensive experiments using real-world datasets.
|
Parameswaran, Venetis, and Garcia-Molina in @cite_2 study the problem of course recommendations with course requirement constraints. Similarly to @cite_4 , the goal of @cite_2 is to come up with set recommendations. However, the challenge they address is the modeling of complex academic requirements (e.g., take @math out of a set of @math math courses to meet the degree requirement). Such constraints are different from those that we consider.
|
{
"cite_N": [
"@cite_4",
"@cite_2"
],
"mid": [
"2010127467",
"2005653178"
],
"abstract": [
"Classical recommender systems provide users with a list of recommendations where each recommendation consists of a single item, e.g., a book or DVD. However, several applications can benefit from a system capable of recommending packages of items, in the form of sets. Sample applications include travel planning with a limited budget (price or time) and twitter users wanting to select worthwhile tweeters to follow given that they can deal with only a bounded number of tweets. In these contexts, there is a need for a system that can recommend top-k packages for the user to choose from. Motivated by these applications, we consider composite recommendations, where each recommendation comprises a set of items. Each item has both a value (rating) and a cost associated with it, and the user specifies a maximum total cost (budget) for any recommended set of items. Our composite recommender system has access to one or more component recommender system, focusing on different domains, as well as to information sources which can provide the cost associated with each item. Because the problem of generating the top recommendation (package) is NP-complete, we devise several approximation algorithms for generating top-k packages as recommendations. We analyze their efficiency as well as approximation quality. Finally, using two real and two synthetic data sets, we subject our algorithms to thorough experimentation and empirical analysis. Our findings attest to the efficiency and quality of our approximation algorithms for top-k packages compared to exact algorithms.",
"We study the problem of making recommendations when the objects to be recommended must also satisfy constraints or requirements. In particular, we focus on course recommendations: the courses taken by a student must satisfy requirements (e.g., take two out of a set of five math courses) in order for the student to graduate. Our work is done in the context of the CourseRank system, used by students to plan their academic program at Stanford University. Our goal is to recommend to these students courses that not only help satisfy constraints, but that are also desirable (e.g., popular or taken by similar students). We develop increasingly expressive models for course requirements, and present a variety of schemes for both checking if the requirements are satisfied, and for making recommendations that take into account the requirements. We show that some types of requirements are inherently expensive to check, and we present exact, as well as heuristic techniques, for those cases. Although our work is specific to course requirements, it provides insights into the design of recommendation systems in the presence of complex constraints found in other applications."
]
}
|
1406.0574
|
1747743061
|
As human computation on crowdsourcing systems has become popular and powerful for performing tasks, malicious users have started misusing these systems by posting malicious tasks, propagating manipulated contents, and targeting popular web services such as online social networks and search engines. Recently, these malicious users moved to Fiverr, a fast-growing micro-task marketplace, where workers can post crowdturfing tasks (i.e., astroturfing campaigns run by crowd workers) and malicious customers can purchase those tasks for only $5. In this paper, we present a comprehensive analysis of Fiverr. First, we identify the most popular types of crowdturfing tasks found in this marketplace and conduct case studies for these crowdturfing tasks. Then, we build crowdturfing task detection classifiers to filter these tasks and prevent them from becoming active in the marketplace. Our experimental results show that the proposed classification approach effectively detects crowdturfing tasks, achieving 97.35% accuracy. Finally, we analyze the real world impact of crowdturfing tasks by purchasing active Fiverr tasks and quantifying their impact on a target site. As part of this analysis, we show that current security systems inadequately detect crowdsourced manipulation, which confirms the necessity of our proposed crowdturfing task detection approach.
|
@cite_8 analyzed user demographics on Amazon Mechanical Turk and found that the number of non-US workers has increased, led especially by Indian workers who were mostly young, well-educated males. Heymann and Garcia-Molina @cite_15 proposed a novel analytics tool for crowdsourcing systems that gathers logging events such as workers' locations and the browser types they used.
|
{
"cite_N": [
"@cite_15",
"@cite_8"
],
"mid": [
"2122436085",
"2114269021"
],
"abstract": [
"We present \"Turkalytics,\" a novel analytics tool for human computation systems. Turkalytics processes and reports logging events from workers in real-time and has been shown to scale to over one hundred thousand logging events per day. We present a state model for worker interaction that covers the Mechanical Turk (the SCRAP model) and a data model that demonstrates the diversity of data collected by Turkalytics. We show that Turkalytics is effective at data collection, in spite of it being unobtrusive. Lastly, we describe worker locations, browser environments, activity information, and other examples of data collected by our tool.",
"Amazon Mechanical Turk (MTurk) is a crowdsourcing system in which tasks are distributed to a population of thousands of anonymous workers for completion. This system is increasingly popular with researchers and developers. Here we extend previous studies of the demographics and usage behaviors of MTurk workers. We describe how the worker population has changed over time, shifting from a primarily moderate-income, U.S.-based workforce towards an increasingly international group with a significant population of young, well-educated Indian workers. This change in population points to how workers may treat Turking as a full-time job, which they rely on to make ends meet."
]
}
|
1406.0574
|
1747743061
|
As human computation on crowdsourcing systems has become popular and powerful for performing tasks, malicious users have started misusing these systems by posting malicious tasks, propagating manipulated contents, and targeting popular web services such as online social networks and search engines. Recently, these malicious users moved to Fiverr, a fast-growing micro-task marketplace, where workers can post crowdturfing tasks (i.e., astroturfing campaigns run by crowd workers) and malicious customers can purchase those tasks for only $5. In this paper, we present a comprehensive analysis of Fiverr. First, we identify the most popular types of crowdturfing tasks found in this marketplace and conduct case studies for these crowdturfing tasks. Then, we build crowdturfing task detection classifiers to filter these tasks and prevent them from becoming active in the marketplace. Our experimental results show that the proposed classification approach effectively detects crowdturfing tasks, achieving 97.35% accuracy. Finally, we analyze the real world impact of crowdturfing tasks by purchasing active Fiverr tasks and quantifying their impact on a target site. As part of this analysis, we show that current security systems inadequately detect crowdsourced manipulation, which confirms the necessity of our proposed crowdturfing task detection approach.
|
Researchers began studying crowdturfing problems and markets. @cite_16 analyzed abusive tasks on Freelancer. @cite_7 analyzed two Chinese crowdsourcing sites and estimated that 90% of the tasks posted there were crowdturfing tasks. Compared with the previous research work, we collected a large number of active tasks on Fiverr and analyzed the crowdturfing tasks among them. We then developed crowdturfing task detection classifiers for the first time and effectively detected crowdturfing tasks. We also measured the impact of these crowdturfing tasks on Twitter. This research complements the existing research work.
|
{
"cite_N": [
"@cite_16",
"@cite_7"
],
"mid": [
"1916595307",
"2011366667"
],
"abstract": [
"Modern Web services inevitably engender abuse, as attackers find ways to exploit a service and its user base. However, while defending against such abuse is generally considered a technical endeavor, we argue that there is an increasing role played by human labor markets. Using over seven years of data from the popular crowd-sourcing site Freelancer.com, as well data from our own active job solicitations, we characterize the labor market involved in service abuse. We identify the largest classes of abuse work, including account creation, social networking link generation and search engine optimization support, and characterize how pricing and demand have evolved in supporting this activity.",
"The users of microblogging services, such as Twitter, use the count of followers of an account as a measure of its reputation or influence. For those unwilling or unable to attract followers naturally, a growing industry of \"Twitter follower markets\" provides followers for sale. Some markets use fake accounts to boost the follower count of their customers, while others rely on a pyramid scheme to turn non-paying customers into followers for each other, and into followers for paying customers. In this paper, we present a detailed study of Twitter follower markets, report in detail on both the static and dynamic properties of customers of these markets, and develop and evaluate multiple techniques for detecting these activities. We show that our detection system is robust and reliable, and can detect a significant number of customers in the wild."
]
}
|
1406.0292
|
332274304
|
The Isabelle proof assistant comes equipped with a very powerful tactic for term simplification. While tremendously useful, the results of simplifying a term do not always match the user’s expectation: sometimes, the resulting term is not in the form the user expected, or the simplifier fails to apply a rule. We describe a new, interactive tracing facility which offers insight into the hierarchical structure of the simplification with user-defined filtering, memoization and search. The new simplifier trace is integrated into the Isabelle jEdit Prover IDE.
|
Prolog is a logic programming language @cite_8 @cite_9 . A program consists of a set of clauses, namely rules and facts. Rules are (usually) Horn clauses, with a head and a body. Facts are merely rules with an empty body. Heads may contain variables, and bodies may contain variables not occurring in the head. Variable names must begin with an upper-case letter or an underscore, whereas atom names must begin with a lower-case letter.
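The naming convention just described can be checked mechanically. The following is a small illustrative Python sketch, not part of any Prolog implementation:

```python
def classify_token(name: str) -> str:
    """Classify a Prolog token by its leading character:
    variables start with an upper-case letter or an underscore,
    atoms start with a lower-case letter."""
    first = name[0]
    if first == "_" or first.isupper():
        return "variable"
    if first.islower():
        return "atom"
    return "other"

# In the clause parent(X, tom), X is a variable and tom is an atom.
print(classify_token("X"))     # variable
print(classify_token("tom"))   # atom
print(classify_token("_Acc"))  # variable
```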
|
{
"cite_N": [
"@cite_9",
"@cite_8"
],
"mid": [
"1488560674",
"205234667"
],
"abstract": [
"Part 1 Logic programs: basic constructs database programming recursive programming the computation model of logic programs theory of logic programs. Part 2 The Prolog language: pure Prolog programming in pure Prolog arithmetic structure inspection meta-logical predicates cuts and negation extra-logical predicates program development. Part 3 Advanced Prolog programming techniques: nondeterministic programming incomplete data structures second-order programming interpreters program transformation logic grammars search techniques. Part 4 Applications: game-playing programs a credit evaluation expert system an equation solver a compiler. Appendix: operators.",
"PROLOG, logic programming based languages, became very popular in the eighties, taking a circuitous route to the United States; from Europe to Japan to mainstream American computer science."
]
}
|
1406.0292
|
332274304
|
The Isabelle proof assistant comes equipped with a very powerful tactic for term simplification. While tremendously useful, the results of simplifying a term do not always match the user’s expectation: sometimes, the resulting term is not in the form the user expected, or the simplifier fails to apply a rule. We describe a new, interactive tracing facility which offers insight into the hierarchical structure of the simplification with user-defined filtering, memoization and search. The new simplifier trace is integrated into the Isabelle jEdit Prover IDE.
|
The Prolog implementation provides a tracing facility for queries [§§ 2.9, 4.38] fruehwirth2012swi . An example of the tracing output can be seen in Fig. (the term creep denotes continuing the normal process). A discussion of tracing in Prolog can be found in [§ 8] clocksin2003programming , and further analyses in @cite_10 . SWI uses a slightly extended variant thereof.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2036292005"
],
"abstract": [
"Abstract Programming environments are essential for the acceptance of programming languages. This survey emphasizes that program analysis, both static and dynamic, is the central issue of programming environments. Because their clean semantics makes powerful analysis possible, logic programming languages have an indisputable asset in the long term. This survey is focused on logic program analysis and debugging. The large number of references provided show that the field, although maybe scattered, is active. A unifying framework is given which separates environment tools into extraction, analysis, and visualization. It facilitates the analysis of existing tools and should give some guidelines to develop new ones. Achievements in logic programming are listed; some techniques developed for other languages are pointed out, and some trends for further research are drawn. Among the main achievements are algorithmic debugging, tracing for sequential Prolog, and abstract interpretation. The main missing techniques are slicing, test case generation, and program mutation. The perspectives we see are integration, evaluation, and above all, automated static and dynamic analysis."
]
}
|
1406.0292
|
332274304
|
The Isabelle proof assistant comes equipped with a very powerful tactic for term simplification. While tremendously useful, the results of simplifying a term do not always match the user’s expectation: sometimes, the resulting term is not in the form the user expected, or the simplifier fails to apply a rule. We describe a new, interactive tracing facility which offers insight into the hierarchical structure of the simplification with user-defined filtering, memoization and search. The new simplifier trace is integrated into the Isabelle jEdit Prover IDE.
|
Maude has been an active research target for refining the trace even further, providing insight into when and how a particular term emerged during reduction (e.g. Alpuente et al. @cite_6 ). Term provenance could certainly be an interesting extension for our work on the simplifier trace, but would require significantly more instrumentation of the simplifier.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"95377021"
],
"abstract": [
"We present IJulienne, a trace analyzer for conditional rewriting logic theories that can be used to compute abstract views of Maude executions that help users understand and debug programs. Given a Maude execution trace and a slicing criterion which consists of a set of target symbols occurring in a selected state of the trace, IJulienne is able to track back reverse dependences and causality along the trace in order to incrementally generate highly reduced program and trace slices that reconstruct all and only those pieces of information that are needed to deliver the symbols of interest. IJulienne is also endowed with a trace querying mechanism that increases flexibility and reduction power and allows program runs to be examined at the appropriate level of abstraction."
]
}
|
1406.0403
|
2949277052
|
Deeply embedded systems often have the tightest constraints on energy consumption, requiring that they consume tiny amounts of current and run on batteries for years. However, they typically execute code directly from flash, instead of the more energy efficient RAM. We implement a novel compiler optimization that exploits the relative efficiency of RAM by statically moving carefully selected basic blocks from flash to RAM. Our technique uses integer linear programming, with an energy cost model to select a good set of basic blocks to place into RAM, without impacting stack or data storage. We evaluate our optimization on a common ARM microcontroller and succeed in reducing the average power consumption by up to 41% and reducing energy consumption by up to 22%, while increasing execution time. A case study is presented, where an application executes code then sleeps for a period of time. For this example we show that our optimization could allow the application to run on battery for up to 32% longer. We also show that for this scenario the total application energy can be reduced, even if the optimization increases the execution time of the code.
|
The problem of moving parts of code and data from one memory to a faster memory has been studied extensively in the context of scratchpad memory. Most studies focus on static assignment of code and data to the scratchpad memory with the aim of decreasing program execution time or energy consumption. @cite_19 compare scratchpad memories and caches, finding that a scratchpad memory can save up to 43% in energy consumption compared to a cache of the same size.
|
{
"cite_N": [
"@cite_19"
],
"mid": [
"2105778948"
],
"abstract": [
"The number of embedded systems is increasing and a remarkable percentage is designed as mobile applications. For the latter, energy consumption is a limiting factor because of today's battery capacities. Besides the processor, memory accesses consume a high amount of energy. The use of additional less power hungry memories like caches or scratchpads is thus common. Caches incorporate the hardware control logic for moving data in and out automatically. On the other hand, this logic requires chip area and energy. A scratchpad memory is much more energy efficient, but there is a need for software control of its content. In this paper, an algorithm integrated into a compiler is presented which analyses the application and selects program and data parts which are placed into the scratchpad. Comparisons against a cache solution show remarkable advantages between 12% and 43% in energy consumption for designs of the same memory size."
]
}
|
1406.0403
|
2949277052
|
Deeply embedded systems often have the tightest constraints on energy consumption, requiring that they consume tiny amounts of current and run on batteries for years. However, they typically execute code directly from flash, instead of the more energy efficient RAM. We implement a novel compiler optimization that exploits the relative efficiency of RAM by statically moving carefully selected basic blocks from flash to RAM. Our technique uses integer linear programming, with an energy cost model to select a good set of basic blocks to place into RAM, without impacting stack or data storage. We evaluate our optimization on a common ARM microcontroller and succeed in reducing the average power consumption by up to 41% and reducing energy consumption by up to 22%, while increasing execution time. A case study is presented, where an application executes code then sleeps for a period of time. For this example we show that our optimization could allow the application to run on battery for up to 32% longer. We also show that for this scenario the total application energy can be reduced, even if the optimization increases the execution time of the code.
|
Other work on scratchpad memory has attempted to dynamically move objects into memory as they are needed @cite_17 . This study identified which parts of the code should remain in scratchpad memory and which parts should be brought in dynamically at specific locations through the program. Another study @cite_3 applies techniques developed for global register allocation to scratchpad memories, reducing the energy consumption by up to 34%. A different approach is taken by @cite_10 , where Presburger formulae are used to minimize the number of transfers between main memory and the scratchpad memory. This technique manages to reduce the number of off-chip references and memory energy consumption.
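The static allocation problem that these works solve with ILP can be illustrated by a simplified 0/1 knapsack: choose blocks that maximize the estimated energy saving subject to the scratchpad capacity. The sketch below is a toy dynamic-programming version with invented block sizes and savings, not the cost models of the cited papers.

```python
def select_blocks(blocks, capacity):
    """0/1 knapsack: blocks are (name, size_bytes, energy_saving) tuples.
    Pick a subset whose total size fits the scratchpad capacity and whose
    total energy saving is maximal. Returns (saving, chosen_names)."""
    best = {0: (0, ())}  # used bytes -> (best saving, chosen block names)
    for name, size, saving in blocks:
        for used, (val, chosen) in list(best.items()):  # snapshot: 0/1 choice
            new_used = used + size
            if new_used <= capacity:
                cand = (val + saving, chosen + (name,))
                if cand[0] > best.get(new_used, (-1, ()))[0]:
                    best[new_used] = cand
    return max(best.values())

# Hypothetical basic blocks: (name, size in bytes, estimated saving).
blocks = [("loop_a", 120, 9), ("loop_b", 200, 14), ("init", 80, 2)]
print(select_blocks(blocks, 300))  # (16, ('loop_b', 'init'))
```

A real allocator would also account for copy costs and, as in the head paper, avoid displacing stack or data storage.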
|
{
"cite_N": [
"@cite_10",
"@cite_3",
"@cite_17"
],
"mid": [
"2155825157",
"2154632001",
""
],
"abstract": [
"Effective utilization of on-chip storage space is important from both performance (execution cycles) and memory system energy consumptions perspectives. While on-chip cache memories have been widely used in the past, several factors, including lack of data access time predictability and limited effectiveness of compiler optimizations, indicate that they may not be the best candidate for portable embedded devices. This paper presents a compiler-directed on-chip scratch-pad memory (software-managed on-chip memory) management strategy for data accesses. Our strategy is oriented towards minimizing the number of data transfers between off-chip memory and the scratch-pad memory, thereby exploiting reuse for the data residing in the scratch-pad memory. We report experimental data from our implementation showing the usefulness of our technique.",
"The memory subsystem accounts for a significant portion of the aggregate energy budget of contemporary embedded systems. Moreover, there exists a large potential for optimizing the energy consumption of the memory subsystem. Consequently, novel memories as well as novel algorithms for their efficient utilization are being designed. Scratchpads are known to perform better than caches in terms of power, performance, area and predictability. However, unlike caches they depend upon software allocation techniques for their utilization. In this paper, we present an allocation technique which analyzes the application and inserts instructions to dynamically copy both code segments and variables onto the scratchpad at runtime. We demonstrate that the problem of dynamically overlaying scratchpad is an extension of the Global Register Allocation problem. The overlay problem is solved optimally using ILP formulation techniques. Our approach improves upon the only previously known allocation technique for statically allocating both variables and code segments onto the scratchpad. Experiments report an average reduction of 34% and 18% in the energy consumption and the runtime of the applications, respectively. A minimal increase in code size is also reported.",
""
]
}
|
1406.0403
|
2949277052
|
Deeply embedded systems often have the tightest constraints on energy consumption, requiring that they consume tiny amounts of current and run on batteries for years. However, they typically execute code directly from flash, instead of the more energy efficient RAM. We implement a novel compiler optimization that exploits the relative efficiency of RAM by statically moving carefully selected basic blocks from flash to RAM. Our technique uses integer linear programming, with an energy cost model to select a good set of basic blocks to place into RAM, without impacting stack or data storage. We evaluate our optimization on a common ARM microcontroller and succeed in reducing the average power consumption by up to 41% and reducing energy consumption by up to 22%, while increasing execution time. A case study is presented, where an application executes code then sleeps for a period of time. For this example we show that our optimization could allow the application to run on battery for up to 32% longer. We also show that for this scenario the total application energy can be reduced, even if the optimization increases the execution time of the code.
|
Sharing a scratchpad memory between multiple tasks has been tackled in @cite_23 , by attempting to optimally pack the different tasks' regions of code and data into the scratchpad memory.
|
{
"cite_N": [
"@cite_23"
],
"mid": [
"2001234477"
],
"abstract": [
"This paper presents a new technique for reducing the energy consumption of a multi-task system by sharing its scratchpad memory (SPM) space among the tasks. With this technique, tasks can interfere by using common areas of the SPM. However, this requires to update these areas during context switches, which involves considerable overheads. Hence, an integer linear programming formulation is used at compile time for finding the best assignment of memory objects to the SPM and their respective locations inside it. Experiments show that the technique achieves up to 85% energy reduction with 8Kb of SPM and surpasses other sharing approaches."
]
}
|
1406.0403
|
2949277052
|
Deeply embedded systems often have the tightest constraints on energy consumption, requiring that they consume tiny amounts of current and run on batteries for years. However, they typically execute code directly from flash, instead of the more energy efficient RAM. We implement a novel compiler optimization that exploits the relative efficiency of RAM by statically moving carefully selected basic blocks from flash to RAM. Our technique uses integer linear programming, with an energy cost model to select a good set of basic blocks to place into RAM, without impacting stack or data storage. We evaluate our optimization on a common ARM microcontroller and succeed in reducing the average power consumption by up to 41% and reducing energy consumption by up to 22%, while increasing execution time. A case study is presented, where an application executes code then sleeps for a period of time. For this example we show that our optimization could allow the application to run on battery for up to 32% longer. We also show that for this scenario the total application energy can be reduced, even if the optimization increases the execution time of the code.
|
Many other scratchpad memory allocation schemes have been proposed. A comprehensive review of these, including multiple scratchpad memories and partitioned memories, is given in @cite_0 .
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2140935245"
],
"abstract": [
"In the context of mobile embedded devices, reducing energy is one of the prime objectives. Memories are responsible for a significant percentage of a system's aggregate energy consumption. Consequently, novel memories as well as novel-memory architectures are being designed to reduce the energy consumption. Caches and scratchpads are two contrasting memory architectures. The former relies on hardware logic while the latter relies on software for its utilization. To meet different requirements, most contemporary high-end embedded microprocessors include on-chip instruction and data caches along with a scratchpad. Previous approaches for utilizing scratchpad did not consider caches and hence fail for the contemporary high-end systems. Instructions are allocated onto the scratchpad, while taking into account the behavior of the instruction cache present in the system. The problem of scratchpad allocation is solved using a heuristic and also optimally using an integer linear programming formulation. An average reduction of 7% and 23% in processor cycles and instruction-memory energy, respectively, is reported when compared against a previously published technique. The average deviation between optimal and nonoptimal solutions was found to be less than 6% both in terms of processor cycles and energy. The scratchpad in the presented architecture is similar to a preloaded loop cache. Comparing the energy consumption of the presented approach against that of a preloaded loop cache, an average reduction of 9% and 29% in processor cycles and instruction-memory energy, respectively, is reported"
]
}
|
1406.0403
|
2949277052
|
Deeply embedded systems often have the tightest constraints on energy consumption, requiring that they consume tiny amounts of current and run on batteries for years. However, they typically execute code directly from flash, instead of the more energy efficient RAM. We implement a novel compiler optimization that exploits the relative efficiency of RAM by statically moving carefully selected basic blocks from flash to RAM. Our technique uses integer linear programming, with an energy cost model to select a good set of basic blocks to place into RAM, without impacting stack or data storage. We evaluate our optimization on a common ARM microcontroller and succeed in reducing the average power consumption by up to 41% and reducing energy consumption by up to 22%, while increasing execution time. A case study is presented, where an application executes code then sleeps for a period of time. For this example we show that our optimization could allow the application to run on battery for up to 32% longer. We also show that for this scenario the total application energy can be reduced, even if the optimization increases the execution time of the code.
|
Modeling energy consumption has been explored at the function level of code @cite_13 . This involves creating a 'data bank' of how much energy each function costs to run. These energy figures can then be distributed with libraries, or combined with instruction-level modeling @cite_15 to estimate a program's energy consumption.
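As an illustration, a per-function energy 'data bank' combined with profiled call counts yields a simple whole-program estimate. All function names and energy figures below are invented:

```python
# Hypothetical per-function energy costs in nanojoules (the "data bank").
ENERGY_DATA_BANK_NJ = {"fir_filter": 850.0, "fft": 12400.0, "uart_send": 310.0}

def estimate_energy_nj(call_counts):
    """Total energy (nJ) = sum over functions of per-call cost * call count."""
    return sum(ENERGY_DATA_BANK_NJ[f] * n for f, n in call_counts.items())

# Call counts as a profiler might report them for one run of the program.
profile = {"fir_filter": 100, "fft": 4, "uart_send": 20}
print(estimate_energy_nj(profile))  # 850*100 + 12400*4 + 310*20 = 140800.0
```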
|
{
"cite_N": [
"@cite_15",
"@cite_13"
],
"mid": [
"1568759922",
"2121532505"
],
"abstract": [
"In this contribution the concept of Functional-Level Power Analysis (FLPA) for power estimation of programmable processors is extended in order to model even embedded general purpose processors. The basic FLPA approach is based on the separation of the processor architecture into functional blocks like e.g. processing unit, clock network, internal memory etc. The power consumption of these blocks is described by parameterized arithmetic models. By application of a parser based automated analysis of assembler codes the input parameters of the arithmetic functions like e.g. the achieved degree of parallelism or the kind and number of memory accesses can be computed. For modeling an embedded general purpose processor (here, an ARM940T) the basic FLPA modeling concept had to be extended to a so-called hybrid functional level and instruction level model in order to achieve a good modeling accuracy. The approach is exemplarily demonstrated and evaluated applying a variety of basic digital signal processing tasks ranging from basic filters to complete audio decoders. Estimated power figures for the inspected tasks are compared to physically measured values. A resulting maximum estimation error of less than 8% is achieved.",
"We have developed a function-level power estimation methodology for predicting the power dissipation of embedded software. For a given microprocessor core, we empirically build the “power data bank”, which stores the power information of the built-in library functions and basic instructions. To estimate the average power of an embedded software on this core, we first get the execution information of the target software from program profiling tracing tools. Then we evaluate the total energy consumption and execution time based on the “power data bank”, and take their ratio as the average power. High efficiency is achieved because no power simulator is used once the “power data bank” is built. We apply this method to a commercial microprocessor core and get power estimates with an average error of 3%. With this method, microprocessor vendors can provide users the “power data bank” without releasing details of the core to help users get early power estimates and eventually guide power optimization."
]
}
|
1406.0403
|
2949277052
|
Deeply embedded systems often have the tightest constraints on energy consumption, requiring that they consume tiny amounts of current and run on batteries for years. However, they typically execute code directly from flash, instead of the more energy efficient RAM. We implement a novel compiler optimization that exploits the relative efficiency of RAM by statically moving carefully selected basic blocks from flash to RAM. Our technique uses integer linear programming, with an energy cost model to select a good set of basic blocks to place into RAM, without impacting stack or data storage. We evaluate our optimization on a common ARM microcontroller and succeed in reducing the average power consumption by up to 41% and reducing energy consumption by up to 22%, while increasing execution time. A case study is presented, where an application executes code then sleeps for a period of time. For this example we show that our optimization could allow the application to run on battery for up to 32% longer. We also show that for this scenario the total application energy can be reduced, even if the optimization increases the execution time of the code.
|
Energy modeling has also been explored at a higher level, by considering the average power of each state the processor can be in @cite_24 @cite_6 . This requires less knowledge about the exact instruction stream of the processor; only the time spent in each state is needed.
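A minimal sketch of such a state-based model: total energy is the sum over states of average power times residency time. The states and power figures below are invented for illustration:

```python
# Hypothetical average power per processor state, in milliwatts.
POWER_MW = {"run": 12.0, "idle": 1.5, "deep_sleep": 0.02}

def energy_mj(residency_s):
    """Energy in millijoules: sum of P(state) * t(state) over all states."""
    return sum(POWER_MW[s] * t for s, t in residency_s.items())

# A duty-cycled application: 0.1 s running, 0.4 s idle, 9.5 s deep sleep.
print(round(energy_mj({"run": 0.1, "idle": 0.4, "deep_sleep": 9.5}), 3))  # 1.99
```

This is why increasing execution time can still reduce total energy, as in the head paper's case study: a slightly longer run phase can be outweighed by a longer, much cheaper sleep phase.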
|
{
"cite_N": [
"@cite_24",
"@cite_6"
],
"mid": [
"2042437844",
"2056994060"
],
"abstract": [
"Reducing energy consumption is one of the most important design aspects for small form-factor mobile platforms, such as smartphones and tablets. Despite its potential for power savings, optimally leveraging system low-power sleep states during active mobile workloads, such as video streaming and web browsing, has not been fully explored. One major challenge is to make intelligent power management decisions based on, among other things, accurate system idle duration prediction, which is difficult due to the non-deterministic system interrupt behavior. In this paper, we propose a novel framework, called E2S3 (Energy Efficient Sleep-State Selection), that dynamically enters the optimal low-power sleep state to minimize the system power consumption. In particular, E2S3 detects and exploits short idle durations during active mobile workloads by, (i) finding optimal thresholds (i.e., energy break-even times) for multiple low-power sleep states, (ii) predicting the sleep-state selection error probabilities heuristically, and by (iii) selecting the optimal sleep state based on the expected reward, e.g., power consumption, which incorporates the risks of making a wrong decision We implemented and evaluated E2S3 on Android-based smartphones, demonstrating the effectiveness of the algorithm. The evaluation results show that E2S3 significantly reduces the platform energy consumption, by up to 50% (hence extending battery life), without compromising system performance.",
"Abstract Motivated by the importance of energy consumption in mobile electronics this work describes a methodology developed at ARM for power modeling and energy estimation in complex System-on-Chips (SoCs). The approach is based on developing statistical power models for the system components using regression analysis and extends previous work that has mainly focused on microprocessor cores. The power models are derived from post-layout power-estimation data, after exploring the high-level activity space of each component. The models are then used to conduct an energy analysis based on realistic use cases including web browser benchmarks and multimedia algorithms running on a dual-core processor under Linux. The obtained results show the effects of different hardware configurations on power and energy for a given application and that system level energy consumption analysis can help the design team to make informed architectural trade-offs during the design process."
]
}
|
1406.0288
|
1968289204
|
Continuous action recognition is more challenging than isolated recognition because classification and segmentation must be simultaneously carried out. We build on the well known dynamic time warping framework and devise a novel visual alignment technique, namely dynamic frame warping (DFW), which performs isolated recognition based on per-frame representation of videos, and on aligning a test sequence with a model sequence. Moreover, we propose two extensions which enable to perform recognition concomitant with segmentation, namely one-pass DFW and two-pass DFW. These two methods have their roots in the domain of continuous recognition of speech and, to the best of our knowledge, their extension to continuous visual action recognition has been overlooked. We test and illustrate the proposed techniques with a recently released dataset (RAVEL) and with two public-domain datasets widely used in action recognition (Hollywood-1 and Hollywood-2). We also compare the performances of the proposed isolated and continuous recognition algorithms with several recently published methods.
|
The HMM-based generative models that we just discussed make strict assumptions that observations are conditionally independent, given class labels, and cannot describe long-range dependencies of the observations. This limitation makes the implementation of one-pass dynamic programming methods unreliable because it is difficult to decide which type of transition (within-action or between-action) should be preferred along the DP forward pass. Conditional random fields (CRFs) are discriminative models that explicitly allow transition probabilities to depend on past, present, and future observations. CRF models applied to isolated activity recognition outperform HMMs, e.g., @cite_8 @cite_36 . Several authors extended the CRF framework to incorporate additional latent (or hidden) state variables in order to better deal with the complex structure of human actions and gestures. For example, @cite_26 proposed a latent-dynamic CRF model, or LDCRF, to better capture both the sub-gesture and between-gesture dynamics. The method was applied to segment and classify head movements and eye gazing in a human-avatar interactive task.
|
{
"cite_N": [
"@cite_36",
"@cite_26",
"@cite_8"
],
"mid": [
"2123277412",
"2117497855",
"2151214862"
],
"abstract": [
"Activity recognition is a key component for creating intelligent, multi-agent systems. Intrinsically, activity recognition is a temporal classification problem. In this paper, we compare two models for temporal classification: hidden Markov models (HMMs), which have long been applied to the activity recognition problem, and conditional random fields (CRFs). CRFs are discriminative models for labeling sequences. They condition on the entire observation sequence, which avoids the need for independence assumptions between observations. Conditioning on the observations vastly expands the set of features that can be incorporated into the model without violating its assumptions. Using data from a simulated robot tag domain, chosen because it is multi-agent and produces complex interactions between observations, we explore the differences in performance between the discriminatively trained CRF and the generative HMM. Additionally, we examine the effect of incorporating features which violate independence assumptions between observations; such features are typically necessary for high classification accuracy. We find that the discriminatively trained CRF performs as well as or better than an HMM even when the model features do not violate the independence assumptions of the HMM. In cases where features depend on observations from many time steps, we confirm that CRFs are robust against any degradation in performance.",
"Many problems in vision involve the prediction of a class label for each frame in an unsegmented sequence. In this paper, we develop a discriminative framework for simultaneous sequence segmentation and labeling which can capture both intrinsic and extrinsic class dynamics. Our approach incorporates hidden state variables which model the sub-structure of a class sequence and learn dynamics between class labels. Each class label has a disjoint set of associated hidden states, which enables efficient training and inference in our model. We evaluated our method on the task of recognizing human gestures from unsegmented video streams and performed experiments on three different datasets of head and eye gestures. Our results demonstrate that our model compares favorably to Support Vector Machines, Hidden Markov Models, and Conditional Random Fields on visual gesture recognition tasks.",
"We describe algorithms for recognizing human motion in monocular video sequences, based on discriminative conditional random fields (CRFs) and maximum entropy Markov models (MEMMs). Existing approaches to this problem typically use generative structures like the hidden Markov model (HMM). Therefore, they have to make simplifying, often unrealistic assumptions on the conditional independence of observations given the motion class labels and cannot accommodate rich overlapping features of the observation or long-term contextual dependencies among observations at multiple timesteps. This makes them prone to myopic failures in recognizing many human motions, because even the transition between simple human activities naturally has temporal segments of ambiguity and overlap. The correct interpretation of these sequences requires more holistic, contextual decisions, where the estimate of an activity at a particular timestep could be constrained by longer windows of observations, prior and even posterior to that timestep. This would not be computationally feasible with a HMM which requires the enumeration of a number of observation sequences exponential in the size of the context window. In this work we follow a different philosophy: instead of restrictively modeling the complex image generation process – the observation, we work with models that can unrestrictedly take it as an input, hence condition on it. Conditional models like the proposed CRFs seamlessly represent contextual dependencies and have computationally attractive properties: they support efficient, exact recognition using dynamic programming, and their parameters can be learned using convex optimization. We introduce conditional graphical models as complementary tools for human motion recognition and present an extensive set of experiments that show not only how these can successfully classify diverse human activities like walking, jumping, running, picking or dancing, but also how they can discriminate among subtle motion styles like normal walks and wander walks."
]
}
|
1406.0288
|
1968289204
|
Continuous action recognition is more challenging than isolated recognition because classification and segmentation must be simultaneously carried out. We build on the well known dynamic time warping framework and devise a novel visual alignment technique, namely dynamic frame warping (DFW), which performs isolated recognition based on per-frame representation of videos, and on aligning a test sequence with a model sequence. Moreover, we propose two extensions which enable to perform recognition concomitant with segmentation, namely one-pass DFW and two-pass DFW. These two methods have their roots in the domain of continuous recognition of speech and, to the best of our knowledge, their extension to continuous visual action recognition has been overlooked. We test and illustrate the proposed techniques with a recently released dataset (RAVEL) and with two public-domain datasets widely used in action recognition (Hollywood-1 and Hollywood-2). We also compare the performances of the proposed isolated and continuous recognition algorithms with several recently published methods.
|
The methods described so far use motion (or pose) parameters which are extracted using motion capture systems. The characterization of actions using such parameters is attractive both because they are highly discriminant and because they live in a low-dimensional space, hence they can be easily plugged into the HMM and CRF frameworks. However, it is not always possible to reliably extract discriminant motion or pose descriptors from visual data, and sophisticated multiple-camera setups are required both for training and recognition. Alternatively, image-based descriptors are easy to extract but the corresponding feature vectors are less discriminant and have dimensions as high as hundreds, which makes them unsuitable for training graphical models. Recently, @cite_20 proposed to plug a latent pose estimator into the LDCRF model of @cite_26 by jointly training an image-to-pose regressor and a hidden-state conditional random field model. Although appealing, this model also requires a large training set gathered with synchronized multiple-camera and motion capture systems @cite_30 .
|
{
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_20"
],
"mid": [
"2099333815",
"2117497855",
"1526529273"
],
"abstract": [
"While research on articulated human motion and pose estimation has progressed rapidly in the last few years, there has been no systematic quantitative evaluation of competing methods to establish the current state of the art. We present data obtained using a hardware system that is able to capture synchronized video and ground-truth 3D motion. The resulting HumanEva datasets contain multiple subjects performing a set of predefined actions with a number of repetitions. On the order of 40,000 frames of synchronized motion capture and multi-view video (resulting in over one quarter million image frames in total) were collected at 60 Hz with an additional 37,000 time instants of pure motion capture data. A standard set of error measures is defined for evaluating both 2D and 3D pose estimation and tracking algorithms. We also describe a baseline algorithm for 3D articulated tracking that uses a relatively standard Bayesian framework with optimization in the form of Sequential Importance Resampling and Annealed Particle Filtering. In the context of this baseline algorithm we explore a variety of likelihood functions, prior models of human motion and the effects of algorithm parameters. Our experiments suggest that image observation models and motion priors play important roles in performance, and that in a multi-view laboratory environment, where initialization is available, Bayesian filtering tends to perform well. The datasets and the software are made available to the research community. This infrastructure will support the development of new articulated motion and pose estimation algorithms, will provide a baseline for the evaluation and comparison of new methods, and will help establish the current state of the art in human pose estimation and tracking.",
"Many problems in vision involve the prediction of a class label for each frame in an unsegmented sequence. In this paper, we develop a discriminative framework for simultaneous sequence segmentation and labeling which can capture both intrinsic and extrinsic class dynamics. Our approach incorporates hidden state variables which model the sub-structure of a class sequence and learn dynamics between class labels. Each class label has a disjoint set of associated hidden states, which enables efficient training and inference in our model. We evaluated our method on the task of recognizing human gestures from unsegmented video streams and performed experiments on three different datasets of head and eye gestures. Our results demonstrate that our model compares favorably to Support Vector Machines, Hidden Markov Models, and Conditional Random Fields on visual gesture recognition tasks.",
"Recently, models based on conditional random fields (CRF) have produced promising results on labeling sequential data in several scientific fields. However, in the vision task of continuous action recognition, the observations of visual features have dimensions as high as hundreds or even thousands. This might pose severe difficulties on parameter estimation and even degrade the performance. To bridge the gap between the high dimensional observations and the random fields, we propose a novel model that replace the observation layer of a traditional random fields model with a latent pose estimator. In training stage, the human pose is not observed in the action data, and the latent pose estimator is learned under the supervision of the labeled action data, instead of image-to-pose data. The advantage of this model is twofold. First, it learns to convert the high dimensional observations into more compact and informative representations. Second, it enables transfer learning to fully utilize the existing knowledge and data on image-to-pose relationship. The parameters of the latent pose estimator and the random fields are jointly optimized through a gradient ascent algorithm. Our approach is tested on HumanEva [1] --- a publicly available dataset. The experiments show that our approach can improve recognition accuracy over standard CRF model and its variations. The performance can be further significantly improved by using additional image-to-pose data for training. Our experiments also show that the model trained on HumanEva can generalize to different environment and human subjects."
]
}
|
1406.0288
|
1968289204
|
Continuous action recognition is more challenging than isolated recognition because classification and segmentation must be simultaneously carried out. We build on the well known dynamic time warping framework and devise a novel visual alignment technique, namely dynamic frame warping (DFW), which performs isolated recognition based on per-frame representation of videos, and on aligning a test sequence with a model sequence. Moreover, we propose two extensions which enable to perform recognition concomitant with segmentation, namely one-pass DFW and two-pass DFW. These two methods have their roots in the domain of continuous recognition of speech and, to the best of our knowledge, their extension to continuous visual action recognition has been overlooked. We test and illustrate the proposed techniques with a recently released dataset (RAVEL) and with two public-domain datasets widely used in action recognition (Hollywood-1 and Hollywood-2). We also compare the performances of the proposed isolated and continuous recognition algorithms with several recently published methods.
|
The proposed one-pass continuous recognition algorithm also differs from recently proposed dynamic time warping methods. @cite_34 address the problem of continuous hand gesture recognition and a pruning strategy is proposed such that DTW paths that do not correspond to valid (trained) gestures are abandoned. At runtime, this is less efficient than one-pass DP algorithms which extract a single path rather than multiple paths. @cite_29 address the problem of continuous action recognition but propose an average-template representation for an action category and a dissimilarity measure which would not be able to handle large intra-class variance. Dynamic time warping has also been applied to action recognition in combination with unsupervised manifold learning techniques, e.g., @cite_28 @cite_49 @cite_37 , but the problem of continuous recognition was not addressed in these papers. To the best of our knowledge, the full potential of dynamic time warping for the problem of simultaneous segmentation and recognition of human actions has not been systematically exploited in the computer vision domain.
|
{
"cite_N": [
"@cite_37",
"@cite_28",
"@cite_29",
"@cite_49",
"@cite_34"
],
"mid": [
"2121045468",
"1570157063",
"2165199619",
"2150696241",
"2125201599"
],
"abstract": [
"We address the problem of learning view-invariant 3D models of human motion from motion capture data, in order to recognize human actions from a monocular video sequence with arbitrary viewpoint. We propose a Spatio-Temporal Manifold (STM) model to analyze non-linear multivariate time series with latent spatial structure and apply it to recognize actions in the joint-trajectories space. Based on STM, a novel alignment algorithm Dynamic Manifold Warping (DMW) and a robust motion similarity metric are proposed for human action sequences, both in 2D and 3D. DMW extends previous works on spatio-temporal alignment by incorporating manifold learning. We evaluate and compare the approach to state-of-the-art methods on motion capture data and realistic videos. Experimental results demonstrate the effectiveness of our approach, which yields visually appealing alignment results, produces higher action recognition accuracy, and can recognize actions from arbitrary views with partial occlusion.",
"2D Tracking.- Understanding Human Motion: A Historic Review.- The Role of Manifold Learning in Human Motion Analysis.- Recognition of Action as a Bayesian Parameter Estimation Problem over Time.- The William Harvey Code: Mathematical Analysis of Optical Flow Computation for Cardiac Motion.- Detection and Tracking of Humans in Single View Sequences Using 2D Articulated Model.- Learning.- Combining Discrete and Continuous 3D Trackers.- Graphical Models for Human Motion Modelling.- 3D Human Motion Analysis in Monocular Video: Techniques and Challenges.- Spatially and Temporally Segmenting Movement to Recognize Actions.- Topologically Constrained Isometric Embedding.- 2D-3D Tracking.- Contours, Optic Flow, and Prior Knowledge: Cues for Capturing 3D Human Motion in Videos.- Tracking Clothed People.- An Introduction to Interacting Simulated Annealing.- Motion Capture for Interaction Environments.- Markerless Motion Capture for Biomechanical Applications.- Biomechanics and Applications.- Qualitative and Quantitative Aspects of Movement: The Discrepancy Between Clinical Gait Analysis and Activities of Daily Life.- Optimization of Human Motion Exemplified with Handbiking by Means of Motion Analysis and Musculoskeletal Models.- Imitation Learning and Transferring of Human Movement and Hand Grasping to Adapt to Environment Changes.- Accurate and Model-free Pose Estimation of Crash Test Dummies.- Modelling and Animation.- A Relational Approach to Content-based Analysis of Motion Capture Data.- The Representation of Rigid Body Motions in the Conformal Model of Geometric Algebra.- Video-based Capturing and Rendering of People.- Interacting Deformable Objects.- From Performance Theory to Character Animation Tools.",
"Several researchers have addressed the problem of human action recognition using a variety of algorithms. An underlying assumption in most of these algorithms is that action boundaries are already known in a test video sequence. In this paper, we propose a fast method for continuous human action recognition in a video sequence. We propose the use of a low dimensional feature vector which consists of (a) the projections of the width profile of the actor on to a Discrete Cosine Transform (DCT) basis and (b) simple spatio-temporal features. We use an earlier proposed average-template with multiple features for modelling human actions and combine it with One-pass Dynamic Programming (DP) algorithm for continuous action recognition. This model accounts for intra-class variability in the way an action is performed. Furthermore, we demonstrate a way to perform noise robust recognition by creating a noise match condition between the train and the test data. The effectiveness of our method is demonstrated by conducting experiments on the IXMAS dataset of persons performing various actions and an outdoor Action database collected by us.",
"Alignment of time series is an important problem to solve in many scientific disciplines. In particular, temporal alignment of two or more subjects performing similar activities is a challenging problem due to the large temporal scale difference between human actions as well as the inter intra subject variability. In this paper we present canonical time warping (CTW), an extension of canonical correlation analysis (CCA) for spatio-temporal alignment of human motion between two subjects. CTW extends previous work on CCA in two ways: (i) it combines CCA with dynamic time warping (DTW), and (ii) it extends CCA by allowing local spatial deformations. We show CTW's effectiveness in three experiments: alignment of synthetic data, alignment of motion capture data of two subjects performing similar actions, and alignment of similar facial expressions made by two people. Our results demonstrate that CTW provides both visually and qualitatively better alignment than state-of-the-art techniques based on DTW.",
"Within the context of hand gesture recognition, spatiotemporal gesture segmentation is the task of determining, in a video sequence, where the gesturing hand is located and when the gesture starts and ends. Existing gesture recognition methods typically assume either known spatial segmentation or known temporal segmentation, or both. This paper introduces a unified framework for simultaneously performing spatial segmentation, temporal segmentation, and recognition. In the proposed framework, information flows both bottom-up and top-down. A gesture can be recognized even when the hand location is highly ambiguous and when information about when the gesture begins and ends is unavailable. Thus, the method can be applied to continuous image streams where gestures are performed in front of moving, cluttered backgrounds. The proposed method consists of three novel contributions: a spatiotemporal matching algorithm that can accommodate multiple candidate hand detections in every frame, a classifier-based pruning framework that enables accurate and early rejection of poor matches to gesture models, and a subgesture reasoning algorithm that learns which gesture models can falsely match parts of other longer gestures. The performance of the approach is evaluated on two challenging applications: recognition of hand-signed digits gestured by users wearing short-sleeved shirts, in front of a cluttered background, and retrieval of occurrences of signs of interest in a video database containing continuous, unsegmented signing in American sign language (ASL)."
]
}
|
1405.7975
|
2525722290
|
Multi-document summarization is a process of automatic generation of a compressed version of the given collection of documents. Recently, the graph-based models and ranking algorithms have been actively investigated by the extractive document summarization community. While most work to date focuses on homogeneous connectedness of sentences and heterogeneous connectedness of documents and sentences (e.g. sentence similarity weighted by document importance), in this paper we present a novel 3-layered graph model that emphasizes not only sentence and document level relations but also the influence of under sentence level relations (e.g. a part of sentence similarity).
|
The graph-based models have been developed by the extractive document summarization community in the past years @cite_5 @cite_8 . Conventionally, they model a document or a set of documents as a text graph constructed by taking text units as nodes and similarities between text units as edges. The significance of a node in the graph is estimated by graph-based ranking algorithms, such as PageRank @cite_6 or HITS @cite_4 . Sentences in document(s) are ranked based on the computed node significance and the most salient ones are selected to form an extractive summary. An algorithm called LexRank @cite_5 , adapted from PageRank, was applied to calculate sentence significance, which was then used as the criterion to rank and select summary sentences. Meanwhile, Mihalcea and Tarau @cite_8 presented their PageRank variation, called TextRank, in the same year.
|
{
"cite_N": [
"@cite_5",
"@cite_4",
"@cite_6",
"@cite_8"
],
"mid": [
"2110693578",
"2138621811",
"2066636486",
"1525595230"
],
"abstract": [
"We introduce a stochastic graph-based method for computing relative importance of textual units for Natural Language Processing. We test the technique on the problem of Text Summarization (TS). Extractive TS relies on the concept of sentence salience to identify the most important sentences in a document or set of documents. Salience is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We consider a new approach, LexRank, for computing sentence importance based on the concept of eigenvector centrality in a graph representation of sentences. In this model, a connectivity matrix based on intra-sentence cosine similarity is used as the adjacency matrix of the graph representation of sentences. Our system, based on LexRank ranked in first place in more than one task in the recent DUC 2004 evaluation. In this paper we present a detailed analysis of our approach and apply it to a larger data set including data from earlier DUC evaluations. We discuss several methods to compute centrality using the similarity graph. The results show that degree-based methods (including LexRank) outperform both centroid-based methods and other systems participating in DUC in most of the cases. Furthermore, the LexRank with threshold method outperforms the other degree-based techniques including continuous LexRank. We also show that our approach is quite insensitive to the noise in the data that may result from an imperfect topical clustering of documents.",
"The network structure of a hyperlinked environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. We develop a set of algorithmic tools for extracting information from the link structures of such environments, and report on experiments that demonstrate their effectiveness in a variety of contexts on the World Wide Web. The central issue we address within our framework is the distillation of broad search topics, through the discovery of “authoritative” information sources on such topics. We propose and test an algorithmic formulation of the notion of authority, based on the relationship between a set of relevant authoritative pages and the set of “hub pages” that join them together in the link structure. Our formulation has connections to the eigenvectors of certain matrices associated with the link graph; these connections in turn motivate additional heuristics for link-based analysis.",
"In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ . To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.",
"In this paper, the authors introduce TextRank, a graph-based ranking model for text processing, and show how this model can be successfully used in natural language applications."
]
}
|
1405.7741
|
2309463454
|
Many convex optimization methods are conceived of and analyzed in a largely separate fashion. In contrast to this traditional separation, this manuscript points out and demonstrates the utility of an important but largely unremarked common thread running through many prominent optimization methods. Specifically, we show that methods such as successive orthogonal projection, gradient descent, projected gradient descent, the proximal point method, forward-backward splitting, the alternating direction method of multipliers, and under- or over-relaxed variants of the preceding all involve updates that are of a common type --- namely, the updates satisfy a property known as pseudocontractivity. Moreover, since the property of pseudocontractivity is preserved under both composition and convex combination, updates constructed via these operations from pseudocontractive updates are themselves pseudocontractive. Having demonstrated that pseudocontractive updates are to be found in many optimization methods, we then provide an initial example of the type of unified analysis that becomes possible in the settings where the property of pseudocontractivity is found to hold. Specifically, we prove a novel bound satisfied by the norm of the difference in iterates of pseudocontractive updates and we then use this bound to establish an @math worst-case convergence rate on the error criterion @math for any method involving pseudocontractive updates (where @math is the number of iterations).
|
In @cite_16 the discussion covers (projected) gradient descent, the proximal-point method, forward-backward splitting, the alternating direction method of multipliers, and numerous other methods. The coverage does touch on the special case of @math -pseudocontractivity (i.e., firm nonexpansiveness) and on the special case of @math -inverse strong monotonicity; however, without the more general concepts of @math -pseudocontractivity and @math -inverse strong monotonicity (and the relationship between these concepts) there are significant limits to the results that can be obtained. One particularly notable limitation of a focus on firmly nonexpansive operators is that the class of firmly nonexpansive operators is not closed under composition.
|
{
"cite_N": [
"@cite_16"
],
"mid": [
"1525779601"
],
"abstract": [
"An electrically-operated shutter provided with a change-over device or arrangement for effecting daylight and flash photography automatically shifting from a daylight mode to a flash photography mode as a function of the detected brightness level of the field being photographed. One embodiment provides for manually presetting the camera for flash photography which is then automatically carried out with an automatic change-over from a daylight photography mode to a flash photography mode. When carrying out daylight photography the power source energy is used in control and in energizing an electromagnet that controls the exposure termination under control of a delay circuit. When taking flash exposure the delay circuit is not employed and a fixed exposure time is used. The power supply is then used to energize the flashbulb and the accuracy of the shutter timing is unaffected as might be the case if the power source were energizing both the flashbulb and driving or energizing the shutter. The same shutter-operating elements are used in both modes of operation and the change-over arrangement eliminates the use of the delay circuit when the brightness level is below a predetermined level so that flash photography becomes necessary."
]
}
|
1405.7908
|
2169417777
|
Semantic composition is the task of understanding the meaning of text by composing the meanings of the individual words in the text. Semantic decomposition is the task of understanding the meaning of an individual word by decomposing it into various aspects (factors, constituents, components) that are latent in the meaning of the word. We take a distributional approach to semantics, in which a word is represented by a context vector. Much recent work has considered the problem of recognizing compositions and decompositions, but we tackle the more difficult generation problem. For simplicity, we focus on noun-modifier bigrams and noun unigrams. A test for semantic composition is, given context vectors for the noun and modifier in a noun-modifier bigram (red salmon), generate a noun unigram that is synonymous with the given bigram (sockeye). A test for semantic decomposition is, given a context vector for a noun unigram (snifter), generate a noun-modifier bigram that is synonymous with the given unigram (brandy glass). With a vocabulary of about 73,000 unigrams from WordNet, there are 73,000 candidate unigram compositions for a bigram and 5,300,000,000 (73,000 squared) candidate bigram decompositions for a unigram. We generate ranked lists of potential solutions in two passes. A fast unsupervised learning algorithm generates an initial list of candidates and then a slower supervised learning algorithm refines the list. We evaluate the candidate solutions by comparing them to WordNet synonym sets. For decomposition (unigram to bigram), the top 100 most highly ranked bigrams include a WordNet synonym of the given unigram 50.7% of the time. For composition (bigram to unigram), the top 100 most highly ranked unigrams include a WordNet synonym of the given bigram 77.8% of the time.
|
In general, corpus-based approaches to paraphrase have extended the distributional hypothesis from words to phrases. The extended distributional hypothesis is that phrases that occur in similar contexts tend to have similar meanings @cite_18 . For example, consider the following fragments of text @cite_23 :
|
{
"cite_N": [
"@cite_18",
"@cite_23"
],
"mid": [
"1965605789",
"1549339229"
],
"abstract": [
"In this paper, we propose an unsupervised method for discovering inference rules from text, such as \"X is author of Y ≈ X wrote Y\", \"X solved Y ≈ X found a solution to Y\", and \"X caused Y ≈ Y is triggered by X\". Inference rules are extremely important in many fields such as natural language processing, information retrieval, and artificial intelligence in general. Our algorithm is based on an extended version of Harris' Distributional Hypothesis, which states that words that occurred in the same contexts tend to be similar. Instead of using this hypothesis on words, we apply it to paths in the dependency trees of a parsed corpus.",
"This paper presents a lightweight method for unsupervised extraction of paraphrases from arbitrary textual Web documents. The method differs from previous approaches to paraphrase acquisition in that 1) it removes the assumptions on the quality of the input data, by using inherently noisy, unreliable Web documents rather than clean, trustworthy, properly formatted documents; and 2) it does not require any explicit clue indicating which documents are likely to encode parallel paraphrases, as they report on the same events or describe the same stories. Large sets of paraphrases are collected through exhaustive pairwise alignment of small needles, i.e., sentence fragments, across a haystack of Web document sentences. The paper describes experiments on a set of about one billion Web documents, and evaluates the extracted paraphrases in a natural-language Web search application."
]
}
|
1405.7908
|
2169417777
|
Semantic composition is the task of understanding the meaning of text by composing the meanings of the individual words in the text. Semantic decomposition is the task of understanding the meaning of an individual word by decomposing it into various aspects (factors, constituents, components) that are latent in the meaning of the word. We take a distributional approach to semantics, in which a word is represented by a context vector. Much recent work has considered the problem of recognizing compositions and decompositions, but we tackle the more difficult generation problem. For simplicity, we focus on noun-modifier bigrams and noun unigrams. A test for semantic composition is, given context vectors for the noun and modifier in a noun-modifier bigram (red salmon), generate a noun unigram that is synonymous with the given bigram (sockeye). A test for semantic decomposition is, given a context vector for a noun unigram (snifter), generate a noun-modifier bigram that is synonymous with the given unigram (brandy glass). With a vocabulary of about 73,000 unigrams from WordNet, there are 73,000 candidate unigram compositions for a bigram and 5,300,000,000 (73,000 squared) candidate bigram decompositions for a unigram. We generate ranked lists of potential solutions in two passes. A fast unsupervised learning algorithm generates an initial list of candidates and then a slower supervised learning algorithm refines the list. We evaluate the candidate solutions by comparing them to WordNet synonym sets. For decomposition (unigram to bigram), the top 100 most highly ranked bigrams include a WordNet synonym of the given unigram 50.7% of the time. For composition (bigram to unigram), the top 100 most highly ranked unigrams include a WordNet synonym of the given bigram 77.8% of the time.
|
From the shared context, we can infer a degree of semantic similarity between the phrases withdrew from and pulled out of . We call this the holistic (non-compositional) approach to paraphrase @cite_7 , because the phrases are treated as opaque wholes. The holistic approach does not model the individual words in the phrases.
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2127002961"
],
"abstract": [
"Given appropriate representations of the semantic relations between carpenter and wood and between mason and stone (for example, vectors in a vector space model), a suitable algorithm should be able to recognize that these relations are highly similar (carpenter is to wood as mason is to stone; the relations are analogous). Likewise, with representations of dog, house, and kennel, an algorithm should be able to recognize that the semantic composition of dog and house, dog house, is highly similar to kennel (dog house and kennel are synonymous). It seems that these two tasks, recognizing relations and compositions, are closely connected. However, up to now, the best models for relations are significantly different from the best models for compositions. In this paper, we introduce a dual-space model that unifies these two tasks. This model matches the performance of the best previous models for relations and compositions. The dual-space model consists of a space for measuring domain similarity and a space for measuring function similarity. Carpenter and wood share the same domain, the domain of carpentry. Mason and stone share the same domain, the domain of masonry. Carpenter and mason share the same function, the function of artisans. Wood and stone share the same function, the function of materials. In the composition dog house, kennel has some domain overlap with both dog and house (the domains of pets and buildings). The function of kennel is similar to the function of house (the function of shelters). By combining domain and function similarities in various ways, we can model relations, compositions, and other aspects of semantics."
]
}
|
1405.7908
|
2169417777
|
Semantic composition is the task of understanding the meaning of text by composing the meanings of the individual words in the text. Semantic decomposition is the task of understanding the meaning of an individual word by decomposing it into various aspects (factors, constituents, components) that are latent in the meaning of the word. We take a distributional approach to semantics, in which a word is represented by a context vector. Much recent work has considered the problem of recognizing compositions and decompositions, but we tackle the more difficult generation problem. For simplicity, we focus on noun-modifier bigrams and noun unigrams. A test for semantic composition is, given context vectors for the noun and modifier in a noun-modifier bigram (red salmon), generate a noun unigram that is synonymous with the given bigram (sockeye). A test for semantic decomposition is, given a context vector for a noun unigram (snifter), generate a noun-modifier bigram that is synonymous with the given unigram (brandy glass). With a vocabulary of about 73,000 unigrams from WordNet, there are 73,000 candidate unigram compositions for a bigram and 5,300,000,000 (73,000 squared) candidate bigram decompositions for a unigram. We generate ranked lists of potential solutions in two passes. A fast unsupervised learning algorithm generates an initial list of candidates and then a slower supervised learning algorithm refines the list. We evaluate the candidate solutions by comparing them to WordNet synonym sets. For decomposition (unigram to bigram), the top 100 most highly ranked bigrams include a WordNet synonym of the given unigram 50.7% of the time. For composition (bigram to unigram), the top 100 most highly ranked unigrams include a WordNet synonym of the given bigram 77.8% of the time.
|
The creative power of language comes from combining words to create new meanings. With a vocabulary of @math unigrams, there are @math possible bigrams and @math possible trigrams. We give meaning to @math -grams ( @math ) by composing the meanings of their component words. The holistic approach lacks the ability to compose meanings and cannot scale up to phrases and sentences. Holistic approaches to paraphrase do not address the creative power of language @cite_32 @cite_0 .
|
{
"cite_N": [
"@cite_0",
"@cite_32"
],
"mid": [
"1514897281",
"1555082340"
],
"abstract": [
"1. WHY MEANING PROBABLY ISN'T CONCEPTUAL ROLE (1991) 5. THE EMPTINESS OF THE LEXICON (1998) 7. BRANDOM'S BURDENS: CRITICAL STUDY OF BRANDOM'S ARTICULATING REASONS",
"The work written by the noted American linguist two decades ago explains the basic principles of transformational generative grammar, its relation to the general structure of an adequate language theory, and its specific application to English."
]
}
|
1405.7908
|
2169417777
|
Semantic composition is the task of understanding the meaning of text by composing the meanings of the individual words in the text. Semantic decomposition is the task of understanding the meaning of an individual word by decomposing it into various aspects (factors, constituents, components) that are latent in the meaning of the word. We take a distributional approach to semantics, in which a word is represented by a context vector. Much recent work has considered the problem of recognizing compositions and decompositions, but we tackle the more difficult generation problem. For simplicity, we focus on noun-modifier bigrams and noun unigrams. A test for semantic composition is, given context vectors for the noun and modifier in a noun-modifier bigram (red salmon), generate a noun unigram that is synonymous with the given bigram (sockeye). A test for semantic decomposition is, given a context vector for a noun unigram (snifter), generate a noun-modifier bigram that is synonymous with the given unigram (brandy glass). With a vocabulary of about 73,000 unigrams from WordNet, there are 73,000 candidate unigram compositions for a bigram and 5,300,000,000 (73,000 squared) candidate bigram decompositions for a unigram. We generate ranked lists of potential solutions in two passes. A fast unsupervised learning algorithm generates an initial list of candidates and then a slower supervised learning algorithm refines the list. We evaluate the candidate solutions by comparing them to WordNet synonym sets. For decomposition (unigram to bigram), the top 100 most highly ranked bigrams include a WordNet synonym of the given unigram 50.7% of the time. For composition (bigram to unigram), the top 100 most highly ranked unigrams include a WordNet synonym of the given bigram 77.8% of the time.
|
Let @math be a noun-modifier phrase, and assume that we have context vectors @math and @math that represent the component words @math and @math . One of the earliest proposals for semantic composition is to represent the bigram @math by the vector sum @math @cite_29 . To measure the similarity of a noun-modifier phrase, @math , and a noun, @math , we calculate the cosine of the angle between @math and the context vector @math for @math .
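The additive model above can be sketched in a few lines of Python with NumPy. The 4-dimensional vectors and the word choices below are hypothetical toy data, not taken from any corpus; the point is only the mechanics of summing context vectors and comparing by cosine:

```python
import numpy as np

def compose_add(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Additive composition: represent the bigram ab by the vector a + b."""
    return a + b

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two context vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 4-dimensional context vectors (hypothetical co-occurrence counts).
red = np.array([2.0, 0.0, 1.0, 0.0])
salmon = np.array([0.0, 3.0, 1.0, 1.0])
sockeye = np.array([1.0, 2.0, 1.0, 1.0])

bigram_vec = compose_add(red, salmon)   # vector for "red salmon"
sim = cosine(bigram_vec, sockeye)       # similarity of the phrase to a noun
```

With these made-up counts the composed vector for "red salmon" is close to the vector for "sockeye", which is exactly the comparison the additive model performs.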
|
{
"cite_N": [
"@cite_29"
],
"mid": [
"1983578042"
],
"abstract": [
"How do people know as much as they do with as little information as they get? The problem takes many forms; learning vocabulary from text is an especially dramatic and convenient case for research. A new general theory of acquired similarity and knowledge representation, latent semantic analysis (LSA), is presented and used to successfully simulate such learning and several other psycholinguistic phenomena. By inducing global knowledge indirectly from local co-occurrence data in a large body of representative text, LSA acquired knowledge about the full vocabulary of English at a comparable rate to schoolchildren. LSA uses no prior linguistic or perceptual similarity knowledge; it is based solely on a general mathematical learning method that achieves powerful inductive effects by extracting the right number of dimensions (e.g., 300) to represent objects and contexts. Relations to other theories, phenomena, and problems are sketched."
]
}
|
1405.7908
|
2169417777
|
Semantic composition is the task of understanding the meaning of text by composing the meanings of the individual words in the text. Semantic decomposition is the task of understanding the meaning of an individual word by decomposing it into various aspects (factors, constituents, components) that are latent in the meaning of the word. We take a distributional approach to semantics, in which a word is represented by a context vector. Much recent work has considered the problem of recognizing compositions and decompositions, but we tackle the more difficult generation problem. For simplicity, we focus on noun-modifier bigrams and noun unigrams. A test for semantic composition is, given context vectors for the noun and modifier in a noun-modifier bigram (red salmon), generate a noun unigram that is synonymous with the given bigram (sockeye). A test for semantic decomposition is, given a context vector for a noun unigram (snifter), generate a noun-modifier bigram that is synonymous with the given unigram (brandy glass). With a vocabulary of about 73,000 unigrams from WordNet, there are 73,000 candidate unigram compositions for a bigram and 5,300,000,000 (73,000 squared) candidate bigram decompositions for a unigram. We generate ranked lists of potential solutions in two passes. A fast unsupervised learning algorithm generates an initial list of candidates and then a slower supervised learning algorithm refines the list. We evaluate the candidate solutions by comparing them to WordNet synonym sets. For decomposition (unigram to bigram), the top 100 most highly ranked bigrams include a WordNet synonym of the given unigram 50.7% of the time. For composition (bigram to unigram), the top 100 most highly ranked unigrams include a WordNet synonym of the given bigram 77.8% of the time.
|
mitchell08,mitchell10 suggest element-wise multiplication as a composition operation, @math , where @math . Since @math , element-wise multiplication is not sensitive to word order. However, in an experimental evaluation of seven compositional models and two noncompositional models, element-wise multiplication had the best performance @cite_5 .
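A minimal sketch of the multiplicative model, again with hypothetical toy vectors, makes the order insensitivity concrete: because element-wise multiplication commutes, the composed vectors for the two word orders are identical.

```python
import numpy as np

def compose_mult(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Element-wise (Hadamard) composition: c_i = a_i * b_i."""
    return a * b

# Hypothetical context vectors for two words.
a = np.array([1.0, 2.0, 0.0, 3.0])
b = np.array([4.0, 0.5, 2.0, 1.0])

ab = compose_mult(a, b)  # vector for the bigram "a b"
ba = compose_mult(b, a)  # vector for the bigram "b a"

# Multiplication commutes, so the model cannot distinguish word order.
order_insensitive = bool(np.array_equal(ab, ba))
```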
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"1984052055"
],
"abstract": [
"Vector-based models of word meaning have become increasingly popular in cognitive science. The appeal of these models lies in their ability to represent meaning simply by using distributional information under the assumption that words occurring within similar contexts are semantically similar. Despite their widespread use, vector-based models are typically directed at representing words in isolation, and methods for constructing representations for phrases or sentences have received little attention in the literature. This is in marked contrast to experimental evidence (e.g., in sentential priming) suggesting that semantic similarity is more complex than simply a relation between isolated words. This article proposes a framework for representing the meaning of word combinations in vector space. Central to our approach is vector composition, which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models that we evaluate empirically on a phrase similarity task."
]
}
|
1405.7908
|
2169417777
|
Semantic composition is the task of understanding the meaning of text by composing the meanings of the individual words in the text. Semantic decomposition is the task of understanding the meaning of an individual word by decomposing it into various aspects (factors, constituents, components) that are latent in the meaning of the word. We take a distributional approach to semantics, in which a word is represented by a context vector. Much recent work has considered the problem of recognizing compositions and decompositions, but we tackle the more difficult generation problem. For simplicity, we focus on noun-modifier bigrams and noun unigrams. A test for semantic composition is, given context vectors for the noun and modifier in a noun-modifier bigram (red salmon), generate a noun unigram that is synonymous with the given bigram (sockeye). A test for semantic decomposition is, given a context vector for a noun unigram (snifter), generate a noun-modifier bigram that is synonymous with the given unigram (brandy glass). With a vocabulary of about 73,000 unigrams from WordNet, there are 73,000 candidate unigram compositions for a bigram and 5,300,000,000 (73,000 squared) candidate bigram decompositions for a unigram. We generate ranked lists of potential solutions in two passes. A fast unsupervised learning algorithm generates an initial list of candidates and then a slower supervised learning algorithm refines the list. We evaluate the candidate solutions by comparing them to WordNet synonym sets. For decomposition (unigram to bigram), the top 100 most highly ranked bigrams include a WordNet synonym of the given unigram 50.7% of the time. For composition (bigram to unigram), the top 100 most highly ranked unigrams include a WordNet synonym of the given bigram 77.8% of the time.
|
In the holistic approach, @math is treated as if it were an individual word. A context vector for @math is constructed from a corpus in the same manner as it would be constructed for a unigram. This approach does not scale up, but it does work well for a predetermined small set of high frequency @math -grams @cite_7 . guevara10 and baroni10 point out that a small set of bigrams with holistic context vectors can be used to train a regression model. For example, a regression model can be trained to map the context vectors @math and @math to the holistic context vector for @math @cite_16 . Given a new bigram, @math , with context vectors @math and @math , the regression model can use @math and @math to predict the holistic context vector for @math .
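The regression idea can be sketched as follows. Assuming we have holistic context vectors for a small set of training bigrams, a linear model can be fit to map the concatenation [a; b] of the component vectors to the holistic vector for ab, and then applied to unseen bigrams. The training data below is simulated (a noisy linear function stands in for corpus-observed holistic vectors), purely to illustrate the fitting step:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_train = 10, 200

# Simulated training data: component context vectors a, b and a holistic
# (corpus-observed) vector for each bigram ab. Here the holistic vectors
# are generated as a noisy linear function of [a; b], for illustration only.
A = rng.normal(size=(n_train, dim))
B = rng.normal(size=(n_train, dim))
X = np.hstack([A, B])                                     # features [a; b]
W_true = rng.normal(size=(2 * dim, dim))
Y = X @ W_true + 0.01 * rng.normal(size=(n_train, dim))   # holistic vectors

# Least-squares fit of the regression model mapping [a; b] to ab's vector.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# For an unseen bigram, predict its holistic context vector from a and b.
a_new, b_new = rng.normal(size=dim), rng.normal(size=dim)
predicted = np.concatenate([a_new, b_new]) @ W
mse = float(np.mean((X @ W - Y) ** 2))
```

In practice the holistic training vectors come from treating frequent bigrams as single tokens in the corpus, as described above; only the fitting machinery is shown here.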
|
{
"cite_N": [
"@cite_16",
"@cite_7"
],
"mid": [
"2251471769",
"2127002961"
],
"abstract": [
"In this paper we explore the computational modelling of compositionality in distributional models of semantics. In particular, we model the semantic composition of pairs of adjacent English Adjectives and Nouns from the British National Corpus. We build a vector-based semantic space from a lemmatised version of the BNC, where the most frequent A-N lemma pairs are treated as single tokens. We then extrapolate three different models of compositionality: a simple additive model, a pointwise-multiplicative model and a Partial Least Squares Regression (PLSR) model. We propose two evaluation methods for the implemented models. Our study leads to the conclusion that regression-based models of compositionality generally out-perform additive and multiplicative approaches, and also show a number of advantages that make them very promising for future research.",
"Given appropriate representations of the semantic relations between carpenter and wood and between mason and stone (for example, vectors in a vector space model), a suitable algorithm should be able to recognize that these relations are highly similar (carpenter is to wood as mason is to stone; the relations are analogous). Likewise, with representations of dog, house, and kennel, an algorithm should be able to recognize that the semantic composition of dog and house, dog house, is highly similar to kennel (dog house and kennel are synonymous). It seems that these two tasks, recognizing relations and compositions, are closely connected. However, up to now, the best models for relations are significantly different from the best models for compositions. In this paper, we introduce a dual-space model that unifies these two tasks. This model matches the performance of the best previous models for relations and compositions. The dual-space model consists of a space for measuring domain similarity and a space for measuring function similarity. Carpenter and wood share the same domain, the domain of carpentry. Mason and stone share the same domain, the domain of masonry. Carpenter and mason share the same function, the function of artisans. Wood and stone share the same function, the function of materials. In the composition dog house, kennel has some domain overlap with both dog and house (the domains of pets and buildings). The function of kennel is similar to the function of house (the function of shelters). By combining domain and function similarities in various ways, we can model relations, compositions, and other aspects of semantics."
]
}
|
1405.7908
|
2169417777
|
Semantic composition is the task of understanding the meaning of text by composing the meanings of the individual words in the text. Semantic decomposition is the task of understanding the meaning of an individual word by decomposing it into various aspects (factors, constituents, components) that are latent in the meaning of the word. We take a distributional approach to semantics, in which a word is represented by a context vector. Much recent work has considered the problem of recognizing compositions and decompositions, but we tackle the more difficult generation problem. For simplicity, we focus on noun-modifier bigrams and noun unigrams. A test for semantic composition is, given context vectors for the noun and modifier in a noun-modifier bigram (red salmon), generate a noun unigram that is synonymous with the given bigram (sockeye). A test for semantic decomposition is, given a context vector for a noun unigram (snifter), generate a noun-modifier bigram that is synonymous with the given unigram (brandy glass). With a vocabulary of about 73,000 unigrams from WordNet, there are 73,000 candidate unigram compositions for a bigram and 5,300,000,000 (73,000 squared) candidate bigram decompositions for a unigram. We generate ranked lists of potential solutions in two passes. A fast unsupervised learning algorithm generates an initial list of candidates and then a slower supervised learning algorithm refines the list. We evaluate the candidate solutions by comparing them to WordNet synonym sets. For decomposition (unigram to bigram), the top 100 most highly ranked bigrams include a WordNet synonym of the given unigram 50.7% of the time. For composition (bigram to unigram), the top 100 most highly ranked unigrams include a WordNet synonym of the given bigram 77.8% of the time.
|
Many other ideas have been proposed for extending distributional semantics to phrases and sentences. Recently there have been several overviews of this topic @cite_5 @cite_7 @cite_27 . Most of the proposed extensions to distributional semantics involve operations from linear algebra, such as tensor products @cite_30 @cite_31 @cite_20 @cite_9 . Another proposal is to operate on similarities instead of (or in addition to) working directly with context vectors @cite_33 @cite_7 @cite_21 .
|
{
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_7",
"@cite_9",
"@cite_21",
"@cite_27",
"@cite_5",
"@cite_31",
"@cite_20"
],
"mid": [
"2193700427",
"2103305545",
"2127002961",
"1845242646",
"2962769333",
"2250379126",
"1984052055",
"",
""
],
"abstract": [
"There are two main approaches to the representation of meaning in Computational Linguistics: a symbolic approach and a distributional approach. This paper considers the fundamental question of how these approaches might be combined. The proposal is to adapt a method from the Cognitive Science literature, in which symbolic and connectionist representations are combined using tensor products. Possible applications of this method for language processing are described. Finally, a potentially fruitful link between Quantum Mechanics, Computational Linguistics, and other related areas such as Information Retrieval and Machine Learning, is proposed.",
"Paraphrase detection is the task of examining two sentences and determining whether they have the same meaning. In order to obtain high accuracy on this task, thorough syntactic and semantic analysis of the two statements is needed. We introduce a method for paraphrase detection based on recursive autoencoders (RAE). Our unsupervised RAEs are based on a novel unfolding objective and learn feature vectors for phrases in syntactic trees. These features are used to measure the word- and phrase-wise similarity between two sentences. Since sentences may be of arbitrary length, the resulting matrix of similarity measures is of variable size. We introduce a novel dynamic pooling layer which computes a fixed-sized representation from the variable-sized matrices. The pooled representation is then used as input to a classifier. Our method outperforms other state-of-the-art approaches on the challenging MSRP paraphrase corpus.",
"Given appropriate representations of the semantic relations between carpenter and wood and between mason and stone (for example, vectors in a vector space model), a suitable algorithm should be able to recognize that these relations are highly similar (carpenter is to wood as mason is to stone; the relations are analogous). Likewise, with representations of dog, house, and kennel, an algorithm should be able to recognize that the semantic composition of dog and house, dog house, is highly similar to kennel (dog house and kennel are synonymous). It seems that these two tasks, recognizing relations and compositions, are closely connected. However, up to now, the best models for relations are significantly different from the best models for compositions. In this paper, we introduce a dual-space model that unifies these two tasks. This model matches the performance of the best previous models for relations and compositions. The dual-space model consists of a space for measuring domain similarity and a space for measuring function similarity. Carpenter and wood share the same domain, the domain of carpentry. Mason and stone share the same domain, the domain of masonry. Carpenter and mason share the same function, the function of artisans. Wood and stone share the same function, the function of materials. In the composition dog house, kennel has some domain overlap with both dog and house (the domains of pets and buildings). The function of kennel is similar to the function of house (the function of shelters). By combining domain and function similarities in various ways, we can model relations, compositions, and other aspects of semantics.",
"Formal and distributional semantic models offer complementary benefits in modeling meaning. The categorical compositional distributional model of meaning of (2010) (abbreviated to DisCoCat in the title) combines aspects of both to provide a general framework in which meanings of words, obtained distributionally, are composed using methods from the logical setting to form sentence meaning. Concrete consequences of this general abstract setting and applications to empirical data are under active study (, 2011; Grefenstette and Sadrzadeh, 2011). In this paper, we extend this study by examining transitive verbs, represented as matrices in a DisCoCat. We discuss three ways of constructing such matrices, and evaluate each method in a disambiguation task developed by Grefenstette and Sadrzadeh (2011).",
"There have been several efforts to extend distributional semantics beyond individual words, to measure the similarity of word pairs, phrases, and sentences (briefly, tuples ; ordered sets of words, contiguous or noncontiguous). One way to extend beyond words is to compare two tuples using a function that combines pairwise similarities between the component words in the tuples. A strength of this approach is that it works with both relational similarity (analogy) and compositional similarity (paraphrase). However, past work required hand-coding the combination function for different tasks. The main contribution of this paper is that combination functions are generated by supervised learning. We achieve state-of-the-art results in measuring relational similarity between word pairs (SAT analogies and SemEval 2012 Task 2) and measuring compositional similarity between noun-modifier phrases and unigrams (multiple-choice paraphrase questions).",
"Distributional representations have recently been proposed as a general-purpose representation of natural language meaning, to replace logical form. There is, however, one important difference between logical and distributional representations: Logical languages have a clear semantics, while distributional representations do not. In this paper, we propose a semantics for distributional representations that links points in vector space to mental concepts. We extend this framework to a joint semantics of logic and distributions by linking intensions of logical expressions to mental concepts.",
"Vector-based models of word meaning have become increasingly popular in cognitive science. The appeal of these models lies in their ability to represent meaning simply by using distributional information under the assumption that words occurring within similar contexts are semantically similar. Despite their widespread use, vector-based models are typically directed at representing words in isolation, and methods for constructing representations for phrases or sentences have received little attention in the literature. This is in marked contrast to experimental evidence (e.g., in sentential priming) suggesting that semantic similarity is more complex than simply a relation between isolated words. This article proposes a framework for representing the meaning of word combinations in vector space. Central to our approach is vector composition, which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models that we evaluate empirically on a phrase similarity task.",
"",
""
]
}
|
1405.7908
|
2169417777
|
Semantic composition is the task of understanding the meaning of text by composing the meanings of the individual words in the text. Semantic decomposition is the task of understanding the meaning of an individual word by decomposing it into various aspects (factors, constituents, components) that are latent in the meaning of the word. We take a distributional approach to semantics, in which a word is represented by a context vector. Much recent work has considered the problem of recognizing compositions and decompositions, but we tackle the more difficult generation problem. For simplicity, we focus on noun-modifier bigrams and noun unigrams. A test for semantic composition is, given context vectors for the noun and modifier in a noun-modifier bigram (red salmon), generate a noun unigram that is synonymous with the given bigram (sockeye). A test for semantic decomposition is, given a context vector for a noun unigram (snifter), generate a noun-modifier bigram that is synonymous with the given unigram (brandy glass). With a vocabulary of about 73,000 unigrams from WordNet, there are 73,000 candidate unigram compositions for a bigram and 5,300,000,000 (73,000 squared) candidate bigram decompositions for a unigram. We generate ranked lists of potential solutions in two passes. A fast unsupervised learning algorithm generates an initial list of candidates and then a slower supervised learning algorithm refines the list. We evaluate the candidate solutions by comparing them to WordNet synonym sets. For decomposition (unigram to bigram), the top 100 most highly ranked bigrams include a WordNet synonym of the given unigram 50.7% of the time. For composition (bigram to unigram), the top 100 most highly ranked unigrams include a WordNet synonym of the given bigram 77.8% of the time.
|
Much work focuses on finding the right @math for various types of semantic composition @cite_30 @cite_31 @cite_22 @cite_5 @cite_16 @cite_9 . We call this general approach context composition , due to the arguments of the function @math .
|
{
"cite_N": [
"@cite_30",
"@cite_22",
"@cite_9",
"@cite_5",
"@cite_31",
"@cite_16"
],
"mid": [
"2193700427",
"",
"1845242646",
"1984052055",
"",
"2251471769"
],
"abstract": [
"There are two main approaches to the representation of meaning in Computational Linguistics: a symbolic approach and a distributional approach. This paper considers the fundamental question of how these approaches might be combined. The proposal is to adapt a method from the Cognitive Science literature, in which symbolic and connectionist representations are combined using tensor products. Possible applications of this method for language processing are described. Finally, a potentially fruitful link between Quantum Mechanics, Computational Linguistics, and other related areas such as Information Retrieval and Machine Learning, is proposed.",
"",
"Formal and distributional semantic models offer complementary benefits in modeling meaning. The categorical compositional distributional model of meaning of (2010) (abbreviated to DisCoCat in the title) combines aspects of both to provide a general framework in which meanings of words, obtained distributionally, are composed using methods from the logical setting to form sentence meaning. Concrete consequences of this general abstract setting and applications to empirical data are under active study (, 2011; Grefenstette and Sadrzadeh, 2011). In this paper, we extend this study by examining transitive verbs, represented as matrices in a DisCoCat. We discuss three ways of constructing such matrices, and evaluate each method in a disambiguation task developed by Grefenstette and Sadrzadeh (2011).",
"Vector-based models of word meaning have become increasingly popular in cognitive science. The appeal of these models lies in their ability to represent meaning simply by using distributional information under the assumption that words occurring within similar contexts are semantically similar. Despite their widespread use, vector-based models are typically directed at representing words in isolation, and methods for constructing representations for phrases or sentences have received little attention in the literature. This is in marked contrast to experimental evidence (e.g., in sentential priming) suggesting that semantic similarity is more complex than simply a relation between isolated words. This article proposes a framework for representing the meaning of word combinations in vector space. Central to our approach is vector composition, which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models that we evaluate empirically on a phrase similarity task.",
"",
"In this paper we explore the computational modelling of compositionality in distributional models of semantics. In particular, we model the semantic composition of pairs of adjacent English Adjectives and Nouns from the British National Corpus. We build a vector-based semantic space from a lemmatised version of the BNC, where the most frequent A-N lemma pairs are treated as single tokens. We then extrapolate three different models of compositionality: a simple additive model, a pointwise-multiplicative model and a Partial Least Squares Regression (PLSR) model. We propose two evaluation methods for the implemented models. Our study leads to the conclusion that regression-based models of compositionality generally out-perform additive and multiplicative approaches, and also show a number of advantages that make them very promising for future research."
]
}
|
1405.7908
|
2169417777
|
Semantic composition is the task of understanding the meaning of text by composing the meanings of the individual words in the text. Semantic decomposition is the task of understanding the meaning of an individual word by decomposing it into various aspects (factors, constituents, components) that are latent in the meaning of the word. We take a distributional approach to semantics, in which a word is represented by a context vector. Much recent work has considered the problem of recognizing compositions and decompositions, but we tackle the more difficult generation problem. For simplicity, we focus on noun-modifier bigrams and noun unigrams. A test for semantic composition is, given context vectors for the noun and modifier in a noun-modifier bigram (red salmon), generate a noun unigram that is synonymous with the given bigram (sockeye). A test for semantic decomposition is, given a context vector for a noun unigram (snifter), generate a noun-modifier bigram that is synonymous with the given unigram (brandy glass). With a vocabulary of about 73,000 unigrams from WordNet, there are 73,000 candidate unigram compositions for a bigram and 5,300,000,000 (73,000 squared) candidate bigram decompositions for a unigram. We generate ranked lists of potential solutions in two passes. A fast unsupervised learning algorithm generates an initial list of candidates and then a slower supervised learning algorithm refines the list. We evaluate the candidate solutions by comparing them to WordNet synonym sets. For decomposition (unigram to bigram), the top 100 most highly ranked bigrams include a WordNet synonym of the given unigram 50.7% of the time. For composition (bigram to unigram), the top 100 most highly ranked unigrams include a WordNet synonym of the given bigram 77.8% of the time.
|
In a noun-modifier phrase, the modifier may be either a noun or an adjective; therefore adjective-noun phrases are a subset of noun-modifier phrases. dinu13 hypothesize that adjectives are functions that map nouns onto modified nouns @cite_1 ; thus, they believe that noun-noun phrases and adjective-noun phrases require different kinds of models. The models we present here () treat all noun-modifiers the same way; hence our datasets contain both noun-noun phrases and adjective-noun phrases. For comparison, we will also evaluate our models on the dinu13 adjective-noun dataset ().
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"1608322251"
],
"abstract": [
"We propose an approach to adjective-noun composition (AN) for corpus-based distributional semantics that, building on insights from theoretical linguistics, represents nouns as vectors and adjectives as data-induced (linear) functions (encoded as matrices) over nominal vectors. Our model significantly outperforms the rivals on the task of reconstructing AN vectors not seen in training. A small post-hoc analysis further suggests that, when the model-generated AN vector is not similar to the corpus-observed AN vector, this is due to anomalies in the latter. We show moreover that our approach provides two novel ways to represent adjective meanings, alternative to its representation via corpus-based co-occurrence vectors, both outperforming the latter in an adjective clustering task."
]
}
|
1405.7058
|
1813069714
|
Regular expression matching using backtracking can have exponential runtime, leading to an algorithmic complexity attack known as REDoS in the systems security literature. In this paper, we build on a recently published static analysis that detects whether a given regular expression can have exponential runtime for some inputs. We systematically construct a more accurate analysis by forming powers and products of transition relations and thereby reducing the REDoS problem to reachability. The correctness of the analysis is proved using a substructural calculus of search trees, where the branching of the tree causing exponential blowup is characterized as a form of non-linearity.
|
The starting point for the present paper was the regular expression analysis RXXR @cite_10 . While that paper was aimed at a security audience, the present paper complements it by using a programming language approach inspired by type theory and logic.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"87129872"
],
"abstract": [
"Regular expressions are a concise yet expressive language for expressing patterns. For instance, in networked software, they are used for input validation and intrusion detection. Yet some widely deployed regular expression matchers based on backtracking are themselves vulnerable to denial-of-service attacks, since their runtime can be exponential for certain input strings. This paper presents a static analysis for detecting such vulnerable regular expressions. The running time of the analysis compares favourably with tools based on fuzzing, that is, randomly generating inputs and measuring how long matching them takes. Unlike fuzzers, the analysis pinpoints the source of the vulnerability and generates possible malicious inputs for programmers to use in security testing. Moreover, the analysis has a firm theoretical foundation in abstract machines. Testing the analysis on two large repositories of regular expressions shows that the analysis is able to find significant numbers of vulnerable regular expressions in a matter of seconds."
]
}
|
1405.7058
|
1813069714
|
Regular expression matching using backtracking can have exponential runtime, leading to an algorithmic complexity attack known as REDoS in the systems security literature. In this paper, we build on a recently published static analysis that detects whether a given regular expression can have exponential runtime for some inputs. We systematically construct a more accurate analysis by forming powers and products of transition relations and thereby reducing the REDoS problem to reachability. The correctness of the analysis is proved using a substructural calculus of search trees, where the branching of the tree causing exponential blowup is characterized as a form of non-linearity.
|
Program analysis for security is by now a well established field @cite_21 . REDoS is known in the literature as a special case of algorithmic complexity attacks @cite_20 @cite_8 . Parsing Expression Grammars (PEGs) have been proposed as an alternative to regular expressions @cite_28 that avoid their nondeterminism. In a series of tutorials @cite_6 @cite_16 , Cox has argued for Thompson's lockstep matcher @cite_24 as a superior alternative to backtracking matchers. However, backtracking matchers vulnerable to REDoS are still widely deployed, including the matchers in the Java and .NET platforms as well as the PCRE matcher used in some intrusion detection systems. Hence the REDoS problem will remain with us for the foreseeable future.
|
{
"cite_N": [
"@cite_8",
"@cite_28",
"@cite_21",
"@cite_6",
"@cite_24",
"@cite_16",
"@cite_20"
],
"mid": [
"2105956753",
"2018045485",
"2158297335",
"",
"",
"2139626883",
"1563402047"
],
"abstract": [
"Network Intrusion Detection Systems (NIDS) have become crucial to securing modern networks. To be effective, a NIDS must be able to counter evasion attempts and operate at or near wire-speed. Failure to do so allows malicious packets to slip through a NIDS undetected. In this paper, we explore NIDS evasion through algorithmic complexity attacks. We present a highly effective attack against the Snort NIDS, and we provide a practical algorithmic solution that successfully thwarts the attack. This attack exploits the behavior of rule matching, yielding inspection times that are up to 1.5 million times slower than that of benign packets. Our analysis shows that this attack is applicable to many rules in Snort's ruleset, rendering vulnerable the thousands of networks protected by it. Our countermeasure confines the inspection time to within one order of magnitude of benign packets. Experimental results using a live system show that an attacker needs only 4.0 kbps of bandwidth to perpetually disable an unmodified NIDS, whereas all intrusions are detected when our countermeasure is used.",
"For decades we have been using Chomsky's generative system of grammars, particularly context-free grammars (CFGs) and regular expressions (REs), to express the syntax of programming languages and protocols. The power of generative grammars to express ambiguity is crucial to their original purpose of modelling natural languages, but this very power makes it unnecessarily difficult both to express and to parse machine-oriented languages using CFGs. Parsing Expression Grammars (PEGs) provide an alternative, recognition-based formal foundation for describing machine-oriented syntax, which solves the ambiguity problem by not introducing ambiguity in the first place. Where CFGs express nondeterministic choice between alternatives, PEGs instead use prioritized choice. PEGs address frequently felt expressiveness limitations of CFGs and REs, simplifying syntax definitions and making it unnecessary to separate their lexical and hierarchical components. A linear-time parser can be built for any PEG, avoiding both the complexity and fickleness of LR parsers and the inefficiency of generalized CFG parsing. While PEGs provide a rich set of operators for constructing grammars, they are reducible to two minimal recognition schemas developed around 1970, TS (TDPL) and gTS (GTDPL), which are here proven equivalent in effective recognition power.",
"All software projects are guaranteed to have one artifact in common: source code. Together with architectural risk analysis, code review for security ranks very high on the list of software security best practices. We look at how to automate source-code security analysis with static analysis tools.",
"",
"",
"Reynolds's defunctionalization technique is a whole-program transformation from higher-order to first-order functional programs. We study practical applications of this transformation and uncover new connections between seemingly unrelated higher-order and first-order specifications and between their correctness proofs. Defunctionalization therefore appearsboth as a springboard for rev ealing new connections and as a bridge for transferring existing results between the first-order world and the higher-order world.",
"We present a new class of low-bandwidth denial of service attacks that exploit algorithmic deficiencies in many common applications' data structures. Frequently used data structures have \"average-case\" expected running time that's far more efficient than the worst case. For example, both binary trees and hash tables can degenerate to linked lists with carefully chosen input. We show how an attacker can effectively compute such input, and we demonstrate attacks against the hash table implementations in two versions of Perl, the Squid web proxy, and the Bro intrusion detection system. Using bandwidth less than a typical dialup modem, we can bring a dedicated Bro server to its knees; after six minutes of carefully chosen packets, our Bro server was dropping as much as 71% of its traffic and consuming all of its CPU. We show how modern universal hashing techniques can yield performance comparable to commonplace hash functions while being provably secure against these attacks."
]
}
|
1405.7545
|
160239212
|
The recent trend in action recognition is towards larger datasets, an increasing number of action classes and larger visual vocabularies. State-of-the-art human action classification in challenging video data is currently based on a bag-of-visual-words pipeline in which space-time features are aggregated globally to form a histogram. The strategies chosen to sample features and construct a visual vocabulary are critical to performance, in fact often dominating performance. In this work we provide a critical evaluation of various approaches to building a vocabulary and show that good practices do have a significant impact. By subsampling and partitioning features strategically, we are able to achieve state-of-the-art results on 5 major action recognition datasets using relatively small visual vocabularies.
|
BoF per category The selection of good partitioning clusters to form a visual vocabulary is also important, as they form the basic units of the histogram representation on which a classifier will base its decision @cite_12 . Thus, for a categorisation task, having clusters which represent feature patches that distinguish the classes is most likely to make classification easier. This motivates per-category clustering, to preserve discriminative information that may be lost by a universal vocabulary @cite_12 , especially when distinct categories are very similar. The downside is that learning a separate visual vocabulary per class may also generate many redundant clusters when features are shared amongst multiple categories. On the other hand, since the complexity of building visual vocabularies depends on the number of cluster centres @math , clustering features independently allows us to reduce @math whilst keeping the representation dimensionality high, and makes the vocabulary learning easily parallelisable. So far, per-category training has shown promise on a single dataset with a small number of classes @cite_12 ; it therefore remains to be seen how it performs on challenging action classification data with a large number of action classes.
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"280632315"
],
"abstract": [
"In this paper we propose two distinct enhancements to the basic “bag-of-keypoints” image categorisation scheme proposed in [4]. In this approach images are represented as a variable sized set of local image features (keypoints). Thus, we require machine learning tools which can operate on sets of vectors. In [4] this is achieved by representing the set as a histogram over bins found by k-means. We show how this approach can be improved and generalised using Gaussian Mixture Models (GMMs). Alternatively, the set of keypoints can be represented directly as a probability density function, over which a kernel can be defined. This approach is shown to give state of the art categorisation performance."
]
}
|
1405.7475
|
2952407544
|
Graph-based assessment formalisms have proven to be useful in the safety, dependability, and security communities to help stakeholders manage risk and maintain appropriate documentation throughout the system lifecycle. In this paper, we propose a set of methods to automatically construct security argument graphs, a graphical formalism that integrates various security-related information to argue about the security level of a system. Our approach is to generate the graph in a progressive manner by exploiting logical relationships among pieces of diverse input information. Using those emergent argument patterns as a starting point, we define a set of extension templates that can be applied iteratively to grow a security argument graph. Using a scenario from the electric power sector, we demonstrate the graph generation process and highlight its application for system security evaluation in our prototype software tool, CyberSAGE.
|
Safety case generation A safety case uses certain argument strategies to organize a body of evidence so as to provide a compelling case for supporting certain safety claims (goals) @cite_7 . Safety cases are typically constructed manually. Recent efforts (e.g., @cite_5 @cite_19 ) have begun to introduce formal semantics to help automate the safety case generation process. Compared with those recent efforts, our proposed approach focuses on argument patterns that incorporate various security-related evidence, including security goals and attacker models. We also formalize the template in a local way, which simplifies the definition and instantiation of the template while still allowing progressive generation of the argument graph.
|
{
"cite_N": [
"@cite_19",
"@cite_5",
"@cite_7"
],
"mid": [
"177970991",
"1855492256",
"2144814285"
],
"abstract": [
"By capturing common structures of successful arguments, safety case patterns provide an approach for reusing strategies for reasoning about safety. In the current state of the practice, patterns exist as descriptive specifications with informal semantics, which not only offer little opportunity for more sophisticated usage such as automated instantiation, composition and manipulation, but also impede standardization efforts and tool interoperability. To address these concerns, this paper gives (i) a formal definition for safety case patterns, clarifying both restrictions on the usage of multiplicity and well-founded recursion in structural abstraction, (ii) formal semantics to patterns, and (iii) a generic data model and algorithm for pattern instantiation. We illustrate our contributions by application to a new pattern, the requirements breakdown pattern, which builds upon our previous work.",
"We describe a method for the automatic assembly of aviation safety cases by combining auto-generated argument fragments derived from the application of a formal method to software, with manually created argument fragments derived from system safety analysis. Our approach emphasizes the heterogeneity of safety-relevant information and we show how such diverse content can be integrated into a single safety case. We illustrate our approach by applying it to an experimental Unmanned Aircraft System (UAS).",
""
]
}
|
1405.7475
|
2952407544
|
Graph-based assessment formalisms have proven to be useful in the safety, dependability, and security communities to help stakeholders manage risk and maintain appropriate documentation throughout the system lifecycle. In this paper, we propose a set of methods to automatically construct security argument graphs, a graphical formalism that integrates various security-related information to argue about the security level of a system. Our approach is to generate the graph in a progressive manner by exploiting logical relationships among pieces of diverse input information. Using those emergent argument patterns as a starting point, we define a set of extension templates that can be applied iteratively to grow a security argument graph. Using a scenario from the electric power sector, we demonstrate the graph generation process and highlight its application for system security evaluation in our prototype software tool, CyberSAGE.
|
Fault tree generation Fault tree analysis is a classic deductive method used to determine what combinations of basic component failures can lead to a system-level fault event @cite_8 . While fault trees are usually constructed manually in practice, there has been a steady stream of efforts to automate the fault tree generation process. For example, @cite_4 propose a method to transform a UML system model to dynamic fault trees. @cite_1 propose a method to automatically generate a dynamic fault tree from an Architectural Analysis and Design Language (AADL) model. Recently, @cite_11 propose an automatic synthesis method to generate a static fault tree from a system model specified with SysML.
|
{
"cite_N": [
"@cite_1",
"@cite_11",
"@cite_4",
"@cite_8"
],
"mid": [
"",
"2103311653",
"2135675358",
"1543835543"
],
"abstract": [
"",
"Fault tree analysis (FTA) is a traditional reliability analysis technique. In practice, the manual development of fault trees could be costly and error-prone, especially in the case of fault tolerant systems due to the inherent complexities such as various dependencies and interactions among components. Some dynamic fault tree gates, such as Functional Dependency (FDEP) and Priority AND (PAND), are proposed to model the functional and sequential dependencies, respectively. Unfortunately, the potential semantic troubles and limitations of these gates have not been well studied before. In this paper, we describe a framework to automatically generate static fault trees from system models specified with SysML. A reliability configuration model (RCM) and a static fault tree model (SFTM) are proposed to embed system configuration information needed for reliability analysis and error mechanism for fault tree generation, respectively. In the SFTM, the static representations of functional and sequential dependencies with standard Boolean AND and OR gates are proposed, which can avoid the problems of the dynamic FDEP and PAND gates and can reduce the cost of analysis based on a combinatorial model. A fault-tolerant parallel processor (FTTP) example is used to demonstrate our approach.",
"The reliability of a computer-based system may be as important as its performance and its correctness of computation. It is worthwhile to estimate system reliability at the conceptual design stage, since reliability can influence the subsequent design decisions and may often be pivotal for making trade-offs or in establishing system cost. In this paper we describe a framework for modeling computer-based systems, based on the Unified Modeling Language (UML), that facilitates automated dependability analysis during design. An algorithm to automatically synthesize dynamic fault trees (DFTs) from the UML system model is developed. We succeed both in embedding information needed for reliability analysis within the system model and in generating the DFT Thereafter, we evaluate our approach using examples of real systems. We analytically compute system unreliability from the algorithmically developed DFT and we compare our results with the analytical solution of manually developed DFTs. Our solutions produce the same results as manually generated DFTs.",
"Introduction: Since 1975, a short course entitled \"System Safety and Reliability Analysis\" has been presented to over 200 NRC personnel and contractors. The course has been taught jointly by David F. Haasl, Institute of System Sciences, Professor Norman H. Roberts, University of Washington, and members of the Probabilistic Analysis Staff, NRC, as part of a risk assessment training program sponsored by the Probabilistic Analysis Staff. This handbook has been developed not only to serve as text for the System Safety and Reliability Course, but also to make available to others a set of otherwise undocumented material on fault tree construction and evaluation. The publication of this handbook is in accordance with the recommendations of the Risk Assessment Review Group Report (NUREG CR-0400) in which it was stated that the fault event tree methodology both can and should be used more widely by the NRC. It is hoped that this document will help to codify and systematize the fault tree approach to systems analysis."
]
}
|
1405.7475
|
2952407544
|
Graph-based assessment formalisms have proven to be useful in the safety, dependability, and security communities to help stakeholders manage risk and maintain appropriate documentation throughout the system lifecycle. In this paper, we propose a set of methods to automatically construct security argument graphs, a graphical formalism that integrates various security-related information to argue about the security level of a system. Our approach is to generate the graph in a progressive manner by exploiting logical relationships among pieces of diverse input information. Using those emergent argument patterns as a starting point, we define a set of extension templates that can be applied iteratively to grow a security argument graph. Using a scenario from the electric power sector, we demonstrate the graph generation process and highlight its application for system security evaluation in our prototype software tool, CyberSAGE.
|
Attack trees and other security assessment techniques Attack trees and their variations (e.g., attack graphs @cite_10 @cite_15 , ADVISE @cite_17 , and attack-defense trees @cite_0 ) have been shown to be useful for security assessment. Inspired by the fault tree formalism, an attack tree graphically represents how a potential threat can be realized through various possible combinations of attacks. Attack trees are usually constructed manually. For the specific domain of network security, multiple efforts (e.g., @cite_10 @cite_15 ) have tried to automate the generation of attack graphs, which are meant to model how an attacker can use staged attacks to compromise certain assets in a network. ADVISE @cite_17 automates the search for attack strategies. Those efforts differ from ours in that they do not provide a framework that can automatically integrate heterogeneous pieces of information (e.g., relating to security goals, workflows, system information, and the attacker) to produce a holistic security argument.
|
{
"cite_N": [
"@cite_0",
"@cite_15",
"@cite_10",
"@cite_17"
],
"mid": [
"1521415124",
"2110908300",
"2121805588",
"2111145992"
],
"abstract": [
"We introduce and give formal definitions of attack-defense trees. We argue that these trees are a simple, yet powerful tool to analyze complex security and privacy problems. Our formalization is generic in the sense that it supports different semantical approaches. We present several semantics for attack-defense trees along with usage scenarios, and we show how to evaluate attributes.",
"Attack graphs are important tools for analyzing security vulnerabilities in enterprise networks. Previous work on attack graphs has not provided an account of the scalability of the graph generating process, and there is often a lack of logical formalism in the representation of attack graphs, which results in the attack graph being difficult to use and understand by human beings. Pioneer work by Sheyner, et al is the first attack-graph tool based on formal logical techniques, namely model-checking. However, when applied to moderate-sized networks, Sheyner's tool encountered a significant exponential explosion problem. This paper describes a new approach to represent and generate attack graphs. We propose logical attack graphs, which directly illustrate logical dependencies among attack goals and configuration information. A logical attack graph always has size polynomial to the network being analyzed. Our attack graph generation tool builds upon MulVAL, a network security analyzer based on logical programming. We demonstrate how to produce a derivation trace in the MulVAL logic-programming engine, and how to use the trace to generate a logical attack graph in quadratic time. We show experimental evidence that our logical attack graph generation algorithm is very efficient. We have generated logical attack graphs for fully connected networks of 1000 machines using a Pentium 4 CPU with 1GB of RAM.",
"An integral part of modeling the global view of network security is constructing attack graphs. Manual attack graph construction is tedious, error-prone, and impractical for attack graphs larger than a hundred nodes. In this paper we present an automated technique for generating and analyzing attack graphs. We base our technique on symbolic model checking algorithms, letting us construct attack graphs automatically and efficiently. We also describe two analyses to help decide which attacks would be most cost-effective to guard against. We implemented our technique in a tool suite and tested it on a small network example, which includes models of a firewall and an intrusion detection system.",
"System architects need quantitative security metrics to make informed trade-off decisions involving system security. The security metrics need to provide insight on weak points in the system defense, considering characteristics of both the system and its adversaries. To provide such metrics, we formally define the ADversary View Security Evaluation (ADVISE) method. Our approach is to create an executable state-based security model of a system and an adversary that represents how the adversary is likely to attack the system and the results of such an attack. The attack decision function uses information about adversary attack preferences and possible attacks against the system to mimic how the adversary selects the most attractive next attack step. The adversary's decision involves looking ahead some number of attack steps. System architects can use ADVISE to compare the security strength of system architecture variants and analyze the threats posed by different adversaries. We demonstrate the feasibility and benefits of ADVISE using a case study. To produce quantitative model-based security metrics, we have implemented the ADVISE method in a tool that facilitates user input of system and adversary data and automatically generates executable models."
]
}
|
1405.6824
|
1594182567
|
Assessing political conversations in social media requires a deeper understanding of the underlying practices and styles that drive these conversations. In this paper, we present a computational approach for assessing online conversational practices of political parties. Following a deductive approach, we devise a number of quantitative measures from a discussion of theoretical constructs in sociological theory. The resulting measures make different - mostly qualitative - aspects of online conversational practices amenable to computation. We evaluate our computational approach by applying it in a case study. In particular, we study online conversational practices of German politicians on Twitter during the German federal election 2013. We find that political parties share some interesting patterns of behavior, but also exhibit some unique and interesting idiosyncrasies. Our work sheds light on (i) how complex cultural phenomena such as online conversational practices are amenable to quantification and (ii) the way social media such as Twitter are utilized by political parties.
|
Twitter is used for many purposes, including the reporting of daily activities, communicating with other users, sharing information, and reporting, or commenting on, news @cite_1 . As such it enables several conversational practices. Hashtags are primarily used to describe news or communications with others and to find other users’ tweets about certain topics. Since tagging behavior is inspired by the observed use of hashtags in a user’s network @cite_28 , coherent semantic structures emerge from hashtag streams @cite_14 . Retweeting is the forwarding of other users’ tweets. By 2010, conventions as to how, why, and what users retweet had emerged, but the practice had not yet stabilized @cite_7 . The retweetability of a Twitter message is related to its informational content and value and the embeddedness of its sender in following networks @cite_18 . Mentions, sometimes called @mentions or replies, emerged as Twitter’s convention for the interactive use of addressing others, although it is also used for other purposes such as referencing. In 2009, most tweets without a mention reported daily activities, while tweets with @ signs exhibited much higher variance in terms of topics and types of content @cite_25 .
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_7",
"@cite_28",
"@cite_1",
"@cite_25"
],
"mid": [
"2026318959",
"2028900906",
"2001653897",
"2106321773",
"2046804949",
"2140173168"
],
"abstract": [
"Retweeting is the key mechanism for information diffusion in Twitter. It emerged as a simple yet powerful way of disseminating information in the Twitter social network. Even though a lot of information is shared in Twitter, little is known yet about how and why certain information spreads more widely than others. In this paper, we examine a number of features that might affect retweetability of tweets. We gathered content and contextual features from 74M tweets and used this data set to identify factors that are significantly associated with retweet rate. We also built a predictive retweet model. We found that, amongst content features, URLs and hashtags have strong relationships with retweetability. Amongst contextual features, the number of followers and followees as well as the age of the account seem to affect retweetability, while, interestingly, the number of past tweets does not predict retweetability of a user's tweet. We believe that this research would inform the design of sensemaking and analytics tools for social media streams.",
"Although one might argue that little wisdom can be conveyed in messages of 140 characters or less, this paper sets out to explore whether the aggregation of messages in social awareness streams, such as Twitter, conveys meaningful information about a given domain. As a research community, we know little about the structural and semantic properties of such streams, and how they can be analyzed, characterized and used. This paper introduces a network-theoretic model of social awareness stream, a so-called \"tweetonomy\", together with a set of stream-based measures that allow researchers to systematically define and compare different stream aggregations. We apply the model and measures to a dataset acquired from Twitter to study emerging semantics in selected streams. The network-theoretic model and the corresponding measures introduced in this paper are relevant for researchers interested in information retrieval and ontology learning from social awareness streams. Our empirical findings demonstrate that different social awareness stream aggregations exhibit interesting differences, making them amenable for different applications.",
"Twitter - a microblogging service that enables users to post messages (\"tweets\") of up to 140 characters - supports a variety of communicative practices; participants use Twitter to converse with individuals, groups, and the public at large, so when conversations emerge, they are often experienced by broader audiences than just the interlocutors. This paper examines the practice of retweeting as a way by which participants can be \"in a conversation.\" While retweeting has become a convention inside Twitter, participants retweet using different styles and for diverse reasons. We highlight how authorship, attribution, and communicative fidelity are negotiated in diverse ways. Using a series of case studies and empirical data, this paper maps out retweeting as a conversational practice.",
"Users on Twitter, a microblogging service, started the phenomenon of adding tags to their messages sometime around February 2008. These tags are distinct from those in other Web 2.0 systems because users are less likely to index messages for later retrieval. We compare tagging patterns in Twitter with those in Delicious to show that tagging behavior in Twitter is different because of its conversational, rather than organizational nature. We use a mixed method of statistical analysis and an interpretive approach to study the phenomenon. We find that tagging in Twitter is more about filtering and directing content so that it appears in certain streams. The most illustrative example of how tagging in Twitter differs is the phenomenon of the Twitter micro-meme: emergent topics for which a tag is created, used widely for a few days, then disappears. We describe the micro-meme phenomenon and discuss the importance of this new tagging practice for the larger real-time search context.",
"Microblogging is a new form of communication in which users can describe their current status in short posts distributed by instant messages, mobile phones, email or the Web. Twitter, a popular microblogging tool has seen a lot of growth since it launched in October, 2006. In this paper, we present our observations of the microblogging phenomena by studying the topological and geographical properties of Twitter's social network. We find that people use microblogging to talk about their daily activities and to seek or share information. Finally, we analyze the user intentions associated at a community level and show how users with similar intentions connect with each other.",
"The microblogging service Twitter is in the process of being appropriated for conversational interaction and is starting to be used for collaboration, as well. In order to determine how well Twitter supports user-touser exchanges, what people are using Twitter for, and what usage or design modifications would make it (more) usable as a tool for collaboration, this study analyzes a corpus of naturally-occurring public Twitter messages (tweets), focusing on the functions and uses of the @ sign and the coherence of exchanges. The findings reveal a surprising degree of conversationality, facilitated especially by the use of @ as a marker of addressivity, and shed light on the limitations of Twitters current design for collaborative use."
]
}
|
1405.6223
|
2949245459
|
The essence of the challenges cold start and sparsity in Recommender Systems (RS) is that the extant techniques, such as Collaborative Filtering (CF) and Matrix Factorization (MF), mainly rely on the user-item rating matrix, which sometimes is not informative enough for predicting recommendations. To solve these challenges, the objective item attributes are incorporated as complementary information. However, most of the existing methods for inferring the relationships between items assume that the attributes are "independently and identically distributed (iid)", which does not always hold in reality. In fact, the attributes are more or less coupled with each other by some implicit relationships. Therefore, in this pa-per we propose an attribute-based coupled similarity measure to capture the implicit relationships between items. We then integrate the implicit item coupling into MF to form the Coupled Item-based Matrix Factorization (CIMF) model. Experimental results on two open data sets demonstrate that CIMF outperforms the benchmark methods.
|
Content-based techniques are another successful approach for recommending relevant items to users, matching users' personal interests to descriptive item information @cite_20 @cite_23 @cite_22 . Generally, content-based methods are able to cope with the sparsity problem; however, they often assume that an item's attributes are "iid", which does not always hold in reality. Several approaches @cite_7 @cite_19 @cite_13 @cite_4 have been proposed to handle this challenging issue. To the best of our knowledge, in relation to RS there is only one paper @cite_21 that applies a coupled clustering method to group the items and then exploits CF to make recommendations. From the perspective of RS, however, that paper does not fundamentally address the "iid" assumption for items. This motivates us to analyze the intrinsic relationships between items at different levels in order to relax the assumption.
|
{
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_21",
"@cite_19",
"@cite_23",
"@cite_13",
"@cite_20"
],
"mid": [
"2031557486",
"",
"2105896409",
"95086061",
"2108782127",
"2127480961",
"91849029",
"2171960770"
],
"abstract": [
"Clustering ensemble is a powerful approach for improving the accuracy and stability of individual (base) clustering algorithms. Most of the existing clustering ensemble methods obtain the final solutions by assuming that base clusterings perform independently with one another and all objects are independent too. However, in real-world data sources, objects are more or less associated in terms of certain coupling relationships. Base clusterings trained on the source data are complementary to one another since each of them may only capture some specific rather than full picture of the data. In this paper, we discuss the problem of explicating the dependency between base clusterings and between objects in clustering ensembles, and propose a framework for coupled clustering ensembles (CCE). CCE not only considers but also integrates the coupling relationships between base clusterings and between objects. Specifically, we involve both the intra-coupling within one base clustering (i.e., cluster label frequency distribution) and the inter-coupling between different base clusterings (i.e., cluster label co-occurrence dependency). Furthermore, we engage both the intra-coupling between two objects in terms of the base clustering aggregation and the inter-coupling among other objects in terms of neighborhood relationship. This is the first work which explicitly addresses the dependency between base clusterings and between objects, verified by the application of such couplings in three types of consensus functions: clustering-based, object-based and cluster-based. Substantial experiments on synthetic and UCI data sets demonstrate that the CCE framework can effectively capture the interactions embedded in base clusterings and objects with higher clustering accuracy and stability compared to several state-of-the-art techniques, which is also supported by statistical analysis.",
"",
"Coupled behaviors refer to the activities of one to many actors who are associated with each other in terms of certain relationships. With increasing network and community-based events and applications, such as group-based crime and social network interactions, behavior coupling contributes to the causes of eventual business problems. Effective approaches for analyzing coupled behaviors are not available, since existing methods mainly focus on individual behavior analysis. This paper discusses the problem of Coupled Behavior Analysis (CBA) and its challenges. A Coupled Hidden Markov Model (CHMM)-based approach is illustrated to model and detect abnormal group-based trading behaviors. The CHMM models cater for: 1) multiple behaviors from a group of people, 2) behavioral properties, 3) interactions among behaviors, customers, and behavioral properties, and 4) significant changes between coupled behaviors. We demonstrate and evaluate the models on order-book-level stock tick data from a major Asian exchange and demonstrate that the proposed CHMMs outperforms HMM-only for modeling a single sequence or combining multiple single sequences, without considering coupling relationships to detect anomalies. Finally, we discuss interaction relationships and modes between coupled behaviors, which are worthy of substantial study.",
"Recommender systems are very useful due to the huge volume of information available on the Web. It helps users alleviate the information overload problem by recommending users with the personalized information, products or services (called items). Collaborative filtering and content-based recommendation algorithms have been widely deployed in e-commerce web sites. However, they both suffer from the scalability problem. In addition, there are few suitable similarity measures for the content-based recommendation methods to compute the similarity between items. In this paper, we propose a hybrid recommendation algorithm by combing the content-based and collaborative filtering techniques as well as incorporating the coupled similarity. Our method firstly partitions items into several item groups by using a coupled version of k-modes clustering algorithm, where the similarity between items is measured by the Coupled Object Similarity considering coupling between items. The collaborative filtering technique is then used to produce the recommendations for active users. Experimental results show that our proposed hybrid recommendation algorithm effectively solves the scalability issue of recommender systems and provides a comparable recommendation quality when lacking most of the item features.",
"The similarity between nominal objects is not straightforward, especially in unsupervised learning. This paper proposes coupled similarity metrics for nominal objects, which consider not only intra-coupled similarity within an attribute (i.e., value frequency distribution) but also inter-coupled similarity between attributes (i.e. feature dependency aggregation). Four metrics are designed to calculate the inter-coupled similarity between two categorical values by considering their relationships with other attributes. The theoretical analysis reveals their equivalent accuracy and superior efficiency based on intersection against others, in particular for large-scale data. Substantial experiments on extensive UCI data sets verify the theoretical conclusions. In addition, experiments of clustering based on the derived dissimilarity metrics show a significant performance improvement.",
"Recommender systems improve access to relevant products and information by making personalized suggestions based on previous examples of a user's likes and dislikes. Most existing recommender systems use collaborative filtering methods that base recommendations on other users' preferences. By contrast,content-based methods use information about an item itself to make suggestions.This approach has the advantage of being able to recommend previously unrated items to users with unique interests and to provide explanations for its recommendations. We describe a content-based book recommending system that utilizes information extraction and a machine-learning algorithm for text categorization. Initial experimental results demonstrate that this approach can produce accurate recommendations.",
"The usual representation of quantitative data is to formalize it as an information table, which assumes the independence of attributes. In real-world data, attributes are more or less interacted and coupled via explicit or implicit relationships. Limited research has been conducted on analyzing such attribute interactions, which only describe a local picture of attribute couplings in an implicit way. This paper proposes a framework of the coupled attribute analysis to capture the global dependency of continuous attributes. Such global couplings integrate the intra-coupled interaction within an attribute (i.e. the correlations between attributes and their own powers) and inter-coupled interaction among different attributes (i.e. the correlations between attributes and the powers of others) to form a coupled representation for numerical objects by the Taylor-like expansion. This work makes one step forward towards explicitly addressing the global interactions of continuous attributes, verified by the applications in data structure analysis, data clustering, and data classification. Substantial experiments on 13 UCI data sets demonstrate that the coupled representation can effectively capture the global couplings of attributes and outperforms the traditional way, supported by statistical analysis.",
"This paper presents an overview of the field of recommender systems and describes the current generation of recommendation methods that are usually classified into the following three main categories: content-based, collaborative, and hybrid recommendation approaches. This paper also describes various limitations of current recommendation methods and discusses possible extensions that can improve recommendation capabilities and make recommender systems applicable to an even broader range of applications. These extensions include, among others, an improvement of understanding of users and items, incorporation of the contextual information into the recommendation process, support for multicriteria ratings, and a provision of more flexible and less intrusive types of recommendations."
]
}
|
1405.6369
|
1826013816
|
Advances in high energy physics have created the need to increase computational capacity. Project HEPGAME was composed to address this challenge. One of the issues is that numerical integration of expressions of current interest have millions of terms and takes weeks to compute. We have investigated ways to simplify these expressions, using Horner schemes and common subexpression elimination. Our approach applies MCTS, a search procedure that has been successful in AI. We use it to find near-optimal Horner schemes. Although MCTS finds better solutions, this approach gives rise to two further challenges. (1) MCTS (with UCT) introduces a constant, @math that governs the balance between exploration and exploitation. This constant has to be tuned manually. (2) There should be more guided exploration at the bottom of the tree, since the current approach reduces the quality of the solution towards the end of the expression. We investigate NMCS (Nested Monte Carlo Search) to address both issues, but find that NMCS is computationally unfeasible for our problem. Then, we modify the MCTS formula by introducing a dynamic exploration-exploitation parameter @math that decreases linearly with the iteration number. Consequently, we provide a performance analysis. We observe that a variable @math solves our domain: it yields more exploration at the bottom and as a result the tuning problem has been simplified. The region in @math for which good values are found is increased by more than a tenfold. This result encourages us to continue our research to solve other prominent problems in High Energy Physics.
|
Computer algebra systems, expression simplification, and boolean problems are closely related. General-purpose packages such as Mathematica and Maple evolved out of early systems created by physicists and artificial intelligence researchers in the 1960s. A prime example of the former development is the work of the later Nobel Prize laureate Martinus Veltman, who in 1963 designed a program for symbolic mathematics, especially for High Energy Physics, called Schoonschip (Dutch for "clean ship" or "clean up"). In the course of the 1960s, Gerard 't Hooft joined Veltman, with whom he later shared the Nobel Prize. The first popular computer algebra systems were muMATH, Reduce, Derive (based on muMATH), and Macsyma. It is interesting to note that FORM @cite_1 , the system that we use in our work, is a direct successor to Schoonschip.
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"1963991920"
],
"abstract": [
"Abstract We present version 4.0 of the symbolic manipulation system Form . The most important new features are manipulation of rational polynomials and the factorization of expressions. Many other new functions and commands are also added; some of them are very general, while others are designed for building specific high level packages, such as one for Grobner bases. New is also the checkpoint facility, that allows for periodic backups during long calculations. Finally, Form 4.0 has become available as open source under the GNU General Public License version 3. Program summary Program title: FORM. Catalogue identifier: AEOT_v1_0 Program summary URL: http: cpc.cs.qub.ac.uk summaries AEOT_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 151599 No. of bytes in distributed program, including test data, etc.: 1 078 748 Distribution format: tar.gz Programming language: The FORM language. FORM itself is programmed in a mixture of C and C++. Computer: All. Operating system: UNIX, LINUX, Mac OS, Windows. Classification: 5. Nature of problem: FORM defines a symbolic manipulation language in which the emphasis lies on fast processing of very large formulas. It has been used successfully for many calculations in Quantum Field Theory and mathematics. In speed and size of formulas that can be handled it outperforms other systems typically by an order of magnitude. Special in this version: The version 4.0 contains many new features. Most important are factorization and rational arithmetic. The program has also become open source under the GPL. The code in CPC is for reference. You are encouraged to upload the most recent sources from www.nikhef.nl form formcvs.php because of frequent bug fixes. Solution method: See “Nature of Problem”, above. Additional comments: NOTE: The code in CPC is for reference. 
You are encouraged to upload the most recent sources from www.nikhef.nl form formcvs.php because of frequent bug fixes."
]
}
|
1405.6369
|
1826013816
|
Advances in high energy physics have created the need to increase computational capacity. Project HEPGAME was composed to address this challenge. One of the issues is that numerical integration of expressions of current interest have millions of terms and takes weeks to compute. We have investigated ways to simplify these expressions, using Horner schemes and common subexpression elimination. Our approach applies MCTS, a search procedure that has been successful in AI. We use it to find near-optimal Horner schemes. Although MCTS finds better solutions, this approach gives rise to two further challenges. (1) MCTS (with UCT) introduces a constant, @math that governs the balance between exploration and exploitation. This constant has to be tuned manually. (2) There should be more guided exploration at the bottom of the tree, since the current approach reduces the quality of the solution towards the end of the expression. We investigate NMCS (Nested Monte Carlo Search) to address both issues, but find that NMCS is computationally unfeasible for our problem. Then, we modify the MCTS formula by introducing a dynamic exploration-exploitation parameter @math that decreases linearly with the iteration number. Consequently, we provide a performance analysis. We observe that a variable @math solves our domain: it yields more exploration at the bottom and as a result the tuning problem has been simplified. The region in @math for which good values are found is increased by more than a tenfold. This result encourages us to continue our research to solve other prominent problems in High Energy Physics.
|
The Boolean Satisfiability problem, or SAT, is a central problem in symbolic logic and computational complexity. Ever since Cook's seminal work @cite_5 , finding efficient solvers for SAT has driven much progress in computational logic and combinatorial optimization. MCTS has been quite successful in adversarial search and optimization @cite_19 . In the current work, we discuss the application of MCTS to expression simplification. Curiously, we are not aware of many other works in this direction, with the notable exception of @cite_0 .
|
{
"cite_N": [
"@cite_19",
"@cite_5",
"@cite_0"
],
"mid": [
"",
"2036265926",
"1708874704"
],
"abstract": [
"",
"It is shown that any recognition problem solved by a polynomial time-bounded nondeterministic Turing machine can be “reduced” to the problem of determining whether a given propositional formula is a tautology. Here “reduced” means, roughly speaking, that the first problem can be solved deterministically in polynomial time provided an oracle is available for solving the second. From this notion of reducible, polynomial degrees of difficulty are defined, and it is shown that the problem of determining tautologyhood has the same polynomial degree as the problem of determining whether the first of two given graphs is isomorphic to a subgraph of the second. Other examples are discussed. A method of measuring the complexity of proof procedures for the predicate calculus is introduced and discussed.",
"In this paper, we investigate the feasibility of applying algorithms based on the Uniform Confidence bounds applied to Trees [12] to the satisfiability of CNF formulas. We develop a new family of algorithms based on the idea of balancing exploitation (depth-first search) and exploration (breadth-first search), that can be combined with two different techniques to generate random playouts or with a heuristics-based evaluation function. We compare our algorithms with a DPLL-based algorithm and with WalkSAT, using the size of the tree and the number of flips as the performance measure. While our algorithms perform on par with DPLL on instances with little structure, they do quite well on structured instances where they can effectively reuse information gathered from one iteration on the next. We also discuss the pros and cons of our different algorithms and we conclude with a discussion of a number of avenues for future work."
]
}
|
1405.6369
|
1826013816
|
Advances in high energy physics have created the need to increase computational capacity. Project HEPGAME was composed to address this challenge. One of the issues is that numerical integration of expressions of current interest have millions of terms and takes weeks to compute. We have investigated ways to simplify these expressions, using Horner schemes and common subexpression elimination. Our approach applies MCTS, a search procedure that has been successful in AI. We use it to find near-optimal Horner schemes. Although MCTS finds better solutions, this approach gives rise to two further challenges. (1) MCTS (with UCT) introduces a constant, @math that governs the balance between exploration and exploitation. This constant has to be tuned manually. (2) There should be more guided exploration at the bottom of the tree, since the current approach reduces the quality of the solution towards the end of the expression. We investigate NMCS (Nested Monte Carlo Search) to address both issues, but find that NMCS is computationally unfeasible for our problem. Then, we modify the MCTS formula by introducing a dynamic exploration-exploitation parameter @math that decreases linearly with the iteration number. Consequently, we provide a performance analysis. We observe that a variable @math solves our domain: it yields more exploration at the bottom and as a result the tuning problem has been simplified. The region in @math for which good values are found is increased by more than a tenfold. This result encourages us to continue our research to solve other prominent problems in High Energy Physics.
|
Expression simplification is a widely studied problem. We have already mentioned Horner schemes @cite_20 and common subexpression elimination (CSEE) @cite_4 , but there are several other methods, such as partial syntactic factorization @cite_12 and Breuer's growth algorithm @cite_24 . Horner schemes and CSEE do not require much algebra: only the commutative and associative properties of the operators are used. Much research has been put into simplifications that use more algebraic properties, such as factorization, especially because of its relevance to cryptographic research. In section we will introduce modifications to UCT that make the importance of exploration versus exploitation dependent on the iteration number. In the past, related changes have been proposed. For example, Discounted UCB @cite_14 and Accelerated UCT @cite_22 both modify the average score of a node to discount old wins in favor of new ones. The difference between our method and past work is that those modifications alter the importance of exploration based on the history, and do not guarantee that the focus shifts from exploration to exploitation. In contrast, this work focuses on the exploration-exploitation constant @math and on the role of exploration during the simulation.
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_24",
"@cite_20",
"@cite_12"
],
"mid": [
"",
"",
"1468005580",
"2066433655",
"1981663184",
"2141600282"
],
"abstract": [
"",
"",
"Monte-Carlo Tree Search (MCTS) is a successful approach for improving the performance of game-playing programs. This paper presents the Accelerated UCT algorithm, which overcomes a weakness of MCTS caused by deceptive structures which often appear in game tree search. It consists in using a new backup operator that assigns higher weights to recently visited actions, and lower weights to actions that have not been visited for a long time. Results in Othello, Havannah, and Go show that Accelerated UCT is not only more effective than previous approaches but also improves the strength of Fuego, which is one of the best computer Go programs.",
"Given a set of expressions which are to be compiled, methods are presented for increasing the efficiency of the object code produced by first factoring the expressions, i.e. finding a set of subexpressions each of which occurs in two or more other expressions or subexpressions. Once all the factors have been ascertained, a sequencing procedure is applied which orders the factors and expressions such that all information is computed in the correct sequence and factors need be retained in memory a minimal amount of time. An assignment algorithm is then executed in order to minimize the total number of temporary storage cells required to hold the results of evaluating the factors. In order to make these techniques computationally feasible, heuristic procedures are applied, and hence global optimal results are not necessarily generated. The factorization algorithms are also applicable to the problem of factoring Boolean switching expressions and of factoring polynomials encountered in symbol manipulating systems.",
"Indicia receiving matte sheet materials, comprising a polyester base support precoated with a cellulosic film-forming polymer and having an outermost layer of an antistatic composition comprising a sulphonated polystyrene and a cycloaliphatic amine salt of an alcohol sulphate.",
"Minimizing the evaluation cost of a polynomial expression is a fundamental problem in computer science. We propose tools that, for a polynomial P given as the sum of its terms, compute a representation that permits a more efficient evaluation. Our algorithm runs in d(nt)O(1) bit operations plus dtO(1) operations in the base field where d, n and t are the total degree, number of variables and number of terms of P. Our experimental results show that our approach can handle much larger polynomials than other available software solutions. Moreover, our computed representation reduce the evaluation cost of P substantially."
]
}
|
1405.6362
|
151718868
|
Exascale systems are predicted to have approximately one billion cores, assuming Gigahertz cores. Limitations on affordable network topologies for distributed memory systems of such massive scale bring new challenges to the current parallel programing model. Currently, there are many efforts to evaluate the hardware and software bottlenecks of exascale designs. There is therefore an urgent need to model application performance and to understand what changes need to be made to ensure extrapolated scalability. The fast multipole method (FMM) was originally developed for accelerating N-body problems in astrophysics and molecular dynamics, but has recently been extended to a wider range of problems, including preconditioners for sparse linear solvers. It's high arithmetic intensity combined with its linear complexity and asynchronous communication patterns makes it a promising algorithm for exascale systems. In this paper, we discuss the challenges for FMM on current parallel computers and future exascale architectures, with a focus on inter-node communication. We develop a performance model that considers the communication patterns of the FMM, and observe a good match between our model and the actual communication time, when latency, bandwidth, network topology, and multi-core penalties are all taken into account. To our knowledge, this is the first formal characterization of inter-node communication in FMM, which validates the model against actual measurements of communication time.
|
Performance modeling and characterization for understanding and predicting the performance of scientific applications on HPC platforms have been the target of many related projects. For example, Clement and Quinn developed a performance prediction methodology based on symbolic analysis of application source code @cite_18 . Mendes and Reed focused on predicting the scalability of an application program executing on a given parallel system @cite_11 . Mendes proposed a methodology to predict the performance scalability of data parallel applications on multicomputers based on information collected at compile time @cite_8 . An approach that combines computation and communication to obtain a general performance model is described by Snavely @cite_21 . DeRose and Reed concentrate on tool development for performance analysis @cite_13 . Performance models for specific application domains, such as performance bounds for implicit CFD codes, have also been considered @cite_22 . The efficiency of the spectral transform method on parallel computers has been evaluated by Foster @cite_23 . Kerbyson provides an analytical model for the application SAGE @cite_4 . Performance models for AMG were developed by Gahvari @cite_0 . Traditional evaluation of specific machines via benchmarking is presented by Worley @cite_3 .
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_22",
"@cite_8",
"@cite_21",
"@cite_3",
"@cite_0",
"@cite_23",
"@cite_13",
"@cite_11"
],
"mid": [
"2119347750",
"2533204039",
"1596846800",
"1866018165",
"1651687773",
"2057969379",
"2079472602",
"2018743852",
"2158182626",
"2016048375"
],
"abstract": [
"Recent advances in the power of parallel computers have made them attractive for solving large computational problems. Scalable parallel programs are particularly well suited to Massively Parallel Processing (MPP) machines since the number of computations can be increased to match the available number of processors. Performance tuning can be particularly difficult for these applications since it must often be performed with a smaller problem size than that targeted for eventual execution. This research develops a performance prediction methodology that addresses this problem through symbolic analysis of program source code. Algebraic manipulations can then be performed on the resulting analytical model to determine performance for scaled up applications on different hardware architectures. >",
"In this work we present a predictive analytical model that encompasses the performance and scaling characteristics of an important ASCI application. SAGE (SAIC's Adaptive Grid Eulerian hydrocode) is a multidimensional hydrodynamics code with adaptive mesh refinement. The model is validated against measurements on several systems including ASCI Blue Mountain, ASCI White, and a Compaq Alphaserver ES45 system showing high accuracy. It is parametric --- basic machine performance numbers (latency, MFLOPS rate, bandwidth) and application characteristics (problem size, decomposition method, etc.) serve as input. The model is applied to add insight into the performance of current systems, to reveal bottlenecks, and to illustrate where tuning efforts can be effective. We also use the model to predict performance on future systems.",
"Traditionally, numerical analysts have evaluated the performance of algorithms by counting the number of floating-point operations. On the algorithmic side, tremendous strides have been made; many algorithms now require only a few floating-point operations per mesh point. However, on the hardware side, memory system performance is improving at a rate that is much slower than that of processor performance. The result is a mismatch in capabilities: algorithm design has minimized the work per data item, but hardware design depends on executing an increasing large number of operations per data item. The importance of memory bandwidth to the overall performance is suggested by the available results. These show that the STREAM results are much better indicator of performance than the peak numbers. The chapter illustrates the performance limitations caused by insufficient available memory bandwidth with a discussion of sparse matrix-vector multiply, a critical operation in many iterative methods used in implicit CFD codes. It also focuses on the per-processor performance of compute nodes used in parallel computers. Experiments have shown that PETSc-FUN3D has good scalability. In fact, since good per-processor performance reduces the fraction of time spent computing as opposed to communication, achieving the best per-processor performance is a critical prerequisite to demonstrating uninflated parallel performance.",
"Despite the performance potential of multicomputers, several factors have limited their widespread adoption. Of these, performance variability is among the most significant. Execution of some programs may yield only a small fraction of peak system performance, whereas others approach the system's theoretical performance peak. Moreover, the observed performance may change substantially as application program parameters vary. Data parallel languages, which facilitate the programming of multicomputers, increase the semantic distance between the program's source code and its observable performance, thus aggravating the performance problem. In this thesis, we propose a new methodology to predict the performance scalability of data parallel applications on multicomputers. Our technique represents the execution time of a program as a symbolic expression that is a function of the number of processors (P), problem size (N), and other system-dependent parameters. This methodology is based on information collected at compile time. By extending an existing data parallel compiler (Fortran D95), we derive, during compilation, a symbolic model that represents the cost of each high-level program section and, inductively, of the complete program. These symbolic expressions may be simplified externally with current symbolic tools. Predicting performance of the program for a given pair @math requires simply the evaluation of its corresponding cost expression. We validate our implementation by predicting scalability of a variety of loop nests, with distinct computation and communication patterns. To demonstrate the applicability of our technique, we present a series of concrete performance problems where it was successfully employed: prediction of total execution time, identification and tracking of bottlenecks, cross-system prediction, and evaluation of code transformations. 
These examples show that the technique would be useful both to users, in optimizing and tuning their programs, and to advanced compilers, which would have a means to evaluate the expected performance of a synthesized code. According to the results of our study, by integrating compilation, performance analysis and symbolic manipulation tools, it is possible to correctly predict, in an automated fashion, the major performance variations of a data parallel program written in a high-level language.",
"This paper presents a performance modeling methodology that is faster than traditional cycle-accurate simulation, more sophisticated than performance estimation based on system peak-performance metrics, and is shown to be effective on a class of High Performance Computing benchmarks. The method yields insight into the factors that affect performance on single-processor and parallel computers.",
"Oak Ridge National Laboratory (ORNL) has recently installed both a Compaq AlphaServer SC and an IBM SP, each with 4-way SMP nodes, allowing a direct comparison of the two architectures. In this paper, we describe our initial evaluation. The evaluation looks at both kernel and application performance for a spectral atmospheric general circulation model, an important application for the ORNL systems.",
"Now that the performance of individual cores has plateaued, future supercomputers will depend upon increasing parallelism for performance. Processor counts are now in the hundreds of thousands for the largest machines and will soon be in the millions. There is an urgent need to model application performance at these scales and to understand what changes need to be made to ensure continued scalability. This paper considers algebraic multigrid (AMG), a popular and highly efficient iterative solver for large sparse linear systems that is used in many applications. We discuss the challenges for AMG on current parallel computers and future exascale architectures, and we present a performance model for an AMG solve cycle as well as performance measurements on several massively-parallel platforms.",
"The spectral transform method is a standard numerical technique for solving partial differential equations on a sphere and is widely used in atmospheric circulation models. Recent research has identified several promising algorithms for implementing this method on massively parallel computers; however, no detailed comparison of the different algorithms has previously been attempted. In this paper, we describe these different parallel algorithms and report on computational experiments that we have conducted to evaluate their efficiency on parallel computers. The experiments used a testbed code that solves the nonlinear shallow water equations on a sphere; considerable care was taken to ensure that the experiments provide a fair comparison of the different algorithms and that the results are relevant to global models. We focus on hypercube- and mesh-connected multicomputers with cut-through routing, such as the Intel iPSC 860, DELTA, and Paragon, and the nCUBE 2, but we also indicate how the results extend to other parallel computer architectures. The results of this study are relevant not only to the spectral transform method but also to multidimensional fast Fourier transforms (FFTs) and other parallel transforms.",
"In this paper we present the design of SvPablo, a language independent performance analysis and visualization system that can be easily extended to new contexts with minimal changes to the software infrastructure. At present, SvPablo supports analysis of applications written in C, Fortran 77, Fortran 90, and HPF on a variety of sequential and parallel systems. In addition to capturing application data via software instrumentation, SvPablo also exploits hardware performance counters to capture the interaction of software and hardware. Both hardware and software performance data are summarized during program execution, enabling measurement of programs that execute for hours or days on hundreds of processors. This performance data is stored in a format designed to be language transparent and portable. We demonstrate the usefulness of SvPablo for tuning application programs with a case study running on an SGI Origin 2000.",
"Despite the performance potential of parallel systems, several factors have hindered their widespread adoption. Of these, performance variability is among the most significant. Data parallel languages, which facilitate the programming of those systems, increase the semantic distance between the program's source code and its observable performance, thus aggravating the optimization problem. In this paper, we present a new methodology to automatically predict the performance scalability of data parallel applications on multicomputers. Our technique represents the execution time of a program as a symbolic expression that includes the number of processors (P), problem size (N), and other system-dependent parameters. This methodology is strongly based on information collected at compile time. By extending an existing data parallel compiler (Fortran D95), we derive during compilation, a symbolic cost model that represents the expected cost of each high-level code section and, inductively, of the complete program."
]
}