Columns: aid (string, length 9–15), mid (string, length 7–10), abstract (string, length 78–2.56k), related_work (string, length 92–1.77k), ref_abstract (dict)
1706.09076
2727857880
A descriptive approach for automatic generation of visual blends is presented. The implemented system, the Blender, is composed of two components: the Mapper and the Visual Blender. The approach uses structured visual representations along with sets of visual relations which describe how the elements (in which the visual representation can be decomposed) relate among each other. Our system is a hybrid blender, as the blending process starts at the Mapper (conceptual level) and ends at the Visual Blender (visual representation level). The experimental results show that the Blender is able to create analogies from input mental spaces and produce well-composed blends, which follow the rules imposed by its base-analogy and its relations. The resulting blends are visually interesting and some can be considered as unexpected.
The Boat-House Visual Blending Experience @cite_9 is, to the best of our knowledge, one of the earliest attempts to computationally produce visual blends. The work was motivated by the need to interpret and visualize blends produced by a preliminary version of the Divago framework, one of the first artificial creative systems based on Conceptual Blending (CB) theory @cite_5. In addition to a declarative description of the concepts via rules and concept maps (i.e., graphs representing binary relations between concepts), Pereira and Cardoso also considered a domain of instances, which were drawn using a Logo-like programming language. To test the system, the authors performed several experiments with the boat and house blend @cite_11, considering different instances for the input spaces.
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_11" ], "mid": [ "156215361", "1510894875", "198041307" ], "abstract": [ "Structure-mapping or structural alignment is fast emerging as a unifying principle for a variety of different phenomena, including analogy, metaphor, similarity and conceptual combination. In this paper, we argue that it is inappropriate to extend this idea to conceptual combination, as has been done in dual-process theory (Wisniewski, 1997a, in press). There are theoretical and empirical grounds for taking up this position. We propose an alternative account based on the constraint theory of combination (see Costello, 1996; Costello & Keane, 1997, 1998a, 1998b, in press), which views the interpretation of concept combinations as one of satisfying multiple constraints of diagnosticity, plausibility and informativeness. This theory, which we would like to advertise as being the truth, does not use structure-mapping. We conclude by promoting the idea that rather than applying analogy’s constraints to conceptual combination, it may be more productive to consider how conceptual combination’s constraints might apply to analogy.", "The main subject of this paper is to describe an experiment of blending two domains using our previous work, Blender [1], a computational attempt to model Fauconnier and Turner’s [2] Conceptual Blending. In this experiment, we visualize the resulting instances using a logo-like language, allowing us to produce visual blends. We discuss the results and suggest further improvements.", "This paper introduces a new approach to user interface design and other areas, called algebraic semiotics. The approach is based on a notion of sign, which allows complex hierarchical structure and incorporates the insight (emphasized by Saussure) that signs come in systems, and should be studied at that level, rather than individually. 
A user interface can be considered as a representation of the underlying functionality to which it provides access, and thus user interface design can be considered a craft of constructing such representations, where both the interface and the underlying functionality are considered as (structured) sign systems. In this setting, representations appear as mappings, or morphisms, between sign systems, which should preserve as much structure as possible. This motivates developing a calculus having systematic ways to combine signs, sign systems, and representations. One important mode of composition is blending, introduced by Fauconnier and Turner; we relate this to certain concepts from the very abstract area of mathematics called category theory. Applications for algebraic semiotics include not only user interface design, but also cognitive linguistics, especially metaphor theory and cognitive poetics. The main contribution of this paper is the precision it can bring to such areas. Building on an insight from computer science, that discrete structures can be described by algebraic theories, sign systems are defined to be algebraic theories with extra structure, and semiotic morphisms are defined to be mappings of algebraic theories that (to some extent) preserve the extra structure. As an aid for practical design, we show that the quality of representations is closely related to the preservation properties of semiotic morphisms; these measures of quality also provide the orderings needed by our category theoretic formulation of blending." ] }
A discursive approach to evaluate the quality of blends has also been proposed (although there is no evidence of an implementation). The main idea was to use Lakatosian argumentative dialogue @cite_3 to iteratively construct valuable and novel blends, as opposed to a strictly combinatorial approach. To exemplify the argumentative approach, the authors focused on icon design, introducing a semiotic system for modelling computer icons. Since icons can be considered combinations of signs that convey multiple intended meanings, the authors proposed argumentation to evaluate and refine the quality of the icons.
{ "cite_N": [ "@cite_3" ], "mid": [ "2011043790" ], "abstract": [ "Editors' preface Acknowledgments Author's introduction 1. A problem and a conjecture 2. A proof 3. Criticism of the proof by counterexamples which are local but not global 4. Criticism of the conjecture by global counterexamples 5. Criticism of the proof-analysis by counterexamples which are global but not local: the problem of rigour 6. Return to criticism of the proof by counterexamples which are local but not global: the problem of content 7. The problem of content revisited 8. Concept-formation 9. How criticism may turn mathematical truth into logical truth Appendices Bibliography Index of names Index of subjects." ] }
Xiao and Linkola proposed Vismantic, a semi-automatic system aimed at producing visual compositions that express specific meanings, namely those of abstract concepts. Their system is based on three binary image operations (juxtaposition, replacement and fusion), which are the basic operations used to represent visual metaphors @cite_10. For example, Vismantic represents a slogan as an image of an electric light bulb whose wire filament and screw base are fused with an image of green leaves. Both the selection of images and the application of the visual operations require user intervention.
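The three binary operations can be sketched in a few lines; the following is an illustrative toy implementation (not Vismantic's actual code), operating on tiny grayscale images represented as 2D lists of pixel values, with made-up stand-in values for the light-bulb and leaves images.

```python
def juxtaposition(img_a, img_b):
    """Place the two images side by side."""
    return [row_a + row_b for row_a, row_b in zip(img_a, img_b)]

def replacement(img_a, img_b, mask):
    """Replace the masked region of img_a with the pixels of img_b."""
    return [
        [pb if m else pa for pa, pb, m in zip(ra, rb, rm)]
        for ra, rb, rm in zip(img_a, img_b, mask)
    ]

def fusion(img_a, img_b, alpha=0.5):
    """Blend the two images by per-pixel weighted averaging."""
    return [
        [round(alpha * pa + (1 - alpha) * pb) for pa, pb in zip(ra, rb)]
        for ra, rb in zip(img_a, img_b)
    ]

bulb   = [[200, 200], [200, 200]]  # stand-in for the light-bulb image
leaves = [[ 40,  40], [ 40,  40]]  # stand-in for the green-leaves image
mask   = [[0, 1], [0, 1]]          # hypothetical region to replace

print(juxtaposition(bulb, leaves))       # [[200, 200, 40, 40], [200, 200, 40, 40]]
print(replacement(bulb, leaves, mask))   # [[200, 40], [200, 40]]
print(fusion(bulb, leaves))              # [[120, 120], [120, 120]]
```

In a real system such as Vismantic the operands are photographs and the mask comes from segmentation; here the point is only that all three operations are simple pixelwise or layout-level combinations.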
{ "cite_N": [ "@cite_10" ], "mid": [ "2098062982" ], "abstract": [ "The goal of rhetorical theory is always to organize the possibilities for persuasion within a domain and to relate each possible stratagem to specific desired outcomes. In this article we develop a visual rhetoric that differentiates the pictorial strategies available to advertisers and links them to consumer response. We propose a new typology that distinguishes nine types of visual rhetorical figures according to their degree of complexity and ambiguity. We then derive empirically testable predictions concerning how these different types of visual figures may influence such consumer responses as elaboration and belief change. The article concludes with a discussion of the importance of marrying textual analysis, as found in literary, semiotic and rhetorical disciplines, with the experimental methodology characteristic of social and cognitive psychology." ] }
Recent works such as DeepStyle @cite_7 can also be seen as a form of visual blending. DeepStyle is based on a deep neural network that can separate image content from certain aspects of style, making it possible to recombine the content of an arbitrary image with a given rendering style (style transfer). The system is known for mimicking the features of different painting styles.
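The objective behind this kind of style transfer is usually written as a weighted sum of a content term and a style term; in the standard notation of Gatys et al. (symbols assumed here, not taken from the text above):

$$
\mathcal{L}_{\text{total}}(\vec{p}, \vec{a}, \vec{x}) = \alpha\,\mathcal{L}_{\text{content}}(\vec{p}, \vec{x}) + \beta\,\mathcal{L}_{\text{style}}(\vec{a}, \vec{x})
$$

where $\vec{p}$ is the content image, $\vec{a}$ the style image (e.g., a painting), $\vec{x}$ the synthesised image, and $\alpha$, $\beta$ are weighting factors trading off content preservation against style matching. The "blend" is the image $\vec{x}$ that minimises this loss.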
{ "cite_N": [ "@cite_7" ], "mid": [ "1924619199" ], "abstract": [ "In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image. Thus far the algorithmic basis of this process is unknown and there exists no artificial system with similar capabilities. However, in other key areas of visual perception such as object and face recognition near-human performance was recently demonstrated by a class of biologically inspired vision models called Deep Neural Networks. Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality. The system uses neural representations to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. Moreover, in light of the striking similarities between performance-optimised artificial neural networks and biological vision, our work offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery." ] }
Several other authors have seen the potential of deep neural networks for tasks related to visual blending @cite_4 @cite_6 @cite_8. For instance, Berov and Kühnberger proposed a computational model of visual hallucination based on deep neural networks. To some extent, the creations of this system can be seen as visual blends.
{ "cite_N": [ "@cite_4", "@cite_6", "@cite_8" ], "mid": [ "2605185938", "2530969562", "2464189215" ], "abstract": [ "", "We examine two recent artificial intelligence (AI) based deep learning algorithms for visual blending in convolutional neural networks ( 2015, 2015). To investigate the potential value of these algorithms as tools for computational creativity research, we explain and schematize the essential aspects of the algorithms' operation and give visual examples of their output. We discuss the relationship of the two algorithms to human cognitive science theories of creativity such as conceptual blending theory and honing theory, and characterize the algorithms with respect to generation of novelty and aesthetic quality.", "This paper motivates the use of computational argumentation for evaluating ‘concept blends’ and other forms of combinatorial creativity. We exemplify our approach in the domain of computer icon design, where icons are understood as creative artefacts generated through concept blending. We present a semiotic system for representing icons, showing how they can be described in terms of interpretations and how they are related by sign patterns. The interpretation of a sign pattern conveys an intended meaning for an icon. This intended meaning is subjective, and depends on the way concept blending for creating the icon is realised. We show how the intended meaning of icons can be discussed in an explicit and social argumentation process modeled as a dialogue game, and show examples of these following the style of Lakatos (1976). In this way, we are able to evaluate concept blends through an open-ended and dynamic discussion in which concept blends can be improved and the reasons behind a specific evaluation are made explicit. In the closing section, we explore argumentation and the potential roles that can play at different stages of the concept blending process." ] }
1706.08931
2726185373
In this paper, we provide details of implementing a system for managing a fleet of autonomous mobile robots (AMR) operating in a factory or a warehouse premise. While the robots are themselves autonomous in its motion and obstacle avoidance capability, the target destination for each robot is provided by a global planner. The global planner and the ground vehicles (robots) constitute a multi agent system (MAS) which communicate with each other over a wireless network. Three different approaches are explored for implementation. The first two approaches make use of the distributed computing based Networked Robotics architecture and communication framework of Robot Operating System (ROS) itself while the third approach uses Rapyuta Cloud Robotics framework for this implementation. The comparative performance of these approaches are analyzed through simulation as well as real world experiment with actual robots. These analyses provide an in-depth understanding of the inner working of the Cloud Robotics Platform in contrast to the usual ROS framework. The insight gained through this exercise will be valuable for students as well as practicing engineers interested in implementing similar systems else where. In the process, we also identify few critical limitations of the current Rapyuta platform and provide suggestions to overcome them.
A slightly related system detects the positions of robots with a ceiling-mounted camera and controls their motion so that they do not collide, exploiting the large computational power provided by the cloud @cite_8. A collision avoidance and path planning system that runs on the individual robots also exists @cite_14; it uses a common ROS topic for inter-robot communication and AMCL (Adaptive Monte Carlo Localization) for localization.
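The shared-topic idea can be illustrated with a toy publish/subscribe model (this is not the cited ROS implementation; the channel class, robot names, and the 1 m safety threshold are all assumptions for the sketch): each robot publishes its pose on a common channel, every robot subscribes, and a robot flags any peer that comes within the safety distance.

```python
import math

class SharedTopic:
    """Minimal publish/subscribe channel, standing in for a ROS topic."""
    def __init__(self):
        self.subscribers = []
    def subscribe(self, callback):
        self.subscribers.append(callback)
    def publish(self, message):
        for callback in self.subscribers:
            callback(message)

class Robot:
    SAFE_DISTANCE = 1.0  # metres; assumed threshold for this sketch
    def __init__(self, name, x, y, topic):
        self.name, self.x, self.y = name, x, y
        self.too_close = []  # names of robots seen within SAFE_DISTANCE
        self.topic = topic
        topic.subscribe(self.on_pose)
    def broadcast_pose(self):
        self.topic.publish({"name": self.name, "x": self.x, "y": self.y})
    def on_pose(self, msg):
        if msg["name"] == self.name:
            return  # ignore our own pose
        dist = math.hypot(msg["x"] - self.x, msg["y"] - self.y)
        if dist < self.SAFE_DISTANCE:
            self.too_close.append(msg["name"])

topic = SharedTopic()
r1 = Robot("amr_1", 0.0, 0.0, topic)
r2 = Robot("amr_2", 0.5, 0.0, topic)  # within 1 m of amr_1
r3 = Robot("amr_3", 5.0, 5.0, topic)  # far away
for r in (r1, r2, r3):
    r.broadcast_pose()
print(r1.too_close)  # ['amr_2']
print(r3.too_close)  # []
```

In the cited work the pose estimate itself comes from AMCL rather than being known exactly, and the avoidance step adjusts velocities rather than merely flagging proximity; the sketch only shows why a single shared topic suffices for the robots to learn each other's positions.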
{ "cite_N": [ "@cite_14", "@cite_8" ], "mid": [ "6827933", "2005212370" ], "abstract": [ "This paper describes a multi-robot collision avoidance system based on the velocity obstacle paradigm. In contrast to previous approaches, we alleviate the strong requirement for perfect sensing (i.e. global positioning) using Adaptive Monte-Carlo Localization on a per-agent level. While such methods as Optimal Reciprocal Collision Avoidance guarantee local collision-free motion for a large number of robots, given perfect knowledge of positions and speeds, a realistic implementation requires further extensions to deal with inaccurate localization and message passing delays. The presented algorithm bounds the error introduced by localization and combines the computation for collision-free motion with localization uncertainty. We provide an open source implementation using the Robot Operating System (ROS). The system is tested and evaluated with up to eight robots in simulation and on four differential drive robots in a real-world situation.", "The pathway for the concept of cloud robotics is continually unfolding and revealing new opportunities in science. With this, the focus of the research paper is aimed at identifying the progress completed towards the development of a full scale cloud infrastructure to implement formation control on a multi robot system. A small scale cloud infrastructure was developed utilizing a single virtual machine operating with the boundaries of a hypervisor's resource pool. A robot with minimal hardware was constructed to work within the control of the cloud. Once the proof of concept on a lower tier has been completed, more advance robotics concepts, such as Null-Spaced-base behavior control and advanced neural network control, will be tested by offloading the computational load to the cloud infrastructure. 
The goal is to demonstrate the ability to simplify the robot hardware and implement control on a global scale utilizing the cloud infrastructure." ] }
1706.09172
1967694736
Sketches and diagrams play an important role in the daily work of software developers. In this paper, we investigate the use of sketches and diagrams in software engineering practice. To this end, we used both quantitative and qualitative methods. We present the results of an exploratory study in three companies and an online survey with 394 participants. Our participants included software developers, software architects, project managers, consultants, as well as researchers. They worked in different countries and on projects from a wide range of application areas. Most questions in the survey were related to the last sketch or diagram that the participants had created. Contrary to our expectations and previous work, the majority of sketches and diagrams contained at least some UML elements. However, most of them were informal. The most common purposes for creating sketches and diagrams were designing, explaining, and understanding, but analyzing requirements was also named often. More than half of the sketches and diagrams were created on analog media like paper or whiteboards and have been revised after creation. Most of them were used for more than a week and were archived. We found that the majority of participants related their sketches to methods, classes, or packages, but not to source code artifacts with a lower level of abstraction.
Artists sketch to clarify existing ideas and to develop new ones @cite_22. In mechanical design, sketches not only document final designs but also provide designers with a memory extension that helps ideas take shape and supports communicating concepts to colleagues @cite_21. Besides being an external representation of memory and a means of communication @cite_6 @cite_11, sketches serve as documentation @cite_13. Schütze et al. showed that the possibility to sketch has a positive effect on the quality of solutions in the early stages of the design process @cite_13. Furthermore, the ambiguity in sketches is a source of creativity @cite_49 @cite_17 @cite_25, and sketches support problem-solving and understanding @cite_1. In our survey, we found that the latter was one of the main reasons why participants archived their sketches.
{ "cite_N": [ "@cite_22", "@cite_21", "@cite_1", "@cite_17", "@cite_6", "@cite_49", "@cite_13", "@cite_25", "@cite_11" ], "mid": [ "2044397448", "2071673578", "1811607892", "", "186992169", "2173807117", "1890447598", "", "" ], "abstract": [ "Although new technology is widely used for detailed design and image manipulation, its use in the early stages of visual invention is much less common, one reason being that the denotation systems used in paper-and-pencil sketching assist creativity in ways that are poorly understood. Leonardo da Vinci advocated the use of untidy indeterminacies for working out composition, because he believed that they stimulated visual invention. Recent research in cognitive psychology suggests a mental-imagery model that expands Leonardo’s theory and provides evidence for cognitive mechanisms that clarify the function of familiar sketch attributes. Sketches may mediate mental translation between spatially depictive and structurally descriptive modes of visual representation. Evidence for a hybrid percept-image theory of ordinary paper sketching is briefly outlined. Some implications of this theory for the development of improved computer sketching systems are briefly discussed.", "Abstract This paper is a study on the importance of drawing (both formal drafting and informal sketching) during the process of mechanical design. Five hypotheses, focused on the types of drawings, their necessity in mechanical problem solving, and their relation to the external representation medium, are presented and supported. Support is through referenced studies in other domains and the results of protocol studies performed on five mechanical designers. Videotapes of all the marks-on-paper made by designers in representative sections of the design process were studied in detail for their type and purpose. The resulting data is supportive of the hypotheses. 
These results also give requirements for future computer aided design tools and graphics education, and goals for further studies.", "External representations such as diagrams, sketches, charts, graphs and scribbles on napkins play facilitatory roles in inference, problem-solving and understanding (e.g. [1],[2],[3],[4],[5],[6],[7],[8],[9]). How does the externality and visibility of representations facilitate inference and problem-solving? One benefit of external representations is on memory. They reduce working memory load by providing external tokens for the elements that must otherwise be kept in mind. This frees working memory to perform mental calculations on the elements rather than both keeping elements in mind and operating on them [2],[9]. External representations also serve as visuo-spatial retrieval cues for long term memory, evoking relevant information that might not otherwise be retrieved. Another benefit of external representations is to promote discovery and inference, both visuo-spatial and metaphorical. Perceptual judgements about size, distance, and direction are easily made from external representations (e.g.[4]). In a Venn diagram, set relations such as inclusion are abstractly mapped onto visuo-spatial diagrammatic features, enabling direct perceptual calculation. Visuospatial features such as proximity, connectivity, and alignment provide useful hints to selection of appropriate inference paths (e.g.[1],[6],[8]) and to proper understanding of the structure of a target system (e.g.[5]). Calculations requiring counting, sorting, or ordering are easily made by rearranging external spaces (e.g. [7]).", "", "Sketches are produced in many domains to communicate with self or others. They are a kind of external representation serving as a cognitive tool to augment memory and information processing by relieving the mind of some of those burdens. Sketches schematize. They do not portray reality; rather they convey conceptions of reality. 
Sketches use elements and spatial relations on paper to represent elements and spatial or abstract relations in the domain of interest. They incorporate relevant information and omit irrelevant. As such, they reveal people’s conceptions of domains.. An analysis of sketches of routes, for example, reveals that they segment routes around action points, typically turns. Order of drawing reflects mental organization of the domain. From sketches, novices extract structural information about the spatial relations among elements. Experts are also able to extract functional information, information that must be inferred from rather than directly extracted from sketches.", "Introduction Drawing and sketching are activities all humans engage in, at some level or another, as of a very young age (if not deprived of the sense of sight). In developed societies, toddlers use drawing implements to make marks on paper. In less-developed societies, children and adults use sticks to draw on sand. Why do children draw? It seems that for a child, drawing is a form of play, with developmental benefits similar to those of both symbolic play and construction games (play typology instituted by Piaget and Inhelder). Most people acquire enough drawing skills during childhood to make graphic production an accessible strategy whenever pictorial representation is more effective than linguistic representation in communication and reasoning. For some communication and reasoning tasks, however, ordinary drawing skills are not sufficient, just as linguistic skills acquired during childhood are not necessarily adequate for sophisticated verbal and written expression tasks. A better command of language makes for better orators and reporters, and a better command of drawing skills makes for better illustrators and decorators. A special class of representational skill, linguistic or graphic, is the one needed for inventive purposes: this is the case of the poet, the visual artist, and the designer. 
The inventive process does not require wider skills: not necessarily a larger vocabulary or unlimited graphic techniques. Rather, what is required is an ability to use the representational act to reason with on the fly. Usually, this is a “front edge” process in which partial and rudimentary representations are produced, evaluated, transformed, modified, refined, and replaced by others if need be, until their maker is satisfied with the results. The unique thing about such processes is that, since they involve ill-structured problem-solving, it is not clear at the outset where the process is leading to, and what the end result might be. In this paper, it is our purpose to look at the way in which sketching assists in generating ideas and strengthening them by interpreting the “backtalk” of a sketch in progress, or one that has just been completed. We use a developmental axis to illustrate our claims. We start with children and show how they “read” new information off their sketches or drawings, and use it to define or refine the rationale for their representations. We then show how designers habitually practice a similar process in the early idea-generation phase of the design process.", "This study presents an experimental investigation dealing with the support value of sketching activity in the early stages of the design process. The main focus is the process of sketching, including the sketching activity and the simultaneous use of the produced sketches. Sketching has a positive impact on the quality of the designed solution and on the individual experience of the design process. The quality of the solution concepts increases from entirely mental design problem solving without external support over partly supported to completely supported problem solving. On the basis of a representative case, the procedure of sketching and the activities performed while generating a design solution are analysed. 
A continuous change between internal mental and external materializing activities can be seen in the design process.", "", "" ] }
Software designers use sketches to design not only the appearance but also the behavior of software @cite_43. A study by @cite_48 revealed the importance of sketches for collaboration between user interaction designers and software developers. @cite_35 showed that diagramming in distributed environments like open-source projects differs from diagramming in co-located settings. Dekel and Herbsleb @cite_44 studied software design teams, observing that teams intentionally improvise representations to fulfill ad-hoc needs arising during the evolution of an object-oriented design, and thus diverge from standard notations like UML. Our survey supports this finding, since most sketches were informal but often contained at least some UML elements.
{ "cite_N": [ "@cite_44", "@cite_43", "@cite_35", "@cite_48" ], "mid": [ "1977634046", "2097720650", "1990303617", "2133154036" ], "abstract": [ "Software designers in the object-oriented paradigm can make use of modeling tools and standard notations such as UML. Nevertheless, casual observations from collocated design collaborations suggest that teams tend to use physical mediums to sketch a plethora of informal diagrams in varied representations that often diverge from UML. To better understand such collaborations and support them with tools, we need to understand the origins, roles, uses, and implications of these alternate representations. To this end we conducted observational studies of collaborative design exercises, in which we focused on representation use. Our primary finding is that teams intentionally improvise representations and organize design information in response to ad-hoc needs, which arise from the evolution of the design, and which are difficult to meet with fixed standard notations. This behavior incurs orientation and grounding difficulties for which teams compensate by relying on memory, other communication mediums, and contextual cues. Without this additional information the artifacts are difficult to interpret and have limited documentation potential. Collaborative design tools and processes should therefore focus on preserving contextual information while permitting unconstrained mixing and improvising of notations.", "Designers are skilled at sketching and prototyping the look of interfaces, but to explore various behaviors (what the interface does in response to input) typically requires programming using Javascript, ActionScript for Flash, or other languages. In our survey of 259 designers, 86% reported that the behavior is more difficult to prototype than the appearance. 
Often (78% of the time), designing the behavior requires collaborating with developers, but 76% of designers reported that communicating the behavior to developers was more difficult than the appearance. Other results include that annotations such as arrows and paragraphs of text are used on top of sketches and storyboards to explain behaviors, and designers want to explore multiple versions of behaviors, but today’s tools make this difficult. The results provide new ideas for future tools.", "In co-located software development, diagramming practices, such as sketching ideas out with a pen and paper, support the creative process and allow designers to shape, analyze, and communicate their ideas. This study focuses on the diagramming practices used in the design of Open Source Software (OSS), where the norm is highly distributed group work. In OSS, text-based communication (e.g., mailing lists) dominates, and sketching and drawing diagrams collaboratively remains difficult due to the barriers imposed by distance and technology. Previous studies have examined these practices and barriers in the context of individual projects. To understand how contributors across OSS projects use diagrams in design-related activities, we conducted a survey of 230 contributors from 40 different OSS projects, and interviewed eight participants. Our results show that although contributors understand the advantages of using diagrams for design-related activities, diagrams are infrequently used in OSS. This motivated us to examine how and why diagramming occurs, and the factors that prevent widespread use in OSS. Finally, we propose new ideas for supporting design activities in OSS projects.", "Agile development places a strong emphasis on interaction and collaboration between people. In this paper, we present a study of collaboration between user interaction designers and developers, with a particular emphasis on the role of artefacts in the process. 
Our research method is an ethnographic study of a team at work, followed by the application of several kinds of qualitative analysis: activity system analysis, interaction analysis, grounded theory, and contradiction analysis. Each of these analyses yields results that inform an understanding of artefact-mediated collaboration. In particular, we find that both sketches and design stories have critical roles, that these artefacts support creation and reflection, facilitate resolution of contradiction, and also work at a level of consciousness that is below the level of self-awareness." ] }
1706.09172
1967694736
Sketches and diagrams play an important role in the daily work of software developers. In this paper, we investigate the use of sketches and diagrams in software engineering practice. To this end, we used both quantitative and qualitative methods. We present the results of an exploratory study in three companies and an online survey with 394 participants. Our participants included software developers, software architects, project managers, consultants, as well as researchers. They worked in different countries and on projects from a wide range of application areas. Most questions in the survey were related to the last sketch or diagram that the participants had created. Contrary to our expectations and previous work, the majority of sketches and diagrams contained at least some UML elements. However, most of them were informal. The most common purposes for creating sketches and diagrams were designing, explaining, and understanding, but analyzing requirements was also named often. More than half of the sketches and diagrams were created on analog media like paper or whiteboards and have been revised after creation. Most of them were used for more than a week and were archived. We found that the majority of participants related their sketches to methods, classes, or packages, but not to source code artifacts with a lower level of abstraction.
@cite_24 analyzed eight workflows of software developers in an academic setting. They report on a variety of transitions that sketches and diagrams undergo. Our quantitative as well as qualitative results provided insights into the transitions of sketches. More than half of the sketches were revised. Furthermore, respondents reported on sketches being shared with others or redrawn digitally.
{ "cite_N": [ "@cite_24" ], "mid": [ "2091491804" ], "abstract": [ "Informal visualization in the form of sketching and diagramming has long been an established practise of professionals working in the fields of design, architecture, and engineering. Less is known, however, about the sketching and diagramming practices of computer scientists and software developers. Through a series of interviews with computer science researchers who develop software, we probed the purpose, contexts, and media in which they created and re-created sketches and diagrams, and the ways in which these informal visualizations evolved over time. Through our analysis we created visualizations of the observed sketching and diagramming lifecycles, which can contribute to a better understanding of the roles of sketching and diagramming in software development." ] }
1706.09172
1967694736
Sketches and diagrams play an important role in the daily work of software developers. In this paper, we investigate the use of sketches and diagrams in software engineering practice. To this end, we used both quantitative and qualitative methods. We present the results of an exploratory study in three companies and an online survey with 394 participants. Our participants included software developers, software architects, project managers, consultants, as well as researchers. They worked in different countries and on projects from a wide range of application areas. Most questions in the survey were related to the last sketch or diagram that the participants had created. Contrary to our expectations and previous work, the majority of sketches and diagrams contained at least some UML elements. However, most of them were informal. The most common purposes for creating sketches and diagrams were designing, explaining, and understanding, but analyzing requirements was also named often. More than half of the sketches and diagrams were created on analog media like paper or whiteboards and have been revised after creation. Most of them were used for more than a week and were archived. We found that the majority of participants related their sketches to methods, classes, or packages, but not to source code artifacts with a lower level of abstraction.
In another study, @cite_9 analyzed 82 whiteboard drawings in a research institution to achieve a better understanding of what they called spontaneous visualizations. Our study suggests that one reason for archiving a sketch is that it helps to visualize the implementation, issues, or processes.
{ "cite_N": [ "@cite_9" ], "mid": [ "2135482287" ], "abstract": [ "While it is still most common for information visualization researchers to develop new visualizations from a data-or taskdriven perspective, there is growing interest in understanding the types of visualizations people create by themselves for personal use. As part of this recent direction, we have studied a large collection of whiteboards in a research institution, where people make active use of combinations of words, diagrams and various types of visuals to help them further their thought processes. Our goal is to arrive at a better understanding of the nature of visuals that are created spontaneously during brainstorming, thinking, communicating, and general problem solving on whiteboards. We use the qualitative approaches of open coding, interviewing, and affinity diagramming to explore the use of recognizable and novel visuals, and the interplay between visualization and diagrammatic elements with words, numbers and labels. We discuss the potential implications of our findings on information visualization design." ] }
1706.09172
1967694736
Sketches and diagrams play an important role in the daily work of software developers. In this paper, we investigate the use of sketches and diagrams in software engineering practice. To this end, we used both quantitative and qualitative methods. We present the results of an exploratory study in three companies and an online survey with 394 participants. Our participants included software developers, software architects, project managers, consultants, as well as researchers. They worked in different countries and on projects from a wide range of application areas. Most questions in the survey were related to the last sketch or diagram that the participants had created. Contrary to our expectations and previous work, the majority of sketches and diagrams contained at least some UML elements. However, most of them were informal. The most common purposes for creating sketches and diagrams were designing, explaining, and understanding, but analyzing requirements was also named often. More than half of the sketches and diagrams were created on analog media like paper or whiteboards and have been revised after creation. Most of them were used for more than a week and were archived. We found that the majority of participants related their sketches to methods, classes, or packages, but not to source code artifacts with a lower level of abstraction.
@cite_3 conducted a survey with 280 software engineers at Microsoft. They found that paper and whiteboards were perceived as most effective for designing. Furthermore, they state that understanding the rationale behind code is the biggest problem for developers. In our study, over half of the sketches helped the respondents to understand source code or general issues.
{ "cite_N": [ "@cite_3" ], "mid": [ "2148357053" ], "abstract": [ "To understand developers' typical tools, activities, and practices and their satisfaction with each, we conducted two surveys and eleven interviews. We found that many problems arose because developers were forced to invest great effort recovering implicit knowledge by exploring code and interrupting teammates and this knowledge was only saved in their memory. Contrary to expectations that email and IM prevent expensive task switches caused by face-to-face interruptions, we found that face-to-face communication enjoys many advantages. Contrary to expectations that documentation makes understanding design rationale easy, we found that current design documents are inadequate. Contrary to expectations that code duplication involves the copy and paste of code snippets, developers reported several types of duplication. We use data to characterize these and other problems and draw implications for the design of tools for their solution." ] }
1706.09172
1967694736
Sketches and diagrams play an important role in the daily work of software developers. In this paper, we investigate the use of sketches and diagrams in software engineering practice. To this end, we used both quantitative and qualitative methods. We present the results of an exploratory study in three companies and an online survey with 394 participants. Our participants included software developers, software architects, project managers, consultants, as well as researchers. They worked in different countries and on projects from a wide range of application areas. Most questions in the survey were related to the last sketch or diagram that the participants had created. Contrary to our expectations and previous work, the majority of sketches and diagrams contained at least some UML elements. However, most of them were informal. The most common purposes for creating sketches and diagrams were designing, explaining, and understanding, but analyzing requirements was also named often. More than half of the sketches and diagrams were created on analog media like paper or whiteboards and have been revised after creation. Most of them were used for more than a week and were archived. We found that the majority of participants related their sketches to methods, classes, or packages, but not to source code artifacts with a lower level of abstraction.
@cite_28 interviewed eight software developers at Microsoft, identifying nine scenarios in which developers created and used drawings. They explored these scenarios using a survey with 427 participants, also recruited at Microsoft. We based our list of purposes for sketch creation on their scenarios (see Table ) and found similar results. However, we asked for further purposes and found analyzing requirements to be an important task. They state that the use of formal notations like UML was very low. This is consistent with Petre @cite_4 , who reports on a series of semi-structured interviews with 50 professional software developers on their UML use. She states that the majority of interviewed practitioners did not use UML, and those who did tended to use it informally and selectively. Our study confirms the informal use of UML, but we found that 57% of the sketches contained at least some UML elements.
{ "cite_N": [ "@cite_28", "@cite_4" ], "mid": [ "2110957405", "2057909950" ], "abstract": [ "Software developers are rooted in the written form of their code, yet they often draw diagrams representing their code. Unfortunately, we still know little about how and why they create these diagrams, and so there is little research to inform the design of visual tools to support developers' work. This paper presents findings from semi-structured interviews that have been validated with a structured survey. Results show that most of the diagrams had a transient nature because of the high cost of changing whiteboard sketches to electronic renderings. Diagrams that documented design decisions were often externalized in these temporary drawings and then subsequently lost. Current visualization tools and the software development practices that we observed do not solve these issues, but these results suggest several directions for future research.", "Abstract Designers, during a conceptual design process, do not just synthesize solutions that satisfy initially given requirements, but also invent design issues or requirements that capture important aspects of the given problem. How do they do this? What becomes the impetus for the invention of important issues or requirements? So-called unexpected discoveries', the acts of attending to visuo-spatial features in sketches which were not intended when they were drawn, are believed to contribute to it. The purpose of the present research is to verify this hypothesis. Analysing the cognitive processes of a practising architect in a design session, we found that in about a half of his entire design process there were bi-directional relations between unexpected discoveries and the invention of issues or requirements. Not only did unexpected discoveries become the driving force for the invention of issues or requirements, but also the occurrence of invention, in turn, tended to cause new unexpected discoveries. 
This has provided empirical evidence for two anecdotal views of designing. First, designing is a situated act; designers invent design issues or requirements in a way situated in the environment in which they design. We call inventions of this sort situated-invention (S-invention). Secondly, a design process progresses in such a way that the problem-space and the solution-space co-evolve. Further, this has brought a pedagogical implication as well as an insight about an important aspect of learning by experience in design." ] }
1706.08917
2760787585
Across a majority of pedestrian detection datasets, it is typically assumed that pedestrians will be standing upright with respect to the image coordinate system. This assumption, however, is not always valid for many vision-equipped mobile platforms such as mobile phones, UAVs or construction vehicles on rugged terrain. In these situations, the motion of the camera can cause images of pedestrians to be captured at extreme angles. This can lead to very poor pedestrian detection performance when using standard pedestrian detectors. To address this issue, we propose a Rotational Rectification Network (R2N) that can be inserted into any CNN-based pedestrian (or object) detector to adapt it to significant changes in camera rotation. The rotational rectification network uses a 2D rotation estimation module that passes rotational information to a spatial transformer network to undistort image features. To enable robust rotation estimation, we propose a Global Polar Pooling (GP-Pooling) operator to capture rotational shifts in convolutional features. Through our experiments, we show how our rotational rectification network can be used to improve the performance of the state-of-the-art pedestrian detector under heavy image rotation by up to 45%
@cite_14 introduce a general warp framework called Spatial Transformer Networks to enable affine transformations with differentiable sampling inside the network. It achieves transformation invariance within CNN architectures very efficiently without data augmentation. An important point which is often overlooked is that the design of the localization network, whose purpose is to estimate the transformation parameters, is not explored in the original work. They use two baseline CNNs as localization networks in the spatial transformer: (1) two fully-connected layers; (2) a CNN with two convolutional and two max-pooling layers. The models are evaluated on the distorted MNIST dataset. However, MNIST is small and has low-resolution images, which makes it a weak benchmark for judging the transformation invariance of a network. In other words, the design of a network with natural transformation invariance is still an open problem. Note that our work is complementary to the spatial transformer because our proposed rotation estimation module with GP-Pooling operators can be viewed as an expert localization network with natural rotation invariance.
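As a toy illustration of the grid-generator-plus-sampler idea behind spatial transformers (using nearest-neighbour rather than the paper's differentiable bilinear sampling; the function name and setup are ours, not from the paper), a rotation warp can be written by inverse-mapping each output coordinate into the input:

```python
import numpy as np

def rotate_nn(img, theta):
    """Rotate a square image by theta radians about its center.

    Each output pixel is inverse-rotated back into the input and
    sampled with nearest neighbour (out-of-bounds pixels stay zero).
    """
    n = img.shape[0]
    c = (n - 1) / 2.0
    out = np.zeros_like(img)
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    for r in range(n):
        for col in range(n):
            x, y = col - c, r - c
            sx = cos_t * x + sin_t * y + c   # source column
            sy = -sin_t * x + cos_t * y + c  # source row
            si, sj = int(round(sy)), int(round(sx))
            if 0 <= si < n and 0 <= sj < n:
                out[r, col] = img[si, sj]
    return out
```

In an actual spatial transformer, theta would come from the localization network and the sampler would be bilinear so that gradients can flow through the warp.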
{ "cite_N": [ "@cite_14" ], "mid": [ "603908379" ], "abstract": [ "Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations." ] }
1706.08917
2760787585
Across a majority of pedestrian detection datasets, it is typically assumed that pedestrians will be standing upright with respect to the image coordinate system. This assumption, however, is not always valid for many vision-equipped mobile platforms such as mobile phones, UAVs or construction vehicles on rugged terrain. In these situations, the motion of the camera can cause images of pedestrians to be captured at extreme angles. This can lead to very poor pedestrian detection performance when using standard pedestrian detectors. To address this issue, we propose a Rotational Rectification Network (R2N) that can be inserted into any CNN-based pedestrian (or object) detector to adapt it to significant changes in camera rotation. The rotational rectification network uses a 2D rotation estimation module that passes rotational information to a spatial transformer network to undistort image features. To enable robust rotation estimation, we propose a Global Polar Pooling (GP-Pooling) operator to capture rotational shifts in convolutional features. Through our experiments, we show how our rotational rectification network can be used to improve the performance of the state-of-the-art pedestrian detector under heavy image rotation by up to 45%
H-Nets @cite_19 replace regular CNN filters with complex circular harmonics and are also able to capture continuous rotational changes. However, H-Nets assume the learned filters lie in the harmonic wavelet space, whereas GP-Pooling does not impose any assumption on the image filters. Moreover, H-Nets are designed to learn locally rotation-robust filters, while the GP-Pooling operator focuses on global rotational changes. More importantly, most existing methods only test rotational robustness on synthetic tasks such as digit recognition on MNIST. Our proposed GP-Pooling operator succeeds on a real-world task, namely pedestrian detection on the Caltech Pedestrians dataset.
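The core intuition — pooling over angular bins around the feature-map center turns a global rotation into a circular shift of the pooled vector — can be sketched as below. This is a simplified stand-in for the paper's GP-Pooling operator (max pooling over four angular bins, with no radial bins), not its actual implementation:

```python
import numpy as np

def angular_max_pool(feat, nbins=4):
    """Max-pool a square feature map over angular bins around its center."""
    n = feat.shape[0]
    c = (n - 1) / 2.0
    rows, cols = np.indices(feat.shape)
    x = cols - c
    y = c - rows  # y axis pointing up
    ang = np.degrees(np.arctan2(y, x)) % 360.0
    bins = np.floor(ang / (360.0 / nbins)).astype(int) % nbins
    return np.array([feat[bins == b].max() for b in range(nbins)])
```

With four bins, rotating the input by 90 degrees shifts the pooled vector by exactly one position, so a downstream layer can read a global rotation off as a shift.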
{ "cite_N": [ "@cite_19" ], "mid": [ "2576915720" ], "abstract": [ "Translating or rotating an input image should not affect the results of many computer vision tasks. Convolutional neural networks (CNNs) are already translation equivariant: input image translations produce proportionate feature map translations. This is not the case for rotations. Global rotation equivariance is typically sought through data augmentation, but patch-wise equivariance is more difficult. We present Harmonic Networks or H-Nets, a CNN exhibiting equivariance to patch-wise translation and 360-rotation. We achieve this by replacing regular CNN filters with circular harmonics, returning a maximal response and orientation for every receptive field patch. H-Nets use a rich, parameter-efficient and fixed computational complexity representation, and we show that deep feature maps within the network encode complicated rotational invariants. We demonstrate that our layers are general enough to be used in conjunction with the latest architectures and techniques, such as deep supervision and batch normalization. We also achieve state-of-the-art classification on rotated-MNIST, and competitive results on other benchmark challenges." ] }
1706.08917
2760787585
Across a majority of pedestrian detection datasets, it is typically assumed that pedestrians will be standing upright with respect to the image coordinate system. This assumption, however, is not always valid for many vision-equipped mobile platforms such as mobile phones, UAVs or construction vehicles on rugged terrain. In these situations, the motion of the camera can cause images of pedestrians to be captured at extreme angles. This can lead to very poor pedestrian detection performance when using standard pedestrian detectors. To address this issue, we propose a Rotational Rectification Network (R2N) that can be inserted into any CNN-based pedestrian (or object) detector to adapt it to significant changes in camera rotation. The rotational rectification network uses a 2D rotation estimation module that passes rotational information to a spatial transformer network to undistort image features. To enable robust rotation estimation, we propose a Global Polar Pooling (GP-Pooling) operator to capture rotational shifts in convolutional features. Through our experiments, we show how our rotational rectification network can be used to improve the performance of the state-of-the-art pedestrian detector under heavy image rotation by up to 45%
More recent detection methods are based on region proposals. They perform detection by classifying region proposals and regressing the bounding boxes simultaneously. For example, Ren et al. @cite_8 introduce a Region Proposal Network (RPN) to enable nearly cost-free region proposals and propose a unified detection framework. Liu et al. @cite_27 introduce default boxes that tile the input image and regress an offset for each box in their Single Shot MultiBox Detector (SSD). In the context of pedestrian detection, Zhang et al. @cite_28 analyze the performance of Faster R-CNN on pedestrian detection and propose a simple yet powerful RPN-based baseline. In many of these methods, region proposals are represented by axis-aligned rectangles, which are not suitable for detecting pedestrians imaged at an angle. To address this issue, Ma et al. @cite_16 propose a framework to detect text at arbitrary orientations in natural scene images. They present Rotation Region Proposal Networks (RRPN) to generate rectangular proposals at different rotations instead of axis-aligned proposals only. This approach is limited because RRPN can only deal with a discrete set of rotations and is only applicable to proposal-based detection networks.
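The rotated-anchor idea behind RRPN can be sketched as enumerating scales, aspect ratios, and a discrete set of orientations at each feature-map location; the particular values below are illustrative, not the paper's configuration:

```python
import itertools

def rotated_anchors(cx, cy, scales, ratios, angles):
    """Enumerate rotated anchors (cx, cy, w, h, theta) at one location."""
    anchors = []
    for s, r, t in itertools.product(scales, ratios, angles):
        w = s * r ** 0.5  # width/height chosen so the anchor area stays s*s
        h = s / r ** 0.5
        anchors.append((cx, cy, w, h, t))
    return anchors
```

A network head then scores and regresses offsets for each of the len(scales) * len(ratios) * len(angles) anchors per location; axis-aligned anchors are the special case angles=[0].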
{ "cite_N": [ "@cite_28", "@cite_27", "@cite_16", "@cite_8" ], "mid": [ "2497039038", "2193145675", "2963299736", "2613718673" ], "abstract": [ "Detecting pedestrian has been arguably addressed as a special topic beyond general object detection. Although recent deep learning object detectors such as Fast Faster R-CNN have shown excellent performance for general object detection, they have limited success for detecting pedestrian, and previous leading pedestrian detectors were in general hybrid methods combining hand-crafted and deep convolutional features. In this paper, we investigate issues involving Faster R-CNN for pedestrian detection. We discover that the Region Proposal Network (RPN) in Faster R-CNN indeed performs well as a stand-alone pedestrian detector, but surprisingly, the downstream classifier degrades the results. We argue that two reasons account for the unsatisfactory accuracy: (i) insufficient resolution of feature maps for handling small instances, and (ii) lack of any bootstrapping strategy for mining hard negative examples. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection, using an RPN followed by boosted forests on shared, high-resolution convolutional feature maps. We comprehensively evaluate this method on several benchmarks (Caltech, INRIA, ETH, and KITTI), presenting competitive accuracy and good speed. Code will be made publicly available.", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. 
Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For 300×300 input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for 512×512 input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd.", "Deep Convolution Neural Networks (DCNNs) are capable of learning unprecedentedly effective image representations. However, their ability in handling significant local and global image rotations remains limited. In this paper, we propose Active Rotating Filters (ARFs) that actively rotate during convolution and produce feature maps with location and orientation explicitly encoded. An ARF acts as a virtual filter bank containing the filter itself and its multiple unmaterialised rotated versions. During back-propagation, an ARF is collectively updated using errors from all its rotated versions. DCNNs using ARFs, referred to as Oriented Response Networks (ORNs), can produce within-class rotation-invariant deep features while maintaining inter-class discrimination for classification tasks. 
The oriented response produced by ORNs can also be used for image and object orientation estimation tasks. Over multiple state-of-the-art DCNN architectures, such as VGG, ResNet, and STN, we consistently observe that replacing regular filters with the proposed ARFs leads to significant reduction in the number of network parameters and improvement in classification performance. We report the best results on several commonly used benchmarks.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https://github.com/ShaoqingRen/faster_rcnn." ] }
1706.08891
2732001288
Wayfinding signs play an important role in guiding users to navigate in a virtual environment and in helping pedestrians to find their ways in a real-world architectural site. Conventionally, the wayfinding design of a virtual environment is created manually, so as the wayfinding design of a real-world architectural site. The many possible navigation scenarios, as well as the interplay between signs and human navigation, can make the manual design process overwhelming and non-trivial. As a result, creating a wayfinding design for a typical layout can take months to several years. In this paper, we introduce the Way to Go! approach for automatically generating a wayfinding design for a given layout. The designer simply has to specify some navigation scenarios; our approach will automatically generate an optimized wayfinding design with signs properly placed considering human agents' visibility and possibility of making mistakes during a navigation. We demonstrate the effectiveness of our approach in generating wayfinding designs for different layouts such as a train station, a downtown and a canyon. We evaluate our results by comparing different wayfinding designs and show that our optimized wayfinding design can guide pedestrians to their destinations effectively and efficiently. Our approach can also help the designer visualize the accessibility of a destination from different locations, and correct any "blind zone" with additional signs.
The process of wayfinding involves four major steps @cite_3 : orientation, route decision, route monitoring, and destination recognition. Orientation refers to determining one's current location. Route decision refers to selecting paths to navigate to the destination. Route monitoring refers to continuously verifying that the path indeed leads to the destination. Finally, destination recognition refers to confirming that the destination has been reached. Our goal in this work is to automatically generate a wayfinding design for a given environment to facilitate the above wayfinding steps.
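Read as a control loop, the four steps (using their standard names: orientation, route decision, route monitoring, destination recognition) might look like the toy sketch below; the function names and the 1-D corridor in the usage note are our illustrative assumptions, not part of the paper:

```python
def wayfind(position, destination, choose_next, on_path, max_steps=100):
    """Toy wayfinding loop over the four steps described above."""
    for _ in range(max_steps):
        if position == destination:                # destination recognition
            return position
        step = choose_next(position, destination)  # route decision
        if not on_path(step, destination):         # route monitoring failed:
            continue                               # re-orient and decide again
        position = step                            # orientation updates by moving
    return position
```

For example, on a 1-D corridor of integer positions, `choose_next = lambda p, d: p + (1 if d > p else -1)` with `on_path` always true walks the agent to its destination.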
{ "cite_N": [ "@cite_3" ], "mid": [ "1563185315" ], "abstract": [ "Whether a marketing campaign or a museum exhibit, a video game or a complex control system, the design we see is the culmination of many concepts and practices brought together from a variety of disciplines. Because no one can be an expert on everything, designers have always had to scramble to find the information and know-how required to make a design work-until now. Universal Principles of Design is the first comprehensive, cross-disciplinary encyclopedia of design. Richly illustrated and easy to navigate, it pairs clear explanations of every design concept with visual examples of the concepts applied in practice. From the \"80 20\" rule to chunking, from baby-face bias to Occam's razor, and from self-similarity to storytelling, every major design concept is defined and illustrated for readers to expand their knowledge. This landmark reference will become the standard for designers, engineers, architects, and students who seek to broaden and improve their design expertise." ] }
1706.08891
2732001288
Wayfinding signs play an important role in guiding users to navigate in a virtual environment and in helping pedestrians to find their ways in a real-world architectural site. Conventionally, the wayfinding design of a virtual environment is created manually, so as the wayfinding design of a real-world architectural site. The many possible navigation scenarios, as well as the interplay between signs and human navigation, can make the manual design process overwhelming and non-trivial. As a result, creating a wayfinding design for a typical layout can take months to several years. In this paper, we introduce the Way to Go! approach for automatically generating a wayfinding design for a given layout. The designer simply has to specify some navigation scenarios; our approach will automatically generate an optimized wayfinding design with signs properly placed considering human agents' visibility and possibility of making mistakes during a navigation. We demonstrate the effectiveness of our approach in generating wayfinding designs for different layouts such as a train station, a downtown and a canyon. We evaluate our results by comparing different wayfinding designs and show that our optimized wayfinding design can guide pedestrians to their destinations effectively and efficiently. Our approach can also help the designer visualize the accessibility of a destination from different locations, and correct any "blind zone" with additional signs.
Wayfinding Map Generation. In computer graphics, there are interesting approaches for automatically generating tourist brochures @cite_56 and destination maps @cite_62 . Though these maps are intended for real-world navigation use, they could potentially be used to assist navigation in virtual environments. Given a map and some desired destinations, these approaches select a subset of roads to reach the destinations, and visualize the important routing instructions on a generated map which is intuitive to use. Our approach is inspired by these approaches, but focuses on optimizing the placement of wayfinding signs in the layout so as to guide pedestrians to reach their destinations easily. Combining automatically generated maps with the wayfinding signs generated by our approach can potentially provide users with effective wayfinding aids to navigate smoothly in virtual environments.
{ "cite_N": [ "@cite_62", "@cite_56" ], "mid": [ "2086631382", "2127684608" ], "abstract": [ "Destination maps are navigational aids designed to show anyone within a region how to reach a location (the destination). Hand-designed destination maps include only the most important roads in the region and are non-uniformly scaled to ensure that all of the important roads from the highways to the residential streets are visible. We present the first automated system for creating such destination maps based on the design principles used by mapmakers. Our system includes novel algorithms for selecting the important roads based on mental representations of road networks, and for laying out the roads based on a non-linear optimization procedure. The final layouts are labeled and rendered in a variety of styles ranging from informal to more formal map styles. The system has been used to generate over 57,000 destination maps by thousands of users. We report feedback from both a formal and informal user study, as well as provide quantitative measures of success.", "We present a novel framework for the automatic generation of tourist brochures that include routing instructions and additional information presented in the form of so-called detail lenses. The first contribution of this paper is the automatic creation of layouts for the brochures. Our approach is based on the minimization of an energy function that combines multiple goals: positioning of the lenses as close as possible to the corresponding region shown in an overview map, keeping the number of lenses low, and an efficient numbering of the lenses. The second contribution is a route-aware simplification of the graph of streets used for traveling between the points of interest POIs. This is done by reducing the graph consisting of all shortest paths through the minimization of an energy function. The output is a subset of street segments that enable traveling between all the POIs without considerable detours, while at the same time guaranteeing a clutter-free visualization." ] }
1706.08891
2732001288
Wayfinding signs play an important role in guiding users to navigate in a virtual environment and in helping pedestrians to find their ways in a real-world architectural site. Conventionally, the wayfinding design of a virtual environment is created manually, so as the wayfinding design of a real-world architectural site. The many possible navigation scenarios, as well as the interplay between signs and human navigation, can make the manual design process overwhelming and non-trivial. As a result, creating a wayfinding design for a typical layout can take months to several years. In this paper, we introduce the Way to Go! approach for automatically generating a wayfinding design for a given layout. The designer simply has to specify some navigation scenarios; our approach will automatically generate an optimized wayfinding design with signs properly placed considering human agents' visibility and possibility of making mistakes during a navigation. We demonstrate the effectiveness of our approach in generating wayfinding designs for different layouts such as a train station, a downtown and a canyon. We evaluate our results by comparing different wayfinding designs and show that our optimized wayfinding design can guide pedestrians to their destinations effectively and efficiently. Our approach can also help the designer visualize the accessibility of a destination from different locations, and correct any "blind zone" with additional signs.
Layout design is an important problem in computer graphics. A layout typically consists of a number of sites connected by paths, with each site serving a different purpose. Computer-generated layouts can be used for creating virtual environments where virtual agents and human users can navigate for simulation and entertainment purposes. Roads can be generated procedurally given a natural landscape with rivers and hills @cite_41 . Computationally generated layouts can also be used for architectural design @cite_32 @cite_4 @cite_59 and urban planning @cite_25 @cite_61 @cite_12 @cite_21 . Refer to the survey @cite_36 for a comprehensive review of the state-of-the-art procedural modeling techniques for generating layout designs for virtual environments.
{ "cite_N": [ "@cite_61", "@cite_4", "@cite_41", "@cite_36", "@cite_21", "@cite_32", "@cite_59", "@cite_25", "@cite_12" ], "mid": [ "2046890933", "2058387381", "", "1549847324", "", "1977371710", "1967720527", "", "2073549223" ], "abstract": [ "The main contribution of our work is in closing the loop between behavioral and geometrical modeling of cities. Editing of urban design variables is performed intuitively and visually using a graphical user interface. Any design variable can be constrained or changed. The design process uses an iterative dynamical system for reaching equilibrium: a state where the demands of behavioral modeling match those of geometrical modeling. 3D models are generated in a few seconds and conform to plausible urban behavior and urban geometry. Our framework includes an interactive agent-based behavioral modeling system as well as adaptive geometry generation algorithms. We demonstrate interactive and incremental design and editing for synthetic urban spaces spanning over 200 square kilometers.", "Good building layouts are required to conform to regulatory guidelines, while meeting certain quality measures. While different methods can sample the space of such good layouts, there exists little support for a user to understand and systematically explore the samples. Starting from a discrete set of good layouts, we analytically characterize the local shape space of good layouts around each initial layout, compactly encode these spaces, and link them to support transitions across the different local spaces. We represent such transitions in the form of a portal graph. The user can then use the portal graph, along with the family of local shape spaces, to globally and locally explore the space of good building layouts. We use our framework on a variety of different test scenarios to showcase an intuitive design, navigation, and exploration interface.", "", "Procedural modelling deals with semi-automatic content generation by means of a program or procedure. Among other advantages, its data compression and the potential to generate a large variety of detailed content with reduced human intervention, have made procedural modelling attractive for creating virtual environments increasingly used in movies, games and simulations. We survey procedural methods that are useful to generate features of virtual worlds, including terrains, vegetation, rivers, roads, buildings and entire cities. In this survey, we focus particularly on the degree of intuitive control and of interactivity offered by each procedural method, because these properties are instrumental for their typical users: designers and artists. We identify the most promising research results that have been recently achieved, but we also realize that there is far from widespread acceptance of procedural methods among non-technical, creative professionals. We conclude by discussing some of the most important challenges of procedural modelling.", "", "We present a method for automated generation of building layouts for computer graphics applications. Our approach is motivated by the layout design process developed in architecture. Given a set of high-level requirements, an architectural program is synthesized using a Bayesian network trained on real-world data. The architectural program is realized in a set of floor plans, obtained through stochastic optimization. The floor plans are used to construct a complete three-dimensional building with internal structure. We demonstrate a variety of computer-generated buildings produced by the presented approach.", "In this paper, we tackle the problem of tiling a domain with a set of deformable templates. A valid solution to this problem completely covers the domain with templates such that the templates do not overlap. We generalize existing specialized solutions and formulate a general layout problem by modeling important constraints and admissible template deformations. Our main idea is to break the layout algorithm into two steps: a discrete step to lay out the approximate template positions and a continuous step to refine the template shapes. Our approach is suitable for a large class of applications, including floorplans, urban layouts, and arts and design.", "", "We propose a framework that enables adding intuitive high level control to an existing urban procedural model. In particular, we provide a mechanism to interactively edit urban models, a task which is important to stakeholders in gaming, urban planning, mapping, and navigation services. Procedural modeling allows a quick creation of large complex 3D models, but controlling the output is a well-known open problem. Thus, while forward procedural modeling has thrived, in this paper we add to the arsenal an inverse modeling tool. Users, unaware of the rules of the underlying urban procedural model, can alternatively specify arbitrary target indicators to control the modeling process. The system itself will discover how to alter the parameters of the urban procedural model so as to produce the desired 3D output. We label this process inverse design." ] }
1706.08891
2732001288
Wayfinding signs play an important role in guiding users to navigate in a virtual environment and in helping pedestrians to find their ways in a real-world architectural site. Conventionally, the wayfinding design of a virtual environment is created manually, so as the wayfinding design of a real-world architectural site. The many possible navigation scenarios, as well as the interplay between signs and human navigation, can make the manual design process overwhelming and non-trivial. As a result, creating a wayfinding design for a typical layout can take months to several years. In this paper, we introduce the Way to Go! approach for automatically generating a wayfinding design for a given layout. The designer simply has to specify some navigation scenarios; our approach will automatically generate an optimized wayfinding design with signs properly placed considering human agents' visibility and possibility of making mistakes during a navigation. We demonstrate the effectiveness of our approach in generating wayfinding designs for different layouts such as a train station, a downtown and a canyon. We evaluate our results by comparing different wayfinding designs and show that our optimized wayfinding design can guide pedestrians to their destinations effectively and efficiently. Our approach can also help the designer visualize the accessibility of a destination from different locations, and correct any "blind zone" with additional signs.
An important consideration in designing a layout is the navigation experience of the pedestrians. A recent approach @cite_26 uses crowd simulation to generate mid-scale layouts that are optimal with respect to human navigation properties such as mobility, accessibility, and coziness. However, one important consideration for navigation is still missing: the wayfinding experience of the pedestrians in the generated environments. We argue that such generated layouts are navigation-aware only if wayfinding signs are properly placed in the layouts.
{ "cite_N": [ "@cite_26" ], "mid": [ "2463843416" ], "abstract": [ "We propose a novel approach for designing mid-scale layouts by optimizing with respect to human crowd properties. Given an input layout domain such as the boundary of a shopping mall, our approach synthesizes the paths and sites by optimizing three metrics that measure crowd flow properties: mobility, accessibility, and coziness. While these metrics are straightforward to evaluate by a full agent-based crowd simulation, optimizing a layout usually requires hundreds of evaluations, which would require a long time to compute even using the latest crowd simulation techniques. To overcome this challenge, we propose a novel data-driven approach where nonlinear regressors are trained to capture the relationship between the agent-based metrics, and the geometrical and topological features of a layout. We demonstrate that by using the trained regressors, our approach can synthesize crowd-aware layouts and improve existing layouts with better crowd flow properties." ] }
1706.08801
2953072744
We propose an algorithm to detect approximate reflection symmetry present in a set of volumetrically distributed points belonging to @math containing a distorted reflection symmetry pattern. We pose the problem of detecting approximate reflection symmetry as the problem of establishing the correspondences between the points which are reflections of each other and determining the reflection symmetry transformation. We formulate an optimization framework in which the problem of establishing the correspondences amounts to solving a linear assignment problem and the problem of determining the reflection symmetry transformation amounts to an optimization problem on a smooth Riemannian product manifold. The proposed approach estimates the symmetry from the distribution of the points and is descriptor independent. We evaluate the robustness of our approach by varying the amount of distortion in a perfect reflection symmetry pattern where we perturb each point by a different amount of perturbation. We demonstrate the effectiveness of the method by applying it to the problem of 2-D reflection symmetry detection along with relevant comparisons.
These algorithms primarily rely on local image features such as edge orientations, curvatures, and gradients. Recent works ( @cite_49 @cite_40 @cite_28 @cite_31 @cite_51 @cite_52 @cite_11 @cite_27 @cite_18 @cite_12 @cite_26 @cite_42 ) present excellent algorithms for reflection symmetry detection in images. Given an accurate detection of keypoints, the algorithm developed in this work can be used to detect reflection symmetry in images without using local features.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_28", "@cite_42", "@cite_52", "@cite_40", "@cite_27", "@cite_49", "@cite_31", "@cite_51", "@cite_12", "@cite_11" ], "mid": [ "174734558", "2520607739", "2114379931", "1529464419", "2099363089", "2133814640", "1965837944", "2736736899", "1566328901", "2217188300", "2127952943", "2003233837" ], "abstract": [ "In this work we propose a learning-based approach to symmetry detection in natural images. We focus on ribbon-like structures, i.e. contours marking local and approximate reflection symmetry and make three contributions to improve their detection. First, we create and make publicly available a ground-truth dataset for this task by building on the Berkeley Segmentation Dataset. Second, we extract features representing multiple complementary cues, such as grayscale structure, color, texture, and spectral clustering information. Third, we use supervised learning to learn how to combine these cues, and employ MIL to accommodate the unknown scale and orientation of the symmetric structures. We systematically evaluate the performance contribution of each individual component in our pipeline, and demonstrate that overall we consistently improve upon results obtained using existing alternatives.", "Symmetry in visual data represents repeated patterns or shapes that is easily found in natural and human-made objects. Symmetry pattern on an object works as a salient visual feature attracting human attention and letting the object to be easily recognized. Most existing symmetry detection methods are based on sparsely detected local features describing the appearance of their neighborhood, which have difficulty in capturing object structure mostly supported by edges and contours. In this work, we propose a new reflection symmetry detection method extracting robust 4-dimensional Appearance of Structure descriptors based on a set of outstanding neighbourhood edge segments in multiple scales. Our experimental evaluations on multiple public symmetry detection datasets show promising reflection symmetry detection results on challenging real world and synthetic images.", "Symmetry is a powerful shape regularity that's been exploited by perceptual grouping researchers in both human and computer vision to recover part structure from an image without a priori knowledge of scene content. Drawing on the concept of a medial axis, defined as the locus of centers of maximal inscribed discs that sweep out a symmetric part, we model part recovery as the search for a sequence of deformable maximal inscribed disc hypotheses generated from a multiscale super pixel segmentation, a framework proposed by LEV09. However, we learn affinities between adjacent super pixels in a space that's invariant to bending and tapering along the symmetry axis, enabling us to capture a wider class of symmetric parts. Moreover, we introduce a global cost that perceptually integrates the hypothesis space by combining a pairwise and a higher-level smoothing term, which we minimize globally using dynamic programming. The new framework is demonstrated on two datasets, and is shown to significantly outperform the baseline LEV09.", "A system is presented that takes a single image as an input (e.g. showing the interior of St.Peter's Basilica) and automatically detects an arbitrarily oriented symmetry plane in 3D space. Given this symmetry plane a second camera is hallucinated that serves as a virtual second image for dense 3D reconstruction, where the point of view for reconstruction can be chosen on the symmetry plane. This naturally creates a symmetry in the matching costs for dense stereo. Alternatively, we also show how to enforce the 3D symmetry in dense depth estimation for the original image. The two representations are qualitatively compared on several real world images, that also validate our fully automatic approach for dense single image reconstruction.", "We present a new technique for extracting local features from images of architectural scenes, based on detecting and representing local symmetries. These new features are motivated by the fact that local symmetries, at different scales, are a fundamental characteristic of many urban images, and are potentially more invariant to large appearance changes than lower-level features such as SIFT. Hence, we apply these features to the problem of matching challenging pairs of photos of urban scenes. Our features are based on simple measures of local bilateral and rotational symmetries computed using local image operations. These measures are used both for feature detection and for computing descriptors. We demonstrate our method on a challenging new dataset containing image pairs exhibiting a range of dramatic variations in lighting, age, and rendering style, and show that our features can improve matching performance for this difficult task.", "Skeletonization algorithms typically decompose an object's silhouette into a set of symmetric parts, offering a powerful representation for shape categorization. However, having access to an object's silhouette assumes correct figure-ground segmentation, leading to a disconnect with the mainstream categorization community, which attempts to recognize objects from cluttered images. In this paper, we present a novel approach to recovering and grouping the symmetric parts of an object from a cluttered scene. We begin by using a multiresolution superpixel segmentation to generate medial point hypotheses, and use a learned affinity function to perceptually group nearby medial points likely to belong to the same medial branch. In the next stage, we learn higher granularity affinity functions to group the resulting medial branches likely to belong to the same object. The resulting framework yields a skeletal approximation that is free of many of the instabilities that occur with traditional skeletons. More importantly, it does not require a closed contour, enabling the application of skeleton-based categorization systems to more realistic imagery.", "We propose a novel approach for detecting partial reflectional symmetry in images. Our method consists of two principal stages: candidate selection and validation. In the first step, candidates for mirror-symmetric patches are identified using an existing heuristic procedure based on Hough voting. The candidates are then validated using a principled statistical procedure inspired from the a contrario theory, which minimizes the number of false positives. Our algorithm uses integral image properties to enhance the execution time.", "Natural images often exhibit symmetries that should be taken into account when editing them. In this paper we present Nautilus --- a method for automatically identifying symmetric regions in an image along with their corresponding symmetry transformations. We compute dense local similarity symmetry transformations using a novel variant of the Generalised PatchMatch algorithm that uses Metropolis-Hastings sampling. We combine and refine these local symmetries using an extended Lucas-Kanade algorithm to compute regional transformations and their spatial extents. Our approach produces dense estimates of complex symmetries that are combinations of translation, rotation, scale, and reflection under perspective distortion. This enables a number of automatic symmetry-aware image editing applications including inpainting, rectification, beautification, and segmentation, and we demonstrate state-of-the-art applications for each of them.", "A novel and efficient method is presented for grouping feature points on the basis of their underlying symmetry and characterising the symmetries present in an image. We show how symmetric pairs of features can be efficiently detected, how the symmetry bonding each pair is extracted and evaluated, and how these can be grouped into symmetric constellations that specify the dominant symmetries present in the image. Symmetries over all orientations and radii are considered simultaneously, and the method is able to detect local or global symmetries, locate symmetric figures in complex backgrounds, detect bilateral or rotational symmetry, and detect multiple incidences of symmetry.", "Symmetry, as one of the key components of Gestalt theory, provides an important mid-level cue that serves as input to higher visual processes such as segmentation. In this work, we propose a complete approach that links the detection of curved reflection symmetries to produce symmetry-constrained segments of structures regions in real images with clutter. For curved reflection symmetry detection, we leverage on patch-based symmetric features to train a Structured Random Forest classifier that detects multiscaled curved symmetries in 2D images. Next, using these curved symmetries, we modulate a novel symmetry-constrained foreground-background segmentation by their symmetry scores so that we enforce global symmetrical consistency in the final segmentation. This is achieved by imposing a pairwise symmetry prior that encourages symmetric pixels to have the same labels over a MRF-based representation of the input image edges, and the final segmentation is obtained via graph-cuts. Experimental results over four publicly available datasets containing annotated symmetric structures: 1) SYMMAX-300 [38], 2) BSD-Parts, 3) Weizmann Horse (both from [18]) and 4) NY-roads [35] demonstrate the approach's applicability to different environments with state-of-the-art performance.", "We present a spectral approach for detecting and analyzing rotational and reflectional symmetries in n-dimensions. Our main contribution is the derivation of a symmetry detection and analysis scheme for sets of points IRn and its extension to image analysis by way of local features. Each object is represented by a set of points S ∈ IRn, where the symmetry is manifested by the multiple self-alignments of S . The alignment problem is formulated as a quadratic binary optimization problem, with an efficient solution via spectral relaxation. For symmetric objects, this results in a multiplicity of eigenvalues whose corresponding eigenvectors allow the detection and analysis of both types of symmetry. We improve the scheme's robustness by incorporating geometrical constraints into the spectral analysis. Our approach is experimentally verified by applying it to 2D and 3D synthetic objects as well as real images.", "Reflection symmetry detection receives increasing attentions in recent years. The state-of-the-art algorithms mainly use the matching of intensity-based features (such as the SIFT) within a single image to find symmetry axes. This paper proposes a novel approach by establishing the correspondence of locally affine invariant edge-based features, which are superior to the intensity based in the aspects that it is insensitive to illumination variations, and applicable to textureless objects. The locally affine invariance is achieved by simple linear algebra for efficient and robust computations, making the algorithm suitable for detections under object distortions like perspective projection. Commonly used edge detectors and a voting process are, respectively, used before and after the edge description and matching steps to form a complete reflection detection pipeline. Experiments are performed using synthetic and real-world images with both multiple and single reflection symmetry axis. The test results are compared with existing algorithms to validate the proposed method." ] }
1706.08801
2953072744
We propose an algorithm to detect approximate reflection symmetry present in a set of volumetrically distributed points belonging to @math containing a distorted reflection symmetry pattern. We pose the problem of detecting approximate reflection symmetry as the problem of establishing the correspondences between the points which are reflections of each other and determining the reflection symmetry transformation. We formulate an optimization framework in which the problem of establishing the correspondences amounts to solving a linear assignment problem and the problem of determining the reflection symmetry transformation amounts to an optimization problem on a smooth Riemannian product manifold. The proposed approach estimates the symmetry from the distribution of the points and is descriptor independent. We evaluate the robustness of our approach by varying the amount of distortion in a perfect reflection symmetry pattern where we perturb each point by a different amount of perturbation. We demonstrate the effectiveness of the method by applying it to the problem of 2-D reflection symmetry detection along with relevant comparisons.
Our algorithm resembles the Iterative Closest Point (ICP) algorithm ( @cite_3 @cite_43 ) only in that we also alternate between optimizing the reflection transformation (rotation and translation in ICP) and the correspondences between mirror-symmetric points (correspondences between the points of two different shapes in ICP). Our algorithm differs from ICP in that the error function over the transformation parameters is different from that of our problem. Furthermore, our matching is bijective, since we impose bijectivity constraints in our optimization framework; these constraints ensure that each point has exactly one mirror-image point.
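The alternation described above can be sketched for 2-D point sets as follows. This is a deliberately simplified illustration, not the paper's method: the Riemannian optimization over the mirror parameters is replaced by a coarse sweep over candidate mirror normals through the centroid, and the bijective correspondence (linear assignment) step is solved by brute force over permutations, which is only feasible for tiny point sets.

```python
import itertools
import math

def reflect(p, c, n):
    # Reflect point p across the line through c with unit normal n (2-D).
    d = (p[0] - c[0]) * n[0] + (p[1] - c[1]) * n[1]
    return (p[0] - 2 * d * n[0], p[1] - 2 * d * n[1])

def best_assignment(points, c, n):
    # Correspondence step: bijection minimizing the total squared error.
    # Brute force over permutations -- fine only for tiny point sets.
    refl = [reflect(p, c, n) for p in points]
    best_perm, best_cost = None, float("inf")
    for perm in itertools.permutations(range(len(points))):
        cost = sum((refl[i][0] - points[j][0]) ** 2 +
                   (refl[i][1] - points[j][1]) ** 2
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost

def detect_reflection_symmetry(points, n_angles=180):
    # Transformation step in miniature: sweep candidate mirror normals
    # through the centroid and keep the one whose optimal bijection has
    # the lowest residual.
    c = (sum(p[0] for p in points) / len(points),
         sum(p[1] for p in points) / len(points))
    best = None
    for k in range(n_angles):
        a = math.pi * k / n_angles
        n = (math.cos(a), math.sin(a))
        perm, cost = best_assignment(points, c, n)
        if best is None or cost < best[2]:
            best = (n, perm, cost)
    return best

# A square with side 2 is mirror-symmetric about the vertical line x = 1.
pts = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
normal, perm, cost = detect_reflection_symmetry(pts)
```

For the square above, the first zero-residual candidate is the normal `(1.0, 0.0)` (the vertical mirror through the centroid), with the bijection pairing each corner with its horizontal mirror image.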
{ "cite_N": [ "@cite_43", "@cite_3" ], "mid": [ "2119851068", "2049981393" ], "abstract": [ "The ICP (Iterative Closest Point) algorithm is widely used for geometric alignment of three-dimensional models when an initial estimate of the relative pose is known. Many variants of ICP have been proposed, affecting all phases of the algorithm from the selection and matching of points to the minimization strategy. We enumerate and classify many of these variants, and evaluate their effect on the speed with which the correct alignment is reached. In order to improve convergence for nearly-flat meshes with small features, such as inscribed surfaces, we introduce a new variant based on uniform sampling of the space of normals. We conclude by proposing a combination of ICP variants optimized for high speed. We demonstrate an implementation that is able to align two range images in a few tens of milliseconds, assuming a good initial guess. This capability has potential application to real-time 3D model acquisition and model-based tracking.", "The authors describe a general-purpose, representation-independent method for the accurate and computationally efficient registration of 3-D shapes including free-form curves and surfaces. The method handles the full six degrees of freedom and is based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point. The ICP algorithm always converges monotonically to the nearest local minimum of a mean-square distance metric, and the rate of convergence is rapid during the first few iterations. Therefore, given an adequate set of initial rotations and translations for a particular class of objects with a certain level of 'shape complexity', one can globally minimize the mean-square distance metric over all six degrees of freedom by testing each initial registration. One important application of this method is to register sensed data from unfixtured rigid objects with an ideal geometric model, prior to shape inspection. Experimental results show the capabilities of the registration algorithm on point sets, curves, and surfaces. >" ] }
1706.08257
2731344966
We present a semiclassical proof of the weak gravity conjecture in @math spacetime dimensions for scalar matter gauged under a @math gauge group. We compute the non-perturbative macroscopic entropy of a scalar field in an extremal black hole background at the level of linearized backreaction on the metric. The scalar field is assumed to violate or saturate the weak gravity conjecture. The scalar contributes a logarithmic correction to the entropy in the black hole geometry that outgrows the classical contribution. We demonstrate that the entropy of the gauged scalar violates the generalized second law in the limit of large black hole charge. Our result suggests that entropy inequalities may directly discriminate between effective field theories that live in the landscape versus the swampland.
Qualitatively similar results to our entropy calculation appear in @cite_17 . However, not all of their quantitative results match ours exactly. We believe this stems from the formalism they use to calculate the entropy of the black hole, which is not exactly equivalent to ours. Their conclusions and interpretation of the results also differ significantly from our own. Moreover, they do not attempt to prove the weak gravity conjecture using entropy inequalities, although they allude to this possibility.
{ "cite_N": [ "@cite_17" ], "mid": [ "2550488390" ], "abstract": [ "Motivated by the desire to improve our understanding of the Weak Gravity Conjecture, we compute the one-loop correction of charged particles to the geometry and entropy of extremal black holes in 4d. We find that fermion loops provide evidence for the necessity of the 'magnetic' WGC cutoff. Moreover, for a certain regime of black holes, we find entropy corrections with unusual area scaling. The corrections are reduced when supersymmetry is present, and disappear in @math supergravity. We further provide some speculative arguments that in a theory with only sub-extremal particles, classical Reissner-Nordstrom black holes actually possess an infinite microcanonical entropy, though only a finite amount is visible to an external observer." ] }
1706.08257
2731344966
We present a semiclassical proof of the weak gravity conjecture in @math spacetime dimensions for scalar matter gauged under a @math gauge group. We compute the non-perturbative macroscopic entropy of a scalar field in an extremal black hole background at the level of linearized backreaction on the metric. The scalar field is assumed to violate or saturate the weak gravity conjecture. The scalar contributes a logarithmic correction to the entropy in the black hole geometry that outgrows the classical contribution. We demonstrate that the entropy of the gauged scalar violates the generalized second law in the limit of large black hole charge. Our result suggests that entropy inequalities may directly discriminate between effective field theories that live in the landscape versus the swampland.
A separate application of the second law towards understanding the weak gravity conjecture is made in @cite_23 , which appeared during the preparation of this manuscript. However, their calculation is orthogonal to ours. Their paper argues for the weak gravity conjecture using a bound on relaxation rates of quasinormal modes of near-extremal black holes. Although related to the second law, the connection is indirect: the second law implies the relaxation rate bound, which in turn implies the weak gravity conjecture. In this paper, we present a more direct link between the second law and the weak gravity conjecture.
{ "cite_N": [ "@cite_23" ], "mid": [ "2616210988" ], "abstract": [ "The weak gravity conjecture suggests that, in a self-consistent theory of quantum gravity, the strength of gravity is bounded from above by the strengths of the various gauge forces in the theory. In particular, this intriguing conjecture asserts that in a theory describing a U(1) gauge field coupled consistently to gravity, there must exist a particle whose proper mass is bounded (in Planck units) by its charge: m/mP < q. This beautiful and remarkably compact conjecture has attracted the attention of physicists and mathematicians over the last decade. It should be emphasized, however, that despite the fact that there are numerous examples from field theory and string theory that support the conjecture, we still lack a general proof of its validity. In the present paper, we prove that the weak gravity conjecture (and, in particular, the mass–charge upper bound m/mP < q) can be inferred directly from Bekenstein’s generalized second law of thermodynamics, a law which is widely believed to reflect a fundamen..." ] }
1706.08115
2708101729
In the Steiner point removal (SPR) problem, we are given a weighted graph @math and a set of terminals @math of size @math . The objective is to find a minor @math of @math with only the terminals as its vertex set, such that the distance between the terminals will be preserved up to a small multiplicative distortion. Kamma, Krauthgamer and Nguyen [KKN15] used a ball-growing algorithm with exponential distributions to show that the distortion is at most @math . Cheung [Che17] improved the analysis of the same algorithm, bounding the distortion by @math . We improve the analysis of this ball-growing algorithm even further, bounding the distortion by @math .
Krauthgamer, Nguyen and Zondiner @cite_10 showed that if we allow the minor @math to contain at most @math Steiner vertices in addition to the terminals, then distortion @math can be achieved. They further showed that for graphs of constant treewidth, @math Steiner points suffice for distortion @math . Cheung, Goranci and Henzinger @cite_3 showed that allowing @math Steiner vertices, one can achieve distortion @math (in particular, distortion @math with @math Steiner vertices). For planar graphs, Cheung et al. achieved @math distortion with @math Steiner points.
{ "cite_N": [ "@cite_10", "@cite_3" ], "mid": [ "1968729061", "2963319367" ], "abstract": [ "We introduce the following notion of compressing an undirected graph @math with (nonnegative) edge-lengths and terminal vertices @math . A distance-preserving minor is a minor @math (of @math ) with possibly different edge-lengths, such that @math and the shortest-path distance between every pair of terminals is exactly the same in @math and in @math . We ask: what is the smallest @math such that every graph @math with @math terminals admits a distance-preserving minor @math with at most @math vertices? Simple analysis shows that @math . Our main result proves that @math , significantly improving on the trivial @math . Our lower bound holds even for planar graphs @math , in contrast to graphs @math of constant treewidth, for which we prove that @math vertices suffice.", "Given a graph where vertices are partitioned into k terminals and non-terminals, the goal is to compress the graph (i.e., reduce the number of non-terminals) using minor operations while preserving terminal distances approximately. The distortion of a compressed graph is the maximum multiplicative blow-up of distances between all pairs of terminals. We study the trade-off between the number of non-terminals and the distortion. This problem generalizes the Steiner Point Removal (SPR) problem, in which all non-terminals must be removed. We introduce a novel black-box reduction to convert any lower bound on distortion for the SPR problem into a super-linear lower bound on the number of non-terminals, with the same distortion, for our problem. This allows us to show that there exist graphs such that every minor with distortion less than 2 2.5 3 must have Omega(k^2) Omega(k^ 5 4 ) Omega(k^ 6 5 ) non-terminals, plus more trade-offs in between. The black-box reduction has an interesting consequence: if the tight lower bound on distortion for the SPR problem is super-constant, then allowing any O(k) non-terminals will not help improving the lower bound to a constant. We also build on the existing results on spanners, distance oracles and connected 0-extensions to show a number of upper bounds for general graphs, planar graphs, graphs that exclude a fixed minor and bounded treewidth graphs. Among others, we show that any graph admits a minor with O(log k) distortion and O(k^2) non-terminals, and any planar graph admits a minor with 1 + epsilon distortion and O((k epsilon)^2) non-terminals." ] }
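The ball-growing algorithm analysed above can be approximated, very loosely, by a single-round perturbed partition: give each terminal an exponential head start, assign every vertex to the terminal with the smallest shifted distance, and contract each cluster onto its terminal. This is a simplified sketch under our own assumptions (including the crossing-edge weighting), not the exact KKN procedure or its distortion analysis:

```python
import heapq
import random

def spr_minor(adj, terminals, scale=1.0, rng=None):
    """Illustrative Steiner point removal via an exponentially perturbed
    Voronoi partition.

    adj: {u: {v: weight}} undirected weighted graph; terminals: iterable of
    nodes. Each terminal t draws a head start r_t ~ Exp(1/scale); every vertex
    is claimed by the terminal minimizing d(t, v) - r_t (multi-source
    Dijkstra), which keeps each cluster connected. A crossing edge (u, v) then
    induces a terminal edge weighted by a real u-v path through both cluster
    centers, so minor distances never undershoot the original ones.
    """
    rng = rng or random.Random(0)
    terminals = set(terminals)
    dist, owner, true_d, pq = {}, {}, {}, []
    for t in terminals:
        r = rng.expovariate(1.0 / scale)
        dist[t] = -r
        heapq.heappush(pq, (-r, t, t, 0.0))
    done = set()
    while pq:
        d, v, t, td = heapq.heappop(pq)
        if v in done or (v in terminals and v != t):
            continue  # terminals always keep themselves
        done.add(v)
        owner[v], true_d[v] = t, td
        for w, wt in adj[v].items():
            nd = d + wt
            if w not in done and nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(pq, (nd, w, t, td + wt))
    # contract each cluster onto its terminal
    minor = {t: {} for t in terminals}
    for u in adj:
        for v, wt in adj[u].items():
            tu, tv = owner[u], owner[v]
            if tu != tv:
                w = true_d[u] + wt + true_d[v]
                if w < minor[tu].get(tv, float("inf")):
                    minor[tu][tv] = minor[tv][tu] = w
    return minor
```

On a path with terminals at both endpoints, for instance, every crossing edge certifies the exact terminal distance, so the sketch is distortion-free there; the interesting (and hard) part, which the works above analyse, is bounding the distortion on general graphs.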
1706.07972
2678049721
Most information spreading models consider that all individuals are identical psychologically. They ignore, for instance, the curiosity level of people, which may indicate that they can be influenced to seek information given their interest. For example, the game Pokemon GO spread rapidly because of the aroused curiosity among users. This paper proposes an information propagation model considering the curiosity level of each individual, which is a dynamical parameter that evolves over time. We evaluate the efficiency of our model in contrast to traditional information propagation models, like SIR or IC, and perform analysis on different types of artificial and real-world networks, like Google+, Facebook, and the United States roads map. We present a mean-field approach that reproduces with a good accuracy the evolution of macroscopic quantities, such as the density of stiflers, for the system's behavior with the curiosity. We also obtain an analytical solution of the mean-field equations that allows one to predict a transition from a phase where the information remains confined to a small number of users to a phase where it spreads over a large fraction of the population. The results indicate that the curiosity increases the information spreading in all networks as compared with the spreading without curiosity, and that this increase is larger in spatial networks than in social networks. When the curiosity is taken into account, the maximum number of informed individuals is reached close to the transition point. Since curious people are more open to new products, concepts, and ideas, this is an important factor to be considered in propagation modeling. Our results contribute to the understanding of the interplay between the diffusion process and dynamical heterogeneous transmission in social networks.
Several models have been proposed for modeling propagation dynamics on networks @cite_11 @cite_6 @cite_24 @cite_17 @cite_0 @cite_27 @cite_21 @cite_5 @cite_10 . These models make assumptions about the propagation process and the network structure, such as the degree correlation or distribution and classes of vertices, among others @cite_11 @cite_13 @cite_24 @cite_17 . The majority of spreading models consider only homogeneous or fixed transmission probabilities, i.e., each vertex has the same likelihood of transmitting the information @cite_24 @cite_13 @cite_17 . For instance, in @cite_0 the authors propose a discrete-time model for rumor propagation with heterogeneous transmission probabilities, but each vertex still has a constant probability over time. Other transmission models consider characteristics like short-term immunity or a steady active state @cite_16 , and spreaders' procrastination, modeled as heterogeneous transmission delays @cite_27 . There are also proposals that introduce a disbelieving or forgetting mechanism @cite_21 , loss of interest @cite_5 , apathy, and many others @cite_13 . Regarding the contagion dynamics, the diffusion process can be spontaneous or driven by different kinds of contact interaction or adaptation among the subjects @cite_10 .
{ "cite_N": [ "@cite_13", "@cite_11", "@cite_21", "@cite_6", "@cite_24", "@cite_0", "@cite_27", "@cite_5", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "1878853999", "2054476043", "2076367080", "", "2402962589", "2529088107", "2552421666", "1977878271", "", "2407477614", "" ], "abstract": [ "In recent years the research community has accumulated overwhelming evidence for the emergence of complex and heterogeneous connectivity patterns in a wide range of biological and sociotechnical systems. The complex properties of real-world networks have a profound impact on the behavior of equilibrium and nonequilibrium phenomena occurring in various systems, and the study of epidemic spreading is central to our understanding of the unfolding of dynamical processes in complex networks. The theoretical analysis of epidemic spreading in heterogeneous networks requires the development of novel analytical frameworks, and it has produced results of conceptual and practical relevance. A coherent and comprehensive review of the vast research activity concerning epidemic processes is presented, detailing the successful theoretical approaches as well as making their limits and assumptions clear. Physicists, mathematicians, epidemiologists, computer, and social scientists share a common interest in studying epidemic spreading and rely on similar models for the description of the diffusion of pathogens, knowledge, and innovation. For this reason, while focusing on the main results and the paradigmatic models in infectious disease modeling, the major results concerning generalized social contagion processes are also presented. Finally, the research activity at the forefront in the study of epidemic spreading in coevolving, coupled, and time-varying networks is reported.", "Online social networks play a major role in the spread of information at very large scale. A lot of effort have been made in order to understand this phenomenon, ranging from popular topic detection to information diffusion modeling, including influential spreaders identification. In this article, we present a survey of representative methods dealing with these issues and propose a taxonomy that summarizes the state-of-the-art. The objective is to provide a comprehensive analysis and guide of existing efforts around information diffusion in social networks. This survey is intended to help researchers in quickly understanding existing works and possible improvements to bring.", "Rumor is an important form of social interaction, and its spreading has a significant impact on people’s lives. In the age of Web, people are using electronic media more frequently than ever before, and blog has become one of the main online social interactions. Therefore, it is essential to learn the evolution mechanism of rumor spreading on homogeneous network in consideration of the forgetting mechanism of spreaders. Here we study a rumor spreading model on an online social blogging platform called LiveJournal. In comparison with the Susceptible–Infected–Removed (SIR) model, we provide a more detailed and realistic description of rumor spreading process with combination of forgetting mechanism and the SIR model of epidemics. A mathematical model has been presented and numerical solutions of the model were used to analyze the impact factors of rumor spreading, such as the average degree, forgetting rate and stifling rate. Our results show that there exist a threshold of the average degree of LiveJournal and above which the influence of rumor reaches saturation. Forgetting mechanism and stifling rate exert great influence on rumor spreading on online social network. The analysis results can guide people’s behaviors in view of the theoretical and practical aspects.", "", "", "Rumor models consider that information transmission occurs with the same probability between each pair of nodes. However, this assumption is not observed in social networks, which contain influential spreaders. To overcome this limitation, we assume that central individuals have a higher capacity to convince their neighbors than peripheral subjects. From extensive numerical simulations we find that spreading is improved in scale-free networks when the transmission probability is proportional to the PageRank, degree, and betweenness centrality. In addition, the results suggest that spreading can be controlled by adjusting the transmission probabilities of the most central nodes. Our results provide a conceptual framework for understanding the interplay between rumor propagation and heterogeneous transmission in social networks.", "While online social networks provide access to a massive information source, they also enable wide dissemination of false or inaccurate content. Undesirable results caused by misinformation propagation make its timely detection very imperative. An important question is how many monitors are required to detect all misinformation cascades at their early stage. To answer this question, we define a Time Constrained Misinformation Detection (TCMD) problem. As we have proved, there is no polynomial time (1 - e) ln n-approximation for the TCMD problem. The large number of independent misinformation cascades and heterogeneous delays make misinformation detection more challenging. Our approach includes stochastic programming and an O(ln(1 + n)) approximation algorithm for one-hop detection. This approach can provide a lower bound on the number of required monitors for general detection. Furthermore, we propose a network-compression based solution, whose effectiveness is validated by extensive experimental results.", "We introduce a general stochastic model for the spread of rumours, and derive mean-field equations that describe the dynamics of the model on complex social networks (in particular, those mediated by the Internet). We use analytical and numerical solutions of these equations to examine the threshold behaviour and dynamics of the model on several models of such networks: random graphs, uncorrelated scale-free networks and scale-free networks with assortative degree correlations. We show that in both homogeneous networks and random graphs the model exhibits a critical threshold in the rumour spreading rate below which a rumour cannot propagate in the system. In the case of scale-free networks, on the other hand, this threshold becomes vanishingly small in the limit of infinite system size. We find that the initial rate at which a rumour spreads is much higher in scale-free networks than in random graphs, and that the rate at which the spreading proceeds on scale-free networks is further increased when assortative degree correlations are introduced. The impact of degree correlations on the final fraction of nodes that ever hears a rumour, however, depends on the interplay between network topology and the rumour spreading rate. Our results show that scale-free social networks are prone to the spreading of rumours, just as they are to the spreading of infections. They are relevant to the spreading dynamics of chain emails, viral advertising and large-scale information dissemination algorithms on the Internet.", "", "The emergence of social media increases the need for the recognization of social influence mainly motivated by online advertising, political and health campaigns, recommendation systems, epidemiological study, etc. In spreading processes, it is possible to define the most central or influential vertices according to the network topology and dynamic. On the other hand, the least influential spreaders have been disregarded. This paper aims to maximize the mean of information propagation on the network by recognizing the non influential individuals by making them better spreader. Experimental results confirm that selecting 0.5 of least influential spreaders in three social networks (google+, hamsterster and advogato) and rewiring one connection to some important vertex, increase the propagation over the entire network.", "" ] }
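For concreteness, the family of SIR-style rumor models surveyed above, with heterogeneous per-vertex transmission probabilities, can be sketched in a few lines. The per-node `beta` below stands in for a curiosity-like parameter; this is our own simplification, not the paper's exact dynamics:

```python
import random

def spread(adj, beta, seed, stifle_p=0.2, rng=None):
    """Discrete-time rumor spreading with heterogeneous transmission.

    adj: {u: set of neighbours}; beta: {u: probability that u, once informed,
    passes the rumor to a given susceptible neighbour per step} -- this is
    where a per-individual, curiosity-like parameter enters. Spreaders turn
    into stiflers with probability stifle_p per step. Returns the set of
    nodes that ever heard the rumor.
    """
    rng = rng or random.Random(42)
    informed = {seed}
    spreaders = {seed}
    while spreaders:
        new_spreaders = set()
        for u in spreaders:
            for v in adj[u]:
                if v not in informed and rng.random() < beta[u]:
                    informed.add(v)
                    new_spreaders.add(v)
            if rng.random() < stifle_p:
                continue  # u becomes a stifler and stops transmitting
            new_spreaders.add(u)
        spreaders = new_spreaders
    return informed
```

Making `beta[u]` evolve over time, or tying it to centrality as in the works cited above, only changes how the dictionary is filled in, which is what makes this template a convenient baseline.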
1706.08259
2689993734
The execution logs that are used for process mining in practice are often obtained by querying an operational database and storing the result in a flat file. Consequently, the data processing power of the database system cannot be used anymore for this information, leading to constrained flexibility in the definition of mining patterns and limited execution performance in mining large logs. Enabling process mining directly on a database - instead of via intermediate storage in a flat file - therefore provides additional flexibility and efficiency. To help facilitate this ideal of in-database process mining, this paper formally defines a database operator that extracts the 'directly follows' relation from an operational database. This operator can both be used to do in-database process mining and to flexibly evaluate process mining related queries, such as: "which employee most frequently changes the 'amount' attribute of a case from one task to the next". We define the operator using the well-known relational algebra that forms the formal underpinning of relational databases. We formally prove equivalence properties of the operator that are useful for query optimization and present time-complexity properties of the operator. By doing so this paper formally defines the necessary relational algebraic elements of a 'directly follows' operator, which are required for implementation of such an operator in a DBMS.
By defining an operator for efficiently extracting the 'directly follows' relation between events from a database, this paper builds on a long history of work on optimizing database operations. In particular, it is related to papers that optimize database operations for data mining purposes @cite_16 @cite_6 , of which SAP HANA @cite_8 is a recent development. The idea of proposing domain-specific database operators has also been applied in other domains, such as spatio-temporal databases @cite_4 and scientific databases @cite_9 .
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_9", "@cite_6", "@cite_16" ], "mid": [ "", "1969877208", "2017369298", "2100406636", "2153028052" ], "abstract": [ "", "The SAP HANA database is positioned as the core of the SAP HANA Appliance to support complex business analytical processes in combination with transactionally consistent operational workloads. Within this paper, we outline the basic characteristics of the SAP HANA database, emphasizing the distinctive features that differentiate the SAP HANA database from other classical relational database management systems. On the technical side, the SAP HANA database consists of multiple data processing engines with a distributed query processing environment to provide the full spectrum of data processing -- from classical relational data supporting both row- and column-oriented physical representations in a hybrid engine, to graph and text processing for semi- and unstructured data management within the same system. From a more application-oriented perspective, we outline the specific support provided by the SAP HANA database of multiple domain-specific languages with a built-in set of natively implemented business functions. SQL -- as the lingua franca for relational database systems -- can no longer be considered to meet all requirements of modern applications, which demand the tight interaction with the data management layer. Therefore, the SAP HANA database permits the exchange of application semantics with the underlying data management platform that can be exploited to increase query expressiveness and to reduce the number of individual application-to-database round trips.", "In CIDR 2009, we presented a collection of requirements for SciDB, a DBMS that would meet the needs of scientific users. These included a nested-array data model, science-specific operations such as regrid, and support for uncertainty, lineage, and named versions. In this paper, we present an overview of SciDB's key features and outline a demonstration of the first version of SciDB on data and operations from one of our lighthouse users, the Large Synoptic Survey Telescope (LSST).", "The authors' perspective of database mining as the confluence of machine learning techniques and the performance emphasis of database technology is presented. Three classes of database mining problems involving classification, associations, and sequences are described. It is argued that these problems can be uniformly viewed as requiring discovery of rules embedded in massive amounts of data. A model and some basic operations for the process of rule discovery are described. It is shown how the database mining problems considered map to this model, and how they can be solved by using the basic operations proposed. An example is given of an algorithm for classification obtained by combining the basic rule discovery operations. This algorithm is efficient in discovering classification rules and has accuracy comparable to ID3, one of the best current classifiers.", "Mining information and knowledge from large databases has been recognized by many researchers as a key research topic in database systems and machine learning, and by many industrial companies as an important area with an opportunity of major revenues. Researchers in many different fields have shown great interest in data mining. Several emerging applications in information-providing services, such as data warehousing and online services over the Internet, also call for various data mining techniques to better understand user behavior, to improve the service provided and to increase business opportunities. In response to such a demand, this article provides a survey, from a database researcher's point of view, on the data mining techniques developed recently. A classification of the available data mining techniques is provided and a comparative study of such techniques is presented." ] }
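Outside of any DBMS, the 'directly follows' relation that the proposed operator extracts reduces to pairing consecutive events within each case. The sketch below, including the example query about who changes the 'amount' attribute between consecutive events, is our own in-memory illustration with hypothetical field names, not the paper's relational-algebra operator:

```python
from collections import Counter
from itertools import groupby

def directly_follows(events):
    """Yield pairs (e1, e2) of events where e2 directly follows e1 within the
    same case, ordered by timestamp. Each event is a dict with at least the
    keys 'case' and 'time' (all attribute names here are illustrative)."""
    ordered = sorted(events, key=lambda e: (e["case"], e["time"]))
    for _, case_events in groupby(ordered, key=lambda e: e["case"]):
        case_events = list(case_events)
        yield from zip(case_events, case_events[1:])

def top_amount_changers(events):
    """Example query: which resource most frequently changes the 'amount'
    attribute from one event to the directly following one."""
    changes = Counter(
        e2["resource"]
        for e1, e2 in directly_follows(events)
        if e1["amount"] != e2["amount"]
    )
    return changes.most_common()
```

A DBMS-resident operator would compute the same relation with sorting and grouping pushed into the query plan, which is precisely where the equivalence and complexity properties proved in the paper become useful.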
1706.08259
2689993734
The execution logs that are used for process mining in practice are often obtained by querying an operational database and storing the result in a flat file. Consequently, the data processing power of the database system cannot be used anymore for this information, leading to constrained flexibility in the definition of mining patterns and limited execution performance in mining large logs. Enabling process mining directly on a database - instead of via intermediate storage in a flat file - therefore provides additional flexibility and efficiency. To help facilitate this ideal of in-database process mining, this paper formally defines a database operator that extracts the 'directly follows' relation from an operational database. This operator can both be used to do in-database process mining and to flexibly evaluate process mining related queries, such as: "which employee most frequently changes the 'amount' attribute of a case from one task to the next". We define the operator using the well-known relational algebra that forms the formal underpinning of relational databases. We formally prove equivalence properties of the operator that are useful for query optimization and present time-complexity properties of the operator. By doing so this paper formally defines the necessary relational algebraic elements of a 'directly follows' operator, which are required for implementation of such an operator in a DBMS.
The primary goal of this paper in presenting a 'directly follows' operator is to support computationally efficient process mining on a database. Some papers deal with the computational complexity of the process mining algorithms themselves @cite_10 @cite_18 . Also, in a research agenda for process mining, the computational complexity and memory usage of process mining algorithms have been identified as important topics @cite_0 . However, this paper focuses on a step that precedes the process mining itself: flexibly querying a database to investigate which information is useful for process mining.
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_10" ], "mid": [ "1968928304", "", "2160967540" ], "abstract": [ "Enterprise information systems support and control operational business processes ranging from simple internal back-office process to complex interorganizational processes. Technologies such as workflow management (WFM), enterprise, application integration (EAI), enterprise resource planning (ERP), and web services (WS) typically focus on the realization of IT support rather than monitoring the operational business processes. Process mining aims at extracting information from event logs to capture the business process as it is being executed. In this paper, we put the topic of process mining into context, discuss the main issues around process mining, and finally we introduce the papers in this special issue.", "", "Process mining techniques often reveal that real-life processes are more variable than anticipated. Although declarative process models are more suitable for less structured processes, most discovery techniques generate conventional procedural models. In this paper, we focus on discovering Declare models based on event logs. A Declare model is composed of temporal constraints. Despite the suitability of declarative process models for less structured processes, their discovery is far from trivial. Even for smaller processes there are many potential constraints. Moreover, there may be many constraints that are trivially true and that do not characterize the process well. Naively checking all possible constraints is computationally intractable and may lead to models with an excessive number of constraints. Therefore, we have developed an Apriori algorithm to reduce the search space. Moreover, we use new metrics to prune the model. As a result, we can quickly generate understandable Declare models for real-life event logs." ] }
1706.08259
2689993734
The execution logs that are used for process mining in practice are often obtained by querying an operational database and storing the result in a flat file. Consequently, the data processing power of the database system cannot be used anymore for this information, leading to constrained flexibility in the definition of mining patterns and limited execution performance in mining large logs. Enabling process mining directly on a database - instead of via intermediate storage in a flat file - therefore provides additional flexibility and efficiency. To help facilitate this ideal of in-database process mining, this paper formally defines a database operator that extracts the 'directly follows' relation from an operational database. This operator can both be used to do in-database process mining and to flexibly evaluate process mining related queries, such as: "which employee most frequently changes the 'amount' attribute of a case from one task to the next". We define the operator using the well-known relational algebra that forms the formal underpinning of relational databases. We formally prove equivalence properties of the operator that are useful for query optimization and present time-complexity properties of the operator. By doing so this paper formally defines the necessary relational algebraic elements of a 'directly follows' operator, which are required for implementation of such an operator in a DBMS.
Further database-related work in the area of process mining concerns shaping data warehouses specifically for process mining @cite_2 @cite_15 . There is also work that focuses on extracting logs from a database @cite_11 .
{ "cite_N": [ "@cite_15", "@cite_11", "@cite_2" ], "mid": [ "1236618515", "2150614413", "2292138521" ], "abstract": [ "", "Since ERP systems, like SAP, support the backbone operations of companies, their transaction logs provide valuable insight into the companies' business processes. In SAP every transaction is stored and linked to relevant documents, organizational structures and other process-relevant information. However, the complexities and size of SAP logs make it hard to analyze the business processes directly with current process mining tools. This paper describes an ERP log analysis system that allows the users to define at a meta level how events, resources and their inter-relations are stored and transformed for use in process mining. We show how the system is applied to extract and transform related SAP transaction data for the ProM process mining tool.", "Work∞ow Logs provide a very valuable source of informa- tion about the actual execution of business processes in organizations. We propose to use data warehouse technology to exploit this information resources for organizational developments, monitoring and process im- provements. We introduce a general data warehouse design for work∞ow warehouses and discuss the results from an industrial case study showing the validity of this approach." ] }
1706.08312
2658186448
In this paper, we revisit the idea of putting interfaces to sleep during packet overhearing (i.e., when there are ongoing transmissions addressed to other stations) from a practical standpoint. To this aim, we perform a robust experimental characterisation of the timing and consumption behaviour of a commercial 802.11 card. We design μNap, a local standard-compliant energy-saving mechanism that leverages micro-sleep opportunities inherent to the CSMA operation of 802.11 WLANs. This mechanism is backwards compatible and incrementally deployable, and takes into account the timing limitations of existing hardware, as well as practical CSMA-related issues (e.g., capture effect). According to the performance assessment carried out through trace-based simulation, the use of our scheme would result in a 57% reduction in the time spent in overhearing, thus leading to an energy saving of 15.8% of the activity time.
Since its conception, 802.11 has attempted to minimise idle listening with the introduction of the PS mode, and some previous work followed this path. For instance, Liu and Zhong @cite_25 proposed @math PM to exploit short idle intervals ( @math 100 ms) without buffering or cooperation. @math PM predicts the arrival time of the next frame and puts the interface in PS mode while no arrivals are expected. However, this mechanism exhibits poor granularity (tens of ms) on existing hardware and leads to performance degradation due to frame loss; it is therefore only suitable for low-traffic scenarios.
{ "cite_N": [ "@cite_25" ], "mid": [ "2044993311" ], "abstract": [ "Wireless interfaces are major power consumers on mobile systems. Considerable research has improved the energy efficiency of elongated idle periods or created more elongated idle periods in wireless interfaces, often requiring cooperation from applications or the network infrastructure. With increasing wireless mobile data, it has become critical to improve the energy efficiency of active wireless interfaces. In this work, we present micro power management (μPM), a solution inspired by the mismatch between the high performance of state-of-the-art 802.11 interfaces and the modest data rate requirements by many popular network applications. μPM enables an 802.11 interface to enter unreachable power-saving modes even between MAC frames, without noticeable impact on the traffic flow. To control data loss, μPM leverages the retransmission mechanism in 802.11 and controls frame delay to adapt to demanded network throughput with minimal cooperation from the access point. Based on a theoretical framework, we employ simulation to systematically investigate an effective and efficient implementation of μPM. We have built a prototype μPM on an open-access wireless hardware platform. Measurements show that more than 30% power reduction for the wireless transceiver can be achieved with μPM for various applications without perceptible quality degradation." ] }
Others propose a PS-like operation. Jang @cite_21 described Snooze, an access point (AP)-directed micro-sleep scheduling and antenna configuration management method for 11n WLANs. As a consequence of its centralised design, the granularity of the so-called micro-sleeps in this approach is coarse (a few milliseconds), which casts doubt on its performance under heavy loads.
{ "cite_N": [ "@cite_21" ], "mid": [ "2107870544" ], "abstract": [ "Increasingly, mobile devices equipped with 802.11n interfaces are being used for a wide variety of applications including bandwidth-intensive HD video streaming. Recent work has shown that 802.11n interfaces are power-hungry, so energy management is an important challenge. 802.11n implementations have additional power states relative to earlier generations of 802.11 technology, so energy management challenges for 802.11n are qualitatively different compared to that faced by prior work. In this paper, we describe the design and implementation of Snooze, an energy management technique for 802.11n which uses two novel and inter-dependent mechanisms: client micro-sleeps and antenna configuration management. In Snooze, the AP monitors traffic on the WLAN and directs client sleep times and durations as well as antenna configurations, without significantly affecting throughput or delay. Snooze achieves 30%-85% energy-savings over CAM across workloads ranging from VoIP and video streaming to file downloads and chats." ] }
Zhang and Shin @cite_11 addressed the issue from a different standpoint with their Energy-Minimizing Idle Listening (E-MiLi). E-MiLi adaptively downclocks the card during idle periods and reverts to the full clock rate when an incoming frame is detected. Achieving this requires changes down to the physical layer (PHY) to enable detection while downclocked, which severely limits the potential gains. For instance, E-MiLi's downclocking factor of 16 would still yield a high power consumption in a modern card compared to its sleep state (see ).
{ "cite_N": [ "@cite_11" ], "mid": [ "2103281825" ], "abstract": [ "WiFi interface is known to be a primary energy consumer in mobile devices, and idle listening (IL) is the dominant source of energy consumption in WiFi. Most existing protocols, such as the 802.11 power-saving mode (PSM), attempt to reduce the time spent in IL by sleep scheduling. However, through an extensive analysis of real-world traffic, we found more than 60 percent of energy is consumed in IL, even with PSM enabled. To remedy this problem, we propose Energy-Minimizing idle Listening (E-MiLi) that reduces the power consumption in IL, given that the time spent in IL has already been optimized by sleep scheduling. Observing that radio power consumption decreases proportionally to its clock rate, E-MiLi adaptively downclocks the radio during IL, and reverts to full clock rate when an incoming packet is detected or a packet has to be transmitted. E-MiLi incorporates sampling rate invariant detection, ensuring accurate packet detection and address filtering even when the receiver's sampling clock rate is much lower than the signal bandwidth. Further, it employs an opportunistic downclocking mechanism to optimize the efficiency of switching clock rate, based on a simple interface to existing MAC-layer scheduling protocols. We have implemented E-MiLi on the USRP software radio platform. Our experimental evaluation shows that E-MiLi can detect packets with close to 100 percent accuracy even with downclocking by a factor of 16. When integrated with 802.11, E-MiLi can reduce energy consumption by around 44 percent for 92 percent of users in real-world wireless networks." ] }
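E-MiLi's premise that radio power scales roughly linearly with clock rate can be contrasted with a sleep state in a toy comparison. The power figures below are illustrative assumptions, not measurements from E-MiLi or any cited paper:

```python
# Illustrative power figures in mW; NOT measured values from any cited work.
P_IDLE = 800.0   # assumed idle-listening power at the full clock rate
P_SLEEP = 30.0   # assumed power draw in the sleep state

def downclocked_power(p_full, factor):
    """E-MiLi-style model: power scales roughly linearly with clock rate."""
    return p_full / factor

p_dc = downclocked_power(P_IDLE, 16)  # downclocking by 16x
print(p_dc, p_dc > P_SLEEP)  # 50.0 True: still above the assumed sleep power
```

Under these assumed figures, even a 16x downclock leaves the radio well above the sleep-state power, which is the comparison the paragraph above alludes to.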
Baiamonte and Chiasserini @cite_29 were the first to chase fine-grained micro-sleep opportunities during packet overhearing. They define the Energy-efficient Distributed Access (EDA) scheme, which uses the 802.11 virtual carrier-sensing mechanism for power-saving purposes. Basically, a STA dozes when the Network Allocation Vector (NAV) or the backoff counter are non-zero. Unfortunately, this work lacks an empirical characterisation of the timing constraints needed to design a practical mechanism. Moreover, dozing during the backoff window is not 802.11-fair: in 802.11, STAs must sense the channel every single time slot during the contention period and, if another STA seizes the channel first, the backoff timer must be stopped in order to receive the incoming frame and set the NAV to the proper value. The EDA scheme allows STAs to doze during the contention period and, therefore, breaks the CSMA operation.
{ "cite_N": [ "@cite_29" ], "mid": [ "1992067851" ], "abstract": [ "We focus on energy saving in 802.11-based WLANs. Previous work has shown that, on the one hand, 802.11 wireless interfaces consume a significant amount of energy, on the other hand the use of current power management schemes can severely degrade the QoS performance of several Internet-based applications. Furthermore, the energy spent by wireless devices may even increase when the standard 802.11 power-saving mode (PSM) is implemented. These facts suggest that other solutions to energy saving are highly needed. In this paper, we consider the 802.11 distributed access scheme and we propose a novel approach that enables a station to enter a low-power operational state during channel contention. More specifically, our technique exploits the virtual carrier sense mechanism and the backoff function specified in the IEEE 802.11 DCF, so that a station can dramatically reduce its energy consumption without significant degradation of the QoS performance. To efficiently implement our mechanism, a low-power state with negligible transition time into the active state must be identified. This can be any of the non-standard, low-power states defined by proprietary solutions in the current or next-generation products [7,15,22]. By using the network simulator ns2, we evaluate the performance improvement that is obtained when the proposed mechanism is implemented, against the results attained through the standard DCF. The results show that we can achieve a reduction in energy consumption as large as 80% and 28% under, respectively, UDP and TCP traffic." ] }
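The timing constraint that EDA glosses over can be made explicit with a small doze-decision helper: sleeping during an overheard NAV only pays off if the sleep and wake transitions fit inside it with time to spare. All values are illustrative microsecond figures, not measured card timings:

```python
def should_doze(nav_us, sleep_trans_us, wake_trans_us, min_gain_us=0.0):
    """Decide whether dozing during an overheard NAV is worthwhile.

    The interface must complete the sleep->wake round trip inside the NAV
    and still spend some minimum time actually asleep. All arguments are
    in microseconds and are illustrative assumptions, not measurements.
    """
    usable = nav_us - sleep_trans_us - wake_trans_us
    return usable > min_gain_us

# With an assumed 100 us round-trip overhead, a 1500 us NAV is worth dozing
# through, but a 90 us NAV is not.
print(should_doze(1500, 50, 50))  # True
print(should_doze(90, 50, 50))    # False
```

A standard-compliant mechanism would apply this check only to NAV periods set by decoded frames, never to the backoff slots where carrier sensing is mandatory.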
Balaji @cite_18 revisited the problem of packet overhearing with a scheme called Sleep during Neighbor-Addressed Frame (SNAF). With SNAF, a wireless card checks the destination MAC address of an incoming frame and switches to the sleep state for the payload duration if the frame is addressed to another host. The authors assume, though without any experimental validation, an instantaneous switch-off and a wake-up time equivalent to a Short Interframe Space (SIFS). To avert the risk that errors in the frame header lead to an incorrect NAV counter, they propose a new framing format with an additional FCS covering the MAC header only. This solution breaks compatibility and introduces extra overhead without supporting evidence.
{ "cite_N": [ "@cite_18" ], "mid": [ "2114816637" ], "abstract": [ "We propose a novel power saving strategy called Sleep during Neighbor-Addressed Frame (SNAF) for improving energy efficiency of IEEE 802.11 based wireless networks. IEEE 802.11 (Wi-Fi) radios that employ SNAF mode can turn OFF their wireless transceivers (i.e., put radios in sleep mode) within specific periods of neighbor-addressed frames while they are being received. The sleep duration of transceivers is easy to determine with no loss of packet's critical control information. The proposed SNAF mode operation does not have any negative effect on network throughput and even complements Power Saving Mode (PSM) available in 802.11 standard. We further propose GreenFrame format for next generation wireless networks. In experiments conducted in wireless LAN scenarios, we observed savings as much as 57.8% when we implement SNAF mode in 802.11 standard and up to 49.5% when we implement SNAF mode in 802.11 PSM." ] }
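SNAF's sleep window is essentially the airtime of a neighbor-addressed payload. A sketch of that computation, with an assumed PHY rate and deliberately ignoring preamble and MAC-header overheads (which the station must stay awake to receive):

```python
def payload_airtime_us(payload_bytes, phy_rate_mbps):
    """Airtime of the payload alone, in microseconds.

    Ignores PHY preamble, MAC header and FCS; bits / (Mbit/s) == us.
    """
    return payload_bytes * 8 / phy_rate_mbps

def snaf_sleep_us(dst_mac, own_mac, payload_bytes, phy_rate_mbps):
    """Sleep for the payload duration only if the frame is for another host."""
    if dst_mac == own_mac:
        return 0.0
    return payload_airtime_us(payload_bytes, phy_rate_mbps)

# A 1500-byte payload at an assumed 54 Mb/s rate gives roughly 222 us of
# potential sleep when the frame is addressed to someone else.
print(snaf_sleep_us("aa:bb", "cc:dd", 1500, 54))
```

Whether this window is exploitable in practice depends on the state-transition times that the μNap paper characterises experimentally.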
Building on the same idea, Sudarshan @cite_23 proposed Übersleep. This time, the authors do not consider it necessary to add any extra FCS, as they claim (without any specific basis) that such errors are very unlikely.
{ "cite_N": [ "@cite_23" ], "mid": [ "2093799297" ], "abstract": [ "Perpetually rising energy costs are forcing research communities to focus their efforts on a reduction of the global CO2 footprint, and since Information and Communication Technologies (ICT) account for a significant percentage of the overall global energy consumption, energy efficiency is becoming increasingly important in the operation of ICT infrastructure, especially in enterprise and data center networks. Simultaneously, the proliferation of devices such as smartphones and tablets, which have to use battery powered wireless radio adapters, and whose battery life is dependent on the power consumption of the radio adapters used for wireless communication indicates that efficient radio power saving strategies are needed to increase the battery life of such devices. This paper extends an earlier proposal by [1], and examines the performance of the same on IEEE 802.11n networks. It also extrapolates the results to estimate performance on 802.11ac WLANs, the planned successor to 802.11n WLAN in 5 GHz unlicensed band. The results indicate that this approach can be used to save over 59% of the RX energy (i.e., energy consumed by Wi-Fi radio while operating in receive mode) without any loss whatsoever in throughput unlike 802.11 power save mode and that this approach is completely backward compatible." ] }
More recently, Palacios-Trujillo modified DCF @cite_5 and PCF @cite_14 to exploit per-packet sleeps. They also applied these ideas to network coding @cite_3 and to a polling-based version of 11ac's TXOP PS mode @cite_10 . Unfortunately, all of these papers rely on the early studies mentioned above @cite_27 @cite_13 @cite_15 , which analysed old wireless cards that were unable to perform sub-millisecond transitions between states.
{ "cite_N": [ "@cite_13", "@cite_14", "@cite_3", "@cite_27", "@cite_5", "@cite_15", "@cite_10" ], "mid": [ "", "1965360590", "2266469152", "2083896716", "2062824808", "", "1540457649" ], "abstract": [ "", "The Point Coordination Function (PCF) of the IEEE 802.11 standard represents a well-known Medium Access Control (MAC) protocol providing Quality-of Service guarantees in Wireless Local Area Networks (WLANs). However, with the currently employed polling mechanism WLANs consume a significant amount of the energy resources from battery-powered user devices. To provide energy saving, an improved MAC protocol is presented in this paper, where bidirectional transmissions of fixed duration are incorporated into PCF in order to enable dynamic scheduling of real-time traffic. Based on this new strategy, wireless access points (APs) can estimate the proper duration of the Contention Free Period (CFP), in order to allow mobile stations to acknowledge any received data packet with a data packet equal to the received packet in size. Having this information, a mobile station, following the data exchange with the AP, can determine its wake-up timer and activate the sleep mode for the rest of the CFP interval. Comprehensive computer-based simulations demonstrate the feasibility of the proposed MAC improvements to achieve energy efficiency with negligible impact on packet delivery delay.", "In this paper, we propose an energy-efficient solution for implementing Network Coding (NC) in wireless networks based on the IEEE 802.11 Standard. The proposed mechanism, called GreenCode, allows nodes to duty cycle by switching to a low-power (sleep) state when they overhear coded packet transmissions that will not provide any new information for them. To facilitate the sleep operation, bidirectional transmissions involving both coded and non-coded packets between pairs of sender-receiver nodes are integrated into the operation of GreenCode. 
Both analytical and simulation results presented in this paper show the high energy efficiency of GreenCode with gains of up to 360% when compared to the existing mechanisms based on the IEEE 802.11 Standard.", "In July 1997 the Institute of Electrical and Electronics Engineers (IEEE) completed standard 802.11 for wireless local area networks (LANs). WaveLAN®-II, to be released early in 1998, offers compatibility with the IEEE 802.11 standard for operation in the 2.4-GHz band. It is the successor to WaveLAN-I, which has been in the market since 1991. As a next-generation wireless LAN product, WaveLAN-II has many enhancements to improve performance in various areas. An IEEE 802.11 direct sequence spread spectrum (DSSS) product, WaveLAN-II supports the basic bit rates of 1 and 2 Mb/s, but it can also provide enhanced bit rates as high as 10 Mb/s. This paper discusses various aspects of the system design of WaveLAN-II and characteristics of its antenna, radio-frequency (RF) front-end, digital signal processor (DSP) transceiver chip, and medium access controller (MAC) chip.", "The IEEE 802.11 Distributed Coordination Function (DCF) is the fundamental access method providing asynchronous best-effort services in Wireless Local Area Networks (WLAN). In this standard, the currently employed Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) and the Binary Exponential Backoff (BEB) mechanism represent major sources of energy consumption at both the access point and mobile stations of a WLAN. To improve energy efficiency in WLANs, this paper introduces an enhanced DCF protocol incorporating bidirectional transmissions in combination with sleep periods, called Bidirectional Sleep DCF (BDSL-DCF). By following this new scheme, every successfully established connection between a sender and its intended destination can be used to exchange data, hence reducing control overhead and channel contention. 
Furthermore, this functionality allows those mobile stations not participating in data transmission to activate the sleep mode to conserve energy. Simulation results show that BDSL-DCF can outperform DCF in terms of energy efficiency and throughput, with negligible impact on packet transmission delay.", "", "This paper analyzes the performance of a duty-cycled polling-based access mechanism that exploits the Transmission Opportunity Power Save Mode (TXOP PSM) defined in the IEEE 802.11ac to improve the energy efficiency of Wireless Local Area Networks (WLANs) based on the IEEE 802.11. The basic idea behind the proposed approach, named GreenPoll, is to enable contention free periods, based on polling with beacons, during which wireless stations can save energy by turning off their radio transceivers after exchanging data with the access point. The closed expression of energy efficiency of GreenPoll is formulated in this paper and is used to evaluate the performance of GreenPoll considering important parameters like the traffic load, packet length, data rate, and number of stations in the network. Both analytical and simulation results show the high energy efficiency of GreenPoll with gains of up to 330% and 110% when compared to the legacy Distributed Coordination Function (DCF) and the Point Coordination Function (PCF) defined in the IEEE 802.11, respectively." ] }
1706.07966
2731635521
Convolutional kernels are basic and vital components of deep Convolutional Neural Networks (CNN). In this paper, we equip convolutional kernels with shape attributes to generate the deep Irregular Convolutional Neural Networks (ICNN). Compared to traditional CNN applying regular convolutional kernels like @math , our approach trains irregular kernel shapes to better fit the geometric variations of input features. In other words, shapes are learnable parameters in addition to weights. The kernel shapes and weights are learned simultaneously during end-to-end training with the standard back-propagation algorithm. Experiments for semantic segmentation are implemented to validate the effectiveness of our proposed ICNN.
In recent years, CNNs @cite_9 have greatly accelerated progress in computer vision tasks, including image classification @cite_9 @cite_0 , object detection @cite_16 @cite_2 @cite_14 and semantic segmentation @cite_17 @cite_20 . A deep stack of convolutional layers is able to model sophisticated functions, and back-propagation @cite_18 @cite_3 makes it feasible to learn a large number of parameters.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_9", "@cite_3", "@cite_0", "@cite_2", "@cite_16", "@cite_20", "@cite_17" ], "mid": [ "2046084401", "2193145675", "2310919327", "2154579312", "", "2953106684", "", "2412782625", "2952632681" ], "abstract": [ "", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.", "", "We present an application of back-propagation networks to handwritten digit recognition. 
Minimal preprocessing of the data was required, but architecture of the network was highly constrained and specifically designed for the task. The input of the network consists of normalized images of isolated digits. The method has a 1% error rate and about a 9% reject rate on zipcode digits provided by the U.S. Postal Service.", "", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "", "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. 
First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. 
We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image." ] }
The Inception-VN series @cite_4 @cite_22 @cite_15 devotes considerable effort to studying kernel shapes. Inception V1 uses multi-scale kernels for scale invariance, Inception V2 replaces one larger kernel with several smaller kernels, and Inception V3 replaces one square kernel with a stack of one-dimensional kernels. As mentioned above, despite this variety of variants in scale and shape, the kernel shapes remain regular, fixed, and not optimized for the task.
{ "cite_N": [ "@cite_15", "@cite_4", "@cite_22" ], "mid": [ "2949605076", "2950179405", "2949117887" ], "abstract": [ "Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set demonstrate substantial gains over the state of the art: 21.2 top-1 and 5.6 top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5 top-5 error on the validation set (3.6 error on the test set) and 17.3 top-1 error on the validation set.", "We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9 top-5 validation error (and 4.8 test error), exceeding the accuracy of human raters." ] }
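As an illustrative aside (not code from any of the cited papers), the Inception V3 factorization mentioned in the related-work paragraph can be checked numerically: a 1×k convolution followed by a k×1 convolution is exactly equivalent to a single k×k convolution whose kernel is the outer product of the two one-dimensional kernels. Note this equivalence holds only for rank-1 (separable) kernels; for general kernels the stacking is a parameter-saving approximation.

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 'valid' 2-D cross-correlation (no padding, stride 1)."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 6))
row = rng.standard_normal(3)   # 1x3 kernel
col = rng.standard_normal(3)   # 3x1 kernel

# One 3x3 conv with the rank-1 outer-product kernel ...
full = conv2d_valid(x, np.outer(col, row))
# ... equals a 1x3 conv followed by a 3x1 conv.
stacked = conv2d_valid(conv2d_valid(x, row[None, :]), col[:, None])

assert np.allclose(full, stacked)
```

The same kernel footprint is covered with 6 weights (3 + 3) instead of 9, which is the parameter saving Inception V3 exploits.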
1706.07966
2731635521
Convolutional kernels are basic and vital components of deep Convolutional Neural Networks (CNN). In this paper, we equip convolutional kernels with shape attributes to generate the deep Irregular Convolutional Neural Networks (ICNN). Compared to traditional CNN applying regular convolutional kernels like @math , our approach trains irregular kernel shapes to better fit the geometric variations of input features. In other words, shapes are learnable parameters in addition to weights. The kernel shapes and weights are learned simultaneously during end-to-end training with the standard back-propagation algorithm. Experiments for semantic segmentation are implemented to validate the effectiveness of our proposed ICNN.
Dilated convolution @cite_10 expands the distance between every two adjacent weights in a kernel, producing a sparser sampling pattern. It aggregates large-scale contextual information without losing resolution and can be seen as a variant of the traditional compact kernel. Our proposed ICNN is a further generalization of dilated convolution.
{ "cite_N": [ "@cite_10" ], "mid": [ "2286929393" ], "abstract": [ "State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction and image classification are structurally different. In this work, we develop a new convolutional network module that is specifically designed for dense prediction. The presented module uses dilated convolutions to systematically aggregate multi-scale contextual information without losing resolution. The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. We show that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems. In addition, we examine the adaptation of image classification networks to dense prediction and show that simplifying the adapted network can increase accuracy." ] }
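To make "expanded distance between adjacent weights" concrete, here is a minimal 1-D sketch (illustrative only, not the @cite_10 implementation): with dilation d, a k-tap kernel covers a receptive field of d·(k−1)+1 input samples while keeping the same k weights.

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """Naive 1-D dilated cross-correlation, 'valid' mode.

    With dilation d, adjacent kernel taps are spaced d samples apart,
    so a k-tap kernel spans d*(k-1)+1 input samples without adding
    any weights.
    """
    k = len(w)
    span = dilation * (k - 1) + 1          # effective receptive field
    out_len = len(x) - span + 1
    out = np.empty(out_len)
    for i in range(out_len):
        taps = x[i : i + span : dilation]  # sample input at stride d
        out[i] = np.dot(taps, w)
    return out

x = np.arange(8, dtype=float)              # [0, 1, ..., 7]
w = np.array([1.0, 1.0, 1.0])              # same 3 weights in both cases

dense = dilated_conv1d(x, w, dilation=1)   # receptive field 3
sparse = dilated_conv1d(x, w, dilation=2)  # receptive field 5
```

Stacking layers with growing dilation is what lets the @cite_10 context module expand the receptive field exponentially at constant parameter count.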
1706.07966
2731635521
Convolutional kernels are basic and vital components of deep Convolutional Neural Networks (CNN). In this paper, we equip convolutional kernels with shape attributes to generate the deep Irregular Convolutional Neural Networks (ICNN). Compared to traditional CNN applying regular convolutional kernels like @math , our approach trains irregular kernel shapes to better fit the geometric variations of input features. In other words, shapes are learnable parameters in addition to weights. The kernel shapes and weights are learned simultaneously during end-to-end training with the standard back-propagation algorithm. Experiments for semantic segmentation are implemented to validate the effectiveness of our proposed ICNN.
Deformable Convolutional Networks (DCN) @cite_11 is a recently proposed work that also learns irregular kernels. DCN shares an idea with the Region Proposal Network @cite_2: both apply a normal convolution to the input feature and output new receptive fields for the subsequent operations at each position in the feature maps. That work rests on two main insights. First, kernel shapes vary across spatial positions of the same input feature, since different parts of the same object often exhibit different patterns. Second, kernel shapes vary across inputs at the same spatial position, so the network can learn a unique kernel shape for each input. Different from DCN, we do not adopt these two insights; instead, we vary shapes across input channels, reasoning that different feature maps carry different feature patterns, whereas along this dimension DCN keeps kernel shapes identical by default. Moreover, since our approach inserts shape attributes directly into the convolutional layer, it updates the shape parameters by back-propagation without any extra layers. We will show that our implementation of ICNN is more concise and can easily be applied to traditional convolutional layers.
{ "cite_N": [ "@cite_2", "@cite_11" ], "mid": [ "2953106684", "2950477723" ], "abstract": [ "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "Convolutional neural networks (CNNs) are inherently limited to model geometric transformations due to the fixed geometric structures in its building modules. In this work, we introduce two new modules to enhance the transformation modeling capacity of CNNs, namely, deformable convolution and deformable RoI pooling. Both are based on the idea of augmenting the spatial sampling locations in the modules with additional offsets and learning the offsets from target tasks, without additional supervision. The new modules can readily replace their plain counterparts in existing CNNs and can be easily trained end-to-end by standard back-propagation, giving rise to deformable convolutional networks. Extensive experiments validate the effectiveness of our approach on sophisticated vision tasks of object detection and semantic segmentation. The code would be released." ] }
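As a toy sketch of the irregular-kernel idea discussed above: hand-fixed integer tap offsets stand in for the learned shape parameters here (ICNN learns them by back-propagation, and DCN additionally predicts fractional offsets per position and uses bilinear interpolation; none of that machinery is reproduced in this illustration).

```python
import numpy as np

def irregular_conv_at(x, weights, offsets, r, c):
    """Apply one 'irregularly shaped' kernel at output position (r, c).

    weights: (k,) tap weights.
    offsets: k integer (dr, dc) tap positions relative to (r, c) --
             these play the role of the learnable shape attributes.
    Out-of-bounds taps read as zero (zero padding), so the kernel
    shape is free to extend beyond the feature-map border.
    """
    acc = 0.0
    for w, (dr, dc) in zip(weights, offsets):
        rr, cc = r + dr, c + dc
        if 0 <= rr < x.shape[0] and 0 <= cc < x.shape[1]:
            acc += w * x[rr, cc]
    return acc

x = np.arange(16.0).reshape(4, 4)
weights = np.array([1.0, 1.0, 1.0])
# An L-shaped 3-tap "kernel" instead of a dense 3x3 grid.
offsets = [(0, 0), (0, 2), (2, 0)]

y = irregular_conv_at(x, weights, offsets, 1, 1)  # x[1,1] + x[1,3] + x[3,1]
```

In a trained ICNN the offsets would differ per input channel, which is exactly the dimension along which, as noted above, DCN keeps kernel shapes fixed by default.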
1706.07748
2675516979
Security exploits can include cyber threats such as computer programs that can disturb the normal behavior of computer systems (viruses), unsolicited e-mail (spam), malicious software (malware), monitoring software (spyware), attempting to make computer resources unavailable to their intended users (Distributed Denial-of-Service or DDoS attack), the social engineering, and online identity theft (phishing). One such cyber threat, which is particularly dangerous to computer users is phishing. Phishing is well known as online identity theft, which targets to steal victims' sensitive information such as username, password and online banking details. This paper focuses on designing an innovative and gamified approach to educate individuals about phishing attacks. The study asks how one can integrate self-efficacy, which has a co-relation with the user's knowledge, into an anti-phishing educational game to thwart phishing attacks? One of the main reasons would appear to be a lack of user knowledge to prevent from phishing attacks. Therefore, this research investigates the elements that influence (in this case, either conceptual or procedural knowledge or their interaction effect) and then integrate them into an anti-phishing educational game to enhance people's phishing prevention behaviour through their motivation.
The design of serious games is a double-edged sword. When their power is properly harnessed for good purposes, they have tremendous potential to improve human performance; when exploited for malicious purposes, they can pose serious threats to individuals and society. Designing educational games is therefore not an easy task, and there are no all-purpose solutions @cite_19 . The notion that game-based education offers the opportunity to embed learning in a natural environment has repeatedly emerged in the research literature @cite_45 @cite_44 @cite_22 @cite_15 @cite_43 @cite_14 @cite_24 @cite_38 .
{ "cite_N": [ "@cite_38", "@cite_14", "@cite_22", "@cite_44", "@cite_19", "@cite_43", "@cite_45", "@cite_24", "@cite_15" ], "mid": [ "2080292343", "", "2141667975", "2278316643", "", "2136136174", "1532672918", "1980850924", "1970174926" ], "abstract": [ "Designs for CSCL (Computer-Supported Collaborative Learning) applications usually presume a desktop or laptop computer. Yet future classrooms are likely to be organized around Wireless Internet Learning Devices (WILD) that resemble graphing calculators, Palm, or Pocket-PC handhelds, connected by short-range wireless networking. WILD learning will have physical affordances that are different from today’s computer lab, and different from classrooms with 5 students per computer. These differing affordances may lead to learning activities that deviate significantly from today’s images of K-12 CSCL activities. Drawing upon research across a range of recent handheld projects, we suggest application-level affordances around which WILD-based CSCL has begun to organize: (a) augmenting physical space, (b) leveraging topological space, (c) aggregating coherently across all students, (d) conducting the class, and (e) act becomes artifact. We speculate on how CSCL research may consequently evolve towards a focus on kinds of systemic coupling in an augmented activity space.", "", "Game based education is becoming more and more popular. This is because game based education provides an opportunity for learning in a natural environment. Phishing is an online identity theft, which attempts to steal sensitive information such as username, password, and online banking details from its victims. To prevent this, phishing awareness needs to be considered. This research aims to develop a game design framework, which enhances user avoidance behaviour through motivation to protect users from phishing attacks. In order to do this, a theoretical model derived from Technology Thread Avoidance Theory (TTAT) was developed and used in the game design framework (Liang & Xue, 2010). A survey study was undertaken with 150 regular computer users to elicit feedback through a questionnaire. The study findings revealed that perceived threat, safeguard effectiveness, safeguard cost, self-efficacy, perceived severity, and perceived susceptibility elements should be addressed in the game design framework for computer users to avoid phishing attacks. Furthermore, we argue that this game design framework can be used not only for preventing phishing attacks but also for preventing other malicious IT attacks such as viruses, malware, botnets and spyware.", "Phishing is an online identity theft that aims to steal sensitive information such as username, password and online banking details from its victims. Phishing education needs to be considered as a means to combat this threat. This paper reports on a design and development of a mobile game prototype as an educational tool helping computer users to protect themselves against phishing attacks. The elements of a game design framework for avoiding phishing attacks were used to address the game design issues. Our mobile game design aimed to enhance the users' avoidance behaviour through motivation to protect themselves against phishing threats. A think-aloud study was conducted, along with a pre- and post-test, to assess the game design framework though the developed mobile game prototype. The study results showed a significant improvement of participants' phishing avoidance behaviour in their post-test assessment. Furthermore, the study findings suggest that participants' threat perception, safeguard effectiveness, self-efficacy, perceived severity and perceived susceptibility elements positively impact threat avoidance behaviour, whereas safeguard cost had a negative impact on it. We developed a mobile game prototype to thwart phishing attacks. Investigated what key elements should be addressed in the game. The elements derived from a game design framework were incorporated. Empirically investigated users' phishing threat avoidance behaviour.", "", "In this paper we describe the design and evaluation of Anti-Phishing Phil, an online game that teaches users good habits to help them avoid phishing attacks. We used learning science principles to design and iteratively refine the game. We evaluated the game through a user study: participants were tested on their ability to identify fraudulent web sites before and after spending 15 minutes engaged in one of three anti-phishing training activities (playing the game, reading an anti-phishing tutorial we created based on the game, or reading existing online training materials). We found that the participants who played the game were better able to identify fraudulent web sites compared to the participants in other conditions. We attribute these effects to both the content of the training messages presented in the game as well as the presentation of these materials in an interactive game format. Our results confirm that games can be an effective way of educating people about phishing and other security attacks.", "According to many dental professionals, the decay process resulting from the accumulation of sugar on teeth is a very difficult concept for young children to learn. Playing the dental hygiene game with Thinking Tags not only brings context into the classroom, but also allows children to work with digital manipulatives that provide rich personal experiences and instant feedback. Instead of watching a demonstration of the accumulation of sugars on a computer screen, or being told about dental health, this simulation allows pre-school children to experience improving or decaying dental health without any real adverse health effects. Small, wearable, microprocessor-driven Tags were brought into the kindergarten classroom to simulate the decay process, providing information about sugars in foods and creating a discussion about teeth. Preliminary analyses suggest that this program was effective and enthusiastically received by this age group.", "Good computer and video games like System Shock 2, Deus Ex, Pikmin, Rise of Nations, Neverwinter Nights, and Xenosaga: Episode 1 are learning machines. They get themselves learned and learned well, so that they get played long and hard by a great many people. This is how they and their designers survive and perpetuate themselves. If a game cannot be learned and even mastered at a certain level, it won't get played by enough people, and the company that makes it will go broke. Good learning in games is a capitalist-driven Darwinian process of selection of the fittest. Of course, game designers could have solved their learning problems by making games shorter and easier, by dumbing them down, so to speak. But most gamers don't want short and easy games. Thus, designers face and largely solve an intriguing educational dilemma, one also faced by schools and workplaces: how to get people, often young people, to learn and master something that is long and challenging--and enjoy it, to boot.", "Phishing is an online identity theft, which aims to steal confidential information such as username, password and online banking details from its victims. To prevent this, anti-phishing education needs to be considered. Therefore, the research reported in this paper examines whether conceptual knowledge or procedural knowledge has a positive effect on computer users' self-efficacy to thwart phishing threats. In order to accomplish this, a theoretical model based on Liang and Xue's (2010) Technology Threat Avoidance Theory (TTAT) has been proposed and evaluated. Data was collected from 161 regular computer users to elicit their feedback through an online questionnaire. The study findings revealed that the interaction effect of conceptual and procedural knowledge positively impacts on computer users' self-efficacy, which enhances their phishing threat avoidance behaviour. It can therefore be argued that well-designed end-user security education contributes to thwart phishing threats." ] }
1706.07748
2675516979
Security exploits can include cyber threats such as computer programs that can disturb the normal behavior of computer systems (viruses), unsolicited e-mail (spam), malicious software (malware), monitoring software (spyware), attempting to make computer resources unavailable to their intended users (Distributed Denial-of-Service or DDoS attack), the social engineering, and online identity theft (phishing). One such cyber threat, which is particularly dangerous to computer users is phishing. Phishing is well known as online identity theft, which targets to steal victims' sensitive information such as username, password and online banking details. This paper focuses on designing an innovative and gamified approach to educate individuals about phishing attacks. The study asks how one can integrate self-efficacy, which has a co-relation with the user's knowledge, into an anti-phishing educational game to thwart phishing attacks? One of the main reasons would appear to be a lack of user knowledge to prevent from phishing attacks. Therefore, this research investigates the elements that influence (in this case, either conceptual or procedural knowledge or their interaction effect) and then integrate them into an anti-phishing educational game to enhance people's phishing prevention behaviour through their motivation.
Even though some usability experts claim that user education and training do not work @cite_43 , other researchers have shown that well-designed end-user education can be a recommended approach to combating cyber-attacks such as MMFs @cite_27 @cite_32 @cite_35 @cite_34 @cite_6 . In line with Herley @cite_1 , Kirlappos and Sasse @cite_32 and other researchers argue that current security education on malicious IT threats offers little protection to end users who access potentially malicious websites @cite_35 @cite_43 @cite_15 .
{ "cite_N": [ "@cite_35", "@cite_1", "@cite_32", "@cite_6", "@cite_43", "@cite_27", "@cite_15", "@cite_34" ], "mid": [ "", "2045591401", "2018398218", "", "2136136174", "", "1970174926", "2160343382" ], "abstract": [ "", "It is often suggested that users are hopelessly lazy and unmotivated on security questions. They chose weak passwords, ignore security warnings, and are oblivious to certificates errors. We argue that users' rejection of the security advice they receive is entirely rational from an economic perspective. The advice offers to shield them from the direct costs of attacks, but burdens them with far greater indirect costs in the form of effort. Looking at various examples of security advice we find that the advice is complex and growing, but the benefit is largely speculative or moot. For example, much of the advice concerning passwords is outdated and does little to address actual treats, and fully 100 of certificate error warnings appear to be false positives. Further, if users spent even a minute a day reading URLs to avoid phishing, the cost (in terms of user time) would be two orders of magnitude greater than all phishing losses. Thus we find that most security advice simply offers a poor cost-benefit tradeoff to users and is rejected. Security advice is a daily burden, applied to the whole population, while an upper bound on the benefit is the harm suffered by the fraction that become victims annually. When that fraction is small, designing security advice that is beneficial is very hard. For example, it makes little sense to burden all users with a daily task to spare 0.01 of them a modest annual pain.", "When tempted by a good deal online, users don't focus on security warnings; rather, they look for signs to confirm a site's trustworthiness. User education needs to focus on challenging and correcting the misconceptions that guide current behavior.", "", "In this paper we describe the design and evaluation of Anti-Phishing Phil, an online game that teaches users good habits to help them avoid phishing attacks. We used learning science principles to design and iteratively refine the game. We evaluated the game through a user study: participants were tested on their ability to identify fraudulent web sites before and after spending 15 minutes engaged in one of three anti-phishing training activities (playing the game, reading an anti-phishing tutorial we created based on the game, or reading existing online training materials). We found that the participants who played the game were better able to identify fraudulent web sites compared to the participants in other conditions. We attribute these effects to both the content of the training messages presented in the game as well as the presentation of these materials in an interactive game format. Our results confirm that games can be an effective way of educating people about phishing and other security attacks.", "", "Phishing is an online identity theft, which aims to steal confidential information such as username, password and online banking details from its victims. To prevent this, anti-phishing education needs to be considered. Therefore, the research reported in this paper examines whether conceptual knowledge or procedural knowledge has a positive effect on computer users' self-efficacy to thwart phishing threats. In order to accomplish this, a theoretical model based on Liang and Xue's (2010) Technology Threat Avoidance Theory (TTAT) has been proposed and evaluated. Data was collected from 161 regular computer users to elicit their feedback through an online questionnaire. The study findings revealed that the interaction effect of conceptual and procedural knowledge positively impacts on computer users' self-efficacy, which enhances their phishing threat avoidance behaviour. It can therefore be argued that well-designed end-user security education contributes to thwart phishing threats.", "We evaluate Website authentication measures that are designed to protect users from man-in-the-middle, 'phishing', and other site forgery attacks. We asked 67 bank customers to conduct common online banking tasks. Each time they logged in, we presented increasingly alarming clues that their connection was insecure. First, we removed HTTPS indicators. Next, we removed the participant's site-authentication image--the customer-selected image that many Websites now expect their users to verify before entering their passwords. Finally, we replaced the bank's password-entry page with a warning page. After each clue, we determined whether participants entered their passwords or withheld them. We also investigate how a study's design affects participant behavior: we asked some participants to play a role and others to use their own accounts and passwords. We also presented some participants with security-focused instructions. We confirm prior findings that users ignore HTTPS indicators: no participants withheld their passwords when these indicators were removed. We present the first empirical investigation of site-authentication images, and we find them to be ineffective: even when we removed them, 23 of the 25 (92 ) participants who used their own accounts entered their passwords. We also contribute the first empirical evidence that role playing affects participants' security behavior: role-playing participants behaved significantly less securely than those using their own passwords." ] }
1706.07748
2675516979
Security exploits can include cyber threats such as computer programs that can disturb the normal behavior of computer systems (viruses), unsolicited e-mail (spam), malicious software (malware), monitoring software (spyware), attempting to make computer resources unavailable to their intended users (Distributed Denial-of-Service or DDoS attack), the social engineering, and online identity theft (phishing). One such cyber threat, which is particularly dangerous to computer users is phishing. Phishing is well known as online identity theft, which targets to steal victims' sensitive information such as username, password and online banking details. This paper focuses on designing an innovative and gamified approach to educate individuals about phishing attacks. The study asks how one can integrate self-efficacy, which has a co-relation with the user's knowledge, into an anti-phishing educational game to thwart phishing attacks? One of the main reasons would appear to be a lack of user knowledge to prevent from phishing attacks. Therefore, this research investigates the elements that influence (in this case, either conceptual or procedural knowledge or their interaction effect) and then integrate them into an anti-phishing educational game to enhance people's phishing prevention behaviour through their motivation.
Another reason for the ineffectiveness of current security education on phishing prevention is that security education providers assume users are keen to avoid risks and thus likely to adopt behaviours that might protect them. Kirlappos and Sasse @cite_32 argued that security education should address the drivers of end-user behaviour rather than merely warning users of dangers. Therefore, well-designed (i.e. user-centred) security education should develop threat perception, making users aware that such threats are present in cyberspace. It should also encourage users to strengthen their avoidance behaviour through motivation to protect themselves from malicious IT threats.
{ "cite_N": [ "@cite_32" ], "mid": [ "2018398218" ], "abstract": [ "When tempted by a good deal online, users don't focus on security warnings; rather, they look for signs to confirm a site's trustworthiness. User education needs to focus on challenging and correcting the misconceptions that guide current behavior." ] }
1706.07748
2675516979
Security exploits can include cyber threats such as computer programs that can disturb the normal behavior of computer systems (viruses), unsolicited e-mail (spam), malicious software (malware), monitoring software (spyware), attempting to make computer resources unavailable to their intended users (Distributed Denial-of-Service or DDoS attack), the social engineering, and online identity theft (phishing). One such cyber threat, which is particularly dangerous to computer users is phishing. Phishing is well known as online identity theft, which targets to steal victims' sensitive information such as username, password and online banking details. This paper focuses on designing an innovative and gamified approach to educate individuals about phishing attacks. The study asks how one can integrate self-efficacy, which has a co-relation with the user's knowledge, into an anti-phishing educational game to thwart phishing attacks? One of the main reasons would appear to be a lack of user knowledge to prevent from phishing attacks. Therefore, this research investigates the elements that influence (in this case, either conceptual or procedural knowledge or their interaction effect) and then integrate them into an anti-phishing educational game to enhance people's phishing prevention behaviour through their motivation.
The literature reveals that well-designed educational games can support learning even when used without assistance. The results reported for the Anti-Phishing Phil game developed by Sheng et al. @cite_43 confirm that games educate people about phishing and other security attacks more effectively than other educational approaches, such as reading an anti-phishing tutorial or existing online training materials. @cite_44 developed a mobile game prototype to teach people how to thwart phishing attacks. Their mobile game design aimed to enhance users' avoidance behaviour through motivation to protect themselves against phishing threats. The mobile game proved somewhat effective in teaching people how to thwart phishing attacks, as the study results showed a significant improvement in participants' phishing threat avoidance behaviour in the post-test assessment. Furthermore, the study findings suggested that participants' threat perception, safeguard effectiveness, self-efficacy, perceived severity and perceived susceptibility positively impact threat avoidance behaviour, whereas safeguard cost had a negative impact on it.
{ "cite_N": [ "@cite_44", "@cite_43" ], "mid": [ "2278316643", "2136136174" ], "abstract": [ "Phishing is an online identity theft that aims to steal sensitive information such as username, password and online banking details from its victims. Phishing education needs to be considered as a means to combat this threat. This paper reports on a design and development of a mobile game prototype as an educational tool helping computer users to protect themselves against phishing attacks. The elements of a game design framework for avoiding phishing attacks were used to address the game design issues. Our mobile game design aimed to enhance the users' avoidance behaviour through motivation to protect themselves against phishing threats. A think-aloud study was conducted, along with a pre- and post-test, to assess the game design framework though the developed mobile game prototype. The study results showed a significant improvement of participants' phishing avoidance behaviour in their post-test assessment. Furthermore, the study findings suggest that participants' threat perception, safeguard effectiveness, self-efficacy, perceived severity and perceived susceptibility elements positively impact threat avoidance behaviour, whereas safeguard cost had a negative impact on it. We developed a mobile game prototype to thwart phishing attacks.Investigated what key elements should be addressed in the game.The elements derived from a game design framework were incorporated.Empirically investigated users' phishing threat avoidance behaviour.", "In this paper we describe the design and evaluation of Anti-Phishing Phil, an online game that teaches users good habits to help them avoid phishing attacks. We used learning science principles to design and iteratively refine the game. 
We evaluated the game through a user study: participants were tested on their ability to identify fraudulent web sites before and after spending 15 minutes engaged in one of three anti-phishing training activities (playing the game, reading an anti-phishing tutorial we created based on the game, or reading existing online training materials). We found that the participants who played the game were better able to identify fraudulent web sites compared to the participants in other conditions. We attribute these effects to both the content of the training messages presented in the game as well as the presentation of these materials in an interactive game format. Our results confirm that games can be an effective way of educating people about phishing and other security attacks." ] }
1706.07748
2675516979
Security exploits can include cyber threats such as computer programs that disturb the normal behavior of computer systems (viruses), unsolicited e-mail (spam), malicious software (malware), monitoring software (spyware), attempts to make computer resources unavailable to their intended users (Distributed Denial-of-Service, or DDoS, attacks), social engineering, and online identity theft (phishing). One such cyber threat that is particularly dangerous to computer users is phishing. Phishing is well known as a form of online identity theft that aims to steal victims' sensitive information, such as usernames, passwords and online banking details. This paper focuses on designing an innovative and gamified approach to educating individuals about phishing attacks. The study asks how one can integrate self-efficacy, which correlates with the user's knowledge, into an anti-phishing educational game to thwart phishing attacks. One of the main reasons for falling victim to such attacks would appear to be a lack of user knowledge of how to prevent them. Therefore, this research investigates the elements that influence self-efficacy (in this case, either conceptual or procedural knowledge or their interaction effect) and then integrates them into an anti-phishing educational game to enhance people's phishing prevention behaviour through their motivation.
Tseng, et al @cite_2 also developed a game to teach users about phishing based on website content. The authors proposed a phishing attack frame hierarchy to describe the stereotype features of phishing attack techniques. The inheritance and instantiation properties of the frame model allowed them to extend the original phishing pages to increase the game content. Finally, the authors developed an anti-phishing educational game to evaluate the effectiveness of the proposed frame hierarchy. The evaluation results showed that most of the lecturers and experts were satisfied with the proposed system.
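The key mechanism here is that a child frame inherits stereotype slots from its parent and overrides or adds slots on instantiation, so new phishing-page variants can be generated without restating shared features. A minimal sketch of that lookup rule follows; the class, slot names, and values are illustrative assumptions, not the cited system's actual representation.

```python
class Frame:
    """Minimal frame with inheritance: a slot not set locally is
    looked up in the parent frame, mirroring how a frame hierarchy
    lets specialized phishing-page frames reuse stereotype features."""

    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent
        self.slots = slots

    def get(self, slot):
        # Local slots shadow inherited ones (instantiation overrides).
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        raise KeyError(slot)

# Stereotype features shared by all phishing pages (hypothetical slots) ...
phishing = Frame("phishing-page",
                 lure="urgent-account-notice",
                 target="credentials")
# ... specialized by instantiation for one particular technique.
fake_login = Frame("fake-login-form", parent=phishing,
                   technique="look-alike-domain")

assert fake_login.get("technique") == "look-alike-domain"  # local slot
assert fake_login.get("lure") == "urgent-account-notice"   # inherited
```

Generating new game content then amounts to instantiating further child frames that override only the slots that differ.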
{ "cite_N": [ "@cite_2" ], "mid": [ "2056855061" ], "abstract": [ "Low motivation is the first barrier when educating end-users about anti-phishing. Game-based learning utilizing interactive activities and interesting multimedia is a suitable solution for motivating end-users. To satisfy pedagogical needs, it is important to generate numerous game contents full of variety. However, it is costly and time consuming to generate new phishing page. Due to the complexity of combining different phishing attack technique with various deceiving scenario, the phishing pages modeling issue arises when automatically generate content of anti-phishing education game. In this paper, we propose the phishing attack frame hierarchy to describe stereotype features of phishing attack techniques. The inheritance and instantiation properties of the frame model allow us extend the original phishing pages to increase game contents. To evaluate the effectiveness of proposed frame hierarchy, we have developed an anti-phishing education game. The evaluation results show that most of lecturers and experts satisfy with our proposed system." ] }
1706.07748
2675516979
Security exploits can include cyber threats such as computer programs that disturb the normal behavior of computer systems (viruses), unsolicited e-mail (spam), malicious software (malware), monitoring software (spyware), attempts to make computer resources unavailable to their intended users (Distributed Denial-of-Service, or DDoS, attacks), social engineering, and online identity theft (phishing). One such cyber threat that is particularly dangerous to computer users is phishing. Phishing is well known as a form of online identity theft that aims to steal victims' sensitive information, such as usernames, passwords and online banking details. This paper focuses on designing an innovative and gamified approach to educating individuals about phishing attacks. The study asks how one can integrate self-efficacy, which correlates with the user's knowledge, into an anti-phishing educational game to thwart phishing attacks. One of the main reasons for falling victim to such attacks would appear to be a lack of user knowledge of how to prevent them. Therefore, this research investigates the elements that influence self-efficacy (in this case, either conceptual or procedural knowledge or their interaction effect) and then integrates them into an anti-phishing educational game to enhance people's phishing prevention behaviour through their motivation.
Previous research has revealed that technology alone is insufficient to address critical IT security issues. So far, there has been little work on end users' security behaviour and on preventing attacks, which is imperative for coping with MMFs such as phishing attacks @cite_44 @cite_15 @cite_22 @cite_36 @cite_12 @cite_37 @cite_50 . Many discussions have ended with the conclusion that "if we could only remove the user from the system, we would be able to make it secure" @cite_28 . Where it is not possible to completely eliminate the user, for example in home use, the best possible approach to computer security is to educate the user in security prevention @cite_44 @cite_15 @cite_22 @cite_32 @cite_46 . Previous research has revealed that well-designed user security education can be effective @cite_43 @cite_35 @cite_33 @cite_41 . Such education could take the form of web-based training materials, contextual training, or embedded training to improve users' ability to avoid phishing attacks. One objective of our research is to find effective ways to educate people to identify and avoid MMFs such as phishing websites.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_22", "@cite_33", "@cite_36", "@cite_28", "@cite_46", "@cite_41", "@cite_32", "@cite_44", "@cite_43", "@cite_50", "@cite_15", "@cite_12" ], "mid": [ "", "", "2141667975", "1983581110", "", "2268018836", "", "", "2018398218", "2278316643", "2136136174", "2049370737", "1970174926", "1863383349" ], "abstract": [ "", "", "Game based education is becoming more and more popular. This is because game based education provides an opportunity for learning in a natural environment. Phishing is an online identity theft, which attempts to steal sensitive information such as username, password, and online banking details from its victims. To prevent this, phishing awareness needs to be considered. This research aims to develop a game design framework, which enhances user avoidance behaviour through motivation to protect users from phishing attacks. In order to do this, a theoretical model derived from Technology Thread Avoidance Theory (TTAT) was developed and used in the game design framework (Liang & Xue, 2010). A survey study was undertaken with 150 regular computer users to elicit feedback through a questionnaire. The study findings revealed that perceived threat, safeguard effectiveness, safeguard cost, self-efficacy, perceived severity, and perceived susceptibility elements should be addressed in the game design framework for computer users to avoid phishing attacks. Furthermore, we argue that this game design framework can be used not only for preventing phishing attacks but also for preventing other malicious IT attacks such as viruses, malware, botnets and spyware.", "Phishing attacks, in which criminals lure Internet users to websites that impersonate legitimate sites, are occurring with increasing frequency and are causing considerable harm to victims. In this paper we describe the design and evaluation of an embedded training email system that teaches people about phishing during their normal use of email. 
We conducted lab experiments contrasting the effectiveness of standard security notices about phishing with two embedded training designs we developed. We found that embedded training works better than the current practice of sending security notices. We also derived sound design principles for embedded training systems.", "", "Discussions in the security community often tend to end in agreement that the only way to really address many of our current problems is 'user education'. User education has, in many respects, become the default way to address the fact that our security environment is becoming too complex for us to secure with applications. But is user education the way forward or is it merely a term used to avoid admitting our failure to create a secure environment for our users customers? Is there any reason to expect that the user would be interested in educating themselves? Is there any research indicating that user education actually helps? This paper aims to discuss two questions. First, should we expect our users to be interested in education? After all, they pay us for taking care of security, so that they can get on with their real work. Second, do we have any evidence that user education leads to a higher level of security? Do the users actually change their behaviour in a way that mitigates risks? Are the risks we are seeing today addressable by increasing awareness?", "", "", "When tempted by a good deal online, users don't focus on security warnings; rather, they look for signs to confirm a site's trustworthiness. User education needs to focus on challenging and correcting the misconceptions that guide current behavior.", "Phishing is an online identity theft that aims to steal sensitive information such as username, password and online banking details from its victims. Phishing education needs to be considered as a means to combat this threat. 
This paper reports on a design and development of a mobile game prototype as an educational tool helping computer users to protect themselves against phishing attacks. The elements of a game design framework for avoiding phishing attacks were used to address the game design issues. Our mobile game design aimed to enhance the users' avoidance behaviour through motivation to protect themselves against phishing threats. A think-aloud study was conducted, along with a pre- and post-test, to assess the game design framework though the developed mobile game prototype. The study results showed a significant improvement of participants' phishing avoidance behaviour in their post-test assessment. Furthermore, the study findings suggest that participants' threat perception, safeguard effectiveness, self-efficacy, perceived severity and perceived susceptibility elements positively impact threat avoidance behaviour, whereas safeguard cost had a negative impact on it. We developed a mobile game prototype to thwart phishing attacks.Investigated what key elements should be addressed in the game.The elements derived from a game design framework were incorporated.Empirically investigated users' phishing threat avoidance behaviour.", "In this paper we describe the design and evaluation of Anti-Phishing Phil, an online game that teaches users good habits to help them avoid phishing attacks. We used learning science principles to design and iteratively refine the game. We evaluated the game through a user study: participants were tested on their ability to identify fraudulent web sites before and after spending 15 minutes engaged in one of three anti-phishing training activities (playing the game, reading an anti-phishing tutorial we created based on the game, or reading existing online training materials). We found that the participants who played the game were better able to identify fraudulent web sites compared to the participants in other conditions. 
We attribute these effects to both the content of the training messages presented in the game as well as the presentation of these materials in an interactive game format. Our results confirm that games can be an effective way of educating people about phishing and other security attacks.", "", "Phishing is an online identity theft, which aims to steal confidential information such as username, password and online banking details from its victims. To prevent this, anti-phishing education needs to be considered. Therefore, the research reported in this paper examines whether conceptual knowledge or procedural knowledge has a positive effect on computer users' self-efficacy to thwart phishing threats. In order to accomplish this, a theoretical model based on Liang and Xue's (2010) Technology Threat Avoidance Theory (TTAT) has been proposed and evaluated. Data was collected from 161 regular computer users to elicit their feedback through an online questionnaire. The study findings revealed that the interaction effect of conceptual and procedural knowledge positively impacts on computer users' self-efficacy, which enhances their phishing threat avoidance behaviour. It can therefore be argued that well-designed end-user security education contributes to thwart phishing threats.", "This study aims to understand the IT threat avoidance behaviors of personal computer users. We tested a research model derived from Technology Threat Avoidance Theory (TTAT) using survey data. We find that users’ IT threat avoidance behavior is predicted by avoidance motivation, which, in turn, is determined by perceived threat, safeguard effectiveness, safeguard cost, and self-efficacy. Users develop a threat perception when they believe that the malicious IT is likely to attack them (perceived susceptibility) and the negative consequences will be severe if they are attacked (perceived severity). 
When threatened, users are more motivated to avoid the threat if they believe that the safeguarding measure is effective (safeguard effectiveness) and inexpensive (safeguard cost) and they have confidence in using it (self-efficacy). In addition, we find that perceived threat and safeguard effectiveness have a negative interaction on avoidance motivation so that a higher level of perceived threat is associated with a weaker relationship between safeguard effectiveness and avoidance motivation or a higher level of safeguard effectiveness is associated with a weaker relationship between perceived threat and avoidance motivation. These findings provide an enriched understanding about personal computer users’ IT threat avoidance behavior." ] }
1706.07518
2661761953
Previous neural machine translation models used heuristic search algorithms (e.g., beam search) in order to avoid solving the maximum a posteriori problem over translation sentences at test time. In this paper, we propose Gumbel-Greedy Decoding, which trains a generative network to predict the translation under a trained model. We solve this problem using the Gumbel-Softmax reparameterization, which makes our generative network differentiable and trainable through standard stochastic gradient methods. We empirically demonstrate that our proposed model is effective for generating sequences of discrete words.
There have been several works on learning to solve the decoding problem in NLP @cite_6 @cite_12 @cite_13 . Recently, a work on learning to decode came out independently of ours. It proposed to train a neural network that predicts an arbitrary decoding objective given a source sentence and a partial hypothesis, or prefix, of the translation, and uses it as an auxiliary score in beam search. To train such a network, referred to as a Q network in their paper, they generate each training example by either running beam search or using a ground-truth translation (when appropriate) for each source sentence. This approach allows one to use an arbitrary decoding objective, yet it still relies heavily on the log-probability of the underlying neural translation system during actual decoding.
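The Gumbel-Softmax reparameterization mentioned in the abstract replaces a hard categorical sample with a temperature-controlled softmax over Gumbel-perturbed logits, so gradients can flow through the sampling step. A minimal NumPy sketch of the sampling side follows (the trick's use inside a translation model is not shown; the logits here are arbitrary example values).

```python
import numpy as np

def gumbel_softmax(logits, temperature=1.0, rng=None):
    """Draw a relaxed one-hot sample from a categorical distribution
    via the Gumbel-Softmax reparameterization.

    logits: unnormalized log-probabilities over vocabulary items.
    temperature: as it approaches 0, samples approach hard one-hots.
    """
    rng = rng or np.random.default_rng(0)
    # Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1).
    u = rng.uniform(1e-10, 1.0, size=logits.shape)
    gumbel = -np.log(-np.log(u))
    # Softmax over the perturbed, temperature-scaled logits.
    y = (logits + gumbel) / temperature
    e = np.exp(y - y.max())
    return e / e.sum()

logits = np.log(np.array([0.1, 0.6, 0.3]))
sample = gumbel_softmax(logits, temperature=0.5)
# `sample` is a probability vector (sums to 1); at low temperature it
# concentrates on the arg-max of the Gumbel-perturbed logits.
```

Because the output is a smooth function of the logits, the same computation implemented in an autodiff framework is differentiable end to end, which is what makes the generative network trainable with standard stochastic gradients.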
{ "cite_N": [ "@cite_13", "@cite_12", "@cite_6" ], "mid": [ "2414484917", "2176263492", "2195405088" ], "abstract": [ "Sequence-to-Sequence (seq2seq) modeling has rapidly become an important general-purpose NLP tool that has proven effective for many text-generation and sequence-labeling tasks. Seq2seq builds on deep neural language modeling and inherits its remarkable accuracy in estimating local, next-word distributions. In this work, we introduce a model and beam-search training scheme, based on the work of Daume III and Marcu (2005), that extends seq2seq to learn global sequence scores. This structured approach avoids classical biases associated with local training and unifies the training loss with the test-time usage, while preserving the proven model architecture of seq2seq and its efficient training approach. We show that our system outperforms a highly-optimized attention-based seq2seq system and other baselines on three different sequence to sequence tasks: word ordering, parsing, and machine translation.", "Many natural language processing applications use language models to generate text. These models are typically trained to predict the next word in a sequence, given the previous words and some context such as an image. However, at test time the model is expected to generate the entire sequence from scratch. This discrepancy makes generation brittle, as errors may accumulate along the way. We address this issue by proposing a novel sequence level training algorithm that directly optimizes the metric used at test time, such as BLEU or ROUGE. On three different tasks, our approach outperforms several strong baselines for greedy generation. The method is also competitive when these baselines employ beam search, while being several times faster.", "We propose minimum risk training for end-to-end neural machine translation. 
Unlike conventional maximum likelihood estimation, minimum risk training is capable of optimizing model parameters directly with respect to arbitrary evaluation metrics, which are not necessarily differentiable. Experiments show that our approach achieves significant improvements over maximum likelihood estimation on a state-of-the-art neural machine translation system across various languages pairs. Transparent to architectures, our approach can be applied to more neural networks and potentially benefit more NLP tasks." ] }
1706.07829
2688959180
Recently, with the advancement of GPS-enabled cellular technologies, location-based services (LBS) have gained popularity. Nowadays, an increasingly large number of map-based applications enable users to ask a wider variety of queries. Researchers have studied the ride-sharing, carpooling, vehicle routing, and collective travel planning problems extensively in recent years. Collective traveling has the benefit of being environment-friendly, as it reduces the global travel cost, greenhouse gas emissions, and energy consumption. In this paper, we introduce several optimization problems to recommend a suitable route and stops of a vehicle, in a road network, for a group of users intending to travel collectively. The goal of each problem is to minimize the aggregate cost of the individual travelers' paths and the shared route under various constraints. First, we formulate the problem of determining the optimal pair of end-stops, given a set of queries that originate and terminate near the two prospective end regions. We outline a baseline polynomial-time algorithm and propose a new, faster solution - both calculating an exact answer. In our approach, we utilize the path-coherence property of road networks to develop an efficient algorithm. Second, we define the problem of calculating the optimal route and intermediate stops of a vehicle that picks up and drops off passengers en route, given its start and end stops and a set of path queries from users. We outline an exact solution whose time and space complexities are both exponential in the number of queries. Then, we propose a novel polynomial-time-and-space heuristic algorithm that performs reasonably well in practice. We also analyze several variants of this problem under different constraints. Lastly, we perform extensive experiments that demonstrate the efficiency and accuracy of our algorithms.
In this paper, we have introduced two types of problems: the optimal end-stops @math query, and the optimal route and intermediate stops @math query. In the existing literature, our @math and @math problems are closely related to the ride-sharing problem in road networks @cite_11 . In recent years, several studies @cite_13 @cite_18 @cite_35 @cite_12 have demonstrated the benefits of ride-sharing in reducing traffic congestion @cite_13 @cite_18 , the number of DWI fatalities @cite_35 , and greenhouse gas emissions @cite_12 . @cite_4 shows how a ride-sharing system may save time, money and the environment. Our techniques complement existing ride-sharing approaches @cite_11 by computing the optimal route and stops of a vehicle for a group of assigned passengers.
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_4", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2621131185", "2154245661", "2114547164", "", "2565919573", "2163599294" ], "abstract": [ "", "This paper assesses the potential of ride-sharing for reducing traffic in a city -- based on mobility data extracted from 3G Call Description Records (CDRs), for the cities of Madrid and Barcelona (BCN), and from OSNs, such as Twitter and Foursquare (FSQ), collected for the cities of New York (NY) and Los Angeles (LA). First, we analyze these data sets to understand mobility patterns, home and work locations, and social ties between users. Then, we develop an efficient algorithm for matching users with similar mobility patterns, considering a range of constraints, including social distance. The solution provides an upper bound to the potential decrease in the number of cars in a city that can be achieved by ride-sharing. Our results indicate that this decrease can be as high as 31 , when users are willing to ride with friends of friends.", "In times of increasing traffic-related problems, such as air-pollution or traffic jams, ride-sharing is one of the most environmentally friendly and pleasantest ways to travel. The many benefits are offset by a multitude of prejudices and fears, including security concerns and a heavy scheduling and coordinating burden. For this reason this paper introduces vHike an easy-to-use management system for dynamic ride-sharing running on modern Smartphones. By the use of techniques well-known from Web 2.0 social networks the threats and social discomfort emanated by ride-sharing is mitigated. With vHike we want to show that a proper designed social dynamic ride-sharing system can be feasible and viable.", "", "Ride-sharing services are transforming urban mobility by providing timely and convenient transportation to anybody, anywhere, and anytime. 
These services present enormous potential for positive societ al impacts with respect to pollution, energy consumption, congestion, etc. Current mathematical models, however, do not fully address the potential of ride-sharing. Recently, a large-scale study highlighted some of the benefits of car pooling but was limited to static routes with two riders per vehicle (optimally) or three (with heuristics). We present a more general mathematical model for real-time high-capacity ride-sharing that ( i ) scales to large numbers of passengers and trips and ( ii ) dynamically generates optimal routes with respect to online demand and vehicle locations. The algorithm starts from a greedy assignment and improves it through a constrained optimization, quickly returning solutions of good quality and converging to the optimal assignment over time. We quantify experimentally the tradeoff between fleet size, capacity, waiting time, travel delay, and operational costs for low- to medium-capacity vehicles, such as taxis and van shuttles. The algorithm is validated with ∼3 million rides extracted from the New York City taxicab public dataset. Our experimental study considers ride-sharing with rider capacity of up to 10 simultaneous passengers per vehicle. The algorithm applies to fleets of autonomous vehicles and also incorporates rebalancing of idling vehicles to areas of high demand. This framework is general and can be used for many real-time multivehicle, multitask assignment problems.", "Although ridesharing can provide a wealth of benefits, such as reduced travel costs, congestion, and consequently less pollution, there are a number of challenges that have restricted its widespread adoption. In fact, even at a time when improving communication systems provide real-time detailed information that could be used to facilitate ridesharing, the share of work trips that use ridesharing has decreased by almost 10 in the past 30years." ] }
1706.07829
2688959180
Recently, with the advancement of GPS-enabled cellular technologies, location-based services (LBS) have gained popularity. Nowadays, an increasingly large number of map-based applications enable users to ask a wider variety of queries. Researchers have studied the ride-sharing, carpooling, vehicle routing, and collective travel planning problems extensively in recent years. Collective traveling has the benefit of being environment-friendly, as it reduces the global travel cost, greenhouse gas emissions, and energy consumption. In this paper, we introduce several optimization problems to recommend a suitable route and stops of a vehicle, in a road network, for a group of users intending to travel collectively. The goal of each problem is to minimize the aggregate cost of the individual travelers' paths and the shared route under various constraints. First, we formulate the problem of determining the optimal pair of end-stops, given a set of queries that originate and terminate near the two prospective end regions. We outline a baseline polynomial-time algorithm and propose a new, faster solution - both calculating an exact answer. In our approach, we utilize the path-coherence property of road networks to develop an efficient algorithm. Second, we define the problem of calculating the optimal route and intermediate stops of a vehicle that picks up and drops off passengers en route, given its start and end stops and a set of path queries from users. We outline an exact solution whose time and space complexities are both exponential in the number of queries. Then, we propose a novel polynomial-time-and-space heuristic algorithm that performs reasonably well in practice. We also analyze several variants of this problem under different constraints. Lastly, we perform extensive experiments that demonstrate the efficiency and accuracy of our algorithms.
The challenges in a ride-sharing system come from two directions: first, dynamically matching the passengers requesting shared rides to appropriate vehicles; second, computing the best route of each vehicle and its pick-up and drop-off locations for the passengers assigned to it. Neither task is trivial. Several works in the existing literature address the first problem @cite_41 @cite_6 @cite_39 . @cite_41 presents an efficient algorithm based on the kinetic tree, which finds an appropriate assignment with a service guarantee. In @cite_39 , a system named T-share performs dynamic vehicle-passenger matching for taxi ride-sharing. @cite_6 introduces a spatio-temporal index structure that facilitates taxi searching under a set of constraints, such as time-window and monetary constraints. These algorithms concentrate on efficiently assigning passengers to vehicles in real time. In this paper, we mainly focus on overcoming the second challenge of determining the optimal route and stops of a vehicle by solving the @math and @math problems.
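A common building block in these matching systems (e.g., the scheduling step described for T-share) is inserting a new passenger's pickup and drop-off into a vehicle's existing route at the positions that add the least travel distance, with the pickup kept before the drop-off and the fixed start and end stops left in place. The sketch below is a simplified illustration of that idea, not the cited systems' actual algorithms; the stop names and line-graph distances are made up for the example.

```python
def best_insertion(route, pickup, dropoff, dist):
    """Insert (pickup, dropoff) into `route` so that the added travel
    distance is minimized, keeping pickup before dropoff and leaving
    the route's first and last stops fixed.

    route: list of stop ids; dist: dict mapping (a, b) -> distance.
    Returns (new_route, added_cost)."""
    def cost(r):
        return sum(dist[(a, b)] for a, b in zip(r, r[1:]))

    base = cost(route)
    best_route, best_added = None, float("inf")
    # i / j are the positions the pickup / dropoff are inserted before.
    for i in range(1, len(route)):
        for j in range(i, len(route)):
            r = route[:i] + [pickup] + route[i:j] + [dropoff] + route[j:]
            added = cost(r) - base
            if added < best_added:
                best_route, best_added = r, added
    return best_route, best_added

# Illustrative stops placed on a line: A(0) - B(1) - C(2) - D(3).
pos = {"A": 0, "B": 1, "C": 2, "D": 3}
dist = {(a, b): abs(pos[a] - pos[b]) for a in pos for b in pos}
new_route, added = best_insertion(["A", "D"], "B", "C", dist)
# -> new_route == ["A", "B", "C", "D"], added == 0 (no detour needed)
```

Real systems additionally check time-window, capacity, and monetary constraints before accepting an insertion, and use indexes to limit which vehicles are even considered.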
{ "cite_N": [ "@cite_41", "@cite_6", "@cite_39" ], "mid": [ "1923372423", "1993912146", "1976993400" ], "abstract": [ "Urban traffic gridlock is a familiar scene. At the same time, the mean occupancy rate of personal vehicle trips in the United States is only 1.6 persons per vehicle mile. Ridesharing has the potential to solve many environmental, congestion, pollution, and energy problems. In this paper, we introduce the problem of large scale real-time ridesharing with service guarantee on road networks. Trip requests are dynamically matched to vehicles while trip waiting and service time constraints are satisfied. We first propose two scheduling algorithms: a branch-and-bound algorithm and an integer programing algorithm. However, these algorithms do not adapt well to the dynamic nature of the ridesharing problem. Thus, we propose kinetic tree algorithms which are better suited to efficient scheduling of dynamic requests and adjust routes on-the-fly. We perform experiments on a large Shanghai taxi dataset. Results show that the kinetic tree algorithms outperform other algorithms significantly.", "We proposed and developed a taxi-sharing system that accepts taxi passengers’ real-time ride requests sent from smartphones and schedules proper taxis to pick up them via ridesharing, subject to time, capacity, and monetary constraints. The monetary constraints provide incentives for both passengers and taxi drivers: passengers will not pay more compared with no ridesharing and get compensated if their travel time is lengthened due to ridesharing; taxi drivers will make money for all the detour distance due to ridesharing. While such a system is of significant social and environmental benefit, e.g., saving energy consumption and satisfying people's commute, real-time taxi-sharing has not been well studied yet. To this end, we devise a mobile-cloud architecture based taxi-sharing system. 
Taxi riders and taxi drivers use the taxi-sharing service provided by the system via a smart phone App. The Cloud first finds candidate taxis quickly for a taxi ride request using a taxi searching algorithm supported by a spatio-temporal index. A scheduling process is then performed in the cloud to select a taxi that satisfies the request with minimum increase in travel distance. We built an experimental platform using the GPS trajectories generated by over 33,000 taxis over a period of three months. A ride request generator is developed (available at http: cs.uic.edu ∼sma ridesharing) in terms of the stochastic process modelling real ride requests learned from the data set. Tested on this platform with extensive experiments, our proposed system demonstrated its efficiency, effectiveness and scalability. For example, when the ratio of the number of ride requests to the number of taxis is 6, our proposed system serves three times as many taxi riders as that when no ridesharing is performed while saving 11 percent in total travel distance and 7 percent taxi fare per rider.", "Taxi ridesharing can be of significant social and environmental benefit, e.g. by saving energy consumption and satisfying people's commute needs. Despite the great potential, taxi ridesharing, especially with dynamic queries, is not well studied. In this paper, we formally define the dynamic ridesharing problem and propose a large-scale taxi ridesharing service. It efficiently serves real-time requests sent by taxi users and generates ridesharing schedules that reduce the total travel distance significantly. In our method, we first propose a taxi searching algorithm using a spatio-temporal index to quickly retrieve candidate taxis that are likely to satisfy a user query. A scheduling algorithm is then proposed. It checks each candidate taxi and inserts the query's trip into the schedule of the taxi which satisfies the query with minimum additional incurred travel distance. 
To tackle the heavy computational load, a lazy shortest path calculation strategy is devised to speed up the scheduling algorithm. We evaluated our service using a GPS trajectory dataset generated by over 33,000 taxis during a period of 3 months. By learning the spatio-temporal distributions of real user queries from this dataset, we built an experimental platform that simulates user real behaviours in taking a taxi. Tested on this platform with extensive experiments, our approach demonstrated its efficiency, effectiveness, and scalability. For example, our proposed service serves 25 additional taxi users while saving 13 travel distance compared with no-ridesharing (when the ratio of the number of queries to that of taxis is 6)." ] }
1706.07829
2688959180
Recently, with the advancement of the GPS-enabled cellular technologies, the location-based services (LBS) have gained in popularity. Nowadays, an increasingly larger number of map-based applications enable users to ask a wider variety of queries. Researchers have studied the ride-sharing, the carpooling, the vehicle routing, and the collective travel planning problems extensively in recent years. Collective traveling has the benefit of being environment-friendly by reducing the global travel cost, the greenhouse gas emission, and the energy consumption. In this paper, we introduce several optimization problems to recommend a suitable route and stops of a vehicle, in a road network, for a group of users intending to travel collectively. The goal of each problem is to minimize the aggregate cost of the individual travelers' paths and the shared route under various constraints. First, we formulate the problem of determining the optimal pair of end-stops, given a set of queries that originate and terminate near the two prospective end regions. We outline a baseline polynomial-time algorithm and propose a new faster solution - both calculating an exact answer. In our approach, we utilize the path-coherence property of road networks to develop an efficient algorithm. Second, we define the problem of calculating the optimal route and intermediate stops of a vehicle that picks up and drops off passengers en-route, given its start and end stoppages, and a set of path queries from users. We outline an exact solution of both time and space complexities exponential in the number of queries. Then, we propose a novel polynomial-time-and-space heuristic algorithm that performs reasonably well in practice. We also analyze several variants of this problem under different constraints. Last, we perform extensive experiments that demonstrate the efficiency and accuracy of our algorithms.
In the @math problem, given a cluster of co-located sources and another of co-located destinations, we are to find an optimal pair of end-stops for a vehicle that carries the passengers from the source cluster to the destination cluster. Users who request a vehicle through a location-based service do not automatically form such clusters; the initial task is to group them in a way that satisfies our input requirements. In this paper, we do not provide an algorithm for grouping passengers. Instead, we assume that an existing clustering algorithm such as @cite_25 , which partitions the queries into batches, has already performed the grouping. @cite_25 divides the path queries into groups, where each group comprises the queries from a source cluster to a destination cluster. We take each output query group of @cite_25 as input to our algorithm and focus on determining the optimal end-stoppages for the corresponding vehicle.
{ "cite_N": [ "@cite_25" ], "mid": [ "68380453" ], "abstract": [ "The advancement of mobile technologies and map-based applications enables a user to access a wide variety of location-based services that range from information queries to navigation systems. Due to the popularity of map-based applications among the users, the service provider often requires to answer a large number of simultaneous (or contemporary) queries. Thus, processing queries efficiently on spatial networks (i.e., road networks) have become an important research area in recent years. In this paper, we focus on path queries that find the shortest path between a source and a destination of the user. In particular, we address the problem of finding the shortest paths for a large number of simultaneous path queries in road networks. Traditional systems that consider one query at a time are not suitable for many applications due to high computational and service cost overhead. We propose an efficient group based approach that provides a practical solution with reduced cost. The key concept of our approach is to group queries that share a common travel path and then compute the shortest path for the group. Experimental results show the effectiveness and efficiency of our group based approach." ] }
1706.07829
2688959180
Recently, with the advancement of the GPS-enabled cellular technologies, the location-based services (LBS) have gained in popularity. Nowadays, an increasingly larger number of map-based applications enable users to ask a wider variety of queries. Researchers have studied the ride-sharing, the carpooling, the vehicle routing, and the collective travel planning problems extensively in recent years. Collective traveling has the benefit of being environment-friendly by reducing the global travel cost, the greenhouse gas emission, and the energy consumption. In this paper, we introduce several optimization problems to recommend a suitable route and stops of a vehicle, in a road network, for a group of users intending to travel collectively. The goal of each problem is to minimize the aggregate cost of the individual travelers' paths and the shared route under various constraints. First, we formulate the problem of determining the optimal pair of end-stops, given a set of queries that originate and terminate near the two prospective end regions. We outline a baseline polynomial-time algorithm and propose a new faster solution - both calculating an exact answer. In our approach, we utilize the path-coherence property of road networks to develop an efficient algorithm. Second, we define the problem of calculating the optimal route and intermediate stops of a vehicle that picks up and drops off passengers en-route, given its start and end stoppages, and a set of path queries from users. We outline an exact solution of both time and space complexities exponential in the number of queries. Then, we propose a novel polynomial-time-and-space heuristic algorithm that performs reasonably well in practice. We also analyze several variants of this problem under different constraints. Last, we perform extensive experiments that demonstrate the efficiency and accuracy of our algorithms.
Our @math query differs from the point-to-point shortest path query in two aspects. First, it takes multiple source-destination pairs as input. Second, it optimizes a different cost function: the sum of the vehicle's path cost and the passengers' solo travel costs. Therefore, traditional shortest path algorithms such as @math @cite_40 , @cite_44 and @math @cite_0 cannot answer our query. Similarly, we cannot use faster hierarchy-based approaches such as @cite_42 , @cite_37 , and @cite_38 . However, we benefit from @math 's algorithm @cite_40 , @cite_44 , the bidirectional search @cite_30 , and the group shortest path approach @cite_10 . @cite_10 introduces a technique to process a batch of shortest path queries simultaneously, based on the path-coherence property of road networks. Although @cite_10 cannot solve our problem, we borrow the intuition behind its simultaneous search to develop an efficient algorithm that answers the @math query.
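To make the combined cost function concrete, the following brute-force sketch (not the paper's algorithm; the function names and graph encoding are illustrative assumptions) scores one candidate end-stop pair (s, e) by running plain Dijkstra searches and summing the shared vehicle leg with each passenger's solo legs:

```python
import heapq

def dijkstra(adj, src):
    # Single-source shortest path costs on a weighted graph given as
    # {node: [(neighbor, weight), ...]}.
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def aggregate_cost(adj, queries, s, e):
    # Aggregate cost of choosing (s, e) as the vehicle's end-stops:
    # the vehicle drives the shared leg s -> e once, and each passenger
    # covers source -> s and e -> destination on their own.
    total = dijkstra(adj, s).get(e, float("inf"))   # shared vehicle leg
    for src, dst in queries:
        to_s = dijkstra(adj, src).get(s, float("inf"))
        from_e = dijkstra(adj, e).get(dst, float("inf"))
        total += to_s + from_e
    return total
```

Scoring every candidate pair with repeated independent searches like this is exactly the redundancy that a faster exact solution exploiting path-coherence would aim to avoid.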
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_37", "@cite_42", "@cite_44", "@cite_0", "@cite_40", "@cite_10" ], "mid": [ "2162841080", "2081863845", "2148831787", "1673383737", "2128812635", "", "", "1504126076" ], "abstract": [ "Three admissible bidirectional search algorithms have been described in the literature: A Cartesian product approach due to Doran, Pohl's BHPA, and Champeaux and Sint's BHFFA2. This paper describes an algorithm, GP, which contains the latter two and others. New admissibility results are obtained. A first order analysis is made comparing the run times of Cartesian product search, two versions of GP, and unidirectional A. The goal is to gain insight on when bidirectional search is useful and direction for seeking better bidirectional search algorithms.", "Given two locations s and t in a road network, a distance query returns the minimum network distance from s to t, while a shortest path query computes the actual route that achieves the minimum distance. These two types of queries find important applications in practice, and a plethora of solutions have been proposed in past few decades. The existing solutions, however, are optimized for either practical or asymptotic performance, but not both. In particular, the techniques with enhanced practical efficiency are mostly heuristic-based, and they offer unattractive worst-case guarantees in terms of space and time. On the other hand, the methods that are worst-case efficient often entail prohibitive preprocessing or space overheads, which render them inapplicable for the large road networks (with millions of nodes) commonly used in modern map applications. This paper presents Arterial Hierarchy (AH), an index structure that narrows the gap between theory and practice in answering shortest path and distance queries on road networks. On the theoretical side, we show that, under a realistic assumption, AH answers any distance query in O(log α) time, where α = dmax dmin, and dmax (resp. 
dmin) is the largest (resp. smallest) L∞ distance between any two nodes in the road network. In addition, any shortest path query can be answered in O(k + log α) time, where k is the number of nodes on the shortest path. On the practical side, we experimentally evaluate AH on a large set of real road networks with up to twenty million nodes, and we demonstrate that (i) AH outperforms the state of the art in terms of query time, and (ii) its space and pre-computation overheads are moderate.", "We present a route planning technique solely based on the concept of node contraction. The nodes are first ordered by 'importance'. A hierarchy is then generated by iteratively contracting the least important node. Contracting a node υ means replacing shortest paths going through v by shortcuts. We obtain a hierarchical query algorithm using bidirectional shortest-path search. The forward search uses only edges leading to more important nodes and the backward search uses only edges coming from more important nodes. For fastest routes in road networks, the graph remains very sparse throughout the contraction process using rather simple heuristics for ordering the nodes. We have five times lower query times than the best previous hierarchical Dijkstra-based speedup techniques and a negative space overhead, i.e., the data structure for distance computation needs less space than the input graph. CHs can be combined with many other route planning techniques, leading to improved performance for many-to-many routing, transit-node routing, goal-directed routing or mobile and dynamic scenarios.", "Materialization and hierarchical routing algorithms are becoming important tools in querying databases for the shortest paths in time-critical applications like Intelligent Transportation Systems (ITS), due to the growing size of their spatial graph databases [16]. 
A hierarchical routing algorithm decomposes the original graph into a set of fragment graphs and a boundary graph which summarizes the fragment graphs. A fully materialized hierarchical routing algorithm pre-computes and stores the shortest-path view and the shortest-path-cost view for the graph fragments as well as for the boundary graph [9]. The storage cost of the fully materialized approach can be reduced by a virtual or a hybrid materialization approach, where few or none of the relevant views are pre-computed. This paper explores the effect of materializing individual views for the storage overhead and computation time of hierarchical routing algorithms. Our experiments with the Twin Cities metropolitan road-map show that materializing the shortest-path-cost view for the boundary graph provides the best savings in computation time, for a given amount of storage and a small number of fragments. Materializing the relevant part of the shortest-path-cost view for the fragment graphs provides the next best savings, followed by materializing the shortest-path view for the boundary graph. Virtual shortest-path-view on fragments can reduce storage costs by an order of magnitude or more for large graphs", "This paper discusses the implementation of Dijkstra's (1959) classic double bucket algorithm for path finding in connected networks. The work reports on a modification of the algorithm embracing both static and dynamic heuristic components and multiple source nodes. The modified algorithm is applied in a 3D spatial information system (SIS) for routing emergency service vehicles. The algorithm has been implemented as a suite of modules and integrated into a commercial SIS software environment. Genuine 3D spatial data is used to test the algorithm on the problem of vehicle routing and rerouting under simulated earthquake conditions in the Japanese city of Okayama. 
Coverage graphs were also produced giving contour lines joining points with identical travel times.", "", "", "The recent advancement of GPS-enabled mobile technologies and the proliferation of map-based applications are attracting an increasing number of people to use location based services (LBSs). Processing a larger number of simultaneous queries efficiently have become an important research topic in recent years. In this paper, we focus on an important class of LBSs, shortest path queries (SP-queries) in road networks. Given a source and a destination in a road network, an SP-query returns the path from the source to the destination that minimizes the travel time. We particularly focus on batch processing of simultaneous SP-queries in road networks. Traditional systems that process one query at a time usually provide slow responses, causing the machine to flood with incoming queries. Existing fast solutions for SP-queries require expensive pre-processing steps and are incapable of adapting with the continuous change in traffic on the roads. We propose an efficient group based approach that provides an approximate solution with reduced cost and high accuracy. An important benefit of our approach is that it does not require expensive pre-processing. The key concept is to exploit the path-coherence property of road networks by grouping queries that share substantial common paths in their shortest paths and processing the group in a single pass. Our approach incurs an average relative error of 0.5 and is on average 6 times faster than the straightforward approach that evaluates each SP-query individually." ] }
1706.07829
2688959180
Recently, with the advancement of the GPS-enabled cellular technologies, the location-based services (LBS) have gained in popularity. Nowadays, an increasingly larger number of map-based applications enable users to ask a wider variety of queries. Researchers have studied the ride-sharing, the carpooling, the vehicle routing, and the collective travel planning problems extensively in recent years. Collective traveling has the benefit of being environment-friendly by reducing the global travel cost, the greenhouse gas emission, and the energy consumption. In this paper, we introduce several optimization problems to recommend a suitable route and stops of a vehicle, in a road network, for a group of users intending to travel collectively. The goal of each problem is to minimize the aggregate cost of the individual travelers' paths and the shared route under various constraints. First, we formulate the problem of determining the optimal pair of end-stops, given a set of queries that originate and terminate near the two prospective end regions. We outline a baseline polynomial-time algorithm and propose a new faster solution - both calculating an exact answer. In our approach, we utilize the path-coherence property of road networks to develop an efficient algorithm. Second, we define the problem of calculating the optimal route and intermediate stops of a vehicle that picks up and drops off passengers en-route, given its start and end stoppages, and a set of path queries from users. We outline an exact solution of both time and space complexities exponential in the number of queries. Then, we propose a novel polynomial-time-and-space heuristic algorithm that performs reasonably well in practice. We also analyze several variants of this problem under different constraints. Last, we perform extensive experiments that demonstrate the efficiency and accuracy of our algorithms.
In the @math problem, given the end-stoppages and a set of path queries from a group of users, we are to compute an optimal route for a vehicle as a sequence of intermediate stops. Usually, a ride-sharing system maintains a fleet of vehicles to provide passengers with shared rides. The first task is to divide the passengers into groups and assign each group to a vehicle; we assume that a vehicle-passenger matching algorithm, e.g., @cite_41 , @cite_6 , @cite_39 , or @cite_16 , has already done so. In this paper, we focus on determining the optimal route for a single vehicle serving a group of passengers.
{ "cite_N": [ "@cite_41", "@cite_16", "@cite_6", "@cite_39" ], "mid": [ "1923372423", "2565197146", "1993912146", "1976993400" ], "abstract": [ "Urban traffic gridlock is a familiar scene. At the same time, the mean occupancy rate of personal vehicle trips in the United States is only 1.6 persons per vehicle mile. Ridesharing has the potential to solve many environmental, congestion, pollution, and energy problems. In this paper, we introduce the problem of large scale real-time ridesharing with service guarantee on road networks. Trip requests are dynamically matched to vehicles while trip waiting and service time constraints are satisfied. We first propose two scheduling algorithms: a branch-and-bound algorithm and an integer programing algorithm. However, these algorithms do not adapt well to the dynamic nature of the ridesharing problem. Thus, we propose kinetic tree algorithms which are better suited to efficient scheduling of dynamic requests and adjust routes on-the-fly. We perform experiments on a large Shanghai taxi dataset. Results show that the kinetic tree algorithms outperform other algorithms significantly.", "Recently, many ride sharing systems have been commercially introduced (e.g., Uber, Flinc, and Lyft) forming a multi-billion dollars industry. The main idea is to match people requesting a certain ride to other people who are acting as drivers on their own spare time. The matching algorithm run by these services is very simple and ignores a wide sector of users who can be exploited to maximize the benefits of these services. In this demo, we demonstrate SHAREK; a driver-rider matching algorithm that can be embedded inside existing ride sharing services to enhance the quality of their matching. SHAREK has the potential to boost the performance and widen the user base and applicability of existing ride sharing services. 
This is mainly because within its matching technique, SHAREK takes into account user preferences in terms of maximum waiting time the rider is willing to have before being picked up as well as the maximum cost that the rider is willing to pay. Then, within its course of execution, SHAREK applies a set of smart filters that enable it to do the matching so efficiently without the need to many expensive shortest path computations.", "We proposed and developed a taxi-sharing system that accepts taxi passengers’ real-time ride requests sent from smartphones and schedules proper taxis to pick up them via ridesharing, subject to time, capacity, and monetary constraints. The monetary constraints provide incentives for both passengers and taxi drivers: passengers will not pay more compared with no ridesharing and get compensated if their travel time is lengthened due to ridesharing; taxi drivers will make money for all the detour distance due to ridesharing. While such a system is of significant social and environmental benefit, e.g., saving energy consumption and satisfying people's commute, real-time taxi-sharing has not been well studied yet. To this end, we devise a mobile-cloud architecture based taxi-sharing system. Taxi riders and taxi drivers use the taxi-sharing service provided by the system via a smart phone App. The Cloud first finds candidate taxis quickly for a taxi ride request using a taxi searching algorithm supported by a spatio-temporal index. A scheduling process is then performed in the cloud to select a taxi that satisfies the request with minimum increase in travel distance. We built an experimental platform using the GPS trajectories generated by over 33,000 taxis over a period of three months. A ride request generator is developed (available at http: cs.uic.edu ∼sma ridesharing) in terms of the stochastic process modelling real ride requests learned from the data set. 
Tested on this platform with extensive experiments, our proposed system demonstrated its efficiency, effectiveness and scalability. For example, when the ratio of the number of ride requests to the number of taxis is 6, our proposed system serves three times as many taxi riders as that when no ridesharing is performed while saving 11 percent in total travel distance and 7 percent taxi fare per rider.", "Taxi ridesharing can be of significant social and environmental benefit, e.g. by saving energy consumption and satisfying people's commute needs. Despite the great potential, taxi ridesharing, especially with dynamic queries, is not well studied. In this paper, we formally define the dynamic ridesharing problem and propose a large-scale taxi ridesharing service. It efficiently serves real-time requests sent by taxi users and generates ridesharing schedules that reduce the total travel distance significantly. In our method, we first propose a taxi searching algorithm using a spatio-temporal index to quickly retrieve candidate taxis that are likely to satisfy a user query. A scheduling algorithm is then proposed. It checks each candidate taxi and inserts the query's trip into the schedule of the taxi which satisfies the query with minimum additional incurred travel distance. To tackle the heavy computational load, a lazy shortest path calculation strategy is devised to speed up the scheduling algorithm. We evaluated our service using a GPS trajectory dataset generated by over 33,000 taxis during a period of 3 months. By learning the spatio-temporal distributions of real user queries from this dataset, we built an experimental platform that simulates user real behaviours in taking a taxi. Tested on this platform with extensive experiments, our approach demonstrated its efficiency, effectiveness, and scalability. 
For example, our proposed service serves 25 additional taxi users while saving 13 travel distance compared with no-ridesharing (when the ratio of the number of queries to that of taxis is 6)." ] }
1706.07845
2700550412
We present HARP, a novel method for learning low dimensional embeddings of a graph's nodes which preserves higher-order structural features. Our proposed method achieves this by compressing the input graph prior to embedding it, effectively avoiding troublesome embedding configurations (i.e. local minima) which can pose problems to non-convex optimization. HARP works by finding a smaller graph which approximates the global structure of its input. This simplified graph is used to learn a set of initial representations, which serve as good initializations for learning representations in the original, detailed graph. We inductively extend this idea, by decomposing a graph in a series of levels, and then embed the hierarchy of graphs from the coarsest one to the original graph. HARP is a general meta-strategy to improve all of the state-of-the-art neural algorithms for embedding graphs, including DeepWalk, LINE, and Node2vec. Indeed, we demonstrate that applying HARP's hierarchical paradigm yields improved implementations for all three of these methods, as evaluated on both classification tasks on real-world graphs such as DBLP, BlogCatalog, CiteSeer, and Arxiv, where we achieve a performance gain over the original implementations by up to 14 Macro F1.
Most early methods treated representation learning as performing dimension reduction on the graph's Laplacian and adjacency matrices @cite_13 @cite_22 @cite_9 . These methods work well on small graphs, but their time complexity is too high for the large-scale graphs commonly encountered today.
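The scalability problem is easy to see in a minimal Laplacian-eigenmaps-style embedding (a sketch in the spirit of these spectral methods, not any cited paper's exact procedure): it reduces to a dense eigendecomposition, which costs roughly O(n^3) time and O(n^2) memory in the number of nodes.

```python
import numpy as np

def laplacian_embedding(A, dim):
    # Spectral node embedding: take the eigenvectors of the graph
    # Laplacian L = D - A associated with the smallest non-trivial
    # eigenvalues. A is a dense symmetric adjacency matrix.
    D = np.diag(A.sum(axis=1))
    L = D - A
    vals, vecs = np.linalg.eigh(L)   # eigh: L is symmetric; ascending eigenvalues
    return vecs[:, 1:dim + 1]        # skip the constant (zero-eigenvalue) eigenvector
```

For a graph with millions of nodes, materializing L densely and calling `eigh` is already infeasible, which is what motivates the sampling-based neural methods discussed next.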
{ "cite_N": [ "@cite_9", "@cite_13", "@cite_22" ], "mid": [ "2001141328", "2156718197", "" ], "abstract": [ "Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs—30,000 auditory nerve fibers or 106 optic nerve fibers—a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure.", "Drawing on the correspondence between the graph Laplacian, the Laplace-Beltrami operator on a manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for constructing a representation for data sampled from a low dimensional manifold embedded in a higher dimensional space. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality preserving properties and a natural connection to clustering. Several applications are considered.", "" ] }
1706.07845
2700550412
We present HARP, a novel method for learning low dimensional embeddings of a graph's nodes which preserves higher-order structural features. Our proposed method achieves this by compressing the input graph prior to embedding it, effectively avoiding troublesome embedding configurations (i.e. local minima) which can pose problems to non-convex optimization. HARP works by finding a smaller graph which approximates the global structure of its input. This simplified graph is used to learn a set of initial representations, which serve as good initializations for learning representations in the original, detailed graph. We inductively extend this idea, by decomposing a graph in a series of levels, and then embed the hierarchy of graphs from the coarsest one to the original graph. HARP is a general meta-strategy to improve all of the state-of-the-art neural algorithms for embedding graphs, including DeepWalk, LINE, and Node2vec. Indeed, we demonstrate that applying HARP's hierarchical paradigm yields improved implementations for all three of these methods, as evaluated on both classification tasks on real-world graphs such as DBLP, BlogCatalog, CiteSeer, and Arxiv, where we achieve a performance gain over the original implementations by up to 14 Macro F1.
Recently, neural network-based methods have been proposed for constructing node representations in large-scale graphs. DeepWalk @cite_0 presents a two-phase algorithm for graph representation learning: in the first phase, it samples sequences of neighboring nodes by performing random walks on the graph; in the second, it learns the node representations by training a Skip-gram model @cite_7 on those walks. A number of methods extend this idea. First, several methods use different strategies for sampling neighboring nodes: LINE @cite_21 learns graph embeddings that preserve both the first-order and second-order proximities in a graph; Walklets @cite_3 captures multiscale node representations by sampling edges from higher powers of the graph adjacency matrix; and Node2vec @cite_10 combines DFS-like and BFS-like exploration within the random walk framework. Second, matrix factorization methods and deep neural networks have also been proposed @cite_18 @cite_5 @cite_11 @cite_4 as alternatives to the Skip-gram model for learning the latent representations.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_7", "@cite_21", "@cite_3", "@cite_0", "@cite_5", "@cite_10", "@cite_11" ], "mid": [ "2090891622", "2614812929", "2950133940", "1888005072", "2963410212", "2154851992", "2387462954", "2366141641", "2393319904" ], "abstract": [ "In this paper, we present GraRep , a novel model for learning vertex representations of weighted graphs. This model learns low dimensional vectors to represent vertices appearing in a graph and, unlike existing work, integrates global structural information of the graph into the learning process. We also formally analyze the connections between our work and several previous research efforts, including the DeepWalk model of as well as the skip-gram model with negative sampling of We conduct experiments on a language network, a social network as well as a citation network and show that our learned global representations can be effectively used as features in tasks such as clustering, classification and visualization. Empirical results demonstrate that our representation significantly outperforms other state-of-the-art methods in such tasks.", "We propose a new method for embedding graphs while preserving directed edge information. Learning such continuous-space vector representations (or embeddings) of nodes in a graph is an important first step for using network information (from social networks, user-item graphs, knowledge bases, etc.) in many machine learning tasks. Unlike previous work, we (1) explicitly model an edge as a function of node embeddings, and we (2) propose a novel objective, the graph likelihood, which contrasts information from sampled random walks with non-existent edges. Individually, both of these contributions improve the learned representations, especially when there are memory constraints on the total size of the embeddings. 
When combined, our contributions enable us to significantly improve the state-of-the-art by learning more concise representations that better preserve the graph structure. We evaluate our method on a variety of link-prediction tasks including social networks, collaboration networks, and protein interactions, showing that our proposed method learns representations with error reductions of up to 76% and 55%, on directed and undirected graphs. In addition, we show that the representations learned by our method are quite space efficient, producing embeddings which have higher structure-preserving accuracy but are 10 times smaller.", "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.", "This paper studies the problem of embedding very large information networks into low-dimensional vector spaces, which is useful in many tasks such as visualization, node classification, and link prediction. Most existing graph embedding methods do not scale for real world information networks which usually contain millions of nodes.
In this paper, we propose a novel network embedding method called the "LINE," which is suitable for arbitrary types of information networks: undirected, directed, and/or weighted. The method optimizes a carefully designed objective function that preserves both the local and global network structures. An edge-sampling algorithm is proposed that addresses the limitation of the classical stochastic gradient descent and improves both the effectiveness and the efficiency of the inference. Empirical experiments prove the effectiveness of the LINE on a variety of real-world information networks, including language networks, social networks, and citation networks. The algorithm is very efficient, which is able to learn the embedding of a network with millions of vertices and billions of edges in a few hours on a typical single machine. The source code of the LINE is available online https://github.com/tangjianpku/LINE .", "We present WALKLETS, a novel approach for learning multiscale representations of vertices in a network. In contrast to previous works, these representations explicitly encode multi-scale vertex relationships in a way that is analytically derivable. WALKLETS generates these multiscale relationships by sub-sampling short random walks on the vertices of a graph. By 'skipping' over steps in each random walk, our method generates a corpus of vertex pairs which are reachable via paths of a fixed length. This corpus can then be used to learn a series of latent representations, each of which captures successively higher order relationships from the adjacency matrix. We demonstrate the efficacy of WALKLETS's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, DBLP, Flickr, and YouTube. Our results show that WALKLETS outperforms new methods based on neural matrix factorization. Specifically, we outperform DeepWalk by up to 10% and LINE by 58% Micro-F1 on challenging multi-label classification tasks.
Finally, WALKLETS is an online algorithm, and can easily scale to graphs with millions of vertices and edges.", "We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.", "Graph embedding algorithms embed a graph into a vector space where the structure and the inherent properties of the graph are preserved. The existing graph embedding methods cannot preserve the asymmetric transitivity well, which is a critical property of directed graphs. Asymmetric transitivity depicts the correlation among directed edges, that is, if there is a directed path from u to v, then there is likely a directed edge from u to v.
Asymmetric transitivity can help in capturing structures of graphs and recovering from partially observed graphs. To tackle this challenge, we propose the idea of preserving asymmetric transitivity by approximating high-order proximity which are based on asymmetric transitivity. In particular, we develop a novel graph embedding algorithm, High-Order Proximity preserved Embedding (HOPE for short), which is scalable to preserve high-order proximities of large scale graphs and capable of capturing the asymmetric transitivity. More specifically, we first derive a general formulation that cover multiple popular high-order proximity measurements, then propose a scalable embedding algorithm to approximate the high-order proximity measurements based on their general formulation. Moreover, we provide a theoretical upper bound on the RMSE (Root Mean Squared Error) of the approximation. Our empirical experiments on a synthetic dataset and three real-world datasets demonstrate that HOPE can approximate the high-order proximities significantly better than the state-of-art algorithms and outperform the state-of-art algorithms in tasks of reconstruction, link prediction and vertex recommendation.", "Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. 
We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.", "Network embedding is an important method to learn low-dimensional representations of vertexes in networks, aiming to capture and preserve the network structure. Almost all the existing network embedding methods adopt shallow models. However, since the underlying network structure is complex, shallow models cannot capture the highly non-linear network structure, resulting in sub-optimal network representations. Therefore, how to find a method that is able to effectively capture the highly non-linear network structure and preserve the global and local structure is an open yet important problem. To solve this problem, in this paper we propose a Structural Deep Network Embedding method, namely SDNE. More specifically, we first propose a semi-supervised deep model, which has multiple layers of non-linear functions, thereby being able to capture the highly non-linear network structure. Then we propose to exploit the first-order and second-order proximity jointly to preserve the network structure. The second-order proximity is used by the unsupervised component to capture the global network structure. While the first-order proximity is used as the supervised information in the supervised component to preserve the local network structure. 
By jointly optimizing them in the semi-supervised deep model, our method can preserve both the local and global network structure and is robust to sparse networks. Empirically, we conduct the experiments on five real-world networks, including a language network, a citation network and three social networks. The results show that compared to the baselines, our method can reconstruct the original network significantly better and achieves substantial gains in three applications, i.e. multi-label classification, link prediction and visualization." ] }
1706.07867
2697966164
Hierarchical models are utilized in a wide variety of problems which are characterized by task hierarchies, where predictions on smaller subtasks are useful for trying to predict a final task. Typically, neural networks are first trained for the subtasks, and the predictions of these networks are subsequently used as additional features when training a model and doing inference for a final task. In this work, we focus on improving learning for such hierarchical models and demonstrate our method on the task of speaker trait prediction. Speaker trait prediction aims to computationally identify which personality traits a speaker might be perceived to have, and has been of great interest to both the Artificial Intelligence and Social Science communities. Persuasiveness prediction in particular has been of interest, as persuasive speakers have a large amount of influence on our thoughts, opinions and beliefs. In this work, we examine how leveraging the relationship between related speaker traits in a hierarchical structure can help improve our ability to predict how persuasive a speaker is. We present a novel algorithm that allows us to backpropagate through this hierarchy. This hierarchical model achieves a 25% relative error reduction in classification accuracy over current state-of-the-art methods on the publicly available POM dataset.
Identification and analysis of high-level speaker traits has a rich basis in social psychology. Passion and credibility, in particular, are noted to be important characteristics of eloquent speech @cite_22 . @cite_11 discuss the importance of passion in rhetoric, and several other studies have analyzed the relationship between credibility and persuasion @cite_28 @cite_17 . @cite_2 explore the roles that face, body, and speech play in judgments of personality and affect, and @cite_15 studied gaze and how it relates to making conciliatory and demanding requests.
{ "cite_N": [ "@cite_11", "@cite_22", "@cite_28", "@cite_2", "@cite_15", "@cite_17" ], "mid": [ "1604094939", "1493371713", "", "2055718898", "2023890289", "" ], "abstract": [ "* Contents * Acknowledgments * Introduction: Persuasion * Part One: Against Rhetoric *1. The Rhetoric against Rhetoric: Hobbes *2. Persuading without Convincing: Rousseau *3. The Sovereignty of Scholars: Kant * Part Two: For Rhetoric *4. Drawing upon Judgment: Aristotle *5. Conviction and Controversy: Cicero *6. Persuasion and Deliberation * Notes * Bibliography * Index", "Based on careful study of the Greek text and informed by the best modern scholarship, the second edition of this highly acclaimed translation offers the most faithful English version ever published of On Rhetoric. Updated in light of recent scholarship, the new edition features a revised introduction-with two new sections-and revised appendices that provide new and additional supplementary texts (relevant ancient works).", "", "Three experiments correlated judgments made from observing single channels (face, body, or speech) with multiple channel judgments (face, body, and speech together; or face and speech together). Judges observed the spontaneous behavior of videotaped student nurses in 2 types of interview situations, \"deceptive\" and \"honest,\" and rated the nurses on 14 bipolar adjective scales (e.g., awkward–natural). The single channel judgments that correlated most highly with the multiple channel judgments depended on the type of attribute being judged and the situation in which the behavior occurred (e.g., in the deception condition, judgments made from speech had the highest correlation with whole-person judgments). (18 ref) (PsycINFO Database Record (c) 2016 APA, all rights reserved)", "This study examined the relationships among nonverbal behaviors, dimensions of source credibility, and speaker persuasiveness in a public speaking context. 
Relevant nonverbal literature was organized according to a Brunswikian lens model. Nonverbal behavioral composites, grouped according to their likely proximal percepts, were hypothesized to significantly affect both credibility and persuasiveness. A sample of 60 speakers gave videotaped speeches that were judged on credibility and persuasiveness by classmates. Pairs of trained raters coded 22 vocalic, kinesic, and proxemic nonverbal behaviors evidenced in the tapes. Results confirmed numerous associations between nonverbal behaviors and attributions of credibility and persuasiveness. Greater perceived competence and composure were associated with greater vocal and facial pleasantness, with greater facial expressiveness contributing to competence perceptions. Greater sociability was associated with more kinesic proxemic immediacy, dominance, and relaxation and with vocal pleasantness. Most of these same cues also enhanced character judgments. No cues were related to dynamism judgments. Greater perceived persuasiveness correlated with greater vocal pleasantness (especially fluency and pitch variety), kinesic proxemic immediacy, facial expressiveness, and kinesic relaxation (especially high random movement but little tension). All five dimensions of credibility related to persuasiveness. Advantages of analyzing nonverbal cues according to proximal percepts are discussed.", "" ] }
1706.07593
2952696564
Understanding the 3D structure of a scene is of vital importance, when it comes to developing fully autonomous robots. To this end, we present a novel deep learning based framework that estimates depth, surface normals and surface curvature by only using a single RGB image. To the best of our knowledge this is the first work to estimate surface curvature from colour using a machine learning approach. Additionally, we demonstrate that by tuning the network to infer well designed features, such as surface curvature, we can achieve improved performance at estimating depth and normals. This indicates that network guidance is still a useful aspect of designing and training a neural network. We run extensive experiments where the network is trained to infer different tasks while the model capacity is kept constant resulting in different feature maps based on the tasks at hand. We outperform the previous state-of-the-art benchmarks which jointly estimate depths and surface normals while predicting surface curvature in parallel.
Convolutional Neural Networks (CNNs) have been applied very effectively to a range of robotic and vision tasks, including grasp pose detection @cite_17 @cite_16 , image classification @cite_30 @cite_18 , semantic segmentation @cite_3 , depth estimation @cite_10 @cite_7 @cite_26 @cite_11 , and surface normal estimation @cite_24 @cite_31 @cite_10 . Our work is most closely related to the latter two tasks, as we demonstrate that surface curvature can be predicted using RGB images as the only input. We began this work by using the VGG architecture @cite_21 as a starting point to predict surface curvature in a standalone network, and extended it to estimate depth and surface normals as well, all within a single network.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_26", "@cite_11", "@cite_7", "@cite_21", "@cite_3", "@cite_24", "@cite_31", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "", "2949650786", "2158211626", "2963591054", "2950619061", "1686810756", "", "2952623155", "2341204628", "2290564286", "2951713345", "2950988471" ], "abstract": [ "", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "We consider the task of depth estimation from a single monocular image. We take a supervised learning approach to this problem, in which we begin by collecting a training set of monocular images (of unstructured outdoor environments which include forests, trees, buildings, etc.) and their corresponding ground-truth depthmaps. 
Then, we apply supervised learning to predict the depthmap as a function of the image. Depth estimation is a challenging problem, since local features alone are insufficient to estimate depth at a point, and one needs to consider the global context of the image. Our model uses a discriminatively-trained Markov Random Field (MRF) that incorporates multiscale local- and global-image features, and models both depths at individual points as well as the relation between depths at different points. We show that, even on unstructured scenes, our algorithm is frequently able to recover fairly accurate depthmaps.", "This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss that is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires fewer training data than the current state of the art, while outperforming all approaches on depth estimation. Code and models are publicly available.", "We consider the problem of depth estimation from a single monocular image in this work. It is a challenging task as no reliable depth cues are available, e.g., stereo correspondences, motions, etc. Previous efforts have been focusing on exploiting geometric priors or additional sources of information, with all using hand-crafted features. 
Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) are setting new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimations can be naturally formulated into a continuous conditional random field (CRF) learning problem. Therefore, we in this paper present a deep convolutional neural field model for estimating depths from a single image, aiming to jointly explore the capacity of deep CNN and continuous CRF. Specifically, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. The proposed method can be used for depth estimations of general scenes with no geometric priors nor any extra information injected. In our case, the integral of the partition function can be analytically calculated, thus we can exactly solve the log-likelihood optimization. Moreover, solving the MAP problem for predicting depths of a new image is highly efficient as closed-form solutions exist. We experimentally demonstrate that the proposed method outperforms state-of-the-art depth estimation methods on both indoor and outdoor scene datasets.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. 
We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "", "In the past few years, convolutional neural nets (CNN) have shown incredible promise for learning visual representations. In this paper, we use CNNs for the task of predicting surface normals from a single image. But what is the right architecture we should use? We propose to build upon the decades of hard work in 3D scene understanding, to design new CNN architecture for the task of surface normal estimation. We show by incorporating several constraints (man-made, manhattan world) and meaningful intermediate representations (room layout, edge labels) in the architecture leads to state of the art performance on surface normal estimation. We also show that our network is quite robust and show state of the art results on other datasets as well without any fine-tuning.", "We introduce an approach that leverages surface normal predictions, along with appearance cues, to retrieve 3D models for objects depicted in 2D still images from a large CAD object library. Critical to the success of our approach is the ability to recover accurate surface normals for objects in the depicted scene. We introduce a skip-network model built on the pre-trained Oxford VGG convolutional neural network (CNN) for surface normal prediction. Our model achieves state-of-the-art accuracy on the NYUv2 RGB-D dataset for surface normal prediction, and recovers fine object detail compared to previous methods. Furthermore, we develop a two-stream network over the input image and predicted surface normals that jointly learns pose and style for CAD model retrieval. When using the predicted surface normals, our two-stream network matches prior work using surface normals computed from RGB-D images on the task of pose prediction, and achieves state of the art when using RGB-D input. 
Finally, our two-stream network allows us to retrieve CAD models that better match the style and pose of a depicted object compared with baseline approaches.", "This paper considers the problem of grasp pose detection in point clouds. We follow a general algorithmic structure that first generates a large set of 6-DOF grasp candidates and then classifies each of them as a good or a bad grasp. Our focus in this paper is on improving the second step by using depth sensor scans from large online datasets to train a convolutional neural network. We propose two new representations of grasp candidates, and we quantify the effect of using prior knowledge of two forms: instance or category knowledge of the object to be grasped, and pretraining the network on simulated depth data obtained from idealized CAD models. Our analysis shows that a more informative grasp candidate representation as well as pretraining and prior knowledge significantly improve grasp detection. We evaluate our approach on a Baxter Research Robot and demonstrate an average grasp success rate of 93% in dense clutter. This is a 20% improvement compared to our prior work.", "In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks.", "We present an accurate, real-time approach to robotic grasp detection based on convolutional neural networks. Our network performs single-stage regression to graspable bounding boxes without using standard sliding window or region proposal techniques.
The model outperforms state-of-the-art approaches by 14 percentage points and runs at 13 frames per second on a GPU. Our network can simultaneously perform classification so that in a single step it recognizes the object and finds a good grasp rectangle. A modification to this model predicts multiple grasps per object by using a locally constrained prediction mechanism. The locally constrained model performs significantly better, especially on objects that can be grasped in a variety of ways." ] }
1706.07593
2952696564
Understanding the 3D structure of a scene is of vital importance, when it comes to developing fully autonomous robots. To this end, we present a novel deep learning based framework that estimates depth, surface normals and surface curvature by only using a single RGB image. To the best of our knowledge this is the first work to estimate surface curvature from colour using a machine learning approach. Additionally, we demonstrate that by tuning the network to infer well designed features, such as surface curvature, we can achieve improved performance at estimating depth and normals. This indicates that network guidance is still a useful aspect of designing and training a neural network. We run extensive experiments where the network is trained to infer different tasks while the model capacity is kept constant resulting in different feature maps based on the tasks at hand. We outperform the previous state-of-the-art benchmarks which jointly estimate depths and surface normals while predicting surface curvature in parallel.
Data-driven single-image surface normal estimation was first tackled in @cite_39 , which used an SVM-based detector followed by an iterative optimization scheme to extract geometrically informative primitives. The authors of @cite_12 combine pixel-wise and segment-based image cues into a feature representation that estimates surface normals in a boosting framework. A ConvNet approach, estimating surface normals at global and local scales while incorporating constraints such as room layout and edge labels, was taken in @cite_24 . Recently, @cite_31 showed that combining a hierarchy of features from different levels of activations in a skip-network architecture yields much finer surface normal predictions, achieving state-of-the-art results.
{ "cite_N": [ "@cite_24", "@cite_31", "@cite_12", "@cite_39" ], "mid": [ "2952623155", "2341204628", "337610345", "" ], "abstract": [ "In the past few years, convolutional neural nets (CNN) have shown incredible promise for learning visual representations. In this paper, we use CNNs for the task of predicting surface normals from a single image. But what is the right architecture we should use? We propose to build upon the decades of hard work in 3D scene understanding, to design new CNN architecture for the task of surface normal estimation. We show by incorporating several constraints (man-made, manhattan world) and meaningful intermediate representations (room layout, edge labels) in the architecture leads to state of the art performance on surface normal estimation. We also show that our network is quite robust and show state of the art results on other datasets as well without any fine-tuning.", "We introduce an approach that leverages surface normal predictions, along with appearance cues, to retrieve 3D models for objects depicted in 2D still images from a large CAD object library. Critical to the success of our approach is the ability to recover accurate surface normals for objects in the depicted scene. We introduce a skip-network model built on the pre-trained Oxford VGG convolutional neural network (CNN) for surface normal prediction. Our model achieves state-of-the-art accuracy on the NYUv2 RGB-D dataset for surface normal prediction, and recovers fine object detail compared to previous methods. Furthermore, we develop a two-stream network over the input image and predicted surface normals that jointly learns pose and style for CAD model retrieval. When using the predicted surface normals, our two-stream network matches prior work using surface normals computed from RGB-D images on the task of pose prediction, and achieves state of the art when using RGB-D input. 
Finally, our two-stream network allows us to retrieve CAD models that better match the style and pose of a depicted object compared with baseline approaches.", "In this work we propose the method for a rather unexplored problem of computer vision - discriminatively trained dense surface normal estimation from a single image. Our method combines contextual and segment-based cues and builds a regressor in a boosting framework by transforming the problem into the regression of coefficients of a local coding. We apply our method to two challenging data sets containing images of man-made environments, the indoor NYU2 data set and the outdoor KITTI data set. Our surface normal predictor achieves results better than initially expected, significantly outperforming state-of-the-art.", "" ] }
1706.07154
2701136070
Pain is a personal, subjective experience that is commonly evaluated through visual analog scales (VAS). While this is often convenient and useful, automatic pain detection systems can reduce pain score acquisition efforts in large-scale studies by estimating it directly from the participants' facial expressions. In this paper, we propose a novel two-stage learning approach for VAS estimation: first, our algorithm employs Recurrent Neural Networks (RNNs) to automatically estimate Prkachin and Solomon Pain Intensity (PSPI) levels from face images. The estimated scores are then fed into the personalized Hidden Conditional Random Fields (HCRFs), used to estimate the VAS, provided by each person. Personalization of the model is performed using a newly introduced facial expressiveness score, unique for each person. To the best of our knowledge, this is the first approach to automatically estimate VAS from face images. We show the benefits of the proposed personalized over traditional non-personalized approach on a benchmark dataset for pain analysis from face images.
Although there is much research on the automated recognition of affect from facial expressions (for surveys see @cite_2 @cite_17 ), until recently only a handful of works had focused on automated pain estimation. Owing to rapid advances in computer vision and the recent release of the UNBC-McMaster dataset @cite_8 , pain analysis from face images has seen significant progress. Specifically, this dataset provides videos with each frame coded in terms of the Prkachin and Solomon Pain Intensity (PSPI) score @cite_22 , defined on an ordinal scale of 0-15. PSPI is considered an objective pain score, in contrast to subjective pain ratings such as VAS. Yet, even though VAS remains the most commonly accepted pain score in clinical settings, all existing automatic methods for pain estimation from face images have focused on predicting PSPI scores. We outline some of the recently published works below.
{ "cite_N": [ "@cite_8", "@cite_22", "@cite_17", "@cite_2" ], "mid": [ "2101545465", "2106043670", "2156503193", "2117645142" ], "abstract": [ "A major factor hindering the deployment of a fully functional automatic facial expression detection system is the lack of representative data. A solution to this is to narrow the context of the target application, so enough data is available to build robust models so high performance can be gained. Automatic pain detection from a patient's face represents one such application. To facilitate this work, researchers at McMaster University and University of Northern British Columbia captured video of participant's faces (who were suffering from shoulder pain) while they were performing a series of active and passive range-of-motion tests to their affected and unaffected limbs on two separate occasions. Each frame of this data was AU coded by certified FACS coders, and self-report and observer measures at the sequence level were taken as well. This database is called the UNBC-McMaster Shoulder Pain Expression Archive Database. To promote and facilitate research into pain and augment current datasets, we have publicly made available a portion of this database which includes: 1) 200 video sequences containing spontaneous facial expressions, 2) 48,398 FACS coded frames, 3) associated pain frame-by-frame scores and sequence-level self-report and observer measures, and 4) 66-point AAM landmarks. This paper documents this data distribution in addition to describing baseline results of our AAM SVM system. This data will be available for distribution in March 2011.", "The present study examined psychometric properties of facial expressions of pain. A diverse sample of 129 people suffering from shoulder pain underwent a battery of active and passive range-of-motion tests to their affected and unaffected limbs. The same tests were repeated on a second occasion. 
Participants rated the maximum pain induced by each test on three self-report scales. Facial actions were measured with the Facial Action Coding System. Several facial actions discriminated painful from non-painful movements; however, brow-lowering, orbit tightening, levator contraction and eye closing appeared to constitute a distinct, unitary action. An index of pain expression based on these actions demonstrated test-retest reliability and concurrent validity with self-reports of pain. The findings support the concept of a core pain expression with desirable psychometric properties. They are also consistent with the suggestion of individual differences in pain expressiveness. Reasons for varying reports of relations between pain expression and self-reports in previous studies are discussed.", "Automated analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. However, the existing methods typically handle only deliberately displayed and exaggerated expressions of prototypical emotions despite the fact that deliberate behaviour differs in visual appearance, audio profile, and timing from spontaneously occurring behaviour. To address this problem, efforts to develop algorithms that can process naturally occurring human affective behaviour have recently emerged. Moreover, an increasing number of efforts are reported toward multimodal fusion for human affect analysis including audiovisual fusion, linguistic and paralinguistic fusion, and multi-cue visual fusion based on facial expressions, head movements, and body gestures. This paper introduces and surveys these recent advances. We first discuss human emotion perception from a psychological perspective. 
Next we examine available approaches to solving the problem of machine understanding of human affective behavior, and discuss important issues like the collection and availability of training and test data. We finally outline some of the scientific and engineering challenges to advancing human affect sensing technology.", "This survey describes recent progress in the field of Affective Computing (AC), with a focus on affect detection. Although many AC researchers have traditionally attempted to remain agnostic to the different emotion theories proposed by psychologists, the affective technologies being developed are rife with theoretical assumptions that impact their effectiveness. Hence, an informed and integrated examination of emotion theories from multiple areas will need to become part of computing practice if truly effective real-world systems are to be achieved. This survey discusses theoretical perspectives that view emotions as expressions, embodiments, outcomes of cognitive appraisal, social constructs, products of neural circuitry, and psychological interpretations of basic feelings. It provides meta-analyses on existing reviews of affect detection systems that focus on traditional affect detection modalities like physiology, face, and voice, and also reviews emerging research on more novel channels such as text, body language, and complex multimodal systems. This survey explicitly explores the multidisciplinary foundation that underlies all AC applications by describing how AC researchers have incorporated psychological theories of emotion and how these theories affect research questions, methods, results, and their interpretations. In this way, models and methods can be compared, and emerging insights from various disciplines can be more expertly integrated." ] }
1706.07154
2701136070
Pain is a personal, subjective experience that is commonly evaluated through visual analog scales (VAS). While this is often convenient and useful, automatic pain detection systems can reduce pain score acquisition efforts in large-scale studies by estimating it directly from the participants' facial expressions. In this paper, we propose a novel two-stage learning approach for VAS estimation: first, our algorithm employs Recurrent Neural Networks (RNNs) to automatically estimate Prkachin and Solomon Pain Intensity (PSPI) levels from face images. The estimated scores are then fed into the personalized Hidden Conditional Random Fields (HCRFs), used to estimate the VAS, provided by each person. Personalization of the model is performed using a newly introduced facial expressiveness score, unique for each person. To the best of our knowledge, this is the first approach to automatically estimate VAS from face images. We show the benefits of the proposed personalized over traditional non-personalized approach on a benchmark dataset for pain analysis from face images.
Aside from the UNBC-McMaster dataset @cite_8 , a variety of pain recognition studies have been conducted on other datasets (e.g., see @cite_13 @cite_27 ). Note also that pain detection has been attempted from other modalities, including upper-body movements combined with the face @cite_43 @cite_40 , physiological signals such as skin conductance and heart rate @cite_14 , and brain hemodynamic responses measured using NIRS @cite_29 @cite_42 and fMRI @cite_23 @cite_9 . Nevertheless, none of these works attempted automatic estimation of VAS, let alone in a personalized manner.
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_29", "@cite_42", "@cite_9", "@cite_43", "@cite_40", "@cite_27", "@cite_23", "@cite_13" ], "mid": [ "2061222262", "2101545465", "2548572385", "", "", "2132680170", "2018988971", "2894142993", "2018759089", "2072439449" ], "abstract": [ "Although it is well known that pain induces changes in autonomic parameters, the extent to which these changes correlate with the experience of pain is under debate. The aim of the present study was to compare a combination of multiple autonomic parameters and each parameter alone in their ability to differentiate among 4 categories of pain intensity. Tonic heat stimuli (1 minute) were individually adjusted to induce no pain, low, medium, and high pain in 45 healthy volunteers. Electrocardiogram, photoplethysmogram, and galvanic skin response were recorded, and the following parameters were calculated: heart rate; heart rate variability—high frequency (0.15 to 0.4 Hz) spectral power; skin conductance level; number of skin conduction fluctuations; and photoplethysmographic pulse wave amplitude. A combination of parameters was created by fitting an ordinal cumulative logit model to the data and using linear coefficients of the model. Friedman test with post-hoc Wilcoxon test were used to compare between pain intensity categories for every parameter alone and for their linear combination. All of the parameters successfully differentiated between no pain and all other pain categories. However, none of the parameters differentiated between all 3 pain categories (i.e., low and medium; medium and high; low and high). In contrast, the linear combination of parameters significantly differentiated not only between pain and no pain, but also between all pain categories (P < .001 to .02). 
These results suggest that multiparameter approaches should be further investigated to make progress toward reliable autonomic-based pain assessment.", "A major factor hindering the deployment of a fully functional automatic facial expression detection system is the lack of representative data. A solution to this is to narrow the context of the target application, so enough data is available to build robust models so high performance can be gained. Automatic pain detection from a patient's face represents one such application. To facilitate this work, researchers at McMaster University and University of Northern British Columbia captured video of participant's faces (who were suffering from shoulder pain) while they were performing a series of active and passive range-of-motion tests to their affected and unaffected limbs on two separate occasions. Each frame of this data was AU coded by certified FACS coders, and self-report and observer measures at the sequence level were taken as well. This database is called the UNBC-McMaster Shoulder Pain Expression Archive Database. To promote and facilitate research into pain and augment current datasets, we have publicly made available a portion of this database which includes: 1) 200 video sequences containing spontaneous facial expressions, 2) 48,398 FACS coded frames, 3) associated pain frame-by-frame scores and sequence-level self-report and observer measures, and 4) 66-point AAM landmarks. This paper documents this data distribution in addition to describing baseline results of our AAM SVM system. This data will be available for distribution in March 2011.", "The purpose of this study was to use functional near-infrared spectroscopy (fNIRS) to examine patterns of both activation and deactivation that occur in the frontal lobe in response to noxious stimuli. The frontal lobe was selected because it has been shown to be activated by noxious stimuli in functional magnetic resonance imaging studies. 
The brain region is located behind the forehead which is devoid of hair, providing a relative ease of placement for fNIRS probes on this area of the head. Based on functional magnetic resonance imaging studies showing blood-oxygenation-level dependent changes in the frontal lobes, we evaluated functional near-infrared spectroscopy measures in response to two levels of electrical pain in awake, healthy human subjects (n = 10; male = 10). Each subject underwent two recording sessions separated by a 30-minute resting period. Data collected from 7 subjects were analyzed, containing a total of 38 36 low high intensity pain stimuli for the first recording session and 27 31 pain stimuli for the second session. Our results show that there is a robust and significant deactivation in sections of the frontal cortices. Further development and definition of the specificity and sensitivity of the approach may provide an objective measure of nociceptive activity in the brain that can be easily applied in the surgical setting.", "", "", "Psychologists have long explored mechanisms with which humans recognize other humans' affective states from modalities, such as voice and face display. This exploration has led to the identification of the main mechanisms, including the important role played in the recognition process by the modalities' dynamics. Constrained by the human physiology, the temporal evolution of a modality appears to be well approximated by a sequence of temporal segments called onset, apex, and offset. Stemming from these findings, computer scientists, over the past 15 years, have proposed various methodologies to automate the recognition process. We note, however, two main limitations to date. The first is that much of the past research has focused on affect recognition from single modalities. 
The second is that even the few multimodal systems have not paid sufficient attention to the modalities' dynamics: The automatic determination of their temporal segments, their synchronization to the purpose of modality fusion, and their role in affect recognition are yet to be adequately explored. To address this issue, this paper focuses on affective face and body display, proposes a method to automatically detect their temporal segments or phases, explores whether the detection of the temporal phases can effectively support recognition of affective states, and recognizes affective states based on phase synchronization alignment. The experimental results obtained show the following: 1) affective face and body displays are simultaneous but not strictly synchronous; 2) explicit detection of the temporal phases can improve the accuracy of affect recognition; 3) recognition from fused face and body modalities performs better than that from the face or the body modality alone; and 4) synchronized feature-level fusion achieves better performance than decision-level fusion.", "Depression is one of the most common mental health disorders with strong adverse effects on personal and social functioning. The absence of any objective diagnostic aid for depression leads to a range of subjective biases in initial diagnosis and ongoing monitoring. Psychologists use various visual cues in their assessment to quantify depression such as facial expressions, eye contact and head movements. This paper studies the contribution of (upper) body expressions and gestures for automatic depression analysis. A framework based on space-time interest points and bag of words is proposed for the analysis of upper body and facial movements. Salient interest points are selected using clustering. The major contribution of this paper lies in the creation of a bag of body expressions and a bag of facial dynamics for assessing the contribution of different body parts for depression analysis. 
Head movement analysis is performed by selecting rigid facial fiducial points and a new histogram of head movements is proposed. The experiments are performed on real-world clinical data where video clips of patients and healthy controls are recorded during interactive interview sessions. The results show the effectiveness of the proposed system to evaluate the contribution of various body parts in depression analysis.", "", "", "Automatic pain recognition can improve medical treatment, especially when the patient is not able to utter on his pain experience. Facial expressions with their intensities and dynamics contain valuable information for recognising pain. We propose a concept for distinguishing facial expressions of pain from others and assessing the pain expression intensity. It is based on a Support Vector Machine (SVM) classifier and a function model for intensity rating. The intensity model is trained using Comparative Learning, a new technique that simplifies labelling of the data. Using a database of 3D posed pain sequences we show the suitability of the concept to recognise pain expressions, distinguish different intensities and spot even slight intensity changes in its temporal context." ] }
1706.07179
2631715525
We introduce RelNet: a new model for relational reasoning. RelNet is a memory augmented neural network which models entities as abstract memory slots and is equipped with an additional relational memory which models relations between all memory pairs. The model thus builds an abstract knowledge graph on the entities and relations present in a document which can then be used to answer questions about the document. It is trained end-to-end: only supervision to the model is in the form of correct answers to the questions. We test the model on the 20 bAbI question-answering tasks with 10k examples per task and find that it solves all the tasks with a mean error of 0.3 , achieving 0 error on 11 of the 20 tasks.
Representation learning and reasoning over graph-structured data is also relevant to this work. Graph-based neural network models have been proposed that take graph data as input. The relational memory, however, does not rely on a pre-specified graph structure, and such models could potentially be used for multi-hop reasoning over the relational memory. @cite_5 proposed a method for learning a graphical representation of text data for question answering; however, that model requires explicit supervision for the graph at every step, whereas RelNet requires no such supervision.
{ "cite_N": [ "@cite_5" ], "mid": [ "2750894112" ], "abstract": [ "Graph-structured data is important in modeling relationships between multiple entities, and can be used to represent states of the world as well as many data structures. (2016) describe a model known as a Gated Graph Sequence Neural Network (GGS-NN) that produces sequences from graph-structured input. In this work I introduce the Gated Graph Transformer Neural Network (GGT-NN), an extension of GGS-NNs that uses graph-structured data as an intermediate representation. The model can learn to construct and modify graphs in sophisticated ways based on textual input, and also to use the graphs to produce a variety of outputs. For example, the model successfully learns to solve almost all of the bAbI tasks (, 2016), and also discovers the rules governing graphical formulations of a simple cellular automaton and a family of Turing machines." ] }
1706.07448
2949511139
Artificial agents will need to be aware of human moral and social norms, and able to use them in decision-making. In particular, artificial agents will need a principled approach to managing conflicting norms, which are common in human social interactions. Existing logic-based approaches suffer from normative explosion and are typically designed for deterministic environments; reward-based approaches lack principled ways of determining which normative alternatives exist in a given environment. We propose a hybrid approach, using Linear Temporal Logic (LTL) representations in Markov Decision Processes (MDPs), that manages norm conflicts in a systematic manner while accommodating domain stochasticity. We provide a proof-of-concept implementation in a simulated vacuum cleaning domain.
There have been several instances of temporal logics being employed to represent moral and social norms. For example, @cite_19 @cite_24 employ a logical representation of norms using Normative Temporal Logic, a generalization of Computation Tree Logic (CTL). This characterization allows the description of complex temporal norms. @cite_27 employ LTL with past-time modalities, which they use to construct guards (functions that restrict agent actions given an event history); they are concerned primarily with enforcement, and thus do not address norm conflicts. These approaches are designed for deterministic environment models, and are not well suited for stochastic domains.
{ "cite_N": [ "@cite_24", "@cite_19", "@cite_27" ], "mid": [ "1588742135", "1530806667", "2126701434" ], "abstract": [ "Social laws have proved to be a powerful and theoretically elegant framework for coordination in multi-agent systems. Most existing models of social laws assume that a designer is attempting to produce a set of constraints on agent behaviour which will ensure that some single overall desirable objective is achieved. However, this represents a gross simplification of the typical situation, where a designer may have multiple (possibly conflicting) objectives, with different priorities. Moreover, social laws, as well as bringing benefits, also have implementation costs: imposing a social law often cannot be done at zero cost. We present a model of social laws that reflects this reality: it takes into account both the fact that the designer of a social law may have multiple differently valued objectives, and that the implementation of a social law is not cost-neutral. In this setting, designing a social law becomes an optimisation problem, in which a designer must take into account both the benefits and costs of a social law. We investigate the issue of representing a designer's objectives, characterise the complexity of the optimal social law design problem, and consider possible constraints that lead to reductions in computational complexity. We then show how the problem of designing an optimal social law can be formulated as an integer linear program.", "We introduce Normative Temporal Logic (NTL), a logic for reasoning about normative systems. NTL is a generalisation of the well-known branching-time temporal logic CTL, in which the path quantifiers @math ('on all paths...') and @math ('on some path...') are replaced by the indexed deontic operators @math and @math , where, for example, @math means '@math is obligatory in the context of normative system @math '. 
After defining the logic, we give a sound and complete axiomatisation, and discuss the logic's relationship to standard deontic logics. We present a symbolic representation language for models and normative systems, and identify four different model checking problems, corresponding to whether or not a model is represented symbolically or explicitly, and whether or not we are given an interpretation for the normative systems named in formulae to be checked. We show that the complexity of model checking varies from P-complete up to EXPTIME-hard for these variations.", "Norms have been widely proposed as a means of coordinating and controlling the behaviour of agents in a multi-agent system. A key challenge in normative MAS is norm enforcement: how and when to restrict the agents' behaviour in order to obtain a desirable outcome? Even if a norm can be enforced theoretically, it may not be enforceable in a grounded, practical setting. In this paper we study the problem of practical norm enforcement. The key notion is that of a guard. Guards are functions which restrict the possible actions after a history of events. We propose a formal, computational model of norms, guards and norm enforcement, based on linear-time temporal logic with past operators. We show that not all norms can be enforced by such guard functions, even in the presence of unlimited computational power to reason about future events. We analyse which norms can be enforced by guards if only a fixed lookahead is available. We investigate decision problems for this question with respect to specific classes of norms, related to safety and liveness properties." ] }
1706.07448
2949511139
Artificial agents will need to be aware of human moral and social norms, and able to use them in decision-making. In particular, artificial agents will need a principled approach to managing conflicting norms, which are common in human social interactions. Existing logic-based approaches suffer from normative explosion and are typically designed for deterministic environments; reward-based approaches lack principled ways of determining which normative alternatives exist in a given environment. We propose a hybrid approach, using Linear Temporal Logic (LTL) representations in Markov Decision Processes (MDPs), that manages norm conflicts in a systematic manner while accommodating domain stochasticity. We provide a proof-of-concept implementation in a simulated vacuum cleaning domain.
The combination of moral and social norms with Markov Decision Processes is not new. Much of this work, e.g., @cite_12 , tends to emphasize norm emergence and thus lacks explicit representations of norms. Other work @cite_10 considers incorporating deontic-logic norms using an agent architecture that reasons about the consequences (in the environment) of violating norms.
{ "cite_N": [ "@cite_10", "@cite_12" ], "mid": [ "1495895124", "92376239" ], "abstract": [ "Rational self-interested agents, which act so as to achieve the best expected outcome, should violate the norms if the expected rewards obtained with the defections from the norms surpass the expected rewards obtained by being norm-compliant. It means they should estimate the earnings brought about by the violations and the losses caused by their respective reactions. In this paper, we present a rational self-interested agent model that takes into account the possibility of breaking norms. To develop such model, we employ Markov Decision Processes (MDPs). Our approach consists of representing the reactions for norm violations within the MDPs in such a way that the agent is able to reason about how those violations affect her expected utilities and future options. Finally, we perform an experiment in order to establish comparisons between the model presented in this work and its norm-compliant version.", "Behavioral norms are key ingredients that allow agent coordination where societ al laws do not sufficiently constrain agent behaviors. Whereas social laws need to be enforced in a top-down manner, norms evolve in a bottom-up manner and are typically more self-enforcing. While effective norms can significantly enhance performance of individual agents and agent societies, there has been little work in multiagent systems on the formation of social norms. We propose a model that supports the emergence of social norms via learning from interaction experiences. In our model, individual agents repeatedly interact with other agents in the society over instances of a given scenario. Each interaction is framed as a stage game. An agent learns its policy to play the game over repeated interactions with multiple agents. We term this mode of learning social learning, which is distinct from an agent learning from repeated interactions against the same player. 
We are particularly interested in situations where multiple action combinations yield the same optimal payoff. The key research question is to find out if the entire population learns to converge to a consistent norm. In addition to studying such emergence of social norms among homogeneous learners via social learning, we study the effects of heterogeneous learners, population size, multiple social groups, etc." ] }
1706.07255
2795583760
Fast algorithms for optimal multi-robot path planning are sought after in real-world applications. Known methods, however, generally do not simultaneously guarantee good solution optimality and good (e.g., polynomial) running time. In this work, we develop a first low-polynomial running time algorithm, called SplitAngGroup (SaG), that solves the multi-robot path planning problem on grids and grid-like environments, and produces constant factor makespan optimal solutions on average over all problem instances. That is, SaG is an average case O(1)-approximation algorithm and computes solutions with sub-linear makespan. SaG is capable of handling cases when the density of robots is extremely high - in a graph-theoretic setting, the algorithm supports cases where all vertices of the underlying graph are occupied. SaG attains its desirable properties through a careful combination of a novel divide-and-conquer technique, which we denote as global decoupling, and network flow based methods for routing the robots. Solutions from SaG, in a weaker sense, are also a constant factor approximation on total distance optimality.
Similar to single-robot problems involving potentially many degrees of freedom @cite_69 @cite_73 , multi-robot path planning is strongly NP-hard even for discs in simple polygons @cite_9 and PSPACE-hard for translating rectangles @cite_43 . The hardness of the problem extends to the unlabeled case @cite_50 , where it remains highly intractable @cite_46 @cite_64 . Nevertheless, under appropriate settings, the unlabeled case can be solved near-optimally @cite_6 @cite_62 @cite_36 @cite_29 .
{ "cite_N": [ "@cite_69", "@cite_64", "@cite_62", "@cite_36", "@cite_9", "@cite_29", "@cite_6", "@cite_43", "@cite_50", "@cite_46", "@cite_73" ], "mid": [ "211895014", "1485718613", "1964462572", "1628219785", "2068796584", "2963358853", "2052819251", "2072839015", "2153159477", "2036722182", "1516027685" ], "abstract": [ "Abstract : This paper concerns the problem of planning a sequence of movements of linked polyhedra through 3 dimensional Euclidean space, avoiding contact with a fixed set of polyhedra obstacles. We prove this generalized mover's problem is polynomial space hard. Our proof provides strong evidence that robot movement planning is computationally intractable, i.e., any algorithm requires time growing exponentially with the number of degrees of freedom. Keywords: Robotics; Mover's problem; Obstacle avoidence; PSPACE; Combinatorial; Geometry; Collision avoidence.", "In unlabeled multi-robot motion planning several interchangeable robots operate in a common workspace. The goal is to move the robots to a set of target positions such that each position will be occupied by some robot. In this paper, we study this problem for the specific case of unit-square robots moving amidst polygonal obstacles and show that it is PSPACE-hard. We also consider three additional variants of this problem and show that they are all PSPACE-hard as well. To the best of our knowledge, this is the first hardness proof for the unlabeled case. Furthermore, our proofs can be used to show that the labeled variant (where each robot is assigned with a specific target position), again, for unit-square robots, is PSPACE-hard as well, which sets another precedence, as previous hardness results require the robots to be of different shapes.", "In this paper, we consider the problem of concurrent assignment and planning of trajectories (which we denote Capt) for a team of robots. 
This problem involves simultaneously addressing two challenges: (1) the combinatorially complex problem of finding a suitable assignment of robots to goal locations, and (2) the generation of collision-free, time parameterized trajectories for every robot. We consider the Capt problem for unlabeled (interchangeable) robots and propose algorithmic solutions to two variations of the Capt problem. The first algorithm, c-Capt, is a provably correct, complete, centralized algorithm which guarantees collision-free optimal solutions to the Capt problem in an obstacle-free environment. To achieve these strong claims, c-Capt exploits the synergy obtained by combining the two subproblems of assignment and trajectory generation to provide computationally tractable solutions for large numbers of robots. We then propose a decentralized solution to the Capt problem through d-Capt, a decentralized algorithm that provides suboptimal results compared to c-Capt. We illustrate the algorithms and resulting performance through simulation and experimentation.", "We consider the following motion-planning problem: we are given @math unit discs in a simple polygon with @math vertices, each at their own start position, and we want to move the discs to a given set of @math target positions. Contrary to the standard (labeled) version of the problem, each disc is allowed to be moved to any target position, as long as in the end every target position is occupied. We show that this unlabeled version of the problem can be solved in @math time, assuming that the start and target positions are at least some minimal distance from each other. This is in sharp contrast to the standard (labeled) and more general multi-robot motion planning problem for discs moving in a simple polygon, which is known to be strongly NP-hard.", "The problem of computing coordinated motion for a simple robotic system has recently been studied from an algorithmic point of view. The aim of this article is to show that the 'many discs problem' is NP-hard in the strong sense.", "We study the problem of path planning for unlabeled (indistinguishable) unit-disc robots in a planar environment cluttered with polygonal obstacles. We introduce an algorithm which minimizes the total path length, i.e., the sum of lengths of the individual paths. Our algorithm is guaranteed to find a solution if one exists, or report that none exists otherwise. It runs in time O ( m +mn ) , where m is the number of robots and n is the total complexity of the workspace. Moreover, the total length of the returned solution is at most OPT+4m, where OPT is the optimal solution cost. To the best of our knowledge this is the first algorithm for the problem that has such guarantees. The algorithm has been implemented in an exact manner and we present experimental results that attest to its efficiency.", "For the task of transferring a group of robots from one formation to another on a connected graph with unit edge lengths, we provide an efficient hierarchical algorithm that can complete goal assignment and path planning for 10,000 robots on a 250,000 vertex grid in under one second. In the extreme, our algorithm can handle up to one million robots on a grid with one billion vertices in approximately 30 minutes. Perhaps more importantly, we prove that with high probability, the algorithm supplies paths with total distance within a constant multiple of the optimal total distance. Furthermore, our hierarchical method also allows these paths to be scheduled with a tight completion time guarantee. In practice, our implementation yields a total path distance less than two times of the true optimum and a much shorter completion time.", "Coordinated motion planning for a large number of three-dimensional objects in the presence of obstacles is a computational problem whose complexity is important to calibrate.
In this paper we show that even the restricted two-dimensional problem for arbitrarily many rectangles in a rectangular region is PSPACE-hard. This result should be viewed as a guide to the difficulty of the general problem and should lead researchers to consider more tractable restricted classes of motion problems of practical interest.", "In many multirobot applications, the specific assignment of goal configurations to robots is less important than the overall behavior of the robot formation. In such cases, it is convenient to define a permutation-invariant multirobot formation as a set of robot configurations, without assigning specific configurations to specific robots. For the case of robots that translate in the plane, we can represent such a formation by the coefficients of a complex polynomial whose roots represent the robot configurations. Since these coefficients are invariant with respect to permutation of the roots of the polynomial, they provide an effective representation for permutation-invariant formations. In this paper, we extend this idea to build a full representation of a permutation-invariant formation space. We describe the properties of the representation, and show how it can be used to construct collision-free paths for permutation-invariant formations", "We present a nondeterministic model of computation based on reversing edge directions in weighted directed graphs with minimum in-flow constraints on vertices. Deciding whether this simple graph model can be manipulated in order to reverse the direction of a particular edge is shown to be PSPACE-complete by a reduction from Quantified Boolean Formulas. We prove this result in a variety of special cases including planar graphs and highly restricted vertex configurations, some of which correspond to a kind of passive constraint logic. Our framework is inspired by (and indeed a generalization of) the "Generalized Rush Hour Logic" developed by Flake and Baum [Theoret. Comput. Sci. 270(1-2) (2002) 895]. We illustrate the importance of our model of computation by giving simple reductions to show that several motion-planning problems are PSPACE-hard. Our main result along these lines is that classic unrestricted sliding-block puzzles are PSPACE-hard, even if the pieces are restricted to be all dominoes (1 × 2 blocks) and the goal is simply to move a particular piece. No prior complexity results were known about these puzzles. This result can be seen as a strengthening of the existing result that the restricted Rush Hour™ puzzles are PSPACE-complete [Theoret. Comput. Sci. 270(1-2) (2002) 895], of which we also give a simpler proof. We also greatly strengthen the conditions for the PSPACE-hardness of the Warehouseman's Problem [Int. J. Robot. Res. 3(4) (1984) 76], a classic motion-planning problem. Finally, we strengthen the existing result that the pushing-blocks puzzle Sokoban is PSPACE-complete [In: Proc. Internat. Conf. on Fun with Algorithms, Elba, Italy, June 1998, pp. 65-76.], by showing that it is PSPACE-complete even if no barriers are allowed.", "The Complexity of Robot Motion Planning makes original contributions both to robotics and to the analysis of algorithms. In this groundbreaking monograph John Canny resolves long-standing problems concerning the complexity of motion planning and, for the central problem of finding a collision free path for a jointed robot in the presence of obstacles, obtains exponential speedups over existing algorithms by applying high-powered new mathematical techniques. Canny's new algorithm for this "generalized movers' problem," the most-studied and basic robot motion planning problem, has a single exponential running time, and is polynomial for any given robot. The algorithm has an optimal running time exponent and is based on the notion of roadmaps - one-dimensional subsets of the robot's configuration space.
In deriving the single exponential bound, Canny introduces and reveals the power of two tools that have not been previously used in geometric algorithms: the generalized (multivariable) resultant for a system of polynomials and Whitney's notion of stratified sets. He has also developed a novel representation of object orientation based on unnormalized quaternions which reduces the complexity of the algorithms and enhances their practical applicability.After dealing with the movers' problem, the book next attacks and derives several lower bounds on extensions of the problem: finding the shortest path among polyhedral obstacles, planning with velocity limits, and compliant motion planning with uncertainty. It introduces a clever technique, \"path encoding,\" that allows a proof of NP-hardness for the first two problems and then shows that the general form of compliant motion planning, a problem that is the focus of a great deal of recent work in robotics, is non-deterministic exponential time hard. Canny proves this result using a highly original construction.John Canny received his doctorate from MIT And is an assistant professor in the Computer Science Division at the University of California, Berkeley. The Complexity of Robot Motion Planning is the winner of the 1987 ACM Doctoral Dissertation Award." ] }
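The unlabeled formulations above (e.g., the CAPT problem) separate goal assignment from trajectory generation. The assignment subproblem — matching interchangeable robots to goals so that the sum of squared start-to-goal distances is minimized — can be sketched with a brute-force search. This is an illustration only (exponential in the number of robots, and not the cited authors' implementation, which uses a polynomial-time assignment algorithm); the helper name is ours:

```python
from itertools import permutations
import math

def assign_goals(starts, goals):
    """Brute-force optimal assignment of unlabeled robots to goals,
    minimizing the sum of squared start-to-goal distances.
    Exponential in the number of robots; for illustration only."""
    best_cost, best_perm = math.inf, None
    for perm in permutations(range(len(goals))):
        cost = sum((sx - goals[g][0]) ** 2 + (sy - goals[g][1]) ** 2
                   for (sx, sy), g in zip(starts, perm))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_perm, best_cost

starts = [(0.0, 0.0), (4.0, 0.0)]
goals = [(4.0, 1.0), (0.0, 1.0)]
perm, cost = assign_goals(starts, goals)
print(perm, cost)  # -> (1, 0) 2.0: robot 0 takes goal 1, robot 1 takes goal 0
```

Swapping in the squared-distance objective matters: it is this choice that, in the cited CAPT work, yields collision-free straight-line trajectories in obstacle-free settings.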
1706.07255
2795583760
Fast algorithms for optimal multi-robot path planning are sought after in real-world applications. Known methods, however, generally do not simultaneously guarantee good solution optimality and good (e.g., polynomial) running time. In this work, we develop the first low-polynomial running time algorithm, called SplitAndGroup (SaG), that solves the multi-robot path planning problem on grids and grid-like environments, and produces constant-factor makespan-optimal solutions on average over all problem instances. That is, SaG is an average-case O(1)-approximation algorithm and computes solutions with sub-linear makespan. SaG is capable of handling cases where the density of robots is extremely high - in a graph-theoretic setting, the algorithm supports cases where all vertices of the underlying graph are occupied. SaG attains its desirable properties through a careful combination of a novel divide-and-conquer technique, which we denote as global decoupling, and network-flow-based methods for routing the robots. Solutions from SaG are also, in a weaker sense, a constant-factor approximation on total distance optimality.
Because general (labeled) optimal multi-robot path planning problems in continuous domains are extremely challenging, a common approach is to adopt a discrete setting from the outset. Significant progress has been made on solving the problem optimally in discrete settings, in particular on grid-based environments. Multi-robot motion planning is less computationally expensive in discrete domains, with the feasibility problem readily solvable in @math time, in which @math is the number of vertices of the discrete graph where the robots may reside @cite_39 @cite_41 @cite_42 @cite_33 . In particular, @cite_33 shows that the setting considered in this paper is always feasible except when the grid graph has only four vertices (a trivial case that can be safely ignored).
{ "cite_N": [ "@cite_41", "@cite_42", "@cite_33", "@cite_39" ], "mid": [ "2016374168", "1844237500", "2241357751", "2083721544" ], "abstract": [ "We consider a graph with n vertices, and p<n pebbles of m colors. A pebble move consists of transferring a pebble from its current host vertex to an adjacent unoccupied vertex. The problem is to move the pebbles to a given new color arrangement.", "Given a connected, undirected, simple graph @math and @math pebbles labeled @math , a configuration of these @math pebbles is an injective map assigning the pebbles to vertices of @math . Let @math and @math be two such configurations. From a configuration, pebbles can move on @math as follows: In each step, at most one pebble may move from the vertex it currently occupies to an adjacent unoccupied vertex, yielding a new configuration. A natural question in this setting is the following: Is configuration @math reachable from @math and if so, how? We show that the feasibility of this problem can be decided in time @math .", "We study the problem of planning paths for (p ) distinguishable pebbles (robots) residing on the vertices of an (n )-vertex connected graph with (p n ). A pebble may move from a vertex to an adjacent one in a time step provided that it does not collide with other pebbles. When (p = n ), the only collision free moves are synchronous rotations of pebbles on disjoint cycles of the graph. We show that the feasibility of such problems is intrinsically determined by the diameter of a (unique) permutation group induced by the underlying graph. Roughly speaking, the diameter of a group ( G ) is the minimum length of the generator product required to reach an arbitrary element of ( G ) from the identity element. Through bounding the diameter of this associated permutation group, which assumes a maximum value of (O(n^2) ), we establish a linear time algorithm for deciding the feasibility of such problems and an (O(n^3) ) algorithm for planning complete paths.", "" ] }
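The pebble-motion move model underlying the feasibility results above — one labeled robot per step moves to an adjacent unoccupied vertex — can be made concrete with a brute-force configuration search. The cited works decide feasibility in polynomial time; this sketch instead enumerates the (exponentially large) configuration space and is only practical for tiny instances. The function name is ours:

```python
from collections import deque

def pebble_feasible(adj, start, goal):
    """BFS over configurations of labeled pebbles on a graph.
    A move shifts one pebble to an adjacent unoccupied vertex.
    Exponential state space - illustration of the move model only;
    the cited algorithms decide feasibility in polynomial time."""
    start, goal = tuple(start), tuple(goal)
    seen, queue = {start}, deque([start])
    while queue:
        conf = queue.popleft()
        if conf == goal:
            return True
        occupied = set(conf)
        for i, v in enumerate(conf):
            for w in adj[v]:
                if w not in occupied:
                    nxt = conf[:i] + (w,) + conf[i + 1:]
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
    return False

# Path graph 0-1-2 with two labeled pebbles: they cannot pass
# each other, so swapping them is infeasible.
adj = {0: [1], 1: [0, 2], 2: [1]}
print(pebble_feasible(adj, [0, 2], [2, 0]))  # False (swap on a path)
print(pebble_feasible(adj, [0, 2], [1, 2]))  # True (one legal move)
```

The infeasible swap on a path graph is exactly the kind of obstruction that the group-theoretic characterization in @cite_33 captures in general.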
1706.06806
2651362180
We consider the problem of embedding a finite set of points @math that satisfy @math triangle inequalities into @math , when the points are approximately low-dimensional. Goemans (unpublished, appears in a work of [Magen and Moharammi, 2008]) showed that such points residing in @math dimensions can be embedded into @math with distortion at most @math . We prove the following robust analogue of this statement: if there exists a @math -dimensional subspace @math such that the projections onto this subspace satisfy @math , then there is an embedding of the points into @math with @math average distortion. A consequence of this result is that the integrality gap of the well-known Goemans-Linial SDP relaxation for the Uniform Sparsest Cut problem is @math on graphs @math whose @math -th smallest normalized eigenvalue of the Laplacian satisfies @math . Our result improves upon the previously known bound of @math on the average distortion, and the integrality gap of the Goemans-Linial SDP under the same preconditions, proven in the previous works of [Deshpande and Venkat, 2014] and [Deshpande, Harsha and Venkat, 2016].
Kwok et al. @cite_2 showed that a refined analysis of Cheeger's inequality gives a @math approximation to the sparsest cut on regular graphs. In particular, when @math , this gives a @math approximation. Note that our result gives a better approximation in this setting (see ).
{ "cite_N": [ "@cite_2" ], "mid": [ "2950405280" ], "abstract": [ "Let (G) be the minimum conductance of an undirected graph G, and let 0= = 2, (G) = O(k) , and this performance guarantee is achieved by the spectral partitioning algorithm. This improves Cheeger's inequality, and the bound is optimal up to a constant factor for any k. Our result shows that the spectral partitioning algorithm is a constant factor approximation algorithm for finding a sparse cut if $ is a constant for some constant k. This provides some theoretical justification to its empirical performance in image segmentation and clustering problems. We extend the analysis to other graph partitioning problems, including multi-way partition, balanced separator, and maximum cut." ] }
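For background, the quantities involved are the conductance \(\phi(G)\) and the eigenvalues \(\lambda_k\) of the normalized Laplacian. The classical discrete Cheeger inequality, together with the higher-order refinement stated in the abstract of @cite_2 (our paraphrase with constants omitted, not the paper's exact statement), reads:

```latex
\frac{\lambda_2}{2} \;\le\; \phi(G) \;\le\; \sqrt{2\lambda_2},
\qquad
\phi(G) \;\le\; O(k)\,\frac{\lambda_2}{\sqrt{\lambda_k}} \quad (k \ge 2).
```

The refinement improves on the classical upper bound whenever some higher eigenvalue \(\lambda_k\) is bounded away from zero for small \(k\), which is the regime discussed above.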
1706.06954
2712522410
In this work, we present Web-STAR, an online platform for story understanding built on top of the STAR (STory comprehension through ARgumentation) reasoning engine. This platform includes a web-based IDE, integration with the STAR system and a web service infrastructure to support integration with other systems that rely on story understanding functionality to complete their tasks. The platform also delivers a number of "social" features like public story sharing with a built-in commenting system, a public repository for sharing stories with the community and collaboration tools that can be used from both project team members for development and educators for teaching. Moreover, we discuss the ongoing work on adding new features and functionality to this platform.
Currently, several systems have been developed to deal with the problem of story comprehension. Starting from the 70's, Charniak presented two systems: a story comprehension model for answering questions about children's stories, by relating stories to real-world background knowledge @cite_18 , and a system for answering questions about simple stories dealing with painting @cite_22 . There is also work on Deep Read @cite_28 , an automated reading comprehension system that accepts stories and answers questions about them. More recent attempts include the work of Mueller @cite_7 on a system for modeling space and time in narratives about restaurants, and Genesis @cite_11 , a system for modeling and exploring aspects of story understanding using stories drawn from sources ranging from fairy tales to Shakespeare's plays.
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_7", "@cite_28", "@cite_11" ], "mid": [ "2099531122", "1573557135", "2059914676", "2121300346", "2204969741" ], "abstract": [ "Massachusetts Institute of Technology. Dept. of Electrical Engineering. Thesis. 1972. Ph.D.", "This paper describes Ms. Malaprop, a program (currently being designed) which will answer questions about simple stories dealing with painting, where stories, questions and answers will be expressed in semantic representation rather than English in order to allow concentration on the inferential problems involved in language comprehension. The common sense knowledge needed to accomplish the task is provided by the frame representation of \"mundane\" painting found in Charniak (1976b). The present paper, after reviewing this representation, goes on to describe how it is used by Ms. Malaprop. Some specific questions of matching, correcting false conclusions, and search, will be discussed.", "This study investigated the automatic modelling of space and time in narratives involving dining in a restaurant. We built a program that (1) uses information extraction techniques to convert narrative texts into templates containing key information about the dining episodes discussed in the narratives, (2) constructs commonsense reasoning problems from the templates, (3) uses commonsense reasoning and a commonsense knowledge base to build models of the dining episodes, and (4) generates and answers questions by consulting the models. We describe the program and present the results of running it on a corpus of web texts and American literature.", "This paper describes initial work on Deep Read, an automated reading comprehension system that accepts arbitrary text input (a story) and answers questions about it. We have acquired a corpus of 60 development and 60 test stories of 3rd to 6th grade material; each story is followed by short-answer questions (an answer key was also provided). 
We used these to construct and evaluate a baseline system that uses pattern matching (bag-of-words) techniques augmented with additional automated linguistic processing (stemming, name identification, semantic class identification, and pronoun resolution). This simple system retrieves the sentence containing the answer 30--40 of the time.", "A story summarizer benefits greatly from a reader model because a reader model enables the story summarizer to focus on delivering useful knowledge in minimal time with minimal effort. Such a" ] }
1706.06954
2712522410
In this work, we present Web-STAR, an online platform for story understanding built on top of the STAR (STory comprehension through ARgumentation) reasoning engine. This platform includes a web-based IDE, integration with the STAR system and a web service infrastructure to support integration with other systems that rely on story understanding functionality to complete their tasks. The platform also delivers a number of "social" features like public story sharing with a built-in commenting system, a public repository for sharing stories with the community and collaboration tools that can be used from both project team members for development and educators for teaching. Moreover, we discuss the ongoing work on adding new features and functionality to this platform.
There are also IDEs focused on logic programming, such as SWISH (SWI-Prolog for Sharing) @cite_19 and the IDP Web-IDE @cite_0 . SWISH is a web front-end for SWI-Prolog, used to run small Prolog programs for demonstration, experimentation and education. The IDP Web-IDE is an online front-end for IDP, a Knowledge Base System for the FO( @math ) language.
{ "cite_N": [ "@cite_0", "@cite_19" ], "mid": [ "2236664303", "2229795720" ], "abstract": [ "IDP is a knowledge base system based on first order logic. It is finding its way to a larger public but is still facing practical challenges. Adoption of new languages requires a newcomer-friendly way for users to interact with it. Both an online presence to try to convince potential users to download the system and offline availability to develop larger applications are essential. We developed an IDE which can serve both purposes through the use of web technology. It enables us to provide the user with a modern IDE with relatively little effort.", "Recently, we see a new type of interfaces for programmers based on web technology. For example, JSFiddle, IPython Notebook and R-studio. Web technology enables cloud-based solutions, embedding in tutorial web pages, attractive rendering of results, web-scale cooperative development, etc. This article describes SWISH, a web front-end for Prolog. A public website exposes SWIProlog using SWISH, which is used to run small Prolog programs for demonstration, experimentation and education. We connected SWISH to the ClioPatria semantic web toolkit, where it allows for collaborative development of programs and queries related to a dataset as well as performing maintenance tasks on the running server and we embedded SWISH in the Learn Prolog Now! online Prolog book." ] }
1706.06905
2706729717
Common video representations often deploy an average or maximum pooling of pre-extracted frame features over time. Such an approach provides a simple means to encode feature distributions, but is likely to be suboptimal. As an alternative, we here explore combinations of learnable pooling techniques such as Soft Bag-of-words, Fisher Vectors , NetVLAD, GRU and LSTM to aggregate video features over time. We also introduce a learnable non-linear network unit, named Context Gating, aiming at modeling in-terdependencies between features. We evaluate the method on the multi-modal Youtube-8M Large-Scale Video Understanding dataset using pre-extracted visual and audio features. We demonstrate improvements provided by the Context Gating as well as by the combination of learnable pooling methods. We finally show how this leads to the best performance, out of more than 600 teams, in the Kaggle Youtube-8M Large-Scale Video Understanding challenge.
Successful hand-crafted representations @cite_50 @cite_4 @cite_17 are based on local histograms of image and motion gradient orientations extracted along dense trajectories @cite_32 @cite_17 . More recent methods extract deep convolutional neural network activations computed from individual frames or blocks of frames using spatial @cite_10 @cite_47 @cite_6 @cite_15 or spatio-temporal @cite_42 @cite_18 @cite_27 @cite_43 @cite_36 @cite_56 convolutions. Convolutional neural networks can also be applied separately to the appearance channel and the pre-computed motion field channel, resulting in the so-called two-stream representations @cite_18 @cite_10 @cite_6 @cite_35 @cite_56 . As our work is motivated by the Youtube-8M large-scale video understanding challenge @cite_0 , we will assume for the rest of the paper that features are provided (more details are provided in ). This work mainly focuses on the temporal aggregation of given features.
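The Context Gating unit mentioned in the abstract re-weights a feature vector element-wise by a learned sigmoid gate, y = sigmoid(Wx + b) * x, so that features can suppress or amplify one another. A minimal NumPy sketch of the forward pass (W and b would be learned parameters; the random values here are placeholders, and this is our reconstruction, not the authors' implementation):

```python
import numpy as np

def context_gating(x, W, b):
    """Context Gating forward pass: gate = sigmoid(W @ x + b),
    output = gate * x (element-wise), modeling interdependencies
    between feature dimensions. W and b are learned in practice."""
    gate = 1.0 / (1.0 + np.exp(-(W @ x + b)))
    return gate * x

rng = np.random.default_rng(0)
d = 4
x = rng.standard_normal(d)       # an input feature vector
W = rng.standard_normal((d, d))  # placeholder for learned weights
b = np.zeros(d)                  # placeholder for learned bias
y = context_gating(x, W, b)
print(y.shape)  # (4,)
```

With zero weights and bias the gate is 0.5 everywhere, so the unit initially halves each feature and then learns which dimensions to open or close during training.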
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_4", "@cite_15", "@cite_36", "@cite_42", "@cite_32", "@cite_6", "@cite_56", "@cite_0", "@cite_43", "@cite_27", "@cite_50", "@cite_47", "@cite_10", "@cite_17" ], "mid": [ "2952186347", "2619082050", "2034328688", "1944615693", "2952633803", "28988658", "2511475724", "2608988379", "2951864506", "2524365899", "", "2746726611", "2142194269", "2308045930", "2342662179", "2105101328" ], "abstract": [ "We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.", "The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. 
Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9 on HMDB-51 and 98.0 on UCF-101.", "Local space-time features capture local events in video and can be adapted to the size, the frequency and the velocity of moving patterns. In this paper, we demonstrate how such features can be used for recognizing complex motion patterns. We construct video representations in terms of local space-time features and integrate such representations with SVM classification schemes for recognition. For the purpose of evaluation we introduce a new video database containing 2391 sequences of six human actions performed by 25 people in four different scenarios. The presented results of action recognition justify the proposed method and demonstrate its advantage compared to other relative approaches for action recognition.", "Visual features are of vital importance for human action understanding in videos. This paper presents a new video representation, called trajectory-pooled deep-convolutional descriptor (TDD), which shares the merits of both hand-crafted features [31] and deep-learned features [24]. 
Specifically, we utilize deep architectures to learn discriminative convolutional feature maps, and conduct trajectory-constrained pooling to aggregate these convolutional features into effective descriptors. To enhance the robustness of TDDs, we design two normalization methods to transform convolutional feature maps, namely spatiotemporal normalization and channel normalization. The advantages of our features come from (i) TDDs are automatically learned and contain high discriminative capacity compared with those hand-crafted features; (ii) TDDs take account of the intrinsic characteristics of temporal dimension and introduce the strategies of trajectory-constrained sampling and pooling for aggregating deep-learned features. We conduct experiments on two challenging datasets: HMDB51 and UCF101. Experimental results show that TDDs outperform previous hand-crafted features [31] and deep-learned features [24]. Our method also achieves superior performance to the state of the art on these datasets.", "We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets.
Finally, they are conceptually very simple and easy to train and use.", "We propose in this paper a fully automated deep model, which learns to classify human actions without using any prior knowledge. The first step of our scheme, based on the extension of Convolutional Neural Networks to 3D, automatically learns spatio-temporal features. A Recurrent Neural Network is then trained to classify each sequence considering the temporal evolution of the learned features for each timestep. Experimental results on the KTH dataset show that the proposed approach outperforms existing deep models, and gives comparable results with the best related works.", "Action recognition in videos is a challenging task due to the complexity of the spatio-temporal patterns to model and the difficulty to acquire and learn on large quantities of video data. Deep learning, although a breakthrough for image classification and showing promise for videos, has still not clearly superseded action recognition methods using hand-crafted features, even when training on massive datasets. In this paper, we introduce hybrid video classification architectures based on carefully designed unsupervised representations of hand-crafted spatio-temporal features classified by supervised deep networks. As we show in our experiments on five popular benchmarks for action recognition, our hybrid model combines the best of both worlds: it is data efficient (trained on 150 to 10000 short clips) and yet improves significantly on the state of the art, including recent deep models trained on millions of manually labelled images and videos.", "In this work, we introduce a new video representation for action classification that aggregates local convolutional features across the entire spatio-temporal extent of the video. We do so by integrating state-of-the-art two-stream networks [42] with learnable spatio-temporal feature aggregation [6]. 
The resulting architecture is end-to-end trainable for whole-video classification. We investigate different strategies for pooling across space and time and combining signals from the different streams. We find that: (i) it is important to pool jointly across space and time, but (ii) appearance and motion streams are best aggregated into their own separate representations. Finally, we show that our representation outperforms the two-stream base architecture by a large margin (13 relative) as well as outperforms other baselines with comparable base architectures on HMDB51, UCF101, and Charades video classification benchmarks.", "Typical human actions last several seconds and exhibit characteristic spatio-temporal structure. Recent methods attempt to capture this structure and learn action representations with convolutional neural networks. Such representations, however, are typically learned at the level of a few video frames failing to model actions at their full temporal extent. In this work we learn video representations using neural networks with long-term temporal convolutions (LTC). We demonstrate that LTC-CNN models with increased temporal extents improve the accuracy of action recognition. We also study the impact of different low-level representations, such as raw values of video pixels and optical flow vector fields and demonstrate the importance of high-quality optical flow estimation for learning accurate action models. We report state-of-the-art results on two challenging benchmarks for human action recognition UCF101 (92.7 ) and HMDB51 (67.2 ).", "Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. 
Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of 8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.", "", "This paper presents a general ConvNet architecture for video action recognition based on multiplicative interactions of spacetime features. Our model combines the appearance and motion pathways of a two-stream architecture by motion gating and is trained end-to-end. We theoretically motivate multiplicative gating functions for residual networks and empirically study their effect on classification accuracy. To capture long-term dependencies we inject identity mapping kernels for learning temporal relationships. 
Our architecture is fully convolutional in spacetime and able to evaluate a video in a single forward pass. Empirical investigation reveals that our model produces state-of-the-art results on two standard action recognition datasets.", "The aim of this paper is to address recognition of natural human actions in diverse and realistic video settings. This challenging but important subject has mostly been ignored in the past due to several problems one of which is the lack of realistic and annotated video datasets. Our first contribution is to address this limitation and to investigate the use of movie scripts for automatic annotation of human actions in videos. We evaluate alternative methods for action retrieval from scripts and show benefits of a text-based classifier. Using the retrieved action samples for visual learning, we next turn to the problem of action classification in video. We present a new method for video classification that builds upon and extends several recent ideas including local space-time features, space-time pyramids and multi-channel non-linear SVMs. The method is shown to improve state-of-the-art results on the standard KTH action dataset by achieving 91.8% accuracy. Given the inherent problem of noisy labels in automatic annotation, we particularly investigate and show high tolerance of our method to annotation errors in the training set. We finally apply the method to learning and classifying challenging action classes in movies and show promising results.", "", "Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information.
We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters, (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy, finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results.", "Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art." ] }
1706.06905
2706729717
Common video representations often deploy an average or maximum pooling of pre-extracted frame features over time. Such an approach provides a simple means to encode feature distributions, but is likely to be suboptimal. As an alternative, we here explore combinations of learnable pooling techniques such as Soft Bag-of-words, Fisher Vectors, NetVLAD, GRU and LSTM to aggregate video features over time. We also introduce a learnable non-linear network unit, named Context Gating, aiming at modeling interdependencies between features. We evaluate the method on the multi-modal Youtube-8M Large-Scale Video Understanding dataset using pre-extracted visual and audio features. We demonstrate improvements provided by the Context Gating as well as by the combination of learnable pooling methods. We finally show how this leads to the best performance, out of more than 600 teams, in the Kaggle Youtube-8M Large-Scale Video Understanding challenge.
Gating mechanisms allow multiplicative interaction between a given input feature @math and a gate vector with values between 0 and 1. They are commonly used in recurrent neural network models such as LSTM @cite_23 and GRU @cite_5 but have so far not been exploited in conjunction with other non-temporal aggregation strategies such as Fisher Vectors (FV), Vector of Locally Aggregated Descriptors (VLAD) or bag-of-visual-words (BoW). Our work aims to fill this gap by designing a video classification architecture that combines non-temporal aggregation with gating mechanisms. One of the motivations for this choice is the recent Gated Linear Unit (GLU) @cite_49, which has demonstrated significant improvements in natural language processing tasks.
{ "cite_N": [ "@cite_5", "@cite_49", "@cite_23" ], "mid": [ "2172140247", "2567070169", "" ], "abstract": [ "Neural machine translation is a relatively new approach to statistical machine translation based purely on neural networks. The neural machine translation models often consist of an encoder and a decoder. The encoder extracts a fixed-length representation from a variable-length input sentence, and the decoder generates a correct translation from this representation. In this paper, we focus on analyzing the properties of the neural machine translation using two models; RNN Encoder--Decoder and a newly proposed gated recursive convolutional neural network. We show that the neural machine translation performs relatively well on short sentences without unknown words, but its performance degrades rapidly as the length of the sentence and the number of unknown words increase. Furthermore, we find that the proposed gated recursive convolutional network learns a grammatical structure of a sentence automatically.", "The pre-dominant approach to language modeling to date is based on recurrent neural networks. Their success on this task is often linked to their ability to capture unbounded context. In this paper we develop a finite context approach through stacked convolutions, which can be more efficient since they allow parallelization over sequential tokens. We propose a novel simplified gating mechanism that outperforms (2016) and investigate the impact of key architectural decisions. The proposed approach achieves state-of-the-art on the WikiText-103 benchmark, even though it features long-term dependencies, as well as competitive results on the Google Billion Words benchmark. Our model reduces the latency to score a sentence by an order of magnitude compared to a recurrent baseline. To our knowledge, this is the first time a non-recurrent approach is competitive with strong recurrent models on these large scale language tasks.", "" ] }
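The Context Gating unit discussed above reduces, at its core, to multiplying a feature vector by a learned sigmoid gate. A minimal NumPy sketch of that idea (the weight shapes and random initialization are illustrative, not the paper's trained parameters):

```python
import numpy as np

def context_gating(x, W, b):
    """Element-wise gating: y = sigmoid(W x + b) * x, so each feature is
    re-weighted by a value in (0, 1) computed from the whole feature vector."""
    gate = 1.0 / (1.0 + np.exp(-(W @ x + b)))
    return gate * x

rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal(d)
W = rng.standard_normal((d, d)) * 0.1   # toy weights, not trained
b = np.zeros(d)
y = context_gating(x, W, b)
```

Because the gate lies strictly in (0, 1), this form can only attenuate features, which is one way to see it as modeling interdependencies between features rather than adding raw capacity.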
1706.06714
2718340241
Natural language generation (NLG) is an important component in spoken dialogue systems. This paper presents a model called Encoder-Aggregator-Decoder, which is an extension of a Recurrent Neural Network based Encoder-Decoder architecture. The proposed Semantic Aggregator consists of two components: an Aligner and a Refiner. The Aligner is a conventional attention calculated over the encoded input information, while the Refiner is another attention or gating mechanism stacked over the attentive Aligner in order to further select and aggregate the semantic elements. The proposed model can be trained jointly on both sentence planning and surface realization to produce natural language utterances. The model was extensively assessed on four different NLG domains, and the experimental results showed that the proposed generator consistently outperforms the previous methods on all the NLG domains.
More recently, attentional RNN encoder-decoder based models @cite_4 have shown improved results in a variety of tasks. presented a review network for the image captioning task, which produces a compact thought vector by reviewing all the input information encoded by the encoder. proposed an attentional RNN encoder-decoder based model by introducing two layers of attention to model content selection and surface realization. Closer to our work, proposed an attentive encoder-decoder based generator, which applied the attention mechanism over the slot-value pairs. The model showed domain scalability when only a very limited proportion of training data is available.
{ "cite_N": [ "@cite_4" ], "mid": [ "2133564696" ], "abstract": [ "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition." ] }
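The attentive Aligner described above follows the standard soft-attention recipe: score every encoder state against the current decoder state, normalize the scores with a softmax, and return the weighted sum. A small NumPy sketch under that generic formulation (the bilinear scoring matrix `Wa` is one common choice, not necessarily the one used by the paper):

```python
import numpy as np

def attention(decoder_state, encoder_states, Wa):
    """Soft attention: one score per encoder timestep, softmax-normalized,
    then a convex combination of the encoder states (the context vector)."""
    scores = encoder_states @ (Wa @ decoder_state)   # shape (T,)
    weights = np.exp(scores - scores.max())          # numerically stable softmax
    weights /= weights.sum()
    context = weights @ encoder_states               # shape (d,)
    return context, weights

rng = np.random.default_rng(1)
T, d = 5, 4                         # timesteps, feature size (toy values)
H = rng.standard_normal((T, d))     # encoded input information
s = rng.standard_normal(d)          # current decoder state
Wa = rng.standard_normal((d, d))
context, weights = attention(s, H, Wa)
```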
1706.06941
2669485504
Graph representations offer powerful and intuitive ways to describe data in a multitude of application domains. Here, we consider stochastic processes generating graphs and propose a methodology for detecting changes in stationarity of such processes. The methodology is general and considers a process generating attributed graphs with a variable number of vertices and edges, without the need to assume a one-to-one correspondence between vertices at different time steps. The methodology acts by embedding every graph of the stream into a vector domain, where a conventional multivariate change detection procedure can be easily applied. We ground the soundness of our proposal by proving several theoretical results. In addition, we provide a specific implementation of the methodology and evaluate its effectiveness on several detection problems involving attributed graphs representing biological molecules and drawings. Experimental results are contrasted with respect to suitable baseline methods, demonstrating the effectiveness of our approach.
The literature on statistical inference for time-varying graphs (or networks) is rather limited @cite_50 @cite_48, especially when dealing with attributed graphs and non-identified vertices. Among these problems, anomaly detection in graphs has emerged as one of particular relevance, as a consequence of the ever-growing possibility of monitoring and collecting data coming from natural and man-made systems of various size. An overview of proposed approaches for anomaly and change detection on time-variant graphs is reported in @cite_33 @cite_38, where the authors distinguish the level of influence of a change. They identify changes affecting vertices and edges, or involving entire sub-networks of different size; this type of change usually concerns static networks, where the topology is often fixed. Other changes have a global influence, or might not be ascribed to specific vertices or edges.
{ "cite_N": [ "@cite_48", "@cite_33", "@cite_50", "@cite_38" ], "mid": [ "2468801653", "2093168265", "1937334562", "2089554624" ], "abstract": [ "The sample mean is one of the most fundamental concepts in statistics. Properties of the sample mean that are well-defined in Euclidean spaces become unclear in graph spaces. This paper proposes conditions under which the following properties are valid: existence, uniqueness, and consistency of means, the midpoint property, necessary conditions of optimality, and convergence results of mean algorithms. The theoretical results address common misconceptions about the graph mean in graph edit distance spaces, serve as a first step towards a statistical analysis of graph spaces, and result in a theoretically well-founded mean algorithm that outperformed six other mean algorithms with respect to solution quality on different graph datasets representing images and molecules. Highlights: A first step towards a theory of statistical graph space analysis is proposed. An MMM-algorithm is proposed that outperformed six other mean algorithms. Necessary conditions of optimality are proved. Convergence of the MMM-algorithm is shown. Basic statistical and geometrical properties are shown.", "Anomaly detection is an important problem with multiple applications, and thus has been studied for decades in various research domains. In the past decade there has been a growing interest in anomaly detection in data represented as networks, or graphs, largely because of their robust expressiveness and their natural ability to represent complex relationships. Originally, techniques focused on anomaly detection in static graphs, which do not change and are capable of representing only a single snapshot of data.
As real-world networks are constantly changing, there has been a shift in focus to dynamic graphs, which evolve over time.", "The power of any kind of network approach lies in the ability to simplify a complex system so that one can better understand its function as a whole. Sometimes it is beneficial, however, to include more information than in a simple graph of only nodes and links. Adding information about times of interactions can make predictions and mechanistic understanding more accurate. The drawback, however, is that there are not so many methods available, partly because temporal networks is a relatively young field, partly because it is more difficult to develop such methods compared to for static networks. In this colloquium, we review the methods to analyze and model temporal networks and processes taking place on them, focusing mainly on the last three years. This includes the spreading of infectious disease, opinions, rumors, in social networks; information packets in computer networks; various types of signaling in biology, and more. We also discuss future directions.", "Detecting anomalies in data is a vital task, with numerous high-impact applications in areas such as security, finance, health care, and law enforcement. While numerous techniques have been developed in past years for spotting outliers and anomalies in unstructured collections of multi-dimensional points, with graph data becoming ubiquitous, techniques for structured graph data have been of focus recently. As objects in graphs have long-range correlations, a suite of novel technology has been developed for anomaly detection in graph data. This survey aims to provide a general, comprehensive, and structured overview of the state-of-the-art methods for anomaly detection in data represented as graphs. 
As a key contribution, we give a general framework for the algorithms categorized under various settings: unsupervised versus (semi-)supervised approaches, for static versus dynamic graphs, for attributed versus plain graphs. We highlight the effectiveness, scalability, generality, and robustness aspects of the methods. What is more, we stress the importance of anomaly attribution and highlight the major techniques that facilitate digging out the root cause, or the 'why', of the detected anomalies for further analysis and sense-making. Finally, we present several real-world applications of graph-based anomaly detection in diverse domains, including financial, auction, computer traffic, and social networks. We conclude our survey with a discussion on open theoretical and practical challenges in the field." ] }
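The methodology above can be sketched generically in two steps: embed each graph of the stream into a vector (for example, via distances to a set of prototype graphs), then run a conventional change-detection procedure on the resulting stream. Both choices below (a dissimilarity embedding and a one-sided CUSUM) are illustrative stand-ins, not the specific implementation evaluated in the paper:

```python
import numpy as np

def dissimilarity_embedding(g, prototypes, dist):
    """Embed a graph as its vector of distances to prototype graphs --
    one generic way to map a graph stream into a vector domain where
    multivariate change detection can operate."""
    return np.array([dist(g, p) for p in prototypes])

def cusum(stream, threshold, drift=0.0):
    """One-sided CUSUM over a scalar statistic (e.g. the distance of each
    embedded graph from a nominal mean); returns the first index at which
    the cumulative sum exceeds the threshold, or -1 if it never does."""
    s = 0.0
    for t, x in enumerate(stream):
        s = max(0.0, s + x - drift)
        if s > threshold:
            return t
    return -1

# Integers stand in for graphs here; any graph distance (e.g. graph edit
# distance) could replace the toy `dist` below.
emb = dissimilarity_embedding(7, [1, 5, 10], lambda g, p: abs(g - p))

# A scalar distance stream with a mean shift at t = 50 is flagged
# shortly after the change.
stream = [0.0] * 50 + [1.0] * 20
t_detect = cusum(stream, threshold=5.0, drift=0.5)
```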
1706.06792
2629553543
Deep Convolutional Neural Networks (CNNs) are capable of learning unprecedentedly effective features from images. Some researchers have struggled to enhance the parameters' efficiency using grouped convolution. However, the relation between the optimal number of convolutional groups and the recognition performance remains an open problem. In this paper, we propose a series of Basic Units (BUs) and a two-level merging strategy to construct deep CNNs, referred to as a joint Grouped Merging Net (GM-Net), which can produce joint grouped and reused deep features while maintaining the feature discriminability for classification tasks. Our GM-Net architectures with the proposed BU_A (dense connection) and BU_B (straight mapping) lead to significant reduction in the number of network parameters and obtain performance improvement in image classification tasks. Extensive experiments are conducted to validate the superior performance of GM-Net over state-of-the-art methods on the benchmark datasets, e.g., MNIST, CIFAR-10, CIFAR-100 and SVHN.
Overfitting: Overfitting has long been a major issue in computer vision tasks, since millions of parameters tend to memorize the training data @cite_4. Dropout @cite_5 provides an effective regularization that prevents co-adaptations on training data by randomly dropping out neurons. It can be explained as an ensemble of sparse neural networks that reduces overfitting risks. Similarly, stochastic depth @cite_18 and drop path @cite_16 utilize the idea of training only part of the network to optimize models. We attempt to make parameters sparse and compact by dropping connections with special basic units. Sparse models generally reduce overfitting, but underfitting will occur if the model is too sparse. This problem is handled in our models with a unique merging strategy.
{ "cite_N": [ "@cite_5", "@cite_18", "@cite_16", "@cite_4" ], "mid": [ "2095705004", "2949892913", "2408279554", "2950220847" ], "abstract": [ "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.", "Very deep convolutional networks with hundreds of layers have led to significant reductions in error on competitive benchmarks. Although the unmatched expressiveness of the many layers can be highly desirable at test time, training very deep networks comes with its own set of challenges. The gradients can vanish, the forward flow often diminishes, and the training time can be painfully slow. To address these problems, we propose stochastic depth, a training procedure that enables the seemingly contradictory setup to train short networks and use deep networks at test time. 
We start with very deep networks but during training, for each mini-batch, randomly drop a subset of layers and bypass them with the identity function. This simple approach complements the recent success of residual networks. It reduces training time substantially and improves the test error significantly on almost all data sets that we used for evaluation. With stochastic depth we can increase the depth of residual networks even beyond 1200 layers and still yield meaningful improvements in test error (4.91% on CIFAR-10).", "We introduce a design strategy for neural network macro-architecture based on self-similarity. Repeated application of a simple expansion rule generates deep networks whose structural layouts are precisely truncated fractals. These networks contain interacting subpaths of different lengths, but do not include any pass-through or residual connections; every internal signal is transformed by a filter and nonlinearity before being seen by subsequent layers. In experiments, fractal networks match the excellent performance of standard residual networks on both CIFAR and ImageNet classification tasks, thereby demonstrating that residual representations may not be fundamental to the success of extremely deep convolutional neural networks. Rather, the key may be the ability to transition, during training, from effectively shallow to deep. We note similarities with student-teacher behavior and develop drop-path, a natural extension of dropout, to regularize co-adaptation of subpaths in fractal architectures. Such regularization allows extraction of high-performance fixed-depth subnetworks. Additionally, fractal networks exhibit an anytime property: shallow subnetworks provide a quick answer, while deeper subnetworks, with higher latency, provide a more accurate answer.", "Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance.
Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models." ] }
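Dropout, as used above for regularization, is simple to state in code. Below is a minimal sketch of the common inverted-dropout formulation, where rescaling the surviving units at training time removes the need for any change at test time:

```python
import numpy as np

def dropout(x, p_drop, rng, train=True):
    """Inverted dropout: during training, zero each unit with probability
    p_drop and rescale the survivors by 1/(1 - p_drop); at test time the
    input passes through unchanged."""
    if not train or p_drop == 0.0:
        return x
    mask = rng.random(x.shape) >= p_drop
    return x * mask / (1.0 - p_drop)

rng = np.random.default_rng(0)
x = np.ones(1000)
y = dropout(x, 0.5, rng)   # roughly half zeros, survivors scaled to 2.0
```

Each training pass samples a different mask, which is the sense in which dropout trains an exponential ensemble of "thinned" sub-networks sharing one set of weights.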
1706.06792
2629553543
Deep Convolutional Neural Networks (CNNs) are capable of learning unprecedentedly effective features from images. Some researchers have struggled to enhance the parameters' efficiency using grouped convolution. However, the relation between the optimal number of convolutional groups and the recognition performance remains an open problem. In this paper, we propose a series of Basic Units (BUs) and a two-level merging strategy to construct deep CNNs, referred to as a joint Grouped Merging Net (GM-Net), which can produce joint grouped and reused deep features while maintaining the feature discriminability for classification tasks. Our GM-Net architectures with the proposed BU_A (dense connection) and BU_B (straight mapping) lead to significant reduction in the number of network parameters and obtain performance improvement in image classification tasks. Extensive experiments are conducted to validate the superior performance of GM-Net over state-of-the-art methods on the benchmark datasets, e.g., MNIST, CIFAR-10, CIFAR-100 and SVHN.
Grouped Convolution: Owing to computing limitations, AlexNet @cite_12 had to split its feature maps into two groups, which has since inspired researchers to seek highly efficient architectures. Grouped convolution did not receive much attention until ResNeXt @cite_3 pointed out the significance of cardinality, defined as the number of convolutional groups. By increasing cardinality, ResNeXt @cite_3 improves accuracy while maintaining model complexity and parameter count. Xception @cite_0 proposed depth-wise separable convolution, or channel-wise convolution, where each channel is treated as a convolution group. Despite the higher performance achieved, an open question remains: what is the optimal number of convolutional groups?
{ "cite_N": [ "@cite_0", "@cite_3", "@cite_12" ], "mid": [ "2951583185", "2953328958", "" ], "abstract": [ "We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters.", "We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call \"cardinality\" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. 
Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.", "" ] }
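The parameter savings behind grouped convolution, and the cardinality trade-off left open above, are easy to quantify: at fixed channel widths, a convolution split into g groups uses 1/g of the weights of a standard one. A small sketch (the 256-channel, 3x3 sizes are arbitrary example values):

```python
def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a k x k convolution split into `groups` groups;
    groups=1 is a standard convolution, while groups=c_in gives a
    depthwise (channel-wise) convolution as in Xception."""
    assert c_in % groups == 0 and c_out % groups == 0
    return (c_in // groups) * c_out * k * k

# Increasing the number of groups divides the parameter count by `groups`
# while keeping the input/output channel widths fixed.
standard  = conv_params(256, 256, 3)              # 589824 weights
grouped   = conv_params(256, 256, 3, groups=32)   # 18432 weights
depthwise = conv_params(256, 256, 3, groups=256)  # 2304 weights
```

The saving comes from each output channel seeing only c_in/groups input channels, which is also why extreme grouping can over-restrict the features unless the groups are later mixed or merged.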
1706.06768
2689607630
Weakly supervised object detection (WSOD), which is the problem of learning detectors using only image-level labels, has been attracting more and more interest. However, this problem is quite challenging due to the lack of location supervision. To address this issue, this paper integrates saliency into a deep architecture, in which the location information is explored both explicitly and implicitly. Specifically, we select highly confident object proposals under the guidance of class-specific saliency maps. The location information, together with the semantic and saliency information, of the selected proposals is then used to explicitly supervise the network by imposing two additional losses. Meanwhile, a saliency prediction sub-network is built into the architecture. The prediction results are used to implicitly guide the localization procedure. The entire network is trained end-to-end. Experiments on PASCAL VOC demonstrate that our approach outperforms all state-of-the-art methods.
However, the formulated MIL problems are non-convex and prone to getting stuck in local optima. To address this issue, different strategies have been developed, which either help to make good initializations @cite_8 @cite_14 @cite_15 @cite_9, or regularize the models with extra cues @cite_23 @cite_24 @cite_15. These methods have demonstrated their effectiveness, especially when CNN features are used to represent object proposals @cite_15 @cite_9. Thus, a current trend along this research line is to integrate the MIL strategy with deep networks, as exemplified by the work in @cite_19.
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_9", "@cite_24", "@cite_19", "@cite_23", "@cite_15" ], "mid": [ "2952072685", "2132984949", "2133324800", "", "2951001760", "", "2953310444" ], "abstract": [ "Learning to localize objects with minimal supervision is an important problem in computer vision, since large fully annotated datasets are extremely costly to obtain. In this paper, we propose a new method that achieves this goal with only image-level labels of whether the objects are present or not. Our approach combines a discriminative submodular cover problem for automatically discovering a set of positive object windows with a smoothed latent SVM formulation. The latter allows us to leverage efficient quasi-Newton optimization techniques. Our experiments demonstrate that the proposed approach provides a 50% relative improvement in mean average precision over the current state-of-the-art on PASCAL VOC 2007 detection.", "Latent variable models are a powerful tool for addressing several tasks in machine learning. However, the algorithms for learning the parameters of latent variable models are prone to getting stuck in a bad local optimum. To alleviate this problem, we build on the intuition that, rather than considering all samples simultaneously, the algorithm should be presented with the training data in a meaningful order that facilitates learning. The order of the samples is determined by how easy they are. The main challenge is that often we are not provided with a readily computable measure of the easiness of samples. We address this issue by proposing a novel, iterative self-paced learning algorithm where each iteration simultaneously selects easy samples and learns a new parameter vector. The number of samples selected is governed by a weight that is annealed until the entire training data has been considered.
We empirically demonstrate that the self-paced learning algorithm outperforms the state of the art method for learning a latent structural SVM on four applications: object localization, noun phrase coreference, motif finding and handwritten digit recognition.", "Object category localization is a challenging problem in computer vision. Standard supervised training requires bounding box annotations of object instances. This time-consuming annotation process is sidestepped in weakly supervised learning. In this case, the supervised information is restricted to binary labels that indicate the absence presence of object instances in the image, without their locations. We follow a multiple-instance learning approach that iteratively trains the detector and infers the object locations in the positive training images. Our main contribution is a multi-fold multiple instance learning procedure, which prevents training from prematurely locking onto erroneous object locations. This procedure is particularly important when using high-dimensional representations, such as Fisher vectors and convolutional neural network features. We also propose a window refinement method, which improves the localization accuracy by incorporating an objectness prior. We present a detailed experimental evaluation using the PASCAL VOC 2007 dataset, which verifies the effectiveness of our approach.", "", "Object detection is a challenging task in visual understanding domain, and even more so if the supervision is to be weak. Recently, few efforts to handle the task without expensive human annotations is established by promising deep neural network. A new architecture of cascaded networks is proposed to learn a convolutional neural network (CNN) under such conditions. We introduce two such architectures, with either two cascade stages or three which are trained in an end-to-end pipeline. 
The first stage of both architectures extracts best candidate of class specific region proposals by training a fully convolutional network. In the case of the three stage architecture, the middle stage provides object segmentation, using the output of the activation maps of first stage. The final stage of both architectures is a part of a convolutional neural network that performs multiple instance learning on proposals extracted in the previous stage(s). Our experiments on the PASCAL VOC 2007, 2010, 2012 and large scale object datasets, ILSVRC 2013, 2014 datasets show improvements in the areas of weakly-supervised object detection, classification and localization.", "", "We present a technique for weakly supervised object localization (WSOL), building on the observation that WSOL algorithms usually work better on images with bigger objects. Instead of training the object detector on the entire training set at the same time, we propose a curriculum learning strategy to feed training images into the WSOL learning loop in an order from images containing bigger objects down to smaller ones. To automatically determine the order, we train a regressor to estimate the size of the object given the whole image as input. Furthermore, we use these size estimates to further improve the re-localization step of WSOL by assigning weights to object proposals according to how close their size matches the estimated object size. We demonstrate the effectiveness of using size order and size weighting on the challenging PASCAL VOC 2007 dataset, where we achieve a significant improvement over existing state-of-the-art WSOL techniques." ] }
1706.06768
2689607630
Weakly supervised object detection (WSOD), which is the problem of learning detectors using only image-level labels, has been attracting more and more interest. However, this problem is quite challenging due to the lack of location supervision. To address this issue, this paper integrates saliency into a deep architecture, in which the location information is explored both explicitly and implicitly. Specifically, we select highly confident object proposals under the guidance of class-specific saliency maps. The location information, together with semantic and saliency information, of the selected proposals are then used to explicitly supervise the network by imposing two additional losses. Meanwhile, a saliency prediction sub-network is built in the architecture. The prediction results are used to implicitly guide the localization procedure. The entire network is trained end-to-end. Experiments on PASCAL VOC demonstrate that our approach outperforms all state-of-the-arts.
aims to utilize the knowledge learned with CNNs on other vision tasks to help the learning of object detectors under weak supervision. For instance, the CNN pre-trained on the large-scale image-level classification task @cite_20 is not only able to extract discriminative features but also has localization ability. Therefore, @cite_15 @cite_9 use CNNs to represent the features of object proposals. @cite_13 @cite_1 @cite_10 @cite_12 explore the semantic and spatial information encoded in convolutional layers to predict approximate object locations. Most of these methods treat classification and localization as separate procedures, which limits their localization performance.
{ "cite_N": [ "@cite_10", "@cite_9", "@cite_1", "@cite_15", "@cite_13", "@cite_12", "@cite_20" ], "mid": [ "2441255125", "2133324800", "2950328304", "2953310444", "1994488211", "2952533407", "2117539524" ], "abstract": [ "We address the problem of weakly supervised object localization where only image-level annotations are available for training. Many existing approaches tackle this problem through object proposal mining. However, a substantial amount of noise in object proposals causes ambiguities for learning discriminative object models. Such approaches are sensitive to model initialization and often converge to an undesirable local minimum. In this paper, we address this problem by progressive domain adaptation with two main steps: classification adaptation and detection adaptation. In classification adaptation, we transfer a pre-trained network to our multi-label classification task for recognizing the presence of a certain object in an image. In detection adaptation, we first use a mask-out strategy to collect class-specific object proposals and apply multiple instance learning to mine confident candidates. We then use these selected object proposals to fine-tune all the layers, resulting in a fully adapted detection network. We extensively evaluate the localization performance on the PASCAL VOC and ILSVRC datasets and demonstrate significant performance improvement over the state-of-the-art methods.", "Object category localization is a challenging problem in computer vision. Standard supervised training requires bounding box annotations of object instances. This time-consuming annotation process is sidestepped in weakly supervised learning. In this case, the supervised information is restricted to binary labels that indicate the absence presence of object instances in the image, without their locations. We follow a multiple-instance learning approach that iteratively trains the detector and infers the object locations in the positive training images. 
Our main contribution is a multi-fold multiple instance learning procedure, which prevents training from prematurely locking onto erroneous object locations. This procedure is particularly important when using high-dimensional representations, such as Fisher vectors and convolutional neural network features. We also propose a window refinement method, which improves the localization accuracy by incorporating an objectness prior. We present a detailed experimental evaluation using the PASCAL VOC 2007 dataset, which verifies the effectiveness of our approach.", "In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1 top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2 top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them", "We present a technique for weakly supervised object localization (WSOL), building on the observation that WSOL algorithms usually work better on images with bigger objects. Instead of training the object detector on the entire training set at the same time, we propose a curriculum learning strategy to feed training images into the WSOL learning loop in an order from images containing bigger objects down to smaller ones. To automatically determine the order, we train a regressor to estimate the size of the object given the whole image as input. 
Furthermore, we use these size estimates to further improve the re-localization step of WSOL by assigning weights to object proposals according to how close their size matches the estimated object size. We demonstrate the effectiveness of using size order and size weighting on the challenging PASCAL VOC 2007 dataset, where we achieve a significant improvement over existing state-of-the-art WSOL techniques.", "Successful methods for visual object recognition typically rely on training datasets containing lots of richly annotated images. Detailed image annotation, e.g. by object bounding boxes, however, is both expensive and often subjective. We describe a weakly supervised convolutional neural network (CNN) for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects. We quantify its object classification and object location prediction performance on the Pascal VOC 2012 (20 object classes) and the much larger Microsoft COCO (80 object classes) datasets. We find that the network (i) outputs accurate image-level labels, (ii) predicts approximate locations (but not extents) of objects, and (iii) performs comparably to its fully-supervised counterparts using object bounding box annotation for training.", "Object localization is an important computer vision problem with a variety of applications. The lack of large scale object-level annotations and the relative abundance of image-level labels makes a compelling case for weak supervision in the object localization task. Deep Convolutional Neural Networks are a class of state-of-the-art methods for the related problem of object recognition. In this paper, we describe a novel object localization algorithm which uses classification networks trained on only image labels. This weakly supervised method leverages local spatial and semantic patterns captured in the convolutional layers of classification networks. 
We propose an efficient beam search based approach to detect and localize multiple objects in images. The proposed method significantly outperforms the state-of-the-art in standard object localization data-sets with a 8 point increase in mAP scores.", "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements." ] }
1706.06768
2689607630
Weakly supervised object detection (WSOD), which is the problem of learning detectors using only image-level labels, has been attracting more and more interest. However, this problem is quite challenging due to the lack of location supervision. To address this issue, this paper integrates saliency into a deep architecture, in which the location information is explored both explicitly and implicitly. Specifically, we select highly confident object proposals under the guidance of class-specific saliency maps. The location information, together with semantic and saliency information, of the selected proposals are then used to explicitly supervise the network by imposing two additional losses. Meanwhile, a saliency prediction sub-network is built in the architecture. The prediction results are used to implicitly guide the localization procedure. The entire network is trained end-to-end. Experiments on PASCAL VOC demonstrate that our approach outperforms all state-of-the-arts.
is an important ingredient in making CNNs remarkably effective on fully supervised vision tasks. Intrigued by this success, many recent efforts construct end-to-end architectures for WSOD. For instance, WSDDN @cite_16 proposes a two-stream network that learns classifiers and localizers in parallel in an end-to-end manner. @cite_21 incorporates context information into the two-stream network. WCCN @cite_19 integrates the MIL strategy into an end-to-end deep network. Benefiting from end-to-end training, these methods achieve state-of-the-art performance. Our work follows this research line. Different from these approaches, we explore highly confident information from class-specific saliency maps and integrate it into the network to supervise the end-to-end training.
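The WSDDN-style two-stream scoring can be sketched in numpy: one stream softmaxes over classes (what does each proposal depict?), the other over proposals (which proposal best localizes each class?), and their element-wise product summed over proposals yields an image-level score trainable from image labels alone. The scores and shapes below are fabricated for illustration.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Fabricated raw scores for 3 proposals x 2 classes.
cls_stream = np.array([[2.0, 0.1], [0.2, 1.5], [0.1, 0.1]])
det_stream = np.array([[3.0, 0.0], [0.0, 2.0], [0.5, 0.5]])

# Stream 1: classification (softmax across classes, per proposal).
# Stream 2: detection (softmax across proposals, per class).
region_scores = softmax(cls_stream, axis=1) * softmax(det_stream, axis=0)

# Summing region scores over proposals gives image-level class scores,
# so the whole pipeline is trainable end-to-end from image-level labels.
image_scores = region_scores.sum(axis=0)
print(image_scores.shape)  # (2,)
```

The proposal with the largest `region_scores` entry for a class serves as that class's detection at test time, which is how the implicit localizer emerges from image-level supervision.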
{ "cite_N": [ "@cite_19", "@cite_16", "@cite_21" ], "mid": [ "2951001760", "2101611867", "" ], "abstract": [ "Object detection is a challenging task in visual understanding domain, and even more so if the supervision is to be weak. Recently, few efforts to handle the task without expensive human annotations is established by promising deep neural network. A new architecture of cascaded networks is proposed to learn a convolutional neural network (CNN) under such conditions. We introduce two such architectures, with either two cascade stages or three which are trained in an end-to-end pipeline. The first stage of both architectures extracts best candidate of class specific region proposals by training a fully convolutional network. In the case of the three stage architecture, the middle stage provides object segmentation, using the output of the activation maps of first stage. The final stage of both architectures is a part of a convolutional neural network that performs multiple instance learning on proposals extracted in the previous stage(s). Our experiments on the PASCAL VOC 2007, 2010, 2012 and large scale object datasets, ILSVRC 2013, 2014 datasets show improvements in the areas of weakly-supervised object detection, classification and localization.", "Weakly supervised learning of object detection is an important problem in image understanding that still does not have a satisfactory solution. In this paper, we address this problem by exploiting the power of deep convolutional neural networks pre-trained on large-scale image-level classification tasks. We propose a weakly supervised deep detection architecture that modifies one such network to operate at the level of image regions, performing simultaneously region selection and classification. Trained as an image classifier, the architecture implicitly learns object detectors that are better than alternative weakly supervised detection systems on the PASCAL VOC data. 
The model, which is a simple and elegant end-to-end architecture, outperforms standard data augmentation and fine-tuning techniques for the task of image-level classification as well.", "" ] }
1706.06768
2689607630
Weakly supervised object detection (WSOD), which is the problem of learning detectors using only image-level labels, has been attracting more and more interest. However, this problem is quite challenging due to the lack of location supervision. To address this issue, this paper integrates saliency into a deep architecture, in which the location information is explored both explicitly and implicitly. Specifically, we select highly confident object proposals under the guidance of class-specific saliency maps. The location information, together with semantic and saliency information, of the selected proposals are then used to explicitly supervise the network by imposing two additional losses. Meanwhile, a saliency prediction sub-network is built in the architecture. The prediction results are used to implicitly guide the localization procedure. The entire network is trained end-to-end. Experiments on PASCAL VOC demonstrate that our approach outperforms all state-of-the-arts.
Saliency detection @cite_17 @cite_4 can automatically highlight image regions containing objects of interest. Thus, the generated saliency maps provide approximate information about object locations. Based on this observation, saliency has been used as a prior in various weakly supervised vision tasks @cite_11 @cite_28 @cite_30 . In WSOD, category-free saliency detection @cite_28 is exploited via a self-paced curriculum learning strategy. @cite_26 designs an attention network for WSOD. @cite_4 @cite_30 propose a convenient way to obtain class-specific saliency maps from a CNN pre-trained on ILSVRC @cite_20 , which provide both location and semantic information. Our work is motivated by their findings: we aim to incorporate such class-specific saliency maps into an end-to-end framework to boost detection performance.
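How a saliency map can single out "highly confident" proposals is easy to illustrate: rank boxes by their mean saliency and keep the top ones. The map, the boxes, and the function name below are synthetic stand-ins, not the cited papers' actual selection rules.

```python
import numpy as np

def select_confident_proposals(saliency, boxes, top_k=1):
    """Rank proposal boxes (x0, y0, x1, y1) by mean saliency inside the box
    and keep the top_k, mimicking saliency-guided proposal selection."""
    scores = [saliency[y0:y1, x0:x1].mean() for x0, y0, x1, y1 in boxes]
    order = np.argsort(scores)[::-1]   # highest mean saliency first
    return order[:top_k]

# Synthetic 8x8 saliency map with a bright 3x3 blob at the centre.
sal = np.zeros((8, 8))
sal[3:6, 3:6] = 1.0

boxes = [(0, 0, 3, 3),   # background corner
         (3, 3, 6, 6),   # exactly on the blob
         (2, 2, 7, 7)]   # loose box around the blob
print(select_confident_proposals(sal, boxes))  # [1]
```

Note that the loose box scores lower than the tight one because its mean dilutes the blob, which is why mean-saliency ranking favors well-fitting proposals over merely overlapping ones.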
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_4", "@cite_28", "@cite_17", "@cite_20", "@cite_11" ], "mid": [ "2519610629", "", "2962851944", "2949652892", "2047670868", "2117539524", "2474876375" ], "abstract": [ "In this paper, we deal with a weakly supervised semantic segmentation problem where only training images with image-level labels are available. We propose a weakly supervised semantic segmentation method which is based on CNN-based class-specific saliency maps and fully-connected CRF. To obtain distinct class-specific saliency maps which can be used as unary potentials of CRF, we propose a novel method to estimate class saliency maps which improves the method proposed by (2014) significantly by the following improvements: (1) using CNN derivatives with respect to feature maps of the intermediate convolutional layers with up-sampling instead of an input image; (2) subtracting the saliency maps of the other classes from the saliency maps of the target class to differentiate target objects from other objects; (3) aggregating multiple-scale class saliency maps to compensate lower resolution of the feature maps. After obtaining distinct class saliency maps, we apply fully-connected CRF by using the class maps as unary potentials. By the experiments, we show that the proposed method has outperformed state-of-the-art results with the PASCAL VOC 2012 dataset under the weakly-supervised setting.", "", "This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. 
We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13].", "Weakly-supervised object detection (WOD) is a challenging problems in computer vision. The key problem is to simultaneously infer the exact object locations in the training images and train the object detectors, given only the training images with weak image-level labels. Intuitively, by simulating the selective attention mechanism of human visual system, saliency detection technique can select attractive objects in scenes and thus is a potential way to provide useful priors for WOD. However, the way to adopt saliency detection in WOD is not trivial since the detected saliency region might be possibly highly ambiguous in complex cases. To this end, this paper first comprehensively analyzes the challenges in applying saliency detection to WOD. Then, we make one of the earliest efforts to bridge saliency detection to WOD via the self-paced curriculum learning, which can guide the learning procedure to gradually achieve faithful knowledge of multi-class objects from easy to hard. The experimental results demonstrate that the proposed approach can successfully bridge saliency detection and WOD tasks and achieve the state-of-the-art object detection results under the weak supervision.", "Recent progresses in salient object detection have exploited the boundary prior, or background information, to assist other saliency cues such as contrast, achieving state-of-the-art results. However, their usage of boundary prior is very simple, fragile, and the integration with other cues is mostly heuristic. In this work, we present new methods to address these issues. First, we propose a robust background measure, called boundary connectivity. 
It characterizes the spatial layout of image regions with respect to image boundaries and is much more robust. It has an intuitive geometrical interpretation and presents unique benefits that are absent in previous saliency measures. Second, we propose a principled optimization framework to integrate multiple low level cues, including our background measure, to obtain clean and uniform saliency maps. Our formulation is intuitive, efficient and achieves state-of-the-art results on several benchmark datasets.", "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.", "In this paper, we propose a novel method to perform weakly-supervised image parsing based on the dictionary learning framework. To deal with the challenges caused by the label ambiguities, we design a saliency guided weight assignment scheme to boost the discriminative dictionary learning. More specifically, with a collection of tagged images, the proposed method first conducts saliency detection and automatically infers the confidence for each semantic class to be foreground or background. 
These clues are then incorporated to learn the dictionaries, the weights, as well as the sparse representation coefficients in the meanwhile. Once obtained the coefficients of a superpixel, we use a sparse representation classifier to determine its semantic label. The approach is validated on the MSRC21, PASCAL VOC07, and VOC12 datasets. Experimental results demonstrate the encouraging performance of our approach in comparison with some state-of-the-arts." ] }
1706.06631
2696455221
Network virtualization offers flexibility by decoupling virtual network from the underlying physical network. Software-Defined Network (SDN) could utilize the virtual network. For example, in Software-Defined Networks, the entire network can be run on commodity hardware and operating systems that use virtual elements. However, this could present new challenges of data plane performance. In this paper, we present an empirical model of the packet processing delay of a widely used OpenFlow virtual switch, the Open vSwitch. In the empirical model, we analyze the effect of varying Random Access Memory (RAM) and network parameters on the performance of the Open vSwitch. Our empirical model captures the non-network processing delays, which could be used in enhancing the network modeling and simulation.
S. Azodolmolky @cite_3 developed an analytical model based on network calculus to describe the functionality of Software-Defined Networks. They modeled the behavior of an SDN switch in terms of delay, queue length boundaries, buffer length, and controller buffer length. K. Mahmood @cite_10 modeled SDN based on queuing theory, where a Jackson network was used to model the data plane and a queue was used to model the controller. They determined the average time a packet spends in the SDN and the maximum amount of data that can be injected into the network given certain delay requirements. The authors built a custom simulator to validate their analytical model. Another queuing-theory-based model was developed by Xiong @cite_4 to evaluate OpenFlow-based software-defined networks. To obtain the average time a packet spends in the system, an @math queue was used to model the switch. They evaluated the switch queuing model with different performance parameters using numerical analysis. A second queue was used to model the controller's packet-in behavior, which was evaluated using the widely used benchmark Cbench under various network scenarios.
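The basic quantity all of these queuing models build on is the M/M/1 mean sojourn time, W = 1/(μ − λ). The sketch below applies it to a switch plus a controller that only handles flow-table misses; the rates and the 5% miss ratio are made-up illustrative numbers, and real models in the cited works use richer queue types (batch arrivals, general service times).

```python
def mm1_sojourn(arrival_rate, service_rate):
    """Mean time a packet spends in an M/M/1 queue: W = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: lambda must be < mu")
    return 1.0 / (service_rate - arrival_rate)

# Illustrative numbers: a switch serving 10k pps under 8k pps offered load,
# and a controller that only sees the 5% of packets that miss the flow table.
switch_delay = mm1_sojourn(8000, 10000)             # seconds
controller_delay = mm1_sojourn(0.05 * 8000, 2000)   # seconds
print(round(switch_delay * 1e3, 3), round(controller_delay * 1e3, 3))  # 0.5 0.625 (ms)
```

Even this toy decomposition shows why controller load matters: the miss ratio, not the raw traffic rate, drives the control-plane queue toward instability.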
{ "cite_N": [ "@cite_4", "@cite_10", "@cite_3" ], "mid": [ "2315262035", "2296645514", "1972826678" ], "abstract": [ "OpenFlow is one of the most famous protocols for controller-to-switch communications in software-defined networking (SDN), commonly seen as a promising way towards future Internet. Understanding the performance and limitation of OpenFlow-based SDN is a prerequisite of its deployments. To achieve this aim, this paper proposes a novel analytical performance model of OpenFlow networks based on queueing theory. After depicting a typical network scenario of OpenFlow deployments, we model the packet forwarding of its OpenFlow switches and the packet-in message processing of its SDN controller respectively as the queueing systems MX M 1 and M G 1. Subsequently, we build a queueing model of OpenFlow networks in terms of packet forwarding performance, and solve its closed-form expression of average packet sojourn time and the corresponding probability density function. Finally, the numerical analysis is carried out to evaluate our proposed performance model with different parameter values. Furthermore, our controller model is contrasted with the classical one by utilizing the popular benchmark Cbench. Experimental results indicate that our controller model provides a more accurate approximation of SDN controller performance.", "OpenFlow (OF) is one of the most widely used protocols for controller-to-switch communication in a software defined network (SDN). Performance analysis of OF-based SDN using analytical models is both highly desirable and challenging. There already exists a very elegant analytical model based on M M 1 queues to estimate the packet sojourn time and probability of lost packets for the case in which a controller is responsible for only a single node in the data plane. However the literature falls short when it comes to the multiple node case, i.e. when there is more than one node in the data plane. 
In this work we propose a model to address this challenge by approximating the data plane as an open Jackson network with the controller also modeled as an M M 1 queue. The model is then used to evaluate the system in the light of some of the metrics, such as; how much time a packet spends on average in an OF-based network and how much data we can pump into the network given the average delay requirements. Finally the PDF and the CDF of the time spent by the packet in an OF-based SDN for a given path is derived.", "Software defined networking (SDN) and OpenFlow as the outcome of recent research and development efforts provided unprecedented access into the forwarding plane of networking elements. This is achieved by decoupling the network control out of the forwarding devices. This separation paves the way for a more flexible and innovative networking. While SDN concept and OpenFlow find their ways into commercial deployments, performance evaluation of the SDN concept and its scalability, delay bounds, buffer sizing and similar performance metrics are not investigated in recent researches. In spite of usage of benchmark tools (like OFlops and Cbench), simulation studies and very few analytical models, there is a lack of analytical models to express the boundary condition of SDN deployment. In this work we present a model based on network calculus theory to describe the functionality of an SDN switch and controller. To the best of our knowledge, this is for the first time that network calculus framework is utilized to model the behavior of an SDN switch in terms of delay and queue length boundaries and the analysis of the buffer length of SDN controller and SDN switch. The presented model can be used for network designers and architects to get a quick view of the overall SDN network deployment performance and buffer sizing of SDN switches and controllers." ] }
1706.06631
2696455221
Network virtualization offers flexibility by decoupling virtual network from the underlying physical network. Software-Defined Network (SDN) could utilize the virtual network. For example, in Software-Defined Networks, the entire network can be run on commodity hardware and operating systems that use virtual elements. However, this could present new challenges of data plane performance. In this paper, we present an empirical model of the packet processing delay of a widely used OpenFlow virtual switch, the Open vSwitch. In the empirical model, we analyze the effect of varying Random Access Memory (RAM) and network parameters on the performance of the Open vSwitch. Our empirical model captures the non-network processing delays, which could be used in enhancing the network modeling and simulation.
U. Javed developed a stochastic model for transit latency in SDN @cite_8 . They performed experiments on three different platforms (Mininet, the MikroTik RouterBoard 750GL, and GENI) and used the round-trip time (RTT) between end hosts as the measurement metric for formulating their model. They also demonstrated that a log-normal mixture distribution fits SDN transit latency better than the M/M/1 models suggested in earlier studies. A hybrid approach was proposed by M. Jarschel @cite_6 : they first used hardware switches to measure average packet forwarding times and then used one of the measured performance values to develop a queuing model for analyzing the network. They simplified the queuing models of the OpenFlow forwarding and controller components relative to more general formulations. To validate the results of the analytical model, they implemented a packet-based simulation in OMNeT++.
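Why a log-normal mixture beats an M/M/1 fit is easy to see numerically: a mixture of a fast common path and a slow rare path has a much heavier upper tail than an exponential sojourn-time distribution with the same mean. The component means, weights, and sigmas below are invented for illustration, not fitted to any dataset from the cited work.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Two-component log-normal mixture: a fast common path and a slow rare path.
fast = rng.random(n) < 0.9
lat = np.where(fast,
               rng.lognormal(mean=np.log(0.5), sigma=0.2, size=n),   # ~0.5 ms
               rng.lognormal(mean=np.log(5.0), sigma=0.4, size=n))   # ~5 ms

# An exponential latency (M/M/1-style sojourn time) with the same mean.
expo = rng.exponential(lat.mean(), size=n)

# The mixture's 99th percentile far exceeds the exponential's: the single-rate
# model cannot capture the slow-path mass that RTT measurements exhibit.
print(float(np.quantile(lat, 0.99)), float(np.quantile(expo, 0.99)))
```

A single-queue model forced to match the mean therefore systematically underestimates tail latency, which is the practical argument for mixture fits.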
{ "cite_N": [ "@cite_6", "@cite_8" ], "mid": [ "1491798314", "2563943708" ], "abstract": [ "The OpenFlow concept of flow-based forwarding and separation of the control plane from the data plane provides a new flexibility in network innovation. While initially used solely in the research domain, OpenFlow is now finding its way into commercial applications. However, this creates new challenges, as questions of OpenFlow scalability and performance have not yet been answered. This paper is a first step towards that goal. Based on measurements of switching times of current OpenFlow hardware, we derive a basic model for the forwarding speed and blocking probability of an OpenFlow switch combined with an OpenFlow controller and validate it using a simulation. This model can be used to estimate the packet sojourn time and the probability of lost packets in such a system and can give hints to developers and researchers on questions how an OpenFlow architecture will perform given certain parameters.", "Software defined networks (SDNs) introduced the concept of decoupling control and data planes which is a paradigm shift. The OpenFlow protocol is one of a number of technologies that enables this decoupling and, in effect, commodifies network equipment. As of now, there is still limited work that has been done towards modeling the transit delay across OpenFlow switches experienced by network traffic. In this work we develop a stochastic model for the path latency in Open vSwitch (used together with a POX controller) based on measurements made in experiments performed on three different platforms which include 1) Mininet, 2) MikroTik RouterBoard 750GL and 3) GENI testbed softswitch. We propose a log-normal mix model (LNMM) and show that it offers a R2 value of greater than 0.90 for most of our experiments. We also demonstrate how the M M 1 models proposed in earlier studies is a poor fit." ] }
1706.06376
2174992904
Advanced medical devices exploit the advantages of embedded software whose development is subject to compliance with stringent requirements of standardization and certification regimes due to the critical nature of such systems. This paper presents initial results and lessons learned from an ongoing project focusing on the development of a formal model of a subsystem of a software-controlled safety-critical active medical device (AMD) responsible for renal replacement therapy. The use of formal approaches for the development of AMDs is highly recommended by standards and regulations, and motivates the recent advancement of the state of the art of related methods and tools including Event-B and Rodin applied in this paper. It is expected that the presented model development approach and the specification of a high-confidence medical system will contribute to the still sparse experience base available at the disposal of the scientific and practitioner community of formal methods and software engineering.
In recent years, the use of formal methods for the development of software-intensive medical systems has been increasing. For example, @cite_24 use the formal Analytical Software Design (ASD) approach @cite_38 for developing the power control service of an interventional X-ray system. @cite_25 present a methodology based on timed automata to extract timing properties of a heart that can be used for the verification and validation of implantable cardiac devices. @cite_3 provide a solution to the pacemaker challenge using the model checker PAT (Process Analysis Toolkit) @cite_37. Méry et al. @cite_26 and @cite_35 present models of pacemakers in Event-B and VDM @cite_39, respectively.
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_26", "@cite_35", "@cite_3", "@cite_39", "@cite_24", "@cite_25" ], "mid": [ "1533209541", "1532054750", "2062214872", "2125673566", "2128835097", "1542358180", "1966848862", "155921250" ], "abstract": [ "Software is now an essential component that is embedded in an ever-increasing array of products. It has become an important means of realising product innovation and is a key determinant of both product quality and time-to-market. For many businesses, software has become business-critical and software development is a strategic business activity. At the same time, software development continues to suffer from poor predictability. Existing development methods appear to have reached a quality ceiling that incremental improvements in process and technology are unlikely to breach. To break through this ceiling, a different, more formal approach is needed, but one which can be introduced within existing development organisations.", "FDR, initially introduced decades ago, is the de facto analyzer for Communicating Sequential Processes (CSP). Model checking techniques have been evolved rapidly since then. This paper describes Pat, i.e., a process analysis toolkit which complements FDR in several aspects. Pat is designed to analyze event-based compositional system models specified using CSP as well as shared variables and asynchronous message passing. It supports automated refinement checking, model checking of LTL extended with events, etc. In this paper, we highlight how partial order reduction is applied to improve refinement checking in Pat. Experiment results show that Pat outperforms FDR in some cases.", "Formal methods have emerged as an alternative approach to ensuring quality and correctness of highly critical systems, overcoming limitations of traditional validation techniques such as simulation and testing. 
We propose a refinement-based methodology for complex medical systems design, which possesses all the required key features. A refinement-based combined approach of formal verification, model validation using a model-checker and refinement chart is proposed in this methodology for designing a high-confidence medical device. Furthermore, we show the effectiveness of this methodology for the design of a cardiac pacemaker system.", "The construction of formal models of real-time distributed systems is a considerable practical challenge. We propose and illustrate a pragmatic incremental approach in which detail is progressively added to abstract system-level specifications of functional and timing properties via intermediate models that express system architecture, concurrency and timing behaviour. The approach is illustrated by developing a new formal model of the cardiac pacemaker system proposed as a \"grand challenge\" problem in 2007. The models are expressed using the Vienna Development Method (VDM) and are validated primarily by scenario-based tests, including the analysis of timed traces. We argue that the insight gained using this staged modelling approach will be valuable in the subsequent development of implementations, and in detecting potential bottlenecks within suggested implementation architectures.", "The pacemaker challenge proposed by Software Quality Research Laboratory is looking for formal methods toproduce precise and reliable systems. Safety critical systems like pacemaker need to guarantee important properties (like deadlock-free, safety, etc.), which concern human lives. Formal methods have been applied in designing safety critical systems with verified desirable properties. In this paper, we propose a formal model of pacemaker, modeling its behaviors and its communication with the external environment, using a real-time formalism. 
Critical properties, such as deadlock freeness and heart rate limits are then verified using the model checker PAT(Process Analysis Toolkit). This work yields a verified formal model of pacemaker systems, which can serve as specification for real pacemaker implementations.", "", "We report about experiences at Philips Healthcare with component-based development supported by formal techniques. The formal Analytical Software Design (ASD) approach of the company Verum has been incorporated into the industrial workflow. The commercial tool ASD:Suite supports both compositional verification and code generation for control components. For other components test-driven development has been used. We discuss the results of these combined techniques in a project which developed the power control service of an interventional X-ray system.", "" ] }
1706.06376
2174992904
Advanced medical devices exploit the advantages of embedded software whose development is subject to compliance with stringent requirements of standardization and certification regimes due to the critical nature of such systems. This paper presents initial results and lessons learned from an ongoing project focusing on the development of a formal model of a subsystem of a software-controlled safety-critical active medical device (AMD) responsible for renal replacement therapy. The use of formal approaches for the development of AMDs is highly recommended by standards and regulations, and motivates the recent advancement of the state of the art of related methods and tools including Event-B and Rodin applied in this paper. It is expected that the presented model development approach and the specification of a high-confidence medical system will contribute to the still sparse experience base available at the disposal of the scientific and practitioner community of formal methods and software engineering.
One of the medical devices most closely related to hemodialysis machines is the infusion pump, which is primarily responsible for delivering fluids, such as nutrients and medications, into a patient's body in controlled amounts. @cite_30 present a reference model of PCA (Patient-Controlled Analgesia) infusion pumps and test the model for structural and safety properties; José et al. @cite_27 present a formal model in MAL (Modal Action Logic) @cite_36 that helps compare different infusion devices and the functionality they provide; and @cite_1 use the ProZ model checker @cite_28 to test various safety properties of infusion pumps.
{ "cite_N": [ "@cite_30", "@cite_28", "@cite_36", "@cite_1", "@cite_27" ], "mid": [ "2149519129", "1491364274", "2103839338", "2023753675", "2107114959" ], "abstract": [ "As software becomes ever more ubiquitous and complex in medical devices, it becomes increasingly important to assure that it performs safely and effectively. The critical nature of medical devices necessitates that the software used therein be reliable and free of errors. It becomes imperative, therefore, to have a conformance review process in place to ascertain the correctness of the software and to ensure that it meets all requirements and standards. Formal methods have long been suggested as a means to design and develop medical device software. However, most manufacturers shy from using these techniques, citing them as too complex and time consuming. As a result, (potentially life-threatening) errors are often not discovered until a device is already on the market. In this paper we present a reference model based approach to software conformance checking. Reference models enable the application of formal methods to software conformance checking, and provide a framework for rigorous testing. To illustrate the approach, we develop the reference model for a generic patient controlled analgesic infusion pump, and explain how it can be used to aid software conformance checking in a regulatory environment.", "We present the architecture and implementation of the proz tool to validate high-level Z specifications. The tool was integrated into prob, by providing a translation of Z into B and by extending the kernel of prob to accommodate some new syntax and data types. We describe the challenge of going from the tool friendly formalism B to the more specification-oriented formalism Z, and show how many Z specifications can be systematically translated into B. We describe the extensions, such as record types and free types, that had to be added to the kernel to support a large subset of Z. 
As a side-effect, we provide a way to animate and model check records in prob. By incorporating proz into prob, we have inherited many of the recent extensions developed for B, such as the integration with CSP or the animation of recursive functions. Finally, we present a successful industrial application, which makes use of this fact, and where proz was able to discover several errors in Z specifications containing higher-order recursive functions.", "The paper explores the role that formal modeling may play in aiding the visualization and implementation of usability requirements of a control panel. We propose that this form of analysis should become a systematic and routine aspect of the development of such interfaces. We use a notation for describing the interface that is convenient to use by software engineers, and describe a set of tools designed to make the process systematic and exhaustive.", "Formally modelling the software functionality and interactivity of safety-critical devices allows us to prove properties about their behaviours and be certain that they will respond to user interaction correctly. In domains such as medical environments, where many different devices may be used, it is equally important to ensure that all devices used adhere to a set of safety, and other, principles designed for that environment. In this paper we look at modelling important properties of interactive medical devices including safety considerations mandated by their users. We use ProZ for model checking to ensure that properties stated in temporal logic hold, and also to check invariants. In this way we gain confidence that important properties do hold of the device, and that models of particular devices adhere to the properties described.", "This paper is concerned with the scaleable and systematic analysis of interactive systems. The motivating problem is the procurement of medical devices. 
In such situations several different manufacturers offer solutions that support a par- ticular clinical activity. Apart from cost, which is a dominating factor, the variations between devices are relatively subtle and the consequences of particular design fea- tures are not clear from manufacturers' manuals, demonstrations or trial uses. De- spite their subtlety these differences can be important to the safety and usability of the device. The paper argues that formal analysis of the range of offered devices can provide a systematic means of comparison. The paper also explores barriers to the use of such techniques, demonstrating how layers of specification may be used to make it possible to reuse common specification. Infusion pumps provide a motivat- ing example. A specific model is described and analysed and comparison between competitive devices is discussed." ] }
1706.06376
2174992904
Advanced medical devices exploit the advantages of embedded software whose development is subject to compliance with stringent requirements of standardization and certification regimes due to the critical nature of such systems. This paper presents initial results and lessons learned from an ongoing project focusing on the development of a formal model of a subsystem of a software-controlled safety-critical active medical device (AMD) responsible for renal replacement therapy. The use of formal approaches for the development of AMDs is highly recommended by standards and regulations, and motivates the recent advancement of the state of the art of related methods and tools including Event-B and Rodin applied in this paper. It is expected that the presented model development approach and the specification of a high-confidence medical system will contribute to the still sparse experience base available at the disposal of the scientific and practitioner community of formal methods and software engineering.
The formal basis for the development of medical software components used in this paper is shared with the aforementioned works. However, with the exception of the work of Méry et al. @cite_26, verification and validation activities are more tightly integrated into our proposed development process than in the other approaches. We cover a multitude of model analysis activities, e.g., model checking, model review, and animation, which give us a far better grasp on the notion of correctness than approaches that employ only a subset of the analysis techniques we have used. The work of Méry et al. @cite_26, though based on Event-B, still differs from ours because they use the refinement chart approach for model development. It is a graphical modeling technique that provides a view of different subsystems, offering assistance in their later integration into a single system. In contrast, our work is based on (different) conventional modeling strategies, i.e., linear sequential, decomposable, and atomicity, whose semantics are well-defined and whose efficacy has been proven in several industrial success stories such as @cite_31, @cite_23, and @cite_19. The system under development is another difference: they work on pacemaker systems, whereas our work is concerned with hemodialysis machines.
{ "cite_N": [ "@cite_19", "@cite_31", "@cite_26", "@cite_23" ], "mid": [ "2786911411", "1582439588", "2062214872", "1554863622" ], "abstract": [ "One of the guarantees that the designers of on-board satellite systems need to provide, so as to ensure their dependability, is that the mode transition scheme is implemented correctly, i.e. that the states of system components are consistent with the global system mode. There is still, however, a lack of scalable approaches to developing and verifying systems with complex mode transitions. This paper presents an approach to the formal development of mode-rich systems by refinement in Event-B. We formalise the concepts of modes and mode transitions as well as deriving specification and refinement patterns which support correct-by-construction system development. The proposed approach is validated by a formal development of the Attitude and Orbit Control System (AOCS) undertaken within the ICT DEPLOY project. The experience gained in the course of developing such a complex industrial system as AOCS, shows that Event-B refinement provides the engineers with a scalable formal technique. Moreover, the case study has demonstrated that Event-B can facilitate formal development of mode-rich systems and, in particular, proof-based verification of their mode consistency.", "The automatic train operating system for METEOR, the first driverless metro in the city of Paris, is designed to manage the traffic of the vehicles controlled automatically or manually. This system, developed by Matra Transport International for the RATP, requires a very high level of dependability and safety for the users and the operator. To achieve this, the safety critical software located in the different control units (ground, line and on-board) was developed using the B formal method together with the Vital Coded Processor. This architecture thus ensures an optimum level of safety agreed with the customer. 
This experience with the METEOR project has convinced Matra Transport International of the advantages of using this B formal method for large-scale industrial developments.", "Formal methods have emerged as an alternative approach to ensuring quality and correctness of highly critical systems, overcoming limitations of traditional validation techniques such as simulation and testing. We propose a refinement-based methodology for complex medical systems design, which possesses all the required key features. A refinement-based combined approach of formal verification, model validation using a model-checker and refinement chart is proposed in this methodology for designing a high-confidence medical device. Furthermore, we show the effectiveness of this methodology for the design of a cardiac pacemaker system.", "In this article we would like to go back on B used to design software, by presenting the industrial process established through years by Siemens Transportation Systems on a real project: the VAL shuttle for Roissy Charles de Gaulle airport. In this project, the logical core of an equipment located along the tracks and driving the shuttles is designed with B. By confronting this B software development, with the historical context, we show that B can be used as a high-level programming language offering the feature of proving properties. We show how this process is used to build, by construction, a large size software with very few design errors ever since its first release, and for a predefined cost." ] }
1706.06328
2645018129
This paper presents preliminary results of our work with a major financial company, where we try to use methods of plan recognition in order to investigate the interactions of a customer with the company's online interface. In this paper, we present the first steps of integrating a plan recognition algorithm in a real-world application for detecting and analyzing the interactions of a customer. It uses a novel approach for plan recognition from bare-bone UI data, which reasons about the plan library at the lowest recognition level in order to define the relevancy of actions in our domain, and then uses it to perform plan recognition. We present preliminary results of inference on three different use-cases modeled by domain experts from the company, and show that this approach manages to decrease the overload of information required from an analyst to evaluate a customer's session - whether this is a malicious or benign session, whether the intended tasks were completed, and if not - what actions are expected next.
Some notable exceptions are the works of @cite_10 @cite_6 @cite_4, which try to combine low-level activities with higher-level domain knowledge, or @cite_4, which proposes a multi-agent model for robot collaboration based on plan recognition. However, the works in these lines of research tend to use domain-theory-based recognition or other probabilistic representations, which do not capture the hierarchical nature of task decomposition the way Hierarchical Task Networks (HTNs) and plan libraries do.
{ "cite_N": [ "@cite_4", "@cite_10", "@cite_6" ], "mid": [ "", "1546690352", "2161029186" ], "abstract": [ "", "Correlating and analyzing security alerts is a critical and challenging task in security management. Recently, some techniques have been proposed for security alert correlation. However, these approaches focus more on basic or low-level alert correlation. In this paper, we study how to conduct probabilistic inference to correlate and analyze attack scenarios. Specifically, we propose an approach to solving the following problems: 1) How to correlate isolated attack scenarios resulted from low-level alert correlation? 2) How to identify attacker's high-level strategies and intentions? 3) How to predict the potential attacks based on observed attack activities? We evaluate our approaches using DARPA's grand challenge problem (GCP) data set. The results demonstrate the capability of our approach in correlating isolated attack scenarios, identifying attack strategies and predicting future attacks.", "This paper addresses the problem of learning and recognizing human activities of daily living (ADL), which is an important research issue in building a pervasive and smart environment. In dealing with ADL, we argue that it is beneficial to exploit both the inherent hierarchical organization of the activities and their typical duration. To this end, we introduce the switching hidden semi-markov model (S-HSMM), a two-layered extension of the hidden semi-Markov model (HSMM) for the modeling task. Activities are modeled in the S-HSMM in two ways: the bottom layer represents atomic activities and their duration using HSMMs; the top layer represents a sequence of high-level activities where each high-level activity is made of a sequence of atomic activities. We consider two methods for modeling duration: the classic explicit duration model using multinomial distribution, and the novel use of the discrete Coxian distribution. 
In addition, we propose an effective scheme to detect abnormality without the need for training on abnormal data. Experimental results show that the S-HSMM performs better than existing models including the flat HSMM and the hierarchical hidden Markov model in both classification and abnormality detection tasks, alleviating the need for presegmented training data. Furthermore, our discrete Coxian duration model yields better computation time and generalization error than the classic explicit duration model." ] }
1706.06328
2645018129
This paper presents preliminary results of our work with a major financial company, where we try to use methods of plan recognition in order to investigate the interactions of a customer with the company's online interface. In this paper, we present the first steps of integrating a plan recognition algorithm in a real-world application for detecting and analyzing the interactions of a customer. It uses a novel approach for plan recognition from bare-bone UI data, which reasons about the plan library at the lowest recognition level in order to define the relevancy of actions in our domain, and then uses it to perform plan recognition. We present preliminary results of inference on three different use-cases modeled by domain experts from the company, and show that this approach manages to decrease the overload of information required from an analyst to evaluate a customer's session - whether this is a malicious or benign session, whether the intended tasks were completed, and if not - what actions are expected next.
In order to address the above tasks, we need to use hierarchical plan recognition, a field of research exploring algorithms that recognize an agent's plans from a partial sequence of actions and predict its future actions @cite_13. A few works @cite_16 @cite_1 did use a plan library as the underlying domain knowledge for the task, but they neither provided predictions nor used this information to formalize the output. Other works @cite_14 do output a complete hierarchy, but the plan recognition algorithm used does not work well in exploratory environments.
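Real plan libraries are hierarchical (HTN-like) and must cope with interleaving and noise, but the core idea of matching a partial observation sequence against a library and predicting the next action can be sketched with a flat toy library. All plan and action names below are hypothetical, not taken from the paper's domain:

```python
# Hypothetical toy plan library: each plan is a flat sequence of UI actions.
PLAN_LIBRARY = {
    "transfer_funds": ["login", "select_account", "enter_amount", "confirm"],
    "check_balance":  ["login", "select_account", "view_balance"],
    "update_profile": ["login", "open_settings", "edit_details", "save"],
}

def recognize(observed):
    """Return the plans consistent with the observed action prefix,
    together with the next action each hypothesis predicts (None if
    the plan is already complete)."""
    hypotheses = {}
    for plan, actions in PLAN_LIBRARY.items():
        if actions[:len(observed)] == observed:
            nxt = actions[len(observed)] if len(observed) < len(actions) else None
            hypotheses[plan] = nxt
    return hypotheses

print(recognize(["login", "select_account"]))
# two hypotheses survive: 'transfer_funds' predicts 'enter_amount',
# 'check_balance' predicts 'view_balance'
```

As more actions are observed, hypotheses are pruned; a completed hypothesis (next action None) corresponds to a session in which the intended task was finished.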
{ "cite_N": [ "@cite_1", "@cite_14", "@cite_16", "@cite_13" ], "mid": [ "2137658325", "2727902986", "2902726747", "1558822720" ], "abstract": [ "To be effective, current intrusion detection systems (IDSs) must incorporate artificial intelligence methods for plan recognition. Plan recognition is critical both to predicting the future actions of attackers and planning appropriate responses to their actions. However network security places a new set of requirements on plan recognition. We present an argument for including plan recognition in IDSs and an algorithm for conducting plan recognition that meets the needs of the network security domain.", "We describe the application of plan recognition techniques to support human intelligence analysts in processing national security alert sets by automatically identifying the hostile intent behind them. Identifying the intent enables us to both prioritize and explain the alert sets for succinct user presentation. Our empirical evaluation demonstrates that the approach can handle alert sets of as many as 20 elements and can readily distinguish between false and true alarms. We discuss the important opportunities, for future work, that will increase the cardinality of the alert sets supported by the system to the level demanded by a deployable application. In particular, we outline opportunities to bring the analysts into the process and the opportunities for heuristic improvements to the plan recognition algorithm.", "", "Research in discourse analysis, story understanding, and user modeling for expert systems has shown great interest in plan recognition problems. In a plan recognition problem, one is given a fragmented description of actions performed by one or more agents, and expected to infer the overall plan or scenario which explains those actions. This thesis develops the first formal description of the plan recognition process. 
Beginning with a reified logic of events, the thesis presents a scheme for hierarchically structuring a library of event types. A semantic basis for non-deductive inference, called \"minimum covering entailment\", justifies the conclusions that one may draw from a set of observed actions. Minimum covering entailment is defined by delineating the class of models in which the library is complete and the set of unrelated observations is minimized. An equivalent proof theory forms a preliminary basis for mechanizing the theory. Equivalence theorems between the proof and model theories are presented. Minimum covering entailment is related to a formalism for non-monotonic inference known as \"circumscription\". Finally, the thesis describes a number of algorithms which correctly implement the theory, together with a discussion of their complexity. The theory is applied to a number of examples of plan recognition, in domains ranging from an operating system advisor to the theory of speech acts. The thesis shows how problems of medical diagnosis, a similar kind of non-deductive reasoning, can be cast in the framework, and an example previously solved by a medical expert system is worked out in detail. The analyses provides a firm theoretical foundation for much of what is loosely called \"frame based inference\", and directly accounts for problems of ambiguity, abstraction, and complex temporal interactions, which were ignored by previous work. The framework can be extended to handle difficult phenomena such as errors, and can also be restricted in order to improve its computational properties in specialized domains." ] }
1706.06629
2670684868
In this paper we introduce Co-Fusion, a dense SLAM system that takes a live stream of RGB-D images as input and segments the scene into different objects (using either motion or semantic cues) while simultaneously tracking and reconstructing their 3D shape in real time. We use a multiple model fitting approach where each object can move independently from the background and still be effectively tracked and its shape fused over time using only the information from pixels associated with that object label. Previous attempts to deal with dynamic scenes have typically considered moving regions as outliers, and consequently do not model their shape or track their motion over time. In contrast, we enable the robot to maintain 3D models for each of the segmented objects and to improve them over time through fusion. As a result, our system can enable a robot to maintain a scene description at the object level which has the potential to allow interactions with its working environment; even in the case of dynamic scenes.
In contrast, capturing the full geometry of dynamic scenes that may contain more than one moving object has received far less attention. Ren et al. @cite_11 propose a method to track and reconstruct 3D objects simultaneously by refining an initial simple shape primitive. However, in contrast to our approach, it can only track one moving object and requires manual initialization. @cite_32 propose a combined approach for estimating the pose, shape, and kinematic structure of articulated objects based on motion segmentation. While it is also based on joint tracking and segmentation, its focus is on discovering the articulated structure; only foreground objects are reconstructed, and it does not perform in real time. Stückler and Behnke @cite_0 propose a dense rigid-body motion segmentation algorithm for RGB-D sequences; they segment the RGB-D images and estimate the motion, but do not simultaneously reconstruct the objects. Finally, @cite_25 build a model of the environment and, using change detection, treat parts of the scene that become inconsistent with this model as new objects. However, this approach requires a human in the loop to acquire known-correct segmentations and does not offer real-time operation.
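The multiple-model-fitting idea underlying this line of work assigns each tracked point (or pixel) to whichever rigid-motion model best explains its displacement between frames. The following 2-D toy sketch illustrates only that hard-assignment step, with made-up models and data, not the actual dense, real-time pipeline:

```python
import math

# Two hypothetical rigid-motion models for a 2-D toy scene:
# the static background (identity) and one object translating by (1, 0).
MODELS = {
    "background": lambda p: p,
    "object":     lambda p: (p[0] + 1.0, p[1]),
}

def assign_labels(points_t0, points_t1):
    """Label each tracked point with the motion model whose predicted
    position has the smallest residual against the observed position."""
    labels = []
    for p0, p1 in zip(points_t0, points_t1):
        best = min(MODELS, key=lambda m: math.dist(MODELS[m](p0), p1))
        labels.append(best)
    return labels

frame0 = [(0.0, 0.0), (2.0, 1.0), (5.0, 5.0)]
frame1 = [(0.0, 0.0), (3.0, 1.0), (5.0, 5.0)]  # middle point moved by (1, 0)
print(assign_labels(frame0, frame1))  # ['background', 'object', 'background']
```

A full system alternates between this labeling step, re-estimating each model's rigid transform from its assigned points, and fusing the labeled measurements into per-object 3D models.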
{ "cite_N": [ "@cite_0", "@cite_25", "@cite_32", "@cite_11" ], "mid": [ "2022714016", "2060775765", "2419427454", "2047572541" ], "abstract": [ "Motion is a fundamental grouping cue in video. Many current approaches to motion segmentation in monocular or stereo image sequences rely on sparse interest points or are dense but computationally demanding. We propose an efficient expectation---maximization (EM) framework for dense 3D segmentation of moving rigid parts in RGB-D video. Our approach segments images into pixel regions that undergo coherent 3D rigid-body motion. Our formulation treats background and foreground objects equally and poses no further assumptions on the motion of the camera or the objects than rigidness. While our EM-formulation is not restricted to a specific image representation, we supplement it with efficient image representation and registration for rapid segmentation of RGB-D video. In experiments, we demonstrate that our approach recovers segmentation and 3D motion at good precision.", "We build on recent fast and accurate 3-D reconstruction techniques to segment objects during scene reconstruction. We take object outline information from change detection to build 3-D models of rigid objects and represent the scene as static and dynamic components. Object models are updated online during mapping, and can integrate segmentation information from sources other than change detection.", "We present an integrated approach for perception of unknown articulated objects. To robustly perceive objects and understand interactions, our method tightly integrates pose tracking, shape reconstruction, and the estimation of their kinematic structure. The key insight of our method is that these sub-problems complement each other: for example, tracking is greatly facilitated by knowing the shape of the object, whereas the shape and the kinematic structure can be more easily reconstructed if the motion of the object is known. 
Our combined method leverages these synergies to improve the performance of perception. We analyze the proposed method in average cases and difficult scenarios using a variety of rigid and articulated objects. The results show that our integrated solution achieves better results than solutions for the individual problems. This demonstrates the benefits of approaching robot perception problems in an integrated manner.", "We introduce a probabilistic framework for simultaneous tracking and reconstruction of 3D rigid objects using an RGB-D camera. The tracking problem is handled using a bag-of-pixels representation and a back-projection scheme. Surface and background appearance models are learned online, leading to robust tracking in the presence of heavy occlusion and outliers. In both our tracking and reconstruction modules, the 3D object is implicitly embedded using a 3D level-set function. The framework is initialized with a simple shape primitive model (e.g. a sphere or a cube), and the real 3D object shape is tracked and reconstructed online. Unlike existing depth-based 3D reconstruction works, which either rely on calibrated fixed camera set up or use the observed world map to track the depth camera, our framework can simultaneously track and reconstruct small moving objects. We use both qualitative and quantitative results to demonstrate the superior performance of both tracking and reconstruction of our method." ] }
1706.06629
2670684868
In this paper we introduce Co-Fusion, a dense SLAM system that takes a live stream of RGB-D images as input and segments the scene into different objects (using either motion or semantic cues) while simultaneously tracking and reconstructing their 3D shape in real time. We use a multiple-model fitting approach where each object can move independently from the background and still be effectively tracked, with its shape fused over time using only the information from pixels associated with that object label. Previous attempts to deal with dynamic scenes have typically considered moving regions as outliers and consequently neither model their shape nor track their motion over time. In contrast, we enable the robot to maintain 3D models for each of the segmented objects and to improve them over time through fusion. As a result, our system can enable a robot to maintain a scene description at the object level, which has the potential to allow interactions with its working environment, even in the case of dynamic scenes.
Several recent RGB-only methods have also addressed the problem of monocular 3D reconstruction of dynamic scenes. Works such as @cite_20 @cite_30 @cite_17 are similar in spirit to our simultaneous segmentation, tracking, and reconstruction approach. Russell et al. @cite_20 perform multiple model fitting to decompose a scene into piecewise rigid parts that are then grouped to form distinct objects. The strength of their approach is its flexibility in dealing with a mixture of non-rigid, articulated, and rigid objects. Fragkiadaki et al. @cite_30 follow a pipeline approach that first clusters long-term tracks into different objects and then performs non-rigid reconstruction. However, both of these approaches act on sparse tracks and are batch methods that require all frames to have been captured in advance. Our method also shares commonalities with the dense RGB multi-body reconstruction approach of @cite_17 , who likewise perform simultaneous segmentation, tracking, and 3D reconstruction of multiple rigid models, with the notable difference that our approach is online and real time while theirs is batch and takes several seconds per frame.
{ "cite_N": [ "@cite_30", "@cite_20", "@cite_17" ], "mid": [ "2151175804", "9584637", "2063759133" ], "abstract": [ "Extracting 3D shape of deforming objects in monocular videos, a task known as non-rigid structure-from-motion (NRSfM), has so far been studied only on synthetic datasets and controlled environments. Typically, the objects to reconstruct are pre-segmented, they exhibit limited rotations and occlusions, or full-length trajectories are assumed. In order to integrate NRSfM into current video analysis pipelines, one needs to consider as input realistic -thus incomplete- tracking, and perform spatio-temporal grouping to segment the objects from their surroundings. Furthermore, NRSfM needs to be robust to noise in both segmentation and tracking, e.g., drifting, segmentation \"leaking\", optical flow \"bleeding\" etc. In this paper, we make a first attempt towards this goal, and propose a method that combines dense optical flow tracking, motion trajectory clustering and NRSfM for 3D reconstruction of objects in videos. For each trajectory cluster, we compute multiple reconstructions by minimizing the reprojection error and the rank of the 3D shape under different rank bounds of the trajectory matrix. We show that dense 3D shape is extracted and trajectories are completed across occlusions and low textured regions, even under mild relative motion between the object and the camera. We achieve competitive results on a public NRSfM benchmark while using fixed parameters across all sequences and handling incomplete trajectories, in contrast to existing approaches. We further test our approach on popular video segmentation datasets. 
To the best of our knowledge, our method is the first to extract dense object models from realistic videos, such as those found in Youtube or Hollywood movies, without object-specific priors.", "Consider a video sequence captured by a single camera observing a complex dynamic scene containing an unknown mixture of multiple moving and possibly deforming objects. In this paper we propose an unsupervised approach to the challenging problem of simultaneously segmenting the scene into its constituent objects and reconstructing a 3D model of the scene. The strength of our approach comes from the ability to deal with real-world dynamic scenes and to handle seamlessly different types of motion: rigid, articulated and non-rigid. We formulate the problem as hierarchical graph-cut based segmentation where we decompose the whole scene into background and foreground objects and model the complex motion of non-rigid or articulated objects as a set of overlapping rigid parts. We evaluate the motion segmentation functionality of our approach on the Berkeley Motion Segmentation Dataset. In addition, to validate the capability of our approach to deal with real-world scenes we provide 3D reconstructions of some challenging videos from the YouTube-Objects dataset.", "Existing approaches to camera tracking and reconstruction from a single handheld camera for Augmented Reality (AR) focus on the reconstruction of static scenes. However, most real world scenarios are dynamic and contain multiple independently moving rigid objects. This paper addresses the problem of simultaneous segmentation, motion estimation and dense 3D reconstruction of dynamic scenes. We propose a dense solution to all three elements of this problem: depth estimation, motion label assignment and rigid transformation estimation directly from the raw video by optimizing a single cost function using a hill-climbing approach. 
We do not require prior knowledge of the number of objects present in the scene — the number of independent motion models and their parameters are automatically estimated. The resulting inference method combines the best techniques in discrete and continuous optimization: a state of the art variational approach is used to estimate the dense depth maps while the motion segmentation is achieved using discrete graph-cut based optimization. For the rigid motion estimation of the independently moving objects we propose a novel tracking approach designed to cope with the small fields of view they induce and agile motion. Our experimental results on real sequences show how accurate segmentations and dense depth maps can be obtained in a completely automated way and used in marker-free AR applications." ] }
1706.06444
2663767485
We study reconstruction operators on a Hilbert space that are exact on a given reconstruction subspace. Among those the reconstruction operator obtained by the least squares fit has the smallest operator norm, and therefore is most stable with respect to noisy measurements. We then construct the operator with the smallest possible quasi-optimality constant, which is the most stable with respect to a systematic error appearing before the sampling process (model uncertainty). We describe how to vary continuously between the two reconstruction methods, so that we can trade stability for quasi-optimality. As an application we study the reconstruction of a compactly supported function from nonuniform samples of its Fourier transform.
(i) In @cite_41 a bandlimited function is approximated from finitely many nonuniform samples by means of a trigonometric polynomial. In this case the sampling space consists of the reproducing kernels @math and the reconstruction vectors are @math .
{ "cite_N": [ "@cite_41" ], "mid": [ "2015784160" ], "abstract": [ "In many Applications one seeks to recover an entire function of exponential type from its non-uniformly spaced samples. Whereas the mathematical theory usually addresses the question of when such a function in L 2 (R) can be recovered, numerical methods operate with a finite-dimensional model. The numerical reconstruction or approximation of the original function amounts to the solution of a large linear system. We show that the solutions of a particularly efficient discrete model in which the data are fit by trigonometric polynomials converge to the solution of the original infinite-dimensional reconstruction problem. This legitimatizes the numerical computations and explains why the algorithms employed produce reasonable results. The main mathematical result is a new type of approximation theorem for entire functions of exponential type from a finite number of values. From another point of view our approach provides a new method for proving sampling theorems." ] }
1706.06444
2663767485
We study reconstruction operators on a Hilbert space that are exact on a given reconstruction subspace. Among those the reconstruction operator obtained by the least squares fit has the smallest operator norm, and therefore is most stable with respect to noisy measurements. We then construct the operator with the smallest possible quasi-optimality constant, which is the most stable with respect to a systematic error appearing before the sampling process (model uncertainty). We describe how to vary continuously between the two reconstruction methods, so that we can trade stability for quasi-optimality. As an application we study the reconstruction of a compactly supported function from nonuniform samples of its Fourier transform.
(iii) More generally, the goal is to approximate a compactly supported function in some smoothness class from its nonuniform Fourier samples @math . Thus the sampling space consists again of the functions @math . The reconstruction space depends on the signal model and on a priori information. If @math is smooth and belongs to a Besov space, then the reconstruction space may be taken to be a wavelet subspace. The problems of Fourier sampling have motivated Adcock and Hansen to revisit nonuniform sampling theory and to create the impressive and useful framework of generalized sampling @cite_34 @cite_16 @cite_14 @cite_19 @cite_24 .
{ "cite_N": [ "@cite_14", "@cite_16", "@cite_24", "@cite_19", "@cite_34" ], "mid": [ "2962990130", "2032208959", "1873771836", "2962823049", "" ], "abstract": [ "Abstract In this paper we study the problem of computing wavelet coefficients of compactly supported functions from their Fourier samples. For this, we use the recently introduced framework of generalized sampling. Our first result demonstrates that using generalized sampling one obtains a stable and accurate reconstruction, provided the number of Fourier samples grows linearly in the number of wavelet coefficients recovered. For the class of Daubechies wavelets we derive the exact constant of proportionality. Our second result concerns the optimality of generalized sampling for this problem. Under some mild assumptions we show that generalized sampling cannot be outperformed in terms of approximation quality by more than a constant factor. Moreover, for the class of so-called perfect methods, any attempt to lower the sampling ratio below a certain critical threshold necessarily results in exponential ill-conditioning. Thus generalized sampling provides a nearly-optimal solution to this problem.", "Abstract We introduce a simple and efficient method to reconstruct an element of a Hilbert space in terms of an arbitrary finite collection of linearly independent reconstruction vectors, given a finite number of its samples with respect to any Riesz basis. As we establish, provided the dimension of the reconstruction space is chosen suitably in relation to the number of samples, this procedure can be implemented in a completely numerically stable manner. Moreover, the accuracy of the resulting approximation is determined solely by the choice of reconstruction basis, meaning that reconstruction vectors can be readily tailored to the particular problem at hand. An important example of this approach is the accurate recovery of a piecewise analytic function from its first few Fourier coefficients. 
Whilst the standard Fourier projection suffers from the Gibbs phenomenon, by reconstructing in a piecewise polynomial basis we obtain an approximation with root-exponential accuracy in terms of the number of Fourier samples and exponential accuracy in terms of the degree of the reconstruction. Numerical examples illustrate the advantage of this approach over other existing methods.", "Abstract In this paper we study the general reconstruction of a compactly supported function from its Fourier coefficients using compactly supported shearlet systems. We assume that only finitely many Fourier samples of the function are accessible and based on this finite collection of measurements an approximation is sought in a finite dimensional shearlet reconstruction space. We analyze this sampling and reconstruction process by a recently introduced method called generalized sampling. In particular by studying the stable sampling rate of generalized sampling we then show stable recovery of the signal is possible using an almost linear rate. Furthermore, we compare the result to the previously obtained rates for wavelets.", "We introduce a generalized framework for sampling and reconstruction in separable Hilbert spaces. Specifically, we establish that it is always possible to stably reconstruct a vector in an arbitrary Riesz basis from sufficiently many of its samples in any other Riesz basis. This framework can be viewed as an extension of the well-known consistent reconstruction technique (). However, whilst the latter imposes stringent assumptions on the reconstruction basis, and may in practice be unstable, our framework allows for recovery in any (Riesz) basis in a manner that is completely stable.", "" ] }
1706.06444
2663767485
We study reconstruction operators on a Hilbert space that are exact on a given reconstruction subspace. Among those the reconstruction operator obtained by the least squares fit has the smallest operator norm, and therefore is most stable with respect to noisy measurements. We then construct the operator with the smallest possible quasi-optimality constant, which is the most stable with respect to a systematic error appearing before the sampling process (model uncertainty). We describe how to vary continuously between the two reconstruction methods, so that we can trade stability for quasi-optimality. As an application we study the reconstruction of a compactly supported function from nonuniform samples of its Fourier transform.
In all these problems the canonical approximation or reconstruction is by means of a least squares fit, namely The weights @math are usually chosen to be @math , but in many contexts it has turned out to be useful to use weights as a kind of cheap preconditioner. The use of such weights in sampling theory goes back at least to @cite_4 @cite_37 , and has become standard in the recent work on (Fourier) sampling, see for example @cite_18 @cite_41 @cite_20 @cite_28 @cite_25 @cite_39 @cite_15 @cite_2 @cite_5 .
{ "cite_N": [ "@cite_37", "@cite_18", "@cite_4", "@cite_41", "@cite_28", "@cite_39", "@cite_2", "@cite_5", "@cite_15", "@cite_25", "@cite_20" ], "mid": [ "", "", "48974879", "2015784160", "1993016110", "2962865631", "2963732050", "2004421506", "", "2592413265", "" ], "abstract": [ "", "", "This chapter presents methods for the reconstruction of bandlimited functions from irregular samples. The first part discusses algorithms for a constructive solution of the irregular sampling problem. An important aspect of these algorithms is the explicit knowledge of the constants involved and efficient error estimates. The second part discusses the numerical implementation of these algorithms and compares the performance of various reconstruction methods. Although most of the material has already appeared in print, several new results are included: (a) a new method to estimate frame bounds, (b) a reconstruction of band-limited functions from partial information, which is not just the samples; (c) a new result on the complete reconstruction of band-limited functions from local averages; (d) a systematic exposition of recent experimental results.", "In many Applications one seeks to recover an entire function of exponential type from its non-uniformly spaced samples. Whereas the mathematical theory usually addresses the question of when such a function in L 2 (R) can be recovered, numerical methods operate with a finite-dimensional model. The numerical reconstruction or approximation of the original function amounts to the solution of a large linear system. We show that the solutions of a particularly efficient discrete model in which the data are fit by trigonometric polynomials converge to the solution of the original infinite-dimensional reconstruction problem. This legitimatizes the numerical computations and explains why the algorithms employed produce reasonable results. 
The main mathematical result is a new type of approximation theorem for entire functions of exponential type from a finite number of values. From another point of view our approach provides a new method for proving sampling theorems.", "We give an overview of recent developments in the problem of reconstructing a band-limited signal from nonuniform sampling from a numerical analysis view point. It is shown that the appropriate design of the finite-dimensional model plays a key role in the numerical solution of the nonuniform sampling problem. In the one approach (often proposed in the literature) the finite-dimensional model leads to an ill-posed problem even in very simple situations. The other approach that we consider leads to a well-posed problem that preserves important structural properties of the original infinite-dimensional problem and gives rise to efficient numerical algorithms. Furthermore, a fast multilevel algorithm is presented that can reconstruct signals of unknown bandwidth from noisy nonuniformly spaced samples. We also discuss the design of efficient regularization methods for ill-conditioned reconstruction problems. Numerical examples from spectroscopy and exploration geophysics demonstrate the performance of the proposed methods.", "Abstract In this paper, we consider the problem of recovering a compactly supported multivariate function from a collection of pointwise samples of its Fourier transform taken nonuniformly. We do this by using the concept of weighted Fourier frames. A seminal result of Beurling shows that sampling points give rise to a classical Fourier frame provided they are relatively separated and of sufficient density. However, this result does not allow for arbitrary clustering of sampling points, as is often the case in practice. Whilst keeping the density condition sharp and dimension independent, our first result removes the separation condition and shows that density alone suffices. 
However, this result does not lead to estimates for the frame bounds. A known result of Grochenig provides explicit estimates, but only subject to a density condition that deteriorates linearly with dimension. In our second result we improve these bounds by reducing the dimension dependence. In particular, we provide explicit frame bounds which are dimensionless for functions having compact support contained in a sphere. Next, we demonstrate how our two main results give new insight into a reconstruction algorithm—based on the existing generalized sampling framework—that allows for stable and quasi-optimal reconstruction in any particular basis from a finite collection of samples. Finally, we construct sufficiently dense sampling schemes that are often used in practice—jittered, radial and spiral sampling schemes—and provide several examples illustrating the effectiveness of our approach when tested on these schemes.", "We provide sufficient density condition for a set of nonuniform samples to give rise to a set of sampling for multivariate bandlimited functions when the measurements consist of pointwise evaluations of a function and its first k derivatives. Along with explicit estimates of corresponding frame bounds, we derive the explicit density bound and show that, as k increases, it grows linearly in (k+1 ) with the constant of proportionality (1 e ). Seeking larger gap conditions, we also prove a multivariate perturbation result for nonuniform samples that are sufficiently close to sets of sampling, e.g. to uniform samples taken at (k+1 ) times the Nyquist rate. Additionally, in the univariate setting, we consider a related problem of so-called nonuniform bunched sampling, where in each sampling interval (s+1 ) bunched measurements of a function are taken and the sampling intervals are permitted to be of different length. 
We derive an explicit density condition which grows linearly in (s+1 ) for large s, with the constant of proportionality depending on the width of the bunches. The width of the bunches is allowed to be arbitrarily small, and moreover, for sufficiently narrow bunches and sufficiently large s, we obtain the same result as in the case of univariate sampling with s derivatives.", "This article discusses modern techniques for nonuniform sampling and reconstruction of functions in shift-invariant spaces. It is a survey as well as a research paper and provides a unified framework for uniform and nonuniform sampling and reconstruction in shift-invariant subspaces by bringing together wavelet theory, frame theory, reproducing kernel Hilbert spaces, approximation theory, amalgam spaces, and sampling. Inspired by applications taken from communication, astronomy, and medicine, the following aspects will be emphasized: (a) The sampling problem is well defined within the setting of shift-invariant spaces. (b) The general theory works in arbitrary dimension and for a broad class of generators. (c) The reconstruction of a function from any sufficiently dense nonuniform sampling set is obtained by efficient iterative algorithms. These algorithms converge geometrically and are robust in the presence of noise. (d) To model the natural decay conditions of real signals and images, the sampling theory is developed in weighted L p-spaces.", "", "We consider the problem of recovering a compactly supported function from a finite collection of pointwise samples of its Fourier transform taken nonuniformly. First, we show that under suitable conditions on the sampling frequencies---specifically, their density and bandwidth---it is possible to recover any such function @math in a stable and accurate manner in any given finite-dimensional subspace, in particular, one which is well suited for approximating @math . 
In practice, this is carried out using so-called nonuniform generalized sampling (NUGS). Second, we consider approximation spaces in one dimension consisting of compactly supported wavelets. We prove that a linear scaling of the dimension of the space with the sampling bandwidth is both necessary and sufficient for stable and accurate recovery. Thus wavelets are, up to constant factors, optimal spaces for reconstruction.", "" ] }
1706.06542
2950905275
In this work, we aim at developing an extractive summarizer in the multi-document setting. We implement a rank based sentence selection using continuous vector representations along with key-phrases. Furthermore, we propose a model to tackle summary coherence for increasing readability. We conduct experiments on the Document Understanding Conference (DUC) 2004 datasets using ROUGE toolkit. Our experiments demonstrate that the methods bring significant improvements over the state of the art methods in terms of informativity and coherence.
Over the past decade, several extractive approaches to automatic summary generation have been developed, drawing on a number of machine learning, graph-based, and optimization techniques. LexRank @cite_8 and TextRank @cite_18 are graph-based methods for computing sentence importance for text summarization. The RegSum system @cite_3 employs a supervised model for predicting word importance. Treating multi-document summarization as a submodular maximization problem, as done by @cite_13 , has also proven successful. Unfortunately, none of the above systems accounts for the coherence of the final extracted summary.
{ "cite_N": [ "@cite_18", "@cite_13", "@cite_3", "@cite_8" ], "mid": [ "1525595230", "2144933361", "", "2110693578" ], "abstract": [ "In this paper, the authors introduce TextRank, a graph-based ranking model for text processing, and show how this model can be successfully used in natural language applications.", "We design a class of submodular functions meant for document summarization tasks. These functions each combine two terms, one which encourages the summary to be representative of the corpus, and the other which positively rewards diversity. Critically, our functions are monotone nondecreasing and submodular, which means that an efficient scalable greedy optimization scheme has a constant factor guarantee of optimality. When evaluated on DUC 2004-2007 corpora, we obtain better than existing state-of-art results in both generic and query-focused document summarization. Lastly, we show that several well-established methods for document summarization correspond, in fact, to submodular function optimization, adding further evidence that submodular functions are a natural fit for document summarization.", "", "We introduce a stochastic graph-based method for computing relative importance of textual units for Natural Language Processing. We test the technique on the problem of Text Summarization (TS). Extractive TS relies on the concept of sentence salience to identify the most important sentences in a document or set of documents. Salience is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We consider a new approach, LexRank, for computing sentence importance based on the concept of eigenvector centrality in a graph representation of sentences. In this model, a connectivity matrix based on intra-sentence cosine similarity is used as the adjacency matrix of the graph representation of sentences. 
Our system, based on LexRank ranked in first place in more than one task in the recent DUC 2004 evaluation. In this paper we present a detailed analysis of our approach and apply it to a larger data set including data from earlier DUC evaluations. We discuss several methods to compute centrality using the similarity graph. The results show that degree-based methods (including LexRank) outperform both centroid-based methods and other systems participating in DUC in most of the cases. Furthermore, the LexRank with threshold method outperforms the other degree-based techniques including continuous LexRank. We also show that our approach is quite insensitive to the noise in the data that may result from an imperfect topical clustering of documents." ] }
1706.06384
2951259575
Since its unveiling in 2011, schema.org has become the de facto standard for publishing semantically described structured data on the web, typically in the form of web page annotations. The increasing adoption of schema.org facilitates the growth of the web of data, as well as the development of automated agents that operate on this data. schema.org is a large heterogeneous vocabulary that covers many domains. This is obviously not a bug, but a feature, since schema.org aims to describe almost everything on the web, and the web is huge. However, the heterogeneity of schema.org may cause a side effect, which is the challenge of picking the right classes and properties for an annotation in a certain domain, as well as keeping the annotation semantically consistent. In this work, we introduce our rule based approach and an implementation of it for validating schema.org annotations from two aspects: (a) the completeness of the annotations in terms of a specified domain, (b) the semantic consistency of the values based on pre-defined rules. We demonstrate our approach in the tourism domain.
We introduce the notion of a "domain" and a simple specification of it for schema.org, which adopts a similarly nested definition of constraints that restricts classes and properties in relation to the other classes of which they are expected types. The concept of selecting a subset of schema.org appears in @cite_7 , but to the best of our knowledge, the domain selection of the editor described there is limited to the selection of classes. We propose a different domain specification approach that includes selecting a subset of properties and restricting the range of those properties to a subset of the subclasses of the range defined by schema.org. The importance of this restriction is described in Section in more detail. Additionally, our validator brings domain definitions and semantic consistency rules together in one holistic tool.
{ "cite_N": [ "@cite_7" ], "mid": [ "151101049" ], "abstract": [ "Structured data is picking up on the Web, particularly in the search world. Schema.org, jointly initiated by Google, Microsoft, and Yahoo! provides a hierarchical set of vocabularies to embed metadata in HTML pages for an enhanced search and browsing experience. RDFa-Lite, Microdata and JSON-LD as lower semantic techniques have gained more attention by Web users to markup Web pages and even emails based on Schema.org. However, from the user interface point of view, we still lack user-friendly tools that facilitate the process of structured content authoring. The majority of information still is contained in and exchanged using unstructured documents, such as Web pages, text documents, images and videos. This can also not be expected to change, since text, images and videos are the natural way how humans interact with information. In this paper we present RDFaCE as an implementation of WYSIWYM (What-You-See-Is-What-You-Mean) concept for direct manipulation of semantically structured content in conventional modalities. RDFaCE utilizes on-the-fly form generation based on Schema.org vocabulary for embedding metadata within Web documents. Furthermore, it employs external NLP services to enable automatic annotation of entities and to suggest URIs for entities. RDFaCE is written as a plugin for TinyMCE WYSIWYG editor thereby can be easily integrated into existing content management systems." ] }
1706.05952
2949953504
We address the problem of localisation of objects as bounding boxes in images and videos with weak labels. This weakly supervised object localisation problem has been tackled in the past using discriminative models where each object class is localised independently from other classes. In this paper, a novel framework based on Bayesian joint topic modelling is proposed, which differs significantly from the existing ones in that: (1) All foreground object classes are modelled jointly in a single generative model that encodes multiple object co-existence so that "explaining away" inference can resolve ambiguity and lead to better learning and localisation. (2) Image backgrounds are shared across classes to better learn varying surroundings and "push out" objects of interest. (3) Our model can be learned with a mixture of weakly labelled and unlabelled data, allowing the large volume of unlabelled images on the Internet to be exploited for learning. Moreover, the Bayesian formulation enables the exploitation of various types of prior knowledge to compensate for the limited supervision offered by weakly labelled data, as well as Bayesian domain adaptation for transfer learning. Extensive experiments on the PASCAL VOC, ImageNet and YouTube-Object videos datasets demonstrate the effectiveness of our Bayesian joint model for weakly supervised object localisation.
Domain adaptation @cite_54 methods aim to exploit prior knowledge from a source domain dataset to improve performance and/or reduce the amount of annotation required in a target domain dataset (see @cite_46 for a review). Many conventional approaches are based on SVMs, in which the target domain is treated as a perturbed version of the source domain, so learning in the target domain proceeds by regularising it toward the source @cite_39 . More recently, transductive SVMs @cite_65 , Multiple Kernel Learning (MKL) @cite_20 , and instance constraints @cite_48 have been exploited. In contrast to these discriminative approaches, we exploit a simple and efficient Bayesian adaptation approach similar in spirit to @cite_54 @cite_24 . Posterior parameters from the source domain are transferred as priors for the target, which are then adapted based on observed target domain data via Bayesian learning. Going beyond simple within-modality dataset bias, recent studies @cite_10 @cite_70 have adapted object detectors from video to images or the reverse. We show that our approach can achieve the image-video domain transfer within a single framework.
{ "cite_N": [ "@cite_70", "@cite_48", "@cite_54", "@cite_65", "@cite_39", "@cite_24", "@cite_46", "@cite_10", "@cite_20" ], "mid": [ "2133434696", "", "2097342496", "2107250100", "1978920452", "", "2136504847", "1973054923", "2149676790" ], "abstract": [ "Typical object detectors trained on images perform poorly on video, as there is a clear distinction in domain between the two types of data. In this paper, we tackle the problem of adapting object detectors learned from images to work well on videos. We treat the problem as one of unsupervised domain adaptation, in which we are given labeled data from the source domain (image), but only unlabeled data from the target domain (video). Our approach, self-paced domain adaptation, seeks to iteratively adapt the detector by re-training the detector with automatically discovered target domain examples, starting with the easiest first. At each iteration, the algorithm adapts by considering an increased number of target domain examples, and a decreased number of source domain examples. To discover target domain examples from the vast amount of video data, we introduce a simple, robust approach that scores trajectory tracks instead of bounding boxes. We also show how rich and expressive features specific to the target domain can be incorporated under the same framework. We show promising results on the 2011 TRECVID Multimedia Event Detection [1] and LabelMe Video [2] datasets that illustrate the benefit of our approach to adapt object detectors to video.", "", "In recent years, many research works have been carried out to recognize human actions from video clips. To learn an effective action classifier, most of the previous approaches rely on enough training labels. When being required to recognize the action in a different dataset, these approaches have to re-train the model using new labels. However, labeling video sequences is a very tedious and time-consuming task, especially when detailed spatial locations and time durations are required. In this paper, we propose an adaptive action detection approach which reduces the requirement of training labels and is able to handle the task of cross-dataset action detection with few or no extra training labels. Our approach combines model adaptation and action detection into a Maximum a Posterior (MAP) estimation framework, which explores the spatial-temporal coherence of actions and makes good use of the prior information which can be obtained without supervision. Our approach obtains state-of-the-art results on KTH action dataset using only 50 of the training labels in tradition approaches. Furthermore, we show that our approach is effective for the cross-dataset detection which adapts the model trained on KTH to two other challenging datasets1.", "Most current image categorization methods require large collections of manually annotated training examples to learn accurate visual recognition models. The time-consuming human labeling effort effectively limits these approaches to recognition problems involving a small number of different object classes. In order to address this shortcoming, in recent years several authors have proposed to learn object classifiers from weakly-labeled Internet images, such as photos retrieved by keyword-based image search engines. While this strategy eliminates the need for human supervision, the recognition accuracies of these methods are considerably lower than those obtained with fully-supervised approaches, because of the noisy nature of the labels associated to Web data. In this paper we investigate and compare methods that learn image classifiers by combining very few manually annotated examples (e.g., 1-10 images per class) and a large number of weakly-labeled Web photos retrieved using keyword-based image search. We cast this as a domain adaptation problem: given a few strongly-labeled examples in a target domain (the manually annotated examples) and many source domain examples (the weakly-labeled Web photos), learn classifiers yielding small generalization error on the target domain. Our experiments demonstrate that, for the same number of strongly-labeled examples, our domain adaptation approach produces significant recognition rate improvements over the best published results (e.g., 65 better when using 5 labeled training examples per class) and that our classifiers are one order of magnitude faster to learn and to evaluate than the best competing method, despite our use of large weakly-labeled data sets.", "Many multimedia applications can benefit from techniques for adapting existing classifiers to data with different distributions. One example is cross-domain video concept detection which aims to adapt concept classifiers across various video domains. In this paper, we explore two key problems for classifier adaptation: (1) how to transform existing classifier(s) into an effective classifier for a new dataset that only has a limited number of labeled examples, and (2) how to select the best existing classifier(s) for adaptation. For the first problem, we propose Adaptive Support Vector Machines (A-SVMs) as a general method to adapt one or more existing classifiers of any type to the new dataset. It aims to learn the \"delta function\" between the original and adapted classifier using an objective function similar to SVMs. For the second problem, we estimate the performance of each existing classifier on the sparsely-labeled new dataset by analyzing its score distribution and other meta features, and select the classifiers with the best estimated performance. The proposed method outperforms several baseline and competing methods in terms of classification accuracy and efficiency in cross-domain concept detection in the TRECVID corpus.", "", "Door lock apparatus in which a door latch mechanism is operated by inner and outer door handles coupled to a latch shaft extending through the latch mechanism. Handles are coupled to ends of latch shaft by coupling devices enabling door to be locked from the inside to prevent entry from the outside but can still be opened from the inside by normal operation of outside handle. Inside coupling device has limited lost-motion which is used to operate cam device to unlock the door on actuation of inner handles.", "Object detectors are typically trained on a large set of still images annotated by bounding-boxes. This paper introduces an approach for learning object detectors from real-world web videos known only to contain objects of a target class. We propose a fully automatic pipeline that localizes objects in a set of videos of the class and learns a detector for it. The approach extracts candidate spatio-temporal tubes based on motion segmentation and then selects one tube per video jointly over all videos. To compare to the state of the art, we test our detector on still images, i.e., Pascal VOC 2007. We observe that frames extracted from web videos can differ significantly in terms of quality to still images taken by a good camera. Thus, we formulate the learning from videos as a domain adaptation task. We show that training from a combination of weakly annotated videos and fully annotated still images using domain adaptation improves the performance of a detector trained from still images alone.", "The vast majority of transfer learning methods proposed in the visual recognition domain over the last years addresses the problem of object category detection, assuming a strong control over the priors from which transfer is done. This is a strict condition, as it concretely limits the use of this type of approach in several settings: for instance, it does not allow in general to use off-the-shelf models as priors. Moreover, the lack of a multiclass formulation for most of the existing transfer learning algorithms prevents using them for object categorization problems, where their use might be beneficial, especially when the number of categories grows and it becomes harder to get enough annotated data for training standard learning methods. This paper presents a multiclass transfer learning algorithm that allows to take advantage of priors built over different features and with different learning methods than the one used for learning the new task. We use the priors as experts, and transfer their outputs to the new incoming samples as additional information. We cast the learning problem within the Multi Kernel Learning framework. The resulting formulation solves efficiently a joint optimization problem that determines from where and how much to transfer, with a principled multiclass formulation. Extensive experiments illustrate the value of this approach." ] }
1706.05952
2949953504
We address the problem of localisation of objects as bounding boxes in images and videos with weak labels. This weakly supervised object localisation problem has been tackled in the past using discriminative models where each object class is localised independently from other classes. In this paper, a novel framework based on Bayesian joint topic modelling is proposed, which differs significantly from the existing ones in that: (1) All foreground object classes are modelled jointly in a single generative model that encodes multiple object co-existence so that "explaining away" inference can resolve ambiguity and lead to better learning and localisation. (2) Image backgrounds are shared across classes to better learn varying surroundings and "push out" objects of interest. (3) Our model can be learned with a mixture of weakly labelled and unlabelled data, allowing the large volume of unlabelled images on the Internet to be exploited for learning. Moreover, the Bayesian formulation enables the exploitation of various types of prior knowledge to compensate for the limited supervision offered by weakly labelled data, as well as Bayesian domain adaptation for transfer learning. Extensive experiments on the PASCAL VOC, ImageNet and YouTube-Object videos datasets demonstrate the effectiveness of our Bayesian joint model for weakly supervised object localisation.
An approach similar in spirit to ours, in the sense of jointly learning a model for all classes, is that of Cabral et al. @cite_41 . This study formulates multi-label image classification as a matrix completion problem, which is also related to our factoring of images into a mixture of topics. However, we add two key components: (i) a stronger notion of the spatial location and extent of each object, and (ii) the ability to encode human knowledge or transferred knowledge through a Bayesian prior. As a result, we are able to address more challenging data than @cite_41 , such as PASCAL VOC. Multi-instance multi-label (MIML) @cite_67 approaches provide a mechanism to jointly learn a model for all classes @cite_49 @cite_17 . However, because these methods must search a discrete space (of positive instance subsets), their optimisation problem is harder than the smooth probabilistic optimisation here. Finally, while more elaborate joint generative learning methods @cite_53 @cite_33 exist, they are more complicated than necessary for WSOL and do not scale to the size of data required here.
{ "cite_N": [ "@cite_67", "@cite_33", "@cite_41", "@cite_53", "@cite_49", "@cite_17" ], "mid": [ "2154840533", "2106624428", "", "2033012377", "2135533176", "" ], "abstract": [ "In this paper, we address the problem of multi-instance multi-label learning (MIML) where each example is associated with not only multiple instances but also multiple class labels. In our novel approach, given an MIML example, each instance in the example is only associated with a single label and the label set of the example is the aggregation of all instance labels. Many real-world tasks such as scene classification, text categorization and gene sequence encoding can be properly formalized under our proposed approach. We formulate our MIML problem as a combination of two optimizations: (1) a quadratic programming (QP) that minimizes the empirical risk with L2-norm regularization, and (2) an integer programing (IP) assigning each instance to a single label. We also present an efficient method combining the stochastic gradient decent and alternating optimization approaches to solve our QP and IP optimizations. In our experiments with both an artificially generated data set and real-world applications, i.e. scene classification and text categorization, our proposed method achieves superior performance over existing state-of-the-art MIML methods such as MIMLBOOST, MIMLSVM, M @math MIML and MIMLRBF.", "Given an image, we propose a hierarchical generative model that classifies the overall scene, recognizes and segments each object component, as well as annotates the image with a list of tags. To our knowledge, this is the first model that performs all three tasks in one coherent framework. For instance, a scene of a polo game' consists of several visual objects such as human', horse', grass', etc. In addition, it can be further annotated with a list of more abstract (e.g. dusk') or visually less salient (e.g. saddle') tags. Our generative model jointly explains images through a visual model and a textual model. Visually relevant objects are represented by regions and patches, while visually irrelevant textual annotations are influenced directly by the overall scene class. We propose a fully automatic learning framework that is able to learn robust scene models from noisy Web data such as images and user tags from Flickr.com. We demonstrate the effectiveness of our framework by automatically classifying, annotating and segmenting images from eight classes depicting sport scenes. In all three tasks, our model significantly outperforms state-of-the-art algorithms.", "", "We develop hierarchical, probabilistic models for objects, the parts composing them, and the visual scenes surrounding them. Our approach couples topic models originally developed for text analysis with spatial transformations, and thus consistently accounts for geometric constraints. By building integrated scene models, we may discover contextual relationships, and better exploit partially labeled training images. We first consider images of isolated objects, and show that sharing parts among object categories improves detection accuracy when learning from few examples. Turning to multiple object scenes, we propose nonparametric models which use Dirichlet processes to automatically learn the number of parts underlying each object category, and objects composing each scene. The resulting transformed Dirichlet process (TDP) leads to Monte Carlo algorithms which simultaneously segment and recognize objects in street and office scenes.", "In this paper, we formalize multi-instance multi-label learning, where each training example is associated with not only multiple instances but also multiple class labels. Such a problem can occur in many real-world tasks, e.g. an image usually contains multiple patches each of which can be described by a feature vector, and the image can belong to multiple categories since its semantics can be recognized in different ways. We analyze the relationship between multi-instance multi-label learning and the learning frameworks of traditional supervised learning, multi-instance learning and multi-label learning. Then, we propose the MIMLBOOST and MIMLSVM algorithms which achieve good performance in an application to scene classification.", "" ] }
1706.05952
2949953504
We address the problem of localisation of objects as bounding boxes in images and videos with weak labels. This weakly supervised object localisation problem has been tackled in the past using discriminative models where each object class is localised independently from other classes. In this paper, a novel framework based on Bayesian joint topic modelling is proposed, which differs significantly from the existing ones in that: (1) All foreground object classes are modelled jointly in a single generative model that encodes multiple object co-existence so that "explaining away" inference can resolve ambiguity and lead to better learning and localisation. (2) Image backgrounds are shared across classes to better learn varying surroundings and "push out" objects of interest. (3) Our model can be learned with a mixture of weakly labelled and unlabelled data, allowing the large volume of unlabelled images on the Internet to be exploited for learning. Moreover, the Bayesian formulation enables the exploitation of various types of prior knowledge to compensate for the limited supervision offered by weakly labelled data, as well as Bayesian domain adaptation for transfer learning. Extensive experiments on the PASCAL VOC, ImageNet and YouTube-Object videos datasets demonstrate the effectiveness of our Bayesian joint model for weakly supervised object localisation.
Combining multiple complementary cues has been shown to improve classification performance in object recognition @cite_61 @cite_47 @cite_26 @cite_20 . Two simple feature fusion methods have been widely used in existing work: early fusion, which concatenates low-level features @cite_22 , and late (score-level) fusion @cite_35 @cite_31 . Multiple kernel learning (MKL) approaches have attracted attention as a principled mid-level approach to combining features @cite_26 @cite_47 . Similarly to MKL, our framework provides a principled and jointly-learned mid-level probabilistic fusion via its generative process.
{ "cite_N": [ "@cite_61", "@cite_35", "@cite_31", "@cite_26", "@cite_22", "@cite_47", "@cite_20" ], "mid": [ "", "2061629163", "2020477327", "2048041841", "2107200795", "", "2149676790" ], "abstract": [ "", "Learning a new object class from cluttered training images is very challenging when the location of object instances is unknown, i.e. in a weakly supervised setting. Many previous works require objects covering a large portion of the images. We present a novel approach that can cope with extensive clutter as well as large scale and appearance variations between object instances. To make this possible we exploit generic knowledge learned beforehand from images of other classes for which location annotation is available. Generic knowledge facilitates learning any new class from weakly supervised images, because it reduces the uncertainty in the location of its object instances. We propose a conditional random field that starts from generic knowledge and then progressively adapts to the new class. Our approach simultaneously localizes object instances while learning an appearance model specific for the class. We demonstrate this on several datasets, including the very challenging Pascal VOC 2007. Furthermore, our method allows training any state-of-the-art object detector in a weakly supervised fashion, although it would normally require object location annotations.", "A conventional approach to learning object detectors uses fully supervised learning techniques which assumes that a training image set with manual annotation of object bounding boxes are provided. The manual annotation of objects in large image sets is tedious and unreliable. Therefore, a weakly supervised learning approach is desirable, where the training set needs only binary labels regarding whether an image contains the target object class. In the weakly supervised approach a detector is used to iteratively annotate the training set and learn the object model. We present a novel weakly supervised learning framework for learning an object detector. Our framework incorporates a new initial annotation model to start the iterative learning of a detector and a model drift detection method that is able to detect and stop the iterative learning when the detector starts to drift away from the objects of interest. We demonstrate the effectiveness of our approach on the challenging PASCAL 2007 dataset.", "Several object categorization algorithms use kernel methods over multiple cues, as they offer a principled approach to combine multiple cues, and to obtain state-of-the-art performance. A general drawback of these strategies is the high computational cost during training, that prevents their application to large-scale problems. They also do not provide theoretical guarantees on their convergence rate. Here we present a Multiclass Multi Kernel Learning (MKL) algorithm that obtains state-of-the-art performance in a considerably lower training time. We generalize the standard MKL formulation to introduce a parameter that allows us to decide the level of sparsity of the solution. Thanks to this new setting, we can directly solve the problem in the primal formulation. We prove theoretically and experimentally that 1) our algorithm has a faster convergence rate as the number of kernels grow; 2) the training complexity is linear in the number of training examples; 3) very few iterations are enough to reach good solutions. Experiments on three standard benchmark databases support our claims.", "We propose a principled probabilistic formulation of object saliency as a sampling problem. This novel formulation allows us to learn, from a large corpus of unlabelled images, which patches of an image are of the greatest interest and most likely to correspond to an object. We then sample the object saliency map to propose object locations. We show that using only a single object location proposal per image, we are able to correctly select an object in over 42 of the images in the Pascal VOC 2007 dataset, substantially outperforming existing approaches. Furthermore, we show that our object proposal can be used as a simple unsupervised approach to the weakly supervised annotation problem. Our simple unsupervised approach to annotating objects of interest in images achieves a higher annotation accuracy than most weakly supervised approaches.", "", "The vast majority of transfer learning methods proposed in the visual recognition domain over the last years addresses the problem of object category detection, assuming a strong control over the priors from which transfer is done. This is a strict condition, as it concretely limits the use of this type of approach in several settings: for instance, it does not allow in general to use off-the-shelf models as priors. Moreover, the lack of a multiclass formulation for most of the existing transfer learning algorithms prevents using them for object categorization problems, where their use might be beneficial, especially when the number of categories grows and it becomes harder to get enough annotated data for training standard learning methods. This paper presents a multiclass transfer learning algorithm that allows to take advantage of priors built over different features and with different learning methods than the one used for learning the new task. We use the priors as experts, and transfer their outputs to the new incoming samples as additional information. We cast the learning problem within the Multi Kernel Learning framework. The resulting formulation solves efficiently a joint optimization problem that determines from where and how much to transfer, with a principled multiclass formulation. Extensive experiments illustrate the value of this approach." ] }
1706.05952
2949953504
We address the problem of localisation of objects as bounding boxes in images and videos with weak labels. This weakly supervised object localisation problem has been tackled in the past using discriminative models where each object class is localised independently from other classes. In this paper, a novel framework based on Bayesian joint topic modelling is proposed, which differs significantly from the existing ones in that: (1) All foreground object classes are modelled jointly in a single generative model that encodes multiple object co-existence so that "explaining away" inference can resolve ambiguity and lead to better learning and localisation. (2) Image backgrounds are shared across classes to better learn varying surroundings and "push out" objects of interest. (3) Our model can be learned with a mixture of weakly labelled and unlabelled data, allowing the large volume of unlabelled images on the Internet to be exploited for learning. Moreover, the Bayesian formulation enables the exploitation of various types of prior knowledge to compensate for the limited supervision offered by weakly labelled data, as well as Bayesian domain adaptation for transfer learning. Extensive experiments on the PASCAL VOC, ImageNet and YouTube-Object videos datasets demonstrate the effectiveness of our Bayesian joint model for weakly supervised object localisation.
In summary, this paper makes the following contributions: (1) We propose the novel concept of jointly modelling all object classes and background for weakly supervised object localisation. (2) We formulate a novel Bayesian topic model suitable for object localisation, which can use various types of prior knowledge, including an inter-category appearance similarity prior. (3) Our Bayesian prior enables the model to easily borrow available domain knowledge from existing auxiliary datasets and adapt it to a target domain. (4) We further exploit unlabelled data to improve weakly supervised object localisation. (5) Extensive experiments on PASCAL VOC 2007 @cite_43 and ImageNet @cite_5 show that our model surpasses existing competitors and achieves state-of-the-art performance. A preliminary version of our work was described in @cite_6 .
{ "cite_N": [ "@cite_43", "@cite_5", "@cite_6" ], "mid": [ "", "2108598243", "2079267072" ], "abstract": [ "", "The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.", "We address the problem of localisation of objects as bounding boxes in images with weak labels. This weakly supervised object localisation problem has been tackled in the past using discriminative models where each object class is localised independently from other classes. We propose a novel framework based on Bayesian joint topic modelling. Our framework has three distinctive advantages over previous works: (1) All object classes and image backgrounds are modelled jointly together in a single generative model so that \"explaining away\" inference can resolve ambiguity and lead to better learning and localisation. (2) The Bayesian formulation of the model enables easy integration of prior knowledge about object appearance to compensate for limited supervision. (3) Our model can be learned with a mixture of weakly labelled and unlabelled data, allowing the large volume of unlabelled images on the Internet to be exploited for learning. Extensive experiments on the challenging VOC dataset demonstrate that our approach outperforms the state-of-the-art competitors." ] }
1706.06169
2683784395
This paper describes our approach to the DSTL Satellite Imagery Feature Detection challenge run by Kaggle. The primary goal of this challenge is accurate semantic segmentation of different classes in satellite imagery. Our approach is based on an adaptation of fully convolutional neural network for multispectral data processing. In addition, we defined several modifications to the training objective and overall training pipeline, e.g. boundary effect estimation, also we discuss usage of data augmentation strategies and reflectance indices. Our solution scored third place out of 419 entries. Its accuracy is comparable to the first two places, but unlike those solutions, it doesn't rely on complex ensembling techniques and thus can be easily scaled for deployment in production as a part of automatic feature labeling systems for satellite imagery analysis.
The most successful state-of-the-art deep learning method is the Fully Convolutional Network (FCN) @cite_4 . The main idea of this approach is to use a CNN as a powerful feature extractor while replacing the fully connected layers with convolutional ones to output spatial maps instead of classification scores. These maps are upsampled to produce a dense per-pixel output. This method allows training a CNN end-to-end for segmentation with input images of arbitrary size. This approach achieved a notable improvement in segmentation accuracy over common methods on standard datasets like PASCAL VOC @cite_11 .
{ "cite_N": [ "@cite_4", "@cite_11" ], "mid": [ "2952632681", "2037227137" ], "abstract": [ "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.", "The Pascal Visual Object Classes (VOC) challenge consists of two components: (i) a publicly available dataset of images together with ground truth annotation and standardised evaluation software; and (ii) an annual competition and workshop. There are five challenges: classification, detection, segmentation, action classification, and person layout. In this paper we provide a review of the challenge from 2008---2012. 
The paper is intended for two audiences: algorithm designers, researchers who want to see what the state of the art is, as measured by performance on the VOC datasets, along with the limitations and weak points of the current generation of algorithms; and, challenge designers, who want to see what we as organisers have learnt from the process and our recommendations for the organisation of future challenges. To analyse the performance of submitted algorithms on the VOC datasets we introduce a number of novel evaluation methods: a bootstrapping method for determining whether differences in the performance of two algorithms are significant or not; a normalised average precision so that performance can be compared across classes with different proportions of positive instances; a clustering method for visualising the performance across multiple algorithms so that the hard and easy images can be identified; and the use of a joint classifier over the submitted algorithms in order to measure their complementarity and combined performance. We also analyse the community's progress through time using the methods of (Proceedings of European Conference on Computer Vision, 2012) to identify the types of occurring errors. We conclude the paper with an appraisal of the aspects of the challenge that worked well, and those that could be improved in future challenges." ] }
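The "convolutionalization" idea behind FCN — that a fully connected classifier applied at every spatial position is exactly a 1x1 convolution — can be illustrated with a minimal numpy sketch (the shapes and random weights are arbitrary toy values, not the actual FCN architecture):

```python
import numpy as np

# A fully connected layer with weight matrix W applied independently at each
# spatial position of a feature map is equivalent to a 1x1 convolution --
# the key step in converting a classification CNN into an FCN.
rng = np.random.default_rng(0)
C_in, C_out, H, W_sp = 4, 3, 5, 5
feat = rng.standard_normal((C_in, H, W_sp))   # CNN feature map (C, H, W)
W = rng.standard_normal((C_out, C_in))        # "fully connected" weights

# Fully connected view: classify each spatial position separately.
fc_out = np.stack([W @ feat[:, i, j] for i in range(H) for j in range(W_sp)])
fc_out = fc_out.reshape(H, W_sp, C_out).transpose(2, 0, 1)

# 1x1 convolution view: one contraction over the channel axis.
conv_out = np.tensordot(W, feat, axes=([1], [0]))

assert np.allclose(fc_out, conv_out)  # identical spatial score maps
```

The resulting coarse score maps are then upsampled (e.g. by learned deconvolution in the FCN paper) to dense per-pixel predictions.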
1706.06169
2683784395
This paper describes our approach to the DSTL Satellite Imagery Feature Detection challenge run by Kaggle. The primary goal of this challenge is accurate semantic segmentation of different classes in satellite imagery. Our approach is based on an adaptation of a fully convolutional neural network for multispectral data processing. In addition, we defined several modifications to the training objective and overall training pipeline, e.g. boundary effect estimation; we also discuss the usage of data augmentation strategies and reflectance indices. Our solution scored third place out of 419 entries. Its accuracy is comparable to the first two places, but unlike those solutions, it doesn't rely on complex ensembling techniques and thus can be easily scaled for deployment in production as a part of automatic feature labeling systems for satellite imagery analysis.
Our solution is based on a modified fully convolutional neural network architecture called U-Net @cite_3 , which was previously used for biomedical image segmentation tasks. The U-Net architecture combines low-level feature maps with higher-level ones, which enables precise localization. A large number of feature channels in the upsampling part allows propagating context information to higher-resolution layers. This type of network architecture was specifically designed to solve image segmentation problems effectively. Technical details of the U-Net adaptation for the discussed task are provided in Section .
{ "cite_N": [ "@cite_3" ], "mid": [ "2952232639" ], "abstract": [ "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at this http URL ." ] }
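The skip-connection mechanism attributed to U-Net above — upsampling coarse features and merging them with matching high-resolution encoder features — can be sketched in a few lines of numpy (a toy illustration with made-up shapes, not the actual U-Net):

```python
import numpy as np

# U-Net expanding path: upsample a deep, low-resolution feature map and
# concatenate it with the matching encoder map along the channel axis
# (the "skip connection" that enables precise localization).
def upsample2x(x):                       # nearest-neighbour, layout (C, H, W)
    return x.repeat(2, axis=1).repeat(2, axis=2)

coarse = np.zeros((8, 16, 16))           # deep features: more channels, low res
skip   = np.ones((4, 32, 32))            # shallow encoder features: high res

merged = np.concatenate([upsample2x(coarse), skip], axis=0)
assert merged.shape == (12, 32, 32)      # channels add up, resolution restored
```

In the real network the merged tensor is then passed through further convolutions; nearest-neighbour upsampling stands in here for U-Net's learned up-convolutions.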
1706.06275
2652651782
Recent work in computer vision has yielded impressive results in automatically describing images with natural language. Most of these systems generate captions in a single language, requiring multiple language-specific models to build a multilingual captioning system. We propose a very simple technique to build a single unified model across languages, using artificial tokens to control the language, making the captioning system more compact. We evaluate our approach on generating English and Japanese captions, and show that a typical neural captioning architecture is capable of learning a single model that can switch between two different languages.
The latest image captioning systems use multimodal neural networks, inspired by sequence-to-sequence modeling in machine translation @cite_7 . Images are fed into a Convolutional Neural Network (CNN) to extract visual features, which are then converted into word sequences by a Recurrent Neural Network (RNN) trained on image-sentence ground-truth pairs @cite_14 @cite_8 @cite_4 .
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_7", "@cite_8" ], "mid": [ "2951912364", "2963758027", "2949888546", "2951805548" ], "abstract": [ "Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.", "We introduce the dense captioning task, which requires a computer vision system to both localize and describe salient regions in images in natural language. The dense captioning task generalizes object detection when the descriptions consist of a single word, and Image Captioning when one predicted region covers the full image. To address the localization and description task jointly we propose a Fully Convolutional Localization Network (FCLN) architecture that processes an image with a single, efficient forward pass, requires no external regions proposals, and can be trained end-to-end with a single round of optimization. 
The architecture is composed of a Convolutional Network, a novel dense localization layer, and Recurrent Neural Network language model that generates the label sequences. We evaluate our network on the Visual Genome dataset, which comprises 94,000 images and 4,100,000 region-grounded captions. We observe both speed and accuracy improvements over baselines based on current state of the art approaches in both generation and retrieval settings.", "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. 
Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.", "We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations." ] }
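The CNN-RNN pipeline described above ends with an RNN emitting a word sequence; a toy greedy decoder, with random weights and a made-up four-word vocabulary standing in for a trained model, sketches the decoding loop:

```python
import numpy as np

# Toy greedy decoding: a recurrent step maps (state, token) to (next state,
# vocabulary scores); decoding feeds back the argmax token at each step.
# Weights are random -- this shows the control flow, not a trained captioner.
vocab = ["<s>", "a", "cat", "</s>"]
V, H = len(vocab), 3
rng = np.random.default_rng(1)
Wx, Wh, Wo = (rng.standard_normal(s) for s in ((H, V), (H, H), (V, H)))

def step(h, tok):
    x = np.eye(V)[vocab.index(tok)]      # one-hot input token
    h = np.tanh(Wx @ x + Wh @ h)         # recurrent state update
    return h, Wo @ h                     # new state, vocabulary scores

h, tok, out = np.zeros(H), "<s>", []
for _ in range(5):                       # cap the caption length
    h, scores = step(h, tok)
    tok = vocab[int(np.argmax(scores))]
    if tok == "</s>":
        break
    out.append(tok)
```

In a real captioner the initial state (or first input) would come from the CNN's visual features rather than a zero vector.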
1706.06275
2652651782
Recent work in computer vision has yielded impressive results in automatically describing images with natural language. Most of these systems generate captions in a single language, requiring multiple language-specific models to build a multilingual captioning system. We propose a very simple technique to build a single unified model across languages, using artificial tokens to control the language, making the captioning system more compact. We evaluate our approach on generating English and Japanese captions, and show that a typical neural captioning architecture is capable of learning a single model that can switch between two different languages.
More generally, multilingual machine translation is an active area of research. The performance of machine translation can be improved when training data in more than two languages is available @cite_17 @cite_11 , but the models become more complex as the number of languages increases, because they use separate RNNs for each language. The closest related work to ours is Google's multilingual translation system @cite_6 , which uses artificial tokens to control the languages. We apply a similar idea here to image caption generation.
{ "cite_N": [ "@cite_11", "@cite_6", "@cite_17" ], "mid": [ "2229833550", "2550821151", "" ], "abstract": [ "We propose multi-way, multilingual neural machine translation. The proposed approach enables a single neural translation model to translate between multiple languages, with a number of parameters that grows only linearly with the number of languages. This is made possible by having a single attention mechanism that is shared across all language pairs. We train the proposed multi-way, multilingual model on ten language pairs from WMT'15 simultaneously and observe clear performance improvements over models trained on only one language pair. In particular, we observe that the proposed model significantly improves the translation quality of low-resource language pairs.", "We propose a simple solution to use a single Neural Machine Translation (NMT) model to translate between multiple languages. Our solution requires no changes to the model architecture from a standard NMT system but instead introduces an artificial token at the beginning of the input sentence to specify the required target language. Using a shared wordpiece vocabulary, our approach enables Multilingual NMT using a single model. On the WMT’14 benchmarks, a single multilingual model achieves comparable performance for English→French and surpasses state-of-the-art results for English→German. Similarly, a single multilingual model surpasses state-of-the-art results for French→English and German→English on WMT’14 and WMT’15 benchmarks, respectively. On production corpora, multilingual models of up to twelve language pairs allow for better translation of many individual pairs. Our models can also learn to perform implicit bridging between language pairs never seen explicitly during training, showing that transfer learning and zero-shot translation is possible for neural translation. 
Finally, we show analyses that hints at a universal interlingua representation in our models and show some interesting examples when mixing languages.", "" ] }
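The artificial-token trick described above amounts to prepending a control token to the token sequence, leaving the rest of the model unchanged. A minimal sketch (the `<2xx>` spelling is illustrative, not necessarily the exact token format used by the cited systems):

```python
# Multilingual captioning/translation with one model and one vocabulary:
# the target language is selected by an artificial token prepended to the
# sequence. Token names here are illustrative.
def add_language_token(tokens, lang):
    return [f"<2{lang}>"] + tokens

en = add_language_token(["a", "dog", "runs"], "en")
ja = add_language_token(["a", "dog", "runs"], "ja")

assert en[0] == "<2en>" and ja[0] == "<2ja>"
assert en[1:] == ja[1:]   # same content tokens, different control token
```

Because the control token is just another vocabulary item, the model learns to condition its entire output on it, with no architectural changes.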
1706.06261
2887779009
Outsourcing software middleboxes (aka. virtual network functions) to third-party service providers, like public clouds, has been a recent trend with many well-known benefits. However, redirecting a large influx of raw traffic from enterprises to the service providers, who are often outside the traditional trust domain, raises severe security concerns. Over the past few years, steady progress has been made towards enabling middlebox functionalities without revealing sensitive traffic. Following this direction, in this paper, we present LightBox, the first system that can drive full-stack protected stateful middleboxes at native speed. Similar to existing hardware-assisted designs, LightBox is built on top of the SGX enclave. But, different from all prior systems, LightBox is designed to further address two new challenges that, surprisingly, are largely overlooked in the literature. First, LightBox enables enterprises to safely forward the packets intended for processing to the enclaved middlebox with all metadata, including low-level packet headers, packet size, count and timestamps, fully protected. Studies have shown that such metadata can be information-rich and potentially exploited by many traffic analysis attacks. Second, LightBox achieves very efficient in-enclave stateful middlebox processing within the highly constrained enclave space, without incurring unreasonably high overhead. The confidentiality, integrity, and freshness of the flow states are always ensured throughout our efficient state management procedures. Evaluations show that LightBox, with all the extra security benefits, can still perform packet I/O at 10Gbps line rate. Even when tracking 600K flows, LightBox can obtain 3.5X to 30X performance speedup over direct adoption of SGX, while incurring negligible packet delay (<1 µs) for most test cases.
Secure middleboxes. BlindBox @cite_23 is the first system that applies cryptographic protocols (i.e., searchable encryption and garbled circuits) to enable inspection of encrypted packet payloads. A series of follow-up designs has been proposed. The work @cite_15 emphasizes the protection of middlebox rules and support for more inspection rules. SPABox @cite_72 and BlindIDS @cite_71 pay extra attention to reducing connection setup cost. Besides payloads, privacy-preserving packet header checking has also been studied. The work @cite_54 considers using heavyweight homomorphic encryption for generic network functions. SplitBox @cite_65 employs a distributed model for a certain class of packet header processing. Embark @cite_21 introduces a customized prefix-matching scheme and, by combining it with the technique from @cite_23 , supports a wider class of middlebox functionality. These software-based solutions are generally not yet practical in terms of functionality, especially stateful processing, and performance.
{ "cite_N": [ "@cite_54", "@cite_21", "@cite_65", "@cite_72", "@cite_23", "@cite_71", "@cite_15" ], "mid": [ "2256235146", "2306714220", "2949556137", "2764080192", "", "2603792932", "" ], "abstract": [ "Aiming to reduce the cost and complexity of maintaining networking infrastructures, organizations are increasingly outsourcing their network functions (e.g., firewalls, traffic shapers and intrusion detection systems) to the cloud, and a number of industrial players have started to offer network function virtualization (NFV)-based solutions. Alas, outsourcing network functions in its current setting implies that sensitive network policies, such as firewall rules, are revealed to the cloud provider. In this paper, we investigate the use of cryptographic primitives for processing outsourced network functions, so that the provider does not learn any sensitive information. More specifically, we present a cryptographic treatment of privacy-preserving outsourcing of network functions, introducing security definitions as well as an abstract model of generic network functions, and then propose a few instantiations using partial homomorphic encryption and public-key encryption with keyword search. We include a proof-of-concept implementation of our constructions and show that network functions can be privately processed by an untrusted cloud provider in a few milliseconds.", "It is increasingly common for enterprises and other organizations to outsource network processing to the cloud. For example, enterprises may outsource firewalling, caching, and deep packet inspection, just as they outsource compute and storage. However, this poses a threat to enterprise confidentiality because the cloud provider gains access to the organization's traffic. We design and build Embark, the first system that enables a cloud provider to support middlebox outsourcing while maintaining the client's confidentiality. 
Embark encrypts the traffic that reaches the cloud and enables the cloud to process the encrypted traffic without decrypting it. Embark supports a wide-range of middleboxes such as firewalls, NATs, web proxies, load balancers, and data exfiltration systems. Our evaluation shows that Embark supports these applications with competitive performance.", "This paper presents SplitBox, a scalable system for privately processing network functions that are outsourced as software processes to the cloud. Specifically, providers processing the network functions do not learn the network policies instructing how the functions are to be processed. We first propose an abstract model of a generic network function based on match-action pairs, assuming that this is processed in a distributed manner by multiple honest-but-curious providers. Then, we introduce our SplitBox system for private network function virtualization and present a proof-of-concept implementation on FastClick -- an extension of the Click modular router -- using a firewall as a use case. Our experimental results show that SplitBox achieves a throughput of over 2 Gbps with 1 kB-sized packets on average, traversing up to 60 firewall rules.", "Widely used over the Internet to encrypt traffic, HTTPS provides secure and private data communication between clients and servers. However, to cope with rapidly changing and sophisticated security attacks, network operators often deploy middleboxes to perform deep packet inspection (DPI) to detect attacks and potential security breaches, using techniques ranging from simple keyword matching to more advanced machine learning and data mining analysis. But this creates a problem: how can middleboxes, which employ DPI, work over HTTPS connections with encrypted traffic while preserving privacy? In this paper, we present SPABox, a middlebox-based system that supports both keyword-based and data analysis-based DPI functions over encrypted traffic. 
SPABox preserves privacy by using a novel protocol with a limited connection setup overhead. We implement SPABox on a standard server and show that SPABox is practical for both long-lived and short-lived connection. Compared with the state-of-the-art Blindbox system, SPABox is more than five orders of magnitude faster and requires seven orders of magnitude less bandwidth for connection setup while SPABox can achieve a higher security level.", "", "The goal of network intrusion detection is to inspect network traffic in order to identify threats and known attack patterns. One of its key features is Deep Packet Inspection (DPI), that extracts the content of network packets and compares it against a set of detection signatures. While DPI is commonly used to protect networks and information systems, it requires direct access to the traffic content, which makes it blinded against encrypted network protocols such as HTTPS. So far, a difficult choice was to be made between the privacy of network users and security through the inspection of their traffic content to detect attacks or malicious activities. This paper presents a novel approach that bridges the gap between network security and privacy. It makes possible to perform DPI directly on encrypted traffic, without knowing neither the traffic content, nor the patterns of detection signatures. The relevance of our work is that it preserves the delicate balance in the security market ecosystem. Indeed, security editors will be able to protect their distinctive detection signatures and supply service providers only with encrypted attack patterns. In addition, service providers will be able to integrate the encrypted signatures in their architectures and perform DPI without compromising the privacy of network communications. Finally, users will be able to preserve their privacy through traffic encryption, while also benefiting from network security services. 
The extensive experiments conducted in this paper prove that, compared to existing encryption schemes, our solution reduces by 3 orders of magnitude the connection setup time for new users, and by 6 orders of magnitude the consumed memory space on the DPI appliance.", "" ] }
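The BlindBox-style encrypted inspection discussed above can be caricatured with deterministic keyed tokens: both endpoints derive tokens under a shared key, and the middlebox matches tokens against an encrypted rule set without ever seeing plaintext. This sketch deliberately omits the garbled-circuit machinery the real protocol uses for rule obliviousness:

```python
import hashlib
import hmac

# BlindBox-style encrypted keyword matching, heavily simplified: endpoints
# derive deterministic HMAC tokens under a shared session key; the middlebox
# compares tokens only. (Illustrative only -- not the actual protocol.)
KEY = b"shared-session-key"

def token(word: bytes) -> bytes:
    return hmac.new(KEY, word, hashlib.sha256).digest()

rules = {token(b"attack"), token(b"exploit")}            # encrypted rule set
traffic_tokens = [token(w) for w in b"benign attack traffic".split()]

assert any(t in rules for t in traffic_tokens)           # rule hit detected
assert token(b"benign") not in rules                     # no false hit here
```

Deterministic tokens are exactly why such schemes leak matching patterns, motivating the heavier cryptographic machinery in the follow-up designs cited above.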
1706.06261
2887779009
Outsourcing software middleboxes (aka. virtual network functions) to third-party service providers, like public clouds, has been a recent trend with many well-known benefits. However, redirecting a large influx of raw traffic from enterprises to the service providers, who are often outside the traditional trust domain, raises severe security concerns. Over the past few years, steady progress has been made towards enabling middlebox functionalities without revealing sensitive traffic. Following this direction, in this paper, we present LightBox, the first system that can drive full-stack protected stateful middleboxes at native speed. Similar to existing hardware-assisted designs, LightBox is built on top of the SGX enclave. But, different from all prior systems, LightBox is designed to further address two new challenges that, surprisingly, are largely overlooked in the literature. First, LightBox enables enterprises to safely forward the packets intended for processing to the enclaved middlebox with all metadata, including low-level packet headers, packet size, count and timestamps, fully protected. Studies have shown that such metadata can be information-rich and potentially exploited by many traffic analysis attacks. Second, LightBox achieves very efficient in-enclave stateful middlebox processing within the highly constrained enclave space, without incurring unreasonably high overhead. The confidentiality, integrity, and freshness of the flow states are always ensured throughout our efficient state management procedures. Evaluations show that LightBox, with all the extra security benefits, can still perform packet I/O at 10Gbps line rate. Even when tracking 600K flows, LightBox can obtain 3.5X to 30X performance speedup over direct adoption of SGX, while incurring negligible packet delay (<1 µs) for most test cases.
Several attempts have been made to secure middlebox processing with trusted hardware. S-NFV @cite_98 proposes to protect specifically the middlebox state, but not the entire middlebox processing over protected traffic. Trusted Click @cite_3 and ShieldBox @cite_55 port the Click modular router @cite_22 to the SGX enclave, but they do not consider the protection of metadata and lack support for stateful processing due to the inherent limitations of Click. SGX-BOX @cite_13 and mbTLS @cite_81 enable a middlebox to intercept TLS connections and securely inspect traffic in the enclave, with primary focus on programmability and real-world deployment, respectively. Again, they do not address the concern of metadata privacy. The latest work, SafeBricks @cite_51 , pays extra attention to middlebox code protection and applies IPSec for secure traffic tunneling. While protecting low-level headers, it is still vulnerable to various traffic analysis attacks leveraging packet size and count. None of these hardware-assisted solutions protects traffic metadata as LightBox does, nor do they enable secure stateful middlebox processing at native speed in real networks.
{ "cite_N": [ "@cite_22", "@cite_98", "@cite_55", "@cite_3", "@cite_81", "@cite_51", "@cite_13" ], "mid": [ "2010365467", "2312902414", "2790063371", "2600318748", "", "2799154389", "2734383261" ], "abstract": [ "Clicks is a new software architecture for building flexible and configurable routers. A Click router is assembled from packet processing modules called elements . Individual elements implement simple router functions like packet classification, queuing, scheduling, and interfacing with network devices. A router configurable is a directed graph with elements at the vertices; packets flow along the edges of the graph. Several features make individual elements more powerful and complex configurations easier to write, including pull connections, which model packet flow drivn by transmitting hardware devices, and flow-based router context, which helps an element locate other interesting elements. Click configurations are modular and easy to extend. A standards-compliant Click IP router has 16 elements on its forwarding path; some of its elements are also useful in Ethernet switches and IP tunnelling configurations. Extending the IP router to support dropping policies, fairness among flows, or Differentiated Services simply requires adding a couple of element at the right place. On conventional PC hardware, the Click IP router achieves a maximum loss-free forwarding rate of 333,000 64-byte packets per second, demonstrating that Click's modular and flexible architecture is compatible with good performance.", "Network Function Virtualization (NFV) applications are stateful. For example, a Content Distribution Network (CDN) caches web contents from remote servers and serves them to clients. Similarly, an Intrusion Detection System (IDS) and an Intrusion Prevention System (IPS) have both per-flow and multi-flow (shared) states to properly react to intrusions. 
On today's NFV infrastructures, security vulnerabilities many allow attackers to steal and manipulate the internal states of NFV applications that share a physical resource. In this paper, we propose a new protection scheme, S-NFV that incorporates Intel Software Guard Extensions (Intel SGX) to securely isolate the states of NFV applications.", "Middleboxes that process confidential data cannot be securely deployed in untrusted cloud environments. To securely outsource middleboxes to the cloud, state-of-the-art systems advocate network processing over the encrypted traffic. Unfortunately, these systems support only restrictive functionalities, and incur prohibitively high overheads. This motivated the design of ShieldBox---a secure middlebox framework for deploying high-performance network functions (NFs) over untrusted commodity servers. ShieldBox securely processes encrypted traffic inside a secure container by leveraging shielded execution. More specifically, ShieldBox builds on hardware-assisted memory protection based on Intel SGX to provide strong confidentiality and integrity guarantees. For middlebox developers, ShieldBox exposes a generic interface based on Click to design and implement a wide-range of NFs using its out-of-the-box elements and C++ extensions. For network operators, ShieldBox provides configuration and attestation service for seamless and verifiable deployment of middleboxes. We have implemented ShieldBox supporting important end-to-end features required for secure network processing, and performance optimizations. Our extensive evaluation shows that ShieldBox achieves a near-native throughput and latency to securely process confidential data at line rate.", "Network Function Virtualization has received a large amount of research and recent efforts have been made to further leverage the cloud to enhance NFV. 
However, since there are privacy and security issues with using cloud computing, work has been done to allow for operating on encrypted data, which introduces a large amount of overhead in both computation and data, while only providing a limited set of operations, since these encryption schemes are not fully homomorphic. We propose using trusted computing to circumvent these limitations by having hardware enforce data privacy and provide guaranteed computation. Prior work has shown that Intel's Software Guard Extensions can be used to protect the state of network functions, but there are still questions about the usability of SGX in arbitrary NFV applications and the performance of SGX in these applications. We extend prior work to show how SGX can be used in network deployments by extending the Click modular router to perform secure packet processing with SGX. We also present a performance evaluation of SGX on real hardware to show that processing inside of SGX has a negligible performance impact, compared to performing the same processing outside of SGX.", "", "", "A network middlebox benefits both users and network operators by offering a wide range of security-related in-network functions, such as web firewalls and intrusion detection systems (IDS). However, the wide usage of encryption protocol restricts functionalities of network middleboxes. This forces network operators and users to make a choice between end-to-end privacy and security. This paper presents SGX-Box, a secure middlebox system that enables visibility on encrypted traffic by leveraging Intel SGX technology. The entire process of SGX-Box ensures that the sensitive information, such as decrypted payloads and session keys, is securely protected within the SGX enclave. SGX-Box provides easy-to-use abstraction and a high-level programming language, called SB lang for handling encrypted traffic in middleboxes. 
It greatly enhances programmability by hiding details of the cryptographic operations and the implementation details in SGX enclave processing. We implement a proof-of-concept IDS using SB lang. Our preliminary evaluation shows that SGX-Box incurs acceptable performance overhead while it dramatically reduces middlebox developer's effort." ] }
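One of the metadata leaks discussed above — packet size — is classically hidden by padding every packet to a fixed bucket size before forwarding. A simplified illustration follows; the 1500-byte bucket and 2-byte length header are arbitrary choices for the sketch, not LightBox's actual tunneling format:

```python
# Hiding packet-size metadata by padding every packet to one fixed bucket
# size on the wire. A 2-byte length header lets the receiver strip the
# padding. (Illustrative sketch, not a real secure-tunnel implementation.)
BUCKET = 1500  # bytes, an MTU-sized bucket chosen for illustration

def pad_packet(payload: bytes) -> bytes:
    if len(payload) > BUCKET - 2:
        raise ValueError("payload too large for one bucket")
    header = len(payload).to_bytes(2, "big")
    return header + payload + b"\x00" * (BUCKET - 2 - len(payload))

def unpad_packet(padded: bytes) -> bytes:
    n = int.from_bytes(padded[:2], "big")
    return padded[2:2 + n]

pkt = b"GET / HTTP/1.1"
wire = pad_packet(pkt)
assert len(wire) == BUCKET            # every packet looks the same size
assert unpad_packet(wire) == pkt      # original payload round-trips
```

Real systems must additionally encrypt the padded frame and address packet count and timing, which simple padding alone does not hide.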