aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1902.11154 | 2916790266 | In this paper we propose a robust visual odometry system for a wide-baseline camera rig with wide field-of-view (FOV) fisheye lenses, which provides full omnidirectional stereo observations of the environment. For more robust and accurate ego-motion estimation, we add three components to the standard VO pipeline: 1) a hybrid projection model for improved feature matching, 2) a multi-view P3P RANSAC algorithm for pose estimation, and 3) online update of rig extrinsic parameters. The hybrid projection model combines perspective and cylindrical projection to maximize the overlap between views and minimize the image distortion that degrades feature matching performance. The multi-view P3P RANSAC algorithm extends the conventional P3P RANSAC to multi-view images so that all feature matches in all views are considered in the inlier counting for robust pose estimation. Finally, the online extrinsic calibration is seamlessly integrated in the backend optimization framework so that changes in camera poses due to shocks or vibrations can be corrected automatically. The proposed system is extensively evaluated with synthetic datasets with ground truth and real sequences of highly dynamic environments, and its superior performance is demonstrated. | For improved environmental awareness and perception capabilities, multi-camera methods have also been studied. @cite_18 present a visual odometry algorithm for a multi-camera system which can observe a full surrounding view. They successfully estimate ego-motion with a 2-point algorithm, showing the importance of inter-camera correspondences for recovering metric scale. @cite_3 implement a visual SLAM and self-calibration system with at least one calibrated stereo camera and an arbitrary number of monocular cameras, where overlapping views exist only within the stereo camera. Recently, a robust multi-camera system using direct methods with plane-sweeping stereo was proposed by @cite_20 . Finding correspondences between fisheye images is a challenging and important problem, and many researchers have devoted effort to it. Special descriptors @cite_7 @cite_8 are designed to account for the distortion, and @cite_12 and @cite_15 propose dense matching algorithms for fisheye images. | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_8",
"@cite_3",
"@cite_15",
"@cite_12",
"@cite_20"
],
"mid": [
"1986556718",
"2751456471",
"2047895082",
"1131710387",
"2773774647",
"1968567746",
"2910508725"
],
"abstract": [
"In this paper, we present a visual ego-motion estimation algorithm for a self-driving car equipped with a close-to-market multi-camera system. By modeling the multi-camera system as a generalized camera and applying the non-holonomic motion constraint of a car, we show that this leads to a novel 2-point minimal solution for the generalized essential matrix where the full relative motion including metric scale can be obtained. We provide the analytical solutions for the general case with at least one inter-camera correspondence and a special case with only intra-camera correspondences. We show that up to a maximum of 6 solutions exist for both cases. We identify the existence of degeneracy when the car undergoes straight motion in the special case with only intra-camera correspondences where the scale becomes unobservable and provide a practical alternative solution. Our formulation can be efficiently implemented within RANSAC for robust estimation. We verify the validity of our assumptions on the motion model by comparing our results on a large real-world dataset collected by a car equipped with 4 cameras with minimal overlapping field-of-views against the GPS INS ground truth.",
"In this paper, we develop an interest point detector and binary feature descriptor for spherical images. We take as inspiration a recent framework developed for planar images, BRISK (Binary Robust Invariant Scalable Keypoints), and adapt the method to operate on spherical images. All of our processing is intrinsic to the sphere and avoids the distortion inherent in storing and indexing spherical images in a 2D representation. We discretise images on a spherical geodesic grid formed by recursive subdivision of a triangular mesh. This leads to a multiscale pixel grid comprising mainly hexagonal pixels that lends itself naturally to a spherical image pyramid representation. For interest point detection, we use a variant of the Accelerated Segment Test (AST) corner detector which operates on our geodesic grid. We estimate a continuous scale and location for features and descriptors are built by sampling onto a regular pattern in the tangent space. We evaluate repeatability, precision and recall on both synthetic spherical images with known ground truth correspondences and real images.",
"In this paper, we propose SPHORB, a new fast and robust binary feature detector and descriptor for spherical panoramic images. In contrast to state-of-the-art spherical features, our approach stems from the geodesic grid, a nearly equal-area hexagonal grid parametrization of the sphere used in climate modeling. It enables us to directly build fine-grained pyramids and construct robust features on the hexagonal spherical grid, thus avoiding the costly computation of spherical harmonics and their associated bandwidth limitation. We further study how to achieve scale and rotation invariance for the proposed SPHORB feature. Extensive experiments show that SPHORB consistently outperforms other existing spherical features in accuracy, efficiency and robustness to camera movements. The superior performance of SPHORB has also been validated by real-world matching tests.",
"The use of a multi-camera system enables a robot to obtain a surround view, and thus, maximize its perceptual awareness of its environment. If vision-based simultaneous localization and mapping (vSLAM) is expected to provide reliable pose estimates for a micro aerial vehicle (MAV) with a multi-camera system, an accurate calibration of the multi-camera system is a necessary prerequisite. We propose a novel vSLAM-based self-calibration method for a multi-camera system that includes at least one calibrated stereo camera, and an arbitrary number of monocular cameras. We assume overlapping fields of view to only exist within stereo cameras. Our self-calibration estimates the inter-camera transforms with metric scale; metric scale is inferred from calibrated stereo. On our MAV, we set up each camera pair in a stereo configuration which facilitates the estimation of the MAV's pose with metric scale. Once the MAV is calibrated, the MAV is able to estimate its global pose via a multi-camera vSLAM implementation based on the generalized camera model. We propose a novel minimal and linear 3-point algorithm that uses relative rotation angle measurements from a 3-axis gyroscope to recover the relative motion of the MAV with metric scale and from 2D-2D feature correspondences. This relative motion estimation does not involve scene point triangulation. Our constant-time vSLAM implementation with loop closures runs on-board the MAV in real-time. To the best of our knowledge, no published work has demonstrated real-time on-board vSLAM with loop closures. We show experimental results from simulation experiments, and real-world experiments in both indoor and outdoor environments.",
"We propose a novel omnidirectional stereo camera setup that is formed by two ultra-wide field-of-view (FOV) fisheye cameras. The proposed configuration is formed by two 245-degree FOV fisheye cameras, facing opposite directions, that are rigidly mounted on two sides of a rod. The overlapping view in the two fisheye images forms a ring-shaped spatial stereo setup. Our system provides stereo observations with full 360-degree FOV in horizontal directions and 65-degree FOV in the vertical direction. In addition, the two fisheye cameras altogether also provide full spherical monocular coverage of the surrounding environment. We address challenges in camera modeling, fisheye intrinsic calibration, stereo self-calibration, and depth estimation. We develop a lens-specific fisheye camera calibration method that uses manufacturer data for the optical lens to assist with the intrinsic calibration and develop an online self-calibration approach for estimating stereo extrinsic parameters. The overlapping camera views are rectified into stereo image pairs, from which a spatial stereo matching pipeline is developed for depth estimation in all horizontal directions. We show both qualitative and quantitative analysis to validate our approach.",
"In this paper, we propose an adaptation of camera projection models for fisheye cameras into the plane-sweeping stereo matching algorithm. This adaptation allows us to do plane-sweeping stereo directly on fisheye images. Our approach also works for other non-pinhole cameras such as omni directional and catadioptric cameras when using the unified projection model. Despite the simplicity of our proposed approach, we are able to obtain full, good quality and high resolution depth maps from the fisheye images. To verify our approach, we show experimental results based on depth maps generated by our approach, and dense models produced from these depth maps.",
"We present a visual odometry (VO) algorithm for a multi-camera system and robust operation in challenging environments. Our algorithm consists of a pose tracker and a local mapper. The tracker estimates the current pose by minimizing photometric errors between the most recent keyframe and the current frame. The mapper initializes the depths of all sampled feature points using plane-sweeping stereo. To reduce pose drift, a sliding window optimizer is used to refine poses and structure jointly. Our formulation is flexible enough to support an arbitrary number of stereo cameras. We evaluate our algorithm thoroughly on five datasets. The datasets were captured in different conditions: daytime, night-time with near-infrared (NIR) illumination and nighttime without NIR illumination. Experimental results show that a multi-camera setup makes the VO more robust to challenging environments, especially night-time conditions, in which a single stereo configuration fails easily due to the lack of features."
]
} |
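The multi-view P3P RANSAC in the row above scores each rig-pose hypothesis by counting inliers across every camera of the rig at once rather than per camera. A minimal numpy sketch of that cross-view inlier-counting step, using a simple pinhole projection for brevity where the paper would use its hybrid fisheye model; `cams` and `matches` are hypothetical input structures, not the authors' API:

```python
import numpy as np

def count_inliers_multiview(T_world_rig, cams, matches, thresh_px=2.0):
    """Score one rig-pose hypothesis (4x4 matrix) by reprojecting 3D
    points into all cameras of the rig and counting close matches."""
    inliers = 0
    for cam_id, (K, T_rig_cam) in cams.items():       # K: 3x3 intrinsics
        T_cam_world = np.linalg.inv(T_world_rig @ T_rig_cam)
        for X_world, uv_obs in matches[cam_id]:       # (3,) point, (2,) pixel
            X_cam = (T_cam_world @ np.append(X_world, 1.0))[:3]
            if X_cam[2] <= 0:                         # behind the camera
                continue
            uv_proj = (K @ (X_cam / X_cam[2]))[:2]
            if np.linalg.norm(uv_proj - uv_obs) < thresh_px:
                inliers += 1
    return inliers   # maximized over RANSAC hypotheses from solve-P3P samples
```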
1902.11216 | 2917914520 | In video production, inserting B-roll is a widely used technique to enrich the story and make a video more engaging. However, determining the right content and positions of B-roll and actually inserting it within the main footage can be challenging, and novice producers often struggle to get both timing and content right. We present B-Script, a system that supports B-roll video editing via interactive transcripts. B-Script has a built-in recommendation system trained on expert-annotated data, recommending B-roll positions and content to users. To evaluate the system, we conducted a within-subject user study with 110 participants, and compared three interface variations: a timeline-based editor, a transcript-based editor, and a transcript-based editor with recommendations. Users found it easier and were faster to insert B-roll using the transcript-based interface, and they created more engaging videos when recommendations were provided. | Prior research has found that content-focused editing of media, such as via transcript-based interfaces, can help people accomplish video and audio editing more effectively @cite_4 @cite_8 @cite_20 @cite_1 @cite_25 @cite_16 . @cite_20 , and more recently, @cite_8 present video editing systems that include video-aligned transcript editors to enable more efficient navigation and editing of video. | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_1",
"@cite_16",
"@cite_25",
"@cite_20"
],
"mid": [
"2058733260",
"1964202597",
"1834627372",
"2537196324",
"2035965376",
"2137575240"
],
"abstract": [
"Audio stories are an engaging form of communication that combine speech and music into compelling narratives. Existing audio editing tools force story producers to manipulate speech and music tracks via tedious, low-level waveform editing. In contrast, we present a set of tools that analyze the audio content of the speech and music and thereby allow producers to work at much higher level. Our tools address several challenges in creating audio stories, including (1) navigating and editing speech, (2) selecting appropriate music for the score, and (3) editing the music to complement the speech. Key features include a transcript-based speech editing tool that automatically propagates edits in the transcript text to the corresponding speech track; a music browser that supports searching based on emotion, tempo, key, or timbral similarity to other songs; and music retargeting tools that make it easy to combine sections of music with the speech. We have used our tools to create audio stories from a variety of raw speech sources, including scripted narratives, interviews and political speeches. Informal feedback from first-time users suggests that our tools are easy to learn and greatly facilitate the process of editing raw footage into a final story.",
"We present a set of tools designed to help editors place cuts and create transitions in interview video. To help place cuts, our interface links a text transcript of the video to the corresponding locations in the raw footage. It also visualizes the suitability of cut locations by analyzing the audio visual features of the raw footage to find frames where the speaker is relatively quiet and still. With these tools editors can directly highlight segments of text, check if the endpoints are suitable cut locations and if so, simply delete the text to make the edit. For each cut our system generates visible (e.g. jump-cut, fade, etc.) and seamless, hidden transitions. We present a hierarchical, graph-based algorithm for efficiently generating hidden transitions that considers visual features specific to interview footage. We also describe a new data-driven technique for setting the timing of the hidden transition. Finally, our tools offer a one click method for seamlessly removing 'ums' and repeated words as well as inserting natural-looking pauses to emphasize semantic content. We apply our tools to edit a variety of interviews and also show how they can be used to quickly compose multiple takes of an actor narrating a story.",
"Searching for scenes in movies is a time-consuming but crucial task for film studies scholars, film professionals, and new media artists. In pilot interviews we have found that such users search for a wide variety of clips---e.g., actions, props, dialogue phrases, character performances, locations---and they return to particular scenes they have seen in the past. Today, these users find relevant clips by watching the entire movie, scrubbing the video timeline, or navigating via DVD chapter menus. Increasingly, users can also index films through transcripts---however, dialogue often lacks visual context, character names, and high level event descriptions. We introduce SceneSkim, a tool for searching and browsing movies using synchronized captions, scripts and plot summaries. Our interface integrates information from such sources to allow expressive search at several levels of granularity: Captions provide access to accurate dialogue, scripts describe shot-by-shot actions and settings, and plot summaries contain high-level event descriptions. We propose new algorithms for finding word-level caption to script alignments, parsing text scripts, and aligning plot summaries to scripts. Film studies graduate students evaluating SceneSkim expressed enthusiasm about the usability of the proposed system for their research and teaching.",
"Speech recordings are central to modern media from podcasts to audio books to e-lectures and voice-overs. Authoring these recordings involves an iterative back and forth process between script writing editing and audio recording editing. Yet, most existing tools treat the script and the audio separately, making the back and forth workflow very tedious. We present Voice Script, an interface to support a dynamic workflow for script writing and audio recording editing. Our system integrates the script with the audio such that, as the user writes the script or records speech, edits to the script are translated to the audio and vice versa. Through informal user studies, we demonstrate that our interface greatly facilitates the audio authoring process in various scenarios.",
"Blackboard-style lecture videos are popular, but learning using existing video player interfaces can be challenging. Viewers cannot consume the lecture material at their own pace, and the content is also difficult to search or skim. For these reasons, some people prefer lecture notes to videos. To address these limitations, we present Visual Transcripts, a readable representation of lecture videos that combines visual information with transcript text. To generate a Visual Transcript, we first segment the visual content of a lecture into discrete visual entities that correspond to equations, figures, or lines of text. Then, we analyze the temporal correspondence between the transcript and visuals to determine how sentences relate to visual entities. Finally, we arrange the text and visuals in a linear layout based on these relationships. We compare our result with a standard video player, and a state-of-the-art interface designed specifically for blackboard-style lecture videos. User evaluation suggests that users prefer our interface for learning and that our interface is effective in helping them browse or search through lecture videos.",
"Digital video is becoming increasingly ubiquitous. However, editing video remains difficult for several reasons: it is a time-based medium, it has dual tracks of audio and video, and current tools force users to work at the smallest level of detail. Based on interviews with professional video editors, we developed a video editor, called Silver, that uses metadata to make digital video editing more accessible to novices. To help users visualize video, Silver provides multiple views with different semantic content and at different levels of abstraction, including storyboard, editable transcript, and timeline views. Silver offers smart editing operations that help users resolve the inconsistencies that arise because of the different boundaries in audio and video. We conducted a preliminary user study to investigate the effectiveness of the Silver smart editing. Participants successfully edited video after only a short tutorial, both with and without smart editing assistance. Our research suggests several ways in which video editing tools could use metadata to assist users in the reuse and composition of video."
]
} |
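The transcript-based editors cited in the row above share one core mechanic: word-level timestamps align the transcript with the footage, so deleting text deletes the corresponding media span. A self-contained sketch of that mapping, with a made-up `Word` structure standing in for a forced aligner's output:

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float   # seconds into the source footage
    end: float

def keep_ranges(words, deleted):
    """Turn transcript edits into video cuts: given aligned words and
    the indices the user deleted, return the footage spans to keep."""
    ranges = []
    for i, w in enumerate(words):
        if i in deleted:
            continue
        if ranges and abs(ranges[-1][1] - w.start) < 1e-3:
            ranges[-1] = (ranges[-1][0], w.end)   # merge contiguous words
        else:
            ranges.append((w.start, w.end))
    return ranges

words = [Word("um", 0.0, 0.4), Word("hello", 0.4, 0.9), Word("world", 0.9, 1.5)]
print(keep_ranges(words, deleted={0}))   # [(0.4, 1.5)] -- the "um" is cut
```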
1902.11216 | 2917914520 | In video production, inserting B-roll is a widely used technique to enrich the story and make a video more engaging. However, determining the right content and positions of B-roll and actually inserting it within the main footage can be challenging, and novice producers often struggle to get both timing and content right. We present B-Script, a system that supports B-roll video editing via interactive transcripts. B-Script has a built-in recommendation system trained on expert-annotated data, recommending B-roll positions and content to users. To evaluate the system, we conducted a within-subject user study with 110 participants, and compared three interface variations: a timeline-based editor, a transcript-based editor, and a transcript-based editor with recommendations. Users found it easier and were faster to insert B-roll using the transcript-based interface, and they created more engaging videos when recommendations were provided. | Automatic location suggestion for B-roll content, as well as the B-roll browsing tool, are important novel elements in the design of B-Script. QuickCut also uses a text-based interface for video editing @cite_24 . However, in QuickCut the user provides the annotations for the video, and the system uses this annotation vocabulary to generate a final narrated video composition. In our system, B-Script searches over "in-the-wild" metadata and provides recommendations for content. @cite_25 edit blackboard-style lectures into a readable representation combining visuals with transcript text. VidCrit enables reviewing an edited video with multimodal annotations overlaid on the video's transcript @cite_21 . We use these insights to build a tool that supports B-roll video editing, given its challenges of finding the right content and position for B-roll, as well as inserting it. | {
"cite_N": [
"@cite_24",
"@cite_21",
"@cite_25"
],
"mid": [
"2532670842",
"2535823561",
"2035965376"
],
"abstract": [
"We present QuickCut, an interactive video editing tool designed to help authors efficiently edit narrated video. QuickCut takes an audio recording of the narration voiceover and a collection of raw video footage as input. Users then review the raw footage and provide spoken annotations describing the relevant actions and objects in the scene. QuickCut time-aligns a transcript of the annotations with the raw footage and a transcript of the narration to the voiceover. These aligned transcripts enable authors to quickly match story events in the narration with semantically relevant video segments and form alignment constraints between them. Given a set of such constraints, QuickCut applies dynamic programming optimization to choose frame-level cut points between the video segments while maintaining alignments with the narration and adhering to low-level film editing guidelines. We demonstrate QuickCut's effectiveness by using it to generate a variety of short (less than 2 minutes) narrated videos. Each result required between 14 and 52 minutes of user time to edit (i.e. between 8 and 31 minutes for each minute of output video), which is far less than typical authoring times with existing video editing workflows.",
"Video production is a collaborative process in which stakeholders regularly review drafts of the edited video to indicate problems and offer suggestions for improvement. Although practitioners prefer in-person feedback, most reviews are conducted asynchronously via email due to scheduling and location constraints. The use of this impoverished medium is challenging for both providers and consumers of feedback. We introduce VidCrit, a system for providing asynchronous feedback on drafts of edited video that incorporates favorable qualities of an in-person review. This system consists of two separate interfaces: (1) A feedback recording interface captures reviewers' spoken comments, mouse interactions, hand gestures and other physical reactions. (2) A feedback viewing interface transcribes and segments the recorded review into topical comments so that the video author can browse the review by either text or timelines. Our system features novel methods to automatically segment a long review session into topical text comments, and to label such comments with additional contextual information. We interviewed practitioners to inform a set of design guidelines for giving and receiving feedback, and based our system's design on these guidelines. Video reviewers using our system preferred our feedback recording interface over email for providing feedback due to the reduction in time and effort. In a fixed amount of time, reviewers provided 10.9 (σ=5.09) more local comments than when using text. All video authors rated our feedback viewing interface preferable to receiving feedback via e-mail.",
"Blackboard-style lecture videos are popular, but learning using existing video player interfaces can be challenging. Viewers cannot consume the lecture material at their own pace, and the content is also difficult to search or skim. For these reasons, some people prefer lecture notes to videos. To address these limitations, we present Visual Transcripts, a readable representation of lecture videos that combines visual information with transcript text. To generate a Visual Transcript, we first segment the visual content of a lecture into discrete visual entities that correspond to equations, figures, or lines of text. Then, we analyze the temporal correspondence between the transcript and visuals to determine how sentences relate to visual entities. Finally, we arrange the text and visuals in a linear layout based on these relationships. We compare our result with a standard video player, and a state-of-the-art interface designed specifically for blackboard-style lecture videos. User evaluation suggests that users prefer our interface for learning and that our interface is effective in helping them browse or search through lecture videos."
]
} |
1902.11216 | 2917914520 | In video production, inserting B-roll is a widely used technique to enrich the story and make a video more engaging. However, determining the right content and positions of B-roll and actually inserting it within the main footage can be challenging, and novice producers often struggle to get both timing and content right. We present B-Script, a system that supports B-roll video editing via interactive transcripts. B-Script has a built-in recommendation system trained on expert-annotated data, recommending B-roll positions and content to users. To evaluate the system, we conducted a within-subject user study with 110 participants, and compared three interface variations: a timeline-based editor, a transcript-based editor, and a transcript-based editor with recommendations. Users found it easier and were faster to insert B-roll using the transcript-based interface, and they created more engaging videos when recommendations were provided. | Computational assistance in creative tasks such as video and audio editing has previously proven effective in a number of situations @cite_19 @cite_17 @cite_29 @cite_18 . Video editing assistants often rely on video metadata such as event segmentation @cite_9 , gaze data @cite_11 , multiple-scene shots @cite_26 , camera settings @cite_10 , and semantic annotations and camera motions @cite_12 @cite_28 . @cite_18 develop a language-based recommendation system for creating visually enhanced audio podcasts. Existing industry-level systems such as Wibbitz @cite_3 or Wochit @cite_13 help video content creators tell visual stories, including automated content suggestions. In contrast, our tool allows users to focus on the video content during editing via transcript-based editing and recommendations, which directly supports B-roll video editing. For B-Script, we build a recommendation system based entirely on the transcript of the video, using expert-annotated data to recommend video edits to users. By integrating B-roll search and recommendations into the workflow of B-roll insertion, our approach better supports this video editing process. | {
"cite_N": [
"@cite_13",
"@cite_18",
"@cite_11",
"@cite_26",
"@cite_28",
"@cite_29",
"@cite_9",
"@cite_3",
"@cite_19",
"@cite_10",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"2766248684",
"",
"2074520446",
"2108322535",
"1988366361",
"2120645068",
"",
"2057593084",
"2140187124",
"2170530433",
"2026996548"
],
"abstract": [
"",
"Audio Podcasts have gained popularity because they are a compelling form of storytelling and are easy to consume. However, they are not as easy to produce since resources are invested in the research, recording, and editing process and the average length of an episode is over an hour. Some audio podcasts could benefit from visuals to increase engagement and learning, but manually curating them can be arduous and time-consuming. We introduce a tool for automatically visualizing audio podcasts, currently focused on the genre of travelogues. Our system works by first time-aligning the transcript of a given podcast, using NLP techniques to extract entities and track how interesting or relevant they are throughout the podcast, and then retrieving visual data appropriately to describe them, either through transitions on a map or professionally taken photographs with captions. By automatically creating a visual narrative to accompany a podcast, we hope our tool can provide listeners with a better sense of the podcast's topic.",
"",
"We present an approach that takes multiple videos captured by social cameras---cameras that are carried or worn by members of the group involved in an activity---and produces a coherent \"cut\" video of the activity. Footage from social cameras contains an intimate, personalized view that reflects the part of an event that was of importance to the camera operator (or wearer). We leverage the insight that social cameras share the focus of attention of the people carrying them. We use this insight to determine where the important \"content\" in a scene is taking place, and use it in conjunction with cinematographic guidelines to select which cameras to cut to and to determine the timing of those cuts. A trellis graph representation is used to optimize an objective function that maximizes coverage of the important content in the scene, while respecting cinematographic guidelines such as the 180-degree rule and avoiding jump cuts. We demonstrate cuts of the videos in various styles and lengths for a number of scenarios, including sports games, street performances, family activities, and social get-togethers. We evaluate our results through an in-depth analysis of the cuts in the resulting videos and through comparison with videos produced by a professional editor and existing commercial solutions.",
"",
"We present EverTutor, a system that automatically generates interactive tutorials on smartphone from user demonstration. For tutorial authors, it simplifies the tutorial creation. For tutorial users, it provides contextual step-by-step guidance and avoids the frequent context switching between tutorials and users' primary tasks. In order to generate the tutorials automatically, EverTutor records low-level touch events to detect gestures and identify on-screen targets. When a tutorial is browsed, the system uses vision-based techniques to locate the target regions and overlays the corresponding input prompt contextually. It also identifies the correctness of users' interaction to guide the users step by step. We conducted a 6-person user study for creating tutorials and a 12-person user study for browsing tutorials, and we compared EverTutor's interactive tutorials to static and video ones. Study results show that creating tutorials by EverTutor is simpler and faster than producing static and video tutorials. Also, when using the tutorials, the task completion time for interactive tutorials were 3-6 times faster than static and video tutorials regardless of age group. In terms of user preference, 83 of the users chose interactive type as the preferred tutorial type and rated it easiest to follow and easiest to understand.",
"We present a video summarization approach that discovers the story of an egocentric video. Given a long input video, our method selects a short chain of video sub shots depicting the essential events. Inspired by work in text analysis that links news articles over time, we define a random-walk based metric of influence between sub shots that reflects how visual objects contribute to the progression of events. Using this influence metric, we define an objective for the optimal k-subs hot summary. Whereas traditional methods optimize a summary's diversity or representative ness, ours explicitly accounts for how one sub-event \"leads to\" another-which, critically, captures event connectivity beyond simple object co-occurrence. As a result, our summaries provide a better sense of story. We apply our approach to over 12 hours of daily activity video taken from 23 unique camera wearers, and systematically evaluate its quality compared to multiple baselines with 34 human subjects.",
"",
"We present a demonstration-based system for automatically generating succinct step-by-step visual tutorials of photo manipulations. An author first demonstrates the manipulation using an instrumented version of GIMP that records all changes in interface and application state. From the example recording, our system automatically generates tutorials that illustrate the manipulation using images, text, and annotations. It leverages automated image labeling (recognition of facial features and outdoor scene structures in our implementation) to generate more precise text descriptions of many of the steps in the tutorials. A user study comparing our automatically generated tutorials to hand-designed tutorials and screen-capture video recordings finds that users are 20--44 faster and make 60--95 fewer errors using our tutorials. While our system focuses on tutorial generation, we also present some initial work on generating content-dependent macros that use image recognition to automatically transfer selection operations from the example image used in the demonstration to new target images. While our macros are limited to transferring selection operations we demonstrate automatic transfer of several common retouching techniques including eye recoloring, whitening teeth and sunset enhancement.",
"This article outlines a paradigm shift in media production: the advent of computational media production that will automate the capture, editing, and reuse of video content. By integrating metadata creation and (re)use throughout the media production process, we enable the mass customization of video.",
"This paper introduces an evaluated approach to the automatic generation of video narratives from user generated content gathered in a shared repository. In the context of social events, end-users record video material with their personal cameras and upload the content to a common repository. Video narrative techniques, implemented using Narrative Structure Language (NSL) and ShapeShifting Media, are employed to automatically generate movies recounting the event. Such movies are personalized according to the preferences expressed by each individual end-user, for each individual viewing. This paper describes our prototype narrative system, MyVideos, deployed as a web application, and reports on its evaluation for one specific use case: assembling stories of a school concert by parents, relatives and friends. The evaluations carried out through focus groups, interviews and field trials, in the Netherlands and UK, provided validating results and further insights into this approach.",
"We present an interactive drawing tool that provides automated guidance over model photographs to help people practice traditional drawing-by-observation techniques. The drawing literature describes a number of techniques to support this task and help people gain consciousness of the shapes in a scene and their relationships. We compile these techniques and derive a set of construction lines that we automatically extract from a model photograph. We then display these lines over the model to guide its manual reproduction by the user on the drawing canvas. Finally, we use shape-matching to register the user's sketch with the model guides. We use this registration to provide corrective feedback to the user. Our user studies show that automatically extracted construction lines can help users draw more accurately. Furthermore, users report that guidance and corrective feedback help them better understand how to draw."
]
} |
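The transcript-only recommendation idea in the row above can be made concrete with a toy position scorer: each word boundary gets a score from transcript features, and the top-scoring positions become B-roll suggestions. The weights and lexicon below are invented for illustration; B-Script learns its model from expert annotations rather than hand-tuned rules:

```python
import re

WEIGHTS = {"sentence_end": 1.0, "visual_noun": 0.8, "bias": -1.2}   # illustrative
VISUAL = {"beach", "coffee", "city", "camera", "food"}              # toy lexicon

def score_broll_positions(words, top_k=3):
    """Score inserting B-roll after each transcript word; a stand-in
    for a learned transcript-based recommender."""
    scores = []
    for i, w in enumerate(words):
        s = WEIGHTS["bias"]
        if re.search(r"[.!?]$", w):                  # cut at sentence ends
            s += WEIGHTS["sentence_end"]
        if w.strip(".,!?").lower() in VISUAL:        # concrete, visualizable word
            s += WEIGHTS["visual_noun"]
        scores.append((i, s))
    return sorted(scores, key=lambda t: -t[1])[:top_k]

print(score_broll_positions("I had great coffee at the beach.".split()))
```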
1902.11216 | 2917914520 | In video production, inserting B-roll is a widely used technique to enrich the story and make a video more engaging. However, determining the right content and positions of B-roll and actually inserting it within the main footage can be challenging, and novice producers often struggle to get both timing and content right. We present B-Script, a system that supports B-roll video editing via interactive transcripts. B-Script has a built-in recommendation system trained on expert-annotated data, recommending B-roll positions and content to users. To evaluate the system, we conducted a within-subject user study with 110 participants, and compared three interface variations: a timeline-based editor, a transcript-based editor, and a transcript-based editor with recommendations. Users found it easier and were faster to insert B-roll using the transcript-based interface, and they created more engaging videos when recommendations were provided. | Integrating B-roll is a widely used technique in film editing for documentaries, reality shows or news coverage. Conventions of cinematography suggest several guidelines about B-roll editing, or making cuts between shots in general. For example, it is recommended to avoid cuts when the speaker is in the middle of talking or gesturing actively, because such cuts disrupt the audio-visual flow @cite_5 . Instead, a good practice is to place the cuts where there are natural pauses in the flow. Sentence endings or eye blinks can be important cues for indicating these pauses @cite_22 . On the other hand, in practice, vlogs are usually created with low production value (often with a single smartphone) and do not necessarily follow the traditional principles of professional cinematography @cite_0 . For instance, in vlogs it is common to have frequent jump cuts or rapid shot changes to make the footage faster-paced. Instead of relying on a set of rules to determine the content and positions of B-roll, we base our recommendation algorithm on an analysis of popular vlogs from YouTube and a database of expert-annotated video edits that we collected for this purpose. | {
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_22"
],
"mid": [
"2969207609",
"",
"1581704028"
],
"abstract": [
"Vlog Like a Boss is the complete guide to video blogging. Amy Schmittauer is the Vlog Boss. As a new media triple threat -- YouTuber, keynote speaker and author -- she coaches people to go after what they want in life and leverage online video to make it happen. Creator of the popular YouTube series Savvy Sexy Social, her channel boasts a global community and millions of views. Now, in her first book, she will show you how to Vlog Like a Boss and get the attention you deserve. A collection of strategies and tactics from years of experience in online video, Schmittauer shares time-tested, proven methods to creating brand awareness with vlogging that will build a loyal community for years to come. This complete guide will show you how to: - overcome the 3 major fears that hold many people back from making videos - create your most engaging vlog every time with the Authority Video Formula - achieve your on-camera talent (and no, you don't need to be a natural!) - leverage the \"secret\" to video success that creates fans for life - and much more!",
"",
"In the Blink of an Eye is celebrated film editor Walter Murch's vivid, multifaceted, thought -- provoking essay on film editing. Starting with what might be the most basic editing question -- Why do cuts work? -- Murch treats the reader to a wonderful ride through the aesthetics and practical concerns of cutting film. Along the way, he offers his unique insights on such subjects as continuity and discontinuity in editing, dreaming, and reality; criteria for a good cut; the blink of the eye as an emotional cue; digital editing; and much more. In this second edition, Murch reconsiders and completely revises his popular first edition's lengthy meditation on digital editing (which accounts for a third of the book's pages) in light of the technological changes that have taken place in the six years since its publication."
]
} |
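The convention above, cutting at natural pauses rather than mid-speech, is easy to operationalize once word-level timings are available. A small sketch, assuming aligner output as `(text, start, end)` tuples and an invented 0.35 s pause threshold:

```python
def natural_cut_points(words, min_pause=0.35):
    """Return timestamps of inter-word silences long enough to cut on."""
    cuts = []
    for (_, _, prev_end), (_, next_start, _) in zip(words, words[1:]):
        gap = next_start - prev_end
        if gap >= min_pause:
            cuts.append(prev_end + gap / 2.0)   # cut in the middle of the silence
    return cuts

words = [("Hi", 0.0, 0.3), ("there.", 0.3, 0.8), ("Today", 1.4, 1.8)]
print(natural_cut_points(words))   # [1.1] -- the pause after "there."
```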
1902.11153 | 2916006017 | Recently, GAN-generated face images have become more and more realistic and of such high quality that they are even hard for human eyes to detect. On the other hand, the forensics community keeps developing methods to detect these generated fake images and tries to guarantee the credibility of visual contents. Although researchers have developed some methods to detect generated images, few of them explore the important problem of the generalization ability of forensics models. As new types of GANs are emerging fast, the generalization ability of forensics models to detect new types of GAN images is an essential research topic. In this paper, we explore this problem and propose to use preprocessed images to train a forensic CNN model. By applying similar image-level preprocessing to both real and fake training images, the forensics model is forced to learn more intrinsic features to classify the generated and real face images. Our experimental results also prove the effectiveness of the proposed method. | Some related works have been proposed to detect AI-generated fake images or videos using deep networks. To detect DeepFake videos, different detection methods have been proposed @cite_0 @cite_4 @cite_12 @cite_7 @cite_15 . In @cite_0 , the authors propose a temporal-aware CNN-RNN network pipeline to automatically detect DeepFake videos. Li and Lyu @cite_4 found that face-swap transforms leave distinctive artifacts in the resulting DeepFake videos, and that these can be effectively captured by convolutional neural networks. The method in @cite_12 is based on detecting eye blinking in videos, a physiological signal that is not well presented in synthesized fake videos. In @cite_7 , the authors observe that errors are revealed when 3D head poses are estimated from the face images, and they further develop a classification method based on this cue. Two network architectures are proposed in @cite_15 to detect fake videos generated by Deepfake @cite_10 and Face2Face @cite_2 @cite_6 . | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_6",
"@cite_0",
"@cite_2",
"@cite_15",
"@cite_10",
"@cite_12"
],
"mid": [
"2898877033",
"2898776593",
"2794857359",
"2911424785",
"2301937176",
"2952515199",
"",
"2806757392"
],
"abstract": [
"In this work, we describe a new deep learning based method that can effectively distinguish AI-generated fake videos (referred to as DeepFake videos hereafter) from real videos. Our method is based on the observations that current DeepFake algorithm can only generate images of limited resolutions, which need to be further warped to match the original faces in the source video. Such transforms leave distinctive artifacts in the resulting DeepFake videos, and we show that they can be effectively captured by convolutional neural networks (CNNs). Compared to previous methods which use a large amount of real and DeepFake generated images to train CNN classifier, our method does not need DeepFake generated images as negative training examples since we target the artifacts in affine face warping as the distinctive feature to distinguish real and fake images. The advantages of our method are two-fold: (1) Such artifacts can be simulated directly using simple image processing operations on a image to make it as negative example. Since training a DeepFake model to generate negative examples is time-consuming and resource-demanding, our method saves a plenty of time and resources in training data collection; (2) Since such artifacts are general existed in DeepFake videos from different sources, our method is more robust compared to others. Our method is evaluated on two sets of DeepFake video datasets for its effectiveness in practice.",
"In this paper, we propose a new method to expose AI-generated fake face images or videos (commonly known as the Deep Fakes). Our method is based on the observations that Deep Fakes are created by splicing synthesized face region into the original image, and in doing so, introducing errors that can be revealed when 3D head poses are estimated from the face images. We perform experiments to demonstrate this phenomenon and further develop a classification method based on this cue. Using features based on this cue, an SVM classifier is evaluated using a set of real face images and Deep Fakes.",
"With recent advances in computer vision and graphics, it is now possible to generate videos with extremely realistic synthetic faces, even in real time. Countless applications are possible, some of which raise a legitimate alarm, calling for reliable detectors of fake videos. In fact, distinguishing between original and manipulated video can be a challenge for humans and computers alike, especially when the videos are compressed or have low resolution, as it often happens on social networks. Research on the detection of face manipulations has been seriously hampered by the lack of adequate datasets. To this end, we introduce a novel face manipulation dataset of about half a million edited images (from over 1000 videos). The manipulations have been generated with a state-of-the-art face editing approach. It exceeds all existing video manipulation datasets by at least an order of magnitude. Using our new dataset, we introduce benchmarks for classical image forensic tasks, including classification and segmentation, considering videos compressed at various quality levels. In addition, we introduce a benchmark evaluation for creating indistinguishable forgeries with known ground truth; for instance with generative refinement models.",
"In recent months a machine learning based free software tool has made it easy to create believable face swaps in videos that leaves few traces of manipulation, in what are known as \"deepfake\" videos. Scenarios where these realistic fake videos are used to create political distress, blackmail someone or fake terrorism events are easily envisioned. This paper proposes a temporal-aware pipeline to automatically detect deepfake videos. Our system uses a convolutional neural network (CNN) to extract frame-level features. These features are then used to train a recurrent neural network (RNN) that learns to classify if a video has been subject to manipulation or not. We evaluate our method against a large set of deepfake videos collected from multiple video websites. We show how our system can achieve competitive results in this task while using a simple architecture.",
"We present a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., Youtube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination. We demonstrate our method in a live setup, where Youtube videos are reenacted in real time.",
"This paper presents a method to automatically and efficiently detect face tampering in videos, and particularly focuses on two recent techniques used to generate hyper-realistic forged videos: Deepfake and Face2Face. Traditional image forensics techniques are usually not well suited to videos due to the compression that strongly degrades the data. Thus, this paper follows a deep learning approach and presents two networks, both with a low number of layers to focus on the mesoscopic properties of images. We evaluate those fast networks on both an existing dataset and a dataset we have constituted from online videos. The tests demonstrate a very successful detection rate with more than 98 for Deepfake and 95 for Face2Face.",
"",
"The new developments in deep generative networks have significantly improve the quality and efficiency in generating realistically-looking fake face videos. In this work, we describe a new method to expose fake face videos generated with neural networks. Our method is based on detection of eye blinking in the videos, which is a physiological signal that is not well presented in the synthesized fake videos. Our method is tested over benchmarks of eye-blinking detection datasets and also show promising performance on detecting videos generated with DeepFake."
]
} |
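The temporal-aware CNN-RNN pipeline of @cite_0 described above reduces to a frame-level feature extractor feeding a recurrent classifier. A skeleton in PyTorch, with placeholder choices (ResNet-18 backbone, GRU, hidden size) that are ours, not the paper's:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class FrameCNNRNN(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()            # expose 512-d frame features
        self.cnn = backbone
        self.rnn = nn.GRU(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)       # real vs. fake logits

    def forward(self, clips):                  # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, h = self.rnn(feats)                 # h: (num_layers, B, hidden)
        return self.head(h[-1])

logits = FrameCNNRNN()(torch.randn(2, 8, 3, 224, 224))   # -> shape (2, 2)
```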
1902.11153 | 2916006017 | Recently, GAN-generated face images have become more and more realistic and of such high quality that they are even hard for human eyes to detect. On the other hand, the forensics community keeps developing methods to detect these generated fake images and tries to guarantee the credibility of visual contents. Although researchers have developed some methods to detect generated images, few of them explore the important problem of the generalization ability of forensics models. As new types of GANs are emerging fast, the generalization ability of forensics models to detect new types of GAN images is an essential research topic. In this paper, we explore this problem and propose to use preprocessed images to train a forensic CNN model. By applying similar image-level preprocessing to both real and fake training images, the forensics model is forced to learn more intrinsic features to classify the generated and real face images. Our experimental results also prove the effectiveness of the proposed method. | Apart from DeepFake forensics, some works focus on the detection of GAN-generated images @cite_13 @cite_11 @cite_14 @cite_20 . In @cite_13 , the authors present a study on the detection of image-to-image translations from GANs, but some of the studied detectors show dramatic impairments on Twitter-like compressed images. Shahroz @cite_11 use ensemble classifiers to detect fake face images created by GANs. A method based on color statistical features is proposed in @cite_14 , and several detection schemes are designed for different practical scenarios. Nhu-Tai @cite_20 propose another convolutional-neural-network-based model to detect generated face images, which builds on transfer learning from a deep face recognition network. These image forensics methods can perform well on test datasets that are homologous to the training dataset. | {
"cite_N": [
"@cite_14",
"@cite_13",
"@cite_20",
"@cite_11"
],
"mid": [
"2888519208",
"2811414481",
"",
"2896689692"
],
"abstract": [
"With the powerful deep network architectures, such as generative adversarial networks and variational autoencoders, large amounts of photorealistic images can be generated. The generated images, already fooling human eyes successfully, are not initially targeted for deceiving image authentication systems. However, research communities as well as public media show great concerns on whether these images would lead to serious security issues. In this paper, we address the problem of detecting deep network generated (DNG) images by analyzing the disparities in color components between real scene images and DNG images. Existing deep networks generate images in RGB color space and have no explicit constrains on color correlations; therefore, DNG images have more obvious differences from real images in other color spaces, such as HSV and YCbCr, especially in the chrominance components. Besides, the DNG images are different from the real ones when considering red, green, and blue components together. Based on these observations, we propose a feature set to capture color image statistics for detecting the DNG images. Moreover, three different detection scenarios in practice are considered and the corresponding detection strategies are designed. Extensive experiments have been conducted on face image datasets to evaluate the effectiveness of the proposed method. The experimental results show that the proposed method is able to distinguish the DNG images from real ones with high accuracies.",
"The diffusion of fake images and videos on social networks is a fast growing problem. Commercial media editing tools allow anyone to remove, add, or clone people and objects, to generate fake images. Many techniques have been proposed to detect such conventional fakes, but new attacks emerge by the day. Image-to-image translation, based on generative adversarial networks (GANs), appears as one of the most dangerous, as it allows one to modify context and semantics of images in a very realistic way. In this paper, we study the performance of several image forgery detectors against image-to-image translation, both in ideal conditions, and in the presence of compression, routinely performed upon uploading on social networks. The study, carried out on a dataset of 36302 images, shows that detection accuracies up to 95 can be achieved by both conventional and deep learning detectors, but only the latter keep providing a high accuracy, up to 89 , on compressed data.",
"",
"Due to the significant advancements in image processing and machine learning algorithms, it is much easier to create, edit, and produce high quality images. However, attackers can maliciously use these tools to create legitimate looking but fake images to harm others, bypass image detection algorithms, or fool image recognition classifiers. In this work, we propose neural network based classifiers to detect fake human faces created by both 1) machines and 2) humans. We use ensemble methods to detect GANs-created fake images and employ pre-processing techniques to improve fake face image detection created by humans. Our approaches focus on image contents for classification and do not use meta-data of images. Our preliminary results show that we can effectively detect both GANs-created images, and human-created fake images with 94 and 74.9 AUROC score."
]
} |
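The color-statistics detector of @cite_14 summarized above rests on the observation that GANs synthesize in RGB and leave statistical traces in other color spaces. A simplified stand-in for such a feature set, building chrominance histograms with OpenCV; the bin count and channel choice are ours, not the cited paper's:

```python
import cv2
import numpy as np

def color_stat_features(img_rgb, bins=32):
    """Chrominance histogram features in HSV and YCbCr; the vector can
    feed a conventional classifier such as an SVM or random forest."""
    hsv = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2HSV)
    ycc = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2YCrCb)
    feats = []
    for chan in (hsv[..., 0], hsv[..., 1], ycc[..., 1], ycc[..., 2]):
        hist, _ = np.histogram(chan, bins=bins, range=(0, 256), density=True)
        feats.append(hist)
    return np.concatenate(feats)   # shape: (4 * bins,)
```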
1902.11153 | 2916006017 | Recently, GAN-generated face images have become more and more realistic and of such high quality that they are even hard for human eyes to detect. On the other hand, the forensics community keeps developing methods to detect these generated fake images and tries to guarantee the credibility of visual contents. Although researchers have developed some methods to detect generated images, few of them explore the important problem of the generalization ability of forensics models. As new types of GANs are emerging fast, the generalization ability of forensics models to detect new types of GAN images is an essential research topic. In this paper, we explore this problem and propose to use preprocessed images to train a forensic CNN model. By applying similar image-level preprocessing to both real and fake training images, the forensics model is forced to learn more intrinsic features to classify the generated and real face images. Our experimental results also prove the effectiveness of the proposed method. | However, most of the above works do not pay attention to the generalization ability of their forensics models. They only train and test their methods on the same type of generated image, so the generalization ability to other fake images generated by new GAN models is unknown. An exception is the ForensicTransfer work proposed by Davide @cite_19 . The authors use a new autoencoder-based architecture which enforces activations in different parts of a latent vector for the real and fake classes. They devise a learning-based forensic detector which adapts well to new domains, and they handle scenarios where only a handful of target-domain fake examples are available during training. However, in a real application, we may not have any example images from an unknown generation model. Thus, in this work we propose to improve the generalization ability without using any target-domain fake images. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2902304528"
],
"abstract": [
"Distinguishing fakes from real images is becoming increasingly difficult as new sophisticated image manipulation approaches come out by the day. Convolutional neural networks (CNN) show excellent performance in detecting image manipulations when they are trained on a specific forgery method. However, on examples from unseen manipulation approaches, their performance drops significantly. To address this limitation in transferability, we introduce ForensicTransfer. ForensicTransfer tackles two challenges in multimedia forensics. First, we devise a learning-based forensic detector which adapts well to new domains, i.e., novel manipulation methods. Second we handle scenarios where only a handful of fake examples are available during training. To this end, we learn a forensic embedding that can be used to distinguish between real and fake imagery. We are using a new autoencoder-based architecture which enforces activations in different parts of a latent vector for the real and fake classes. Together with the constraint of correct reconstruction this ensures that the latent space keeps all the relevant information about the nature of the image. Therefore, the learned embedding acts as a form of anomaly detector; namely, an image manipulated from an unseen method will be detected as fake provided it maps sufficiently far away from the cluster of real images. Comparing with prior works, ForensicTransfer shows significant improvements in transferability, which we demonstrate in a series of experiments on cutting-edge benchmarks. For instance, on unseen examples, we achieve up to 80-85 in terms of accuracy compared to 50-59 , and with only a handful of seen examples, our performance already reaches around 95 ."
]
} |
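The row above proposes training a forensic CNN on preprocessed images so that it learns intrinsic differences between real and GAN-generated faces rather than fragile low-level generator artifacts. Below is a minimal sketch of such image-level preprocessing; the specific operations (Gaussian blur plus additive Gaussian noise) and all parameter values are illustrative assumptions, not taken from the row.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(image, sigma_blur=1.0, noise_std=5.0, rng=None):
    """Apply the same corruption to real and fake training images so a
    downstream CNN cannot rely on high-frequency generator artifacts."""
    rng = rng or np.random.default_rng()
    img = image.astype(np.float32)
    # Blur the spatial dimensions only; leave the channel axis untouched.
    img = gaussian_filter(img, sigma=(sigma_blur, sigma_blur, 0))
    # Additive noise further masks pixel-level statistics.
    img = img + rng.normal(0.0, noise_std, size=img.shape)
    return np.clip(img, 0, 255).astype(np.uint8)

# During training, both classes go through the same pipeline:
# x_real, x_fake = preprocess(x_real), preprocess(x_fake)
```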
1902.11203 | 2918062846 | Generating plausible hair images given limited guidance, such as sparse sketches or a low-resolution image, has been made possible with the rise of Generative Adversarial Networks (GANs). Traditional image-to-image translation networks can generate recognizable results, but finer textures are usually lost and blur artifacts commonly exist. In this paper, we propose a two-phase generative model for high-quality hair image synthesis. The two-phase pipeline first generates a coarse image with an existing image translation model, then applies a re-generating network with self-enhancing capability to the coarse image. The self-enhancing capability is achieved by a proposed structure extraction layer, which extracts the texture and orientation map from a hair image. Extensive experiments on two tasks, Sketch2Hair and Hair Super-Resolution, demonstrate that our approach is able to synthesize plausible hair images with finer details, and outperforms the state-of-the-art. | Generating virtual hairstyles is a long-standing research topic in computer graphics due to the important role it plays in representing human characters in games and movies. Most previous works focus on producing 3D hair, according to user interactions @cite_16 @cite_43 @cite_15 or real-captured images @cite_20 @cite_33 @cite_35 @cite_12 . Given images, thanks to these modeling techniques, the hair can be recovered strand by strand, which enables intelligent hair editing @cite_34 @cite_44 or interpolation @cite_11 by performing manipulation in 3D space and then re-rendering to the 2D domain. Although these methods are able to produce realistic appearances, they incur high computational costs due to the 3D processing involved. To avoid the high computational cost of hair rendering, @cite_38 proposes a deep learning based hair synthesis method, which can generate high-quality results from an edge activation map. However, to obtain the activation map, an input CG hair model is still required for the initial rendering. In comparison, our method involves no 3D rendering module, and relies on a 2D image with sparse information only. With such a limited input, we still synthesize photo-realistic results thanks to the proposed self-enhancing module. | {
"cite_N": [
"@cite_35",
"@cite_38",
"@cite_33",
"@cite_34",
"@cite_44",
"@cite_43",
"@cite_15",
"@cite_16",
"@cite_12",
"@cite_20",
"@cite_11"
],
"mid": [
"2468764576",
"2896259970",
"",
"",
"1983116695",
"2912802002",
"2031008622",
"2513476367",
"2808837808",
"",
"2168485256"
],
"abstract": [
"We introduce AutoHair, the first fully automatic method for 3D hair modeling from a single portrait image, with no user interaction or parameter tuning. Our method efficiently generates complete and high-quality hair geometries, which are comparable to those generated by the state-of-the-art methods, where user interaction is required. The core components of our method are: a novel hierarchical deep neural network for automatic hair segmentation and hair growth direction estimation, trained over an annotated hair image database; and an efficient and automatic data-driven hair matching and modeling algorithm, based on a large set of 3D hair exemplars. We demonstrate the efficacy and robustness of our method on Internet photos, resulting in a database of around 50K 3D hair models and a corresponding hairstyle space that covers a wide variety of real-world hairstyles. We also show novel applications enabled by our method, including 3D hairstyle space navigation and hair-aware image retrieval.",
"We present an adversarial network for rendering photorealistic hair as an alternative to conventional computer graphics pipelines. Our deep learning approach does not require low-level parameter tuning nor ad-hoc asset design. Our method simply takes a strand-based 3D hair model as input and provides intuitive user-control for color and lighting through reference images. To handle the diversity of hairstyles and its appearance complexity, we disentangle hair structure, color, and illumination properties using a sequential GAN architecture and a semi-supervised training approach. We also introduce an intermediate edge activation map to orientation field conversion step to ensure a successful CG-to-photoreal transition, while preserving the hair structures of the original input data. As we only require a feed-forward pass through the network, our rendering performs in real-time. We demonstrate the synthesis of photorealistic hair images on a wide range of intricate hairstyles and compare our technique with state-of-the-art hair rendering methods.",
"",
"",
"This paper presents a single-view hair modeling technique for generating visually and physically plausible 3D hair models with modest user interaction. By solving an unambiguous 3D vector field explicitly from the image and adopting an iterative hair generation algorithm, we can create hair models that not only visually match the original input very well but also possess physical plausibility (e.g., having strand roots fixed on the scalp and preserving the length and continuity of real strands in the image as much as possible). The latter property enables us to manipulate hair in many new ways that were previously very difficult with a single image, such as dynamic simulation or interactive hair shape editing. We further extend the modeling approach to handle simple video input, and generate dynamic 3D hair models. This allows users to manipulate hair in a video or transfer styles from images to videos.",
"We are pleased to present the proceedings of the Fourth Eurographics Workshop on Sketch-Based Interfaces and Modeling (SBIM '07). This annual workshop explores techniques and technologies needed to enable sketchbased interfaces. This year's workshop took place in Riverside, California, on the University of California, Riverside campus from August 2-4, coinciding with the 2007 ACM SIGGRAPH conference in San Diego, California. The program included paper presentations, invited talks, a hands-on demo session, and a break-out brainstorming session. This year's 18 paper presentations addressed novel work in a wide-range of exciting areas in sketchbased interfaces and modeling including sketch recognition, ink processing, user interface design, and 3D modeling. In Thursday's keynote presentation, James Landay, Associate Professor at the University of Washington and Strategic Advisor at Intel Research, proposed activity-based pen computing as a new model for the design of pen-based interfaces. In Friday's invited talk, Andrew Forsberg, a Research Scientist at Brown University, discussed ways to strengthen the pen computing research community. In the hands-on demo session, participants had the opportunity to try out eight novel pen-based interfaces. On Saturday, members of Microsoft Corp. and the Brown University Center for Pen Based Computing led a hands-on technical development tutorial for the Tablet PC. Participants at the tutorial created a simple application to collect and manipulate ink, and learned the latest details of working with new APIs, such as the Real-Time Stylus and the Handwriting Recognition APIs. The workshop included presentations from Sashi Raghupathy, who leads the Microsoft ink development team, and Jay Pittman, who leads the Microsoft handwriting recognition team. The tutorial concluded with an opportunity for researchers to give the Microsoft tablet team feedback about what they would like to see in future versions of the Tablet PC platform software.",
"Human hair presents highly convoluted structures and spans an extraordinarily wide range of hairstyles, which is essential for the digitization of compelling virtual avatars but also one of the most challenging to create. Cutting-edge hair modeling techniques typically rely on expensive capture devices and significant manual labor. We introduce a novel data-driven framework that can digitize complete and highly complex 3D hairstyles from a single-view photograph. We first construct a large database of manually crafted hair models from several online repositories. Given a reference photo of the target hairstyle and a few user strokes as guidance, we automatically search for multiple best matching examples from the database and combine them consistently into a single hairstyle to form the large-scale structure of the hair model. We then synthesize the final hair strands by jointly optimizing for the projected 2D similarity to the reference photo, the physical plausibility of each strand, as well as the local orientation coherency between neighboring strands. We demonstrate the effectiveness and robustness of our method on a variety of hairstyles and challenging images, and compare our system with state-of-the-art hair modeling algorithms.",
"",
"",
"",
"In this paper we study the problem of hair interpolation: given two 3D hair models, we want to generate a sequence of intermediate hair models that transform from one input to another both smoothly and aesthetically pleasing. We propose an automatic method that efficiently calculates a many-to-many strand correspondence between two or more given hair models, taking into account the multi-scale clustering structure of hair. Experiments demonstrate that hair interpolation can be used for producing more vivid portrait morphing effects and enabling a novel example-based hair styling methodology, where a user can interactively create new hairstyles by continuously exploring a “style space” spanning multiple input hair models."
]
} |
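The structure extraction layer in the row above is described as producing a texture and orientation map from a hair image, but its formulation is not given here. A common way to compute a dense hair-orientation map is an oriented (Gabor) filter bank with a per-pixel argmax; the sketch below is a hypothetical stand-in with arbitrary filter parameters, not the paper's actual layer.

```python
import numpy as np
import cv2

def orientation_map(gray, n_orients=32, ksize=15):
    """Estimate the per-pixel dominant strand orientation of a grayscale
    hair image by taking the strongest response over a Gabor filter bank."""
    thetas = np.linspace(0, np.pi, n_orients, endpoint=False)
    responses = []
    for theta in thetas:
        kern = cv2.getGaborKernel((ksize, ksize), sigma=3.0, theta=theta,
                                  lambd=7.0, gamma=0.5)
        responses.append(cv2.filter2D(gray.astype(np.float32), -1, kern))
    stack = np.abs(np.stack(responses))    # (n_orients, H, W)
    idx = stack.argmax(axis=0)             # best-matching orientation index
    return thetas[idx], stack.max(axis=0)  # angle map, confidence map
```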
1902.10877 | 2915229399 | Financial time series prediction, especially with machine learning techniques, is an extensive field of study. In recent times, deep learning methods (especially time series analysis) have performed outstandingly for various industrial problems, with better prediction than machine learning methods. Moreover, many researchers have used deep learning methods to predict financial time series with various models in recent years. In this paper, we will compare various deep learning models, such as multilayer perceptron (MLP), one-dimensional convolutional neural networks (1D CNN), stacked long short-term memory (stacked LSTM), attention networks, and weighted attention networks for financial time series prediction. In particular, attention LSTM is not only used for prediction, but also for visualizing intermediate outputs to analyze the reasons for its predictions; therefore, we will show an example of understanding the model's predictions intuitively with attention vectors. In addition, we focus on time and factors, which leads to an easy understanding of why certain trends are predicted when accessing a given time series table. We also modify the loss functions of the attention models with weighted categorical cross entropy; our proposed model produces a 0.76 hit ratio, which is superior to those of other methods for predicting the trends of the KOSPI 200. | Kohzadi @cite_7 tested ANN and ARIMA models for forecasting commodity prices and compared the results of each. They found that the ANNs achieved a 27% and 56% lower mean squared error than the ARIMA model for the two commodities studied. The above research can be interpreted as showing that deep learning techniques are more effective than other machine learning techniques in extracting high-level representations of input features, thereby enhancing their overall performance. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2146552111"
],
"abstract": [
"Abstract A feedforward neural network which can account for nonlinear relationships was used to compare ARIMA and neural network price forecasting performance. Data used was monthly live cattle and wheat prices from 1950 through 1990. The experiment was repeated seven times for successive three year periods. This involved using a walk forward or sliding window approach from 1970 through 1990 which generated out of sample results. The neural network models achieved a 27 percent and 56 percent lower mean squared error than ARIMA model. The absolute mean error and mean absolute percent error were also lower for the neural network models. The neural network models were able to capture a significant number of turning points for both wheat and cattle, while the ARIMA model was only able to do so for wheat. Since this forecasting method is not problem specific and uses only past prices, it can be applied to other forecasting problems such as stocks and other financial prices."
]
} |
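The Kohzadi comparison cited above relies on a walk-forward (sliding window) protocol to generate strictly out-of-sample forecasts. A generic sketch of that protocol follows; the window lengths are arbitrary choices for illustration, not the values from the study.

```python
def walk_forward_splits(n_samples, train_len, test_len):
    """Yield (train_indices, test_indices) pairs for a sliding-window
    walk-forward evaluation; every forecast is out of sample."""
    start = 0
    while start + train_len + test_len <= n_samples:
        train = list(range(start, start + train_len))
        test = list(range(start + train_len, start + train_len + test_len))
        yield train, test
        start += test_len  # slide the window forward by one test block

# e.g. ~480 monthly prices: train on 10 years, forecast the next 3
for train_idx, test_idx in walk_forward_splits(480, 120, 36):
    pass  # fit ANN / ARIMA on train_idx, score forecasts on test_idx
```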
1902.10877 | 2915229399 | Financial time series prediction, especially with machine learning techniques, is an extensive field of study. In recent times, deep learning methods (especially time series analysis) have performed outstandingly for various industrial problems, with better prediction than machine learning methods. Moreover, many researchers have used deep learning methods to predict financial time series with various models in recent years. In this paper, we will compare various deep learning models, such as multilayer perceptron (MLP), one-dimensional convolutional neural networks (1D CNN), stacked long short-term memory (stacked LSTM), attention networks, and weighted attention networks for financial time series prediction. In particular, attention LSTM is not only used for prediction, but also for visualizing intermediate outputs to analyze the reasons for its predictions; therefore, we will show an example of understanding the model's predictions intuitively with attention vectors. In addition, we focus on time and factors, which leads to an easy understanding of why certain trends are predicted when accessing a given time series table. We also modify the loss functions of the attention models with weighted categorical cross entropy; our proposed model produces a 0.76 hit ratio, which is superior to those of other methods for predicting the trends of the KOSPI 200. | In addition, 1D CNNs have shown superior performance for classifying sequential data. A CNN is widely used in image-related tasks such as classification, segmentation, denoising, super-resolution, etc. These days, beyond image classification, 1D CNNs are also very effective for deriving important features from segments of sequential data where the location of the features within the segment is insignificant. This applies well to the analysis of sequential sensor data, fixed-length periodic signals, and NLP. In @cite_20 and @cite_21 , semantically meaningful representations of sentences are learned using CNNs in NLP. The models proposed in these papers recommend potentially interesting documents to users based on what they are currently reading. In @cite_9 , the authors evaluate a CNN architecture on sentiment analysis and topic categorization tasks. The 1D CNN architecture achieved remarkable results compared with those in previous papers. Also, the network used in that paper is quite simple and easy to implement. | {
"cite_N": [
"@cite_9",
"@cite_21",
"@cite_20"
],
"mid": [
"2949541494",
"2131876387",
"2251008987"
],
"abstract": [
"We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification.",
"In this paper, we propose a new latent semantic model that incorporates a convolutional-pooling structure over word sequences to learn low-dimensional, semantic vector representations for search queries and Web documents. In order to capture the rich contextual structures in a query or a document, we start with each word within a temporal context window in a word sequence to directly capture contextual features at the word n-gram level. Next, the salient word n-gram features in the word sequence are discovered by the model and are then aggregated to form a sentence-level feature vector. Finally, a non-linear transformation is applied to extract high-level semantic information to generate a continuous vector representation for the full text string. The proposed convolutional latent semantic model (CLSM) is trained on clickthrough data and is evaluated on a Web document ranking task using a large-scale, real-world data set. Results show that the proposed model effectively captures salient semantic information in queries and documents for the task while significantly outperforming previous state-of-the-art semantic models.",
"An “Interestingness Modeler” uses deep neural networks to learn deep semantic models (DSM) of “interestingness.” The DSM, consisting of two branches of deep neural networks or their convolutional versions, identifies and predicts target documents that would interest users reading source documents. The learned model observes, identifies, and detects naturally occurring signals of interestingness in click transitions between source and target documents derived from web browser logs. Interestingness is modeled with deep neural networks that map source-target document pairs to feature vectors in a latent space, trained on document transitions in view of a “context” and optional “focus” of source and target documents. Network parameters are learned to minimize distances between source documents and their corresponding “interesting” targets in that space. The resulting interestingness model has applicable uses, including, but not limited to, contextual entity searches, automatic text highlighting, prefetching documents of likely interest, automated content recommendation, automated advertisement placement, etc."
]
} |
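The @cite_9 architecture referenced above (parallel 1D convolutions over n-gram windows followed by max-over-time pooling) is simple to reproduce. A minimal Keras sketch is given below; the filter count, window sizes, and dropout rate are illustrative assumptions.

```python
import tensorflow as tf

def text_cnn(seq_len, vocab_size, embed_dim=128, n_classes=2):
    """1D-CNN sequence classifier in the spirit of Kim (2014):
    parallel convolutions over n-gram windows, max-over-time pooling."""
    inp = tf.keras.Input(shape=(seq_len,))
    x = tf.keras.layers.Embedding(vocab_size, embed_dim)(inp)
    pooled = []
    for k in (3, 4, 5):  # n-gram window sizes
        c = tf.keras.layers.Conv1D(100, k, activation="relu")(x)
        pooled.append(tf.keras.layers.GlobalMaxPooling1D()(c))
    h = tf.keras.layers.Concatenate()(pooled)
    h = tf.keras.layers.Dropout(0.5)(h)
    out = tf.keras.layers.Dense(n_classes, activation="softmax")(h)
    return tf.keras.Model(inp, out)
```

The same structure carries over to financial time series by replacing the embedding layer with raw multivariate inputs of shape (timesteps, factors).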
1902.10847 | 2917138319 | Visual identification of individual animals that bear unique natural body markings is an important task in wildlife conservation. The photo databases of animal markings grow larger and each new observation has to be matched against thousands of images. Existing photo-identification solutions have constraints on image quality and appearance of the pattern of interest in the image. These constraints limit the use of photos from citizen scientists. We present a novel system for visual re-identification based on unique natural markings that is robust to occlusions, viewpoint and illumination changes. We adapt methods developed for face re-identification and implement a deep convolutional neural network (CNN) to learn embeddings for images of natural markings. The distance between the learned embedding points provides a dissimilarity measure between the corresponding input images. The network is optimized using the triplet loss function and the online semi-hard triplet mining strategy. The proposed re-identification method is generic and not species specific. We evaluate the proposed system on image databases of manta ray belly patterns and humpback whale flukes. To be of practical value and adopted by marine biologists, a re-identification system needs to have a top-10 accuracy of at least 95%. The proposed system achieves this performance standard. | Matching natural patterns has been approached by exhaustively generating two-dimensional affine transformations based on user-provided key points and comparing each transformation of a candidate example with the examples stored in a repository @cite_11 , @cite_33 , @cite_10 , @cite_32 . The algorithm was implemented in a solution called APHIS (Automated Photo-Identification Suite) and applied to the re-identification of lizards @cite_11 , arthropods @cite_33 , spotted raggedtooth sharks @cite_10 and turtle flippers @cite_32 . However, the method requires a user to select key points and identify the most distinctive spots for each image. | {
"cite_N": [
"@cite_32",
"@cite_10",
"@cite_33",
"@cite_11"
],
"mid": [
"2803858642",
"1580935273",
"2791449606",
"1966074269"
],
"abstract": [
"Abstract Photo-identification using animals' natural markings is a cheaper and less impactful alternative to the use of more conventional external or internal tags for identifying individual animals. Photo-ID has already been successfully employed to monitor hard-shelled sea turtles, specifically using facial scales. However, photographing facial features might cause stress on photosensitive sea turtles, especially at night. Considering that there are more scales on the flippers than on the face and flipper photography is likely to be less invasive, we proposed an alternative method for photo-identification using the scale patterns on the front flippers. This method might also be suitable for successful photo-identification of hatchlings – which has been ineffectual using facial scales. To test the suitability of using the front flippers for photo-ID sea turtles, we took photos of the right flippers of adult and hatchling green turtles (Chelonia mydas) and olive ridley turtles (Lepidochelys olivacea) from Cabuyal, Costa Rica. The photos were analysed using APHIS, which delimits an area of interest with three reference points and then uses several additional points within this area to mark the intersections between scales. In both species, hatchlings and adults were correctly identified 92.9 and 81.8 of the time respectively but the capability of APHIS to correctly identify individuals was highly dependent on the quality of the photo. We detected similarities in flipper scale pattern between hatchlings of the same species but not between hatchlings from the same nest, indicating that hatchlings have unique flipper markings. The use of flipper scale patterns to identify individual sea turtles is comparable to the use of facial scales, without the risk of disturbing nesting females. Additionally, we were able to reliably identify individual hatchlings, which has not been possible using facial scales or conventional tagging techniques. Flipper scale patterns may potentially be a new methodology for identifying hatchlings both short- and long-term. We discuss the benefits and limitations of using sea turtle flipper scale patterns for identification as well as the benefits and limitation of APHIS.",
"Summary 1 The ability to identify individual animals is a critical aid in wildlife and conservation studies requiring information on behaviour, distribution, habitat use, population and life-history parameters. We present a computer-aided photo-identification technique that relies on natural marks to identify individuals of Carcharias taurus, a shark species that is critically endangered off the eastern Australian coast and considered globally vulnerable. The technique could potentially be applied to a range of species of similar form and bearing natural marks. 2 The use of natural marks for photo-identification is a non-invasive technique for identifying individual animals. As photo-identification databases grow larger, and their implementation spans several years, the historically used visual-matching processes lose accuracy and speed. A computerized pattern-matching system that requires initial user interaction to select the key features aids researchers by considerably reducing the time needed for identification of individuals. 3 Our method uses a two-dimensional affine transformation to compare two individuals in a commonly defined reference space. The methodology was developed using a database of 221 individually identifiable sharks that were photographically marked and rephotographed over 9 years, demonstrating both the efficacy of the technique and that the natural pigment marks of C. taurus are a reliable means of tracking individuals over several years. 4 Synthesis and applications. The identification of individual animals that are naturally marked with spots or similar patterns is achieved with an interactive pattern-matching system that uses an affine transformation to compare selected points in a single-user computer-aided interface. Our technique has been used successfully on C. taurus and we believe the methodology can be applied to other species of a similar form that have natural marks or patterns. The identification of individuals allows accurate tracking of their movements and distribution, and contributes to better population estimates for improved wildlife management and conservation planning.",
"Abstract Photographic identification methods are of highly importance when it comes to reduce the animal's stress, pain and possible injuries during or after marking techniques and thus to increase the reliability of demographic parameter estimates. There is plenty of software available for photo-identification, allowing individual identification in capture-mark-recapture (CMR) methods using body patterns, spots and marks unique to each individual. However, these non-invasive methods have hardly ever been used with arthropods. In this study, APHIS (Automated PHoto Identification Suite) has been assessed as a software capable of identifying individuals in different samplings during catch-and-release sessions with dead specimens under laboratory conditions. For this individual identification, SPM (Spot Pattern Matching) and ITM (Image Template Matching) procedures were tested; achieving a success of 100 and 95.35 , respectively. In SPM, the software itself matched the specimens almost automatically in half of the cases. However, it resulted more time-consuming than ITM during the pre-processing of images. On the other hand, ITM saves time during this step and still is able to detect recaptures accurately, yet more time may be needed when selecting the recaptures from the candidate list. Thus, it can be attested that APHIS is a competent and efficient software regarding photo-identification of Rhynchophorus ferrugineus and species with similar and unique individual colour patterns in their pronotum.",
"Abstract Unique body characteristics are increasingly used for individual recognition to avoid the effort and the potential negative effects of capture–mark–recapture technique. As a consequence there is a growing demand for computer procedures to assist users in photo-recognition of an individual. We present a new software for photo-matching developed to minimize the pre-processing time and maximize the speed of the matching procedure. In APHIS photos can be processed in batches of hundreds and users can select between two alternative matching procedures, one interactive, built as an extension of existing and freely available software, and one automatic. We assessed its performance in terms of individual recognition and time efficiency and illustrate its use with real capture-photo–recapture studies on a reptile and an amphibian species, the Balearic Lizard Podiarcis lilfordi and the Northern spectacled salamander Salamandrina perspicillata , with contrasting skin patterns."
]
} |
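The APHIS-style matching described above aligns two spot patterns with a 2D affine transformation estimated from three user-selected reference points, then scores how well the spot sets overlap. A minimal sketch of that core operation; the tolerance value and the simple nearest-spot counting rule are assumptions for illustration.

```python
import numpy as np
import cv2

def affine_spot_score(ref_pts3, cand_pts3, ref_spots, cand_spots, tol=5.0):
    """Map a candidate's spots into the reference frame using the affine
    transform defined by 3 corresponding key points, then count candidate
    spots landing within `tol` pixels of some reference spot."""
    M = cv2.getAffineTransform(cand_pts3.astype(np.float32),
                               ref_pts3.astype(np.float32))     # 2x3 matrix
    ones = np.ones((len(cand_spots), 1))
    warped = np.hstack([cand_spots, ones]) @ M.T                # (N, 2)
    # Pairwise distances between warped candidate spots and reference spots.
    d = np.linalg.norm(warped[:, None, :] - ref_spots[None, :, :], axis=2)
    return int((d.min(axis=1) < tol).sum())
```

In practice the repository entry with the highest score (over many trial transformations) is proposed as the match, which is why the method depends on good user-selected key points.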
1902.10847 | 2917138319 | Visual identification of individual animals that bear unique natural body markings is an important task in wildlife conservation. The photo databases of animal markings grow larger and each new observation has to be matched against thousands of images. Existing photo-identification solutions have constraints on image quality and appearance of the pattern of interest in the image. These constraints limit the use of photos from citizen scientists. We present a novel system for visual re-identification based on unique natural markings that is robust to occlusions, viewpoint and illumination changes. We adapt methods developed for face re-identification and implement a deep convolutional neural network (CNN) to learn embeddings for images of natural markings. The distance between the learned embedding points provides a dissimilarity measure between the corresponding input images. The network is optimized using the triplet loss function and the online semi-hard triplet mining strategy. The proposed re-identification method is generic and not species specific. We evaluate the proposed system on image databases of manta ray belly patterns and humpback whale flukes. To be of practical value and adopted by marine biologists, a re-identification system needs to have a top-10 accuracy of at least 95%. The proposed system achieves this performance standard. | Some methods have been developed for specific species and, while performing well on these, are not easily transferable to other species. High-contrast colour patterns of humpback whale flukes @cite_8 and dolphin dorsal fins @cite_41 are matched by extracting hand-crafted features from corresponding segments obtained by overlaying a grid on a region of interest. This method is not robust to viewpoint changes. | {
"cite_N": [
"@cite_41",
"@cite_8"
],
"mid": [
"2569469748",
"2118037900"
],
"abstract": [
"Ecologists commonly use photo-identification of individual animals to monitor the behaviour, state and health of a population, since it is a cost-effective technique that eliminates the need to physically capture and tag animals. With dolphins, the nicks and notches of the dorsal fin are typically used as the unique identifying features for each individual; however New Zealand common dolphins are relatively unmarked, so most of the population cannot be identified. Here, we investigate how computer vision can be used to extract information from the pigmentation patterns that are typically seen on adult common dolphin dorsal fins. We develop features that are relatively robust to changes in the fin orientation and compare the classification rates of 779 photos of 169 different adult common dolphins. Using pigmentation-based features, we correctly classified individuals 75 of the time, with our top-5 estimates containing the correct dolphin in 86 of the cases.",
"This paper describes current work on a photo-id system for humpback whales. Individuals of this species can be uniquely identified by the light and dark pigmentation patches on their tails (flukes). We developed an interface that assists the user in segmenting the animal's tail from the sea and fitting an affine invariant coordinate grid to it. A numerical feature vector capturing the patch-distribution with respect to the grid is then automatically extracted and used to match the individual against a database of similarly processed images."
]
} |
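The grid-overlay descriptor used above for flukes and dorsal fins can be approximated by averaging pigmentation within each cell of a grid placed over the region of interest. The sketch below is a hypothetical version (cell counts are arbitrary); it also makes the viewpoint-sensitivity criticism concrete, since the features change whenever the grid alignment does.

```python
import numpy as np

def grid_features(patch, rows=8, cols=16):
    """Flatten a segmented fluke/fin patch into a grid of mean
    pigmentation values; a simple, viewpoint-sensitive descriptor."""
    h, w = patch.shape
    feat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            cell = patch[i * h // rows:(i + 1) * h // rows,
                         j * w // cols:(j + 1) * w // cols]
            feat[i, j] = cell.mean() if cell.size else 0.0
    return feat.ravel()  # compare vectors with e.g. Euclidean distance
```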
1902.10847 | 2917138319 | Visual identification of individual animals that bear unique natural body markings is an important task in wildlife conservation. The photo databases of animal markings grow larger and each new observation has to be matched against thousands of images. Existing photo-identification solutions have constraints on image quality and appearance of the pattern of interest in the image. These constraints limit the use of photos from citizen scientists. We present a novel system for visual re-identification based on unique natural markings that is robust to occlusions, viewpoint and illumination changes. We adapt methods developed for face re-identification and implement a deep convolutional neural network (CNN) to learn embeddings for images of natural markings. The distance between the learned embedding points provides a dissimilarity measure between the corresponding input images. The network is optimized using the triplet loss function and the online semi-hard triplet mining strategy. The proposed re-identification method is generic and not species specific. We evaluate the proposed system on image databases of manta ray belly patterns and humpback whale flukes. To be of practical value and adopted by marine biologists, a re-identification system needs to have a top-10 accuracy of at least 95%. The proposed system achieves this performance standard. | Another approach identifies individual cetaceans from images showing the trailing edge of their fins by generating a representation of integral curvature of the nicks and notches along the trailing edge @cite_6 . | {
"cite_N": [
"@cite_6"
],
"mid": [
"2963430954"
],
"abstract": [
"We address the problem of identifying individual cetaceans from images showing the trailing edge of their fins. Given the trailing edge from an unknown individual, we produce a ranking of known individuals from a database. The nicks and notches along the trailing edge define an individual's unique signature. We define a representation based on integral curvature that is robust to changes in viewpoint and pose, and captures the pattern of nicks and notches in a local neighborhood at multiple scales. We explore two ranking methods that use this representation. The first uses a dynamic programming time-warping algorithm to align two representations, and interprets the alignment cost as a measure of similarity. This algorithm also exploits learned spatial weights to downweight matches from regions of unstable curvature. The second interprets the representation as a feature descriptor. Feature keypoints are defined at the local extrema of the representation. Descriptors for the set of known individuals are stored in a tree structure, which allows us to perform queries given the descriptors from an unknown trailing edge. We evaluate the top-k accuracy on two real-world datasets to demonstrate the effectiveness of the curvature representation, achieving top-1 accuracy scores of approximately 95 and 80 for bottlenose dolphins and humpback whales, respectively."
]
} |
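The first ranking method in @cite_6 aligns two curvature sequences with dynamic time warping and uses the alignment cost as a dissimilarity. A plain DTW sketch over 1D curvature profiles follows; the learned spatial weights from the paper are omitted for simplicity.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping alignment cost between two 1D curvature
    sequences sampled along a fin's trailing edge; lower = more similar."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible alignment moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Ranking: sort the database by dtw_distance(query_curvature, entry).
```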
1902.10847 | 2917138319 | Visual identification of individual animals that bear unique natural body markings is an important task in wildlife conservation. The photo databases of animal markings grow larger and each new observation has to be matched against thousands of images. Existing photo-identification solutions have constraints on image quality and appearance of the pattern of interest in the image. These constraints limit the use of photos from citizen scientists. We present a novel system for visual re-identification based on unique natural markings that is robust to occlusions, viewpoint and illumination changes. We adapt methods developed for face re-identification and implement a deep convolutional neural network (CNN) to learn embeddings for images of natural markings. The distance between the learned embedding points provides a dissimilarity measure between the corresponding input images. The network is optimized using the triplet loss function and the online semi-hard triplet mining strategy. The proposed re-identification method is generic and not species specific. We evaluate the proposed system on image databases of manta ray belly patterns and humpback whale flukes. To be of practical value and adopted by marine biologists, a re-identification system needs to have a top-10 accuracy of at least 95%. The proposed system achieves this performance standard. | Current systems used in practice (Manta Matcher @cite_5 , HotSpotter @cite_1 ) are based on automated extraction and matching of keypoint features using the Scale-Invariant Feature Transform (SIFT) algorithm @cite_37 with different modifications and enhancements to work on specific cases. While the algorithm works well on images that clearly show the pattern of interest, it is not robust to large changes in camera viewpoint, occlusions and variations in illumination. | {
"cite_N": [
"@cite_5",
"@cite_37",
"@cite_1"
],
"mid": [
"2068232285",
"",
"2108570180"
],
"abstract": [
"For species which bear unique markings, such as natural spot patterning, field work has become increasingly more reliant on visual identification to recognize and catalog particular specimens or to monitor individuals within populations. While many species of interest exhibit characteristic markings that in principle allow individuals to be identified from photographs, scientists are often faced with the task of matching observations against databases of hundreds or thousands of images. We present a novel technique for automated identification of manta rays (Manta alfredi and Manta birostris) by means of a pattern-matching algorithm applied to images of their ventral surface area. Automated visual identification has recently been developed for several species. However, such methods are typically limited to animals that can be photographed above water, or whose markings exhibit high contrast and appear in regular constellations. While manta rays bear natural patterning across their ventral surface, these patterns vary greatly in their size, shape, contrast, and spatial distribution. Our method is the first to have proven successful at achieving high matching accuracies on a large corpus of manta ray images taken under challenging underwater conditions. Our method is based on automated extraction and matching of keypoint features using the Scale-Invariant Feature Transform (SIFT) algorithm. In order to cope with the considerable variation in quality of underwater photographs, we also incorporate preprocessing and image enhancement steps. Furthermore, we use a novel pattern-matching approach that results in better accuracy than the standard SIFT approach and other alternative methods. We present quantitative evaluation results on a data set of 720 images of manta rays taken under widely different conditions. We describe a novel automated pattern representation and matching method that can be used to identify individual manta rays from photographs. The method has been incorporated into a website (mantamatcher.org) which will serve as a global resource for ecological and conservation research. It will allow researchers to manage and track sightings data to establish important life-history parameters as well as determine other ecological data such as abundance, range, movement patterns, and structure of manta ray populations across the world.",
"",
"Summary 1. Photographic mark–recapture is a cost-effective, non-invasive way to study populations. However, to efficiently apply photographic mark–recapture to large populations, computer software is needed for image manipulation and pattern matching. 2. We created an open-source application for the storage, pattern extraction and pattern matching of digital images for the purposes of mark–recapture analysis. The resulting software package is a stand-alone, multiplatform application implemented in Java. Our program employs the Scale Invariant Feature Transform (SIFT) operator that extracts distinctive features invariant to image scale and rotation. 3. We applied this system to a population of Masai giraffe (Giraffa camelopardalis tippelskirchi) in the Tarangire Ecosystem in northern Tanzania. Over 1200 images were acquired in the field during three primary sampling periods between September 2008 and December 2009. The pattern information in these images was extracted and matched resulting in capture histories for over 600 unique individuals. 4. Estimated error rates of the matching system were low based on a subset of test images that were independently matched by eye. 5. Encounter histories were subsequently analysed with open population models to estimate apparent survival rates and population size. 6. This new open-access tool allowed photographic mark–recapture to be applied successfully to this relatively large population."
]
} |
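The SIFT-based systems above reduce identification to counting reliable keypoint matches between a query photo and each database photo. A minimal OpenCV sketch using Lowe's ratio test; the preprocessing and image-enhancement steps that Manta Matcher adds are omitted here.

```python
import cv2

def sift_match_score(img_query, img_db, ratio=0.75):
    """Count ratio-test keypoint matches between two grayscale pattern
    photos; a higher score suggests the same individual."""
    sift = cv2.SIFT_create()
    _, des1 = sift.detectAndCompute(img_query, None)
    _, des2 = sift.detectAndCompute(img_db, None)
    if des1 is None or des2 is None:
        return 0
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = []
    for pair in matches:
        # Lowe's ratio test: keep a match only if it is clearly better
        # than the second-best candidate.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return len(good)
```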
1902.10847 | 2917138319 | Visual identification of individual animals that bear unique natural body markings is an important task in wildlife conservation. The photo databases of animal markings grow larger and each new observation has to be matched against thousands of images. Existing photo-identification solutions have constraints on image quality and appearance of the pattern of interest in the image. These constraints limit the use of photos from citizen scientists. We present a novel system for visual re-identification based on unique natural markings that is robust to occlusions, viewpoint and illumination changes. We adapt methods developed for face re-identification and implement a deep convolutional neural network (CNN) to learn embeddings for images of natural markings. The distance between the learned embedding points provides a dissimilarity measure between the corresponding input images. The network is optimized using the triplet loss function and the online semi-hard triplet mining strategy. The proposed re-identification method is generic and not species specific. We evaluate the proposed system on image databases of manta ray belly patterns and humpback whale flukes. To be of practical value and adopted by marine biologists, a re-identification system needs to have a top-10 accuracy of at least 95%. The proposed system achieves this performance standard. | The difference between face verification and animal re-identification is that a face image is typically normalized to an upright position whereas a pattern on an animal body is not necessarily in a canonical position and can appear at different angles. See an example of the same manta ray viewed from different vantage points in Fig. . A robust identification system should be invariant to the pose of the object of interest and viewing angle. In our previous work @cite_13 , we investigated the difficulty of recognizing a set of artificially generated patterns subjected to various projective transformations to simulate the variations in appearance of natural markings from different vantage points. This previous study explored Siamese @cite_7 and Triplet @cite_27 architectures with different loss functions for learning the homographic equivalence between patterns. It was concluded that these architectures with a relatively simple CNN at their core were suitable for pattern re-identification. The results were promising and we have now extended this approach to real images of animal markings in the wild. | {
"cite_N": [
"@cite_27",
"@cite_13",
"@cite_7"
],
"mid": [
"1975517671",
"2777843075",
"2157364932"
],
"abstract": [
"Learning fine-grained image similarity is a challenging task. It needs to capture between-class and within-class image differences. This paper proposes a deep ranking model that employs deep learning techniques to learn similarity metric directly from images. It has higher learning capability than models based on hand-crafted features. A novel multiscale network structure has been developed to describe the images effectively. An efficient triplet sampling algorithm is also proposed to learn the model with distributed asynchronized stochastic gradient. Extensive experiments show that the proposed algorithm outperforms models based on hand-crafted visual features and deep classification models.",
"Despite impressive results in object classification, verification and recognition, most deep neural network based recognition systems become brittle when the view point of the camera changes dramatically. Robustness to geometric transformations is highly desirable for applications like wild life monitoring where there is no control on the pose of the objects of interest. The images of different objects viewed from various observation points define equivalence classes where by definition two images are said to be equivalent if they are views from the same object. These equivalence classes can be learned via embeddings that map the input images to vectors of real numbers. During training, equivalent images are mapped to vectors that get pulled closer together, whereas if the images are not equivalent their associated vectors get pulled apart. In this work, we present an effective deep neural network model for learning the homographic equivalence between patterns. The long term aim of this research is to develop more robust manta ray recognizers. Manta rays bear unique natural spot patterns on their bellies. Visual identification based on these patterns from underwater images enables a better understanding of habitat use by monitoring individuals within populations. We test our model on a dataset of artificially generated patterns that resemble natural patterning. Our experiments demonstrate that the proposed architecture is able to discriminate between patterns subjected to large homographic transformations.",
"We present a method for training a similarity metric from data. The method can be used for recognition or verification applications where the number of categories is very large and not known during training, and where the number of training samples for a single category is very small. The idea is to learn a function that maps input patterns into a target space such that the L sub 1 norm in the target space approximates the \"semantic\" distance in the input space. The method is applied to a face verification task. The learning process minimizes a discriminative loss function that drives the similarity metric to be small for pairs of faces from the same person, and large for pairs from different persons. The mapping from raw to the target space is a convolutional network whose architecture is designed for robustness to geometric distortions. The system is tested on the Purdue AR face database which has a very high degree of variability in the pose, lighting, expression, position, and artificial occlusions such as dark glasses and obscuring scarves."
]
} |
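The system above trains embeddings with the triplet loss and online semi-hard mining. Below is a compact PyTorch sketch of batch-online semi-hard negative selection; the fallback rule when no semi-hard negative exists, and the assumption that every batch mixes several identities, are simplifications for illustration.

```python
import torch
import torch.nn.functional as F

def semi_hard_triplet_loss(emb, labels, margin=0.2):
    """For each anchor-positive pair, pick a negative that is farther
    than the positive but still inside the margin (semi-hard), then
    apply the standard triplet hinge loss."""
    d = torch.cdist(emb, emb)  # pairwise embedding distances
    losses = []
    for a in range(len(emb)):
        pos = (labels == labels[a]).nonzero().flatten()
        neg = (labels != labels[a]).nonzero().flatten()
        for p in pos:
            if p == a:
                continue
            dp = d[a, p]
            # semi-hard candidates: dp < d(a, n) < dp + margin
            cand = neg[(d[a, neg] > dp) & (d[a, neg] < dp + margin)]
            # simplifying fallback: closest negative if none are semi-hard
            dn = d[a, cand].min() if len(cand) else d[a, neg].min()
            losses.append(F.relu(dp - dn + margin))
    return torch.stack(losses).mean()
```

At query time, a new photo is embedded once and ranked against the database by plain Euclidean distance, which is what makes the approach fast enough for large catalogues.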
1902.10990 | 2917392503 | We develop hierarchically quantized efficient embedding representations for similarity-based search and show that this representation provides not only state of the art search accuracy but also several orders of magnitude speedup during inference. The idea is to hierarchically quantize the representation so that the quantization granularity is greatly increased while maintaining the accuracy and keeping the computational complexity low. We also show that the problem of finding the optimal sparse compound hash code respecting the hierarchical structure can be optimized in polynomial time via minimum cost flow in an equivalent flow network. This allows us to train the method end-to-end in a mini-batch stochastic gradient descent setting. Our experiments on Cifar100 and ImageNet datasets show state of the art search accuracy while providing several orders of magnitude search speedup over exhaustive linear search on the respective datasets. | Embedding representation learning with neural networks has its roots in Siamese networks @cite_15 @cite_1 , where a network is trained end-to-end to pull similar examples close to each other and push dissimilar examples at least some margin away from each other in the embedding space. @cite_15 demonstrated that the idea could be used for signature verification tasks. This line of work has since been explored in a wide variety of practical applications such as face recognition @cite_5 , domain adaptation @cite_7 , zero-shot learning @cite_17 @cite_11 , video representation learning @cite_34 , and similarity-based interior design @cite_8 , etc. | {
"cite_N": [
"@cite_11",
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_5",
"@cite_15",
"@cite_34",
"@cite_17"
],
"mid": [
"2962830213",
"2551835155",
"2021354639",
"",
"2145287260",
"2127589108",
"219040644",
"2549858646"
],
"abstract": [
"This paper addresses the task of zero-shot image classification. The key contribution of the proposed approach is to control the semantic embedding of images – one of the main ingredients of zero-shot learning – by formulating it as a metric learning problem. The optimized empirical criterion associates two types of sub-task constraints: metric discriminating capacity and accurate attribute prediction. This results in a novel expression of zero-shot learning not requiring the notion of class in the training phase: only pairs of image attributes, augmented with a consistency indicator, are given as ground truth. At test time, the learned model can predict the consistency of a test image with a given set of attributes, allowing flexible ways to produce recognition inferences. Despite its simplicity, the proposed approach gives state-of-the-art results on four challenging datasets used for zero-shot recognition evaluation.",
"Supervised learning with large scale labelled datasets and deep layered models has caused a paradigm shift in diverse areas in learning and recognition. However, this approach still suffers from generalization issues under the presence of a domain shift between the training and the test data distribution. Since unsupervised domain adaptation algorithms directly address this domain shift problem between a labelled source dataset and an unlabelled target dataset, recent papers have shown promising results by fine-tuning the networks with domain adaptation loss functions which try to align the mismatch between the training and testing data distributions. Nevertheless, these recent deep learning based domain adaptation approaches still suffer from issues such as high sensitivity to the gradient reversal hyperparameters and overfitting during the fine-tuning stage. In this paper, we propose a unified deep learning framework where the representation, cross domain transformation, and target label inference are all jointly optimized in an end-to-end fashion for unsupervised domain adaptation. Our experiments show that the proposed method significantly outperforms state-of-the-art algorithms in both object recognition and digit classification experiments by a large margin. We will make our learned models as well as the source code available immediately upon acceptance.",
"Popular sites like Houzz, Pinterest, and LikeThatDecor, have communities of users helping each other answer questions about products in images. In this paper we learn an embedding for visual search in interior design. Our embedding contains two different domains of product images: products cropped from internet scenes, and products in their iconic form. With such a multi-domain embedding, we demonstrate several applications of visual search including identifying products in scenes and finding stylistically similar products. To obtain the embedding, we train a convolutional neural network on pairs of images. We explore several training architectures including re-purposing object classifiers, using siamese networks, and using multitask learning. We evaluate our search quantitatively and qualitatively and demonstrate high quality results for search across multiple visual domains, enabling new applications in interior design.",
"",
"In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4, 000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35 on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27 , closely approaching human-level performance.",
"This paper describes an algorithm for verification of signatures written on a pen-input tablet. The algorithm is based on a novel, artificial neural network, called a \"Siamese\" neural network. This network consists of two identical sub-networks joined at their outputs. During training the two sub-networks extract features from two signatures, while the joining neuron measures the distance between the two feature vectors. Verification consists of comparing an extracted feature vector with a stored feature vector for the signer. Signatures closer to this stored representation than a chosen threshold are accepted, all other signatures are rejected as forgeries.",
"Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52 mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4 . We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation.",
"Riding on the waves of deep neural networks, deep metric learning has achieved promising results in various tasks by using triplet network or Siamese network. Though the basic goal of making images from the same category closer than the ones from different categories is intuitive, it is hard to optimize the objective directly due to the quadratic or cubic sample size. Hard example mining is widely used to solve the problem, which spends the expensive computation on a subset of samples that are considered hard. However, hard is defined relative to a specific model. Then complex models will treat most samples as easy ones and vice versa for simple models, both of which are not good for training. It is difficult to define a model with the just right complexity and choose hard examples adequately as different samples are of diverse hard levels. This motivates us to propose the novel framework named Hard-Aware Deeply Cascaded Embedding(HDC) to ensemble a set of models with different complexities in cascaded manner to mine hard examples at multiple levels. A sample is judged by a series of models with increasing complexities and only updates models that consider the sample as a hard case. The HDC is evaluated on CARS196, CUB-200-2011, Stanford Online Products, VehicleID and DeepFashion datasets, and outperforms state-of-the-art methods by a large margin."
]
} |
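The Siamese training signal described in the row above is the classic contrastive objective: pull same-identity pairs together and push different-identity pairs at least a margin apart. A minimal PyTorch sketch, where `same` is assumed to be a 0/1 float tensor marking equivalent pairs:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(e1, e2, same, margin=1.0):
    """Siamese objective: similar pairs (same == 1) are pulled together,
    dissimilar pairs (same == 0) are pushed at least `margin` apart."""
    d = F.pairwise_distance(e1, e2)
    loss_similar = same * d.pow(2)
    loss_dissimilar = (1 - same) * F.relu(margin - d).pow(2)
    return (loss_similar + loss_dissimilar).mean()
```

The triplet loss used elsewhere in this section is the relative-comparison counterpart of this pairwise objective.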
1902.10731 | 2917729910 | We present a private learner for halfspaces over an arbitrary finite domain @math with sample complexity @math . The building block for this learner is a differentially private algorithm for locating an approximate center point of @math points -- a high-dimensional generalization of the median function. Our construction establishes a relationship between these two problems that is reminiscent of the relation between the median and learning one-dimensional thresholds [ FOCS '15]. This relationship suggests that the problem of privately locating a center point may have further applications in the design of differentially private algorithms. We also provide a lower bound on the sample complexity for privately finding a point in the convex hull. For approximate differential privacy, we show a lower bound of @math , whereas for pure differential privacy @math . | Most closely related to our work is the literature on private learning and its sample and time complexity by @cite_29 @cite_22 @cite_30 @cite_7 @cite_17 @cite_11 @cite_18 @cite_28 @cite_13 . As some of these works demonstrate efficiency gaps between private and non-private learning, alternative models have been explored, including semi-supervised learning ( @cite_16 ), learning multiple concepts ( @cite_27 ), and prediction ( @cite_5 , @cite_21 ). | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_11",
"@cite_22",
"@cite_7",
"@cite_28",
"@cite_29",
"@cite_21",
"@cite_27",
"@cite_5",
"@cite_16",
"@cite_13",
"@cite_17"
],
"mid": [
"1981635503",
"",
"2962958653",
"",
"2401502000",
"2171200367",
"2010523825",
"",
"2260333220",
"",
"2106714532",
"744425561",
"2096223017"
],
"abstract": [
"Learning is a task that generalizes many of the analyses that are applied to collections of data, in particular, to collections of sensitive individual information. Hence, it is natural to ask what can be learned while preserving individual privacy. (in SIAM J. Comput., 40(3):793---826, 2011) initiated such a discussion. They formalized the notion of private learning, as a combination of PAC learning and differential privacy, and investigated what concept classes can be learned privately. Somewhat surprisingly, they showed that for finite, discrete domains (ignoring time complexity), every PAC learning task could be performed privately with polynomially many labeled examples; in many natural cases this could even be done in polynomial time. While these results seem to equate non-private and private learning, there is still a significant gap: the sample complexity of (non-private) PAC learning is crisply characterized in terms of the VC-dimension of the concept class, whereas this relationship is lost in the constructions of private learners, which exhibit, generally, a higher sample complexity. Looking into this gap, we examine several private learning tasks and give tight bounds on their sample complexity. In particular, we show strong separations between sample complexities of proper and improper private learners (such separation does not exist for non-private learners), and between sample complexities of efficient and inefficient proper private learners. Our results show that VC-dimension is not the right measure for characterizing the sample complexity of proper private learning. We also examine the task of private data release (as initiated by in STOC, pp. 609---618, 2008), and give new lower bounds on the sample complexity. Our results show that the logarithmic dependence on size of the instance space is essential for private data release.",
"",
"In this work we analyze the sample complexity of classification by differentially private algorithms. Differential privacy is a strong and well-studied notion of privacy introduced by [Lecture Notes in Comput. Sci. 3876, Springer, New York, 2006, pp. 265--284] that ensures that the output of an algorithm leaks little information about the data point provided by any of the participating individuals. Sample complexity of private probably approximately correct (PAC) and agnostic learning was studied in a number of prior works starting with [SIAM J. Comput., 40 (2011), pp. 793--826]. However, a number of basic questions remain open [A. Beimel, S. P. Kasiviswanathan, and K. Nissim, Lecture Notes in Comput. Sci. 5978, Springer, New York, 2006, pp. 437--454; K. Chaudhuri and D. Hsu, Proceedings of Conference in Learning Theory, 2011, pp. 155--186; A. Beimel, K. Nissim, and U. Stemmer, Proceedings of the 4th Conference on Innovations in Theoretical Computer Science, 2013, pp. 9...",
"",
"",
"We prove new upper and lower bounds on the sample complexity of (a#x03B5;, a#x03B4;) differentially private algorithms for releasing approximate answers to threshold functions. A threshold function cx over a totally ordered domain X evaluates to cx(y) = 1 if y a#x2264; x, and evaluates to 0 otherwise. We give the first nontrivial lower bound for releasing thresholds with (a#x03B5;, a#x03B4;) differential privacy, showing that the task is impossible over an infinite domain X, and moreover requires sample complexity n a#x2265; (log* |X|), which grows with the size of the domain. Inspired by the techniques used to prove this lower bound, we give an algorithm for releasing thresholds with n a#x2264; 2(1+o(1)) log* |X| samples. This improves the previous best upper bound of 8(1+o(1)) log* |X| (, RANDOM'13). Our sample complexity upper and lower bounds also apply to the tasks of learning distributions with respect to Kolmogorov distance and of properly PAC learning thresholds with differential privacy. The lower bound gives the first separation between the sample complexity of properly learning a concept class with (a#x03B5;, a#x03B4;) differential privacy and learning without privacy. For properly learning thresholds in 'dimensions, this lower bound extends to n a#x2265; (l a#x2219; log* |X|). To obtain our results, we give reductions in both directions from releasing and properly learning thresholds and the simpler interior point problem. Given a database D of elements from X, the interior point problem asks for an element between the smallest and largest elements in D. We introduce new recursive constructions for bounding the sample complexity of the interior point problem, as well as further reductions and techniques for proving impossibility results for other basic problems in differential privacy.",
"We consider a statistical database in which a trusted administrator introduces noise to the query responses with the goal of maintaining privacy of individual database entries. In such a database, a query consists of a pair (S, f) where S is a set of rows in the database and f is a function mapping database rows to 0, 1 . The true answer is Σ ieS f(d i ), and a noisy version is released as the response to the query. Results of Dinur, Dwork, and Nissim show that a strong form of privacy can be maintained using a surprisingly small amount of noise -- much less than the sampling error -- provided the total number of queries is sublinear in the number of database rows. We call this query and (slightly) noisy reply the SuLQ (Sub-Linear Queries) primitive. The assumption of sublinearity becomes reasonable as databases grow increasingly large.We extend this work in two ways. First, we modify the privacy analysis to real-valued functions f and arbitrary row types, as a consequence greatly improving the bounds on noise required for privacy. Second, we examine the computational power of the SuLQ primitive. We show that it is very powerful indeed, in that slightly noisy versions of the following computations can be carried out with very few invocations of the primitive: principal component analysis, k means clustering, the Perceptron Algorithm, the ID3 algorithm, and (apparently!) all algorithms that operate in the in the statistical query learning model [11].",
"",
"We investigate the direct-sum problem in the context of differentially private PAC learning: What is the sample complexity of solving k learning tasks simultaneously under differential privacy, and how does this cost compare to that of solving k learning tasks without privacy? In our setting, an individual example consists of a domain element x labeled by k unknown concepts (c1,...,ck). The goal of a multi-learner is to output k hypotheses (h1,...,hk) that generalize the input examples. Without concern for privacy, the sample complexity needed to simultaneously learn @math concepts is essentially the same as needed for learning a single concept. Under differential privacy, the basic strategy of learning each hypothesis independently yields sample complexity that grows polynomially with k. For some concept classes, we give multi-learners that require fewer samples than the basic strategy. Unfortunately, however, we also give lower bounds showing that even for very simple concept classes, the sample cost of private multi-learning must grow polynomially in k.",
"",
"A private learner is an algorithm that given a sample of labeled individual examples outputs a generalizing hypothesis while preserving the privacy of each individual. In 2008, (FOCS 2008) gave a generic construction of private learners, in which the sample complexity is (generally) higher than what is needed for non-private learners. This gap in the sample complexity was then further studied in several followup papers, showing that (at least in some cases) this gap is unavoidable. Moreover, those papers considered ways to overcome the gap, by relaxing either the privacy or the learning guarantees of the learner. We suggest an alternative approach, inspired by the (non-private) models of semi-supervised learning and active-learning, where the focus is on the sample complexity of labeled examples whereas unlabeled examples are of a significantly lower cost. We consider private semi-supervised learners that operate on a random sample, where only a (hopefully small) portion of this sample is labeled. The learners have no control over which of the sample elements are labeled. Our main result is that the labeled sample complexity of private learners is characterized by the VC dimension. We present two generic constructions of private semi-supervised learners. The first construction is of learners where the labeled sample complexity is proportional to the VC dimension of the concept class, however, the unlabeled sample complexity of the algorithm is as big as the representation length of domain elements. Our second construction presents a new technique for decreasing the labeled sample complexity of a given private learner, while roughly maintaining its unlabeled sample complexity. In addition, we show that in some settings the labeled sample complexity does not depend on the privacy parameters of the learner.",
"An order-revealing encryption scheme gives a public procedure by which two ciphertexts can be compared to reveal the ordering of their underlying plaintexts. We show how to use order-revealing encryption to separate computationally efficient PAC learning from efficient @math -differentially private PAC learning. That is, we construct a concept class that is efficiently PAC learnable, but for which every efficient learner fails to be differentially private. This answers a question of FOCS '08, SIAM J. Comput. '11. To prove our result, we give a generic transformation from an order-revealing encryption scheme into one with strongly correct comparison, which enables the consistent comparison of ciphertexts that are not obtained as the valid encryption of any message. We believe this construction may be of independent interest.",
"In 2008, Kasiviswanathan el al. defined private learning as a combination of PAC learning and differential privacy [16]. Informally, a private learner is applied to a collection of labeled individual information and outputs a hypothesis while preserving the privacy of each individual. gave a generic construction of private learners for (finite) concept classes, with sample complexity logarithmic in the size of the concept class. This sample complexity is higher than what is needed for non-private learners, hence leaving open the possibility that the sample complexity of private learning may be sometimes significantly higher than that of non-private learning. We give a combinatorial characterization of the sample size sufficient and necessary to privately learn a class of concepts. This characterization is analogous to the well known characterization of the sample complexity of non-private learning in terms of the VC dimension of the concept class. We introduce the notion of probabilistic representation of a concept class, and our new complexity measure RepDim corresponds to the size of the smallest probabilistic representation of the concept class. We show that any private learning algorithm for a concept class C with sample complexity m implies RepDim(C) = O(m), and that there exists a private learning algorithm with sample complexity m = O(RepDim(C)). We further demonstrate that a similar characterization holds for the database size needed for privately computing a large class of optimization problems and also for the well studied problem of private data release."
]
} |
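For reference, the (ε, δ) guarantees invoked throughout the abstracts above follow the standard definition of differential privacy. The statement below is a generic textbook formulation, not taken from any of the cited papers; the symbols M, D, D', S are the conventional ones.

```latex
% A randomized algorithm M is (\varepsilon, \delta)-differentially private if,
% for every pair of neighboring databases D, D' (differing in a single record)
% and every set S of possible outputs:
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in S] + \delta
% Pure differential privacy is the special case \delta = 0; the "approximate"
% variant discussed in the abstracts allows a small \delta > 0.
```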
1902.10731 | 2917729910 | We present a private learner for halfspaces over an arbitrary finite domain @math with sample complexity @math . The building block for this learner is a differentially private algorithm for locating an approximate center point of @math points -- a high-dimensional generalization of the median function. Our construction establishes a relationship between these two problems that is reminiscent of the relation between the median and learning one-dimensional thresholds [FOCS '15]. This relationship suggests that the problem of privately locating a center point may have further applications in the design of differentially private algorithms. We also provide a lower bound on the sample complexity for privately finding a point in the convex hull. For approximate differential privacy, we show a lower bound of @math , whereas for pure differential privacy @math . | @cite_19 showed an efficient (non-private) learner for halfspaces that works in (a variant of) the statistical query (SQ) model of @cite_12 . It is known that SQ learners can be transformed to preserve differential privacy, and the algorithm of @cite_19 yields a differentially private efficient learner for halfspaces over examples from @math with sample complexity @math . Another related work is that of @cite_10 who constructed an algorithm for approximately solving linear programs with differential privacy. While learning halfspaces non-privately easily reduces to solving linear programs, it is not clear whether the results of @cite_10 imply a private learner for halfspaces (due to the types of errors they incur). | {
"cite_N": [
"@cite_19",
"@cite_10",
"@cite_12"
],
"mid": [
"2069730673",
"",
"1995897489"
],
"abstract": [
"The perceptron algorithm, developed mainly in the machine learning literature, is a simple greedy method for finding a feasible solution to a linear program (alternatively, for learning a threshold function). In spite of its exponential worst-case complexity, it is often quite useful, in part due to its noise-tolerance and also its overall simplicity. In this paper, we show that a randomized version of the perceptron algorithm along with periodic rescaling runs in polynomial-time. The resulting algorithm for linear programming has an elementary description and analysis.",
"",
"In this paper, we study the problem of learning in the presence of classification noise in the probabilistic learning model of Valiant and its variants. In order to identify the class of “robust” learning algorithms in the most general way, we formalize a new but related model of learning from statistical queries . Intuitively, in this model a learning algorithm is forbidden to examine individual examples of the unknown target function, but is given acess to an oracle providing estimates of probabilities over the sample space of random examples. One of our main results shows that any class of functions learnable from statistical queries is in fact learnable with classification noise in Valiant's model, with a noise rate approaching the information-theoretic barrier of 1 2. We then demonstrate the generality of the statistical query model, showing that practically every class learnable in Valiant's model and its variants can also be learned in the new model (and thus can be learned in the presence of noise). A notable exception to this statement is the class of parity functions, which we prove is not learnable from statistical queries, and for which no noise-tolerant algorithm is known."
]
} |
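The cited abstract describes the perceptron as a simple greedy method for finding a feasible solution to a linear program. A minimal sketch of the classic update it builds on is below; function and variable names are illustrative, and the polynomial-time variant from the abstract (randomization plus periodic rescaling) is not shown.

```python
import numpy as np

def perceptron(X, y, max_iters=1000):
    """Classic perceptron: seek w with y_i * <w, x_i> > 0 for all i, if one exists."""
    w = np.zeros(X.shape[1])
    for _ in range(max_iters):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * np.dot(w, xi) <= 0:  # constraint violated: greedy additive fix
                w += yi * xi
                mistakes += 1
        if mistakes == 0:  # all constraints satisfied -> feasible point found
            return w
    return w  # may still be infeasible if max_iters is too small (or no solution exists)
```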
1902.10649 | 2933330146 | Voice-assistant text classification and named-entity recognition (NER) models are trained on millions of example utterances. Because of the large datasets, long training time is one of the bottlenecks for releasing improved models. In this work, we develop F10-SGD, a fast optimizer for text classification and NER elastic-net linear models. On internal datasets, F10-SGD provides a 4x reduction in training time compared to the OWL-QN optimizer without loss of accuracy or increase in model size. Furthermore, we incorporate biased sampling that prioritizes harder examples towards the end of the training. As a result, in addition to faster training, we were able to obtain statistically significant accuracy improvements for NER. On public datasets, F10-SGD obtains 22% faster training time compared to FastText for text classification. And a 4x reduction in training time compared to CRFSuite OWL-QN for NER. | FastText @cite_12 is a linear embedding model for text classification. It supports asynchronous multi-threaded SGD training via Hogwild @cite_9 , which makes training fast. However, FastText does not support L2 or dropout regularization, leading to suboptimal performance on small datasets. Also, it does not support L1 for feature selection, but it does have a quantization option @cite_4 to reduce the model size after training. | {
"cite_N": [
"@cite_9",
"@cite_4",
"@cite_12"
],
"mid": [
"2951781666",
"2563351168",
"2468328197"
],
"abstract": [
"Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art performance on a variety of machine learning tasks. Several researchers have recently proposed schemes to parallelize SGD, but all require performance-destroying memory locking and synchronization. This work aims to show using novel theoretical analysis, algorithms, and implementation that SGD can be implemented without any locking. We present an update scheme called HOGWILD! which allows processors access to shared memory with the possibility of overwriting each other's work. We show that when the associated optimization problem is sparse, meaning most gradient updates only modify small parts of the decision variable, then HOGWILD! achieves a nearly optimal rate of convergence. We demonstrate experimentally that HOGWILD! outperforms alternative schemes that use locking by an order of magnitude.",
"We consider the problem of producing compact architectures for text classification, such that the full model fits in a limited amount of memory. After considering different solutions inspired by the hashing literature, we propose a method built upon product quantization to store the word embeddings. While the original technique leads to a loss in accuracy, we adapt this method to circumvent the quantization artifacts. As a result, our approach produces a text classifier, derived from the fastText approach, which at test time requires only a fraction of the memory compared to the original one, without noticeably sacrificing the quality in terms of classification accuracy. Our experiments carried out on several benchmarks show that our approach typically requires two orders of magnitude less memory than fastText while being only slightly inferior with respect to accuracy. As a result, it outperforms the state of the art by a good margin in terms of the compromise between memory usage and accuracy.",
"This paper explores a simple and efficient baseline for text classification. Our experiments show that our fast text classifier fastText is often on par with deep learning classifiers in terms of accuracy, and many orders of magnitude faster for training and evaluation. We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute."
]
} |
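Since the row above contrasts FastText's plain SGD with elastic-net (L1 + L2) training, a sketch of one proximal SGD step for an elastic-net objective may help. This is a generic textbook update, not the F10-SGD implementation; all names are illustrative.

```python
import numpy as np

def elastic_net_sgd_step(w, grad_loss, lr, l1, l2):
    """One SGD step on loss(w) + l1*||w||_1 + (l2/2)*||w||^2."""
    # Gradient step on the smooth part: data-loss gradient plus L2 (ridge) term.
    w = w - lr * (grad_loss + l2 * w)
    # Proximal (soft-thresholding) step for the non-smooth L1 penalty;
    # this is what produces exact zeros, i.e., feature selection.
    return np.sign(w) * np.maximum(np.abs(w) - lr * l1, 0.0)
```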
1902.10671 | 2930126691 | Object detection models based on convolutional neural networks (CNNs) demonstrate impressive performance when trained on large-scale labeled datasets. While a generic object detector trained on such a dataset performs adequately in applications where the input data is similar to user photographs, the detector performs poorly on small objects, particularly ones with limited training data or imaged from uncommon viewpoints. Also, a specific room will have many objects that are missed by standard object detectors, frustrating a robot that continually operates in the same indoor environment. This paper describes a system for rapidly creating customized object detectors. Data is collected from a quadcopter that is teleoperated with an interactive interface. Once an object is selected, the quadcopter autonomously photographs the object from multiple viewpoints to create training data that is used to train DUNet (Dense Upscaled Network), our proposed model for learning customized object detectors from scratch given limited data. Our experiments compare the performance of learning models from scratch with DUNet vs. fine-tuning existing state-of-the-art object detectors, both on our indoor robotics domain and on standard datasets. | Image datasets are typically sourced from the Internet, but there is also a growing trend of datasets, particularly for robotics applications, collected directly from the real world. For instance, the KITTI dataset @cite_26 consists of roadway images taken from a car driving in an urban environment. There has also been significant recent progress in efficiently collecting large quantities of visual data using robots, including smart user interfaces for semi-automated data collection using drones (e.g., @cite_7 ) and indoor mobile robots (e.g., @cite_30 ). | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_7"
],
"mid": [
"2754002491",
"",
"2807155006"
],
"abstract": [
"The ability to recognize objects is an essential skill for a robotic system acting in human-populated environments. Despite decades of effort from the robotic and vision research communities, robots are still missing good visual perceptual systems, preventing the use of autonomous agents for real-world applications. The progress is slowed down by the lack of a testbed able to accurately represent the world perceived by the robot in-the-wild. In order to fill this gap, we introduce a large-scale, multi-view object dataset collected with an RGB-D camera mounted on a mobile robot. The dataset embeds the challenges faced by a robot in a real-life application and provides a useful tool for validating object recognition algorithms. Besides describing the characteristics of the dataset, the paper evaluates the performance of a collection of well-established deep convolutional networks on the new dataset and analyzes the transferability of deep representations from Web images to robotic data. Despite the promising results obtained with such representations, the experiments demonstrate that object classification with real-life robotic data is far from being solved. Finally, we provide a comparative study to analyze and highlight the open challenges in robot vision, explaining the discrepancies in the performance.",
"",
"Drones are a versatile platform for both amateur and professional photographers, enabling them to capture photos that are impossible to shoot with ground-based cameras. However, when guided by inexperienced pilots, they have a high incidence of collisions, crashes, and poorly framed photographs. This paper presents an intelligent user interface for photographing objects that is robust against navigation errors and reliably collects high quality photographs. By retaining the human in the loop, our system is faster and more selective than purely autonomous UAVs that employ simple coverage algorithms. The intelligent user interface operates in multiple modes, allowing the user to either directly control the quadcopter or fly in a semi-autonomous mode around a target object in the environment. To evaluate the interface, users completed a data set collection task in which they were asked to photograph objects from multiple views. Our sketchbased control paradigm facilitated task completion, reduced crashes, and was favorably reviewed by the participants."
]
} |
1902.10671 | 2930126691 | Object detection models based on convolutional neural networks (CNNs) demonstrate impressive performance when trained on large-scale labeled datasets. While a generic object detector trained on such a dataset performs adequately in applications where the input data is similar to user photographs, the detector performs poorly on small objects, particularly ones with limited training data or imaged from uncommon viewpoints. Also, a specific room will have many objects that are missed by standard object detectors, frustrating a robot that continually operates in the same indoor environment. This paper describes a system for rapidly creating customized object detectors. Data is collected from a quadcopter that is teleoperated with an interactive interface. Once an object is selected, the quadcopter autonomously photographs the object from multiple viewpoints to create training data that is used to train DUNet (Dense Upscaled Network), our proposed model for learning customized object detectors from scratch given limited data. Our experiments compare the performance of learning models from scratch with DUNet vs. fine-tuning existing state-of-the-art object detectors, both on our indoor robotics domain and on standard datasets. | Convolutional neural networks (CNNs) @cite_23 were initially applied to handwritten digit recognition, but AlexNet @cite_0 showed that they outperform traditional techniques such as deformable part models @cite_2 on image labeling. Since then, there have been consistent improvements to the state-of-the-art based on extensions to CNN-based architectures, such as VGG @cite_9 , GoogLeNet @cite_4 and DenseNet @cite_27 . | {
"cite_N": [
"@cite_4",
"@cite_9",
"@cite_0",
"@cite_27",
"@cite_23",
"@cite_2"
],
"mid": [
"2950179405",
"1686810756",
"",
"",
"2147800946",
"2120419212"
],
"abstract": [
"We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"",
"",
"The ability of learning networks to generalize can be greatly enhanced by providing constraints from the task domain. This paper demonstrates how such constraints can be integrated into a backpropagation network through the architecture of the network. This approach has been successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service. A single network learns the entire recognition operation, going from the normalized image of the character to the final classification.",
"This paper describes a discriminatively trained, multiscale, deformable part model for object detection. Our system achieves a two-fold improvement in average precision over the best performance in the 2006 PASCAL person detection challenge. It also outperforms the best results in the 2007 challenge in ten out of twenty categories. The system relies heavily on deformable parts. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL challenge. Our system also relies heavily on new methods for discriminative training. We combine a margin-sensitive approach for data mining hard negative examples with a formalism we call latent SVM. A latent SVM, like a hidden CRF, leads to a non-convex training problem. However, a latent SVM is semi-convex and the training problem becomes convex once latent information is specified for the positive examples. We believe that our training methods will eventually make possible the effective use of more latent information such as hierarchical (grammar) models and models involving latent three dimensional pose."
]
} |
1902.10671 | 2930126691 | Object detection models based on convolutional neural networks (CNNs) demonstrate impressive performance when trained on large-scale labeled datasets. While a generic object detector trained on such a dataset performs adequately in applications where the input data is similar to user photographs, the detector performs poorly on small objects, particularly ones with limited training data or imaged from uncommon viewpoints. Also, a specific room will have many objects that are missed by standard object detectors, frustrating a robot that continually operates in the same indoor environment. This paper describes a system for rapidly creating customized object detectors. Data is collected from a quadcopter that is teleoperated with an interactive interface. Once an object is selected, the quadcopter autonomously photographs the object from multiple viewpoints to create training data that is used to train DUNet (Dense Upscaled Network), our proposed model for learning customized object detectors from scratch given limited data. Our experiments compare the performance of learning models from scratch with DUNet vs. fine-tuning existing state-of-the-art object detectors, both on our indoor robotics domain and on standard datasets. | CNNs were also instrumental in recent progress on object localization, starting with MultiBox @cite_28 . Inspired by classification models, R-CNN @cite_15 used cropped boxes from the original image as input to a neural network classifier. Unfortunately, R-CNN was computationally expensive since it repeatedly processed the same pixels whenever they appeared in different overlapping regions. Fast R-CNN @cite_22 addressed this defect by first pushing the entire image through a feature extractor, thus amortizing the computation across the set of anchor boxes. This set of ideas has culminated in Faster R-CNN @cite_3 , where region proposals are efficiently generated using a fully convolutional network. While Faster R-CNN can process several images per second, it is typically still too slow for most mobile or robotics applications that demand real-time performance on compute-constrained platforms. This has motivated a series of object detection models, such as SSD @cite_19 and YOLO @cite_13 that aim for high quality detections at near real-time speed. | {
"cite_N": [
"@cite_22",
"@cite_28",
"@cite_3",
"@cite_19",
"@cite_15",
"@cite_13"
],
"mid": [
"",
"",
"2613718673",
"2193145675",
"2102605133",
"2796347433"
],
"abstract": [
"",
"",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.",
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that's pretty swell. It's a little bigger than last time but more accurate. It's still fast though, don't worry. At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 mAP@50 in 51 ms on a Titan X, compared to 57.5 mAP@50 in 198 ms by RetinaNet, similar performance but 3.8x faster. As always, all the code is online at this https URL"
]
} |
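Detectors like SSD and Faster R-CNN in the row above assign anchor (default) boxes to ground-truth objects by intersection-over-union. A self-contained sketch of that computation follows; the corner-coordinate box format and the 0.5 threshold are common conventions, not taken from the cited papers.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2), with x2 > x1, y2 > y1."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Typical matching rule: an anchor counts as a positive example for a
# ground-truth box when iou(anchor, gt) exceeds a threshold such as 0.5.
```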
1902.10671 | 2930126691 | Object detection models based on convolutional neural networks (CNNs) demonstrate impressive performance when trained on large-scale labeled datasets. While a generic object detector trained on such a dataset performs adequately in applications where the input data is similar to user photographs, the detector performs poorly on small objects, particularly ones with limited training data or imaged from uncommon viewpoints. Also, a specific room will have many objects that are missed by standard object detectors, frustrating a robot that continually operates in the same indoor environment. This paper describes a system for rapidly creating customized object detectors. Data is collected from a quadcopter that is teleoperated with an interactive interface. Once an object is selected, the quadcopter autonomously photographs the object from multiple viewpoints to create training data that is used to train DUNet (Dense Upscaled Network), our proposed model for learning customized object detectors from scratch given limited data. Our experiments compare the performance of learning models from scratch with DUNet vs. fine-tuning existing state-of-the-art object detectors, both on our indoor robotics domain and on standard datasets. | Our work is informed by the comprehensive experiments on object detection speed/accuracy trade-offs conducted by @cite_18 , where SSD + MobileNet emerges as a very strong baseline for our application. However, we saw opportunities for improving customized object detectors, drawing inspiration from recent work on feature extraction in DenseNet @cite_27 , fully-convolutional approaches to semantic segmentation such as Tiramisu @cite_29 , and recent multi-scale approaches for object detection, such as FPN @cite_16 and TDM @cite_21 . | {
"cite_N": [
"@cite_18",
"@cite_29",
"@cite_21",
"@cite_27",
"@cite_16"
],
"mid": [
"2953390309",
"2950975557",
"2572745118",
"",
"2949533892"
],
"abstract": [
"The goal of this paper is to serve as a guide for selecting a detection architecture that achieves the right speed memory accuracy balance for a given application and platform. To this end, we investigate various ways to trade accuracy for speed and memory usage in modern convolutional object detection systems. A number of successful systems have been proposed in recent years, but apples-to-apples comparisons are difficult due to different base feature extractors (e.g., VGG, Residual Networks), different default image resolutions, as well as different hardware and software platforms. We present a unified implementation of the Faster R-CNN [, 2015], R-FCN [, 2016] and SSD [, 2015] systems, which we view as \"meta-architectures\" and trace out the speed accuracy trade-off curve created by using alternative feature extractors and varying other critical parameters such as image size within each of these meta-architectures. On one extreme end of this spectrum where speed and memory are critical, we present a detector that achieves real time speeds and can be deployed on a mobile device. On the opposite end in which accuracy is critical, we present a detector that achieves state-of-the-art performance measured on the COCO detection task.",
"State-of-the-art approaches for semantic image segmentation are built on Convolutional Neural Networks (CNNs). The typical segmentation architecture is composed of (a) a downsampling path responsible for extracting coarse semantic features, followed by (b) an upsampling path trained to recover the input image resolution at the output of the model and, optionally, (c) a post-processing module (e.g. Conditional Random Fields) to refine the model predictions. Recently, a new CNN architecture, Densely Connected Convolutional Networks (DenseNets), has shown excellent results on image classification tasks. The idea of DenseNets is based on the observation that if each layer is directly connected to every other layer in a feed-forward fashion then the network will be more accurate and easier to train. In this paper, we extend DenseNets to deal with the problem of semantic segmentation. We achieve state-of-the-art results on urban scene benchmark datasets such as CamVid and Gatech, without any further post-processing module nor pretraining. Moreover, due to smart construction of the model, our approach has much less parameters than currently published best entries for these datasets. Code to reproduce the experiments is available here : this https URL",
"In recent years, we have seen tremendous progress in the field of object detection. Most of the recent improvements have been achieved by targeting deeper feedforward networks. However, many hard object categories such as bottle, remote, etc. require representation of fine details and not just coarse, semantic representations. But most of these fine details are lost in the early convolutional layers. What we need is a way to incorporate finer details from lower layers into the detection architecture. Skip connections have been proposed to combine high-level and low-level features, but we argue that selecting the right features from low-level requires top-down contextual information. Inspired by the human visual pathway, in this paper we propose top-down modulations as a way to incorporate fine details into the detection framework. Our approach supplements the standard bottom-up, feedforward ConvNet with a top-down modulation (TDM) network, connected using lateral connections. These connections are responsible for the modulation of lower layer filters, and the top-down network handles the selection and integration of contextual information and low-level features. The proposed TDM architecture provides a significant boost on the COCO testdev benchmark, achieving 28.6 AP for VGG16, 35.2 AP for ResNet101, and 37.3 for InceptionResNetv2 network, without any bells and whistles (e.g., multi-scale, iterative box refinement, etc.).",
"",
"Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available."
]
} |
1902.10671 | 2930126691 | Object detection models based on convolutional neural networks (CNNs) demonstrate impressive performance when trained on large-scale labeled datasets. While a generic object detector trained on such a dataset performs adequately in applications where the input data is similar to user photographs, the detector performs poorly on small objects, particularly ones with limited training data or imaged from uncommon viewpoints. Also, a specific room will have many objects that are missed by standard object detectors, frustrating a robot that continually operates in the same indoor environment. This paper describes a system for rapidly creating customized object detectors. Data is collected from a quadcopter that is teleoperated with an interactive interface. Once an object is selected, the quadcopter autonomously photographs the object from multiple viewpoints to create training data that is used to train DUNet (Dense Upscaled Network), our proposed model for learning customized object detectors from scratch given limited data. Our experiments compare the performance of learning models from scratch with DUNet vs. fine-tuning existing state-of-the-art object detectors, both on our indoor robotics domain and on standard datasets. | The standard approach to customizing an object detector is via domain transfer -- e.g., replacing the final layer in a strong pre-trained model and fine-tuning it on the new data. However, we see significant advantages to training custom object detectors from scratch, such as DSOD @cite_10 , which demonstrates competitive performance, albeit not in real-time. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2743388417"
],
"abstract": [
"We present Deeply Supervised Object Detector (DSOD), a framework that can learn object detectors from scratch. State-of-the-art object objectors rely heavily on the off-the-shelf networks pre-trained on large-scale classification datasets like ImageNet, which incurs learning bias due to the difference on both the loss functions and the category distributions between classification and detection tasks. Model fine-tuning for the detection task could alleviate this bias to some extent but not fundamentally. Besides, transferring pre-trained models from classification to detection between discrepant domains is even more difficult (e.g. RGB to depth images). A better solution to tackle these two critical problems is to train object detectors from scratch, which motivates our proposed DSOD. Previous efforts in this direction mostly failed due to much more complicated loss functions and limited training data in object detection. In DSOD, we contribute a set of design principles for training object detectors from scratch. One of the key findings is that deep supervision, enabled by dense layer-wise connections, plays a critical role in learning a good detector. Combining with several other principles, we develop DSOD following the single-shot detection (SSD) framework. Experiments on PASCAL VOC 2007, 2012 and MS COCO datasets demonstrate that DSOD can achieve better results than the state-of-the-art solutions with much more compact models. For instance, DSOD outperforms SSD on all three benchmarks with real-time detection speed, while requires only 1 2 parameters to SSD and 1 10 parameters to Faster RCNN. Our code and models are available at: this https URL ."
]
} |
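The "domain transfer" baseline described in the row above (replace the final layer of a pre-trained network, then fine-tune on the new data) looks roughly like the following torchvision sketch. The choice of ResNet-18 and `num_classes` are assumptions for illustration, and `pretrained=True` is the older torchvision API (newer releases use a `weights=` argument).

```python
import torch.nn as nn
import torchvision.models as models

num_classes = 10  # assumption: number of custom object categories

model = models.resnet18(pretrained=True)  # strong pre-trained backbone
for p in model.parameters():
    p.requires_grad = False               # freeze the pre-trained weights
# Swap in a fresh final layer sized for the new task.
model.fc = nn.Linear(model.fc.in_features, num_classes)
# Only model.fc.parameters() are then optimized on the limited new data;
# training from scratch (as DSOD argues for) would instead update everything.
```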
1902.10514 | 2949413858 | Recent developments in the world of services on the Web show that both the number of available Web APIs and the number of applications built on top of them are constantly increasing. This trend is commonly attributed to the wide adoption of the REST architectural principles. Still, the development of Web APIs is rather autonomous and it is up to the providers to decide how to implement, expose and describe the Web APIs. The individual implementations are then commonly documented in textual form as part of a webpage, showing a wide variety in terms of content, structure and level of detail. As a result, client application developers are forced to manually process and interpret the documentation. Before we can achieve a higher level of automation and can make any significant improvement to current practices and technologies, we need to reach a deeper understanding of their similarities and differences. Therefore, in this paper we present a thorough analysis of the most popular Web APIs through the examination of their documentation. We provide conclusions about common description forms, output types, usage of API parameters, invocation support, level of reusability, API granularity and authentication details. The collected data builds a solid foundation for identifying deficiencies and can be used as a basis for devising common standards and guidelines for Web API development. | The first study on the state of Web APIs was presented by @cite_0 and features a comprehensive overview through the analysis of 222 Web APIs in 2010. While they aim to draw conclusions on the state of the entire world of APIs on the Web, we focus on the most popular and common ones, trading a larger dataset for more and different types of features. Another, more recent study from 2012 was provided by @cite_6 , wherein the authors analyze a dataset of twenty Web APIs across a broad range of features, some of which were incorporated in our survey. Similar to our study, the dataset was obtained by selecting top-ranked entries from the ProgrammableWeb directory, using the number of mashups as the sorting criteria. The rather limited dataset and fast-moving developments in the world of services on the web necessitate taking another look at the current state of Web APIs. Other, older studies devoted to investigating Web Services exist. The authors in @cite_3 provide a study on Web services, but their data is restricted to only a few characteristics and a single source. | {
"cite_N": [
"@cite_0",
"@cite_3",
"@cite_6"
],
"mid": [
"",
"2141233033",
"38513179"
],
"abstract": [
"",
"Web services technology has received much attention in the last few years, and a lot of research efforts have been devoted to utilizing services on the Internet to fulfill consumers' requirements. However, little research has been done on the current status of web services on the Internet, which has a great impact on current research. Enlightened by this situation, we made an exploratory study of the current status of web services on the Internet. Our study mainly focused on the investigation of four aspects, including the number, complexity, quality of description and the function diversity of available web services on the Internet. A web services investigation system is built up to harvest web services from the Internet and calculate the statistical results. The investigation results are reported in this paper, and, based on our study, the development trend of web services technology is also discussed in this paper.",
"Since Fielding's seminal contribution on the REST architecture style in 2000, the so-called class of RESTful services has taken off to challenge previously existing Web services. Several books have since then emerged, providing a set of valuable guidelines and design principles for the development of truly RESTful services. However, today's most popular \"RESTful\" services adopt only few of these guidelines, resulting in overburdening developers integrating multiple services in mashup applications. In this paper we present an in-depth analysis for the top 20 RESTful services listed on programmableweb.com against 17 RESTful service design criteria found in literature. Results provide evidence that hardly any of the services claiming to be RESTful is truly RESTful, probably due to the lack of rigidness and ease-of-use of currently available decision criteria. To improve the situation, we provide recommendations for various stakeholder groups."
]
} |
1902.10388 | 2934414806 | Non-Orthogonal Multiple Access (NOMA) and Fog Radio Access Networks (FRAN) are promising candidates within the 5G and beyond systems. This work examines the benefit of adopting NOMA in an FRAN architecture with a constrained-capacity fronthaul. The paper proposes methods for optimizing joint scheduling and power adaptation in the downlink of a NOMA-based FRAN with multiple resource blocks (RB). We consider a mixed-integer optimization problem which maximizes a network-wide rate-based utility function subject to fronthaul-capacity constraints, so as to determine i) the user-to-RB assignment, ii) the allocated power to each RB, and iii) the power split levels of the NOMA users in each RB. The paper proposes a feasible decoupled solution for such a non-convex optimization problem using a three-step hybrid centralized/distributed approach. The proposed solution complies with FRAN operation that aims to partially shift the network control to the FAPs, so as to overcome delays due to fronthaul rate constraints. The paper proposes and compares two distinct methods for solving the assignment problem, namely the Hungarian method and the Multiple Choice Knapsack method. The power allocation and the NOMA power split optimization, on the other hand, are solved using the alternating direction method of multipliers (ADMM). Simulation results illustrate the advantages of the proposed methods compared to different baseline schemes including the conventional Orthogonal Multiple Access (OMA), for different utility functions and different network environments. | The integration of NOMA in a CRAN architecture is analyzed in a few recent works, e.g., @cite_9 @cite_18 @cite_31 @cite_26 @cite_23 @cite_6 @cite_3 @cite_14 . References @cite_9 and @cite_18 analyze the outage probability in a NOMA-based CRAN network. In particular, reference @cite_18 proposes an architecture where the cell-center users are served by their nearest RRHs as their strong users, while multiple RRHs collaborate to serve the cell-edge users as their weak users. Using stochastic geometry tools, it is shown in @cite_18 that NOMA enhances the performance of the cell-edge users. References @cite_31 and @cite_26 analyze the energy efficiency of the DL of a CRAN cellular network. In @cite_31 , NOMA is adopted in the wireless backhaul links connecting the BSs to the cloud, while in @cite_26 , it is adopted in the communication between RRHs and users. Both papers show that the NOMA scheme can achieve higher energy efficiency compared to the conventional OMA. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_14",
"@cite_9",
"@cite_6",
"@cite_3",
"@cite_23",
"@cite_31"
],
"mid": [
"2783775905",
"2858284200",
"2887813159",
"2764105225",
"",
"1888296727",
"",
""
],
"abstract": [
"In this paper, a novel non-orthogonal multiple access (NOMA)-enabled framework for cloud-radio access networks (CRANs) is proposed. In this framework, two users are scheduled in the same resources according to NOMA; however the performance of cell-edge users is enhanced by means of coordinating beamforming. Stochastic geometry is invoked for modeling the proposed framework, where the positions of BSs follow a cluster point process. In an effort to characterize the performance of the proposed framework, simple expressions in terms of outage probability are derived for both nearby users and the cell-edge users. It is analytically demonstrated that the average beamforming gain is a liner function with respect to the number of cooperating base stations. Numerical results verify the accuracy of analysis and reveal that the proposed framework is capable of greatly enhancing the performance of cell-edge users.",
"The number of wireless devices is growing rapidly on a daily basis echoing the increasing number of applications of the Internet of Thing. Facing massive connections and unavoidable interference, how to provide a green communication is a concerning matter. In this regard, nonorthogonal multiple-access (NOMA) is a natural communications technology that can scale with the massive number of simultaneous connections for a limited bandwidth. In this paper, we aim to maximize the energy efficiency (EE) for an NOMA-based cloud radio access network, where sub-6 GHz and millimeter wave bands are used in fronthaul and access links, respectively. In particular, we formulate the power optimization problem to maximize the EE of the system subject to the fronthaul capacity and transmit power constraints. To address this nonconvex problem, we first convert the fractional objective function into a subtractive form. A two-layer algorithm is then proposed. In the outer loop, the @math -norm technique is adopted to transform the nonconvex fronthaul capacity constraint into a convex one, whereas in the inner loop, the weighted minimum mean square error approach is applied. Simulation results indicate that the proposed NOMA scheme can obtain higher EE as well as throughput when compared with orthogonal multiple-access methods.",
"This paper studies the application of non-orthogonal multiple access (NOMA) to cache-enabled cloud radio access network (CRAN) with mixed multicast and unicast transmission. Users requesting the same content are grouped together and served with a cluster of remote radio heads (RRHs) using distributed beamforming. In addition, the user with better channel condition in each group is allowed to request an extra unicast content via the NOMA protocol. Each RRH has a local cache which enables it to acquire the requested contents either from the local cache or from the central processor via the fronthaul link. Taking the maximum fronthaul capacity into consideration, we investigate the subchannel (SC) allocation problem to both RRHs and multicast groups to improve the weighted network sum rate. The optimal solution requires exhaustive search, which become prohibitively complicated as the number of RRHs and groups increases. To tackle this problem effectively, we formulate this problem as a three-sided matching problem among SCs, RRHs and multicast groups, and propose a novel low-complexity matching algorithm. We prove mathematically that the proposed algorithm converges to a stable matching within limited number of iterations. Numerical results unveil that the proposed algorithm closely approaches the optimal solution and outperforms the conventional orthogonal multiple access (OMA)-based CRAN.",
"To improve system capacity and spectral efficiency, as well as support more connections in cloud radio access networks (C-RANs), the application of non-orthogonal multiple access (NOMA), a promising technique in the physical layer, is considered in this letter. The stochastic geometry is used to obtain the outage probability of NOMA-based downlink C-RAN, in which the remote radio heads are uniformly distributed and serve two paired users simultaneously. A closed-form approximation for the outage probability is derived, and Monte Carlo simulations are provided to validate the accuracy of the theoretical analysis. Compared with conventional multiple access techniques, both the obtained analytical and simulation results demonstrate that the proposed NOMA scheme can offer better spectral efficiency and user fairness in C-RANs.",
"",
"This paper investigates multiple access techniques for wireless downlink in cloud radio access networks (CRANs) where all base stations (BSs) are connected via a cloud. We first propose a power allocation scheme that makes use of non-orthogonal multiple access to allocate the power at the BSs based on their relative wireless distances to the cloud. Specifically, a successive interference cancellation mechanism is designed at the cloud to lay multiple BSs over each other in the power domain. We then analyse the achievable throughput of the NOMA in the CRANs. The derived expressions not only show the effectiveness of the proposed NOMA over the conventional orthogonal frequency division multiple access (OFDMA) but also allow us to develop an algorithm to find the optimal number of BSs in the CRANs. Finally, numerical results are provided to show an improvement of up to eight times of the sum data rate and a higher number of BSs that could be maintained with the proposed NOMA over the OFDMA scheme as well as evaluating the impact of the wireless propagation environment and channel quality on the system throughput and cloud-edge throughput performance.",
"",
""
]
} |
1902.10191 | 2925436546 | Graph representation learning resurges as a trending research subject owing to the widespread use of deep learning for Euclidean data, which inspires various creative designs of neural networks in the non-Euclidean domain, particularly graphs. With the success of these graph neural networks (GNN) in the static setting, we approach further practical scenarios where the graph dynamically evolves. For this case, combining the GNN with a recurrent neural network (RNN, broadly speaking) is a natural idea. Existing approaches typically learn one single graph model for all the graphs, by using the RNN to capture the dynamism of the output node embeddings and to implicitly regulate the graph model. In this work, we propose a different approach, coined EvolveGCN, that uses the RNN to evolve the graph model itself over time. This model adaptation approach is model oriented rather than node oriented, and hence is advantageous in its flexibility on the input. For example, in the extreme case, the model can handle, at a new time step, a completely new set of nodes whose historical information is unknown, because the dynamism has been carried over to the GNN parameters. We evaluate the proposed approach on tasks including node classification, edge classification, and link prediction. The experimental results indicate a generally higher performance of EvolveGCN compared with related approaches. | Methods for dynamic graphs are often extensions of those for static graphs, with an additional focus on the temporal dimension and update schemes. For example, in matrix factorization-based approaches @cite_17 @cite_30 , node embeddings come from the (generalized) eigenvectors of the graph Laplacian matrix. Hence, DANE @cite_3 updates the eigenvectors efficiently based on the prior ones, rather than computing them from scratch for each new graph. The dominant advantage of such methods is their computational efficiency. | {
"cite_N": [
"@cite_30",
"@cite_3",
"@cite_17"
],
"mid": [
"",
"2623187518",
"2053186076"
],
"abstract": [
"",
"Network embedding leverages the node proximity manifested to learn a low-dimensional node vector representation for each node in the network. The learned embeddings could advance various learning tasks such as node classification, network clustering, and link prediction. Most, if not all, of the existing works, are overwhelmingly performed in the context of plain and static networks. Nonetheless, in reality, network structure often evolves over time with addition deletion of links and nodes. Also, a vast majority of real-world networks are associated with a rich set of node attributes, and their attribute values are also naturally changing, with the emerging of new content patterns and the fading of old content patterns. These changing characteristics motivate us to seek an effective embedding representation to capture network and attribute evolving patterns, which is of fundamental importance for learning in a dynamic environment. To our best knowledge, we are the first to tackle this problem with the following two challenges: (1) the inherently correlated network and node attributes could be noisy and incomplete, it necessitates a robust consensus representation to capture their individual properties and correlations; (2) the embedding learning needs to be performed in an online fashion to adapt to the changes accordingly. In this paper, we tackle this problem by proposing a novel dynamic attributed network embedding framework - DANE. In particular, DANE first provides an offline method for a consensus embedding and then leverages matrix perturbation theory to maintain the freshness of the end embedding results in an online manner. We perform extensive experiments on both synthetic and real attributed networks to corroborate the effectiveness and efficiency of the proposed framework.",
"Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text. How do we judge similarity? Our mental representations of the world are formed by processing large numbers of sensory in"
]
} |
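DANE's efficiency argument above can be made concrete: rather than recomputing the Laplacian eigenvectors of every snapshot from scratch, the solver for snapshot t+1 is seeded with the solution from snapshot t. Below is a minimal sketch of that warm-start idea in SciPy; it illustrates the general principle only, not DANE's actual matrix-perturbation update, and the toy random graph is a stand-in for real snapshots.

```python
import scipy.sparse as sp
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh

def laplacian_embedding(adj, k, v0=None):
    """Bottom-k eigenvectors of the normalized graph Laplacian.

    v0 warm-starts the Lanczos iteration, e.g. with an eigenvector
    kept from the previous graph snapshot.
    """
    lap = laplacian(adj, normed=True)
    vals, vecs = eigsh(lap, k=k, which="SM", v0=v0)
    return vals, vecs

# Snapshot t: a random symmetric adjacency matrix (toy example)
adj_t = sp.random(200, 200, density=0.05, format="csr")
adj_t = adj_t + adj_t.T
_, vecs_t = laplacian_embedding(adj_t, k=4)

# Snapshot t+1 barely changes, so the prior eigenvector is a good seed
_, vecs_t1 = laplacian_embedding(adj_t, k=4, v0=vecs_t[:, 0])
```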
1902.10191 | 2925436546 | Graph representation learning resurges as a trending research subject owing to the widespread use of deep learning for Euclidean data, which inspires various creative designs of neural networks in the non-Euclidean domain, particularly graphs. With the success of these graph neural networks (GNN) in the static setting, we approach further practical scenarios where the graph dynamically evolves. For this case, combining the GNN with a recurrent neural network (RNN, broadly speaking) is a natural idea. Existing approaches typically learn one single graph model for all the graphs, by using the RNN to capture the dynamism of the output node embeddings and to implicitly regulate the graph model. In this work, we propose a different approach, coined EvolveGCN, that uses the RNN to evolve the graph model itself over time. This model adaptation approach is model oriented rather than node oriented, and hence is advantageous in its flexibility on the input. For example, in the extreme case, the model can handle, at a new time step, a completely new set of nodes whose historical information is unknown, because the dynamism has been carried over to the GNN parameters. We evaluate the proposed approach on tasks including node classification, edge classification, and link prediction. The experimental results indicate a generally higher performance of EvolveGCN compared with related approaches. | For random walk-based approaches @cite_20 @cite_15 , transition probabilities conditioned on history are modeled as the normalized inner products of the corresponding node embeddings. These approaches maximize the probabilities of the sampled random walks. CTDANE @cite_1 extends this idea by requiring the walks to obey the temporal order. Another work, NetWalk @cite_26 , does not use the probability as the objective function; rather, it observes that if the graph does not undergo substantial changes, one only needs to resample a few walks in the successive time step. Hence, this approach incrementally retrains the model with warm starts, substantially reducing the computational cost. | {
"cite_N": [
"@cite_15",
"@cite_26",
"@cite_1",
"@cite_20"
],
"mid": [
"2366141641",
"",
"2798918712",
"2154851992"
],
"abstract": [
"Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.",
"",
"Networks evolve continuously over time with the addition, deletion, and changing of links and nodes. Although many networks contain this type of temporal information, the majority of research in network representation learning has focused on static snapshots of the graph and has largely ignored the temporal dynamics of the network. In this work, we describe a general framework for incorporating temporal information into network embedding methods. The framework gives rise to methods for learning time-respecting embeddings from continuous-time dynamic networks. Overall, the experiments demonstrate the effectiveness of the proposed framework and dynamic network embedding approach as it achieves an average gain of 11.9 across all methods and graphs. The results indicate that modeling temporal dependencies in graphs is important for learning appropriate and meaningful network representations.",
"We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10 higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60 less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection."
]
} |
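The walk objective sketched above has a compact form: the probability of stepping from node u to neighbor v is a softmax of embedding inner products over u's neighborhood, and training maximizes the log-probability of sampled walks. A minimal illustration with hypothetical embeddings and a toy neighbor list (not any particular model's learned parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 16))          # one embedding per node (hypothetical)
neighbors = {0: [1, 2, 3], 1: [0, 4], 2: [0, 5]}

def transition_probs(u, emb, neighbors):
    """P(v | u): softmax of inner products over u's neighborhood."""
    nbrs = neighbors[u]
    scores = emb[nbrs] @ emb[u]
    scores -= scores.max()               # numerical stability
    p = np.exp(scores)
    return nbrs, p / p.sum()

def walk_log_likelihood(walk, emb, neighbors):
    """Objective maximized by walk-based methods: log-prob of a sampled walk."""
    ll = 0.0
    for u, v in zip(walk[:-1], walk[1:]):
        nbrs, p = transition_probs(u, emb, neighbors)
        ll += np.log(p[nbrs.index(v)])
    return ll

print(walk_log_likelihood([1, 0, 2], emb, neighbors))
```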
1902.10191 | 2925436546 | Graph representation learning resurges as a trending research subject owing to the widespread use of deep learning for Euclidean data, which inspires various creative designs of neural networks in the non-Euclidean domain, particularly graphs. With the success of these graph neural networks (GNN) in the static setting, we approach further practical scenarios where the graph dynamically evolves. For this case, combining the GNN with a recurrent neural network (RNN, broadly speaking) is a natural idea. Existing approaches typically learn one single graph model for all the graphs, by using the RNN to capture the dynamism of the output node embeddings and to implicitly regulate the graph model. In this work, we propose a different approach, coined EvolveGCN, that uses the RNN to evolve the graph model itself over time. This model adaptation approach is model oriented rather than node oriented, and hence is advantageous in its flexibility on the input. For example, in the extreme case, the model can handle, at a new time step, a completely new set of nodes whose historical information is unknown, because the dynamism has been carried over to the GNN parameters. We evaluate the proposed approach on tasks including node classification, edge classification, and link prediction. The experimental results indicate a generally higher performance of EvolveGCN compared with related approaches. | The wave of deep learning introduces a flourish of unsupervised and supervised approaches for parameterizing the quantities of interest with neural networks. DynGEM @cite_8 is an autoencoding approach that minimizes the reconstruction loss, together with the distance between connected nodes in the embedding space. A feature of DynGEM is that the depth of the architecture is adaptive to the size of the graph, and the autoencoder learned from the past time step is used to initialize the training of the one at the following time step. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2806983170"
],
"abstract": [
"Embedding large graphs in low dimensional spaces has recently attracted significant interest due to its wide applications such as graph visualization, link prediction and node classification. Existing methods focus on computing the embedding for static graphs. However, many graphs in practical applications are dynamic and evolve constantly over time. Naively applying existing embedding algorithms to each snapshot of dynamic graphs independently usually leads to unsatisfactory performance in terms of stability, flexibility and efficiency. In this work, we present an efficient algorithm DynGEM based on recent advances in deep autoencoders for graph embeddings, to address this problem. The major advantages of DynGEM include: (1) the embedding is stable over time, (2) it can handle growing dynamic graphs, and (3) it has better running time than using static embedding methods on each snapshot of a dynamic graph. We test DynGEM on a variety of tasks including graph visualization, graph reconstruction, link prediction and anomaly detection (on both synthetic and real datasets). Experimental results demonstrate the superior stability and scalability of our approach."
]
} |
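DynGEM's objective as summarized above combines two terms: an autoencoder reconstruction loss and a first-order proximity penalty that keeps connected nodes close in embedding space. A minimal PyTorch sketch of that loss follows; the adaptive network depth and warm-start initialization are omitted, and the linear encoder/decoder are hypothetical stand-ins for the paper's deeper architecture.

```python
import torch

def dyngem_style_loss(adj, recon, z, edges, alpha=1.0):
    """Reconstruction term + first-order proximity term over edges."""
    recon_term = ((recon - adj) ** 2).sum()
    i, j = edges[:, 0], edges[:, 1]
    proximity = ((z[i] - z[j]) ** 2).sum(dim=1).mean()
    return recon_term + alpha * proximity

# Toy usage with a hypothetical one-layer autoencoder
n, d = 10, 4
adj = (torch.rand(n, n) < 0.3).float()
enc = torch.nn.Linear(n, d)
dec = torch.nn.Linear(d, n)
z = enc(adj)                      # node embeddings from adjacency rows
recon = dec(z)
edges = adj.nonzero()             # (num_edges, 2) index pairs
loss = dyngem_style_loss(adj, recon, z, edges)
loss.backward()
```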
1902.10191 | 2925436546 | Graph representation learning resurges as a trending research subject owing to the widespread use of deep learning for Euclidean data, which inspires various creative designs of neural networks in the non-Euclidean domain, particularly graphs. With the success of these graph neural networks (GNN) in the static setting, we approach further practical scenarios where the graph dynamically evolves. For this case, combining the GNN with a recurrent neural network (RNN, broadly speaking) is a natural idea. Existing approaches typically learn one single graph model for all the graphs, by using the RNN to capture the dynamism of the output node embeddings and to implicitly regulate the graph model. In this work, we propose a different approach, coined EvolveGCN, that uses the RNN to evolve the graph model itself over time. This model adaptation approach is model oriented rather than node oriented, and hence is advantageous in its flexibility on the input. For example, in the extreme case, the model can handle, at a new time step, a completely new set of nodes whose historical information is unknown, because the dynamism has been carried over to the GNN parameters. We evaluate the proposed approach on tasks including node classification, edge classification, and link prediction. The experimental results indicate a generally higher performance of EvolveGCN compared with related approaches. | A popular category of approaches for dynamic graphs is point processes that are continuous in time. Know-Evolve @cite_12 and DyRep @cite_4 model the occurrence of an edge as a point process and parameterize the intensity function by using a neural network, taking node embeddings as the input. DynamicTriad @cite_14 uses a point process to model a more complex phenomenon---triadic closure---where a triad with three nodes develops from an open one (a pair of nodes is not connected) to a closed one (all three pairs are connected). HTNE @cite_10 similarly models the dynamism by using the Hawkes process, adding an attention mechanism to determine the influence of historical neighbors on the current neighbors of a node. These methods are advantageous for event time prediction because of the continuous nature of the process. | {
"cite_N": [
"@cite_10",
"@cite_14",
"@cite_4",
"@cite_12"
],
"mid": [
"2808908091",
"2787927827",
"2790197930",
"2724395316"
],
"abstract": [
"Given the rich real-life applications of network mining as well as the surge of representation learning in recent years, network embedding has become the focal point of increasing research interests in both academic and industrial domains. Nevertheless, the complete temporal formation process of networks characterized by sequential interactive events between nodes has yet seldom been modeled in the existing studies, which calls for further research on the so-called temporal network embedding problem. In light of this, in this paper, we introduce the concept of neighborhood formation sequence to describe the evolution of a node, where temporal excitation effects exist between neighbors in the sequence, and thus we propose a Hawkes process based Temporal Network Embedding (HTNE) method. HTNE well integrates the Hawkes process into network embedding so as to capture the influence of historical neighbors on the current neighbors. In particular, the interactions of low-dimensional vectors are fed into the Hawkes process as base rate and temporal influence, respectively. In addition, attention mechanism is also integrated into HTNE to better determine the influence of historical neighbors on current neighbors of a node. Experiments on three large-scale real-life networks demonstrate that the embeddings learned from the proposed HTNE model achieve better performance than state-of-the-art methods in various tasks including node classification, link prediction, and embedding visualization. In particular, temporal recommendation based on arrival rate inferred from node embeddings shows excellent predictive power of the proposed model.",
"",
"How can we effectively encode evolving information over dynamic graphs into low-dimensional representations? In this paper, we propose DyRep, an inductive deep representation learning framework that learns a set of functions to efficiently produce low-dimensional node embeddings that evolves over time. The learned embeddings drive the dynamics of two key processes namely, communication and association between nodes in dynamic graphs. These processes exhibit complex nonlinear dynamics that evolve at different time scales and subsequently contribute to the update of node embeddings. We employ a time-scale dependent multivariate point process model to capture these dynamics. We devise an efficient unsupervised learning procedure and demonstrate that our approach significantly outperforms representative baselines on two real-world datasets for the problem of dynamic link prediction and event time prediction.",
"The availability of large scale event data with time stamps has given rise to dynamically evolving knowledge graphs that contain temporal information for each edge. Reasoning over time in such dynamic knowledge graphs is not yet well understood. To this end, we present Know-Evolve, a novel deep evolutionary knowledge network that learns non-linearly evolving entity representations over time. The occurrence of a fact (edge) is modeled as a multivariate point process whose intensity function is modulated by the score for that fact computed based on the learned entity embeddings. We demonstrate significantly improved performance over various relational learning approaches on two large scale real-world datasets. Further, our method effectively predicts occurrence or recurrence time of a fact which is novel compared to prior reasoning approaches in multi-relational setting."
]
} |
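The Hawkes-process modeling that several of these methods share has a simple closed form for the conditional intensity: a base rate plus exponentially decaying excitation from past events. A minimal sketch follows; the neural parameterizations in Know-Evolve and DyRep replace the fixed constants below with network outputs, and the event times are made up for illustration.

```python
import numpy as np

def hawkes_intensity(t, history, mu=0.2, alpha=0.8, beta=1.0):
    """lambda(t) = mu + sum over past events of alpha * exp(-beta * (t - t_i))."""
    history = np.asarray(history)
    past = history[history < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

events = [0.5, 1.2, 1.3, 3.0]
print(hawkes_intensity(3.5, events))   # excitation from recent events dominates
```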
1902.10162 | 2919364174 | We show that recent innovations in deep reinforcement learning can effectively color very large graphs -- a well-known NP-hard problem with clear commercial applications. Because the Monte Carlo Tree Search with Upper Confidence Bound algorithm used in AlphaGoZero can improve the performance of a given heuristic, our approach allows deep neural networks trained using high performance computing (HPC) technologies to transform computation into improved heuristics with zero prior knowledge. Key to our approach is the introduction of a novel deep neural network architecture (FastColorNet) that has access to the full graph context and requires @math time and space to color a graph with @math vertices, which enables scaling to very large graphs that arise in real applications like parallel computing, compilers, numerical solvers, and design automation, among others. As a result, we are able to learn new state of the art heuristics for graph coloring. | Reinforcement learning has been researched for decades in the context of games @cite_17 @cite_19 . DQN, a combination of a deep neural network and Q-learning with experience replay, achieved human-level performance on Atari games @cite_11 . More recently, DeepMind published a series of AlphaGo algorithms for the more complex game of Go and defeated human experts @cite_13 @cite_28 . They apply MCTS to explore the large MDP state space while balancing exploration and exploitation by using UCB for decision selection. In addition, alpha-beta pruning @cite_5 is adopted to terminate early any game play whose outcome is already known to be a win or a loss, reducing the tree search space. Our approach to graph coloring follows that of AlphaGoZero. Furthermore, in order to learn on problems much bigger than the relatively small game of Go, we apply other innovations to make our solution scalable and computationally efficient. | {
"cite_N": [
"@cite_28",
"@cite_17",
"@cite_19",
"@cite_5",
"@cite_13",
"@cite_11"
],
"mid": [
"",
"2159920598",
"",
"2157803532",
"2257979135",
"2145339207"
],
"abstract": [
"",
"The game of Go has a high branching factor that defeats the tree search approach used in computer chess, and long-range spatiotemporal interactions that make position evaluation extremely difficult. Development of conventional Go programs is hampered by their knowledge-intensive nature. We demonstrate a viable alternative by training networks to evaluate Go positions via temporal difference (TD) learning. Our approach is based on network architectures that reflect the spatial organization of both input and reinforcement signals on the Go board, and training protocols that provide exposure to competent (though unlabelled) play. These techniques yield far better performance than undifferentiated networks trained by selfplay alone. A network with less than 500 weights learned within 3,000 games of 9×9 Go a position evaluation function that enables a primitive one-ply search to defeat a commercial Go program at a low playing level.",
"",
"Abstract The alpha-beta technique for searching game trees is analyzed, in an attempt to provide some insight into its behavior. The first portion of this paper is an expository presentation of the method together with a proof of its correctness and a historical discussion. The alpha-beta procedure is shown to be optimal in a certain sense, and bounds are obtained for its running time with various kinds of random data.",
"The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of stateof-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8 winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.",
"An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action."
]
} |
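The UCB rule mentioned in the related work above has a compact form: each child of a tree node is scored by its mean value plus an exploration bonus that shrinks as its visit count grows. A minimal sketch of the classic UCT selection step follows; AlphaGo-style PUCT additionally weights the bonus by a policy prior, which is omitted here.

```python
import math

def ucb_select(children, c=1.4):
    """Pick the child maximizing Q + c * sqrt(ln(N_parent) / n_child)."""
    total = sum(ch["visits"] for ch in children)
    def score(ch):
        if ch["visits"] == 0:
            return float("inf")          # always try unvisited children first
        exploit = ch["value"] / ch["visits"]
        explore = c * math.sqrt(math.log(total) / ch["visits"])
        return exploit + explore
    return max(children, key=score)

children = [{"value": 3.0, "visits": 5}, {"value": 1.0, "visits": 1},
            {"value": 0.0, "visits": 0}]
print(ucb_select(children))
```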
1902.10162 | 2919364174 | We show that recent innovations in deep reinforcement learning can effectively color very large graphs -- a well-known NP-hard problem with clear commercial applications. Because the Monte Carlo Tree Search with Upper Confidence Bound algorithm used in AlphaGoZero can improve the performance of a given heuristic, our approach allows deep neural networks trained using high performance computing (HPC) technologies to transform computation into improved heuristics with zero prior knowledge. Key to our approach is the introduction of a novel deep neural network architecture (FastColorNet) that has access to the full graph context and requires @math time and space to color a graph with @math vertices, which enables scaling to very large graphs that arise in real applications like parallel computing, compilers, numerical solvers, and design automation, among others. As a result, we are able to learn new state of the art heuristics for graph coloring. | Recently, reinforcement learning has been applied to combinatorial optimization. Pointer networks @cite_18 have been combined with actor and critic networks to optimize the Traveling Salesman Problem (TSP) @cite_1 ; this approach does not make good use of graph structure and does not generalize to graphs of arbitrary size. In @cite_23 , a Q-learning framework is introduced for greedy algorithms to learn over MVC, MAXCUT and TSP problems using structure2vec @cite_14 graph embeddings. This algorithm cannot be directly applied to graph coloring, since the reward design and state representation for colored graphs are nontrivial in its problem formulation. Both algorithms are evaluated only on small graphs and are not scalable to the big graphs that are typical in real applications. In contrast, our approach easily scales to training on bigger graphs, solving bigger problems in graph coloring applications. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_1",
"@cite_23"
],
"mid": [
"",
"2950191616",
"2952332632",
"2951430899"
],
"abstract": [
"",
"Kernel classifiers and regressors designed for structured data, such as sequences, trees and graphs, have significantly advanced a number of interdisciplinary areas such as computational biology and drug design. Typically, kernels are designed beforehand for a data type which either exploit statistics of the structures or make use of probabilistic generative models, and then a discriminative classifier is learned based on the kernels via convex optimization. However, such an elegant two-stage approach also limited kernel methods from scaling up to millions of data points, and exploiting discriminative information to learn feature representations. We propose, structure2vec, an effective and scalable approach for structured data representation based on the idea of embedding latent variable models into feature spaces, and learning such feature spaces using discriminative information. Interestingly, structure2vec extracts features by performing a sequence of function mappings in a way similar to graphical model inference procedures, such as mean field and belief propagation. In applications involving millions of data points, we showed that structure2vec runs 2 times faster, produces models which are @math times smaller, while at the same time achieving the state-of-the-art predictive performance.",
"This paper presents a framework to tackle combinatorial optimization problems using neural networks and reinforcement learning. We focus on the traveling salesman problem (TSP) and train a recurrent network that, given a set of city coordinates, predicts a distribution over different city permutations. Using negative tour length as the reward signal, we optimize the parameters of the recurrent network using a policy gradient method. We compare learning the network parameters on a set of training graphs against learning them on individual test graphs. Despite the computational expense, without much engineering and heuristic designing, Neural Combinatorial Optimization achieves close to optimal results on 2D Euclidean graphs with up to 100 nodes. Applied to the KnapSack, another NP-hard problem, the same method obtains optimal solutions for instances with up to 200 items.",
"The design of good heuristics or approximation algorithms for NP-hard combinatorial optimization problems often requires significant specialized knowledge and trial-and-error. Can we automate this challenging, tedious process, and learn the algorithms instead? In many real-world applications, it is typically the case that the same optimization problem is solved again and again on a regular basis, maintaining the same problem structure but differing in the data. This provides an opportunity for learning heuristic algorithms that exploit the structure of such recurring problems. In this paper, we propose a unique combination of reinforcement learning and graph embedding to address this challenge. The learned greedy policy behaves like a meta-algorithm that incrementally constructs a solution, and the action is determined by the output of a graph embedding network capturing the current state of the solution. We show that our framework can be applied to a diverse range of optimization problems over graphs, and learns effective algorithms for the Minimum Vertex Cover, Maximum Cut and Traveling Salesman problems."
]
} |
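The learned greedy meta-algorithm described in the last abstract above repeatedly scores the remaining nodes with a value function and appends the best one to the partial solution. A minimal sketch of that construction loop follows, with a hand-written scoring function standing in for the learned Q-network, shown on a toy vertex-cover-style instance.

```python
def greedy_construct(nodes, score_fn, done_fn):
    """Incrementally build a solution by repeatedly taking the best-scoring node."""
    solution, remaining = [], set(nodes)
    while remaining and not done_fn(solution):
        best = max(remaining, key=lambda v: score_fn(solution, v))
        solution.append(best)
        remaining.remove(best)
    return solution

# Toy usage: score = number of edges a node would newly cover
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]

def score_fn(partial, v):
    covered = {e for e in edges if e[0] in partial or e[1] in partial}
    return sum(1 for e in edges if v in e and e not in covered)

def done_fn(partial):
    return all(e[0] in partial or e[1] in partial for e in edges)

print(greedy_construct(range(4), score_fn, done_fn))
```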
1902.10197 | 2949434543 | We study the problem of learning representations of entities and relations in knowledge graphs for predicting missing links. The success of such a task heavily relies on the ability of modeling and inferring the patterns of (or between) the relations. In this paper, we present a new approach for knowledge graph embedding called RotatE, which is able to model and infer various relation patterns including: symmetry/antisymmetry, inversion, and composition. Specifically, the RotatE model defines each relation as a rotation from the source entity to the target entity in the complex vector space. In addition, we propose a novel self-adversarial negative sampling technique for efficiently and effectively training the RotatE model. Experimental results on multiple benchmark knowledge graphs show that the proposed RotatE model is not only scalable, but also able to infer and model various relation patterns and significantly outperform existing state-of-the-art models for link prediction. | Another related problem is how to effectively draw negative samples for training knowledge graph embeddings. This problem has been explicitly studied by @cite_1 , which proposed a generative adversarial learning framework to draw negative samples. However, such a framework requires simultaneously training the embedding model and a discrete negative sample generator, which are difficult to optimize and also computationally expensive. We propose a self-adversarial sampling scheme which relies only on the current model. It does not require any additional optimization component, which makes it much more efficient. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2768762802"
],
"abstract": [
"We introduce an adversarial learning framework, which we named KBGAN, to improve the performances of a wide range of existing knowledge graph embedding models. Because knowledge graph datasets typically only contain positive facts, sampling useful negative training examples is a non-trivial task. Replacing the head or tail entity of a fact with a uniformly randomly selected entity is a conventional method for generating negative facts used by many previous works, but the majority of negative facts generated in this way can be easily discriminated from positive facts, and will contribute little towards the training. Inspired by generative adversarial networks (GANs), we use one knowledge graph embedding model as a negative sample generator to assist the training of our desired model, which acts as the discriminator in GANs. The objective of the generator is to generate difficult negative samples that can maximize their likeliness determined by the discriminator, while the discriminator minimizes its training loss. This framework is independent of the concrete form of generator and discriminator, and therefore can utilize a wide variety of knowledge graph embedding models as its building blocks. In experiments, we adversarially train two translation-based models, TransE and TransD, each with assistance from one of the two probability-based models, DistMult and ComplEx. We evaluate the performances of KBGAN on the link prediction task, using three knowledge base completion datasets: FB15k-237, WN18 and WN18RR. Experimental results show that adversarial training substantially improves the performances of target embedding models under various settings."
]
} |
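The self-adversarial scheme contrasted above avoids a separate generator by letting the current model weight its own negative samples: harder negatives (higher scores under the current model) receive larger softmax weights in the loss. A minimal PyTorch sketch with a generic score tensor; the temperature alpha and the batch shapes are illustrative assumptions.

```python
import torch

def self_adversarial_weights(neg_scores, alpha=1.0):
    """Weight negatives by a softmax of their current scores (no generator needed)."""
    # detach(): the weights act as constants, so no gradient flows through them
    return torch.softmax(alpha * neg_scores, dim=-1).detach()

def negative_term(neg_scores, alpha=1.0):
    """Weighted log-sigmoid penalty over negative samples."""
    w = self_adversarial_weights(neg_scores, alpha)
    return -(w * torch.nn.functional.logsigmoid(-neg_scores)).sum(dim=-1)

neg_scores = torch.randn(4, 16, requires_grad=True)  # 16 negatives per positive
loss = negative_term(neg_scores).mean()
loss.backward()
```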
1902.10272 | 2959568087 | Recent deep learning architectures can recognize instances of 3D point cloud objects of previously seen classes quite well. At the same time, current 3D depth camera technology allows generating/segmenting a large number of 3D point cloud objects from an arbitrary scene, for which there is no previously seen training data. A challenge for a 3D point cloud recognition system is, then, to classify objects from new, unseen classes. This issue can be resolved by adopting a zero-shot learning (ZSL) approach for 3D data, similar to the 2D image version of the same problem. ZSL attempts to classify unseen objects by comparing semantic information (attribute/word vectors) of seen and unseen classes. Here, we adapt several recent 3D point cloud recognition systems to the ZSL setting with some changes to their architectures. To the best of our knowledge, this is the first attempt to classify unseen 3D point cloud objects in the ZSL setting. A standard protocol (which includes the choice of datasets and the seen/unseen split) to evaluate such systems is also proposed. Baseline performances are reported using the new protocol on the investigated models. This investigation poses a new challenge to the 3D point cloud recognition community that may instigate numerous future works. | In the image recognition literature, zero-shot learning (ZSL) has made reasonable progress over the past few years @cite_16 @cite_25 @cite_9 . The objective of such learning is to recognize objects from unseen classes not used during training. To do that, semantic information about the class labels, in the form of attributes or word vectors, is taken advantage of. Image features are usually projected to the dimension of the semantic vector, and a matching score is obtained by comparing them with seen/unseen semantic vectors. Some of the notable research directions in this line of investigation include exploring class-attribute association @cite_10 , domain adaptation @cite_21 , the effect of hubness @cite_25 , generalized ZSL @cite_16 , inductive vs. transductive ZSL @cite_1 , multi-label ZSL @cite_15 , etc. In this paper, we apply zero-shot learning to 3D point cloud objects instead of traditional 2D images. | {
"cite_N": [
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_15",
"@cite_16",
"@cite_10",
"@cite_25"
],
"mid": [
"",
"2746797088",
"2951871104",
"2770962865",
"2963220594",
"2611345819",
"2950652153"
],
"abstract": [
"",
"We address zero-shot learning using a new manifold alignment framework based on a localized multi-scale transform on graphs. Our inference approach includes a smoothness criterion for a function mapping nodes on a graph (visual representation) onto a linear space (semantic representation), which we optimize using multi-scale graph wavelets. The robustness of the ensuing scheme allows us to operate with automatically generated semantic annotations, resulting in an algorithm that is entirely free of manual supervision, and yet improves the state-of-the-art as measured on benchmark datasets.",
"Zero-shot recognition aims to accurately recognize objects of unseen classes by using a shared visual-semantic mapping between the image feature space and the semantic embedding space. This mapping is learned on training data of seen classes and is expected to have transfer ability to unseen classes. In this paper, we tackle this problem by exploiting the intrinsic relationship between the semantic space manifold and the transfer ability of visual-semantic mapping. We formalize their connection and cast zero-shot recognition as a joint optimization problem. Motivated by this, we propose a novel framework for zero-shot recognition, which contains dual visual-semantic mapping paths. Our analysis shows this framework can not only apply prior semantic knowledge to infer underlying semantic manifold in the image feature space, but also generate optimized semantic embedding space, which can enhance the transfer ability of the visual-semantic mapping to unseen classes. The proposed method is evaluated for zero-shot recognition on four benchmark datasets, achieving outstanding results.",
"In this paper, we propose a novel deep learning architecture for multi-label zero-shot learning (ML-ZSL), which is able to predict multiple unseen class labels for each input instance. Inspired by the way humans utilize semantic knowledge between objects of interests, we propose a framework that incorporates knowledge graphs for describing the relationships between multiple labels. Our model learns an information propagation mechanism from the semantic label space, which can be applied to model the interdependencies between seen and unseen class labels. With such investigation of structured knowledge graphs for visual reasoning, we show that our model can be applied for solving multi-label classification and ML-ZSL tasks. Compared to state-of-the-art approaches, comparable or improved performances can be achieved by our method.",
"Prevalent techniques in zero-shot learning do not generalize well to other related problem scenarios. Here, we present a unified approach for conventional zero-shot, generalized zero-shot, and few-shot learning problems. Our approach is based on a novel class adapting principal directions’ (CAPDs) concept that allows multiple embeddings of image features into a semantic space. Given an image, our method produces one principal direction for each seen class. Then, it learns how to combine these directions to obtain the principal direction for each unseen class such that the CAPD of the test image is aligned with the semantic embedding of the true class and opposite to the other classes. This allows efficient and class-adaptive information transfer from seen to unseen classes. In addition, we propose an automatic process for the selection of the most useful seen classes for each unseen class to achieve robustness in zero-shot learning. Our method can update the unseen CAPD taking the advantages of few unseen images to work in a few-shot learning scenario. Furthermore, our method can generalize the seen CAPDs by estimating seen–unseen diversity that significantly improves the performance of generalized zero-shot learning. Our extensive evaluations demonstrate that the proposed approach consistently achieves superior performance in zero-shot, generalized zero-shot, and few one-shot learning problems.",
"We propose a novel approach for unsupervised zero-shot learning (ZSL) of classes based on their names. Most existing unsupervised ZSL methods aim to learn a model for directly comparing image features and class names. However, this proves to be a difficult task due to dominance of non-visual semantics in underlying vector-space embeddings of class names. To address this issue, we discriminatively learn a word representation such that the similarities between class and combination of attribute names fall in line with the visual similarity. Contrary to the traditional zero-shot learning approaches that are built upon attribute presence, our approach bypasses the laborious attribute-class relation annotations for unseen classes. In addition, our proposed approach renders text-only training possible, hence, the training can be augmented without the need to collect additional image data. The experimental results show that our method yields state-of-the-art results for unsupervised ZSL in three benchmark datasets.",
"Zero-shot learning (ZSL) models rely on learning a joint embedding space where both textual semantic description of object classes and visual representation of object images can be projected to for nearest neighbour search. Despite the success of deep neural networks that learn an end-to-end model between text and images in other vision problems such as image captioning, very few deep ZSL model exists and they show little advantage over ZSL models that utilise deep feature representations but do not learn an end-to-end embedding. In this paper we argue that the key to make deep ZSL models succeed is to choose the right embedding space. Instead of embedding into a semantic space or an intermediate space, we propose to use the visual space as the embedding space. This is because that in this space, the subsequent nearest neighbour search would suffer much less from the hubness problem and thus become more effective. This model design also provides a natural mechanism for multiple semantic modalities (e.g., attributes and sentence descriptions) to be fused and optimised jointly in an end-to-end manner. Extensive experiments on four benchmarks show that our model significantly outperforms the existing models."
]
} |
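At its simplest, the ZSL matching pipeline described above is a learned linear projection from visual features into the semantic space followed by nearest-neighbor search over class word vectors. A minimal sketch with hypothetical dimensions and a random, untrained projection matrix standing in for the learned map:

```python
import numpy as np

rng = np.random.default_rng(0)
img_feat = rng.normal(size=2048)            # e.g. a CNN feature vector
W = rng.normal(size=(300, 2048)) * 0.01     # visual-to-semantic map (learned in practice)
class_vecs = rng.normal(size=(5, 300))      # word vectors of unseen classes

def predict_unseen(img_feat, W, class_vecs):
    """Nearest unseen class by cosine similarity in the semantic space."""
    s = W @ img_feat
    s = s / np.linalg.norm(s)
    c = class_vecs / np.linalg.norm(class_vecs, axis=1, keepdims=True)
    return int(np.argmax(c @ s))

print(predict_unseen(img_feat, W, class_vecs))
```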
1902.09907 | 2917728022 | While learning based depth estimation from images/videos has achieved substantial progress, there still exist intrinsic limitations. Supervised methods are limited by a small amount of ground truth or labeled data and unsupervised methods for monocular videos are mostly based on the static scene assumption, not performing well on real world scenarios with the presence of dynamic objects. In this paper, we propose a new learning based method consisting of DepthNet, PoseNet and Region Deformer Networks (RDN) to estimate depth from unconstrained monocular videos without ground truth supervision. The core contribution lies in RDN for proper handling of rigid and non-rigid motions of various objects such as rigidly moving cars and deformable humans. In particular, a deformation based motion representation is proposed to model individual object motion on 2D images. This representation enables our method to be applicable to diverse unconstrained monocular videos. Our method can not only achieve the state-of-the-art results on standard benchmarks KITTI and Cityscapes, but also show promising results on a crowded pedestrian tracking dataset, which demonstrates the effectiveness of the deformation based motion representation. Code and trained models are available at https://github.com/haofeixu/rdn4depth. | Many supervised methods have been developed to estimate depth @cite_12 @cite_3 @cite_9 @cite_25 . These methods use CNNs to learn a mapping from RGB images to depth maps. However, they need a training dataset with ground truth depth of real world scenes, which is hard to acquire, especially in outdoor scenarios, and hence limits their applicability. Several works try to resolve this limitation by using synthetic data @cite_8 @cite_16 or images from the Internet @cite_2 @cite_23 , but special care must be taken to generate high quality training data, which can be very time-consuming. | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_3",
"@cite_23",
"@cite_2",
"@cite_16",
"@cite_25",
"@cite_12"
],
"mid": [
"2784652921",
"",
"2124907686",
"2963760790",
"2339763956",
"2886009112",
"2336968928",
"2951234442"
],
"abstract": [
"The finding that very large networks can be trained efficiently and reliably has led to a paradigm shift in computer vision from engineered solutions to learning formulations. As a result, the research challenge shifts from devising algorithms to creating suitable and abundant training data for supervised learning. How to efficiently create such training data? The dominant data acquisition method in visual recognition is based on web data and manual annotation. Yet, for many computer vision problems, such as stereo or optical flow estimation, this approach is not feasible because humans cannot manually enter a pixel-accurate flow field. In this paper, we promote the use of synthetically generated data for the purpose of training deep networks on such tasks. We suggest multiple ways to generate such data and evaluate the influence of dataset properties on the performance and generalization properties of the resulting networks. We also demonstrate the benefit of learning schedules that use different types of data at selected stages of the training process.",
"",
"Predicting the depth (or surface normal) of a scene from single monocular color images is a challenging task. This paper tackles this challenging and essentially underdetermined problem by regression on deep convolutional neural network (DCNN) features, combined with a post-processing refining step using conditional random fields (CRF). Our framework works at two levels, super-pixel level and pixel level. First, we design a DCNN model to learn the mapping from multi-scale image patches to depth or surface normal values at the super-pixel level. Second, the estimated super-pixel depth or surface normal is refined to the pixel level by exploiting various potentials on the depth or surface normal map, which includes a data term, a smoothness term among super-pixels and an auto-regression term characterizing the local structure of the estimation map. The inference problem can be efficiently solved because it admits a closed-form solution. Experiments on the Make3D and NYU Depth V2 datasets show competitive results compared with recent state-of-the-art methods.",
"Single-view depth prediction is a fundamental problem in computer vision. Recently, deep learning methods have led to significant progress, but such methods are limited by the available training data. Current datasets based on 3D sensors have key limitations, including indoor-only images (NYU), small numbers of training examples (Make3D), and sparse sampling (KITTI). We propose to use multi-view Internet photo collections, a virtually unlimited data source, to generate training data via modern structure-from-motion and multi-view stereo (MVS) methods, and present a large depth dataset called MegaDepth based on this idea. Data derived from MVS comes with its own challenges, including noise and unreconstructable objects. We address these challenges with new data cleaning methods, as well as automatically augmenting our data with ordinal depth relations generated using semantic segmentation. We validate the use of large amounts of Internet data by showing that models trained on MegaDepth exhibit strong generalization-not only to novel scenes, but also to other diverse datasets including Make3D, KITTI, and DIW, even when no images from those datasets are seen during training.1",
"This paper studies single-image depth perception in the wild, i.e., recovering depth from a single image taken in unconstrained settings. We introduce a new dataset \"Depth in the Wild\" consisting of images in the wild annotated with relative depth between pairs of random points. We also propose a new algorithm that learns to estimate metric depth using annotations of relative depth. Compared to the state of the art, our algorithm is simpler and performs better. Experiments show that our algorithm, combined with existing RGB-D data and our new relative depth annotations, significantly improves single-image depth perception in the wild.",
"Current methods for single-image depth estimation use training datasets with real image-depth pairs or stereo pairs, which are not easy to acquire. We propose a framework, trained on synthetic image-depth pairs and unpaired real images, that comprises an image translation network for enhancing realism of input images, followed by a depth prediction network. A key idea is having the first network act as a wide-spectrum input translator, taking in either synthetic or real images, and ideally producing minimally modified realistic images. This is done via a reconstruction loss when the training input is real, and GAN loss when synthetic, removing the need for heuristic self-regularization. The second network is trained on a task loss for synthetic image-depth pairs, with extra GAN loss to unify real and synthetic feature distributions. Importantly, the framework can be trained end-to-end, leading to good results, even surpassing early deep-learning methods that use real paired data.",
"As 3D movie viewing becomes mainstream and the Virtual Reality (VR) market emerges, the demand for 3D contents is growing rapidly. Producing 3D videos, however, remains challenging. In this paper we propose to use deep neural networks to automatically convert 2D videos and images to a stereoscopic 3D format. In contrast to previous automatic 2D-to-3D conversion algorithms, which have separate stages and need ground truth depth map as supervision, our approach is trained end-to-end directly on stereo pairs extracted from existing 3D movies. This novel training scheme makes it possible to exploit orders of magnitude more data and significantly increases performance. Indeed, Deep3D outperforms baselines in both quantitative and human subject evaluations.",
"Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation."
]
} |
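The scale-invariant error mentioned in the last reference abstract above is a standard training target for such supervised depth networks: it compares log-depths while forgiving a global scale offset. A minimal sketch, with toy depth maps standing in for network outputs:

```python
import numpy as np

def scale_invariant_loss(pred, gt, lam=0.5, eps=1e-8):
    """d = log(pred) - log(gt); loss = mean(d^2) - lam * mean(d)^2."""
    d = np.log(pred + eps) - np.log(gt + eps)
    return (d ** 2).mean() - lam * d.mean() ** 2

gt = np.random.rand(64, 64) * 10 + 0.5
pred = 2.0 * gt                                 # right structure, wrong scale
print(scale_invariant_loss(pred, gt, lam=1.0))  # ~0: global scale is forgiven
```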
1902.09907 | 2917728022 | While learning based depth estimation from images/videos has achieved substantial progress, there still exist intrinsic limitations. Supervised methods are limited by a small amount of ground truth or labeled data and unsupervised methods for monocular videos are mostly based on the static scene assumption, not performing well on real world scenarios with the presence of dynamic objects. In this paper, we propose a new learning based method consisting of DepthNet, PoseNet and Region Deformer Networks (RDN) to estimate depth from unconstrained monocular videos without ground truth supervision. The core contribution lies in RDN for proper handling of rigid and non-rigid motions of various objects such as rigidly moving cars and deformable humans. In particular, a deformation based motion representation is proposed to model individual object motion on 2D images. This representation enables our method to be applicable to diverse unconstrained monocular videos. Our method can not only achieve the state-of-the-art results on standard benchmarks KITTI and Cityscapes, but also show promising results on a crowded pedestrian tracking dataset, which demonstrates the effectiveness of the deformation based motion representation. Code and trained models are available at https://github.com/haofeixu/rdn4depth. | Learning Geometric Transformation. Spatial Transformer Networks (STN) @cite_20 introduce the first learnable module in a network architecture to handle the geometric variation of input data, which is realized by learning a global parametric transformation. Deformable ConvNets @cite_15 further extend STN by learning offsets to the regular grid sampling locations in the standard convolution. STN and Deformable ConvNets both aim at designing network architectures with geometric invariance for supervised tasks like classification and segmentation. Our deformation based motion representation aims at learning a transformation for each individual object to model object motion between adjacent frames. WarpNet @cite_24 shares a similar spirit in matching images by learning a transformation, but training WarpNet needs the supervision of artificial correspondences. Our approach is fully unsupervised and operates in the context of depth estimation from videos. | {
"cite_N": [
"@cite_24",
"@cite_15",
"@cite_20"
],
"mid": [
"2952695679",
"2950477723",
"2951005624"
],
"abstract": [
"We present an approach to matching images of objects in fine-grained datasets without using part annotations, with an application to the challenging problem of weakly supervised single-view reconstruction. This is in contrast to prior works that require part annotations, since matching objects across class and pose variations is challenging with appearance features alone. We overcome this challenge through a novel deep learning architecture, WarpNet, that aligns an object in one image with a different object in another. We exploit the structure of the fine-grained dataset to create artificial data for training this network in an unsupervised-discriminative learning approach. The output of the network acts as a spatial prior that allows generalization at test time to match real images across variations in appearance, viewpoint and articulation. On the CUB-200-2011 dataset of bird categories, we improve the AP over an appearance-only network by 13.6 . We further demonstrate that our WarpNet matches, together with the structure of fine-grained datasets, allow single-view reconstructions with quality comparable to using annotated point correspondences.",
"Convolutional neural networks (CNNs) are inherently limited to model geometric transformations due to the fixed geometric structures in its building modules. In this work, we introduce two new modules to enhance the transformation modeling capacity of CNNs, namely, deformable convolution and deformable RoI pooling. Both are based on the idea of augmenting the spatial sampling locations in the modules with additional offsets and learning the offsets from target tasks, without additional supervision. The new modules can readily replace their plain counterparts in existing CNNs and can be easily trained end-to-end by standard back-propagation, giving rise to deformable convolutional networks. Extensive experiments validate the effectiveness of our approach on sophisticated vision tasks of object detection and semantic segmentation. The code would be released.",
"Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations."
]
} |
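The global parametric transformation learned by STN is typically a 2D affine warp, realized with a differentiable grid generator and sampler, both available as PyTorch primitives. A minimal sketch with a fixed rotation follows; in an actual STN the 2x3 matrix would come from a localization sub-network rather than being hard-coded.

```python
import math
import torch
import torch.nn.functional as F

img = torch.rand(1, 3, 32, 32)             # (batch, channels, H, W)

a = math.pi / 8                             # 22.5 degree rotation
theta = torch.tensor([[[math.cos(a), -math.sin(a), 0.0],
                       [math.sin(a),  math.cos(a), 0.0]]])  # (1, 2, 3) affine matrix

grid = F.affine_grid(theta, size=img.shape, align_corners=False)
warped = F.grid_sample(img, grid, align_corners=False)      # differentiable warp
print(warped.shape)                         # torch.Size([1, 3, 32, 32])
```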
1902.09849 | 2917426135 | How to better utilize sequential information has been extensively studied in the setting of recommender systems. To this end, architectural inductive biases such as Markov-Chains, Recurrent models, Convolutional networks and many others have demonstrated reasonable success on this task. This paper proposes a new neural architecture, multi-scale Quasi-RNN, for the next item Recommendation (QR-Rec) task. Our model provides the best of both worlds by exploiting multi-scale convolutional features as the compositional gating functions of a recurrent cell. The model is implemented in a multi-scale fashion, i.e., convolutional filters of various widths are implemented to capture different union-level features of input sequences which influence the compositional encoder. The key idea aims to capture the recurrent relations between different kinds of local features, which has never been studied previously in the context of recommendation. Through extensive experiments, we demonstrate that our model achieves state-of-the-art performance on 15 well-established datasets, outperforming strong competitors such as FPMC, Fossil and Caser absolutely by 0.57%-7.16% and relatively by 1.44%-17.65% in terms of MAP, Recall@10 and NDCG@10. | Next item recommendation is a subfield of sequential recommendation, which considers the order of individual items together with the user's general preference when making recommendations. These models differ from general recommendation models, which consider only the set of items without order; the latter include matrix factorization @cite_20 based models like TimeSVD++ @cite_1 and PMF @cite_2 , as well as neural network based models like NCF @cite_27 , NeuRec @cite_9 , MPCN @cite_3 and LRML @cite_25 | {
"cite_N": [
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_27",
"@cite_2",
"@cite_25",
"@cite_20"
],
"mid": [
"2964169350",
"2080320419",
"2786995169",
"",
"",
"2783565819",
"2054141820"
],
"abstract": [
"Modeling user-item interaction patterns is an important task for personalized recommendations. Many recommender systems are based on the assumption that there exists a linear relationship between users and items while neglecting the intricacy and non-linearity of real-life historical interactions. In this paper, we propose a neural network based recommendation model (NeuRec) that untangles the complexity of user-item interactions and establish an integrated network to combine non-linear transformation with latent factors. We further design two variants of NeuRec: userbased NeuRec and item-based NeuRec, by focusing on different aspects of the interaction matrix. Extensive experiments on four real-world datasets demonstrated their superior performances on personalized ranking task.",
"Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.",
"Many recent state-of-the-art recommender systems such as D-ATT, TransNet and DeepCoNN exploit reviews for representation learning. This paper proposes a new neural architecture for recommendation with reviews. Our model operates on a multi-hierarchical paradigm and is based on the intuition that not all reviews are created equal, i.e., only a selected few are important. The importance, however, should be dynamically inferred depending on the current target. To this end, we propose a review-by-review pointer-based learning scheme that extracts important reviews from user and item reviews and subsequently matches them in a word-by-word fashion. This enables not only the most informative reviews to be utilized for prediction but also a deeper word-level interaction. Our pointer-based method operates with a gumbel-softmax based pointer mechanism that enables the incorporation of discrete vectors within differentiable neural architectures. Our pointer mechanism is co-attentive in nature, learning pointers which are co-dependent on user-item relationships. Finally, we propose a multi-pointer learning scheme that learns to combine multiple views of user-item interactions. We demonstrate the effectiveness of our proposed model via extensive experiments on 24 benchmark datasets from Amazon and Yelp. Empirical results show that our approach significantly outperforms existing state-of-the-art models, with up to 19 and 71 relative improvement when compared to TransNet and DeepCoNN respectively. We study the behavior of our multi-pointer learning mechanism, shedding light on 'evidence aggregation' patterns in review-based recommender systems.",
"",
"",
"This paper proposes a new neural architecture for collaborative ranking with implicit feedback. Our model, LRML (Latent Relational Metric Learning) is a novel extension of metric learning approaches for recommendation. More specifically, instead of simple push pull mechanisms between user and item pairs, we propose to learn latent relations for each user item interaction. This helps to alleviate the potential geometric inflexibility of existing metric learning approaches. This not only enables better performance but also a greater extent of modeling capability, allowing our model to scale to larger number of interactions. In order to do so, we employ a augmented memory module and learn to attend over these memory blocks to construct latent relations. The attention module is controlled by the user-item interaction, making the learned relation vector specific to each user-item pair. Hence, this can be interpreted as learning an exclusive and optimal relational translation for each user-item interaction. The proposed architecture not only demonstrates the state-of-the-art performance across multiple recommendation benchmarks. LRML outperforms other metric learning models by 6 -7.5 in terms of Hits@10 and nDCG@10 on large datasets such as Netflix and MovieLens20M. Moreover, qualitative studies also demonstrate evidence that our proposed model is able to infer and encode explicit sentiment, temporal and attribute information despite being only trained on implicit feedback. As such, this ascertains the ability of LRML to uncover hidden relational structure within implicit datasets.",
"As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels."
]
} |
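For readers unfamiliar with the matrix-factorization baselines listed in the related work above, here is a minimal NumPy sketch of the core idea (toy data and hyperparameters are mine; biases, implicit feedback and the temporal terms of TimeSVD++ are all omitted): users and items get latent factor vectors fit by SGD on observed ratings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim, lr, reg = 50, 100, 8, 0.05, 0.01
P = 0.1 * rng.standard_normal((n_users, dim))   # user latent factors
Q = 0.1 * rng.standard_normal((n_items, dim))   # item latent factors

# observed (user, item, rating) triples -- random toy interactions
obs = [(rng.integers(n_users), rng.integers(n_items), rng.uniform(1, 5))
       for _ in range(2000)]

for _ in range(20):                      # SGD on squared error
    for u, i, r in obs:
        err = r - P[u] @ Q[i]
        pu = P[u].copy()                 # keep old value for the item update
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * pu - reg * Q[i])

scores = P @ Q.T                         # predicted score for every user-item pair
```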
1902.09849 | 2917426135 | How to better utilize sequential information has been extensively studied in the setting of recommender systems. To this end, architectural inductive biases such as Markov-Chains, Recurrent models, Convolutional networks and many others have demonstrated reasonable success on this task. This paper proposes a new neural architecture, multi-scale Quasi-RNN for next item Recommendation (QR-Rec) task. Our model provides the best of both worlds by exploiting multi-scale convolutional features as the compositional gating functions of a recurrent cell. The model is implemented in a multi-scale fashion, i.e., convolutional filters of various widths are implemented to capture different union-level features of input sequences which influence the compositional encoder. The key idea aims to capture the recurrent relations between different kinds of local features, which has never been studied previously in the context of recommendation. Through extensive experiments, we demonstrate that our model achieves state-of-the-art performance on 15 well-established datasets, outperforming strong competitors such as FPMC, Fossil and Caser absolutely by 0.57%-7.16% and relatively by 1.44%-17.65% in terms of MAP, Recall@10 and NDCG@10. | Markov-Chain based models serve as a pillar of this domain. In @cite_14 , TransRec mainly models third-order interactions between the user, the previously visited items and the next item to consume. FPMC @cite_4 integrates MF and first-order MCs, while Fossil @cite_10 integrates similarity-based methods with higher-order MCs as sequential recommenders. | {
"cite_N": [
"@cite_10",
"@cite_14",
"@cite_4"
],
"mid": [
"2964296635",
"",
"2171279286"
],
"abstract": [
"Predicting personalized sequential behavior is a key task for recommender systems. In order to predict user actions such as the next product to purchase, movie to watch, or place to visit, it is essential to take into account both long-term user preferences and sequential patterns (i.e., short-term dynamics). Matrix Factorization and Markov Chain methods have emerged as two separate but powerful paradigms for modeling the two respectively. Combining these ideas has led to unified methods that accommodate long-and short-term dynamics simultaneously by modeling pairwise user-item and item-item interactions. In spite of the success of such methods for tackling dense data, they are challenged by sparsity issues, which are prevalent in real-world datasets. In recent years, similarity-based methods have been proposed for (sequentially-unaware) item recommendation with promising results on sparse datasets. In this paper, we propose to fuse such methods with Markov Chains to make personalized sequential recommendations. We evaluate our method, Fossil, on a variety of large, real-world datasets. We show quantitatively that Fossil outperforms alternative algorithms, especially on sparse datasets, and qualitatively that it captures personalized dynamics and is able to make meaningful recommendations.",
"",
"Recommender systems are an important component of many websites. Two of the most popular approaches are based on matrix factorization (MF) and Markov chains (MC). MF methods learn the general taste of a user by factorizing the matrix over observed user-item preferences. On the other hand, MC methods model sequential behavior by learning a transition graph over items that is used to predict the next action based on the recent actions of a user. In this paper, we present a method bringing both approaches together. Our method is based on personalized transition graphs over underlying Markov chains. That means for each user an own transition matrix is learned - thus in total the method uses a transition cube. As the observations for estimating the transitions are usually very limited, our method factorizes the transition cube with a pairwise interaction model which is a special case of the Tucker Decomposition. We show that our factorized personalized MC (FPMC) model subsumes both a common Markov chain and the normal matrix factorization model. For learning the model parameters, we introduce an adaption of the Bayesian Personalized Ranking (BPR) framework for sequential basket data. Empirically, we show that our FPMC model outperforms both the common matrix factorization and the unpersonalized MC model both learned with and without factorization."
]
} |
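A hedged sketch of how an FPMC-style score combines long-term preference with a first-order Markov transition term, following the description above; the factor matrices below are random placeholders and the full transition-cube factorization is reduced to a single score function.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, dim = 10, 30, 6
VU = rng.standard_normal((n_users, dim))   # user factors (preference term)
VI = rng.standard_normal((n_items, dim))   # item factors (preference term)
VL = rng.standard_normal((n_items, dim))   # previous-item factors (MC term)
VN = rng.standard_normal((n_items, dim))   # next-item factors (MC term)

def fpmc_score(u, prev_item):
    # long-term user preference + first-order sequential transition,
    # giving one score per candidate next item
    return VU[u] @ VI.T + VL[prev_item] @ VN.T

top10 = np.argsort(-fpmc_score(u=3, prev_item=7))[:10]  # ranked recommendations
```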
1902.09849 | 2917426135 | How to better utilize sequential information has been extensively studied in the setting of recommender systems. To this end, architectural inductive biases such as Markov-Chains, Recurrent models, Convolutional networks and many others have demonstrated reasonable success on this task. This paper proposes a new neural architecture, multi-scale Quasi-RNN for next item Recommendation (QR-Rec) task. Our model provides the best of both worlds by exploiting multi-scale convolutional features as the compositional gating functions of a recurrent cell. The model is implemented in a multi-scale fashion, i.e., convolutional filters of various widths are implemented to capture different union-level features of input sequences which influence the compositional encoder. The key idea aims to capture the recurrent relations between different kinds of local features, which has never been studied previously in the context of recommendation. Through extensive experiments, we demonstrate that our model achieves state-of-the-art performance on 15 well-established datasets, outperforming strong competitors such as FPMC, Fossil and Caser absolutely by 0.57%-7.16% and relatively by 1.44%-17.65% in terms of MAP, Recall@10 and NDCG@10. | Neural architectures combining CNNs and RNNs take advantage of both worlds, fusing local feature extraction with recurrence modeling. In @cite_13 , LRCNs were proposed, where a CNN extracting visual features is followed by an LSTM extracting sequence features, for visual tasks involving sequences. In @cite_17 , CNNs, LSTMs and DNNs were integrated into a unified model, CLDNN, with one component followed by another, for speech recognition tasks. In @cite_24 , a recurrent CNN (RCNN) was proposed by incorporating recurrent connections into each convolutional layer for object recognition. In @cite_26 , conv-RNN was proposed, integrating a Bi-RNN followed by a convolutional layer as the basic module for text modeling. In @cite_28 , the Quasi-RNN was proposed, which uses a CNN to generate the gates for pooling on NLP tasks, and which inspires this work. | {
"cite_N": [
"@cite_26",
"@cite_28",
"@cite_24",
"@cite_13",
"@cite_17"
],
"mid": [
"2742940593",
"2952436057",
"1934184906",
"2951183276",
""
],
"abstract": [
"In this paper, we introduce a generic inference hybrid framework for Convolutional Recurrent Neural Network (conv-RNN) of semantic modeling of text, seamless integrating the merits on extracting different aspects of linguistic information from both convolutional and recurrent neural network structures and thus strengthening the semantic understanding power of the new framework. Besides, based on conv-RNN, we also propose a novel sentence classification model and an attention based answer selection model with strengthening power for the sentence matching and classification respectively. We validate the proposed models on a very wide variety of data sets, including two challenging tasks of answer selection (AS) and five benchmark datasets for sentence classification (SC). To the best of our knowledge, it is by far the most complete comparison results in both AS and SC. We empirically show superior performances of conv-RNN in these different challenging tasks and benchmark datasets and also summarize insights on the performances of other state-of-the-arts methodologies.",
"Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks.",
"In recent years, the convolutional neural network (CNN) has achieved great success in many computer vision tasks. Partially inspired by neuroscience, CNN shares many properties with the visual system of the brain. A prominent difference is that CNN is typically a feed-forward architecture while in the visual system recurrent connections are abundant. Inspired by this fact, we propose a recurrent CNN (RCNN) for object recognition by incorporating recurrent connections into each convolutional layer. Though the input is static, the activities of RCNN units evolve over time so that the activity of each unit is modulated by the activities of its neighboring units. This property enhances the ability of the model to integrate the context information, which is important for object recognition. Like other recurrent neural networks, unfolding the RCNN through time can result in an arbitrarily deep network with a fixed number of parameters. Furthermore, the unfolded network has multiple paths, which can facilitate the learning process. The model is tested on four benchmark object recognition datasets: CIFAR-10, CIFAR-100, MNIST and SVHN. With fewer trainable parameters, RCNN outperforms the state-of-the-art models on all of these datasets. Increasing the number of parameters leads to even better performance. These results demonstrate the advantage of the recurrent structure over purely feed-forward structure for object recognition.",
"Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.",
""
]
} |
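Since the Quasi-RNN is the building block this paper adapts, a minimal PyTorch sketch may help (layer and variable names are illustrative, not the authors' code): a causal 1-D convolution produces candidate values and forget gates in parallel across time, and a cheap elementwise recurrence mixes them.

```python
import torch
import torch.nn as nn

class QRNNLayer(nn.Module):
    def __init__(self, d_in, d_hid, k=3):
        super().__init__()
        # padding=k-1 plus the truncation below keeps the convolution causal
        self.conv = nn.Conv1d(d_in, 2 * d_hid, kernel_size=k, padding=k - 1)
        self.d_hid = d_hid

    def forward(self, x):                        # x: (batch, time, d_in)
        zf = self.conv(x.transpose(1, 2))[:, :, : x.size(1)]  # drop future steps
        z, f = zf.split(self.d_hid, dim=1)
        z, f = torch.tanh(z), torch.sigmoid(f)   # candidates and forget gates
        h, hs = torch.zeros_like(z[:, :, 0]), []
        for t in range(z.size(2)):               # sequential part is elementwise
            h = f[:, :, t] * h + (1 - f[:, :, t]) * z[:, :, t]
            hs.append(h)
        return torch.stack(hs, dim=1)            # (batch, time, d_hid)

out = QRNNLayer(16, 32)(torch.randn(4, 20, 16))
```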
1902.09849 | 2917426135 | How to better utilize sequential information has been extensively studied in the setting of recommender systems. To this end, architectural inductive biases such as Markov-Chains, Recurrent models, Convolutional networks and many others have demonstrated reasonable success on this task. This paper proposes a new neural architecture, multi-scale Quasi-RNN for next item Recommendation (QR-Rec) task. Our model provides the best of both worlds by exploiting multi-scale convolutional features as the compositional gating functions of a recurrent cell. The model is implemented in a multi-scale fashion, i.e., convolutional filters of various widths are implemented to capture different union-level features of input sequences which influence the compositional encoder. The key idea aims to capture the recurrent relations between different kinds of local features, which has never been studied previously in the context of recommendation. Through extensive experiments, we demonstrate that our model achieves state-of-the-art performance on 15 well-established datasets, outperforming strong competitors such as FPMC, Fossil and Caser absolutely by 0.57%-7.16% and relatively by 1.44%-17.65% in terms of MAP, Recall@10 and NDCG@10. | Beyond the vanilla RNN, LSTM and GRU are both commonly used variants with simple gating mechanisms that empower RNNs. To further enhance RNNs, much research has also been done on gating mechanisms, including the Quasi-RNN @cite_28 with a single-scale k-gram CNN to adaptively generate gates, SRU @cite_12 which uses a single-layer feed-forward network with a sigmoid activation function to generate gates and accelerates training, and RCRN @cite_18 which learns the recurrent gating functions using recurrent networks. These were created not only to address gradient explosion or vanishing in RNNs, but also to improve accuracy and speed of convergence on tasks involving sequences. | {
"cite_N": [
"@cite_28",
"@cite_18",
"@cite_12"
],
"mid": [
"2952436057",
"2890832667",
"2751185861"
],
"abstract": [
"Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks.",
"Recurrent neural networks (RNNs) such as long short-term memory and gated recurrent units are pivotal building blocks across a broad spectrum of sequence modeling problems. This paper proposes a recurrently controlled recurrent network (RCRN) for expressive and powerful sequence encoding. More concretely, the key idea behind our approach is to learn the recurrent gating functions using recurrent networks. Our architecture is split into two components - a controller cell and a listener cell whereby the recurrent controller actively influences the compositionality of the listener cell. We conduct extensive experiments on a myriad of tasks in the NLP domain such as sentiment analysis (SST, IMDb, Amazon reviews, etc.), question classification (TREC), entailment classification (SNLI, SciTail), answer selection (WikiQA, TrecQA) and reading comprehension (NarrativeQA). Across all 26 datasets, our results demonstrate that RCRN not only consistently outperforms BiLSTMs but also stacked BiLSTMs, suggesting that our controller architecture might be a suitable replacement for the widely adopted stacked architecture. Additionally, RCRN achieves state-of-the-art results on several well-established datasets.",
"Common recurrent neural network architectures scale poorly due to the intrinsic difficulty in parallelizing their state computations. In this work, we propose the Simple Recurrent Unit (SRU) architecture, a recurrent unit that simplifies the computation and exposes more parallelism. In SRU, the majority of computation for each step is independent of the recurrence and can be easily parallelized. SRU is as fast as a convolutional layer and 5-10x faster than an optimized LSTM implementation. We study SRUs on a wide range of applications, including classification, question answering, language modeling, translation and speech recognition. Our experiments demonstrate the effectiveness of SRU and the trade-off it enables between speed and performance. We open source our implementation in PyTorch and CNTK."
]
} |
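For concreteness, the convolution-generated gating described above can be transcribed as follows (notation is mine; this is the QRNN-style "fo-pooling" variant, where k-gram convolutions produce all gates in parallel and only the elementwise recurrence is sequential):

```latex
\begin{aligned}
  z_t &= \tanh\bigl(W_z * x_{t-k+1:t}\bigr), &
  f_t &= \sigma\bigl(W_f * x_{t-k+1:t}\bigr), &
  o_t &= \sigma\bigl(W_o * x_{t-k+1:t}\bigr), \\
  c_t &= f_t \odot c_{t-1} + (1 - f_t) \odot z_t, &
  h_t &= o_t \odot c_t . &&
\end{aligned}
```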
1902.10051 | 2916619961 | The unit disk graph (UDG) is a widely employed model for the study of wireless networks. In this model, wireless nodes are represented by points in the plane and there is an edge between two points if and only if their Euclidean distance is at most one. A hop spanner for the UDG is a spanning subgraph @math such that for every edge @math in the UDG the topological shortest path between @math and @math in @math has a constant number of edges. The hop stretch factor of @math is the maximum number of edges of these paths. A hop spanner is plane (i.e. embedded planar) if its edges do not cross each other. The problem of constructing hop spanners for the UDG has received considerable attention in both computational geometry and wireless ad hoc networks. Despite this attention, there has not been significant progress on getting hop spanners that (i) are plane, and (ii) have low hop stretch factor. Previous constructions either do not ensure the planarity or have high hop stretch factor. The only construction that satisfies both conditions is due to Catusse, Chepoi, and Vaxès (2010); their plane hop spanner has hop stretch factor at most 449. Our main result is a simple algorithm that constructs a plane hop spanner for the UDG. In addition to the simplicity, the hop stretch factor of the constructed spanner is at most 341. Even though the algorithm itself is simple, its analysis is rather involved. Several results on the plane geometry are established in the course of the proof. These results are of independent interest. | Alzoubi et al. @cite_6 proposed a distributed algorithm for the construction of a hop spanner for the UDG. Their algorithm integrates the connected dominating set and the local Delaunay graph of @cite_17 to form a backbone for the spanner. Although the backbone is plane, the entire spanner is not. The hop stretch factor of this spanner is at most @math (around @math as estimated in @cite_7 ). | {
"cite_N": [
"@cite_7",
"@cite_6",
"@cite_17"
],
"mid": [
"1766136887",
"1983502795",
"2161843738"
],
"abstract": [
"The simplest model of a wireless network graph is the Unit Disk Graph (UDG): an edge exists in UDG if the Euclidean distance between its endpoints is ≤ 1. The problem of constructing planar spanners of Unit Disk Graphs with respect to the Euclidean distance has received considerable attention from researchers in computational geometry and ad-hoc wireless networks. In this paper, we present an algorithm that, given a set X of terminals in the plane, constructs a planar hop spanner with constant stretch factor for the Unit Disk Graph defined by X. Our algorithm improves on previous constructions in the sense that (i) it ensures the planarity of the whole spanner while previous algorithms ensure only the planarity of a backbone subgraph; (ii) the hop stretch factor of our spanner is significantly smaller.",
"We propose a new geometric spanner for static wireless ad hoc networks, which can be constructed efficiently in a localized manner. It integrates the connected dominating set and the local Delaunay graph to form a backbone of the wireless network. Priori arts showed that both structures can be constructed locally with bounded communication costs. This new spanner has these following attractive properties: 1) the backbone is a planar graph, 2) the node degree of the backbone is bounded from above by a positive constant, 3) it is a spanner for both hops and length, 4) it can be constructed locally and is easy to maintain when the nodes move around, and 5) moreover, the communication cost of each node is bounded by a constant. Simulation results are also presented for studying its practical performance.",
"Several localized routing protocols (see Bose, P. and Morin, P., Proc. 10th Annual Int. Symp. on Algorithms and Computation ISAAC, 1999) guarantee the delivery of packets when the underlying network topology is the Delaunay triangulation of all wireless nodes. However, it is expensive to construct the Delaunay triangulation in a distributed manner. Given a set of wireless nodes, we more accurately model the network as a unit-disk graph, UDG, in which a link between two nodes exists only if the distance between them is at most the maximum transmission range. Given a graph H, a spanning subgraph G of H is a t-spanner if the length of the shortest path connecting any two points in G is no more than t times the length of the shortest path connecting the two points in H. We present a novel localized networking protocol that constructs a planar 2.5-spanner of UDG, called the localized Delaunay triangulation, as network topology. It contains all edges that are in both the UDG and the Delaunay triangulation of all wireless nodes. Our experiments show that the delivery rates of existing localized routing protocols are increased when localized Delaunay triangulation is used instead of several previously proposed topologies. The total communication cost of our networking protocol is O(n log n) bits. Moreover, the computation cost of each node u is O(d sub u log d sub u ), where d sub u is the number of 1-hop neighbors of u in UDG."
]
} |
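The hop-stretch notion used throughout these rows is easy to check empirically. A small pure-Python sketch follows (toy point set; the subgraph H below is a stand-in, not any of the cited constructions): build the UDG, pick a subgraph, and take the maximum BFS hop distance in H between the endpoints of every UDG edge.

```python
import random
from collections import deque

random.seed(0)
pts = [(random.uniform(0, 3), random.uniform(0, 3)) for _ in range(60)]
n = len(pts)

def dist2(u, v):
    return (pts[u][0] - pts[v][0]) ** 2 + (pts[u][1] - pts[v][1]) ** 2

udg = [[v for v in range(n) if v != u and dist2(u, v) <= 1.0] for u in range(n)]

def hops(adj, s, t):                      # BFS hop distance
    seen, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return seen[u]
        for v in adj[u]:
            if v not in seen:
                seen[v] = seen[u] + 1
                q.append(v)
    return float("inf")                   # endpoints disconnected in subgraph

# Toy subgraph H: keep an edge if either endpoint ranks the other among its
# 4 nearest UDG neighbours (a stand-in for a real spanner construction).
keep = [set(sorted(udg[u], key=lambda v: dist2(u, v))[:4]) for u in range(n)]
H = [[v for v in udg[u] if v in keep[u] or u in keep[v]] for u in range(n)]

stretch = max(hops(H, u, v) for u in range(n) for v in udg[u])
```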
1902.10051 | 2916619961 | The unit disk graph (UDG) is a widely employed model for the study of wireless networks. In this model, wireless nodes are represented by points in the plane and there is an edge between two points if and only if their Euclidean distance is at most one. A hop spanner for the UDG is a spanning subgraph @math such that for every edge @math in the UDG the topological shortest path between @math and @math in @math has a constant number of edges. The hop stretch factor of @math is the maximum number of edges of these paths. A hop spanner is plane (i.e. embedded planar) if its edges do not cross each other. The problem of constructing hop spanners for the UDG has received considerable attention in both computational geometry and wireless ad hoc networks. Despite this attention, there has not been significant progress on getting hop spanners that (i) are plane, and (ii) have low hop stretch factor. Previous constructions either do not ensure the planarity or have high hop stretch factor. The only construction that satisfies both conditions is due to Catusse, Chepoi, and Vaxès (2010); their plane hop spanner has hop stretch factor at most 449. Our main result is a simple algorithm that constructs a plane hop spanner for the UDG. In addition to the simplicity, the hop stretch factor of the constructed spanner is at most 341. Even though the algorithm itself is simple, its analysis is rather involved. Several results on the plane geometry are established in the course of the proof. These results are of independent interest. | To the best of our knowledge, the only construction that guarantees the planarity of the entire hop spanner is due to Catusse, Chepoi, and Vaxès @cite_7 . First they use a regular square-grid to partition input points into clusters. Then they add edges between points in different clusters, and also between points in the same cluster to obtain a hop spanner, which is not necessarily plane. Then they go through several steps and in each step they remove some edges to ensure planarity, and add some new edges to maintain constant hop stretch factor. At the end they obtain a plane hop spanner with hop stretch factor at most @math . This spanner can be obtained by a localized distributed algorithm. | {
"cite_N": [
"@cite_7"
],
"mid": [
"1766136887"
],
"abstract": [
"The simplest model of a wireless network graph is the Unit Disk Graph (UDG): an edge exists in UDG if the Euclidean distance between its endpoints is ≤ 1. The problem of constructing planar spanners of Unit Disk Graphs with respect to the Euclidean distance has received considerable attention from researchers in computational geometry and ad-hoc wireless networks. In this paper, we present an algorithm that, given a set X of terminals in the plane, constructs a planar hop spanner with constant stretch factor for the Unit Disk Graph defined by X. Our algorithm improves on previous constructions in the sense that (i) it ensures the planarity of the whole spanner while previous algorithms ensure only the planarity of a backbone subgraph; (ii) the hop stretch factor of our spanner is significantly smaller."
]
} |
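The square-grid clustering step described above can be sketched in a few lines. In the sketch below, the cell side 1/sqrt(2) makes the cell diagonal exactly 1, so any two points in the same cell are at Euclidean distance at most 1 and hence adjacent in the UDG (the exact cell size used by the cited construction may differ).

```python
import math
from collections import defaultdict

SIDE = 1 / math.sqrt(2)   # cell diagonal = 1, so same-cell points are UDG-adjacent

def grid_clusters(points):
    cells = defaultdict(list)
    for p in points:
        key = (math.floor(p[0] / SIDE), math.floor(p[1] / SIDE))
        cells[key].append(p)
    return cells          # maps cell index -> points clustered in that cell

clusters = grid_clusters([(0.1, 0.2), (0.3, 0.4), (1.5, 0.2)])
```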
1902.09884 | 2915604253 | The field of few-shot learning has been laboriously explored in the supervised setting, where per-class labels are available. On the other hand, the unsupervised few-shot learning setting, where no labels of any kind are required, has seen little investigation. We propose a method, named Assume, Augment and Learn or AAL, for generating few-shot tasks using unlabeled data. We randomly label a random subset of images from an unlabeled dataset to generate a support set. Then by applying data augmentation on the support set's images, and reusing the support set's labels, we obtain a target set. The resulting few-shot tasks can be used to train any standard meta-learning framework. Once trained, such a model can be directly applied on small real-labeled datasets without any changes or fine-tuning required. In our experiments, the learned models achieve good generalization performance in a variety of established few-shot learning tasks on Omniglot and Mini-Imagenet. | The reformulation of few-shot learning as a meta-learning problem, using the few-shot learning setting of @cite_3 , was arguably one of the main enablers of the substantial progress that has been observed in the field of few-shot learning in the past few years. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2963341924"
],
"abstract": [
"Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6 to 93.2 and from 88.0 to 93.8 on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank."
]
} |
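The set-to-set episode structure referred to above is straightforward to reproduce. Below is a minimal sketch of sampling one N-way K-shot task from a labeled pool (the data layout is a hypothetical class-to-examples dict, and each class is assumed to hold enough examples):

```python
import random

def sample_episode(pool, n_way=5, k_shot=1, q_queries=5):
    """pool: dict mapping class name -> list of examples."""
    classes = random.sample(sorted(pool), n_way)
    support, target = [], []
    for label, cls in enumerate(classes):        # episode-local labels 0..N-1
        ex = random.sample(pool[cls], k_shot + q_queries)
        support += [(x, label) for x in ex[:k_shot]]
        target += [(x, label) for x in ex[k_shot:]]
    return support, target
```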
1902.09884 | 2915604253 | The field of few-shot learning has been laboriously explored in the supervised setting, where per-class labels are available. On the other hand, the unsupervised few-shot learning setting, where no labels of any kind are required, has seen little investigation. We propose a method, named Assume, Augment and Learn or AAL, for generating few-shot tasks using unlabeled data. We randomly label a random subset of images from an unlabeled dataset to generate a support set. Then by applying data augmentation on the support set's images, and reusing the support set's labels, we obtain a target set. The resulting few-shot tasks can be used to train any standard meta-learning framework. Once trained, such a model can be directly applied on small real-labeled datasets without any changes or fine-tuning required. In our experiments, the learned models achieve good generalization performance in a variety of established few-shot learning tasks on Omniglot and Mini-Imagenet. | Once the set-to-set few-shot learning setting was introduced, a variety of meta-learning systems utilizing it followed. The first of these was Matching Networks @cite_3 , where both the support and target sets are embedded into a low-dimensional space using two learnable embedding functions, @math (used on the support set) and @math (used on the target set), parameterized as neural networks. Once the low-dimensional embeddings are computed, the target set items are compared with the support set items using a non-parametric distance metric such as cosine distance. Once the distance vectors are acquired, one can apply a softmax function over them to obtain a probability distribution that expresses the class of a particular target set item over the support set classes. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2963341924"
],
"abstract": [
"Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6 to 93.2 and from 88.0 to 93.8 on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank."
]
} |
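The Matching Networks read-out just described reduces to cosine attention over support labels. A minimal NumPy sketch (the embeddings below are random placeholders for the outputs of the learned embedding functions):

```python
import numpy as np

def matching_predict(support_emb, support_labels, target_emb, n_classes):
    s = support_emb / np.linalg.norm(support_emb, axis=1, keepdims=True)
    t = target_emb / np.linalg.norm(target_emb, axis=1, keepdims=True)
    sims = t @ s.T                                           # cosine similarities
    att = np.exp(sims) / np.exp(sims).sum(1, keepdims=True)  # row-wise softmax
    onehot = np.eye(n_classes)[support_labels]               # (n_support, n_classes)
    return att @ onehot                   # class distribution per target item

probs = matching_predict(np.random.randn(5, 16), np.arange(5),
                         np.random.randn(3, 16), n_classes=5)
```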
1902.09884 | 2915604253 | The field of few-shot learning has been laboriously explored in the supervised setting, where per-class labels are available. On the other hand, the unsupervised few-shot learning setting, where no labels of any kind are required, has seen little investigation. We propose a method, named Assume, Augment and Learn or AAL, for generating few-shot tasks using unlabeled data. We randomly label a random subset of images from an unlabeled dataset to generate a support set. Then by applying data augmentation on the support set's images, and reusing the support set's labels, we obtain a target set. The resulting few-shot tasks can be used to train any standard meta-learning framework. Once trained, such a model can be directly applied on small real-labeled datasets without any changes or fine-tuning required. In our experiments, the learned models achieve good generalization performance in a variety of established few-shot learning tasks on Omniglot and Mini-Imagenet. | After embedding-based meta-learning models came the resurgence of gradient-based meta-learning models. Such models utilize inference-time model-state updates to acquire task-specific knowledge from a support set, such that they can generalize strongly on a particular task's target set. The first of such approaches was the Meta-Learner LSTM @cite_2 , which jointly learns a gradient-conditional weight-update function and a parameter initialization for a model. At inference time, the learned weight-update function is used to apply a single update step on the base model using gradients with respect to a support set loss. Then, the updated base model is applied on a target set to compute a target loss, which is then used as the system's optimization loss. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2753160622"
],
"abstract": [
"Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning."
]
} |
1902.09884 | 2915604253 | The field of few-shot learning has been laboriously explored in the supervised setting, where per-class labels are available. On the other hand, the unsupervised few-shot learning setting, where no labels of any kind are required, has seen little investigation. We propose a method, named Assume, Augment and Learn or AAL, for generating few-shot tasks using unlabeled data. We randomly label a random subset of images from an unlabeled dataset to generate a support set. Then by applying data augmentation on the support set's images, and reusing the support set's labels, we obtain a target set. The resulting few-shot tasks can be used to train any standard meta-learning framework. Once trained, such a model can be directly applied on small real-labeled datasets without any changes or fine-tuning required. In our experiments, the learned models achieve good generalization performance in a variety of established few-shot learning tasks on Omniglot and Mini-Imagenet. | Subsequent to the Meta-Learner LSTM, the Model-Agnostic Meta-Learning (MAML) framework @cite_5 was introduced. In MAML the authors propose to use standard stochastic gradient descent instead of a learnable update function and, in addition, to increase the number of update steps the model is allowed to take on a task's support set. By doing so, MAML achieved state-of-the-art performance in the supervised few-shot learning setting across all established few-shot tasks on both Omniglot and Mini-Imagenet. However, MAML also exhibited a number of problems, including training instability, significant sensitivity to architecture selection, and a requirement for extensive hyperparameter tuning to achieve state-of-the-art results. Furthermore, many details in the design of MAML were chosen without much consideration for the multi-step nature of the model, which constrained the model's optimization process unnecessarily and reduced its potential generalization performance and convergence speed. Follow-up work proposes various modifications to MAML that resolve its instability problems, enable automatic learning of its hyperparameters for a given task, and free MAML from its design-derived constraints. The resulting model, called MAML++, achieves a significant improvement in generalization performance across all established few-shot tasks whilst decreasing the model's computational overheads. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2951775809"
],
"abstract": [
"We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies."
]
} |
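A minimal sketch of the MAML inner/outer loop described above, on a toy functional model (linear regression with random tasks, chosen only to show the mechanics; `create_graph=True` is what keeps the second-order terms the outer update needs):

```python
import torch

def forward(w, x):                 # tiny functional "model": linear regression
    return x @ w

def task_loss(w, x, y):
    return ((forward(w, x) - y) ** 2).mean()

def maml_outer_loss(w, support, target, inner_lr=0.01, inner_steps=1):
    fast = w
    for _ in range(inner_steps):   # inner loop: SGD on the support loss
        g, = torch.autograd.grad(task_loss(fast, *support), fast,
                                 create_graph=True)
        fast = fast - inner_lr * g                     # adapted parameters
    return task_loss(fast, *target)                    # evaluated on target set

w = torch.zeros(8, 1, requires_grad=True)              # meta-learned init
opt = torch.optim.SGD([w], lr=0.1)
for _ in range(100):                                   # meta-training loop
    x_s, y_s = torch.randn(5, 8), torch.randn(5, 1)    # toy support set
    x_t, y_t = torch.randn(5, 8), torch.randn(5, 1)    # toy target set
    opt.zero_grad()
    maml_outer_loss(w, (x_s, y_s), (x_t, y_t)).backward()
    opt.step()
```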
1902.09884 | 2915604253 | The field of few-shot learning has been laboriously explored in the supervised setting, where per-class labels are available. On the other hand, the unsupervised few-shot learning setting, where no labels of any kind are required, has seen little investigation. We propose a method, named Assume, Augment and Learn or AAL, for generating few-shot tasks using unlabeled data. We randomly label a random subset of images from an unlabeled dataset to generate a support set. Then by applying data augmentation on the support set's images, and reusing the support set's labels, we obtain a target set. The resulting few-shot tasks can be used to train any standard meta-learning framework. Once trained, such a model can be directly applied on small real-labeled datasets without any changes or fine-tuning required. In our experiments, the learned models achieve good generalization performance in a variety of established few-shot learning tasks on Omniglot and Mini-Imagenet. | Unsupervised deep learning has been extensively investigated in the context of generative models @cite_1 @cite_6 @cite_8 , yet has only been briefly attempted in classification models @cite_15 @cite_11 . | {
"cite_N": [
"@cite_8",
"@cite_1",
"@cite_6",
"@cite_15",
"@cite_11"
],
"mid": [
"2099471712",
"2467604901",
"",
"2950180292",
"2607510315"
],
"abstract": [
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"In just three years, Variational Autoencoders (VAEs) have emerged as one of the most popular approaches to unsupervised learning of complicated distributions. VAEs are appealing because they are built on top of standard function approximators (neural networks), and can be trained with stochastic gradient descent. VAEs have already shown promise in generating many kinds of complicated data, including handwritten digits, faces, house numbers, CIFAR images, physical models of scenes, segmentation, and predicting the future from static images. This tutorial introduces the intuitions behind VAEs, explains the mathematics behind them, and describes some empirical behavior. No prior knowledge of variational Bayesian methods is assumed.",
"",
"Clustering is a class of unsupervised learning methods that has been extensively applied and studied in computer vision. Little work has been done to adapt it to the end-to-end training of visual features on large scale datasets. In this work, we present DeepCluster, a clustering method that jointly learns the parameters of a neural network and the cluster assignments of the resulting features. DeepCluster iteratively groups the features with a standard clustering algorithm, k-means, and uses the subsequent assignments as supervision to update the weights of the network. We apply DeepCluster to the unsupervised training of convolutional neural networks on large datasets like ImageNet and YFCC100M. The resulting model outperforms the current state of the art by a significant margin on all the standard benchmarks.",
"Convolutional neural networks provide visual features that perform remarkably well in many computer vision applications. However, training these networks requires significant amounts of supervision. This paper introduces a generic framework to train deep networks, end-to-end, with no supervision. We propose to fix a set of target representations, called Noise As Targets (NAT), and to constrain the deep features to align to them. This domain agnostic approach avoids the standard unsupervised learning issues of trivial solutions and collapsing of features. Thanks to a stochastic batch reassignment strategy and a separable square loss function, it scales to millions of images. The proposed approach produces representations that perform on par with state-of-the-art unsupervised methods on ImageNet and Pascal VOC."
]
} |
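The AAL task-generation procedure summarized in the abstract above is short enough to sketch directly (`augment` is a placeholder for any stochastic data augmentation; the labels are arbitrary by construction):

```python
import random

def make_aal_task(unlabeled_images, augment, n_way=5, k_shot=1):
    chosen = random.sample(unlabeled_images, n_way * k_shot)
    support = [(img, i % n_way) for i, img in enumerate(chosen)]  # assumed labels
    target = [(augment(img), label) for img, label in support]    # same labels
    return support, target
```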
1902.10058 | 2968225067 | Visual Place Recognition (VPR) is an important component in both computer vision and robotics applications, thanks to its ability to determine whether a place has been visited and where specifically. A major challenge in VPR is to handle changes of environmental conditions including weather, season and illumination. Most VPR methods try to improve the place recognition performance by ignoring the environmental factors, leading to decreased accuracy when environmental conditions change significantly, such as day versus night. To this end, we propose an end-to-end conditional visual place recognition method. Specifically, we introduce the multi-domain feature learning method (MDFL) to capture multiple attribute-descriptions for a given place, and then use a feature detaching module to separate the environmental condition-related features from those that are not. The only label required within this feature learning pipeline is the environmental condition. Evaluation of the proposed method is conducted on the multi-season NORDLAND dataset, and the multi-weather GTAV dataset. Experimental results show that our method improves the feature robustness against variant environmental conditions. | Visual place recognition (VPR) methods have been well studied in the past several years, and can be classified into two categories: feature- and appearance-based. In feature-based VPR, descriptive features are transformed into local place descriptors. Then, place recognition can be achieved by extracting the current place descriptors and searching for similar place indexes in the bag of words. On the contrary, appearance-based VPR uses feature descriptors that are extracted from the entire image, and performs place recognition by assessing feature similarities. SeqSLAM @cite_5 describes image similarities by directly using the sum of absolute differences (SAD) between frames, while vector of locally aggregated descriptors (VLAD) @cite_8 aggregates local invariant features into a single feature vector and uses Euclidean distance between vectors to quantify image similarities. | {
"cite_N": [
"@cite_5",
"@cite_8"
],
"mid": [
"2110405746",
"2600785079"
],
"abstract": [
"Learning and then recognizing a route, whether travelled during the day or at night, in clear or inclement weather, and in summer or winter is a challenging task for state of the art algorithms in computer vision and robotics. In this paper, we present a new approach to visual navigation under changing conditions dubbed SeqSLAM. Instead of calculating the single location most likely given a current image, our approach calculates the best candidate matching location within every local navigation sequence. Localization is then achieved by recognizing coherent sequences of these “local best matches”. This approach removes the need for global matching performance by the vision front-end - instead it must only pick the best match within any short sequence of images. The approach is applicable over environment changes that render traditional feature-based techniques ineffective. Using two car-mounted camera datasets we demonstrate the effectiveness of the algorithm and compare it to one of the most successful feature-based SLAM algorithms, FAB-MAP. The perceptual change in the datasets is extreme; repeated traverses through environments during the day and then in the middle of the night, at times separated by months or years and in opposite seasons, and in clear weather and extremely heavy rain. While the feature-based method fails, the sequence-based algorithm is able to match trajectory segments at 100 precision with recall rates of up to 60 .",
"Traditional VLAD method only uses the SITF feature. Since the SITF feature represents the local gradient information, thus VLAD representation based on SITF feature of image has low discriminative power. To address the problem, we present a simple and effective method that fuse the VLAD vectors based on local gradient and color information. Also, in order to improve the retrieval accuracy and reduce running time, we use whitening operator for VLAD vectors. Our proposed method is evaluated on two benchmark datasets, i.e., Holidays and Ukbench. Experiments show that our proposed method achieves good performance."
]
} |
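The SAD-based matching used by SeqSLAM, as described above, can be sketched as a pairwise difference matrix followed by a search over short aligned windows (zero-slope diagonals here; the real system also normalizes patches and searches over a range of trajectory slopes):

```python
import numpy as np

def sad_matrix(seq_a, seq_b):                  # images as arrays, any fixed shape
    a = np.stack([im.ravel() for im in seq_a]).astype(float)
    b = np.stack([im.ravel() for im in seq_b]).astype(float)
    return np.abs(a[:, None, :] - b[None, :, :]).sum(-1)   # (len_a, len_b)

def best_match(D, window=5):
    # score candidate alignments of `window` consecutive frames
    scores = [(i, j, sum(D[i + t, j + t] for t in range(window)))
              for i in range(D.shape[0] - window)
              for j in range(D.shape[1] - window)]
    return min(scores, key=lambda s: s[2])     # lowest accumulated SAD

D = sad_matrix(np.random.rand(20, 8, 8), np.random.rand(20, 8, 8))
i, j, score = best_match(D)                    # matched sequence offsets
```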
1902.10058 | 2968225067 | Visual Place Recognition (VPR) is an important component in both computer vision and robotics applications, thanks to its ability to determine whether a place has been visited and where specifically. A major challenge in VPR is to handle changes of environmental conditions including weather, season and illumination. Most VPR methods try to improve the place recognition performance by ignoring the environmental factors, leading to decreased accuracy when environmental conditions change significantly, such as day versus night. To this end, we propose an end-to-end conditional visual place recognition method. Specifically, we introduce the multi-domain feature learning method (MDFL) to capture multiple attribute-descriptions for a given place, and then use a feature detaching module to separate the environmental condition-related features from those that are not. The only label required within this feature learning pipeline is the environmental condition. Evaluation of the proposed method is conducted on the multi-season NORDLAND dataset, and the multi-weather GTAV dataset. Experimental results show that our method improves the feature robustness against variant environmental conditions. | Recently, many works have investigated CNN-based features for appearance-based VPR tasks. Sünderhauf et al. @cite_3 first used a pre-trained VGG model to extract middle-layer CNN outputs as image descriptors in the sequence-matching pipeline. However, a pre-trained network cannot be further trained for the place recognition task, since data labels are hard to define for VPR. Recently, Chen et al. @cite_4 and Garg et al. @cite_2 address condition-invariant VPR as an image classification task and rely on precise but expensive human labeling for semantic labels. Arandjelović et al. @cite_13 developed NetVLAD, a modified form of the VLAD features combined with CNNs, to improve feature robustness. The approach that comes closest to our method is the work of Porav et al. @cite_10 , where they learn invertible generators based on CycleGAN @cite_7 . The original CycleGAN method can transform an image from one domain to another, but such a transformation is limited to only two domains. Thus, for the multi-domain place recognition task, the method of Porav et al. requires a transformation model for each pair of conditions. In contrast, our method can learn more than two conditions within the same structure. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_10",
"@cite_3",
"@cite_2",
"@cite_13"
],
"mid": [
"2580440899",
"2618574778",
"2793048932",
"301022506",
"2718724383",
"2179042386"
],
"abstract": [
"The success of deep learning techniques in the computer vision domain has triggered a range of initial investigations into their utility for visual place recognition, all using generic features from networks that were trained for other types of recognition tasks. In this paper, we train, at large scale, two CNN architectures for the specific place recognition task and employ a multi-scale feature encoding method to generate condition- and viewpoint-invariant features. To enable this training to occur, we have developed a massive Specific PlacEs Dataset (SPED) with hundreds of examples of place appearance change at thousands of different places, as opposed to the semantic place type datasets currently available. This new dataset enables us to set up a training regime that interprets place recognition as a classification problem. We comprehensively evaluate our trained networks on several challenging benchmark place recognition datasets and demonstrate that they achieve an average 10 increase in performance over other place recognition algorithms and pre-trained CNNs. By analyzing the network responses and their differences from pre-trained networks, we provide insights into what a network learns when training for place recognition, and what these results signify for future research in this area.",
"",
"We present a method of improving visual place recognition and metric localisation under very strong appear- ance change. We learn an invertable generator that can trans- form the conditions of images, e.g. from day to night, summer to winter etc. This image transforming filter is explicitly designed to aid and abet feature-matching using a new loss based on SURF detector and dense descriptor maps. A network is trained to output synthetic images optimised for feature matching given only an input RGB image, and these generated images are used to localize the robot against a previously built map using traditional sparse matching approaches. We benchmark our results using multiple traversals of the Oxford RobotCar Dataset over a year-long period, using one traversal as a map and the other to localise. We show that this method significantly improves place recognition and localisation under changing and adverse conditions, while reducing the number of mapping runs needed to successfully achieve reliable localisation.",
"After the incredible success of deep learning in the computer vision domain, there has been much interest in applying Convolutional Network (ConvNet) features in robotic fields such as visual navigation and SLAM. Unfortunately, there are fundamental differences and challenges involved. Computer vision datasets are very different in character to robotic camera data, real-time performance is essential, and performance priorities can be different. This paper comprehensively evaluates and compares the utility of three state-of-the-art ConvNets on the problems of particular relevance to navigation for robots; viewpoint-invariance and condition-invariance, and for the first time enables real-time place recognition performance using ConvNets with large maps by integrating a variety of existing (locality-sensitive hashing) and novel (semantic search space partitioning) optimization techniques. We present extensive experiments on four real world datasets cultivated to evaluate each of the specific challenges in place recognition. The results demonstrate that speed-ups of two orders of magnitude can be achieved with minimal accuracy degradation, enabling real-time performance. We confirm that networks trained for semantic place categorization also perform better at (specific) place recognition when faced with severe appearance changes and provide a reference for which networks and layers are optimal for different aspects of the place recognition problem.",
"The place recognition problem comprises two distinct subproblems; recognizing a specific location in the world (“specific” or “ordinary” place recognition) and recognizing the type of place (place categorization). Both are important competencies for mobile robots and have each received significant attention in the robotics and computer vision community, but usually as separate areas of investigation. In this paper, we leverage the powerful complementary nature of place recognition and place categorization processes to create a new hybrid place recognition system that uses place context to inform place recognition. We show that semantic place categorization creates an informative natural segmenting of physical space that in turn enables significantly better place recognition performance in comparison to existing techniques. In particular, this new semantically-informed approach adds robustness to significant local changes within the environment, such as transitioning between indoor and outdoor environments or between dark and light rooms in a house, complementing the capabilities of current condition-invariant techniques that are robust to globally consistent change (such as day to night cycles). We perform experiments using 4 novel benchmark datasets and show that semantically-informed place recognition outperforms the previous state-of-the-art systems. Like it does for object recognition [1], we believe that semantics can play a key role in boosting conventional place recognition and navigation performance for robotic systems.",
"We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following three principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the \"Vector of Locally Aggregated Descriptors\" image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we develop a training procedure, based on a new weakly supervised ranking loss, to learn parameters of the architecture in an end-to-end manner from images depicting the same places over time downloaded from Google Street View Time Machine. Finally, we show that the proposed architecture significantly outperforms non-learnt image representations and off-the-shelf CNN descriptors on two challenging place recognition benchmarks, and improves over current state of-the-art compact image representations on standard image retrieval benchmarks."
]
} |
1902.09848 | 2949256944 | This paper presents PUBSUB-SGX, a content-based publish-subscribe system that exploits trusted execution environments (TEEs), such as Intel SGX, to guarantee confidentiality and integrity of data as well as anonymity and privacy of publishers and subscribers. We describe the technical details of our Python implementation, as well as the required system support introduced to deploy our system in a container-based runtime. Our evaluation results show that our approach is sound, while at the same time highlighting the performance and scalability trade-offs. In particular, by supporting just-in-time compilation inside of TEEs, Python programs inside of TEEs are in general faster than when executed natively using standard CPython. | To the best of our knowledge, there is only one existing system (SCBR @cite_34 ) that efficiently combines the publish subscribe paradigm with Intel SGX. SCBR integrates with Intel SGX by running the code of the brokers inside the enclaves, thus preventing an attacker with full control from inspecting the messages in transit through the network. Conversely, PUBSUB-SGX prevents unintended messages from being routed to such brokers by means of properly crafted subscription policies, leveraging the mechanisms described earlier. | {
"cite_N": [
"@cite_34"
],
"mid": [
"2552697910"
],
"abstract": [
"Content-based routing (CBR) is a powerful model that supports scalable asynchronous communication among large sets of geographically distributed nodes. Yet, preserving privacy represents a major limitation for the wide adoption of CBR, notably when the routers are located in public clouds. Indeed, a CBR router must see the content of the messages sent by data producers, as well as the filters (or subscriptions) registered by data consumers. This represents a major deterrent for companies for which data is a key asset, as for instance in the case of financial markets or to conduct sensitive business-to-business transactions. While there exists some techniques for privacy-preserving computation, they are either prohibitively slow or too limited to be usable in real systems. In this paper, we follow a different strategy by taking advantage of trusted hardware extensions that have just been introduced in off-the-shelf processors and provide a trusted execution environment. We exploit Intel's new software guard extensions (SGX) to implement a CBR engine in a secure enclave. Thanks to the hardware-based trusted execution environment (TEE), the compute-intensive CBR operations can operate on decrypted data shielded by the enclave and leverage efficient matching algorithms. Extensive experimental evaluation shows that SGX adds only limited overhead to insecure plaintext matching outside secure enclaves while providing much better performance and more powerful filtering capabilities than alternative software-only solutions. To the best of our knowledge, this work is the first to demonstrate the practical benefits of SGX for privacy-preserving CBR."
]
} |
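For intuition, here is a minimal plaintext sketch of the content-based matching and routing step that SCBR performs on decrypted data inside the enclave. The message and subscription formats (attribute, operator, value triples) are illustrative assumptions, not the data model of either paper.

```python
# Content-based matching sketch: a broker forwards a publication only to
# subscribers whose registered constraints all hold on the message.
# All names and formats here are illustrative.
import operator

OPS = {"==": operator.eq, "<": operator.lt, "<=": operator.le,
       ">": operator.gt, ">=": operator.ge}

def matches(message, constraints):
    """True iff every (attribute, op, value) constraint holds on the message."""
    return all(attr in message and OPS[op](message[attr], value)
               for attr, op, value in constraints)

def route(message, subscriptions):
    """Yield the ids of all subscribers whose filters match the message."""
    for sub_id, constraints in subscriptions.items():
        if matches(message, constraints):
            yield sub_id

# Example: a stock-quote publication against two registered filters.
subs = {"alice": [("symbol", "==", "ACME"), ("price", "<", 100.0)],
        "bob": [("volume", ">=", 1000000)]}
print(list(route({"symbol": "ACME", "price": 95.0, "volume": 10}, subs)))
# -> ['alice']
```

Inside a TEE, the same logic would run on data decrypted within the enclave, so the host never observes messages or filters in the clear.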
1902.09848 | 2949256944 | This paper presents PUBSUB-SGX, a content-based publish-subscribe system that exploits trusted execution environments (TEEs), such as Intel SGX, to guarantee confidentiality and integrity of data as well as anonymity and privacy of publishers and subscribers. We describe the technical details of our Python implementation, as well as the required system support introduced to deploy our system in a container-based runtime. Our evaluation results show that our approach is sound, while at the same time highlighting the performance and scalability trade-offs. In particular, by supporting just-in-time compilation inside of TEEs, Python programs inside of TEEs are in general faster than when executed natively using standard CPython. | There is a large body of work on confidentiality issues in publish subscribe systems @cite_37 . PP-CBPS is one of the first attempts to provide support for it @cite_22 . It was built on top of the popular SIENA @cite_28 . It relies on heavyweight cryptographic techniques, and in some cases (e.g., equality filtering), it performs the matching within @math that of the baseline. As shown in the evaluation, PUBSUB-SGX improves greatly on these results by exploiting the hardware support for cryptographic operations offered directly by the TEEs. | {
"cite_N": [
"@cite_28",
"@cite_37",
"@cite_22"
],
"mid": [
"2131975004",
"",
"2005903673"
],
"abstract": [
"The components of a loosely coupled system are typically designed to operate by generating and responding to asynchronous events. An event notification service is an application-independent infrastructure that supports the construction of event-based systems, whereby generators of events publish event notifications to the infrastructure and consumers of events subscribe with the infrastructure to receive relevant notifications. The two primary services that should be provided to components by the infrastructure are notification selection (i.e., determining which notifications match which subscriptions) and notification delivery (i.e., routing matching notifications from publishers to subscribers). Numerous event notification services have been developed for local-area networks, generally based on a centralized server to select and deliver event notifications. Therefore, they suffer from an inherent inability to scale to wide-area networks, such as the Internet, where the number and physical distribution of the service’s clients can quickly overwhelm a centralized solution. The critical challenge in the setting of a wide-area network is to maximize the expressiveness in the selection mechanism without sacrificing scalability in the delivery mechanism. This paper presents SIENA, an event notification service that we have designed and implemented to exhibit both expressiveness and scalability. We describe the service’s interface to applications, the algorithms used by networks of servers to select and deliver event notifications, and the strategies used",
"",
"The ability to seamlessly scale on demand has made Content-Based Publish-Subscribe (CBPS) systems the choice of distributing messages documents produced by Content Publishers to many Subscribers through Content Brokers. Most of the current systems assume that Content Brokers are trusted for the confidentiality of the data published by Content Publishers and the privacy of the subscriptions, which specify their interests, made by Subscribers. However, with the increased use of technologies, such as service oriented architectures and cloud computing, essentially outsourcing the broker functionality to third-party providers, one can no longer assume the trust relationship to hold. The problem of providing privacy confidentiality in CBPS systems is challenging, since the solution to the problem should allow Content Brokers to make routing decisions based on the content without revealing the content to them. The previous work attempted to solve this problem was not fully successful. The problem may appear unsolvable since it involves conflicting goals, but in this paper, we propose a novel approach to preserve the privacy of the subscriptions made by Subscribers and confidentiality of the data published by Content Publishers using cryptographic techniques when third-party Content Brokers are utilized to make routing decisions based on the content. Our protocols are expressive to support any type of subscriptions and designed to work efficiently. We distribute the work such that the load on Content Brokers, where the bottleneck is in a CBPS system, is minimized. We extend a popular CBPS system using our protocols to implement a privacy preserving CBPS system."
]
} |
1902.09848 | 2949256944 | This paper presents PUBSUB-SGX, a content-based publish-subscribe system that exploits trusted execution environments (TEEs), such as Intel SGX, to guarantee confidentiality and integrity of data as well as anonymity and privacy of publishers and subscribers. We describe the technical details of our Python implementation, as well as the required system support introduced to deploy our system in a container-based runtime. Our evaluation results show that our approach is sound, while at the same time highlighting the performance and scalability trade-offs. In particular, by supporting just-in-time compilation inside of TEEs, Python programs inside of TEEs are in general faster than when executed natively using standard CPython. | P3S @cite_2 exploits Cipher-text Policy Attribute Based Encryption (CP-ABE) and Hidden Vector Encryption (HVE) @cite_32 to protect the privacy of subscriber interest and the confidentiality of published content. This system assumes an honest-but-curious threat model as well as colluding subscribers. PUBSUB-SGX supports a more powerful threat model, as provided directly by the security guarantees offered by SGX. | {
"cite_N": [
"@cite_32",
"@cite_2"
],
"mid": [
"1539207281",
"140571791"
],
"abstract": [
"Predicate encryption schemes are encryption schemes in which each ciphertext Ct is associated with a binary attribute vector and keys Kare associated with predicates. A key Kcan decrypt a ciphertext if and only if the attribute vector of the ciphertext satisfies the predicate of the key. Predicate encryption schemes can be used to implement fine-grained access control on encrypted data and to perform search on encrypted data. Hidden vector encryption schemes [Boneh and Waters --- TCC 2007] are encryption schemes in which each ciphertext is associated with a binary vector and each key Kis associated with binary vector with \"don't care\" entries (denoted with i¾?). Key Kcan decrypt ciphertext if and only if and agree for all ifor which @math . Hidden vector encryption schemes are an important type of predicate encryption schemes as they can be used to construct more sophisticated predicate encryption schemes (supporting for example range and subset queries). We give a construction for hidden-vector encryption from standard complexity assumptions on bilinear groups of prime order. Previous constructions were in bilinear groups of composite orderand thus resulted in less efficient schemes. Our construction is both payload-hiding and attribute-hiding meaning that also the privacy of the attribute vector, besides privacy of the cleartext, is guaranteed.",
"This paper presents P3S, a publish-subscribe middleware designed to protect the privacy of subscriber interest and confidentiality of published content. P3S combines recent advances in cryptography, specifically Ciphertext Policy Attribute Based Encryption (CP-ABE) and Predicate Based Encryption (PBE) with an innovative architecture to achieve the desired level of privacy. An initial P3S prototype has been implemented on top of a COTS JMS platform (ActiveMQ). Results of preliminary security analysis and initial evaluation of latency and throughput indicate that the P3S design is both practical and flexible to provide different levels of privacy for publish-subscribe messaging over various message sizes and network bandwidth settings."
]
} |
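The matching predicate behind HVE is simple to state even though realizing it cryptographically is the hard part: a key vector with "don't care" entries decrypts a ciphertext vector if and only if the two agree at every position the key constrains. A toy illustration of the predicate alone (no encryption involved), with None standing in for the wildcard symbol:

```python
# HVE-style wildcard matching predicate (plaintext toy, no cryptography).
def hve_match(ciphertext_attrs, key_pattern):
    """True iff the vectors agree wherever the key is not a wildcard (None)."""
    return all(k is None or k == c
               for c, k in zip(ciphertext_attrs, key_pattern))

print(hve_match([1, 0, 1, 1], [1, None, 1, None]))  # True
print(hve_match([1, 0, 1, 1], [0, None, 1, None]))  # False
```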
1902.09848 | 2949256944 | This paper presents PUBSUB-SGX, a content-based publish-subscribe system that exploits trusted execution environments (TEEs), such as Intel SGX, to guarantee confidentiality and integrity of data as well as anonymity and privacy of publishers and subscribers. We describe the technical details of our Python implementation, as well as the required system support introduced to deploy our system in a container-based runtime. Our evaluation results show that our approach is sound, while at the same time highlighting the performance and scalability trade-offs. In particular, by supporting just-in-time compilation inside of TEEs, Python programs inside of TEEs are in general faster than when executed natively using standard CPython. | Thrifty @cite_35 uses a pre-filtering stage and containment graphs (inspired by Bloom filters @cite_1 ) to implement encrypted matching operators. This system relies on ASPE @cite_38 , a cryptographic technique used to implement efficient subscription matching. PUBSUB-SGX does not require the implementation of complex cryptographic primitives, given that confidential data is only managed within the hardware boundaries of the enclaves. | {
"cite_N": [
"@cite_35",
"@cite_1",
"@cite_38"
],
"mid": [
"2125585202",
"2123845384",
"1498290244"
],
"abstract": [
"Content-based publish subscribe is an appealing paradigm for building large-scale distributed applications. Such applications are often deployed over multiple administrative domains, some of which may not be trusted. Recent attacks in public clouds indicate that a major concern in untrusted domains is the enforcement of privacy. By routing data based on subscriptions evaluated on the content of publications, publish subscribe systems can expose critical information to unauthorized parties. Information leakage can be avoided by the means of privacy-preserving filtering, which is supported by several mechanisms for encrypted matching. Unfortunately, all existing approaches have in common a high performance overhead and the difficulty to use classical optimization for content-based filtering such as per-attribute containment. In this paper, we propose a novel mechanism that greatly reduces the cost of supporting privacy-preserving filtering based on encrypted matching operators. It is based on a pre-filtering stage that can be combined with containment graphs, if available. Our experiments indicate that pre-filtering is able to significantly reduce the number of encrypted matching for a variety of workloads, and therefore the costs associated with the cryptographic mechanisms. Furthermore, our analysis shows that the additional data structures used for pre-filtering have very limited impact on the effectiveness of privacy preservation.",
"In this paper trade-offs among certain computational factors in hash coding are analyzed. The paradigm problem considered is that of testing a series of messages one-by-one for membership in a given set of messages. Two new hash-coding methods are examined and compared with a particular conventional hash-coding method. The computational factors considered are the size of the hash area (space), the time required to identify a message as a nonmember of the given set (reject time), and an allowable error frequency. The new methods are intended to reduce the amount of space required to contain the hash-coded information from that associated with conventional methods. The reduction in space is accomplished by exploiting the possibility that a small fraction of errors of commission may be tolerable in some applications, in particular, applications in which a large amount of data is involved and a core resident hash area is consequently not feasible using conventional methods. In such applications, it is envisaged that overall performance could be improved by using a smaller core resident hash area in conjunction with the new methods and, when necessary, by using some secondary and perhaps time-consuming test to “catch” the small fraction of errors associated with the new methods. An example is discussed which illustrates possible areas of application for the new methods. Analysis of the paradigm problem demonstrates that allowing a small number of test messages to be falsely identified as members of the given set will permit a much smaller hash area to be used without increasing reject time.",
"Users of content-based publish subscribe systems (CBPS) are interested in receiving data items with values that satisfy certain conditions. Each user submits a list of subscription specifications to a broker, which routes data items from publishers to users. When a broker receives a notification that contains a value from a publisher, it forwards it only to the subscribers whose requests match the value. However, in many applications, the data published are confidential, and their contents must not be revealed to brokers. Furthermore, a user's subscription may contain sensitive information that must be protected from brokers. Therefore, a difficult challenge arises: how to route publisher data to the appropriate subscribers without the intermediate brokers learning the plain text values of the notifications and subscriptions. To that extent, brokers must be able to perform operations on top of the encrypted contents of subscriptions and notifications. Such operations may be as simple as equality match, but often require more complex operations such as determining inclusion of data in a value interval. Previous work attempted to solve this problem by using one-way data mappings or specialized encryption functions that allow evaluation of conditions on ciphertexts. However, such operations are computationally expensive, and the resulting CBPS lack scalability. As fast dissemination is an important requirement in many applications, we focus on a new data transformation method called Asymmetric Scalar-product Preserving Encryption (ASPE) [1]. We devise methods that build upon ASPE to support private evaluation of several types of conditions. We also suggest techniques for secure aggregation of notifications, supporting functions such as sum, minimum, maximum and count. Our experimental evaluation shows that ASPE-based CBPS incurs 65 less overhead for exact-match filtering and 50 less overhead for range filtering compared to the state-of-the-art."
]
} |
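The pre-filtering idea behind Thrifty can be illustrated with a plain Bloom filter: a cheap probabilistic membership test that definitively rules out most non-matching items before the expensive encrypted-matching stage runs. The parameters below (bit-array size, number of hash functions) are arbitrary illustrative choices.

```python
# Minimal Bloom filter: k hash positions per item; a zero bit at any of an
# item's positions proves the item was never added (no false negatives).
import hashlib

class BloomFilter:
    def __init__(self, size=1024, num_hashes=3):
        self.size, self.num_hashes = size, num_hashes
        self.bits = bytearray(size)

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item):
        # False => definitely absent; True => possibly present.
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("topic/stocks/ACME")
print(bf.might_contain("topic/stocks/ACME"))    # True
print(bf.might_contain("topic/weather/paris"))  # False with high probability
```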
1902.09809 | 2915775857 | Recurrent convolution (RC) shares the same convolutional kernels and unrolls them multiple steps, which was originally proposed to model time-space signals. We argue that RC can be viewed as a model compression strategy for deep convolutional neural networks. RC reduces the redundancy across layers. However, the performance of an RC network is not satisfactory if we directly unroll the same kernels multiple steps. We propose a simple yet effective variant which improves the RC networks: the batch normalization layers of an RC module are learned independently (not shared) for different unrolling steps. Moreover, we verify that RC can perform cost-adjustable inference, which is achieved by varying its unrolling steps. We learn double independent BN layers for cost-adjustable RC networks, i.e., independent w.r.t. both the unrolling steps of the current cell and the upstream cell. We provide insights on why the proposed method works successfully. Experiments on both image classification and image denoising demonstrate the effectiveness of our method. | Many works have been proposed to obtain compact neural networks. Han et al. @cite_26 compress the parameters of a network by combining weight pruning, k-means clustering, and Huffman coding. Their approach is further optimized by @cite_19 using Hessian-weighted k-means clustering. Each weight is pruned independently in these two works. It is observed in @cite_5 that the practical acceleration is very limited due to the non-structured pruning. Luo et al. @cite_27 propose ThiNet, which performs filter-level pruning; thus both the number of parameters and the computational cost are reduced. However, filters are pruned at each layer greedily, and the redundancy across layers is not considered. Lin et al. @cite_22 use a reinforcement learning method to train an agent that skips (equivalent to pruning) some layers. After pruning, all of those methods require fine-tuning the pruned networks. | {
"cite_N": [
"@cite_26",
"@cite_22",
"@cite_19",
"@cite_27",
"@cite_5"
],
"mid": [
"2119144962",
"2752037867",
"2582794520",
"2737100304",
"2963000224"
],
"abstract": [
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.",
"In this paper, we propose a Runtime Neural Pruning (RNP) framework which prunes the deep neural network dynamically at the runtime. Unlike existing neural pruning methods which produce a fixed pruned model for deployment, our method preserves the full ability of the original network and conducts pruning according to the input image and current feature maps adaptively. The pruning is performed in a bottom-up, layer-by-layer manner, which we model as a Markov decision process and use reinforcement learning for training. The agent judges the importance of each convolutional kernel and conducts channel-wise pruning conditioned on different samples, where the network is pruned more when the image is easier for the task. Since the ability of network is fully preserved, the balance point is easily adjustable according to the available resources. Our method can be applied to off-the-shelf network structures and reach a better tradeoff between speed and accuracy, especially with a large pruning rate.",
"Network quantization is one of network compression techniques to reduce the redundancy of deep neural networks. It reduces the number of distinct network parameter values by quantization in order to save the storage for them. In this paper, we design network quantization schemes that minimize the performance loss due to quantization given a compression ratio constraint. We analyze the quantitative relation of quantization errors to the neural network loss function and identify that the Hessian-weighted distortion measure is locally the right objective function for the optimization of network quantization. As a result, Hessian-weighted k-means clustering is proposed for clustering network parameters to quantize. When optimal variable-length binary codes, e.g., Huffman codes, are employed for further compression, we derive that the network quantization problem can be related to the entropy-constrained scalar quantization (ECSQ) problem in information theory and consequently propose two solutions of ECSQ for network quantization, i.e., uniform quantization and an iterative solution similar to Lloyd's algorithm. Finally, using the simple uniform quantization followed by Huffman coding, we show from our experiments that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet, 32-layer ResNet and AlexNet, respectively.",
"We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31 @math FLOPs reduction and 16.63 @math compression on VGG-16, with only 0.52 @math top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1 @math top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.",
"High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN's evaluation. Experimental results show that SSL achieves on average 5.1 × and 3.1 × speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth reduces a 20-layer Deep Residual Network (ResNet) to 18 layers while improves the accuracy from 91.25 to 92.60 , which is still higher than that of original ResNet with 32 layers. For AlexNet, SSL reduces the error by 1 ."
]
} |
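To make filter-level pruning concrete, the sketch below ranks the filters of a convolutional layer by a saliency score and keeps only the strongest ones. Note that ThiNet itself selects filters using statistics from the next layer; the L1-norm score here is a common simpler proxy, used purely for illustration.

```python
# Filter-level pruning sketch: drop whole filters, shrinking both the
# parameter count and the computation of the layer.
import numpy as np

def prune_filters(weights, keep_ratio=0.5):
    """weights: (out_channels, in_channels, kH, kW) kernel tensor.
    Returns the pruned tensor and the indices of the kept filters."""
    scores = np.abs(weights).sum(axis=(1, 2, 3))   # one L1 score per filter
    n_keep = max(1, int(round(keep_ratio * len(scores))))
    keep = np.sort(np.argsort(scores)[-n_keep:])   # indices of strongest
    return weights[keep], keep

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))                  # 8 filters of shape 3x3x3
pruned, kept = prune_filters(w, keep_ratio=0.5)
print(pruned.shape, kept)                          # (4, 3, 3, 3) and 4 indices
```

After such pruning, the next layer's kernels must drop the corresponding input channels, and the network is fine-tuned to recover accuracy.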
1902.09809 | 2915775857 | Recurrent convolution (RC) shares the same convolutional kernels and unrolls them multiple steps, which was originally proposed to model time-space signals. We argue that RC can be viewed as a model compression strategy for deep convolutional neural networks. RC reduces the redundancy across layers. However, the performance of an RC network is not satisfactory if we directly unroll the same kernels multiple steps. We propose a simple yet effective variant which improves the RC networks: the batch normalization layers of an RC module are learned independently (not shared) for different unrolling steps. Moreover, we verify that RC can perform cost-adjustable inference, which is achieved by varying its unrolling steps. We learn double independent BN layers for cost-adjustable RC networks, i.e., independent w.r.t. both the unrolling steps of the current cell and the upstream cell. We provide insights on why the proposed method works successfully. Experiments on both image classification and image denoising demonstrate the effectiveness of our method. | Another line of work trains a compact network from scratch. Zhang et al. @cite_25 design ShuffleNet for mobile devices, which groups filters at each layer and shuffles the order of the feature channels at particular layers. Hinton et al. @cite_3 improve the performance of student (small) networks by imitating the probabilistic outputs of teacher (large) networks, which is called knowledge distillation. Further, Romero et al. @cite_23 propose FitNets, which imitate the intermediate representations learned by the teachers. The numbers of filters of the students and teachers are not required to be equal. | {
"cite_N": [
"@cite_23",
"@cite_25",
"@cite_3"
],
"mid": [
"2964118293",
"2963125010",
"1821462560"
],
"abstract": [
"Abstract: While depth tends to improve network performances, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teacher network.",
"We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g. lower top-1 error (absolute 7.8 ) than recent MobileNet [12] on ImageNet classification task, under the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet achieves 13A— actual speedup over AlexNet while maintaining comparable accuracy.",
"A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel."
]
} |
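The distillation objective mentioned above can be written in a few lines: the student is trained to match the teacher's temperature-softened output distribution. The temperature value below is an illustrative choice, not one taken from the papers.

```python
# Knowledge-distillation loss sketch: cross-entropy between the teacher's
# and the student's softened class distributions.
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max()                       # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -(p_teacher * np.log(p_student + 1e-12)).sum()

teacher = np.array([6.0, 2.0, 0.5])
student = np.array([4.0, 1.5, 0.2])
print(distillation_loss(student, teacher))
```

In practice this soft term is combined with the usual hard-label cross-entropy, weighted by a hyperparameter.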
1902.09809 | 2915775857 | Recurrent convolution (RC) shares the same convolutional kernels and unrolls them multiple steps, which was originally proposed to model time-space signals. We argue that RC can be viewed as a model compression strategy for deep convolutional neural networks. RC reduces the redundancy across layers. However, the performance of an RC network is not satisfactory if we directly unroll the same kernels multiple steps. We propose a simple yet effective variant which improves the RC networks: the batch normalization layers of an RC module are learned independently (not shared) for different unrolling steps. Moreover, we verify that RC can perform cost-adjustable inference, which is achieved by varying its unrolling steps. We learn double independent BN layers for cost-adjustable RC networks, i.e., independent w.r.t. both the unrolling steps of the current cell and the upstream cell. We provide insights on why the proposed method works successfully. Experiments on both image classification and image denoising demonstrate the effectiveness of our method. | A network that can perform inference at different computational costs, depending on the user requirements, is considered to be capable of cost-adjustable inference @cite_18 . One popular way to achieve cost-adjustable inference is to train multiple classifiers at intermediate layers @cite_8 @cite_9 @cite_13 . One can manually set the stop point based on certain resource constraints, or decide whether to stop based on the responses of the intermediate classifiers. The spirit of those works is similar to ours: changing the computational depth of networks. However, the higher layers are not used in these works if we stop at an early layer. Our work does not suffer from this issue due to its recurrent nature, and only a single classifier is trained. | {
"cite_N": [
"@cite_9",
"@cite_18",
"@cite_13",
"@cite_8"
],
"mid": [
"",
"2964227400",
"2962677625",
"2767421475"
],
"abstract": [
"",
"It is desirable to train convolutional networks (CNNs) to run more efficiently during inference. In many cases however, the computational budget that the system has for inference cannot be known beforehand during training, or the inference budget is dependent on the changing real-time resource availability. Thus, it is inadequate to train just inference-efficient CNNs, whose inference costs are not adjustable and cannot adapt to varied inference budgets. We propose a novel approach for cost-adjustable inference in CNNs - Stochastic Downsampling Point (SDPoint). During training, SDPoint applies feature map downsampling to a random point in the layer hierarchy, with a random downsampling ratio. The different stochastic downsampling configurations known as SDPoint instances (of the same model) have computational costs different from each other, while being trained to minimize the same prediction loss. Sharing network parameters across different instances provides significant regularization boost. During inference, one may handpick a SDPoint instance that best fits the inference budget. The effectiveness of SDPoint, as both a cost-adjustable inference approach and a regularizer, is validated through extensive experiments on image classification.",
"Deep neural networks are state of the art methods for many learning tasks due to their ability to extract increasingly better features at each network layer. However, the improved performance of additional layers in a deep network comes at the cost of added latency and energy usage in feedforward inference. As networks continue to get deeper and larger, these costs become more prohibitive for real-time and energy-sensitive applications. To address this issue, we present BranchyNet, a novel deep network architecture that is augmented with additional side branch classifiers. The architecture allows prediction results for a large portion of test samples to exit the network early via these branches when samples can already be inferred with high confidence. BranchyNet exploits the observation that features learned at an early layer of a network may often be sufficient for the classification of many data points. For more difficult samples, which are expected less frequently, BranchyNet will use further or all network layers to provide the best likelihood of correct prediction. We study the BranchyNet architecture using several well-known networks (LeNet, AlexNet, ResNet) and datasets (MNIST, CIFAR10) and show that it can both improve accuracy and significantly reduce the inference time of the network.",
"In this paper we investigate image classification with computational resource limits at test time. Two such settings are: 1. anytime classification, where the network’s prediction for a test example is progressively updated, facilitating the output of a prediction at any time; and 2. budgeted batch classification, where a fixed amount of computation is available to classify a set of examples that can be spent unevenly across “easier” and “harder” inputs. In contrast to most prior work, such as the popular Viola and Jones algorithm, our approach is based on convolutional neural networks. We train multiple classifiers with varying resource demands, which we adaptively apply during test time. To maximally re-use computation between the classifiers, we incorporate them as early-exits into a single deep convolutional neural network and inter-connect them with dense connectivity. To facilitate high quality classification early on, we use a two-dimensional multi-scale network architecture that maintains coarse and fine level features all-throughout the network. Experiments on three image-classification tasks demonstrate that our framework substantially improves the existing state-of-the-art in both settings."
]
} |
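Early-exit ("anytime") inference as in BranchyNet can be sketched as follows: stages are evaluated in order and computation stops as soon as an intermediate classifier is confident enough. The stages, classifiers, and the confidence threshold are all toy assumptions.

```python
# Early-exit inference sketch: return a prediction from the first stage
# whose classifier confidence clears the threshold.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_predict(x, stages, classifiers, threshold=0.9):
    """Returns (predicted class, number of stages actually evaluated)."""
    h = x
    for depth, (stage, clf) in enumerate(zip(stages, classifiers), start=1):
        h = stage(h)
        probs = clf(h)
        if probs.max() >= threshold or depth == len(stages):
            return int(np.argmax(probs)), depth

# Toy model: each stage is a fixed linear map, each classifier a softmax head.
rng = np.random.default_rng(1)
mats = [rng.normal(size=(4, 4)) for _ in range(3)]
heads = [rng.normal(size=(3, 4)) for _ in range(3)]
stages = [lambda h, M=M: M @ h for M in mats]
classifiers = [lambda h, W=W: softmax(W @ h) for W in heads]
print(early_exit_predict(rng.normal(size=4), stages, classifiers))
```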
1902.09809 | 2915775857 | Recurrent convolution (RC) shares the same convolutional kernels and unrolls them multiple steps, which was originally proposed to model time-space signals. We argue that RC can be viewed as a model compression strategy for deep convolutional neural networks. RC reduces the redundancy across layers. However, the performance of an RC network is not satisfactory if we directly unroll the same kernels multiple steps. We propose a simple yet effective variant which improves the RC networks: the batch normalization layers of an RC module are learned independently (not shared) for different unrolling steps. Moreover, we verify that RC can perform cost-adjustable inference, which is achieved by varying its unrolling steps. We learn double independent BN layers for cost-adjustable RC networks, i.e., independent w.r.t. both the unrolling steps of the current cell and the upstream cell. We provide insights on why the proposed method works successfully. Experiments on both image classification and image denoising demonstrate the effectiveness of our method. | BN was first introduced into RNNs by Laurent et al. @cite_10 . In their work, BN is only applied to the sequential inputs @math . Cooijmans et al. @cite_0 then show that it is also helpful to apply BN to the hidden states. BN layers are shared in both of these works, while in our work we learn independent BN layers over the unrolling steps. | {
"cite_N": [
"@cite_0",
"@cite_10"
],
"mid": [
"2963304263",
"2962949994"
],
"abstract": [
"We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. @PARASPLIT We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization.",
"Recurrent Neural Networks (RNNs) are powerful models for sequential data that have the potential to learn long-term dependencies. However, they are computationally expensive to train and difficult to parallelize. Recent work has shown that normalizing intermediate representations of neural networks can significantly improve convergence rates in feed-forward neural networks [1]. In particular, batch normalization, which uses mini-batch statistics to standardize features, was shown to significantly reduce training time. In this paper, we investigate how batch normalization can be applied to RNNs. We show for both a speech recognition task and language modeling that the way we apply batch normalization leads to a faster convergence of the training criterion but doesn't seem to improve the generalization performance."
]
} |
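The modification proposed in the paper above is small but specific: the recurrent convolution's kernels are shared across unrolling steps, while the batch-norm parameters are kept independent per step. A toy 1-D version of one such recurrent block follows (shapes and the single shared kernel are illustrative assumptions):

```python
# Recurrent convolution with shared weights but per-step BatchNorm.
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    mu, var = x.mean(axis=0), x.var(axis=0)    # statistics over the batch
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def recurrent_conv(x, kernel, steps, bn_params):
    """kernel is shared across steps; bn_params[t] = (gamma_t, beta_t)."""
    h = x
    for t in range(steps):
        h = np.stack([np.convolve(row, kernel, mode="same") for row in h])
        gamma, beta = bn_params[t]             # independent BN per step
        h = np.maximum(batch_norm(h, gamma, beta), 0.0)  # BN then ReLU
    return h

rng = np.random.default_rng(2)
x = rng.normal(size=(8, 16))                   # batch of 8 one-dimensional signals
kernel = rng.normal(size=3)                    # one kernel reused at every step
bn_params = [(np.ones(16), np.zeros(16)) for _ in range(4)]
print(recurrent_conv(x, kernel, steps=4, bn_params=bn_params).shape)  # (8, 16)
```

Varying `steps` at test time is what gives cost-adjustable inference; the "double independent" variant would additionally index the BN parameters by the upstream cell's step count.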
1902.10053 | 2915220476 | Modeling the sequential information of image sequences has been a vital step of various vision tasks, and convolutional long short-term memory (ConvLSTM) has demonstrated its superb performance in such spatiotemporal problems. Nevertheless, the hierarchical data structures in a significant amount of tasks (e.g., human body parts and vessel airway tree in biomedical images) cannot be properly modeled by sequential models. Thus, ConvLSTM is not suitable for tree-structured image data analysis. In order to address these limitations, we present tree-structured ConvLSTM models for tree-structured image analysis tasks which can be trained end-to-end. To demonstrate the effectiveness of the proposed tree-structured ConvLSTM model, we present a tree-structured segmentation framework which consists of a tree-structured ConvLSTM and an attention fully convolutional network (FCN) model. The proposed framework is extensively validated on four large-scale coronary artery datasets. The results demonstrate the effectiveness and efficiency of the proposed method. | The recurrent neural network (RNN) has been proven to be an efficient tool for sequence learning. Its recursive formulation naturally allows the handling of variable-length sequences. Nevertheless, the notorious vanishing or exploding gradients problem @cite_14 in its training algorithm (i.e., back-propagation through time) prevents RNNs from achieving satisfactory results in applications requiring long-term dependencies. This problem is alleviated by the long short-term memory (LSTM) @cite_30 , which incorporates long-term stable memory over time using a series of gating functions. LSTM has been widely adopted and has achieved state-of-the-art results in numerous sequence learning applications @cite_13 @cite_2 @cite_34 @cite_15 . However, the traditional LSTM is not suitable for image sequence analysis since it uses a fully-connected structure during both the input-to-state and state-to-state transitions, neglecting the spatial information. | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_2",
"@cite_15",
"@cite_34",
"@cite_13"
],
"mid": [
"2136848157",
"1815076433",
"2524916860",
"2951183276",
"2617267541",
"2962741254"
],
"abstract": [
"Long short-term memory (LSTM; Hochreiter & Schmidhuber, 1997) can solve numerous tasks not solvable by previous learning algorithms for recurrent neural networks (RNNs). We identify a weakness of LSTM networks processing continual input streams that are not a priori segmented into subsequences with explicitly marked ends at which the network's internal state could be reset. Without resets, the state may grow indefinitely and eventually cause the network to break down. Our remedy is a novel, adaptive \"forget gate\" that enables an LSTM cell to learn to reset itself at appropriate times, thus releasing internal resources. We review illustrative benchmark problems on which standard LSTM outperforms other RNN algorithms. All algorithms (including LSTM) fail to solve continual versions of these problems. LSTM with forget gates, however, easily solves them, and in an elegant way.",
"There are two widely known issues with properly training recurrent neural networks, the vanishing and the exploding gradient problems detailed in (1994). In this paper we attempt to improve the understanding of the underlying issues by exploring these problems from an analytical, a geometric and a dynamical systems perspective. Our analysis is used to justify a simple yet effective solution. We propose a gradient norm clipping strategy to deal with exploding gradients and a soft constraint for the vanishing gradients problem. We validate empirically our hypothesis and proposed solutions in the experimental section.",
"Accurate measurement of left ventricular volumes and Ejection Fraction from cine MRI is of paramount importance to the evaluation of cardiovascular functions, yet it usually requires laborious and tedious work of trained experts to interpret them. To facilitate this procedure, numerous computer aided diagnosis (CAD) methods and tools have been proposed, most of which focus on the left or right ventricle segmentation. However, the identification of ES and ED frames from cardiac sequences is largely ignored, which is a key procedure in the automated workflow. This seemingly easy task is quite challenging, due to the requirement of high accuracy (i.e., precisely identifying specific frames from a sequence) and subtle differences among consecutive frames. Recently, with the rapid growth of annotated data and the increasing computational power, deep learning methods have been widely exploited in medical image analysis. In this paper, we propose a novel deep learning architecture, named as temporal regression network (TempReg-Net), to accurately identify specific frames from MRI sequences, by integrating the Convolutional Neural Network (CNN) with the Recurrent Neural Network (RNN). Specifically, a CNN encodes the spatial information of a cardiac sequence, and a RNN decodes the temporal information. In addition, we design a new loss function in our network to constrain the structure of predicted labels, which further improves the performance. Our approach is extensively validated on thousands of cardiac sequences and the average difference is merely 0.4 frames, comparing favorably with previous systems.",
"Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.",
"Metastasis detection of lymph nodes in Whole-slide Images (WSIs) plays a critical role in the diagnosis of breast cancer. Automatic metastasis detection is a challenging issue due to the large variance of their appearances and the size of WSIs. Recently, deep neural networks have been employed to detect cancer metastases by dividing the WSIs into small image patches. However, most existing works simply treat these patches independently and do not consider the structural information among them. In this paper, we propose a novel deep neural network, namely Spatially Structured Network (Spatio-Net) to tackle the metastasis detection problem in WSIs. By integrating the Convolutional Neural Network (CNN) with the 2D Long-Short Term Memory (2D-LSTM), our Spatio-Net is able to learn the appearances and spatial dependencies of image patches effectively. Specifically, the CNN encodes each image patch into a compact feature vector, and the 2D-LSTM layers provide the classification results (i.e., normal or tumor), considering its dependencies on other relevant image patches. Moreover, a new loss function is designed to constrain the structure of the output labels, which further improves the performance. Finally, the metastasis positions are obtained by locating the regions with high tumor probabilities in the resulting accurate probability map. The proposed method is validated on hundreds of WSIs, and the accuracy is significantly improved, in comparison with a state-of-the-art baseline that does not have the spatial dependency constraint.",
"This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye."
]
} |
1902.10053 | 2915220476 | Modeling the sequential information of image sequences has been a vital step of various vision tasks, and convolutional long short-term memory (ConvLSTM) has demonstrated its superb performance in such spatiotemporal problems. Nevertheless, the hierarchical data structures in a significant amount of tasks (e.g., human body parts and vessel airway tree in biomedical images) cannot be properly modeled by sequential models. Thus, ConvLSTM is not suitable for tree-structured image data analysis. In order to address these limitations, we present tree-structured ConvLSTM models for tree-structured image analysis tasks which can be trained end-to-end. To demonstrate the effectiveness of the proposed tree-structured ConvLSTM model, we present a tree-structured segmentation framework which consists of a tree-structured ConvLSTM and an attention fully convolutional network (FCN) model. The proposed framework is extensively validated on four large-scale coronary artery datasets. The results demonstrate the effectiveness and efficiency of the proposed method. | Different from the traditional LSTM, ConvLSTM @cite_25 takes image sequences as inputs, and the vector multiplications of the traditional LSTM are replaced by convolutional operations. By this means, ConvLSTM preserves the spatial topology of the inputs and introduces sparsity and locality to the LSTM units to reduce over-parameterization and overfitting. Thus, ConvLSTM models are suitable for spatiotemporal prediction problems. However, as mentioned in the introduction, the sequential ConvLSTM is not capable of dealing with the many applications that involve tree-structured data. Tree-structured LSTM @cite_7 and graph convolutional recurrent networks @cite_20 have been proposed for language processing tasks. Nevertheless, as vector multiplications are used, they are not suitable for image analysis. Compared with tree-structured LSTM, our tree-structured ConvLSTM model considers both spatial information and inter-frame dependencies in the tree structure. | {
"cite_N": [
"@cite_20",
"@cite_25",
"@cite_7"
],
"mid": [
"2953170998",
"1485009520",
"2963355447"
],
"abstract": [
"This paper introduces Graph Convolutional Recurrent Network (GCRN), a deep learning model able to predict structured sequences of data. Precisely, GCRN is a generalization of classical recurrent neural networks (RNN) to data structured by an arbitrary graph. Such structured sequences can represent series of frames in videos, spatio-temporal measurements on a network of sensors, or random walks on a vocabulary graph for natural language modeling. The proposed model combines convolutional neural networks (CNN) on graphs to identify spatial structures and RNN to find dynamic patterns. We study two possible architectures of GCRN, and apply the models to two practical problems: predicting moving MNIST data, and modeling natural language with the Penn Treebank dataset. Experiments show that exploiting simultaneously graph spatial and dynamic information about data can improve both precision and learning speed.",
"The goal of precipitation nowcasting is to predict the future rainfall intensity in a local region over a relatively short period of time. Very few previous studies have examined this crucial and challenging weather forecasting problem from the machine learning perspective. In this paper, we formulate precipitation nowcasting as a spatiotemporal sequence forecasting problem in which both the input and the prediction target are spatiotemporal sequences. By extending the fully connected LSTM (FC-LSTM) to have convolutional structures in both the input-to-state and state-to-state transitions, we propose the convolutional LSTM (ConvLSTM) and use it to build an end-to-end trainable model for the precipitation nowcasting problem. Experiments show that our ConvLSTM network captures spatiotemporal correlations better and consistently outperforms FC-LSTM and the state-of-the-art operational ROVER algorithm for precipitation nowcasting.",
"A Long Short-Term Memory (LSTM) network is a type of recurrent neural network architecture which has recently obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. TreeLSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank)."
]
} |
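The difference between LSTM and ConvLSTM is confined to the gate computations: the matrix multiplications become convolutions, so hidden and cell states keep the spatial layout of the input. A single-channel 1-D toy version of one ConvLSTM step follows (kernel sizes and shapes are illustrative):

```python
# ConvLSTM cell sketch: standard LSTM gates with convolutions in place of
# fully-connected input-to-state and state-to-state transforms.
import numpy as np

def conv(x, k):
    return np.convolve(x, k, mode="same")

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x, h, c, k):
    """k: dict of input-to-state (Wx*) and state-to-state (Wh*) kernels."""
    i = sigmoid(conv(x, k["Wxi"]) + conv(h, k["Whi"]))   # input gate
    f = sigmoid(conv(x, k["Wxf"]) + conv(h, k["Whf"]))   # forget gate
    o = sigmoid(conv(x, k["Wxo"]) + conv(h, k["Who"]))   # output gate
    g = np.tanh(conv(x, k["Wxg"]) + conv(h, k["Whg"]))   # candidate state
    c_next = f * c + i * g              # cell state keeps its spatial shape
    h_next = o * np.tanh(c_next)
    return h_next, c_next

rng = np.random.default_rng(3)
kernels = {name: rng.normal(size=3) for name in
           ["Wxi", "Whi", "Wxf", "Whf", "Wxo", "Who", "Wxg", "Whg"]}
h = c = np.zeros(16)
for x in rng.normal(size=(5, 16)):      # a length-5 sequence of 1-D "frames"
    h, c = convlstm_step(x, h, c, kernels)
print(h.shape)                          # (16,)
```

A tree-structured variant would instead combine the states of several child branches at each node rather than a single predecessor.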
1902.10059 | 2968978385 | Place recognition and loop closure detection are challenging for long-term visual navigation tasks. SeqSLAM is considered to be one of the most successful approaches to achieve long-term localization under varying environmental conditions and changing viewpoints. SeqSLAM uses a brute-force sequential matching method, which is computationally intensive. In this work, we introduce a multi-resolution sampling-based global visual place recognition method (MRS-VPR), which can significantly improve the matching efficiency and accuracy in sequential matching. The novelty of this method lies in the coarse-to-fine searching pipeline and a particle filter-based global sampling scheme that can balance the matching efficiency and accuracy in the long-term navigation task. Moreover, our model works much better than SeqSLAM when the testing sequence is over a much smaller time scale than the reference sequence. Our experiments demonstrate that MRS-VPR is efficient in locating short temporary trajectories within long-term reference ones without compromising on the accuracy compared to SeqSLAM. | Appearance change under varying conditions leads to unstable place recognition in SLAM frameworks. Traditional V-SLAM methods use BoW @cite_10 (a vector of local handcrafted features) as the image descriptor, rely on prior 3D maps for online matching @cite_16 , or use hierarchical BoW @cite_15 . | {
"cite_N": [
"@cite_15",
"@cite_16",
"@cite_10"
],
"mid": [
"2623510711",
"1518580144",
"1984309565"
],
"abstract": [
"In this paper, we propose a novel appearance-based approach for topological mapping based on a hierarchical decomposition of the environment. In our map, images with similar visual properties are grouped together in nodes, which are represented by means of an average global descriptor and an index of binary features based on a bag-of-words online approach. Each image is represented by means of a global descriptor and a set of local features, and this information is used in a two-level loop closure approach, where first global descriptors are employed to obtain the most likely nodes of the map and then binary image features are used to retrieve the most likely images inside these nodes. This hierarchical scheme enables us to reduce the search space when recognizing places, maintaining high accuracy when creating a map. Our approach is validated using several public datasets and compared against several state-of-the-art techniques. The accuracy and the sparsity of the generated maps are also discussed.",
"This paper is concerned with large-scale localisation at city scales with monocular cameras. Our primary motivation lies with the development of autonomous road vehicles - an application domain in which low-cost sensing is particularly important. Here we present a method for localising against a textured 3-dimensional prior mesh using a monocular camera. We first present a system for generating and texturing the prior using a LIDAR scanner and camera. We then describe how we can localise against that prior with a single camera, using an information-theoretic measure of image similarity. This process requires dealing with the distortions induced by a wide-angle camera. We present and justify an interesting approach to this issue in which we distort the prior map into the image rather than vice-versa. Finally we explain how the general purpose computation functionality of a modern GPU is particularly apt for our task, allowing us to run the system in real time. We present results showing centimetre-level localisation accuracy through a city over six kilometres.",
"This paper addresses the problem of large-scale image search. Three constraints have to be taken into account: search accuracy, efficiency, and memory usage. We first present and evaluate different ways of aggregating local image descriptors into a vector and show that the Fisher kernel achieves better performance than the reference bag-of-visual words approach for any given vector dimension. We then jointly optimize dimensionality reduction and indexing in order to obtain a precise vector comparison as well as a compact representation. The evaluation shows that the image representation can be reduced to a few dozen bytes while preserving high accuracy. Searching a 100 million image data set takes about 250 ms on one processor core."
]
} |
1902.10059 | 2968978385 | Place recognition and loop closure detection are challenging for long-term visual navigation tasks. SeqSLAM is considered to be one of the most successful approaches to achieve long-term localization under varying environmental conditions and changing viewpoints. SeqSLAM uses a brute-force sequential matching method, which is computationally intensive. In this work, we introduce a multi-resolution sampling-based global visual place recognition method (MRS-VPR), which can significantly improve the matching efficiency and accuracy in sequential matching. The novelty of this method lies in the coarse-to-fine searching pipeline and a particle filter-based global sampling scheme that can balance the matching efficiency and accuracy in the long-term navigation task. Moreover, our model works much better than SeqSLAM when the testing sequence is over a much smaller time scale than the reference sequence. Our experiments demonstrate that MRS-VPR is efficient in locating short temporary trajectories within long-term reference ones without compromising accuracy compared to SeqSLAM. | Sequential matching-based methods are not practical in real-world applications due to their computational complexity. To improve the robustness of sequence matching, Naseer @cite_2 proposed a minimum cost flow-based data association, which could deal with non-matching image sequences that result from temporal occlusions or from visiting new places. Vysotska @cite_3 improved the work of Naseer with GPS priors. | {
"cite_N": [
"@cite_3",
"@cite_2"
],
"mid": [
"1521752665",
"26944259"
],
"abstract": [
"The ability to localize a robot is an important capability and matching of observations under substantial changes is a prerequisite for robust long-term operation. This paper investigates the problem of efficiently coping with seasonal changes in image data. We present an extension of a recent approach [15] to visual image matching using sequence information. Our extension allows for exploiting GPS priors in the matching process to overcome the main computational bottleneck of the previous method and to handle loops within the image sequences. We present an experimental evaluation using real world data containing substantial seasonal changes and show that our approach outperforms the previous method in case a noisy GPS pose prior is available.",
"Image-based localization is an important problem in robotics and an integral part of visual mapping and navigation systems. An approach to robustly match images to previously recorded ones must be able to cope with seasonal changes especially when it is supposed to work reliably over long periods of time. In this paper, we present a novel approach to visual localization of mobile robots in outdoor environments, which is able to deal with substantial seasonal changes. We formulate image matching as a minimum cost flow problem in a data association graph to effectively exploit sequence information. This allows us to deal with nonmatching image sequences that result from temporal occlusions or from visiting new places. We present extensive experimental evaluations under substantial seasonal changes. Our approach achieves accurate matching across seasons and outperforms existing state-of-the-art methods such as FABMAP2 and SeqSLAM."
]
} |
1902.10059 | 2968978385 | Place recognition and loop closure detection are challenging for long-term visual navigation tasks. SeqSLAM is considered to be one of the most successful approaches to achieve long-term localization under varying environmental conditions and changing viewpoints. SeqSLAM uses a brute-force sequential matching method, which is computationally intensive. In this work, we introduce a multi-resolution sampling-based global visual place recognition method (MRS-VPR), which can significantly improve the matching efficiency and accuracy in sequential matching. The novelty of this method lies in the coarse-to-fine searching pipeline and a particle filter-based global sampling scheme that can balance the matching efficiency and accuracy in the long-term navigation task. Moreover, our model works much better than SeqSLAM when the testing sequence is over a much smaller time scale than the reference sequence. Our experiments demonstrate that MRS-VPR is efficient in locating short temporary trajectories within long-term reference ones without compromising accuracy compared to SeqSLAM. | More recently, Sayem @cite_14 proposed the Fast-SeqSLAM method, which improved the searching efficiency by utilizing an approximate nearest neighbor (ANN) search as the initial estimate for potential matches. Since the ANN in Fast-SeqSLAM still relies on single-image feature similarities, the initial search efficiency may decrease when the original matching frame sequence spans a relatively long time scale. Liu and Zhang @cite_12 applied a particle filter to improve the matching efficiency, where each particle represented a potential subset of the frame sequence @cite_11 . Rather than evaluating the whole frame sequence, they predicted the weights of multiple particles based on frame sequence similarities and the robot motion. However, both methods described above require a good estimate of the initial matched location. | {
"cite_N": [
"@cite_14",
"@cite_12",
"@cite_11"
],
"mid": [
"2737165983",
"2025199671",
"2124156864"
],
"abstract": [
"Loop closure detection or place recognition is a fundamental problem in robot simultaneous localization and mapping (SLAM). SeqSLAM is considered to be one of the most successful algorithms for loop closure detection as it has been demonstrated to be able to handle significant environmental condition changes including those due to illumination, weather, and time of the day. However, SeqSLAM relies heavily on exhaustive sequence matching, a computationally expensive process that prevents the algorithm from being used in dealing with large maps. In this paper, we propose Fast-SeqSLAM, an efficient version of SeqSLAM. Fast-SeqSLAM has a much reduced time complexity without degrading the accuracy, and this is achieved by using an approximate nearest neighbor (ANN) algorithm to match the current image with those in the robot map and extending the idea of SeqSLAM to greedily search a sequence of images that best match with the current sequence. We demonstrate the effectiveness of our Fast-SeqSLAM algorithm in appearance based loop closure detection.",
"We propose a method in this paper to perform sequence-based appearance SLAM in an efficient and effective way. Sequence-based SLAM (or SeqSLAM for short) makes use of the image descriptors extracted from a series of consecutive frames and matching is done between two such image sequences. It has been shown to be effective in dealing with significant illumination change where localization and mapping can be conducted under different time periods and weather conditions. To address the computational issue that can arise from the exhaustive search of the candidate sequences with the increase of map size, we use a particle filter to implement the Bayes filtering framework of estimating the true match. The resampling of the particles allows us to maintain only a small number of hypotheses while still capturing the true distribution of the robot location. Our method is highly scalable and efficient, validated on a large dataset with comparable results to the original algorithm in terms of performance.",
"In this paper, we propose a new particle filter based on sequential importance sampling. The algorithm uses a bank of unscented filters to obtain the importance proposal distribution. This proposal has two very \"nice\" properties. Firstly, it makes efficient use of the latest available information and, secondly, it can have heavy tails. As a result, we find that the algorithm outperforms standard particle filtering and other nonlinear filtering methods very substantially. This experimental finding is in agreement with the theoretical convergence proof for the algorithm. The algorithm also includes resampling and (possibly) Markov chain Monte Carlo (MCMC) steps."
]
} |
1902.10126 | 2932687579 | This paper describes our system submitted to SemEval 2019 Task 7: RumourEval 2019: Determining Rumour Veracity and Support for Rumours, Subtask A (, 2019). The challenge focused on classifying whether posts from Twitter and Reddit support, deny, query, or comment on a hidden rumour, the truthfulness of which is the topic of an underlying discussion thread. We formulate the problem as stance classification, determining the rumour stance of a post with respect to the previous thread post and the source thread post. The recent BERT architecture was employed to build an end-to-end system which reached an F1 score of 61.67 on the provided test data. It finished in 2nd place in the competition, without any hand-crafted features, only 0.2 behind the winner. | @cite_5 encodes the target text by means of a bidirectional LSTM (BiLSTM), conditioned on the source text, and empirically shows that the conditioning on the source text matters. @cite_23 proposes target augmented embeddings -- embeddings concatenated with an average of the source text embeddings -- and applies these to compute an attention based on the weighted sum of the target embeddings that were previously transformed via the BiLSTM. @cite_4 proposes an architecture that encodes the source and the target text via an LSTM and a CNN separately, then uses a memory network together with a similarity matrix to capture the similarity between the source and the target text, and infers a fixed-size vector suitable for the stance prediction. | {
"cite_N": [
"@cite_5",
"@cite_4",
"@cite_23"
],
"mid": [
"2437771934",
"2798624200",
""
],
"abstract": [
"Stance detection is the task of classifying the attitude expressed in a text towards a target such as Hillary Clinton to be \"positive\", negative\" or \"neutral\". Previous work has assumed that either the target is mentioned in the text or that training data for every target is given. This paper considers the more challenging version of this task, where targets are not always mentioned and no training data is available for the test targets. We experiment with conditional LSTM encoding, which builds a representation of the tweet that is dependent on the target, and demonstrate that it outperforms encoding the tweet and the target independently. Performance is improved further when the conditional model is augmented with bidirectional encoding. We evaluate our approach on the SemEval 2016 Task 6 Twitter Stance Detection corpus achieving performance second best only to a system trained on semi-automatically labelled tweets for the test target. When such weak supervision is added, our approach achieves state-of-the-art results.",
"We present a novel end-to-end memory network for stance detection, which jointly (i) predicts whether a document agrees, disagrees, discusses or is unrelated with respect to a given target claim, and also (ii) extracts snippets of evidence for that prediction. The network operates at the paragraph level and integrates convolutional and recurrent neural networks, as well as a similarity matrix as part of the overall architecture. The experimental evaluation on the Fake News Challenge dataset shows state-of-the-art performance.",
""
]
} |
1902.09928 | 2916995828 | Effective spatiotemporal feature representation is crucial to the video-based action recognition task. Focusing on discriminative spatiotemporal feature learning, we propose the Information Fused Temporal Transformation Network (IF-TTN) for action recognition on top of the popular Temporal Segment Network (TSN) framework. In the network, an Information Fusion Module (IFM) is designed to fuse the appearance and motion features at multiple ConvNet levels for each video snippet, forming a short-term video descriptor. With fused features as inputs, Temporal Transformation Networks (TTN) are employed to model the middle-term temporal transformation between neighboring snippets following a sequential order. As TSN itself depicts long-term temporal structure by segmental consensus, the proposed network comprehensively considers temporal features at multiple granularities. Our IF-TTN achieves state-of-the-art results on the two most popular action recognition datasets: UCF101 and HMDB51. Empirical investigation reveals that our architecture is robust to the input motion map quality. Replacing optical flow with the motion vectors from the compressed video stream, the performance is still comparable to that of flow-based methods while the testing speed is 10x faster. | The Improved Dense Trajectory Feature (iDTF) @cite_23 @cite_29 has long held a dominant position in the field of action recognition. Recently, 2D Convolutional Neural Networks trained on ImageNet @cite_14 were employed to perform RGB image classification, but their performance on video classification was limited, as they could only capture appearance information. In order to model motion information, the Two-Stream CNN was proposed and achieved a significant boost in performance by taking both RGB images and optical flow as inputs. To model spatiotemporal features better, Tran proposed a 3D CNN architecture called C3D to directly extract high-level semantic spatiotemporal abstractions from raw videos @cite_7 , and then proposed Res3D to further improve recognition performance @cite_12 . To take advantage of both the Two-Stream CNN and 3D CNNs, a Two-Stream Inflated 3D CNN (I3D) was proposed, which allowed for initialization with ImageNet pre-trained weights @cite_27 . | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_29",
"@cite_27",
"@cite_23",
"@cite_12"
],
"mid": [
"2952020226",
"2952633803",
"2105101328",
"2619082050",
"",
"2745519816"
],
"abstract": [
"The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements.",
"We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.",
"Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.",
"The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9 on HMDB-51 and 98.0 on UCF-101.",
"",
"Learning image representations with ConvNets by pre-training on ImageNet has proven useful across many visual understanding tasks including object detection, semantic segmentation, and image captioning. Although any image representation can be applied to video frames, a dedicated spatiotemporal representation is still vital in order to incorporate motion patterns that cannot be captured by appearance based models alone. This paper presents an empirical ConvNet architecture search for spatiotemporal feature learning, culminating in a deep 3-dimensional (3D) Residual ConvNet. Our proposed architecture outperforms C3D by a good margin on Sports-1M, UCF101, HMDB51, THUMOS14, and ASLAN while being 2 times faster at inference time, 2 times smaller in model size, and having a more compact representation."
]
} |
1902.09928 | 2916995828 | Effective spatiotemporal feature representation is crucial to the video-based action recognition task. Focusing on discriminative spatiotemporal feature learning, we propose the Information Fused Temporal Transformation Network (IF-TTN) for action recognition on top of the popular Temporal Segment Network (TSN) framework. In the network, an Information Fusion Module (IFM) is designed to fuse the appearance and motion features at multiple ConvNet levels for each video snippet, forming a short-term video descriptor. With fused features as inputs, Temporal Transformation Networks (TTN) are employed to model the middle-term temporal transformation between neighboring snippets following a sequential order. As TSN itself depicts long-term temporal structure by segmental consensus, the proposed network comprehensively considers temporal features at multiple granularities. Our IF-TTN achieves state-of-the-art results on the two most popular action recognition datasets: UCF101 and HMDB51. Empirical investigation reveals that our architecture is robust to the input motion map quality. Replacing optical flow with the motion vectors from the compressed video stream, the performance is still comparable to that of flow-based methods while the testing speed is 10x faster. | Plenty of works have been dedicated to modeling the temporal structure for action recognition @cite_28 @cite_10 @cite_33 @cite_5 . With the development of deep learning, many recent works modeled the temporal structure via network design. The Temporal Segment Network (TSN) @cite_5 was proposed to model temporal structure over entire videos in an end-to-end manner. However, TSN failed to capture the temporal order of video frames. Zhou proposed a Temporal Relation Network (TRN) @cite_15 to learn and reason about temporal dependencies between video frames at multiple time scales. In @cite_25 and @cite_17 , Long Short-Term Memory (LSTM) networks were used to capture the long-range dynamics for action recognition. | {
"cite_N": [
"@cite_33",
"@cite_28",
"@cite_5",
"@cite_15",
"@cite_10",
"@cite_25",
"@cite_17"
],
"mid": [
"410625161",
"1498368596",
"2507009361",
"2950870964",
"2084341401",
"",
"2951183276"
],
"abstract": [
"Action detection is of great importance in understanding human motion from video. Compared with action recognition, it not only recognizes action type, but also localizes its spatiotemporal extent. This paper presents a relational model for action detection, which first decomposes human action into temporal “key poses” and then further into spatial “action parts”. Specifically, we start by clustering cuboids around each human joint into dynamic-poselets using a new descriptor. The cuboids from the same cluster share consistent geometric and dynamic structure, and each cluster acts as a mixture of body parts. We then propose a sequential skeleton model to capture the relations among dynamic-poselets. This model unifies the tasks of learning the composites of mixture dynamic-poselets, the spatiotemporal structures of action parts, and the local model for each action part in a single framework. Our model not only allows to localize the action in a video stream, but also enables a detailed pose estimation of an actor. We formulate the model learning problem in a structured SVM framework and speed up model inference by dynamic programming. We conduct experiments on three challenging action detection datasets: the MSR-II dataset, the UCF Sports dataset, and the JHMDB dataset. The results show that our method achieves superior performance to the state-of-the-art methods on these datasets.",
"Much recent research in human activity recognition has focused on the problem of recognizing simple repetitive (walking, running, waving) and punctual actions (sitting up, opening a door, hugging). However, many interesting human activities are characterized by a complex temporal composition of simple actions. Automatic recognition of such complex actions can benefit from a good understanding of the temporal structures. We present in this paper a framework for modeling motion by exploiting the temporal structure of the human activities. In our framework, we represent activities as temporal compositions of motion segments. We train a discriminative model that encodes a temporal decomposition of video sequences, and appearance models for each motion segment. In recognition, a query video is matched to the model according to the learned appearances and motion segment decomposition. Classification is made based on the quality of matching between the motion segment classifiers and the temporal segments in the query sequence. To validate our approach, we introduce a new dataset of complex Olympic Sports activities. We show that our algorithm performs better than other state of the art methods.",
"Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition. which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains the state-the-of-art performance on the datasets of HMDB51 ( ( 69.4 , )) and UCF101 ( ( 94.2 , )). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices (Models and code at https: github.com yjxiong temporal-segment-networks).",
"Temporal relational reasoning, the ability to link meaningful transformations of objects or entities over time, is a fundamental property of intelligent species. In this paper, we introduce an effective and interpretable network module, the Temporal Relation Network (TRN), designed to learn and reason about temporal dependencies between video frames at multiple time scales. We evaluate TRN-equipped networks on activity recognition tasks using three recent video datasets - Something-Something, Jester, and Charades - which fundamentally depend on temporal relational reasoning. Our results demonstrate that the proposed TRN gives convolutional neural networks a remarkable capacity to discover temporal relations in videos. Through only sparsely sampled video frames, TRN-equipped networks can accurately predict human-object interactions in the Something-Something dataset and identify various human gestures on the Jester dataset with very competitive performance. TRN-equipped networks also outperform two-stream networks and 3D convolution networks in recognizing daily activities in the Charades dataset. Further analyses show that the models learn intuitive and interpretable visual common sense knowledge in videos.",
"We address the problem of localizing actions, such as opening a door, in hours of challenging video data. We propose a model based on a sequence of atomic action units, termed \"actoms,\" that are semantically meaningful and characteristic for the action. Our actom sequence model (ASM) represents an action as a sequence of histograms of actom-anchored visual features, which can be seen as a temporally structured extension of the bag-of-features. Training requires the annotation of actoms for action examples. At test time, actoms are localized automatically based on a nonparametric model of the distribution of actoms, which also acts as a prior on an action's temporal structure. We present experimental results on two recent benchmarks for action localization \"Coffee and Cigarettes\" and the \"DLSBP\" dataset. We also adapt our approach to a classification-by-localization set-up and demonstrate its applicability on the challenging \"Hollywood 2\" dataset. We show that our ASM method outperforms the current state of the art in temporal action localization, as well as baselines that localize actions with a sliding window method.",
"",
"Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized."
]
} |
1902.09928 | 2916995828 | Effective spatiotemporal feature representation is crucial to the video-based action recognition task. Focusing on discriminative spatiotemporal feature learning, we propose the Information Fused Temporal Transformation Network (IF-TTN) for action recognition on top of the popular Temporal Segment Network (TSN) framework. In the network, an Information Fusion Module (IFM) is designed to fuse the appearance and motion features at multiple ConvNet levels for each video snippet, forming a short-term video descriptor. With fused features as inputs, Temporal Transformation Networks (TTN) are employed to model the middle-term temporal transformation between neighboring snippets following a sequential order. As TSN itself depicts long-term temporal structure by segmental consensus, the proposed network comprehensively considers temporal features at multiple granularities. Our IF-TTN achieves state-of-the-art results on the two most popular action recognition datasets: UCF101 and HMDB51. Empirical investigation reveals that our architecture is robust to the input motion map quality. Replacing optical flow with the motion vectors from the compressed video stream, the performance is still comparable to that of flow-based methods while the testing speed is 10x faster. | State-of-the-art video understanding methods relied heavily on optical flow, whose heavy computational cost prevented them from running in real time. A few works dealt with real-time video understanding by replacing the costly optical flow with low-cost motion representations. Bilen proposed the dynamic image (DI) @cite_31 to simulate motion information, and Sun proposed the Optical Flow guided Feature (OFF) @cite_22 to model short-term temporal variation (at a temporal length of about 7 frames; this figure follows from the training strategy of OFF, and since TSN and Two-Stream CNN used 5 stacked optical flow frames to model short-term motions, 7 frames fall within the short-term range). Motion Vector (MV) is a coarse representation of motion, but it can be obtained directly from compressed video streams without extra computation. Therefore, the Enhanced Motion Vectors CNN (EMV-CNN) @cite_18 used motion vectors as the input of the temporal CNN to improve inference speed, and CoViAR @cite_1 adopted an accumulated motion vector for real-time action recognition. Suffering from the lack of fine-grained motion information in MV, recognition performance degraded dramatically: the performance of both EMV-CNN and CoViAR was far behind that of the Two-Stream CNN with optical flow. | {
"cite_N": [
"@cite_1",
"@cite_31",
"@cite_18",
"@cite_22"
],
"mid": [
"2772539318",
"2462996230",
"",
"2774324727"
],
"abstract": [
"Training robust deep video representations has proven to be much more challenging than learning deep image representations and consequently hampered tasks like video action recognition. This is in part due to the enormous size of raw video streams, the associated amount of computation required, and the high temporal redundancy. The 'true' and interesting signal is often drowned in too much irrelevant data. Motivated by the fact that the superfluous information can be reduced by up to two orders of magnitude with video compression techniques (like H.264, HEVC, etc.), in this work, we propose to train a deep network directly on the compressed video, devoid of redundancy, rather than the traditional highly redundant RGB stream. This representation has a higher information density and we found the training to be easier. In addition, the signals in a compressed video provide free, albeit noisy, motion information. We propose novel techniques to use them effectively. Our approach is about 4.6 times faster than a state-of-the-art 3D-CNN model, 2.7 times faster than a ResNet-152, and very easy to implement. On the task of action recognition, our approach outperforms all the other methods on the UCF-101, HMDB-51, and Charades dataset.",
"We introduce the concept of dynamic image, a novel compact representation of videos useful for video analysis especially when convolutional neural networks (CNNs) are used. The dynamic image is based on the rank pooling concept and is obtained through the parameters of a ranking machine that encodes the temporal evolution of the frames of the video. Dynamic images are obtained by directly applying rank pooling on the raw image pixels of a video producing a single RGB image per video. This idea is simple but powerful as it enables the use of existing CNN models directly on video data with fine-tuning. We present an efficient and effective approximate rank pooling operator, speeding it up orders of magnitude compared to rank pooling. Our new approximate rank pooling CNN layer allows us to generalize dynamic images to dynamic feature maps and we demonstrate the power of our new representations on standard benchmarks in action recognition achieving state-of-the-art performance.",
"",
"Motion representation plays a vital role in human action recognition in videos. In this study, we introduce a novel compact motion representation for video action recognition, named Optical Flow guided Feature (OFF), which enables the network to distill temporal information through a fast and robust approach. The OFF is derived from the definition of optical flow and is orthogonal to the optical flow. The derivation also provides theoretical support for using the difference between two frames. By directly calculating pixel-wise spatiotemporal gradients of the deep feature maps, the OFF could be embedded in any existing CNN based video action recognition framework with only a slight additional cost. It enables the CNN to extract spatiotemporal information, especially the temporal information between frames simultaneously. This simple but powerful idea is validated by experimental results. The network with OFF fed only by RGB inputs achieves a competitive accuracy of 93.3 on UCF-101, which is comparable with the result obtained by two streams (RGB and optical flow), but is 15 times faster in speed. Experimental results also show that OFF is complementary to other motion modalities such as optical flow. When the proposed method is plugged into the state-of-the-art video action recognition framework, it has 96:0 and 74:2 accuracy on UCF-101 and HMDB-51 respectively. The code for this project is available at this https URL."
]
} |
1902.09868 | 2917551809 | This paper addresses the problem of 3D human pose estimation from single images. While for a long time human skeletons were parameterized and fitted to the observation by satisfying a reprojection error, nowadays researchers directly use neural networks to infer the 3D pose from the observations. However, most of these approaches ignore the fact that a reprojection constraint has to be satisfied and are sensitive to overfitting. We tackle the overfitting problem by ignoring 2D to 3D correspondences. This efficiently avoids a simple memorization of the training data and allows for a weakly supervised training. One part of the proposed reprojection network (RepNet) learns a mapping from a distribution of 2D poses to a distribution of 3D poses using an adversarial training approach. Another part of the network estimates the camera. This allows for the definition of a network layer that performs the reprojection of the estimated 3D pose back to 2D, which results in a reprojection loss function. Our experiments show that RepNet generalizes well to unknown data and outperforms state-of-the-art methods when applied to unseen data. Moreover, our implementation runs in real time on a standard desktop PC. | Early works on human pose estimation from single images date back to Lee and Chen @cite_42 in 1985. They use known bone lengths and a binary decision tree to reconstruct a human pose. Some authors @cite_16 @cite_21 @cite_18 propose to search large pose databases for the 3D poses that best explain the 2D observations. To compress the knowledge from these databases, a widely used method is to learn an overcomplete dictionary of 3D human poses, either using principal component analysis (PCA) or another dictionary learning method. Commonly, the best linear combination of the PCA bases is optimized @cite_37 @cite_50 @cite_20 @cite_5 . To constrain the optimization, several priors have been proposed, such as joint angle limits @cite_1 , physical plausibility @cite_17 , or anthropometric regularization @cite_25 @cite_30 @cite_46 . Other works enforce temporal coherence in video sequences @cite_3 @cite_12 @cite_38 @cite_17 or use additional sensors @cite_39 @cite_27 @cite_11 . | {
"cite_N": [
"@cite_30",
"@cite_42",
"@cite_3",
"@cite_5",
"@cite_20",
"@cite_38",
"@cite_18",
"@cite_21",
"@cite_39",
"@cite_46",
"@cite_17",
"@cite_37",
"@cite_27",
"@cite_50",
"@cite_16",
"@cite_25",
"@cite_12",
"@cite_1",
"@cite_11"
],
"mid": [
"2088196373",
"2013397696",
"2337500368",
"2963688992",
"2256477790",
"",
"2583372902",
"2057232399",
"2333544554",
"2039262381",
"2737756234",
"1537787403",
"2605243700",
"2143482322",
"2105041273",
"2155196764",
"",
"1943191679",
"2895748257"
],
"abstract": [
"Markerless 3D human pose detection from a single image is a severely underconstrained problem because different 3D poses can have similar image projections. In order to handle this ambiguity, current approaches rely on prior shape models that can only be correctly adjusted if 2D image features are accurately detected. Unfortunately, although current 2D part detector algorithms have shown promising results, they are not yet accurate enough to guarantee a complete disambiguation of the 3D inferred shape. In this paper, we introduce a novel approach for estimating 3D human pose even when observations are noisy. We propose a stochastic sampling strategy to propagate the noise from the image plane to the shape space. This provides a set of ambiguous 3D shapes, which are virtually undistinguishable from their image projections. Disambiguation is then achieved by imposing kinematic constraints that guarantee the resulting pose resembles a 3D human shape. We validate the method on a variety of situations in which state-of-the-art 2D detectors yield either inaccurate estimations or partly miss some of the body parts.",
"Abstract In this paper a method is proposed to recover and interpret the 3D body structures of a person from a single view, provided that (1) at least six feature points on the head and a set of body joints are available on the image plane, and (2) the geometry of head and lengths of body segments formed by joints are known. First of all, the feature points on the head in the head-centered coordinate system and their image projections are used to determine a transformation matrix. Then, the camera position and orientations are extracted from the matrix. Finally, the 3D coordinates of the head points expressed in the camera-centered coordinate system are obtained. Starting from the coordinates of the neck, which is a head feature point, the 3D coordinates of other joints one-by-one are determined under the assumption of the fixed lengths of the body segments. A binary interpretation tree is used to represent the 2 n − 1 possible body structures, if a human body has n joints. To determine the final feasible body structures, physical and motion constraints are used to prune the interpretation tree. Formulas and rules required for the tree pruning are formulated. Experiments are used to illustrate the pruning powers of these constraints. In the two cases of input data chosen, a unique or nearly unique solution of the body structure is obtained.",
"This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions we factorize 2D observations in camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on a-priorly trained base poses. We show that strong periodic assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods our algorithm shows a significant improvement.",
"This paper addresses the challenge of 3D full-body human pose estimation from a monocular image sequence. Here, two cases are considered: (i) the image locations of the human joints are provided and (ii) the image locations of joints are unknown. In the former case, a novel approach is introduced that integrates a sparsity-driven 3D geometric prior and temporal smoothness. In the latter case, the former case is extended by treating the image locations of the joints as latent variables to take into account considerable uncertainties in 2D joint locations. A deep fully convolutional network is trained to predict the uncertainty maps of the 2D joint locations. The 3D pose estimates are realized via an Expectation-Maximization algorithm over the entire sequence, where it is shown that the 2D joint location uncertainties can be conveniently marginalized out during inference. Empirical evaluation on the Human3.6M dataset shows that the proposed approaches achieve greater 3D pose estimation accuracy over state-of-the-art baselines. Further, the proposed approach outperforms a publicly available 2D pose estimation baseline on the challenging PennAction dataset.",
"We investigate the problem of estimating the 3D shape of an object defined by a set of 3D landmarks, given their 2D correspondences in a single image. A successful approach to alleviating the reconstruction ambiguity is the 3D deformable shape model and a sparse representation is often used to capture complex shape variability. But the model inference is still challenging due to the nonconvexity in the joint optimization of shape and viewpoint. In contrast to prior work that relies on an alternating scheme whose solution depends on initialization, we propose a convex approach to addressing this challenge and develop an efficient algorithm to solve the proposed convex program. We further propose a robust model to handle gross errors in the 2D correspondences. We demonstrate the exact recovery property of the proposed method, the advantage compared to several nonconvex baselines and the applicability to recover 3D human poses and car models from single images.",
"",
"We explore 3D human pose estimation from a single RGB image. While many approaches try to directly predict 3D pose from image measurements, we explore a simple architecture that reasons through intermediate 2D pose predictions. Our approach is based on two key observations (1) Deep neural nets have revolutionized 2D pose estimation, producing accurate 2D predictions even for poses with self-occlusions (2) Big-datasets of 3D mocap data are now readily available, making it tempting to lift predicted 2D poses to 3D through simple memorization (e.g., nearest neighbors). The resulting architecture is straightforward to implement with off-the-shelf 2D pose estimation systems and 3D mocap libraries. Importantly, we demonstratethatsuchmethodsoutperformalmostallstate-of-theart 3D pose estimation systems, most of which directly try to regress 3D pose from 2D measurements.",
"We describe a new approach to transfer knowledge across views for action recognition by using examples from a large collection of unlabelled mocap data. We achieve this by directly matching purely motion based features from videos to mocap. Our approach recovers 3D pose sequences without performing any body part tracking. We use these matches to generate multiple motion projections and thus add view invariance to our action recognition model. We also introduce a closed form solution for approximate non-linear Circulant Temporal Encoding (nCTE), which allows us to efficiently perform the matches in the frequency domain. We test our approach on the challenging unsupervised modality of the IXMAS dataset, and use publicly available motion capture data for matching. Without any additional annotation effort, we are able to significantly outperform the current state of the art.",
"In this work, we present an approach to fuse video with sparse orientation data obtained from inertial sensors to improve and stabilize full-body human motion capture. Even though video data is a strong cue for motion analysis, tracking artifacts occur frequently due to ambiguities in the images, rapid motions, occlusions or noise. As a complementary data source, inertial sensors allow for accurate estimation of limb orientations even under fast motions. However, accurate position information cannot be obtained in continuous operation. Therefore, we propose a hybrid tracker that combines video with a small number of inertial units to compensate for the drawbacks of each sensor type: on the one hand, we obtain drift-free and accurate position information from video data and, on the other hand, we obtain accurate limb orientations and good performance under fast motions from inertial sensors. In several experiments we demonstrate the increased performance and stability of our human motion tracker.",
"Human pose estimation is a key step to action recognition. We propose a method of estimating 3D human poses from a single image, which works in conjunction with an existing 2D pose joint detector. 3D pose estimation is challenging because multiple 3D poses may correspond to the same 2D pose after projection due to the lack of depth information. Moreover, current 2D pose estimators are usually inaccurate which may cause errors in the 3D estimation. We address the challenges in three ways: (i) We represent a 3D pose as a linear combination of a sparse set of bases learned from 3D human skeletons. (ii) We enforce limb length constraints to eliminate anthropomorphically implausible skeletons. (iii) We estimate a 3D pose by minimizing the 1-norm error between the projection of the 3D pose and the corresponding 2D detection. The 1-norm loss term is robust to inaccurate 2D joint estimations. We use the alternating direction method (ADM) to solve the optimization problem efficiently. Our approach outperforms the state-of-the-arts on three benchmark datasets.",
"Motion analysis is often restricted to a laboratory setup with multiple cameras and force sensors which requires expensive equipment and knowledgeable operators. Therefore it lacks in simplicity and flexibility. We propose an algorithm combining monocular 3D pose estimation with physics-based modeling to introduce a statistical framework for fast and robust 3D motion analysis from 2D video-data. We use a factorization approach to learn 3D motion coefficients and join them with physical parameters, that describe the dynamic of a mass-spring-model. Our approach does neither require additional force measurement nor torque optimization and only uses a single camera while allowing to estimate unobservable torques in the human body. We show that our algorithm improves the monocular 3D reconstruction by enforcing plausible human motion and resolving the ambiguity of camera and object motion.,,,,,,The performance is evaluated on different motions and multiple test data sets as well as on challenging outdoor sequences.",
"This paper introduces a new model-based approach for simultaneously reconstructing 3D human motion and full-body skelet al size from a small set of 2D image features tracked from uncalibrated monocular video sequences. The key idea of our approach is to construct a generative human motion model from a large set of preprocessed human motion examples to constrain the solution space of monocular human motion tracking. In addition, we learn a generative skeleton model from prerecorded human skeleton data to reduce ambiguity of the human skeleton reconstruction. We formulate the reconstruction process in a nonlinear optimization framework by continuously deforming the generative models to best match a small set of 2D image features tracked from a monocular video sequence. We evaluate the performance of our system by testing the algorithm on a variety of uncalibrated monocular video sequences.",
"We address the problem of making human motion capture in the wild more practical by using a small set of inertial sensors attached to the body. Since the problem is heavily under-constrained, previous methods either use a large number of sensors, which is intrusive, or they require additional video input. We take a different approach and constrain the problem by: i making use of a realistic statistical body model that includes anthropometric constraints and ii using a joint optimization framework to fit the model to orientation and acceleration measurements over multiple frames. The resulting tracker Sparse Inertial Poser SIP enables motion capture using only 6 sensors attached to the wrists, lower legs, back and head and works for arbitrary human motions. Experiments on the recently released TNT15 dataset show that, using the same number of sensors, SIP achieves higher accuracy than the dataset baseline without using any video data. We further demonstrate the effectiveness of SIP on newly recorded challenging motions in outdoor scenarios such as climbing or jumping over a wall.",
"This paper introduces an efficient algorithm that reconstructs 3D human poses as well as camera parameters from a small number of 2D point correspondences obtained from uncalibrated monocular images. This problem is challenging because 2D image constraints (e.g. 2D point correspondences) are often not sufficient to determine 3D poses of an articulated object. The key idea of this paper is to identify a set of new constraints and use them to eliminate the ambiguity of 3D pose reconstruction. We also develop an optimization process to simultaneously reconstruct both human poses and camera parameters from various forms of reconstruction constraints. We demonstrate the power and effectiveness of our system by evaluating the performance of the algorithm on both real and synthetic data. We show the algorithm can accurately reconstruct 3D poses and camera parameters from a wide variety of real images, including internet photos and key frames extracted from monocular video sequences.",
"We propose a novel exemplar based method to estimate 3D human poses from single images by using only the joint correspondences. Due to the inherent depth ambiguity, estimating 3D poses from a monocular view is a challenging problem. We solve the problem by searching through millions of exemplars for optimal poses. Compared with traditional parametric schemes, our method is able to handle very large pose database, relieves parameter tweaking, is easier to train and is more effective for complex pose 3D reconstruction. The proposed method estimates upper body poses and lower body poses sequentially, which implicitly squares the size of the exemplar database and enables us to reconstruct unconstrained poses efficiently. Our implementation based on the kd-tree achieves real-time performance. The experiments on a variety of images show that the proposed method is efficient and effective.",
"Reconstructing an arbitrary configuration of 3D points from their projection in an image is an ill-posed problem. When the points hold semantic meaning, such as anatomical landmarks on a body, human observers can often infer a plausible 3D configuration, drawing on extensive visual memory. We present an activity-independent method to recover the 3D configuration of a human figure from 2D locations of anatomical landmarks in a single image, leveraging a large motion capture corpus as a proxy for visual memory. Our method solves for anthropometrically regular body pose and explicitly estimates the camera via a matching pursuit algorithm operating on the image projections. Anthropometric regularity (i.e., that limbs obey known proportions) is a highly informative prior, but directly applying such constraints is intractable. Instead, we enforce a necessary condition on the sum of squared limb-lengths that can be solved for in closed form to discourage implausible configurations in 3D. We evaluate performance on a wide variety of human poses captured from different viewpoints and show generalization to novel 3D configurations and robustness to missing data.",
"",
"Estimating 3D human pose from 2D joint locations is central to the analysis of people in images and video. To address the fact that the problem is inherently ill posed, many methods impose a prior over human poses. Unfortunately these priors admit invalid poses because they do not model how joint-limits vary with pose. Here we make two key contributions. First, we collect a motion capture dataset that explores a wide range of human poses. From this we learn a pose-dependent model of joint limits that forms our prior. Both dataset and prior are available for research purposes. Second, we define a general parametrization of body pose and a new, multi-stage, method to estimate 3D pose from 2D joint locations using an over-complete dictionary of poses. Our method shows good generalization while avoiding impossible poses. We quantitatively compare our method with recent work and show state-of-the-art results on 2D to 3D pose estimation using the CMU mocap dataset. We also show superior results using manual annotations on real images and automatic detections on the Leeds sports pose dataset.",
"In this work, we propose a method that combines a single hand-held camera and a set of Inertial Measurement Units (IMUs) attached at the body limbs to estimate accurate 3D poses in the wild. This poses many new challenges: the moving camera, heading drift, cluttered background, occlusions and many people visible in the video. We associate 2D pose detections in each image to the corresponding IMU-equipped persons by solving a novel graph based optimization problem that forces 3D to 2D coherency within a frame and across long range frames. Given associations, we jointly optimize the pose of a statistical body model, the camera pose and heading drift using a continuous optimization framework. We validated our method on the TotalCapture dataset, which provides video and IMU synchronized with ground truth. We obtain an accuracy of 26 mm, which makes it accurate enough to serve as a benchmark for image-based 3D pose estimation in the wild. Using our method, we recorded 3D Poses in the Wild (3DPW), a new dataset consisting of more than 51, 000 frames with accurate 3D pose in challenging sequences, including walking in the city, going up-stairs, having coffee or taking the bus. We make the reconstructed 3D poses, video, IMU and 3D models available for research purposes at http: virtualhumans.mpi-inf.mpg.de 3DPW."
]
} |
1902.09865 | 2951637176 | This paper addresses the problem of constructing secure exact-repair regenerating codes at the MSR point for all feasible values of the parameters. The setting involves a passive eavesdropper who is allowed to observe the stored contents of, and the downloads into, an @math -subset of the @math nodes of a distributed storage system (DSS). The objective is to achieve perfect secrecy between the eavesdropped symbols and the file stored on the DSS. Previous secure code constructions (most notably that by ) tackle the problem only for the restricted case wherein the number, @math , of helper nodes aiding in the recovery of a failed node is equal to @math . This paper builds on Rawat's work, by combining Gabidulin pre-coding and an MSR construction by Ye and Barg to prove the achievability of secrecy capacity at the MSR point for all allowed values of @math . | In this section, we formally describe the system model, and provide details of Gabidulin-based pre-coding, and an overview of the MSR construction by Ye and Barg @cite_7 . In what follows, the notation @math denotes the set of integers between @math and @math , both inclusive, i.e., @math . We use @math as shorthand for @math . | {
"cite_N": [
"@cite_7"
],
"mid": [
"2963754880"
],
"abstract": [
"Maximum distance separable (MDS) codes are optimal error-correcting codes in the sense that they provide the maximum failure tolerance for a given number of parity nodes. Suppose that an MDS code with @math information nodes and @math parity nodes is used to encode data in a distributed storage system. It is known that if @math out of the @math nodes are inaccessible and @math surviving (helper) nodes are used to recover the lost data, then we need to download at least @math fraction of the data stored in each of the helper nodes ( , 2010 and , 2013). If this lower bound is achieved for the repair of any @math erased nodes from any @math helper nodes, we say that the MDS code has the @math -optimal repair property. We study high-rate MDS array codes with the optimal repair property (also known as minimum storage regenerating codes, or MSR codes). Explicit constructions of such codes in the literature are only available for the cases where there are at most three parity nodes, and these existing constructions can only optimally repair a single node failure by accessing all the surviving nodes. In this paper, given any @math and @math , we present two explicit constructions of MDS array codes with the @math -optimal repair property for all @math and @math simultaneously. Codes in the first family can be constructed over any base field @math as long as @math , where @math . The encoding, decoding, repair of failed nodes, and update procedures of these codes all have low complexity. Codes in the second family have the optimal access property and can be constructed over any base field @math as long as @math . Moreover, both code families have the optimal error resilience capability when repairing failed nodes. We also construct several other related families of MDS codes with the optimal repair property."
]
} |
1902.09865 | 2951637176 | This paper addresses the problem of constructing secure exact-repair regenerating codes at the MSR point for all feasible values of the parameters. The setting involves a passive eavesdropper who is allowed to observe the stored contents of, and the downloads into, an @math -subset of the @math nodes of a distributed storage system (DSS). The objective is to achieve perfect secrecy between the eavesdropped symbols and the file stored on the DSS. Previous secure code constructions (most notably that by ) tackle the problem only for the restricted case wherein the number, @math , of helper nodes aiding in the recovery of a failed node is equal to @math . This paper builds on Rawat's work, by combining Gabidulin pre-coding and an MSR construction by Ye and Barg to prove the achievability of secrecy capacity at the MSR point for all allowed values of @math . | We assume that node failures in the system occur in stages, with no more than one failure at any stage. At stage @math , we say that a node @math is active if it does not fail in that stage. We operate in the exact-repair setting, wherein the downloads from @math active helper nodes ( @math ) can exactly recover the contents of the failed node. In keeping with @cite_9 , our second constraint is that the failed node downloads no more than @math independent symbols from any one helper node. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2105185344"
],
"abstract": [
"Distributed storage systems provide reliable access to data through redundancy spread over individually unreliable nodes. Application scenarios include data centers, peer-to-peer storage systems, and storage in wireless networks. Storing data using an erasure code, in fragments spread across nodes, requires less redundancy than simple replication for the same level of reliability. However, since fragments must be periodically replaced as nodes fail, a key question is how to generate encoded fragments in a distributed way while transferring as little data as possible across the network. For an erasure coded system, a common practice to repair from a single node failure is for a new node to reconstruct the whole encoded data object to generate just one encoded block. We show that this procedure is sub-optimal. We introduce the notion of regenerating codes, which allow a new node to communicate functions of the stored data from the surviving nodes. We show that regenerating codes can significantly reduce the repair bandwidth. Further, we show that there is a fundamental tradeoff between storage and repair bandwidth which we theoretically characterize using flow arguments on an appropriately constructed graph. By invoking constructive results in network coding, we introduce regenerating codes that can achieve any point in this optimal tradeoff."
]
} |
1902.09865 | 2951637176 | This paper addresses the problem of constructing secure exact-repair regenerating codes at the MSR point for all feasible values of the parameters. The setting involves a passive eavesdropper who is allowed to observe the stored contents of, and the downloads into, an @math -subset of the @math nodes of a distributed storage system (DSS). The objective is to achieve perfect secrecy between the eavesdropped symbols and the file stored on the DSS. Previous secure code constructions (most notably that by ) tackle the problem only for the restricted case wherein the number, @math , of helper nodes aiding in the recovery of a failed node is equal to @math . This paper builds on Rawat's work, by combining Gabidulin pre-coding and an MSR construction by Ye and Barg to prove the achievability of secrecy capacity at the MSR point for all allowed values of @math . | Now, suppose that node @math has failed. Let @math denote the collection of random symbols sent by helper node @math to @math . If @math represents the entropy of a random variable @math , then From @cite_5 , we know that exact-repair codes that satisfy with equality must also satisfy and with equality. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2153923251"
],
"abstract": [
"Regenerating codes are a class of recently developed codes for distributed storage that, like Reed-Solomon codes, permit data recovery from any subset of nodes within the -node network. However, regenerating codes possess in addition, the ability to repair a failed node by connecting to an arbitrary subset of nodes. It has been shown that for the case of functional repair, there is a tradeoff between the amount of data stored per node and the bandwidth required to repair a failed node. A special case of functional repair is exact repair where the replacement node is required to store data identical to that in the failed node. Exact repair is of interest as it greatly simplifies system implementation. The first result of this paper is an explicit, exact-repair code for the point on the storage-bandwidth tradeoff corresponding to the minimum possible repair bandwidth, for the case when . This code has a particularly simple graphical description, and most interestingly has the ability to carry out exact repair without any need to perform arithmetic operations. We term this ability of the code to perform repair through mere transfer of data as repair by transfer. The second result of this paper shows that the interior points on the storage-bandwidth tradeoff cannot be achieved under exact repair, thus pointing to the existence of a separate tradeoff under exact repair. Specifically, we identify a set of scenarios which we term as “helper node pooling,” and show that it is the necessity to satisfy such scenarios that overconstrains the system."
]
} |
1902.09865 | 2951637176 | This paper addresses the problem of constructing secure exact-repair regenerating codes at the MSR point for all feasible values of the parameters. The setting involves a passive eavesdropper who is allowed to observe the stored contents of, and the downloads into, an @math -subset of the @math nodes of a distributed storage system (DSS). The objective is to achieve perfect secrecy between the eavesdropped symbols and the file stored on the DSS. Previous secure code constructions (most notably that by ) tackle the problem only for the restricted case wherein the number, @math , of helper nodes aiding in the recovery of a failed node is equal to @math . This paper builds on Rawat's work, by combining Gabidulin pre-coding and an MSR construction by Ye and Barg to prove the achievability of secrecy capacity at the MSR point for all allowed values of @math . | The setting of the passive adversary was first discussed in @cite_3 , and an upper bound on the secrecy capacity for functional repair was derived to be where @math . Later work by @cite_8 employed the Product-Matrix (PM) code construction to design a secure MSR coding scheme that achieved a maximum file size of @math . This was improved upon in @cite_0 and @cite_10 , wherein the secrecy capacity was shown to be bounded as This upper bound was shown to be achievable in @cite_0 , for the case @math , using the concept of zigzag codes. Another achievability scheme, due to Rawat @cite_1 , uses a construction in Ye and Barg's paper @cite_7 to show the capacity upper bound in being met, again when @math . In this paper, we build upon Rawat's work to prove the achievability of the capacity upper bound in for feasible values of @math , using an alternative construction from @cite_7 . | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_10"
],
"mid": [
"2963754880",
"2964275527",
"2962865070",
"1970080388",
"2014149071",
"2096296858"
],
"abstract": [
"Maximum distance separable (MDS) codes are optimal error-correcting codes in the sense that they provide the maximum failure tolerance for a given number of parity nodes. Suppose that an MDS code with @math information nodes and @math parity nodes is used to encode data in a distributed storage system. It is known that if @math out of the @math nodes are inaccessible and @math surviving (helper) nodes are used to recover the lost data, then we need to download at least @math fraction of the data stored in each of the helper nodes ( , 2010 and , 2013). If this lower bound is achieved for the repair of any @math erased nodes from any @math helper nodes, we say that the MDS code has the @math -optimal repair property. We study high-rate MDS array codes with the optimal repair property (also known as minimum storage regenerating codes, or MSR codes). Explicit constructions of such codes in the literature are only available for the cases where there are at most three parity nodes, and these existing constructions can only optimally repair a single node failure by accessing all the surviving nodes. In this paper, given any @math and @math , we present two explicit constructions of MDS array codes with the @math -optimal repair property for all @math and @math simultaneously. Codes in the first family can be constructed over any base field @math as long as @math , where @math . The encoding, decoding, repair of failed nodes, and update procedures of these codes all have low complexity. Codes in the second family have the optimal access property and can be constructed over any base field @math as long as @math . Moreover, both code families have the optimal error resilience capability when repairing failed nodes. We also construct several other related families of MDS codes with the optimal repair property.",
"Regenerating codes are a class of codes for distributed storage networks that provide reliability and availability of data, and also perform efficient node repair. Another important aspect of a distributed storage network is its security. In this paper, we consider a threat model where an eavesdropper may gain access to the data stored in a subset of the storage nodes, and possibly also, to the data downloaded during repair of some nodes. We provide explicit constructions of regenerating codes that achieve information-theoretic secrecy capacity in this setting.",
"In this paper, we revisit the problem of characterizing the secrecy capacity of minimum storage regenerating (MSR) codes under the passive @math -eavesdropper model, where the eavesdropper has access to data stored on @math nodes and the repair data for an additional @math nodes. We study it from the information-theoretic perspective. First, some general properties of MSR codes as well as a simple and generally applicable upper bound on secrecy capacity are given. Second, a new concept of stable MSR codes is introduced, where the stable property is shown to be closely linked with secrecy capacity. Finally, a comprehensive and explicit result on secrecy capacity in the linear MSR scenario is present, which generalizes all related works in the literature and also predicts certain results for some unexplored linear MSR codes.",
"We address the problem of securing distributed storage systems against passive eavesdroppers that can observe a limited number of storage nodes. An important aspect of these systems is node failures over time, which demand a repair mechanism aimed at maintaining a targeted high level of system reliability. If an eavesdropper observes a node that is added to the system to replace a failed node, it will have access to all the data downloaded during repair, which can potentially compromise the entire information in the system.We are interested in determining the secrecy capacity of distributed storage systems under repair dynamics, i.e., the maximum amount of data that can be securely stored and made available to a legitimate user without revealing any information to any eavesdropper. We derive a general upper bound on the secrecy capacity and show that this bound is tight for the bandwidth-limited regime which is of importance in scenarios such as peer-to-peer distributed storage systems. We also provide a simple explicit code construction that achieves the capacity for this regime.",
"This paper aims to go beyond resilience into the study of security and local-repairability for distributed storage systems (DSSs). Security and local-repairability are both important as features of an efficient storage system, and this paper aims to understand the trade-offs between resilience, security, and local-repairability in these systems. In particular, this paper first investigates security in the presence of colluding eavesdroppers, where eavesdroppers are assumed to work together in decoding the stored information. Second, this paper focuses on coding schemes that enable optimal local repairs. It further brings these two concepts together to develop locally repairable coding schemes for DSS that are secure against eavesdroppers. The main results of this paper include: 1) an improved bound on the secrecy capacity for minimum storage regenerating codes; 2) secure coding schemes that achieve the bound for some special cases; 3) a new bound on minimum distance for locally repairable codes; 4) code construction for locally repairable codes that attain the minimum distance bound; and 5) repair-bandwidth-efficient locally repairable codes with and without security constraints.",
"The problem of securing data against eavesdropping in distributed storage systems is studied. The focus is on systems that use linear codes and implement exact repair to recover from node failures. The maximum file size that can be stored securely is determined for systems in which all the available nodes help in repair (i.e., repair degree d = n -1, where n is the total number of nodes) and for any number of compromised nodes. Similar results in the literature are restricted to the case of at most two compromised nodes. Moreover, new explicit upper bounds are given on the maximum secure file size for systems with d <; n - 1. The key ingredients for the contribution of this paper are new results on subspace intersection for the data downloaded during repair. The new bounds imply the interesting fact that the maximum amount of data that can be stored securely decreases exponentially with the number of compromised nodes. Whether this exponential decrease is fundamental or is a consequence of the exactness and linearity constraints remains an open question."
]
} |
1902.09865 | 2951637176 | This paper addresses the problem of constructing secure exact-repair regenerating codes at the MSR point for all feasible values of the parameters. The setting involves a passive eavesdropper who is allowed to observe the stored contents of, and the downloads into, an @math -subset of the @math nodes of a distributed storage system (DSS). The objective is to achieve perfect secrecy between the eavesdropped symbols and the file stored on the DSS. Previous secure code constructions (most notably that by ) tackle the problem only for the restricted case wherein the number, @math , of helper nodes aiding in the recovery of a failed node is equal to @math . This paper builds on Rawat's work, by combining Gabidulin pre-coding and an MSR construction by Ye and Barg to prove the achievability of secrecy capacity at the MSR point for all allowed values of @math . | Given a DSS that can store @math symbols when @math , we augment our file of size @math with random symbols @math , where @math is a random vector of length @math . Each random symbol in @math is drawn i.i.d. and uniformly at random from the field @math . We shall now describe the ingredients of our construction, namely the Gabidulin pre-coding procedure and the MSR construction (for all parameters @math ), by Ye and Barg @cite_7 . | {
"cite_N": [
"@cite_7"
],
"mid": [
"2963754880"
],
"abstract": [
"Maximum distance separable (MDS) codes are optimal error-correcting codes in the sense that they provide the maximum failure tolerance for a given number of parity nodes. Suppose that an MDS code with @math information nodes and @math parity nodes is used to encode data in a distributed storage system. It is known that if @math out of the @math nodes are inaccessible and @math surviving (helper) nodes are used to recover the lost data, then we need to download at least @math fraction of the data stored in each of the helper nodes ( , 2010 and , 2013). If this lower bound is achieved for the repair of any @math erased nodes from any @math helper nodes, we say that the MDS code has the @math -optimal repair property. We study high-rate MDS array codes with the optimal repair property (also known as minimum storage regenerating codes, or MSR codes). Explicit constructions of such codes in the literature are only available for the cases where there are at most three parity nodes, and these existing constructions can only optimally repair a single node failure by accessing all the surviving nodes. In this paper, given any @math and @math , we present two explicit constructions of MDS array codes with the @math -optimal repair property for all @math and @math simultaneously. Codes in the first family can be constructed over any base field @math as long as @math , where @math . The encoding, decoding, repair of failed nodes, and update procedures of these codes all have low complexity. Codes in the second family have the optimal access property and can be constructed over any base field @math as long as @math . Moreover, both code families have the optimal error resilience capability when repairing failed nodes. We also construct several other related families of MDS codes with the optimal repair property."
]
} |
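The record above describes augmenting the file with i.i.d. uniform random symbols before encoding, which is the heart of the secrecy pre-coding step. The following is a minimal sketch of that idea, with all names and parameters of our choosing: ordinary Reed-Solomon-style polynomial evaluation over a small prime field stands in for the linearized-polynomial (Gabidulin, rank-metric) evaluation the actual construction requires.

```python
# A minimal sketch of secrecy pre-coding, not the paper's construction:
# RS-style evaluation over GF(P) stands in for Gabidulin evaluation.
import random

P = 257  # a small prime field, for illustration only

def precode(file_symbols, num_random, num_coded):
    """Prepend i.i.d. uniform random symbols to the file, then evaluate the
    resulting polynomial at num_coded distinct nonzero points of GF(P)."""
    assert num_coded < P
    pad = [random.randrange(P) for _ in range(num_random)]
    coeffs = pad + list(file_symbols)  # random symbols occupy low-order terms
    return [sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in range(1, num_coded + 1)]

coded = precode(file_symbols=[10, 20, 30], num_random=2, num_coded=8)
```

In this stand-in, any set of at most `num_random` coded symbols sees an invertible Vandermonde block on the random coefficients, so those observations are uniformly distributed whatever the file is; the rank-metric structure of Gabidulin codes is what extends this kind of guarantee to the repair downloads an eavesdropper may also observe.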
1902.09602 | 2915495741 | We propose a new way of interpreting active learning methods by analyzing the information lost' upon sampling a random variable. We use some recent analytical developments of these losses to formally prove that facility location methods reduce these losses under mild assumptions, and to derive a new data dependent bound on information losses that can be used to evaluate other active learning methods. We show that this new bound is extremely tight to experiment, and further show that the bound has a decent predictive power for classification accuracy. | On the bridge between streaming supervised methods and pure unsupervised methods lie un-streamed supervised methods. These methods select data all at once, but use some labelled data to do so @cite_67 . | {
"cite_N": [
"@cite_67"
],
"mid": [
"1912128066"
],
"abstract": [
"We study the problem of selecting a subset of big data to train a classifier while incurring minimal performance loss. We show the connection of submodularity to the data likelihood functions for Naive Bayes (NB) and Nearest Neighbor (NN) classifiers, and formulate the data subset selection problems for these classifiers as constrained submodular maximization. Furthermore, we apply this framework to active learning and propose a novel scheme called filtered active submodular selection (FASS), where we combine the uncertainty sampling method with a submodular data subset selection framework. We extensively evaluate the proposed framework on text categorization and handwritten digit recognition tasks with four different classifiers, including deep neural network (DNN) based classifiers. Empirical results indicate that the proposed framework yields significant improvement over the state-of-the-art algorithms on all classifiers."
]
} |
1902.09602 | 2915495741 | We propose a new way of interpreting active learning methods by analyzing the information lost' upon sampling a random variable. We use some recent analytical developments of these losses to formally prove that facility location methods reduce these losses under mild assumptions, and to derive a new data dependent bound on information losses that can be used to evaluate other active learning methods. We show that this new bound is extremely tight to experiment, and further show that the bound has a decent predictive power for classification accuracy. | On the unsupervised active learning side, two works in particular are relevant to our work. The first @cite_3 provides an objective function for choosing points in a linear (or kernelized linear) regression problem. The goal of the objective function is to minimize the error variances on the unlabelled points in the linear regression setting. While its derivation is not as general as the one provided in this work - relating only to linear or kernel-linear regression problems - its objective function ends up being a variant of the one that we derive. In this sense, our work strengthens this method by showing that a variant of its objective targets information losses directly. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2114188922"
],
"abstract": [
"This paper considers the problem of selecting the most informative experiments x to get measurements y for learning a regression model y = f(x). We propose a novel and simple concept for active learning, transductive experimental design, that explores available unmeasured experiments (i.e., unlabeled data) and has a better scalability in comparison with classic experimental design methods. Our in-depth analysis shows that the new method tends to favor experiments that are on the one side hard-to-predict and on the other side representative for the rest of the experiments. Efficient optimization of the new design problem is achieved through alternating optimization and sequential greedy search. Extensive experimental results on synthetic problems and three real-world tasks, including questionnaire design for preference learning, active learning for text categorization, and spatial sensor placement, highlight the advantages of the proposed approaches."
]
} |
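The variance-minimisation objective described in the record above can be made concrete with a short greedy sketch: under ridge regression, the predictive variance at a point x is proportional to x^T (X_S^T X_S + lam*I)^{-1} x for labelled design X_S, so one can greedily pick the point whose labelling most reduces the summed variance over the whole pool. This is our reading of the objective, not the cited paper's exact alternating/sequential optimiser, and the function names are illustrative.

```python
import numpy as np

def greedy_design(X, k, lam=1e-2):
    """Greedily choose k rows of X to label so that the summed ridge-regression
    predictive variance over all rows of X is minimised."""
    n, d = X.shape
    chosen = []
    for _ in range(k):
        best_i, best_cost = None, np.inf
        for i in range(n):
            if i in chosen:
                continue
            S = X[chosen + [i]]                      # candidate labelled set
            A = np.linalg.inv(S.T @ S + lam * np.eye(d))
            cost = np.einsum('ij,jk,ik->', X, A, X)  # sum over x of x^T A x
            if cost < best_cost:
                best_i, best_cost = i, cost
        chosen.append(best_i)
    return chosen

picked = greedy_design(np.random.default_rng(0).normal(size=(200, 5)), k=10)
```

The O(n^2) inner loop is deliberate for clarity; rank-one inverse updates would make each step far cheaper in practice.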
1902.09602 | 2915495741 | We propose a new way of interpreting active learning methods by analyzing the information lost' upon sampling a random variable. We use some recent analytical developments of these losses to formally prove that facility location methods reduce these losses under mild assumptions, and to derive a new data dependent bound on information losses that can be used to evaluate other active learning methods. We show that this new bound is extremely tight to experiment, and further show that the bound has a decent predictive power for classification accuracy. | The second @cite_35 proves that under a Lipschitz continuity condition, empirical risks can be bounded in a way dependent on the maximum distance from any unlabelled point to a labelled one. Thus good data points can be chosen by ensuring that this maximum distance is small. In our work, we will prove that a slight variation of this method reduces information losses - giving the method even stronger theoretical backing. | {
"cite_N": [
"@cite_35"
],
"mid": [
"2774918944"
],
"abstract": [
"Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe; training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expensive. One way to ease this problem is coming up with smart ways for choosing images to be labelled from a very large collection (ie. active learning). Our empirical study suggests that many of the active learning heuristics in the literature are not effective when applied to CNNs in batch setting. Inspired by these limitations, we define the problem of active learning as core-set selection, ie. choosing set of points such that a model learned over the selected subset is competitive for the remaining data points. We further present a theoretical result characterizing the performance of any selected subset using the geometry of the datapoints. As an active learning algorithm, we choose the subset which is expected to yield best result according to our characterization. Our experiments show that the proposed method significantly outperforms existing approaches in image classification experiments by a large margin."
]
} |
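The bound discussed in the record above suggests choosing labelled points so that no unlabelled point is far from a labelled one, which is the classic k-center (facility location) objective. Below is a minimal sketch of the standard farthest-first traversal, a 2-approximation for that objective; the implementation details are ours, not the cited paper's.

```python
import numpy as np

def k_center_greedy(X, k):
    """Farthest-first traversal: pick k rows of X so that the maximum distance
    from any row to its nearest chosen row is (approximately) minimised."""
    centers = [0]                                    # arbitrary first center
    d = np.linalg.norm(X - X[0], axis=1)             # distance to nearest center
    for _ in range(k - 1):
        nxt = int(np.argmax(d))                      # current farthest point
        centers.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return centers

idx = k_center_greedy(np.random.default_rng(0).normal(size=(500, 16)), k=20)
```

Keeping only the running minimum distance vector makes each iteration O(n*dim), so the sketch scales linearly in both pool size and number of selections.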
1902.09636 | 2917545790 | To date, cloud applications have used datacenter resources through manual configuration and deployment of virtual machines and containers. Current trends see increasing use of microservices, where larger applications are split into many small containers, to be developed and deployed independently. However, even with the rise of the devops movement and orchestration facilities such as Kubernetes, there is a tendency to separate development from deployment. We present an exploration of a more extreme point on the devops spectrum: Fractal. Developers embed orchestration logic inside their application, fully automating the processes of scaling up and down. Providing a set of extensions to and an API over the Jitsu platform, we outline the design of Fractal and describe the key features of its implementation: how an application is self-replicated, how replica lifecycles are managed, how failure recovery is handled, and how network traffic is transparently distributed between replicas. We present an evaluation of a self-scaling website, and demonstrate that Fractal is both useful and feasible. | Another example on a public cloud is Azure's Ananta @cite_7 , a datacenter load-balancing service for the Azure datacenter. Ananta splits the load balancing function into two operations: flow distribution across service replicas and SNAT translation. Flow distribution for incoming traffic is implemented on standalone software switches, called MUX nodes, which map each flow to a specific replica. MUX nodes integrate with the datacenter network policy through the routing protocol by advertising each service's IP reachability, and achieve scale-out capabilities via ECMP routing. SNAT functionality for outgoing traffic is implemented by software agents in the dom0 of virtualised hosts. Unlike Ananta, Fractal reduces the per-packet processing by integrating the per-flow mapping operation with the forwarding decision on the edge of the datacenter network, where the traffic load is lower. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2106352837"
],
"abstract": [
"Layer-4 load balancing is fundamental to creating scale-out web services. We designed and implemented Ananta, a scale-out layer-4 load balancer that runs on commodity hardware and meets the performance, reliability and operational requirements of multi-tenant cloud computing environments. Ananta combines existing techniques in routing and distributed systems in a unique way and splits the components of a load balancer into a consensus-based reliable control plane and a decentralized scale-out data plane. A key component of Ananta is an agent in every host that can take over the packet modification function from the load balancer, thereby enabling the load balancer to naturally scale with the size of the data center. Due to its distributed architecture, Ananta provides direct server return (DSR) and network address translation (NAT) capabilities across layer-2 boundaries. Multiple instances of Ananta have been deployed in the Windows Azure public cloud with combined bandwidth capacity exceeding 1Tbps. It is serving traffic needs of a diverse set of tenants, including the blob, table and relational storage services. With its scale-out data plane we can easily achieve more than 100Gbps throughput for a single public IP address. In this paper, we describe the requirements of a cloud-scale load balancer, the design of Ananta and lessons learnt from its implementation and operation in the Windows Azure public cloud."
]
} |
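As a toy illustration of the flow-distribution half of Ananta described above, the sketch below hashes a flow's 5-tuple to one of N replicas, so every packet of a flow lands on the same replica without any shared state. Real MUXes keep per-flow tables and mitigate replica-set churn (a plain modulo hash reshuffles flows whenever N changes); this stateless version, with hypothetical names, only shows the mapping idea.

```python
import hashlib

def pick_replica(flow, replicas):
    """Hash a flow 5-tuple to one replica: all packets of the same flow map
    to the same replica, so no per-flow state has to be shared."""
    key = "|".join(map(str, flow)).encode()
    h = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return replicas[h % len(replicas)]

r = pick_replica(("10.0.0.7", 51234, "20.1.2.3", 443, "tcp"), ["r1", "r2", "r3"])
```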
1902.09636 | 2917545790 | To date, cloud applications have used datacenter resources through manual configuration and deployment of virtual machines and containers. Current trends see increasing use of microservices, where larger applications are split into many small containers, to be developed and deployed independently. However, even with the rise of the devops movement and orchestration facilities such as Kubernetes, there is a tendency to separate development from deployment. We present an exploration of a more extreme point on the devops spectrum: Fractal. Developers embed orchestration logic inside their application, fully automating the processes of scaling up and down. Providing a set of extensions to and an API over the Jitsu platform, we outline the design of Fractal and describe the key features of its implementation: how an application is self-replicated, how replica lifecycles are managed, how failure recovery is handled, and how network traffic is transparently distributed between replicas. We present an evaluation of a self-scaling website, and demonstrate that Fractal is both useful and feasible. | Duet @cite_3 , an extension of Ananta's design, offloads some of the replica assignment processing to hardware switches, thus improving flow latency and scalability. Duet exploits the line-rate per-packet hashing for ECMP mapping and IP-in-IP tunnelling capabilities of hardware switches, which remain underutilised in modern datacenters. Nonetheless, the respective lookup tables are limited and cannot store global load balancing state. Duet implements a routing policy which localises traffic of a load balanced IP in a single switch and defines a distribution algorithm that maximizes service coverage. Software switches are used for fail-over redundancy and traffic processing when the mapping state for a load balanced IP does not fit in the hardware switch lookup tables. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2159279524"
],
"abstract": [
"Load balancing is a foundational function of datacenter infrastructures and is critical to the performance of online services hosted in datacenters. As the demand for cloud services grows, expensive and hard-to-scale dedicated hardware load balancers are being replaced with software load balancers that scale using a distributed data plane that runs on commodity servers. Software load balancers offer low cost, high availability and high flexibility, but suffer high latency and low capacity per load balancer, making them less than ideal for applications that demand either high throughput, or low latency or both. In this paper, we present Duet, which offers all the benefits of software load balancer, along with low latency and high availability -- at next to no cost. We do this by exploiting a hitherto overlooked resource in the data center networks -- the switches themselves. We show how to embed the load balancing functionality into existing hardware switches, thereby achieving organic scalability at no extra cost. For flexibility and high availability, Duet seamlessly integrates the switch-based load balancer with a small deployment of software load balancer. We enumerate and solve several architectural and algorithmic challenges involved in building such a hybrid load balancer. We evaluate Duet using a prototype implementation, as well as extensive simulations driven by traces from our production data centers. Our evaluation shows that Duet provides 10x more capacity than a software load balancer, at a fraction of a cost, while reducing latency by a factor of 10 or more, and is able to quickly adapt to network dynamics including failures."
]
} |
1902.09636 | 2917545790 | To date, cloud applications have used datacenter resources through manual configuration and deployment of virtual machines and containers. Current trends see increasing use of microservices, where larger applications are split into many small containers, to be developed and deployed independently. However, even with the rise of the devops movement and orchestration facilities such as Kubernetes, there is a tendency to separate development from deployment. We present an exploration of a more extreme point on the devops spectrum: Fractal. Developers embed orchestration logic inside their application, fully automating the processes of scaling up and down. Providing a set of extensions to and an API over the Jitsu platform, we outline the design of Fractal and describe the key features of its implementation: how an application is self-replicated, how replica lifecycles are managed, how failure recovery is handled, and how network traffic is transparently distributed between replicas. We present an evaluation of a self-scaling website, and demonstrate that Fractal is both useful and feasible. | Application controlled management has been explored in the context of virtualized environments. The Potemkin honeypot farm @cite_4 was a highly available honeypot farm that could spawn new honeypot VMs during an attack and was an early attempt to define an application API for VM management. The authors developed a set of extensions in the Click @cite_1 router that triggered the creation of dedicated honeypots and forwarded traffic to them when an attack was positively detected; during idle periods, those VMs would be garbage collected. Another example is SnowFlock @cite_32 which provides a Xen-specific fork()-inspired fork API enabling rapid parallel VM migration using network multicast for the distribution of the OS state. While the API is superficially similar, the authors primarily focused on the optimisation of the migration process and did not discuss integration with the complex network policies used in modern multi-tenant datacenters. | {
"cite_N": [
"@cite_1",
"@cite_4",
"@cite_32"
],
"mid": [
"2010365467",
"2100673955",
"2083357144"
],
"abstract": [
"Clicks is a new software architecture for building flexible and configurable routers. A Click router is assembled from packet processing modules called elements . Individual elements implement simple router functions like packet classification, queuing, scheduling, and interfacing with network devices. A router configurable is a directed graph with elements at the vertices; packets flow along the edges of the graph. Several features make individual elements more powerful and complex configurations easier to write, including pull connections, which model packet flow drivn by transmitting hardware devices, and flow-based router context, which helps an element locate other interesting elements. Click configurations are modular and easy to extend. A standards-compliant Click IP router has 16 elements on its forwarding path; some of its elements are also useful in Ethernet switches and IP tunnelling configurations. Extending the IP router to support dropping policies, fairness among flows, or Differentiated Services simply requires adding a couple of element at the right place. On conventional PC hardware, the Click IP router achieves a maximum loss-free forwarding rate of 333,000 64-byte packets per second, demonstrating that Click's modular and flexible architecture is compatible with good performance.",
"The rapid evolution of large-scale worms, viruses and bot-nets have made Internet malware a pressing concern. Such infections are at the root of modern scourges including DDoS extortion, on-line identity theft, SPAM, phishing, and piracy. However, the most widely used tools for gathering intelligence on new malware -- network honeypots -- have forced investigators to choose between monitoring activity at a large scale or capturing behavior with high fidelity. In this paper, we describe an approach to minimize this tension and improve honeypot scalability by up to six orders of magnitude while still closely emulating the execution behavior of individual Internet hosts. We have built a prototype honeyfarm system, called Potemkin, that exploits virtual machines, aggressive memory sharing, and late binding of resources to achieve this goal. While still an immature implementation, Potemkin has emulated over 64,000 Internet honeypots in live test runs, using only a handful of physical servers.",
"Virtual Machine (VM) fork is a new cloud computing abstraction that instantaneously clones a VM into multiple replicas running on different hosts. All replicas share the same initial state, matching the intuitive semantics of stateful worker creation. VM fork thus enables the straightforward creation and efficient deployment of many tasks demanding swift instantiation of stateful workers in a cloud environment, e.g. excess load handling, opportunistic job placement, or parallel computing. Lack of instantaneous stateful cloning forces users of cloud computing into ad hoc practices to manage application state and cycle provisioning. We present SnowFlock, our implementation of the VM fork abstraction. To evaluate SnowFlock, we focus on the demanding scenario of services requiring on-the-fly creation of hundreds of parallel workers in order to solve computationally-intensive queries in seconds. These services are prominent in fields such as bioinformatics, finance, and rendering. SnowFlock provides sub-second VM cloning, scales to hundreds of workers, consumes few cloud I O resources, and has negligible runtime overhead."
]
} |
1902.09636 | 2917545790 | To date, cloud applications have used datacenter resources through manual configuration and deployment of virtual machines and containers. Current trends see increasing use of microservices, where larger applications are split into many small containers, to be developed and deployed independently. However, even with the rise of the devops movement and orchestration facilities such as Kubernetes, there is a tendency to separate development from deployment. We present an exploration of a more extreme point on the devops spectrum: Fractal. Developers embed orchestration logic inside their application, fully automating the processes of scaling up and down. Providing a set of extensions to and an API over the Jitsu platform, we outline the design of Fractal and describe the key features of its implementation: how an application is self-replicated, how replica lifecycles are managed, how failure recovery is handled, and how network traffic is transparently distributed between replicas. We present an evaluation of a self-scaling website, and demonstrate that Fractal is both useful and feasible. | Datacenter research has also explored mechanisms to automate cloud application replication. AGILE @cite_15 , a notable service replication automation framework, uses wavelet analysis to predict application resource requirements in order to fulfil specific service delivery guarantees and proactively optimize replication strategies. AGILE defines a generic black-box performance modeling framework, which uses monitoring agents to collect low-level information, like CPU, memory and network utilization for each VM, and predicts future resource requirements. Unlike AGILE, Fractal enables applications to define and run application-centric scaling mechanisms with greater agility. | {
"cite_N": [
"@cite_15"
],
"mid": [
"1743501494"
],
"abstract": [
"Dynamically adjusting the number of virtual machines (VMs) assigned to a cloud application to keep up with load changes and interference from other uses typically requires detailed application knowledge and an ability to know the future, neither of which are readily available to infrastructure service providers or application owners. The result is that systems need to be over-provisioned (costly), or risk missing their performance Service Level Objectives (SLOs) and have to pay penalties (also costly). AGILE deals with both issues: it uses wavelets to provide a medium-term resource demand prediction with enough lead time to start up new application server instances before performance falls short, and it uses dynamic VM cloning to reduce application startup times. Tests using RUBiS and Google cluster traces show that AGILE can predict varying resource demands over the medium-term with up to 3.42× better true positive rate and 0.34× the false positive rate than existing schemes. Given a target SLO violation rate, AGILE can efficiently handle dynamic application workloads, reducing both penalties and user dissatisfaction."
]
} |
1902.09700 | 2952489486 | Finding hard instances, which need a long time to solve, of graph problems such as the graph coloring problem and the maximum clique problem, is important for (1) building a good benchmark for evaluating the performance of algorithms, and (2) analyzing the algorithms to accelerate them. The existing methods for generating hard instances rely on parameters or rules that are found by domain experts; however, they are specific to the problem. Hence, it is difficult to generate hard instances for general cases. To address this issue, in this paper, we formulate finding hard instances of graph problems as two equivalent optimization problems. Then, we propose a method to automatically find hard instances by solving the optimization problems. The advantage of the proposed algorithm over the existing rule based approach is that it does not require any task specific knowledge. To the best of our knowledge, this is the first non-trivial method in the literature to automatically find hard instances. Through experiments on various problems, we demonstrate that our proposed method can generate instances that are a few to several orders of magnitude harder than the random based approach in many settings. In particular, our method outperforms rule-based algorithms in the 3-coloring problem. | Deep Generative Graph Models: Recently, several generative graph models utilizing deep learning techniques were proposed. The variational graph auto-encoder @cite_5 is one of the first models of this kind. It is a variant of the Variational Auto Encoder (VAE), which outputs a probabilistic adjacency matrix. This model was used for the link prediction of citation networks. Then, generative models based on VAEs @cite_17 @cite_8 @cite_18 @cite_3 , Generative Adversarial Networks (GANs) @cite_9 @cite_21 @cite_12 , and sequential generation @cite_6 @cite_23 @cite_1 @cite_7 were proposed. In particular, they succeeded in generating various de novo chemical materials and modeling real-world networks. ORGAN @cite_19 utilizes SeqGAN @cite_10 and reinforcement learning to generate molecular graphs with the desired properties. It uses SMILES @cite_15 to represent a molecular graph because SeqGAN generates a sequence of symbols rather than a graph itself. MolGAN @cite_14 is another graph generative model utilizing GAN and reinforcement learning. It models the probabilistic adjacency matrix and attributes of graphs directly instead of using SMILES. The generative graph model of this work belongs to this stream. To the best of our knowledge, there is no deep-learning-based hard instance generation algorithm. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_6",
"@cite_19",
"@cite_23",
"@cite_5",
"@cite_15",
"@cite_10",
"@cite_12",
"@cite_17"
],
"mid": [
"2794980531",
"2806351858",
"",
"",
"",
"2964271403",
"",
"2161779091",
"2806115886",
"2618625858",
"",
"",
"",
"2964268978",
"",
"2964108670"
],
"abstract": [
"Graphs are a fundamental abstraction for modeling relational data. However, graphs are discrete and combinatorial in nature, and learning representations suitable for machine learning tasks poses statistical and computational challenges. In this work, we propose Graphite an algorithmic framework for unsupervised learning of representations over nodes in a graph using deep latent variable generative models. Our model is based on variational autoencoders (VAE), and differs from existing VAE frameworks for data modalities such as images, speech, and text in the use of graph neural networks for parameterizing both the generative model (i.e., decoder) and inference model (i.e., encoder). The use of graph neural networks directly incorporates inductive biases due to the spatial, local structure of graphs directly in the generative model. Moreover, we draw novel connections between graph neural networks and approximate inference via kernel embeddings of distributions. We demonstrate empirically that Graphite outperforms state-of-the-art approaches for the tasks of density estimation, link prediction, and node classification on synthetic and benchmark datasets.",
"eep generative models for graph-structured data offer a new angle on the problem of chemical synthesis: by optimizing differentiable models that directly generate molecular graphs, it is pos-sible to side-step expensive search procedures in the discrete and vast space of chemical structures. We introduce MolGAN, an implicit, likelihood-free generative model for small molecular graphs that circumvents the need for expensive graph matching procedures or node ordering heuris-tics of previous likelihood-based methods. Our method adapts generative adversarial networks (GANs) to operate directly on graph-structured data. We combine our approach with a reinforce-ment learning objective to encourage the genera-tion of molecules with specific desired chemical properties. In experiments on the QM9 chemi-cal database, we demonstrate that our model is capable of generating close to 100 valid com-pounds. MolGAN compares favorably both to recent proposals that use string-based (SMILES) representations of molecules and to a likelihood-based method that directly generates graphs, al-beit being susceptible to mode collapse.",
"",
"",
"",
"",
"",
"We present a simple generation procedure which turns out to be an effective source of very hard cases for graph 3-colorability. The graphs distributed according to this generation procedure are much denser in very hard cases than previously reported for the same problem size. The coloring cost for these instances is also orders of magnitude bigger. This ability is issued from the fact that the procedure favors-inside the class of graphs with given connectivity and free of 4-cliques-the generation of graphs with relatively few paths of length three (that we call 3-paths). There is a critical value of the ratio between the number of 3-paths and the number of edges, independent of the number of nodes, which separates the graphs having the same connectivity in two regions: one contains almost all graphs free of 4-cliques while the other contains almost no such graphs. The generated very hard cases are near this phase transition, and have a regular structure, witnessed by the low variance in node degrees, as opposite to the random graphs. This regularity in the graph structure seems to confuse the coloring algorithm by inducing an uniform search space, with no clue for the search.",
"",
"In unsupervised data generation tasks, besides the generation of a sample based on previous observations, one would often like to give hints to the model in order to bias the generation towards desirable metrics. We propose a method that combines Generative Adversarial Networks (GANs) and reinforcement learning (RL) in order to accomplish exactly that. While RL biases the data generation process towards arbitrary metrics, the GAN component of the reward function ensures that the model still remembers information learned from data. We build upon previous results that incorporated GANs and RL in order to generate sequence data and test this model in several settings for the generation of molecules encoded as text sequences (SMILES) and in the context of music generation, showing for each case that we can effectively bias the generation process towards desired metrics.",
"",
"",
"",
"As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.",
"",
"Deep learning on graphs has become a popular research topic with many applications. However, past work has concentrated on learning graph embedding tasks only, which is in contrast with advances in generative models for images and text. Is it possible to transfer this progress to the domain of graphs? We propose to sidestep hurdles associated with linearization of such discrete structures by having a decoder output a probabilistic fully-connected graph of a predefined maximum size directly at once. Our method is formulated as a variational autoencoder. We evaluate on the challenging task of conditional molecule generation."
]
} |
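To illustrate the "probabilistic adjacency matrix" decoding idea shared by the VGAE and MolGAN lines of work summarised in the record above, the sketch below samples a simple undirected graph from a matrix of independent edge probabilities. Real models additionally decode node and edge attributes and use continuous relaxations (e.g., Gumbel-softmax) during training; this is just the sampling step, with names of our choosing.

```python
import numpy as np

def sample_graph(probs, seed=0):
    """Sample one undirected simple graph (symmetric 0/1 adjacency, no
    self-loops) from an NxN matrix of independent edge probabilities."""
    rng = np.random.default_rng(seed)
    n = probs.shape[0]
    upper = np.triu(rng.random((n, n)) < probs, k=1)  # strict upper triangle
    return (upper | upper.T).astype(int)

A = sample_graph(np.full((4, 4), 0.3))
```

Sampling only the strict upper triangle and mirroring it keeps the adjacency matrix symmetric and self-loop free by construction.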
1902.09729 | 2916880568 | Mutation analysis can effectively capture the dependency between source code and test results. This has been exploited by Mutation Based Fault Localisation (MBFL) techniques. However, MBFL techniques suffer from the need to expend the high cost of mutation analysis after the observation of failures, which may present a challenge for its practical adoption. We introduce SIMFL (Statistical Inference for Mutation-based Fault Localisation), an MBFL technique that allows users to perform the mutation analysis in advance against an earlier version of the system. SIMFL uses mutants as artificial faults and aims to learn the failure patterns among test cases against different locations of mutations. Once a failure is observed, SIMFL requires either almost no or very small additional cost for analysis, depending on the used inference model. An empirical evaluation of SIMFL using 355 faults in Defects4J shows that SIMFL can successfully localise up to 103 faults at the top, and 152 faults within the top five, on par with state-of-the-art alternatives. The cost of mutation analysis can be further reduced by mutation sampling: SIMFL retains over 80% of its localisation accuracy at the top rank when using only 10% of generated mutants, compared to results obtained without sampling. | SIMFL is an MBFL technique that allows ahead-of-time mutation analysis. Existing MBFL techniques, Metallaxis @cite_31 @cite_12 , MUSE @cite_11 and its variation MUSEUM @cite_30 , all require the faulty program to be mutated, incurring significant analysis cost after the observation of failure. Metallaxis uses SBFL-like formulas to measure the similarity between failure patterns of the actual fault and mutants. MUSE and MUSEUM both focus on two principles: first, if we mutate already faulty parts of the program, it is unlikely that we will observe more failing test cases, and we may even observe partial fixes, and second, if we mutate non-faulty parts of the program, it is likely that passing tests will now fail. MUSE and MUSEUM define their suspiciousness scores using the ratios of fail-become-pass and pass-become-fail tests. @cite_32 showed that these MBFL techniques can be improved by hybridising mutation-based scores with other features. | {
"cite_N": [
"@cite_30",
"@cite_32",
"@cite_31",
"@cite_12",
"@cite_11"
],
"mid": [
"2536479894",
"2620081107",
"1950030762",
"1963679610",
"1990785546"
],
"abstract": [
"Context: The programming language ecosystem has diversified over the last few decades. Non-trivial programs are likely to be written in more than a single language to take advantage of various control data abstractions and legacy libraries. Objective: Debugging multilingual bugs is challenging because language interfaces are difficult to use correctly and the scope of fault localization goes beyond language boundaries. To locate the causes of real-world multilingual bugs, this article proposes a mutation-based fault localization technique (MUSEUM). Method: MUSEUM modifies a buggy program systematically with our new mutation operators as well as conventional mutation operators, observes the dynamic behavioral changes in a test suite, and reports suspicious statements. To reduce the analysis cost, MUSEUM selects a subset of mutated programs and test cases. Results: Our empirical evaluation shows that MUSEUM is (i) effective: it identifies the buggy statements as the most suspicious statements for both resolved and unresolved non-trivial bugs in real-world multilingual programming projects; and (ii) efficient: it locates the buggy statements in modest amount of time using multiple machines in parallel. Also, by applying selective mutation analysis (i.e., selecting subsets of mutants and test cases to use), MUSEUM achieves significant speedup with marginal accuracy loss compared to the full mutation analysis. Conclusion: It is concluded that MUSEUM locates real-world multilingual bugs accurately. This result shows that mutation analysis can provide an effective, efficient, and language semantics agnostic analysis on multilingual code. Our light-weight analysis approach would play important roles as programmers write and debug large and complex programs in diverse programming languages.",
"Most fault localization techniques take as input a faulty program, and produce as output a ranked list of suspicious code locations at which the program may be defective. When researchers propose a new fault localization technique, they typically evaluate it on programs with known faults. The technique is scored based on where in its output list the defective code appears. This enables the comparison of multiple fault localization techniques to determine which one is better. Previous research has evaluated fault localization techniques using artificial faults, generated either by mutation tools or manually. In other words, previous research has determined which fault localization techniques are best at finding artificial faults. However, it is not known which fault localization techniques are best at finding real faults. It is not obvious that the answer is the same, given previous work showing that artificial faults have both similarities to and differences from real faults. We performed a replication study to evaluate 10 claims in the literature that compared fault localization techniques (from the spectrum-based and mutation-based families). We used 2995 artificial faults in 6 real-world programs. Our results support 7 of the previous claims as statistically significant, but only 3 as having non-negligible effect sizes. Then, we evaluated the same 10 claims, using 310 real faults from the 6 programs. Every previous result was refuted or was statistically and practically insignificant. Our experiments show that artificial faults are not useful for predicting which fault localization techniques perform best on real faults. In light of these results, we identified a design space that includes many previously-studied fault localization techniques as well as hundreds of new techniques. We experimentally determined which factors in the design space are most important, using an overall set of 395 real faults. Then, we extended this design space with new techniques. Several of our novel techniques outperform all existing techniques, notably in terms of ranking defective code in the top-5 or top-10 reports.",
"Fault localization methods seek to identify faulty program statements based on the information provided by the failing and passing test executions. Spectrum-based methods are among the most popular ones and assist programmers by assigning suspiciousness values on program statements according to their probability of being faulty. This paper proposes Met allaxis, a fault localization approach based on mutation analysis. The innovative part of Met allaxis is that it uses mutants and links them with the faulty program places. Thus, mutants that are killed mostly by failing tests provide a good indication about the location of a fault. Experimentation using Met allaxis suggests that it is significantly more effective than statement-based approaches. This is true even in the case where mutation cost-reduction techniques, such as mutant sampling, are facilitated. Additionally, results from a controlled experiment show that the use of mutation as a testing technique provides benefits to the fault localization process. Therefore, fault localization is significantly improved by using mutation-based tests instead of block-based or branch-based test suites. Finally, evidence in support of the methods' scalability is also given. Copyright © 2013 John Wiley & Sons, Ltd.",
"Many fault localization techniques operate by crosscutting coverage information of passed and failed test executions. Generally, their accuracy depends on the utilized coverage elements and on the selected test cases. This paper proposes a novel fault localization method using mutation and investigates its accuracy when using classical test selection criteria such as mutation, branch and block. A controlled experiment shows that (1) the mutation based approach is quite effective at identifying \"unknown\" faulty program statements. Additionally, the experimental results reveal (2) that the mutation-based test suites are significantly more effective at supporting fault localization than block or branch-based test suites. Further, (3) evidence in support of facilitating mutation alternatives, such as mutant sampling, in order to diminish mutation overheads is also given.",
"We present MUSE (MUtation-baSEd fault localization technique), a new fault localization technique based on mutation analysis. A key idea of MUSE is to identify a faulty statement by utilizing different characteristics of two groups of mutants-one that mutates a faulty statement and the other that mutates a correct statement. We also propose a new evaluation metric for fault localization techniques based on information theory, called Locality Information Loss (LIL): it can measure the aptitude of a localization technique for automated fault repair systems as well as human debuggers. The empirical evaluation using 14 faulty versions of the five real-world programs shows that MUSE localizes a fault after reviewing 7.4 statements on average, which is about 25 times more precise than the state-of-the-art SBFL technique Op2."
]
} |
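
To make the MUSE-style scoring above concrete, here is a minimal Python sketch of computing suspiciousness from a mutant kill matrix; the data layout, the per-mutant averaging, and the equal weighting of the fail-become-pass and pass-become-fail ratios are assumptions of this example, not the exact formulation of MUSE or MUSEUM.

```python
# Simplified MUSE-style suspiciousness from a mutant kill matrix.
# Assumptions: per-mutant test outcomes are available for the original
# program and for each mutant; the equal weighting below is illustrative.

def muse_like_scores(original_results, mutant_results, mutant_location):
    """original_results: {test: bool} - True means the test passes.
    mutant_results: {mutant: {test: bool}} - outcomes on each mutant.
    mutant_location: {mutant: statement} - where each mutant was applied.
    Returns {statement: score}; higher means more suspicious."""
    failing = {t for t, ok in original_results.items() if not ok}
    passing = {t for t, ok in original_results.items() if ok}
    per_stmt = {}
    for m, results in mutant_results.items():
        f2p = sum(1 for t in failing if results.get(t, False))      # fail -> pass
        p2f = sum(1 for t in passing if not results.get(t, True))   # pass -> fail
        stmt = mutant_location[m]
        per_stmt.setdefault(stmt, []).append(
            f2p / max(len(failing), 1) - p2f / max(len(passing), 1))
    return {s: sum(v) / len(v) for s, v in per_stmt.items()}

# Toy example: mutating s1 makes the failing test pass, so s1 ranks highest.
orig = {"t1": False, "t2": True}
muts = {"m1": {"t1": True, "t2": True}, "m2": {"t1": False, "t2": False}}
locs = {"m1": "s1", "m2": "s2"}
print(muse_like_scores(orig, muts, locs))  # {'s1': 1.0, 's2': -1.0}
```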
1902.09729 | 2916880568 | Mutation analysis can effectively capture the dependency between source code and test results. This has been exploited by Mutation Based Fault Localisation (MBFL) techniques. However, MBFL techniques must incur the high cost of mutation analysis after the observation of failures, which may present a challenge for their practical adoption. We introduce SIMFL (Statistical Inference for Mutation-based Fault Localisation), an MBFL technique that allows users to perform the mutation analysis in advance against an earlier version of the system. SIMFL uses mutants as artificial faults and aims to learn the failure patterns among test cases against different locations of mutations. Once a failure is observed, SIMFL requires either almost no or very small additional cost for analysis, depending on the inference model used. An empirical evaluation of SIMFL using 355 faults in Defects4J shows that SIMFL can successfully localise up to 103 faults at the top, and 152 faults within the top five, on par with state-of-the-art alternatives. The cost of mutation analysis can be further reduced by mutation sampling: SIMFL retains over 80% of its localisation accuracy at the top rank when using only 10% of generated mutants, compared to results obtained without sampling. | As a fault localisation technique, SIMFL can be seen as predicting where faults are most likely to be located. This is different from, but related to, other forms of software defect prediction @cite_40 @cite_41 , which typically use software metrics and code attributes, but rarely information from dynamic testing, to predict defect-proneness. A recent exception is the study by @cite_7 , which uses test results to dynamically tune traditional defect prediction models. Still, they predict and rank modules based on their overall defect proneness, with the goal of directing quality assurance efforts, rather than for localising actually observed faults. | {
"cite_N": [
"@cite_41",
"@cite_40",
"@cite_7"
],
"mid": [
"2151666086",
"2105776892",
"2803652181"
],
"abstract": [
"Background: The accurate prediction of where faults are likely to occur in code can help direct test effort, reduce costs, and improve the quality of software. Objective: We investigate how the context of models, the independent variables used, and the modeling techniques applied influence the performance of fault prediction models. Method: We used a systematic literature review to identify 208 fault prediction studies published from January 2000 to December 2010. We synthesize the quantitative and qualitative results of 36 studies which report sufficient contextual and methodological information according to the criteria we develop and apply. Results: The models that perform well tend to be based on simple modeling techniques such as Naive Bayes or Logistic Regression. Combinations of independent variables have been used by models that perform well. Feature selection has been applied to these combinations when models are performing particularly well. Conclusion: The methodology used to build models seems to be influential to predictive performance. Although there are a set of fault prediction studies in which confidence is possible, more studies are needed that use a reliable methodology and which report their context, methodology, and performance comprehensively.",
"Software defect prediction strives to improve software quality and testing efficiency by constructing predictive classification models from code attributes to enable a timely identification of fault-prone modules. Several classification models have been evaluated for this task. However, due to inconsistent findings regarding the superiority of one classifier over another and the usefulness of metric-based classification in general, more research is needed to improve convergence across studies and further advance confidence in experimental results. We consider three potential sources for bias: comparing classifiers over one or a small number of proprietary data sets, relying on accuracy indicators that are conceptually inappropriate for software defect prediction and cross-study comparisons, and, finally, limited use of statistical testing procedures to secure empirical findings. To remedy these problems, a framework for comparative software defect prediction experiments is proposed and applied in a large-scale empirical comparison of 22 classifiers over 10 public domain data sets from the NASA Metrics Data repository. Overall, an appealing degree of predictive accuracy is observed, which supports the view that metric-based classification is useful. However, our results indicate that the importance of the particular classification algorithm may be less than previously assumed since no significant performance differences could be detected among the top 17 classifiers.",
"Abstract Test resource constraints is a common phenomenon in software testing. Using defect prediction to guide the resource allocation can significantly improve the efficiency and effectiveness of available test resources. However, traditional defect prediction (t-DP) is a static strategy, where the predictor cannot be dynamically adjusted during the software testing process (STP). This paper combines defect prediction with feedback control in STP and proposes a feedback-based defect prediction model, where the test results generated during STP is used as feedback information for on-line adjustment of predictor to optimize the prediction result. In addition, a novel approach called feedback-based integrated prediction (FIP) is proposed to improve the prediction accuracy, where a global predictor and a local predictor are employed to make an integrated prediction using the weight to adjust the effects of predictors at different test stages. A systematic experiment is conducted to investigate the performance of the FIP over 10 public data sets. Results show that FIP has better prediction efficiency and better robustness for external data than the t-DP, especially when the percentage of the test modules is 40 ."
]
} |
1902.09729 | 2916880568 | Mutation analysis can effectively capture the dependency between source code and test results. This has been exploited by Mutation Based Fault Localisation (MBFL) techniques. However, MBFL techniques must incur the high cost of mutation analysis after the observation of failures, which may present a challenge for their practical adoption. We introduce SIMFL (Statistical Inference for Mutation-based Fault Localisation), an MBFL technique that allows users to perform the mutation analysis in advance against an earlier version of the system. SIMFL uses mutants as artificial faults and aims to learn the failure patterns among test cases against different locations of mutations. Once a failure is observed, SIMFL requires either almost no or very small additional cost for analysis, depending on the inference model used. An empirical evaluation of SIMFL using 355 faults in Defects4J shows that SIMFL can successfully localise up to 103 faults at the top, and 152 faults within the top five, on par with state-of-the-art alternatives. The cost of mutation analysis can be further reduced by mutation sampling: SIMFL retains over 80% of its localisation accuracy at the top rank when using only 10% of generated mutants, compared to results obtained without sampling. | SIMFL was initially formulated based on Bayesian analysis to infer likely fault locations given test information. In the context of fault localisation, @cite_2 have introduced BARINEL, an SBFL technique that adopts Bayesian reasoning to generate candidate sets of multiple fault locations. To the best of our knowledge, SIMFL is the first MBFL technique that uses Bayesian inference as well as other statistical inference techniques. Bayesian inference has been used in other forms of defect modelling: work on using Bayesian Belief Networks for software defect prediction was proposed in @cite_9 and later summarised in @cite_39 . Similarly, Okutan and Yildiz @cite_34 used Bayesian modelling to predict defect proneness based on software metrics. However, like the other work on software defect prediction, the focus is on software quality, reliability, and the number of remaining defects, not, as for SIMFL, on helping to locate specific faults based on test information. | {
"cite_N": [
"@cite_9",
"@cite_34",
"@cite_39",
"@cite_2"
],
"mid": [
"2133575467",
"2127623179",
"2820802326",
"2172154252"
],
"abstract": [
"An important decision in software projects is when to stop testing. Decision support tools for this have been built using causal models represented by Bayesian Networks (BNs), incorporating empirical data and expert judgement. Previously, this required a custom BN for each development lifecycle. We describe a more general approach that allows causal models to be applied to any lifecycle. The approach evolved through collaborative projects and captures significant commercial input. For projects within the range of the models, defect predictions are very accurate. This approach enables decision-makers to reason in a way that is not possible with regression-based models.",
"There are lots of different software metrics discovered and used for defect prediction in the literature. Instead of dealing with so many metrics, it would be practical and easy if we could determine the set of metrics that are most important and focus on them more to predict defectiveness. We use Bayesian networks to determine the probabilistic influential relationships among software metrics and defect proneness. In addition to the metrics used in Promise data repository, we define two more metrics, i.e. NOD for the number of developers and LOCQ for the source code quality. We extract these metrics by inspecting the source code repositories of the selected Promise data repository data sets. At the end of our modeling, we learn the marginal defect proneness probability of the whole software system, the set of most effective metrics, and the influential relationships among metrics and defectiveness. Our experiments on nine open source Promise data repository data sets show that response for class (RFC), lines of code (LOC), and lack of coding quality (LOCQ) are the most effective metrics whereas coupling between objects (CBO), weighted method per class (WMC), and lack of cohesion of methods (LCOM) are less effective metrics on defect proneness. Furthermore, number of children (NOC) and depth of inheritance tree (DIT) have very limited effect and are untrustworthy. On the other hand, based on the experiments on Poi, Tomcat, and Xalan data sets, we observe that there is a positive correlation between the number of developers (NOD) and the level of defectiveness. However, further investigation involving a greater number of projects is needed to confirm our findings.",
"",
"Fault diagnosis approaches can generally be categorized into spectrum-based fault localization (SFL, correlating failures with abstractions of program traces), and model-based diagnosis (MBD, logic reasoning over a behavioral model). Although MBD approaches are inherently more accurate than SFL, their high computational complexity prohibits application to large programs. We present a framework to combine the best of both worlds, coined BARINEL. The program is modeled using abstractions of program traces (as in SFL) while Bayesian reasoning is used to deduce multiple-fault candidates and their probabilities (as in MBD). A particular feature of BARINEL is the usage of a probabilistic component model that accounts for the fact that faulty components may fail intermittently. Experimental results on both synthetic and real software programs show that BARINEL typically outperforms current SFL approaches at a cost complexity that is only marginally higher. In the context of single faults this superiority is established by formal proof."
]
} |
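
As an illustration of the Bayesian reading of mutation-based localisation described above, here is a minimal single-fault posterior sketch; the likelihood table (imagined as estimated ahead of time from mutants), the uniform prior, and the smoothing constant are assumptions of this example, not SIMFL's or BARINEL's actual models.

```python
# Minimal Bayesian fault localisation sketch. Assumes ahead-of-time
# mutation analysis has estimated p(test fails | location is faulty);
# the likelihoods, uniform prior, and smoothing are illustrative.

def posterior_over_locations(fail_likelihood, observed):
    """fail_likelihood: {location: {test: P(test fails | loc faulty)}}
    observed: {test: bool} - True means the test failed.
    Returns P(location faulty | observed outcomes), single-fault model."""
    locations = list(fail_likelihood)
    prior = 1.0 / len(locations)                      # uniform prior
    joint = {}
    for loc in locations:
        p = prior
        for test, failed in observed.items():
            q = fail_likelihood[loc].get(test, 0.01)  # smoothing for unseen pairs
            p *= q if failed else (1.0 - q)
        joint[loc] = p
    z = sum(joint.values()) or 1.0                    # normalising constant
    return {loc: p / z for loc, p in joint.items()}

# Toy example: t1 fails mostly when s1 is mutated, so s1 gets most mass.
lik = {"s1": {"t1": 0.9, "t2": 0.1}, "s2": {"t1": 0.2, "t2": 0.8}}
post = posterior_over_locations(lik, {"t1": True, "t2": False})
print(sorted(post.items(), key=lambda kv: -kv[1]))  # s1 ranked first
```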
1902.09729 | 2916880568 | Mutation analysis can effectively capture the dependency between source code and test results. This has been exploited by Mutation Based Fault Localisation (MBFL) techniques. However, MBFL techniques must incur the high cost of mutation analysis after the observation of failures, which may present a challenge for their practical adoption. We introduce SIMFL (Statistical Inference for Mutation-based Fault Localisation), an MBFL technique that allows users to perform the mutation analysis in advance against an earlier version of the system. SIMFL uses mutants as artificial faults and aims to learn the failure patterns among test cases against different locations of mutations. Once a failure is observed, SIMFL requires either almost no or very small additional cost for analysis, depending on the inference model used. An empirical evaluation of SIMFL using 355 faults in Defects4J shows that SIMFL can successfully localise up to 103 faults at the top, and 152 faults within the top five, on par with state-of-the-art alternatives. The cost of mutation analysis can be further reduced by mutation sampling: SIMFL retains over 80% of its localisation accuracy at the top rank when using only 10% of generated mutants, compared to results obtained without sampling. | Mutation analysis has been used to improve statistical software fault prediction models by @cite_35 . By adding 40 mutation-based metrics, calculated from information collected by running the publicly available PITest mutation testing tool, to 39 traditional, static source code metrics, the study showed improvements in predictive performance. The models predicted which classes were faulty, rather than helping rank and locate specific faults at the method level as we do here. | {
"cite_N": [
"@cite_35"
],
"mid": [
"2471516951"
],
"abstract": [
"We introduce mutation-aware fault prediction, which leverages additional guidance from metrics constructed in terms of mutants and the test cases that cover and detect them. We report the results of 12 sets of experiments, applying 4 different predictive modelling techniques to 3 large real-world systems (both open and closed source). The results show that our proposal can significantly (p ≤ 0.05) improve fault prediction performance. Moreover, mutation-based metrics lie in the top 5 most frequently relied upon fault predictors in 10 of the 12 sets of experiments, and provide the majority of the top ten fault predictors in 9 of the 12 sets of experiments."
]
} |
1902.09729 | 2916880568 | Mutation analysis can effectively capture the dependency between source code and test results. This has been exploited by Mutation Based Fault Localisation (MBFL) techniques. However, MBFL techniques must incur the high cost of mutation analysis after the observation of failures, which may present a challenge for their practical adoption. We introduce SIMFL (Statistical Inference for Mutation-based Fault Localisation), an MBFL technique that allows users to perform the mutation analysis in advance against an earlier version of the system. SIMFL uses mutants as artificial faults and aims to learn the failure patterns among test cases against different locations of mutations. Once a failure is observed, SIMFL requires either almost no or very small additional cost for analysis, depending on the inference model used. An empirical evaluation of SIMFL using 355 faults in Defects4J shows that SIMFL can successfully localise up to 103 faults at the top, and 152 faults within the top five, on par with state-of-the-art alternatives. The cost of mutation analysis can be further reduced by mutation sampling: SIMFL retains over 80% of its localisation accuracy at the top rank when using only 10% of generated mutants, compared to results obtained without sampling. | Like other fault localisation (FL) techniques, SIMFL tries to locate faults in a SUT based on dynamic information @cite_26 . The most widely studied FL approach is Spectrum-based Fault Localisation (SBFL) @cite_38 @cite_18 @cite_5 , which combines code coverage and test outcomes to rank code locations by their suspiciousness. FLUCCS @cite_33 is a state-of-the-art FL technique which aggregates multiple SBFL techniques and improves predictions with software defect prediction metrics, both static code as well as change metrics. Like other SBFL techniques, FLUCCS is more costly than SIMFL since it requires code coverage information for the SUT version on which the tests failed. While SIMFL also depends on dynamic information, it not only eliminates the need for code instrumentation, but can perform the dynamic (i.e., mutation) analysis ahead of time. This allows the cost to be amortized over multiple development iterations and allows faster feedback when a failure is observed. | {
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_26",
"@cite_33",
"@cite_5"
],
"mid": [
"2162045655",
"2101819268",
"2343875716",
"",
"1992862603"
],
"abstract": [
"One of the most expensive and time-consuming components of the debugging process is locating the errors or faults. To locate faults, developers must identify statements involved in failures and select suspicious statements that might contain faults. This paper presents a new technique that uses visualization to assist with these tasks. The technique uses color to visually map the participation of each program statement in the outcome of the execution of the program with a test suite, consisting of both passed and failed test cases. Based on this visual mapping, a user can inspect the statements in the program, identify statements involved in failures, and locate potentially faulty statements. The paper also describes a prototype tool that implements our technique along with a set of empirical studies that use the tool for evaluation of the technique. The empirical studies show that, for the subject we studied, the technique can be effective in helping a user locate faults in a program.",
"The high cost of locating faults in programs has motivated the development of techniques that assist in fault localization by automating part of the process of searching for faults. Empirical studies that compare these techniques have reported the relative effectiveness of four existing techniques on a set of subjects. These studies compare the rankings that the techniques compute for statements in the subject programs and the effectiveness of these rankings in locating the faults. However, it is unknown how these four techniques compare with Tarantula, another existing fault-localization technique, although this technique also provides a way to rank statements in terms of their suspiciousness. Thus, we performed a study to compare the Tarantula technique with the four techniques previously compared. This paper presents our study---it overviews the Tarantula technique along with the four other techniques studied, describes our experiment, and reports and discusses the results. Our studies show that, on the same set of subjects, the Tarantula technique consistently outperforms the other four techniques in terms of effectiveness in fault localization, and is comparable in efficiency to the least expensive of the other four techniques.",
"Software fault localization, the act of identifying the locations of faults in a program, is widely recognized to be one of the most tedious, time consuming, and expensive – yet equally critical – activities in program debugging. Due to the increasing scale and complexity of software today, manually locating faults when failures occur is rapidly becoming infeasible, and consequently, there is a strong demand for techniques that can guide software developers to the locations of faults in a program with minimal human intervention. This demand in turn has fueled the proposal and development of a broad spectrum of fault localization techniques, each of which aims to streamline the fault localization process and make it more effective by attacking the problem in a unique way. In this article, we catalog and provide a comprehensive overview of such techniques and discuss key issues and concerns that are pertinent to software fault localization as a whole.",
"",
"AMPLE locates likely failure-causing classes by comparing method call sequences of passing and failing runs. A difference in method call sequences, such as multiple deallocation of the same resource, is likely to point to the erroneous class. Such sequences can be collected from arbitrary Java programs at low cost; comparing object-specific sequences predicts defects better than simply comparing coverage. AMPLE comes as a plug-in for the Java IDE Eclipse that is automatically invoked as soon as a JUnit test fails."
]
} |
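
For reference, a small sketch of a classic SBFL suspiciousness formula (Ochiai) computed from coverage and test outcomes; the dictionary-based data layout is an assumption of this example, and techniques such as FLUCCS aggregate many such formulas rather than this one alone.

```python
# Spectrum-based suspiciousness (Ochiai) from coverage + test outcomes.
# The data layout here is an illustrative assumption, not any tool's API.
import math

def ochiai(coverage, outcomes):
    """coverage: {test: set(statements covered)}
    outcomes: {test: bool} - True means the test passed.
    Returns {statement: suspiciousness}."""
    total_failed = sum(1 for ok in outcomes.values() if not ok)
    stmts = set().union(*coverage.values())
    scores = {}
    for s in stmts:
        ef = sum(1 for t, cov in coverage.items() if s in cov and not outcomes[t])
        ep = sum(1 for t, cov in coverage.items() if s in cov and outcomes[t])
        denom = math.sqrt(total_failed * (ef + ep))
        scores[s] = ef / denom if denom else 0.0
    return scores

cov = {"t1": {"s1", "s2"}, "t2": {"s2"}}
out = {"t1": False, "t2": True}   # t1 fails and covers s1, s2; t2 passes on s2
print(ochiai(cov, out))           # s1 ranks above s2
```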
1902.09729 | 2916880568 | Mutation analysis can effectively capture the dependency between source code and test results. This has been exploited by Mutation Based Fault Localisation (MBFL) techniques. However, MBFL techniques must incur the high cost of mutation analysis after the observation of failures, which may present a challenge for their practical adoption. We introduce SIMFL (Statistical Inference for Mutation-based Fault Localisation), an MBFL technique that allows users to perform the mutation analysis in advance against an earlier version of the system. SIMFL uses mutants as artificial faults and aims to learn the failure patterns among test cases against different locations of mutations. Once a failure is observed, SIMFL requires either almost no or very small additional cost for analysis, depending on the inference model used. An empirical evaluation of SIMFL using 355 faults in Defects4J shows that SIMFL can successfully localise up to 103 faults at the top, and 152 faults within the top five, on par with state-of-the-art alternatives. The cost of mutation analysis can be further reduced by mutation sampling: SIMFL retains over 80% of its localisation accuracy at the top rank when using only 10% of generated mutants, compared to results obtained without sampling. | Xuan and Monperrus @cite_20 improve SBFL by creating more fine-grained test cases based on individual assertions in the existing test suite. As discussed above, SIMFL performs better when test cases are of higher granularity and can better point to a specific source code location or a small set of locations. Future work should investigate whether we could thus combine SIMFL with the fine-graining techniques of Xuan and Monperrus. | {
"cite_N": [
"@cite_20"
],
"mid": [
"2110706065"
],
"abstract": [
"Finding and fixing bugs are time-consuming activities in software development. Spectrum-based fault localization aims to identify the faulty position in source code based on the execution trace of test cases. Failing test cases and their assertions form test oracles for the failing behavior of the system under analysis. In this paper, we propose a novel concept of spectrum driven test case purification for improving fault localization. The goal of test case purification is to separate existing test cases into small fractions (called purified test cases) and to enhance the test oracles to further localize faults. Combining with an original fault localization technique (e.g., Tarantula), test case purification results in better ranking the program statements. Our experiments on 1800 faults in six open-source Java programs show that test case purification can effectively improve existing fault localization techniques."
]
} |
1902.09696 | 2951808800 | Effective network slicing requires an infrastructure network provider to deal with the uncertain demand and real-time dynamics of network resource requests. Another challenge is the combinatorial optimization of numerous resources, e.g., radio, computing, and storage. This article develops an optimal and fast real-time resource slicing framework that maximizes the long-term return of the network provider while taking into account the uncertainty of resource demand from tenants. Specifically, we first propose a novel system model which enables the network provider to effectively slice various types of resources to different classes of users under separate virtual slices. We then capture the real-time arrival of slice requests by a semi-Markov decision process. To obtain the optimal resource allocation policy under the dynamics of slicing requests, e.g., uncertain service time and resource demands, a Q-learning algorithm is often adopted in the literature. However, such an algorithm is notorious for its slow convergence, especially for problems with large state/action spaces. This makes Q-learning practically inapplicable to our case in which multiple resources are simultaneously optimized. To tackle it, we propose a novel network slicing approach with an advanced deep learning architecture, called deep dueling, that attains the optimal average reward much faster than the conventional Q-learning algorithm. This property is especially desirable to cope with real-time resource requests and the dynamic demands of users. Extensive simulations show that the proposed framework yields up to 40% higher long-term average return while being a few thousand times faster, compared with state-of-the-art network slicing approaches. | A number of research works have been introduced recently to address the network slicing resource allocation problem for the network provider @cite_28 - @cite_0 . In particular, the authors in @cite_28 and @cite_2 developed a two-tier admission control and resource allocation model to answer two fundamental questions, i.e., whether a slice request is accepted and how much radio resource is allocated to the accepted slice. To address this problem, the authors in @cite_28 used an exhaustive search method to achieve the globally optimal resource allocation solution for the network provider. However, this search method cannot be applied to complex systems with a large number of resources. To mitigate this, a heuristic scheme with three main steps was introduced in @cite_2 to effectively allocate resources to the users. Yet this heuristic scheme cannot guarantee an optimal solution for the network provider. In addition, both network slicing resource allocation solutions proposed in @cite_28 and @cite_2 are heuristic methods with only radio resources taken into consideration. Thus, these solutions may not be appropriate for dynamic network slicing resource allocation systems with a wide range of resource demands and services. | {
"cite_N": [
"@cite_28",
"@cite_0",
"@cite_2"
],
"mid": [
"2468141484",
"2774504519",
"2584942485"
],
"abstract": [
"5G mobile network is expected to serve flexible requirements hence dynamically allocate network resources according to the demands. Network slicing, where network resources are packaged and assigned in an isolated manner to set of users according to their specific requirements, is considered as a key paradigm to fulfil diversity of requirements. There will clearly be conflicting demands in allocation of such slices, and the effective provisioning of network slicing poses several challenges. Indeed, network slicing has a twofold impact in terms of user traffic prioritization as it dictates for the simultaneous management of the priority among different slices (i.e., interslice) and the priority among the users belonging to the same slice (i.e., intra-slice). In this paper, we propose a novel heuristicbased admission control mechanism able to dynamically allocate network resources to different slices in order to maximize the satisfaction of the users while guaranteeing to meet the requirements of the slices they belong to. Through simulations, we demonstrate how our proposal provides (i) higher user experience in individual slices, (ii) increased utilization of network resources and (iii) higher scalability when the number of users in each slice increases.",
"A method of providing radio resources to a number of communication flows in a radio access network that comprises a slice controller and a plurality of base stations. The method comprises in the slice controller abstracting radio resources that are available to the plurality of base stations and dynamically allocating the abstracted radio resources to a number of isolated slices, each slice accommodating one or more of the communication flows and communicating the radio resource allocations to the base stations. The method further comprises, in the base stations, performing application specific customisation radio resources within an allocated slice.",
"Wireless virtualization is a promising approach to foster innovation and prevent the ossification of wireless networks. Within a virtualized wireless network, multiple network slices, or virtual operators (VO), are co- hosted on the same physical infrastructure. A fundamental question in this environment is which multiplexing technique, TDMA, FDMA or SDMA, should be used to slice the network among the VOs. Another related question is how should the stochastic arrival process affect the slicing and QoS criteria. To answer these two questions, we study the problem of QoS-aware joint admission control and network slicing. Due to the NP- hardness of the problem, we approach it using a heuristic algorithm composed of three steps: spectrum allocation, admission control and spatial multiplexing. The proposed algorithm incorporates the effects of QoS and stochastic traffic. We study through simulations the benefits of joint spatial- frequency multiplexing over the static frequency slicing approach. Finally, our simulation results help shed some light on the trade-offs between frequency and spatial multiplexing as well as between QoS and utilization."
]
} |
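
To illustrate the dueling architecture referenced above, here is a minimal dueling Q-network head in PyTorch; the toy state encoding (remaining radio/computing/storage plus request class), the accept/reject action set, and all layer sizes are placeholder assumptions, not the paper's configuration.

```python
# Minimal dueling Q-network head (value + advantage streams), as used in
# dueling DQN variants. Sizes and the state encoding are placeholders.
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # A(s, a)

    def forward(self, s):
        h = self.trunk(s)
        v, a = self.value(h), self.advantage(h)
        # Q(s,a) = V(s) + A(s,a) - mean_a A(s,a): keeps V and A identifiable.
        return v + a - a.mean(dim=-1, keepdim=True)

# Toy state: (available radio, computing, storage, class of pending request).
q = DuelingQNet(state_dim=4, n_actions=2)   # actions: accept / reject a slice
print(q(torch.rand(1, 4)))                  # Q-values for accept vs reject
```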
1902.09590 | 2917584588 | Driverless vehicles are expected to form the foundation of future connected transport infrastructure. A key weakness of connected vehicles is their vulnerability to physical-proximity attacks such as sensor saturation attacks. It is natural to study whether such attacks can be used to disrupt swarms of autonomous vehicles used as part of a large fleet providing taxi and courier delivery services. In this paper, we start to examine the strategic options available to attackers and defenders (autonomous-fleet operators) in such conflicts. We find that attackers have the upper hand in most cases and are able to carry out crippling denial-of-service attacks on fleets, by leveraging the inherent deficiencies of road networks identified by techniques from graph analysis. Experimental results on ten cities using real-world courier traces show that most cities will require upgraded infrastructure to defend driverless vehicles against denial-of-service attacks. We found several hidden costs that impact equipment designers and operators of driverless vehicles - not least, that road-networks need to be redesigned for robustness against attacks, thus raising some fundamental questions about the benefits. | The game-theoretic background to the problem at hand lies in the search game within predator-prey games, also known as hider-seeker games. This is a zero-sum game between a single predator and a single mobile prey. The predator and prey move about in a search region. The game ends with positive payoff to the predator when it meets the prey. As a bio-inspired example, the blancardella wasp finds larvae by searching for visible evidence of leaf-mining. Wasps are attracted by the appearance of holes or other leaf deformation created by the leaf-mining larvae. The game begins when the wasp lands on the leaf to search for the larvae, which in turn are alerted by the vibrations caused by the landing wasp, triggering evasive behaviour by the larvae. When the wasp encounters a feeding hole, it repeatedly inserts its ovipositor violently in the area to ambush the prey. The game ends either with the wasp paralysing the larvae or abandoning the leaf. The formalisation of this problem is well studied within pursuit-evasion games @cite_8 . | {
"cite_N": [
"@cite_8"
],
"mid": [
"1978547141"
],
"abstract": [
"We analyse a randomized pursuit-evasion game played by two players on a graph, a hunter and a rabbit. Let @math be any connected, undirected graph with @math nodes. The game is played in rounds and in each round both the hunter and the rabbit are located at a node of the graph. Between rounds both the hunter and the rabbit can stay at the current node or move to another node. The hunter is assumed to be restricted to the graph @math : in every round, the hunter can move using at most one edge. For the rabbit we investigate two models: in one model the rabbit is restricted to the same graph as the hunter, and in the other model the rabbit is unrestricted, i.e., it can jump to an arbitrary node in every round.We say that the rabbit is caught as soon as hunter and rabbit are located at the same node in a round. The goal of the hunter is to catch the rabbit in as few rounds as possible, whereas the rabbit aims to maximize the number of rounds until it is caught. Given a randomized hunter strategy for @math , the escape length for that strategy is the worst case expected number of rounds it takes the hunter to catch the rabbit, where the worst case is with regard to all (possibly randomized) rabbit strategies. Our main result is a hunter strategy for general graphs with an escape length of only @math against restricted as well as unrestricted rabbits. This bound is close to optimal since @math is a trivial lower bound on the escape length in both models. Furthermore, we prove that our upper bound is optimal up to constant factors against unrestricted rabbits."
]
} |
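
As a concrete companion to the hider-seeker discussion above, here is a small sketch that solves a toy zero-sum search game for the seeker's optimal mixed strategy by linear programming; the 3-location payoff matrix is invented for illustration and is not taken from the cited work.

```python
# Solve a small zero-sum hider-seeker matrix game with linear programming.
# payoff[i][j] = seeker's payoff when the seeker searches location i and
# the hider hides at location j (1 = caught). The matrix is a toy example.
import numpy as np
from scipy.optimize import linprog

def seeker_mixed_strategy(payoff):
    n, m = payoff.shape
    # Variables: x_0..x_{n-1} (search probabilities) and v (game value).
    c = np.zeros(n + 1); c[-1] = -1.0                 # maximise v
    A_ub = np.hstack([-payoff.T, np.ones((m, 1))])    # v <= sum_i x_i * payoff[i][j]
    b_ub = np.zeros(m)
    A_eq = np.ones((1, n + 1)); A_eq[0, -1] = 0.0     # probabilities sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, 1)] * n + [(None, None)])
    return res.x[:n], res.x[-1]

# Three hiding spots; spot 3 is harder to search effectively, so the
# optimal seeker strategy spends more probability mass on it.
payoff = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.5]])
probs, value = seeker_mixed_strategy(payoff)
print(probs.round(3), round(value, 3))  # (0.25, 0.25, 0.5), value 0.25
```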
1902.09590 | 2917584588 | Driverless vehicles are expected to form the foundation of future connected transport infrastructure. A key weakness of connected vehicles is their vulnerability to physical-proximity attacks such as sensor saturation attacks. It is natural to study whether such attacks can be used to disrupt swarms of autonomous vehicles used as part of a large fleet providing taxi and courier delivery services. In this paper, we start to examine the strategic options available to attackers and defenders (autonomous-fleet operators) in such conflicts. We find that attackers have the upper hand in most cases and are able to carry out crippling denial-of-service attacks on fleets, by leveraging the inherent deficiencies of road networks identified by techniques from graph analysis. Experimental results on ten cities using real-world courier traces show that most cities will require upgraded infrastructure to defend driverless vehicles against denial-of-service attacks. We found several hidden costs that impact equipment designers and operators of driverless vehicles - not least, that road-networks need to be redesigned for robustness against attacks, thus raising some fundamental questions about the benefits. | A particular form of hider-seeker game is the interdiction game @cite_21 , which was originally developed to understand and intercept drug smuggling in the 1990s. In an interdiction game, one or more smugglers (hiders) attempt to traverse a path between two nodes on a network while the police (seeker) patrol certain routes intensively to interdict smugglers. Both players are intelligent and adapt to each other to avoid being predictable. Our work uses Wood's game formulation as the starting point. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2133494553"
],
"abstract": [
"Interest in network interdiction has been rekindled because of attempts to reduce the flow of drugs and precursor chemicals moving through river and road networks in South America. This paper considers a problem in which an enemy attempts to maximize flow through a capacitated network while an interdictor tries to minimize this maximum flow by interdicting (stopping flow on) network arcs using limited resources. This problem is shown to be NP-complete even when the interdiction of an arc requires exactly one unit of resource. New, flexible, integer programming models are developed for the problem and its variations and valid inequalities and a reformulation are derived to tighten the LP relaxations of some of these models. A small computational example from the literature illustrates a hybrid (partly directed and partly undirected) model and the usefulness of the valid inequalities and the reformulation."
]
} |
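
To make the interdiction setting above tangible, here is a toy sketch that enumerates single-arc removals to minimise the smuggler's s-t max flow with networkx; the graph is invented, and the brute-force enumeration is only for illustration: the cited formulation solves the (NP-complete) problem exactly with integer programming.

```python
# Toy arc-interdiction sketch: remove k arcs to minimise the smuggler's
# s-t max flow. Exhaustive enumeration is fine for toy graphs only;
# Wood's formulation handles the general (NP-complete) case exactly.
import itertools
import networkx as nx

def best_interdiction(G, s, t, k):
    """Try all k-arc removals and return the one minimising max flow."""
    best_arcs, best_flow = None, float("inf")
    for arcs in itertools.combinations(G.edges, k):
        H = G.copy()
        H.remove_edges_from(arcs)
        flow = nx.maximum_flow_value(H, s, t, capacity="capacity")
        if flow < best_flow:
            best_arcs, best_flow = arcs, flow
    return best_arcs, best_flow

G = nx.DiGraph()
G.add_edge("s", "a", capacity=3); G.add_edge("s", "b", capacity=2)
G.add_edge("a", "t", capacity=2); G.add_edge("b", "t", capacity=3)
G.add_edge("a", "b", capacity=1)
print(best_interdiction(G, "s", "t", k=1))  # which single arc hurts most
```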
1902.09680 | 2918059195 | Video frame synthesis is an active computer vision problem which has applications in video compression, streaming, editing, and understanding. In this work, we present a computational high speed video synthesis framework. Our framework takes as inputs two types of data streams: an intensity frame stream and a neuromorphic event stream, which consists of asynchronous bipolar "events" which encode brightness variations over time at over 1000 fps. We introduce an algorithm to recover a space-time video from these two modes of observations. We factor the reconstruction into a physical model-based reconstruction (PBR) process and a residual denoising process. We use a differentiable model to approximate the physical sensing process, which enables stochastic gradient descent optimization using automatic differentiation. Residual errors in PBR reconstruction are further reduced by training a residual denoiser to remove reconstruction artifacts. The video output from our reconstruction algorithm has both high frame rate and well-recovered spatial features. Our framework is capable of handling challenging scenes that include fast motion and strong occlusions. | Computational high speed cameras. High speed cameras have applications in sports activity recording and broadcasting, biophysical process detection and analysis, and aerospace exploration. While some consumer-grade high speed cameras exist, it remains a significant challenge to simultaneously capture high temporal resolution (TR) and high spatial resolution (SR) videos. To bridge the gap between still cameras (high SR, low TR) and video cameras (low SR, high TR), several methods @cite_49 @cite_14 @cite_48 have been proposed that utilize inter-frame correspondences via optical flow and/or space-time regularization @cite_0 @cite_51 . In addition, methods have been proposed for flexible @cite_36 , adaptive @cite_34 , and compressive @cite_47 sampling and reconstruction @cite_41 of high speed videos. Gupta et al. @cite_36 proposed multi-level SR-TR sampling and post-processing schemes that enable motion-aware videography. Tarvainen et al. @cite_43 leveraged spatial-temporal information as camera parameters for super resolution. A number of compressive video sensing approaches @cite_28 and prototypes @cite_25 @cite_44 have been devised with additional spatial-temporal encoders and compressive sensing algorithms for data recovery and inference. A recent work demonstrated single-shot 3D video recovery by incorporating active illumination @cite_12 . | {
"cite_N": [
"@cite_14",
"@cite_36",
"@cite_48",
"@cite_41",
"@cite_28",
"@cite_0",
"@cite_43",
"@cite_44",
"@cite_49",
"@cite_47",
"@cite_34",
"@cite_51",
"@cite_25",
"@cite_12"
],
"mid": [
"2047367895",
"1540804545",
"2105583697",
"",
"2576230203",
"",
"",
"1986701690",
"2098118679",
"2198925517",
"",
"",
"2092680585",
"2539488881"
],
"abstract": [
"",
"The goal of this work is to build video cameras whose spatial and temporal resolutions can be changed post-capture depending on the scene. Building such cameras is difficult due to two reasons. First, current video cameras allow the same spatial resolution and frame rate for the entire captured spatio-temporal volume. Second, both these parameters are fixed before the scene is captured. We propose different components of video camera design: a sampling scheme, processing of captured data and hardware that offer post-capture variable spatial and temporal resolutions, independently at each image location. Using the motion information in the captured data, the correct resolution for each location is decided automatically. Our techniques make it possible to capture fast moving objects without motion blur, while simultaneously preserving high-spatial resolution for static scene parts within the same video sequence. Our sampling scheme requires a fast per-pixel shutter on the sensor-array, which we have implemented using a co-located camera-projector system.",
"We present solutions for enhancing the spatial and or temporal resolution of videos. Our algorithm targets the emerging consumer-level hybrid cameras that can simultaneously capture video and high-resolution stills. Our technique produces a high spacetime resolution video using the high-resolution stills for rendering and the low-resolution video to guide the reconstruction and the rendering process. Our framework integrates and extends two existing algorithms, namely a high-quality optical flow algorithm and a high-quality image-based-rendering algorithm. The framework enables a variety of applications that were previously unavailable to the amateur user, such as the ability to (1) automatically create videos with high spatiotemporal resolution, and (2) shift a high-resolution still to nearby points in time to better capture a missed event.",
"",
"The design of conventional sensors is based primarily on the Shannon?Nyquist sampling theorem, which states that a signal of bandwidth W Hz is fully determined by its discrete time samples provided the sampling rate exceeds 2 W samples per second. For discrete time signals, the Shannon?Nyquist theorem has a very simple interpretation: the number of data samples must be at least as large as the dimensionality of the signal being sampled and recovered. This important result enables signal processing in the discrete time domain without any loss of information. However, in an increasing number of applications, the Shannon-Nyquist sampling theorem dictates an unnecessary and often prohibitively high sampling rate (see lWhat Is the Nyquist Rate of a Video Signal?r). As a motivating example, the high resolution of the image sensor hardware in modern cameras reflects the large amount of data sensed to capture an image. A 10-megapixel camera, in effect, takes 10 million measurements of the scene. Yet, almost immediately after acquisition, redundancies in the image are exploited to compress the acquired data significantly, often at compression ratios of 100:1 for visualization and even higher for detection and classification tasks. This example suggests immense wastage in the overall design of conventional cameras.",
"",
"",
"We use mechanical translation of a coded aperture for code division multiple access compression of video. We discuss the compressed video’s temporal resolution and present experimental results for reconstructions of > 10 frames of temporal data per coded snapshot.",
"We present a framework for automatically enhancing videos of a static scene using a few photographs of the same scene. For example, our system can transfer photographic qualities such as high resolution, high dynamic range and better lighting from the photographs to the video. Additionally, the user can quickly modify the video by editing only a few still images of the scene. Finally, our system allows a user to remove unwanted objects and camera shake from the video. These capabilities are enabled by two technical contributions presented in this paper. First, we make several improvements to a state-of-the-art multiview stereo algorithm in order to compute view-dependent depths using video, photographs, and structure-from-motion data. Second, we present a novel image-based rendering algorithm that can re-render the input video using the appearance of the photographs while preserving certain temporal dynamics such as specularities and dynamic scene lighting.",
"",
"",
"",
"We describe an imaging architecture for compressive video sensing termed programmable pixel compressive camera (P2C2). P2C2 allows us to capture fast phenomena at frame rates higher than the camera sensor. In P2C2, each pixel has an independent shutter that is modulated at a rate higher than the camera frame-rate. The observed intensity at a pixel is an integration of the incoming light modulated by its specific shutter. We propose a reconstruction algorithm that uses the data from P2C2 along with additional priors about videos to perform temporal super-resolution. We model the spatial redundancy of videos using sparse representations and the temporal redundancy using brightness constancy constraints inferred via optical flow. We show that by modeling such spatio-temporal redundancies in a video volume, one can faithfully recover the underlying high-speed video frames from the observed low speed coded video. The imaging architecture and the reconstruction algorithm allows us to achieve temporal super-resolution without loss in spatial resolution. We implement a prototype of P2C2 using an LCOS modulator and recover several videos at 200 fps using a 25 fps camera.",
"Compressed sensing has been discussed separately in spatial and temporal domains. Compressive holography has been introduced as a method that allows 3D tomographic reconstruction at different depths from a single 2D image. Coded exposure is a temporal compressed sensing method for high speed video acquisition. In this work, we combine compressive holography and coded exposure techniques and extend the discussion to 4D reconstruction in space and time from one coded captured image. In our prototype, digital in-line holography was used for imaging macroscopic, fast moving objects. The pixel-wise temporal modulation was implemented by a digital micromirror device. In this paper we demonstrate 10× temporal super resolution with multiple depths recovery from a single image. Two examples are presented for the purpose of recording subtle vibrations and tracking small particles within 5 ms."
]
} |
1902.09680 | 2918059195 | Video frame synthesis is an active computer vision problem which has applications in video compression, streaming, editing, and understanding. In this work, we present a computational high speed video synthesis framework. Our framework takes as inputs two types of data streams: an intensity frame stream and a neuromorphic event stream, which consists of asynchronous bipolar "events" which encode brightness variations over time at over 1000 fps. We introduce an algorithm to recover a space-time video from these two modes of observations. We factor the reconstruction into a physical model-based reconstruction (PBR) process and a residual denoising process. We use a differentiable model to approximate the physical sensing process, which enables stochastic gradient descent optimization using automatic differentiation. Residual errors in PBR reconstruction are further reduced by training a residual denoiser to remove reconstruction artifacts. The video output from our reconstruction algorithm has both high frame rate and well-recovered spatial features. Our framework is capable of handling challenging scenes that include fast motion and strong occlusions. | Video frame synthesis. Video frame synthesis is a classic computer vision topic and has applications in video compression, frame rate conversion, motion field estimation, and video editing and rendering. Early work on video frame interpolation and extrapolation focused on establishing block-wise @cite_29 @cite_38 and/or pixel-wise @cite_22 @cite_7 correspondences between available frames. In particular, pixel-wise optical flow methods are commonly used, which assume local brightness constancy @cite_15 @cite_31 . Implementation schemes for improved performance include coarse-to-fine estimation @cite_11 , texture decomposition @cite_45 , and deep networks @cite_8 . Recent work has focused on constructing end-to-end deep neural networks to bypass optical flow estimation. In @cite_17 , a voxel flow layer was inserted in a convolutional neural network (CNN). Niklaus et al. @cite_13 formulated frame interpolation as a local convolution process and trained a CNN with gradient loss and color loss. Novel architectures, e.g., Generative Adversarial Networks (GAN) @cite_21 , and loss functions, e.g., transitive consistency loss @cite_18 , have also been proposed for multi-scale frame interpolation and extrapolation. | {
"cite_N": [
"@cite_38",
"@cite_31",
"@cite_18",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_29",
"@cite_21",
"@cite_17",
"@cite_45",
"@cite_15",
"@cite_13",
"@cite_11"
],
"mid": [
"2114961653",
"2970672284",
"",
"2138313367",
"",
"2560474170",
"1530310776",
"2751683986",
"2586480386",
"2109432061",
"",
"2964251418",
"1938714998"
],
"abstract": [
"In this work, we develop a new motion-compe (MC) interpolation algorithm to enhance the temporal resolution of video sequences. First, we propose the bilateral motion estimation scheme to obtain the motion field of an interpolated frame without yielding the hole and overlapping problems. Then, we partition a frame into several object regions by clustering motion vectors. We apply the variable-size block MC (VS-BMC) algorithm to object boundaries in order to reconstruct edge information with a higher quality. Finally, we use the adaptive overlapped block MC (OBMC), which adjusts the coefficients of overlapped windows based on the reliabilities of neighboring motion vectors. The adaptive OBMC (AOBMC) can overcome the limitations of the conventional OBMC, such as over-smoothing and poor de-blocking. Experimental results show that the proposed algorithm provides a better image quality than conventional methods both objectively and subjectively",
"",
"",
"We consider the problems of motion-compensated frame interpolation (MCFI) and bidirectional prediction in a video coding environment. These applications generally require good motion estimates at the decoder. We use a multiscale optical-flow-based motion estimator that provides smooth, natural motion fields under bit-rate constraints. These motion estimates scale well with change in temporal resolution and provide considerable flexibility in the design and operation of coders and decoders. In the MCFI application, this estimator provides excellent interpolated frames that are superior to those of conventional motion estimators, both visually and in terms of peak signal-to-noise ratio (PSNR). We also consider the effect of occlusions in the bidirectional prediction application and introduce a dense label field that complements our motion estimator. This label field enables us to adaptively weight the forward and backward predictions and gives us substantial visual and PSNR improvements in the covered uncovered regions of the sequence.",
"",
"The FlowNet demonstrated that optical flow estimation can be cast as a learning problem. However, the state of the art with regard to the quality of the flow has still been defined by traditional methods. Particularly on small displacements and real-world data, FlowNet cannot compete with variational methods. In this paper, we advance the concept of end-to-end learning of optical flow and make it work really well. The large improvements in quality and speed are caused by three major contributions: first, we focus on the training data and show that the schedule of presenting data during training is very important. Second, we develop a stacked architecture that includes warping of the second image with intermediate optical flow. Third, we elaborate on small displacements by introducing a subnetwork specializing on small motions. FlowNet 2.0 is only marginally slower than the original FlowNet but decreases the estimation error by more than 50 . It performs on par with state-of-the-art methods, while running at interactive frame rates. Moreover, we present faster variants that allow optical flow computation at up to 140fps with accuracy matching the original FlowNet.",
"Distributed video coding (DVC) is a new compression paradigm based on two key Information Theory results: the Slepian-Wolf and Wyner-Ziv theorems. A particular case of DVC deals with lossy source coding with side information at the decoder (Wyner-Ziv) and enables to shift the coding complexity from the encoder to the decoder. The solution here described is based on a very lightweight encoder leaving for the decoder the time consuming motion estimation compensation task. In this paper, the performance of the pixel domain distributed video codec is improved by using better side information based derived by motion compensated frame interpolation algorithms at the decoder. Besides forward and bidirectional motion estimation, a spatial motion smoothing algorithm to eliminate motion outliers is proposed. This allows significant improvements in the rate-distortion (RD) performance without sacrificing the encoder complexity.",
"Predicting the future from a sequence of video frames has been recently a sought after yet challenging task in the field of computer vision and machine learning. Although there have been efforts for tracking using motion trajectories and flow features, the complex problem of generating unseen frames has not been studied extensively. In this paper, we deal with this problem using convolutional models within a multi-stage Generative Adversarial Networks (GAN) framework. The proposed method uses two stages of GANs to generate a crisp and clear set of future frames. Although GANs have been used in the past for predicting the future, none of the works consider the relation between subsequent frames in the temporal dimension. Our main contribution lies in formulating two objective functions based on the Normalized Cross Correlation (NCC) and the Pairwise Contrastive Divergence (PCD) for solving this problem. This method, coupled with the traditional L1 loss, has been experimented with three real-world video datasets, viz. Sports-1M, UCF-101 and the KITTI. Performance analysis reveals superior results over the recent state-of-the-art methods.",
"We address the problem of synthesizing new video frames in an existing video, either in-between existing frames (interpolation), or subsequent to them (extrapolation). This problem is challenging because video appearance and motion can be highly complex. Traditional optical-flow-based solutions often fail where flow estimation is challenging, while newer neural-network-based methods that hallucinate pixel values directly often produce blurry results. We combine the advantages of these two methods by training a deep network that learns to synthesize video frames by flowing pixel values from existing ones, which we call deep voxel flow. Our method requires no human supervision, and any video can be used as training data by dropping, and then learning to predict, existing frames. The technique is efficient, and can be applied at any video resolution. We demonstrate that our method produces results that both quantitatively and qualitatively improve upon the state-of-the-art.",
"Variational techniques yield the most accurate results for dense optical flow fields between two images. They have the nice property of inherent smoothness to cope with untextured image regions: the filling-in of such regions is driven by neighbouring pixels. Such filling-in is not always the best choice. If the scene is mostly stationary and the camera is moving, the direction of the optical flow vectors can be restricted using the fundamental matrix. In this paper we propose an exact solution of the variational optical flow, using the fundamental matrix geometry as an additional weak prior. Our novel approach currently performs best on the Middlebury flow evaluation which includes images from stationary and dynamic scenes.",
"",
"Standard video frame interpolation methods first estimate optical flow between input frames and then synthesize an intermediate frame guided by motion. Recent approaches merge these two steps into a single convolution process by convolving input frames with spatially adaptive kernels that account for motion and re-sampling simultaneously. These methods require large kernels to handle large motion, which limits the number of pixels whose kernels can be estimated at once due to the large memory demand. To address this problem, this paper formulates frame interpolation as local separable convolution over input frames using pairs of 1D kernels. Compared to regular 2D kernels, the 1D kernels require significantly fewer parameters to be estimated. Our method develops a deep fully convolutional neural network that takes two input frames and estimates pairs of 1D kernels for all pixels simultaneously. Since our method is able to estimate kernels and synthesizes the whole video frame at once, it allows for the incorporation of perceptual loss to train the neural network to produce visually pleasing frames. This deep neural network is trained end-to-end using widely available video data without any human annotation. Both qualitative and quantitative experiments show that our method provides a practical solution to high-quality video frame interpolation.",
"This paper describes a hierarchical estimation framework for the computation of diverse representations of motion information. The key features of the resulting framework (or family of algorithms) are a global model that constrains the overall structure of the motion estimated, a local model that is used in the estimation process, and a coarse-fine refinement strategy. Four specific motion models: affine flow, planar surface flow, rigid body motion, and general optical flow, are described along with their application to specific examples."
]
} |
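The PBR process in the row above builds on the standard event-camera sensing model, in which each event signals a fixed step of the log intensity at one pixel. The following is a minimal sketch of that forward model only; the contrast threshold, the `(t, x, y, p)` event format, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def integrate_events(frame0, events, contrast=0.2, n_steps=10, t0=0.0, t1=1.0):
    """Sketch: roll a keyframe forward in time by accumulating event polarities
    in log-intensity space (assumed sensing model, not the paper's PBR code)."""
    log_i = np.log(frame0.astype(np.float64) + 1e-6)
    edges = np.linspace(t0, t1, n_steps + 1)
    frames = []
    for k in range(n_steps):
        # events: iterable of (t, x, y, p) with polarity p in {-1, +1}
        for t, x, y, p in events:
            if edges[k] <= t < edges[k + 1]:
                log_i[y, x] += p * contrast  # one event = one +/- contrast step
        frames.append(np.exp(log_i) - 1e-6)
    return frames  # n_steps intermediate high-rate frames
```

In the paper's framework, a differentiable version of such a forward model is what allows the latent space-time video to be fitted by stochastic gradient descent; the sketch only illustrates the direction of the physical process.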
1902.09723 | 2917105563 | Writing style is a combination of consistent decisions at different levels of language production, including the lexical, syntactic, and structural levels, associated with a specific author (or author group). While lexical-based models have been widely explored in style-based text classification, relying on content makes the model less scalable when dealing with heterogeneous data comprising various topics. On the other hand, syntactic models, which are content-independent, are more robust against topic variance. In this paper, we introduce a syntactic recurrent neural network to encode the syntactic patterns of a document in a hierarchical structure. The model first learns the syntactic representation of sentences from the sequence of part-of-speech tags. For this purpose, we exploit both convolutional filters and long short-term memories to investigate the short-term and long-term dependencies of part-of-speech tags in the sentences. Subsequently, the syntactic representations of sentences are aggregated into a document representation using recurrent neural networks. Our experimental results on the PAN 2012 dataset for the authorship attribution task show that the syntactic recurrent neural network outperforms the lexical model with the identical architecture by approximately 14% in terms of accuracy. | Writing style is a combination of consistent decisions at different levels of language production, including the lexical, syntactic, and structural levels, associated with a specific author (or author group, e.g., female authors or teenage authors) @cite_17 . Nowadays, computational stylometry has a wide range of applications in literary science @cite_8 @cite_22 , forensics @cite_27 @cite_16 @cite_5 , and psycholinguistics @cite_11 @cite_13 . Style-based text classification was proposed by Argamon-Engelson et al. @cite_21 . The authors used basic stylistic features (the frequency of function words and part-of-speech trigrams) to classify news documents based on the corresponding publisher (newspaper or magazine) as well as text genre (editorial or news item). | {
"cite_N": [
"@cite_11",
"@cite_22",
"@cite_8",
"@cite_21",
"@cite_27",
"@cite_5",
"@cite_16",
"@cite_13",
"@cite_17"
],
"mid": [
"2091034860",
"",
"2566433528",
"94749751",
"2119804197",
"2611921107",
"1996665163",
"1972820248",
"105812634"
],
"abstract": [
"Telling lies often requires creating a story about an experience or attitude that does not exist. As a result, false stories may be qualitatively different from true stories. The current project investigated the features of linguistic style that distinguish between true and false stories. In an analysis of five independent samples, a computer-based text analysis program correctly classified liars and truth-tellers at a rate of 67 when the topic was constant and a rate of 61 overall. Compared to truth-tellers, liars showed lower cognitive complexity, used fewer self-references and other-references, and used more negative emotion words.",
"",
"",
"",
"The use of stylometry, authorship recognition through purely linguistic means, has contributed to literary, historical, and criminal investigation breakthroughs. Existing stylometry research assumes that authors have not attempted to disguise their linguistic writing style. We challenge this basic assumption of existing stylometry methodologies and present a new area of research: adversarial stylometry. Adversaries have a devastating effect on the robustness of existing classification methods. Our work presents a framework for creating adversarial passages including obfuscation, where a subject attempts to hide her identity, and imitation, where a subject attempts to frame another subject by imitating his writing style, and translation where original passages are obfuscated with machine translation services. This research demonstrates that manual circumvention methods work very well while automated translation methods are not effective. The obfuscation method reduces the techniques' effectiveness to the level of random guessing and the imitation attempts succeed up to 67p of the time depending on the stylometry technique used. These results are more significant given the fact that experimental subjects were unfamiliar with stylometry, were not professional writers, and spent little time on the attacks. This article also contributes to the field by using human subjects to empirically validate the claim of high accuracy for four current techniques (without adversaries). We have also compiled and released two corpora of adversarial stylometry texts to promote research in this field with a total of 57 unique authors. We argue that this field is important to a multidisciplinary approach to privacy, security, and anonymity.",
"Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present liar: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.",
"In digital forensics, questions often arise about the authors of documents: their identity, demographic background, and whether they can be linked to other documents. The field of stylometry uses linguistic features and machine learning techniques to answer these questions. While stylometry techniques can identify authors with high accuracy in non-adversarial scenarios, their accuracy is reduced to random guessing when faced with authors who intentionally obfuscate their writing style or attempt to imitate that of another author. While these results are good for privacy, they raise concerns about fraud. We argue that some linguistic features change when people hide their writing style and by identifying those features, stylistic deception can be recognized. The major contribution of this work is a method for detecting stylistic deception in written documents. We show that using a large feature set, it is possible to distinguish regular documents from deceptive documents with 96.6 accuracy (F-measure). We also present an analysis of linguistic features that can be modified to hide writing style.",
"Can language use reflect personality style? Studies examined the reliability, factor structure, and validity of written language using a word-based, computerized text analysis program. Daily diaries from 15 substance abuse inpatients, daily writing assignments from 35 students, and journal abstracts from 40 social psychologists demonstrated good internal consistency for over 36 language dimensions. Analyses of the best 15 language dimensions from essays by 838 students yielded 4 factors that replicated across written samples from another 381 students. Finally, linguistic profiles from writing samples were compared with Thematic Apperception Test coding, self-reports, and behavioral measures from 79 students and with self-reports of a 5-factor measure and health markers from more than 1,200 students. Despite modest effect sizes, the data suggest that linguistic style is an independent and meaningful way of exploring personality.",
"Computational stylometry, as in authorship attribution or profiling, has a large potential for applications in diverse areas: literary science, forensics, language psychology, sociolinguistics, even medical diagnosis. Yet, many of the basic research questions of this field are not studied systematically or even at all. In this paper we will go into these problems, and suggest that a reinterpretation of current and historical methods in the framework and methodology of machine learning of natural language processing would be helpful. We also argue for more attention in research for explanation in computational stylometry as opposed to purely quantitative evaluation measures and propose a strategy for data collection and analysis for achieving progress in computational stylometry. We also introduce a fairly new application of computational stylometry in internet security."
]
} |
1902.09723 | 2917105563 | Writing style is a combination of consistent decisions at different levels of language production, including the lexical, syntactic, and structural levels, associated with a specific author (or author group). While lexical-based models have been widely explored in style-based text classification, relying on content makes the model less scalable when dealing with heterogeneous data comprising various topics. On the other hand, syntactic models, which are content-independent, are more robust against topic variance. In this paper, we introduce a syntactic recurrent neural network to encode the syntactic patterns of a document in a hierarchical structure. The model first learns the syntactic representation of sentences from the sequence of part-of-speech tags. For this purpose, we exploit both convolutional filters and long short-term memories to investigate the short-term and long-term dependencies of part-of-speech tags in the sentences. Subsequently, the syntactic representations of sentences are aggregated into a document representation using recurrent neural networks. Our experimental results on the PAN 2012 dataset for the authorship attribution task show that the syntactic recurrent neural network outperforms the lexical model with the identical architecture by approximately 14% in terms of accuracy. | With the recent advances in deep learning, there exists a large body of work in the literature which employs deep neural networks for stylometry and authorship attribution. For instance, the authors of @cite_15 used a feed-forward neural network language model on an authorship attribution task, achieving promising results compared to the n-gram baseline. The authors of @cite_20 employed a recurrent neural network with a shared recurrent state, which outperforms other proposed methods on the PAN 2015 task (a minimal sketch of a hierarchical POS-tag encoder follows this row). | {
"cite_N": [
"@cite_15",
"@cite_20"
],
"mid": [
"2963514251",
"2517226363"
],
"abstract": [
"In practice, training language models for individual authors is often expensive because of limited data resources. In such cases, Neural Network Language Models (NNLMs), generally outperform the traditional non-parametric N-gram models. Here we investigate the performance of a feedforward NNLM on an authorship attribution problem, with moderate author set size and relatively limited data. We also consider how the text topics impact performance. Compared with a well-constructed N-gram baseline method with Kneser-Ney smoothing, the proposed method achieves nearly 2.5 reduction in perplexity and increases author classification accuracy by 3.43 on average, given as few as 5 test sentences. The performance is very competitive with the state of the art in terms of accuracy and demand on test data. The source code, preprocessed datasets, a detailed description of the methodology and results are available at https: github.com zge authorship-attribution.",
"A recurrent neural network that has been trained to separately model the language of several documents by unknown authors is used to measure similarity between the documents. It is able to find clues of common authorship even when the documents are very short and about disparate topics. While it is easy to make statistically significant predictions regarding authorship, it is difficult to group documents into definite clusters with high accuracy."
]
} |
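To make the hierarchical architecture described in the row above concrete, here is a minimal PyTorch sketch: a 1-D CNN encodes the part-of-speech tag sequence of each sentence, and an LSTM aggregates the sentence vectors into a document representation. All sizes, layer choices, and names are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SyntacticRNN(nn.Module):
    """Sketch of a hierarchical POS-tag encoder: CNN over tags within each
    sentence, then an LSTM over sentence vectors (sizes are assumptions)."""
    def __init__(self, n_tags=50, emb=32, conv=64, hid=128, n_authors=14):
        super().__init__()
        self.emb = nn.Embedding(n_tags, emb)
        self.conv = nn.Conv1d(emb, conv, kernel_size=3, padding=1)
        self.doc_rnn = nn.LSTM(conv, hid, batch_first=True)
        self.clf = nn.Linear(hid, n_authors)

    def forward(self, pos_ids):  # (batch, n_sentences, sentence_len) tag ids
        b, s, l = pos_ids.shape
        x = self.emb(pos_ids.view(b * s, l)).transpose(1, 2)  # (b*s, emb, l)
        sent = torch.relu(self.conv(x)).max(dim=2).values     # pool over words
        _, (h, _) = self.doc_rnn(sent.view(b, s, -1))         # over sentences
        return self.clf(h[-1])                                # author logits
```

A lexical variant with the identical architecture would simply swap the POS-tag vocabulary for a word vocabulary, which is the comparison reported in the row's abstract.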
1902.09193 | 2916659265 | Conventional SLAM algorithms take a strong assumption of scene motionlessness, which limits their application in real environments. This paper tackles the challenging visual SLAM issue of moving objects in dynamic environments. We present GMC, a grid-based motion clustering approach: a lightweight dynamic object filtering method that is free from high-power and expensive processors. GMC encapsulates motion consistency as the statistical likelihood of detected key points within a certain region. Using this method, we provide a real-time and robust correspondence algorithm that can differentiate dynamic objects from static backgrounds. We evaluate our system on the public TUM dataset. Compared with state-of-the-art methods, our system provides more accurate results by detecting dynamic objects. | The basic assumption of most SLAM methods is that the observed landmarks are static, which limits the applicability of visual SLAM. Some recent studies address actively moving objects in dense SLAM by incorporating optical flow techniques. Since optical flow is generated by the movement of pixels, static background and moving targets can in principle be distinguished by it @cite_0 @cite_2 . However, this kind of dense sampling requires a large amount of computation. Also, as the data association cannot be guaranteed, these methods are not robust, and RANSAC may fail easily, especially when the moving object is large (a minimal sketch of grid-based motion-consistency filtering follows this row). Some learning-based methods minimize the reprojection error or pixel intensity with the association treated as one of the unknowns to be solved using EM @cite_7 or CRF @cite_20 , but the results cannot be achieved in real time. | {
"cite_N": [
"@cite_0",
"@cite_20",
"@cite_7",
"@cite_2"
],
"mid": [
"2053740312",
"2126319832",
"2739423245",
""
],
"abstract": [
"In this paper, we introduce the concept of dense scene flow for visual SLAM applications. Traditional visual SLAM methods assume static features in the environment and that a dominant part of the scene changes only due to camera egomotion. These assumptions make traditional visual SLAM methods prone to failure in crowded real-world dynamic environments with many independently moving objects, such as the typical environments for the visually impaired. By means of a dense scene flow representation, moving objects can be detected. In this way, the visual SLAM process can be improved considerably, by not adding erroneous measurements into the estimation, yielding more consistent and improved localization and mapping results. We show large-scale visual SLAM results in challenging indoor and outdoor crowded environments with real visually impaired users. In particular, we performed experiments inside the Atocha railway station and in the city-center of Alcala de Henares, both in Madrid, Spain. Our results show that the combination of visual SLAM and dense scene flow allows to obtain an accurate localization, improving considerably the results of traditional visual SLAM methods and GPS-based approaches.",
"Moving objects are present in many robotic applications. An accurate detection and motion estimation of these objects can be crucial for the success and safety of the robot and people surrounding it. This paper presents a new probabilistic framework for clustering dependent or relational data, applied to the problem of motion clustering and estimation. While conventional techniques such as scan differencing perform well in many cases, they usually assume that a good pose estimate is available and fail when points belonging to dynamic objects show some overlap in consecutive readings. The technique proposed, CRF-Clustering, by explicitly reasoning about the underlying motion of the object, is able to deal with poor initial motion estimate and overlapping points. Moreover, it is able to consider the dependencies between neighbor points in the scans to reduce the noise in the clustering assignment. The model parameters can be estimated from labeled data in a statistically sound learning procedure. Experiments show that CRF-Clustering is able to detect moving objects, cluster them and estimate their motion.",
"Traditional approaches to simultaneous localization and mapping (SLAM) rely on low-level geometric features such as points, lines, and planes. They are unable to assign semantic labels to landmarks observed in the environment. Furthermore, loop closure recognition based on low-level features is often viewpoint-dependent and subject to failure in ambiguous or repetitive environments. On the other hand, object recognition methods can infer landmark classes and scales, resulting in a small set of easily recognizable landmarks, ideal for view-independent unambiguous loop closure. In a map with several objects of the same class, however, a crucial data association problem exists. While data association and recognition are discrete problems usually solved using discrete inference, classical SLAM is a continuous optimization over metric information. In this paper, we formulate an optimization problem over sensor states and semantic landmark positions that integrates metric information, semantic information, and data associations, and decompose it into two interconnected problems: an estimation of discrete data association and landmark class probabilities, and a continuous optimization over the metric states. The estimated landmark and robot poses affect the association and class distributions, which in turn affect the robot-landmark pose optimization. The performance of our algorithm is demonstrated on indoor and outdoor datasets.",
""
]
} |
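As a rough illustration of the grid-based motion-consistency idea in the row above, the sketch below flags keypoint matches whose displacement disagrees with the dominant motion of their grid cell. The cell layout, the threshold, and the median-based statistic are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def grid_motion_filter(pts0, pts1, img_shape, grid=(8, 8), thresh=2.0):
    """Sketch: keep matches consistent with their grid cell's dominant motion;
    the rest are treated as candidate dynamic-object points (assumed variant)."""
    h, w = img_shape
    flow = pts1 - pts0                                     # (N, 2) displacements
    row = np.minimum((pts0[:, 1] * grid[0] / h).astype(int), grid[0] - 1)
    col = np.minimum((pts0[:, 0] * grid[1] / w).astype(int), grid[1] - 1)
    cell = row * grid[1] + col
    keep = np.ones(len(pts0), dtype=bool)
    for c in np.unique(cell):
        idx = np.where(cell == c)[0]
        med = np.median(flow[idx], axis=0)                 # dominant cell motion
        keep[idx] = np.linalg.norm(flow[idx] - med, axis=1) < thresh
    return keep                                            # True = likely static
```

Because the test is a cheap per-cell statistic rather than a single global RANSAC model, it degrades more gracefully when a moving object covers a large fraction of the image.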
1902.09411 | 2916703471 | Opacity is an important information-flow security property in the analysis of cyber-physical systems. It captures the plausible deniability of the system's secret behavior in the presence of an intruder that may access the information flow. Existing works on opacity only consider non-metric systems by assuming that the intruder can always distinguish two different outputs precisely. In this paper, we extend the concept of opacity to systems whose output sets are equipped with metrics. Such systems are widely used in the modeling of many real-world systems whose measurements are physical signals. A new concept called approximate opacity is proposed in order to quantitatively evaluate the security guarantee level with respect to the measurement precision of the intruder. Then we propose a new simulation-type relation, called approximate opacity preserving simulation relation, which characterizes how close two systems are in terms of the satisfaction of approximate opacity. This allows us to verify approximate opacity for large-scale, or even infinite systems, using their abstractions. We also discuss how to construct approximate opacity preserving symbolic models for a class of discrete-time control systems. Our results extend the definitions and analysis techniques for opacity from non-metric systems to metric systems. | Our work is closely related to several works in the literature. First, several different approaches have been proposed to evaluate opacity more quantitatively rather than requiring that the system be opaque exactly @cite_26 @cite_5 @cite_3 @cite_48 . For example, in @cite_3 , the authors adopt the Jensen-Shannon divergence as the measure to quantify secrecy loss (a minimal computation of this divergence is sketched after this row). In @cite_26 @cite_5 @cite_48 , stochastic DES models are used to study probabilistic measures of opacity. These approaches essentially aim to analyze how opaque a single system is, e.g., the probability of being opaque. However, they neither consider how close two systems are in terms of being opaque nor under what observation precision level opacity can be guaranteed. | {
"cite_N": [
"@cite_5",
"@cite_26",
"@cite_3",
"@cite_48"
],
"mid": [
"",
"1992424076",
"2521231675",
"2901162916"
],
"abstract": [
"",
"A system is said to be current-state opaque if the entrance of the system state to a set of secret states remains opaque (uncertain) to an intruder-at least until the system leaves the set of secret states. This notion of opacity has been studied in nondeterministic finite automata settings (where the intruder observes a subset of events, for example, via some natural projection mapping) and has been shown to be useful in characterizing security requirements in many applications (including encryption using pseudorandom generators and coverage properties in sensor networks). One limitation of the majority of existing analysis is that it fails to provide a quantifiable measure of opacity for a given system; instead, it simply provides a binary characterization of the system (being opaque or not opaque). In this paper, we address this limitation by extending current-state opacity formulations to systems that can be modeled as probabilistic finite automata under partial observation. We introduce three notions of opacity, namely: 1) step-based almost current-state opacity; 2) almost current-state opacity; and 3) probabilistic current-state opacity, all of which can be used to provide a measure of a given system's opacity. We also propose verification methods for these probabilistic notions of opacity and characterize their corresponding computational complexities.",
"While cryptography is used to protect the content of information (e.g., a message) by making it undecipherable, behaviors (as opposed to information) may not be encrypted and may only be protected by partially or fully hiding through creation of ambiguity (by providing covers that generate indistinguishable observations from secrets). Having a cover together with partial observability does cause ambiguity about the system behaviors desired to be kept secret, yet some information about secrets may still be leaked due to statistical difference between the occurrence probabilities of the secrets and their covers. In this paper, we propose a Jensen–Shannon divergence (JSD)-based measure to quantify secrecy loss in systems modeled as partially observed stochastic discrete event systems, which quantifies the statistical difference between two distributions, one over the observations generated by secret and the other over those generated by cover. We further show that the proposed JSD measure for secrecy loss is equivalent to the mutual information between the distributions over possible observations and that over possible system status (secret versus cover). Since an adversary is likely to discriminate more if he she observes for a longer period, our goal is to evaluate the worst case loss of secrecy as obtained in the limit over longer and longer observations. Computation for the proposed measure is also presented. Illustrative examples, including the one with side-channel attack, are provided to demonstrate the proposed computation approach.",
"Abstract Opacity is an important information-flow property that arises in security and privacy analysis of cyber–physical systems. Among many different notions of opacity, K -step opacity requires that the intruder can never determine unambiguously that the system was at a secret state for any specific instant within K steps prior to that particular instant. This notion becomes infinity-step opacity when K goes to infinity. Existing works on the analysis of infinite-step opacity and K -step opacity only provide a binary characterization, i.e., a system is either opaque or non-opaque. To analyze infinite-step and K -step opacity more quantitatively, in this paper, we investigate the verification of infinite-step and K -step opacity in the context of stochastic discrete-event systems. A new notion of opacity, called almost infinite-step opacity (respectively, almost K -step opacity), is proposed to capture whether or not the probability of violating infinite-step opacity (respectively, K -step opacity) is smaller than a given threshold. We also provide effective algorithms for the verification of almost infinite-step opacity and almost K -step opacity."
]
} |
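For reference, the Jensen-Shannon divergence adopted as the secrecy-loss measure in @cite_3 can be computed in a few lines; the input format (finite probability vectors over observations) is an assumption for illustration.

```python
import numpy as np

def jsd(p, q):
    """Jensen-Shannon divergence in bits between two observation distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    entropy = lambda d: -np.sum(d[d > 0] * np.log2(d[d > 0]))  # Shannon entropy
    return entropy(m) - 0.5 * (entropy(p) + entropy(q))        # in [0, 1] bit
```

Identical distributions over observations generated by the secret and by its cover give jsd(p, q) == 0 (no statistical leakage), while disjoint supports give the maximal value of 1 bit.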
1902.09411 | 2916703471 | Opacity is an important information-flow security property in the analysis of cyber-physical systems. It captures the plausible deniability of the system's secret behavior in the presence of an intruder that may access the information flow. Existing works on opacity only consider non-metric systems by assuming that the intruder can always distinguish two different outputs precisely. In this paper, we extend the concept of opacity to systems whose output sets are equipped with metrics. Such systems are widely used in the modeling of many real-world systems whose measurements are physical signals. A new concept called approximate opacity is proposed in order to quantitatively evaluate the security guarantee level with respect to the measurement precision of the intruder. Then we propose a new simulation-type relation, called approximate opacity preserving simulation relation, which characterizes how close two systems are in terms of the satisfaction of approximate opacity. This allows us to verify approximate opacity for large-scale, or even infinite systems, using their abstractions. We also discuss how to construct approximate opacity preserving symbolic models for a class of discrete-time control systems. Our results extend the definitions and analysis techniques for opacity from non-metric systems to metric systems. | There are also attempts in the literature that extend opacity from discrete systems to continuous systems. For example, in the recent results in @cite_44 @cite_29 @cite_22 , the authors extended the notion of opacity to (switched) linear systems. However, their definition of opacity is more related to an output reachability property than an information-flow property. Moreover, their formulation is mostly based on the setting of exact opacity, i.e., one can always distinguish two different outputs precisely no matter how close they are. In @cite_44 , the authors mentioned the direction of using an output metric to quantify opacity, and a property called strong @math - @math -initial-state opacity was proposed, which is closely related to our notions. However, no systematic study, e.g., verification and abstraction as we consider in this paper, was provided for this property. | {
"cite_N": [
"@cite_44",
"@cite_29",
"@cite_22"
],
"mid": [
"2500452216",
"2588012692",
"2783442534"
],
"abstract": [
"We present a framework for opacity in cyberphysical systems modeled as discrete time linear time invariant systems. A set of secret states is k-ISO with respect to a set of nonsecret states if, starting from these sets at time 0, the outputs at time k are indistinguishable to a passive adversarial observer. Necessary and sufficient conditions for k-ISO are given in terms of reachable sets of the system. Properties of k-ISO under unions and intersections are verified. It is seen that while unions of opaque sets preserve opacity, this is not necessarily true for intersections. We show that under certain conditions, k-ISO is equivalent to output controllability. Finally, we present an algorithm to compute a k-ISO set of states, given candidate secret and nonsecret sets of initial states.",
"We formulate several notions of decentralized opacity for cyberphysical systems in the presence of multiple adversarial observers. Broadly speaking, we study the following cases: i) the presence or lack of a centralized coordinator, and ii) the presence or absence of collusion among the adversaries. In the case of colluding adversaries, we derive a condition for non-opacity that depends on the structure of the directed graph representing the communication between adversaries. Finally, we define a notion of opacity where the condition that the outputs be indistinguishable is relaxed.",
"A switched system consists of a finite number of subsystems and a rule that orchestrates switching among them. We develop notions of opacity for discrete-time switched linear systems. We distinguish between cases when the secret is specified as a set of initial modes, a set of initial states, or a combination of the two. The novelty of our schemes is in the fact that we place restrictions on: i) the allowed transitions between modes (specified by a directed graph), ii) the number of allowed changes of modes (specified by lengths of paths in the directed graph), and iii) the dwell times in each mode. Each notion of opacity is characterized in terms of allowed switching sequences and sets of reachable states and or modes. Finally, we present algorithmic procedures to verify these notions, and provide bounds on their computational complexity."
]
} |
1902.09411 | 2916703471 | Opacity is an important information-flow security property in the analysis of cyber-physical systems. It captures the plausible deniability of the system's secret behavior in the presence of an intruder that may access the information flow. Existing works on opacity only consider non-metric systems by assuming that the intruder can always distinguish two different outputs precisely. In this paper, we extend the concept of opacity to systems whose output sets are equipped with metrics. Such systems are widely used in the modeling of many real-world systems whose measurements are physical signals. A new concept called approximate opacity is proposed in order to quantitatively evaluate the security guarantee level with respect to the measurement precision of the intruder. Then we propose a new simulation-type relation, called approximate opacity preserving simulation relation, which characterizes how close two systems are in terms of the satisfaction of approximate opacity. This allows us to verify approximate opacity for large-scale, or even infinite systems, using their abstractions. We also discuss how to construct approximate opacity preserving symbolic models for a class of discrete-time control systems. Our results extend the definitions and analysis techniques for opacity from non-metric systems to metric systems. | Regarding the techniques used in this paper, first, our algorithms for the verification of approximate notions of opacity are motivated by the verification algorithms for exact opacity studied in @cite_33 @cite_41 . In particular, we use the idea of constructing a new system, called the state estimator, that tracks all possible states consistent with the observation (a minimal sketch of such an estimator-style check follows this row). However, our construction of the state estimator is not exactly the same as the existing one, as additional state information is needed in order to handle the issue of approximation. | {
"cite_N": [
"@cite_41",
"@cite_33"
],
"mid": [
"2601738991",
"2146852848"
],
"abstract": [
"Abstract In the context of security analysis for information flow properties, where a potentially malicious observer (intruder) tracks the observed behavior of a given system, infinite-step opacity (respectively, K -step opacity) holds if the intruder can never determine for sure that the system was in a secret state for any instant within infinite steps (respectively, K steps) prior to that particular instant. We present new algorithms for the verification of the properties of infinite-step opacity and K -step opacity for partially-observed discrete event systems modeled as finite-state automata. Our new algorithms are based on a novel separation principle for state estimates that characterizes the information dependence in opacity verification problems, and they have lower computational complexity than previously-proposed ones in the literature. Specifically, we propose a new information structure, called the two-way observer, that is used for the verification of infinite-step and K -step opacity. Based on the two-way observer, a new upper bound for the delay in K -step opacity is derived, which also improves previously-known results.",
"Motivated by security and privacy considerations in a variety of applications of discrete event systems, we describe and analyze the computational complexity required for verifying the notion of K -step opacity for systems that are modeled as nondeterministic finite automata with partial observation on their transitions. Specifically, a system is K-step opaque if, at any specific point within the last K observations, the entrance of the system state to a given set of secret states remains opaque (uncertain) to an intruder who has complete knowledge of the system model and observes system activity through some natural projection map. We provide two methods for verifying K -step opacity using two different state estimator constructions, and analyze the computational complexity of both."
]
} |
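The estimator idea mentioned in the row above can be illustrated on a finite transition system: along every run from a secret initial state, track the set of states reachable from non-secret initial states whose outputs have stayed delta-close at every step. The sketch below follows that spirit for an initial-state variant of opacity; the system interface and all names are our assumptions, not the paper's exact construction.

```python
from collections import deque

def violates_approx_initial_opacity(x0, secret, inputs, trans, h, dist, delta):
    """Sketch: search pairs (secret-run state, delta-consistent companion set);
    opacity fails if some secret run has no delta-close non-secret explanation.
    x0, secret: sets of states; trans(x, u): set of successors; h(x): output."""
    start = set()
    for xs in x0 & secret:
        comp = frozenset(x for x in x0 - secret if dist(h(x), h(xs)) <= delta)
        start.add((xs, comp))
    queue, seen = deque(start), set(start)
    while queue:
        xs, comp = queue.popleft()
        if not comp:                      # no plausible non-secret explanation
            return True                   # approximate opacity is violated
        for u in inputs:
            for xs2 in trans(xs, u):
                comp2 = frozenset(x2 for x in comp for v in inputs
                                  for x2 in trans(x, v)
                                  if dist(h(x2), h(xs2)) <= delta)
                node = (xs2, comp2)
                if node not in seen:      # finite state space => termination
                    seen.add(node)
                    queue.append(node)
    return False
```

Keeping the companion set paired with the concrete secret-run state reflects one reading of the "additional state information" point above: with a delta threshold, output indistinguishability is no longer transitive, so estimates cannot be formed from outputs alone.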
1902.09411 | 2916703471 | Opacity is an important information-flow security property in the analysis of cyber-physical systems. It captures the plausible deniability of the system's secret behavior in the presence of an intruder that may access the information flow. Existing works on opacity only consider non-metric systems by assuming that the intruder can always distinguish two different outputs precisely. In this paper, we extend the concept of opacity to systems whose output sets are equipped with metrics. Such systems are widely used in the modeling of many real-world systems whose measurements are physical signals. A new concept called approximate opacity is proposed in order to quantitatively evaluate the security guarantee level with respect to the measurement precision of the intruder. Then we propose a new simulation-type relation, called approximate opacity preserving simulation relation, which characterizes how close two systems are in terms of the satisfaction of approximate opacity. This allows us to verify approximate opacity for large-scale, or even infinite systems, using their abstractions. We also discuss how to construct approximate opacity preserving symbolic models for a class of discrete-time control systems. Our results extend the definitions and analysis techniques for opacity from non-metric systems to metric systems. | Abstraction-based techniques have also been investigated in the literature for the verification and synthesis of opacity; see, e.g., @cite_43 @cite_38 @cite_14 @cite_19 @cite_0 . In particular, in our recent work @cite_43 , we propose several notions of opacity preserving (bi)simulation relations. However, these relations only preserve exact opacity for non-metric systems. Our new relations extend the relations in @cite_43 to metric systems by taking into account how close two systems are. Such an extension is motivated by the definition of approximate (bi)simulation relation originally proposed in @cite_30 . However, the original definition of approximate (bi)simulation relation does not necessarily preserve approximate opacity. Constructing symbolic models for control systems is also an active research area; see, e.g., @cite_28 @cite_39 @cite_36 @cite_16 . However, most of the existing works on the construction of symbolic models only consider the dynamics of the systems and do not take the opacity property into account (a minimal grid-quantization sketch follows this row). In our approach, we need to consider both the dynamics and the secret of the system while constructing the symbolic model and guarantee the preservation of approximate opacity across related systems. | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_14",
"@cite_28",
"@cite_36",
"@cite_39",
"@cite_0",
"@cite_19",
"@cite_43",
"@cite_16"
],
"mid": [
"2011979623",
"2882998909",
"2904010205",
"2165748475",
"2169289043",
"2152784531",
"2914250191",
"2809079675",
"2787221116",
"2952918945"
],
"abstract": [
"Established system relationships for discrete systems, such as language inclusion, simulation, and bisimulation, require system observations to be identical. When interacting with the physical world, modeled by continuous or hybrid systems, exact relationships are restrictive and not robust. In this paper, we develop the first framework of system approximation that applies to both discrete and continuous systems by developing notions of approximate language inclusion, approximate simulation, and approximate bisimulation relations. We define a hierarchy of approximation pseudo-metrics between two systems that quantify the quality of the approximation, and capture the established exact relationships as zero sections. Our approximation framework is compositional for a synchronous composition operator. Algorithms are developed for computing the proposed pseudo-metrics, both exactly and approximately. The exact algorithms require the generalization of the fixed point algorithms for computing simulation and bisimulation relations, or dually, the solution of a static game whose cost is the so-called branching distance between the systems. Approximations for the pseudo-metrics can be obtained by considering Lyapunov-like functions called simulation and bisimulation functions. We illustrate our approximation framework in reducing the complexity of safety verification problems for both deterministic and nondeterministic continuous systems",
"In this paper, an alternative equivalence based definition of bisimulation is proposed, called visible bisimulation equivalence. It includes both state and transition labels and therefore unifies stuttering and branching bisimulation. Furthermore, it is equivalent to a temporal logic called ECTL*, where CTL* is extended with events. The presented bisimulation abstraction is applied to a set of synchronized submodels, where local events are identified incrementally and abstracted after each synchronization. Since the bisimulation reduction is applied after each synchronization, a significant part of the state space explosion in ordinary synchronization is avoided. This compositional abstraction is used for opacity verification, where it is shown that local observers can be generated before they are synchronized, a key factor to be able to apply compositional opacity verification. The efficiency of this method is illustrated on a modular opacity problem with mutual exclusion of moving agents.",
"With the proliferation of communication networks and mobile devices, the privacy and security concerns on their information flow are raised. Given a critical system that may leak confidential information, the problem consists of verifying and also enforcing opacity by designing supervisors, to conceal confidential information from unauthorized persons. To find out what the intruder sees, it is required to construct an observer of the system. In this paper, we consider incremental observer generation of modular systems, for verification and enforcement of current state opacity. The synchronization of the subsystems generate a large state space. Moreover, the observer generation with exponential complexity adds even larger state space. To tackle the complexity problem, we prove that observer generation can be done locally before synchronizing the subsystems. The incremental local observer generation along with an abstraction method lead to a significant state space reduction compared to traditional monolithic methods. The existence of shared unobservable events is also considered in the incremental approach. Moreover, we present an illustrative example, where the results of verification and enforcement of current state opacity are shown on a modular multiple floor elevator building with an intruder. Furthermore, we extend the current state opacity, current state anonymity, and language based opacity formulations for verification of modular systems.",
"Switched systems constitute an important modeling paradigm faithfully describing many engineering systems in which software interacts with the physical world. Despite considerable progress on stability and stabilization of switched systems, the constant evolution of technology demands that we make similar progress with respect to different, and perhaps more complex, objectives. This paper describes one particular approach to address these different objectives based on the construction of approximately equivalent (bisimilar) symbolic models for switched systems. The main contribution of this paper consists in showing that under standard assumptions ensuring incremental stability of a switched system (i.e., existence of a common Lyapunov function, or multiple Lyapunov functions with dwell time), it is possible to construct a finite symbolic model that is approximately bisimilar to the original switched system with a precision that can be chosen a priori. To support the computational merits of the proposed approach, we use symbolic models to synthesize controllers for two examples of switched systems, including the boost dc-dc converter.",
"Finite-state models of control systems were proposed by several researchers as a convenient mechanism to synthesize controllers enforcing complex specifications. Most techniques for the construction of such symbolic models have two main drawbacks: either they can only be applied to restrictive classes of systems, or they require the exact computation of reachable sets. In this paper, we propose a new abstraction technique that is applicable to any nonlinear sampled-data control system as long as we are only interested in its behavior in a compact set. Moreover, the exact computation of reachable sets is not required. The effectiveness of the proposed results is illustrated by synthesizing a controller to steer a vehicle.",
"Sufficiently accurate finite state models, also called symbolic models or discrete abstractions, allow one to apply fully automated methods, originally developed for purely discrete systems, to formally reason about continuous and hybrid systems and to design finite state controllers that provably enforce predefined specifications. We present a novel algorithm to compute such finite state models for nonlinear discrete-time and sampled systems which depends on quantizing the state space using polyhedral cells, embedding these cells into suitable supersets whose attainable sets are convex, and over-approximating attainable sets by intersections of supporting half-spaces. We prove a novel recursive description of these half-spaces and propose an iterative procedure to compute them efficiently. We also provide new sufficient conditions for the convexity of attainable sets which imply the existence of the aforementioned embeddings of quantizer cells. Our method yields highly accurate abstractions and applies to nonlinear systems under mild assumptions, which reduce to sufficient smoothness in the case of sampled systems. Its practicability in the design of discrete controllers for nonlinear continuous plants under state and control constraints is demonstrated by an example.",
"This paper investigates the synthesis of edit functions for opacity enforcement using abstraction methods to reduce computational complexity. Edit functions are used to alter system outputs by erasing or inserting events in order to prevent violations of opacity. We introduce two abstraction methods, called opaque observation equivalence and opaque bisimulation, that are used to abstract the original system and its observer before calculating edit functions. We present a set of results on abstraction for opacity and its enforcement by edit functions that prove that edit functions synthesized from abstracted models are “equivalent” to ones synthesized from original ones. Our approach leverages the technique of edit function synthesis using the All Edit Structure from prior works.",
"",
"In this paper, we propose several opacity-preserving (bi)simulation relations for general nondeterministic transition systems (NTS) in terms of initial-state opacity, current-state opacity, K-step opacity, and infinite-step opacity. We also show how one can leverage quotient construction to compute such relations. In addition, we use a two-way observer method to verify opacity of nondeterministic finite transition systems (NFTSs). As a result, although the verification of opacity for infinite NTSs is generally undecidable, if one can find such an opacity-preserving relation from an infinite NTS to an NFTS, the (lack of) opacity of the NTS can be easily verified over the NFTS which is decidable.",
"Stochastic switched systems are a relevant class of stochastic hybrid systems with probabilistic evolution over a continuous domain and control-dependent discrete dynamics over a finite set of modes. In the past few years several different techniques have been developed to assist in the stability analysis of stochastic switched systems. However, more complex and challenging objectives related to the verification of and the controller synthesis for logic specifications have not been formally investigated for this class of systems as of yet. With logic specifications we mean properties expressed as formulae in linear temporal logic or as automata on infinite strings. This paper addresses these complex objectives by constructively deriving approximately equivalent (bisimilar) symbolic models of stochastic switched systems. More precisely, this paper provides two different symbolic abstraction techniques: one requires state space discretization, but the other one does not require any space discretization which can be potentially more efficient than the first one when dealing with higher dimensional stochastic switched systems. Both techniques provide finite symbolic models that are approximately bisimilar to stochastic switched systems under some stability assumptions on the concrete model. This allows formally synthesizing controllers (switching signals) that are valid for the concrete system over the finite symbolic model, by means of mature automata-theoretic techniques in the literature. The effectiveness of the results are illustrated by synthesizing switching signals enforcing logic specifications for two case studies including temperature control of a six-room building."
]
} |
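To indicate what a symbolic abstraction of a discrete-time control system can look like in the simplest case, the sketch below quantizes the state space on an eta-grid and records nominal successors. A construction that actually preserves (approximate) opacity would additionally over-approximate successors (e.g., by a growth bound) and keep track of the secret set, so everything here, including the names, is an illustrative assumption.

```python
import numpy as np

def symbolic_model(f, x_box, u_set, eta):
    """Sketch: grid abstraction of x+ = f(x, u). Abstract states are eta-grid
    cells; only nominal successors are stored (a simplification by assumption)."""
    lo, hi = (np.asarray(b, dtype=float) for b in x_box)
    axes = [np.arange(l, h + eta / 2, eta) for l, h in zip(lo, hi)]
    centers = np.array(np.meshgrid(*axes)).reshape(len(axes), -1).T
    quant = lambda x: tuple(np.round((np.asarray(x) - lo) / eta).astype(int))
    trans = {}
    for xc in centers:                    # one abstract state per grid center
        for u in u_set:
            trans.setdefault((quant(xc), u), set()).add(quant(f(xc, u)))
    return trans                          # finite abstract transition relation
```

On top of such a finite model, establishing an approximate opacity preserving simulation relation from the concrete system is what licenses verifying opacity on the abstraction instead of on the original infinite system.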
1902.09411 | 2916703471 | Opacity is an important information-flow security property in the analysis of cyber-physical systems. It captures the plausible deniability of the system's secret behavior in the presence of an intruder that may access the information flow. Existing works on opacity only consider non-metric systems by assuming that the intruder can always distinguish two different outputs precisely. In this paper, we extend the concept of opacity to systems whose output sets are equipped with metrics. Such systems are widely used in the modeling of many real-world systems whose measurements are physical signals. A new concept called approximate opacity is proposed in order to quantitatively evaluate the security guarantee level with respect to the measurement precision of the intruder. Then we propose a new simulation-type relation, called approximate opacity preserving simulation relation, which characterizes how close two systems are in terms of the satisfaction of approximate opacity. This allows us to verify approximate opacity for large-scale, or even infinite systems, using their abstractions. We also discuss how to construct approximate opacity preserving symbolic models for a class of discrete-time control systems. Our results extend the definitions and analysis techniques for opacity from non-metric systems to metric systems. | Finally, approximate notions of two related properties called diagnosability and predictability have been investigated recently in @cite_37 @cite_6 . Their setting is very similar to ours, as both consider a measurement uncertainty threshold. However, diagnosability and predictability are linear-time properties, which can be preserved by the standard approximate simulation relation. Our notions of opacity are not, and we show that the standard approximate simulation relation does not preserve opacity. Therefore, the proposed approximate opacity preserving simulation relation is different from the standard approximate simulation relation in the literature. | {
"cite_N": [
"@cite_37",
"@cite_6"
],
"mid": [
"2762781394",
"2889498487"
],
"abstract": [
"The increasing complexity in nowadays engineered systems requires great attention to safety hazards and occurrence of faults, which must be readily detected to possibly restore nominal behavior of the system. The notion of diagnosability plays a key role in this regard, since it corresponds to the possibility of detecting within a finite delay if a fault, or in general a hazardous situation, did occur. In this letter, we introduce and characterize the notion of approximate diagnosability for the general class of metric systems, which are typically used in the research community working on hybrid systems to deal with complex heterogeneous processes in, e.g., cyber-physical systems. This notion captures the possibility of detecting faults on the basis of measurements corrupted by errors, always introduced by non-ideal sensors in a real environment. Relations are established between approximate diagnosability of a given metric system and approximate diagnosability of a system that approximately simulates the given one. Application of the proposed results to the analysis of approximate diagnosability of nonlinear systems is finally discussed.",
"Abstract In this paper we introduce and characterize the notion of approximate predictability for the general class of metric systems, which are a powerful modeling framework to deal with complex heterogeneous systems such as hybrid systems. Approximate predictability corresponds to the possibility of predicting the occurrence of specific states belonging to a particular subset of interest, in advance with respect to their occurrence, on the basis of observations corrupted by measurement errors. We establish a relation between approximate predictability of a given metric system and approximate predictability of a metric system that approximately simulates the given one. This relation allows checking approximate predictability of a system with an infinite number of states, provided that one is able to construct a metric system with a finite number of states and inputs, approximating the original one in the sense of approximate simulation. The analysis of approximate predictability of Piecewise Affine (PWA) systems is carried out as an application of the proposed approach."
]
} |