Columns: aid (string, length 9-15), mid (string, length 7-10), abstract (string, length 78-2.56k), related_work (string, length 92-1.77k), ref_abstract (dict)
1703.04706
2950964646
In the domain of sequence modelling, Recurrent Neural Networks (RNNs) have achieved impressive results in a variety of application areas, including visual question answering, part-of-speech tagging and machine translation. However, this success in modelling short-term dependencies has not transitioned to application areas such as trajectory prediction, which require capturing both short-term and long-term relationships. In this paper, we propose a Tree Memory Network (TMN) for modelling long-term and short-term relationships in sequence-to-sequence mapping problems. The proposed network architecture is composed of an input module, a controller and a memory module. In contrast to related literature, which models the memory as a sequence of historical states, we model the memory as a recursive tree structure. This structure more effectively captures temporal dependencies across both short-term and long-term sequences using its hierarchical organisation. We demonstrate the effectiveness and flexibility of the proposed TMN on two practical problems, aircraft trajectory modelling and pedestrian trajectory modelling in a surveillance setting, and in both cases we outperform the current state-of-the-art. Furthermore, we perform an in-depth analysis of the evolution of the memory module content over time and provide visual evidence of how the proposed TMN is able to map both long-term and short-term relationships efficiently via a hierarchical structure.
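The recursive tree memory idea in the abstract above can be sketched in a few lines: adjacent memory states are repeatedly pooled into coarser nodes, so leaves keep short-term detail while the root summarizes the whole history. This toy version uses element-wise max-pooling in place of the paper's learned combination function; all names and values are illustrative.

```python
import numpy as np

def build_memory_tree(states, pool=lambda a, b: np.maximum(a, b)):
    """Recursively pool adjacent states into coarser nodes until one root remains."""
    levels = [list(states)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        nxt = [pool(prev[i], prev[i + 1]) if i + 1 < len(prev) else prev[i]
               for i in range(0, len(prev), 2)]
        levels.append(nxt)
    return levels  # levels[0] = leaves (short-term), levels[-1][0] = root (long-term)

# Toy "hidden states" for 5 time steps.
hidden = [np.full(2, t, dtype=float) for t in range(5)]
tree = build_memory_tree(hidden)
print(len(tree), tree[-1][0])  # 4 levels; the root pools the whole history
```

A learned model would replace `pool` with a trainable cell (e.g. a tree-LSTM-style combiner), but the hierarchy-building loop is the same.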
Some approaches utilise probability models of aircraft dynamics in order to generate predictions of future motion. These rely purely on assumptions made about the dynamics of the aircraft, without using any historical information, which is their main drawback. In @cite_33 , @cite_46 and @cite_14 , researchers treated aircraft trajectory prediction as a machine learning problem, training models on historical trajectory data together with weather observations. Most recently, @cite_7 proposed an approach that considers trajectories as a set of 4-dimensional data cubes together with weather parameters: time series clustering first segments the data, and an HMM is then learnt on top of each cluster. Still, owing to the uncertainty of weather observations, these trajectory prediction approaches remain inefficient.
{ "cite_N": [ "@cite_46", "@cite_14", "@cite_33", "@cite_7" ], "mid": [ "2333507536", "1547373000", "41309742", "2511826256" ], "abstract": [ "A machine learning approach to trajectory prediction for sequencing and merging of traffic following fixed arrival routes is described and evaluated using actual aircraft trajectory and meteorological data. In the approach a model is trained using historic data to make arrival time predictions. Model inputs are the aircraft type, aircraft ground speed and altitude at the start of the arrival route, surface wind, and altitude winds. A stepwise regression method is used to systematically determine the inputs and functions of inputs that are included in the prediction model based on their explanatory power. For the evaluation of the approach a 45 NM fixed arrival route was used that ends at the runway. Traffic performed a continuous descent operation. At a prediction horizon of 45 NM the model explained 63% of the observed variance in the arrival time. The mean absolute time error was 18 s. Finally, the prediction model was used to determine the required initial spacing interval between aircraft for continuous descent operation and examine the impact on runway throughput and conflicts. Using the prediction model, throughput increased by up to 4 aircraft per hour compared to a constant initial spacing.", "Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2004.", "This paper presents an approach to predict future motion of a moving object based on its past movement. This approach is capable of learning object movement in an open environment, which is one of the limitations in some prior works. The proposed approach exploits the similarities of short-term movement behaviors by modeling a trajectory as a concatenation of short segments. These short segments are assumed to be noisy realizations of latent segments.
The transitions between the underlying latent segments are assumed to follow a Markov model. This predictive model was applied to two real-world applications and yielded favorable performance on both tasks.", "At the heart of Air Traffic Management (ATM) lie the Decision Support Systems (DSTs) that rely upon accurate trajectory prediction to determine how the airspace will look in the future to make better decisions and advisories. Dealing with airspace that is prone to congestion due to environmental factors still remains a challenge, especially when a deterministic approach is used in the trajectory prediction process. In this paper, we describe a novel stochastic trajectory prediction approach for ATM that can be used for more efficient and realistic flight planning and to assist airspace flow management, potentially resulting in higher safety, capacity, and efficiency commensurate with fuel savings, thereby reducing emissions for a better environment. Our approach considers airspace as a 3D grid network, where each grid point is a location of a weather observation. We hypothetically build cubes around these grid points, so the entire airspace can be considered as a set of cubes. Each cube is defined by its centroid, the original grid point, and associated weather parameters that remain homogeneous within the cube during a period of time. Then, we align raw trajectories to a set of cube centroids which are basically fixed 3D positions independent of trajectory data. This creates a new form of trajectories which are 4D joint cubes, where each cube is a segment that is associated with not only spatio-temporal attributes but also with weather parameters. Next, we exploit machine learning techniques to train inference models from historical data and apply a stochastic model, a Hidden Markov Model (HMM), to predict trajectories taking environmental uncertainties into account.
During the process, we apply time series clustering to generate input observations from an excessive set of weather parameters to feed into the Viterbi algorithm. Our experiments use a real trajectory dataset with pertinent weather observations and demonstrate the effectiveness of our approach to the trajectory prediction process for ATM." ] }
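The cluster-then-HMM pipeline described in this row reduces, in its simplest form, to estimating transition statistics between discretized airspace cells. A minimal sketch, using a first-order Markov model over hypothetical grid-cell ids as a stand-in for the full HMM with weather observations:

```python
from collections import Counter, defaultdict

def fit_transitions(trajectories):
    """Count first-order transitions between discrete states across all trajectories."""
    counts = defaultdict(Counter)
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            counts[a][b] += 1
    return counts

def predict_next(counts, state):
    """Most likely successor of `state`, or None if the state was never seen."""
    if state not in counts or not counts[state]:
        return None
    return counts[state].most_common(1)[0][0]

# Toy historical trajectories over invented grid-cell ids.
hist = [["A", "B", "C"], ["A", "B", "D"], ["A", "B", "C"]]
model = fit_transitions(hist)
print(predict_next(model, "B"))  # "C": seen twice after "B", vs "D" once
```

The real system additionally conditions on weather parameters and decodes whole paths with the Viterbi algorithm; this sketch only shows the transition-counting core.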
1703.04550
2952367428
Deep reinforcement learning is becoming increasingly popular for robot control, the aim being for a robot to self-learn useful feature representations from unstructured sensory input that lead to an optimal actuation policy. In addition to sensors mounted on the robot, sensors might also be deployed in the environment, although these might need to be accessed via an unreliable wireless connection. In this paper, we demonstrate deep neural network architectures that are able to fuse information coming from multiple sensors and are robust to sensor failures at runtime. We evaluate our method on a search-and-pick task for a robot, both in simulation and in the real world.
Combining information from several sources in order to get a unified picture has been a research topic for decades @cite_14 . An often-used sensor fusion technique is (Extended) Kalman Filtering and its variants, which have proven useful for use cases such as object position estimation @cite_5 , robot pose estimation @cite_22 , localization @cite_9 and navigation @cite_12 . In these cases the desired state representation based on the sensor input (e.g. the robot pose) is known and fixed upfront. However, in our approach we want to learn a policy end-to-end, and the optimal feature representation into which to fuse the sensor input is unknown upfront.
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_9", "@cite_5", "@cite_12" ], "mid": [ "", "2411338219", "2007008254", "2155256807", "2127752689" ], "abstract": [ "", "While global navigation satellite systems (GNSS) are the state of the art for localization, in general they are unable to operate inside buildings, and there is currently no well-established solution for indoor localization. In this paper we propose a 3D mobile robot pose (2D position and 1D orientation) estimation system for indoor applications. The system is based on the cooperative sensor fusion of radar, ultrasonic and odometry data using an extended Kalman filter (EKF). A prerequisite for the EKF is an occupancy grid map of the scenario as well as the pose of the reference radar node inside the map. Our system can handle even the kidnapped-robot case, as the radar provides absolute localization. We conducted a series of measurements in an office building corridor. We determined the typical position root-mean-square error (RMSE) to be less than 15 cm.", "Localization is a crucial problem for mobile robot navigation. For an indoor mobile robot, where a global positioning system (GPS) is unavailable, another promising technique for detecting position is the received signal strength indicator (RSSI) from wireless communication. To improve the precision and robustness of mobile unit localization, an inertial measurement unit (IMU) is normally used. In this report, we propose an algorithm for mobile robot localization based on sensor fusion between RSSI from a wireless local area network (WLAN) and an IMU. The proposed fusion scheme is based on the extended Kalman filter (EKF).
The experiment is conducted using a mobile unit equipped with a low-cost IMU and a wireless communication module together with access points to evaluate the performance of our algorithm, and the results are promising.", "We present a method for representing, communicating and fusing distributed, noisy and uncertain observations of an object by multiple robots. The approach relies on re-parameterization of the canonical two-dimensional Gaussian distribution that corresponds more naturally to the observation space of a robot. The approach enables two or more observers to achieve greater effective sensor coverage of the environment and improved accuracy in object position estimation. We demonstrate empirically that, when using our approach, more observers achieve more accurate estimations of an object's position. The method is tested in three application areas, including object location, object tracking, and ball position estimation for robotic soccer. Quantitative evaluations of the technique in use on mobile robots are provided.", "It has been long known that fusing information from multiple sensors for robot navigation results in increased robustness and accuracy. However, accurate calibration of the sensor ensemble prior to deployment in the field as well as coping with sensor outages, different measurement rates and delays, render multi-sensor fusion a challenge. As a result, most often, systems do not exploit all the sensor information available in exchange for simplicity. For example, on a mission requiring transition of the robot from indoors to outdoors, it is the norm to ignore the Global Positioning System (GPS) signals which become freely available once outdoors and instead, rely only on sensor feeds (e.g., vision and laser) continuously available throughout the mission. Naturally, this comes at the expense of robustness and accuracy in real deployment.
This paper presents a generic framework, dubbed MultiSensor-Fusion Extended Kalman Filter (MSF-EKF), able to process delayed, relative and absolute measurements from a theoretically unlimited number of different sensors and sensor types, while allowing self-calibration of the sensor-suite online. The modularity of MSF-EKF allows seamless handling of additional lost sensor signals during operation while employing a state buffering scheme augmented with Iterated EKF (IEKF) updates to allow for efficient re-linearization of the prediction to get near optimal linearization points for both absolute and relative state updates. We demonstrate our approach in outdoor navigation experiments using a Micro Aerial Vehicle (MAV) equipped with a GPS receiver as well as visual, inertial, and pressure sensors." ] }
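The (Extended) Kalman Filter fusion that this row's references build on can be illustrated with the scalar update step. The sketch below fuses two hypothetical range sensors observing the same static quantity; with a static state, the Kalman update reduces to inverse-variance weighting:

```python
def fuse(est, var, z, var_z):
    """One scalar Kalman update: fold measurement (z, var_z) into estimate (est, var)."""
    k = var / (var + var_z)          # Kalman gain: how much to trust the new reading
    return est + k * (z - est), (1 - k) * var

# Two hypothetical range sensors observing the same static distance.
est, var = 10.0, 4.0                  # reading from sensor 1 taken as the prior
est, var = fuse(est, var, 12.0, 4.0)  # fold in sensor 2
print(est, var)                       # 11.0 2.0 -- equal variances average the readings
```

The fused variance is smaller than either sensor's alone, which is the whole point of fusion; a full EKF adds a motion-model prediction step and linearization, omitted here.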
1703.04550
2952367428
Deep reinforcement learning is becoming increasingly popular for robot control, the aim being for a robot to self-learn useful feature representations from unstructured sensory input that lead to an optimal actuation policy. In addition to sensors mounted on the robot, sensors might also be deployed in the environment, although these might need to be accessed via an unreliable wireless connection. In this paper, we demonstrate deep neural network architectures that are able to fuse information coming from multiple sensors and are robust to sensor failures at runtime. We evaluate our method on a search-and-pick task for a robot, both in simulation and in the real world.
Work has been done on fusing multiple inputs using a deep learning approach, by learning a shared representation between all inputs @cite_0 . @cite_1 present a two-stream CNN for RGB-D object recognition: both RGB and depth data are first processed separately by a pretrained CNN, whose outputs are then concatenated and merged in a fully connected layer with a softmax classifier. In @cite_2 , RGB-D data is used for detecting robotic grasps, with a structured regularization penalty added when concatenating multi-modal features. @cite_17 fuse bi-modal image-text and audio-video data using deep Boltzmann machines. In this paper, we use neural networks to fuse lidar data from multiple sensors, trained end-to-end using a deep reinforcement learning approach.
{ "cite_N": [ "@cite_0", "@cite_1", "@cite_17", "@cite_2" ], "mid": [ "", "1012273433", "2184188583", "1999156278" ], "abstract": [ "", "Robust object recognition is a crucial ingredient of many, if not all, real-world robotics applications. This paper leverages recent progress on Convolutional Neural Networks (CNNs) and proposes a novel RGB-D architecture for object recognition. Our architecture is composed of two separate CNN processing streams - one for each modality - which are consecutively combined with a late fusion network. We focus on learning with imperfect sensor data, a typical problem in real-world robotics tasks. For accurate learning, we introduce a multi-stage training methodology and two crucial ingredients for handling depth data with CNNs. The first, an effective encoding of depth information for CNNs that enables learning without the need for large depth datasets. The second, a data augmentation scheme for robust learning with depth images by corrupting them with realistic noise patterns. We present state-of-the-art results on the RGB-D object dataset and show recognition in challenging RGB-D real-world noisy settings.", "Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa. 
Our models are validated on the CUAVE and AVLetters datasets on audio-visual speech classification, demonstrating best published visual speech classification on AVLetters and effective shared representation learning.", "We consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects. In this work, we apply a deep learning approach to solve this problem, which avoids time-consuming hand-design of features. This presents two main challenges. First, we need to evaluate a huge number of candidate grasps. In order to make detection fast and robust, we present a two-step cascaded system with two deep networks, where the top detections from the first are re-evaluated by the second. The first network has fewer features, is faster to run, and can effectively prune out unlikely candidate grasps. The second, with more features, is slower but has to run only on the top few detections. Second, we need to handle multimodal inputs effectively, for which we present a method that applies structured regularization on the weights based on multimodal group regularization. We show that our method improves performance on an RGBD robotic grasping dataset, and can be used to successfully execute grasps on two different robotic platforms." ] }
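The late-fusion scheme this row describes (separate per-modality streams whose outputs are concatenated and merged by a fully connected softmax layer) can be sketched with plain NumPy. The weights here are random stand-ins, not a trained model, and the feature dimensions are invented:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def late_fusion(feat_rgb, feat_depth, W, b):
    """Concatenate per-modality features, then classify with one linear layer."""
    fused = np.concatenate([feat_rgb, feat_depth])
    return softmax(W @ fused + b)

rng = np.random.default_rng(0)
feat_rgb = rng.normal(size=3)                # stand-in for an RGB-stream embedding
feat_depth = rng.normal(size=3)              # stand-in for a depth-stream embedding
W, b = rng.normal(size=(2, 6)), np.zeros(2)  # hypothetical 2-class fusion head
probs = late_fusion(feat_rgb, feat_depth, W, b)
print(probs.sum())  # probabilities sum to 1
```

In the cited architectures each stream is itself a CNN and the fusion head is trained jointly; only the concatenate-then-classify structure is shown here.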
1703.04482
2596225776
Spambot detection in online social networks is a long-lasting challenge involving the study and design of detection techniques capable of efficiently identifying ever-evolving spammers. Recently, a new wave of social spambots has emerged, with advanced human-like characteristics that allow them to go undetected even by current state-of-the-art algorithms. In this paper, we show that efficient spambot detection can be achieved via an in-depth analysis of their collective behaviors, exploiting the digital DNA technique for modeling the behaviors of social network users. Inspired by its biological counterpart, the digital DNA representation encodes the behavioral lifetime of a digital account in a sequence of characters. Then, we define a similarity measure for such digital DNA sequences. We build upon digital DNA and the similarity between groups of users to characterize both genuine accounts and spambots. Leveraging such a characterization, we design the Social Fingerprinting technique, which is able to discriminate between spambots and genuine accounts in both a supervised and an unsupervised fashion. We also evaluate the effectiveness of Social Fingerprinting and compare it with three state-of-the-art detection techniques, showing the superiority of our solution. Finally, among the peculiarities of our approach is the possibility to apply off-the-shelf DNA analysis techniques to study online users' behaviors and to rely efficiently on a limited number of lightweight account characteristics.
In a nutshell, spammers are accounts that advertise unsolicited and often harmful content containing links to malicious pages @cite_48 ; bots are computer programs that control social accounts, stealthy enough to mimic real users @cite_38 ; while cyborgs interweave characteristics of both manual and automated behavior @cite_47 . Finally, there are fake followers, namely accounts massively created to follow a target account, which can be bought from online markets @cite_22 @cite_18 and have also attracted the interest of mass media, with sometimes questionable results @cite_34 . Each of these categories has been the subject of several investigations.
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_22", "@cite_48", "@cite_47", "@cite_34" ], "mid": [ "1992685726", "1849719402", "2011366667", "1986678144", "2072715695", "2165985862" ], "abstract": [ "Online Social Networks (OSNs) have become an integral part of today's Web. Politicians, celebrities, revolutionists, and others use OSNs as a podium to deliver their message to millions of active web users. Unfortunately, in the wrong hands, OSNs can be used to run astroturf campaigns to spread misinformation and propaganda. Such campaigns usually start off by infiltrating a targeted OSN on a large scale. In this paper, we evaluate how vulnerable OSNs are to a large-scale infiltration by socialbots: computer programs that control OSN accounts and mimic real users. We adopt a traditional web-based botnet design and build a Socialbot Network (SbN): a group of adaptive socialbots that are orchestrated in a command-and-control fashion. We operated such an SbN on Facebook---a 750 million user OSN---for about 8 weeks. We collected data related to users' behavior in response to a large-scale infiltration where socialbots were used to connect to a large number of Facebook users. Our results show that (1) OSNs, such as Facebook, can be infiltrated with a success rate of up to 80%, (2) depending on users' privacy settings, a successful infiltration can result in privacy breaches where even more users' data are exposed when compared to a purely public access, and (3) in practice, OSN security defenses, such as the Facebook Immune System, are not effective enough in detecting or stopping a large-scale infiltration as it occurs.", "Fake followers are those Twitter accounts specifically created to inflate the number of followers of a target account. Fake followers are dangerous for the social platform and beyond, since they may alter concepts like popularity and influence in the Twittersphere, hence impacting economy, politics, and society.
In this paper, we contribute along different dimensions. First, we review some of the most relevant existing features and rules (proposed by Academia and Media) for anomalous Twitter account detection. Second, we create a baseline dataset of verified human and fake follower accounts. This baseline dataset is publicly available to the scientific community. Then, we exploit the baseline dataset to train a set of machine-learning classifiers built over the reviewed rules and features. Our results show that most of the rules proposed by Media provide unsatisfactory performance in revealing fake followers, while features proposed in the past by Academia for spam detection provide good results. Building on the most promising features, we revise the classifiers both in terms of reduction of overfitting and cost for gathering the data needed to compute the features. The final result is a novel Class A classifier, general enough to thwart overfitting, lightweight thanks to the usage of the less costly features, and still able to correctly classify more than 95% of the accounts of the original training set. We ultimately perform an information fusion-based sensitivity analysis, to assess the global sensitivity of each of the features employed by the classifier. The findings reported in this paper, other than being supported by a thorough experimental methodology and interesting on their own, also pave the way for further investigation on the novel issue of fake Twitter followers.", "The users of microblogging services, such as Twitter, use the count of followers of an account as a measure of its reputation or influence. For those unwilling or unable to attract followers naturally, a growing industry of \"Twitter follower markets\" provides followers for sale. Some markets use fake accounts to boost the follower count of their customers, while others rely on a pyramid scheme to turn non-paying customers into followers for each other, and into followers for paying customers.
In this paper, we present a detailed study of Twitter follower markets, report in detail on both the static and dynamic properties of customers of these markets, and develop and evaluate multiple techniques for detecting these activities. We show that our detection system is robust and reliable, and can detect a significant number of customers in the wild.", "Social networking has become a popular way for users to meet and interact online. Users spend a significant amount of time on popular social network platforms (such as Facebook, MySpace, or Twitter), storing and sharing a wealth of personal information. This information, as well as the possibility of contacting thousands of users, also attracts the interest of cybercriminals. For example, cybercriminals might exploit the implicit trust relationships between users in order to lure victims to malicious websites. As another example, cybercriminals might find personal information valuable for identity theft or to drive targeted spam campaigns. In this paper, we analyze to which extent spam has entered social networks. More precisely, we analyze how spammers who target social networking sites operate. To collect the data about spamming activity, we created a large and diverse set of \"honey-profiles\" on three large social networking sites, and logged the kind of contacts and messages that they received. We then analyzed the collected data and identified anomalous behavior of users who contacted our profiles. Based on the analysis of this behavior, we developed techniques to detect spammers in social networks, and we aggregated their messages in large spam campaigns. Our results show that it is possible to automatically identify the accounts used by spammers, and our analysis was used for take-down efforts in a real-world social network. 
More precisely, during this study, we collaborated with Twitter and correctly detected and deleted 15,857 spam profiles.", "Twitter is a new web application playing dual roles of online social networking and microblogging. Users communicate with each other by publishing text-based posts. The popularity and open structure of Twitter have attracted a large number of automated programs, known as bots, which appear to be a double-edged sword to Twitter. Legitimate bots generate a large number of benign tweets delivering news and updating feeds, while malicious bots spread spam or malicious content. More interestingly, in the middle between human and bot, there has emerged the cyborg, referring to either a bot-assisted human or a human-assisted bot. To assist human users in identifying who they are interacting with, this paper focuses on the classification of human, bot, and cyborg accounts on Twitter. We first conduct a set of large-scale measurements with a collection of over 500,000 accounts. We observe the difference among human, bot, and cyborg in terms of tweeting behavior, tweet content, and account properties. Based on the measurement results, we propose a classification system that includes the following four parts: 1) an entropy-based component, 2) a spam detection component, 3) an account properties component, and 4) a decision maker. It uses the combination of features extracted from an unknown user to determine the likelihood of being a human, bot, or cyborg. Our experimental evaluation demonstrates the efficacy of the proposed classification system.", "Analytic tools are beginning to be largely employed, given their ability to rank, e.g., the visibility of social media users. Visibility that, in turn, can have a monetary value, since popular social media people usually either anticipate or establish trends that could impact the real world (at least, from a consumer point of view).
The above rationale has fostered the flourishing of private companies providing statistical results for social media analysis. These results have been accepted, and largely diffused, by media without any apparent scrutiny, while Academia has moderately focused its attention on this phenomenon. In this paper, we provide evidence that analytic results provided by field-flagship companies are questionable (at least). In particular, we focus on Twitter and its \"fake followers\". We survey popular Twitter analytics that count the fake followers of some target account. We perform a series of experiments aimed at verifying the trustworthiness of their results. We compare the results of such tools with a machine-learning classifier whose methodology rests on a scientific basis and a sound sampling scheme. The findings of this work call for a serious re-thinking of the methodology currently used by companies providing analytic results, whose present deliveries seem to lack any reliability." ] }
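A natural way to compare the character-encoded behavioral sequences ("digital DNA") from the abstract above is the longest common substring: accounts driven by the same automation share unusually long identical behavioral runs, while genuine users do not. A sketch, with a hypothetical three-letter alphabet standing in for account actions:

```python
def longest_common_substring(a, b):
    """Length of the longest substring shared by two behavioral DNA strings."""
    best = 0
    dp = [0] * (len(b) + 1)  # dp[j]: longest common suffix ending at a[i], b[j-1]
    for ca in a:
        prev = 0
        for j, cb in enumerate(b, 1):
            cur = dp[j]
            dp[j] = prev + 1 if ca == cb else 0
            best = max(best, dp[j])
            prev = cur
    return best

# Toy digital DNA: A = tweet, C = reply, T = retweet (hypothetical alphabet).
bot1, bot2, human = "ACACACAC", "CACACACT", "ATTCAGCA"
print(longest_common_substring(bot1, bot2))   # 7: both share the run "CACACAC"
print(longest_common_substring(bot1, human))  # 2: only short accidental overlaps
```

Whether such a substring-based similarity is the exact measure used by the paper is not stated in this excerpt; the sketch only illustrates why group-level sequence similarity separates coordinated bots from genuine accounts.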
1703.04482
2596225776
Spambot detection in online social networks is a long-lasting challenge involving the study and design of detection techniques capable of efficiently identifying ever-evolving spammers. Recently, a new wave of social spambots has emerged, with advanced human-like characteristics that allow them to go undetected even by current state-of-the-art algorithms. In this paper, we show that efficient spambot detection can be achieved via an in-depth analysis of their collective behaviors, exploiting the digital DNA technique for modeling the behaviors of social network users. Inspired by its biological counterpart, the digital DNA representation encodes the behavioral lifetime of a digital account in a sequence of characters. Then, we define a similarity measure for such digital DNA sequences. We build upon digital DNA and the similarity between groups of users to characterize both genuine accounts and spambots. Leveraging such a characterization, we design the Social Fingerprinting technique, which is able to discriminate between spambots and genuine accounts in both a supervised and an unsupervised fashion. We also evaluate the effectiveness of Social Fingerprinting and compare it with three state-of-the-art detection techniques, showing the superiority of our solution. Finally, among the peculiarities of our approach is the possibility to apply off-the-shelf DNA analysis techniques to study online users' behaviors and to rely efficiently on a limited number of lightweight account characteristics.
As an example, for spam detection one branch of research mined the textual content of tweets @cite_14 , others studied the redirection of embedded URLs in tweets @cite_43 , and still others classified the URLs' landing pages @cite_17 . The work in @cite_8 moves beyond the difficulty of labeling tweets without URLs as spam by proposing a composite tool able to match incoming tweets against the underlying templates commonly used by spammers.
{ "cite_N": [ "@cite_43", "@cite_14", "@cite_8", "@cite_17" ], "mid": [ "2158568356", "2407789839", "2157679667", "2163764145" ], "abstract": [ "Twitter is prone to malicious tweets containing URLs for spam, phishing, and malware distribution. Conventional Twitter spam detection schemes utilize account features such as the ratio of tweets containing URLs and the account creation date, or relation features in the Twitter graph. These detection schemes are ineffective against feature fabrications or consume much time and resources. Conventional suspicious URL detection schemes utilize several features including lexical features of URLs, URL redirection, HTML content, and dynamic behavior. However, evading techniques such as time-based evasion and crawler evasion exist. In this paper, we propose WarningBird, a suspicious URL detection system for Twitter. Our system investigates correlations of URL redirect chains extracted from several tweets. Because attackers have limited resources and usually reuse them, their URL redirect chains frequently share the same URLs. We develop methods to discover correlated URL redirect chains using the frequently shared URLs and to determine their suspiciousness. We collect numerous tweets from the Twitter public timeline and build a statistical classifier using them. Evaluation results show that our classifier accurately and efficiently detects suspicious URLs. We also present WarningBird as a near real-time system for classifying suspicious URLs in the Twitter stream.", "Online social networks (OSNs) are extremely popular among Internet users. Unfortunately, in the wrong hands, they are also effective tools for executing spam campaigns. In this paper, we present an online spam filtering system that can be deployed as a component of the OSN platform to inspect messages generated by users in real-time. We propose to reconstruct spam messages into campaigns for classification rather than examine them individually. 
Although campaign identification has been used for offline spam analysis, we apply this technique to aid the online spam detection problem with sufficiently low overhead. Accordingly, our system adopts a set of novel features that effectively distinguish spam campaigns. It drops messages classified as “spam” before they reach the intended recipients, thus protecting them from various kinds of fraud. We evaluate the system using 187 million wall posts collected from Facebook and 17 million tweets collected from Twitter. In different parameter settings, the true positive rate reaches 80.9% while the false positive rate reaches 0.19% in the best case. In addition, it stays accurate for more than 9 months after the initial training phase. Once deployed, it can constantly secure the OSNs without the need for frequent re-training. Finally, tested on a server machine with eight cores (Xeon E5520 2.2 GHz) and 16 GB memory, the system achieves an average throughput of 1580 messages/sec and an average processing latency of 21.5 ms on the Facebook dataset.
Tangram automatically divides OSN spam into segments and uses the segments to construct templates to filter future spam. Experimental results show that Tangram is highly accurate and can rapidly generate templates to throttle newly emerged campaigns. Specifically, Tangram detects the most prevalent template-based spam with 95.7 true positive rate, whereas the existing template generation approach detects only 32.3 . The integration of Tangram and its auxiliary spam filter achieves an overall accuracy of 85.4 true positive rate and 0.33 false positive rate.", "On the heels of the widespread adoption of web services such as social networks and URL shorteners, scams, phishing, and malware have become regular threats. Despite extensive research, email-based spam filtering techniques generally fall short for protecting other web services. To better address this need, we present Monarch, a real-time system that crawls URLs as they are submitted to web services and determines whether the URLs direct to spam. We evaluate the viability of Monarch and the fundamental challenges that arise due to the diversity of web service spam. We show that Monarch can provide accurate, real-time protection, but that the underlying characteristics of spam do not generalize across web services. In particular, we find that spam targeting email qualitatively differs in significant ways from spam campaigns targeting Twitter. We explore the distinctions between email and Twitter spam, including the abuse of public web hosting and redirector services. Finally, we demonstrate Monarch's scalability, showing our system could protect a service such as Twitter -- which needs to process 15 million URLs day -- for a bit under $800 day." ] }
1703.04482
2596225776
Spambot detection in online social networks is a long-lasting challenge involving the study and design of detection techniques capable of efficiently identifying ever-evolving spammers. Recently, a new wave of social spambots has emerged, with advanced human-like characteristics that allow them to go undetected even by current state-of-the-art algorithms. In this paper, we show that efficient spambot detection can be achieved via an in-depth analysis of their collective behaviors, exploiting the digital DNA technique for modeling the behaviors of social network users. Inspired by its biological counterpart, in the digital DNA representation the behavioral lifetime of a digital account is encoded in a sequence of characters. Then, we define a similarity measure for such digital DNA sequences. We build upon digital DNA and the similarity between groups of users to characterize both genuine accounts and spambots. Leveraging such a characterization, we design the Social Fingerprinting technique, which is able to discriminate among spambots and genuine accounts in both a supervised and an unsupervised fashion. We also evaluate the effectiveness of Social Fingerprinting and we compare it with three state-of-the-art detection techniques, showing the superiority of our solution. Finally, among the peculiarities of our approach is the possibility to apply off-the-shelf DNA analysis techniques to study online users' behaviors and to efficiently rely on a limited number of lightweight account characteristics.
Other work investigated spammers through a multi-feature approach, combining features of an account's profile, behavior, and timeline. Examples of such analyses include @cite_48 @cite_24 @cite_41 . In particular, @cite_24 designed a series of novel criteria and demonstrated their efficacy in detecting those spammers that evade existing detection techniques.
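The multi-feature idea can be sketched as follows: per-account features from the profile and the timeline are combined into one vector that a classifier then scores. The feature names and the toy decision rule below are illustrative assumptions, not the features of the cited works.

```python
# Sketch of multi-feature spammer detection. Features and thresholds
# are hypothetical; real systems feed such vectors to a trained
# machine-learning classifier rather than a hand-set rule.

def account_features(account):
    tweets = account["tweets"]
    url_ratio = sum("http" in t for t in tweets) / max(len(tweets), 1)
    ff_ratio = account["friends"] / max(account["followers"], 1)
    return {"url_ratio": url_ratio, "ff_ratio": ff_ratio}

def looks_like_spammer(account):
    f = account_features(account)
    # Toy rule: mostly-URL timeline and following far more than followed.
    return f["url_ratio"] > 0.8 and f["ff_ratio"] > 10

bot = {"tweets": ["buy http://x", "win http://y"], "friends": 5000, "followers": 12}
print(looks_like_spammer(bot))  # True
```

As @cite_24 shows, evolved spammers deliberately manipulate exactly these kinds of per-account features, which is why the evasion-resistant features of that work go beyond simple ratios.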
{ "cite_N": [ "@cite_24", "@cite_48", "@cite_41" ], "mid": [ "1998871422", "1986678144", "2085704997" ], "abstract": [ "To date, as one of the most popular online social networks (OSNs), Twitter is paying its dues as more and more spammers set their sights on this microblogging site. Twitter spammers can achieve their malicious goals such as sending spam, spreading malware, hosting botnet command and control (C&C) channels, and launching other underground illicit activities. Due to the significance and indispensability of detecting and suspending those spam accounts, many researchers along with the engineers at Twitter Inc. have devoted themselves to keeping Twitter as spam-free online communities. Most of the existing studies utilize machine learning techniques to detect Twitter spammers. “While the priest climbs a post, the devil climbs ten.” Twitter spammers are evolving to evade existing detection features. In this paper, we first make a comprehensive and empirical analysis of the evasion tactics utilized by Twitter spammers. We further design several new detection features to detect more Twitter spammers. In addition, to deeply understand the effectiveness and difficulties of using machine learning features to detect spammers, we analyze the robustness of 24 detection features that are commonly utilized in the literature as well as our proposed ones. Through our experiments, we show that our new designed features are much more effective to be used to detect (even evasive) Twitter spammers. According to our evaluation, while keeping an even lower false positive rate, the detection rate using our new feature set is also significantly higher than that of existing work. To the best of our knowledge, this work is the first empirical study and evaluation of the effect of evasion tactics utilized by Twitter spammers and is a valuable supplement to this line of research.", "Social networking has become a popular way for users to meet and interact online. 
Users spend a significant amount of time on popular social network platforms (such as Facebook, MySpace, or Twitter), storing and sharing a wealth of personal information. This information, as well as the possibility of contacting thousands of users, also attracts the interest of cybercriminals. For example, cybercriminals might exploit the implicit trust relationships between users in order to lure victims to malicious websites. As another example, cybercriminals might find personal information valuable for identity theft or to drive targeted spam campaigns. In this paper, we analyze to which extent spam has entered social networks. More precisely, we analyze how spammers who target social networking sites operate. To collect the data about spamming activity, we created a large and diverse set of \"honey-profiles\" on three large social networking sites, and logged the kind of contacts and messages that they received. We then analyzed the collected data and identified anomalous behavior of users who contacted our profiles. Based on the analysis of this behavior, we developed techniques to detect spammers in social networks, and we aggregated their messages in large spam campaigns. Our results show that it is possible to automatically identify the accounts used by spammers, and our analysis was used for take-down efforts in a real-world social network. More precisely, during this study, we collaborated with Twitter and correctly detected and deleted 15,857 spam profiles.", "As the microblogging service (such as Weibo) is becoming popular, spam becomes a serious problem of affecting the credibility and readability of Online Social Networks. Most existing studies took use of a set of features to identify spam, but without the consideration of the overlap and dependency among different features. 
In this study, we investigate the problem of spam detection by analyzing real spam dataset collections of Weibo and propose a novel hybrid model of spammer detection, called SDHM, which utilizing significant features, i.e. user behavior information, online social network attributes and text content characteristics, in an organic way. Experiments on real Weibo dataset demonstrate the power of the proposed hybrid model and the promising performance." ] }
Our previous work in @cite_18 considers fake Twitter followers. Since spammers, bots, and genuine accounts alike could fall into this category, we tested a series of rules and features from both the grey literature and Academia on a reference dataset of humans and fake followers. Our main contributions were: (i) pruning the rules and features that performed worst in detecting fake followers, and (ii) implementing a classifier that significantly reduces both overfitting and the cost of data gathering.
{ "cite_N": [ "@cite_18" ], "mid": [ "1849719402" ], "abstract": [ "Fake followers are those Twitter accounts specifically created to inflate the number of followers of a target account. Fake followers are dangerous for the social platform and beyond, since they may alter concepts like popularity and influence in the Twittersphere-hence impacting on economy, politics, and society. In this paper, we contribute along different dimensions. First, we review some of the most relevant existing features and rules (proposed by Academia and Media) for anomalous Twitter accounts detection. Second, we create a baseline dataset of verified human and fake follower accounts. Such baseline dataset is publicly available to the scientific community. Then, we exploit the baseline dataset to train a set of machine-learning classifiers built over the reviewed rules and features. Our results show that most of the rules proposed by Media provide unsatisfactory performance in revealing fake followers, while features proposed in the past by Academia for spam detection provide good results. Building on the most promising features, we revise the classifiers both in terms of reduction of overfitting and cost for gathering the data needed to compute the features. The final result is a novel Class A classifier, general enough to thwart overfitting, lightweight thanks to the usage of the less costly features, and still able to correctly classify more than 95 of the accounts of the original training set. We ultimately perform an information fusion-based sensitivity analysis, to assess the global sensitivity of each of the features employed by the classifier.The findings reported in this paper, other than being supported by a thorough experimental methodology and interesting on their own, also pave the way for further investigation on the novel issue of fake Twitter followers." ] }
Remarkably, we observe a significant shift taking place over the last two years. As observed in @cite_55 , new social bots are rising, whose peculiarity emerges only when considering their collective behavior. As the analysis of the datasets introduced in  shows, the new waves of social bots are such that, if the accounts are considered one by one, they are no longer distinguishable from genuine ones. We claim that such social spambots represent the third and most novel generation of spambots, following the original wave dating back to the 2000s and a second generation that emerged around 2011 (described by Yang in @cite_24 ). Interestingly, the thesis of a third, recent generation of spambots is also supported by related work carried out on datasets dated around the early 2010s, such as @cite_1 , which concluded that malicious accounts in a group appeared disconnected from one another and did not behave similarly to each other.
{ "cite_N": [ "@cite_24", "@cite_55", "@cite_1" ], "mid": [ "1998871422", "1837843568", "2092277251" ], "abstract": [ "To date, as one of the most popular online social networks (OSNs), Twitter is paying its dues as more and more spammers set their sights on this microblogging site. Twitter spammers can achieve their malicious goals such as sending spam, spreading malware, hosting botnet command and control (C&C) channels, and launching other underground illicit activities. Due to the significance and indispensability of detecting and suspending those spam accounts, many researchers along with the engineers at Twitter Inc. have devoted themselves to keeping Twitter as spam-free online communities. Most of the existing studies utilize machine learning techniques to detect Twitter spammers. “While the priest climbs a post, the devil climbs ten.” Twitter spammers are evolving to evade existing detection features. In this paper, we first make a comprehensive and empirical analysis of the evasion tactics utilized by Twitter spammers. We further design several new detection features to detect more Twitter spammers. In addition, to deeply understand the effectiveness and difficulties of using machine learning features to detect spammers, we analyze the robustness of 24 detection features that are commonly utilized in the literature as well as our proposed ones. Through our experiments, we show that our new designed features are much more effective to be used to detect (even evasive) Twitter spammers. According to our evaluation, while keeping an even lower false positive rate, the detection rate using our new feature set is also significantly higher than that of existing work. To the best of our knowledge, this work is the first empirical study and evaluation of the effect of evasion tactics utilized by Twitter spammers and is a valuable supplement to this line of research.", "Today's social bots are sophisticated and sometimes menacing. 
Indeed, their presence can endanger online ecosystems as well as our society.", "Sybil accounts are fake identities created to unfairly increase the power or resources of a single malicious user. Researchers have long known about the existence of Sybil accounts in online communities such as file-sharing systems, but they have not been able to perform large-scale measurements to detect them or measure their activities. In this article, we describe our efforts to detect, characterize, and understand Sybil account activity in the Renren Online Social Network (OSN). We use ground truth provided by Renren Inc. to build measurement-based Sybil detectors and deploy them on Renren to detect more than 100,000 Sybil accounts. Using our full dataset of 650,000 Sybils, we examine several aspects of Sybil behavior. First, we study their link creation behavior and find that contrary to prior conjecture, Sybils in OSNs do not form tight-knit communities. Next, we examine the fine-grained behaviors of Sybils on Renren using clickstream data. Third, we investigate behind-the-scenes collusion between large groups of Sybils. Our results reveal that Sybils with no explicit social ties still act in concert to launch attacks. Finally, we investigate enhanced techniques to identify stealthy Sybils. In summary, our study advances the understanding of Sybil behavior on OSNs and shows that Sybils can effectively avoid existing community-based Sybil detectors. We hope that our results will foster new research on Sybil detection that is based on novel types of Sybil features." ] }
The works in @cite_31 and @cite_32 study connectivity patterns in large graphs to let unexpected behaviors emerge. Building on the observation that unexpected behaviors feature lockstep characteristics, e.g., large groups of followers connecting to the same groups of followees, the authors show that lockstep behaviors in the social graph correspond to dense blocks in the adjacency matrix of the graph. Furthermore, they propose an algorithm to spot users exhibiting such unexpected behaviors.
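The lockstep intuition can be sketched with an exact toy version: followers that connect to the very same set of followees form a dense block in the adjacency matrix, so simply grouping followers by their followee set surfaces them. Real systems like CopyCatch solve a much harder approximate, time-constrained variant; this exact grouping is only illustrative.

```python
# Sketch of lockstep detection: group followers by identical followee
# sets. A group of size >= min_size corresponds to a dense block in the
# follower-followee adjacency matrix.

from collections import defaultdict

def lockstep_groups(edges, min_size=3):
    """edges: iterable of (follower, followee) pairs. Returns groups of
    followers whose followee sets are identical."""
    followees = defaultdict(set)
    for follower, followee in edges:
        followees[follower].add(followee)
    groups = defaultdict(list)
    for follower, fset in followees.items():
        groups[frozenset(fset)].append(follower)
    return [sorted(g) for g in groups.values() if len(g) >= min_size]

edges = [(u, v) for u in ("b1", "b2", "b3") for v in ("t1", "t2")]
edges += [("real", "t1"), ("real", "t9")]
print(lockstep_groups(edges))  # [['b1', 'b2', 'b3']]
```

Greedy attackers can perturb their followee sets to escape exact matching, which is why @cite_31 and @cite_32 instead look for near-dense blocks and their spectral signatures ("rays", "staircases").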
{ "cite_N": [ "@cite_31", "@cite_32" ], "mid": [ "2133591726", "2182891243" ], "abstract": [ "How can web services that depend on user generated content discern fraudulent input by spammers from legitimate input? In this paper we focus on the social network Facebook and the problem of discerning ill-gotten Page Likes, made by spammers hoping to turn a profit, from legitimate Page Likes. Our method, which we refer to as CopyCatch, detects lockstep Page Like patterns on Facebook by analyzing only the social graph between users and Pages and the times at which the edges in the graph (the Likes) were created. We offer the following contributions: (1) We give a novel problem formulation, with a simple concrete definition of suspicious behavior in terms of graph structure and edge constraints. (2) We offer two algorithms to find such suspicious lockstep behavior - one provably-convergent iterative algorithm and one approximate, scalable MapReduce implementation. (3) We show that our method severely limits \"greedy attacks\" and analyze the bounds from the application of the Zarankiewicz problem to our setting. Finally, we demonstrate and discuss the effectiveness of CopyCatch at Facebook and on synthetic data, as well as potential extensions to anomaly detection problems in other domains. CopyCatch is actively in use at Facebook, searching for attacks on Facebook's social graph of over a billion users, many millions of Pages, and billions of Page Likes.", "Given multimillion-node graphs such as \"who-follows-whom\", \"patent-cites-patent\", \"user-likes-page\" and \"actor director-makes-movie\" networks, how can we find unexpected behaviors? When companies operate on the graphs with monetary incentives to sell Twitter \"Followers\" and Facebook page \"Likes\", the graphs show strange connectivity patterns. In this paper, we study a complete graph from a large Twitter-style social network, spanning up to 3.33 billion edges. 
We report strange deviations from typical patterns like smooth degree distributions. We find that such deviations are often due to \"lockstep behavior\" that large groups of followers connect to the same groups of followees. Our first contribution is that we study strange patterns on the adjacency matrix and in the spectral subspaces with respect to several flavors of lockstep. We discover that (a) the lockstep behaviors on the graph shape dense \"block\" in its adjacency matrix and creates \"rays\" in spectral subspaces, and (b) partially overlapping of the behaviors shape \"staircase\" in its adjacency matrix and creates \"pearls\" in spectral subspaces. The second contribution is that we provide a fast algorithm, using the discovery as a guide for practitioners, to detect users who offer the lockstep behaviors in undirected directed bipartite graphs. We carry out extensive experiments on both synthetic and real datasets, as well as public datasets from IMDb and US Patent. The results demonstrate the scalability and effectiveness of our proposed algorithm." ] }
The authors of @cite_56 start from the intuition that, if a collective online action happens once, it is not necessarily fraudulent; if, instead, that collective action repeats over time, especially in reaction to the same kind of event, it probably represents anomalous activity. In particular, the work focuses on retweeting activity, defines features for characterizing retweet threads, and proposes a methodology for catching synchronized frauds.
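The underlying "organic variability vs. synchronized fraud" intuition can be illustrated with a single dispersion statistic: retweets bought from a botnet tend to arrive tightly packed in time, while organic retweets spread out. The statistic and threshold below are illustrative assumptions, not the actual feature set of @cite_56 .

```python
# Toy synchrony check for a retweet thread: flag threads whose retweet
# arrival times (seconds after the original post) have suspiciously low
# dispersion. ND-Sync combines many such features over repeated threads.

from statistics import pstdev

def is_synchronized(retweet_times, max_spread=5.0):
    """True if the thread's retweet times are tightly clustered."""
    return len(retweet_times) > 2 and pstdev(retweet_times) < max_spread

print(is_synchronized([10.0, 11.5, 10.8, 12.1]))  # True  (lockstep-like)
print(is_synchronized([3.0, 600.0, 8000.0]))      # False (organic spread)
```

A single synchronized thread can be coincidental; as the paragraph notes, it is the repetition of such patterns across threads of the same accounts that marks fraud.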
{ "cite_N": [ "@cite_56" ], "mid": [ "919122619" ], "abstract": [ "Given the retweeting activity for the posts of several Twitter users, how can we distinguish organic activity from spammy retweets by paid followers to boost a post’s appearance of popularity? More generally, given groups of observations, can we spot strange groups? Our main intuition is that organic behavior has more variability, while fraudulent behavior, like retweets by botnet members, is more synchronized. We refer to the detection of such synchronized observations as the Synchronization Fraud problem, and we study a specific instance of it, Retweet Fraud Detection, manifested in Twitter. Here, we propose: (A) ND-Sync, an efficient method for detecting group fraud, and (B) a set of carefully designed features for characterizing retweet threads. ND-Sync is effective in spotting retweet fraudsters, robust to different types of abnormal activity, and adaptable as it can easily incorporate additional features. Our method achieves a 97% accuracy on a real dataset of 12 million retweets crawled from Twitter." ] }
SynchroTrap @cite_50 aims at detecting loosely synchronized behaviors across a broad range of social network applications. Time is an important dimension for SynchroTrap (marking a difference from the approach proposed here), since the methodology clusters online accounts on the basis of equal actions they perform within the same time interval.
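SynchroTrap's core clustering criterion can be sketched by bucketing events: accounts performing the same action inside the same time window become candidate members of one loosely synchronized cluster. The window size and the exact grouping scheme below are illustrative simplifications of the per-pair similarity the real system computes at scale.

```python
# Toy version of time-windowed action clustering: group accounts that
# performed the same action within the same `window`-second bucket.

from collections import defaultdict

def synchronized_clusters(events, window=60):
    """events: iterable of (account, action, timestamp) triples."""
    buckets = defaultdict(set)
    for account, action, ts in events:
        buckets[(action, int(ts // window))].add(account)
    return [sorted(accs) for accs in buckets.values() if len(accs) > 1]

events = [("a", "like:page42", 10), ("b", "like:page42", 35),
          ("c", "like:page42", 3600), ("d", "post", 20)]
print(synchronized_clusters(events))  # [['a', 'b']]
```

This time dependence is exactly what distinguishes SynchroTrap from the digital DNA approach, which compares whole behavioral sequences rather than co-occurring timestamps.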
{ "cite_N": [ "@cite_50" ], "mid": [ "2125490153" ], "abstract": [ "The success of online social networks has attracted a constant interest in attacking and exploiting them. Attackers usually control malicious accounts, including both fake and compromised real user accounts, to launch attack campaigns such as social spam, malware distribution, and online rating distortion. To defend against these attacks, we design and implement a malicious account detection system called SynchroTrap. We observe that malicious accounts usually perform loosely synchronized actions in a variety of social network context. Our system clusters user accounts according to the similarity of their actions and uncovers large groups of malicious accounts that act similarly at around the same time for a sustained period of time. We implement SynchroTrap as an incremental processing system on Hadoop and Giraph so that it can process the massive user activity data in a large online social network efficiently. We have deployed our system in five applications at Facebook and Instagram. SynchroTrap was able to unveil more than two million malicious accounts and 1156 large attack campaigns within one month." ] }
Inspired by particle physics, fluid mechanics, and astronomy, the authors of @cite_35 consider group anomalies in a broader sense, not necessarily oriented to groups of spambots. As an example, focusing on the major conferences in the area of artificial intelligence, they consider whether there are published papers whose topics are anomalous for those conferences, leveraging features inherent to both the single component (e.g., the topic of a paper) and the relationships among components (e.g., authors common to different papers).
{ "cite_N": [ "@cite_35" ], "mid": [ "2015172091" ], "abstract": [ "Traditional anomaly detection on social media mostly focuses on individual point anomalies while anomalous phenomena usually occur in groups. Therefore, it is valuable to study the collective behavior of individuals and detect group anomalies. Existing group anomaly detection approaches rely on the assumption that the groups are known, which can hardly be true in real world social media applications. In this article, we take a generative approach by proposing a hierarchical Bayes model: Group Latent Anomaly Detection (GLAD) model. GLAD takes both pairwise and point-wise data as input, automatically infers the groups and detects group anomalies simultaneously. To account for the dynamic properties of the social media data, we further generalize GLAD to its dynamic extension d-GLAD. We conduct extensive experiments to evaluate our models on both synthetic and real world datasets. The empirical results demonstrate that our approach is effective and robust in discovering latent groups and detecting group anomalies." ] }
1703.04482
2596225776
Spambot detection in online social networks is a long-lasting challenge involving the study and design of detection techniques capable of efficiently identifying ever-evolving spammers. Recently, a new wave of social spambots has emerged, with advanced human-like characteristics that allow them to go undetected even by current state-of-the-art algorithms. In this paper, we show that efficient spambot detection can be achieved via an in-depth analysis of their collective behaviors, exploiting the digital DNA technique for modeling the behaviors of social network users. Inspired by its biological counterpart, in the digital DNA representation the behavioral lifetime of a digital account is encoded in a sequence of characters. Then, we define a similarity measure for such digital DNA sequences. We build upon digital DNA and the similarity between groups of users to characterize both genuine accounts and spambots. Leveraging such a characterization, we design the Social Fingerprinting technique, which is able to discriminate among spambots and genuine accounts in both a supervised and an unsupervised fashion. We also evaluate the effectiveness of Social Fingerprinting and compare it with three state-of-the-art detection techniques, showing the superiority of our solution. Finally, among the peculiarities of our approach is the possibility of applying off-the-shelf DNA analysis techniques to study online users' behaviors and to efficiently rely on a limited number of lightweight account characteristics.
We briefly highlight the main differences between our approach and the cited papers. First of all, we consider a single dimension as the basis for letting groups of social accounts emerge: the digital DNA, i.e., the sequence of characters encoding an account's behavior. Secondly, in this work we do not consider properties of the social graph (e.g., a follow link on Twitter or a friendship on Facebook). This leads to the significant advantage of reducing the cost of data gathering. Indeed, approaches based on graph mining (such as @cite_4 ) generally rely on a large quantity of data and can require computationally expensive algorithms to perform their detection @cite_18 . Our proposal, instead, exploits only Twitter timeline data to perform spambot detection.
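The digital-DNA encoding and sequence similarity described above can be sketched as follows. Note that the three-symbol alphabet (A = tweet, C = reply, T = retweet) and the normalized longest-common-substring similarity are illustrative assumptions, not the paper's exact definitions.

```python
# Hypothetical sketch of digital-DNA encoding and a sequence-similarity
# measure; alphabet and normalization are assumed for illustration.

def to_dna(actions):
    """Encode a timeline of actions as a digital-DNA string
    (assumed alphabet: A = tweet, C = reply, T = retweet)."""
    symbols = {"tweet": "A", "reply": "C", "retweet": "T"}
    return "".join(symbols[a] for a in actions)

def longest_common_substring(s1, s2):
    """Length of the longest common substring, via dynamic programming."""
    best = 0
    prev = [0] * (len(s2) + 1)
    for ch1 in s1:
        cur = [0] * (len(s2) + 1)
        for j, ch2 in enumerate(s2, 1):
            if ch1 == ch2:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

def similarity(s1, s2):
    """Similarity in [0, 1]: longest shared behavioral substring
    relative to the longer sequence's length."""
    if not s1 or not s2:
        return 0.0
    return longest_common_substring(s1, s2) / max(len(s1), len(s2))

# Two accounts with identical mechanical posting patterns score 1.0;
# an irregular, human-like timeline scores much lower.
bot1 = to_dna(["tweet", "retweet", "retweet", "tweet"] * 3)
bot2 = to_dna(["tweet", "retweet", "retweet", "tweet"] * 3)
human = to_dna(["tweet", "reply", "retweet", "tweet", "reply", "tweet"])
```

High mutual similarity within a group of accounts is the kind of collective signal the cited work exploits: spambots driven by the same software tend to share long behavioral substrings, while genuine users do not.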
{ "cite_N": [ "@cite_18", "@cite_4" ], "mid": [ "1849719402", "2473813716" ], "abstract": [ "Fake followers are those Twitter accounts specifically created to inflate the number of followers of a target account. Fake followers are dangerous for the social platform and beyond, since they may alter concepts like popularity and influence in the Twittersphere, hence impacting on economy, politics, and society. In this paper, we contribute along different dimensions. First, we review some of the most relevant existing features and rules (proposed by Academia and Media) for anomalous Twitter accounts detection. Second, we create a baseline dataset of verified human and fake follower accounts. Such baseline dataset is publicly available to the scientific community. Then, we exploit the baseline dataset to train a set of machine-learning classifiers built over the reviewed rules and features. Our results show that most of the rules proposed by Media provide unsatisfactory performance in revealing fake followers, while features proposed in the past by Academia for spam detection provide good results. Building on the most promising features, we revise the classifiers both in terms of reduction of overfitting and cost for gathering the data needed to compute the features. The final result is a novel Class A classifier, general enough to thwart overfitting, lightweight thanks to the usage of the less costly features, and still able to correctly classify more than 95% of the accounts of the original training set. 
We ultimately perform an information fusion-based sensitivity analysis, to assess the global sensitivity of each of the features employed by the classifier. The findings reported in this paper, other than being supported by a thorough experimental methodology and interesting on their own, also pave the way for further investigation on the novel issue of fake Twitter followers.", "Given a directed graph of millions of nodes, how can we automatically spot anomalous, suspicious nodes judging only from their connectivity patterns? Suspicious graph patterns show up in many applications, from Twitter users who buy fake followers, manipulating the social network, to botnet members performing distributed denial of service attacks, disturbing the network traffic graph. We propose a fast and effective method, CatchSync, which exploits two of the tell-tale signs left in graphs by fraudsters: (a) synchronized behavior: suspicious nodes have extremely similar behavior patterns because they are often required to perform some task together (such as follow the same user); and (b) rare behavior: their connectivity patterns are very different from the majority. We introduce novel measures to quantify both concepts (“synchronicity” and “normality”) and we propose a parameter-free algorithm that works on the resulting synchronicity-normality plots. Thanks to careful design, CatchSync has the following desirable properties: (a) it is scalable to large datasets, being linear in the graph size; (b) it is parameter free; and (c) it is side-information-oblivious: it can operate using only the topology, without needing labeled data, timing information, and the like, while still being capable of using side information if available. 
We applied CatchSync on three large, real datasets, a 1-billion-edge Twitter social graph and 3-billion-edge and 12-billion-edge Tencent Weibo social graphs, as well as several synthetic ones; CatchSync consistently outperforms existing competitors, both in detection accuracy, by 36% on Twitter and 20% on Tencent Weibo, as well as in speed." ] }
1703.04482
2596225776
Spambot detection in online social networks is a long-lasting challenge involving the study and design of detection techniques capable of efficiently identifying ever-evolving spammers. Recently, a new wave of social spambots has emerged, with advanced human-like characteristics that allow them to go undetected even by current state-of-the-art algorithms. In this paper, we show that efficient spambot detection can be achieved via an in-depth analysis of their collective behaviors, exploiting the digital DNA technique for modeling the behaviors of social network users. Inspired by its biological counterpart, in the digital DNA representation the behavioral lifetime of a digital account is encoded in a sequence of characters. Then, we define a similarity measure for such digital DNA sequences. We build upon digital DNA and the similarity between groups of users to characterize both genuine accounts and spambots. Leveraging such a characterization, we design the Social Fingerprinting technique, which is able to discriminate among spambots and genuine accounts in both a supervised and an unsupervised fashion. We also evaluate the effectiveness of Social Fingerprinting and compare it with three state-of-the-art detection techniques, showing the superiority of our solution. Finally, among the peculiarities of our approach is the possibility of applying off-the-shelf DNA analysis techniques to study online users' behaviors and to efficiently rely on a limited number of lightweight account characteristics.
Furthermore, our DNA-inspired modeling focuses on the concept of a sequence, namely an ordered list of symbols of variable length, drawn from a relatively small alphabet. This marks a clear separation from other well-known behavioral analysis techniques, such as hashing @cite_19 , that do not consider the ordering of the elements.
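A toy example of why ordering matters: two accounts with identical symbol counts are indistinguishable under an order-insensitive, hashing-style (bag-of-actions) representation, yet their sequences clearly differ. The A/T symbols below are an assumed tweet/retweet encoding, used purely for illustration.

```python
# Order-insensitive vs order-sensitive views of the same two timelines.
from collections import Counter

dna_bot = "ATATATATAT"    # strictly alternating tweet/retweet pattern
dna_human = "AATTATAATT"  # same symbol counts, irregular ordering

bag_bot = Counter(dna_bot)      # bag-of-actions view: {A: 5, T: 5}
bag_human = Counter(dna_human)  # identical bag: {A: 5, T: 5}

# The bags collide, so a count-based representation cannot separate them...
print(bag_bot == bag_human)   # the multisets are equal
# ...but the sequences themselves are different, which is exactly the
# information the digital-DNA representation preserves.
print(dna_bot != dna_human)   # the ordered sequences differ
```

This is the design choice the paragraph above argues for: keeping the ordering of actions retains behavioral regularities (such as rigid alternation) that count-based summaries discard.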
{ "cite_N": [ "@cite_19" ], "mid": [ "1853324590" ], "abstract": [ "Due to the simplicity and efficiency, many hashing methods have recently been developed for large-scale similarity search. Most of the existing hashing methods focus on mapping low-level features to binary codes, but neglect attributes that are commonly associated with data samples. Attribute data, such as image tag, product brand, and user profile, can represent human recognition better than low-level features. However, attributes have specific characteristics, including high-dimensional, sparse and categorical properties, which is hardly leveraged into the existing hashing learning frameworks. In this paper, we propose a hashing learning framework, Probabilistic Attributed Hashing (PAH), to integrate attributes with low-level features. The connections between attributes and low-level features are built through sharing a common set of latent binary variables, i.e. hash codes, through which attributes and features can complement each other. Finally, we develop an efficient iterative learning algorithm, which is generally feasible for large-scale applications. Extensive experiments and comparison study are conducted on two public datasets, i.e., DBLP and NUS-WIDE. The results clearly demonstrate that the proposed PAH method substantially outperforms the peer methods." ] }
1703.04482
2596225776
Spambot detection in online social networks is a long-lasting challenge involving the study and design of detection techniques capable of efficiently identifying ever-evolving spammers. Recently, a new wave of social spambots has emerged, with advanced human-like characteristics that allow them to go undetected even by current state-of-the-art algorithms. In this paper, we show that efficient spambot detection can be achieved via an in-depth analysis of their collective behaviors, exploiting the digital DNA technique for modeling the behaviors of social network users. Inspired by its biological counterpart, in the digital DNA representation the behavioral lifetime of a digital account is encoded in a sequence of characters. Then, we define a similarity measure for such digital DNA sequences. We build upon digital DNA and the similarity between groups of users to characterize both genuine accounts and spambots. Leveraging such a characterization, we design the Social Fingerprinting technique, which is able to discriminate among spambots and genuine accounts in both a supervised and an unsupervised fashion. We also evaluate the effectiveness of Social Fingerprinting and compare it with three state-of-the-art detection techniques, showing the superiority of our solution. Finally, among the peculiarities of our approach is the possibility of applying off-the-shelf DNA analysis techniques to study online users' behaviors and to efficiently rely on a limited number of lightweight account characteristics.
In , we will show a comparison of our approach with two unsupervised approaches, namely @cite_20 and @cite_11 , in terms of detection performance. As discussed later on, the results are promising and lead us to believe that digital DNA is a simple and compact, yet powerful, means of detecting the novel waves of social spambots. The intuition behind our approach has been succinctly presented in a magazine paper @cite_45 .
{ "cite_N": [ "@cite_45", "@cite_20", "@cite_11" ], "mid": [ "2963342853", "2090182207", "2020754036" ], "abstract": [ "A novel, simple, and effective approach to modeling online user behavior extracts and analyzes digital DNA sequences from user online actions and uses Twitter as a benchmark to test the proposal. Specifically, the model obtains an incisive and compact DNA-inspired characterization of user actions. Then, standard DNA analysis techniques discriminate between genuine and spambot accounts on Twitter. An experimental campaign supports the proposal, showing its effectiveness and viability. Although Twitter spambot detection is a specific use case on a specific social media platform, the proposed methodology is platform and technology agnostic, paving the way for diverse behavioral characterization tasks.", "In this paper, we present a generic statistical approach to identify spam profiles on Online Social Networks (OSNs). Our study is based on real datasets containing both normal and spam profiles crawled from Facebook and Twitter networks. We have identified a set of 14 generic statistical features to identify spam profiles. The identified features are common to both Facebook and Twitter networks. For the classification task, we have used three different classification algorithms, naive Bayes, JRip, and J48, and evaluated them on both individual and combined datasets to establish the discriminative property of the identified features. The results obtained on the combined dataset have a detection rate (DR) of 0.957 and a false positive rate (FPR) of 0.048, whereas on the Facebook dataset the DR and FPR values are 0.964 and 0.089, respectively, and on the Twitter dataset the DR and FPR values are 0.976 and 0.075, respectively. We have also analyzed the contribution of each individual feature towards the detection accuracy of spam profiles. 
Thereafter, we have considered the 7 most discriminative features and proposed a clustering-based approach to identify spam campaigns on Facebook and Twitter networks.", "The rapid growth of Twitter has triggered a dramatic increase in spam volume and sophistication. The abuse of certain Twitter components such as ''hashtags'', ''mentions'', and shortened URLs enables spammers to operate efficiently. These same features, however, may be a key factor in identifying new spam accounts as shown in previous studies. Our study provides three novel contributions. Firstly, previous studies have approached spam detection as a classification problem, whereas we view it as an anomaly detection problem. Secondly, 95 one-gram features from tweet text were introduced alongside the user information analyzed in previous studies. Finally, to effectively handle the streaming nature of tweets, two stream clustering algorithms, StreamKM++ and DenStream, were modified to facilitate spam identification. Both algorithms clustered normal Twitter users, treating outliers as spammers. Each of these algorithms performed well individually, with StreamKM++ achieving 99% recall and a 6.4% false positive rate; and DenStream producing 99% recall and a 2.8% false positive rate. When used in conjunction, these algorithms reached 100% recall and a 2.2% false positive rate, meaning that our system was able to identify 100% of the spammers in our test while incorrectly detecting only 2.2% of normal users as spammers." ] }
1703.04057
2952888913
Blockchain technologies are taking the world by storm. Public blockchains, such as Bitcoin and Ethereum, enable secure peer-to-peer applications like crypto-currency or smart contracts. Their security and performance are well studied. This paper concerns recent private blockchain systems designed with stronger security (trust) assumptions and performance requirements. These systems target, and aim to disrupt, applications which have so far been implemented on top of database systems, for example banking and finance applications. Multiple platforms for private blockchains are being actively developed and fine-tuned. However, there is a clear lack of a systematic framework with which different systems can be analyzed and compared against each other. Such a framework can be used to assess blockchains' viability as another distributed data processing platform, while helping developers to identify bottlenecks and accordingly improve their platforms. In this paper, we first describe BlockBench, the first evaluation framework for analyzing private blockchains. It serves as a fair means of comparison for different platforms and enables deeper understanding of different system design choices. Any private blockchain can be integrated into BlockBench via simple APIs and benchmarked against workloads that are based on real and synthetic smart contracts. BlockBench measures overall and component-wise performance in terms of throughput, latency, scalability and fault-tolerance. Next, we use BlockBench to conduct a comprehensive evaluation of three major private blockchains: Ethereum, Parity and Hyperledger Fabric. The results demonstrate that these systems are still far from displacing current database systems in traditional data processing workloads. Furthermore, there are gaps in performance among the three systems which are attributed to the design choices at different layers of the software stack.
There are many standard frameworks for benchmarking database systems. OLTP-Bench @cite_22 contains standard workloads such as TPC-C for transactional systems. YCSB @cite_11 contains key-value workloads. HiBench @cite_49 and BigBench @cite_14 feature big-data analytics workloads for MapReduce-like systems. BlockBench shares the same high-level design as these frameworks, but its workloads and main driver are designed specifically for blockchain systems.
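To make the shared design of these harnesses concrete, here is a minimal, hypothetical benchmark driver in the spirit of a YCSB-style framework: it times a workload against a system under test and reports throughput and latency percentiles, the same component-wise metrics the abstract above lists. The in-memory key-value "store" is a stand-in for the real system.

```python
# Minimal sketch of a benchmark driver (assumed structure, not any
# framework's actual API): time each operation, then summarize.
import random
import time

def run_benchmark(store_op, n_ops=10000):
    """Run n_ops operations and return throughput and latency percentiles."""
    latencies = []
    start = time.perf_counter()
    for i in range(n_ops):
        t0 = time.perf_counter()
        store_op(i)                               # one workload operation
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    latencies.sort()
    return {
        "throughput_ops_s": n_ops / elapsed,
        "latency_p50_s": latencies[len(latencies) // 2],
        "latency_p99_s": latencies[int(len(latencies) * 0.99)],
    }

# Stand-in system under test: a trivial in-memory key-value write workload.
store = {}
def kv_write(i):
    store[f"key{i}"] = random.random()

stats = run_benchmark(kv_write)
```

A real harness like OLTP-Bench or BlockBench adds controlled transaction mixtures, request rates, and multiple client threads on top of this skeleton; the measurement loop itself is essentially the same.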
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_49", "@cite_11" ], "mid": [ "2052312648", "2240667924", "", "1985229168" ], "abstract": [ "There is a tremendous interest in big data by academia, industry and a large user base. Several commercial and open source providers unleashed a variety of products to support big data storage and processing. As these products mature, there is a need to evaluate and compare the performance of these systems. In this paper, we present BigBench, an end-to-end big data benchmark proposal. The underlying business model of BigBench is a product retailer. The proposal covers a data model and synthetic data generator that addresses the variety, velocity and volume aspects of big data systems containing structured, semi-structured and unstructured data. The structured part of the BigBench data model is adopted from the TPC-DS benchmark, which is enriched with semi-structured and unstructured data components. The semi-structured part captures registered and guest user clicks on the retailer's website. The unstructured data captures product reviews submitted online. The data generator designed for BigBench provides scalable volumes of raw data based on a scale factor. The BigBench workload is designed around a set of queries against the data model. From a business perspective, the queries cover the different categories of big data analytics proposed by McKinsey. From a technical perspective, the queries are designed to span three different dimensions based on data sources, query processing types and analytic techniques. We illustrate the feasibility of BigBench by implementing it on the Teradata Aster Database. The test includes generating and loading a 200 Gigabyte BigBench data set and testing the workload by executing the BigBench queries (written using Teradata Aster SQL-MR) and reporting their response times.", "Benchmarking is an essential aspect of any database management system (DBMS) effort. 
Despite several recent advancements, such as pre-configured cloud database images and database-as-a-service (DBaaS) offerings, the deployment of a comprehensive testing platform with a diverse set of datasets and workloads is still far from being trivial. In many cases, researchers and developers are limited to a small number of workloads to evaluate the performance characteristics of their work. This is due to the lack of a universal benchmarking infrastructure, and to the difficulty of gaining access to real data and workloads. This results in lots of unnecessary engineering efforts and makes the performance evaluation results difficult to compare. To remedy these problems, we present OLTP-Bench, an extensible \"batteries included\" DBMS benchmarking testbed. The key contributions of OLTP-Bench are its ease of use and extensibility, support for tight control of transaction mixtures, request rates, and access distributions over time, as well as the ability to support all major DBMSs and DBaaS platforms. Moreover, it is bundled with fifteen workloads that all differ in complexity and system demands, including four synthetic workloads, eight workloads from popular benchmarks, and three workloads that are derived from real-world applications. We demonstrate through a comprehensive set of experiments conducted on popular DBMS and DBaaS offerings the different features provided by OLTP-Bench and the effectiveness of our testbed in characterizing the performance of database services.", "", "While the use of MapReduce systems (such as Hadoop) for large scale data analysis has been widely recognized and studied, we have recently seen an explosion in the number of systems developed for cloud data serving. These newer systems address \"cloud OLTP\" applications, though they typically do not support ACID transactions. Examples of systems proposed for cloud serving use include BigTable, PNUTS, Cassandra, HBase, Azure, CouchDB, SimpleDB, Voldemort, and many others. 
Further, they are being applied to a diverse range of applications that differ considerably from traditional (e.g., TPC-C like) serving workloads. The number of emerging cloud serving systems and the wide range of proposed applications, coupled with a lack of apples-to-apples performance comparisons, makes it difficult to understand the tradeoffs between systems and the workloads for which they are suited. We present the \"Yahoo! Cloud Serving Benchmark\" (YCSB) framework, with the goal of facilitating performance comparisons of the new generation of cloud data serving systems. We define a core set of benchmarks and report results for four widely used systems: Cassandra, HBase, Yahoo!'s PNUTS, and a simple sharded MySQL implementation. We also hope to foster the development of additional cloud benchmark suites that represent other classes of applications by making our benchmark tool available via open source. In this regard, a key feature of the YCSB framework tool is that it is extensible--it supports easy definition of new workloads, in addition to making it easy to benchmark new systems." ] }
1703.04079
2951357726
3D shape models are naturally parameterized using vertices and faces, i.e., composed of polygons forming a surface. However, current 3D learning paradigms for predictive and generative tasks using convolutional neural networks focus on a voxelized representation of the object. Lifting convolution operators from the traditional 2D to 3D results in high computational overhead with little additional benefit, as most of the geometry information is contained on the surface boundary. Here we study the problem of directly generating the 3D shape surface of rigid and non-rigid shapes using deep convolutional neural networks. We develop a procedure to create consistent 'geometry images' representing the shape surface of a category of 3D objects. We then use this consistent representation for category-specific shape surface generation from a parametric representation or an image by developing novel extensions of deep residual networks for the task of geometry image generation. Our experiments indicate that our network learns a meaningful representation of shape surfaces, allowing it to interpolate between shape orientations and poses, invent new shape surfaces and reconstruct 3D shape surfaces from previously unseen images.
Creating 3D content is an important problem in computer vision. Early works focused on coherent synthesis of 3D primitives and surface patches @cite_40 . Recent approaches for assembly-based 3D shape creation from components use probabilistic models @cite_1 @cite_13 , or deep-learned models @cite_26 . Estimates of the wireframe of 3D objects are obtained via a 3D geometric object class model in @cite_5 . Kar et al. learn a deformable 3D model for shape reconstruction from a single image @cite_16 . Huang et al. show that joint analysis of image and shape collections enables 3D shape reconstruction from a single image @cite_26 .
{ "cite_N": [ "@cite_26", "@cite_1", "@cite_40", "@cite_5", "@cite_16", "@cite_13" ], "mid": [ "1915142102", "2161960196", "2156756707", "2114111978", "", "2092773680" ], "abstract": [ "We present a method for joint analysis and synthesis of geometrically diverse 3D shape families. Our method first learns part-based templates such that an optimal set of fuzzy point and part correspondences is computed between the shapes of an input collection based on a probabilistic deformation model. In contrast to previous template-based approaches, the geometry and deformation parameters of our part-based templates are learned from scratch. Based on the estimated shape correspondence, our method also learns a probabilistic generative model that hierarchically captures statistical relationships of corresponding surface point positions and parts as well as their existence in the input shapes. A deep learning procedure is used to capture these hierarchical relationships. The resulting generative model is used to produce control point arrangements that drive shape synthesis by combining and deforming parts from the input collection. The generative model also yields compact shape descriptors that are used to perform fine-grained classification. Finally, it can be also coupled with the probabilistic deformation model to further improve shape correspondence. We provide qualitative and quantitative evaluations of our method for shape correspondence, segmentation, fine-grained classification and synthesis. Our experiments demonstrate superior correspondence and segmentation results than previous state-of-the-art approaches.", "Assembly-based modeling is a promising approach to broadening the accessibility of 3D modeling. In assembly-based modeling, new models are assembled from shape components extracted from a database. A key challenge in assembly-based modeling is the identification of relevant components to be presented to the user. 
In this paper, we introduce a probabilistic reasoning approach to this problem. Given a repository of shapes, our approach learns a probabilistic graphical model that encodes semantic and geometric relationships among shape components. The probabilistic model is used to present components that are semantically and stylistically compatible with the 3D model that is being assembled. Our experiments indicate that the probabilistic model increases the relevance of presented components.", "There are several successful systems that provide algorithms that allow for the intersection of polygonal objects or other primitive shapes to create more complex objects. Our intent is to provide similar algorithms for intersecting surface patches. There have been contributions to this concept at the display algorithm level, that is, computing the intersection at the time the frame is generated. In an animation environment, however, it becomes important to incorporate the intersection in the data generation routines, in order that those parts of the intersected object that never contribute to an image are not processed by the display algorithm. This only increases the complexity of the object unnecessarily, and subsequently puts an additional burden on the display algorithms. An algorithm is described which uses a modified Catmull recursive subdivision scheme to find the space curve which is the intersection of two bicubic patches. An associated data structure is discussed which incorporates this curve of intersection in the patch description in a way suitable for efficient display of the intersected object. Sample output of these intersections are shown which serve to illustrate the capabilities and limitations of the described procedures.", "Geometric 3D reasoning at the level of objects has received renewed attention recently in the context of visual scene understanding. The level of geometric detail, however, is typically limited to qualitative representations or coarse boxes. 
This is linked to the fact that today's object class detectors are tuned toward robust 2D matching rather than accurate 3D geometry, encouraged by bounding-box-based benchmarks such as Pascal VOC. In this paper, we revisit ideas from the early days of computer vision, namely, detailed, 3D geometric object class representations for recognition. These representations can recover geometrically far more accurate object hypotheses than just bounding boxes, including continuous estimates of object pose and 3D wireframes with relative 3D positions of object parts. In combination with robust techniques for shape description and inference, we outperform state-of-the-art results in monocular 3D pose estimation. In a series of experiments, we analyze our approach in detail and demonstrate novel applications enabled by such an object class representation, such as fine-grained categorization of cars and bicycles, according to their 3D geometry, and ultrawide baseline matching.", "", "We present an approach to synthesizing shapes from complex domains, by identifying new plausible combinations of components from existing shapes. Our primary contribution is a new generative model of component-based shape structure. The model represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation that can be effectively learned without supervision from a set of compatibly segmented shapes. We evaluate the model on a number of shape datasets with complex structural variability and demonstrate its application to amplification of shape databases and to interactive shape synthesis." ] }
1703.04079
2951357726
3D shape models are naturally parameterized using vertices and faces, i.e., composed of polygons forming a surface. However, current 3D learning paradigms for predictive and generative tasks using convolutional neural networks focus on a voxelized representation of the object. Lifting convolution operators from the traditional 2D to 3D results in high computational overhead with little additional benefit, as most of the geometry information is contained on the surface boundary. Here we study the problem of directly generating the 3D shape surface of rigid and non-rigid shapes using deep convolutional neural networks. We develop a procedure to create consistent 'geometry images' representing the shape surface of a category of 3D objects. We then use this consistent representation for category-specific shape surface generation from a parametric representation or an image by developing novel extensions of deep residual networks for the task of geometry image generation. Our experiments indicate that our network learns a meaningful representation of shape surfaces, allowing it to interpolate between shape orientations and poses, invent new shape surfaces and reconstruct 3D shape surfaces from previously unseen images.
The success of deep learning architectures for generating images @cite_23 @cite_21 has resulted in the extension of these techniques to generate models of 3D shapes. The authors of 3D ShapeNets @cite_22 perform pioneering work on using deep neural nets for 3D shape recognition and completion. Girdhar et al. @cite_12 learn a vector representation for 3D objects using images and CAD objects, which is used for generating 3D shapes from an image. A volumetric denoising auto-encoder is demonstrated for 3D shape completion from noisy inputs in @cite_9 . Choy et al. propose a 3D recurrent reconstruction neural network for 3D shape creation from single or multiple images @cite_37 . A probabilistic latent space of 3D shapes is learnt by extending the generative-adversarial model of @cite_27 to the 3D domain in @cite_17 . All these deep learning methods use a 3D voxel representation for generating the 3D shapes. A conditional generative model is proposed in @cite_14 to infer 3D representations from 2D images. Although this method can generate both 3D voxels and meshes, the mesh representation is limited to standard parameterizations, which restricts shape variability. A 3D interpreter network that estimates the 3D skeleton of a shape is developed in @cite_8 .
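The voxel occupancy representation that all the cited methods rely on, and whose wastefulness for surface data motivates the geometry-image alternative, can be illustrated with a short sketch. The 32^3 resolution and unit-cube normalization are assumptions for illustration.

```python
# Illustrative sketch of binary voxel occupancy: surface points of a shape
# are binned into a fixed-resolution grid. Resolution is an assumed choice.
import numpy as np

def voxelize(points, resolution=32):
    """Map points of shape (N, 3) into a binary occupancy grid."""
    pts = np.asarray(points, dtype=float)
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    scale = (maxs - mins).max() or 1.0
    normalized = (pts - mins) / scale            # fit the shape into the unit cube
    idx = np.clip((normalized * (resolution - 1)).astype(int), 0, resolution - 1)
    grid = np.zeros((resolution,) * 3, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

# Sample points on a unit sphere's surface and voxelize them.
rng = np.random.default_rng(0)
v = rng.normal(size=(5000, 3))
sphere = v / np.linalg.norm(v, axis=1, keepdims=True)
grid = voxelize(sphere, resolution=32)
# Occupied cells grow with the surface area (O(r^2)), while the grid grows
# with the volume (O(r^3)) -- the sparsity argument made in the abstract.
occupancy = grid.sum() / grid.size
```

Because only a thin shell of the 32^3 = 32768 cells is ever occupied for a surface, 3D convolutions over such grids spend most of their compute on empty space, which is precisely the overhead the geometry-image representation avoids.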
{ "cite_N": [ "@cite_37", "@cite_14", "@cite_22", "@cite_8", "@cite_9", "@cite_21", "@cite_27", "@cite_23", "@cite_12", "@cite_17" ], "mid": [ "2342277278", "2469266052", "2951755740", "2345308174", "2338532005", "", "", "", "2335364074", "2949551726" ], "abstract": [ "Inspired by the recent success of methods that employ shape priors to achieve robust 3D reconstructions, we propose a novel recurrent neural network architecture that we call the 3D Recurrent Reconstruction Neural Network (3D-R2N2). The network learns a mapping from images of objects to their underlying 3D shapes from a large collection of synthetic data [13]. Our network takes in one or more images of an object instance from arbitrary viewpoints and outputs a reconstruction of the object in the form of a 3D occupancy grid. Unlike most of the previous works, our network does not require any image annotations or object class labels for training or testing. Our extensive experimental analysis shows that our reconstruction framework (i) outperforms the state-of-the-art methods for single view reconstruction, and (ii) enables the 3D reconstruction of objects in situations when traditional SFM SLAM methods fail (because of lack of texture and or wide baseline).", "A key goal of computer vision is to recover the underlying 3D structure from 2D observations of the world. In this paper we learn strong deep generative models of 3D structures, and recover these structures from 3D and 2D images via probabilistic inference. We demonstrate high-quality samples and report log-likelihoods on several datasets, including ShapeNet [2], and establish the first benchmarks in the literature. We also show how these models and their inference networks can be trained end-to-end from 2D images. 
This demonstrates for the first time the feasibility of learning to infer 3D representations of the world in a purely unsupervised manner.", "3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the-state-of-the-arts in a variety of tasks.", "Understanding 3D object structure from a single image is an important but difficult task in computer vision, mostly due to the lack of 3D object annotations in real images. Previous work tackles this problem by either solving an optimization task given 2D keypoint positions, or training on synthetic data with ground truth 3D information.", "With the advent of affordable depth sensors, 3D capture becomes more and more ubiquitous and already has made its way into commercial products. Yet, capturing the geometry or complete shapes of everyday objects using scanning devices (e.g. 
Kinect) still comes with several challenges that result in noise or even incomplete shapes.", "", "", "", "What is a good vector representation of an object? We believe that it should be generative in 3D, in the sense that it can produce new 3D objects; as well as be predictable from 2D, in the sense that it can be perceived from 2D images. We propose a novel architecture, called the TL-embedding network, to learn an embedding space with these properties. The network consists of two components: (a) an autoencoder that ensures the representation is generative; and (b) a convolutional network that ensures the representation is predictable. This enables tackling a number of tasks including voxel prediction from 2D images and 3D model retrieval. Extensive experimental analysis demonstrates the usefulness and versatility of this embedding.", "We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. Experiments demonstrate that our method generates high-quality 3D objects, and our unsupervisedly learned features achieve impressive performance on 3D object recognition, comparable with those of supervised learning methods." ] }
1703.04079
2951357726
3D shape models are naturally parameterized using vertices and faces, i.e., composed of polygons forming a surface. However, current 3D learning paradigms for predictive and generative tasks using convolutional neural networks focus on a voxelized representation of the object. Lifting convolution operators from the traditional 2D to 3D results in high computational overhead with little additional benefit as most of the geometry information is contained on the surface boundary. Here we study the problem of directly generating the 3D shape surface of rigid and non-rigid shapes using deep convolutional neural networks. We develop a procedure to create consistent 'geometry images' representing the shape surface of a category of 3D objects. We then use this consistent representation for category-specific shape surface generation from a parametric representation or an image by developing novel extensions of deep residual networks for the task of geometry image generation. Our experiments indicate that our network learns a meaningful representation of shape surfaces allowing it to interpolate between shape orientations and poses, invent new shape surfaces and reconstruct 3D shape surfaces from previously unseen images.
Different from all of the above approaches, our thrust is to generate category-specific 3D point clouds representative of a surface, instead of voxels, to represent 3D objects. Our work is motivated by the geometry image representation @cite_10 used for learning 3D shape surfaces in @cite_28 . Our neural network architecture is inspired by deep residual nets @cite_6 , which have achieved impressive results on image recognition tasks, and by the architectural considerations in @cite_36 to generate chairs.
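The residual learning idea behind the deep residual nets cited here is that each block outputs F(x) + x, i.e., the layers only learn a residual on top of an identity shortcut. A minimal NumPy sketch of that principle (not the paper's actual generation network):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """Minimal residual block: the two layers compute a residual F(x),
    the input is added back through an identity shortcut, and the final
    nonlinearity is applied to the sum F(x) + x."""
    return relu(x + relu(x @ W1) @ W2)

d = 8
x = rng.normal(size=(4, d))
# With zero weights the block reduces to the identity (after ReLU),
# which is one intuition for why very deep residual stacks remain
# easy to optimize: doing nothing is trivially representable.
W_zero = np.zeros((d, d))
print(np.allclose(residual_block(x, W_zero, W_zero), relu(x)))  # True
```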
{ "cite_N": [ "@cite_28", "@cite_10", "@cite_36", "@cite_6" ], "mid": [ "2518780089", "", "1893585201", "2949650786" ], "abstract": [ "Surfaces serve as a natural parametrization to 3D shapes. Learning surfaces using convolutional neural networks (CNNs) is a challenging task. Current paradigms to tackle this challenge are to either adapt the convolutional filters to operate on surfaces, learn spectral descriptors defined by the Laplace-Beltrami operator, or to drop surfaces altogether in lieu of voxelized inputs. Here we adopt an approach of converting the 3D shape into a ‘geometry image’ so that standard CNNs can directly be used to learn 3D shapes. We qualitatively and quantitatively validate that creating geometry images using authalic parametrization on a spherical domain is suitable for robust learning of 3D shape surfaces. This spherically parameterized shape is then projected and cut to convert the original 3D shape into a flat and regular geometry image. We propose a way to implicitly learn the topology and structure of 3D shapes using geometry images encoded with suitable features. We show the efficacy of our approach to learn 3D shape surfaces for classification and retrieval tasks on non-rigid and rigid shape datasets.", "", "We train a generative convolutional neural network which is able to generate images of objects given object type, viewpoint, and color. We train the network in a supervised manner on a dataset of rendered 3D chair models. Our experiments show that the network does not merely learn all images by heart, but rather finds a meaningful representation of a 3D chair model allowing it to assess the similarity of different chairs, interpolate between given viewpoints to generate the missing ones, or invent new chair styles by interpolating between chairs from the training set. 
We show that the network can be used to find correspondences between different chairs from the dataset, outperforming existing approaches on this task.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation." ] }
1703.04298
2026615349
The concept of software quality is very complex and has many facets. Reflecting all these facets and at the same time measuring everything related to these facets results in comprehensive but large quality models and extensive measurements. In contrast, there are also many smaller, focused quality models claiming to evaluate quality with few measures. We investigate if and to what extent it is possible to build a focused quality model with similar evaluation results as a comprehensive quality model but with far fewer measures needing to be collected and, hence, reduced effort. We make quality evaluations with the comprehensive Quamoco base quality model and build focused quality models based on the same set of measures and data from over 2,000 open source systems. We analyse the ability of the focused model to predict the results of the Quamoco model by comparing them with a random predictor as a baseline. We calculate the standardised accuracy measure SA and effect sizes. We found that for the Quamoco model and its 378 automatically collected measures, we can build a focused model with only 10 measures but an accuracy of 61% and a medium to high effect size. We conclude that we can build focused quality models to get an impression of a system's quality similar to comprehensive models. However, when including manually collected measures, the accuracy of the models stayed below 50%. Hence, manual measures seem to have a high impact and should therefore not be ignored in a focused model.
To the best of our knowledge, there is no study directly resembling our approach of using predictors to predict a quality evaluation result from a comprehensive quality model. The approach of Nagappan and Ball @cite_14 comes closest to ours, because they use the rules of static code checkers as independent variables, as we do. However, they use these rules to predict faulty components, with quite successful results. We take this result as an indicator that rule-based static code analysis is, in principle, suited for quality analysis.
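The standardised accuracy measure SA used in the evaluation above is commonly defined (following Shepperd and MacDonell) as one minus the ratio of the predictor's mean absolute residual to that of random guessing, times 100. A small sketch under the assumption that this is the definition intended; the permutation-based baseline estimate is an illustrative choice:

```python
import numpy as np

def standardized_accuracy(y_true, y_pred, n_runs=1000, seed=0):
    """SA = (1 - MAR / MAR_p0) * 100, where MAR is the predictor's mean
    absolute residual and MAR_p0 that of unguided random guessing
    (predicting a randomly drawn observed value), estimated here by
    averaging over random permutations of the observed values."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true, float)
    mar = np.mean(np.abs(y_true - y_pred))
    mar_p0 = np.mean([np.mean(np.abs(y_true - rng.permutation(y_true)))
                      for _ in range(n_runs)])
    return (1.0 - mar / mar_p0) * 100.0

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(standardized_accuracy(y, y))  # a perfect predictor scores 100.0
```

SA values near 0 mean the model is no better than random guessing, which is why the comparison against a random predictor is a meaningful baseline.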
{ "cite_N": [ "@cite_14" ], "mid": [ "2082314767" ], "abstract": [ "During software development it is helpful to obtain early estimates of the defect density of software components. Such estimates identify fault-prone areas of code requiring further testing. We present an empirical approach for the early prediction of pre-release defect density based on the defects found using static analysis tools. The defects identified by two different static analysis tools are used to fit and predict the actual pre-release defect density for Windows Server 2003. We show that there exists a strong positive correlation between the static analysis defect density and the pre-release defect density determined by testing. Further, the predicted pre-release defect density and the actual pre-release defect density are strongly correlated at a high degree of statistical significance. Discriminant analysis shows that the results of static analysis tools can be used to separate high and low quality components with an overall classification rate of 82.91%." ] }
1703.04192
2952698448
We consider UAV IoT aerial sensing that delivers multiple VR/AR immersive communication sessions to remote users. The UAV swarm is spatially distributed over a wide area of interest, and each UAV captures a viewpoint of the scene below it. The remote users are interested in visual immersive navigation of specific subareas/scenes of interest, reconstructed on their respective VR/AR devices from the captured data. The reconstruction quality of the immersive scene representations at the users will depend on the sampling/sensing rates associated with each UAV. There is a limit on the aggregate amount of data that the UAV swarm can sample and send towards the users, stemming from physical transmission capacity constraints. Similarly, each VR/AR application has minimum reconstruction quality requirements for its own session. We propose an optimization framework that makes three contributions in this context. First, we select the optimal sampling rates to be used by each UAV, such that the system and application constraints are not exceeded, while the priority-weighted reconstruction quality across all VR/AR sessions is maximized. Then, we design an optimal scalable source-channel signal representation that instills into the captured data inherent rate adaptivity, unequal error protection, and minimum required redundancy. Finally, the UAV transmission efficiency is enhanced by the use of small-form-factor multi-beam directional antennas and optimal power/link scheduling across the scalable signal representation layers. Our experiments demonstrate competitive advantages over conventional methods for visual sensing. This is a first-of-its-kind study of an emerging application of prospectively broad societal impact.
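The first contribution above — choosing per-UAV sampling rates under an aggregate capacity limit and per-session quality minima so that priority-weighted quality is maximized — can be sketched with a toy greedy allocator. Everything here (the logarithmic quality-rate model, the function and parameter names) is an illustrative assumption, not the paper's actual algorithm:

```python
import numpy as np

def allocate_rates(w, r_min, capacity, step=0.01):
    """Greedy allocation for: maximize sum_i w[i]*log(r[i])
    subject to sum(r) <= capacity and r[i] >= r_min[i].

    Each session first gets its minimum rate; the remaining budget is
    handed out in small increments to the session with the largest
    marginal utility w[i]/r[i] (the derivative of w[i]*log(r[i]))."""
    r = np.asarray(r_min, float).copy()
    budget = capacity - r.sum()
    assert budget >= 0, "minimum quality requirements exceed capacity"
    while budget > step:
        i = np.argmax(np.asarray(w) / r)  # steepest marginal gain
        r[i] += step
        budget -= step
    return r

w = [3.0, 1.0]  # session priority weights
r = allocate_rates(w, r_min=[0.5, 0.5], capacity=4.0)
print(r.sum() <= 4.0, r[0] > r[1])  # feasible; higher-priority session gets more rate
```

Because the assumed utility is concave, this incremental greedy approaches the water-filling-style optimum; the paper's framework would replace the log model with the actual reconstruction-quality curves of the VR/AR sessions.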
VR/AR immersive communication via UAV IoT sensing is a new topic that has not been studied before. Closely related areas include ground-based multi-camera wireless sensing for multi-view systems @cite_50 , immersive telecollaboration @cite_7 @cite_11 @cite_40 , multi-view video coding/communication @cite_29 @cite_42 @cite_44 @cite_19 , and 360 @math video streaming @cite_25 @cite_28 . Another related area is graph-based signal processing for time-varying point clouds used in AR applications @cite_36 @cite_10 .
{ "cite_N": [ "@cite_7", "@cite_28", "@cite_36", "@cite_29", "@cite_42", "@cite_44", "@cite_19", "@cite_40", "@cite_50", "@cite_10", "@cite_25", "@cite_11" ], "mid": [ "2539646095", "2524582890", "2078064910", "2131723117", "2098033563", "2035789088", "2093252776", "1964183243", "2094381464", "1498318113", "2523094016", "2141159841" ], "abstract": [ "Traditional set-top camera video-conferencing systems still fail to meet the 'telepresence challenge' of providing a viable alternative for physical business travel, which is nowadays characterized by unacceptable delays, costs, inconvenience, and an increasingly large ecological footprint. Even recent high-end commercial solutions, while partially removing some of these traditional shortcomings, still present the problems of not scaling easily, expensive implementations, not utilizing 3D life-sized representations of the remote participants and addressing only eye contact and gesture-based interactions in very limited ways. The European FP7 project 3DPresence will develop a multi-party, high-end 3D videoconferencing concept that will tackle the problem of transmitting the feeling of physical presence in real-time to multiple remote locations in a transparent and natural way. In this paper, we present an overall concept, which includes the geometrical design of the whole prototype demonstrator, the arrangement of the cameras and displays and the general multi-view video analysis chain. The driving force behind the design strategy is to fulfil the requirements of a novel 3D immersive videoconferencing system, including directional eye gaze and gesture awareness. (8 pages)", "While traditional multimedia applications such as games and videos are still popular, there has been a significant interest in the recent years towards new 3D media such as 3D immersion and Virtual Reality (VR) applications, especially 360 VR videos. 360 VR video is an immersive spherical video where the user can look around during playback. 
Unfortunately, 360 VR videos are extremely bandwidth intensive, and therefore are difficult to stream at acceptable quality levels. In this paper, we propose an adaptive bandwidth-efficient 360 VR video streaming system using a divide and conquer approach. In our approach, we propose a dynamic view-aware adaptation technique to tackle the huge streaming bandwidth demands of 360 VR videos. We spatially divide the videos into multiple tiles while encoding and packaging, use MPEG-DASH SRD to describe the spatial relationship of tiles in the 360-degree space, and prioritize the tiles in the Field of View (FoV). In order to describe such tiled representations, we extend MPEG-DASH SRD to the 3D space of 360 VR videos. We spatially partition the underlying 3D mesh, and construct an efficient 3D geometry mesh called hexaface sphere to optimally represent a tiled 360 VR video in the 3D space. Our initial evaluation results report up to 72% bandwidth savings on 360 VR video streaming with minor negative quality impacts compared to the baseline scenario when no adaptation is applied.", "The next step in immersive communication beyond video from a single camera is object-based free viewpoint video, which is the capture and compression of a dynamic object such that it can be reconstructed and viewed from an arbitrary viewpoint. The moving human body is a particularly useful subclass of dynamic object for object-based free viewpoint video relevant to both telepresence and entertainment. In this paper, we compress moving human body sequences by applying recently developed Graph Wavelet Filter Banks to time-varying geometry and color signals living on a mesh representation of the human body. This model-based approach significantly outperforms state-of-the-art coding of the human body represented as ordinary depth plus color video sequences.", "While much of multiview video coding focuses on the rate-distortion performance of compressing all frames of all views for storage or non-interactive video delivery over networks, we address the problem of designing a frame structure to enable interactive multiview streaming, where clients can interactively switch views during video playback. Thus, as a client is playing back successive frames (in time) for a given view, it can send a request to the server to switch to a different view while continuing uninterrupted temporal playback. Noting that standard tools for random access (i.e., I-frame insertion) can be bandwidth-inefficient for this application, we propose a redundant representation of I-, P-, and “merge” frames, where each original picture can be encoded into multiple versions, appropriately trading off expected transmission rate with storage, to facilitate view switching. We first present ad hoc frame structures with good performance when the view-switching probabilities are either very large or very small. We then present optimization algorithms that generate more general frame structures with better overall performance for the general case. We show in our experiments that we can generate redundant frame structures offering a range of tradeoff points between transmission and storage, e.g., outperforming simple I-frame insertion structures by up to 45% in terms of bandwidth efficiency at twice the storage cost.", "We derive an optimization framework for joint view and rate scalable coding of multi-view video content represented in the texture plus depth format. The optimization enables the sender to select the subset of coded views and their encoding rates such that the aggregate distortion over a continuum of synthesized views is minimized. We construct the view and rate embedded bitstream such that it delivers optimal performance simultaneously over a discrete set of transmission rates.
In conjunction, we develop a user interaction model that characterizes the view selection actions of the client as a Markov chain over a discrete state-space. We exploit the model within the context of our optimization to compute user-action-driven coding strategies that aim at enhancing the client's performance in terms of latency and video quality. Our optimization outperforms the state-of-the-art H.264 SVC codec as well as a multi-view wavelet-based coder equipped with a uniform rate allocation strategy, across all scenarios studied in our experiments. Equally important, we can achieve an arbitrarily fine granularity of encoding bit rates, while providing a novel functionality of view embedded encoding, unlike the other encoding methods that we examined. Finally, we observe that the interactivity-aware coding delivers superior performance over conventional allocation techniques that do not anticipate the client's view selection actions in their operation.", "We formulate a system framework for network compression of interactive multi-view streaming video. The setup comprises a media server that delivers the content over two independent network paths to a client. Our system features a proxy-server located at the junction of the wired and wireless portions of each path. The proxy dynamically adapts the content data sent over the wireless links, in response to channel quality feedback from the client, such that video distortion at the client is minimized. We analyze the performance of our system and contrast its characteristics with dynamic content adaptation at the source and conventional streaming architectures, including scalable video. We numerically simulate the operation of all streaming systems under comparison and establish a close agreement between our analysis and the experimental findings. 
The proposed system delivers superior video quality over the reference competitors, while enabling notable transmission rate savings, at the same time.", "We study the scenario of multicasting multi-view video content, recorded in the video plus depth format, to a collection of heterogeneous clients featuring Internet access links of diverse packet loss and transmission bandwidth values. We design a popularity-aware joint source-channel coding optimization framework that allocates source and channel coding rates to the captured content, such that the aggregate video quality of the reconstructed content across the client population is maximized, for the given packet loss and bandwidth characteristics of the clients and their view selection preferences. The source coding component of our framework features a procedure for generating a view and rate embedded bitstream that is optimally decodable at multiple data rates and accounts for the different popularity of diverse video perspectives of the scene of interest, among the clients. The channel coding component of our framework comprises an expanding-window rateless coding procedure that optimally allocates parity protection bits to the source encoded layers, in order to address packet loss across the unreliable client access links. We develop an optimization method that jointly computes the source and channel coding decisions of our framework, and also design a fast local-search-based solution that exhibits a negligible performance loss relative to the full optimization. We carry out comprehensive simulation experiments and demonstrate significant performance gains over competitive state-of-the-art methods (based on H.264 AVC and network coding, and H.264 SVC and our own channel coding procedure), across different scenario settings and parameter values.", "3D tele-immersion improves the state of collaboration among geographically distributed participants. 
Unlike the traditional 2D videos, a 3D tele-immersive system employs multiple 3D cameras based in each physical site to cover a much larger field of view, generating a very large amount of stream data. One of the major challenges is how to efficiently transmit these bulky 3D streaming data to bandwidth-constrained sites. In this paper, we propose an adaptive Human Visual System (HVS) -aware bandwidth management framework for efficient delivery of multiple streams produced from distributed 3D tele-immersive sites to a receiver site with limited bandwidth budget. Our novel adaptation framework exploits the semantics link of HVS with multiple 3D streams in the 3D tele-immersive environment. Our evaluation results show that the proposed HVS-aware adaptation improves the total quality per unit of bandwidth used to deliver streams in 3D tele-immersive systems.", "Decentralized camera sensors capture a 3D scene of interest from multiple perspectives. The captured video signals need to be transmitted to a central station over a shared wireless channel. A collocated server streams the gathered data to a collection of clients interested in experiencing the scene interactively. We design a constrained optimization framework for sharing the transmission bandwidth of the wireless channel across the sensors such that the average video quality over the client population is maximized. We consider scheduling the uplink resources of the wireless channel at the view or packet level. In the first case, the central station partitions the channel capacity across a select subset of views that are transmitted entirely. In the second case, the station coordinates the packet transmissions of every sensor. We formulate exact and approximate algorithms to solve the optimization of interest. 
We examine their transmission efficiency via simulation experiments that demonstrate considerable gains over the state-of-the-art and study the impact of view popularity.", "This paper addresses the problem of compression of 3D point cloud sequences that are characterized by moving 3D positions and color attributes. As temporally successive point cloud frames share some similarities, motion estimation is key to effective compression of these sequences. It, however, remains a challenging problem as the point cloud frames have varying numbers of points without explicit correspondence information. We represent the time-varying geometry of these sequences with a set of graphs, and consider 3D positions and color attributes of the point clouds as signals on the vertices of the graphs. We then cast motion estimation as a feature-matching problem between successive graphs. The motion is estimated on a sparse set of representative vertices using new spectral graph wavelet descriptors. A dense motion field is eventually interpolated by solving a graph-based regularization problem. The estimated motion is finally used for removing the temporal redundancy in the predictive coding of the 3D positions and the color characteristics of the point cloud sequences. Experimental results demonstrate that our method is able to accurately estimate the motion between consecutive frames. Moreover, motion estimation is shown to bring a significant improvement in terms of the overall compression performance of the sequence. 
To the best of our knowledge, this is the first paper that exploits both the spatial correlation inside each frame (through the graph) and the temporal correlation between the frames (through the motion estimation) to compress the color and the geometry of 3D point cloud sequences in an efficient way.", "As an important component of the virtual reality (VR) technology, 360-degree videos provide users with panoramic view and allow them to freely control their viewing direction during video playback. Usually, a player displays only the visible portion of a 360 video. Thus, fetching the entire raw video frame wastes bandwidth. In this paper, we consider the problem of optimizing 360 video delivery over cellular networks. We first conduct a measurement study on commercial 360 video platforms. We then propose a cellular-friendly streaming scheme that delivers only 360 videos' visible portion based on head movement prediction. Using viewing data collected from real users, we demonstrate the feasibility of our approach, which can reduce bandwidth consumption by up to 80% based on a trace-driven simulation.", "Though the variety of desktop real time stereo vision systems has grown considerably in the past several years, few make any verifiable claims about the accuracy of the algorithms used to construct 3D data or describe how the data generated by such systems, which is large in size, can be effectively distributed. In this paper, we describe a system that creates an accurate (on the order of a centimeter), 3D reconstruction of an environment in real time (under 30 ms) that also allows for remote interaction between users. This paper addresses how to reconstruct, compress, and visualize the 3D environment. In contrast to most commercial desktop real time stereo vision systems our algorithm produces 3D meshes instead of dense point clouds, which we show allows for better quality visualizations.
The chosen representation of the data also allows for high compression ratios for transfer to remote sites. We demonstrate the accuracy and speed of our results on a variety of benchmarks." ] }
1703.04111
2952616635
Co-occurrence Filter (CoF) is a boundary preserving filter. It is based on the Bilateral Filter (BF) but instead of using a Gaussian on the range values to preserve edges it relies on a co-occurrence matrix. Pixel values that co-occur frequently in the image (i.e., inside textured regions) will have a high weight in the co-occurrence matrix. This, in turn, means that such pixel pairs will be averaged and hence smoothed, regardless of their intensity differences. On the other hand, pixel values that rarely co-occur (i.e., across texture boundaries) will have a low weight in the co-occurrence matrix. As a result, they will not be averaged and the boundary between them will be preserved. The CoF therefore extends the BF to deal with boundaries, not just edges. It learns co-occurrences directly from the image. We can achieve various filtering results by directing it to learn the co-occurrence matrix from a part of the image, or a different image. We give the definition of the filter, discuss how to use it with color images and show several use cases.
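A minimal sketch of the statistic that drives the CoF: a normalized co-occurrence matrix collected over the image, in which pixel-value pairs from texture interiors receive large entries and pairs straddling a boundary receive small ones. The gray-level quantization and window radius here are illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np

def cooccurrence_matrix(img, levels=8, radius=1):
    """Count how often quantized gray levels co-occur within a
    (2*radius+1)^2 neighborhood.  Frequently co-occurring values
    (texture interiors) get large entries; rarely co-occurring values
    (pairs across a boundary) get small ones."""
    q = (img * (levels - 1)).round().astype(int)
    M = np.zeros((levels, levels))
    h, w = q.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            # Overlapping slices pair each pixel with its (dy, dx) neighbor.
            a = q[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            b = q[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            np.add.at(M, (a, b), 1)
    return M / M.sum()

# Two flat regions separated by a vertical boundary:
img = np.hstack([np.zeros((8, 8)), np.ones((8, 8))])
M = cooccurrence_matrix(img)
print(M[0, 0] > M[0, 7])  # within-region pairs co-occur far more often -> True
```

In the filter itself, the weight between pixels p and q would be read off as M[I(p), I(q)], so within-region pairs are averaged while cross-boundary pairs are left alone.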
The BF is just one of a large number of edge-preserving filters that include Anisotropic Diffusion @cite_7 , guided image filter @cite_1 , or the domain transform filter @cite_25 to name a few. These filters smooth images by averaging neighboring pixels. The weights are determined based on similarity in appearance and proximity in location. Correctly determining these weights determines what parts of the image should be smoothed and where smoothing should stop.
{ "cite_N": [ "@cite_1", "@cite_25", "@cite_7" ], "mid": [ "", "2019904315", "2150134853" ], "abstract": [ "", "We present a new approach for performing high-quality edge-preserving filtering of images and videos in real time. Our solution is based on a transform that defines an isometry between curves on the 2D image manifold in 5D and the real line. This transform preserves the geodesic distance between points on these curves, adaptively warping the input signal so that 1D edge-preserving filtering can be efficiently performed in linear time. We demonstrate three realizations of 1D edge-preserving filters, show how to produce high-quality 2D edge-preserving filters by iterating 1D-filtering operations, and empirically analyze the convergence of this process. Our approach has several desirable features: the use of 1D operations leads to considerable speedups over existing techniques and potential memory savings; its computational cost is not affected by the choice of the filter parameters; and it is the first edge-preserving filter to work on color images at arbitrary scales in real time, without resorting to subsampling or quantization. We demonstrate the versatility of our domain transform and edge-preserving filters on several real-time image and video processing tasks including edge-preserving filtering, depth-of-field effects, stylization, recoloring, colorization, detail enhancement, and tone mapping.", "A new definition of scale-space is suggested, and a class of algorithms used to realize a diffusion process is introduced. The diffusion coefficient is chosen to vary spatially in such a way as to encourage intraregion smoothing rather than interregion smoothing. It is shown that the 'no new maxima should be generated at coarse scales' property of conventional scale space is preserved. As the region boundaries in the approach remain sharp, a high-quality edge detector which successfully exploits global information is obtained. 
Experimental results are shown on a number of images. Parallel hardware implementations are made feasible because the algorithm involves elementary, local operations replicated over the image." ] }
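As a concrete illustration of the co-occurrence statistics the CoF record above relies on, here is a minimal sketch of collecting a co-occurrence matrix from an image. The quantisation to 8 levels and the 3x3 neighbourhood are arbitrary choices of this example, not the paper's settings.

```python
import numpy as np

def cooccurrence_matrix(img, levels=8, radius=1):
    """Count how often quantized pixel values co-occur within a
    (2*radius+1)^2 neighbourhood, normalised to joint probabilities.
    Values that co-occur often (inside textured/flat regions) get
    large entries; values meeting only across boundaries get small ones."""
    q = (img.astype(float) / 256.0 * levels).astype(int)  # quantize to `levels` bins
    h, w = q.shape
    M = np.zeros((levels, levels))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            a = q[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            b = q[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            np.add.at(M, (a.ravel(), b.ravel()), 1)
    return M / M.sum()

# Two flat regions separated by a hard boundary: values co-occur
# frequently inside each region, rarely across the boundary.
img = np.zeros((16, 16), dtype=np.uint8)
img[:, 8:] = 255
M = cooccurrence_matrix(img)
```

In CoF these probabilities (suitably normalised) replace the Gaussian range weight of the bilateral filter, so frequently co-occurring values are averaged and rarely co-occurring ones are not.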
1703.04111
2952616635
Co-occurrence Filter (CoF) is a boundary preserving filter. It is based on the Bilateral Filter (BF) but instead of using a Gaussian on the range values to preserve edges it relies on a co-occurrence matrix. Pixel values that co-occur frequently in the image (i.e., inside textured regions) will have a high weight in the co-occurrence matrix. This, in turn, means that such pixel pairs will be averaged and hence smoothed, regardless of their intensity differences. On the other hand, pixel values that rarely co-occur (i.e., across texture boundaries) will have a low weight in the co-occurrence matrix. As a result, they will not be averaged and the boundary between them will be preserved. The CoF therefore extends the BF to deal with boundaries, not just edges. It learns co-occurrences directly from the image. We can achieve various filtering results by directing it to learn the co-occurrence matrix from a part of the image, or a different image. We give the definition of the filter, discuss how to use it with color images and show several use cases.
Joint Cross BF @cite_14 @cite_16 recovers weights on one image and applies it to another image. This concept was taken one step further with the guided image filter @cite_1 where an image is assumed to be a locally linear model of the guidance image.
{ "cite_N": [ "@cite_14", "@cite_1", "@cite_16" ], "mid": [ "2160451035", "", "2030716039" ], "abstract": [ "Digital photography has made it possible to quickly and easily take a pair of images of low-light environments: one with flash to capture detail and one without flash to capture ambient illumination. We present a variety of applications that analyze and combine the strengths of such flash no-flash image pairs. Our applications include denoising and detail transfer (to merge the ambient qualities of the no-flash image with the high-frequency flash detail), white-balancing (to change the color tone of the ambient image), continuous flash (to interactively adjust flash intensity), and red-eye removal (to repair artifacts in the flash image). We demonstrate how these applications can synthesize new images that are of higher quality than either of the originals.", "", "We enhance photographs shot in dark environments by combining a picture taken with the available light and one taken with the flash. We preserve the ambiance of the original lighting and insert the sharpness from the flash image. We use the bilateral filter to decompose the images into detail and large scale. We reconstruct the image using the large scale of the available lighting and the detail of the flash. We detect and correct flash shadows. This combines the advantages of available illumination and flash photography." ] }
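A minimal 1D sketch of the joint/cross bilateral idea in the record above: the range weights are computed from a guidance signal (standing in for the flash image) while the values being averaged come from the other signal (the no-flash image). The signal lengths and sigma values here are arbitrary assumptions of this example.

```python
import numpy as np

def joint_bilateral_1d(signal, guide, sigma_s=2.0, sigma_r=0.2, radius=4):
    """1D joint/cross bilateral filter: spatial weights from distance,
    range weights from the *guidance* signal, applied to `signal`."""
    out = np.empty_like(signal, dtype=float)
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        d = np.arange(lo, hi) - i
        w = np.exp(-d**2 / (2 * sigma_s**2)) \
          * np.exp(-(guide[lo:hi] - guide[i])**2 / (2 * sigma_r**2))
        out[i] = np.sum(w * signal[lo:hi]) / np.sum(w)
    return out

# Guide has a clean step; signal is the same step plus noise. The filter
# smooths the noise without blurring across the guide's edge.
rng = np.random.default_rng(1)
guide = np.concatenate([np.zeros(20), np.ones(20)])
signal = guide + 0.1 * rng.standard_normal(40)
out = joint_bilateral_1d(signal, guide)
```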
1703.04111
2952616635
Co-occurrence Filter (CoF) is a boundary preserving filter. It is based on the Bilateral Filter (BF) but instead of using a Gaussian on the range values to preserve edges it relies on a co-occurrence matrix. Pixel values that co-occur frequently in the image (i.e., inside textured regions) will have a high weight in the co-occurrence matrix. This, in turn, means that such pixel pairs will be averaged and hence smoothed, regardless of their intensity differences. On the other hand, pixel values that rarely co-occur (i.e., across texture boundaries) will have a low weight in the co-occurrence matrix. As a result, they will not be averaged and the boundary between them will be preserved. The CoF therefore extends the BF to deal with boundaries, not just edges. It learns co-occurrences directly from the image. We can achieve various filtering results by directing it to learn the co-occurrence matrix from a part of the image, or a different image. We give the definition of the filter, discuss how to use it with color images and show several use cases.
The WLS method @cite_20 treats edge preserving filtering as a weighted least squares problem where the goal is to approximate the input image everywhere, except at sharp edges. The Euclidean distance in WLS can be replaced by the diffusion distance @cite_17 . The diffusion distance between two points equals the difference between the probabilities of random walkers to start at both points and end up in the same point. To approximate this, @cite_17 uses the dominant eigenvectors of the affinity matrix, dubbed diffusion maps. Diffusion maps can be efficiently calculated using the Nyström method.
{ "cite_N": [ "@cite_20", "@cite_17" ], "mid": [ "2141957843", "1994456831" ], "abstract": [ "Many recent computational photography techniques decompose an image into a piecewise smooth base layer, containing large scale variations in intensity, and a residual detail layer capturing the smaller scale details in the image. In many of these applications, it is important to control the spatial scale of the extracted details, and it is often desirable to manipulate details at multiple scales, while avoiding visual artifacts. In this paper we introduce a new way to construct edge-preserving multi-scale image decompositions. We show that current basedetail decomposition techniques, based on the bilateral filter, are limited in their ability to extract detail at arbitrary scales. Instead, we advocate the use of an alternative edge-preserving smoothing operator, based on the weighted least squares optimization framework, which is particularly well suited for progressive coarsening of images and for multi-scale detail extraction. After describing this operator, we show how to use it to construct edge-preserving multi-scale decompositions, and compare it to the bilateral filter, as well as to other schemes. Finally, we demonstrate the effectiveness of our edge-preserving decompositions in the context of LDR and HDR tone mapping, detail enhancement, and other applications.", "Edge-aware operations, such as edge-preserving smoothing and edge-aware interpolation, require assessing the degree of similarity between pairs of pixels, typically defined as a simple monotonic function of the Euclidean distance between pixel values in some feature space. In this work we introduce the idea of replacing these Euclidean distances with diffusion distances, which better account for the global distribution of pixels in their feature space. 
These distances are approximated using diffusion maps: a set of the dominant eigenvectors of a large affinity matrix, which may be computed efficiently by sampling a small number of matrix columns (the Nystrom method). We demonstrate the benefits of using diffusion distances in a variety of image editing contexts, and explore the use of diffusion maps as a tool for facilitating the creation of complex selection masks. Finally, we present a new analysis that establishes a connection between the spatial interaction range between two pixels, and the number of samples necessary for accurate Nystrom approximations." ] }
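The weighted-least-squares formulation in the record above can be sketched in 1D: minimise a data-fidelity term plus gradient penalties whose weights vanish across sharp edges of the input. The gradient-based weighting scheme and the parameter values are this example's assumptions, not the paper's exact ones.

```python
import numpy as np

def wls_smooth_1d(g, lam=5.0, eps=1e-4):
    """1D WLS smoothing: minimise sum (u_i - g_i)^2 +
    lam * sum w_i (u_{i+1} - u_i)^2, with smoothness weights w_i
    that are small across large gradients of g, so sharp edges
    survive while flat regions are smoothed."""
    n = len(g)
    grad = np.abs(np.diff(g))
    w = 1.0 / (grad + eps)           # small weight across an edge
    D = np.diff(np.eye(n), axis=0)   # (n-1) x n forward-difference matrix
    # Normal equations of the quadratic objective: (I + lam * D^T W D) u = g
    A = np.eye(n) + lam * D.T @ (w[:, None] * D)
    return np.linalg.solve(A, g)

# Noisy step: smoothing flattens the noise but keeps most of the jump.
rng = np.random.default_rng(0)
g = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)
u = wls_smooth_1d(g)
```

Replacing the gradient-based weights with diffusion distances, as @cite_17 proposes, changes only how w is computed, not the solve.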
1703.04111
2952616635
Co-occurrence Filter (CoF) is a boundary preserving filter. It is based on the Bilateral Filter (BF) but instead of using a Gaussian on the range values to preserve edges it relies on a co-occurrence matrix. Pixel values that co-occur frequently in the image (i.e., inside textured regions) will have a high weight in the co-occurrence matrix. This, in turn, means that such pixel pairs will be averaged and hence smoothed, regardless of their intensity differences. On the other hand, pixel values that rarely co-occur (i.e., across texture boundaries) will have a low weight in the co-occurrence matrix. As a result, they will not be averaged and the boundary between them will be preserved. The CoF therefore extends the BF to deal with boundaries, not just edges. It learns co-occurrences directly from the image. We can achieve various filtering results by directing it to learn the co-occurrence matrix from a part of the image, or a different image. We give the definition of the filter, discuss how to use it with color images and show several use cases.
In the field of edge detection there has been great progress in recent years. This progress can be quantitatively measured on the Berkeley Segmentation Data Set @cite_4 . Some of the leading methods include Normalized Cuts and its derivative work @cite_10 @cite_11 that treat the problem as a spectral clustering problem where affinities between pixels are trained offline. Structured Edge Detector @cite_9 trains a structured random forest on a large training set and then applies it to detect true edges in the query image.
{ "cite_N": [ "@cite_10", "@cite_9", "@cite_4", "@cite_11" ], "mid": [ "2121947440", "1976047850", "2119823327", "2110158442" ], "abstract": [ "We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging.", "Edge detection is a critical component of many vision systems, including object detectors and image segmentation algorithms. Patches of edges exhibit well-known forms of local structure, such as straight lines or T-junctions. In this paper we take advantage of the structure present in local image patches to learn both an accurate and computationally efficient edge detector. We formulate the problem of predicting local edge masks in a structured learning framework applied to random decision forests. Our novel approach to learning decision trees robustly maps the structured labels to a discrete space on which standard information gain measures may be evaluated. The result is an approach that obtains realtime performance that is orders of magnitude faster than many competing state-of-the-art approaches, while also achieving state-of-the-art edge detection results on the BSDS500 Segmentation dataset and NYU Depth dataset. 
Finally, we show the potential of our approach as a general purpose edge detector by showing our learned edge models generalize well across datasets.", "The goal of this work is to accurately detect and localize boundaries in natural scenes using local image measurements. We formulate features that respond to characteristic changes in brightness, color, and texture associated with natural boundaries. In order to combine the information from these features in an optimal way, we train a classifier using human labeled images as ground truth. The output of this classifier provides the posterior probability of a boundary at each image location and orientation. We present precision-recall curves showing that the resulting detector significantly outperforms existing approaches. Our two main results are 1) that cue combination can be performed adequately with a simple linear model and 2) that a proper, explicit treatment of texture is required to detect boundaries in natural images.", "This paper investigates two fundamental problems in computer vision: contour detection and image segmentation. We present state-of-the-art algorithms for both of these tasks. Our contour detector combines multiple local cues into a globalization framework based on spectral clustering. Our segmentation algorithm consists of generic machinery for transforming the output of any contour detector into a hierarchical region tree. In this manner, we reduce the problem of image segmentation to that of contour detection. Extensive experimental evaluation demonstrates that both our contour detection and segmentation methods significantly outperform competing algorithms. The automatically generated hierarchical segmentations can be interactively refined by user-specified annotations. Computation at multiple image resolutions provides a means of coupling our system to recognition applications." ] }
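The normalized-cut criterion used by the spectral methods above has a compact definition that can be checked directly on a toy affinity graph (the 6-node graph below is invented for illustration; real systems minimise this via eigenvectors rather than by enumeration).

```python
import numpy as np

def ncut_value(W, labels):
    """Normalized-cut cost of a binary partition of a weighted graph:
    Ncut(A, B) = cut(A, B)/assoc(A, V) + cut(A, B)/assoc(B, V)."""
    A = labels.astype(bool)
    B = ~A
    cut = W[A][:, B].sum()
    return cut / W[A].sum() + cut / W[B].sum()

# Two dense triangles joined by one weak edge: cutting the weak edge
# gives a much lower Ncut than splitting a triangle.
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1
good = np.array([1, 1, 1, 0, 0, 0])
bad = np.array([1, 1, 0, 0, 0, 0])
```

Normalising by the associations is what stops the criterion from favouring tiny, isolated partitions, which a plain minimum cut would.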
1703.04111
2952616635
Co-occurrence Filter (CoF) is a boundary preserving filter. It is based on the Bilateral Filter (BF) but instead of using a Gaussian on the range values to preserve edges it relies on a co-occurrence matrix. Pixel values that co-occur frequently in the image (i.e., inside textured regions) will have a high weight in the co-occurrence matrix. This, in turn, means that such pixel pairs will be averaged and hence smoothed, regardless of their intensity differences. On the other hand, pixel values that rarely co-occur (i.e., across texture boundaries) will have a low weight in the co-occurrence matrix. As a result, they will not be averaged and the boundary between them will be preserved. The CoF therefore extends the BF to deal with boundaries, not just edges. It learns co-occurrences directly from the image. We can achieve various filtering results by directing it to learn the co-occurrence matrix from a part of the image, or a different image. We give the definition of the filter, discuss how to use it with color images and show several use cases.
Co-occurrences were recently used for boundary detection @cite_23 . They collect co-occurrence statistics (termed Pointwise Mutual Information, or PMI, in their paper) to learn the probability of boundaries in an image and use that information to compute the affinities required by spectral clustering. The method performs very well on the Berkeley Segmentation Data Set @cite_4 .
{ "cite_N": [ "@cite_4", "@cite_23" ], "mid": [ "2119823327", "105270443" ], "abstract": [ "The goal of this work is to accurately detect and localize boundaries in natural scenes using local image measurements. We formulate features that respond to characteristic changes in brightness, color, and texture associated with natural boundaries. In order to combine the information from these features in an optimal way, we train a classifier using human labeled images as ground truth. The output of this classifier provides the posterior probability of a boundary at each image location and orientation. We present precision-recall curves showing that the resulting detector significantly outperforms existing approaches. Our two main results are 1) that cue combination can be performed adequately with a simple linear model and 2) that a proper, explicit treatment of texture is required to detect boundaries in natural images.", "Detecting boundaries between semantically meaningful objects in visual scenes is an important component of many vision algorithms. In this paper, we propose a novel method for detecting such boundaries based on a simple underlying principle: pixels belonging to the same object exhibit higher statistical dependencies than pixels belonging to different objects. We show how to derive an affinity measure based on this principle using pointwise mutual information, and we show that this measure is indeed a good predictor of whether or not two pixels reside on the same object. Using this affinity with spectral clustering, we can find object boundaries in the image – achieving state-of-the-art results on the BSDS500 dataset. Our method produces pixel-level accurate boundaries while requiring minimal feature engineering." ] }
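The pointwise mutual information affinity described in the record above reduces to a few lines once a joint co-occurrence table is available (the 2x2 toy counts below are made up for illustration).

```python
import numpy as np

def pmi(joint):
    """Pointwise mutual information from a joint co-occurrence table:
    PMI(a, b) = log p(a, b) / (p(a) p(b)). Positive inside coherent
    regions (values co-occur more than chance would predict),
    negative across boundaries (they co-occur less than chance)."""
    p = joint / joint.sum()
    pa = p.sum(axis=1, keepdims=True)
    pb = p.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore"):
        return np.log(p) - np.log(pa) - np.log(pb)

# Toy 2-value world: value 0 mostly pairs with 0, value 1 with 1.
joint = np.array([[45.0, 5.0],
                  [5.0, 45.0]])
P = pmi(joint)
```

Thresholding such PMI values, or feeding them to spectral clustering as affinities, is the essence of the boundary detector @cite_23 describes.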
1703.04111
2952616635
Co-occurrence Filter (CoF) is a boundary preserving filter. It is based on the Bilateral Filter (BF) but instead of using a Gaussian on the range values to preserve edges it relies on a co-occurrence matrix. Pixel values that co-occur frequently in the image (i.e., inside textured regions) will have a high weight in the co-occurrence matrix. This, in turn, means that such pixel pairs will be averaged and hence smoothed, regardless of their intensity differences. On the other hand, pixel values that rarely co-occur (i.e., across texture boundaries) will have a low weight in the co-occurrence matrix. As a result, they will not be averaged and the boundary between them will be preserved. The CoF therefore extends the BF to deal with boundaries, not just edges. It learns co-occurrences directly from the image. We can achieve various filtering results by directing it to learn the co-occurrence matrix from a part of the image, or a different image. We give the definition of the filter, discuss how to use it with color images and show several use cases.
Co-occurrence information was first introduced by Haralick @cite_15 . They proposed 14 statistical measures that can be extracted from the co-occurrence matrix and used to measure similarity between textures. Later, color correlograms, which also rely on co-occurrence data, were used by Huang @cite_22 as an image descriptor within an image retrieval system. Finally, co-occurrence statistics were also used with graph cuts by Ladicky @cite_6 , where the goal was to solve a label assignment problem such that the labels satisfy a given co-occurrence matrix.
{ "cite_N": [ "@cite_15", "@cite_22", "@cite_6" ], "mid": [ "", "1917380066", "2113016070" ], "abstract": [ "", "We define a new image feature called the color correlogram and use it for image indexing and comparison. This feature distills the spatial correlation of colors, and is both effective and inexpensive for content-based image retrieval. The correlogram robustly tolerates large changes in appearance and shape caused by changes in viewing positions, camera zooms, etc. Experimental evidence suggests that this new feature outperforms not only the traditional color histogram method but also the recently proposed histogram refinement methods for image indexing retrieval.", "The Markov and Conditional random fields (CRFs) used in computer vision typically model only local interactions between variables, as this is generally thought to be the only case that is computationally tractable. In this paper we consider a class of global potentials defined over all variables in the CRF. We show how they can be readily optimised using standard graph cut algorithms at little extra expense compared to a standard pairwise field. This result can be directly used for the problem of class based image segmentation which has seen increasing recent interest within computer vision. Here the aim is to assign a label to each pixel of a given image from a set of possible object classes. Typically these methods use random fields to model local interactions between pixels or super-pixels. One of the cues that helps recognition is global object co-occurrence statistics, a measure of which classes (such as chair or motorbike) are likely to occur in the same image together. There have been several approaches proposed to exploit this property, but all of them suffer from different limitations and typically carry a high computational cost, preventing their application on large images. 
We find that the new model we propose produces a significant improvement in the labelling compared to just using a pairwise model and that this improvement increases as the number of labels increases." ] }
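Two of Haralick's co-occurrence statistics mentioned above are easy to compute once the grey-level co-occurrence matrix exists; the 4x4 matrices below are toy inputs chosen so the behaviour is obvious, not real texture data.

```python
import numpy as np

def haralick_features(glcm):
    """Two of Haralick's 14 statistics from a grey-level co-occurrence
    matrix: contrast (local intensity variation) and energy (textural
    uniformity). The matrix is normalised to probabilities first."""
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    contrast = np.sum((i - j) ** 2 * p)
    energy = np.sum(p ** 2)
    return contrast, energy

# A uniform patch concentrates mass on the diagonal (zero contrast,
# high energy); a maximally mixed patch spreads it out.
uniform = np.eye(4)
mixed = np.ones((4, 4))
c_u, e_u = haralick_features(uniform)
c_m, e_m = haralick_features(mixed)
```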
1703.04316
2950934519
This paper presents a novel technique that allows for both computationally fast and sufficiently plausible simulation of vehicles with non-deformable tracks. The method is based on an effect we have called Contact Surface Motion. A comparison with several other methods for simulation of tracked vehicle dynamics is presented with the aim to evaluate methods that are available off-the-shelf or with minimum effort in general-purpose robotics simulators. The proposed method is implemented as a plugin for the open-source physics-based simulator Gazebo using the Open Dynamics Engine.
In agriculture and military research, the track-soil interaction is of high interest (mainly due to sinkage of the track plates). Most of these works seem to only consider planar motion of the vehicle @cite_3 @cite_2 @cite_14 and mainly concentrate on computing correct sinkage-induced behavior. Yamakawa and Watanabe @cite_17 provide a fully three-dimensional simulation taking into account the track-soil interactions and wheel suspension.
{ "cite_N": [ "@cite_14", "@cite_17", "@cite_3", "@cite_2" ], "mid": [ "2169761102", "2071529051", "2074462567", "2044410627" ], "abstract": [ "Abstract Currently available models for dynamic simulation of tracked vehicles usually include super-elements to describe the tracks and the suspension systems. In these models, the dynamics of the track, the interaction between each track link and the ground, and their effect on the vehicle dynamics cannot be considered properly. The rapid increase in computing speed enables the utilization of more complex models, including numerous bodies and force elements. A three-dimensional multi-body simulation model for simulating the dynamic behavior of tracked off-road vehicles was developed using the LMS-DADS simulation program. The model incorporates detailed description of the track, the suspension system, and the dynamic interaction between its components. The bodies of the model are the chassis, the wheel-arms, the wheels, and each track link. Three-dimensional contact force elements are used to describe the interaction of the track links with the vehicle's road wheels, sprocket, and idler. Additional force elements are used to simulate the bump stops and the dampers. User-defined force elements are used to describe the interaction between each track link and the ground. The normal and tangential forces are calculated using classical soil mechanics equations, such as Bekker and Janosi correlations. Sinkage and slip are calculated separately for each track link. Alternative correlations, based on recent studies of the dynamic variations of these forces, can also be used. The model was first applied to the M113 armored carrier. Simulation results under various road conditions were compared with the results of a super-element-based model. 
It was concluded that the influence of the track dynamics and the soil–link interaction on the vehicle dynamics can be better predicted with the newly developed model.", "A spatial motion analysis model for high-mobility tracked vehicles was constructed for evaluation of ride performance, steerability, and stability on rough terrain. Ordinary high-mobility tracked vehicles are equipped with independent torsion bar type suspension system, which consists of road arms and road wheels. The road arm rotates about the axis of torsion bar, and rigidity of the torsion bar and cohesion of damper absorb sudden force change exerted by interaction with the ground. The motion of the road arms should be considered for the evaluation of off-road vehicle performance in numerical analysis model. In order to obtain equations of motion for the tracked vehicles, the equations of motion for the vehicle body and for the assembly of a road wheel and a road arm were constructed separately at first. Two sets of equations were reduced with the constraint equations, which the road arms are mechanically connected to the vehicle body. The equations of motion for the vehicle have been expressed with minimal set of variables of the same number as the degrees of freedom for the vehicle motion. We also included the effect of track tension in the equations without constructing equations of motion for the tracks. Numerical simulation based on the vehicle model and experiment of a scale model passing over a trapezoidal speed bump were performed in order to examine the numerical model. It was found that the numerical results reasonably predict the vehicle motion.", "A new approach to the dynamic modelling of tracked vehicles is proposed in this paper, resulting in a 3D, 8 degrees of freedom dynamic model of an agricultural tracked vehicle, having the two independently applied sprocket torques as input variables. 
The main features of the approach are a new dynamic model of the shear displacement and the adoption of an innovative modelling and simulation environment: MOSES, based on Object-Oriented tools and techniques. Simulation results are reported for a qualitative validation of the model.", "Abstract In recent years virtual dynamic system simulation has become very important in the design and development stage, as new strategies can be examined without expensive measurements and with reduced time. This paper describes the development of a simulation model for transient analysis of the longitudinal dynamics of a heavy tracked vehicle. The driving inputs for this simulation model are obtained from a powertrain model. The main elements of the powertrain include the engine, Torque Converter (TC), transmission and drivetrain. Here the engine is modeled based on the engine maps from steady-state experiments. The TC is modeled based on its characteristic map from experiments. A fairly simple transmission model is used which is based on static gear ratios assuming small shift times. The final drivetrain model however includes the rotational dynamics of the sprocket. The simulation model developed is validated by comparing the predicted values with the measured data from experiments. The results have demonstrated that the developed model is able to predict fairly accurately the acceleration and braking performance of the heavy tracked vehicle on both soft and hard terrain." ] }
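The longitudinal-dynamics abstract above boils down to integrating a force balance over time; a deliberately minimal lumped version (constant drive force, linear resistance, explicit Euler, made-up parameter values) looks like this.

```python
def simulate_longitudinal(mass, drive_force, resistance_coeff, dt=0.01, steps=2000):
    """Minimal longitudinal model of a tracked vehicle:
    m * dv/dt = F_drive - c * v, where c*v lumps rolling and soil
    resistance. Velocity approaches the terminal value F_drive / c."""
    v = 0.0
    for _ in range(steps):
        a = (drive_force - resistance_coeff * v) / mass
        v += a * dt
    return v

# 10-tonne vehicle, 20 kN drive force, 4 kN/(m/s) resistance:
# after 20 s the speed is essentially at the 5 m/s terminal value.
v = simulate_longitudinal(mass=10000.0, drive_force=20000.0,
                          resistance_coeff=4000.0)
```

The full models in the record add engine maps, torque-converter characteristics, and gear ratios in place of the constant drive force.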
1703.04316
2950934519
This paper presents a novel technique that allows for both computationally fast and sufficiently plausible simulation of vehicles with non-deformable tracks. The method is based on an effect we have called Contact Surface Motion. A comparison with several other methods for simulation of tracked vehicle dynamics is presented with the aim to evaluate methods that are available off-the-shelf or with minimum effort in general-purpose robotics simulators. The proposed method is implemented as a plugin for the open-source physics-based simulator Gazebo using the Open Dynamics Engine.
Martínez et al. @cite_12 define virtual points called instantaneous centers of rotation (ICR), which depend on the desired turning radius and on a coefficient called steering efficiency. The robot follows a circular path centered at the ICR, and if the steering efficiency is equal to @math , the motion is the same as the motion of a geometrically equal differential-drive wheeled vehicle.
{ "cite_N": [ "@cite_12" ], "mid": [ "2027162577" ], "abstract": [ "In this paper we propose a kinematic approach for tracked mobile robots in order to improve motion control and pose estimation. Complex dynamics due to slippage and track-soil interactions make it difficult to predict the exact motion of the vehicle on the basis of track velocities. Nevertheless, real-time computations for autonomous navigation require an effective kinematics approximation without introducing dynamics in the loop. The proposed solution is based on the fact that the instantaneous centers of rotation (ICRs) of treads on the motion plane with respect to the vehicle are dynamics-dependent, but they lie within a bounded area. Thus, optimizing constant ICR positions for a particular terrain results in an approximate kinematic model for tracked mobile robots. Two different approaches are presented for off-line estimation of kinematic parameters: (i) simulation of the stationary response of the dynamic model for the whole velocity range of the vehicle; (ii) introduction of an experimental setup so that a genetic algorithm can produce the model from actual sensor readings. These methods have been evaluated for on-line odometric computations and low-level motion control with the Auriga-α mobile robot on a hard-surface flat soil at moderate speeds." ] }
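The steering-efficiency approximation in the record above can be sketched as a differential-drive model whose yaw rate is scaled down to mimic track slippage; the function names and numeric values here are illustrative assumptions.

```python
def track_to_body_velocity(v_left, v_right, width, steering_efficiency=1.0):
    """Approximate kinematics of a tracked/skid-steered vehicle as a
    differential drive whose angular rate is scaled by a steering
    efficiency in (0, 1]: efficiency 1 reproduces the ideal
    differential-drive model, lower values mimic slippage pushing the
    ICRs outwards and enlarging the effective turning radius."""
    v = (v_right + v_left) / 2.0                               # forward speed
    omega = steering_efficiency * (v_right - v_left) / width   # yaw rate
    return v, omega

def turning_radius(v, omega):
    return float("inf") if omega == 0 else v / omega

# Ideal vs. slipping vehicle at identical track speeds: slip leaves the
# forward speed unchanged but enlarges the turning radius.
v1, w1 = track_to_body_velocity(0.8, 1.2, width=0.5, steering_efficiency=1.0)
v2, w2 = track_to_body_velocity(0.8, 1.2, width=0.5, steering_efficiency=0.6)
```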
1703.04009
2951737564
A key challenge for automatic hate-speech detection on social media is the separation of hate speech from other instances of offensive language. Lexical detection methods tend to have low precision because they classify all messages containing particular terms as hate speech and previous work using supervised learning has failed to distinguish between the two categories. We used a crowd-sourced hate speech lexicon to collect tweets containing hate speech keywords. We use crowd-sourcing to label a sample of these tweets into three categories: those containing hate speech, only offensive language, and those with neither. We train a multi-class classifier to distinguish between these different categories. Close analysis of the predictions and the errors shows when we can reliably separate hate speech from other offensive language and when this differentiation is more difficult. We find that racist and homophobic tweets are more likely to be classified as hate speech but that sexist tweets are generally classified as offensive. Tweets without explicit hate keywords are also more difficult to classify.
Bag-of-words approaches tend to have high recall but lead to high rates of false positives since the presence of offensive words can lead to the misclassification of tweets as hate speech @cite_3 @cite_0 . Focusing on anti-black racism, Kwok and Wang find that 86% of the time the reason a tweet was categorized as racist was because it contained offensive words. Syntactic features have been leveraged to better identify the targets and intensity of hate speech, for example sentences where a relevant noun and verb occur (e.g. kill and Jews) @cite_9 , the POS trigram DT jewish NN @cite_8 , and the syntactic structure I <intensity> <user intent> <hate target> , e.g. I f*cking hate white people @cite_12 .
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_3", "@cite_0", "@cite_12" ], "mid": [ "78136081", "2181854537", "80056832", "1871142974", "2953180101" ], "abstract": [ "We present an approach to detecting hate speech in online text, where hate speech is defined as abusive speech targeting specific group characteristics, such as ethnic origin, religion, gender, or sexual orientation. While hate speech against any group may exhibit some common characteristics, we have observed that hatred against each different group is typically characterized by the use of a small set of high frequency stereotypical words; however, such words may be used in either a positive or a negative sense, making our task similar to that of words sense disambiguation. In this paper we describe our definition of hate speech, the collection and annotation of our hate speech corpus, and a mechanism for detecting some commonly used methods of evading common \"dirty word\" filters. We describe pilot classification experiments in which we classify anti-semitic speech reaching an accuracy 94 , precision of 68 and recall at 60 , for an F1 measure of. 6375.", "We explore the idea of creating a classifier that can be used to detect presence of hate speech in web discourses such as web forums and blogs. In this work, hate speech problem is abstracted into three main thematic areas of race, nationality and religion. The goal of our research is to create a model classifier that uses sentiment analysis techniques and in particular subjectivity detection to not only detect that a given sentence is subjective but also to identify and rate the polarity of sentiment expressions. We begin by whittling down the document size by removing objective sentences. Then, using subjectivity and semantic features related to hate speech, we create a lexicon that is employed to build a classifier for hate speech detection. 
Experiments with a hate corpus show significant practical application for a real-world web discourse.", "Although the social medium Twitter grants users freedom of speech, its instantaneous nature and retweeting features also amplify hate speech. Because Twitter has a sizeable black constituency, racist tweets against blacks are especially detrimental in the Twitter community, though this effect may not be obvious against a backdrop of half a billion tweets a day. We apply a supervised machine learning approach, employing inexpensively acquired labeled data from diverse Twitter accounts to learn a binary classifier for the labels \"racist\" and \"nonracist\" The classifier has a 76 average accuracy on individual tweets, suggesting that with further improvements, our work can contribute data on the sources of anti-black hate speech.", "The use of “Big Data” in policy and decision making is a current topic of debate. The 2013 murder of Drummer Lee Rigby in Woolwich, London, UK led to an extensive public reaction on social media, providing the opportunity to study the spread of online hate speech (cyber hate) on Twitter. Human annotated Twitter data was collected in the immediate aftermath of Rigby's murder to train and test a supervised machine learning text classifier that distinguishes between hateful and or antagonistic responses with a focus on race, ethnicity, or religion; and more general responses. Classification features were derived from the content of each tweet, including grammatical dependencies between words to recognize “othering” phrases, incitement to respond with antagonistic action, and claims of well-founded or justified discrimination against social groups. The results of the classifier were optimal using a combination of probabilistic, rule-based, and spatial-based classifiers with a voted ensemble meta-classifier. 
We demonstrate how the results of the classifier can be robustly utilized in a statistical model used to forecast the likely spread of cyber hate in a sample of Twitter data. The applications to policy and decision making are discussed.", "Social media systems allow Internet users a congenial platform to freely express their thoughts and opinions. Although this property represents incredible and unique communication opportunities, it also brings along important challenges. Online hate speech is an archetypal example of such challenges. Despite its magnitude and scale, there is a significant gap in understanding the nature of hate speech on social media. In this paper, we provide the first of a kind systematic large scale measurement study of the main targets of hate speech in online social media. To do that, we gather traces from two social media systems: Whisper and Twitter. We then develop and validate a methodology to identify hate speech on both these systems. Our results identify online hate speech forms and offer a broader understanding of the phenomenon, providing directions for prevention and detection approaches." ] }
1703.04009
2951737564
A key challenge for automatic hate-speech detection on social media is the separation of hate speech from other instances of offensive language. Lexical detection methods tend to have low precision because they classify all messages containing particular terms as hate speech and previous work using supervised learning has failed to distinguish between the two categories. We used a crowd-sourced hate speech lexicon to collect tweets containing hate speech keywords. We use crowd-sourcing to label a sample of these tweets into three categories: those containing hate speech, only offensive language, and those with neither. We train a multi-class classifier to distinguish between these different categories. Close analysis of the predictions and the errors shows when we can reliably separate hate speech from other offensive language and when this differentiation is more difficult. We find that racist and homophobic tweets are more likely to be classified as hate speech but that sexist tweets are generally classified as offensive. Tweets without explicit hate keywords are also more difficult to classify.
Other supervised approaches to hate speech classification have unfortunately conflated hate speech with offensive language, making it difficult to ascertain the extent to which they are really identifying hate speech @cite_0 @cite_13 . Neural language models show promise in the task, but existing work has used training data with a similarly broad definition of hate speech @cite_1 . Non-linguistic features such as the gender or ethnicity of the author can help improve hate speech classification, but this information is often unavailable or unreliable on social media @cite_13 .
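The multi-class setup described in the abstract above (three labels: hate speech, offensive language, neither) can be sketched with a bag-of-words Naive Bayes classifier. This is a minimal illustration under invented toy data; the cited studies use richer features and stronger models.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label). Returns (log_priors, word_log_probs, vocab)."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    priors = {c: math.log(n / len(docs)) for c, n in class_counts.items()}
    log_probs = {}
    for c in class_counts:
        total = sum(word_counts[c].values())
        # Laplace smoothing so unseen (class, word) pairs get nonzero mass.
        log_probs[c] = {w: math.log((word_counts[c][w] + 1) / (total + len(vocab)))
                        for w in vocab}
    return priors, log_probs, vocab

def predict(tokens, priors, log_probs, vocab):
    """Return the label maximizing the Naive Bayes log-posterior."""
    scores = {c: priors[c] + sum(log_probs[c][w] for w in tokens if w in vocab)
              for c in priors}
    return max(scores, key=scores.get)
```

The toy labels here mirror the paper's three categories, but the actual classifier in the cited work is trained on crowd-labeled tweets with many more features.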
{ "cite_N": [ "@cite_0", "@cite_1", "@cite_13" ], "mid": [ "1871142974", "1071251684", "2473555522" ], "abstract": [ "The use of “Big Data” in policy and decision making is a current topic of debate. The 2013 murder of Drummer Lee Rigby in Woolwich, London, UK led to an extensive public reaction on social media, providing the opportunity to study the spread of online hate speech (cyber hate) on Twitter. Human annotated Twitter data was collected in the immediate aftermath of Rigby's murder to train and test a supervised machine learning text classifier that distinguishes between hateful and or antagonistic responses with a focus on race, ethnicity, or religion; and more general responses. Classification features were derived from the content of each tweet, including grammatical dependencies between words to recognize “othering” phrases, incitement to respond with antagonistic action, and claims of well-founded or justified discrimination against social groups. The results of the classifier were optimal using a combination of probabilistic, rule-based, and spatial-based classifiers with a voted ensemble meta-classifier. We demonstrate how the results of the classifier can be robustly utilized in a statistical model used to forecast the likely spread of cyber hate in a sample of Twitter data. The applications to policy and decision making are discussed.", "We address the problem of hate speech detection in online user comments. Hate speech, defined as an \"abusive speech targeting specific group characteristics, such as ethnicity, religion, or gender\", is an important problem plaguing websites that allow users to leave feedback, having a negative impact on their online business and overall user experience. We propose to learn distributed low-dimensional representations of comments using recently proposed neural language models, that can then be fed as inputs to a classification algorithm. 
Our approach addresses issues of high-dimensionality and sparsity that impact the current state-of-the-art, resulting in highly efficient and effective hate speech detectors.", "Hate speech in the form of racist and sexist remarks are a common occurrence on social media. For that reason, many social media services address the problem of identifying hate speech, but the definition of hate speech varies markedly and is largely a manual effort (BBC, 2015; Lomas, 2015). We provide a list of criteria founded in critical race theory, and use them to annotate a publicly available corpus of more than 16k tweets. We analyze the impact of various extra-linguistic features in conjunction with character n-grams for hatespeech detection. We also present a dictionary based the most indicative words in our data." ] }
1703.04105
2596627958
We propose an end-to-end deep learning architecture for word-level visual speech recognition. The system is a combination of spatiotemporal convolutional, residual and bidirectional Long Short-Term Memory networks. We train and evaluate it on the Lipreading In-The-Wild benchmark, a challenging database of 500-size target-words consisting of 1.28sec video excerpts from BBC TV broadcasts. The proposed network attains word accuracy equal to 83.0%, yielding 6.8% absolute improvement over the current state-of-the-art, without using information about word boundaries during training or testing.
Prior to the advent of deep learning ( @cite_7 ), most of the work in lipreading was based on hand-engineered features, which were usually modeled by an HMM-based pipeline @cite_2 @cite_27 @cite_6 @cite_16 @cite_29 . Spatiotemporal descriptors such as active appearance models and optical flow, combined with SVM classifiers, have also been proposed @cite_14 . For an analytic review of traditional lipreading methods we refer to @cite_8 and @cite_26 . More recent works deploy deep learning methods either for extracting "deep" features ( @cite_23 @cite_19 @cite_20 ) or for building end-to-end architectures. In @cite_18 , Deep Belief Networks were deployed for audio-visual recognition, reducing word error rate by as much as 21% relative to a baseline multi-stream audio-visual GMM-HMM system.
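The HMM-based pipelines mentioned above score feature sequences by dynamic programming. A minimal Viterbi decoder over a toy two-state model makes the mechanism concrete; every probability below is invented for illustration, not taken from the cited systems.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden state sequence for an observation sequence."""
    # best[t][s] = (probability of best path ending in state s at time t, that path)
    best = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        layer = {}
        for s in states:
            prob, path = max(
                (best[-1][p][0] * trans_p[p][s] * emit_p[s][o], best[-1][p][1])
                for p in states
            )
            layer[s] = (prob, path + [s])
        best.append(layer)
    return max(best[-1].values())[1]
```

Real lipreading HMMs operate on continuous visual feature vectors with Gaussian mixture emissions rather than a discrete toy alphabet, but the decoding recursion is the same.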
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_26", "@cite_7", "@cite_8", "@cite_29", "@cite_6", "@cite_19", "@cite_27", "@cite_23", "@cite_2", "@cite_16", "@cite_20" ], "mid": [ "2022799064", "", "", "2160815625", "", "2142075667", "2096391593", "2579335913", "2106137268", "2076462394", "", "", "2398406965" ], "abstract": [ "Deep belief networks (DBN) have shown impressive improvements over Gaussian mixture models for automatic speech recognition. In this work we use DBNs for audio-visual speech recognition; in particular, we use deep learning from audio and visual features for noise robust speech recognition. We test two methods for using DBNs in a multimodal setting: a conventional decision fusion method that combines scores from single-modality DBNs, and a novel feature fusion method that operates on mid-level features learned by the single-modality DBNs. On a continuously spoken digit recognition task, our experiments show that these methods can reduce word error rate by as much as 21 relative over a baseline multi-stream audio-visual GMM HMM system.", "", "", "Most current speech recognition systems use hidden Markov models (HMMs) to deal with the temporal variability of speech and Gaussian mixture models (GMMs) to determine how well each state of each HMM fits a frame or a short window of frames of coefficients that represents the acoustic input. An alternative way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition benchmarks, sometimes by a large margin. 
This article provides an overview of this progress and represents the shared views of four research groups that have had recent successes in using DNNs for acoustic modeling in speech recognition.", "", "While the accuracy of feature measurements heavily depends on changing environmental conditions, studying the consequences of this fact in pattern recognition tasks has received relatively little attention to date. In this paper, we explicitly take feature measurement uncertainty into account and show how multimodal classification and learning rules should be adjusted to compensate for its effects. Our approach is particularly fruitful in multimodal fusion scenarios, such as audiovisual speech recognition, where multiple streams of complementary time-evolving features are integrated. For such applications, provided that the measurement noise uncertainty for each feature stream can be estimated, the proposed framework leads to highly adaptive multimodal fusion rules which are easy and efficient to implement. Our technique is widely applicable and can be transparently integrated with either synchronous or asynchronous multimodal sequence integration architectures. We further show that multimodal fusion methods relying on stream weights can naturally emerge from our scheme under certain assumptions; this connection provides valuable insights into the adaptivity properties of our multimodal uncertainty compensation approach. We show how these ideas can be practically applied for audiovisual speech recognition. In this context, we propose improved techniques for person-independent visual feature extraction and uncertainty estimation with active appearance models, and also discuss how enhanced audio features along with their uncertainty estimates can be effectively computed. 
We demonstrate the efficacy of our approach in audiovisual speech recognition experiments on the CUAVE database using either synchronous or asynchronous multimodal integration models.", "Visual speech information from the speaker's mouth region has been successfully shown to improve noise robustness of automatic speech recognizers, thus promising to extend their usability in the human computer interface. In this paper, we review the main components of audiovisual automatic speech recognition (ASR) and present novel contributions in two main areas: first, the visual front-end design, based on a cascade of linear image transforms of an appropriate video region of interest, and subsequently, audiovisual speech integration. On the latter topic, we discuss new work on feature and decision fusion combination, the modeling of audiovisual speech asynchrony, and incorporating modality reliability estimates to the bimodal recognition process. We also briefly touch upon the issue of audiovisual adaptation. We apply our algorithms to three multisubject bimodal databases, ranging from small- to large-vocabulary recognition tasks, recorded in both visually controlled and challenging environments. Our experiments demonstrate that the visual modality improves ASR over all conditions and data considered, though less so for visually challenging environments and large vocabulary tasks.", "This paper presents preliminary experiments using the Kaldi toolkit to investigate audiovisual speech recognition (AVSR) in noisy environments using deep neural networks (DNNs). In particular we use a single-speaker large vocabulary, continuous audiovisual speech corpus to compare the performance of visual-only, audio-only and audiovisual speech recognition. The models trained using the Kaldi toolkit are compared with the performance of models trained using conventional hidden Markov models (HMMs). 
In addition, we compare the performance of a speech recognizer both with and without visual features over nine different SNR levels of babble noise ranging from 20dB down to -20dB. The results show that the DNN outperforms conventional HMMs in all experimental conditions, especially for the lip-reading only system, which achieves a gain of 37.19 accuracy (84.67 absolute word accuracy). Moreover, the DNN provides an effective improvement of 10 and 12dB SNR respectively for both the single modal and bimodal speech recognition systems. However, integrating the visual features using simple feature fusion is only effective in SNRs at 5dB and above. Below this the degradion in accuracy of an audiovisual system is similar to the audio only recognizer. Index Terms: lip-reading, speech reading, audiovisual speech recognition", "We have designed and implemented a lipreading system that recognizes isolated words using only color video of human lips (without acoustic data). The system performs video recognition using \"snakes\" to extract visual features of geometric space, Karhunen-Loeve transform (KLT) to extract principal components in the color eigenspace, and hidden Markov models (HMM's) to recognize the combined visual features sequences. With the visual information alone, we were able to achieve 94 accuracy for ten isolated words.", "Audio-visual speech recognition (AVSR) system is thought to be one of the most promising solutions for reliable speech recognition, particularly when the audio is corrupted by noise. However, cautious selection of sensory features is crucial for attaining high recognition performance. In the machine-learning community, deep learning approaches have recently attracted increasing attention because deep neural networks can effectively extract robust latent features that enable various recognition algorithms to demonstrate revolutionary generalization capabilities under diverse application conditions. 
This study introduces a connectionist-hidden Markov model (HMM) system for noise-robust AVSR. First, a deep denoising autoencoder is utilized for acquiring noise-robust audio features. By preparing the training data for the network with pairs of consecutive multiple steps of deteriorated audio features and the corresponding clean features, the network is trained to output denoised audio features from the corresponding features deteriorated by noise. Second, a convolutional neural network (CNN) is utilized to extract visual features from raw mouth area images. By preparing the training data for the CNN as pairs of raw images and the corresponding phoneme label outputs, the network is trained to predict phoneme labels from the corresponding mouth area input images. Finally, a multi-stream HMM (MSHMM) is applied for integrating the acquired audio and visual HMMs independently trained with the respective features. By comparing the cases when normal and denoised mel-frequency cepstral coefficients (MFCCs) are utilized as audio features to the HMM, our unimodal isolated word recognition results demonstrate that approximately 65 word recognition rate gain is attained with denoised MFCCs under 10 dB signal-to-noise-ratio (SNR) for the audio signal input. Moreover, our multimodal isolated word recognition results utilizing MSHMM with denoised MFCCs and acquired visual features demonstrate that an additional word recognition rate gain is attained for the SNR conditions below 10 dB.", "", "", "Recent improvements in tracking and feature extraction mean that speaker-dependent lip-reading of continuous speech using a medium size vocabulary (around 1000 words) is realistic. However, the recognition of previously unseen speakers has been found to be a very challenging task, because of the large variation in lip-shapes across speakers and the lack of large, tracked databases of visual features, which are very expensive to produce. 
By adapting a technique that is established in speech recognition but has not previously been used in lip-reading, we show that error-rates for speaker-independent lip-reading can be very significantly reduced. Furthermore, we show that error-rates can be even further reduced by the additional use of Deep Neural Networks (DNN). We also find that there is no need to map phonemes to visemes for context-dependent visual speech transcription." ] }
1703.04309
2951179855
We propose a novel deep learning architecture for regressing disparity from a rectified pair of stereo images. We leverage knowledge of the problem's geometry to form a cost volume using deep feature representations. We learn to incorporate contextual information using 3-D convolutions over this volume. Disparity values are regressed from the cost volume using a proposed differentiable soft argmin operation, which allows us to train our method end-to-end to sub-pixel accuracy without any additional post-processing or regularization. We evaluate our method on the Scene Flow and KITTI datasets and on KITTI we set a new state-of-the-art benchmark, while being significantly faster than competing approaches.
The matching cost is a measure of pixel dissimilarity for potentially corresponding image locations @cite_22 ; absolute differences, squared differences, and truncated differences are examples. Local descriptors based on gradients @cite_44 or binary patterns, such as CENSUS @cite_13 or BRIEF @cite_39 @cite_1 , can also be employed. Instead of aggregating neighboring pixels equally, as patch-based matching costs do, content-aware approaches weight more heavily those neighboring pixels with similar appearance, under the assumption that they are more likely to come from the same surface and disparity. A survey of these techniques is provided by @cite_27 . Local matching costs may also be optimized within a global framework, usually by minimizing an energy function that combines a local data term and a pairwise smoothness term. Global optimization can be accomplished using graph cuts @cite_16 or belief propagation @cite_18 , which can be extended to slanted surfaces @cite_45 . A popular and effective approximation to global optimization is the semi-global matching (SGM) of Hirschmüller @cite_33 , where dynamic programming optimizes a pathwise form of the energy function in many directions.
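Two of the matching costs named above can be sketched on toy one-dimensional intensity rows; real implementations operate on 2-D windows, and the signals here are invented. The sketch also shows the census cost's robustness to radiometric change, one of the reasons such costs were proposed.

```python
def abs_diff_cost(left, right, x, d):
    """Absolute intensity difference between left[x] and its candidate match right[x-d]."""
    return abs(left[x] - right[x - d])

def census_1d(signal, x):
    """Tiny 1-D census: bit i records whether neighbour i is darker than the centre."""
    bits = 0
    for i, dx in enumerate((-1, 1)):
        if signal[x + dx] < signal[x]:
            bits |= 1 << i
    return bits

def census_cost(left, right, x, d):
    """Hamming distance between census signatures; invariant to monotone intensity shifts."""
    return bin(census_1d(left, x) ^ census_1d(right, x - d)).count("1")
```

If the right row is a brightened copy of the left, the absolute-difference cost at the true disparity grows with the brightness offset, while the census cost stays zero because it depends only on the ordering of intensities.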
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_33", "@cite_1", "@cite_39", "@cite_44", "@cite_27", "@cite_45", "@cite_16", "@cite_13" ], "mid": [ "2112421488", "2133255058", "", "1562835991", "1491719799", "1912649600", "", "1964057156", "", "1674866864" ], "abstract": [ "A novel stereo matching algorithm is proposed that utilizes color segmentation on the reference image and a self-adapting matching score that maximizes the number of reliable correspondences. The scene structure is modeled by a set of planar surface patches which are estimated using a new technique that is more robust to outliers. Instead of assigning a disparity value to each pixel, a disparity plane is assigned to each segment. The optimal disparity plane labeling is approximated by applying belief propagation. Experimental results using the Middlebury stereo test bed demonstrate the superior performance of the proposed method", "Stereo correspondence methods rely on matching costs for computing the similarity of image locations. In this paper we evaluate the insensitivity of different matching costs with respect to radiometric variations of the input images. We consider both pixel-based and window-based variants and measure their performance in the presence of global intensity changes (e.g., due to gain and exposure differences), local intensity changes (e.g., due to vignetting, non-Lambertian surfaces, and varying lighting), and noise. Using existing stereo datasets with ground-truth disparities as well as six new datasets taken under controlled changes of exposure and lighting, we evaluate the different costs with a local, a semi-global, and a global stereo method.", "", "The stereo correspondence problem is still a highly active topic of research with many applications in the robotic domain. Still many state of the art algorithms proposed to date are unable to reasonably handle high resolution images due to their run time complexities or memory requirements. 
In this work we propose a novel stereo correspondence estimation algorithm that employs binary locality sensitive hashing and is well suited to implementation on the GPU. Our proposed method is capable of processing very high-resolution stereo images at near real-time rates. An evaluation on the new Middlebury and Disney high-resolution stereo benchmarks demonstrates that our proposed method performs well compared to existing state of the art algorithms.", "We propose to use binary strings as an efficient feature point descriptor, which we call BRIEF. We show that it is highly discriminative even when using relatively few bits and can be computed using simple intensity difference tests. Furthermore, the descriptor similarity can be evaluated using the Hamming distance, which is very efficient to compute, instead of the L2 norm as is usually done. As a result, BRIEF is very fast both to build and to match. We compare it against SURF and U-SURF on standard benchmarks and show that it yields a similar or better recognition performance, while running in a fraction of the time required by either.", "In this paper we propose a novel approach to binocular stereo for fast matching of high-resolution images. Our approach builds a prior on the disparities by forming a triangulation on a set of support points which can be robustly matched, reducing the matching ambiguities of the remaining points. This allows for efficient exploitation of the disparity search space, yielding accurate dense reconstruction without the need for global optimization. Moreover, our method automatically determines the disparity range and can be easily parallelized. We demonstrate the effectiveness of our approach on the large-scale Middlebury benchmark, and show that state-of-the-art performance can be achieved with significant speedups. 
Computing the left and right disparity maps for a one Megapixel image pair takes about one second on a single CPU core.", "", "", "", "We propose a new approach to the correspondence problem that makes use of non-parametric local transforms as the basis for correlation. Non-parametric local transforms rely on the relative ordering of local intensity values, and not on the intensity values themselves. Correlation using such transforms can tolerate a significant number of outliers. This can result in improved performance near object boundaries when compared with conventional methods such as normalized correlation. We introduce two non-parametric local transforms: the rank transform, which measures local intensity, and the census transform, which summarizes local image structure. We describe some properties of these transforms, and demonstrate their utility on both synthetic and real data." ] }
1703.04309
2951179855
We propose a novel deep learning architecture for regressing disparity from a rectified pair of stereo images. We leverage knowledge of the problem's geometry to form a cost volume using deep feature representations. We learn to incorporate contextual information using 3-D convolutions over this volume. Disparity values are regressed from the cost volume using a proposed differentiable soft argmin operation, which allows us to train our method end-to-end to sub-pixel accuracy without any additional post-processing or regularization. We evaluate our method on the Scene Flow and KITTI datasets and on KITTI we set a new state-of-the-art benchmark, while being significantly faster than competing approaches.
Deep convolutional neural networks can be trained to match image patches @cite_19 . A deep network trained to match @math image patches, followed by non-learned cost aggregation and regularization, was shown by Žbontar and LeCun @cite_28 @cite_40 to produce then state-of-the-art results. A notably faster network was presented in @cite_49 , which computes local matching costs as a multi-label classification over disparities using a Siamese network. A multi-scale embedding model from @cite_50 also provided good local matching scores. Also noteworthy is the work of @cite_8 , which learns a cost volume combined with a separate conditional color model to predict novel viewpoints in a multi-view stereo setting.
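The differentiable soft argmin described in the abstract above regresses disparity as the probability-weighted average over candidate disparities, with probabilities given by a softmax over negated costs. A minimal sketch in plain Python (no deep learning framework, and without the network that produces the costs):

```python
import math

def soft_argmin(costs):
    """costs[d] = matching cost for integer disparity d; returns a sub-pixel disparity.

    Differentiable stand-in for argmin: softmax(-cost) weights each disparity.
    """
    exps = [math.exp(-c) for c in costs]
    z = sum(exps)
    return sum(d * e / z for d, e in enumerate(exps))
```

Because the output is a smooth function of the costs, gradients can flow through it during end-to-end training, and the weighted average naturally yields sub-pixel estimates between integer disparities.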
{ "cite_N": [ "@cite_8", "@cite_28", "@cite_19", "@cite_40", "@cite_50", "@cite_49" ], "mid": [ "2952809312", "2144041313", "2949213045", "2963502507", "2214868166", "2440384215" ], "abstract": [ "Deep networks have recently enjoyed enormous success when applied to recognition and classification problems in computer vision, but their use in graphics problems has been limited. In this work, we present a novel deep architecture that performs new view synthesis directly from pixels, trained from a large number of posed image sets. In contrast to traditional approaches which consist of multiple complex stages of processing, each of which require careful tuning and can fail in unexpected ways, our system is trained end-to-end. The pixels from neighboring views of a scene are presented to the network which then directly produces the pixels of the unseen view. The benefits of our approach include generality (we only require posed image sets and can easily apply our method to different domains), and high quality results on traditionally difficult scenes. We believe this is due to the end-to-end nature of our system which is able to plausibly generate pixels according to color, depth, and texture priors learnt automatically from the training data. To verify our method we show that it can convincingly reproduce known test views from nearby imagery. Additionally we show images rendered from novel viewpoints. To our knowledge, our work is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery.", "We present a method for extracting depth information from a rectified image pair. We train a convolutional neural network to predict how well two image patches match and use it to compute the stereo matching cost. The cost is refined by cross-based cost aggregation and semiglobal matching, followed by a left-right consistency check to eliminate errors in the occluded regions. 
Our stereo method achieves an error rate of 2.61 on the KITTI stereo dataset and is currently (August 2014) the top performing method on this dataset.", "In this paper we show how to learn directly from image data (i.e., without resorting to manually-designed features) a general similarity function for comparing image patches, which is a task of fundamental importance for many computer vision problems. To encode such a function, we opt for a CNN-based model that is trained to account for a wide variety of changes in image appearance. To that end, we explore and study multiple neural network architectures, which are specifically adapted to this task. We show that such an approach can significantly outperform the state-of-the-art on several problems and benchmark datasets.", "We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.", "This paper presents a data-driven matching cost for stereo matching. 
A novel deep visual correspondence embedding model is trained via Convolutional Neural Network on a large set of stereo images with ground truth disparities. This deep embedding model leverages appearance data to learn visual similarity relationships between corresponding image patches, and explicitly maps intensity values into an embedding feature space to measure pixel dissimilarities. Experimental results on KITTI and Middlebury data sets demonstrate the effectiveness of our model. First, we prove that the new measure of pixel dissimilarity outperforms traditional matching costs. Furthermore, when integrated with a global stereo framework, our method ranks top 3 among all two-frame algorithms on the KITTI benchmark. Finally, cross-validation results show that our model is able to make correct predictions for unseen data which are outside of its labeled training set.", "In the past year, convolutional neural networks have been shown to perform extremely well for stereo estimation. However, current architectures rely on siamese networks which exploit concatenation followed by further processing layers, requiring a minute of GPU computation per image pair. In contrast, in this paper we propose a matching network which is able to produce very accurate results in less than a second of GPU computation. Towards this goal, we exploit a product layer which simply computes the inner product between the two representations of a siamese architecture. We train our network by treating the problem as multi-class classification, where the classes are all possible disparities. This allows us to get calibrated scores, which result in much better matching performance when compared to existing approaches." ] }
1703.04309
2951179855
We propose a novel deep learning architecture for regressing disparity from a rectified pair of stereo images. We leverage knowledge of the problem's geometry to form a cost volume using deep feature representations. We learn to incorporate contextual information using 3-D convolutions over this volume. Disparity values are regressed from the cost volume using a proposed differentiable soft argmin operation, which allows us to train our method end-to-end to sub-pixel accuracy without any additional post-processing or regularization. We evaluate our method on the Scene Flow and KITTI datasets and on KITTI we set a new state-of-the-art benchmark, while being significantly faster than competing approaches.
created a large synthetic dataset to train a network for disparity estimation (as well as optical flow) @cite_3 , improving the state-of-the-art. As one variant of the network, a 1-D correlation was proposed along the disparity line, which is a multiplicative approximation to the stereo cost volume. In addition, this volume is concatenated with convolutional features from a single image and succeeded by a series of further convolutions. In contrast, our work does not collapse the feature dimension when computing the cost volume and uses 3-D convolutions to incorporate context.
{ "cite_N": [ "@cite_3" ], "mid": [ "2259424905" ], "abstract": [ "Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network." ] }
1703.04309
2951179855
We propose a novel deep learning architecture for regressing disparity from a rectified pair of stereo images. We leverage knowledge of the problem's geometry to form a cost volume using deep feature representations. We learn to incorporate contextual information using 3-D convolutions over this volume. Disparity values are regressed from the cost volume using a proposed differentiable soft argmin operation, which allows us to train our method end-to-end to sub-pixel accuracy without any additional post-processing or regularization. We evaluate our method on the Scene Flow and KITTI datasets and on KITTI we set a new state-of-the-art benchmark, while being significantly faster than competing approaches.
Though the focus of this work is on binocular stereo, it is worth noting that the representational power of deep convolutional networks also enables depth estimation from a single monocular image @cite_35 . Deep learning is combined with a continuous CRF by @cite_47 . Instead of supervising training with labeled ground truth, unlabeled stereo pairs can be used to train a monocular model @cite_24 .
{ "cite_N": [ "@cite_24", "@cite_35", "@cite_47" ], "mid": [ "2300779272", "2951234442", "1803059841" ], "abstract": [ "A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manually labelled data. In this work we propose a unsupervised framework to learn a deep convolutional neural network for single view depth prediction, without requiring a pre-training stage or annotated ground-truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photometric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset gives comparable performance to that of the state-of-the-art supervised methods for single view depth estimation.", "Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. 
In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation.", "In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speedup the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. 
Our proposed method can be used for depth estimation of general scenes with no geometric priors nor any extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches." ] }
1703.03861
2604200214
Wikidata, like Wikipedia, is a knowledge base that anyone can edit. This open collaboration model is powerful in that it reduces barriers to participation and allows a large number of people to contribute. However, it exposes the knowledge base to the risk of vandalism and low-quality contributions. In this work, we build on past work detecting vandalism in Wikipedia to detect vandalism in Wikidata. This work is novel in that identifying damaging changes in a structured knowledge-base requires substantially different feature engineering work than in a text-based wiki like Wikipedia. We also discuss the utility of these classifiers for reducing the overall workload of vandalism patrollers in Wikidata. We describe a machine classification strategy that is able to catch 89% of vandalism while reducing patrollers' workload by 98%, by drawing lightly from contextual features of an edit and heavily from the characteristics of the user making the edit.
built the first automated quality prediction models for Wikipedia that were able to distinguish between Featured (highest quality classification) and non-Featured articles @cite_7 . Warncke- extended this work by showing that the features used in prediction could be limited to characteristics of articles in Wikipedia and maintain a high level of fitness @cite_0 , and used these predictions in task routing.
{ "cite_N": [ "@cite_0", "@cite_7" ], "mid": [ "2065167558", "9825390" ], "abstract": [ "In this paper we address the problem of developing actionable quality models for Wikipedia, models whose features directly suggest strategies for improving the quality of a given article. We first survey the literature in order to understand the notion of article quality in the context of Wikipedia and existing approaches to automatically assess article quality. We then develop classification models with varying combinations of more or less actionable features, and find that a model that only contains clearly actionable features delivers solid performance. Lastly we discuss the implications of these results in terms of how they can help improve the quality of articles across Wikipedia.", "Effective information quality analysis needs powerful yet easy ways to obtain metrics. The English version of Wikipedia provides an extremely interesting yet challenging case for the study of Information Quality dynamics at both macro and micro levels. We propose seven IQ metrics which can be evaluated automatically and test the set on a representative sample of Wikipedia content. The methodology of the metrics construction and the results of tests, along with a number of statistical characterizations of Wikipedia articles, their content construction, process metadata and social context are reported." ] }
1703.03861
2604200214
Wikidata, like Wikipedia, is a knowledge base that anyone can edit. This open collaboration model is powerful in that it reduces barriers to participation and allows a large number of people to contribute. However, it exposes the knowledge base to the risk of vandalism and low-quality contributions. In this work, we build on past work detecting vandalism in Wikipedia to detect vandalism in Wikidata. This work is novel in that identifying damaging changes in a structured knowledge-base requires substantially different feature engineering work than in a text-based wiki like Wikipedia. We also discuss the utility of these classifiers for reducing the overall workload of vandalism patrollers in Wikidata. We describe a machine classification strategy that is able to catch 89% of vandalism while reducing patrollers' workload by 98%, by drawing lightly from contextual features of an edit and heavily from the characteristics of the user making the edit.
explored the process by which articles improve most efficiently and found that articles with a small group of highly active editors and a large group of less active editors were more likely to increase in quality than articles whose editors contributed more evenly @cite_10 . They argued that this is due to the lower coordination cost when few people are primarily engaged in the construction of an article. These conclusions were challenged by work showing a strong correlation between the diversity of experience (global inequality) among active editors and positive changes in article quality @cite_17 . The visibility of articles in Wikipedia seems to be critical to their development: hiding newly created articles from Wikipedia readers in a drafting space substantially reduced the overall productivity of editors in Wikipedia @cite_23 .
{ "cite_N": [ "@cite_10", "@cite_23", "@cite_17" ], "mid": [ "2065100127", "2008657810", "1988286798" ], "abstract": [ "Wikipedia's success is often attributed to the large numbers of contributors who improve the accuracy, completeness and clarity of articles while reducing bias. However, because of the coordination needed to write an article collaboratively, adding contributors is costly. We examined how the number of editors in Wikipedia and the coordination methods they use affect article quality. We distinguish between explicit coordination, in which editors plan the article through communication, and implicit coordination, in which a subset of editors structure the work by doing the majority of it. Adding more editors to an article improved article quality only when they used appropriate coordination techniques and was harmful when they did not. Implicit coordination through concentrating the work was more helpful when many editors contributed, but explicit coordination through communication was not. Both types of coordination improved quality more when an article was in a formative stage. These results demonstrate the critical importance of coordination in effectively harnessing the \"wisdom of the crowd\" in online production environments.", "Wikipedia needs to attract and retain newcomers while also increasing the quality of its content. Yet new Wikipedia users are disproportionately affected by the quality assurance mechanisms designed to thwart spammers and promoters. English Wikipedia's Articles for Creation provides a protected space for drafting new articles, which are reviewed against minimum quality guidelines before they are published. In this study we explore how this drafting process has affected the productivity of newcomers in Wikipedia. 
Using a mixed qualitative and quantitative approach, we show how the process's pre-publication review, which is intended to improve the success of newcomers, in fact decreases newcomer productivity in English Wikipedia and offer recommendations for system designers.", "The success of Wikipedia and the relative high quality of its articles seem to contradict conventional wisdom. Recent studies have begun shedding light on the processes contributing to Wikipedia's success, highlighting the role of coordination and contribution inequality. In this study, we expand on these works in two ways. First, we make a distinction between global (Wikipedia-wide) and local (article-specific) inequality and investigate both constructs. Second, we explore both direct and indirect effects of these inequalities, exposing the intricate relationships between global inequality, local inequality, coordination, and article quality. We tested our hypotheses on a sample of a Wikipedia articles using structural equation modeling and found that global inequality exerts significant positive impact on article quality, while the effect of local inequality is indirect and is mediated by coordination" ] }
1703.03861
2604200214
Wikidata, like Wikipedia, is a knowledge base that anyone can edit. This open collaboration model is powerful in that it reduces barriers to participation and allows a large number of people to contribute. However, it exposes the knowledge base to the risk of vandalism and low-quality contributions. In this work, we build on past work detecting vandalism in Wikipedia to detect vandalism in Wikidata. This work is novel in that identifying damaging changes in a structured knowledge-base requires substantially different feature engineering work than in a text-based wiki like Wikipedia. We also discuss the utility of these classifiers for reducing the overall workload of vandalism patrollers in Wikidata. We describe a machine classification strategy that is able to catch 89% of vandalism while reducing patrollers' workload by 98%, by drawing lightly from contextual features of an edit and heavily from the characteristics of the user making the edit.
Regarding vandalism detection, prior work has studied the demography of vandalism in Wikidata @cite_2 , showing interesting dynamics in how and by whom Wikidata is vandalized. For example, most of the vandals in Wikidata had previously vandalized Wikipedia @cite_2 . As far as we can tell, our work is the first published vandalism detection classifier for Wikidata.
{ "cite_N": [ "@cite_2" ], "mid": [ "1978523792" ], "abstract": [ "We report on the construction of the Wikidata Vandalism Corpus WDVC-2015, the first corpus for vandalism in knowledge bases. Our corpus is based on the entire revision history of Wikidata, the knowledge base underlying Wikipedia. Among Wikidata's 24 million manual revisions, we have identified more than 100,000 cases of vandalism. An in-depth corpus analysis lays the groundwork for research and development on automatic vandalism detection in public knowledge bases. Our analysis shows that 58% of the vandalism revisions can be found in the textual portions of Wikidata, and the remainder in structural content, e.g., subject-predicate-object triples. Moreover, we find that some vandals also target Wikidata content whose manipulation may impact content displayed on Wikipedia, revealing potential vulnerabilities. Given today's importance of knowledge bases for information systems, this shows that public knowledge bases must be used with caution." ] }
1703.03868
2950212263
It is well-known that any admissible unidirectional heuristic search algorithm must expand all states whose @math -value is smaller than the optimal solution cost when using a consistent heuristic. Such states are called "surely expanded" (s.e.). A recent study characterized s.e. pairs of states for bidirectional search with consistent heuristics: if a pair of states is s.e. then at least one of the two states must be expanded. This paper derives a lower bound, VC, on the minimum number of expansions required to cover all s.e. pairs, and present a new admissible front-to-end bidirectional heuristic search algorithm, Near-Optimal Bidirectional Search (NBS), that is guaranteed to do no more than 2VC expansions. We further prove that no admissible front-to-end algorithm has a worst case better than 2VC. Experimental results show that NBS competes with or outperforms existing bidirectional search algorithms, and often outperforms A* as well.
Bidirectional search has a long history, beginning with bidirectional brute force search @cite_1 , and proceeding to heuristic search algorithms such as BHPA @cite_8 . Other notable algorithms include BS* @cite_7 , which avoids re-expanding states in both directions, and MM @cite_0 , which ensures that the search frontiers meet in the middle. Along with these algorithms there have been explanations for the poor performance of bidirectional heuristic search, including that the frontiers miss @cite_10 or that the frontiers meet early, and a long time is spent proving the optimal solution @cite_4 . Recent work has refined this, showing that with strong heuristics the frontiers meet later @cite_6 .
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_8", "@cite_1", "@cite_6", "@cite_0", "@cite_10" ], "mid": [ "1539048696", "1971267722", "", "1965469937", "1881819129", "2566652907", "" ], "abstract": [ "The assessment of bidirectional heuristic search has been incorrect since it was first published more than a quarter of a century ago. For quite a long time, this search strategy did not achieve the expected results, and there was a major misunderstanding about the reasons behind it. Although there is still wide-spread belief that bidirectional heuristic search is afflicted by the problem of search frontiers passing each other, we demonstrate that this conjecture is wrong. Based on this finding, we present both a new generic approach to bidirectional heuristic search and a new approach to dynamically improving heuristic values that is feasible in bidirectional search only. These approaches are put into perspective with both the traditional and more recently proposed approaches in order to facilitate a better overall understanding. Empirical results of experiments with our new approaches show that bidirectional heuristic search can be performed very efficiently and also with limited memory. These results suggest that bidirectional heuristic search appears to be better for solving certain difficult problems than corresponding unidirectional search. This provides some evidence for the usefulness of a search strategy that was long neglected. In summary, we show that bidirectional heuristic search is viable and consequently propose that it be reconsidered.", "Abstract In order to reap the potential advantage of less extensive searching which bidirectional heuristic search algorithms offer, strategies are needed to influence the two search wavefronts to meet such that early termination will occur. The principled search control strategy aims to achieve this without trading running time, but can be found wanting still. 
An improved algorithm BS∗ is described which expands significantly less nodes on average than any other algorithm in the same class of non-wave-shaping admissible bidirectional algorithms. When pitted against BHPA, the only other heuristically guided member in this class, BS∗'s average search efficiency in time and space is about 30% better. BS∗'s superior performance stems from the use of all opportunities to achieve early termination and the elimination of unfruitful avenues by search reduction operations: nipping, pruning, trimming and screening. Such operations exploit information gathered during search and have several spin-offs: more accurate guidance of search control, early exposure of nonpromising nodes and reduced bookkeeping overheads, all of which further enhance BS∗'s performance. A further noteworthy feature of BS∗ is that it is the first staged search algorithm which preserves admissibility.", "", "A new method is proposed for finding the shortest route between two points in an interconnected network. The shortest route is found by investigating a selection of routes from both the starting point and the terminal point. The selection of routes is decided dynamically by extending one by one the routes which have currently covered the least distance. Once a complete through route has been found, it has to be made certain that it is the minimum. The new method appears to be more efficient than alternative approaches to the problem through linear or dynamic programming. Some applications of the technique to scheduling and other problems are briefly described.", "We present an intuitive explanation for the limited effectiveness of front-to-end bidirectional heuristic search, supported with extensive evidence from many commonly-studied domains. 
While previous work has proved the limitations of specific algorithms, we show that any front-to-end bidirectional heuristic search algorithm will likely be dominated by unidirectional heuristic search or bidirectional brute-force search. We also demonstrate a pathological case where bidirectional heuristic search is the dominant algorithm, so a stronger claim cannot be made. Finally, we show that on the four-peg Towers Of Hanoi with arbitrary start and goal states, bidirectional brute-force search outperforms unidirectional heuristic search using pattern-database heuristics.", "We present MM, the first bidirectional heuristic search algorithm whose forward and backward searches are guaranteed to \"meet in the middle\", i.e. never expand a node beyond the solution midpoint. We also present a novel framework for comparing MM, A*, and brute-force search, and identify conditions favoring each algorithm. Finally, we present experimental results that support our theoretical analysis.", "" ] }
1703.03859
2577166093
The time to converge to the steady state of a finite Markov chain can be greatly reduced by a lifting operation, which creates a new Markov chain on an expanded state space. For a class of quadratic objectives, we show an analogous behavior where a distributed alternating direction method of multipliers (ADMM) algorithm can be seen as a lifting of gradient descent. This provides a deep insight for its faster convergence rate under optimal parameter tuning. We conjecture that this gain is always present, as opposed to the lifting of a Markov chain, which sometimes only provides a marginal speedup.
Furthermore, to obtain linear convergence, strong convexity is usually assumed @cite_7 , which does not hold for problem . Most results not requiring strong convexity focus on the convergence rate of the objective function, as opposed to this paper, which focuses on the convergence rate of the variables; see @cite_11 for example.
{ "cite_N": [ "@cite_7", "@cite_11" ], "mid": [ "2123705108", "2216134724" ], "abstract": [ "In decentralized consensus optimization, a connected network of agents collaboratively minimize the sum of their local objective functions over a common decision variable, where their information exchange is restricted between the neighbors. To this end, one can first obtain a problem reformulation and then apply the alternating direction method of multipliers (ADMM). The method applies iterative computation at the individual agents and information exchange between the neighbors. This approach has been observed to converge quickly and deemed powerful. This paper establishes its linear convergence rate for the decentralized consensus optimization problem with strongly convex local objective functions. The theoretical convergence rate is explicitly given in terms of the network topology, the properties of local objective functions, and the algorithm parameter. This result is not only a performance guarantee but also a guideline toward accelerating the ADMM convergence.", "Operator-splitting schemes are iterative algorithms for solving many types of numerical problems. A lot is known about these methods: they converge, and in many cases we know how quickly they converge. But when they are applied to optimization problems, there is a gap in our understanding: The theoretical speed of operator-splitting schemes is nearly always measured in the ergodic sense, but ergodic operator-splitting schemes are rarely used in practice. In this chapter, we tackle the discrepancy between theory and practice and uncover fundamental limits of a class of operator-splitting schemes. Our surprising conclusion is that the relaxed Peaceman-Rachford splitting algorithm, a version of the Alternating Direction Method of Multipliers (ADMM), is nearly as fast as the proximal point algorithm in the ergodic sense and nearly as slow as the subgradient method in the nonergodic sense. 
A large class of operator-splitting schemes extend from the relaxed Peaceman-Rachford splitting algorithm. Our results show that this class of operator-splitting schemes is also nearly as slow as the subgradient method. The tools we create in this chapter can also be used to prove nonergodic convergence rates of more general splitting schemes, so they are interesting in their own right." ] }
1703.03859
2577166093
The time to converge to the steady state of a finite Markov chain can be greatly reduced by a lifting operation, which creates a new Markov chain on an expanded state space. For a class of quadratic objectives, we show an analogous behavior where a distributed alternating direction method of multipliers (ADMM) algorithm can be seen as a lifting of gradient descent. This provides a deep insight for its faster convergence rate under optimal parameter tuning. We conjecture that this gain is always present, as opposed to the lifting of a Markov chain, which sometimes only provides a marginal speedup.
Few papers consider a consensus problem with an objective function different than . For instance, @cite_4 considers @math , subject to @math if @math , where @math are constants. This problem is strongly convex and does not reduce to , and vice-versa. Another branch of research considers @math with ADMM iterations that are agnostic to whether or not @math depends on a subset of the components of @math ; see @cite_5 and references therein. These are in contrast with our setting, where decentralized ADMM is a message-passing algorithm @cite_6 , and the messages between agents @math and @math are associated only with the variables shared by functions @math and @math .
{ "cite_N": [ "@cite_5", "@cite_4", "@cite_6" ], "mid": [ "2046387702", "2106715795", "1645278552" ], "abstract": [ "We consider a network of agents that are cooperatively solving a global unconstrained optimization problem, where the objective function is the sum of privately known local objective functions of the agents. Recent literature on distributed optimization methods for solving this problem focused on subgradient based methods, which typically converge at the rate equation, where k is the number of iterations. In this paper, we introduce a new distributed optimization algorithm based on Alternating Direction Method of Multipliers (ADMM), which is a classical method for sequentially decomposing optimization problems with coupled constraints. We show that this algorithm converges at the rate equation.", "The alternating direction multipliers method (ADMM) has been recently proposed as a practical and efficient algorithm for distributed computing. We discuss its applicability to the average consensus problem in this paper. By carefully relaxing ADMM augmentation coefficients we are able to analytically investigate its properties, and to propose simple and strict analytical bounds. These provide a clear indication on how to choose system parameters for optimized performance. We prove both analytically and via simulations that the proposed approach exhibits convergence speed between the best in the literature (classical and optimized solutions), while providing the most powerful resilience to noise.", "We describe how the powerful \"Divide and Concur\" algorithm for constraint satisfaction can be derived as a special case of a message-passing version of the Alternating Direction Method of Multipliers (ADMM) algorithm for convex optimization, and introduce an improved message-passing algorithm based on ADMM DC by introducing three distinct weights for messages, with \"certain\" and \"no opinion\" weights, as well as the standard weight used in ADMM DC. 
The \"certain\" messages allow our improved algorithm to implement constraint propagation as a special case, while the \"no opinion\" messages speed convergence for some problems by making the algorithm focus only on active constraints. We describe how our three-weight version of ADMM DC can give greatly improved performance for non-convex problems such as circle packing and solving large Sudoku puzzles, while retaining the exact performance of ADMM for convex problems. We also describe the advantages of our algorithm compared to other message-passing algorithms based upon belief propagation." ] }
1703.03859
2577166093
The time to converge to the steady state of a finite Markov chain can be greatly reduced by a lifting operation, which creates a new Markov chain on an expanded state space. For a class of quadratic objectives, we show an analogous behavior where a distributed alternating direction method of multipliers (ADMM) algorithm can be seen as a lifting of gradient descent. This provides a deep insight for its faster convergence rate under optimal parameter tuning. We conjecture that this gain is always present, as opposed to the lifting of a Markov chain, which sometimes only provides a marginal speedup.
For quadratic problems, there are explicit results on the convergence rate and optimal parameters of ADMM @cite_0 @cite_10 @cite_13 . However, their assumptions do not hold for the non-strongly convex distributed problem considered in this paper. Moreover, there are very few results comparing the optimal convergence rate of ADMM as a function of the optimal convergence rate of GD. For a centralized setting, an explicit comparison is provided in @cite_12 , but it assumes strong convexity.
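To make the GD side of this comparison tangible, the sketch below (an illustrative toy, not taken from the cited works) checks numerically that gradient descent on a strongly convex quadratic with the optimal constant step size 2/(mu + L) contracts by the classical factor (kappa - 1)/(kappa + 1) per iteration.

```python
import numpy as np

# Toy 2-D strongly convex quadratic f(x) = 0.5 * x^T A x with
# smallest/largest eigenvalues mu and L (kappa = L/mu = 9 here).
mu, L = 1.0, 9.0
A = np.diag([mu, L])

# Optimal constant step size for GD is 2/(mu + L); the per-step
# contraction factor is then (L - mu)/(L + mu) = (kappa - 1)/(kappa + 1).
alpha = 2.0 / (mu + L)
rate = (L - mu) / (L + mu)

x0 = np.array([1.0, 1.0])
x = x0.copy()
for _ in range(20):
    x = x - alpha * (A @ x)   # gradient of f is A x

# Empirical per-step contraction over 20 iterations matches the theory.
empirical = (np.linalg.norm(x) / np.linalg.norm(x0)) ** (1 / 20)
print(abs(empirical - rate) < 1e-6)
```

It is this GD factor that results such as @cite_12 compare against the (parameter-tuned) ADMM factor under strong convexity.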
{ "cite_N": [ "@cite_0", "@cite_10", "@cite_12", "@cite_13" ], "mid": [ "2068628888", "2037603831", "2194067778", "1537500458" ], "abstract": [ "This paper addresses the optimal scaling of the ADMM method for distributed quadratic programming. Scaled ADMM iterations are first derived for generic equality-constrained quadratic problems and then applied to a class of distributed quadratic problems. In this setting, the scaling corresponds to the step-size and the edge-weights of the underlying communication graph. We optimize the convergence factor of the algorithm with respect to the step-size and graph edge-weights. Explicit analytical expressions for the optimal convergence factor and the optimal step-size are derived. Numerical simulations illustrate our results.", "The alternating direction method of multipliers (ADMM) has emerged as a powerful technique for large-scale structured optimization. Despite many recent results on the convergence properties of ADMM, a quantitative characterization of the impact of the algorithm parameters on the convergence times of the method is still lacking. In this paper we find the optimal algorithm parameters that minimize the convergence factor of the ADMM iterates in the context of l 2 -regularized minimization and constrained quadratic programming. Numerical examples show that our parameter selection rules significantly outperform existing alternatives in the literature.", "The framework of Integral Quadratic Constraints of (2014) reduces the computation of upper bounds on the convergence rate of several optimization algorithms to semi-definite programming (SDP). Follow up work by (2015) applies this technique to the entire family of over-relaxed Alternating Direction Method of Multipliers (ADMM). Unfortunately, they only provide an explicit error bound for sufficiently large values of some of the parameters of the problem, leaving the computation for the general case as a numerical optimization problem. 
In this paper we provide an exact analytical solution to this SDP and obtain a general and explicit upper bound on the convergence rate of the entire family of over-relaxed ADMM. Furthermore, we demonstrate that it is not possible to extract from this SDP a general bound better than ours. We end with a few numerical illustrations of our result and a comparison between the convergence rate we obtain for ADMM with known convergence rates for Gradient Descent (GD).", "Consider a set of @math agents seeking to solve distributively the minimization problem @math where the convex functions @math are local to the agents. The popular Alternating Direction Method of Multipliers has the potential to handle distributed optimization problems of this kind. We provide a general reformulation of the problem and obtain a class of distributed algorithms which encompass various network architectures. The rate of convergence of our method is considered. It is assumed that the infimum of the problem is reached at a point @math , the functions @math are twice differentiable at this point and @math in the positive definite ordering of symmetric matrices. With these assumptions, it is shown that the convergence to the consensus @math is linear and the exact rate is provided. Application examples where this rate can be optimized with respect to the ADMM free parameter @math are also given." ] }
1703.03609
2591749048
Nowadays, many people rely on available content in social media when making decisions (e.g., reviews and feedback on a topic or product). The possibility that anybody can leave a review provides a golden opportunity for spammers to write spam reviews about products and services for different interests. Identifying these spammers and the spam content is a hot topic of research; although a considerable number of studies have been done recently toward this end, the methodologies put forth so far still barely detect spam reviews, and none of them show the importance of each extracted feature type. In this paper, we propose a novel framework, named NetSpam, which utilizes spam features to model review data sets as heterogeneous information networks and map the spam detection procedure into a classification problem in such networks. Using the importance of spam features helps us to obtain better results in terms of different metrics on real-world review data sets from the Yelp and Amazon Web sites. The results show that NetSpam outperforms the existing methods, and among four categories of features, including review-behavioral, user-behavioral, review-linguistic, and user-linguistic, the first type of features performs better than the other categories.
This approach extracts linguistic features to find spam reviews. Feng @cite_10 uses @math , @math and their composition. Other studies @cite_21 , @cite_31 , @cite_17 use further features, such as pairwise features (features between two reviews, e.g. content similarity) and the percentage of fully capitalized words in a review, to find spam reviews. Lai in @cite_22 uses probabilistic language modeling to spot spam. This study demonstrates that around 2% of the consumer reviews posted to a large e-commerce site are spam.
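To ground these linguistic cues, here is a minimal sketch (illustrative definitions, not taken from the cited papers) of two of the features mentioned: the share of fully capitalized words in a review, and a pairwise content-similarity measure between two reviews.

```python
def capital_word_ratio(review: str) -> float:
    """Fraction of fully capitalized words in a review (illustrative)."""
    words = review.split()
    if not words:
        return 0.0
    return sum(1 for w in words if w.isupper() and len(w) > 1) / len(words)

def jaccard_similarity(r1: str, r2: str) -> float:
    """Pairwise content similarity between two reviews (word-level Jaccard)."""
    s1, s2 = set(r1.lower().split()), set(r2.lower().split())
    return len(s1 & s2) / len(s1 | s2) if (s1 | s2) else 0.0

print(capital_word_ratio("GREAT product BUY now"))               # 0.5
print(jaccard_similarity("great product", "great product buy"))  # 2/3
```

High capital-word ratios and near-duplicate review pairs are exactly the kinds of signals the studies above feed into their classifiers.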
{ "cite_N": [ "@cite_22", "@cite_21", "@cite_31", "@cite_10", "@cite_17" ], "mid": [ "2145485370", "2401089081", "1775665607", "2124637344", "2159359879" ], "abstract": [ "Numerous reports have indicated the severity of fake reviews (i.e., spam) posted to various e-Commerce or opinion sharing Web sites. Nevertheless, very few studies have been conducted to examine the trustworthiness of online consumer reviews because of the lack of an effective computational methodology. Unlike other kinds of Web spam, untruthful reviews could just look like other legitimate reviews (i.e., ham), and so it is difficult to apply any features to distinguish the two classes. One main contribution of our research work is the development of a novel computational methodology to combat online review spam. Our experimental results confirm that the KL divergence and the probabilistic language modeling based computational model is effective for the detection of untruthful reviews. Empowered by the proposed computational methods, our empirical study found that around 2 of the consumer reviews posted to a large e-Commerce site is spam.", "Spam campaigns spotted in popular product review websites (e.g., amazon.com) have attracted mounting attention from both industry and academia, where a group of online posters are hired to collaboratively craft deceptive reviews for some target products. The goal is to manipulate perceived reputations of the targets for their best interests. Many efforts have been made to detect such colluders by extracting pointwise features from individual reviewers reviewer-groups, however, pairwise features which can potentially capture the underlying correlations among colluders are either ignored or just explored insufficiently in the literature. We observed that pairwise features can be more robust to model the relationships among colluders since they, as the ingredients of spam campaigns, are correlated in nature. 
In this paper, we explore multiple heterogeneous pairwise features in virtue of some collusion signals found in reviewers’ rating behaviors and linguistic patterns. In addition, an unsupervised and intuitive colluder detecting framework has been proposed which can benefit from these pairwise features. Extensive experiments on real dataset show the effectiveness of our method and satisfactory superiority over several competitors.", "In the past few years, sentiment analysis and opinion mining becomes a popular and important task. These studies all assume that their opinion resources are real and trustful. However, they may encounter the faked opinion or opinion spam problem. In this paper, we study this issue in the context of our product review mining system. On product review site, people may write faked reviews, called review spam, to promote their products, or defame their competitors' products. It is important to identify and filter out the review spam. Previous work only focuses on some heuristic rules, such as helpfulness voting, or rating deviation, which limits the performance of this task. In this paper, we exploit machine learning methods to identify review spam. Toward the end, we manually build a spam collection from our crawled reviews. We first analyze the effect of various features in spam identification. We also observe that the review spammer consistently writes spam. This provides us another view to identify review spam: we can identify if the author of the review is spammer. Based on this observation, we provide a twoview semi-supervised method, co-training, to exploit the large amount of unlabeled data. The experiment results show that our proposed method is effective. Our designed machine learning methods achieve significant improvements in comparison to the heuristic baselines.", "Most previous studies in computerized deception detection have relied only on shallow lexico-syntactic patterns. 
This paper investigates syntactic stylometry for deception detection, adding a somewhat unconventional angle to prior literature. Over four different datasets spanning from the product review to the essay domain, we demonstrate that features driven from Context Free Grammar (CFG) parse trees consistently improve the detection performance over several baselines that are based only on shallow lexico-syntactic features. Our results improve the best published result on the hotel review data (, 2011) reaching 91.2 accuracy with 14 error reduction.", "This paper aims to detect users generating spam reviews or review spammers. We identify several characteristic behaviors of review spammers and model these behaviors so as to detect the spammers. In particular, we seek to model the following behaviors. First, spammers may target specific products or product groups in order to maximize their impact. Second, they tend to deviate from the other reviewers in their ratings of products. We propose scoring methods to measure the degree of spam for each reviewer and apply them on an Amazon review dataset. We then select a subset of highly suspicious reviewers for further scrutiny by our user evaluators with the help of a web based spammer evaluation software specially developed for user evaluation experiments. Our results show that our proposed ranking and supervised methods are effective in discovering spammers and outperform other baseline method based on helpfulness votes alone. We finally show that the detected spammers have more significant impact on ratings compared with the unhelpful reviewers." ] }
1703.03609
2591749048
Nowadays, many people rely on available content in social media when making decisions (e.g., reviews and feedback on a topic or product). The possibility that anybody can leave a review provides a golden opportunity for spammers to write spam reviews about products and services for different interests. Identifying these spammers and the spam content is a hot topic of research; although a considerable number of studies have been done recently toward this end, the methodologies put forth so far still barely detect spam reviews, and none of them show the importance of each extracted feature type. In this paper, we propose a novel framework, named NetSpam, which utilizes spam features to model review data sets as heterogeneous information networks and map the spam detection procedure into a classification problem in such networks. Using the importance of spam features helps us to obtain better results in terms of different metrics on real-world review data sets from the Yelp and Amazon Web sites. The results show that NetSpam outperforms the existing methods, and among four categories of features, including review-behavioral, user-behavioral, review-linguistic, and user-linguistic, the first type of features performs better than the other categories.
Approaches in this group mostly use review metadata to extract features that capture the normal behavioral patterns of reviewers. Feng in @cite_15 focuses on the distribution of spammers' ratings on different products and traces them. In @cite_30 , Jindal extracts 36 behavioral features and uses a supervised method to find spammers on Amazon, and @cite_14 indicates that behavioral features reveal spammers' identities better than linguistic ones. Xue in @cite_28 uses the rating deviation of a specific user together with a trust-aware model of the relationships between users to calculate a final spamicity score. Minnich in @cite_5 uses temporal and location features of users to find unusual spammer behavior. Li in @cite_26 uses some basic features (e.g. the polarity of reviews) and then runs an HNC (Heterogeneous Network Classifier) to obtain final labels on the Dianping dataset. Mukherjee in @cite_0 mostly employs behavioral features such as rating deviation and extremity. Xie in @cite_8 also uses a temporal pattern (time window) to find singleton reviews (reviews written just once) on Amazon. Luca in @cite_2 uses behavioral features to show that increasing competition between companies leads to a large expansion of spam reviews on products.
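As a sketch of what such behavioral features look like in practice (hypothetical definitions for illustration; the cited works define them more carefully), the rating deviation and extremity of a single reviewer can be computed as follows.

```python
from statistics import mean

def rating_deviation(user_ratings, product_avgs):
    """Mean absolute deviation of a user's ratings from each product's
    average rating (illustrative definition)."""
    return mean(abs(r - a) for r, a in zip(user_ratings, product_avgs))

def extremity(user_ratings, lo=1, hi=5):
    """Fraction of a user's ratings sitting at the extremes of the scale."""
    return sum(1 for r in user_ratings if r in (lo, hi)) / len(user_ratings)

suspect = [5, 5, 1, 5]  # hypothetical reviewer: only extreme ratings
print(rating_deviation(suspect, [3.2, 4.0, 4.5, 3.0]))  # large deviation
print(extremity(suspect))  # 1.0: every rating is extreme
```

Reviewers whose ratings deviate strongly from product consensus and cluster at the extremes are precisely those the methods above flag as likely spammers.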
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_26", "@cite_8", "@cite_28", "@cite_0", "@cite_2", "@cite_5", "@cite_15" ], "mid": [ "2047756776", "2089124807", "2059733647", "2136710010", "2185554806", "2016266039", "1804563568", "2212259974", "2202307757" ], "abstract": [ "Evaluative texts on the Web have become a valuable source of opinions on products, services, events, individuals, etc. Recently, many researchers have studied such opinion sources as product reviews, forum posts, and blogs. However, existing research has been focused on classification and summarization of opinions using natural language processing and data mining techniques. An important issue that has been neglected so far is opinion spam or trustworthiness of online opinions. In this paper, we study this issue in the context of product reviews, which are opinion rich and are widely used by consumers and product manufacturers. In the past two years, several startup companies also appeared which aggregate opinions from product reviews. It is thus high time to study spam in reviews. To the best of our knowledge, there is still no published study on this topic, although Web spam and email spam have been investigated extensively. We will see that opinion spam is quite different from Web spam and email spam, and thus requires different detection techniques. Based on the analysis of 5.8 million reviews and 2.14 million reviewers from amazon.com, we show that opinion spam in reviews is widespread. This paper analyzes such spam activities and presents some novel techniques to detect them", "In recent years, opinion mining attracted a great deal of research attention. However, limited work has been done on detecting opinion spam (or fake reviews). The problem is analogous to spam in Web search [1, 9 11]. However, review spam is harder to detect because it is very hard, if not impossible, to recognize fake reviews by manually reading them [2]. 
This paper deals with a restricted problem, i.e., identifying unusual review patterns which can represent suspicious behaviors of reviewers. We formulate the problem as finding unexpected rules. The technique is domain independent. Using the technique, we analyzed an Amazon.com review dataset and found many unexpected rules and rule groups which indicate spam activities.", "Online reviews have become an increasingly important resource for decision making and product designing. But reviews systems are often targeted by opinion spamming. Although fake review detection has been studied by researchers for years using supervised learning, ground truth of large scale datasets is still unavailable and most of existing approaches of supervised learning are based on pseudo fake reviews rather than real fake reviews. Working with Dianping, the largest Chinese review hosting site, we present the first reported work on fake review detection in Chinese with filtered reviews from Dianping's fake review detection system. Dianping's algorithm has a very high precision, but the recall is hard to know. This means that all fake reviews detected by the system are almost certainly fake but the remaining reviews (unknown set) may not be all genuine. Since the unknown set may contain many fake reviews, it is more appropriate to treat it as an unlabeled set. This calls for the model of learning from positive and unlabeled examples (PU learning). By leveraging the intricate dependencies among reviews, users and IP addresses, we first propose a collective classification algorithm called Multi-typed Heterogeneous Collective Classification (MHCC) and then extend it to Collective Positive and Unlabeled learning (CPU). Our experiments are conducted on real-life reviews of 500 restaurants in Shanghai, China. Results show that our proposed models can markedly improve the F1 scores of strong baselines in both PU and non-PU learning settings. 
Since our models only use language independent features, they can be easily generalized to other languages.", "Online reviews play a crucial role in today's electronic commerce. It is desirable for a customer to read reviews of products or stores before making the decision of what or from where to buy. Due to the pervasive spam reviews, customers can be misled to buy low-quality products, while decent stores can be defamed by malicious reviews. We observe that, in reality, a great portion (> 90 in the data we study) of the reviewers write only one review (singleton review). These reviews are so enormous in number that they can almost determine a store's rating and impression. However, existing methods did not examine this larger part of the reviews. Are most of these singleton reviews truthful ones? If not, how to detect spam reviews in singleton reviews? We call this problem singleton review spam detection. To address this problem, we observe that the normal reviewers' arrival pattern is stable and uncorrelated to their rating pattern temporally. In contrast, spam attacks are usually bursty and either positively or negatively correlated to the rating. Thus, we propose to detect such attacks via unusually correlated temporal patterns. We identify and construct multidimensional time series based on aggregate statistics, in order to depict and mine such correlations. In this way, the singleton review spam detection problem is mapped to a abnormally correlated pattern detection problem. We propose a hierarchical algorithm to robustly detect the time windows where such attacks are likely to have happened. The algorithm also pinpoints such windows in different time resolutions to facilitate faster human inspection. Experimental results show that the proposed method is effective in detecting singleton review attacks. 
We discover that singleton review is a significant source of spam reviews and largely affects the ratings of online stores.", "Online review systems play an important role in affecting consumers' behaviors and decision making, attracting many spammers to insert fake reviews to manipulate review content and ratings. To increase utility and improve user experience, some online review systems allow users to form social relationships between each other and encourage their interactions. In this paper, we aim at providing an efficient and effective method to identify review spammers by incorporating social relations based on two assumptions that people are more likely to consider reviews from those connected with them as trustworthy, and review spammers are less likely to maintain a large relationship network with normal users. The contributions of this paper are two-fold: (1) We elaborate how social relationships can be incorporated into review rating prediction and propose a trust-based rating prediction model using proximity as trust weight, and (2) We design a trust-aware detection model based on rating variance which iteratively calculates user-specific overall trustworthiness scores as the indicator for spamicity. Experiments on the dataset collected from Yelp.com show that the proposed trust-based prediction achieves a higher accuracy than standard CF method, and there exists a strong correlation between social relationships and the overall trustworthiness scores.", "Opinionated social media such as product reviews are now widely used by individuals and organizations for their decision making. However, due to the reason of profit or fame, people try to game the system by opinion spamming (e.g., writing fake reviews) to promote or to demote some target products. In recent years, fake review detection has attracted significant attention from both the business and research communities. 
However, due to the difficulty of human labeling needed for supervised learning and evaluation, the problem remains to be highly challenging. This work proposes a novel angle to the problem by modeling spamicity as latent. An unsupervised model, called Author Spamicity Model (ASM), is proposed. It works in the Bayesian setting, which facilitates modeling spamicity of authors as latent and allows us to exploit various observed behavioral footprints of reviewers. The intuition is that opinion spammers have different behavioral distributions than non-spammers. This creates a distributional divergence between the latent population distributions of two clusters: spammers and non-spammers. Model inference results in learning the population distributions of the two clusters. Several extensions of ASM are also considered leveraging from different priors. Experiments on a real-life Amazon review dataset demonstrate the effectiveness of the proposed models which significantly outperform the state-of-the-art competitors.", "Consumer reviews are now part of everyday decision-making. Yet, the credibility of these reviews is fundamentally undermined when businesses commit review fraud, creating fake reviews for themselves or their competitors. We investigate the economic incentives to commit review fraud on the popular review platform Yelp, using two complementary approaches and datasets. We begin by analyzing restaurant reviews that are identified by Yelp's filtering algorithm as suspicious, or fake ? and treat these as a proxy for review fraud (an assumption we provide evidence for). We present four main findings. First, roughly 16 of restaurant reviews on Yelp are filtered. These reviews tend to be more extreme (favorable or unfavorable) than other reviews, and the prevalence of suspicious reviews has grown significantly over time. 
Second, a restaurant is more likely to commit review fraud when its reputation is weak, i.e., when it has few reviews, or it has recently received bad reviews. Third, chain restaurants ? which benefit less from Yelp ? are also less likely to commit review fraud. Fourth, when restaurants face increased competition, they become more likely to receive unfavorable fake reviews. Using a separate dataset, we analyze businesses that were caught soliciting fake reviews through a sting conducted by Yelp. These data support our main results, and shed further light on the economic incentives behind a business's decision to leave fake reviews.", "Online reviews on products and services can be very useful for customers, but they need to be protected from manipulation. So far, most studies have focused on analyzing online reviews from a single hosting site. How could one leverage information from multiple review hosting sites? This is the key question in our work. In response, we develop a systematic methodology to merge, compare, and evaluate reviews from multiple hosting sites. We focus on hotel reviews and use more than 15 million reviews from more than 3.5 million users spanning three prominent travel sites. Our work consists of three thrusts: (a) we develop novel features capable of identifying cross-site discrepancies effectively, (b) we conduct arguably the first extensive study of cross-site variations using real data, and develop a hotel identity-matching method with 93 accuracy, (c) we introduce the TrueView score, as a proof of concept that cross-site analysis can better inform the end user. Our results show that: (1) we detect 7 times more suspicious hotels by using multiple sites compared to using the three sites in isolation, and (2) we find that 20 of all hotels appearing in all three sites seem to have low trustworthiness score. 
Our work is an early effort that explores the advantages and the challenges in using multiple reviewing sites towards more informed decision making.", "This paper postulates that there are natural distributions of opinions in product reviews. In particular, we hypothesize that for a given domain, there is a set of representative distributions of review rating scores. A deceptive business entity that hires people to write fake reviews will necessarily distort its distribution of review scores, leaving distributional footprints behind. In order to validate this hypothesis, we introduce strategies to create dataset with pseudo-gold standard that is labeled automatically based on different types of distributional footprints. A range of experiments confirm the hypothesized connection between the distributional anomaly and deceptive reviews. This study also provides novel quantitative insights into the characteristics of natural distributions of opinions in the TripAdvisor hotel review and the Amazon product review domains." ] }
1703.03609
2591749048
Nowadays, many people rely on available content in social media when making decisions (e.g., reviews and feedback on a topic or product). The possibility that anybody can leave a review provides a golden opportunity for spammers to write spam reviews about products and services for different interests. Identifying these spammers and the spam content is a hot topic of research; although a considerable number of studies have been done recently toward this end, the methodologies put forth so far still barely detect spam reviews, and none of them show the importance of each extracted feature type. In this paper, we propose a novel framework, named NetSpam, which utilizes spam features to model review data sets as heterogeneous information networks and map the spam detection procedure into a classification problem in such networks. Using the importance of spam features helps us to obtain better results in terms of different metrics on real-world review data sets from the Yelp and Amazon Web sites. The results show that NetSpam outperforms the existing methods, and among four categories of features, including review-behavioral, user-behavioral, review-linguistic, and user-linguistic, the first type of features performs better than the other categories.
Crawford in @cite_36 indicates that different classification approaches need different numbers of features to attain a desired performance, and recommends approaches that reach that performance with fewer features, which reduces their complexity. From this perspective, our framework is arguable. This study shows that different classification approaches yield different performance in terms of different metrics.
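Since the cited study compares filter-based feature rankers, a minimal example of one (correlation-based ranking against a binary spam label; the function name and toy data are hypothetical) looks like this:

```python
from statistics import mean, pstdev

def filter_rank(X, y):
    """Rank feature indices by |Pearson correlation with the label| --
    a minimal filter-based feature ranker (illustrative sketch)."""
    n_feat = len(X[0])
    scores = []
    for j in range(n_feat):
        col = [row[j] for row in X]
        sx, sy = pstdev(col), pstdev(y)
        if sx == 0 or sy == 0:
            scores.append(0.0)
            continue
        cov = mean(c * l for c, l in zip(col, y)) - mean(col) * mean(y)
        scores.append(abs(cov / (sx * sy)))
    return sorted(range(n_feat), key=lambda j: -scores[j])

# Toy data: feature 0 tracks the spam label perfectly, feature 1 is noise.
X = [[1, 7], [1, 2], [0, 7], [0, 2]]
y = [1, 1, 0, 0]
print(filter_rank(X, y))  # [0, 1] -- the informative feature ranks first
```

Keeping only the top-ranked features is what lets a classifier reach a given performance with a smaller, cheaper feature subset, which is the trade-off the study above investigates.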
{ "cite_N": [ "@cite_36" ], "mid": [ "2461949443" ], "abstract": [ "Online reviews are quickly becoming one of the most important sources of information for consumers on various products and services. With their increased importance, there exists an increased opportunity for spammers or unethical business owners to create false reviews in order to artificially promote their goods and services or smear those of their competitors. In response to this growing problem, there have been many studies on the most effective ways of detecting review spam using various machine learning algorithms. One common thread in most of these studies is the conversion of reviews to word vectors, which can potentially result in hundreds of thousands of features. However, there has been little study on reducing the feature subset size to a manageable number or how best to do so. In this paper, we consider two distinct methods of reducing feature subset size in the review spam domain. The methods include filter-based feature rankers and word-frequency based feature selection. We show that there is not a one size fits all approach to feature selection, and the best way to reduce the feature subset size is dependent upon both the classifier being used and the feature subset size desired. It was also observed that the feature subset size had significant influence on which feature selection method is used." ] }
1703.03492
2604321021
This paper presents a new method for 3D action recognition with skeleton sequences (i.e., 3D trajectories of human skeleton joints). The proposed method first transforms each skeleton sequence into three clips each consisting of several frames for spatial temporal feature learning using deep neural networks. Each clip is generated from one channel of the cylindrical coordinates of the skeleton sequence. Each frame of the generated clips represents the temporal information of the entire skeleton sequence, and incorporates one particular spatial relationship between the joints. The entire clips include multiple frames with different spatial relationships, which provide useful spatial structural information of the human skeleton. We propose to use deep convolutional neural networks to learn long-term temporal information of the skeleton sequence from the frames of the generated clips, and then use a Multi-Task Learning Network (MTLN) to jointly process all frames of the clips in parallel to incorporate spatial structural information for action recognition. Experimental results clearly show the effectiveness of the proposed new representation and feature learning method for 3D action recognition.
In this section, we cover the relevant literature of skeleton-based action recognition methods using hand-crafted features or using deep learning networks. In @cite_8 , the covariance matrices of the trajectories of the joint positions are computed over hierarchical temporal levels to model the skeleton sequences. In @cite_15 , the pairwise relative positions of each joint with other joints are computed to represent each frame of the skeleton sequences, and Fourier Temporal Pyramid (FTP) is used to model the temporal patterns. In @cite_24 , the pairwise relative positions of the joints are also used to characterize posture features, motion features, and offset features of the skeleton sequences. Principal Component Analysis (PCA) is then applied to the normalized features to compute EigenJoints as representations. In @cite_52 , histograms of 3D joint locations are computed to represent each frame of the skeleton sequences, and HMMs are used to model the temporal dynamics. In @cite_56 , the rotations and translations between various body parts are used as representations, and a skeleton sequence is modelled as a curve in the Lie group. The temporal dynamics are modelled with FTP.
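To make the first of these representations concrete, here is a minimal sketch of the covariance-descriptor idea of @cite_8 (omitting the hierarchical temporal levels the paper adds on top): stack each frame's 3D joint coordinates into one vector and take the covariance of those vectors over time.

```python
import numpy as np

def covariance_descriptor(seq):
    """Covariance descriptor of a skeleton sequence, in the spirit of
    @cite_8 (sketch only). `seq` has shape (T, J, 3): T frames,
    J joints, 3-D coordinates per joint."""
    T = seq.shape[0]
    flat = seq.reshape(T, -1)          # one (3J)-dim vector per frame
    return np.cov(flat, rowvar=False)  # (3J, 3J): fixed size for any T

rng = np.random.default_rng(0)
seq = rng.random((50, 15, 3))          # e.g. 50 frames of 15 joints
desc = covariance_descriptor(seq)
print(desc.shape)  # (45, 45) -- independent of the sequence length
```

The fixed-length output regardless of sequence duration is exactly the property that makes this descriptor convenient for off-the-shelf classifiers.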
{ "cite_N": [ "@cite_8", "@cite_52", "@cite_56", "@cite_24", "@cite_15" ], "mid": [ "203345490", "2145546283", "2048821851", "2073139398", "2143267104" ], "abstract": [ "Human action recognition from videos is a challenging machine vision task with multiple important application domains, such as human-robot machine interaction, interactive entertainment, multimedia information retrieval, and surveillance. In this paper, we present a novel approach to human action recognition from 3D skeleton sequences extracted from depth data. We use the covariance matrix for skeleton joint locations over time as a discriminative descriptor for a sequence. To encode the relationship between joint movement and time, we deploy multiple covariance matrices over sub-sequences in a hierarchical fashion. The descriptor has a fixed length that is independent from the length of the described sequence. Our experiments show that using the covariance descriptor with an off-the-shelf classification algorithm outperforms the state of the art in action recognition on multiple datasets, captured either via a Kinect-type sensor or a sophisticated motion capture system. We also include an evaluation on a novel large dataset using our own annotation.", "In this paper, we present a novel approach for human action recognition with histograms of 3D joint locations (HOJ3D) as a compact representation of postures. We extract the 3D skeletal joint locations from Kinect depth maps using 's method [6]. The HOJ3D computed from the action depth sequences are reprojected using LDA and then clustered into k posture visual words, which represent the prototypical poses of actions. The temporal evolutions of those visual words are modeled by discrete hidden Markov models (HMMs). In addition, due to the design of our spherical coordinate system and the robust 3D skeleton estimation from Kinect, our method demonstrates significant view invariance on our 3D action dataset. 
Our dataset is composed of 200 3D sequences of 10 indoor activities performed by 10 individuals in varied views. Our method is real-time and achieves superior results on the challenging 3D action dataset. We also tested our algorithm on the MSR Action 3D dataset and our algorithm outperforms [25] on most of the cases.", "Recently introduced cost-effective depth sensors coupled with the real-time skeleton estimation algorithm of [16] have generated a renewed interest in skeleton-based human action recognition. Most of the existing skeleton-based approaches use either the joint locations or the joint angles to represent a human skeleton. In this paper, we propose a new skeletal representation that explicitly models the 3D geometric relationships between various body parts using rotations and translations in 3D space. Since 3D rigid body motions are members of the special Euclidean group SE(3), the proposed skeletal representation lies in the Lie group SE(3)×…×SE(3), which is a curved manifold. Using the proposed representation, human actions can be modeled as curves in this Lie group. Since classification of curves in this Lie group is not an easy task, we map the action curves from the Lie group to its Lie algebra, which is a vector space. We then perform classification using a combination of dynamic time warping, Fourier temporal pyramid representation and linear SVM. Experimental results on three action datasets show that the proposed representation performs better than many existing skeletal representations. The proposed approach also outperforms various state-of-the-art skeleton-based human action recognition approaches.", "In this paper, we propose an effective method to recognize human actions from 3D positions of body joints. With the release of RGBD sensors and associated SDK, human body joints can be extracted in real time with reasonable accuracy. 
In our method, we propose a new type of features based on position differences of joints, EigenJoints, which combine action information including static posture, motion, and offset. We further employ the Naive-Bayes-Nearest-Neighbor (NBNN) classifier for multi-class action classification. The recognition results on the Microsoft Research (MSR) Action3D dataset demonstrate that our approach significantly outperforms the state-of-the-art methods. In addition, we investigate how many frames are necessary for our method to recognize actions on the MSR Action3D dataset. We observe 15–20 frames are sufficient to achieve comparable results to that using the entire video sequences.", "Human action recognition is an important yet challenging task. The recently developed commodity depth sensors open up new possibilities of dealing with this problem but also present some unique challenges. The depth maps captured by the depth cameras are very noisy and the 3D positions of the tracked joints may be completely wrong if serious occlusions occur, which increases the intra-class variations in the actions. In this paper, an actionlet ensemble model is learnt to represent each action and to capture the intra-class variance. In addition, novel features that are suitable for depth data are proposed. They are robust to noise, invariant to translational and temporal misalignments, and capable of characterizing both the human motion and the human-object interactions. The proposed approach is evaluated on two challenging action recognition datasets captured by commodity depth cameras, and another dataset captured by a MoCap system. The experimental evaluations show that the proposed approach achieves superior performance to the state of the art algorithms." ] }
1703.03492
2604321021
This paper presents a new method for 3D action recognition with skeleton sequences (i.e., 3D trajectories of human skeleton joints). The proposed method first transforms each skeleton sequence into three clips each consisting of several frames for spatial temporal feature learning using deep neural networks. Each clip is generated from one channel of the cylindrical coordinates of the skeleton sequence. Each frame of the generated clips represents the temporal information of the entire skeleton sequence, and incorporates one particular spatial relationship between the joints. The entire clips include multiple frames with different spatial relationships, which provide useful spatial structural information of the human skeleton. We propose to use deep convolutional neural networks to learn long-term temporal information of the skeleton sequence from the frames of the generated clips, and then use a Multi-Task Learning Network (MTLN) to jointly process all frames of the clips in parallel to incorporate spatial structural information for action recognition. Experimental results clearly show the effectiveness of the proposed new representation and feature learning method for 3D action recognition.
In @cite_23 , the skeleton joints are divided into five sets corresponding to five body parts. They are fed into five LSTMs for feature fusion and classification. In @cite_41 , the skeleton joints are fed to a deep LSTM at each time slot to learn the inherent co-occurrence features of skeleton joints. In @cite_35 , the long-term context representations of the body parts are learned with a part-aware LSTM. In @cite_22 , both the spatial and temporal information of skeleton sequences is learned with a spatio-temporal LSTM. A Trust Gate is also proposed to remove noisy joints. This method achieves state-of-the-art performance on the NTU RGB+D dataset @cite_35 .
{ "cite_N": [ "@cite_41", "@cite_35", "@cite_22", "@cite_23" ], "mid": [ "2307035320", "2964134613", "2510185399", "1950788856" ], "abstract": [ "Skeleton based action recognition distinguishes human actions using the trajectories of skeleton joints, which provide a very good representation for describing actions. Considering that recurrent neural networks (RNNs) with Long Short-Term Memory (LSTM) can learn feature representations and model long-term temporal dependencies automatically, we propose an end-to-end fully connected deep LSTM network for skeleton based action recognition. Inspired by the observation that the co-occurrences of the joints intrinsically characterize human actions, we take the skeleton as the input at each time slot and introduce a novel regularization scheme to learn the co-occurrence features of skeleton joints. To train the deep LSTM network effectively, we propose a new dropout algorithm which simultaneously operates on the gates, cells, and output responses of the LSTM neurons. Experimental results on three human action recognition datasets consistently demonstrate the effectiveness of the proposed model.", "Recent approaches in depth-based human activity analysis achieved outstanding performance and proved the effectiveness of 3D representation for classification of action classes. Currently available depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of training samples, distinct class labels, camera views and variety of subjects. In this paper we introduce a large-scale dataset for RGB+D human action recognition with more than 56 thousand video samples and 4 million frames, collected from 40 distinct subjects. Our dataset contains 60 different action classes including daily, mutual, and health-related actions. 
In addition, we propose a new recurrent neural network structure to model the long-term temporal correlation of the features for each body part, and utilize them for better action classification. Experimental results show the advantages of applying deep learning methods over state-of-the-art handcrafted features on the suggested cross-subject and cross-view evaluation criteria for our dataset. The introduction of this large scale dataset will enable the community to apply, develop and adapt various data-hungry learning techniques for the task of depth-based and RGB+D-based human activity analysis.", "3D action recognition – analysis of human actions based on 3D skeleton data – becomes popular recently due to its succinctness, robustness, and view-invariant representation. Recent attempts on this problem suggested to develop RNN-based learning methods to model the contextual dependency in the temporal domain. In this paper, we extend this idea to spatio-temporal domains to analyze the hidden sources of action-related information within the input data over both domains concurrently. Inspired by the graphical structure of the human skeleton, we further propose a more powerful tree-structure based traversal method. To handle the noise and occlusion in 3D skeleton data, we introduce new gating mechanism within LSTM to learn the reliability of the sequential input data and accordingly adjust its effect on updating the long-term context information stored in the memory cell. Our method achieves state-of-the-art performance on 4 challenging benchmark datasets for 3D human action analysis.", "Human actions can be represented by the trajectories of skeleton joints. Traditional methods generally model the spatial structure and temporal dynamics of human skeleton with hand-crafted features and recognize human actions by well-designed classifiers. 
In this paper, considering that recurrent neural network (RNN) can model the long-term contextual information of temporal sequences well, we propose an end-to-end hierarchical RNN for skeleton based action recognition. Instead of taking the whole skeleton as the input, we divide the human skeleton into five parts according to human physical structure, and then separately feed them to five subnets. As the number of layers increases, the representations extracted by the subnets are hierarchically fused to be the inputs of higher layers. The final representations of the skeleton sequences are fed into a single-layer perceptron, and the temporally accumulated output of the perceptron is the final decision. We compare with five other deep RNN architectures derived from our model to verify the effectiveness of the proposed network, and also compare with several other methods on three publicly available datasets. Experimental results demonstrate that our model achieves the state-of-the-art performance with high computational efficiency." ] }
1703.03107
2595521492
Increasing evidence suggests that a growing amount of social media content is generated by autonomous entities known as social bots. In this work we present a framework to detect such entities on Twitter. We leverage more than a thousand features extracted from public data and meta-data about users: friends, tweet content and sentiment, network patterns, and activity time series. We benchmark the classification framework by using a publicly available dataset of Twitter bots. This training data is enriched by a manually annotated collection of active Twitter users that include both humans and bots of varying sophistication. Our models yield high accuracy and agreement with each other and can detect bots of different nature. Our estimates suggest that between 9% and 15% of active Twitter accounts are bots. Characterizing ties among accounts, we observe that simple bots tend to interact with bots that exhibit more human-like behaviors. Analysis of content flows reveals retweet and mention strategies adopted by bots to interact with different target groups. Using clustering analysis, we characterize several subclasses of accounts, including spammers, self promoters, and accounts that post content from connected applications.
Also known as "sybil" accounts, social bots can pollute online discussion by lending false credibility to their messages and influencing other users @cite_19 @cite_3 . Recent studies quantify the extent to which automated systems can dominate discussions on Twitter about topics ranging from electronic cigarettes @cite_34 to elections @cite_10 . Large collections of social bots, also known as botnets, are controlled by botmasters and used for coordinated activities. Examples of such botnets have been identified in advertisement campaigns @cite_8 and in influence operations around the Syrian civil war @cite_48 . Social bots also vary greatly in terms of their behavior, intent, and vulnerabilities, as illustrated in a categorization scheme for bot attacks @cite_35 .
{ "cite_N": [ "@cite_35", "@cite_8", "@cite_48", "@cite_3", "@cite_19", "@cite_34", "@cite_10" ], "mid": [ "1549828508", "2951362990", "", "", "1837843568", "1957100977", "2550819555" ], "abstract": [ "In the past, online social networks (OSN) like Facebook and Twitter became powerful instruments for communication and networking. Unfortunately, they have also become a welcome target for socialbot attacks. Therefore, a deep understanding of the nature of such attacks is important to protect the Eco-System of OSNs. In this extended abstract we propose a categorization scheme of social bot attacks that aims at providing an overview of the state of the art of techniques in this emerging field. Finally, we demonstrate the usefulness of our categorization scheme by characterizing recent socialbot attacks according to our categorization scheme.", "It is known that many Twitter users are bots, which are accounts controlled and sometimes created by computers. Twitter bots can send spam tweets, manipulate public opinion and be used for online fraud. Here we report the discovery, retrieval, and analysis of the 'Star Wars' botnet in Twitter, which consists of more than 350,000 bots tweeting random quotations exclusively from Star Wars novels. The botnet contains a single type of bot, showing exactly the same properties throughout the botnet. It is unusually large, many times larger than other available datasets. It provides a valuable source of ground truth for research on Twitter bots. We analysed and revealed rich details on how the botnet was designed and created. As of this writing, the Star Wars bots are still alive in Twitter. 
They have survived since their creation in 2013, despite the increasing efforts in recent years to detect and remove Twitter bots. We also reflect on the 'unconventional' way in which we discovered the Star Wars bots, and discuss the current problems and future challenges of Twitter bot detection.", "", "", "Today's social bots are sophisticated and sometimes menacing. Indeed, their presence can endanger online ecosystems as well as our society.", "Background: Twitter has become the "wild-west" of marketing and promotional strategies for advertisement agencies. Electronic cigarettes have been heavily marketed across Twitter feeds, offering discounts, "kid-friendly" flavors, algorithmically generated false testimonials, and free samples. Methods: All electronic cigarette keyword related tweets from a 10% sample of Twitter spanning January 2012 through December 2014 (approximately 850,000 total tweets) were identified and categorized as Automated or Organic by combining a keyword classification and a machine trained Human Detection algorithm. A sentiment analysis using Hedonometrics was performed on Organic tweets to quantify the change in consumer sentiments over time. Commercialized tweets were topically categorized with key phrasal pattern matching. Results: The overwhelming majority (80%) of tweets were classified as automated or promotional in nature. The majority of these tweets were coded as commercialized (83.65% in 2013), up to 33% of which offered discounts or free samples and appeared on over a billion Twitter feeds as impressions. The positivity of Organic (human) classified tweets has decreased over time (5.84 in 2013 to 5.77 in 2014) due to a relative increase in the negative words ban, tobacco, doesn't, drug, against, poison, tax and a relative decrease in the positive words like haha, good, cool. 
Automated tweets are more positive than organic (6.17 versus 5.84) due to a relative increase in the marketing words best, win, buy, sale, health, discount and a relative decrease in negative words like bad, hate, stupid, don't. Conclusions: Due to the youth presence on Twitter and the clinical uncertainty of the long-term health complications of electronic cigarette consumption, the protection of public health warrants scrutiny and potential regulation of social media marketing.", "Social media have been extensively praised for increasing democratic discussion on social issues related to policy and politics. However, what happens when these powerful communication tools are exploited to manipulate online discussion, to change the public perception of political entities, or even to try to affect the outcome of political elections? In this study we investigated how the presence of social media bots, algorithmically driven entities that on the surface appear as legitimate users, affects political discussion around the 2016 U.S. Presidential election. By leveraging state-of-the-art social bot detection algorithms, we uncovered a large fraction of the user population that may not be human, accounting for a significant portion of generated content (about one-fifth of the entire conversation). We inferred political partisanships from hashtag adoption, for both humans and bots, and studied spatio-temporal communication, political support dynamics, and influence mechanisms by discovering the level of network embeddedness of the bots. Our findings suggest that the presence of social media bots can indeed negatively affect democratic political discussion rather than improving it, which in turn can potentially alter public opinion and endanger the integrity of the Presidential election." ] }
1703.03107
2595521492
Increasing evidence suggests that a growing amount of social media content is generated by autonomous entities known as social bots. In this work we present a framework to detect such entities on Twitter. We leverage more than a thousand features extracted from public data and meta-data about users: friends, tweet content and sentiment, network patterns, and activity time series. We benchmark the classification framework by using a publicly available dataset of Twitter bots. This training data is enriched by a manually annotated collection of active Twitter users that include both humans and bots of varying sophistication. Our models yield high accuracy and agreement with each other and can detect bots of different nature. Our estimates suggest that between 9% and 15% of active Twitter accounts are bots. Characterizing ties among accounts, we observe that simple bots tend to interact with bots that exhibit more human-like behaviors. Analysis of content flows reveals retweet and mention strategies adopted by bots to interact with different target groups. Using clustering analysis, we characterize several subclasses of accounts, including spammers, self promoters, and accounts that post content from connected applications.
Much of the previous work on detecting bots is from the perspective of the social network platform operators, implying full access to all data. These studies focus on collecting large-scale data to either cluster behavioral patterns of users @cite_9 or classify accounts using supervised learning techniques @cite_4 @cite_40 . For instance, Beutel et al. decomposed event data in time, user, and activity dimensions to extract similar behaviors @cite_26 . These techniques are useful for identifying coordinated large-scale attacks directed at a common set of targets at the same time, but accounts with similar strategies might also target different groups and operate separately from each other.
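The behavioral-clustering idea described above can be illustrated with a toy sketch. This is not Beutel et al.'s actual CopyCatch algorithm; the feature names, the cosine-similarity measure, and the threshold are all illustrative assumptions.

```python
import numpy as np

def cluster_by_behavior(features, threshold=0.95):
    """Greedily group accounts whose activity vectors point the same way.

    features: (N, F) array with one row of behavioral features per account
    (e.g. tweets/day, retweet fraction, mentions per tweet -- hypothetical).
    Rows whose cosine similarity to a cluster's seed exceeds `threshold`
    share a label; unusually large clusters hint at coordinated automation.
    """
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.clip(norms, 1e-12, None)   # normalize rows
    labels = np.full(len(features), -1, dtype=int)
    k = 0
    for i in range(len(features)):
        if labels[i] >= 0:
            continue                    # already assigned to a cluster
        sims = unit @ unit[i]           # cosine similarity to seed i
        labels[(sims > threshold) & (labels < 0)] = k
        k += 1
    return labels

# three near-identical "bot-like" accounts and one dissimilar account
acts = np.array([[10.0, 0.9, 2.0],
                 [10.1, 0.9, 2.0],
                 [ 9.9, 0.9, 2.0],
                 [ 0.5, 0.1, 5.0]])
labels = cluster_by_behavior(acts)
assert labels[0] == labels[1] == labels[2]
assert labels[3] != labels[0]
```

The point of the sketch is the limitation the paragraph notes: grouping requires near-identical behavior at the same time, so coordinated accounts that target different groups or stagger their activity fall into separate clusters.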
{ "cite_N": [ "@cite_40", "@cite_9", "@cite_26", "@cite_4" ], "mid": [ "176212337", "1781642226", "2133591726", "2092277251" ], "abstract": [ "The rise in popularity of social networking sites such as Twitter and Facebook has been paralleled by the rise of unwanted, disruptive entities on these networks, including spammers, malware disseminators, and other content polluters. Inspired by sociologists working to ensure the success of commons and criminologists focused on deterring vandalism and preventing crime, we present the first long-term study of social honeypots for tempting, profiling, and filtering content polluters in social media. Concretely, we report on our experiences via a seven-month deployment of 60 honeypots on Twitter that resulted in the harvesting of 36,000 candidate content polluters. As part of our study, we (1) examine the harvested Twitter users, including an analysis of link payloads, user behavior over time, and followers/following network dynamics and (2) evaluate a wide range of features to investigate the effectiveness of automatic content polluter identification.", "Fake identities and Sybil accounts are pervasive in today's online communities. They are responsible for a growing number of threats, including fake product reviews, malware and spam on social networks, and astroturf political campaigns. Unfortunately, studies show that existing tools such as CAPTCHAs and graph-based Sybil detectors have not proven to be effective defenses. In this paper, we describe our work on building a practical system for detecting fake identities using server-side clickstream models. We develop a detection approach that groups \"similar\" user clickstreams into behavioral clusters, by partitioning a similarity graph that captures distances between clickstream sequences. We validate our clickstream models using ground-truth traces of 16,000 real and Sybil users from Renren, a large Chinese social network with 220M users. 
We propose a practical detection system based on these models, and show that it provides very high detection accuracy on our clickstream traces. Finally, we worked with collaborators at Renren and LinkedIn to test our prototype on their server-side data. Following positive results, both companies have expressed strong interest in further experimentation and possible internal deployment.", "How can web services that depend on user generated content discern fraudulent input by spammers from legitimate input? In this paper we focus on the social network Facebook and the problem of discerning ill-gotten Page Likes, made by spammers hoping to turn a profit, from legitimate Page Likes. Our method, which we refer to as CopyCatch, detects lockstep Page Like patterns on Facebook by analyzing only the social graph between users and Pages and the times at which the edges in the graph (the Likes) were created. We offer the following contributions: (1) We give a novel problem formulation, with a simple concrete definition of suspicious behavior in terms of graph structure and edge constraints. (2) We offer two algorithms to find such suspicious lockstep behavior - one provably-convergent iterative algorithm and one approximate, scalable MapReduce implementation. (3) We show that our method severely limits \"greedy attacks\" and analyze the bounds from the application of the Zarankiewicz problem to our setting. Finally, we demonstrate and discuss the effectiveness of CopyCatch at Facebook and on synthetic data, as well as potential extensions to anomaly detection problems in other domains. CopyCatch is actively in use at Facebook, searching for attacks on Facebook's social graph of over a billion users, many millions of Pages, and billions of Page Likes.", "Sybil accounts are fake identities created to unfairly increase the power or resources of a single malicious user. 
Researchers have long known about the existence of Sybil accounts in online communities such as file-sharing systems, but they have not been able to perform large-scale measurements to detect them or measure their activities. In this article, we describe our efforts to detect, characterize, and understand Sybil account activity in the Renren Online Social Network (OSN). We use ground truth provided by Renren Inc. to build measurement-based Sybil detectors and deploy them on Renren to detect more than 100,000 Sybil accounts. Using our full dataset of 650,000 Sybils, we examine several aspects of Sybil behavior. First, we study their link creation behavior and find that contrary to prior conjecture, Sybils in OSNs do not form tight-knit communities. Next, we examine the fine-grained behaviors of Sybils on Renren using clickstream data. Third, we investigate behind-the-scenes collusion between large groups of Sybils. Our results reveal that Sybils with no explicit social ties still act in concert to launch attacks. Finally, we investigate enhanced techniques to identify stealthy Sybils. In summary, our study advances the understanding of Sybil behavior on OSNs and shows that Sybils can effectively avoid existing community-based Sybil detectors. We hope that our results will foster new research on Sybil detection that is based on novel types of Sybil features." ] }
1703.03107
2595521492
Increasing evidence suggests that a growing amount of social media content is generated by autonomous entities known as social bots. In this work we present a framework to detect such entities on Twitter. We leverage more than a thousand features extracted from public data and meta-data about users: friends, tweet content and sentiment, network patterns, and activity time series. We benchmark the classification framework by using a publicly available dataset of Twitter bots. This training data is enriched by a manually annotated collection of active Twitter users that include both humans and bots of varying sophistication. Our models yield high accuracy and agreement with each other and can detect bots of different nature. Our estimates suggest that between 9% and 15% of active Twitter accounts are bots. Characterizing ties among accounts, we observe that simple bots tend to interact with bots that exhibit more human-like behaviors. Analysis of content flows reveals retweet and mention strategies adopted by bots to interact with different target groups. Using clustering analysis, we characterize several subclasses of accounts, including spammers, self promoters, and accounts that post content from connected applications.
Structural connectivity may provide important cues. However, Yang et al. studied large-scale sybil attacks and observed sophisticated sybils that develop strategies for building normal-looking social ties, making themselves harder to detect @cite_4 . Some sybil attacks analyze the social graph of targeted groups to infiltrate specific organizations @cite_25 . SybilRank is a system developed to identify attacks from their underlying topology @cite_47 . Alvisi et al. surveyed the evolution of sybil defense protocols that leverage the structural properties of the social graph @cite_15 .
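The intuition behind SybilRank-style topological ranking, trust seeded at known-honest accounts and propagated by a short, early-terminated random walk, can be sketched as follows. This is a simplified toy under stated assumptions (dense adjacency matrix, a single seed, a fixed O(log N) walk length), not the production system.

```python
import numpy as np

def sybil_rank(adj, seeds, n_iter=None):
    """Toy SybilRank-style trust propagation (simplified sketch).

    adj: (N, N) symmetric 0/1 adjacency matrix of the social graph.
    seeds: indices of trusted (manually verified) accounts.
    Trust spreads along edges for O(log N) steps -- too few for it to
    leak far across the sparse cut separating sybils from honest users --
    and is then degree-normalized; low-ranked nodes are sybil candidates.
    """
    deg = adj.sum(axis=1)
    P = adj / np.clip(deg[:, None], 1, None)   # row-stochastic transition matrix
    trust = np.zeros(adj.shape[0])
    trust[list(seeds)] = 1.0 / len(seeds)
    steps = n_iter or int(np.ceil(np.log2(adj.shape[0])))
    for _ in range(steps):
        trust = trust @ P                      # one power-iteration step
    return trust / np.clip(deg, 1, None)       # degree-normalized rank

# two 4-cliques (honest: 0-3, sybil: 4-7) joined by a single attack edge 3-4
A = np.zeros((8, 8))
for a in range(4):
    for b in range(4):
        if a != b:
            A[a, b] = A[a + 4, b + 4] = 1
A[3, 4] = A[4, 3] = 1
rank = sybil_rank(A, seeds=[0])
assert rank[:4].min() > rank[4:].max()   # honest region outranks sybil region
```

The single attack edge is the sparse cut: after only log2(8) = 3 walk steps, almost no trust has crossed it, so every honest node outranks every sybil node.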
{ "cite_N": [ "@cite_47", "@cite_4", "@cite_25", "@cite_15" ], "mid": [ "2168508162", "2092277251", "", "1989643196" ], "abstract": [ "Users increasingly rely on the trustworthiness of the information exposed on Online Social Networks (OSNs). In addition, OSN providers base their business models on the marketability of this information. However, OSNs suffer from abuse in the form of the creation of fake accounts, which do not correspond to real humans. Fakes can introduce spam, manipulate online rating, or exploit knowledge extracted from the network. OSN operators currently expend significant resources to detect, manually verify, and shut down fake accounts. Tuenti, the largest OSN in Spain, dedicates 14 full-time employees to that task alone, incurring a significant monetary cost. Such a task has yet to be successfully automated because of the difficulty in reliably capturing the diverse behavior of fake and real OSN profiles. We introduce a new tool in the hands of OSN operators, which we call SybilRank. It relies on social graph properties to rank users according to their perceived likelihood of being fake (Sybils). SybilRank is computationally efficient and can scale to graphs with hundreds of millions of nodes, as demonstrated by our Hadoop prototype. We deployed SybilRank in Tuenti's operation center. We found that ∼90% of the 200K accounts that SybilRank designated as most likely to be fake actually warranted suspension. On the other hand, with Tuenti's current user-report-based approach only ∼5% of the inspected accounts are indeed fake.", "Sybil accounts are fake identities created to unfairly increase the power or resources of a single malicious user. 
In this article, we describe our efforts to detect, characterize, and understand Sybil account activity in the Renren Online Social Network (OSN). We use ground truth provided by Renren Inc. to build measurement-based Sybil detectors and deploy them on Renren to detect more than 100,000 Sybil accounts. Using our full dataset of 650,000 Sybils, we examine several aspects of Sybil behavior. First, we study their link creation behavior and find that contrary to prior conjecture, Sybils in OSNs do not form tight-knit communities. Next, we examine the fine-grained behaviors of Sybils on Renren using clickstream data. Third, we investigate behind-the-scenes collusion between large groups of Sybils. Our results reveal that Sybils with no explicit social ties still act in concert to launch attacks. Finally, we investigate enhanced techniques to identify stealthy Sybils. In summary, our study advances the understanding of Sybil behavior on OSNs and shows that Sybils can effectively avoid existing community-based Sybil detectors. We hope that our results will foster new research on Sybil detection that is based on novel types of Sybil features.", "", "Sybil attacks in which an adversary forges a potentially unbounded number of identities are a danger to distributed systems and online social networks. The goal of sybil defense is to accurately identify sybil identities. This paper surveys the evolution of sybil defense protocols that leverage the structural properties of the social graph underlying a distributed system to identify sybil identities. We make two main contributions. First, we clarify the deep connection between sybil defense and the theory of random walks. This leads us to identify a community detection algorithm that, for the first time, offers provable guarantees in the context of sybil defense. Second, we advocate a new goal for sybil defense that addresses the more limited, but practically useful, goal of securely white-listing a local region of the graph." 
] }
1703.03107
2595521492
Increasing evidence suggests that a growing amount of social media content is generated by autonomous entities known as social bots. In this work we present a framework to detect such entities on Twitter. We leverage more than a thousand features extracted from public data and meta-data about users: friends, tweet content and sentiment, network patterns, and activity time series. We benchmark the classification framework by using a publicly available dataset of Twitter bots. This training data is enriched by a manually annotated collection of active Twitter users that include both humans and bots of varying sophistication. Our models yield high accuracy and agreement with each other and can detect bots of different nature. Our estimates suggest that between 9% and 15% of active Twitter accounts are bots. Characterizing ties among accounts, we observe that simple bots tend to interact with bots that exhibit more human-like behaviors. Analysis of content flows reveals retweet and mention strategies adopted by bots to interact with different target groups. Using clustering analysis, we characterize several subclasses of accounts, including spammers, self promoters, and accounts that post content from connected applications.
The work presented here follows several previous contributions to the problem of social bot detection that leverage learning models trained with data collected from human and bot accounts. Chu built a classification system identifying accounts controlled by humans, bots, and cyborgs @cite_46 @cite_0 . Wang analyzed sybil attacks using annotations by experts and crowd-sourcing workers to evaluate consistency and effectiveness of different detection systems @cite_17 . Clark labeled 1,000 accounts by hand and found natural language text features to be very effective at discriminating between human and automated accounts @cite_23 . Lee used a honeypot approach to collect the largest sample of bot accounts available to date @cite_40 . That study generated the honeypot dataset used in the present paper. Here, we extend this body of prior work by exploring many different categories of features, contributing a new labeled dataset, estimating the number of bot accounts, analyzing information flow among accounts, identifying several classes of behaviors, and providing a public bot detection service.
{ "cite_N": [ "@cite_0", "@cite_40", "@cite_23", "@cite_46", "@cite_17" ], "mid": [ "2072715695", "176212337", "2963208026", "1969568357", "1671965234" ], "abstract": [ "Twitter is a new web application playing dual roles of online social networking and microblogging. Users communicate with each other by publishing text-based posts. The popularity and open structure of Twitter have attracted a large number of automated programs, known as bots, which appear to be a double-edged sword to Twitter. Legitimate bots generate a large amount of benign tweets delivering news and updating feeds, while malicious bots spread spam or malicious contents. More interestingly, in the middle between human and bot, there has emerged cyborg referred to either bot-assisted human or human-assisted bot. To assist human users in identifying who they are interacting with, this paper focuses on the classification of human, bot, and cyborg accounts on Twitter. We first conduct a set of large-scale measurements with a collection of over 500,000 accounts. We observe the difference among human, bot, and cyborg in terms of tweeting behavior, tweet content, and account properties. Based on the measurement results, we propose a classification system that includes the following four parts: 1) an entropy-based component, 2) a spam detection component, 3) an account properties component, and 4) a decision maker. It uses the combination of features extracted from an unknown user to determine the likelihood of being a human, bot, or cyborg. Our experimental evaluation demonstrates the efficacy of the proposed classification system.", "The rise in popularity of social networking sites such as Twitter and Facebook has been paralleled by the rise of unwanted, disruptive entities on these networks — including spammers, malware disseminators, and other content polluters. 
Inspired by sociologists working to ensure the success of commons and criminologists focused on deterring vandalism and preventing crime, we present the first long-term study of social honeypots for tempting, profiling, and filtering content polluters in social media. Concretely, we report on our experiences via a seven-month deployment of 60 honeypots on Twitter that resulted in the harvesting of 36,000 candidate content polluters. As part of our study, we (1) examine the harvested Twitter users, including an analysis of link payloads, user behavior over time, and follower/following network dynamics and (2) evaluate a wide range of features to investigate the effectiveness of automatic content polluter identification.", "Abstract Twitter, a popular social media outlet, has evolved into a vast source of linguistic data, rich with opinion, sentiment, and discussion. Due to the increasing popularity of Twitter, its perceived potential for exerting social influence has led to the rise of a diverse community of automatons, commonly referred to as bots. These inorganic and semi-organic Twitter entities can range from the benevolent (e.g., weather-update bots, help-wanted-alert bots) to the malevolent (e.g., spamming messages, advertisements, or radical opinions). Existing detection algorithms typically leverage metadata (time between tweets, number of followers, etc.) to identify robotic accounts. Here, we present a powerful classification scheme that exclusively uses the natural language text from organic users to provide a criterion for identifying accounts posting automated messages. Since the classifier operates on text alone, it is flexible and may be applied to any textual data beyond the Twittersphere.", "Twitter is a new web application playing dual roles of online social networking and micro-blogging. Users communicate with each other by publishing text-based posts. 
The popularity and open structure of Twitter have attracted a large number of automated programs, known as bots, which appear to be a double-edged sword to Twitter. Legitimate bots generate a large amount of benign tweets delivering news and updating feeds, while malicious bots spread spam or malicious contents. More interestingly, in the middle between human and bot, there has emerged cyborg referred to either bot-assisted human or human-assisted bot. To assist human users in identifying who they are interacting with, this paper focuses on the classification of human, bot and cyborg accounts on Twitter. We first conduct a set of large-scale measurements with a collection of over 500,000 accounts. We observe the difference among human, bot and cyborg in terms of tweeting behavior, tweet content, and account properties. Based on the measurement results, we propose a classification system that includes the following four parts: (1) an entropy-based component, (2) a machine-learning-based component, (3) an account properties component, and (4) a decision maker. It uses the combination of features extracted from an unknown user to determine the likelihood of being a human, bot or cyborg. Our experimental evaluation demonstrates the efficacy of the proposed classification system.", "As popular tools for spreading spam and malware, Sybils (or fake accounts) pose a serious threat to online communities such as Online Social Networks (OSNs). Today, sophisticated attackers are creating realistic Sybils that effectively befriend legitimate users, rendering most automated Sybil detection techniques ineffective. In this paper, we explore the feasibility of a crowdsourced Sybil detection system for OSNs. We conduct a large user study on the ability of humans to detect today's Sybil accounts, using a large corpus of ground-truth Sybil accounts from the Facebook and Renren networks. 
We analyze detection accuracy by both \"experts\" and \"turkers\" under a variety of conditions, and find that while turkers vary significantly in their effectiveness, experts consistently produce near-optimal results. We use these results to drive the design of a multi-tier crowdsourcing Sybil detection system. Using our user study data, we show that this system is scalable, and can be highly effective either as a standalone system or as a complementary technique to current tools." ] }
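The entropy-based component of the human/bot/cyborg classifier described in the record above measures how regular an account's posting rhythm is: automated accounts tend to post at near-fixed intervals, which concentrates their inter-tweet gaps in few bins and drives entropy toward zero. A minimal sketch of that idea follows; the bin size, toy timestamps, and any threshold choice are illustrative assumptions, not details of the cited system:

```python
import math
from collections import Counter

def interval_entropy(timestamps, bin_size=60):
    """Shannon entropy (bits) of binned inter-tweet intervals.

    Near-zero entropy means the account posts on a highly regular
    schedule (bot-like); human activity spreads gaps across many bins.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    bins = Counter(g // bin_size for g in gaps)  # coarse histogram of gaps
    n = len(gaps)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

# A hypothetical bot posting exactly every 5 minutes vs. irregular timing.
bot_times = [300 * i for i in range(20)]
human_times = [0, 40, 500, 620, 3000, 3300, 7200, 7800, 9000, 15000]

bot_H = interval_entropy(bot_times)      # 0.0: every gap in one bin
human_H = interval_entropy(human_times)  # several bits: gaps spread out
```

In a full system this score would be only one feature among many (account properties, content features, etc.) feeding the final decision maker.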
1703.03107
2595521492
Increasing evidence suggests that a growing amount of social media content is generated by autonomous entities known as social bots. In this work we present a framework to detect such entities on Twitter. We leverage more than a thousand features extracted from public data and meta-data about users: friends, tweet content and sentiment, network patterns, and activity time series. We benchmark the classification framework by using a publicly available dataset of Twitter bots. This training data is enriched by a manually annotated collection of active Twitter users that include both humans and bots of varying sophistication. Our models yield high accuracy and agreement with each other and can detect bots of different nature. Our estimates suggest that between 9% and 15% of active Twitter accounts are bots. Characterizing ties among accounts, we observe that simple bots tend to interact with bots that exhibit more human-like behaviors. Analysis of content flows reveals retweet and mention strategies adopted by bots to interact with different target groups. Using clustering analysis, we characterize several subclasses of accounts, including spammers, self promoters, and accounts that post content from connected applications.
An alternative approach to study social bots and sybil attacks is to understand what makes certain groups and individuals more appealing as targets. Wald studied the factors affecting the likelihood of a user being targeted by social bots @cite_27 . These approaches point to effective strategies that future social bots might develop.
{ "cite_N": [ "@cite_27" ], "mid": [ "2013734030" ], "abstract": [ "The popularity of the Twitter social networking site has made it a target for social bots, which use increasingly-complex algorithms to engage users and pretend to be humans. While much research has studied how to identify such bots in the process of spam detection, little research has looked at the other side of the question - detecting users likely to be fooled by bots. In this paper, we examine a dataset consisting of 610 users who were messaged by Twitter bots, and determine which features describing these users were most helpful in predicting whether or not they would interact with the bots (through replies or following the bot). We then use six classifiers to build models for predicting whether a given user will interact with the bot, both using the selected features and using all features. We find that a users' Klout score, friends count, and followers count are most predictive of whether a user will interact with a bot, and that the Random Forest algorithm produces the best classifier, when used in conjunction with one of the better feature ranking algorithms (although poor feature ranking can actually make performance worse than no feature ranking). Overall, these results show promise for helping understand which users are most vulnerable to social bots." ] }
1703.03107
2595521492
Increasing evidence suggests that a growing amount of social media content is generated by autonomous entities known as social bots. In this work we present a framework to detect such entities on Twitter. We leverage more than a thousand features extracted from public data and meta-data about users: friends, tweet content and sentiment, network patterns, and activity time series. We benchmark the classification framework by using a publicly available dataset of Twitter bots. This training data is enriched by a manually annotated collection of active Twitter users that include both humans and bots of varying sophistication. Our models yield high accuracy and agreement with each other and can detect bots of different nature. Our estimates suggest that between 9% and 15% of active Twitter accounts are bots. Characterizing ties among accounts, we observe that simple bots tend to interact with bots that exhibit more human-like behaviors. Analysis of content flows reveals retweet and mention strategies adopted by bots to interact with different target groups. Using clustering analysis, we characterize several subclasses of accounts, including spammers, self promoters, and accounts that post content from connected applications.
Recently, we have observed efforts to facilitate research collaborations on the topic of social bots. DARPA organized a bot detection challenge in the domain of anti-vaccine campaigns on Twitter @cite_41 . We released our Twitter bot detection system online for public use @cite_29 . Since its release, our system has received millions of requests and we are improving models based on feedback we received from our users. The increasing availability of software and datasets on social bots will help design systems that are capable of co-evolving with recent social bots and hopefully mitigating the effects of their malicious activities.
{ "cite_N": [ "@cite_41", "@cite_29" ], "mid": [ "2278635123", "2263846226" ], "abstract": [ "From politicians and nation states to terrorist groups, numerous organizations reportedly conduct explicit campaigns to influence opinions on social media, posing a risk to freedom of expression. Thus, there is a need to identify and eliminate \"influence bots\"--realistic, automated identities that illicitly shape discussions on sites like Twitter and Facebook--before they get too influential.", "While most online social media accounts are controlled by humans, these platforms also host automated agents called social bots or sybil accounts. Recent literature reported on cases of social bots imitating humans to manipulate discussions, alter the popularity of users, pollute content and spread misinformation, and even perform terrorist propaganda and recruitment actions. Here we present BotOrNot, a publicly-available service that leverages more than one thousand features to evaluate the extent to which a Twitter account exhibits similarity to the known characteristics of social bots. Since its release in May 2014, BotOrNot has served over one million requests via our website and APIs." ] }
1703.03200
2963554023
Sparsity is one of the major problems in natural language processing. The problem becomes even more severe in agglutinating languages that are highly prone to be inflected. We deal with sparsity in Turkish by adopting morphological features for part-of-speech tagging. We learn inflectional and derivational morpheme tags in Turkish by using conditional random fields (CRF) and we employ the morpheme tags in part-of-speech (PoS) tagging by using hidden Markov models (HMMs) to mitigate sparsity. Results show that using morpheme tags in PoS tagging helps alleviate the sparsity in emission probabilities. Our model outperforms other hidden Markov model based PoS tagging models for small training datasets in Turkish. We obtain an accuracy of 94.1% in morpheme tagging and 89.2% in PoS tagging on a 5K training dataset.
There has been a substantial amount of work on unsupervised morphological segmentation. Goldsmith @cite_14 , Creutz and Lagus @cite_15 build morphological segmentation systems based on minimum description length (MDL). Creutz and Lagus @cite_12 introduce a hidden Markov model (HMM) that employs the probability distributions between different morpheme categories such as prefix, stem, and suffix. @cite_3 introduce a log-linear model for unsupervised morphological segmentation that incorporates MDL-inspired priors.
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_3", "@cite_12" ], "mid": [ "2117621558", "2101711363", "1975638594", "201532657" ], "abstract": [ "We present two methods for unsupervised segmentation of words into morpheme-like units. The model utilized is especially suited for languages with a rich morphology, such as Finnish. The first method is based on the Minimum Description Length (MDL) principle and works online. In the second method, Maximum Likelihood (ML) optimization is used. The quality of the segmentations is measured using an evaluation method that compares the segmentations produced to an existing morphological analysis. Experiments on both Finnish and English corpora show that the presented methods perform well compared to a current state-of-the-art system.", "This study reports the results of using minimum description length (MDL) analysis to model unsupervised learning of the morphological segmentation of European languages, using corpora ranging in size from 5,000 words to 500,000 words. We develop a set of heuristics that rapidly develop a probabilistic morphological grammar, and use MDL as our primary tool to determine whether the modifications proposed by the heuristics will be adopted or not. The resulting grammar matches well the analysis that would be developed by a human morphologist.In the final section, we discuss the relationship of this style of MDL grammatical analysis to the notion of evaluation metric in early generative grammar.", "Morphological segmentation breaks words into morphemes (the basic semantic units). It is a key component for natural language processing systems. Unsupervised morphological segmentation is attractive, because in every language there are virtually unlimited supplies of text, but very few labeled resources. 
However, most existing model-based systems for unsupervised morphological segmentation use directed generative models, making it difficult to leverage arbitrary overlapping features that are potentially helpful to learning. In this paper, we present the first log-linear model for unsupervised morphological segmentation. Our model uses overlapping features such as morphemes and their contexts, and incorporates exponential priors inspired by the minimum description length (MDL) principle. We present efficient algorithms for learning and inference by combining contrastive estimation with sampling. Our system, based on monolingual features only, outperforms a state-of-the-art system by a large margin, even when the latter uses bilingual information such as phrasal alignment and phonetic correspondence. On the Arabic Penn Treebank, our system reduces F1 error by 11% compared to Morfessor.", "This work presents an algorithm for the unsupervised learning, or induction, of a simple morphology of a natural language. A probabilistic maximum a posteriori model is utilized, which builds hierarchical representations for a set of morphs, which are morpheme-like units discovered from unannotated text corpora. The induced morph lexicon stores parameters related to both the “meaning” and “form” of the morphs it contains. These parameters affect the role of the morphs in words. The model is implemented in a task of unsupervised morpheme segmentation of Finnish and English words. Very good results are obtained for Finnish and almost as good results are obtained in the English task." ] }
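The MDL-based segmentation systems cited in this record all minimize a two-part code length: bits to write down a morph lexicon plus bits to encode the corpus with that lexicon. Reusable morphs shrink the lexicon while keeping corpus cost low. A toy sketch of that objective follows; the per-character cost, the toy corpus, and the fixed candidate segmentations are illustrative assumptions, not the cited models' actual coding schemes or search procedures:

```python
import math
from collections import Counter

def mdl_cost(corpus, segmentation):
    """Two-part MDL code length (bits): lexicon cost + corpus cost.

    segmentation maps each word to a tuple of morphs. The lexicon is
    coded letter by letter at a flat per-character cost; each corpus
    token is coded by the empirical probability of its morphs.
    """
    morph_tokens = [m for w in corpus for m in segmentation[w]]
    lexicon = set(morph_tokens)
    BITS_PER_CHAR = 5  # roughly log2(26); a crude assumption
    lexicon_cost = sum(BITS_PER_CHAR * len(m) for m in lexicon)
    counts = Counter(morph_tokens)
    total = len(morph_tokens)
    corpus_cost = -sum(c * math.log2(c / total) for c in counts.values())
    return lexicon_cost + corpus_cost

corpus = ["walked", "walking", "talked", "talking"] * 5
split = {w: (w[:4], w[4:]) for w in corpus}  # walk+ed, talk+ing, ...
whole = {w: (w,) for w in corpus}            # unsegmented baseline
```

On this corpus the segmented lexicon {walk, talk, ed, ing} yields a lower total cost than storing each inflected form whole, which is the effect that drives MDL learners toward linguistically plausible morphs.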
1703.03200
2963554023
Sparsity is one of the major problems in natural language processing. The problem becomes even more severe in agglutinating languages that are highly prone to be inflected. We deal with sparsity in Turkish by adopting morphological features for part-of-speech tagging. We learn inflectional and derivational morpheme tags in Turkish by using conditional random fields (CRF) and we employ the morpheme tags in part-of-speech (PoS) tagging by using hidden Markov models (HMMs) to mitigate sparsity. Results show that using morpheme tags in PoS tagging helps alleviate the sparsity in emission probabilities. Our model outperforms other hidden Markov model based PoS tagging models for small training datasets in Turkish. We obtain an accuracy of 94.1% in morpheme tagging and 89.2% in PoS tagging on a 5K training dataset.
To our knowledge, @cite_1 introduce labeled morphological segmentation for the first time in a supervised learning framework without using any rules. They model morphotactics by a semi-Markov model. Different levels of tagsets are introduced that capture different levels of granularity. Our model resembles theirs in its treatment of morphological tagging.
{ "cite_N": [ "@cite_1" ], "mid": [ "2251565024" ], "abstract": [ "We present labeled morphological segmentation—an alternative view of morphological processing that unifies several tasks. We introduce a new hierarchy of morphotactic tagsets and CHIPMUNK, a discriminative morphological segmentation system that, contrary to previous work, explicitly models morphotactics. We show improved performance on three tasks for all six languages: (i) morphological segmentation, (ii) stemming and (iii) morphological tag classification. For morphological segmentation our method shows absolute improvements of 2-6 points F1 over a strong baseline." ] }
1703.03200
2963554023
Sparsity is one of the major problems in natural language processing. The problem becomes even more severe in agglutinating languages that are highly prone to be inflected. We deal with sparsity in Turkish by adopting morphological features for part-of-speech tagging. We learn inflectional and derivational morpheme tags in Turkish by using conditional random fields (CRF) and we employ the morpheme tags in part-of-speech (PoS) tagging by using hidden Markov models (HMMs) to mitigate sparsity. Results show that using morpheme tags in PoS tagging helps alleviate the sparsity in emission probabilities. Our model outperforms other hidden Markov model based PoS tagging models for small training datasets in Turkish. We obtain an accuracy of 94.1% in morpheme tagging and 89.2% in PoS tagging on a 5K training dataset.
Morpheme tags have been used in many natural language processing tasks. El-Kahlout and Oflazer @cite_8 employ morphological tags to alleviate sparsity in statistical machine translation by mapping Turkish morphemes that share a morphological tag to the same English translation. They report that using morphological tags yields a substantial improvement in the BLEU score.
{ "cite_N": [ "@cite_8" ], "mid": [ "2139770225" ], "abstract": [ "This paper presents some very preliminary results for and problems in developing a statistical machine translation system from English to Turkish. Starting with a baseline word model trained from about 20K aligned sentences, we explore various ways of exploiting morphological structure to improve upon the baseline system. As Turkish is a language with complex agglutinative word structures, we experiment with morphologically segmented and disambiguated versions of the parallel texts in order to also uncover relations between morphemes and function words in one language with morphemes and functions words in the other, in addition to relations between open class content words. Morphological segmentation on the Turkish side also conflates the statistics from allomorphs so that sparseness can be alleviated to a certain extent. We find that this approach coupled with a simple grouping of most frequent morphemes and function words on both sides improve the BLEU score from the baseline of 0.0752 to 0.0913 with the small training data. We close with a discussion on why one should not expect distortion parameters to model word-local morpheme ordering and that a new approach to handling complex morphotactics is needed." ] }
1703.03200
2963554023
Sparsity is one of the major problems in natural language processing. The problem becomes even more severe in agglutinating languages that are highly prone to be inflected. We deal with sparsity in Turkish by adopting morphological features for part-of-speech tagging. We learn inflectional and derivational morpheme tags in Turkish by using conditional random fields (CRF) and we employ the morpheme tags in part-of-speech (PoS) tagging by using hidden Markov models (HMMs) to mitigate sparsity. Results show that using morpheme tags in PoS tagging helps alleviate the sparsity in emission probabilities. Our model outperforms other hidden Markov model based PoS tagging models for small training datasets in Turkish. We obtain an accuracy of 94.1% in morpheme tagging and 89.2% in PoS tagging on a 5K training dataset.
Morpheme tags have been used for morphological PoS disambiguation in Turkish. @cite_5 use conditional random fields for disambiguating PoS tags in Turkish by utilizing the morphological tags. They introduce some dependencies between inflectional groups of morphemes in order to simplify the transition probabilities. @cite_17 apply the perceptron algorithm for morphological disambiguation. Hakkani-Tür @cite_10 formulates a trigram HMM based on inflectional groups in order to disambiguate morphological parses of a given word. The results show that using the dependencies between inflectional groups of adjacent words improves PoS tagging accuracy. Many of these models select a complete morphological analysis for each word rather than providing a single PoS tag.
{ "cite_N": [ "@cite_5", "@cite_10", "@cite_17" ], "mid": [ "2293048196", "274041255", "1915022094" ], "abstract": [ "This paper presents the results of main part-of-speech tagging of Turkish sentences using Conditional Random Fields (CRFs). Although CRFs are applied to many different languages for part-of-speech (POS) tagging, Turkish poses interesting challenges to be modeled with them. The challenges include issues related to the statistical model of the problem as well as issues related to computational complexity and scaling. In this paper, we propose a novel model for main-POS tagging in Turkish. Furthermore, we propose some approaches to reduce the computational complexity and allow better scaling characteristics or improve the performance without increased complexity. These approaches are discussed with respect to their advantages and disadvantages. We show that the best approach is competitive with the current state of the art in accuracy and also in training and test durations. The good results obtained imply a good first step towards full morphological disambiguation.", "We present statistical models for morphological disambiguation in agglutinative languages, with a specific application to Turkish. Turkish presents an interesting problem for statistical models as the potential tag set size is very large because of the productive derivational morphology. We propose to handle this by breaking up the morphosyntactic tags into inflectional groups, each of which contains the inflectional features for each (intermediate) derived form. Our statistical models score the probability of each morphosyntactic tag by considering statistics over the individual inflectional groups and surface roots in trigram models. Among the four models that we have developed and tested, the simplest model ignoring the local morphotactics within words performs the best. 
Our best trigram model performs with 93.95% accuracy on our test data getting all the morphosyntactic and semantic features correct. If we are just interested in syntactically relevant features and ignore a very small set of semantic features, then the accuracy increases to 95.07%.", "This paper describes the application of the perceptron algorithm to the morphological disambiguation of Turkish text. Turkish has a productive derivational morphology. Due to the ambiguity caused by complex morphology, a word may have multiple morphological parses, each with a different stem or sequence of morphemes. The methodology employed is based on ranking with perceptron algorithm which has been successful in some NLP tasks in English. We use a baseline statistical trigram-based model of a previous work to enumerate an n-best list of candidate morphological parse sequences for each sentence. We then apply the perceptron algorithm to rerank the n-best list using a set of 23 features. The perceptron trained to do morphological disambiguation improves the accuracy of the baseline model from 93.61% to 96.80%. When we train the perceptron as a POS tagger, the accuracy is 98.27%. Turkish morphological disambiguation and POS tagging results that we obtained is the best reported so far." ] }
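The HMM-based disambiguation described in this record picks, for each sentence, the tag sequence maximizing the product of transition and emission probabilities, decoded with the Viterbi algorithm. The cited Turkish work uses trigram models over inflectional groups; the sketch below is a minimal first-order (bigram) version of the same decoding idea, and the states, probability tables, and two-word toy sentence are invented purely for illustration:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state (tag) sequence for obs under a bigram HMM."""
    # Each cell holds (probability of best path ending here, that path).
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 0.0), [s]) for s in states}]
    for o in obs[1:]:
        layer = {}
        for s in states:
            p, path = max(
                (V[-1][r][0] * trans_p[r][s] * emit_p[s].get(o, 0.0),
                 V[-1][r][1] + [s])
                for r in states
            )
            layer[s] = (p, path)
        V.append(layer)
    return max(V[-1].values())[1]

# Toy two-tag model (all numbers invented for illustration).
states = ["Noun", "Verb"]
start_p = {"Noun": 0.6, "Verb": 0.4}
trans_p = {"Noun": {"Noun": 0.3, "Verb": 0.7},
           "Verb": {"Noun": 0.8, "Verb": 0.2}}
emit_p = {"Noun": {"fish": 0.6, "can": 0.4},
          "Verb": {"fish": 0.3, "can": 0.7}}

tags = viterbi(["can", "fish"], states, start_p, trans_p, emit_p)
```

A trigram model conditions each transition on the two previous tags instead of one, which is what lets the cited work exploit dependencies between inflectional groups of adjacent words.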
1703.03111
2949059537
We study the cost sharing problem for cooperative games in situations where the cost function @math is not available via oracle queries, but must instead be derived from data, represented as tuples @math , for different subsets @math of players. We formalize this approach, which we call statistical cost sharing, and consider the computation of the core and the Shapley value, when the tuples are drawn from some distribution @math . Previous work by in this setting showed how to compute cost shares that satisfy the core property with high probability for limited classes of functions. We expand on their work and give an algorithm that computes such cost shares for any function with a non-empty core. We complement these results by proving an inapproximability lower bound for a weaker relaxation. We then turn our attention to the Shapley value. We first show that when cost functions come from the family of submodular functions with bounded curvature, @math , the Shapley value can be approximated from samples up to a @math factor, and that the bound is tight. We then define statistical analogues of the Shapley axioms, and derive a notion of statistical Shapley value. We show that these can always be approximated arbitrarily well for general functions over any distribution @math .
There are two avenues of work which we build upon. The first is the notion of cost sharing in cooperative games, first introduced by . We consider the Shapley value and the core, two popular solution concepts for cost-sharing in cooperative games. The Shapley value @cite_18 is studied in algorithmic mechanism design @cite_7 @cite_27 @cite_23 @cite_37 . For applications of the Shapley value, see the surveys by and . A naive computation of the Shapley value of a cooperative game would take exponential time; recently, methods for efficiently approximating the Shapley value have been suggested @cite_0 @cite_4 @cite_2 @cite_10 for some restricted settings.
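The sampling-based Shapley approximations mentioned above typically average a player's marginal cost contribution over random orderings of the players. A minimal sketch of that permutation-sampling estimator follows; the player names and cost function are illustrative, and an additive cost is chosen deliberately because its exact Shapley values are known (each player's own cost), which makes the sketch easy to check:

```python
import random

def shapley_estimate(players, cost, samples=200, seed=0):
    """Monte Carlo Shapley values via random permutations.

    phi_i averages the marginal cost of adding player i over random
    orderings of the player set; the estimate converges to the exact
    Shapley value as samples grows.
    """
    rng = random.Random(seed)
    phi = {i: 0.0 for i in players}
    for _ in range(samples):
        order = players[:]
        rng.shuffle(order)
        coalition, prev = set(), 0.0
        for i in order:
            coalition.add(i)
            cur = cost(coalition)
            phi[i] += cur - prev  # marginal contribution in this ordering
            prev = cur
    return {i: v / samples for i, v in phi.items()}

# Additive cost C(S) = sum of individual costs, so phi_i = c_i exactly.
c = {"a": 1.0, "b": 2.0, "c": 3.0}
est = shapley_estimate(list(c), lambda S: sum(c[i] for i in S))
```

For non-additive (e.g. submodular) cost functions the marginals vary across orderings, and the number of samples controls the estimation error, which is where the restricted-setting guarantees in the cited work come in.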
{ "cite_N": [ "@cite_18", "@cite_37", "@cite_4", "@cite_7", "@cite_0", "@cite_27", "@cite_23", "@cite_2", "@cite_10" ], "mid": [ "1562353621", "2038644905", "2436401953", "2095094403", "2103691541", "", "2091289602", "2127813674", "" ], "abstract": [ "", "Each one of n users consumes an idiosyncratic commodity produced in indivisible units. The n commodities are jointly produced by a central facility and total cost must be shared by the users. A \"sequential stand alone mechanism\" shares costs incrementally according to a fixed ordering of the users: the first user always pays stand alone cost, the second pays the stand alone cost of the first two users minus that of the first and so on. If the second derivatives of costs are of a constant sign, such a method yields a unique strong equilibrium at every profile of convex preferences in the game where each user chooses his own demand. This equilibrium, in turn, defines a coalition strategy-proof social choice function. Under decreasing marginal costs and submodular costs, the sequential stand alone mechanisms are almost characterized by these properties; the only exception is the binary demand case (each agent consumes zero or one unit) where a rich family of cost sharing methods (the Shapley value among them) yields a coalition strategy-proof equilibrium selection. Under increasing marginal costs and supermodular costs, coalition strategy-proofness characterizes a richer family of cost sharing methods: they give out one unit at a time while charging marginal costs, with the users taking turns according to a sequence fixed in advance. These methods contain serial cost sharing as a limit case.", "The Shapley value is a key solution concept for coalitional games in general and voting games in particular. Its main advantage is that it provides a unique and fair solution, but its main drawback is the complexity of computing it (e.g., for voting games this complexity is #p-complete). 
However, given the importance of the Shapley value and voting games, a number of approximation methods have been developed to overcome this complexity. Among these, Owen's multi-linear extension method is the most time efficient, being linear in the number of players. Now, in addition to speed, the other key criterion for an approximation algorithm is its approximation error. On this dimension, the multi-linear extension method is less impressive. Against this background, this paper presents a new approximation algorithm, based on randomization, for computing the Shapley value of voting games. This method has time complexity linear in the number of players, but has an approximation error that is, on average, lower than Owen's. In addition to this comparative study, we empirically evaluate the error for our method and show how the different parameters of the voting game affect it. Specifically, we show the following effects. First, as the number of players in a voting game increases, the average percentage error decreases. Second, as the quota increases, the average percentage error decreases. Third, the error is different for players with different weights; players with weight closer to the mean weight have a lower error than those with weight further away. We then extend our approximation to the more general k-majority voting games and show that, for n players, the method has time complexity O(k^2n) and the upper bound on its approximation error is O(k^2 n).", "Network design is a fundamental problem for which it is important to understand the effects of strategic behavior. Given a collection of self-interested agents who want to form a network connecting certain endpoints, the set of stable solutions—the Nash equilibria—may look quite different from the centrally enforced optimum. We study the quality of the best Nash equilibrium, and refer to the ratio of its cost to the optimum network cost as the price of stability. 
The best Nash equilibrium solution has a natural meaning of stability in this context—it is the optimal solution that can be proposed from which no user will defect. We consider the price of stability for network design with respect to one of the most widely studied protocols for network cost allocation, in which the cost of each edge is divided equally between users whose connections make use of it; this fair-division scheme can be derived from the Shapley value and has a number of basic economic motivations. We show that the price of stability for network design with respect to this fair cost allocation is @math , where @math is the number of users, and that a good Nash equilibrium can be achieved via best-response dynamics in which users iteratively defect from a starting solution. This establishes that the fair cost allocation protocol is in fact a useful mechanism for inducing strategic behavior to form near-optimal equilibria. We discuss connections to the class of potential games defined by Monderer and Shapley, and extend our results to cases in which users are seeking to balance network design costs with latencies in the constructed network, with stronger results when the network has only delays and no construction costs. We also present bounds on the convergence time of best-response dynamics, and discuss extensions to a weighted game.", "Many multiagent domains where cooperation among agents is crucial to achieving a common goal can be modeled as coalitional games. However, in many of these domains, agents are unequal in their power to affect the outcome of the game. Prior research on weighted voting games has explored power indices, which reflect how much \"real power\" a voter has. Although primarily used for voting games, these indices can be applied to any simple coalitional game. Computing these indices is known to be computationally hard in various domains, so one must sometimes resort to approximate methods for calculating them. 
We suggest and analyze randomized methods to approximate power indices such as the Banzhaf power index and the Shapley-Shubik power index. Our approximation algorithms do not depend on a specific representation of the game, so they can be used in any simple coalitional game. Our methods are based on testing the game's value for several sample coalitions. We show that no approximation algorithm can do much better for general coalitional games, by providing lower bounds for both deterministic and randomized algorithms for calculating power indices. We also provide empirical results regarding our method, and show that it typically achieves much better accuracy and confidence than those required.", "", "", "Coalitional games allow subsets (coalitions) of players to cooperate to receive a collective payoff. This payoff is then distributed “fairly” among the members of that coalition according to some division scheme. Various solution concepts have been proposed as reasonable schemes for generating fair allocations. The Shapley value is one classic solution concept: player i’s share is precisely equal to i’s expected marginal contribution if the players join the coalition one at a time, in a uniformly random order. In this paper, we consider the class of supermodular games (sometimes called convex games), and give a fully polynomial-time randomized approximation scheme (FPRAS) to compute the Shapley value to within a (1 ± ε) factor in monotone supermodular games. We show that this result is tight in several senses: no deterministic algorithm can approximate the Shapley value as well, no randomized algorithm can do better, and both monotonicity and supermodularity are required for the existence of an efficient (1 ± ε)-approximation algorithm. We also argue that, relative to supermodularity, monotonicity is a mild assumption, and we discuss how to transform supermodular games to be monotonic.", "" ] }
1703.03111
2949059537
We study the cost sharing problem for cooperative games in situations where the cost function @math is not available via oracle queries, but must instead be derived from data, represented as tuples @math , for different subsets @math of players. We formalize this approach, which we call statistical cost sharing, and consider the computation of the core and the Shapley value, when the tuples are drawn from some distribution @math . Previous work by in this setting showed how to compute cost shares that satisfy the core property with high probability for limited classes of functions. We expand on their work and give an algorithm that computes such cost shares for any function with a non-empty core. We complement these results by proving an inapproximability lower bound for a weaker relaxation. We then turn our attention to the Shapley value. We first show that when cost functions come from the family of submodular functions with bounded curvature, @math , the Shapley value can be approximated from samples up to a @math factor, and that the bound is tight. We then define statistical analogues of the Shapley axioms, and derive a notion of statistical Shapley value. We show that these can always be approximated arbitrarily well for general functions over any distribution @math .
The core, introduced by , is another well-studied solution concept for cooperative games. and characterized when the core is non-empty. The core has been studied in the context of multiple combinatorial games, such as facility location @cite_34 and maximum flow @cite_36 . In cases with no solutions in the core, or when it is computationally hard to find one, the balance property has been relaxed to hold only approximately @cite_12 @cite_39 . In applications where players submit bids, cross-monotone cost sharing, a concept stronger than the core that satisfies the group strategyproofness property, has attracted a lot of attention @cite_26 @cite_3 @cite_21 @cite_14 . We note that these applications are sufficiently different from the ones we study in this work.
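For small player sets, the core condition is easy to state programmatically. The brute-force check below (an illustrative sketch with hypothetical names; realistic instances would use an LP over the exponentially many coalition constraints) verifies that an allocation is budget-balanced and that no coalition pays more than its stand-alone cost:

```python
from itertools import chain, combinations

def coalitions(players):
    """All non-empty subsets of the player set."""
    return chain.from_iterable(combinations(players, r)
                               for r in range(1, len(players) + 1))

def in_core(allocation, cost, tol=1e-9):
    """Core check for a cost game: shares sum to the grand-coalition cost,
    and every coalition's total share is at most its stand-alone cost."""
    players = list(allocation)
    if abs(sum(allocation.values()) - cost(frozenset(players))) > tol:
        return False  # not budget-balanced
    return all(sum(allocation[p] for p in S) <= cost(frozenset(S)) + tol
               for S in coalitions(players))
```

Relaxing the coalition inequality by a multiplicative factor gives exactly the approximate-balance notion mentioned above.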
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_36", "@cite_21", "@cite_3", "@cite_39", "@cite_34", "@cite_12" ], "mid": [ "2621312118", "2107390991", "2551080748", "1993934254", "2102563009", "", "", "2154290910" ], "abstract": [ "A cost sharing scheme is a set of rules defining how to share the cost of a service (often computed by solving a combinatorial optimization problem) amongst serviced customers. A cost sharing scheme is cross-monotonic if it satisfies the property that everyone is better off when the set of people who receive the service expands. Cross-monotonic cost sharing schemes are used to define group-strategyproof mechanisms. In this paper, we investigate the limitations imposed by the cross-monotonicity property on cost-sharing schemes for several combinatorial optimization games including edge cover, vertex cover, set cover, metric facility location, maximum flow, arborescence packing, and maximum matching. We develop a novel technique based on the probabilistic method for proving upper bounds on the budget-balance factor of cross-monotonic cost sharing schemes, deriving tight or nearly-tight bounds for each game that we study. For the set cover game, which generalizes many of the above games, we show that no cross-monotonic cost sharing scheme can recover more than an O(1/n) fraction of the total cost, and thus we cannot hope to use a set-cover cost sharing scheme as a black box for the cost sharing schemes of covering games. For the vertex cover game, we show no cross-monotonic cost sharing scheme can recover more than an O(n^{-1/3}) fraction, demonstrating that cross-monotonicity is strictly harder to achieve than the core property (vertex cover games have a solution in the core that is 1/2-budget balanced). For the facility location game, we show that there is no cross-monotonic cost sharing scheme that recovers more than a third of the total cost. 
This result together with a recent 1/3-budget-balanced cross-monotonic cost sharing scheme of Pal and Tardos [16] closes the gap for the facility location game. Finally, we study the implications of our results on the existence of group-strategyproof mechanisms. We observe that the definition of group-strategyproofness does not exclude trivial mechanisms that recover all the cost. However, with extra assumptions, we show that group-strategyproof mechanisms give rise to cross-monotonic cost sharing schemes and therefore our upper bounds hold.", "We develop a general method for turning a primal-dual algorithm into a group strategy proof cost-sharing mechanism. We use our method to design approximately budget balanced cost sharing mechanisms for two NP-complete problems: metric facility location, and single source rent-or-buy network design. Both mechanisms are competitive, group strategyproof and recover a constant fraction of the cost. For the facility location game our cost-sharing method recovers 1/3rd of the total cost, while in the network design game the cost shares pay for a 1/15 fraction of the cost of the solution.", "We consider the problem of optimization from samples of monotone submodular functions with bounded curvature. In numerous applications, the function optimized is not known a priori, but instead learned from data. What are the guarantees we have when optimizing functions from sampled data? In this paper we show that for any monotone submodular function with curvature c there is a (1 - c)/(1 + c - c^2) approximation algorithm for maximization under cardinality constraints when polynomially-many samples are drawn from the uniform distribution over feasible sets. Moreover, we show that this algorithm is optimal. That is, for any c < 1, there exists a submodular function with curvature c for which no algorithm can achieve a better approximation. 
The curvature assumption is crucial as for general monotone submodular functions no algorithm can obtain a constant-factor approximation for maximization under a cardinality constraint when observing polynomially-many samples drawn from any distribution over feasible sets, even when the function is statistically learnable.", "A service is produced for a set of agents. The service is binary, each agent either receives service or not, and the total cost of service is a submodular function of the set receiving service. We investigate strategyproof mechanisms that elicit individual willingness to pay, decide who is served, and then share the cost among them. If such a mechanism is budget balanced (covers cost exactly), it cannot be efficient (serve the surplus maximizing set of users) and vice-versa. We characterize the rich family of budget balanced and group strategyproof mechanisms and find that the mechanism associated with the Shapley value cost sharing formula is characterized by the property that its worst welfare loss is minimal. When we require efficiency rather than budget balance – the more common route in the literature – we find that there is a single Clarke-Groves mechanism that satisfies certain reasonable conditions: we call this the marginal cost pricing mechanism. We compare the size of the marginal cost pricing mechanism's worst budget surplus with the worst welfare loss of the Shapley value mechanism.", "Perhaps the strongest notion of truth-revealing in a cost sharing method is group strategyproofness. However, matters are not so clear-cut on fairness, and many different, sometimes even conflicting, notions of fairness have been proposed which have relevance in different situations. 
We present a large class of group strategyproof cost sharing methods, for submodular cost functions, satisfying a wide range of fairness criteria, thereby allowing the service provider to choose a method that best satisfies the notion of fairness that is most relevant to her application. Our class includes the Dutta-Ray egalitarian method as a special case. It also includes a new cost sharing method, which we call the opportunity egalitarian method.", "", "", "Strategyproof cost-sharing mechanisms, lying in the core, that recover a 1/α fraction of the cost, are presented for the set cover and facility location games: α = O(log n) for the former and 1.861 for the latter." ] }
1703.03073
2595614461
Deep convolutional neural network (CNN) inference requires a significant amount of memory and computation, which limits its deployment on embedded devices. To alleviate these problems to some extent, prior research utilizes low-precision fixed-point numbers to represent the CNN weights and activations. However, the minimum required data precision of fixed-point weights varies across different networks and also across different layers of the same network. In this work, we propose using floating-point numbers for representing the weights and fixed-point numbers for representing the activations. We show that using a floating-point representation for weights is more efficient than a fixed-point representation at the same bit-width and demonstrate it on popular large-scale CNNs such as AlexNet, SqueezeNet, GoogLeNet and VGG-16. We also show that such a representation scheme enables a compact hardware multiply-and-accumulate (MAC) unit design. Experimental results show that the proposed scheme reduces the weight storage by up to 36% and the power consumption of the hardware multiplier by up to 50%.
The precision of neural network weights and activations plays a major role in determining the efficiency of CNN hardware or software implementations. A lot of research focuses on replacing the standard 32-bit floating-point data with reduced-precision data for CNN inference. For example, @cite_1 propose representing both CNN weights and activations using minifloat, i.e., a floating-point number with a shorter bit-width. Since fixed-point arithmetic is more hardware-efficient than floating-point arithmetic, most research focuses on fixed-point quantization. @cite_18 present the impacts of different fixed-point rounding schemes on the accuracy. @cite_21 demonstrate that the minimum required data precision not only varies across different networks, but also across different layers of the same network. @cite_15 present a fixed-point quantization methodology to identify the optimal data precision for all layers of a network. @cite_24 present a framework, Ristretto, for fixed-point quantization and re-training of CNNs based on Caffe @cite_9 .
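To make the fixed-point precision trade-off concrete, the sketch below (an illustrative helper, not code from any cited framework) rounds a weight onto a signed fixed-point grid; the per-layer choice of fractional bits is exactly the knob the cited methodologies tune:

```python
def to_fixed_point(x, total_bits=8, frac_bits=4):
    """Round-to-nearest quantization onto a signed fixed-point grid with
    `frac_bits` fractional bits, saturating at the representable range."""
    scale = 1 << frac_bits
    qmax = (1 << (total_bits - 1)) - 1   # e.g. 127 for 8-bit
    qmin = -(1 << (total_bits - 1))      # e.g. -128
    q = max(qmin, min(qmax, round(x * scale)))
    return q / scale
```

More fractional bits shrink the quantization step but also shrink the representable range, which is why the optimal split differs from layer to layer.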
{ "cite_N": [ "@cite_18", "@cite_9", "@cite_21", "@cite_1", "@cite_24", "@cite_15" ], "mid": [ "1841592590", "2950094539", "2246760854", "2395566064", "2337344472", "" ], "abstract": [ "Training of large-scale deep neural networks is often constrained by the available computational resources. We study the effect of limited precision data representation and computation on neural network training. Within the context of low-precision fixed-point computations, we observe the rounding scheme to play a crucial role in determining the network's behavior during training. Our results show that deep networks can be trained using only 16-bit wide fixed-point number representation when using stochastic rounding, and incur little to no degradation in the classification accuracy. We also demonstrate an energy-efficient hardware accelerator that implements low-precision fixed-point arithmetic with stochastic rounding.", "Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures. Caffe fits industry and internet-scale media needs by CUDA GPU computation, processing over 40 million images a day on a single K40 or Titan GPU ( @math 2.5 ms per image). By separating model representation from actual implementation, Caffe allows experimentation and seamless switching among platforms for ease of development and deployment from prototyping machines to cloud environments. Caffe is maintained and developed by the Berkeley Vision and Learning Center (BVLC) with the help of an active community of contributors on GitHub. 
It powers ongoing research projects, large-scale industrial applications, and startup prototypes in vision, speech, and multimedia.", "This work investigates how using reduced precision data in Convolutional Neural Networks (CNNs) affects network accuracy during classification. More specifically, this study considers networks where each layer may use different precision data. Our key result is the observation that the tolerance of CNNs to reduced precision data not only varies across networks, a well established observation, but also within networks. Tuning precision per layer is appealing as it could enable energy and performance improvements. In this paper we study how error tolerance across layers varies and propose a method for finding a low precision configuration for a network while maintaining high accuracy. A diverse set of CNNs is analyzed showing that compared to a conventional implementation using a 32-bit floating-point representation for all layers, and with less than 1% loss in relative accuracy, the data footprint required by these networks can be reduced by an average of 74% and up to 92%.", "Convolutional neural networks (CNN) have achieved major breakthroughs in recent years. Their performance in computer vision has matched and in some areas even surpassed human capabilities. Deep neural networks can capture complex non-linear features; however this ability comes at the cost of high computational and memory requirements. State-of-art networks require billions of arithmetic operations and millions of parameters. To enable embedded devices such as smartphones, Google glasses and monitoring cameras with the astonishing power of deep learning, dedicated hardware accelerators can be used to decrease both execution time and power consumption. In applications where fast connection to the cloud is not guaranteed or where privacy is important, computation needs to be done locally. 
Many hardware accelerators for deep neural networks have been proposed recently. A first important step of accelerator design is hardware-oriented approximation of deep networks, which enables energy-efficient inference. We present Ristretto, a fast and automated framework for CNN approximation. Ristretto simulates the hardware arithmetic of a custom hardware accelerator. The framework reduces the bit-width of network parameters and outputs of resource-intense layers, which reduces the chip area for multiplication units significantly. Alternatively, Ristretto can remove the need for multipliers altogether, resulting in an adder-only arithmetic. The tool fine-tunes trimmed networks to achieve high classification accuracy. Since training of deep neural networks can be time-consuming, Ristretto uses highly optimized routines which run on the GPU. This enables fast compression of any given network. Given a maximum tolerance of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit. The code for Ristretto is available.", "High computational complexity hinders the widespread usage of Convolutional Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are arguably the most promising approach for reducing both execution time and power consumption. One of the most important steps in accelerator development is hardware-oriented model approximation. In this paper we present Ristretto, a model approximation framework that analyzes a given CNN with respect to numerical resolution used in representing weights and outputs of convolutional and fully connected layers. Ristretto can condense models by using fixed point arithmetic and representation instead of floating point. Moreover, Ristretto fine-tunes the resulting fixed point network. Given a maximum error tolerance of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit. The code for Ristretto is available.", "" ] }
1703.03073
2595614461
Deep convolutional neural network (CNN) inference requires a significant amount of memory and computation, which limits its deployment on embedded devices. To alleviate these problems to some extent, prior research utilizes low-precision fixed-point numbers to represent the CNN weights and activations. However, the minimum required data precision of fixed-point weights varies across different networks and also across different layers of the same network. In this work, we propose using floating-point numbers for representing the weights and fixed-point numbers for representing the activations. We show that using a floating-point representation for weights is more efficient than a fixed-point representation at the same bit-width and demonstrate it on popular large-scale CNNs such as AlexNet, SqueezeNet, GoogLeNet and VGG-16. We also show that such a representation scheme enables a compact hardware multiply-and-accumulate (MAC) unit design. Experimental results show that the proposed scheme reduces the weight storage by up to 36% and the power consumption of the hardware multiplier by up to 50%.
Researchers have also explored training neural networks directly with fixed-point weights. In @cite_16 , the author presents a hardware architecture for on-chip learning with fixed-point operations. More recently, in @cite_19 , the authors train neural networks with floating-point, fixed-point and dynamic fixed-point formats and demonstrate that fixed-point weights are sufficient for training. @cite_18 demonstrate network training with 16-bit fixed-point weights using a stochastic rounding scheme.
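The stochastic rounding scheme used in that line of work rounds up or down with probability proportional to the fractional residue, so the quantization is unbiased in expectation — which is what keeps gradient updates from being systematically lost during low-precision training. A minimal sketch (hypothetical helper name):

```python
import math
import random

def stochastic_round(x, frac_bits=8, rng=None):
    """Quantize x to `frac_bits` fractional bits, rounding up with probability
    equal to the fractional remainder -- unbiased: E[q] == x."""
    rng = rng or random.Random(0)
    scaled = x * (1 << frac_bits)
    lo = math.floor(scaled)
    p_up = scaled - lo               # fractional remainder in [0, 1)
    q = lo + (1 if rng.random() < p_up else 0)
    return q / (1 << frac_bits)
```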
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_16" ], "mid": [ "1825672851", "1841592590", "" ], "abstract": [ "Multipliers are the most space and power-hungry arithmetic operators of the digital implementation of deep neural networks. We train a set of state-of-the-art neural networks (Maxout networks) on three benchmark datasets: MNIST, CIFAR-10 and SVHN. They are trained with three distinct formats: floating point, fixed point and dynamic fixed point. For each of those datasets and for each of those formats, we assess the impact of the precision of the multiplications on the final error after training. We find that very low precision is sufficient not just for running trained networks but also for training them. For example, it is possible to train Maxout networks with 10 bits multiplications.", "Training of large-scale deep neural networks is often constrained by the available computational resources. We study the effect of limited precision data representation and computation on neural network training. Within the context of low-precision fixed-point computations, we observe the rounding scheme to play a crucial role in determining the network's behavior during training. Our results show that deep networks can be trained using only 16-bit wide fixed-point number representation when using stochastic rounding, and incur little to no degradation in the classification accuracy. We also demonstrate an energy-efficient hardware accelerator that implements low-precision fixed-point arithmetic with stochastic rounding.", "" ] }
1703.03073
2595614461
Deep convolutional neural network (CNN) inference requires a significant amount of memory and computation, which limits its deployment on embedded devices. To alleviate these problems to some extent, prior research utilizes low-precision fixed-point numbers to represent the CNN weights and activations. However, the minimum required data precision of fixed-point weights varies across different networks and also across different layers of the same network. In this work, we propose using floating-point numbers for representing the weights and fixed-point numbers for representing the activations. We show that using a floating-point representation for weights is more efficient than a fixed-point representation at the same bit-width and demonstrate it on popular large-scale CNNs such as AlexNet, SqueezeNet, GoogLeNet and VGG-16. We also show that such a representation scheme enables a compact hardware multiply-and-accumulate (MAC) unit design. Experimental results show that the proposed scheme reduces the weight storage by up to 36% and the power consumption of the hardware multiplier by up to 50%.
Many other approaches for memory reduction of neural networks have been explored. @cite_25 propose a combination of network pruning, weight quantization during training and compression based on Huffman coding to reduce the VGG-16 network size by 49X. In @cite_2 , the authors propose to store both 8-bit quantized floating-point weights and 32-bit full-precision weights. At runtime, quantized weights or full-precision weights are randomly fetched in order to reduce memory bandwidth. The continuous research effort to reduce the data precision has led to many interesting demonstrations with 2-bit weights @cite_13 and even binary weights/activations @cite_20 @cite_8 . @cite_3 demonstrate AlexNet training with 1-bit weights, 2-bit activations and 6-bit gradients. These techniques require additional re-training and can result in sub-optimal accuracies.
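For the binary-weight networks mentioned above, a layer's real-valued weights W are commonly approximated as α·sign(W), with α = mean(|W|) being the scale that minimizes the L2 reconstruction error for B = sign(W). A small sketch of that idea (illustrative only; sign(0) is mapped to +1 by convention here):

```python
def binarize(weights):
    """Approximate a weight vector by alpha * sign(w), where
    alpha = mean absolute weight is the optimal L2 scale for B = sign(W)."""
    alpha = sum(abs(w) for w in weights) / len(weights)
    signs = [1.0 if w >= 0 else -1.0 for w in weights]
    return alpha, signs
```

Storing one sign bit per weight plus a single scale per filter is what yields the large memory savings reported for these networks.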
{ "cite_N": [ "@cite_8", "@cite_3", "@cite_2", "@cite_13", "@cite_25", "@cite_20" ], "mid": [ "2951978180", "2469490737", "", "2949390274", "2119144962", "2260663238" ], "abstract": [ "We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32x memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58x faster convolutional operations and 32x memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is only 2.9% less than the full-precision AlexNet (in top-1 measure). We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy.", "We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during the backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural networks on this hardware. 
Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve prediction accuracy comparable to 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1% top-1 accuracy on the ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.", "", "We explore techniques to significantly improve the compute efficiency and performance of Deep Convolution Networks without impacting their accuracy. To improve the compute efficiency, we focus on achieving high accuracy with extremely low-precision (2-bit) weight networks, and to accelerate the execution time, we aggressively skip operations on zero-values. We achieve the highest reported accuracy of 76.6% Top-1 / 93% Top-5 on the Imagenet object classification challenge with a low-precision network (github release of the source code coming soon), while reducing the compute requirement by 3x compared to a full-precision network that achieves similar accuracy. Furthermore, to fully exploit the benefits of our low-precision networks, we build a deep learning accelerator core, dLAC, that can achieve up to 1 TFLOP/mm^2 equivalent for single-precision floating-point operations (2 TFLOP/mm^2 for half-precision).", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce "deep compression", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing; finally, we apply Huffman coding. 
After the first two steps we retrain the network to fine-tune the remaining connections and the quantized centroids. Pruning reduces the number of connections by 9x to 13x; quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, the compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.", "" ] }
1703.03073
2595614461
Deep convolutional neural network (CNN) inference requires significant amount of memory and computation, which limits its deployment on embedded devices. To alleviate these problems to some extent, prior research utilize low precision fixed-point numbers to represent the CNN weights and activations. However, the minimum required data precision of fixed-point weights varies across different networks and also across different layers of the same network. In this work, we propose using floating-point numbers for representing the weights and fixed-point numbers for representing the activations. We show that using floating-point representation for weights is more efficient than fixed-point representation for the same bit-width and demonstrate it on popular large-scale CNNs such as AlexNet, SqueezeNet, GoogLeNet and VGG-16. We also show that such a representation scheme enables compact hardware multiply-and-accumulate (MAC) unit design. Experimental results show that the proposed scheme reduces the weight storage by up to 36% and power consumption of the hardware multiplier by up to 50%.
In contrast to prior works, this work proposes quantization of a pre-trained neural network's weights into floating-point numbers and implementation of activations in fixed-point format, both for memory reduction and hardware efficiency. It further shows that floating-point representation of weights achieves a better range-accuracy trade-off than a fixed-point representation with the same number of bits, and we empirically demonstrate this on state-of-the-art CNNs such as AlexNet @cite_0 , VGG-16 @cite_10 , GoogLeNet @cite_5 and SqueezeNet @cite_14 . Although this work is based on quantization only, without the need for retraining the network, retraining may also be applied to reclaim part of the accuracy loss due to quantization.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_14", "@cite_10" ], "mid": [ "", "2950179405", "2279098554", "1686810756" ], "abstract": [ "", "We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. 
Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision." ] }
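The floating-point weight quantization discussed in this record can be sketched with a toy sign/exponent/mantissa "minifloat" rounder. The field widths and the no-subnormal simplification below are my assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

def quantize_minifloat(x, exp_bits=4, man_bits=3):
    """Round values to the nearest representable number in a toy
    sign/exponent/mantissa format (no subnormals; illustrative only)."""
    x = np.atleast_1d(np.asarray(x, dtype=np.float64))
    sign = np.sign(x)
    mag = np.abs(x)
    out = np.zeros_like(mag)
    nz = mag > 0
    # Exponent of each magnitude, clipped to a symmetric representable range.
    bias = 2 ** (exp_bits - 1) - 1
    e = np.clip(np.floor(np.log2(mag[nz])), -bias, bias)
    scale = 2.0 ** e
    frac = mag[nz] / scale                    # in [1, 2) when not clipped
    # Round the mantissa to man_bits fractional bits.
    frac = np.round(frac * 2 ** man_bits) / 2 ** man_bits
    out[nz] = frac * scale
    return sign * out
```

The float format's per-value exponent is what gives the wider dynamic range than a fixed-point code of the same bit-width: powers of two pass through exactly, while values in between are rounded to the nearest mantissa step at their own scale.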
1703.03349
2950435666
This paper deals with the problem of detecting fallen people lying on the floor by means of a mobile robot equipped with a 3D depth sensor. In the proposed algorithm, inspired by semantic segmentation techniques, the 3D scene is over-segmented into small patches. Fallen people are then detected by means of two SVM classifiers: the first one labels each patch, while the second one captures the spatial relations between them. This novel approach showed to be robust and fast. Indeed, thanks to the use of small patches, fallen people in real cluttered scenes with objects side by side are correctly detected. Moreover, the algorithm can be executed on a mobile robot fitted with a standard laptop making it possible to exploit the 2D environmental map built by the robot and the multiple points of view obtained during the robot navigation. Additionally, this algorithm is robust to illumination changes since it does not rely on RGB data but on depth data. All the methods have been thoroughly validated on the IASLAB-RGBD Fallen Person Dataset, which is published online as a further contribution. It consists of several static and dynamic sequences with 15 different people and 2 different environments.
There also exist more specific approaches addressing the detection of falls. These include wearable devices, whose popularity is linked to the spread of open-source platforms that are small, powerful and connectable to low-cost sensors @cite_18 . In most cases, such sensors include accelerometers @cite_12 @cite_32 @cite_10 . These technologies suffer from the difficulty of correctly distinguishing falls from common actions like sitting or lying down; furthermore, the elderly easily forget to wear them. Other approaches specifically addressing falls require the installation of environmental devices like microphones @cite_0 , cameras for person tracking @cite_2 @cite_21 @cite_25 , or infrared or vibration sensors @cite_24 . However, these approaches are less effective and, being invasive, less well accepted.
{ "cite_N": [ "@cite_18", "@cite_21", "@cite_32", "@cite_0", "@cite_24", "@cite_2", "@cite_10", "@cite_25", "@cite_12" ], "mid": [ "2539105851", "2030972883", "", "2130103019", "2900501356", "2000184637", "", "1970411820", "" ], "abstract": [ "As we grow old, our desire for independence does not diminish; yet our health increasingly needs to be monitored. Injuries such as falling can be a serious problem for the elderly. If a person falls and is not able to get assistance within an hour, casualties arising from that fall can result in fatalities as early as 6 months later [1]. It would seem then that a choice between safety and independence must be made. Fortunately, as health care technology advances, simple devices can be made to detect or even predict falls in the elderly, which could easily save lives without too much intrusion on their independence. Much research has been done on the topic of fall detection and fall prediction. Some have attempted to detect falls using a variety of sensors such as: cameras, accelerometers, gyroscopes, microphones, or a combination of the like. This paper is aimed at reporting which existing methods have been found effective by others, as well as documenting the findings of our own experiments. The combination of which will assist in the progression towards a safe, unobtrusive monitoring system for independent seniors.", "In-house video surveillance can represent an excellent support for people with some difficulties (e.g. elderly or disabled people) living alone and with a limited autonomy. New hardware technologies and in particular digital cameras are now affordable and they have recently gained credit as tools for (semi-) automatically assuring people's safety. In this paper a multi-camera vision system for detecting and tracking people and recognizing dangerous behaviours and events such as a fall is presented. In such a situation a suitable alarm can be sent, e.g. by means of an SMS. 
A novel technique of warping people's silhouette is proposed to exchange visual information between partially overlapped cameras whenever a camera handover occurs. Finally, a multi-client and multi-threaded transcoding video server delivers live video streams to operators/remote users in order to check the validity of a received alarm. Semantic and event-based transcoding algorithms are used to optimize the bandwidth usage. A two-room setup has been created in our laboratory to test the performance of the overall system and some of the results obtained are reported.", "", "More than one third of about 38 million adults 65 and older fall each year in the United States. To address the above problem we propose to develop an acoustic fall detection system (FADE) that will automatically signal a fall to the monitoring caregiver. As opposed to many existent fall detection systems that require the monitored person to wear devices such as accelerometers or gyroscopes at all times, our system is completely unobtrusive by not requiring any wearable devices. To reduce the false alarm rate we employ an array of acoustic sensors to obtain sound source height information. The sound is considered a false alarm if it comes from a source located at a height higher than 2 feet. We tested our system in a pilot study that consisted of a set of 23 falls performed by a stunt actor during six sessions of about 15 minutes each (1.3 hours in total). The actor was previously trained by our nursing collaborators to fall like an elderly person. The use of height information reduced the false alarm hourly rate from 32% to 5% at a 100% fall detection rate.", "", "This paper presents the design, implementation and evaluation of a distributed network of smart cameras whose function is to detect and localize falls, an important application in elderly living environments. 
A network of overlapping smart cameras uses a decentralized procedure for computing inter-image homographies that allows the location of a fall to be reported in 2D world coordinates by calibrating only one camera. Also, we propose a joint routing and homography transformation scheme for multi-hop localization that yields localization errors of less than 2 feet using very low resolution images. Our goal is to demonstrate that such a distributed low-power system can perform adequately in this and related applications. A prototype implementation is given for low-power Agilent UCLA Cyclops cameras running on the Crossbow MICAz platform. We demonstrate the effectiveness of the fall detection as well as the precision of the localization using a simulation of our sample implementation.", "", "This paper presents an ambient intelligence system designed for assisted living. The system processes the audio and video data acquired from multiple sensors spread in the environment to automatically detect dangerous events and generate automatic warning messages. The paper presents the distributed perception infrastructure that has been implemented by means of an open-source software middleware called NMM. Different processing nodes have been developed which can cooperate to extract high level information about the environment. Examples of implemented nodes running algorithms for people detection or face recognition are presented. Experiments on novel algorithms for people fall detection and sound classification and localization are discussed. Eventually, we present successful experiments in two test bed scenarios.", "" ] }
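The two-stage patch pipeline in the fallen-person record above (one classifier labels each patch, a second captures spatial relations between patches) can be mimicked with a deliberately simple stand-in: a height threshold and a connected-component check replace the two SVMs, purely to show the control flow, not the paper's actual classifiers.

```python
import numpy as np

def detect_fallen(patch_heights, adjacency, height_thresh=0.3, min_cluster=3):
    """Toy two-stage detector: stage 1 labels each patch 'low' if its mean
    height above the floor is below height_thresh; stage 2 checks spatial
    relations by requiring a connected group of >= min_cluster low patches.
    Both stages stand in for the paper's two SVM classifiers."""
    low = patch_heights < height_thresh       # stage 1: per-patch label
    n = len(patch_heights)
    seen = np.zeros(n, dtype=bool)
    # Stage 2: grow connected components of 'low' patches over adjacency.
    for start in range(n):
        if not low[start] or seen[start]:
            continue
        stack, comp = [start], 0
        while stack:
            i = stack.pop()
            if seen[i] or not low[i]:
                continue
            seen[i] = True
            comp += 1
            stack.extend(j for j in range(n) if adjacency[i, j] and not seen[j])
        if comp >= min_cluster:
            return True
    return False
```

The appeal of small patches, as the abstract notes, is robustness to clutter: a fallen person next to furniture still yields a compact group of low patches even when no single large segment isolates the body.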
1703.03401
2598514480
The goal of cluster analysis in survival data is to identify clusters that are decidedly associated with the survival outcome. Previous research has explored this problem primarily in the medical domain with relatively small datasets, but the need for such a clustering methodology could arise in other domains with large datasets, such as social networks. Concretely, we wish to identify different survival classes in a social network by clustering the users based on their lifespan in the network. In this paper, we propose a decision tree based algorithm that uses a global normalization of @math -values to identify clusters with significantly different survival distributions. We evaluate the clusters from our model with the help of a simple survival prediction task and show that our model outperforms other competing methods.
In order to overcome this issue, Gaynor and Bair @cite_34 proposed supervised sparse clustering as a modification to the sparse clustering algorithm of Witten and Tibshirani @cite_31 . The sparse clustering algorithm uses an objective function similar to k-means but with the modification that each feature has a weight associated to it. Supervised sparse clustering @cite_34 initializes these feature weights depending on the feature's relation with the survival outcome and optimizes the same objective function. Once again, they use Cox scores @cite_19 to quantify the effect of each feature on the survival outcome. The authors show that this leads to a clustering that is relatively more linked to the survival outcome.
{ "cite_N": [ "@cite_19", "@cite_31", "@cite_34" ], "mid": [ "1580788756", "2048178552", "" ], "abstract": [ "The analysis of censored failure times is considered. It is assumed that on each individual arc available values of one or more explanatory variables. The hazard function (age-specific failure rate) is taken to be a function of the explanatory variables and unknown regression coefficients multiplied by an arbitrary and unknown function of time. A conditional likelihood is obtained, leading to inferences about the unknown regression coefficients. Some generalizations are outlined.", "We consider the problem of clustering observations using a potentially large set of features. One might expect that the true underlying clusters present in the data differ only with respect to a small fraction of the features, and will be missed if one clusters the observations using the full set of features. We propose a novel framework for sparse clustering, in which one clusters the observations using an adaptively chosen subset of the features. The method uses a lasso-type penalty to select the features. We use this framework to develop simple methods for sparse K-means and sparse hierarchical clustering. A single criterion governs both the selection of the features and the resulting clusters. These approaches are demonstrated on simulated and genomic data.", "" ] }
1703.03470
2595315035
We prove that a particular deep network architecture is more efficient at approximating radially symmetric functions than the best known 2 or 3 layer networks. We use this architecture to approximate Gaussian kernel SVMs, and subsequently improve upon them with further training. The architecture and initial weights of the Deep Radial Kernel Network are completely specified by the SVM and therefore sidesteps the problem of empirically choosing an appropriate deep network architecture.
Therefore evidence is building that deep networks are more powerful than their shallow counterparts in terms of the number of parameters or neurons required. Nevertheless, relatively little work links this theory with practical applications. In this work we directly extend the work of and . The latter work is extended to deeper networks for approximating radially symmetric functions that require fewer parameters than their construction. The former @cite_1 is extended by generalising their notion of folding transformations to work in multiple dimensions and more simply with ReLU networks. The proofs are constructive and allow us to build networks for approximating radially symmetric functions. These networks are used to approximate Gaussian kernel SVMs, and the results show how we can further train these approximations to do better than the original SVM in many cases.
{ "cite_N": [ "@cite_1" ], "mid": [ "2033337358" ], "abstract": [ "We present a comparative theoretical analysis of representation in artificial neural networks with two extreme architectures, a shallow wide network and a deep narrow network, devised to maximally decouple their representative power due to layer width and network depth. We show that, given a specific activation function, models with comparable VC-dimension are required to guarantee zero error modeling of real functions over a binary input. However, functions that exhibit repeating patterns can be encoded much more efficiently in the deep representation, resulting in significant reduction in complexity. This paper provides some initial theoretical evidence of when and how depth can be extremely effective." ] }
1703.03055
2952121371
This paper develops a general framework for learning interpretable data representation via Long Short-Term Memory (LSTM) recurrent neural networks over hierarchal graph structures. Instead of learning LSTM models over the pre-fixed structures, we propose to further learn the intermediate interpretable multi-level graph structures in a progressive and stochastic way from data during the LSTM network optimization. We thus call this model the structure-evolving LSTM. In particular, starting with an initial element-level graph representation where each node is a small data element, the structure-evolving LSTM gradually evolves the multi-level graph representations by stochastically merging the graph nodes with high compatibilities along the stacked LSTM layers. In each LSTM layer, we estimate the compatibility of two connected nodes from their corresponding LSTM gate outputs, which is used to generate a merging probability. The candidate graph structures are accordingly generated where the nodes are grouped into cliques with their merging probabilities. We then produce the new graph structure with a Metropolis-Hasting algorithm, which alleviates the risk of getting stuck in local optimums by stochastic sampling with an acceptance probability. Once a graph structure is accepted, a higher-level graph is then constructed by taking the partitioned cliques as its nodes. During the evolving process, representation becomes more abstracted in higher-levels where redundant information is filtered out, allowing more efficient propagation of long-range data dependencies. We evaluate the effectiveness of structure-evolving LSTM in the application of semantic object parsing and demonstrate its advantage over state-of-the-art LSTM models on standard benchmarks.
The structure-evolving LSTM (dynamically evolving multi-level graphs) is superior to the most closely related Graph LSTM @cite_9 (a pre-fixed single-level graph) in two aspects: 1) the structure-evolving LSTM learns more powerful representations as it progressively exploits hierarchical information along stacked LSTM layers; 2) at its later layers, the structure-evolving LSTM captures the inherent structure of the desired output, benefiting from the higher-level graph topologies. These advantages bring significant improvements on several semantic parsing datasets, which gives an apples-to-apples comparison with @cite_9 . Our work aims to develop a new and general principled graph-evolving based learning method to learn more powerful Graph LSTM or other RNN models. Devising a new Graph-LSTM unit is not within the scope of this paper. We use Graph-LSTM as a running example, which by no means implies our method is limited to Graph LSTM.
{ "cite_N": [ "@cite_9" ], "mid": [ "2951729963" ], "abstract": [ "By taking the semantic object parsing task as an exemplar application scenario, we propose the Graph Long Short-Term Memory (Graph LSTM) network, which is the generalization of LSTM from sequential data or multi-dimensional data to general graph-structured data. Particularly, instead of evenly and fixedly dividing an image to pixels or patches in existing multi-dimensional LSTM structures (e.g., Row, Grid and Diagonal LSTMs), we take each arbitrary-shaped superpixel as a semantically consistent node, and adaptively construct an undirected graph for each image, where the spatial relations of the superpixels are naturally used as edges. Constructed on such an adaptive graph topology, the Graph LSTM is more naturally aligned with the visual patterns in the image (e.g., object boundaries or appearance similarities) and provides a more economical information propagation route. Furthermore, for each optimization step over Graph LSTM, we propose to use a confidence-driven scheme to update the hidden and memory states of nodes progressively till all nodes are updated. In addition, for each node, the forgets gates are adaptively learned to capture different degrees of semantic correlation with neighboring nodes. Comprehensive evaluations on four diverse semantic object parsing datasets well demonstrate the significant superiority of our Graph LSTM over other state-of-the-art solutions." ] }
1703.03097
2605217525
Extracting useful entities and attribute values from illicit domains such as human trafficking is a challenging problem with the potential for widespread social impact. Such domains employ atypical language models, have 'long tails' and suffer from the problem of concept drift. In this paper, we propose a lightweight, feature-agnostic Information Extraction (IE) paradigm specifically designed for such domains. Our approach uses raw, unlabeled text from an initial corpus, and a few (12-120) seed annotations per domain-specific attribute, to learn robust IE models for unobserved pages and websites. Empirically, we demonstrate that our approach can outperform feature-centric Conditional Random Field baselines by over 18% F-Measure on five annotated sets of real-world human trafficking datasets in both low-supervision and high-supervision settings. We also show that our approach is demonstrably robust to concept drift, and can be efficiently bootstrapped even in a serial computing environment.
Information Extraction (IE) is a well-studied research area both in the Natural Language Processing community and in the World Wide Web, with the reader referred to the survey by for an accessible coverage of Web IE approaches @cite_31 . In the NLP literature, IE problems have predominantly been studied as Named Entity Recognition and Relationship Extraction @cite_8 , @cite_5 . The scope of Web IE has been broad in recent years, extending from wrappers to Open Information Extraction (OpenIE) @cite_21 , @cite_1 .
{ "cite_N": [ "@cite_8", "@cite_21", "@cite_1", "@cite_5", "@cite_31" ], "mid": [ "2096765155", "1553019137", "2127978399", "2097960255", "2134150392" ], "abstract": [ "Most current statistical natural language processing models use only local features so as to permit dynamic programming in inference, but this makes them unable to fully account for the long distance structure that is prevalent in language use. We show how to solve this dilemma with Gibbs sampling, a simple Monte Carlo method used to perform approximate inference in factored probabilistic models. By using simulated annealing in place of Viterbi decoding in sequence models such as HMMs, CMMs, and CRFs, it is possible to incorporate non-local structure while preserving tractable inference. We use this technique to augment an existing CRF-based information extraction system with long-distance dependency models, enforcing label consistency and extraction template consistency constraints. This technique results in an error reduction of up to 9 over state-of-the-art systems on two established information extraction tasks.", "", "To implement open information extraction, a new extraction paradigm has been developed in which a system makes a single data-driven pass over a corpus of text, extracting a large set of relational tuples without requiring any human input. Using training data, a Self-Supervised Learner employs a parser and heuristics to determine criteria that will be used by an extraction classifier (or other ranking model) for evaluating the trustworthiness of candidate tuples that have been extracted from the corpus of text, by applying heuristics to the corpus of text. The classifier retains tuples with a sufficiently high probability of being trustworthy. A redundancy-based assessor assigns a probability to each retained tuple to indicate a likelihood that the retained tuple is an actual instance of a relationship between a plurality of objects comprising the retained tuple. 
The retained tuples comprise an extraction graph that can be queried for information.", "Motivation: The discovery of regulatory pathways, signal cascades, metabolic processes or disease models requires knowledge on individual relations like e.g. physical or regulatory interactions between genes and proteins. Most interactions mentioned in the free text of biomedical publications are not yet contained in structured databases. Results: We developed RelEx, an approach for relation extraction from free text. It is based on natural language preprocessing producing dependency parse trees and applying a small number of simple rules to these trees. We applied RelEx on a comprehensive set of one million MEDLINE abstracts dealing with gene and protein relations and extracted 150 000 relations with an estimated perfomance of both 80 precision and 80 recall. Availability: The used natural language preprocessing tools are free for use for academic research. Test sets and relation term lists are available from our website (http: www.bio.ifi.lmu.de publications RelEx ). Contact: katrin.fundel@bio.ifi.lmu.de", "The Internet presents a huge amount of useful information which is usually formatted for its users, which makes it difficult to extract relevant data from various sources. Therefore, the availability of robust, flexible information extraction (IE) systems that transform the Web pages into program-friendly structures such as a relational database will become a great necessity. Although many approaches for data extraction from Web pages have been developed, there has been limited effort to compare such tools. Unfortunately, in only a few cases can the results generated by distinct tools be directly compared since the addressed extraction tasks are different. This paper surveys the major Web data extraction approaches and compares them in three dimensions: the task domain, the automation degree, and the techniques used. 
The criteria of the first dimension explain why an IE system fails to handle some Web sites of particular structures. The criteria of the second dimension classify IE systems based on the techniques used. The criteria of the third dimension measure the degree of automation for IE systems. We believe these criteria provide qualitative measures to evaluate various IE approaches" ] }
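The feature-agnostic, seed-driven IE paradigm described in this record can be mocked up in a few lines: embed tokens, average the embeddings of the seed annotations, and label any token whose vector is close to that centroid. This is a hedged simplification of the approach (the paper learns a classifier over word-representation features; the centroid-plus-threshold rule and the `threshold` value here are my stand-ins).

```python
import numpy as np

def label_tokens(token_vecs, seed_vecs, threshold=0.7):
    """Mark a token as an attribute mention when its cosine similarity
    to the centroid of the seed-annotation embeddings exceeds
    `threshold`. A toy stand-in for a learned classifier."""
    centroid = seed_vecs.mean(axis=0)
    centroid = centroid / np.linalg.norm(centroid)
    norms = np.linalg.norm(token_vecs, axis=1, keepdims=True)
    sims = (token_vecs / norms) @ centroid
    return sims > threshold
```

Because the embeddings are induced from the raw corpus itself, nothing in this pipeline depends on hand-engineered features, which is what makes the approach portable to obfuscated, atypical text.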
1703.03097
2605217525
Extracting useful entities and attribute values from illicit domains such as human trafficking is a challenging problem with the potential for widespread social impact. Such domains employ atypical language models, have 'long tails' and suffer from the problem of concept drift. In this paper, we propose a lightweight, feature-agnostic Information Extraction (IE) paradigm specifically designed for such domains. Our approach uses raw, unlabeled text from an initial corpus, and a few (12-120) seed annotations per domain-specific attribute, to learn robust IE models for unobserved pages and websites. Empirically, we demonstrate that our approach can outperform feature-centric Conditional Random Field baselines by over 18% F-Measure on five annotated sets of real-world human trafficking datasets in both low-supervision and high-supervision settings. We also show that our approach is demonstrably robust to concept drift, and can be efficiently bootstrapped even in a serial computing environment.
In the Semantic Web, domain-specific extraction of entities and properties is a fundamental aspect in constructing instance-rich knowledge bases (from unstructured corpora) that contribute to the Semantic Web vision and to ecosystems like Linked Open Data @cite_29 , @cite_4 . A good example of such a system is Lodifier @cite_6 . This work is along the same lines, in that we are interested in user-specified attributes and wish to construct a knowledge base (KB) with those attribute values using raw Web corpora. However, we are not aware of any IE work in the Semantic Web that has used word representations to accomplish this task, or that has otherwise outperformed state-of-the-art systems without manual feature engineering.
{ "cite_N": [ "@cite_29", "@cite_4", "@cite_6" ], "mid": [ "", "2153075544", "152228199" ], "abstract": [ "", "Without semantically enriched content, the Web cannot reach its full potential. The authors discuss tools and techniques for generating and processing such content, thus setting a foundation upon which to build the Semantic Web. The authors put a Semantic Web language through its paces and answer questions about how people can use it, such as: how do authors generate semantic descriptions; how do agents discover these descriptions; how can agents integrate information from different sites; and how can users query the Semantic Web.", "The automated extraction of information from text and its transformation into a formal description is an important goal in both Semantic Web research and computational linguistics. The extracted information can be used for a variety of tasks such as ontology generation, question answering and information retrieval. LODifier is an approach that combines deep semantic analysis with named entity recognition, word sense disambiguation and controlled Semantic Web vocabularies in order to extract named entities and relations between them from text and to convert them into an RDF representation which is linked to DBpedia and WordNet. We present the architecture of our tool and discuss design decisions made. An evaluation of the tool on a story link detection task gives clear evidence of its practical potential." ] }
1703.03097
2605217525
Extracting useful entities and attribute values from illicit domains such as human trafficking is a challenging problem with the potential for widespread social impact. Such domains employ atypical language models, have 'long tails' and suffer from the problem of concept drift. In this paper, we propose a lightweight, feature-agnostic Information Extraction (IE) paradigm specifically designed for such domains. Our approach uses raw, unlabeled text from an initial corpus, and a few (12-120) seed annotations per domain-specific attribute, to learn robust IE models for unobserved pages and websites. Empirically, we demonstrate that our approach can outperform feature-centric Conditional Random Field baselines by over 18% F-Measure on five annotated sets of real-world human trafficking datasets in both low-supervision and high-supervision settings. We also show that our approach is demonstrably robust to concept drift, and can be efficiently bootstrapped even in a serial computing environment.
The work presented in this paper is structurally similar to the geolocation prediction system (from Twitter) by and also ADRMine, an adverse drug reaction (ADR) extraction system from social media @cite_25 , @cite_12 . Unlike these works, our system is not optimized for specific attributes like locations and drug reactions, but generalizes to a range of attributes. Also, as mentioned earlier, illicit domains involve challenges not characteristic of social media, notably information obfuscation.
{ "cite_N": [ "@cite_25", "@cite_12" ], "mid": [ "2142191319", "2171469118" ], "abstract": [ "Geographical location is vital to geospatial applications like local search and event detection. In this paper, we investigate and improve on the task of text-based geolocation prediction of Twitter users. Previous studies on this topic have typically assumed that geographical references (e.g., gazetteer terms, dialectal words) in a text are indicative of its author's location. However, these references are often buried in informal, ungrammatical, and multilingual data, and are therefore non-trivial to identify and exploit. We present an integrated geolocation prediction framework and investigate what factors impact on prediction accuracy. First, we evaluate a range of feature selection methods to obtain \"location indicative words\". We then evaluate the impact of nongeotagged tweets, language, and user-declared metadata on geolocation prediction. In addition, we evaluate the impact of temporal variance on model generalisation, and discuss how users differ in terms of their geolocatability. We achieve state-of-the-art results for the text-based Twitter user geolocation task, and also provide the most extensive exploration of the task to date. Our findings provide valuable insights into the design of robust, practical text-based geolocation prediction systems.", "Objective Social media is becoming increasingly popular as a platform for sharing personal health-related information. This information can be utilized for public health monitoring tasks, particularly for pharmacovigilance, via the use of natural language processing (NLP) techniques. However, the language in social media is highly informal, and user-expressed medical concepts are often nontechnical, descriptive, and challenging to extract. There has been limited progress in addressing these challenges, and thus far, advanced machine learning-based NLP techniques have been underutilized. 
Our objective is to design a machine learning-based approach to extract mentions of adverse drug reactions (ADRs) from highly informal text in social media. @PARASPLIT Methods We introduce ADRMine, a machine learning-based concept extraction system that uses conditional random fields (CRFs). ADRMine utilizes a variety of features, including a novel feature for modeling words’ semantic similarities. The similarities are modeled by clustering words based on unsupervised, pretrained word representation vectors (embeddings) generated from unlabeled user posts in social media using a deep learning technique. @PARASPLIT Results ADRMine outperforms several strong baseline systems in the ADR extraction task by achieving an F -measure of 0.82. Feature analysis demonstrates that the proposed word cluster features significantly improve extraction performance. @PARASPLIT Conclusion It is possible to extract complex medical concepts, with relatively high performance, from informal, user-generated content. Our approach is particularly scalable, suitable for social media mining, as it relies on large volumes of unlabeled data, thus diminishing the need for large, annotated training data sets." ] }
1703.03097
2605217525
Extracting useful entities and attribute values from illicit domains such as human trafficking is a challenging problem with the potential for widespread social impact. Such domains employ atypical language models, have 'long tails' and suffer from the problem of concept drift. In this paper, we propose a lightweight, feature-agnostic Information Extraction (IE) paradigm specifically designed for such domains. Our approach uses raw, unlabeled text from an initial corpus, and a few (12-120) seed annotations per domain-specific attribute, to learn robust IE models for unobserved pages and websites. Empirically, we demonstrate that our approach can outperform feature-centric Conditional Random Field baselines by over 18 F-Measure on five annotated sets of real-world human trafficking datasets in both low-supervision and high-supervision settings. We also show that our approach is demonstrably robust to concept drift, and can be efficiently bootstrapped even in a serial computing environment.
In recent years, state-of-the-art results have been achieved in a variety of NLP tasks using word representation methods like neural embeddings @cite_26 . Unlike the problem covered in this paper, those papers typically assume an existing KB (e.g. Freebase), and attempt to infer additional facts in the KB using word representations. In contrast, we study the problem of constructing and populating a KB per domain-specific attribute with only a small set of initial annotations from crawled Web corpora.
{ "cite_N": [ "@cite_26" ], "mid": [ "2950133940" ], "abstract": [ "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible." ] }
1703.03097
2605217525
Extracting useful entities and attribute values from illicit domains such as human trafficking is a challenging problem with the potential for widespread social impact. Such domains employ atypical language models, have 'long tails' and suffer from the problem of concept drift. In this paper, we propose a lightweight, feature-agnostic Information Extraction (IE) paradigm specifically designed for such domains. Our approach uses raw, unlabeled text from an initial corpus, and a few (12-120) seed annotations per domain-specific attribute, to learn robust IE models for unobserved pages and websites. Empirically, we demonstrate that our approach can outperform feature-centric Conditional Random Field baselines by over 18 F-Measure on five annotated sets of real-world human trafficking datasets in both low-supervision and high-supervision settings. We also show that our approach is demonstrably robust to concept drift, and can be efficiently bootstrapped even in a serial computing environment.
The problem studied in this paper also bears certain resemblances to OpenIE @cite_1 . One assumption in OpenIE systems is that a given fact (codified, for example, as an RDF triple) is observed in multiple pages and contexts, which allows the system to learn new 'extraction patterns' and rank facts by confidence. In illicit domains, a 'fact' may only be observed once; furthermore, the arcane and high-variance language models employed in the domain make direct application of any extraction pattern-based approach problematic. To the best of our knowledge, the specific problem of devising feature-agnostic, low-supervision IE approaches for illicit Web domains has not been studied in prior work.
{ "cite_N": [ "@cite_1" ], "mid": [ "2127978399" ], "abstract": [ "To implement open information extraction, a new extraction paradigm has been developed in which a system makes a single data-driven pass over a corpus of text, extracting a large set of relational tuples without requiring any human input. Using training data, a Self-Supervised Learner employs a parser and heuristics to determine criteria that will be used by an extraction classifier (or other ranking model) for evaluating the trustworthiness of candidate tuples that have been extracted from the corpus of text, by applying heuristics to the corpus of text. The classifier retains tuples with a sufficiently high probability of being trustworthy. A redundancy-based assessor assigns a probability to each retained tuple to indicate a likelihood that the retained tuple is an actual instance of a relationship between a plurality of objects comprising the retained tuple. The retained tuples comprise an extraction graph that can be queried for information." ] }
1703.02689
2591864458
Given a graphical model, one essential problem is MAP inference, that is, finding the most likely configuration of states according to the model. Although this problem is NP-hard, large instances can be solved in practice. A major open question is to explain why this is true. We give a natural condition under which we can provably perform MAP inference in polynomial time. We require that the number of fractional vertices in the LP relaxation exceeding the optimal solution is bounded by a polynomial in the problem size. This resolves an open question by Dimakis, Gohari, and Wainwright. In contrast, for general LP relaxations of integer programs, known techniques can only handle a constant number of fractional vertices whose value exceeds the optimal solution. We experimentally verify this condition and demonstrate how efficient various integer programming methods are at removing fractional solutions.
For some classes of graphical models, it is possible to solve the MAP problem exactly: see, for example, @cite_11 for balanced and almost balanced models, @cite_7 for perfect graphs, and @cite_2 for graphs with constant tree-width.
{ "cite_N": [ "@cite_2", "@cite_7", "@cite_11" ], "mid": [ "2120340025", "2949093466", "2411995537" ], "abstract": [ "The formalism of probabilistic graphical models provides a unifying framework for capturing complex dependencies among random variables, and building large-scale multivariate statistical models. Graphical models have become a focus of research in many statistical, computational and mathematical fields, including bioinformatics, communication theory, statistical physics, combinatorial optimization, signal and image processing, information retrieval and statistical machine learning. Many problems that arise in specific instances — including the key problems of computing marginals and modes of probability distributions — are best studied in the general setting. Working with exponential family representations, and exploiting the conjugate duality between the cumulant function and the entropy for exponential families, we develop general variational representations of the problems of computing likelihoods, marginal probabilities and most probable configurations. We describe how a wide variety of algorithms — among them sum-product, cluster variational methods, expectation-propagation, mean field methods, max-product and linear programming relaxation, as well as conic programming relaxations — can all be understood in terms of exact or approximate forms of these variational representations. The variational approach provides a complementary alternative to Markov chain Monte Carlo as a general source of approximation methods for inference in large-scale statistical models.", "Efficiently finding the maximum a posteriori (MAP) configuration of a graphical model is an important problem which is often implemented using message passing algorithms. The optimality of such algorithms is only well established for singly-connected graphs and other limited settings. 
This article extends the set of graphs where MAP estimation is in P and where message passing recovers the exact solution to so-called perfect graphs. This result leverages recent progress in defining perfect graphs (the strong perfect graph theorem), linear programming relaxations of MAP estimation and recent convergent message passing schemes. The article converts graphical models into nand Markov random fields which are straightforward to relax into linear programs. Therein, integrality can be established in general by testing for graph perfection. This perfection test is performed efficiently using a polynomial time algorithm. Alternatively, known decomposition tools from perfect graph theory may be used to prove perfection for certain families of graphs. Thus, a general graph framework is provided for determining when MAP estimation in any graphical model is in P, has an integral linear programming relaxation and is exactly recoverable by message passing.", "This is the author accepted manuscript. The final version is available from MIcrotome Publishing via http: www.jmlr.org proceedings papers v51 weller16b.html." ] }
1703.02689
2591864458
Given a graphical model, one essential problem is MAP inference, that is, finding the most likely configuration of states according to the model. Although this problem is NP-hard, large instances can be solved in practice. A major open question is to explain why this is true. We give a natural condition under which we can provably perform MAP inference in polynomial time. We require that the number of fractional vertices in the LP relaxation exceeding the optimal solution is bounded by a polynomial in the problem size. This resolves an open question by Dimakis, Gohari, and Wainwright. In contrast, for general LP relaxations of integer programs, known techniques can only handle a constant number of fractional vertices whose value exceeds the optimal solution. We experimentally verify this condition and demonstrate how efficient various integer programming methods are at removing fractional solutions.
In @cite_0 , the authors investigate how pseudomarginals and relaxations relate to the success of the Bethe approximation of the partition function.
{ "cite_N": [ "@cite_0" ], "mid": [ "2396077723" ], "abstract": [ "Belief propagation is a remarkably effective tool for inference, even when applied to networks with cycles. It may be viewed as a way to seek the minimum of the Bethe free energy, though with no convergence guarantee in general. A variational perspective shows that, compared to exact inference, this minimization employs two forms of approximation: (i) the true entropy is approximated by the Bethe entropy, and (ii) the minimization is performed over a relaxation of the marginal polytope termed the local polytope. Here we explore when and how the Bethe approximation can fail for binary pairwise models by examining each aspect of the approximation, deriving results both analytically and with new experimental methods." ] }
1703.02689
2591864458
Given a graphical model, one essential problem is MAP inference, that is, finding the most likely configuration of states according to the model. Although this problem is NP-hard, large instances can be solved in practice. A major open question is to explain why this is true. We give a natural condition under which we can provably perform MAP inference in polynomial time. We require that the number of fractional vertices in the LP relaxation exceeding the optimal solution is bounded by a polynomial in the problem size. This resolves an open question by Dimakis, Gohari, and Wainwright. In contrast, for general LP relaxations of integer programs, known techniques can only handle a constant number of fractional vertices whose value exceeds the optimal solution. We experimentally verify this condition and demonstrate how efficient various integer programming methods are at removing fractional solutions.
There has been substantial prior work on improving inference by building on these LP relaxations, especially for LDPC codes in the information theory community. This work ranges from very fast solvers that exploit the special structure of the polytope @cite_10 , to connections to unequal error protection @cite_8 and to graphical model covers @cite_22 . LP decoding currently provides the best known finite-length error-correction bounds for LDPC codes, both for random @cite_35 @cite_3 and for adversarial @cite_21 bit-flipping errors.
{ "cite_N": [ "@cite_35", "@cite_22", "@cite_8", "@cite_21", "@cite_3", "@cite_10" ], "mid": [ "2949560198", "2035309551", "2142740777", "1993508000", "", "2172190188" ], "abstract": [ "We initiate the probabilistic analysis of linear programming (LP) decoding of low-density parity-check (LDPC) codes. Specifically, we show that for a random LDPC code ensemble, the linear programming decoder of Feldman succeeds in correcting a constant fraction of errors with high probability. The fraction of correctable errors guaranteed by our analysis surpasses previous nonasymptotic results for LDPC codes, and in particular, exceeds the best previous finite-length result on LP decoding by a factor greater than ten. This improvement stems in part from our analysis of probabilistic bit-flipping channels, as opposed to adversarial channels. At the core of our analysis is a novel combinatorial characterization of LP decoding success, based on the notion of a flow on the Tanner graph of the code. An interesting by-product of our analysis is to establish the existence of ldquoprobabilistic expansionrdquo in random bipartite graphs, in which one requires only that almost every (as opposed to every) set of a certain size expands, for sets much larger than in the classical worst case setting.", "An important property of low-density parity-check codes is the existence of highly efficient algorithms for their decoding. Many of the most efficient, recent graph-based algorithms, e.g. message-passing iterative decoding and linear programming decoding, crucially depend on the efficient representation of a code in a graphical model. In order to understand the performance of these algorithms, we argue for the characterization of codes in terms of a so-called fundamental cone in Euclidean space. This cone depends upon a given parity-check matrix of a code, rather than on the code itself. 
We give a number of properties of this fundamental cone derived from its connection to unramified covers of the graphical models on which the decoding algorithms operate. For the class of cycle codes, these developments naturally lead to a characterization of the fundamental cone as the Newton polyhedron of the Hashimoto edge zeta function of the underlying graph.", "We investigate the design of fountain codes with good intermediate performance and built-in unequal error protection for low-delay video multicast. In particular, we design novel short-blocklength fountain codes for media streaming applications to multiple heterogeneous receivers and analyze their performance. Our theoretical contribution is the generalization of the growth code analysis for unequal error protection to suit the characteristics of video data. Simulation results show that the proposed method can effectively increase the number of decodable packets over a very wide range of packet drop rates and provide smooth and graceful video quality degradation for users with various channel conditions. The proposed scheme also enjoys the important benefits of much lower decoder complexity and simpler system architecture compared to traditional MDS erasure coding based solutions.", "We show that for low-density parity-check (LDPC) codes whose Tanner graphs have sufficient expansion, the linear programming (LP) decoder of Feldman, Karger, and Wainwright can correct a constant fraction of errors. A random graph will have sufficient expansion with high probability, and recent work shows that such graphs can be constructed efficiently. A key element of our method is the use of a dual witness: a zero-valued dual solution to the decoding linear program whose existence proves decoding success. We show that as long as no more than a certain constant fraction of the bits are flipped by the channel, we can find a dual witness. 
This new method can be used for proving bounds on the performance of any LP decoder, even in a probabilistic setting. Our result implies that the word error rate of the LP decoder decreases exponentially in the code length under the binary-symmetric channel (BSC). This is the first such error bound for LDPC codes using an analysis based on \"pseudocodewords.\" Recent work by Koetter and Vontobel shows that LP decoding and min-sum decoding of LDPC codes are closely related by the \"graph cover\" structure of their pseudocodewords; in their terminology, our result implies that that there exist families of LDPC codes where the minimum BSC pseudoweight grows linearly in the block length", "", "The problem of low complexity linear programming (LP) decoding of low-density parity-check (LDPC) codes is considered. An iterative algorithm, similar to min-sum and belief propagation, for efficient approximate solution of this problem was proposed by Vontobel and Koetter. In this paper, the convergence rate and computational complexity of this algorithm are studied using a scheduling scheme that we propose. In particular, we are interested in obtaining a feasible vector in the LP decoding problem that is close to optimal in the following sense. The distance, normalized by the block length, between the minimum and the objective function value of this approximate solution can be made arbitrarily small. It is shown that such a feasible vector can be obtained with a computational complexity which scales linearly with the block length. Combined with previous results that have shown that the LP decoder can correct some fixed fraction of errors we conclude that this error correction can be achieved with linear computational complexity. This is achieved by first applying the iterative LP decoder that decodes the correct transmitted codeword up to an arbitrarily small fraction of erroneous bits, and then correcting the remaining errors using some standard method. 
These conclusions are also extended to generalized LDPC codes." ] }
1703.02788
2950162708
Energy efficiency is becoming increasingly important for computing systems, in particular for large scale HPC facilities. In this work we evaluate, from an user perspective, the use of Dynamic Voltage and Frequency Scaling (DVFS) techniques, assisted by the power and energy monitoring capabilities of modern processors in order to tune applications for energy efficiency. We run selected kernels and a full HPC application on two high-end processors widely used in the HPC context, namely an NVIDIA K80 GPU and an Intel Haswell CPU. We evaluate the available trade-offs between energy-to-solution and time-to-solution, attempting a function-by-function frequency tuning. We finally estimate the benefits obtainable running the full code on a HPC multi-GPU node, with respect to default clock frequency governors. We instrument our code to accurately monitor power consumption and execution time without the need of any additional hardware, and we enable it to change CPUs and GPUs clock frequencies while running. We analyze our results on the different architectures using a simple energy-performance model, and derive a number of energy saving strategies which can be easily adopted on recent high-end HPC systems for generic applications.
Research work has focused in particular on communication phases, when processes within large parallel MPI applications stop their execution, waiting for data exchanges. These phases are indeed the best candidates for lowering the CPU clock with minimal impact on overall performance @cite_17 @cite_0 . However, from a performance point of view, it is good practice to overlap communication and computation whenever possible @cite_23 . Consequently, in several applications, in particular lattice-based computations (such as the one we adopt as a benchmark in this paper), communication phases are often fully overlapped with computation, making this strategy less effective.
{ "cite_N": [ "@cite_0", "@cite_23", "@cite_17" ], "mid": [ "2060212752", "1503313641", "2002180734" ], "abstract": [ "Although high-performance computing has always been about efficient application execution, both energy and power consumption have become critical concerns owing to their effect on operating costs and failure rates of large-scale computing platforms. Modern processors provide techniques, such as dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (called throttling), to improve energy efficiency on-the-fly. Without careful application, however, DVFS and throttling may cause a significant performance loss due to system overhead. This paper proposes a novel runtime system that maximizes energy saving by selecting appropriate values for DVFS and throttling in parallel applications. Specifically, the system automatically predicts communication phases in parallel applications and applies frequency scaling considering both the CPU offload, provided by the network-interface card, and the architectural stalls during computation. Experiments, performed on NAS parallel benchmarks as well as on real-world applications in molecular dynamics and linear system solution, demonstrate that the proposed runtime system obtaining energy savings of as much as 14 with a low performance loss of about 2 .", "An increasingly large number of scientific applications run on large clusters based on GPU systems. In most cases the large scale parallelism of the applications uses MPI, widely recognized as the de-facto standard for building parallel applications, while several programming languages are used to express the parallelism available in the application and map it onto the parallel resources available on GPUs. Regular grids and stencil codes are used in a subset of these applications, often corresponding to computational “Grand Challenges”. One such class of applications are Lattice Boltzmann Methods (LB) used in computational fluid dynamics. 
The regular structure of LB algorithms makes them suitable for processor architectures with a large degree of parallelism like GPUs. Scalability of these applications on large clusters requires a careful design of processor-to-processor data communications, exploiting all possibilities to overlap communication and computation. This paper looks at these issues, considering as a use case a state-of-the-art two-dimensional LB model, that accurately reproduces the thermo-hydrodynamics of a 2D-fluid obeying the equation-of-state of a perfect gas. We study in details the interplay between data organization and data layout, data-communication options and overlapping of communication and computation. We derive partial models of some performance features and compare with experimental results for production-grade codes that we run on a large cluster of GPUs.", "Although users of high-performance computing are most interested in raw performance, both energy and power consumption have become critical concerns. Because the CPU is often the major power consumer, some microprocessors allow frequency and voltage scaling, which enables a system to efficiently reduce CPU performance and power. When the CPU is not on the critical path, such dynamic frequency and voltage scaling can produce significant energy savings with little performance penalty. This paper presents an MPI runtime system that dynamically reduces CPU frequency and voltage during communication phases in MPI programs. It dynamically identifies such phases and, without a priori knowledge, selects the CPU frequency in order to minimize energy-delay product. All analysis and subsequent frequency and voltage scaling is within MPI and so is entirely transparent to the application. This means that the large number of existing MPI programs, as well as new ones being developed, can use our system without modification. 
Results show that the median reduction in energy-delay product for twelve benchmarks is 8 , the median energy reduction is 11 , and the median increase in execution time increase is only 2 ." ] }
1703.02788
2950162708
Energy efficiency is becoming increasingly important for computing systems, in particular for large scale HPC facilities. In this work we evaluate, from an user perspective, the use of Dynamic Voltage and Frequency Scaling (DVFS) techniques, assisted by the power and energy monitoring capabilities of modern processors in order to tune applications for energy efficiency. We run selected kernels and a full HPC application on two high-end processors widely used in the HPC context, namely an NVIDIA K80 GPU and an Intel Haswell CPU. We evaluate the available trade-offs between energy-to-solution and time-to-solution, attempting a function-by-function frequency tuning. We finally estimate the benefits obtainable running the full code on a HPC multi-GPU node, with respect to default clock frequency governors. We instrument our code to accurately monitor power consumption and execution time without the need of any additional hardware, and we enable it to change CPUs and GPUs clock frequencies while running. We analyze our results on the different architectures using a simple energy-performance model, and derive a number of energy saving strategies which can be easily adopted on recent high-end HPC systems for generic applications.
Finally, the widespread adoption of accelerators in HPC systems means that the largest fraction of the power drained by computing systems is no longer ascribable to CPUs. For instance, with the advent of multi-GPU compute nodes, where up to 8 dual-GPU boards are hosted on a single computing node, up to @math of the power of a node of the COKA Cluster ( http: www.fe.infn.it coka ) installed at the University of Ferrara is drained by accelerators, an estimate obtained from the declared maximum possible power drain of the system and of the installed processors and accelerators. Consequently, as recent GPUs improve their support of DVFS, in many cases allowing for a fine-grained frequency selection @cite_19 @cite_6 @cite_24 , various studies have focused on the optimization space made available by tuning GPU clock frequencies.
{ "cite_N": [ "@cite_24", "@cite_19", "@cite_6" ], "mid": [ "1980489165", "", "2074084090" ], "abstract": [ "Improving energy efficiency is an ongoing challenge in HPC because of the ever-increasing need for performance coupled with power and economic constraints. Though GPU-accelerated heterogeneous computing systems are capable of delivering impressive performance, it is necessary to explore all available power-aware technologies to meet the inevitable energy efficiency challenge. In this paper, we experimentally study the impacts of DVFS on application performance and energy efficiency for GPU computing and compare them with those of DVFS for CPU computing. Based on a power-aware heterogeneous system that includes dual Intel Sandy Bridge CPUs and the latest Nvidia K20c Kepler GPU, the study provides numerous new insights, general trends and exceptions of DVFS for GPU computing. In general, the effects of DVFS on a GPU differ from those of DVFS on a CPU. For example, on a GPU running compute-bound high-performance and high-throughput workloads, the system performance and the power consumption are approximately proportional to the GPU frequency. Hence, with a permissible power limit, increasing the GPU frequency leads to better performance without incurring a noticeable increase in energy. This paper further provides detailed analytical explanations of the causes of the observed trends and exceptions. The findings presented in this paper have the potential to impact future CPU and GPU architectures to achieve better energy efficiency and point out directions for designing effective DVFS schedulers for heterogeneous systems.", "", "Graphics processing units (GPUs) provide an order-of-magnitude improvement on peak performance and performance-per-watt as compared to traditional multicore CPUs. 
However, GPU-accelerated systems currently lack a generalized method of power and performance prediction, which prevents system designers from an ultimate goal of dynamic power and performance optimization. This is due to the fact that their power and performance characteristics are not well captured across architectures, and as a result, existing power and performance modeling approaches are only available for a limited range of particular GPUs. In this paper, we present power and performance characterization and modeling of GPU-accelerated systems across multiple generations of architectures. Characterization and modeling both play a vital role in optimization and prediction of GPU-accelerated systems. We quantify the impact of voltage and frequency scaling on each architecture with a particularly intriguing result that a cutting-edge Kepler-based GPU achieves energy saving of 75 by lowering GPU clocks in the best scenario, while Fermi- and Tesla-based GPUs achieve no greater than 40 and 13 , respectively. Considering these characteristics, we provide statistical power and performance modeling of GPU-accelerated systems simplified enough to be applicable for multiple generations of architectures. One of our findings is that even simplified statistical models are able to predict power and performance of cutting-edge GPUs within errors of 20 to 30 for any set of voltage and frequency pair." ] }
1703.02873
2604790346
Software tracing techniques are well-established and used by instrumentation tools to extract run-time information for program analysis and debugging. Dynamic binary instrumentation as one tool instruments program binaries to extract information. Unfortunately, instrumentation causes perturbation that is unacceptable for time-sensitive applications. Consequently we developed DIME*, a tool for dynamic binary instrumentation that considers timing constraints. DIME* uses Pin and a rate-based server approach to extract information only as long as user-specified constraints are maintained. Due to the large amount of redundancies in program traces, DIME* reduces the instrumentation overhead by one to three orders of magnitude compared to native Pin while extracting up to 99% of the information. We instrument VLC and PostgreSQL to demonstrate the usability of DIME*.
A program can be instrumented at the source code level either automatically or manually. In automatic instrumentation, a tool parses the program, possibly generates a CFG, and eventually inserts instrumentation points. Multiple works @cite_41 @cite_11 @cite_9 investigate static source-code time-aware instrumentation tools. On the other hand, manual instrumentation requires that the developer specifies the instrumentation locations @cite_13 . Manual instrumentation is highly flexible, but the effect the instrumentation induces on the timing behavior is hard for the developer to estimate.
{ "cite_N": [ "@cite_41", "@cite_9", "@cite_13", "@cite_11" ], "mid": [ "2166778115", "1981053051", "", "2037435078" ], "abstract": [ "Software instrumentation is a key technique in many stages of the development process. It is particularly important for debugging embedded systems. Instrumented programs produce data traces which enable the developer to locate the origins of misbehaviors in the system under test. However, producing data traces incurs runtime overhead in the form of additional computation resources for capturing and copying the data. The instrumentation may therefore interfere with the system's timing and perturb its behavior. In this work, we propose an instrumentation technique for applications with temporal constraints, specifically targeting background foreground or cyclic executive systems. Our framework permits reasoning about space and time and enables the composition of software instrumentations. In particular, we propose a definition for trace reliability, which enables us to instrument real-time applications which aggressively push their time budgets. Using the framework, we present a method with low perturbation by optimizing the number of insertion points and trace buffer size with respect to code size and time budgets. Finally, we apply the theory to two concrete case studies: we instrument the OpenEC firmware for the keyboard controller of the One Laptop Per Child project, as well as an implementation of a flash file system.", "Tracing is a well-established method for debugging programs. Current approaches aim only at preserving functional correctness during the instrumentation. Preservation of functional correctness is a necessary feature of all instrumentation tools. However, few existing instrumentation tools preserve extra-functional properties of a program. 
Specific classes of software are unable to leverage software instrumentation; e.g., timing for real-time systems, memory consumption for embedded software, and tracing bandwidth for on-board software. We present the first instrumentation framework, INSTEP, that preserves logical correctness and a rich set of extra-functional properties. INSTEP derives instrumentation alternatives based on the developer's instrumentation intent (II), abstracts the program and prunes the search space, and then instruments the program based on constraints and cost models of competing properties. We demonstrate and experiment with a fully automated framework of INSTEP with different IIs and extra-functional properties. We also experiment with a large automotive case study to show the scalability of INSTEP.", "", "Instrumentation is a valuable technique to gain insight into a program's behavior. Safety-critical real-time embedded applications are time sensitive and so instrumentation techniques for this domain must especially consider timing. This work establishes the basis for measuring the effectiveness of approaches for time-aware instrumentation in addition to coverage. We define the ETP shift effectiveness metric and define its optimality criterion. We identify locations in the program where program transformation techniques can be applied to increase the instrumentability of the program. We subsequently use the proposed metric to evaluate two transformation methods that improve the effectiveness and coverage of current techniques for time-aware instrumentation by a factor of five." ] }
1703.02873
2604790346
Software tracing techniques are well-established and used by instrumentation tools to extract run-time information for program analysis and debugging. Dynamic binary instrumentation as one tool instruments program binaries to extract information. Unfortunately, instrumentation causes perturbation that is unacceptable for time-sensitive applications. Consequently we developed DIME*, a tool for dynamic binary instrumentation that considers timing constraints. DIME* uses Pin and a rate-based server approach to extract information only as long as user-specified constraints are maintained. Due to the large amount of redundancies in program traces, DIME* reduces the instrumentation overhead by one to three orders of magnitude compared to native Pin while extracting up to 99% of the information. We instrument VLC and PostgreSQL to demonstrate the usability of DIME*.
Some instrumentation tools are also capable of inserting instrumentation points into binary executables, either statically or dynamically. QPT @cite_4 , EEL @cite_10 , and ATOM @cite_26 are examples of static binary instrumentation tools. Static instrumentation is based on static analysis and, hence, cannot react to application changes at run time. Dynamic binary instrumentation, on the other hand, does not require any pre-processing of the program under analysis.
{ "cite_N": [ "@cite_10", "@cite_26", "@cite_4" ], "mid": [ "2040183246", "2047226031", "" ], "abstract": [ "EEL (Executable Editing Library) is a library for building tools to analyze and modify an executable (compiled) program. The systems and languages communities have built many tools for error detection, fault isolation, architecture translation, performance measurement, simulation, and optimization using this approach of modifying executables. Currently, however, tools of this sort are difficult and time-consuming to write and are usually closely tied to a particular machine and operating system. EEL supports a machine- and system-independent editing model that enables tool builders to modify an executable without being aware of the details of the underlying architecture or operating system or being concerned with the consequences of deleting instructions or adding foreign code.", "ATOM (Analysis Tools with OM) is a single framework for building a wide range of customized program analysis tools. It provides the common infrastructure present in all code-instrumenting tools; this is the difficult and time-consuming part. The user simply defines the tool-specific details in instrumentation and analysis routines. Building a basic block counting tool like Pixie with ATOM requires only a page of code. ATOM, using OM link-time technology, organizes the final executable such that the application program and user's analysis routines run in the same address space. Information is directly passed from the application program to the analysis routines through simple procedure calls instead of inter-process communication or files on disk. ATOM takes care that analysis routines do not interfere with the program's execution, and precise information about the program is presented to the analysis routines at all times. ATOM uses no simulation or interpretation. ATOM has been implemented on the Alpha AXP under OSF 1. 
It is efficient and has been used to build a diverse set of tools for basic block counting, profiling, dynamic memory recording, instruction and data cache simulation, pipeline simulation, evaluating branch prediction, and instruction scheduling.", "" ] }
1703.02873
2604790346
Software tracing techniques are well-established and used by instrumentation tools to extract run-time information for program analysis and debugging. Dynamic binary instrumentation as one tool instruments program binaries to extract information. Unfortunately, instrumentation causes perturbation that is unacceptable for time-sensitive applications. Consequently we developed DIME*, a tool for dynamic binary instrumentation that considers timing constraints. DIME* uses Pin and a rate-based server approach to extract information only as long as user-specified constraints are maintained. Due to the large amount of redundancies in program traces, DIME* reduces the instrumentation overhead by one to three orders of magnitude compared to native Pin while extracting up to 99% of the information. We instrument VLC and PostgreSQL to demonstrate the usability of DIME*.
Examples of dynamic binary instrumentation tools that use code transformation during program execution include Dyninst @cite_27 and Vulcan @cite_31 . Most of these instrumentation tools, however, modify the native behavior of the program under analysis @cite_7 . Other tools, such as Pin @cite_32 , DynamoRIO @cite_29 , and Valgrind @cite_3 , have software code caches and are able to dynamically compile binaries.
{ "cite_N": [ "@cite_7", "@cite_29", "@cite_32", "@cite_3", "@cite_27", "@cite_31" ], "mid": [ "2133692747", "2161992906", "2134633067", "2156858199", "2160468841", "2149918819" ], "abstract": [ "Process virtualization provides a virtual execution environment within which an unmodified application can be monitored and controlled while it executes. The provided layer of control can be used for purposes ranging from sandboxing to compatibility to profiling. The additional operations required for this layer are performed clandestinely alongside regular program execution. Software dynamic instrumentation is one method for implementing process virtualization which dynamically instruments an application such that the application's code and the inserted code are interleaved together. DynamoRIO is a process virtualization system implemented using software code cache techniques that allows users to build customized dynamic instrumentation tools. There are many challenges to building such a runtime system. One major obstacle is transparency. In order to support executing arbitrary applications, DynamoRIO must be fully transparent so that an application cannot distinguish between running inside the virtual environment and native execution. In addition, any desired extra operations for a particular tool must avoid interfering with the behavior of the application. Transparency has historically been provided on an ad-hoc basis, as a reaction to observed problems in target applications. This paper identifies a necessary set of transparency requirements for running mainstream Windows and Linux applications. We discuss possible solutions to each transparency issue, evaluate tradeoffs between different choices, and identify cases where maintaining transparency is not practically solvable. 
We believe this will provide a guideline for better design and implementation of transparent dynamic instrumentation, as well as other similar process virtualization systems using software code caches.", "Dynamic optimization is emerging as a promising approach to overcome many of the obstacles of traditional static compilation. But while there are a number of compiler infrastructures for developing static optimizations, there are very few for developing dynamic optimizations. We present a framework for implementing dynamic analyses and optimizations. We provide an interface for building external modules, or clients, for the DynamoRIO dynamic code modification system. This interface abstracts away many low-level details of the DynamoRIO runtime system while exposing a simple and powerful, yet efficient and lightweight API. This is achieved by restricting optimization units to linear streams of code and using adaptive levels of detail for representing instructions. The interface is not restricted to optimization and can be used for instrumentation, profiling, dynamic translation, etc. To demonstrate the usefulness and effectiveness of our framework, we implemented several optimizations. These improve the performance of some applications by as much as 40% relative to native execution. The average speedup relative to base DynamoRIO performance is 12%.", "Robust and powerful software instrumentation tools are essential for program analysis tasks such as profiling, performance evaluation, and bug detection. To meet this need, we have developed a new instrumentation system called Pin. Our goals are to provide easy-to-use, portable, transparent, and efficient instrumentation. Instrumentation tools (called Pintools) are written in C/C++ using Pin's rich API. Pin follows the model of ATOM, allowing the tool writer to analyze an application at the instruction level without the need for detailed knowledge of the underlying instruction set.
The API is designed to be architecture independent whenever possible, making Pintools source compatible across different architectures. However, a Pintool can access architecture-specific details when necessary. Instrumentation with Pin is mostly transparent as the application and Pintool observe the application's original, uninstrumented behavior. Pin uses dynamic compilation to instrument executables while they are running. For efficiency, Pin uses several techniques, including inlining, register re-allocation, liveness analysis, and instruction scheduling to optimize instrumentation. This fully automated approach delivers significantly better instrumentation performance than similar tools. For example, Pin is 3.3x faster than Valgrind and 2x faster than DynamoRIO for basic-block counting. To illustrate Pin's versatility, we describe two Pintools in daily use to analyze production software. Pin is publicly available for Linux platforms on four architectures: IA32 (32-bit x86), EM64T (64-bit x86), Itanium®, and ARM. In the ten months since Pin 2 was released in July 2004, there have been over 3000 downloads from its website.", "Dynamic binary instrumentation (DBI) frameworks make it easy to build dynamic binary analysis (DBA) tools such as checkers and profilers. Much of the focus on DBI frameworks has been on performance; little attention has been paid to their capabilities. As a result, we believe the potential of DBI has not been fully exploited. In this paper we describe Valgrind, a DBI framework designed for building heavyweight DBA tools. We focus on its unique support for shadow values-a powerful but previously little-studied and difficult-to-implement DBA technique, which requires a tool to shadow every register and memory value with another value that describes it. This support accounts for several crucial design features that distinguish Valgrind from other DBI frameworks. 
Because of these features, lightweight tools built with Valgrind run comparatively slowly, but Valgrind can be used to build more interesting, heavyweight tools that are difficult or impossible to build with other DBI frameworks such as Pin and DynamoRIO.", "The authors present a postcompiler program manipulation tool called Dyninst, which provides a C++ class library for program instrumentation. Using this library, it is possible to instrument and modify application programs during execution. A unique feature of this library is that it permits machine-independent binary instrumentation programs to be written. The authors describe the interface that a tool sees when using this library. They also discuss three simple tools built using this interface: a utility to count the number of times a function is called, a program to capture the output of an already running program to a file, and an implementation of conditional breakpoints. For the conditional breakpoint example, the authors show that by using their interface compared with gdb, they are able to execute a program with conditional breakpoints up to 900 times faster.", "" ] }
1703.02873
2604790346
Software tracing techniques are well-established and used by instrumentation tools to extract run-time information for program analysis and debugging. Dynamic binary instrumentation as one tool instruments program binaries to extract information. Unfortunately, instrumentation causes perturbation that is unacceptable for time-sensitive applications. Consequently we developed DIME*, a tool for dynamic binary instrumentation that considers timing constraints. DIME* uses Pin and a rate-based server approach to extract information only as long as user-specified constraints are maintained. Due to the large amount of redundancies in program traces, DIME* reduces the instrumentation overhead by one to three orders of magnitude compared to native Pin while extracting up to 99% of the information. We instrument VLC and PostgreSQL to demonstrate the usability of DIME*.
The work by Arnold and Ryder @cite_20 is the most relevant to reducing the overhead of dynamic instrumentation. The authors' approach involves duplicating code regions and using counter-based sampling to switch between the instrumented and non-instrumented versions of the code. Code duplication results in a large increase in code space. Because event-based sampling only samples events according to their frequency of occurrence, it reduces the instrumentation overhead. However, as explained in @cite_25 , event-based sampling can result in sampling bursts, which can cause high degradation in performance. Other sampling-based approaches are also used for performance optimizations @cite_14 . These approaches either apply optimizations specific to the instrumentation objective or use compiler-specific information to perform optimizations.
{ "cite_N": [ "@cite_14", "@cite_25", "@cite_20" ], "mid": [ "", "2097157718", "2077324087" ], "abstract": [ "", "Phase Change Memory (PCM) is an emerging technology that has been recently considered as a cost-effective and energy-efficient alternative to traditional DRAM main memory. Due to the high energy consumption of writes and limited number of write cycles, reducing the number of writes to PCM can result in considerable energy savings and endurance improvement. In this paper, we introduce the concept of useless write-backs, which occur when a dirty cache line that belongs to a dead memory region is evicted from the cache (a dead region is a memory location that is not used again by a program). Since the evicted data is not used again, the write-back can be safely avoided to improve endurance and energy consumption. This paper presents a limit study on the improvement that passing information to the memory system about useless writebacks has on the endurance and energy consumption of systems based on PCM main memory. We developed algorithms to measure the number of useless write-backs to PCM for three different types of memory regions and we present an energy model to determine the maximum energy savings that could potentially be achieved through such a scheme. Our results show that avoiding useless write-backs can save up to 19.8% of energy and improve endurance by up to 26.2%.", "Instrumenting code to collect profiling information can cause substantial execution overhead. This overhead makes instrumentation difficult to perform at runtime, often preventing many known offline feedback-directed optimizations from being used in online systems. This paper presents a general framework for performing instrumentation sampling to reduce the overhead of previously expensive instrumentation. The framework is simple and effective, using code-duplication and counter-based sampling to allow switching between instrumented and non-instrumented code.
Our framework does not rely on any hardware or operating system support, yet provides a high frequency sample rate that is tunable, allowing the tradeoff between overhead and accuracy to be adjusted easily at runtime. Experimental results are presented to validate that our technique can collect accurate profiles (93-98% overlap with a perfect profile) with low overhead (averaging 6% total overhead with a naive implementation). A Jalapeño-specific optimization is also presented that reduces overhead further, resulting in an average total overhead of 3%." ] }
1703.02949
2949600457
People can learn a wide range of tasks from their own experience, but can also learn from observing other creatures. This can accelerate acquisition of new skills even when the observed agent differs substantially from the learning agent in terms of morphology. In this paper, we examine how reinforcement learning algorithms can transfer knowledge between morphologically different agents (e.g., different robots). We introduce a problem formulation where two agents are tasked with learning multiple skills by sharing information. Our method uses the skills that were learned by both agents to train invariant feature spaces that can then be used to transfer other skills from one agent to another. The process of learning these invariant feature spaces can be viewed as a kind of "analogy making", or implicit learning of partial correspondences between two distinct domains. We evaluate our transfer learning algorithm in two simulated robotic manipulation skills, and illustrate that we can transfer knowledge between simulated robotic arms with different numbers of links, as well as simulated arms with different actuation mechanisms, where one robot is torque-driven while the other is tendon-driven.
Transfer learning has long been recognized as an important direction in robotics and reinforcement learning ( @cite_11 ). @cite_6 learned value functions on subsets of the state representation that were shared between tasks, providing a shaping reward in the target task. @cite_2 manually construct a function to map a @math -function from one Markov decision process (MDP) to another. @cite_4 manually define a common feature space between the states of two MDPs, and use this feature space to learn a mapping between states.
{ "cite_N": [ "@cite_4", "@cite_2", "@cite_6", "@cite_11" ], "mid": [ "1481405077", "2133040789", "2079247031", "" ], "abstract": [ "Agents in reinforcement learning tasks may learn slowly in large or complex tasks -- transfer learning is one technique to speed up learning by providing an informative prior. How to best enable transfer between tasks with different state representations and or actions is currently an open question. This paper introduces the concept of a common task subspace, which is used to autonomously learn how two tasks are related. Experiments in two different nonlinear domains empirically show that a learned inter-state mapping can successfully be used by fitted value iteration, to (1) improving the performance of a policy learned with a fixed number of samples, and (2) reducing the time required to converge to a (near-) optimal policy with unlimited samples.", "Temporal difference (TD) learning (Sutton and Barto, 1998) has become a popular reinforcement learning technique in recent years. TD methods, relying on function approximators to generalize learning to novel situations, have had some experimental successes and have been shown to exhibit some desirable properties in theory, but the most basic algorithms have often been found slow in practice. This empirical result has motivated the development of many methods that speed up reinforcement learning by modifying a task for the learner or helping the learner better generalize to novel situations. This article focuses on generalizing across tasks, thereby speeding up learning, via a novel form of transfer using handcoded task relationships. We compare learning on a complex task with three function approximators, a cerebellar model arithmetic computer (CMAC), an artificial neural network (ANN), and a radial basis function (RBF), and empirically demonstrate that directly transferring the action-value function can lead to a dramatic speedup in learning with all three. 
Using transfer via inter-task mapping (TVITM), agents are able to learn one task and then markedly reduce the time it takes to learn a more complex task. Our algorithms are fully implemented and tested in the RoboCup soccer Keepaway domain. This article contains and extends material published in two conference papers (Taylor and Stone, 2005; , 2005).", "We introduce the use of learned shaping rewards in reinforcement learning tasks, where an agent uses prior experience on a sequence of tasks to learn a portable predictor that estimates intermediate rewards, resulting in accelerated learning in later tasks that are related but distinct. Such agents can be trained on a sequence of relatively easy tasks in order to develop a more informative measure of reward that can be transferred to improve performance on more difficult tasks without requiring a hand coded shaping function. We use a rod positioning task to show that this significantly improves performance even after a very brief training period.", "" ] }
1703.02949
2949600457
People can learn a wide range of tasks from their own experience, but can also learn from observing other creatures. This can accelerate acquisition of new skills even when the observed agent differs substantially from the learning agent in terms of morphology. In this paper, we examine how reinforcement learning algorithms can transfer knowledge between morphologically different agents (e.g., different robots). We introduce a problem formulation where two agents are tasked with learning multiple skills by sharing information. Our method uses the skills that were learned by both agents to train invariant feature spaces that can then be used to transfer other skills from one agent to another. The process of learning these invariant feature spaces can be viewed as a kind of "analogy making", or implicit learning of partial correspondences between two distinct domains. We evaluate our transfer learning algorithm in two simulated robotic manipulation skills, and illustrate that we can transfer knowledge between simulated robotic arms with different numbers of links, as well as simulated arms with different actuation mechanisms, where one robot is torque-driven while the other is tendon-driven.
Later work by @cite_1 uses unsupervised manifold alignment to assign pairings between states for transfer. Like in our method, they aim to transfer skills between robots with different configurations and action spaces by guiding exploration in the target domain. The main difference from our work is that @cite_1 assume the presence of a feature mapping that provides distances between states, and use these (hand designed) features to assign correspondences between states in the different domains. In contrast, we assume that good correspondences in episodic tasks can be extracted through time alignment, and focus on learning the feature mapping itself. Additionally, we do not try to learn a direct mapping between state spaces but instead try to learn nonlinear embedding functions into a common feature space, as compared to linear mappings between state spaces learned in @cite_1 . In a similar vein, @cite_7 consider transfer learning across linear time-invariant (LTI) systems through simple alignment based methods. Although this method is quite effective in enabling transfer in these systems, it does not apply to the higher dimensional continuous control tasks we consider which may have non-linear dynamics, and may not be LTI.
{ "cite_N": [ "@cite_1", "@cite_7" ], "mid": [ "1848094219", "2585083595" ], "abstract": [ "The success of applying policy gradient reinforcement learning (RL) to difficult control tasks hinges crucially on the ability to determine a sensible initialization for the policy. Transfer learning methods tackle this problem by reusing knowledge gleaned from solving other related tasks. In the case of multiple task domains, these algorithms require an inter-task mapping to facilitate knowledge transfer across domains. However, there are currently no general methods to learn an inter-task mapping without requiring either background knowledge that is not typically present in RL settings, or an expensive analysis of an exponential number of inter-task mappings in the size of the state and action spaces. This paper introduces an autonomous framework that uses unsupervised manifold alignment to learn intertask mappings and effectively transfer samples between different task domains. Empirical results on diverse dynamical systems, including an application to quadrotor control, demonstrate its effectiveness for cross-domain transfer in the context of policy gradient RL.", "Methods from machine learning have successfully been used to improve the performance of control systems in cases when accurate models of the system or the environment are not available. These methods require the use of data generated from physical trials. Transfer Learning (TL) allows for this data to come from a different, similar system. The goal of this work is to understand in which cases a simple, alignment-based transfer of data is beneficial. A scalar, linear, time-invariant (LTI) transformation is applied to the output from a source system to align with the output from a target system.
In a theoretic study, we have already shown that for linear, single-input, single-output systems, the upper bound of the transformation error depends on the dynamic properties of the source and target system, and is small for systems with similar response times. We now consider two nonlinear, unicycle robots. Based on our previous work, we derive analytic error bounds for the linearized robot models. We then provide simulations of the nonlinear robot models and experiments with a Pioneer 3-AT robot that confirm the theoretical findings. As a result, key characteristics of alignment based transfer learning observed in our theoretic study prove to be also true for real, nonlinear unicycle robots." ] }
1703.02949
2949600457
People can learn a wide range of tasks from their own experience, but can also learn from observing other creatures. This can accelerate acquisition of new skills even when the observed agent differs substantially from the learning agent in terms of morphology. In this paper, we examine how reinforcement learning algorithms can transfer knowledge between morphologically different agents (e.g., different robots). We introduce a problem formulation where two agents are tasked with learning multiple skills by sharing information. Our method uses the skills that were learned by both agents to train invariant feature spaces that can then be used to transfer other skills from one agent to another. The process of learning these invariant feature spaces can be viewed as a kind of "analogy making", or implicit learning of partial correspondences between two distinct domains. We evaluate our transfer learning algorithm in two simulated robotic manipulation skills, and illustrate that we can transfer knowledge between simulated robotic arms with different numbers of links, as well as simulated arms with different actuation mechanisms, where one robot is torque-driven while the other is tendon-driven.
Learning feature spaces has also been studied in the domain of computer vision as a mechanism for domain adaptation and metric learning. @cite_9 finds a linear transformation of the input data to satisfy pairwise similarity constraints, while past work by @cite_16 used Siamese networks to learn a feature space where paired images are brought close together and unpaired images are pushed apart. This enables a semantically meaningful metric space to be learned with only pairs as labels. Later work on domain adaptation by @cite_12 and @cite_13 uses an adversarial approach to learn an image embedding that is useful for classification and invariant to the input image's domain. We use the idea of learning a metric space from paired states, though the adversarial approach could also be used with our method as an alternative objective function in future work.
{ "cite_N": [ "@cite_9", "@cite_16", "@cite_13", "@cite_12" ], "mid": [ "2117154949", "2952629144", "1731081199", "2953226914" ], "abstract": [ "Many algorithms rely critically on being given a good metric over their inputs. For instance, data can often be clustered in many \"plausible\" ways, and if a clustering algorithm such as K-means initially fails to find one that is meaningful to a user, the only recourse may be for the user to manually tweak the metric until sufficiently good clusters are found. For these and other applications requiring good metrics, it is desirable that we provide a more systematic way for users to indicate what they consider \"similar.\" For instance, we may ask them to provide examples. In this paper, we present an algorithm that, given examples of similar (and, if desired, dissimilar) pairs of points in ℝn, learns a distance metric over ℝn that respects these relationships. Our method is based on posing metric learning as a convex optimization problem, which allows us to give efficient, local-optima-free algorithms. We also demonstrate empirically that the learned metrics can be used to significantly improve clustering performance.", "Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. 
Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.", "We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. 
We also validate the approach for descriptor learning task in the context of person re-identification application.", "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings." ] }
1703.02722
2949887587
DGCC protocol has been shown to achieve good performance on multi-core in-memory systems. However, distributed transactions complicate dependency resolution, and therefore an effective transaction partitioning strategy is essential to reduce expensive multi-node distributed transactions. During failure recovery, the log must be examined from the last checkpoint onwards, and the affected transactions are re-executed based on the way they were partitioned and executed. Existing approaches treat transaction management and recovery as two separate problems, even though recovery depends on the sequence in which transactions are executed. In this paper, we propose to treat the transaction management and recovery problems as one. We first propose an efficient Distributed Dependency Graph based Concurrency Control (DistDGCC) protocol for handling transactions spanning multiple nodes, and then propose a novel and efficient logging protocol called Dependency Logging that also makes use of dependency graphs for efficient logging and recovery. DistDGCC optimizes the average cost of each distributed transaction by processing transactions in batches. Moreover, it reduces the effects of thread blocking caused by distributed transactions and consequently improves runtime performance. Further, dependency logging exploits the same data structure used by DistDGCC to reduce logging overhead, as well as the logical dependency information to improve recovery parallelism. Extensive experiments are conducted to evaluate the performance of our proposed techniques against state-of-the-art techniques. Experimental results show that DistDGCC is efficient and scalable, and that dependency logging supports fast recovery with marginal runtime overhead. Hence, overall system performance is significantly improved.
A highly efficient concurrency control protocol that ensures the correct execution of concurrent transactions is vital for in-memory database systems. Two-phase locking (2PL) @cite_45 and Optimistic Concurrency Control (OCC) @cite_51 are the most widely adopted. As a pessimistic protocol, 2PL acquires a lock before accessing a tuple and releases it after the transaction commits or aborts; conflicting operations are thus resolved in advance and executed in sequence. In contrast, OCC assumes that conflicts are rare and does not check for them during transaction execution. Instead, each transaction maintains read and write sets and performs a conflict validation; a transaction commits only when the validation phase passes, otherwise it restarts or aborts. With the advancement of new hardware techniques, many research efforts have been devoted to improving the efficiency of concurrency control protocols.
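The OCC validate-then-install pattern described above can be sketched as a toy version-checking commit. This is an illustrative sketch only (the dict-based store and the `Transaction`/`occ_commit` names are hypothetical), not the protocol of any cited system:

```python
class Transaction:
    def __init__(self):
        self.read_set = {}    # key -> version observed at read time
        self.write_set = {}   # key -> new value to install

def occ_commit(db_versions, db_values, txn):
    """Backward validation for OCC.

    Commit succeeds only if every version the transaction read is still
    current; otherwise validation fails and the transaction must restart
    or abort. `db_versions`/`db_values` are dicts standing in for the store.
    """
    for key, seen_version in txn.read_set.items():
        if db_versions.get(key, 0) != seen_version:
            return False                      # conflict: validation fails
    for key, value in txn.write_set.items():  # install writes after validation
        db_values[key] = value
        db_versions[key] = db_versions.get(key, 0) + 1
    return True
```

A 2PL execution would instead block at the first conflicting access; here the conflict surfaces only at commit time.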
{ "cite_N": [ "@cite_45", "@cite_51" ], "mid": [ "1991199257", "2133386065" ], "abstract": [ "In database systems, users access shared data under the assumption that the data satisfies certain consistency constraints. This paper defines the concepts of transaction, consistency and schedule and shows that consistency requires that a transaction cannot request new locks after releasing a lock. Then it is argued that a transaction needs to lock a logical rather than a physical subset of the database. These subsets may be specified by predicates. An implementation of predicate locks which satisfies the consistency condition is suggested.", "Most current approaches to concurrency control in database systems rely on locking of data objects as a control mechanism. In this paper, two families of nonlocking concurrency controls are presented. The methods used are “optimistic” in the sense that they rely mainly on transaction backup as a control mechanism, “hoping” that conflicts between transactions will not occur. Applications for which these methods should be more efficient than locking are discussed." ] }
1703.02722
2949887587
DGCC protocol has been shown to achieve good performance on multi-core in-memory systems. However, distributed transactions complicate dependency resolution, and therefore an effective transaction partitioning strategy is essential to reduce expensive multi-node distributed transactions. During failure recovery, the log must be examined from the last checkpoint onwards, and the affected transactions are re-executed based on the way they were partitioned and executed. Existing approaches treat transaction management and recovery as two separate problems, even though recovery depends on the sequence in which transactions are executed. In this paper, we propose to treat the transaction management and recovery problems as one. We first propose an efficient Distributed Dependency Graph based Concurrency Control (DistDGCC) protocol for handling transactions spanning multiple nodes, and then propose a novel and efficient logging protocol called Dependency Logging that also makes use of dependency graphs for efficient logging and recovery. DistDGCC optimizes the average cost of each distributed transaction by processing transactions in batches. Moreover, it reduces the effects of thread blocking caused by distributed transactions and consequently improves runtime performance. Further, dependency logging exploits the same data structure used by DistDGCC to reduce logging overhead, as well as the logical dependency information to improve recovery parallelism. Extensive experiments are conducted to evaluate the performance of our proposed techniques against state-of-the-art techniques. Experimental results show that DistDGCC is efficient and scalable, and that dependency logging supports fast recovery with marginal runtime overhead. Hence, overall system performance is significantly improved.
ARIES @cite_16 is the most widely used logging approach in traditional database systems. For systems that maintain data in memory, new recovery techniques @cite_20 @cite_39 @cite_4 @cite_29 have been put forward, most of which inherit ideas from ARIES. Logical logging techniques @cite_1 @cite_33 have recently been proposed to reduce the log size. While they improve runtime performance by reducing the number of disk I/Os to some extent, they usually incur expensive recovery costs, especially in distributed environments @cite_21 . Many optimizations have also been proposed to increase the efficiency of logging and recovery. @cite_5 makes use of shadow pages to reduce the log size at runtime, while @cite_47 reduces lock contention on the log buffers and the effects of context switching to improve logging performance.
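To make the logical-logging trade-off concrete, here is a toy sketch of command logging with replay-based recovery: the log records only each transaction's identity and arguments (small at runtime), and recovery must re-execute every logged command after the checkpoint, in order (expensive). All names are illustrative; real systems add durability, checkpointing, and concurrency handling:

```python
def run_with_command_log(db, log, txn_name, txn_fn, args):
    """Command logging: record the transaction's identity and arguments
    instead of the tuples it changed, then execute it. `txn_fn` must be
    deterministic for replay to reproduce the same state.
    """
    log.append((txn_name, args))   # one small record per transaction
    txn_fn(db, *args)

def recover(checkpoint, log, registry):
    """Rebuild state by replaying every logged command after the
    checkpoint, sequentially and in log order.
    """
    db = dict(checkpoint)
    for txn_name, args in log:
        registry[txn_name](db, *args)
    return db
```

ARIES-style data logging would instead record the before/after tuple images, making the log larger but letting recovery skip re-execution.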
{ "cite_N": [ "@cite_4", "@cite_33", "@cite_29", "@cite_21", "@cite_1", "@cite_39", "@cite_5", "@cite_47", "@cite_16", "@cite_20" ], "mid": [ "", "2145170453", "", "2431277952", "2071414195", "", "2102712231", "2118995439", "2104954161", "2118411843" ], "abstract": [ "", "New hardware platforms, e.g. cloud, multi-core, etc., have led to a reconsideration of database system architecture. Our Deuteronomy project separates transactional functionality from data management functionality, enabling a flexible response to exploiting new platforms. This separation requires, however, that recovery is described logically. In this paper, we extend current recovery methods to work in this logical setting. While this is straightforward in principle, performance is an issue. We show how ARIES style recovery optimizations can work for logical recovery where page information is not captured on the log. In side-by-side performance experiments using a common log, we compare logical recovery with a state-of-the-art ARIES style recovery implementation and show that logical redo performance can be competitive.", "", "By maintaining the data in main memory, in-memory databases dramatically reduce the I/O cost of transaction processing. However, for recovery purposes, in-memory systems still need to flush the log to disk, which incurs a substantial number of I/Os. Recently, command logging has been proposed to replace the traditional data log (e.g., ARIES logging) in in-memory databases. Instead of recording how the tuples are updated, command logging only tracks the transactions that are being executed, thereby effectively reducing the size of the log and improving the performance. However, when a failure occurs, all the transactions in the log after the last checkpoint must be redone sequentially and this significantly increases the cost of recovery. 
In this paper, we first extend the command logging technique to a distributed system, where all the nodes can perform their recovery in parallel. We show that in a distributed system, the only bottleneck of recovery caused by command logging is the synchronization process that attempts to resolve the data dependency among the transactions. We then propose an adaptive logging approach by combining data logging and command logging. The percentage of data logging versus command logging becomes a tuning knob between the performance of transaction processing and recovery to meet different OLTP requirements, and a model is proposed to guide such tuning. Our experimental study compares the performance of our proposed adaptive logging, ARIES-style data logging and command logging on top of H-Store. The results show that adaptive logging can achieve a 10x boost for recovery and a transaction throughput that is comparable to that of command logging.", "Fine-grained, record-oriented write-ahead logging, as exemplified by systems like ARIES, has been the gold standard for relational database recovery. In this paper, we show that in modern high-throughput transaction processing systems, this is no longer the optimal way to recover a database system. In particular, as transaction throughputs get higher, ARIES-style logging starts to represent a non-trivial fraction of the overall transaction execution time.", "", "The impact of updating policy and access pattern on the performance of post-crash log processing with a fuzzy checkpointing main memory database (MMDB) is discussed. The problem of restoring the database to a consistent state and several algorithms for post-crash log processing under the various updating alternatives are reviewed. Using an analytic model, the checkpoint behavior and post-crash log processing performance of these algorithms are examined. Analytic results show that deferred updating always takes less time to process the log after a crash. 
", "The shift to multi-core hardware brings new challenges to database systems, as the software parallelism determines performance. Even though database systems traditionally accommodate simultaneous requests, a multitude of synchronization barriers serialize execution. Write-ahead logging is a fundamental, omnipresent component in ARIES-style concurrency and recovery, and one of the most important yet-to-be addressed potential bottlenecks, especially in OLTP workloads making frequent small changes to data. In this paper, we identify four logging-related impediments to database system scalability. Each issue challenges a different level in the software architecture: (a) the high volume of small-sized I/O requests may saturate the disk, (b) transactions hold locks while waiting for the log flush, (c) extensive context switching overwhelms the OS scheduler with threads executing log I/Os, and (d) contention appears as transactions serialize accesses to in-memory log data structures. We demonstrate these problems and address them with techniques that, when combined, comprise a holistic, scalable approach to logging. Our solution achieves a 20%-69% speedup over a modern database system when running log-intensive workloads, such as the TPC-B and TATP benchmarks. Moreover, it achieves log insert throughput over 1.8GB/s for small log records on a single socket server, an order of magnitude higher than the traditional way of accessing the log using a single mutex.", "DB2™, IMS, and Tandem™ systems. ARIES is applicable not only to database management systems but also to persistent object-oriented languages, recoverable file systems and transaction-based operating systems. 
ARIES has been implemented, to varying degrees, in IBM's OS/2™ Extended Edition Database Manager, DB2, Workstation Data Save Facility/VM, Starburst and QuickSilver, and in the University of Wisconsin's EXODUS and Gamma database machine.", "Performance needs of many database applications dictate that the entire database be stored in main memory. The Dali system is a main memory storage manager designed to provide the persistence, availability and safety guarantees one typically expects from a disk-resident database, while at the same time providing very high performance by virtue of being tuned to support in-memory data. Dali follows the philosophy of treating all data, including system data, uniformly as database files that can be memory mapped and directly accessed/updated by user processes. Direct access provides high performance; slower, but more secure, access is also provided through the use of a server process. Various features of Dali can be tailored to the needs of an application to achieve high performance - for example, concurrency control and logging can be turned off if not desired, which enables Dali to efficiently support applications that require non-persistent memory resident data to be shared by multiple processes. Both object-oriented and relational databases can be implemented on top of Dali." ] }
1703.02952
2596378825
The increasing quality of smartphone cameras and variety of photo editing applications, in addition to the rise in popularity of image-centric social media, have all led to a phenomenal growth in mobile-based photography. Advances in computer vision and machine learning techniques provide a large number of cloud-based services with the ability to provide content analysis, face recognition, and object detection facilities to third parties. These inferences and analytics might come with undesired privacy risks to the individuals. In this paper, we address a fundamental challenge: Can we utilize the local processing capabilities of modern smartphones efficiently to provide desired features to approved analytics services, while protecting against undesired inference attacks and preserving privacy on the cloud? We propose a hybrid architecture for a distributed deep learning model between the smartphone and the cloud. We rely on the Siamese network and machine learning approaches for providing privacy based on defined privacy constraints. We also use transfer learning techniques to evaluate the proposed method. Using the latest deep learning models for Face Recognition, Emotion Detection, and Gender Classification techniques, we demonstrate the effectiveness of our technique in providing highly accurate classification results for the desired analytics, while proving strong privacy guarantees.
Using pre-trained deep learning models can increase the accuracy of different mobile sensing tasks; e.g., in @cite_3 , Lane et al. use a 3-layer DNN that does not overburden the hardware. More complex networks with more layers need more processing power. Deeper DNN architectures, such as the 16-layer model proposed in @cite_19 and the 8-layer model proposed in @cite_38 , are implemented on mobile devices in @cite_41 , and their resource usage, such as time, CPU, and energy overhead, is reported. As most state-of-the-art DNNs are large, fully evaluating all layers on a mobile device incurs serious processing-time and memory costs. Several methods have been proposed to approximate these complex functions with simpler ones to reduce the cost of inference: Kim et al. @cite_41 compress deep models, and the authors of @cite_33 use sparsification and kernel separation. However, the efficiency gains of these methods come at the cost of a decrease in model accuracy.
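As one generic illustration of approximating a complex layer with simpler ones, a dense weight matrix can be replaced by two low-rank factors via truncated SVD. This is a standard compression sketch, not the specific Tucker-decomposition or sparsification schemes of the cited papers:

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Replace a dense weight matrix W (m x n) with factors A (m x r)
    and B (r x n), shrinking m*n parameters to r*(m+n). Truncated SVD
    gives the best rank-r approximation in Frobenius norm.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # absorb singular values into A
    B = Vt[:rank, :]
    return A, B
```

At inference time the single multiply `x @ W` becomes `(x @ A) @ B`, which is cheaper whenever `r` is much smaller than `min(m, n)`; accuracy degrades as `r` shrinks, matching the efficiency/accuracy trade-off noted above.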
{ "cite_N": [ "@cite_38", "@cite_33", "@cite_41", "@cite_3", "@cite_19" ], "mid": [ "1994002998", "2546536770", "2177847924", "1991539813", "1686810756" ], "abstract": [ "The latest generation of Convolutional Neural Networks (CNN) have achieved impressive results in challenging benchmarks on image recognition and object detection, significantly raising the interest of the community in these methods. Nevertheless, it is still unclear how different CNN methods compare with each other and with previous state-of-the-art shallow representations such as the Bag-of-Visual-Words and the Improved Fisher Vector. This paper conducts a rigorous evaluation of these new techniques, exploring different deep architectures and comparing them on a common ground, identifying and disclosing important implementation details. We identify several useful properties of CNN-based representations, including the fact that the dimensionality of the CNN output layer can be reduced significantly without having an adverse effect on performance. We also identify aspects of deep and shallow methods that can be successfully shared. In particular, we show that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods, and result in an analogous performance boost. Source code and models to reproduce the experiments in the paper is made publicly available.", "Deep learning has revolutionized the way sensor data are analyzed and interpreted. The accuracy gains these approaches offer make them attractive for the next generation of mobile, wearable and embedded sensory applications. However, state-of-the-art deep learning algorithms typically require a significant amount of device and processor resources, even just for the inference stages that are used to discriminate high-level classes from low-level data. 
The limited availability of memory, computation, and energy on mobile and embedded platforms thus poses a significant challenge to the adoption of these powerful learning techniques. In this paper, we propose SparseSep, a new approach that leverages the sparsification of fully connected layers and separation of convolutional kernels to reduce the resource requirements of popular deep learning algorithms. As a result, SparseSep allows large-scale DNNs and CNNs to run efficiently on mobile and embedded hardware with only minimal impact on inference accuracy. We experiment using SparseSep across a variety of common processors such as the Qualcomm Snapdragon 400, ARM Cortex M0 and M3, and Nvidia Tegra K1, and show that it allows inference for various deep models to execute more efficiently; for example, on average requiring 11.3 times less memory and running 13.3 times faster on these representative platforms.
In addition, we address the important implementation level issue on 1×1 convolution, which is a key operation of inception module of GoogLeNet as well as CNNs compressed by our proposed scheme.", "Sensor-equipped smartphones and wearables are transforming a variety of mobile apps ranging from health monitoring to digital assistants. However, reliably inferring user behavior and context from noisy and complex sensor data collected under mobile device constraints remains an open problem, and a key bottleneck to sensor app development. In recent years, advances in the field of deep learning have resulted in nearly unprecedented gains in related inference tasks such as speech and object recognition. However, although mobile sensing shares many of the same data modeling challenges, we have yet to see deep learning be systematically studied within the sensing domain. If deep learning could lead to significantly more robust and efficient mobile sensor inference it would revolutionize the field by rapidly expanding the number of sensor apps ready for mainstream usage. In this paper, we provide preliminary answers to this potentially game-changing question by prototyping a low-power Deep Neural Network (DNN) inference engine that exploits both the CPU and DSP of a mobile device SoC. We use this engine to study typical mobile sensing tasks (e.g., activity recognition) using DNNs, and compare results to learning techniques in more common usage. Our early findings provide illustrative examples of DNN usage that do not overburden modern mobile hardware, while also indicating how they can improve inference accuracy. Moreover, we show DNNs can gracefully scale to larger numbers of inference classes and can be flexibly partitioned across mobile and remote resources. 
Collectively, these results highlight the critical need for further exploration as to how the field of mobile sensing can best make use of advances in deep learning towards robust and efficient sensor inference.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision." ] }
1703.02952
2596378825
The increasing quality of smartphone cameras and variety of photo editing applications, in addition to the rise in popularity of image-centric social media, have all led to a phenomenal growth in mobile-based photography. Advances in computer vision and machine learning techniques provide a large number of cloud-based services with the ability to provide content analysis, face recognition, and object detection facilities to third parties. These inferences and analytics might come with undesired privacy risks to the individuals. In this paper, we address a fundamental challenge: Can we utilize the local processing capabilities of modern smartphones efficiently to provide desired features to approved analytics services, while protecting against undesired inference attacks and preserving privacy on the cloud? We propose a hybrid architecture for a distributed deep learning model between the smartphone and the cloud. We rely on the Siamese network and machine learning approaches for providing privacy based on defined privacy constraints. We also use transfer learning techniques to evaluate the proposed method. Using the latest deep learning models for Face Recognition, Emotion Detection, and Gender Classification techniques, we demonstrate the effectiveness of our technique in providing highly accurate classification results for the desired analytics, while proving strong privacy guarantees.
Several processors on a mobile device can be used for inference, including the CPU, GPU, and DSP. Alternative DNN implementations use the GPU for faster processing; however, the GPU implementation in @cite_41 burdens the battery, making it infeasible for applications that users run frequently or that require continuous operation over long periods @cite_32 . On the other hand, recent devices include DSP modules, though their programmability and storage capacity can be limited. To tackle these problems, Lane et al. @cite_32 implemented DeepX, a software accelerator for large-scale DNNs that reduces resource usage during inference by exploiting different kinds of mobile processors simultaneously.
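The idea of spreading inference across heterogeneous execution engines can be caricatured as a layer-wise split, in the spirit of DeepX's unit-block decomposition. This is a deliberately simplified sketch: `run_local`/`run_remote` are hypothetical stand-ins for two engines (e.g., DSP vs. GPU, or device vs. cloud), and the principled resource-scaling algorithms of the cited work are omitted:

```python
def partitioned_inference(layers, x, split, run_local, run_remote):
    """Run the first `split` layers on one engine and the rest on
    another. `layers` is an ordered list of layer computations; each
    engine is a callable (layer, activation) -> activation.
    """
    for layer in layers[:split]:
        x = run_local(layer, x)    # e.g., low-power processor on-device
    for layer in layers[split:]:
        x = run_remote(layer, x)   # e.g., GPU, or an off-device service
    return x
```

Choosing `split` trades local compute/energy against transfer cost of the intermediate activation, which is exactly the knob a scheduler like DeepX tunes.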
{ "cite_N": [ "@cite_41", "@cite_32" ], "mid": [ "2177847924", "2297325673" ], "abstract": [ "Although the latest high-end smartphone has powerful CPU and GPU, running deeper convolutional neural networks (CNNs) for complex tasks such as ImageNet classification on mobile devices is challenging. To deploy deep CNNs on mobile devices, we present a simple and effective scheme to compress the entire CNN, which we call one-shot whole network compression. The proposed scheme consists of three steps: (1) rank selection with variational Bayesian matrix factorization, (2) Tucker decomposition on kernel tensor, and (3) fine-tuning to recover accumulated loss of accuracy, and each step can be easily implemented using publicly available tools. We demonstrate the effectiveness of the proposed scheme by testing the performance of various compressed CNNs (AlexNet, VGGS, GoogLeNet, and VGG-16) on the smartphone. Significant reductions in model size, runtime, and energy consumption are obtained, at the cost of small loss in accuracy. In addition, we address the important implementation level issue on 1×1 convolution, which is a key operation of inception module of GoogLeNet as well as CNNs compressed by our proposed scheme.", "Breakthroughs from the field of deep learning are radically changing how sensor data are interpreted to extract the high-level information needed by mobile apps. It is critical that the gains in inference accuracy that deep models afford become embedded in future generations of mobile apps. In this work, we present the design and implementation of DeepX, a software accelerator for deep learning execution. DeepX significantly lowers the device resources (viz. memory, computation, energy) required by deep learning that currently act as a severe bottleneck to mobile adoption. 
The foundation of DeepX is a pair of resource control algorithms, designed for the inference stage of deep learning, that: (1) decompose monolithic deep model network architectures into unit-blocks of various types, that are then more efficiently executed by heterogeneous local device processors (e.g., GPUs, CPUs); and (2), perform principled resource scaling that adjusts the architecture of deep models to shape the overhead each unit-block introduces. Experiments show DeepX can allow even large-scale deep learning models to execute efficiently on modern mobile processors and significantly outperform existing solutions, such as cloud-based offloading." ] }
1703.02952
2596378825
The increasing quality of smartphone cameras and variety of photo editing applications, in addition to the rise in popularity of image-centric social media, have all led to a phenomenal growth in mobile-based photography. Advances in computer vision and machine learning techniques provide a large number of cloud-based services with the ability to provide content analysis, face recognition, and object detection facilities to third parties. These inferences and analytics might come with undesired privacy risks to the individuals. In this paper, we address a fundamental challenge: Can we utilize the local processing capabilities of modern smartphones efficiently to provide desired features to approved analytics services, while protecting against undesired inference attacks and preserving privacy on the cloud? We propose a hybrid architecture for a distributed deep learning model between the smartphone and the cloud. We rely on the Siamese network and machine learning approaches for providing privacy based on defined privacy constraints. We also use transfer learning techniques to evaluate the proposed method. Using the latest deep learning models for Face Recognition, Emotion Detection, and Gender Classification techniques, we demonstrate the effectiveness of our technique in providing highly accurate classification results for the desired analytics, while proving strong privacy guarantees.
Differential privacy @cite_24 is another method that provides a precise way to publish statistics of a database with a specified amount of privacy. A learning model trained on a dataset can be considered a high-level statistic of that dataset. Recently, @cite_47 raised privacy concerns for deep learning, and @cite_12 provided a differentially private deep learning model.
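The mechanism behind publishing a statistic "with a specified amount of privacy" can be illustrated with the classic Laplace mechanism. The sketch below is not from any of the cited papers; it is a minimal, self-contained example assuming a counting query, whose sensitivity is 1, so Laplace noise with scale 1/epsilon achieves epsilon-differential privacy.

```python
import math
import random


def laplace_noise(scale):
    # Inverse-CDF sampling of the Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def dp_count(records, predicate, epsilon):
    """Release a count query with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so noise of scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller epsilon means more noise and stronger privacy; the differentially private deep learning of @cite_12 applies the same accounting idea to gradient updates rather than to a single count.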
{ "cite_N": [ "@cite_24", "@cite_47", "@cite_12" ], "mid": [ "2109426455", "2053637704", "2473418344" ], "abstract": [ "Over the past five years a new approach to privacy-preserving data analysis has born fruit [13, 18, 7, 19, 5, 37, 35, 8, 32]. This approach differs from much (but not all!) of the related literature in the statistics, databases, theory, and cryptography communities, in that a formal and ad omnia privacy guarantee is defined, and the data analysis techniques presented are rigorously proved to satisfy the guarantee. The key privacy guarantee that has emerged is differential privacy. Roughly speaking, this ensures that (almost, and quantifiably) no risk is incurred by joining a statistical database. In this survey, we recall the definition of differential privacy and two basic techniques for achieving it. We then show some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning.", "Deep learning based on artificial neural networks is a very popular approach to modeling, classifying, and recognizing complex data such as images, speech, and text. The unprecedented accuracy of deep learning methods has turned them into the foundation of new AI-based services on the Internet. Commercial companies that collect user data on a large scale have been the main beneficiaries of this trend since the success of deep learning techniques is directly proportional to the amount of data available for training. Massive data collection required for deep learning presents obvious privacy issues. Users' personal, highly sensitive data such as photos and voice recordings is kept indefinitely by the companies that collect it. Users can neither delete it, nor restrict the purposes for which it is used. Furthermore, centrally kept data is subject to legal subpoenas and extra-judicial surveillance. 
Many data owners--for example, medical institutions that may want to apply deep learning methods to clinical records--are prevented by privacy and confidentiality concerns from sharing the data and thus benefitting from large-scale deep learning. In this paper, we design, implement, and evaluate a practical system that enables multiple parties to jointly learn an accurate neural-network model for a given objective without sharing their input datasets. We exploit the fact that the optimization algorithms used in modern deep learning, namely, those based on stochastic gradient descent, can be parallelized and executed asynchronously. Our system lets participants train independently on their own datasets and selectively share small subsets of their models' key parameters during training. This offers an attractive point in the utility privacy tradeoff space: participants preserve the privacy of their respective data while still benefitting from other participants' models and thus boosting their learning accuracy beyond what is achievable solely on their own inputs. We demonstrate the accuracy of our privacy-preserving deep learning on benchmark datasets.", "Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality." ] }
1703.02638
2596091892
A geometrical pattern is a set of points with all pairwise distances (or, more generally, relative distances) specified. Finding matches to such patterns has applications to spatial data in seismic, astronomical, and transportation contexts. For example, a particularly interesting geometric pattern in astronomy is the Einstein cross, which is an astronomical phenomenon in which a single quasar is observed as four distinct sky objects (due to gravitational lensing) when captured by earth telescopes. Finding such crosses, as well as other geometric patterns, is a challenging problem as the potential number of sets of elements that compose shapes is exponentially large in the size of the dataset and the pattern. In this paper, we denote geometric patterns as constellation queries and propose algorithms to find them in large data applications. Our methods combine quadtrees, matrix multiplication, and unindexed join processing to discover sets of points that match a geometric pattern within some additive factor on the pairwise distances. Our distributed experiments show that the choice of composition algorithm (matrix multiplication or nested loops) depends on the freedom introduced in the query geometry through the distance additive factor. Three clearly identified blocks of threshold values guide the choice of the best composition algorithm. Finally, solving the problem for relative distances requires a novel continuous-to-discrete transformation. To the best of our knowledge this paper is the first to investigate constellation queries at scale.
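The matching criterion described above (all pairwise distances reproduced "within some additive factor") can be made concrete with a naive brute-force check. This is an illustrative sketch, not the paper's algorithm: the quadtree and matrix-multiplication methods exist precisely to avoid the exponential enumeration this version performs over candidate orderings.

```python
import math
from itertools import permutations


def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])


def matches(pattern, candidate, eps):
    """Return True if some ordering of `candidate` realizes every pairwise
    distance of `pattern` within an additive tolerance `eps`."""
    n = len(pattern)
    if len(candidate) != n:
        return False
    for perm in permutations(candidate):
        if all(abs(dist(pattern[i], pattern[j]) - dist(perm[i], perm[j])) <= eps
               for i in range(n) for j in range(i + 1, n)):
            return True
    return False
```

Because only pairwise distances are compared, the check is invariant to translation, rotation, and reflection of the candidate set, which is the property a constellation query needs.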
Finding collections of objects having some metric relationship of interest is an area with many applications. The problem has different names depending on the discipline, including @cite_0 , @cite_13 , and @cite_18 .
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_13" ], "mid": [ "2118384078", "2057175746", "2014871820" ], "abstract": [ "We introduce a new shape descriptor, the shape context, for measuring shape similarity and recovering point correspondences. The shape context describes the coarse arrangement of the shape with respect to a point inside or on the boundary of the shape. We use the shape context as a vector-valued attribute in a bipartite graph matching framework. Our proposed method makes use of a relatively small number of sample points selected from the set of detected edges; no special landmarks or keypoints are necessary. Tolerance and or invariance to common image transformations are available within our framework. Using examples involving both silhouettes and edge images, we demonstrate how the solution to the graph matching problem provides us with correspondences and a dissimilarity score that can be used for object recognition and similarity-based retrieval.", "We present a novel approach to measuring similarity between shapes and exploit it for object recognition. In our framework, the measurement of similarity is preceded by: (1) solving for correspondences between points on the two shapes; (2) using the correspondences to estimate an aligning transform. In order to solve the correspondence problem, we attach a descriptor, the shape context, to each point. The shape context at a reference point captures the distribution of the remaining points relative to it, thus offering a globally discriminative characterization. Corresponding points on two similar shapes will have similar shape contexts, enabling us to solve for correspondences as an optimal assignment problem. Given the point correspondences, we estimate the transformation that best aligns the two shapes; regularized thin-plate splines provide a flexible class of transformation maps for this purpose. 
The dissimilarity between the two shapes is computed as a sum of matching errors between corresponding points, together with a term measuring the magnitude of the aligning transform. We treat recognition in a nearest-neighbor classification framework as the problem of finding the stored prototype shape that is maximally similar to that in the image. Results are presented for silhouettes, trademarks, handwritten digits, and the COIL data set.", "The growing popularity of graph databases has generated interesting data management problems, such as subgraph search, shortest path query, reachability verification, and pattern matching. Among these, a pattern match query is more flexible compared with a subgraph search and more informative compared with a shortest path or a reachability query. In this paper, we address distance-based pattern match queries over a large data graph G. Due to the huge search space, we adopt a filter-and-refine framework to answer a pattern match query over a large graph. We first find a set of candidate matches by a graph embedding technique and then evaluate these to find the exact matches. Extensive experiments confirm the superiority of our method." ] }
1703.02638
2596091892
A geometrical pattern is a set of points with all pairwise distances (or, more generally, relative distances) specified. Finding matches to such patterns has applications to spatial data in seismic, astronomical, and transportation contexts. For example, a particularly interesting geometric pattern in astronomy is the Einstein cross, which is an astronomical phenomenon in which a single quasar is observed as four distinct sky objects (due to gravitational lensing) when captured by earth telescopes. Finding such crosses, as well as other geometric patterns, is a challenging problem as the potential number of sets of elements that compose shapes is exponentially large in the size of the dataset and the pattern. In this paper, we denote geometric patterns as constellation queries and propose algorithms to find them in large data applications. Our methods combine quadtrees, matrix multiplication, and unindexed join processing to discover sets of points that match a geometric pattern within some additive factor on the pairwise distances. Our distributed experiments show that the choice of composition algorithm (matrix multiplication or nested loops) depends on the freedom introduced in the query geometry through the distance additive factor. Three clearly identified blocks of threshold values guide the choice of the best composition algorithm. Finally, solving the problem for relative distances requires a novel continuous-to-discrete transformation. To the best of our knowledge this paper is the first to investigate constellation queries at scale.
In a subgraph query, a query is a connected set of nodes and edges (which may or may not be labeled). A match is a (usually non-induced) subgraph of a large graph that is isomorphic to the query. While the literature in that field is vast [ @cite_7 , @cite_13 , @cite_16 ], the problem is fundamentally different, because there is no notion of space (so data structures like quadtrees are useless) and there is no distance notion of scale (the @math that plays such a big role for us).
{ "cite_N": [ "@cite_16", "@cite_13", "@cite_7" ], "mid": [ "2145631416", "2014871820", "2111607365" ], "abstract": [ "GraphGrep is an application-independent method for querying graphs, finding all the occurrences of a subgraph in a database of graphs. The interface to GraphGrep is a regular expression graph query language Glide that combines features from Xpath and Smart. Glide incorporates both single node and variable-length wildcards. Our algorithm uses hash-based fingerprinting to represent the graphs in an abstract form and to filter the database. GraphGrep has been tested on databases of size up to 16,000 molecules and performs well in this entire range.", "The growing popularity of graph databases has generated interesting data management problems, such as subgraph search, shortest path query, reachability verification, and pattern matching. Among these, a pattern match query is more flexible compared with a subgraph search and more informative compared with a shortest path or a reachability query. In this paper, we address distance-based pattern match queries over a large data graph G. Due to the huge search space, we adopt a filter-and-refine framework to answer a pattern match query over a large graph. We first find a set of candidate matches by a graph embedding technique and then evaluate these to find the exact matches. Extensive experiments confirm the superiority of our method.", "The growing popularity of graph databases has generated interesting data management problems, such as subgraph search, shortest-path query, reachability verification, and pattern match. Among these, a pattern match query is more flexible compared to a subgraph search and more informative compared to a shortest-path or reachability query. In this paper, we address pattern match problems over a large data graph G. Specifically, given a pattern graph (i.e., query Q), we want to find all matches (in G) that have the similar connections as those in Q. 
In order to reduce the search space significantly, we first transform the vertices into points in a vector space via graph embedding techniques, converting a pattern match query into a distance-based multi-way join problem over the converted vector space. We also propose several pruning strategies and a join order selection method to perform join processing efficiently. Extensive experiments on both real and synthetic datasets show that our method outperforms existing ones by orders of magnitude." ] }
1703.02403
2950672178
We provide novel theoretical insights on structured prediction in the context of efficient convex surrogate loss minimization with consistency guarantees. For any task loss, we construct a convex surrogate that can be optimized via stochastic gradient descent and we prove tight bounds on the so-called "calibration function" relating the excess surrogate risk to the actual risk. In contrast to prior related work, we carefully monitor the effect of the exponential number of classes in the learning guarantees as well as on the optimization complexity. As an interesting consequence, we formalize the intuition that some task losses make learning harder than others, and that the classical 0-1 loss is ill-suited for general structured prediction.
Building on significant progress for the case of binary classification, see, e.g., @cite_7 , there has been a lot of interest in the multi-class case. and analyze the consistency of many existing surrogates for the 0-1 loss. focus on multi-label classification. provide a consistent algorithm for arbitrary multi-class loss defined by a function of the confusion matrix. Recently, introduce the notion of convex calibrated dimension, as the minimal dimensionality of the score vector that is required for consistency. In particular, they showed that for the Hamming loss on @math binary variables, this dimension is at most @math . In our analysis, we use scores of rank @math , see App. , yielding a similar result.
{ "cite_N": [ "@cite_7" ], "mid": [ "2023163512" ], "abstract": [ "We study how closely the optimal Bayes error rate can be approximately reached using a classification algorithm that computes a classifier by minimizing a convex upper bound of the classification error function. The measurement of closeness is characterized by the loss function used in the estimation. We show that such a classification scheme can be generally regarded as a (nonmaximum-likelihood) conditional in-class probability estimate, and we use this analysis to compare various convex loss functions that have appeared in the literature. Furthermore, the theoretical insight allows us to design good loss functions with desirable properties. Another aspect of our analysis is to demonstrate the consistency of certain classification methods using convex risk minimization. This study sheds light on the good performance of some recently proposed linear classification methods including boosting and support vector machines. It also shows their limitations and suggests possible improvements." ] }
1703.02403
2950672178
We provide novel theoretical insights on structured prediction in the context of efficient convex surrogate loss minimization with consistency guarantees. For any task loss, we construct a convex surrogate that can be optimized via stochastic gradient descent and we prove tight bounds on the so-called "calibration function" relating the excess surrogate risk to the actual risk. In contrast to prior related work, we carefully monitor the effect of the exponential number of classes in the learning guarantees as well as on the optimization complexity. As an interesting consequence, we formalize the intuition that some task losses make learning harder than others, and that the classical 0-1 loss is ill-suited for general structured prediction.
The task of ranking has attracted a lot of attention and @cite_9 @cite_10 @cite_2 @cite_0 analyze different families of surrogate and task losses proving their (in-)consistency. In this line of work, propose a quadratic surrogate for an arbitrary low rank loss which is related to our quadratic surrogate . They also prove that several important ranking losses, i.e., precision@q, expected rank utility, mean average precision and pairwise disagreement, are of low-rank. We conjecture that our approach is compatible with these losses and leave precise connections as future work.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_10", "@cite_2" ], "mid": [ "2669179452", "2114028889", "2212774062", "2167247347" ], "abstract": [ "", "We present a theoretical analysis of supervised ranking, providing necessary and sufficient conditions for the asymptotic consistency of algorithms based on minimizing a surrogate loss function. We show that many commonly used surrogate losses are inconsistent; surprisingly, we show inconsistency even in low-noise settings. We present a new value-regularized linear loss, establish its consistency under reasonable assumptions on noise, and show that it outperforms conventional ranking losses in a collaborative filtering experiment.", "We address the problem of designing surrogate losses for learning scoring functions in the context of label ranking. We extend to ranking problems a notion of order-preserving losses previously introduced for multiclass classification, and show that these losses lead to consistent formulations with respect to a family of ranking evaluation metrics. An order-preserving loss can be tailored for a given evaluation metric by appropriately setting some weights depending on this metric and the observed supervision. These weights, called the standard form of the supervision, do not always exist, but we show that previous consistency results for ranking were proved in special cases where they do. We then evaluate a new pairwise loss consistent with the (Normalized) Discounted Cumulative Gain on benchmark datasets.", "We study surrogate losses for learning to rank, in a framework where the rankings are induced by scores and the task is to learn the scoring function. We focus on the calibration of surrogate losses with respect to a ranking evaluation metric, where the calibration is equivalent to the guarantee that near-optimal values of the surrogate risk imply near-optimal values of the risk defined by the evaluation metric. 
We prove that if a surrogate loss is a convex function of the scores, then it is not calibrated with respect to two evaluation metrics widely used for search engine evaluation, namely the Average Precision and the Expected Reciprocal Rank. We also show that such convex surrogate losses cannot be calibrated with respect to the Pairwise Disagreement, an evaluation metric used when learning from pair-wise preferences. Our results cast lights on the intrinsic difficulty of some ranking problems, as well as on the limitations of learning-to-rank algorithms based on the minimization of a convex surrogate risk." ] }
1703.02403
2950672178
We provide novel theoretical insights on structured prediction in the context of efficient convex surrogate loss minimization with consistency guarantees. For any task loss, we construct a convex surrogate that can be optimized via stochastic gradient descent and we prove tight bounds on the so-called "calibration function" relating the excess surrogate risk to the actual risk. In contrast to prior related work, we carefully monitor the effect of the exponential number of classes in the learning guarantees as well as on the optimization complexity. As an interesting consequence, we formalize the intuition that some task losses make learning harder than others, and that the classical 0-1 loss is ill-suited for general structured prediction.
SSVM is one of the most used convex surrogates for tasks with structured outputs; thus, its consistency has been a question of great interest. It is known that the Crammer-Singer multi-class SVM, which SSVM is built on, is not consistent for the 0-1 loss unless there is a majority class with probability at least @math . However, it is consistent for the "abstain" and ordinal losses in the case of @math classes @cite_5 . Structured ramp loss and probit surrogates are closely related to SSVM and are consistent, but not convex.
{ "cite_N": [ "@cite_5" ], "mid": [ "2188209982" ], "abstract": [ "We study consistency properties of surrogate loss functions for general multiclass learning problems, defined by a general multiclass loss matrix. We extend the notion of classification calibration, which has been studied for binary and multiclass 0-1 classification problems (and for certain other specific learning problems), to the general multiclass setting, and derive necessary and sufficient conditions for a surrogate loss to be calibrated with respect to a loss matrix in this setting. We then introduce the notion of convex calibration dimension of a multiclass loss matrix, which measures the smallest 'size' of a prediction space in which it is possible to design a convex surrogate that is calibrated with respect to the loss matrix. We derive both upper and lower bounds on this quantity, and use these results to analyze various loss matrices. In particular, we apply our framework to study various subset ranking losses, and use the convex calibration dimension as a tool to show both the existence and non-existence of various types of convex calibrated surrogates for these losses. Our results strengthen recent results of (2010) and (2012) on the non-existence of certain types of convex calibrated surrogates in subset ranking. We anticipate the convex calibration dimension may prove to be a useful tool in the study and design of surrogate losses for general multiclass learning problems." ] }
1703.02510
2949887059
With the emergence of cloud computing and sensor technologies, Big Data analytics for the Internet of Things (IoT) has become the main force behind many innovative solutions for our society's problems. This paper provides practical explanations for the question "why is the number of Big Data applications that succeed and have an effect on our daily life so limited, compared with all of the solutions proposed and tested in the literature?", with examples taken from Smart Grids. We argue that "noninvariants" are the most challenging issues in IoT applications, which can be easily revealed if we use the term "invariant" to replace the more common terms such as "information", "knowledge", or "insight" in any Big Data for IoT research. From our experience with developing Smart Grid applications, we produced a list of "noninvariants", which we believe to be the main causes of the gaps between Big Data in a laboratory and in practice in IoT applications. This paper also proposes Graph of Virtual Actors (GOVA) as a Big Data analytics architecture for IoT applications, which not only can solve the noninvariants issues, but can also quickly scale horizontally in terms of computation, data storage, caching requirements, and programmability of the system.
In Big Data, this problem becomes even worse, especially in IoT applications. With the unprecedented scale, speed, and range of data available from IoT technologies, the phenomena that we want to analyze, model, and predict are inherently more complex. Most of these phenomena are more dynamic than what is usually reflected by the data used to explain them, which causes the failures of Big Data in practice. Hence we proposed the term noninvariant to denote this type of problem. In the book Large Scale Inference @cite_0 , Efron provided various examples to help identify symptoms of the noninvariant problem in the biomedical field (although he did not use the term noninvariant). Section plays a similar role in the IoT field, in which we defined the term noninvariant in detail and presented various typical noninvariant examples taken from Smart Grid.
{ "cite_N": [ "@cite_0" ], "mid": [ "1510659740" ], "abstract": [ "Introduction and foreword 1. Empirical Bayes and the James-Stein estimator 2. Large-scale hypothesis testing 3. Significance testing algorithms 4. False discovery rate control 5. Local false discovery rates 6. Theoretical, permutation and empirical null distributions 7. Estimation accuracy 8. Correlation questions 9. Sets of cases (enrichment) 10. Combination, relevance, and comparability 11. Prediction and effect size estimation A. Exponential families B. Programs and data sets Bibliography Index." ] }
1703.02510
2949887059
With the emergence of cloud computing and sensor technologies, Big Data analytics for the Internet of Things (IoT) has become the main force behind many innovative solutions for our society's problems. This paper provides practical explanations for the question "why is the number of Big Data applications that succeed and have an effect on our daily life so limited, compared with all of the solutions proposed and tested in the literature?", with examples taken from Smart Grids. We argue that "noninvariants" are the most challenging issues in IoT applications, which can be easily revealed if we use the term "invariant" to replace the more common terms such as "information", "knowledge", or "insight" in any Big Data for IoT research. From our experience with developing Smart Grid applications, we produced a list of "noninvariants", which we believe to be the main causes of the gaps between Big Data in a laboratory and in practice in IoT applications. This paper also proposes Graph of Virtual Actors (GOVA) as a Big Data analytics architecture for IoT applications, which not only can solve the noninvariants issues, but can also quickly scale horizontally in terms of computation, data storage, caching requirements, and programmability of the system.
The graph database has been proposed as a suitable data modeling tool for applications where relations between data are as important as the data itself @cite_29 . In Big Data analytics for IoT, subgraph matching, graph traversal, and graph analysis are commonly needed. These graph-based queries are expensive in a traditional database due to the need for recursive JOINs. A native graph database engine such as Neo4j @cite_29 , GraphX @cite_22 , or Trinity @cite_25 can easily outperform an RDBMS on those tasks. Moreover, a graph database provides a more flexible data model, where adding a new type of entity or relationship does not necessarily require a change in the database schema. Furthermore, data is stored so that it semantically represents its structure. All of these properties make the graph database a useful data modeling tool for IoT. Figure shows a small typical graph in a smart distribution system grid.
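The cost contrast with recursive JOINs can be illustrated with a plain adjacency-list traversal, which is essentially what a native graph engine does: each hop is a pointer dereference rather than a table join. This is a hypothetical sketch (the node names are invented smart-grid examples), not code from any of the cited systems.

```python
from collections import deque


def neighbors_within(adj, start, hops):
    """Breadth-first traversal over an adjacency list.

    Returns all nodes reachable from `start` in at most `hops` edges,
    excluding `start` itself. An RDBMS would need one JOIN per hop to
    answer the same query; here each hop is a direct dictionary lookup.
    """
    seen = {start}
    frontier = deque([(start, 0)])
    result = set()
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                result.add(nxt)
                frontier.append((nxt, depth + 1))
    return result
```

Adding a new edge type (say, a new sensor attached to a transformer) only adds entries to the adjacency structure, mirroring the schema flexibility noted above.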
{ "cite_N": [ "@cite_29", "@cite_25", "@cite_22" ], "mid": [ "776871969", "2160459668", "1982003698" ], "abstract": [ "Graph databases (GDB) are now a viable alternative to Relational Database Systems (RDBMS). Chemistry, biology, semantic web, social networking and recommendation engines are all examples of applications that can be represented in a much more natural form. Comparisons will be drawn between relational database systems (Oracle, MySQL) and graph databases (Neo4J) focusing on aspects such as data structures, data model features and query facilities. Additionally, several of the inherent and contemporary limitations of current offerings comparing and contrasting graph vs. relational database implementations will be explored.", "Computations performed by graph algorithms are data driven, and require a high degree of random data access. Despite the great progresses made in disk technology, it still cannot provide the level of efficient random access required by graph computation. On the other hand, memory-based approaches usually do not scale due to the capacity limit of single machines. In this paper, we introduce Trinity, a general purpose graph engine over a distributed memory cloud. Through optimized memory management and network communication, Trinity supports fast graph exploration as well as efficient parallel computing. In particular, Trinity leverages graph access patterns in both online and offline computation to optimize memory and communication for best performance. These enable Trinity to support efficient online query processing and offline analytics on large graphs with just a few commodity machines. Furthermore, Trinity provides a high level specification language called TSL for users to declare data schema and communication protocols, which brings great ease-of-use for general purpose graph management and computing. 
Our experiments show Trinity's performance in both low-latency graph queries and high-throughput graph analytics on web-scale, billion-node graphs.", "From social networks to targeted advertising, big graphs capture the structure in data and are central to recent advances in machine learning and data mining. Unfortunately, directly applying existing data-parallel tools to graph computation tasks can be cumbersome and inefficient. The need for intuitive, scalable tools for graph computation has led to the development of new graph-parallel systems (e.g., Pregel, PowerGraph) which are designed to efficiently execute graph algorithms. Unfortunately, these new graph-parallel systems do not address the challenges of graph construction and transformation which are often just as problematic as the subsequent computation. Furthermore, existing graph-parallel systems provide limited fault-tolerance and support for interactive data mining. We introduce GraphX, which combines the advantages of both data-parallel and graph-parallel systems by efficiently expressing graph computation within the Spark data-parallel framework. We leverage new ideas in distributed graph representation to efficiently distribute graphs as tabular data-structures. Similarly, we leverage advances in data-flow systems to exploit in-memory computation and fault-tolerance. We provide powerful new operations to simplify graph construction and transformation. Using these primitives we implement the PowerGraph and Pregel abstractions in less than 20 lines of code. Finally, by exploiting the Scala foundation of Spark, we enable users to interactively load, transform, and compute on massive graphs." ] }
1703.02484
2602975612
Abstract A novel parallel simulation algorithm on the GPU, implemented in CUDA and C++, is presented for the simulation of Brownian particles that display excluded volume repulsion and interact with long and short range forces. When an explicit Euler–Maruyama integration step is performed to take into account the pairwise forces and Brownian motion, particle overlaps can appear. The excluded volume property brings up the need for correcting these overlaps as they happen, since predicting them is not feasible due to the random displacement of Brownian particles. The proposed solution handles, at each time step, a Delaunay triangulation of the particle positions because it allows us to efficiently solve overlaps between particles by checking just their neighborhood. The algorithm starts by generating a periodic Delaunay triangulation of the particle initial positions on CPU, but after that the triangulation is always kept on GPU memory. We used a parallel edge-flip implementation to keep the triangulation updated during each time step, checking previously that the triangulation was not rendered invalid due to the particle displacements. We designed and implemented an exact long range force simulation with an all-pairs N -body simulation, tiling the particle interaction computations based on the warp size of the target device architecture. The resulting implementation was validated with two models of active colloidal particles, also showing a speedup of up to two orders of magnitude when compared to a sequential implementation. A short range forces simulation using Verlet lists for neighborhood handling was also developed and validated, showing similar performance improvements.
Finally, for overlap correction, Strating @cite_8 describes a brute-force sequential algorithm that checks all pairs of bodies for possible overlaps and corrects them following the overlap-correction equation. The algorithm may need to iterate an unbounded number of times at each time step because some corrections may generate new overlaps with neighboring particles.
{ "cite_N": [ "@cite_8" ], "mid": [ "1987730268" ], "abstract": [ "In this paper we discuss the nonequilibrium shear viscosity of a suspension of hard spheres that is modeled by neglecting hydrodynamic interactions in a consistent way. The aim is to establish the true capabilities of this model in predicting the properties of real suspensions. A Brownian dynamics algorithm is used to simulate the movements of hard spheres immersed in a Newtonian solvent in a nonequilibrium steady shear flow. A new development is the treatment of the overlap of spheres as elastic collisions, to simulate the no-flux boundary condition on the surfaces of rigid particles. This algorithm is compared with other algorithms suggested in the literature, and is shown to be simple and accurate even for two spheres at close distance. This provides an algorithm that is very suitable for calculating the pair distribution function and especially its hard-sphere contact value, both in equilibrium and nonequilibrium simulations. The algorithm is used to study the nonequilibrium stationary shear flow in the low shear limit. The simulations correctly reproduce the exact low-density limit of the perturbation of the pair distribution function. The perturbation of the pair distribution function in shear flow can be extracted from the simulation data and used to compute the stationary shear viscosity for a system of diffusing hard spheres without hydrodynamic interactions. This yields a flow curve for this model system including the low shear limit. It is found that the model shear viscosity fails at intermediate and high shear rates as can be expected from the neglect of hydrodynamic interactions, but also in the low shear limit at small and moderate volume fractions." ] }
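The update this record describes — an explicit Euler–Maruyama step followed by brute-force pairwise overlap correction in the spirit of Strating's sequential algorithm — can be sketched as below. This is an illustrative Python sketch, not the paper's CUDA implementation; the parameter names, the unit thermal energy, and the symmetric push-apart rule are our simplifications.

```python
import numpy as np

def brownian_step(pos, forces, D, dt, radius, rng, max_iters=50):
    """One Euler-Maruyama step for Brownian hard spheres, then overlap correction.

    pos: (N, dim) positions; forces: (N, dim) deterministic pairwise forces;
    D: diffusion coefficient; radius: hard-sphere radius.
    """
    # Explicit Euler-Maruyama update: drift from forces plus Gaussian noise.
    kT = 1.0  # assume unit thermal energy for this sketch
    pos = pos + (D / kT) * forces * dt \
              + np.sqrt(2.0 * D * dt) * rng.standard_normal(pos.shape)

    # Brute-force correction: sweep all pairs and push overlapping particles
    # apart symmetrically along their center line. Several sweeps may be
    # needed because a correction can create new overlaps with neighbours.
    n = len(pos)
    for _ in range(max_iters):
        corrected = False
        for i in range(n):
            for j in range(i + 1, n):
                delta = pos[j] - pos[i]
                dist = np.linalg.norm(delta)
                if dist < 1e-12:          # coincident centers: skip in this sketch
                    continue
                overlap = 2.0 * radius - dist
                if overlap > 0.0:
                    shift = 0.5 * overlap * delta / dist
                    pos[i] -= shift
                    pos[j] += shift
                    corrected = True
        if not corrected:
            break
    return pos
```

The all-pairs sweep is the O(N²) bottleneck the paper avoids by restricting the overlap check to Delaunay neighbours kept on the GPU.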
1703.02291
2596763562
Generative Adversarial Nets (GANs) have shown promise in image generation and semi-supervised learning (SSL). However, existing GANs in SSL have two problems: (1) the generator and the discriminator (i.e. the classifier) may not be optimal at the same time; and (2) the generator cannot control the semantics of the generated samples. The problems essentially arise from the two-player formulation, where a single discriminator shares incompatible roles of identifying fake samples and predicting labels and it only estimates the data without considering the labels. To address the problems, we present triple generative adversarial net (Triple-GAN), which consists of three players---a generator, a discriminator and a classifier. The generator and the classifier characterize the conditional distributions between images and labels, and the discriminator solely focuses on identifying fake image-label pairs. We design compatible utilities to ensure that the distributions characterized by the classifier and the generator both converge to the data distribution. Our results on various datasets demonstrate that Triple-GAN as a unified model can simultaneously (1) achieve the state-of-the-art classification results among deep generative models, and (2) disentangle the classes and styles of the input and transfer smoothly in the data space via interpolation in the latent space class-conditionally.
Recently, various approaches have been developed to learn directed DGMs, including Variational Autoencoders (VAEs) @cite_20 @cite_14 , Generative Moment Matching Networks (GMMNs) @cite_6 @cite_21 and Generative Adversarial Nets (GANs) @cite_7 . These criteria are systematically compared in @cite_11 .
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_21", "@cite_6", "@cite_20", "@cite_11" ], "mid": [ "1909320841", "", "2949995983", "2950292946", "", "2099057450" ], "abstract": [ "We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model to represent approximate posterior distributions, and that acts as a stochastic encoder of the data. We develop stochastic back-propagation -- rules for back-propagation through stochastic variables -- and use this to develop an algorithm that allows for joint optimisation of the parameters of both the generative and recognition model. We demonstrate on several real-world data sets that the model generates realistic samples, provides accurate imputations of missing data and is a useful tool for high-dimensional data visualisation.", "", "We consider training a deep neural network to generate samples from an unknown distribution given i.i.d. data. We frame learning as an optimization minimizing a two-sample test statistic---informally speaking, a good generator network produces samples that cause a two-sample test to fail to reject the null hypothesis. As our two-sample test statistic, we use an unbiased estimate of the maximum mean discrepancy, which is the centerpiece of the nonparametric kernel two-sample test proposed by Gretton et al. (2012). We compare to the adversarial nets framework introduced by Goodfellow et al. (2014), in which learning is a two-player game between a generator network and an adversarial discriminator network, both trained to outwit the other. From this perspective, the MMD statistic plays the role of the discriminator. In addition to empirical comparisons, we prove bounds on the generalization error incurred by optimizing the empirical MMD.", "We consider the problem of learning deep generative models from data. 
We formulate a method that generates an independent sample via a single feedforward pass through a multilayer perceptron, as in the recently proposed generative adversarial networks (Goodfellow et al., 2014). Training a generative adversarial network, however, requires careful optimization of a difficult minimax program. Instead, we utilize a technique from statistical hypothesis testing known as maximum mean discrepancy (MMD), which leads to a simple objective that can be interpreted as matching all orders of statistics between a dataset and samples from the model, and can be trained by backpropagation. We further boost the performance of this approach by combining our generative network with an auto-encoder network, using MMD to learn to generate codes that can then be decoded to produce samples. We show that the combination of these techniques yields excellent generative models compared to baseline approaches as measured on MNIST and the Toronto Face Database.", "", "Probabilistic generative models can be used for compression, denoising, inpainting, texture synthesis, semi-supervised learning, unsupervised feature learning, and other tasks. Given this wide range of applications, it is not surprising that a lot of heterogeneity exists in the way these models are formulated, trained, and evaluated. As a consequence, direct comparison between models is often difficult. This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models with a focus on image models. In particular, we show that three of the currently most commonly used criteria---average log-likelihood, Parzen window estimates, and visual fidelity of samples---are largely independent of each other when the data is high-dimensional. Good performance with respect to one criterion therefore need not imply good performance with respect to the other criteria. 
Our results show that extrapolation from one criterion to another is not warranted and generative models need to be evaluated directly with respect to the application(s) they were intended for. In addition, we provide examples demonstrating that Parzen window estimates should generally be avoided." ] }
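The unbiased MMD estimate that the GMMN abstracts above build their training objective on can be written in a few lines. The sketch below uses a single Gaussian kernel bandwidth for clarity (the cited papers sum over several bandwidths); the function name and `sigma` parameter are ours.

```python
import numpy as np

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased estimate of squared maximum mean discrepancy (Gaussian kernel).

    X: (m, d) samples from one distribution; Y: (n, d) samples from another.
    Near zero when the two samples come from the same distribution.
    """
    def gram(A, B):
        # Pairwise squared distances, then the Gaussian kernel.
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
        return np.exp(-sq / (2.0 * sigma**2))

    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = gram(X, X), gram(Y, Y), gram(X, Y)
    # Exclude the diagonal so the within-sample terms are unbiased.
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()
```

In a GMMN, Y would be generator samples and this statistic (differentiable in Y) is minimized by backpropagation, playing the role the discriminator plays in a GAN.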
1703.02291
2596763562
Generative Adversarial Nets (GANs) have shown promise in image generation and semi-supervised learning (SSL). However, existing GANs in SSL have two problems: (1) the generator and the discriminator (i.e. the classifier) may not be optimal at the same time; and (2) the generator cannot control the semantics of the generated samples. The problems essentially arise from the two-player formulation, where a single discriminator shares incompatible roles of identifying fake samples and predicting labels and it only estimates the data without considering the labels. To address the problems, we present triple generative adversarial net (Triple-GAN), which consists of three players---a generator, a discriminator and a classifier. The generator and the classifier characterize the conditional distributions between images and labels, and the discriminator solely focuses on identifying fake image-label pairs. We design compatible utilities to ensure that the distributions characterized by the classifier and the generator both converge to the data distribution. Our results on various datasets demonstrate that Triple-GAN as a unified model can simultaneously (1) achieve the state-of-the-art classification results among deep generative models, and (2) disentangle the classes and styles of the input and transfer smoothly in the data space via interpolation in the latent space class-conditionally.
One primal goal of DGMs is to generate realistic samples, for which GANs have proven effective. Specifically, LAP-GAN @cite_0 leverages a series of GANs to upscale the generated samples to high resolution images through the Laplacian pyramid framework @cite_4 . DCGAN @cite_19 adopts (fractionally) strided convolution layers and batch normalization @cite_25 in GANs and generates realistic natural images.
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_4", "@cite_25" ], "mid": [ "2951523806", "2173520492", "2103504761", "2949117887" ], "abstract": [ "In this paper we introduce a generative parametric model capable of producing high quality samples of natural images. Our approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion. At each level of the pyramid, a separate generative convnet model is trained using the Generative Adversarial Nets (GAN) approach (Goodfellow et al., 2014). Samples drawn from our model are of significantly higher quality than alternate approaches. In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40% of the time, compared to 10% for samples drawn from a GAN baseline model. We also show samples from models trained on the higher resolution images of the LSUN scene dataset.", "In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.", "We describe a technique for image encoding in which local operators of many scales but identical shape serve as the basis functions. 
The representation differs from established techniques in that the code elements are localized in spatial frequency as well as in space. Pixel-to-pixel correlations are first removed by subtracting a lowpass filtered copy of the image from the image itself. The result is a net data compression since the difference, or error, image has low variance and entropy, and the low-pass filtered image may be represented at reduced sample density. Further data compression is achieved by quantizing the difference image. These steps are then repeated to compress the low-pass image. Iteration of the process at appropriately expanded scales generates a pyramid data structure. The encoding process is equivalent to sampling the image with Laplacian operators of many scales. Thus, the code tends to enhance salient image features. A further advantage of the present code is that it is well suited for many image analysis tasks as well as for image compression. Fast algorithms are described for coding and decoding.", "Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. 
Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters." ] }
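The batch-normalization transform described in the last abstract above normalizes each feature over the mini-batch and then applies a learned scale and shift. A minimal training-mode forward pass can be sketched as follows (our own sketch; it omits the running statistics a real layer tracks for inference):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Batch-normalization forward pass for a (batch, features) mini-batch.

    Normalizes each feature to zero mean / unit variance over the batch,
    then applies the learned scale (gamma) and shift (beta).
    """
    mu = x.mean(axis=0)                  # per-feature batch mean
    var = x.var(axis=0)                  # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # eps guards against zero variance
    return gamma * x_hat + beta
```

With `gamma = 1` and `beta = 0` the output of each feature has approximately zero mean and unit variance regardless of the input's scale, which is what stabilizes the input distribution of the following layer.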