aid: 1506.04338
mid: 2949230572
In perspective cameras, images of a frontal-parallel 3D object preserve its aspect ratio invariant to its depth. Such an invariance is useful in photography but is unique to perspective projection. In this paper, we show that alternative non-perspective cameras such as the crossed-slit or XSlit cameras exhibit a different depth-dependent aspect ratio (DDAR) property that can be used for 3D recovery. We first conduct a comprehensive analysis to characterize DDAR, infer object depth from its AR, and model recoverable depth range, sensitivity, and error. We show that repeated shape patterns in real Manhattan World scenes can be used for 3D reconstruction using a single XSlit image. We also extend our analysis to model slopes of lines. Specifically, parallel 3D lines exhibit depth-dependent slopes (DDS) on their images which can also be used to infer their depths. We validate our analyses using real XSlit cameras, XSlit panoramas, and catadioptric mirrors. Experiments show that DDAR and DDS provide important depth cues and enable effective single-image scene reconstruction.
Our paper explores a different, previously overlooked property of Manhattan World (MW) scenes: the scene contains multiple objects with an identical aspect ratio or size (e.g., windows) that lie at different depths. In a perspective view, these patterns map to 2D images with an identical aspect ratio. In contrast, we show that the aspect ratio changes with depth if one adopts a non-centric or multi-perspective camera. Such imaging models exist widely in nature, e.g., a compound insect eye, reflections and refractions off curved specular surfaces, images seen through volumetric gas such as a mirage, etc. Rays in these cameras generally do not pass through a common CoP and hence do not follow pinhole geometry. Consequently, they lose some desirable properties of the perspective camera (e.g., lines no longer project to lines); at the same time, they gain unique properties such as coplanar common points @cite_28 , specially shaped curves @cite_27 , etc. In this paper, we focus on the depth-dependent aspect ratio (DDAR) property for inferring 3D geometry.
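The exact XSlit projection equations are not given in this excerpt; the following toy model, which assumes a sensor at z = 0 and two orthogonal slits at depths z1 and z2, illustrates how a depth-dependent aspect ratio can encode depth and how it can be inverted:

```python
# Hedged sketch (not the paper's exact model): sensor plane at z = 0, a
# vertical slit at depth z1 and a horizontal slit at depth z2. The ray from
# a point (x, y, z) through both slits hits the sensor at
#   u = -x * z1 / (z - z1),   v = -y * z2 / (z - z2),
# so a frontal-parallel rectangle of aspect ratio AR images with
#   AR_img = AR * (z1 * (z - z2)) / (z2 * (z - z1)).
# For a pinhole (z1 == z2) the factor is always 1, recovering the
# perspective invariance; otherwise the ratio varies with z (DDAR).

def image_aspect_ratio(ar_obj, z, z1, z2):
    """Image-plane aspect ratio of a frontal-parallel object at depth z."""
    return ar_obj * (z1 * (z - z2)) / (z2 * (z - z1))

def depth_from_aspect_ratio(ar_img, ar_obj, z1, z2):
    """Invert DDAR: solve rho = z1*(z - z2) / (z2*(z - z1)) for z."""
    rho = ar_img / ar_obj
    return z1 * z2 * (rho - 1.0) / (rho * z2 - z1)

z1, z2 = 1.0, 2.0                      # illustrative slit depths
for z in (4.0, 8.0):
    ar = image_aspect_ratio(1.0, z, z1, z2)
    print(z, ar, depth_from_aspect_ratio(ar, 1.0, z1, z2))
```

The inversion is degenerate when rho * z2 == z1, which corresponds to the limit of infinite depth in this toy model.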
{ "cite_N": [ "@cite_28", "@cite_27" ], "mid": [ "14644311", "2128500870" ], "abstract": [ "Discovering and extracting new image features pertaining to scene geometry is important to 3D reconstruction and scene understanding. Examples include the classical vanishing points observed in a centric camera and the recent coplanar common points (CCPs) in a crossed-slit camera [21,17]. A CCP is a point in the image plane corresponding to the intersection of the projections of all lines lying on a common 3D plane. In this paper, we address the problem of determining CCP existence in general non-centric cameras. We first conduct a ray-space analysis to show that finding the CCP of a 3D plane is equivalent to solving an array of ray constraint equations. We then derive the necessary and sufficient conditions for CCP to exist in an arbitrary non-centric camera such as non-centric catadioptric mirrors. Finally, we present robust algorithms for extracting the CCPs from a single image and validate our theories and algorithms through experiments.", "A Manhattan World (MW) [3] is composed of planar surfaces and parallel lines aligned with three mutually orthogonal principal axes. Traditional MW understanding algorithms rely on geometry priors such as the vanishing points and reference (ground) planes for grouping coplanar structures. In this paper, we present a novel single-image MW reconstruction algorithm from the perspective of non-pinhole cameras. We show that by acquiring the MW using an XSlit camera, we can instantly resolve coplanarity ambiguities. Specifically, we prove that parallel 3D lines map to 2D curves in an XSlit image and they converge at an XSlit Vanishing Point (XVP). In addition, if the lines are coplanar, their curved images will intersect at a second common pixel that we call Coplanar Common Point (CCP). CCP is a unique image feature in XSlit cameras that does not exist in pinholes. 
We present a comprehensive theory to analyze XVPs and CCPs in a MW scene and study how to recover 3D geometry in a complex MW scene from XVPs and CCPs. Finally, we build a prototype XSlit camera by using two layers of cylindrical lenses. Experimental results on both synthetic and real data show that our new XSlit-camera-based solution provides an effective and reliable solution for MW understanding." ] }
The special non-centric camera we employ here is the crossed-slit or XSlit camera. An XSlit camera collects rays that simultaneously pass through two oblique lines (slits) in 3D space. The projection geometry of the XSlit has been examined in various forms in previous studies, e.g., as a projection model in @cite_8 , as general linear constraints in @cite_12 , and as a ray regulus in @cite_19 . For a long time the XSlit camera remained a theoretical model, as it is physically difficult to acquire ray geometry following the slit structure. The only exception is XSlit panoramas @cite_29 @cite_26 , where an XSlit panorama is stitched from a translational sequence of images or, more precisely, a 3D light field @cite_0 . Recently, @cite_1 presented a practical XSlit camera. Their approach relays two cylindrical lenses with perpendicular axes, each coupled with a slit-shaped aperture to achieve in-focus imaging.
{ "cite_N": [ "@cite_26", "@cite_8", "@cite_29", "@cite_1", "@cite_0", "@cite_19", "@cite_12" ], "mid": [ "1552668356", "2124323475", "2148162135", "2147999262", "", "2121803203", "2146716365" ], "abstract": [ "We analyze the geometry of the two-slit camera and come to two conclusions. First, we show that the definition given in [9] makes sense only if the two slits are not intersecting. Second, we prove that the complete image from a two-slit camera cannot be obtained as an intersection of the rays of the two-slit camera with a plane in space. Motivated by the quest for a unified representation of various cameras by simple geometrical objects, we give a new definition of linear oblique cameras as those which comprise all real lines incident with some non-real line and show that it is equivalent to the definition we gave earlier. We also show that no single line, neither in the real projective space nor in its complexification, can be used to define analogously a two-slit camera.", "We introduce a new kind of mosaicing, where the position of the sampling strip varies as a function of the input camera location. The new images that are generated this way correspond to a new projection model defined by two slits, termed here the Crossed-Slits (X-Slits) projection. In this projection model, every 3D point is projected by a ray defined as the line that passes through that point and intersects the two slits. The intersection of the projection rays with the imaging surface defines the image. X-Slits mosaicing provides two benefits. First, the generated mosaics are closer to perspective images than traditional pushbroom mosaics. Second, by simple manipulations of the strip sampling function, we can change the location of one of the virtual slits, providing a virtual walkthrough of an X-Slits camera; all this can be done without recovering any 3D geometry and without calibration. 
A number of examples where we translate the virtual camera and change its orientation are given; the examples demonstrate realistic changes in parallax, reflections, and occlusions.", "A theory of stereo image formation is presented that enables a complete classification of all possible stereo views, including non-perspective varieties. Towards this end, the notion of epipolar geometry is generalized to apply to multiperspective images. It is shown that any stereo pair must consist of rays lying on one of three varieties of quadric surfaces. A unified representation is developed to model all classes of stereo views, based on the concept of a quadric view. The benefits include a unified treatment of projection and triangulation operations for all stereo views. The framework is applied to derive new types of stereo image representations with unusual and useful properties.", "Traditional stereo matching assumes perspective viewing cameras under a translational motion: the second camera is translated away from the first one to create parallax. In this paper, we investigate a different, rotational stereo model on a special multi-perspective camera, the XSlit camera [9, 24]. We show that rotational XSlit (R-XSlit) stereo can be effectively created by fixing the sensor and slit locations but switching the two slits' directions. We first derive the epipolar geometry of R-XSlit in the 4D light field ray space. Our derivation leads to a simple but effective scheme for locating corresponding epipolar \"curves\". To conduct stereo matching, we further derive a new disparity term in our model and develop a patch-based graph-cut solution. To validate our theory, we assemble an XSlit lens by using a pair of cylindrical lenses coupled with slit-shaped apertures. The XSlit lens can be mounted on commodity cameras where the slit directions are adjustable to form desirable R-XSlit pairs. 
We show through experiments that R-XSlit provides a potentially advantageous imaging system for conducting fixed-location, dynamic baseline stereo.", "", "This paper addresses the problem of characterizing a general class of cameras under reasonable, “linear” assumptions. Concretely, we use the formalism and terminology of classical projective geometry to model cameras by two-parameter linear families of straight lines-that is, degenerate reguli (rank-3 families) and non-degenerate linear congruences (rank-4 families). This model captures both the general linear cameras of Yu and McMillan and the linear oblique cameras of Pajdla. From a geometric perspective, it affords a simple classification of all possible camera configurations. From an analytical viewpoint, it also provides a simple and unified methodology for deriving general formulas for projection and inverse projection, triangulation, and binocular and trinocular geometry.", "We present a General Linear Camera (GLC) model that unifies many previous camera models into a single representation. The GLC model is capable of describing all perspective (pinhole), orthographic, and many multiperspective (including pushbroom and two-slit) cameras, as well as epipolar plane images. It also includes three new and previously unexplored multiperspective linear cameras. Our GLC model is both general and linear in the sense that, given any vector space where rays are represented as points, it describes all 2D affine subspaces (planes) that can be formed by affine combinations of 3 rays. The incident radiance seen along the rays found on subregions of these 2D affine subspaces are a precise definition of a projected image of a 3D scene. The GLC model also provides an intuitive physical interpretation, which can be used to characterize real imaging systems. 
Finally, since the GLC model provides a complete description of all 2D affine subspaces, it can be used as a tool for first-order differential analysis of arbitrary (higher-order) multiperspective imaging systems." ] }
aid: 1506.04549
mid: 2951062405
Tools that synchronize passwords over several user devices typically store the encrypted passwords in a central online database. For encryption, a low-entropy, password-based key is used. Such a database may be subject to unauthorized access which can lead to the disclosure of all passwords by an offline brute-force attack. In this paper, we present PALPAS, a secure and user-friendly tool that synchronizes passwords between user devices without storing information about them centrally. The idea of PALPAS is to generate a password from a high-entropy secret shared by all devices and a random salt value for each service. Only the salt values are stored on a server, not the secret. The salt enables the user devices to generate the same password but is statistically independent of the password. In order for PALPAS to generate passwords according to different password policies, we also present a mechanism that automatically retrieves and processes the password requirements of services. PALPAS users need only memorize a single password, and setting up PALPAS on an additional device requires only a one-time transfer of a small amount of static data.
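A minimal sketch of the PALPAS idea as described above. The paper's actual key-derivation function and policy handling are not specified in this excerpt; HMAC-SHA256 and the simple character mapping below are illustrative assumptions:

```python
import hashlib
import hmac
import os

# Illustrative policy charset; a real tool would derive this from the
# service's retrieved password requirements.
ALPHABET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!$%&"

def generate_password(secret: bytes, salt: bytes, length: int = 16) -> str:
    """Derive a service password from the shared secret and per-service salt.

    The salt is random and statistically independent of the resulting
    password, so it can be stored on the sync server: without the secret
    it reveals nothing about the password.
    """
    digest = hmac.new(secret, salt, hashlib.sha256).digest()
    # Simple byte-to-charset mapping (has modulo bias; a production
    # implementation would use an unbiased mapping).
    return "".join(ALPHABET[b % len(ALPHABET)] for b in digest)[:length]

secret = os.urandom(32)   # high-entropy secret shared by all devices
salt = os.urandom(16)     # fresh random salt per service, stored server-side
assert generate_password(secret, salt) == generate_password(secret, salt)
```

Any device holding the secret reproduces the same password from the server-stored salt, which is the synchronization-without-central-storage property the abstract describes.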
Hash-based approaches like PwdHash @cite_15 allow users to create a different password for each service by hashing a master password together with the name or URL of the service. Unfortunately, an adversary who steals one generated password can perform a brute-force attack to recover the master password and thereby generate all of the user's passwords. Password Multiplier @cite_26 performs additional steps to strengthen the master password, which increases the cost of a brute-force attack. Nevertheless, these hash-based approaches cannot generate a new password for the same service, which is necessary, for example, after a password breach. The authors propose to include an additional user-chosen input for each service in the hashing, but this has the disadvantage that users then need to memorize this input for each service. Thus, the existing hash-based approaches are not a feasible solution, because they still require users to memorize a lot of information.
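The rotation drawback can be seen in a small sketch of hash-based derivation. The construction below is illustrative, not PwdHash's actual algorithm:

```python
import hashlib

def site_password(master: str, domain: str, rotation: str = "") -> str:
    """Sketch of a PwdHash-style scheme: hash master password + site name.

    The derivation is deterministic, so the same (master, domain) pair
    always yields the same password. Generating a *new* password for the
    same service requires an extra user-memorized input ('rotation') --
    exactly the drawback discussed above.
    """
    data = (master + "@" + domain + rotation).encode()
    return hashlib.sha256(data).hexdigest()[:16]

p_old = site_password("correct horse", "example.com")
p_new = site_password("correct horse", "example.com", rotation="2")
assert p_old != p_new   # rotation works only if the user remembers "2"
```

One stolen derived password also gives an offline oracle for brute-forcing the master password, which is the attack the paragraph describes.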
{ "cite_N": [ "@cite_15", "@cite_26" ], "mid": [ "1607915502", "2030993695" ], "abstract": [ "We describe a browser extension, PwdHash, that transparently produces a different password for each site, improving web password security and defending against password phishing and other attacks. Since the browser extension applies a cryptographic hash function to a combination of the plaintext password entered by the user, data associated with the web site, and (optionally) a private salt stored on the client machine, theft of the password received at one site will not yield a password that is useful at another site. While the scheme requires no changes on the server side, implementing this password method securely and transparently in a web browser extension turns out to be quite difficult. We describe the challenges we faced in implementing PwdHash and some techniques that may be useful to anyone facing similar security issues in a browser environment.", "Computer users are asked to generate, keep secret, and recall an increasing number of passwords for uses including host accounts, email servers, e-commerce sites, and online financial services. Unfortunately, the password entropy that users can comfortably memorize seems insufficient to store unique, secure passwords for all these accounts, and it is likely to remain constant as the number of passwords (and the adversary's computational power) increases into the future. In this paper, we propose a technique that uses a strengthened cryptographic hash function to compute secure passwords for arbitrarily many accounts while requiring the user to memorize only a single short password. This mechanism functions entirely on the client; no server-side changes are needed. Unlike previous approaches, our design is both highly resistant to brute force attacks and nearly stateless, allowing users to retrieve their passwords from any location so long as they can execute our program and remember a short secret. 
This combination of security and convenience will, we believe, entice users to adopt our scheme. We discuss the construction of our algorithm in detail, compare its strengths and weaknesses to those of related approaches, and present Password Multiplier, an implementation in the form of an extension to the Mozilla Firefox web browser." ] }
Approaches that use hardware tokens @cite_17 @cite_21 or mobile devices @cite_22 for authentication have the disadvantage that users always need to carry an additional device. Furthermore, such solutions require changes to the service's infrastructure. The slow adoption of authentication mechanisms other than passwords (cf. @cite_29 for a survey) shows that service-side changes are a major obstacle to the wide adoption of authentication schemes.
{ "cite_N": [ "@cite_29", "@cite_21", "@cite_22", "@cite_17" ], "mid": [ "2030112111", "197157878", "1968854550", "1573122102" ], "abstract": [ "We evaluate two decades of proposals to replace text passwords for general-purpose user authentication on the web using a broad set of twenty-five usability, deployability and security benefits that an ideal scheme might provide. The scope of proposals we survey is also extensive, including password management software, federated login protocols, graphical password schemes, cognitive authentication schemes, one-time passwords, hardware tokens, phone-aided schemes and biometrics. Our comprehensive approach leads to key insights about the difficulty of replacing passwords. Not only does no known scheme come close to providing all desired benefits: none even retains the full set of benefits that legacy passwords already provide. In particular, there is a wide range from schemes offering minor security benefits beyond legacy passwords, to those offering significant security benefits in return for being more costly to deploy or more difficult to use. We conclude that many academic proposals have failed to gain traction because researchers rarely consider a sufficiently wide range of real-world constraints. Beyond our analysis of current schemes, our framework provides an evaluation methodology and benchmark for future web authentication proposals.", "Strong authentication for online service access typically requires some kind of hardware device for generating dynamic access credentials that are often used in combination with static passwords. This practice has the side effect that users fill up their pockets with more and more devices and their heads with more and more passwords. This situation becomes increasingly difficult to manage, which in turn degrades the usability of online services. 
In order to cope with this situation, users often adopt insecure ad hoc practices that enable them to practically manage their different identities and credentials. This paper explores how one single device can be used for authentication of the user to service providers and of the server to users, as well as provide a range of other security services.", "Text passwords are the most popular form of user authentication on websites due to their convenience and simplicity. However, users' passwords are prone to be stolen and compromised under different threats and vulnerabilities. First, users often select weak passwords and reuse the same passwords across different websites. Routinely reusing passwords causes a domino effect; when an adversary compromises one password, she will exploit it to gain access to more websites. Second, typing passwords into untrusted computers exposes them to the threat of password theft. An adversary can launch several password-stealing attacks to snatch passwords, such as phishing, keyloggers and malware. In this paper, we design a user authentication protocol named oPass which leverages a user's cellphone and short message service to thwart password stealing and password reuse attacks. oPass only requires that each participating website possess a unique phone number, and involves a telecommunication service provider in registration and recovery phases. Through oPass, users only need to remember a long-term password for login on all websites. After evaluating the oPass prototype, we believe oPass is efficient and affordable compared with the conventional web authentication mechanisms.", "In previous work we presented Pico, an authentication system designed to be both more usable and more secure than passwords. 
One unsolved problem was that Pico, in its quest to explore the whole solution space without being bound by compatibility shackles, requires changes at both the prover and the verifier, which makes it hard to convince anyone to adopt it: users won’t buy an authentication gadget that doesn’t let them log into anything and service providers won’t support a system that no users are equipped to log in with. In this paper we present three measures to break this vicious circle, starting with the “Pico Lens” browser add-on that rewrites websites on the fly so that they appear Pico-enabled. Our add-on offers the user most (though not all) of the usability and security benefits of Pico, thus fostering adoption from users even before service providers are on board. This will enable Pico to build up a user base. We also developed a server-side Wordpress plugin which can serve both as a reference example and as a useful enabler in its own right (as Wordpress is one of the leading content management platforms on the web). Finally, we developed a software version of the Pico client running on a smartphone, the Pico App, so that people can try out Pico (at the price of slightly reduced security) without having to acquire and carry another gadget. Having broken the vicious circle we’ll be in a stronger position to persuade providers to offer support for Pico in parallel with passwords." ] }
Single sign-on (SSO) systems like Facebook Connect @cite_6 allow users to authenticate themselves with a single password once and access multiple services without being prompted to log in at each service again. This can reduce the number of passwords users have to memorize, but the adoption of SSO is still very limited @cite_28 . Furthermore, SSO bears the risk of phishing attacks @cite_12 , and studies have found that users have several concerns and misconceptions about SSO and do not feel comfortable giving control of their passwords to external services @cite_2 @cite_8 . SSO also has serious privacy issues @cite_24 , because the SSO identity provider learns where and when a user performs a login. In summary, SSO does not solve the problem of managing many passwords and introduces new problems such as these privacy issues.
{ "cite_N": [ "@cite_8", "@cite_28", "@cite_6", "@cite_24", "@cite_2", "@cite_12" ], "mid": [ "2086147103", "2033636833", "", "1980767298", "1963828660", "2237141602" ], "abstract": [ "OpenID and OAuth are open and simple Web SSO protocols that have been adopted by major service providers, and millions of supporting Web sites. However, the average user’s perception of Web SSO is still poorly understood. Through several user studies, this work investigates users’ perceptions and concerns when using Web SSO for authentication. We found that our participants had several misconceptions and concerns that impeded their adoption. This ranged from their inadequate mental models of Web SSO, to their concerns about personal data exposure, and a reduction in perceived Web SSO value due to the employment of password management practices. Informed by our findings, we offer a Web SSO technology acceptance model, and suggest design improvements.", "OpenID and InfoCard are two mainstream Web single sign-on (SSO) solutions intended for Internet-scale adoption. While they are technically sound, the business model of these solutions does not provide content-hosting and service providers (CSPs) with sufficient incentives to become relying parties (RPs). In addition, the pressure from users and identity providers (IdPs) is not strong enough to drive CSPs toward adopting Web SSO. As a result, there are currently over one billion OpenID-enabled user accounts provided by major CSPs, but only a few relying parties. In this paper, we discuss the problem of Web SSO adoption for RPs and argue that solutions in this space must offer RPs sufficient business incentives and trustworthy identity services in order to succeed. We suggest future Web SSO development should investigate and fulfill RPs' business needs, identify IdP business models, and build trust frameworks. 
Moreover, we propose that Web SSO technology should build identity support into browsers in order to facilitate RPs' adoption.", "", "We performed a laboratory experiment to study the privacy tradeoff offered by Facebook Connect: disclosing Facebook profile data to third-party websites for the convenience of logging in without creating separate accounts. We controlled for trustworthiness and amount of information each website requested, as well as the consent dialog layout. We discovered that these factors had no observable effects, likely because participants did not read the dialogs. Yet, 15% still refused to use Facebook Connect, citing privacy concerns. A likely explanation for subjects ignoring the dialogs while also understanding the privacy tradeoff - our exit survey indicated that 88% broadly understood what data would be collected - is that subjects were already familiar with the dialogs prior to the experiment. We discuss how our results demonstrate informed consent, but also how habituation prevented subjects from understanding the nuances between individual websites' data collection policies.", "OpenID is an open and promising Web single sign-on (SSO) solution. This work investigates the challenges and concerns web users face when using OpenID for authentication, and identifies what changes in the login flow could improve the users' experience and adoption incentives. 
We found our participants had several behaviors, concerns, and misconceptions that hinder the OpenID adoption process: (1) their existing password management strategies reduce the perceived usefulness of SSO; (2) many (26%) expressed concerns with single-point-of-failure related issues; (3) most (71%) held the incorrect belief that the OpenID credentials are being given to the content providers; (4) half exhibited an inability to distinguish a fake Google login form, even when prompted; (5) many (40%) were hesitant to consent to the release of their personal profile information; and (6) many (36%) expressed concern with the use of SSO on websites that contain valuable personal information or, conversely, are not trustworthy. We also found that with an improved affordance and privacy control, more than 60% of study participants would use Web SSO solutions on the websites they trust.", "" ] }
aid: 1506.03837
mid: 1973081445
We study and formulate arbitrage in display advertising. Real-Time Bidding (RTB) mimics stock spot exchanges and utilises computers to algorithmically buy display ads per impression via a real-time auction. Despite the new automation, the ad markets are still informationally inefficient due to the heavily fragmented marketplaces. Two display impressions with similar or identical effectiveness (e.g., measured by conversion or click-through rates for a targeted audience) may sell for quite different prices at different market segments or pricing schemes. In this paper, we propose a novel data mining paradigm called Statistical Arbitrage Mining (SAM) focusing on mining and exploiting price discrepancies between two pricing schemes. In essence, our SAMer is a meta-bidder that hedges advertisers' risk between CPA (cost per action)-based campaigns and CPM (cost per mille impressions)-based ad inventories; it statistically assesses the potential profit and cost for an incoming CPM bid request against a portfolio of CPA campaigns based on the estimated conversion rate, bid landscape and other statistics learned from historical data. In SAM, (i) functional optimisation is utilised to seek optimal bids that maximise the expected arbitrage net profit, and (ii) a portfolio-based risk management solution is leveraged to reallocate bid volume and budget across the set of campaigns to trade off risk and return. We propose to jointly optimise both components in an EM fashion with high efficiency to help the meta-bidder successfully catch the transient statistical arbitrage opportunities in RTB. Both the offline experiments on a real-world large-scale dataset and online A/B tests on a commercial platform demonstrate the effectiveness of our proposed solution in exploiting arbitrage in various model settings and market environments.
The authors in @cite_24 study auction mechanisms considering arbitrage between CPC and CPM pricing schemes. Their study aims to design an auction mechanism, on behalf of the ad exchange, that yields truthful bidding from advertisers and truthful CTR reporting from arbitrageurs. By contrast, our work focuses on developing a statistical method for mining and exploiting arbitrage opportunities between CPA and CPM.
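The core quantity behind such CPA/CPM arbitrage can be sketched as follows. This is a simplification of the SAM formulation, which additionally models bid landscapes and portfolio-level risk; the numbers below are illustrative only:

```python
def expected_arbitrage_profit(cpa_payout: float, est_cvr: float,
                              cpm_cost: float) -> float:
    """Expected net profit per impression for a CPA-vs-CPM arbitrage.

    The meta-bidder pays cpm_cost / 1000 for one CPM-priced impression
    and earns cpa_payout with probability est_cvr (the estimated
    conversion rate) by serving a CPA campaign on it.
    """
    return cpa_payout * est_cvr - cpm_cost / 1000.0

# An arbitrage opportunity exists only when the expectation is positive:
profit = expected_arbitrage_profit(cpa_payout=5.0, est_cvr=0.002, cpm_cost=4.0)
print(profit)   # 5.0 * 0.002 - 4.0 / 1000 = 0.006 per impression
```

The sign of this expectation is what makes two impressions of identical effectiveness but different pricing schemes, as described in the abstract, exploitable.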
{ "cite_N": [ "@cite_24" ], "mid": [ "2114833467" ], "abstract": [ "Online display advertising exchanges connect Web publishers with advertisers seeking to place ads. In many cases, the advertiser obtains value from an ad impression (a viewing by a user) only if it is clicked, and frequently advertisers prefer to pay contingent on this occurring. But at the same time, many publishers demand payment independent of clicks. Arbitragers with good estimates of click-probabilities can resolve this conflict by absorbing the risk and acting as an intermediary, paying the publisher on allocation and being paid only if a click occurs. This article examines the incentives of advertisers and arbitragers and contributes an efficient mechanism with truthful bidding by the advertisers and truthful reporting of click predictions by arbitragers as dominant strategies while, given that a hazard rate condition is satisfied, yielding increased revenue to the publisher. We provide empirical evidence based on bid data from Yahoo's Right Media Exchange suggesting that the mechanism would increase revenue in practice." ] }
1506.03837
1973081445
In financial markets, statistical arbitrage is a quantitative approach to security trading. It combines statistical methods with high-frequency trading systems to detect mispricings of securities caused by market inefficiency, and profits from them through a large number of transactions @cite_25 .
{ "cite_N": [ "@cite_25" ], "mid": [ "2108626041" ], "abstract": [ "This paper introduces the concept of statistical arbitrage, a long horizon trading opportunity that generates a riskless profit and is designed to exploit persistent anomalies. Statistical arbitrage circumvents the \"joint hypothesis\" dilemma of traditional market efficiency tests because its definition is independent of any equilibrium model and its existence is incompatible with market efficiency. We provide a methodology to test for statistical arbitrage and then empirically investigate whether momentum and value trading strategies constitute statistical arbitrage opportunities. Despite controlling for transaction costs and the influence of small stocks, we find evidence that these strategies generate statistical arbitrage. Furthermore, their profitability does not appear to decline over time." ] }
Drawing an analogy with the statistical arbitrage of security pairs trading @cite_15 in finance, in our paper the campaign's CPA contract and its performance in the RTB spot markets can be regarded as a pair of correlated securities. Statistically speaking, if the campaign's performance in an RTB market ensures that the average cost to acquire a conversion (i.e., the eCPA) is lower than the payoff from the CPA contract, then a statistical arbitrage opportunity exists. Such an opportunity can also be attributed to the informational inefficiency of the advertising market, where advertisers fail to lower their CPA payoff even when their campaigns perform well in the RTB spot market.
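The arbitrage condition above can be sketched in a few lines; the function names and numbers below are illustrative placeholders, not values from the paper:

```python
# Sketch of the statistical arbitrage condition between a CPA contract and
# CPM-priced RTB inventory (illustrative numbers, not from the paper).

def expected_cpa(avg_cpm_cost_per_impression, conversion_rate):
    """Effective cost per action (eCPA): average spend per impression won,
    divided by the conversion rate, i.e. expected cost per conversion."""
    return avg_cpm_cost_per_impression / conversion_rate

def arbitrage_profit_per_conversion(cpa_payoff, ecpa):
    """Net profit per conversion when the meta-bidder buys impressions on a
    CPM basis and is paid per conversion under the CPA contract."""
    return cpa_payoff - ecpa

ecpa = expected_cpa(avg_cpm_cost_per_impression=0.002, conversion_rate=0.0005)
profit = arbitrage_profit_per_conversion(cpa_payoff=5.0, ecpa=ecpa)
# A statistical arbitrage opportunity exists only when the CPA payoff
# exceeds the eCPA achieved in the RTB spot market.
opportunity = profit > 0
```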
{ "cite_N": [ "@cite_15" ], "mid": [ "2107535465" ], "abstract": [ "We test a Wall Street investment strategy known as \"pairs trading\" with daily data over the period 1962 through 1997. Stocks are matched into pairs according to minimum distance in historical normalized price space. We test the profitability of several trading rules with six-month trading periods over the 1962-1997 period, and find average annualized excess returns of up to 12 percent for a number of self-financing portfolios of top pairs. Part of these profits may be due to market microstructure effects. Nevertheless, our historical trading profits exceed a conservative estimate of transaction costs through most of the period. We bootstrap random pairs in order to distinguish pairs trading from pure mean-reversion strategies. The bootstrap results suggest that the ?pairs? effect differs from previously documented mean reversion profits." ] }
Recently, MPT has been introduced into the information retrieval (IR) field to model the expectation and uncertainty of users' preferences over documents retrieved by search engines @cite_16 or recommended by recommender systems @cite_21 . To our knowledge, no prior work has adopted MPT for revenue optimisation in online advertising. In our paper, we present a novel way of using MPT that integrates naturally into our bid optimisation framework.
{ "cite_N": [ "@cite_16", "@cite_21" ], "mid": [ "1980730196", "2028595520" ], "abstract": [ "This paper studies document ranking under uncertainty. It is tackled in a general situation where the relevance predictions of individual documents have uncertainty, and are dependent between each other. Inspired by the Modern Portfolio Theory, an economic theory dealing with investment in financial markets, we argue that ranking under uncertainty is not just about picking individual relevant documents, but about choosing the right combination of relevant documents. This motivates us to quantify a ranked list of documents on the basis of its expected overall relevance (mean) and its variance; the latter serves as a measure of risk, which was rarely studied for document ranking in the past. Through the analysis of the mean and variance, we show that an optimal rank order is the one that balancing the overall relevance (mean) of the ranked list against its risk level (variance). Based on this principle, we then derive an efficient document ranking algorithm. It generalizes the well-known probability ranking principle (PRP) by considering both the uncertainty of relevance predictions and correlations between retrieved documents. Moreover, the benefit of diversification is mathematically quantified; we show that diversifying documents is an effective way to reduce the risk of document ranking. Experimental results in text retrieval confirm performance.", "Personalization techniques have been widely adopted in many recommender systems. However, experiments on real-world datasets show that for some users in certain contexts, personalized recommendations do not necessarily perform better than recommendations that rely purely on popularity. Broadly, this can be interpreted by the fact that the parameters of a personalization model are usually estimated from sparse data; the resulting personalized prediction, despite of its low bias, is often volatile. 
In this paper, we study the problem further by investigating into the ranking of recommendation lists. From a risk management and portfolio retrieval perspective, there is no difference between the popularity-based and the personalized ranking as both of the recommendation outputs can be represented as the trade-off between expected relevance (reward) and associated uncertainty (risk). Through our analysis, we discover common scenarios and provide a technique to predict whether personalization will fail. Besides the theoretical understanding, our experimental results show that the resulting switch algorithm, which decides whether or not to personalize, outperforms the mainstream recommendation algorithms." ] }
1506.04158
2953325326
This paper presents a novel spectral algorithm with additive clustering designed to identify overlapping communities in networks. The algorithm is based on geometric properties of the spectrum of the expected adjacency matrix in a random graph model that we call stochastic blockmodel with overlap (SBMO). An adaptive version of the algorithm, that does not require the knowledge of the number of hidden communities, is proved to be consistent under the SBMO when the degrees in the graph are (slightly more than) logarithmic. The algorithm is shown to perform well on simulated data and on real-world graphs with known overlapping communities.
Several random graph models have been proposed in the literature to model networks with overlapping communities. In these models, each node @math is characterized by some community membership vector @math that is not always a binary vector, as in the SBMO. In the Mixed-Membership Stochastic Blockmodel (MMSB) @cite_24 , introduced as the first model with overlaps, membership vectors are probability vectors drawn from a Dirichlet distribution. In this model, conditional on @math and @math , the probability that nodes @math and @math are connected is @math for some community connectivity matrix @math , just as in the SBMO. However, the fact that @math and @math are probability vectors makes the model less interpretable. In particular, the probability that two nodes are connected does not necessarily increase with the number of communities they have in common, as pointed out by Yang and Leskovec @cite_7 , which contradicts a tendency empirically observed in social networks.
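As a toy illustration of the connection probability just described (the symbols are inferred from the surrounding text, and the connectivity matrix below is made up):

```python
import numpy as np

# MMSB-style edge probability: each node i has a membership probability
# vector pi_i, and nodes i, j connect with probability pi_i^T B pi_j for a
# community connectivity matrix B. Values here are purely illustrative.

rng = np.random.default_rng(0)
K = 2
B = np.array([[0.9, 0.1],
              [0.1, 0.8]])          # within-community links more likely

pi_i = rng.dirichlet(np.ones(K))    # mixed memberships (probability vectors)
pi_j = rng.dirichlet(np.ones(K))

p_edge = pi_i @ B @ pi_j            # connection probability

# Note: because the memberships are probability vectors, sharing more
# communities need not raise p_edge -- the point made by Yang and Leskovec.
```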
{ "cite_N": [ "@cite_24", "@cite_7" ], "mid": [ "2107107106", "2050239729" ], "abstract": [ "Consider data consisting of pairwise measurements, such as presence or absence of links between pairs of objects. These data arise, for instance, in the analysis of protein interactions and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing pairwise measurements with probabilistic models requires special assumptions, since the usual independence or exchangeability assumptions no longer hold. Here we introduce a class of variance allocation models for pairwise measurements: mixed membership stochastic blockmodels. These models combine global parameters that instantiate dense patches of connectivity (blockmodel) with local parameters that instantiate node-specific variability in the connections (mixed membership). We develop a general variational inference algorithm for fast approximate posterior inference. We demonstrate the advantages of mixed membership stochastic blockmodels with applications to social networks and protein interaction networks.", "One of the main organizing principles in real-world networks is that of network communities, where sets of nodes organize into densely linked clusters. Communities in networks often overlap as nodes can belong to multiple communities at once. Identifying such overlapping communities is crucial for the understanding the structure as well as the function of real-world networks. Even though community structure in networks has been widely studied in the past, practically all research makes an implicit assumption that overlaps between communities are less densely connected than the non-overlapping parts themselves. Here we validate this assumption on 6 large scale social, collaboration and information networks where nodes explicitly state their community memberships. 
By examining such ground-truth communities we find that the community overlaps are more densely connected than the non-overlapping parts, which is in sharp contrast to the conventional wisdom that community overlaps are more sparsely connected than the communities themselves. Practically all existing community detection methods fail to detect communities with dense overlaps. We propose Community-Affiliation Graph Model, a model-based community detection method that builds on bipartite node-community affiliation networks. Our method successfully captures overlapping, non-overlapping as well as hierarchically nested communities, and identifies relevant communities more accurately than the state-of-the-art methods in networks ranging from biological to social and information networks." ] }
The Overlapping Continuous Community Assignment Model (OCCAM), proposed by @cite_25 , relies on overlapping communities but also on individual degree parameters, which generalizes the degree-corrected stochastic blockmodel @cite_11 . In the OCCAM, a degree parameter @math is associated with each node @math . Letting @math , the expected adjacency matrix is @math , with a membership matrix @math . Identifiability of the model is proved under the assumptions that @math is positive definite, each row @math satisfies @math , and the degree parameters satisfy @math . The SBMO can be viewed as a particular instance of the OCCAM, for which we provide new identifiability conditions that allow for binary membership vectors.
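The structure of this expected adjacency matrix can be sketched as follows; the matrices are made up for illustration, and the exact OCCAM normalisation constraints on the membership rows are omitted:

```python
import numpy as np

# Sketch of an OCCAM-style expected adjacency matrix: a degree parameter
# theta_i per node, a membership matrix Z, and a community connectivity
# matrix B, combined as E[A] = Theta Z B Z^T Theta.

theta = np.array([0.5, 0.4, 0.3])        # individual degree parameters
Z = np.array([[1.0, 0.0],                # node 0 in community 1
              [0.0, 1.0],                # node 1 in community 2
              [1.0, 1.0]])               # node 2 overlaps both (SBMO-style)
B = np.array([[0.6, 0.1],
              [0.1, 0.5]])

Theta = np.diag(theta)
EA = Theta @ Z @ B @ Z.T @ Theta         # expected adjacency matrix
```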
{ "cite_N": [ "@cite_25", "@cite_11" ], "mid": [ "1475335089", "2119998616" ], "abstract": [ "Community detection is a fundamental problem in network analysis which is made more challenging by overlaps between communities which often occur in practice. Here we propose a general, flexible, and interpretable generative model for overlapping communities, which can be thought of as a generalization of the degree-corrected stochastic block model. We develop an efficient spectral algorithm for estimating the community memberships, which deals with the overlaps by employing the K-medians algorithm rather than the usual K-means for clustering in the spectral domain. We show that the algorithm is asymptotically consistent when networks are not too sparse and the overlaps between communities not too large. Numerical experiments on both simulated networks and many real social networks demonstrate that our method performs very well compared to a number of benchmark methods for overlapping community detection.", "Stochastic blockmodels have been proposed as a tool for detecting community structure in networks as well as for generating synthetic networks for use as benchmarks. Most blockmodels, however, ignore variation in vertex degree, making them unsuitable for applications to real-world networks, which typically display broad degree distributions that can significantly distort the results. Here we demonstrate how the generalization of blockmodels to incorporate this missing element leads to an improved objective function for community detection in complex networks. We also propose a heuristic algorithm for community detection using this objective function or its non-degree-corrected counterpart and show that the degree-corrected version dramatically outperforms the uncorrected one in both real-world and synthetic networks." ] }
Several algorithmic methods have been proposed to identify overlapping community structure in networks @cite_12 . Among the model-based methods, which rely on the assumption that the observed network is drawn from a random graph model, some approximate the maximum likelihood or maximum a posteriori estimate of the membership vectors under one of the random graph models discussed above. For example, under the MMSB or the OSBM the membership vectors are assumed to be drawn from a (prior) probability distribution, and variational EM algorithms are proposed to approximate the posterior distributions @cite_24 @cite_27 . However, there is no proof of consistency for the proposed algorithms. For the MMSB, a different approach based on tensor power iteration is proposed in @cite_23 to compute an estimator derived from the method of moments, for which the first consistency results are provided.
{ "cite_N": [ "@cite_24", "@cite_27", "@cite_23", "@cite_12" ], "mid": [ "2107107106", "2029289148", "2167026441", "1977713568" ], "abstract": [ "Consider data consisting of pairwise measurements, such as presence or absence of links between pairs of objects. These data arise, for instance, in the analysis of protein interactions and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing pairwise measurements with probabilistic models requires special assumptions, since the usual independence or exchangeability assumptions no longer hold. Here we introduce a class of variance allocation models for pairwise measurements: mixed membership stochastic blockmodels. These models combine global parameters that instantiate dense patches of connectivity (blockmodel) with local parameters that instantiate node-specific variability in the connections (mixed membership). We develop a general variational inference algorithm for fast approximate posterior inference. We demonstrate the advantages of mixed membership stochastic blockmodels with applications to social networks and protein interaction networks.", "Complex systems in nature and in society are often represented as networks, describing the rich set of interactions between objects of interest. Many deterministic and probabilistic clustering methods have been developed to analyze such structures. Given a network, almost all of them partition the vertices into disjoint clusters, according to their connection profile. However, recent studies have shown that these techniques were too restrictive and that most of the existing networks contained overlapping clusters. To tackle this issue, we present in this paper the Overlapping Stochastic Block Model. Our approach allows the vertices to belong to multiple clusters, and, to some extent, generalizes the well-known Stochastic Block Model [Nowicki and Snijders (2001)]. 
We show that the model is generically identifiable within classes of equivalence and we propose an approximate inference procedure, based on global and local variational techniques. Using toy data sets as well as the French Political Blogosphere network and the transcriptional network of Saccharomyces cerevisiae, we compare our work with other approaches.", "Detecting hidden communities from observed interactions is a classical problem. Theoretical analysis of community detection has so far been mostly limited to models with non-overlapping communities such as the stochastic block model. In this paper, we provide guaranteed community detection for a family of probabilistic network models with overlapping communities, termed as the mixed membership Dirichlet model, first introduced in (2008). This model allows for nodes to have fractional memberships in multiple communities and assumes that the community memberships are drawn from a Dirichlet distribution. Moreover, it contains the stochastic block model as a special case. We propose a unified approach to learning communities in these models via a tensor spectral decomposition approach. Our estimator uses low-order moment tensor of the observed network, consisting of 3-star counts. Our learning method is based on simple linear algebraic operations such as singular value decomposition and tensor power iterations. We provide guaranteed recovery of community memberships and model parameters, and present a careful finite sample analysis of our learning method. Additionally, our results match the best known scaling requirements for the special case of the (homogeneous) stochastic block model.", "This article reviews the state-of-the-art in overlapping community detection algorithms, quality measures, and benchmarks. A thorough comparison of different algorithms (a total of fourteen) is provided. 
In addition to community-level evaluation, we propose a framework for evaluating algorithms' ability to detect overlapping nodes, which helps to assess overdetection and underdetection. After considering community-level detection performance measured by normalized mutual information, the Omega index, and node-level detection performance measured by F-score, we reached the following conclusions. For low overlapping density networks, SLPA, OSLOM, Game, and COPRA offer better performance than the other tested algorithms. For networks with high overlapping density and high overlapping diversity, both SLPA and Game provide relatively stable performance. However, test results also suggest that the detection in such networks is still not yet fully resolved. A common feature observed by various algorithms in real-world networks is the relatively small fraction of overlapping nodes (typically less than 30p), each of which belongs to only 2 or 3 communities." ] }
The first spectral algorithm for finding overlapping communities goes back to @cite_18 . The proposed method adapts spectral clustering with the normalized Laplacian (see, e.g., @cite_9 ), using a fuzzy clustering algorithm in place of @math -means, and its justification is rather heuristic. Another spectral algorithm has been proposed by @cite_25 as an estimation procedure for the (non-binary) membership matrix under the OCCAM. The spectral embedding is a row-normalized version of @math , with @math the diagonal matrix containing the @math leading eigenvalues of @math and @math the matrix of associated eigenvectors. The centroids obtained by a @math -median clustering algorithm are then used to estimate @math . This algorithm is proved to be consistent under the OCCAM, provided moreover that the degree parameters and membership vectors are drawn according to certain distributions. Similar assumptions have appeared before in consistency proofs for community detection algorithms in the SBM or DC-SBM @cite_20 . Our consistency results are established for fixed parameters of the model.
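A minimal sketch of this type of spectral embedding (a small deterministic adjacency matrix is used purely for illustration, and the subsequent @math -median clustering step is omitted):

```python
import numpy as np

# Spectral embedding for overlapping community estimation: take the K
# leading eigenpairs of the adjacency matrix A, form U |Lambda|^(1/2),
# and row-normalise before clustering the rows.

A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])

K = 2
evals, evecs = np.linalg.eigh(A)             # eigenvalues in ascending order
idx = np.argsort(np.abs(evals))[::-1][:K]    # K leading by magnitude
U = evecs[:, idx]
X = U * np.sqrt(np.abs(evals[idx]))          # embedding U |Lambda|^{1/2}

norms = np.linalg.norm(X, axis=1, keepdims=True)
X_normalised = X / norms                     # row-normalised embedding
# X_normalised would then be clustered (e.g. with K-medians) to estimate
# the membership matrix.
```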
{ "cite_N": [ "@cite_9", "@cite_18", "@cite_25", "@cite_20" ], "mid": [ "2217748804", "2044719661", "1475335089", "1963954507" ], "abstract": [ "We consider three distinct and well-studied problems concerning network structure: community detection by modularity maximization, community detection by statistical inference, and normalized-cut graph partitioning. Each of these problems can be tackled using spectral algorithms that make use of the eigenvectors of matrix representations of the network. We show that with certain choices of the free parameters appearing in these spectral algorithms the algorithms for all three problems are, in fact, identical, and hence that, at least within the spectral approximations used here, there is no difference between the modularity- and inference-based community detection methods, or between either and graph partitioning.", "Identification of (overlapping) communities clusters in a complex network is a general problem in data mining of network data sets. In this paper, we devise a novel algorithm to identify overlapping communities in complex networks by the combination of a new modularity function based on generalizing NG's Q function, an approximation mapping of network nodes into Euclidean space and fuzzy c-means clustering. Experimental results indicate that the new algorithm is efficient at detecting both good clusterings and the appropriate number of clusters.", "Community detection is a fundamental problem in network analysis which is made more challenging by overlaps between communities which often occur in practice. Here we propose a general, flexible, and interpretable generative model for overlapping communities, which can be thought of as a generalization of the degree-corrected stochastic block model. We develop an efficient spectral algorithm for estimating the community memberships, which deals with the overlaps by employing the K-medians algorithm rather than the usual K-means for clustering in the spectral domain. 
We show that the algorithm is asymptotically consistent when networks are not too sparse and the overlaps between communities not too large. Numerical experiments on both simulated networks and many real social networks demonstrate that our method performs very well compared to a number of benchmark methods for overlapping community detection.", "Community detection is a fundamental problem in network analysis, with applications in many diverse areas. The stochastic block model is a common tool for model-based community detection, and asymptotic tools for checking consistency of community detection under the block model have been recently developed. However, the block model is limited by its assumption that all nodes within a community are stochastically equivalent, and provides a poor fit to networks with hubs or highly varying node degrees within communities, which are common in practice. The degree-corrected stochastic block model was proposed to address this shortcoming and allows variation in node degrees within a community while preserving the overall block community structure. In this paper we establish general theory for checking consistency of community detection under the degree-corrected stochastic block model and compare several community detection criteria under both the standard and the degree-corrected models. We show which criteria are consistent under which models and constraints, as well as compare their relative performance in practice. We find that methods based on the degree-corrected block model, which includes the standard block model as a special case, are consistent under a wider class of models and that modularity-type methods require parameter constraints for consistency, whereas likelihood-based methods do not. On the other hand, in practice, the degree correction involves estimating many more parameters, and empirically we find it is only worth doing if the node degrees within communities are indeed highly variable. 
We illustrate the methods on simulated networks and on a network of political blogs." ] }
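The spectral machinery these abstracts share — eigenvectors of a matrix representation of the network — can be sketched for the modularity case. Below is a minimal Newman-style leading-eigenvector bipartition; the function name and the two-clique example are illustrative assumptions, not code from any of the cited papers:

```python
import numpy as np

def modularity_bipartition(adj):
    """Spectral bipartition via the modularity matrix (illustrative sketch).

    adj: symmetric 0/1 adjacency matrix. The modularity matrix is
    B = A - k k^T / (2m); splitting nodes by the sign of its leading
    eigenvector is the classic spectral heuristic that the unified
    view above relates to inference and normalized-cut partitioning.
    """
    k = adj.sum(axis=1)                      # node degrees
    two_m = k.sum()                          # 2m = total degree
    B = adj - np.outer(k, k) / two_m         # modularity matrix
    vals, vecs = np.linalg.eigh(B)           # eigh: ascending eigenvalues
    lead = vecs[:, np.argmax(vals)]          # leading eigenvector
    return (lead >= 0).astype(int)           # community label per node
```

On a toy graph of two cliques joined by a single edge, the sign split recovers the two communities.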
1506.03648
2952004933
We present an approach to learn a dense pixel-wise labeling from image-level tags. Each image-level tag imposes constraints on the output labeling of a Convolutional Neural Network (CNN) classifier. We propose Constrained CNN (CCNN), a method which uses a novel loss function to optimize for any set of linear constraints on the output space (i.e. predicted label distribution) of a CNN. Our loss formulation is easy to optimize and can be incorporated directly into standard stochastic gradient descent optimization. The key idea is to phrase the training objective as a biconvex optimization for linear models, which we then relax to nonlinear deep networks. Extensive experiments demonstrate the generality of our new learning framework. The constrained loss yields state-of-the-art results on weakly supervised semantic image segmentation. We further demonstrate that adding slightly more supervision can greatly improve the performance of the learning algorithm.
Weakly supervised learning seeks to capture the signal that is common to all the positives but absent from all the negatives. This is challenging due to nuisance variables such as pose, occlusion, and intra-class variation. Learning with weak labels is often phrased as Multiple Instance Learning @cite_35 . It is most frequently formulated as a maximum margin problem, although boosting @cite_16 @cite_11 and Noisy-OR models @cite_20 have been explored as well. The multiple instance max-margin classification problem is non-convex and solved as an alternating minimization of a biconvex objective @cite_24 . MI-SVM @cite_24 and LSVM @cite_7 are two classic methods in this paradigm. This setting naturally applies to weakly-labeled detection @cite_5 @cite_30 . However, most of these approaches are sensitive to the initialization of the detector @cite_13 . Several heuristics have been proposed to address these issues @cite_18 @cite_30 ; however, they are usually specific to detection.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_18", "@cite_7", "@cite_24", "@cite_5", "@cite_16", "@cite_13", "@cite_20", "@cite_11" ], "mid": [ "2952072685", "2110119381", "2951702175", "2168356304", "2108745803", "318792885", "", "2016016818", "2951753283", "2166010828" ], "abstract": [ "Learning to localize objects with minimal supervision is an important problem in computer vision, since large fully annotated datasets are extremely costly to obtain. In this paper, we propose a new method that achieves this goal with only image-level labels of whether the objects are present or not. Our approach combines a discriminative submodular cover problem for automatically discovering a set of positive object windows with a smoothed latent SVM formulation. The latter allows us to leverage efficient quasi-Newton optimization techniques. Our experiments demonstrate that the proposed approach provides a 50 relative improvement in mean average precision over the current state-of-the-art on PASCAL VOC 2007 detection.", "The multiple instance problem arises in tasks where the training examples are ambiguous: a single example object may have many alternative feature vectors (instances) that describe it, and yet only one of those feature vectors may be responsible for the observed classification of the object. This paper describes and compares three kinds of algorithms that learn axis-parallel rectangles to solve the multiple instance problem. Algorithms that ignore the multiple instance problem perform very poorly. An algorithm that directly confronts the multiple instance problem (by attempting to identify which feature vectors are responsible for the observed classifications) performs best, giving 89 correct predictions on a musk odor prediction task. 
The paper also illustrates the use of artificial data to debug and compare these algorithms.", "The goal of this paper is to discover a set of discriminative patches which can serve as a fully unsupervised mid-level visual representation. The desired patches need to satisfy two requirements: 1) to be representative, they need to occur frequently enough in the visual world; 2) to be discriminative, they need to be different enough from the rest of the visual world. The patches could correspond to parts, objects, \"visual phrases\", etc. but are not restricted to be any one of them. We pose this as an unsupervised discriminative clustering problem on a huge dataset of image patches. We use an iterative procedure which alternates between clustering and training discriminative classifiers, while applying careful cross-validation at each step to prevent overfitting. The paper experimentally demonstrates the effectiveness of discriminative patches as an unsupervised mid-level visual representation, suggesting that it could be used in place of visual words for many tasks. Furthermore, discriminative patches can also be used in a supervised regime, such as scene classification, where they demonstrate state-of-the-art performance on the MIT Indoor-67 dataset.", "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. 
A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.", "This paper presents two new formulations of multiple-instance learning as a maximum margin problem. The proposed extensions of the Support Vector Machine (SVM) learning approach lead to mixed integer quadratic programs that can be solved heuristically. Our generalization of SVMs makes a state-of-the-art classification technique, including non-linear classification via kernels, available to an area that up to now has been largely dominated by special purpose methods. We present experimental results on a pharmaceutical data set and on applications in automated image indexing and document categorization.", "Localizing objects in cluttered backgrounds is a challenging task in weakly supervised localization. Due to large object variations in cluttered images, objects have large ambiguity with backgrounds. However, backgrounds contain useful latent information, e.g., the sky for aeroplanes. If we can learn this latent information, object-background ambiguity can be reduced to suppress the background. In this paper, we propose the latent category learning (LCL), which is an unsupervised learning problem given only image-level class labels. Firstly, inspired by the latent semantic discovery, we use the typical probabilistic Latent Semantic Analysis (pLSA) to learn the latent categories, which can represent objects, object parts or backgrounds. Secondly, to determine which category contains the target object, we propose a category selection method evaluating each category’s discrimination. We evaluate the method on the PASCAL VOC 2007 database and ILSVRC 2013 detection challenge. 
On VOC 2007, the proposed method yields the annotation accuracy of 48%, which outperforms previous results by 10%. More importantly, we achieve the detection average precision of 30.9%, which improves previous results by 8% and can be competitive with the supervised deformable part model (DPM) 5.0 baseline of 33.7%. On ILSVRC 2013 detection, the method yields the precision of 6.0%, which is also competitive with the DPM 5.0.", "", "Object category localization is a challenging problem in computer vision. Standard supervised training requires bounding box annotations of object instances. This time-consuming annotation process is sidestepped in weakly supervised learning. In this case, the supervised information is restricted to binary labels that indicate the absence/presence of object instances in the image, without their locations. We follow a multiple-instance learning approach that iteratively trains the detector and infers the object locations in the positive training images. Our main contribution is a multi-fold multiple instance learning procedure, which prevents training from prematurely locking onto erroneous object locations. This procedure is particularly important when high-dimensional representations, such as the Fisher vectors, are used. We present a detailed experimental evaluation using the PASCAL VOC 2007 dataset. Compared to state-of-the-art weakly supervised detectors, our approach better localizes objects in the training images, which translates into improved detection performance.
The time complexity of the algorithm is O(nm^- 2^(m^+)), where n is the number of diseases, m^+ is the number of positive findings and m^- is the number of negative findings. Although the time complexity of quickscore is exponential in the number of positive findings, the algorithm is useful in practice because the number of observed positive findings is usually far less than the number of diseases under consideration. Performance results for quickscore applied to a probabilistic version of Quick Medical Reference (QMR) are provided.", "A good image object detection algorithm is accurate, fast, and does not require exact locations of objects in a training set. We can create such an object detector by taking the architecture of the Viola-Jones detector cascade and training it with a new variant of boosting that we call MIL-Boost. MILBoost uses cost functions from the Multiple Instance Learning literature combined with the AnyBoost framework. We adapt the feature selection criterion of MILBoost to optimize the performance of the Viola-Jones cascade. Experiments show that the detection rate is up to 1.6 times better using MILBoost. This increased detection rate shows the advantage of simultaneously learning the locations and scales of the objects in the training set along with the parameters of the classifier." ] }
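The alternating minimization of a biconvex MIL objective mentioned in the related-work paragraph above can be sketched as follows. This is a toy MI-SVM-style solver under assumed hyperparameters (plain subgradient steps, a fixed learning rate), not the published implementation:

```python
import numpy as np

def mi_svm(bags, labels, dim, n_iters=10, lr=0.1, reg=1e-3):
    """Toy MI-SVM-style alternating minimization (hypothetical sketch).

    bags: list of (n_i, dim) arrays; labels: +1/-1 per bag.
    Alternates between (1) picking the max-scoring "witness" instance
    in each positive bag and (2) fitting a linear hinge-loss classifier
    on the witnesses plus all instances of negative bags. Step (2) is
    convex given the selection; step (1) is the non-convex part.
    """
    w = np.zeros(dim)
    for _ in range(n_iters):
        X, y = [], []
        for bag, lab in zip(bags, labels):
            if lab > 0:
                X.append(bag[np.argmax(bag @ w)])   # witness instance
                y.append(1.0)
            else:
                X.extend(bag)                       # every instance is negative
                y.extend([-1.0] * len(bag))
        X, y = np.array(X), np.array(y)
        for _ in range(100):                        # subgradient steps on hinge loss
            margins = y * (X @ w)
            viol = margins < 1
            grad = reg * w
            if viol.any():
                grad = grad - (y[viol, None] * X[viol]).mean(axis=0)
            w -= lr * grad
    return w
```

On linearly separable toy bags the learned weight vector separates witness instances from negatives.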
1506.03648
2952004933
We present an approach to learn a dense pixel-wise labeling from image-level tags. Each image-level tag imposes constraints on the output labeling of a Convolutional Neural Network (CNN) classifier. We propose Constrained CNN (CCNN), a method which uses a novel loss function to optimize for any set of linear constraints on the output space (i.e. predicted label distribution) of a CNN. Our loss formulation is easy to optimize and can be incorporated directly into standard stochastic gradient descent optimization. The key idea is to phrase the training objective as a biconvex optimization for linear models, which we then relax to nonlinear deep networks. Extensive experiments demonstrate the generality of our new learning framework. The constrained loss yields state-of-the-art results on weakly supervised semantic image segmentation. We further demonstrate that adding slightly more supervision can greatly improve the performance of the learning algorithm.
Traditionally, the problem of weak segmentation and scene parsing with image-level labels has been addressed using graphical models and parametric structured models @cite_21 @cite_10 @cite_6 . Most works exploit low-level image information to connect regions similar in appearance @cite_21 . Chen et al. @cite_22 exploit top-down segmentation priors based on visual subcategories for object discovery. Pinheiro et al. @cite_33 and Pathak et al. @cite_4 extend the multiple-instance learning framework from detection to semantic segmentation using CNNs. Their methods iteratively reinforce well-predicted outputs while suppressing erroneous segmentations contradicting image-level tags. Both algorithms are very sensitive to the initialization, and rely on carefully pretrained classifiers for all layers in the convolutional network. In contrast, our constrained optimization is much less sensitive and recovers a good solution from any random initialization of the classification layer.
{ "cite_N": [ "@cite_4", "@cite_33", "@cite_22", "@cite_21", "@cite_6", "@cite_10" ], "mid": [ "1931270512", "1961881037", "2002754212", "2029731618", "2066606526", "2026581312" ], "abstract": [ "Multiple instance learning (MIL) can reduce the need for costly annotation in tasks such as semantic segmentation by weakening the required degree of supervision. We propose a novel MIL formulation of multi-class semantic segmentation learning by a fully convolutional network. In this setting, we seek to learn a semantic segmentation model from just weak image-level labels. The model is trained end-to-end to jointly optimize the representation while disambiguating the pixel-image label assignment. Fully convolutional training accepts inputs of any size, does not need object proposal pre-processing, and offers a pixelwise loss map for selecting latent instances. Our multi-class MIL loss exploits the further supervision given by images with multiple labels. We evaluate this approach through preliminary experiments on the PASCAL VOC segmentation challenge.", "", "There have been some recent efforts to build visual knowledge bases from Internet images. But most of these approaches have focused on bounding box representation of objects. In this paper, we propose to enrich these knowledge bases by automatically discovering objects and their segmentations from noisy Internet images. Specifically, our approach combines the power of generative modeling for segmentation with the effectiveness of discriminative models for detection. The key idea behind our approach is to learn and exploit top-down segmentation priors based on visual subcategories. The strong priors learned from these visual subcategories are then combined with discriminatively trained detectors and bottom up cues to produce clean object segmentations. 
Our experimental results indicate state-of-the-art performance on the difficult dataset introduced by [29]. We have integrated our algorithm in NEIL for enriching its knowledge base [5]. As of 14th April 2014, NEIL has automatically generated approximately 500K segmentations using web data.", "We address the task of learning a semantic segmentation from weakly supervised data. Our aim is to devise a system that predicts an object label for each pixel by making use of only image level labels during training – the information whether a certain object is present or not in the image. Such coarse tagging of images is faster and easier to obtain as opposed to the tedious task of pixelwise labeling required in state of the art systems. We cast this task naturally as a multiple instance learning (MIL) problem. We use Semantic Texton Forest (STF) as the basic framework and extend it for the MIL setting. We make use of multitask learning (MTL) to regularize our solution. Here, an external task of geometric context estimation is used to improve on the task of semantic segmentation. We report experimental results on the MSRC21 and the very challenging VOC2007 datasets. On MSRC21 dataset we are able, by using 276 weakly labeled images, to achieve the performance of a supervised STF trained on pixelwise labeled training set of 56 images, which is a significant reduction in supervision needed.", "Weakly-supervised image segmentation is a challenging problem with multidisciplinary applications in multimedia content analysis and beyond. It aims to segment an image by leveraging its image-level semantics (i.e., tags). This paper presents a weakly-supervised image segmentation algorithm that learns the distribution of spatially structural superpixel sets from image-level labels. More specifically, we first extract graphlets from a given image, which are small-sized graphs consisting of superpixels and encapsulating their spatial structure.
Then, an efficient manifold embedding algorithm is proposed to transfer labels from training images into graphlets. It is further observed that there are numerous redundant graphlets that are not discriminative to semantic categories, which are abandoned by a graphlet selection scheme as they make no contribution to the subsequent segmentation. Thereafter, we use a Gaussian mixture model (GMM) to learn the distribution of the selected post-embedding graphlets (i.e., vectors output from the graphlet embedding). Finally, we propose an image segmentation algorithm, termed representative graphlet cut, which leverages the learned GMM prior to measure the structure homogeneity of a test image. Experimental results show that the proposed approach outperforms state-of-the-art weakly-supervised image segmentation methods, on five popular segmentation data sets. Besides, our approach performs competitively to the fully-supervised segmentation models.", "We address the problem of weakly supervised semantic segmentation. The training images are labeled only by the classes they contain, not by their location in the image. On test images instead, the method must predict a class label for every pixel. Our goal is to enable segmentation algorithms to use multiple visual cues in this weakly supervised setting, analogous to what is achieved by fully supervised methods. However, it is difficult to assess the relative usefulness of different visual cues from weakly supervised training data. We define a parametric family of structured models, were each model weights visual cues in a different way. We propose a Maximum Expected Agreement model selection principle that evaluates the quality of a model from the family without looking at superpixel labels. Searching for the best model is a hard optimization problem, which has no analytic gradient and multiple local optima. 
We cast it as a Bayesian optimization problem and propose an algorithm based on Gaussian processes to efficiently solve it. Our second contribution is an Extremely Randomized Hashing Forest that represents diverse superpixel features as a sparse binary vector. It enables using appearance models of visual classes that are fast at training and testing and yet accurate. Experiments on the SIFT-flow dataset show a significant improvement over previous weakly supervised methods and even over some fully supervised methods." ] }
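The MIL-to-segmentation extension described in the related-work paragraph above hinges on aggregating per-pixel scores into an image-level prediction so that only tag-level supervision is needed. Below is a minimal sketch of such a loss; the simple max pooling and sigmoid are one aggregation choice among several in the literature, assumed here for illustration:

```python
import numpy as np

def mil_seg_loss(score_map, tags):
    """Image-level MIL loss for weak segmentation (illustrative sketch).

    score_map: (C, H, W) per-pixel class logits from a conv net.
    tags:      (C,) binary image-level labels.
    Each class score is max-pooled over all pixels, so a positive tag
    only requires *some* pixel to fire for that class, mirroring the
    multiple-instance view of segmentation.
    """
    pooled = score_map.reshape(score_map.shape[0], -1).max(axis=1)  # (C,)
    probs = 1.0 / (1.0 + np.exp(-pooled))                           # sigmoid
    eps = 1e-9                                                      # numerical safety
    return -np.mean(tags * np.log(probs + eps)
                    + (1 - tags) * np.log(1 - probs + eps))
```

An image whose tags match its strongest pixel responses incurs a much smaller loss than one with contradicting tags.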
1506.03648
2952004933
We present an approach to learn a dense pixel-wise labeling from image-level tags. Each image-level tag imposes constraints on the output labeling of a Convolutional Neural Network (CNN) classifier. We propose Constrained CNN (CCNN), a method which uses a novel loss function to optimize for any set of linear constraints on the output space (i.e. predicted label distribution) of a CNN. Our loss formulation is easy to optimize and can be incorporated directly into standard stochastic gradient descent optimization. The key idea is to phrase the training objective as a biconvex optimization for linear models, which we then relax to nonlinear deep networks. Extensive experiments demonstrate the generality of our new learning framework. The constrained loss yields state-of-the-art results on weakly supervised semantic image segmentation. We further demonstrate that adding slightly more supervision can greatly improve the performance of the learning algorithm.
Papandreou et al. @cite_34 include an adaptive bias into the multi-instance learning framework. Their algorithm boosts classes known to be present and suppresses all others. We show that this simple heuristic can be viewed as a special case of a constrained optimization, where the adaptive bias controls the constraint satisfaction. However, the constraints that can be modeled by this adaptive bias are limited and cannot leverage the full power of weak labels. In this paper, we show how to apply more general linear constraints, which lead to better segmentation performance.
{ "cite_N": [ "@cite_34" ], "mid": [ "1529410181" ], "abstract": [ "Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state-of-art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at this https URL" ] }
1506.03648
2952004933
We present an approach to learn a dense pixel-wise labeling from image-level tags. Each image-level tag imposes constraints on the output labeling of a Convolutional Neural Network (CNN) classifier. We propose Constrained CNN (CCNN), a method which uses a novel loss function to optimize for any set of linear constraints on the output space (i.e. predicted label distribution) of a CNN. Our loss formulation is easy to optimize and can be incorporated directly into standard stochastic gradient descent optimization. The key idea is to phrase the training objective as a biconvex optimization for linear models, which we then relax to nonlinear deep networks. Extensive experiments demonstrate the generality of our new learning framework. The constrained loss yields state-of-the-art results on weakly supervised semantic image segmentation. We further demonstrate that adding slightly more supervision can greatly improve the performance of the learning algorithm.
The resulting algorithm is similar to generalized expectation @cite_31 and posterior regularization @cite_23 in natural language processing. Both methods train a parametric model that matches certain expectation constraints by applying a penalty to the objective function. Generalized expectation adds the expected constraint penalty directly to the objective, which for convolutional networks is hard and expensive to evaluate directly. Ganchev et al. @cite_23 constrain an auxiliary variable, yielding an algorithm similar to our objective in dual space.
{ "cite_N": [ "@cite_31", "@cite_23" ], "mid": [ "2122052811", "2154368244" ], "abstract": [ "In this paper, we present an overview of generalized expectation criteria (GE), a simple, robust, scalable method for semi-supervised training using weakly-labeled data. GE fits model parameters by favoring models that match certain expectation constraints, such as marginal label distributions, on the unlabeled data. This paper shows how to apply generalized expectation criteria to two classes of parametric models: maximum entropy models and conditional random fields. Experimental results demonstrate accuracy improvements over supervised training and a number of other state-of-the-art semi-supervised learning methods for these models.", "We present posterior regularization, a probabilistic framework for structured, weakly supervised learning. Our framework efficiently incorporates indirect supervision via constraints on posterior distributions of probabilistic models with latent variables. Posterior regularization separates model complexity from the complexity of structural constraints it is desired to satisfy. By directly imposing decomposable regularization on the posterior moments of latent variables during learning, we retain the computational efficiency of the unconstrained model while ensuring desired constraints hold in expectation. We present an efficient algorithm for learning with posterior regularization and illustrate its versatility on a diverse set of structural constraints such as bijectivity, symmetry and group sparsity in several large scale experiments, including multi-view learning, cross-lingual dependency grammar induction, unsupervised part-of-speech induction, and bitext word alignment." ] }
1506.03551
1149203955
Data Centers (DCs) are required to be scalable to large data sets so as to accommodate ever increasing demands of resource-limited embedded and mobile devices. Thanks to the availability of recent high data rate millimeter-wave frequency spectrum such as 60GHz and due to the favorable attributes of this technology, wireless DC (WDC) exhibits the potentials of being a promising solution especially for small to medium scale DCs. This paper investigates the problem of throughput scalability of WDCs using the established theory of the asymptotic throughput of wireless multi-hop networks that are primarily proposed for homogeneous traffic conditions. The rate-heterogeneous traffic distribution of a data center however, requires the asymptotic heterogeneous throughput knowledge of a wireless network in order to study the performance and feasibility of WDCs for practical purposes. To answer these questions this paper presents a lower bound for the throughput scalability of a multi-hop rate-heterogeneous network when traffic generation rates of all nodes are similar, except one node. We demonstrate that the throughput scalability of conventional multi-hopping and the spatial reuse of the above bi-rate network is inefficient and henceforth develop a speculative 2-partitioning scheme that improves the network throughput scaling potentials. A better lower bound of the throughput is then obtained. Finally, we obtain the throughput scaling of an i.i.d. rate-heterogeneous network and obtain its lower bound. Again we propose a speculative 2-partitioning scheme to achieve a network with higher throughput in terms of improved lower bound. All of the obtained results have been verified using simulation experiments.
@cite_10 attempts to address practical issues in realizing a WDC by proposing a hybrid wired/wireless architecture and scheduling wireless links in a distributed manner. The architecture has been modeled and an optimization problem has been formulated to schedule the links; to trade off complexity for practicality, a heuristic algorithm is presented.
{ "cite_N": [ "@cite_10" ], "mid": [ "2037349970" ], "abstract": [ "Data centers play a key role in the expansion of cloud computing. However, the efficiency of data center networks is limited by oversubscription. The typical unbalanced traffic distributions of a DCN further aggravate the problem. Wireless networking, as a complementary technology to Ethernet, has the flexibility and capability to provide feasible approaches to handle the problem. In this article, we analyze the challenges of DCNs and articulate the motivations of employing wireless in DCNs. We also propose a hybrid Ethernet wireless DCN architecture and a mechanism to dynamically schedule wireless transmissions based on traffic demands. Our simulation study demonstrates the effectiveness of the proposed wireless DCN." ] }
1506.03551
1149203955
Data Centers (DCs) are required to be scalable to large data sets so as to accommodate ever increasing demands of resource-limited embedded and mobile devices. Thanks to the availability of recent high data rate millimeter-wave frequency spectrum such as 60GHz and due to the favorable attributes of this technology, wireless DC (WDC) exhibits the potentials of being a promising solution especially for small to medium scale DCs. This paper investigates the problem of throughput scalability of WDCs using the established theory of the asymptotic throughput of wireless multi-hop networks that are primarily proposed for homogeneous traffic conditions. The rate-heterogeneous traffic distribution of a data center however, requires the asymptotic heterogeneous throughput knowledge of a wireless network in order to study the performance and feasibility of WDCs for practical purposes. To answer these questions this paper presents a lower bound for the throughput scalability of a multi-hop rate-heterogeneous network when traffic generation rates of all nodes are similar, except one node. We demonstrate that the throughput scalability of conventional multi-hopping and the spatial reuse of the above bi-rate network is inefficient and henceforth develop a speculative 2-partitioning scheme that improves the network throughput scaling potentials. A better lower bound of the throughput is then obtained. Finally, we obtain the throughput scaling of an i.i.d. rate-heterogeneous network and obtain its lower bound. Again we propose a speculative 2-partitioning scheme to achieve a network with higher throughput in terms of improved lower bound. All of the obtained results have been verified using simulation experiments.
Recently, a methodology for building wire-free data centers based on 60-GHz radio frequency (RF) technology has been presented. Exploring the design space demonstrates the potential of fully wireless DCs with respect to several major performance measures @cite_13 .
{ "cite_N": [ "@cite_13" ], "mid": [ "2156402011" ], "abstract": [ "Conventional datacenters, based on wired networks, entail high wiring costs, suffer from performance bottlenecks, and have low resilience to network failures. In this paper, we investigate a radically new methodology for building wire-free datacenters based on emerging 60GHz RF technology. We propose a novel rack design and a resulting network topology inspired by Cayley graphs that provide a dense interconnect. Our exploration of the resulting design space shows that wireless datacenters built with this methodology can potentially attain higher aggregate bandwidth, lower latency, and substantially higher fault tolerance than a conventional wired datacenter while improving ease of construction and maintenance." ] }
1506.03551
1149203955
Data Centers (DCs) are required to be scalable to large data sets so as to accommodate ever increasing demands of resource-limited embedded and mobile devices. Thanks to the availability of recent high data rate millimeter-wave frequency spectrum such as 60GHz and due to the favorable attributes of this technology, wireless DC (WDC) exhibits the potentials of being a promising solution especially for small to medium scale DCs. This paper investigates the problem of throughput scalability of WDCs using the established theory of the asymptotic throughput of wireless multi-hop networks that are primarily proposed for homogeneous traffic conditions. The rate-heterogeneous traffic distribution of a data center however, requires the asymptotic heterogeneous throughput knowledge of a wireless network in order to study the performance and feasibility of WDCs for practical purposes. To answer these questions this paper presents a lower bound for the throughput scalability of a multi-hop rate-heterogeneous network when traffic generation rates of all nodes are similar, except one node. We demonstrate that the throughput scalability of conventional multi-hopping and the spatial reuse of the above bi-rate network is inefficient and henceforth develop a speculative 2-partitioning scheme that improves the network throughput scaling potentials. A better lower bound of the throughput is then obtained. Finally, we obtain the throughput scaling of an i.i.d. rate-heterogeneous network and obtain its lower bound. Again we propose a speculative 2-partitioning scheme to achieve a network with higher throughput in terms of improved lower bound. All of the obtained results have been verified using simulation experiments.
A multiple-input multiple-output (MIMO) link design scheme for WDC applications has been studied in @cite_7 , where the impact of MIMO degrees of freedom is explored in a multi-node packet networking environment.
{ "cite_N": [ "@cite_7" ], "mid": [ "2023633615" ], "abstract": [ "This paper deals with mm Wave MIMO link design strategy for wireless data center applications where the MIMO degrees of freedom is taken into account in multi-node packet networking environments. The problem is treated differently from the coordinated multiuser MIMO situation and the link design is optimized independently in each node with meeting control and data plane requirements for contention-based packet switching. In particular, we propose using an interference-aligned out-of-band control plane to improve the unidirectional bonded in-band data plane collision-related performance degradation with a limited number of antenna elements per node. We also present a high-level implementation plan." ] }
1506.03551
1149203955
Data Centers (DCs) are required to be scalable to large data sets so as to accommodate the ever-increasing demands of resource-limited embedded and mobile devices. Thanks to the availability of recent high-data-rate millimeter-wave frequency spectrum such as 60GHz, and due to the favorable attributes of this technology, the wireless DC (WDC) shows potential as a promising solution, especially for small- to medium-scale DCs. This paper investigates the problem of throughput scalability of WDCs using the established theory of the asymptotic throughput of wireless multi-hop networks, which was primarily developed for homogeneous traffic conditions. The rate-heterogeneous traffic distribution of a data center, however, requires knowledge of the asymptotic heterogeneous throughput of a wireless network in order to study the performance and feasibility of WDCs for practical purposes. To answer these questions, this paper presents a lower bound for the throughput scalability of a multi-hop rate-heterogeneous network when the traffic generation rates of all nodes are similar, except for one node. We demonstrate that the throughput scalability of conventional multi-hopping and spatial reuse in the above bi-rate network is inefficient, and we therefore develop a speculative 2-partitioning scheme that improves the network's throughput scaling potential. A better lower bound on the throughput is then obtained. Finally, we obtain the throughput scaling of an i.i.d. rate-heterogeneous network and derive its lower bound. Again, we propose a speculative 2-partitioning scheme to achieve a network with higher throughput in terms of an improved lower bound. All of the obtained results have been verified using simulation experiments.
Gupta and Kumar @cite_19 initiated research on wireless network throughput scaling for nodes that are randomly and independently distributed with equal rates under unicast traffic. They showed that each source-destination pair can achieve a bit rate on the order of @math when @math tends to infinity, resulting in an aggregate throughput of @math . They also showed @math scaling for networks with arbitrary placement of nodes. Strategies achieving the same bound are proposed in @cite_30 @cite_25 .
{ "cite_N": [ "@cite_30", "@cite_19", "@cite_25" ], "mid": [ "2135377440", "2137775453", "" ], "abstract": [ "We study wireless ad hoc networks with a large number of nodes. We first focus on a network of n immobile nodes, each with a destination node chosen in random. We develop a scheme under which, in the absence of fading, the network can provide each node with a traffic rate spl lambda sub 1 (n)=K sub 1 (nlog n) sup -1 2 . This result was first shown in J. Hightower and G. Borriello (2001) under a similar setting, however the proof presented here is shorter and uses only basic probability tools. We then proceed to show that, under a general model of fading, each node can send data to its destination with a rate spl lambda sub 2 (n)=K sub 2 n sup -1 sup 2 (log n) sup -3 2 . Next, we extend our formulation to study the effects of node mobility. We first develop a simple scheme under which each of the a mobile nodes can send data to a randomly chosen destination node with a rate spl lambda sub 3 (n)=K sub 3 n sup -1 2 (log n) sup -3 2 , and with a fixed upper bound on the packet delay d sub max that does not depend on n. We subsequently develop a scheme under which each of the nodes can send data to its destination with a rate spl lambda sub 4 (n)=K sub 4 n sup (d-1) 2 (log n) sup -5 2 provided that nodes are willing to tolerate packet delays smaller than d sub max (n)<K sub 5 n sup d , where 0<d<1. With both schemes, a general model of fading is assumed. In addition, nodes require no global topology or routing information, and only need to coordinate locally. 
The above results hold for an appropriate choice of values for the constants K sub i , and with probability approaching 1 as the number of nodes n approaches infinity.", "When n identical randomly located nodes, each capable of transmitting at W bits per second and using a fixed range, form a wireless network, the throughput spl lambda (n) obtainable by each node for a randomly chosen destination is spl Theta (W spl radic (nlogn)) bits per second under a noninterference protocol. If the nodes are optimally placed in a disk of unit area, traffic patterns are optimally assigned, and each transmission's range is optimally chosen, the bit-distance product that can be transported by the network per second is spl Theta (W spl radic An) bit-meters per second. Thus even under optimal circumstances, the throughput is only spl Theta (W spl radic n) bits per second for each node for a destination nonvanishingly far away. Similar results also hold under an alternate physical model where a required signal-to-interference ratio is specified for successful receptions. Fundamentally, it is the need for every node all over the domain to share whatever portion of the channel it is utilizing with nodes in its local neighborhood that is the reason for the constriction in capacity. Splitting the channel into several subchannels does not change any of the results. Some implications may be worth considering by designers. Since the throughput furnished to each user diminishes to zero as the number of users is increased, perhaps networks connecting smaller numbers of users, or featuring connections mostly with nearby neighbors, may be more likely to be find acceptance.", "" ] }
1506.03551
1149203955
Data Centers (DCs) are required to be scalable to large data sets so as to accommodate the ever-increasing demands of resource-limited embedded and mobile devices. Thanks to the availability of recent high-data-rate millimeter-wave frequency spectrum such as 60GHz, and due to the favorable attributes of this technology, the wireless DC (WDC) shows potential as a promising solution, especially for small- to medium-scale DCs. This paper investigates the problem of throughput scalability of WDCs using the established theory of the asymptotic throughput of wireless multi-hop networks, which was primarily developed for homogeneous traffic conditions. The rate-heterogeneous traffic distribution of a data center, however, requires knowledge of the asymptotic heterogeneous throughput of a wireless network in order to study the performance and feasibility of WDCs for practical purposes. To answer these questions, this paper presents a lower bound for the throughput scalability of a multi-hop rate-heterogeneous network when the traffic generation rates of all nodes are similar, except for one node. We demonstrate that the throughput scalability of conventional multi-hopping and spatial reuse in the above bi-rate network is inefficient, and we therefore develop a speculative 2-partitioning scheme that improves the network's throughput scaling potential. A better lower bound on the throughput is then obtained. Finally, we obtain the throughput scaling of an i.i.d. rate-heterogeneous network and derive its lower bound. Again, we propose a speculative 2-partitioning scheme to achieve a network with higher throughput in terms of an improved lower bound. All of the obtained results have been verified using simulation experiments.
The authors of @cite_11 removed the gap between the throughput of randomly located and arbitrarily located nodes, showing that the total throughput scales as @math .
{ "cite_N": [ "@cite_11" ], "mid": [ "2135356058" ], "abstract": [ "An achievable bit rate per source-destination pair in a wireless network of n randomly located nodes is determined adopting the scaling limit approach of statistical physics. It is shown that randomly scattered nodes can achieve, with high probability, the same 1 radicn transmission rate of arbitrarily located nodes. This contrasts with previous results suggesting that a 1 radicnlogn reduced rate is the price to pay for the randomness due to the location of the nodes. The network operation strategy to achieve the result corresponds to the transition region between order and disorder of an underlying percolation model. If nodes are allowed to transmit over large distances, then paths of connected nodes that cross the entire network area can be easily found, but these generate excessive interference. If nodes transmit over short distances, then such crossing paths do not exist. Percolation theory ensures that crossing paths form in the transition region between these two extreme scenarios. Nodes along these paths are used as a backbone, relaying data for other nodes, and can transport the total amount of information generated by all the sources. A lower bound on the achievable bit rate is then obtained by performing pairwise coding and decoding at each hop along the paths, and using a time division multiple access scheme" ] }
1506.03551
1149203955
Data Centers (DCs) are required to be scalable to large data sets so as to accommodate the ever-increasing demands of resource-limited embedded and mobile devices. Thanks to the availability of recent high-data-rate millimeter-wave frequency spectrum such as 60GHz, and due to the favorable attributes of this technology, the wireless DC (WDC) shows potential as a promising solution, especially for small- to medium-scale DCs. This paper investigates the problem of throughput scalability of WDCs using the established theory of the asymptotic throughput of wireless multi-hop networks, which was primarily developed for homogeneous traffic conditions. The rate-heterogeneous traffic distribution of a data center, however, requires knowledge of the asymptotic heterogeneous throughput of a wireless network in order to study the performance and feasibility of WDCs for practical purposes. To answer these questions, this paper presents a lower bound for the throughput scalability of a multi-hop rate-heterogeneous network when the traffic generation rates of all nodes are similar, except for one node. We demonstrate that the throughput scalability of conventional multi-hopping and spatial reuse in the above bi-rate network is inefficient, and we therefore develop a speculative 2-partitioning scheme that improves the network's throughput scaling potential. A better lower bound on the throughput is then obtained. Finally, we obtain the throughput scaling of an i.i.d. rate-heterogeneous network and derive its lower bound. Again, we propose a speculative 2-partitioning scheme to achieve a network with higher throughput in terms of an improved lower bound. All of the obtained results have been verified using simulation experiments.
The results mentioned so far assume no cooperation among nodes. Xie and Kumar @cite_20 investigated multi-hop wireless networks with node cooperation. Özgür @cite_26 proposed an order-optimal scheme with the help of the distributed MIMO technique.
{ "cite_N": [ "@cite_26", "@cite_20" ], "mid": [ "2002649876", "2162180430" ], "abstract": [ "n source and destination pairs randomly located in an area want to communicate with each other. Signals transmitted from one user to another at distance r apart are subject to a power loss of r-alpha as well as a random phase. We identify the scaling laws of the information-theoretic capacity of the network when nodes can relay information for each other. In the case of dense networks, where the area is fixed and the density of nodes increasing, we show that the total capacity of the network scales linearly with n. This improves on the best known achievability result of n2 3 of Aeron and Saligrama. In the case of extended networks, where the density of nodes is fixed and the area increasing linearly with n, we show that this capacity scales as n2-alpha 2 for 2lesalpha 4. Thus, much better scaling than multihop can be achieved in dense networks, as well as in extended networks with low attenuation. The performance gain is achieved by intelligent node cooperation and distributed multiple-input multiple-output (MIMO) communication. The key ingredient is a hierarchical and digital architecture for nodal exchange of information for realizing the cooperation.", "How much information can be carried over a wireless network with a multiplicity of nodes, and how should the nodes cooperate to transfer information? To study these questions, we formulate a model of wireless networks that particularly takes into account the distances between nodes, and the resulting attenuation of radio signals, and study a performance measure that weights information by the distance over which it is transported. Consider a network with the following features. I) n nodes located on a plane, with minimum separation distance spl rho sub min >0. 
II) A simplistic model of signal attenuation e sup - spl gamma spl rho spl rho sup spl delta over a distance spl rho , where spl gamma spl ges 0 is the absorption constant (usually positive, unless over a vacuum), and spl delta >0 is the path loss exponent. III) All receptions subject to additive Gaussian noise of variance spl sigma sup 2 . The performance measure we mainly, but not exclusively, study is the transport capacity C sub T :=sup spl Sigma on sub spl lscr =1 sup m R sub spl lscr spl middot spl rho sub spl lscr , where the supremum is taken over m, and vectors (R sub 1 ,R sub 2 ,...,R sub m ) of feasible rates for m source-destination pairs, and spl rho sub spl lscr is the distance between the spl lscr th source and its destination. It is the supremum distance-weighted sum of rates that the wireless network can deliver. We show that there is a dichotomy between the cases of relatively high and relatively low attenuation. When spl gamma >0 or spl delta >3, the relatively high attenuation case, the transport capacity is bounded by a constant multiple of the sum of the transmit powers of the nodes in the network. However, when spl gamma =0 and spl delta <3 2, the low-attenuation case, we show that there exist networks that can provide unbounded transport capacity for fixed total power, yielding zero energy priced communication. Examples show that nodes can profitably cooperate over large distances using coherence and multiuser estimation when the attenuation is low. These results are established by developing a coding scheme and an achievable rate for Gaussian multiple-relay channels, a result that may be of interest in its own right." ] }
1506.03478
2953250761
Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels. Recurrent neural networks have been successful in capturing long-range dependencies in a number of problems but only recently have found their way into generative image models. We here introduce a recurrent image model based on multi-dimensional long short-term memory units which are particularly suited for image modeling due to their spatial structure. Our model scales to images of arbitrary size and its likelihood is computationally tractable. We find that it outperforms the state of the art in quantitative comparisons on several image datasets and produces promising results when used for texture synthesis and inpainting.
Larochelle & Murray @cite_20 derived a tractable density estimator (NADE) in a manner similar to how the MCGSM was derived @cite_1 , but using restricted Boltzmann machines (RBMs) instead of mixture models as a starting point. In contrast to the MCGSM, NADE tries to keep the weight-sharing constraints induced by the RBM (Equation ). NADE was later extended to real values @cite_25 , and hidden layers were introduced to the model @cite_13 . @cite_27 describe a related autoregressive network for binary data which additionally allows for stochastic hidden units.
{ "cite_N": [ "@cite_1", "@cite_27", "@cite_13", "@cite_25", "@cite_20" ], "mid": [ "2079256176", "2949595773", "2952295562", "2952366348", "2135181320" ], "abstract": [ "We present a probabilistic model for natural images that is based on mixtures of Gaussian scale mixtures and a simple multiscale representation. We show that it is able to generate images with interesting higher-order correlations when trained on natural images or samples from an occlusion-based model. More importantly, our multiscale model allows for a principled evaluation. While it is easy to generate visually appealing images, we demonstrate that our model also yields the best performance reported to date when evaluated with respect to the cross-entropy rate, a measure tightly linked to the average log-likelihood. The ability to quantitatively evaluate our model differentiates it from other multiscale models, for which evaluation of these kinds of measures is usually intractable.", "We introduce a deep, generative autoencoder capable of learning hierarchies of distributed representations from data. Successive deep stochastic hidden layers are equipped with autoregressive connections, which enable the model to be sampled from quickly and exactly via ancestral sampling. We derive an efficient approximate parameter estimation method based on the minimum description length (MDL) principle, which can be seen as maximising a variational lower bound on the log-likelihood, with a feedforward neural network implementing approximate inference. We demonstrate state-of-the-art generative performance on a number of classic data sets: several UCI data sets, MNIST and Atari 2600 games.", "The Neural Autoregressive Distribution Estimator (NADE) and its real-valued version RNADE are competitive density models of multidimensional data across a variety of domains. These models use a fixed, arbitrary ordering of the data dimensions. 
One can easily condition on variables at the beginning of the ordering, and marginalize out variables at the end of the ordering, however other inference tasks require approximate inference. In this work we introduce an efficient procedure to simultaneously train a NADE model for each possible ordering of the variables, by sharing parameters across all these models. We can thus use the most convenient model for each inference task at hand, and ensembles of such models with different orderings are immediately available. Moreover, unlike the original NADE, our training procedure scales to deep models. Empirically, ensembles of Deep NADE models obtain state of the art density estimation performance.", "We introduce RNADE, a new model for joint density estimation of real-valued vectors. Our model calculates the density of a datapoint as the product of one-dimensional conditionals modeled using mixture density networks with shared parameters. RNADE learns a distributed representation of the data, while having a tractable expression for the calculation of densities. A tractable likelihood allows direct comparison with other methods and training by standard gradient-based optimizers. We compare the performance of RNADE on several datasets of heterogeneous and perceptual data, finding it outperforms mixture models in all but one case.", "We describe a new approach for modeling the distribution of high-dimensional vectors of discrete variables. This model is inspired by the restricted Boltzmann machine (RBM), which has been shown to be a powerful model of such distributions. However, an RBM typically does not provide a tractable distribution estimator, since evaluating the probability it assigns to some given observation requires the computation of the so-called partition function, which itself is intractable for RBMs of even moderate size. 
Our model circumvents this diculty by decomposing the joint distribution of observations into tractable conditional distributions and modeling each conditional using a non-linear function similar to a conditional of an RBM. Our model can also be interpreted as an autoencoder wired such that its output can be used to assign valid probabilities to observations. We show that this new model outperforms other multivariate binary distribution estimators on several datasets and performs similarly to a large (but intractable) RBM." ] }
1506.03478
2953250761
Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels. Recurrent neural networks have been successful in capturing long-range dependencies in a number of problems but only recently have found their way into generative image models. We here introduce a recurrent image model based on multi-dimensional long short-term memory units which are particularly suited for image modeling due to their spatial structure. Our model scales to images of arbitrary size and its likelihood is computationally tractable. We find that it outperforms the state of the art in quantitative comparisons on several image datasets and produces promising results when used for texture synthesis and inpainting.
@cite_5 used one-dimensional LSTMs to generate images in a sequential manner (DRAW). Because the model was defined over Bernoulli variables, normalized RGB values had to be treated as probabilities, making a direct comparison with other image models difficult. In contrast to our model, the presence of stochastic latent variables in DRAW means that its likelihood cannot be evaluated but has to be approximated.
{ "cite_N": [ "@cite_5" ], "mid": [ "1850742715" ], "abstract": [ "This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye." ] }
1506.03478
2953250761
Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels. Recurrent neural networks have been successful in capturing long-range dependencies in a number of problems but only recently have found their way into generative image models. We here introduce a recurrent image model based on multi-dimensional long short-term memory units which are particularly suited for image modeling due to their spatial structure. Our model scales to images of arbitrary size and its likelihood is computationally tractable. We find that it outperforms the state of the art in quantitative comparisons on several image datasets and produces promising results when used for texture synthesis and inpainting.
@cite_30 and @cite_32 use one-dimensional recurrent neural networks to model videos, but recurrence is not used to describe the distribution over individual frames. @cite_32 optimize a squared error, corresponding to a Gaussian assumption, while @cite_30 try to side-step having to model pixel intensities by quantizing image patches. In contrast, here we also try to solve the problem of modeling pixel intensities by using an MCGSM, which is equipped to model heavy-tailed as well as multi-modal distributions.
{ "cite_N": [ "@cite_30", "@cite_32" ], "mid": [ "1568514080", "2952453038" ], "abstract": [ "We propose a strong baseline model for unsupervised feature learning using video data. By learning to predict missing frames or extrapolate future frames from an input video sequence, the model discovers both spatial and temporal correlations which are useful to represent complex deformations and motion patterns. The models we propose are largely borrowed from the language modeling literature, and adapted to the vision domain by quantizing the space of image patches into a large dictionary. We demonstrate the approach on both a filling and a generation task. For the first time, we show that, after training on natural videos, such a model can predict non-trivial motions over short video sequences.", "We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. 
We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance." ] }
1506.03495
1909366555
Emergency events involving fire are potentially harmful, demanding fast and precise decision making. The use of crowd-sourced images and videos in crisis management systems can aid in these situations by providing more information than verbal or textual descriptions. Due to the usually high volume of data, automatic solutions need to discard non-relevant content without losing relevant information. There are several methods for fire detection in video using color-based models. However, they are not adequate for still-image processing, because they can suffer from high false-positive rates. These methods also rely on parameters with little physical meaning, which makes fine-tuning a difficult task. In this context, we propose a novel fire detection method for still images that uses classification based on color features combined with texture classification on superpixel regions. Our method uses a reduced number of parameters compared to previous works, easing the process of fine-tuning the method. Results show the effectiveness of our method in reducing false positives while its precision remains compatible with state-of-the-art methods.
A rule-based fire detection method was proposed in the work of Chen @cite_9 . They define a set of three rules using a combination of the RGB and HSI color spaces; the user, in turn, must set two threshold parameters to detect fire pixels. Another color-based method was proposed by Celik @cite_22 , who conducted a wide-ranging study of the color of fire pixels in order to define a model. This method defines a set of five mathematical rules that compare channel intensities in the YCbCr color space, chosen because YCbCr discriminates fire better @cite_22 @cite_21 . In this work, too, the user must set a threshold for one of the rules.
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_22" ], "mid": [ "2111586857", "2053186823", "2011160765" ], "abstract": [ "The paper presents an early fire-alarm raising method based on video processing. The basic idea of the proposed of fire-detection is to adopt a RGB (red, green, blue) model based chromatic and disorder measurement for extracting fire-pixels and smoke-pixels. The decision function of fire-pixels is mainly deduced by the intensity and saturation of R component. The extracted fire-pixels will be verified if it is a real fire by both dynamics of growth and disorder, and further smoke. Based on iterative checking on the growing ratio of flames, a fire-alarm is given when the alarm-raising condition is met. Experimental results show that the developed technique can achieve fully automatic surveillance of fire accident with a lower false alarm rate and thus is very attractive for the important military, social security, commercial applications, and so on, at a general cost.", "To face fire it is crucial to understand its behaviour in order to maximize fighting means. To achieve this task, the development of a metrological tool is necessary for estimating both geometrical and physical parameters involved in forest fire modelling. A key parameter is to estimate fire positions accurately. In this paper an image processing tool especially dedicated to an accurate extraction of fire from an image is presented. In this work, the clustering on several colour spaces is investigated and it appears that the blue chrominance Cb from the YCbCr colour space is the most appropriate. As a consequence, a new segmentation algorithm dedicated to forest fire applications has been built using first an optimized k-means clustering in the Cb-channel and then some properties of fire pixels in the RGB colour space. 
Next, the performance of the proposed method is evaluated using three supervised evaluation criteria and then compared to other existing segmentation algorithms in the literature. Finally a conclusion is drawn, assessing the good behaviour of the developed algorithm.", "In this paper, a rule-based generic color model for flame pixel classification is proposed. The proposed algorithm uses YCbCr color space to separate the luminance from the chrominance more effectively than color spaces such as RGB or rgb. The performance of the proposed algorithm is tested on two sets of images, one of which contains fire, the other containing fire-like regions. The proposed method achieves up to 99 fire detection rate. The results are compared with two other methods in the literature and the proposed method is shown to have both a higher detection rate and a lower false alarm rate. Furthermore the proposed color model can be used for real-time fire detection in color video sequences, and we also present results for segmentation of fire in video using only the color model proposed in this paper." ] }
1506.03495
1909366555
Emergency events involving fire are potentially harmful, demanding fast and precise decision making. The use of crowd-sourced images and videos in crisis management systems can aid in these situations by providing more information than verbal or textual descriptions. Due to the usually high volume of data, automatic solutions need to discard non-relevant content without losing relevant information. There are several methods for fire detection in video using color-based models. However, they are not adequate for still-image processing, because they can suffer from high false-positive rates. These methods also rely on parameters with little physical meaning, which makes fine-tuning a difficult task. In this context, we propose a novel fire detection method for still images that uses classification based on color features combined with texture classification on superpixel regions. Our method uses a reduced number of parameters compared to previous works, easing the process of fine-tuning the method. Results show the effectiveness of our method in reducing false positives while its precision remains compatible with state-of-the-art methods.
Rossi @cite_17 proposed a method to extract geometric fire characteristics from stereoscopic video. One of its steps is a segmentation based on a clustering algorithm, in which the image is divided into two clusters using the V channel of the YUV color space; the cluster with the highest V value corresponds to fire. Rossi then used a 3D Gaussian model to classify pixels as fire. In this method, the accuracy of the classification depends on a parameter provided by the user. The method also has limitations, since the authors assume that the fire is recorded in a controlled environment.
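As an illustration of the clustering step described above, here is a minimal sketch in Python/NumPy. The two-cluster split on the V channel is implemented as a simple 1-D two-means iteration; the 3D Gaussian classification stage and the user-supplied accuracy parameter are omitted, and the function name and interface are our own, not taken from @cite_17 .

```python
import numpy as np

def rossi_fire_mask(v_channel, iters=20):
    """Sketch of the clustering step of Rossi et al.: split pixels into two
    clusters on the V (YUV) channel; the brighter cluster is labeled fire.
    A simple 1-D two-means (Lloyd) iteration stands in for the generic
    clustering algorithm; the later 3D Gaussian refinement is omitted."""
    v = v_channel.astype(float).ravel()
    # initialize the two centers at the minimum and maximum V values
    c_lo, c_hi = v.min(), v.max()
    for _ in range(iters):
        assign_hi = np.abs(v - c_hi) < np.abs(v - c_lo)
        if assign_hi.any() and (~assign_hi).any():
            c_lo, c_hi = v[~assign_hi].mean(), v[assign_hi].mean()
    # the cluster with the highest V value corresponds to fire
    vf = v_channel.astype(float)
    return np.abs(vf - c_hi) < np.abs(vf - c_lo)
```

On a synthetic frame with a bright patch on a dark background, the returned boolean mask selects exactly the bright pixels.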
{ "cite_N": [ "@cite_17" ], "mid": [ "1964084023" ], "abstract": [ "This paper presents a new instrumentation system, based on stereovision, for the visualization and quantitative characterization of fire fronts in outdoor conditions. The system consists of a visible pre-calibrated stereo camera and a computer with dedicated software. In the proposed approach, images are captured simultaneously and processed using specialized algorithms. These algorithms permit to model 3D fire fronts and extract geometric characteristics like volume, surface area, heading direction and length. Experiments were carried out in outdoor scenarios and the obtained results show the efficiency of the proposed system. This system successfully measures 3D geometric parameters of fire fronts over a range of combustible and environmental conditions." ] }
1506.03495
1909366555
Emergency events involving fire are potentially harmful, demanding fast and precise decision making. The use of crowdsourced images and videos in crisis management systems can aid in these situations by providing more information than verbal or textual descriptions. Due to the usually high volume of data, automatic solutions need to discard non-relevant content without losing relevant information. There are several methods for fire detection in video using color-based models. However, they are not adequate for still image processing, because they can suffer from high false-positive rates. These methods also depend on parameters with little physical meaning, which makes fine tuning a difficult task. In this context, we propose a novel fire detection method for still images that combines classification based on color features with texture classification on superpixel regions. Our method uses fewer parameters than previous works, easing the process of fine tuning. Results show the effectiveness of our method at reducing false positives while its precision remains comparable with state-of-the-art methods.
Rudz @cite_21 proposed another clustering-based method. Instead of the YUV color space, Rudz computes four clusters on the Cb channel of the YCbCr color space; the cluster with the lowest Cb value corresponds to the fire region. A second step eliminates false-positive pixels using a reference dataset, treating small and large regions differently: small regions are compared with the mean value of a reference region, while large regions are compared against a reference histogram. This comparison is made for each RGB color channel. The user must set three constants for the small regions and three thresholds for the large regions, for a total of six parameters.
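The first (color clustering) stage can be sketched similarly. The sketch below assumes the Cb channel is available as a NumPy array and uses a deterministic quantile initialization for k-means; the false-positive elimination against the reference dataset, with its six parameters, is not modeled, and the interface is illustrative rather than taken from @cite_21 .

```python
import numpy as np

def rudz_fire_candidates(cb_channel, k=4, iters=25):
    """Sketch of the first stage of Rudz et al.: k-means (k = 4) on the Cb
    channel of a YCbCr image; the cluster whose center has the lowest Cb
    value is returned as the fire-candidate mask. The second stage
    (false-positive removal against a reference dataset) is omitted."""
    cb = cb_channel.astype(float).ravel()
    # deterministic initialization: spread the centers over the Cb range
    centers = np.quantile(cb, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(cb[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = cb[labels == j].mean()
    labels = np.argmin(np.abs(cb[:, None] - centers[None, :]), axis=1)
    fire_label = int(np.argmin(centers))  # lowest Cb center -> fire
    return (labels == fire_label).reshape(cb_channel.shape)
```

With four well-separated Cb levels in the input, the mask selects exactly the pixels of the lowest level.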
{ "cite_N": [ "@cite_21" ], "mid": [ "2053186823" ], "abstract": [ "To face fire it is crucial to understand its behaviour in order to maximize fighting means. To achieve this task, the development of a metrological tool is necessary for estimating both geometrical and physical parameters involved in forest fire modelling. A key parameter is to estimate fire positions accurately. In this paper an image processing tool especially dedicated to an accurate extraction of fire from an image is presented. In this work, the clustering on several colour spaces is investigated and it appears that the blue chrominance Cb from the YCbCr colour space is the most appropriate. As a consequence, a new segmentation algorithm dedicated to forest fire applications has been built using first an optimized k-means clustering in the Cb-channel and then some properties of fire pixels in the RGB colour space. Next, the performance of the proposed method is evaluated using three supervised evaluation criteria and then compared to other existing segmentation algorithms in the literature. Finally a conclusion is drawn, assessing the good behaviour of the developed algorithm." ] }
1506.03506
2228021716
Loop agreement is a family of wait-free tasks that includes set agreement and simplex agreement, and was used to prove the undecidability of wait-free solvability of distributed tasks by read/write memory. Herlihy and Rajsbaum defined the algebraic signature of a loop agreement task, which consists of a group and a distinguished element. They used the algebraic signature to characterize the relative power of loop agreement tasks. In particular, they showed that one task implements another exactly when there is a homomorphism between their respective signatures sending one distinguished element to the other. In this paper, we extend the previous result by defining the composition of multiple loop agreement tasks to create a new one with the same combined power. We generalize the original algebraic characterization of relative power to compositions of tasks. In this way, we can think of loop agreement tasks in terms of their basic building blocks. We also investigate a category-theoretic perspective of loop agreement by defining a category of loops, showing that the algebraic signature is a functor, and proving that our definition of task composition is the "correct" one, in a categorical sense.
Loop agreement has also been generalized to higher dimensions. Liu, Xu, and Pan define rendezvous tasks @cite_13 , where processes begin on distinguished vertices of an embedded @math -sphere of an @math -dimensional complex, and converge on a simplex of the embedded sphere. They generalize the algebraic signature characterization to a subclass called nice rendezvous tasks, which are tasks whose output complexes have trivial homology groups below and above dimension @math and a free Abelian @math -th homology group. The authors apply their main result to show that there are countably infinitely many inequivalent nice rendezvous tasks.
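In symbols (the notation here is ours, not taken verbatim from the papers), the signature characterization extended in this line of work reads:

```latex
% Signature of an n-dimensional nice rendezvous task T: the n-th homology
% group of its decision (output) space together with a distinguished element.
\sigma(T) \;=\; \bigl(H_n(O_T),\, \delta_T\bigr)

% T implements T' if and only if some group homomorphism carries one
% distinguished element to the other:
T \ \text{implements}\ T'
\quad\iff\quad
\exists\, h \colon H_n(O_T) \to H_n(O_{T'}) \ \text{such that}\ h(\delta_T) = \delta_{T'}
```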
{ "cite_N": [ "@cite_13" ], "mid": [ "2162458446" ], "abstract": [ "The rendezvous is a type of distributed decision tasks including many well-known tasks such as set agreement, simplex agreement, and approximation agreement. An n-dimensional rendezvous task, n>=1, allows n+2 distinct input values, and each execution produces at most n+2 distinct output values. A rendezvous task is said to implement another if an instance of its solution, followed by a protocol based on shared read/write registers, solves the other. The notion of implementation induces a classification of rendezvous tasks of every dimension: two tasks belong to the same class if they implement each other. Previous work on classifying rendezvous tasks only focused on 1-dimensional ones. This paper solves an open problem by presenting the classification of nice rendezvous of arbitrary dimension. An n-dimensional rendezvous task is said to be nice if the qth reduced homology group of its decision space is trivial for q ≠ n, and free for q = n. Well-known examples are set agreement, simplex agreement, and approximation agreement. Each n-dimensional rendezvous task is assigned an algebraic signature, which consists of the nth homology group of the decision space, as well as a distinguished element in the group. It is shown that an n-dimensional nice rendezvous task implements another if and only if there is a homomorphism from its signature to that of the other. Hence the computational power of a nice rendezvous task is completely characterized by its signature. In each dimension, there are infinitely many classes of rendezvous tasks, and exactly countable classes of nice ones. A representative is explicitly constructed for each class of nice rendezvous tasks." ] }
1506.03425
620617900
We present a new fast online clustering algorithm that reliably recovers arbitrary-shaped data clusters in high-throughput data streams. Unlike the existing state-of-the-art online clustering methods based on k-means or k-medoid, it does not make any restrictive generative assumptions. In addition, in contrast to existing nonparametric clustering techniques such as DBScan or DenStream, it gives provable theoretical guarantees. To achieve fast clustering, we propose to represent each cluster by a skeleton set which is updated continuously as new data is seen. A skeleton set consists of weighted samples from the data where weights encode local densities. The size of each skeleton set is adapted according to the cluster geometry. The proposed technique automatically detects the number of clusters and is robust to outliers. The algorithm works for the infinite data stream where more than one pass over the data is not feasible. We provide theoretical guarantees on the quality of the clustering and also demonstrate its advantage over the existing state-of-the-art on several datasets.
Another popular method used in the context of incremental clustering is the doubling algorithm @cite_2 . Its standard version encodes every cluster by just one point, and although it allows clusters to be merged, it does not permit splitting them. We implement a variant of the method in which several centers, instead of one, are kept per cluster. As we will show in the experimental section, this purely deterministic approach, despite its theoretical guarantees, is too sensitive to outliers.
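A minimal sketch of the standard doubling algorithm (one representative point per cluster, merging but no splitting) may make this limitation concrete. The class name, radius schedule, and greedy merge below are illustrative choices, not taken from @cite_2 :

```python
import numpy as np

class DoublingClusterer:
    """Sketch of the doubling algorithm for incremental clustering:
    each cluster is summarized by a single center; when more than k
    centers accumulate, the radius is doubled and nearby centers are
    greedily merged. Merging is supported, splitting is not, which is
    the limitation noted in the text."""

    def __init__(self, k, r0=1.0):
        self.k, self.r, self.centers = k, r0, []

    def add(self, x):
        x = np.asarray(x, dtype=float)
        # assign the point to an existing cluster if one is close enough
        for c in self.centers:
            if np.linalg.norm(x - c) <= self.r:
                return
        self.centers.append(x)
        # too many centers: double the radius and greedily merge
        while len(self.centers) > self.k:
            self.r *= 2.0
            merged = []
            for c in self.centers:
                if all(np.linalg.norm(c - m) > self.r for m in merged):
                    merged.append(c)
            self.centers = merged
```

Feeding a stream of points shrinks the center set back to at most k whenever the budget is exceeded, at the cost of a monotonically growing radius; a single far-away outlier permanently claims a center, which is why the variant described above keeps several centers per cluster.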
{ "cite_N": [ "@cite_2" ], "mid": [ "2016973429" ], "abstract": [ "Motivated by applications such as document and image classification in information retrieval, we consider the problem of clustering dynamic point sets in a metric space. We propose a model called incremental clustering which is based on a careful analysis of the requirements of the information retrieval application, and which should also be useful in other applications. The goal is to efficiently maintain clusters of small diameter as new points are inserted. We analyze several natural greedy algorithms and demonstrate that they perform poorly. We propose new deterministic and randomized incremental clustering algorithms which have a provably good performance, and which we believe should also perform well in practice. We complement our positive results with lower bounds on the performance of incremental algorithms. Finally, we consider the dual clustering problem where the clusters are of fixed diameter, and the goal is to minimize the number of clusters." ] }
1506.03144
1937620887
This paper provides a theoretical analysis of diffraction-limited superresolution, demonstrating that arbitrarily close point sources can be resolved in ideal situations. Precisely, we assume that the incoming signal is a linear combination of M shifted copies of a known waveform with unknown shifts and amplitudes, and one only observes a finite collection of evaluations of this signal. We characterize properties of the base waveform such that the exact translations and amplitudes can be recovered from 2M + 1 observations. This recovery is achieved by solving a weighted version of basis pursuit over a continuous dictionary. Our methods combine classical polynomial interpolation techniques with contemporary tools from compressed sensing.
Much of the mathematical analysis of superresolution has relied heavily on the assumption that the point sources are separated by more than some minimum amount @cite_7 @cite_36 @cite_35 @cite_18 @cite_8 @cite_55 . We note that in practical situations with noisy observations, some form of minimum separation may be necessary. One can expect, however, that the required minimum separation should go to zero as the noise level decreases: a property that is not manifest in previous results. Our approach, by contrast, does away with the minimum separation condition by observing that this matrix need not be close to the identity to be invertible. Instead, we impose Conditions to guarantee invertibility directly. Not surprisingly, we use techniques from T-systems to construct an analog of the polynomial @math for our specific problem.
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_7", "@cite_8", "@cite_36", "@cite_55" ], "mid": [ "2964325628", "", "2045732060", "2062227017", "2963166099", "2159645248" ], "abstract": [ "This paper develops a mathematical theory of super-resolution. Broadly speaking, super-resolution is the problem of recovering the fine details of an object—the high end of its spectrum—from coarse scale information only—from samples at the low end of the spectrum. Suppose we have many point sources at unknown locations in [0,1] and with unknown complex-valued amplitudes. We only observe Fourier samples of this object up to a frequency cutoff fc. We show that one can super-resolve these point sources with infinite precision—i.e., recover the exact locations and amplitudes—by solving a simple convex optimization problem, which can essentially be reformulated as a semidefinite program. This holds provided that the distance between sources is at least 2/fc. This result extends to higher dimensions and other models. In one dimension, for instance, it is possible to recover a piecewise smooth function by resolving the discontinuity points with infinite precision as well. We also show that the theory and methods are robust to noise. In particular, in the discrete setting we develop some theoretical results explaining how the accuracy of the super-resolved signal is expected to degrade when both the noise level and the super-resolution factor vary. © 2014 Wiley Periodicals, Inc.", "", "Accurate reconstruction of piecewise-smooth functions from a finite number of Fourier coefficients is an important problem in various applications. The inherent inaccuracy, in particular the Gibbs phenomenon, is being intensively investigated during the last decades. Several nonlinear reconstruction methods have been proposed, and it is by now well-established that the \"classical\" convergence order can be completely restored up to the discontinuities. 
Still, the maximal accuracy of determining the positions of these discontinuities remains an open question. In this paper we prove that the locations of the jumps (and subsequently the pointwise values of the function) can be reconstructed with at least \"half the classical accuracy\". In particular, we develop a constructive approximation procedure which, given the first @math Fourier coefficients of a piecewise- @math function, recovers the locations of the jumps with accuracy @math , and the values of the function between the jumps with accuracy @math (similar estimates are obtained for the associated jump magnitudes). A key ingredient of the algorithm is to start with the case of a single discontinuity, where a modified version of one of the existing algebraic methods (due to K.Eckhoff) may be applied. It turns out that the additional orders of smoothness produce highly correlated error terms in the Fourier coefficients, which eventually cancel out in the corresponding algebraic equations. To handle more than one jump, we propose to apply a localization procedure via a convolution in the Fourier domain.", "Knowledge of a truncated Fourier series expansion for a 2π-periodic function of finite regularity, which is assumed to be piecewise smooth in each period, is used to accurately reconstruct the corresponding function. An algebraic equation of degree M is constructed for the M singularity locations in each period for the function in question. The M coefficients in this algebraic equation are obtained by solving an algebraic system of M equations determined by the coefficients in the known truncated expansion. If discontinuities in the derivatives of the function are considered, in addition to discontinuities in the function itself, that algebraic system will be nonlinear with respect to the M unknown coefficients. 
The degree of the algebraic system will depend on the desired order of accuracy for the reconstruction, i.e., a higher degree will normally lead to a more accurate determination of the singularity locations. By solving an additional linear algebraic system for the jumps of the function and its derivatives up to the arbitrarily specified order at the calculated singularity locations, we are able to reconstruct the 2π-periodic function of finite regularity as the sum of a piecewise polynomial function and a function which is continuously differentiable up to the specified order", "Abstract This paper considers the problem of recovering the delays and amplitudes of a weighted superposition of pulses. This problem is motivated by a variety of applications, such as ultrasound and radar. We show that for univariate and bivariate streams of pulses, one can recover the delays and weights to any desired accuracy by solving a tractable convex optimization problem, provided that a pulse-dependent separation condition is satisfied. The main result of this paper states that the recovery is robust to additive noise or model mismatch.", "In single-molecule microscopy it is necessary to locate with high precision point sources from noisy observations of the spectrum of the signal at frequencies capped by @math , which is just about the frequency of natural light. This paper rigorously establishes that this super-resolution problem can be solved via linear programming in a stable manner. We prove that the quality of the reconstruction crucially depends on the Rayleigh regularity of the support of the signal; that is, on the maximum number of sources that can occur within a square of side length about @math . The theoretical performance guarantee is complemented with a converse result showing that our simple convex program is nearly optimal. Finally, numerical experiments illustrate our methods." ] }
1506.03144
1937620887
This paper provides a theoretical analysis of diffraction-limited superresolution, demonstrating that arbitrarily close point sources can be resolved in ideal situations. Precisely, we assume that the incoming signal is a linear combination of M shifted copies of a known waveform with unknown shifts and amplitudes, and one only observes a finite collection of evaluations of this signal. We characterize properties of the base waveform such that the exact translations and amplitudes can be recovered from 2M + 1 observations. This recovery is achieved by solving a weighted version of basis pursuit over a continuous dictionary. Our methods combine classical polynomial interpolation techniques with contemporary tools from compressed sensing.
We are not the first to bring the theory of Tchebycheff systems to bear on the problem of recovering finitely supported measures. De Castro and Gamboa @cite_60 prove that a finitely supported positive measure @math can be recovered exactly from measurements of the form @math whenever @math form a T-system containing the constant function @math . These measurements are almost identical to ours; if we set @math for @math , where @math is our measurement set, then our measurements are of the form @math . However, in practice it is often impossible to directly measure the mass @math as required by . Moreover, the requirement that @math form a T-system does not hold for the Gaussian point spread function @math . Therefore the theory of @cite_60 is not readily applicable to superresolution imaging.
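For concreteness, the two measurement models being compared can be written out as follows (the notation is ours, chosen for illustration):

```latex
% De Castro and Gamboa: generalized moments of a positive measure
% mu = sum_m a_m delta_{t_m}, a_m > 0, taken against a T-system
% {f_0, ..., f_{2M}} that contains the constant function 1:
y_j \;=\; \int f_j(t)\, d\mu(t) \;=\; \sum_{m=1}^{M} a_m f_j(t_m),
\qquad j = 0, \dots, 2M.

% This paper: samples of the superposed point spread function psi at
% measurement points s_j, i.e. f_j(t) = psi(s_j - t), with recovery by a
% weighted basis pursuit over the continuous dictionary of shifts:
y_j \;=\; \sum_{m=1}^{M} a_m\, \psi(s_j - t_m).
```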
{ "cite_N": [ "@cite_60" ], "mid": [ "2962898451" ], "abstract": [ "Abstract We show that measures with finite support on the real line are the unique solution to an algorithm, named generalized minimal extrapolation, involving only a finite number of generalized moments (which encompass the standard moments, the Laplace transform, the Stieltjes transformation, etc.). Generalized minimal extrapolation shares related geometric properties with the basis pursuit approach of (1998) [5] . Indeed we also extend some standard results of compressed sensing (the dual polynomial, the nullspace property) to the signed measure framework. We express exact reconstruction in terms of a simple interpolation problem. We prove that every nonnegative measure, supported by a set containing s points, can be exactly recovered from only 2 s + 1 generalized moments. This result leads to a new construction of deterministic sensing matrices for compressed sensing." ] }
1506.02804
2277642006
LTE 4G is the next generation of cellular network which specifically aims to improve the network performance for data traffic and is currently being rolled out by many network operators. We present results from an extensive LTE measurement campaign in Dublin, Ireland using a custom performance measurement tool. Performance data was measured at a variety of locations within the city (including cell edge locations, indoors, outdoors etc) as well as for mobile users on public transport within the city. Using this data we derive a model of the characteristics of link layer RTT and bandwidth vs link signal strength. This model is suited to use for performance evaluation of applications and services, and since it is based on real measurements it allows realistic evaluation of performance.
Since LTE is a relatively new technology, the existing literature on the performance of cellular networks mostly deals with earlier or alternative cellular technologies @cite_17 @cite_1 @cite_23 and with particular measurements of specific physical-level characteristics or events @cite_15 @cite_9 @cite_4 @cite_16 .
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_1", "@cite_23", "@cite_15", "@cite_16", "@cite_17" ], "mid": [ "", "2054333671", "1973981542", "2012244461", "1986195874", "2129636357", "2055124819" ], "abstract": [ "", "As society continues to integrate information-based technologies into daily life, there is an increased need for small, powerful mobile phones. Recently, relaying technologies have been researched for standardization of the next generation of mobile communication systems, including third-Generation Partnership Project (3GPP) LTE-Advanced, IEEE 802.16j, and IEEE 802.16m. Especially, LTE-Advanced is an evolutionary version of IMT-2000 defined by the ITU. To satisfy these requirements, relaying technology is considered as a powerful candidate scheme with carrier aggregation, MIMO, and CoMP. Relaying technology has been introduced to guarantee high data rates to multiple users. It can also extend cell coverage or effectively increase the average throughput of the cell by installing relay nodes at cell edges or in shadow areas. Thus, in this paper, we propose a method for boosting reception performance using the downlink transmission method of the LTE system, which is the next-generation mobile communication technology standard currently underway in 3GPP. At the moment, orthogonal frequency division multiplexing (OFDM), which is suitable for high-speed data transmission and multipath, is commonly used in an LTE downlink system. However, the OFDM method has a disadvantage of displaying a relatively higher PAPR at the terminal since it basically uses a multi-carrier. To this end, single carrier division multiple access (SC-FDMA) is used in an LTE uplink system in order to compensate for this defect related to high PAPR of OFDM at such an important terminal where power efficiency is important. 
However, when the channels in the frequency domain deteriorate signals, SC-FDMA reveals a defect in that the impact of deteriorated parts spreads and causes performance degradation. To this end, we propose that a relay be installed between the station and terminal, the distance between BS and RS be set at 500 or 1,000 m, and orthogonal frequency division multiple access (OFDMA) and SC-FDMA be chosen as transmission methods of RS. This paper found SC-FDMA to be a better choice when RS is closer to BS, whereas OFDMA is a better choice when the distance between BS and RS is farther. The system's reception performance improved when the transmission method fit the circumstances in the middle between BS and MS.", "Mobile broadband networks play an increasingly important role in society, and there is a strong need for independent assessments of their robustness and performance. A promising source of such information is active end-to-end measurements. It is, however, a challenging task to go from individual measurements to an assessment of network reliability, which is a complex notion encompassing many stability and performance related metrics. This paper presents a framework for measuring the user-experienced reliability in mobile broadband networks. We argue that reliability must be assessed at several levels, from the availability of the network connection to the stability of application performance. Based on the proposed framework, we conduct a large-scale measurement study of reliability in 5 mobile broadband networks. The study builds on active measurements from hundreds of measurement nodes over a period of 10 months. The results show that the reliability of mobile broadband networks is lower than one could hope: more than 20% of connections from stationary nodes are unavailable more than 10 minutes per day. There is, however, a significant potential for improving robustness if a device can connect simultaneously to several networks. 
We find that in most cases, our devices can achieve 99.999% (\"five nines\") connection availability by combining two operators. We further show how both radio conditions and network configuration play important roles in determining reliability, and how external measurements can reveal weaknesses and incidents that are not always captured by the operators' existing monitoring tools.", "Network service providers, and other parties, require an accurate understanding of the performance cellular networks deliver to users. In particular, they often seek a measure of the network performance users experience solely when they are interacting with their device---a measure we call in-context. Acquiring such measures is challenging due to the many factors, including time and physical context, that influence cellular network performance. This paper makes two contributions. First, we conduct a large scale measurement study, based on data collected from a large cellular provider and from hundreds of controlled experiments, to shed light on the issues underlying in-context measurements. Our novel observations show that measurements must be conducted on devices which (i) recently used the network as a result of user interaction with the device, (ii) remain in the same macro-environment (e.g., indoors and stationary), and in some cases the same micro-environment (e.g., in the user's hand), during the period between normal usage and a subsequent measurement, and (iii) are currently sending/receiving little or no user-generated traffic. Second, we design and deploy a prototype active measurement service for Android phones based on these key insights. Our analysis of 1650 measurements gathered from 12 volunteer devices shows that the system is able to obtain average throughput measurements that accurately quantify the performance experienced during times of active device and network usage.", "The performance of LTE at high velocities (larger than @math km/h) is badly understood. 
Operators have largely deployed LTE in urban environments where velocities are low and the benefits of LTE core features, for instance MIMO, are well demonstrated. With the proliferation of smartphones and tablets, mobile Internet access became ubiquitous and the pressure of rising traffic demands on operator infrastructures is increasing. Furthermore, mobile users now expect access to audio and HD video streams or IPTV while traveling in cars, public transports or intercity trains. Expectations evolve and high-quality connectivity is desired anywhere. To address these demands, operators are expanding their LTE deployments to semi-urban and rural areas of importance, typically along main transportation axes. However, it is unclear (1) how much LTE still benefits from MIMO spatial multiplexing or link adaptation at velocities above a few tens of kilometers per hour; and (2) how overall performance degrades with higher velocities. This paper presents results of a measurement study of a live LTE system at velocities up to @math km/h. The results show that while velocity has an effect on performance, its influence remains limited if the SNR coverage is well dimensioned. The percentage of spatial multiplexing usage can exceed 65% from @math to 200 km/h.", "With the recent advent of 4G LTE networks, there has been increasing interest to better understand the performance and power characteristics, compared with 3G/WiFi networks. In this paper, we take one of the first steps in this direction. Using a publicly deployed tool we designed for Android called 4GTest attracting more than 3000 users within 2 months and extensive local experiments, we study the network performance of LTE networks and compare with other types of mobile networks. We observe LTE generally has significantly higher downlink and uplink throughput than 3G and even WiFi, with a median value of 13Mbps and 6Mbps, respectively. 
We develop the first empirically derived comprehensive power model of a commercial LTE network with less than 6% error rate and state transitions matching the specifications. Using a comprehensive data set consisting of 5-month traces of 20 smartphone users, we carefully investigate the energy usage in 3G, LTE, and WiFi networks and evaluate the impact of configuring LTE-related parameters. Despite several new power saving improvements, we find that LTE is as much as 23 times less power efficient compared with WiFi, and even less power efficient than 3G, based on the user traces and the long high power tail is found to be a key contributor. In addition, we perform case studies of several popular applications on Android in LTE and identify that the performance bottleneck for web-based applications lies less in the network, compared to our previous study in 3G [24]. Instead, the device's processing power, despite the significant improvement compared to our analysis two years ago, becomes more of a bottleneck.", "This paper presents an empirical study on the performance of mobile High Speed Packet Access (HSPA, a 3.5G cellular standard) networks in Hong Kong via extensive field tests. Our study, from the viewpoint of end users, covers virtually all possible mobile scenarios in urban areas, including subways, trains, off-shore ferries and city buses. We have confirmed that mobility has largely negative impacts on the performance of HSPA networks, as fast-changing wireless environment causes serious service deterioration or even interruption. Meanwhile our field experiment results have shown unexpected new findings and thereby exposed new features of the mobile HSPA networks, which contradict commonly held views. We surprisingly find out that mobility can improve fairness of bandwidth sharing among users and traffic flows. 
Also the triggering and final results of handoffs in mobile HSPA networks are unpredictable and often inappropriate, thus calling for fast-reacting failover mechanisms. We have conducted in-depth research to furnish detailed analysis and explanations to what we have observed. We conclude that mobility is a double-edged sword for HSPA networks. To the best of our knowledge, this is the first public report on a large scale empirical study on the performance of commercial mobile HSPA networks." ] }
1506.02804
2277642006
LTE 4G is the next generation of cellular network which specifically aims to improve the network performance for data traffic and is currently being rolled out by many network operators. We present results from an extensive LTE measurement campaign in Dublin, Ireland using a custom performance measurement tool. Performance data was measured at a variety of locations within the city (including cell edge locations, indoors, outdoors etc) as well as for mobile users on public transport within the city. Using this data we derive a model of the characteristics of link layer RTT and bandwidth vs link signal strength. This model is suited to use for performance evaluation of applications and services, and since it is based on real measurements it allows realistic evaluation of performance.
A number of other studies measure throughput and/or latency information, but tend to focus on their impact on TCP or its derivatives @cite_19 @cite_14 @cite_6 @cite_8 @cite_11 @cite_5 @cite_12 . Additionally, there are studies that present measurements but do not use them to develop a model suited to performance evaluation @cite_18 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_8", "@cite_6", "@cite_19", "@cite_5", "@cite_12", "@cite_11" ], "mid": [ "", "2065831914", "1964476812", "2153757920", "2153771889", "2051619378", "2147201699", "2172200121" ], "abstract": [ "", "Over the past two or three years, wireless cellular networks have become faster than before, most notably due to the deployment of LTE, HSPA+, and other similar networks. LTE throughputs can reach many megabits per second and can even rival WiFi throughputs in some locations. This paper addresses a fundamental question confronting transport and application-layer protocol designers: which network should an application use? WiFi, LTE, or Multi-Path TCP (MPTCP) running over both? We compare LTE and WiFi for transfers of different sizes along both directions (i.e. the uplink and the downlink) using a crowd-sourced mobile application run by 750 users over 180 days in 16 different countries. We find that LTE outperforms WiFi 40% of the time, which is a higher fraction than one might expect at first sight. We measure flow-level MPTCP performance and compare it with the performance of TCP running over exclusively WiFi or LTE in 20 different locations across 7 cities in the United States. For short flows, we find that MPTCP performs worse than regular TCP running over the faster link; further, selecting the correct network for the primary subflow in MPTCP is critical in achieving good performance. For long flows, however, selecting the proper MPTCP congestion control algorithm is equally important. To complement our flow-level analysis, we analyze the traffic patterns of several mobile apps, finding that apps can be categorized as \"short-flow dominated\" or \"long-flow dominated\". We then record and replay these patterns over emulated WiFi and LTE links. 
We find that application performance has a similar dependence on the choice of networks as flow-level performance: an application dominated by short flows sees little gain from MPTCP, while an application with longer flows can benefit much more from MPTCP --- if the application can pick the right network for the primary subflow and the right choice of MPTCP congestion control.", "This paper investigates the interactions between two-way TCP connections over 3GPP LTE networks. In the LTE network, the two-way TCP flows share buffers on a common bottleneck, i.e., the radio access links. The behaviors of TCPs significantly influence the others in the opposite direction. Specifically, the radio links of LTE are asymmetric, which may induce drastic interactions of TCPs and rapid draining of downlink buffer. The periodic idleness of downlink is a huge waste of the precious radio bandwidth and results in considerable performance degradation. In the viewpoint of Coupled Queues, we thoroughly understand the interacting TCPs and explain the reason for performance degradation. Based on a straightforward modeling procedure, we formalize the evolution of two-way TCPs and model the bottleneck queue size in every slot. The model indicates the queues are close coupled, which is verified with simulations on NS2. If the uplink (queue) is fully utilized, the downlink (queue) will always be underutilized even idle, and vice versa. Furthermore, an effective solution called Preemptive ACK Queueing (PAQ) is designed to decouple the queues, which improves the performance of two-way TCPs over LTE networks.", "During crowded events, cellular networks face voice and data traffic volumes that are often orders of magnitude higher than what they face during routine days. 
Despite the use of portable base stations for temporarily increasing communication capacity and free Wi-Fi access points for offloading Internet traffic from cellular base stations, crowded events still present significant challenges for cellular network operators looking to reduce dropped call events and improve Internet speeds. For effective cellular network design, management, and optimization, it is crucial to understand how cellular network performance degrades during crowded events, what causes this degradation, and how practical mitigation schemes would perform in real-life crowded events. This paper makes a first step towards this end by characterizing the operational performance of a tier-1 cellular network in the United States during two high-profile crowded events in 2012. We illustrate how the changes in population distribution, user behavior, and application workload during crowded events result in significant voice and data performance degradation, including more than two orders of magnitude increase in connection failures. Our findings suggest two mechanisms that can improve performance without resorting to costly infrastructure changes: radio resource allocation tuning and opportunistic connection sharing. Using trace-driven simulations, we show that more aggressive release of radio resources via 1-2 seconds shorter RRC timeouts as compared to routine days helps to achieve better tradeoff between wasted radio resources, energy consumption, and delay during crowded events; and opportunistic connection sharing can reduce connection failures by 95% when employed by a small number of devices in each cell sector.", "With lower latency and higher bandwidth than its predecessor 3G networks, the latest cellular technology 4G LTE has been attracting many new users. However, the interactions among applications, network transport protocol, and the radio layer still remain unexplored. 
In this work, we conduct an in-depth study of these interactions and their impact on performance, using a combination of active and passive measurements. We observed that LTE has significantly shorter state promotion delays and lower RTTs than those of 3G networks. We discovered various inefficiencies in TCP over LTE such as undesired slow start. We further developed a novel and lightweight passive bandwidth estimation technique for LTE networks. Using this tool, we discovered that many TCP connections significantly under-utilize the available bandwidth. On average, the actually used bandwidth is less than 50% of the available bandwidth. This causes data downloads to be longer, and incur additional energy overhead. We found that the under-utilization can be caused by both application behavior and TCP parameter setting. We found that 52.6% of all downlink TCP flows have been throttled by limited TCP receive window, and that data transfer patterns for some popular applications are both energy and network unfriendly. All these findings highlight the need to develop transport protocol mechanisms and applications that are more LTE-friendly.", "With the popularity of mobile devices and the pervasive use of cellular technology, there is widespread interest in hybrid networks and on how to achieve robustness and good performance from them. As most smart phones and mobile devices are equipped with dual interfaces (WiFi and 3G 4G), a promising approach is through the use of multi-path TCP, which leverages path diversity to improve performance and provide robust data transfers. In this paper we explore the performance of multi-path TCP in the wild, focusing on simple 2-path multi-path TCP scenarios. We seek to answer the following questions: How much can a user benefit from using multi-path TCP over cellular and WiFi relative to using the either interface alone? What is the impact of flow size on average latency? 
What is the effect of the rate/route control algorithm on performance? We are especially interested in understanding how application level performance is affected when path characteristics (e.g., round trip times and loss rates) are diverse. We address these questions by conducting measurements using one commercial Internet service provider and three major cellular carriers in the US.", "Mobile network operators have a significant interest in the performance of streaming video on their networks because network dynamics directly influence the Quality of Experience (QoE). However, unlike video service providers, network operators are not privy to the client- or server-side logs typically used to measure key video performance metrics, such as user engagement. To address this limitation, this paper presents the first large-scale study characterizing the impact of cellular network performance on mobile video user engagement from the perspective of a network operator. Our study on a month-long anonymized data set from a major cellular network makes two main contributions. First, we quantify the effect that 31 different network factors have on user behavior in mobile video. Our results provide network operators direct guidance on how to improve user engagement --- for example, improving mean signal-to-interference ratio by 1 dB reduces the likelihood of video abandonment by 2%. Second, we model the complex relationships between these factors and video abandonment, enabling operators to monitor mobile video user engagement in real-time. Our model can predict whether a user completely downloads a video with more than 87% accuracy by observing only the initial 10 seconds of video streaming sessions. 
Moreover, our model achieves significantly better accuracy than prior models that require client- or server-side logs, yet we only use standard radio network statistics and/or TCP/IP headers available to network operators.", "The popularity of smartphones and smartphone applications means that data is the dominant traffic type in current mobile networks. In this paper we present our work on a systematic investigation into facets of the LTE EPC architecture that impact the performance of TCP as the predominant transport layer protocol used by applications on mobile networks. We found that (1) load increase in a cell causes dramatic bandwidth reduction on UEs and significantly degrades TCP performance, (2) seamless handover causes significant TCP losses while lossless handover increases TCP segments' delay." ] }
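The record above derives a model of link-layer RTT and bandwidth versus signal strength for use in performance evaluation. A minimal sketch of how such a measurement-derived link model might be consumed by a simulator, interpolating between measured operating points. All breakpoints and values below are illustrative placeholders, not the paper's fitted numbers:

```python
import numpy as np

# Hypothetical measurement-derived operating points: RSRP (dBm) mapped to
# expected downlink bandwidth (Mbps) and link RTT (ms). Purely illustrative.
RSRP_PTS = np.array([-120.0, -110.0, -100.0, -90.0, -80.0])
BW_MBPS  = np.array([   1.0,    5.0,   15.0,  30.0,  45.0])
RTT_MS   = np.array([ 120.0,   80.0,   50.0,  35.0,  30.0])

def link_model(rsrp_dbm):
    """Return (bandwidth in Mbps, RTT in ms) for a measured signal strength,
    by piecewise-linear interpolation between the calibration points."""
    bw = np.interp(rsrp_dbm, RSRP_PTS, BW_MBPS)
    rtt = np.interp(rsrp_dbm, RSRP_PTS, RTT_MS)
    return bw, rtt

bw, rtt = link_model(-95.0)  # halfway between the -100 and -90 dBm points
```

A simulator driving application traffic could then sample `link_model` along a recorded mobility trace to obtain realistic time-varying link conditions.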
1506.03032
2272956610
Tamper-resistance is a fundamental software security research area. Many approaches have been proposed to thwart specific procedures of tampering, e.g., obfuscation and self-checksumming. However, to the best of our knowledge, none of them can achieve theoretical tamper-resistance. Our idea is to impede the replication of tampering via program diversification, thus increasing the complexity of breaking the whole software system. To this end, we propose to deliver same-featured, but functionally nonequivalent software copies to different machines. We formally define the problem as N-version obfuscation, and provide a viable means to solve the problem. Our evaluation result shows that the time required for breaking a software system increases linearly with the number of software versions, which is O(n) complexity.
Software protection has been a research problem for decades. Proposed solutions are generally two-fold: hardware-circuit-assisted solutions, which provide better security assurance, and pure software solutions, which have better adaptability to general hardware @cite_18 . For our research problem, hardware-assisted solutions are not applicable because they require specific hardware, so we mainly discuss the pure software solutions.
{ "cite_N": [ "@cite_18" ], "mid": [ "1718337629" ], "abstract": [ "We describe a novel software verification primitive called Oblivious Hashing. Unlike previous techniques that mainly verify the static shape of code, this primitive allows implicit computation of a hash value based on the actual execution (i.e., space-time history of computation) of the code. We also discuss its applications in local software tamper resistance and remote code authentication." ] }
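The cited Oblivious Hashing primitive computes a hash over the actual execution (the "space-time history of computation") rather than over the static code. A toy sketch of that idea, folding intermediate runtime values into a rolling hash so that tampering with the computation changes the digest; the function and hash constants are illustrative, not the paper's construction:

```python
def oblivious_sum(values):
    """Sum a list while hashing the sequence of intermediate states.

    The returned digest depends on every partial sum, so a tampered
    variant (e.g. one that skips or reorders an element) produces a
    different hash even if it returns the same total.
    """
    h = 0
    total = 0
    for v in values:
        total += v
        h = (h * 31 + total) % (2 ** 32)  # fold the execution history in
    return total, h

ok_total, ok_hash = oblivious_sum([1, 2, 3])
```

A verifier that knows the expected digest for a given input can then detect tampering without inspecting the code itself.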
1506.02753
2273348943
Feature representations, both hand-designed and learned ones, are often hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study image representations by inverting them with an up-convolutional neural network. We apply the method to shallow representations (HOG, SIFT, LBP), as well as to deep networks. For shallow representations our approach provides significantly better reconstructions than existing methods, revealing that there is surprisingly rich information contained in these features. Inverting a deep network trained on ImageNet provides several insights into the properties of the feature representation learned by the network. Most strikingly, the colors and the rough contours of an image can be reconstructed from activations in higher network layers and even from the predicted class probabilities.
Our approach is related to a large body of work on inverting neural networks. These include works making use of backpropagation or sampling @cite_5 @cite_24 @cite_16 @cite_20 @cite_4 @cite_27 and, most similar to our approach, other neural networks @cite_1 . However, only recent advances in neural network architectures allow us to invert a modern large convolutional network with another network.
{ "cite_N": [ "@cite_4", "@cite_1", "@cite_24", "@cite_27", "@cite_5", "@cite_16", "@cite_20" ], "mid": [ "2159964742", "1554663460", "2079397195", "1504066675", "2154822588", "2104375222", "" ], "abstract": [ "There are many methods for performing neural network inversion. Multi-element evolutionary inversion procedures are capable of finding numerous inversion points simultaneously. Constrained neural network inversion requires that the inversion solution belong to one or more specified constraint sets. In many cases, iterating between the neural network inversion solution and the constraint set can successfully solve constrained inversion problems. This paper surveys existing methodologies for neural network inversion, which is illustrated by its use as a tool in query-based learning, sonar performance analysis, power system security assessment, control, and generation of codebook vectors.", "From the Publisher: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition. After introducing the basic concepts, the book examines techniques for modelling probability density functions and the properties and merits of the multi-layer perceptron and radial basis function network models. Also covered are various forms of error functions, principal algorithms for error function minimalization, learning and generalization in neural networks, and Bayesian techniques and their applications. Designed as a text, with over 100 exercises, this fully up-to-date work will benefit anyone involved in the fields of neural computation and pattern recognition.", "The method of inversion for arbitrary continuous multilayer nets is developed. The inversion is done by computing iteratively an input vector which minimizes the least-mean-square errors to approximate a given output target. This inversion is not unique for given targets and depends on the starting point in input space. 
The inversion method turns out to be a valuable tool for the examination of multilayer nets (MLNs). Applications of the inversion method to constraint satisfaction, feature detection, and the testing of reliability and performance of MLNs are outlined. It is concluded that recurrent nets and even time-delay nets might be invertible.", "Nowadays model based techniques play very important role in solving measurement and control problems. Recently for representing nonlinear systems fuzzy and neural network (NN) models became very popular. For evaluating measurement data and for controller design also the inverse models are of considerable interest. In this paper, different observer based techniques to perform fuzzy and neural network model inversion are presented. The methods are based on solving a nonlinear equation derived from the multiple-input single-output (MISO) forward fuzzy model simply by interchanging the role of the output and one of the inputs. The utilization of the inverse model can be either a direct compensation of some measurement nonlinearities or a controller mechanism for nonlinear plants. For discrete-time inputs the technique provides good performance if the iterative inversion is fast enough compared to system variations, i.e., the iteration is convergent within the sampling period applied. The proposed method can be considered also as a simple nonlinear state observer which reconstructs the selected input of the forward (fuzzy or NN) model from its output using an appropriate strategy and a copy of the fuzzy or neural network model itself. Improved performance can be obtained by introducing genetic algorithms in the prediction-correction mechanism. Although, the overall performance of the suggested technique is highly influenced by the nature of the non-linearity and the actual prediction-correction mechanism applied, it can also be shown that using this observer concept completely inverted models can be derived. 
The inversion can be extended towards anytime modes of operation, as well, providing short response time and flexibility during temporal loss of computational power and/or time.", "This paper presents a method for solving inverse mapping of a continuous function learned by a multilayer feedforward mapping network. The method is based on the iterative update of input vector toward a solution, while escaping from local minima. The input vector update is determined by the pseudo-inverse of the gradient of Lyapunov function, and, should an optimal solution be searched for, the projection of the gradient of a performance index on the null space of the gradient of Lyapunov function. The update rule is allowed to detect an input vector approaching local minima through a phenomenon called \"update explosion\". At or near local minima, the input vector is guided by an escape trajectory generated based on \"global information\", where global information is referred to here as predefined or known information on forward mapping; or the input vector is relocated to a new position based on the probability density function (PDF) constructed over the input vector space by Parzen estimate. The constructed PDF reflects the history of local minima detected during the search process, and represents the probability that a particular input vector can lead to a solution based on the update rule. The proposed method has a substantial advantage in computational complexity as well as convergence property over the conventional methods based on Jacobian pseudo-inverse or Jacobian transpose.", "The problem of inverting trained feedforward neural networks is to find the inputs which yield a given output. In general, this problem is an ill-posed problem. We present a method for dealing with the inverse problem by using mathematical programming techniques. 
The principal idea behind the method is to formulate the inverse problem as a nonlinear programming problem, a separable programming (SP) problem, or a linear programming problem according to the architectures of networks to be inverted or the types of network inversions to be computed. An important advantage of the method over the existing iterative inversion algorithm is that various designated network inversions of multilayer perceptrons and radial basis function neural networks can be obtained by solving the corresponding SP problems, which can be solved by a modified simplex method. We present several examples to demonstrate the proposed method and applications of network inversions to examine and improve the generalization performance of trained networks. The results show the effectiveness of the proposed method.", "" ] }
1506.02753
2273348943
Feature representations, both hand-designed and learned ones, are often hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study image representations by inverting them with an up-convolutional neural network. We apply the method to shallow representations (HOG, SIFT, LBP), as well as to deep networks. For shallow representations our approach provides significantly better reconstructions than existing methods, revealing that there is surprisingly rich information contained in these features. Inverting a deep network trained on ImageNet provides several insights into the properties of the feature representation learned by the network. Most strikingly, the colors and the rough contours of an image can be reconstructed from activations in higher network layers and even from the predicted class probabilities.
Our approach is not to be confused with the DeconvNet @cite_21 , which propagates high level activations backward through a network to identify parts of the image responsible for the activation. In addition to the high-level feature activations, this reconstruction process uses extra information about maxima locations in intermediate max-pooling layers. This information has been shown to be crucial for the approach to work @cite_9 . A visualization method similar to DeconvNet is presented in @cite_9 , which likewise makes use of intermediate layer activations.
{ "cite_N": [ "@cite_9", "@cite_21" ], "mid": [ "2123045220", "1849277567" ], "abstract": [ "Most modern convolutional neural networks (CNNs) used for object recognition are built using the same principles: Alternating convolution and max-pooling layers followed by a small number of fully connected layers. We re-evaluate the state of the art for object recognition from small images with convolutional networks, questioning the necessity of different components in the pipeline. We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks. Following this finding -- and building on other recent work for finding simple network structures -- we propose a new architecture that consists solely of convolutional layers and yields competitive or state of the art performance on several object recognition datasets (CIFAR-10, CIFAR-100, ImageNet). To analyze the network we introduce a new variant of the \"deconvolution approach\" for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches.", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark [18]. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. 
We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets." ] }
1506.02753
2273348943
Feature representations, both hand-designed and learned ones, are often hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study image representations by inverting them with an up-convolutional neural network. We apply the method to shallow representations (HOG, SIFT, LBP), as well as to deep networks. For shallow representations our approach provides significantly better reconstructions than existing methods, revealing that there is surprisingly rich information contained in these features. Inverting a deep network trained on ImageNet provides several insights into the properties of the feature representation learned by the network. Most strikingly, the colors and the rough contours of an image can be reconstructed from activations in higher network layers and even from the predicted class probabilities.
Mahendran and Vedaldi @cite_3 invert a differentiable image representation @math using gradient descent. Given a feature vector @math , they seek an image @math which minimizes a loss function -- the squared Euclidean distance between @math and @math plus a regularizer enforcing a natural image prior. This method is fundamentally different from our approach in that it optimizes the difference between the feature vectors, not the image reconstruction error. Additionally, it includes a hand-designed natural image prior, while in our case the network implicitly learns such a prior. Technically, it involves optimization at test time, which requires computing the gradient of the feature representation and makes it relatively slow (the authors report 6s per image on a GPU). In contrast, the presented approach is only costly when training the inversion network. Reconstruction from a given feature vector requires just a single forward pass through the network, which takes roughly @math ms per image on a GPU. The method of @cite_3 requires gradients of the feature representation, so it cannot be directly applied to non-differentiable representations such as LBP, or recordings from a real brain @cite_2 .
{ "cite_N": [ "@cite_3", "@cite_2" ], "mid": [ "2949987032", "2126810579" ], "abstract": [ "Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited. In this paper we conduct a direct analysis of the visual information contained in representations by asking the following question: given an encoding of an image, to which extent is it possible to reconstruct the image itself? To answer this question we contribute a general framework to invert representations. We show that this method can invert representations such as HOG and SIFT more accurately than recent alternatives while being applicable to CNNs too. We then use this technique to study the inverse of recent state-of-the-art CNN image representations for the first time. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance.", "Summary Quantitative modeling of human brain activity can provide crucial insights about cortical representations [1, 2] and can form the basis for brain decoding devices [3–5]. Recent functional magnetic resonance imaging (fMRI) studies have modeled brain activity elicited by static visual patterns and have reconstructed these patterns from brain activity [6–8]. However, blood oxygen level-dependent (BOLD) signals measured via fMRI are very slow [9], so it has been difficult to model brain activity elicited by dynamic stimuli such as natural movies. Here we present a new motion-energy [10, 11] encoding model that largely overcomes this limitation. The model describes fast visual information and slow hemodynamics by separate components. We recorded BOLD signals in occipitotemporal visual cortex of human subjects who watched natural movies and fit the model separately to individual voxels. 
Visualization of the fit models reveals how early visual areas represent the information in movies. To demonstrate the power of our approach, we also constructed a Bayesian decoder [8] by combining estimated encoding models with a sampled natural movie prior. The decoder provides remarkable reconstructions of the viewed movies. These results demonstrate that dynamic brain activity measured under naturalistic conditions can be decoded using current fMRI technology." ] }
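The gradient-descent inversion discussed in the related-work paragraph above — finding an input whose features match a target vector by minimizing a squared Euclidean distance plus a regularizer — can be sketched with a toy differentiable feature map. A random linear map stands in for the network, and a plain L2 penalty stands in for the natural image prior; all names and hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 16))  # toy feature map: 16-dim "image" -> 64-dim features

def phi(x):
    """Differentiable feature representation (here just a linear map)."""
    return W @ x

def invert(phi0, lam=1e-3, steps=500):
    """Gradient descent on ||phi(x) - phi0||^2 + lam * ||x||^2."""
    s = np.linalg.svd(W, compute_uv=False)[0]
    lr = 1.0 / (2.0 * s * s + 2.0 * lam)  # step size below the curvature bound
    x = np.zeros(W.shape[1])
    for _ in range(steps):
        grad = 2.0 * W.T @ (W @ x - phi0) + 2.0 * lam * x
        x -= lr * grad
    return x

x_true = rng.standard_normal(16)
x_rec = invert(phi(x_true))
```

For a real network, `phi` would be the trained representation and the gradient would come from backpropagation; the structure of the optimization loop is the same.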
1506.02753
2273348943
Feature representations, both hand-designed and learned ones, are often hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study image representations by inverting them with an up-convolutional neural network. We apply the method to shallow representations (HOG, SIFT, LBP), as well as to deep networks. For shallow representations our approach provides significantly better reconstructions than existing methods, revealing that there is surprisingly rich information contained in these features. Inverting a deep network trained on ImageNet provides several insights into the properties of the feature representation learned by the network. Most strikingly, the colors and the rough contours of an image can be reconstructed from activations in higher network layers and even from the predicted class probabilities.
There has been research on inverting various traditional computer vision representations: HOG and dense SIFT @cite_11 , keypoint-based SIFT @cite_0 , Local Binary Descriptors @cite_23 , Bag-of-Visual-Words @cite_17 . All these methods are either tailored for inverting a specific feature representation or restricted to shallow representations, while our method can be applied to any feature representation.
{ "cite_N": [ "@cite_0", "@cite_17", "@cite_23", "@cite_11" ], "mid": [ "1976101156", "2952111699", "2112455692", "1982428585" ], "abstract": [ "This paper shows that an image can be approximately reconstructed based on the output of a blackbox local description software such as those classically used for image indexing. Our approach consists first in using an off-the-shelf image database to find patches that are visually similar to each region of interest of the unknown input image, according to associated local descriptors. These patches are then warped into input image domain according to interest region geometry and seamlessly stitched together. Final completion of still missing texture-free regions is obtained by smooth interpolation. As demonstrated in our experiments, visually meaningful reconstructions are obtained just based on image local descriptors like SIFT, provided the geometry of regions of interest is known. The reconstruction most often allows the clear interpretation of the semantic image content. As a result, this work raises critical issues of privacy and rights when local descriptors of photos or videos are given away for indexing and search purpose.", "The objective of this work is to reconstruct an original image from Bag-of-Visual-Words (BoVW). Image reconstruction from features can be a means of identifying the characteristics of features. Additionally, it enables us to generate novel images via features. Although BoVW is the de facto standard feature for image recognition and retrieval, successful image reconstruction from BoVW has not been reported yet. What complicates this task is that BoVW lacks the spatial information for including visual words. As described in this paper, to estimate an original arrangement, we propose an evaluation function that incorporates the naturalness of local adjacency and the global position, with a method to obtain related parameters using an external image database. 
To evaluate the performance of our method, we reconstruct images of objects of 101 kinds. Additionally, we apply our method to analyze object classifiers and to generate novel images via BoVW.", "Local Binary Descriptors are becoming more and more popular for image matching tasks, especially when going mobile. While they are extensively studied in this context, their ability to carry enough information in order to infer the original image is seldom addressed. In this work, we leverage an inverse problem approach to show that it is possible to directly reconstruct the image content from Local Binary Descriptors. This process relies on very broad assumptions besides the knowledge of the pattern of the descriptor at hand. This generalizes previous results that required either a prior learning database or non-binarized features. Furthermore, our reconstruction scheme reveals differences in the way different Local Binary Descriptors capture and encode image information. Hence, the potential applications of our work are multiple, ranging from privacy issues caused by eavesdropping image keypoints streamed by mobile devices to the design of better descriptors through the visualization and the analysis of their geometric content.", "We introduce algorithms to visualize feature spaces used by object detectors. The tools in this paper allow a human to put on 'HOG goggles' and perceive the visual world as a HOG based object detector sees it. We found that these visualizations allow us to analyze object detection systems in new ways and gain new insight into the detector's failures. For example, when we visualize the features for high scoring false alarms, we discovered that, although they are clearly wrong in image space, they do look deceptively similar to true positives in feature space. 
This result suggests that many of these false alarms are caused by our choice of feature space, and indicates that creating a better learning algorithm or building bigger datasets is unlikely to correct these errors. By visualizing feature spaces, we can gain a more intuitive understanding of our detection systems." ] }
1506.02557
2951595529
We investigate a local reparameterization technique for greatly reducing the variance of stochastic gradients for variational Bayesian inference (SGVB) of a posterior over model parameters, while retaining parallelizability. This local reparameterization translates uncertainty about global parameters into local noise that is independent across datapoints in the minibatch. Such parameterizations can be trivially parallelized and have variance that is inversely proportional to the minibatch size, generally leading to much faster convergence. Additionally, we explore a connection with dropout: Gaussian dropout objectives correspond to SGVB with local reparameterization, a scale-invariant prior and proportionally fixed posterior variance. Our method allows inference of more flexibly parameterized posteriors; specifically, we propose variational dropout, a generalization of Gaussian dropout where the dropout rates are learned, often leading to better models. The method is demonstrated through several experiments.
Pioneering work in practical variational inference for neural networks was done in @cite_12 , where a (biased) variational lower bound estimator was introduced with good results on recurrent neural network models. In later work @cite_17 @cite_13 it was shown that even more practical estimators can be formed for most types of continuous latent variables or parameters using a (non-local) reparameterization trick, leading to efficient and unbiased stochastic gradient-based variational inference. These works focused on an application to latent-variable inference; extensive empirical results on inference of global model parameters were reported in @cite_22 , including successful application to reinforcement learning. These earlier works used the relatively high-variance estimator, upon which we improve. Variable reparameterizations have a long history in the statistics literature, but have only recently found use for efficient machine learning and inference @cite_6 @cite_23 @cite_9 . Related is also @cite_10 , an algorithm for inferring marginal posterior probabilities; however, it requires certain tractabilities in the network, making it unsuitable for the type of models under consideration in this paper.
{ "cite_N": [ "@cite_22", "@cite_10", "@cite_9", "@cite_6", "@cite_23", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "2951266961", "2950177356", "2034376463", "1583776211", "1597459461", "1909320841", "2108677974", "" ], "abstract": [ "We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop. It regularises the weights by minimising a compression cost, known as the variational free energy or the expected lower bound on the marginal likelihood. We show that this principled kind of regularisation yields comparable performance to dropout on MNIST classification. We then demonstrate how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems, and how this weight uncertainty can be used to drive the exploration-exploitation trade-off in reinforcement learning.", "Large multilayer neural networks trained with backpropagation have recently achieved state-of-the-art results in a wide range of problems. However, using backprop for neural net learning still has some disadvantages, e.g., having to tune a large number of hyperparameters to the data, lack of calibrated probabilistic predictions, and a tendency to overfit the training data. In principle, the Bayesian approach to learning neural networks does not have these problems. However, existing Bayesian techniques lack scalability to large dataset and network sizes. In this work we present a novel scalable method for learning Bayesian neural networks, called probabilistic backpropagation (PBP). Similar to classical backpropagation, PBP works by computing a forward propagation of probabilities through the network and then doing a backward computation of gradients. A series of experiments on ten real-world datasets show that PBP is significantly faster than other techniques, while offering competitive predictive abilities. 
Our experiments also show that PBP provides accurate estimates of the posterior variance on the network weights.", "We propose a general algorithm for approximating nonstandard Bayesian posterior distributions. The algorithm minimizes the Kullback-Leibler divergence of an approximating distribution to the intractable posterior distribution. Our method can be used to approximate any posterior distribution, provided that it is given in closed form up to the proportionality constant. The approximation can be any distribution in the exponential family or any mixture of such distributions, which means that it can be made arbitrarily precise. Several examples illustrate the speed and accuracy of our approximation method in practice.", "Stochastic neurons can be useful for a number of reasons in deep learning models, but in many cases they pose a challenging problem: how to estimate the gradient of a loss function with respect to the input of such stochastic neurons, i.e., can we “back-propagate” through these stochastic neurons? We examine this question, existing approaches, and present two novel families of solutions, applicable in different settings. In particular, it is demonstrated that a simple biologically plausible formula gives rise to an unbiased (but noisy) estimator of the gradient with respect to a binary stochastic neuron firing probability. Unlike other estimators which view the noise as a small perturbation in order to estimate gradients by finite differences, this estimator is unbiased even without assuming that the stochastic perturbation is small. This estimator is also interesting because it can be applied in very general settings which do not allow gradient back-propagation, including the estimation of the gradient with respect to future rewards, as required in reinforcement learning setups.
We also propose an approach to approximating this unbiased but high-variance estimator by learning to predict it using a biased estimator. The second approach we propose assumes that an estimator of the gradient can be back-propagated and it provides an unbiased estimator of the gradient, but can only work with non-linearities unlike the hard threshold, but like the rectifier, that are not flat for all of their range. This is similar to traditional sigmoidal units but has the advantage that for many inputs, a hard decision (e.g., a 0 output) can be produced, which would be convenient for conditional computation and achieving sparse representations and sparse gradients.", "We propose a technique for increasing the efficiency of gradient-based inference and learning in Bayesian networks with multiple layers of continuous latent variables. We show that, in many cases, it is possible to express such models in an auxiliary form, where continuous latent variables are conditionally deterministic given their parents and a set of independent auxiliary variables. Variables of models in this auxiliary form have much larger Markov blankets, leading to significant speedups in gradient-based inference, e.g. rapid mixing Hybrid Monte Carlo and efficient gradient-based optimization. The relative efficiency is confirmed in experiments.", "We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model to represent approximate posterior distributions, and that acts as a stochastic encoder of the data. We develop stochastic back-propagation -- rules for back-propagation through stochastic variables -- and use this to develop an algorithm that allows for joint optimisation of the parameters of both the generative and recognition model.
We demonstrate on several real-world data sets that the model generates realistic samples, provides accurate imputations of missing data and is a useful tool for high-dimensional data visualisation.", "Variational methods have been previously explored as a tractable approximation to Bayesian inference for neural networks. However the approaches proposed so far have only been applicable to a few simple network architectures. This paper introduces an easy-to-implement stochastic variational method (or equivalently, minimum description length loss function) that can be applied to most neural networks. Along the way it revisits several common regularisers from a variational perspective. It also provides a simple pruning heuristic that can both drastically reduce the number of network weights and lead to improved generalisation. Experimental results are provided for a hierarchical multidimensional recurrent neural network applied to the TIMIT speech corpus.", "" ] }
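The local reparameterization trick summarized in the abstract and related work above can be illustrated with a minimal NumPy sketch (all shapes, values, and names here are illustrative assumptions, not the paper's code): for a linear layer with an independent Gaussian posterior N(mu, sigma^2) per weight, the pre-activations are themselves Gaussian, so they can be sampled directly with fresh noise per datapoint instead of sampling one weight matrix per minibatch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy variational posterior over a weight matrix: W ~ N(mu, sigma^2) elementwise.
d_in, d_out, batch = 4, 3, 256
mu = rng.normal(size=(d_in, d_out))
sigma = 0.5 * np.ones((d_in, d_out))
X = rng.normal(size=(batch, d_in))

# Naive reparameterization: one sampled W per minibatch -> the same noise
# is shared by every datapoint in the batch.
eps_w = rng.normal(size=(d_in, d_out))
B_naive = X @ (mu + sigma * eps_w)

# Local reparameterization: sample the pre-activations directly, per datapoint.
# For independent Gaussian weights, B[i, j] ~ N((X @ mu)[i, j], (X**2 @ sigma**2)[i, j]).
mean_B = X @ mu
var_B = (X ** 2) @ (sigma ** 2)
eps_b = rng.normal(size=(batch, d_out))
B_local = mean_B + np.sqrt(var_B) * eps_b

assert B_naive.shape == B_local.shape == (batch, d_out)
```

Both samplers give the same marginal distribution per element, but `eps_b` is drawn independently per row, so minibatch-averaged statistics see noise that shrinks with batch size, which is the variance reduction the abstract describes.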
1506.02557
2951595529
We investigate a local reparameterization technique for greatly reducing the variance of stochastic gradients for variational Bayesian inference (SGVB) of a posterior over model parameters, while retaining parallelizability. This local reparameterization translates uncertainty about global parameters into local noise that is independent across datapoints in the minibatch. Such parameterizations can be trivially parallelized and have variance that is inversely proportional to the minibatch size, generally leading to much faster convergence. Additionally, we explore a connection with dropout: Gaussian dropout objectives correspond to SGVB with local reparameterization, a scale-invariant prior and proportionally fixed posterior variance. Our method allows inference of more flexibly parameterized posteriors; specifically, we propose variational dropout, a generalization of Gaussian dropout where the dropout rates are learned, often leading to better models. The method is demonstrated through several experiments.
As we show here, regularization by dropout @cite_20 @cite_21 can be interpreted as variational inference. DropConnect @cite_5 is similar to dropout, but with binary noise on the weights rather than hidden units. DropConnect thus has a similar interpretation as variational inference, with a uniform prior over the weights, and a mixture of two Dirac peaks as posterior. In @cite_1 , a variation of dropout was introduced where a binary belief network is learned for producing dropout rates. Recently, @cite_15 proposed another Bayesian perspective on dropout. In recent work @cite_14 , a similar reparameterization is described and used for variational inference; their focus is on closed-form approximations of the variational bound, rather than unbiased Monte Carlo estimators. @cite_15 and @cite_11 also investigate a Bayesian perspective on dropout, but focus on the binary variant. @cite_11 reports various encouraging results on the utility of dropout's implied prediction uncertainty.
{ "cite_N": [ "@cite_14", "@cite_21", "@cite_1", "@cite_5", "@cite_15", "@cite_20", "@cite_11" ], "mid": [ "2281271687", "", "2136836265", "4919037", "", "2095705004", "582134693" ], "abstract": [ "Marginalising out uncertain quantities within the internal representations or parameters of neural networks is of central importance for a wide range of learning techniques, such as empirical, variational or full Bayesian methods. We set out to generalise fast dropout (Wang & Manning, 2013) to cover a wider variety of noise processes in neural networks. This leads to an efficient calculation of the marginal likelihood and predictive distribution which evades sampling and the consequential increase in training time due to highly variant gradient estimates. This allows us to approximate variational Bayes for the parameters of feed-forward neural networks. Inspired by the minimum description length principle, we also propose and experimentally verify the direct optimisation of the regularised predictive distribution. The methods yield results competitive with previous neural network based approaches and Gaussian processes on a wide range of regression tasks.", "", "Recently, it was shown that deep neural networks can perform very well if the activities of hidden units are regularized during learning, e.g., by randomly dropping out 50% of their activities. We describe a method called 'standout' in which a binary belief network is overlaid on a neural network and is used to regularize its hidden units by selectively setting activities to zero. This 'adaptive dropout network' can be trained jointly with the neural network by approximately computing local expectations of binary dropout variables, computing derivatives using back-propagation, and using stochastic gradient descent.
Interestingly, experiments show that the learnt dropout network parameters recapitulate the neural network parameters, suggesting that a good dropout network regularizes activities according to magnitude. When evaluated on the MNIST and NORB datasets, we found that our method achieves lower classification error rates than other feature learning methods, including standard dropout, denoising auto-encoders, and restricted Boltzmann machines. For example, our method achieves 0.80 and 5.8 errors on the MNIST and NORB test sets, which is better than state-of-the-art results obtained using feature learning methods, including those that use convolutional architectures.", "We introduce DropConnect, a generalization of Dropout (, 2012), for regularizing large fully-connected layers within neural networks. When training with Dropout, a randomly selected subset of activations are set to zero within each layer. DropConnect instead sets a randomly selected subset of weights within the network to zero. Each unit thus receives input from a random subset of units in the previous layer. We derive a bound on the generalization performance of both Dropout and DropConnect. We then evaluate DropConnect on a range of datasets, comparing to Dropout, and show state-of-the-art results on several image recognition benchmarks by aggregating multiple DropConnect-trained models.", "", "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. 
During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.", "Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs -- extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and non-linearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning." ] }
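As a hedged illustration of the Gaussian dropout objective mentioned in the related work above (the continuous counterpart of binary dropout; the function name and parameter values are ours, not from any cited code):

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_dropout(h, alpha, rng):
    """Multiply activations by xi ~ N(1, alpha): multiplicative Gaussian
    noise, whose objective (per the related work above) corresponds to
    SGVB with local reparameterization."""
    xi = rng.normal(loc=1.0, scale=np.sqrt(alpha), size=h.shape)
    return h * xi

h = np.ones((8, 5))
out = gaussian_dropout(h, 0.25, rng)
assert out.shape == h.shape  # noise preserves shape; E[out] equals h
```

Unlike binary dropout, no rescaling at test time is needed, since the noise has mean 1 by construction.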
1506.02574
2949627381
The degree distribution is one of the most fundamental graph properties of interest for real-world graphs. It has been widely observed in numerous domains that graphs typically have a tailed or scale-free degree distribution. While the average degree is usually quite small, the variance is quite high and there are vertices with degrees at all scales. We focus on the problem of approximating the degree distribution of a large streaming graph, with small storage. We design an algorithm headtail, whose main novelty is a new estimator of infrequent degrees using truncated geometric random variables. We give a mathematical analysis of headtail and show that it has excellent behavior in practice. We can process streams with millions of edges with storage less than 1 and get extremely accurate approximations for all scales in the degree distribution. We also introduce a new notion of Relative Hausdorff distance between tailed histograms. Existing notions of distances between distributions are not suitable, since they ignore infrequent degrees in the tail. The Relative Hausdorff distance measures deviations at all scales, and is a more suitable distance for comparing degree distributions. By tracking this new measure, we are able to give strong empirical evidence of the convergence of headtail.
Finding frequent items, aka "heavy hitters," is a classic problem in the data stream model. Cormode and Hadjieleftheriou @cite_0 compare three of the most important algorithms: the algorithm @cite_38 @cite_42 @cite_36 , the algorithm @cite_32 , and the algorithm @cite_23 . Other popular algorithms such as CountSketch @cite_7 and CountMin @cite_18 enable frequent items to be identified when the frequency of an item may be incremented and decremented. For large degrees, these approaches will give accurate results, but the error term dwarfs the degree at smaller scales. We demonstrate this empirically in Section . Much work has been done in approximating frequency moments @cite_10 @cite_15 @cite_22 @cite_0 , but they do not give an estimate for multiple scales. Nor has this work been implemented in practice for large data sets.
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_22", "@cite_7", "@cite_36", "@cite_42", "@cite_32", "@cite_0", "@cite_23", "@cite_15", "@cite_10" ], "mid": [ "2597765082", "2080234606", "2103126020", "", "2056012370", "", "2069980026", "1493892051", "2113139394", "2069414131", "2080745194" ], "abstract": [ "We consider a router on the Internet analyzing the statistical properties of a TCP/IP packet stream. A fundamental difficulty with measuring traffic behavior on the Internet is that there is simply too much data to be recorded for later analysis, on the order of gigabytes a second. As a result, network routers can collect only relatively few statistics about the data. The central problem addressed here is to use the limited memory of routers to determine essential features of the network traffic stream. A particularly difficult and representative subproblem is to determine the top k categories to which the most packets belong, for a desired value of k and for a given notion of categorization such as the destination IP address. We present an algorithm that deterministically finds (in particular) all categories having a frequency above 1/(m+1) using m counters, which we prove is best possible in the worst case. We also present a sampling-based algorithm for the case that packet categories follow an arbitrary distribution, but their order over time is permuted uniformly at random. Under this model, our algorithm identifies flows above a frequency threshold of roughly 1/√(nm) with high probability, where m is the number of counters and n is the number of packets observed. This guarantee is not far off from the ideal of identifying all flows (probability 1/n), and we prove that it is best possible up to a logarithmic factor. We show that the algorithm ranks the identified flows according to frequency within any desired constant factor of accuracy.", "We introduce a new sublinear space data structure--the count-min sketch--for summarizing data streams.
Our sketch allows fundamental queries in data stream summarization such as point, range, and inner product queries to be approximately answered very quickly; in addition, it can be applied to solve several important problems in data streams such as finding quantiles, frequent items, etc. The time and space bounds we show for using the CM sketch to solve these problems significantly improve those previously known--typically from 1/e² to 1/e in factor.", "We give the first optimal algorithm for estimating the number of distinct elements in a data stream, closing a long line of theoretical research on this problem begun by Flajolet and Martin in their seminal paper in FOCS 1983. This problem has applications to query optimization, Internet routing, network topology, and data mining. For a stream of indices in 1,...,n , our algorithm computes a (1 ± e)-approximation using an optimal O(1/e² + log(n)) bits of space with 2/3 success probability, where 0 < e < 1. We also give an algorithm to estimate the Hamming norm of a stream, a generalization of the number of distinct elements, which is useful in data cleaning, packet tracing, and database auditing. Our algorithm uses nearly optimal space, and has optimal O(1) update and reporting times.", "", "The problem of finding heavy hitters and approximating the frequencies of items is at the heart of many problems in data stream analysis. It has been observed that several proposed solutions to this problem can outperform their worst-case guarantees on real data. This leads to the question of whether some stronger bounds can be guaranteed. We answer this in the positive by showing that a class of counter-based algorithms (including the popular and very space-efficient Frequent and SpaceSaving algorithms) provides much stronger approximation guarantees than previously known.
Specifically, we show that errors in the approximation of individual elements do not depend on the frequencies of the most frequent elements, but only on the frequency of the remaining tail. This shows that counter-based methods are the most space-efficient (in fact, space-optimal) algorithms having this strong error bound. This tail guarantee allows these algorithms to solve the sparse recovery problem. Here, the goal is to recover a faithful representation of the vector of frequencies, f. We prove that using space O(k), the algorithms construct an approximation f* to the frequency vector f so that the L1 error ‖f − f*‖_1 is close to the best possible error min_f′ ‖f′ − f‖_1, where f′ ranges over all vectors with at most k non-zero entries. This improves the previously best known space bound of about O(k log n) for streams without element deletions (where n is the size of the domain from which stream elements are drawn). Other consequences of the tail guarantees are results for skewed (Zipfian) data, and guarantees for accuracy of merging multiple summarized streams.
In addition, our algorithm leads directly to a 2-pass algorithm for the problem of estimating the items with the largest (absolute) change in frequency between two data streams. To our knowledge, this latter problem has not been previously studied in the literature.", "We present a simple, exact algorithm for identifying in a multiset the items with frequency more than a threshold θ. The algorithm requires two passes, linear time, and space 1/θ. The first pass is an on-line algorithm, generalizing a well-known algorithm for finding a majority element, for identifying a set of at most 1/θ items that includes, possibly among others, all items with frequency greater than θ.", "We give a 1-pass O(m^(1−2/k))-space algorithm for computing the k-th frequency moment of a data stream for any real k > 2. Together with the lower bounds of [1, 2, 4], this resolves the main problem left open by in 1996 [1]. Our algorithm also works for streams with deletions and thus gives an O(m^(1−2/p)) space algorithm for the L_p difference problem for any p > 2. This essentially matches the known Ω(m^(1−2/p−o(1))) lower bound of [12, 2]. Finally the update time of our algorithms is O(1).", "The frequency moments of a sequence containing m_i elements of type i, 1 ≤ i ≤ n, are the numbers F_k = ∑_{i=1}^n m_i^k. We consider the space complexity of randomized algorithms that approximate the numbers F_k, when the elements of the sequence are given one by one and cannot be stored. Surprisingly, it turns out that the numbers F_0, F_1, and F_2 can be approximated in logarithmic space, whereas the approximation of F_k for k ≥ 6 requires n^(Ω(1)) space. Applications to data bases are mentioned as well." ] }
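The counter-based "Frequent" family discussed in the related work above (with m counters, any item whose frequency exceeds n/(m+1) is guaranteed to survive in the summary) can be sketched as follows; this is a generic textbook Misra-Gries version, not the paper's headtail code:

```python
def misra_gries(stream, m):
    """One-pass Frequent algorithm with m counters. Every item whose true
    count exceeds n/(m+1), where n is the stream length, is guaranteed to
    remain in the returned summary (with an undercounted estimate)."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < m:
            counters[x] = 1
        else:
            # Decrement all counters; drop any that reach zero.
            for k in list(counters):
                counters[k] -= 1
                if counters[k] == 0:
                    del counters[k]
    return counters

# Usage: n = 96, m = 3, so the guarantee threshold is n/(m+1) = 24.
# 'a' appears 60 times (> 24) and therefore must be retained.
stream = ['a'] * 60 + ['b'] * 20 + list('cdefghij') * 2
summary = misra_gries(stream, m=3)
assert 'a' in summary
```

Each decrement step removes m+1 distinct stream occurrences from the counters, so an item's counter can be reduced at most n/(m+1) times in total, which gives the survival guarantee.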
1506.02574
2949627381
The degree distribution is one of the most fundamental graph properties of interest for real-world graphs. It has been widely observed in numerous domains that graphs typically have a tailed or scale-free degree distribution. While the average degree is usually quite small, the variance is quite high and there are vertices with degrees at all scales. We focus on the problem of approximating the degree distribution of a large streaming graph, with small storage. We design an algorithm headtail, whose main novelty is a new estimator of infrequent degrees using truncated geometric random variables. We give a mathematical analysis of headtail and show that it has excellent behavior in practice. We can process streams with millions of edges with storage less than 1 and get extremely accurate approximations for all scales in the degree distribution. We also introduce a new notion of Relative Hausdorff distance between tailed histograms. Existing notions of distances between distributions are not suitable, since they ignore infrequent degrees in the tail. The Relative Hausdorff distance measures deviations at all scales, and is a more suitable distance for comparing degree distributions. By tracking this new measure, we are able to give strong empirical evidence of the convergence of headtail.
Over the last ten years, there has been a growing body of work focused on processing graphs in the data stream model. See @cite_33 for a summary of recent work on graph streaming and sketching. This work has included problems such as the number of triangles and related quantities such as the transitivity coefficient @cite_17 @cite_3 @cite_1 , estimating the connectivity properties of a graph @cite_34 , and solving combinatorial problems such as computing large matchings @cite_19 @cite_43 . Cormode and Muthukrishnan considered estimating properties of the degree distribution in multigraphs but not the distribution itself @cite_9 .
{ "cite_N": [ "@cite_33", "@cite_9", "@cite_1", "@cite_3", "@cite_19", "@cite_43", "@cite_34", "@cite_17" ], "mid": [ "", "", "2124450885", "2094308804", "2295466155", "1514707655", "2025622191", "2031082424" ], "abstract": [ "", "", "Sampling is a standard approach in big-graph analytics; the goal is to efficiently estimate the graph properties by consulting a sample of the whole population. A perfect sample is assumed to mirror every property of the whole population. Unfortunately, such a perfect sample is hard to collect in complex populations such as graphs (e.g. web graphs, social networks), where an underlying network connects the units of the population. Therefore, a good sample will be representative in the sense that graph properties of interest can be estimated with a known degree of accuracy. While previous work focused particularly on sampling schemes to estimate certain graph properties (e.g. triangle count), much less is known for the case when we need to estimate various graph properties with the same sampling scheme. In this paper, we propose a generic stream sampling framework for big-graph analytics, called Graph Sample and Hold (gSH), which samples from massive graphs sequentially in a single pass, one edge at a time, while maintaining a small state in memory. We use a Horvitz-Thompson construction in conjunction with a scheme that samples arriving edges without adjacencies to previously sampled edges with probability p and holds edges with adjacencies with probability q. Our sample and hold framework facilitates the accurate estimation of subgraph patterns by enabling the dependence of the sampling process to vary based on previous history. Within our framework, we show how to produce statistically unbiased estimators for various graph properties from the sample. Given that the graph analytics will run on a sample instead of the whole population, the runtime complexity is kept under control.
Moreover, given that the estimators are unbiased, the approximation error is also kept under control. Finally, we test the performance of the proposed framework (gSH) on various types of graphs, showing that from a sample with ~40K edges, it produces estimates with relative errors", "This paper presents a new space-efficient algorithm for counting and sampling triangles--and more generally, constant-sized cliques--in a massive graph whose edges arrive as a stream. Compared to prior work, our algorithm yields significant improvements in the space and time complexity for these fundamental problems. Our algorithm is simple to implement and has very good practical performance on large graphs.", "We present a streaming algorithm that makes one pass over the edges of an unweighted graph presented in random order, and produces a polylogarithmic approximation to the size of the maximum matching in the graph, while using only polylogarithmic space. Prior to this work the only approximations known were a folklore O(√n) approximation with polylogarithmic space in an n vertex graph and a constant approximation with Ω(n) space. Our work thus gives the first algorithm where both the space and approximation factors are smaller than any polynomial in n. Our algorithm is obtained by effecting a streaming implementation of a simple \"local\" algorithm that we design for this problem. The local algorithm produces a O(k · n^(1/k)) approximation to the size of a maximum matching by exploring the radius k neighborhoods of vertices, for any parameter k. We show, somewhat surprisingly, that our local algorithm can be implemented in the streaming setting even for k = Ω(log n / log log n). Our analysis exposes some of the problems that arise in such conversions of local algorithms into streaming ones, and gives techniques to overcome such problems.
In this model, applicable when dealing with massive graphs, edges are streamed-in in some arbitrary order rather than residing in randomly accessible memory. For e>0, we achieve a @math approximation for maximum cardinality matching and a @math approximation to maximum weighted matching. Both algorithms use a constant number of passes and @math space.", "A growing body of work addresses the challenge of processing dynamic graph streams: a graph is defined by a sequence of edge insertions and deletions and the goal is to construct synopses and compute properties of the graph while using only limited memory. Linear sketches have proved to be a powerful technique in this model and can also be used to minimize communication in distributed graph processing. We present the first linear sketches for estimating vertex connectivity and constructing hypergraph sparsifiers. Vertex connectivity exhibits markedly different combinatorial structure than edge connectivity and appears to be harder to estimate in the dynamic graph stream model. Our hypergraph result generalizes the work of (PODS 2012) on graph sparsification and has the added benefit of significantly simplifying the previous results. One of the main ideas is related to the problem of reconstructing subgraphs that satisfy a specific sparsity property. We introduce a more general notion of graph degeneracy and extend the graph reconstruction result of (IPDPS 2011).", "We design a space efficient algorithm that approximates the transitivity (global clustering coefficient) and total triangle count with only a single pass through a graph given as a stream of edges. Our procedure is based on the classic probabilistic result, the birthday paradox. When the transitivity is constant and there are more edges than wedges (common properties for social networks), we can prove that our algorithm requires O(√n) space (n is the number of vertices) to provide accurate estimates. 
We run a detailed set of experiments on a variety of real graphs and demonstrate that the memory requirement of the algorithm is a tiny fraction of the graph. For example, even for a graph with 200 million edges, our algorithm stores just 60,000 edges to give accurate results. Being a single-pass streaming algorithm, our procedure also maintains a real-time estimate of the transitivity / number of triangles of a graph, by storing a minuscule fraction of edges." ] }
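The sample-and-hold scheme described in the gSH abstract above admits a compact sketch. The following is our own simplified illustration, not the authors' implementation; the function name and the bookkeeping of "touched" vertices are ours:

```python
import random

def sample_and_hold(edge_stream, p, q, rng):
    """Single-pass edge sampling in the spirit of graph sample-and-hold:
    an arriving edge with no endpoint adjacent to the current sample is
    sampled with probability p; an edge adjacent to the sample is 'held'
    with probability q."""
    sample = set()
    touched = set()  # vertices incident to already-sampled edges
    for u, v in edge_stream:
        prob = q if (u in touched or v in touched) else p
        if rng.random() < prob:
            sample.add((u, v))
            touched.update((u, v))
    return sample
```

With p = q the scheme degenerates to independent edge sampling; the interesting regime is q > p, where edges adjacent to the current sample are held more often, which is what makes subgraph patterns such as triangles easier to estimate.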
1506.02574
2949627381
The degree distribution is one of the most fundamental graph properties of interest for real-world graphs. It has been widely observed in numerous domains that graphs typically have a tailed or scale-free degree distribution. While the average degree is usually quite small, the variance is quite high and there are vertices with degrees at all scales. We focus on the problem of approximating the degree distribution of a large streaming graph, with small storage. We design an algorithm headtail, whose main novelty is a new estimator of infrequent degrees using truncated geometric random variables. We give a mathematical analysis of headtail and show that it has excellent behavior in practice. We can process streams with millions of edges with storage less than 1% and get extremely accurate approximations for all scales in the degree distribution. We also introduce a new notion of Relative Hausdorff distance between tailed histograms. Existing notions of distances between distributions are not suitable, since they ignore infrequent degrees in the tail. The Relative Hausdorff distance measures deviations at all scales, and is a more suitable distance for comparing degree distributions. By tracking this new measure, we are able to give strong empirical evidence of the convergence of headtail.
Closest to this work is the series of graph sampling papers by @cite_27 @cite_26 @cite_13 @cite_1 . Their work focuses on estimating many properties (as opposed to a single property) with a fixed sampling method, and they study various sampling schemes. The results on estimating ccdhs typically use 20-30% samples. The recent Graph Sample and Hold framework gives extremely strong results for triangle counting @cite_1 , but has not been applied to the ccdh. This technique is closely related to an approach for estimating frequency moments @cite_10 @cite_37 . Our sampling approach is also similar, and our main contribution is in the actual estimation procedure.
{ "cite_N": [ "@cite_13", "@cite_37", "@cite_26", "@cite_1", "@cite_27", "@cite_10" ], "mid": [ "2963316155", "2952007101", "", "2124450885", "180417844", "2080745194" ], "abstract": [ "Network sampling is integral to the analysis of social, information, and biological networks. Since many real-world networks are massive in size, continuously evolving, and or distributed in nature, the network structure is often sampled in order to facilitate study. For these reasons, a more thorough and complete understanding of network sampling is critical to support the field of network science. In this paper, we outline a framework for the general problem of network sampling by highlighting the different objectives, population and units of interest, and classes of network sampling methods. In addition, we propose a spectrum of computational models for network sampling methods, ranging from the traditionally studied model based on the assumption of a static domain to a more challenging model that is appropriate for streaming domains. We design a family of sampling methods based on the concept of graph induction that generalize across the full spectrum of computational models (from static to streaming) while efficiently preserving many of the topological properties of the input graphs. Furthermore, we demonstrate how traditional static sampling algorithms can be modified for graph streams for each of the three main classes of sampling methods: node, edge, and topology-based sampling. Experimental results indicate that our proposed family of sampling methods more accurately preserve the underlying properties of the graph in both static and streaming domains. Finally, we study the impact of network sampling algorithms on the parameter estimation and performance evaluation of relational classification algorithms.", "Given data stream @math of size @math of numbers from @math , the frequency of @math is defined as @math . The @math -th of @math is defined as @math . 
We consider the problem of approximating frequency moments in insertion-only streams for @math . For any constant @math we show an @math upper bound on the space complexity of the problem. Here @math is the iterative @math function. To simplify the presentation, we make the following assumptions: @math and @math are polynomially far; approximation error @math and parameter @math are constants. We observe a natural bijection between streams and special matrices. Our main technical contribution is a non-uniform sampling method on matrices. We call our method a ; it samples a heavy element (i.e., element @math with frequency @math ) with probability @math and gives approximation @math . In addition, the estimations never exceed the real values, that is @math for all @math . As a result, we reduce the space complexity of finding a heavy element to @math bits. We apply our method of recursive sketches and resolve the problem with @math bits.", "", "Sampling is a standard approach in big-graph analytics; the goal is to efficiently estimate the graph properties by consulting a sample of the whole population. A perfect sample is assumed to mirror every property of the whole population. Unfortunately, such a perfect sample is hard to collect in complex populations such as graphs (e.g. web graphs, social networks), where an underlying network connects the units of the population. Therefore, a good sample will be representative in the sense that graph properties of interest can be estimated with a known degree of accuracy. While previous work focused particularly on sampling schemes to estimate certain graph properties (e.g. triangle count), much less is known for the case when we need to estimate various graph properties with the same sampling scheme. 
In this paper, we propose a generic stream sampling framework for big-graph analytics, called Graph Sample and Hold (gSH), which samples from massive graphs sequentially in a single pass, one edge at a time, while maintaining a small state in memory. We use a Horvitz-Thompson construction in conjunction with a scheme that samples arriving edges without adjacencies to previously sampled edges with probability p and holds edges with adjacencies with probability q. Our sample and hold framework facilitates the accurate estimation of subgraph patterns by enabling the dependence of the sampling process to vary based on previous history. Within our framework, we show how to produce statistically unbiased estimators for various graph properties from the sample. Given that the graph analytics will run on a sample instead of the whole population, the runtime complexity is kept under control. Moreover, given that the estimators are unbiased, the approximation error is also kept under control. Finally, we test the performance of the proposed framework (gSH) on various types of graphs, showing that from a sample with ~40K edges, it produces estimates with relative errors", "Recently, there has been a great deal of research focusing on the development of sampling algorithms for networks with small-world and/or power-law structure. The peer-to-peer research community (e.g., [7]) has used sampling to quickly explore and obtain a good representative sample of the network topology, as these networks are hard to explore completely and have significant amounts of churn in their topology. For collecting data from social networks, researchers often use snowball sampling (e.g., [2]) due to the lack of access to the complete graph. have developed Forest Fire Sampling, which uses a hybrid combination of snowball sampling and random-walk sampling to produce samples that match the temporal evolution of the underlying social network [5].
have developed a Metropolis algorithm which samples in a manner designed to match desired properties in the original network [3]. Although there has been a great deal of research focusing on the development of sampling algorithms, much of this work is based on empirical study and evaluation (i.e., measuring the similarity between sampled and original network properties). There has been some work (e.g., [4, 8, 6]) that has studied the statistical properties of samples of complex networks produced by traditional sampling algorithms such as node sampling, edge sampling and random walks. However, there has been relatively little attention paid to the development of a theoretical foundation for sampling from networks—including a formal framework for sampling, an understanding of various network characteristics and their dependencies, and an analysis of their impact on the accuracy of sampling algorithms. In this paper, we reconsider the foundations of network sampling and attempt to formalize the goals, and process of, sampling, in order to frame future development and analysis of sampling algorithms.", "The frequency moments of a sequence containing m_i elements of type i, 1 ≤ i ≤ n, are the numbers F_k = Σ_{i=1}^n m_i^k. We consider the space complexity of randomized algorithms that approximate the numbers F_k, when the elements of the sequence are given one by one and cannot be stored. Surprisingly, it turns out that the numbers F_0, F_1, and F_2 can be approximated in logarithmic space, whereas the approximation of F_k for k ≥ 6 requires n^{Ω(1)} space. Applications to data bases are mentioned as well." ] }
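The Horvitz-Thompson construction that several of the abstracts above rely on can be illustrated on triangle counting: keep each edge independently, then up-weight every surviving triangle by the inverse of its retention probability. The sketch below is our own simplification for illustration; real streaming estimators are considerably more refined:

```python
import random

def triangles(edges):
    """Count triangles in an undirected edge list (brute force, for checking)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    total = sum(len(adj[u] & adj[v]) for u, v in edges)
    return total // 3  # each triangle is counted once per incident edge

def ht_triangle_estimate(edges, p, rng):
    """Keep each edge independently with probability p, then up-weight every
    surviving triangle by 1/p**3 -- the inverse of the probability that all
    three of its edges survive. This is a Horvitz-Thompson-style unbiased
    estimator of the triangle count."""
    sample = [e for e in edges if rng.random() < p]
    return triangles(sample) / p ** 3
```

Unbiasedness holds for any p in (0, 1]; the variance, of course, grows as p shrinks, which is why the sample-and-hold variants above bias the sampling towards edges adjacent to already-sampled ones.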
1506.02515
2949273893
We revisit the idea of brain damage, i.e. the pruning of the coefficients of a neural network, and suggest how brain damage can be modified and used to speedup convolutional layers. The approach uses the fact that many efficient implementations reduce generalized convolutions to matrix multiplications. The suggested brain damage process prunes the convolutional kernel tensor in a group-wise fashion by adding group-sparsity regularization to the standard training process. After such group-wise pruning, convolutions can be reduced to multiplications of thinned dense matrices, which leads to speedup. In the comparison on AlexNet, the method achieves very competitive performance.
As ConvNets grow in size and spread towards real-time and large-scale computer vision systems, much attention has been devoted to the problem of speeding up convolutional layers. In parallel to the lowering-based approaches mentioned above, which reduce convolutions to matrix multiplications, several works investigate the use of fast Fourier transforms @cite_4 @cite_20 . Despite their theoretical appeal, Fourier transforms have their own limitations (mostly related to memory usage), and most existing packages stick to the lowering approach, which at the moment of submission is also used by the fastest implementation @cite_6 .
{ "cite_N": [ "@cite_4", "@cite_6", "@cite_20" ], "mid": [ "1922123711", "", "1789336918" ], "abstract": [ "Convolutional networks are one of the most widely employed architectures in computer vision and machine learning. In order to leverage their ability to learn complex functions, large amounts of data are required for training. Training a large convolutional network to produce state-of-the-art results can take weeks, even when using modern GPUs. Producing labels using a trained network can also be costly when dealing with web-scale datasets. In this work, we present a simple algorithm which accelerates training and inference by a significant factor, and can yield improvements of over an order of magnitude compared to existing state-of-the-art implementations. This is done by computing convolutions as pointwise products in the Fourier domain while reusing the same transformed feature map many times. The algorithm is implemented on a GPU architecture and addresses a number of related challenges.", "", "We examine the performance profile of Convolutional Neural Network training on the current generation of NVIDIA Graphics Processing Units. We introduce two new Fast Fourier Transform convolution implementations: one based on NVIDIA's cuFFT library, and another based on a Facebook authored FFT implementation, fbfft, that provides significant speedups over cuFFT (over 1.5x) for whole CNNs. Both of these convolution implementations are available in open source, and are faster than NVIDIA's cuDNN implementation for many common convolutional layers (up to 23.5x for some synthetic kernel configurations). We discuss different performance regimes of convolutions, comparing areas where straightforward time domain convolutions outperform Fourier frequency domain convolutions. Details on algorithmic applications of NVIDIA GPU hardware specifics in the implementation of fbfft are also provided." ] }
1506.02515
2949273893
We revisit the idea of brain damage, i.e. the pruning of the coefficients of a neural network, and suggest how brain damage can be modified and used to speedup convolutional layers. The approach uses the fact that many efficient implementations reduce generalized convolutions to matrix multiplications. The suggested brain damage process prunes the convolutional kernel tensor in a group-wise fashion by adding group-sparsity regularization to the standard training process. After such group-wise pruning, convolutions can be reduced to multiplications of thinned dense matrices, which leads to speedup. In the comparison on AlexNet, the method achieves very competitive performance.
Alternatively, several recent works investigate various kinds of tensor factorization in order to break a generalized convolution into a sequence of smaller convolutions with fewer parameters @cite_28 @cite_32 @cite_25 . Using inexact low-rank factorizations within such approaches yields a considerable speedup when a sufficiently low decomposition rank is used. Our approach is related to tensor-factorization approaches in that we also seek to replace the full convolution tensor with a tensor that has fewer parameters. Our approach, however, does not perform any sort of decomposition or factorization of the kernel tensor. Another, more distantly related, line of work is represented by a group of methods @cite_36 @cite_45 @cite_9 that compress the initial large ConvNet into a smaller network with a different architecture while trying to match the outputs of the two networks.
{ "cite_N": [ "@cite_28", "@cite_36", "@cite_9", "@cite_32", "@cite_45", "@cite_25" ], "mid": [ "2950967261", "2952881492", "1690739335", "2167215970", "1821462560", "" ], "abstract": [ "The focus of this paper is speeding up the evaluation of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition, showing a possible 2.5x speedup with no loss in accuracy, and 4.5x speedup with less than 1 drop in accuracy, still achieving state-of-the-art on standard benchmarks.", "Currently, deep neural networks are the state of the art on problems such as speech recognition and computer vision. In this extended abstract, we show that shallow feed-forward networks can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models. Moreover, in some cases the shallow neural nets can learn these deep functions using a total number of parameters similar to the original deep model. We evaluate our method on the TIMIT phoneme recognition task and are able to train shallow fully-connected nets that perform similarly to complex, well-engineered, deep convolutional architectures. 
Our success in training shallow neural nets to mimic deeper models suggests that there probably exist better algorithms for training shallow feed-forward nets than those currently available.", "While depth tends to improve network performances, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teacher network.", "We present techniques for speeding up the test-time evaluation of large convolutional networks, designed for object recognition tasks. These models deliver impressive accuracy, but each image evaluation requires millions of floating point operations, making their deployment on smartphones and Internet-scale clusters problematic. The computation is dominated by the convolution operations in the lower layers of the model. We exploit the redundancy present within the convolutional filters to derive approximations that significantly reduce the required computation. 
Using large state-of-the-art models, we demonstrate speedups of convolutional layers on both CPU and GPU by a factor of 2x, while keeping the accuracy within 1% of the original model.", "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.", "" ] }
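The group-wise pruning idea discussed above can be sketched directly: collect, for each fixed (input channel, spatial offset), the corresponding kernel entry across all filters into one group, and zero groups whose norm is small. This is our own simplified one-shot illustration (the threshold tau and the grouping bookkeeping are ours); the paper's actual procedure drives groups to zero via group-sparsity regularization during training rather than thresholding:

```python
import math

def prune_groups(kernel, tau):
    """Group-wise pruning sketch. kernel[f][c][a][b] is a 4-D nested list
    (filters × channels × kh × kw). A group gathers the entry at a fixed
    input channel c and spatial offset (a, b) across all filters; zeroing
    a group removes one whole column of the lowered filter matrix. Groups
    with L2 norm below tau are zeroed in place. Returns the kernel and the
    fraction of groups removed."""
    F = len(kernel)
    C, kh, kw = len(kernel[0]), len(kernel[0][0]), len(kernel[0][0][0])
    removed = 0
    for c in range(C):
        for a in range(kh):
            for b in range(kw):
                norm = math.sqrt(sum(kernel[f][c][a][b] ** 2 for f in range(F)))
                if norm < tau:
                    removed += 1
                    for f in range(F):
                        kernel[f][c][a][b] = 0.0
    return kernel, removed / (C * kh * kw)
```

The point of grouping across filters is that a removed group is shared by every filter, so the saving survives the reduction of convolution to a dense matrix product.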
1506.02515
2949273893
We revisit the idea of brain damage, i.e. the pruning of the coefficients of a neural network, and suggest how brain damage can be modified and used to speedup convolutional layers. The approach uses the fact that many efficient implementations reduce generalized convolutions to matrix multiplications. The suggested brain damage process prunes the convolutional kernel tensor in a group-wise fashion by adding group-sparsity regularization to the standard training process. After such group-wise pruning, convolutions can be reduced to multiplications of thinned dense matrices, which leads to speedup. In the comparison on AlexNet, the method achieves very competitive performance.
Our approach is also related to methods that use structured sparsity @cite_44 @cite_37 @cite_23 to discover optimal architectures of certain machine learners, e.g. to discover the optimal structure of a graphical model @cite_18 or the optimal receptive fields in a two-layer image classifier @cite_21 . On the other hand, since our approach effectively learns receptive fields within a ConvNet, it can be related to other receptive field learning approaches, e.g. @cite_0 @cite_40 .
{ "cite_N": [ "@cite_37", "@cite_18", "@cite_21", "@cite_44", "@cite_0", "@cite_40", "@cite_23" ], "mid": [ "1984915212", "2290703437", "", "2138019504", "", "2002648693", "2108687351" ], "abstract": [ "The Group-Lasso method for finding important explanatory factors suffers from the potential non-uniqueness of solutions and also from high computational costs. We formulate conditions for the uniqueness of Group-Lasso solutions which lead to an easily implementable test procedure that allows us to identify all potentially active groups. These results are used to derive an efficient algorithm that can deal with input dimensions in the millions and can approximate the solution path efficiently. The derived methods are applied to large-scale learning problems where they exhibit excellent performance and where the testing procedure helps to avoid misinterpretations of the solutions.", "We study the problem of learning the graph structure associated with a general discrete graphical models (each variable can take any of m > 1 values, the clique factors have maximum size c ≥ 2) from samples, under high-dimensional scaling where the number of variables p could be larger than the number of samples n. We provide a quantitative consistency analysis of a procedure based on node-wise multi-class logistic regression with group-sparse regularization. We first consider general m-ary pairwise models – where each factor depends on at most two variables. We show that when the number of samples scale as n > K(m − 1) 2 d 2 log((m −1) 2 (p −1))– where d is the maximum degree and K a fixed constant – the procedure succeeds in recovering the graph with high probability. For general models with c-way factors, the natural multi-way extension of the pairwise method quickly becomes very computationally complex. So we studied the effectiveness of using the pairwise method even while the true model has higher order factors. 
Surprisingly, we show that under slightly more stringent conditions, the pairwise procedure still recovers the graph structure, when the samples scale as n > K(m − 1)^2 d^{3·2^{c−1}} log((m − 1)^c (p − 1)^{c−1}).", "", "Summary. We consider the problem of selecting grouped variables (factors) for accurate prediction in regression. Such a problem arises naturally in many practical situations with the multifactor analysis-of-variance problem as the most important and well-known example. Instead of selecting factors by stepwise backward elimination, we focus on the accuracy of estimation and consider extensions of the lasso, the LARS algorithm and the non-negative garrotte for factor selection. The lasso, the LARS algorithm and the non-negative garrotte are recently proposed regression methods that can be used to select individual variables. We study and propose efficient algorithms for the extensions of these methods for factor selection and show that these extensions give superior performance to the traditional stepwise backward elimination method in factor selection problems. We study the similarities and the differences between these methods. Simulations and real examples are used to illustrate the methods.", "", "From the early HMAX model to Spatial Pyramid Matching, spatial pooling has played an important role in visual recognition pipelines. By aggregating local statistics, it equips the recognition pipelines with a certain degree of robustness to translation and deformation yet preserving spatial information. Despite its predominance in current recognition systems, we have seen little progress towards fully adapting the pooling strategy to the task at hand. In this paper, we propose a flexible parameterization of the spatial pooling step and learn the pooling regions together with the classifier. We investigate a smoothness regularization term that in conjunction with an efficient learning scheme makes learning scalable.
Our framework can work with both popular pooling operators: sum-pooling and max-pooling. Finally, we show benefits of our approach for object recognition tasks based on visual words and higher level event recognition tasks based on object-bank features. In both cases, we improve over the hand-crafted spatial pooling step showing the importance of its adaptation to the task.", "We consider the empirical risk minimization problem for linear supervised learning, with regularization by structured sparsity-inducing norms. These are defined as sums of Euclidean norms on certain subsets of variables, extending the usual l1-norm and the group l1-norm by allowing the subsets to overlap. This leads to a specific set of allowed nonzero patterns for the solutions of such problems. We first explore the relationship between the groups defining the norm and the resulting nonzero patterns, providing both forward and backward algorithms to go back and forth from groups to patterns. This allows the design of norms adapted to specific prior knowledge expressed in terms of nonzero patterns. We also present an efficient active set algorithm, and analyze the consistency of variable selection for least-squares linear regression in low and high-dimensional settings." ] }
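Group-sparsity regularizers of the kind discussed above are typically handled with the proximal operator of the group-lasso penalty λ·Σ_g ||w_g||₂, which performs block soft-thresholding. A minimal sketch (our own; in training, this prox step would be interleaved with stochastic gradient updates):

```python
import math

def group_soft_threshold(groups, lam):
    """Proximal operator of the group-lasso penalty lam * sum_g ||w_g||_2:
    each group is shrunk towards zero by lam in Euclidean norm, and any
    group whose norm is below lam is zeroed exactly -- this is what drives
    whole groups of weights to zero during regularized training."""
    out = []
    for g in groups:
        norm = math.sqrt(sum(w * w for w in g))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        out.append([scale * w for w in g])
    return out
```

Unlike the plain lasso, which zeroes individual coordinates, the block form zeroes entire groups at once, which is exactly the structured pattern needed to remove whole columns of a lowered filter matrix.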
1506.02515
2949273893
We revisit the idea of brain damage, i.e. the pruning of the coefficients of a neural network, and suggest how brain damage can be modified and used to speedup convolutional layers. The approach uses the fact that many efficient implementations reduce generalized convolutions to matrix multiplications. The suggested brain damage process prunes the convolutional kernel tensor in a group-wise fashion by adding group-sparsity regularization to the standard training process. After such group-wise pruning, convolutions can be reduced to multiplications of thinned dense matrices, which leads to speedup. In the comparison on AlexNet, the method achieves very competitive performance.
The combination of sparsity and deep learning has been investigated within several unsupervised approaches such as sparse autoencoders @cite_41 @cite_38 and sparse deep belief networks @cite_31 . We also note two reports that use some form of sparsification of deep feedforward networks and appeared in recent months as we were developing our approach. Similarly to @cite_30 , the work @cite_27 uses sparsification to reduce the number of parameters in the memory-bound scenario. Its goal is thus to save memory rather than to attain acceleration. In the report of @cite_34 , the output of the convolution is computed at a sparsified set of locations, with the gaps being filled by interpolation. This approach does not sparsify the convolutional kernel and is therefore different from the group-wise brain damage approach we suggest here.
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_41", "@cite_27", "@cite_31", "@cite_34" ], "mid": [ "2114766824", "", "", "1570197553", "2133257461", "2949234772" ], "abstract": [ "We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.", "", "", "In this work, we investigate the use of sparsity-inducing regularizers during training of Convolution Neural Networks (CNNs). These regularizers encourage that fewer connections in the convolution and fully connected layers take non-zero values and in effect result in sparse connectivity between hidden units in the deep network. This in turn reduces the memory and runtime cost involved in deploying the learned CNNs. We show that training with such regularization can still be performed using stochastic gradient descent implying that it can be used easily in existing codebases. Experimental evaluation of our approach on MNIST, CIFAR, and ImageNet datasets shows that our regularizers can result in dramatic reductions in memory requirements. For instance, when applied on AlexNet, our method can reduce the memory consumption by a factor of four with minimal loss in accuracy.", "Motivated in part by the hierarchical organization of the cortex, a number of algorithms have recently been proposed that try to learn hierarchical, or \"deep,\" structure from unlabeled data. 
While several authors have formally or informally compared their algorithms to computations performed in visual area V1 (and the cochlea), little attempt has been made thus far to evaluate these algorithms in terms of their fidelity for mimicking computations at deeper levels in the cortical hierarchy. This paper presents an unsupervised learning model that faithfully mimics certain properties of visual area V2. Specifically, we develop a sparse variant of the deep belief networks of (2006). We learn two layers of nodes in the network, and demonstrate that the first layer, similar to prior work on sparse coding and ICA, results in localized, oriented, edge filters, similar to the Gabor functions known to model V1 cell receptive fields. Further, the second layer in our model encodes correlations of the first layer responses in the data. Specifically, it picks up both colinear (\"contour\") features as well as corners and junctions. More interestingly, in a quantitative comparison, the encoding of these more complex \"corner\" features matches well with the results from the Ito & Komatsu's study of biological V2 responses. This suggests that our sparse variant of deep belief networks holds promise for modeling more higher-order features.", "We propose a novel approach to reduce the computational cost of evaluation of convolutional neural networks, a factor that has hindered their deployment in low-power devices such as mobile phones. Inspired by the loop perforation technique from source code optimization, we speed up the bottleneck convolutional layers by skipping their evaluation in some of the spatial positions. We propose and analyze several strategies of choosing these positions. We demonstrate that perforation can accelerate modern convolutional networks such as AlexNet and VGG-16 by a factor of 2x - 4x. Additionally, we show that perforation is complementary to the recently proposed acceleration method of" ] }
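The payoff of group-wise (rather than unstructured) sparsification is that zeroed groups correspond to whole zero columns of the lowered filter matrix, so convolution-as-matmul can simply skip the matching rows of the patch matrix and multiply thinned dense matrices. A toy sketch of this reduction (our own illustration of the principle, not the paper's implementation):

```python
def matmul_thinned(filters, patches):
    """Multiply a filter matrix (F×D) by a patch matrix (D×N), skipping
    every dimension d whose filter column is entirely zero. After group-wise
    pruning such columns appear in bulk, so the product reduces to a smaller
    dense matrix multiplication -- the source of the speedup described above."""
    D = len(patches)
    keep = [d for d in range(D) if any(row[d] != 0 for row in filters)]
    return [[sum(row[d] * patches[d][n] for d in keep)
             for n in range(len(patches[0]))]
            for row in filters]
```

Because the zero pattern is shared by all filters (one group per column), the surviving rows and columns stay contiguous after compaction, unlike with unstructured per-weight sparsity, which would leave scattered zeros that dense matrix kernels cannot exploit.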
1506.02515
2949273893
We revisit the idea of brain damage, i.e. the pruning of the coefficients of a neural network, and suggest how brain damage can be modified and used to speedup convolutional layers. The approach uses the fact that many efficient implementations reduce generalized convolutions to matrix multiplications. The suggested brain damage process prunes the convolutional kernel tensor in a group-wise fashion by adding group-sparsity regularization to the standard training process. After such group-wise pruning, convolutions can be reduced to multiplications of thinned dense matrices, which leads to speedup. In the comparison on AlexNet, the method achieves very competitive performance.
Our work focuses on speeding up convolutional layers (as they represent the speed bottleneck) and is therefore complementary to approaches that focus on reducing the size and memory footprint of fully-connected layers @cite_14 @cite_29 @cite_8 @cite_22 @cite_2 .
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_8", "@cite_29", "@cite_2" ], "mid": [ "2952432176", "", "2952689122", "1841592590", "2294543795" ], "abstract": [ "As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to randomly group connection weights into hash buckets, and all connections within the same hash bucket share a single parameter value. These parameters are tuned to adjust to the HashedNets weight sharing architecture with standard backprop during training. Our hashing procedure introduces no additional memory overhead, and we demonstrate on several benchmark data sets that HashedNets shrink the storage requirements of neural networks substantially while mostly preserving generalization performance.", "", "Deep neural networks currently demonstrate state-of-the-art performance in several domains. At the same time, models of this class are very demanding in terms of computational resources. In particular, a large amount of memory is required by commonly used fully-connected layers, making it hard to use the models on low-end devices and stopping the further increase of the model size. In this paper we convert the dense weight matrices of the fully-connected layers to the Tensor Train format such that the number of parameters is reduced by a huge factor and at the same time the expressive power of the layer is preserved. 
In particular, for the Very Deep VGG networks we report the compression factor of the dense weight matrix of a fully-connected layer up to 200000 times leading to the compression factor of the whole network up to 7 times.", "Training of large-scale deep neural networks is often constrained by the available computational resources. We study the effect of limited precision data representation and computation on neural network training. Within the context of low-precision fixed-point computations, we observe the rounding scheme to play a crucial role in determining the network's behavior during training. Our results show that deep networks can be trained using only 16-bit wide fixed-point number representation when using stochastic rounding, and incur little to no degradation in the classification accuracy. We also demonstrate an energy-efficient hardware accelerator that implements low-precision fixed-point arithmetic with stochastic rounding.", "Recently proposed deep neural network (DNN) obtains significant accuracy improvements in many large vocabulary continuous speech recognition (LVCSR) tasks. However, DNN requires much more parameters than traditional systems, which brings huge cost during online evaluation, and also limits the application of DNN in a lot of scenarios. In this paper we present our new effort on DNN aiming at reducing the model size while keeping the accuracy improvements. We apply singular value decomposition (SVD) on the weight matrices in DNN, and then restructure the model based on the inherent sparseness of the original matrices. After restructuring we can reduce the DNN model size significantly with negligible accuracy loss. We also fine-tune the restructured model using the regular back-propagation method to get the accuracy back when reducing the DNN model size heavily. The proposed method has been evaluated on two LVCSR tasks, with context-dependent DNN hidden Markov model (CD-DNN-HMM). 
Experimental results show that the proposed approach dramatically reduces the DNN model size by more than 80 without losing any accuracy. Index Terms: deep neural network, singular value decomposition, model restructuring" ] }
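The group-wise brain damage idea described above (group-sparsity regularization during training, followed by pruning of whole kernel groups so that the unrolled convolution stays a dense matrix multiplication) can be sketched in a few lines of NumPy. This is an illustrative sketch under assumed conventions (groups are taken to be the input channels of a conv kernel, and the threshold is arbitrary), not the paper's implementation:

```python
import numpy as np

def group_l2_norms(kernel):
    """L2 norm of each input-channel group of a conv kernel.

    kernel: (out_ch, in_ch, kh, kw) tensor. One group per input channel,
    so pruning a group removes a whole column block of the unrolled
    (im2col) weight matrix, leaving a thinner but still dense matrix.
    """
    return np.sqrt((kernel ** 2).sum(axis=(0, 2, 3)))

def group_sparsity_penalty(kernel, lam=1e-3):
    """Group-lasso regularizer added to the loss: lambda * sum of group norms."""
    return lam * group_l2_norms(kernel).sum()

def prune_groups(kernel, threshold):
    """Zero out input-channel groups whose norm falls below the threshold."""
    norms = group_l2_norms(kernel)
    keep = norms >= threshold
    pruned = kernel * keep[None, :, None, None]
    return pruned, keep

rng = np.random.default_rng(0)
k = rng.normal(size=(8, 4, 3, 3))
k[:, 1] *= 1e-4                      # simulate a group driven to zero by training
penalty = group_sparsity_penalty(k)  # would be added to the training loss
pruned, keep = prune_groups(k, threshold=0.1)
print(keep)                          # group 1 is pruned, the rest are kept
```

In a real training loop the penalty term would be minimized jointly with the task loss; here it is only evaluated to show where it plugs in.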
1506.02312
2261683202
Behavior Trees are commonly used to model agents for robotics and games, where constrained behaviors must be designed by human experts in order to guarantee that these agents will execute a specific chain of actions given a specific set of perceptions. In such application areas, learning is a desirable feature to provide agents with the ability to adapt and improve interactions with humans and environment, but often discarded due to its unreliability. In this paper, we propose a framework that uses Reinforcement Learning nodes as part of Behavior Trees to address the problem of adding learning capabilities in constrained agents. We show how this framework relates to Options in Hierarchical Reinforcement Learning, ensuring convergence of nested learning nodes, and we empirically show that the learning nodes do not affect the execution of other nodes in the tree.
Behavior Trees were created as an alternative to Hierarchical Finite State Machines (HFSMs) and similar methods, aiming to provide a more flexible controller for Non-Playable Characters (NPCs) in video games @cite_2 . The method was quickly adopted by the game industry and has recently been applied to robotics @cite_1 @cite_11 @cite_7 @cite_10 @cite_6 , where BTs received a more formal and standardized definition.
{ "cite_N": [ "@cite_7", "@cite_1", "@cite_6", "@cite_2", "@cite_10", "@cite_11" ], "mid": [ "2115409846", "1982511065", "1504354166", "", "1994005705", "2171246729" ], "abstract": [ "This paper presents a mathematical framework for performance analysis of Behavior Trees (BTs). BTs are a recent alternative to Finite State Machines (FSMs), for doing modular task switching in robot control architectures. By encoding the switching logic in a tree structure, instead of distributing it in the states of a FSM, modularity and reusability are improved.In this paper, we compute performance measures, such as success failure probabilities and execution times, for plans encoded and executed by BTs. To do this, we first introduce Stochastic Behavior Trees (SBT), where we assume that the probabilistic performance measures of the basic action controllers are given. We then show how Discrete Time Markov Chains (DTMC) can be used to aggregate these measures from one level of the tree to the next. The recursive structure of the tree then enables us to step by step propagate such estimates from the leaves (basic action controllers) to the root (complete task execution). Finally, we verify our analytical results using massive Monte Carlo simulations, and provide an illustrative example of the results for a complex robotic task.", "In this paper, we argue that the modularity, reusability and complexity of Unmanned Aerial Vehicle (UAV) guidance and control systems might be improved by using a Behavior Tree (BT) architecture. BTs are a particular kind of Hybrid Dynamical Systems (HDS), where the state transitions of the HDS are implicitly encoded in a tree structure, instead of explicitly stated in transition maps. In the gaming industry, BTs have gained a lot of interest, and are now replacing HDS in the control architecture of many automated in-game opponents. Below, we explore the relationship between HDS and BTs. 
We show that any HDS can be written as a BT and that many common UAV control constructs are quite naturally formulated as BTs. Finally, we discuss the positive implications of making the above mentioned state transitions implicit in the BTs.", "Multi-robot teams offer possibilities of improved performance and fault tolerance, compared to single robot solutions. In this paper, we show how to realize those possibilities when starting from a single robot system controlled by a Behavior Tree (BT). By extending the single robot BT to a multi-robot BT, we are able to combine the fault tolerant properties of the BT, in terms of built-in fallbacks, with the fault tolerance inherent in multi-robot approaches, in terms of a faulty robot being replaced by another one. Furthermore, we improve performance by identifying and taking advantage of the opportunities of parallel task execution, that are present in the single robot BT. Analyzing the proposed approach, we present results regarding how mission performance is affected by minor faults (a robot losing one capability) as well as major faults (a robot losing all its capabilities). Finally, a detailed example is provided to illustrate the approach.", "", "Behavior Trees (BTs) have become a popular framework for designing controllers of in-game opponents in the computer gaming industry. In this paper, we formalize and analyze the reasons behind the success of the BTs using standard tools of robot control theory, focusing on how properties such as robustness and safety are addressed in a modular way. In particular, we show how these key properties can be traced back to the ideas of subsumption and sequential compositions of robot behaviors. Thus BTs can be seen as a recent addition to a long research effort towards increasing modularity, robustness and safety of robot control software. 
To illustrate the use of BTs, we provide a set of solutions to example problems.", "This paper presents a unified framework for Behavior Trees (BTs), a plan representation and execution tool. The available literature lacks the consistency and mathematical rigor required for roboti ..." ] }
1506.02312
2261683202
Behavior Trees are commonly used to model agents for robotics and games, where constrained behaviors must be designed by human experts in order to guarantee that these agents will execute a specific chain of actions given a specific set of perceptions. In such application areas, learning is a desirable feature to provide agents with the ability to adapt and improve interactions with humans and environment, but often discarded due to its unreliability. In this paper, we propose a framework that uses Reinforcement Learning nodes as part of Behavior Trees to address the problem of adding learning capabilities in constrained agents. We show how this framework relates to Options in Hierarchical Reinforcement Learning, ensuring convergence of nested learning nodes, and we empirically show that the learning nodes do not affect the execution of other nodes in the tree.
We show that our framework is closely related to the Options framework @cite_0 , but it also has similarities to other models of Hierarchical Reinforcement Learning, such as the Hierarchies of Abstract Machines @cite_8 and the MAXQ model @cite_5 . In general, authors in the Hierarchical Reinforcement Learning area see the manual division of behaviors as a problem to be dealt with, whereas we make it an intrinsic part of our approach, i.e., the manual definition of behaviors is viewed as a means to exploit prior and expert knowledge of the problem.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_8" ], "mid": [ "2109910161", "2121517924", "" ], "abstract": [ "Learning, planning, and representing knowledge at multiple levels of temporal ab- straction are key, longstanding challenges for AI. In this paper we consider how these challenges can be addressed within the mathematical framework of reinforce- ment learning and Markov decision processes (MDPs). We extend the usual notion of action in this framework to include options—closed-loop policies for taking ac- tion over a period of time. Examples of options include picking up an object, going to lunch, and traveling to a distant city, as well as primitive actions such as mus- cle twitches and joint torques. Overall, we show that options enable temporally abstract knowledge and action to be included in the reinforcement learning frame- work in a natural and general way. In particular, we show that options may be used interchangeably with primitive actions in planning methods such as dynamic pro- gramming and in learning methods such as Q-learning. Formally, a set of options defined over an MDP constitutes a semi-Markov decision process (SMDP), and the theory of SMDPs provides the foundation for the theory of options. However, the most interesting issues concern the interplay between the underlying MDP and the SMDP and are thus beyond SMDP theory. We present results for three such cases: 1) we show that the results of planning with options can be used during execution to interrupt options and thereby perform even better than planned, 2) we introduce new intra-option methods that are able to learn about an option from fragments of its execution, and 3) we propose a notion of subgoal that can be used to improve the options themselves. All of these results have precursors in the existing literature; the contribution of this paper is to establish them in a simpler and more general setting with fewer changes to the existing reinforcement learning framework. 
In particular, we show that these results can be obtained without committing to (or ruling out) any particular approach to state abstraction, hierarchy, function approximation, or the macro-utility problem.", "This paper presents a new approach to hierarchical reinforcement learning based on decomposing the target Markov decision process (MDP) into a hierarchy of smaller MDPs and decomposing the value function of the target MDP into an additive combination of the value functions of the smaller MDPs. The decomposition, known as the MAXQ decomposition, has both a procedural semantics--as a subroutine hierarchy--and a declarative semantics--as a representation of the value function of a hierarchical policy. MAXQ unifies and extends previous work on hierarchical reinforcement learning by Singh, Kaelbling, and Dayan and Hinton. It is based on the assumption that the programmer can identify useful subgoals and define subtasks that achieve these subgoals. By defining such subgoals, the programmer constrains the set of policies that need to be considered during reinforcement learning. The MAXQ value function decomposition can represent the value function of any policy that is consistent with the given hierarchy. The decomposition also creates opportunities to exploit state abstractions, so that individual MDPs within the hierarchy can ignore large parts of the state space. This is important for the practical application of the method. This paper defines the MAXQ hierarchy, proves formal results on its representational power, and establishes five conditions for the safe use of state abstractions. The paper presents an online model-free learning algorithm, MAXQ-Q, and proves that it converges with probability 1 to a kind of locally-optimal policy known as a recursively optimal policy, even in the presence of the five kinds of state abstraction. 
The paper evaluates the MAXQ representation and MAXQ-Q through a series of experiments in three domains and shows experimentally that MAXQ-Q (with state abstractions) converges to a recursively optimal policy much faster than flat Q learning. The fact that MAXQ learns a representation of the value function has an important benefit: it makes it possible to compute and execute an improved, non-hierarchical policy via a procedure similar to the policy improvement step of policy iteration. The paper demonstrates the effectiveness of this nonhierarchical execution experimentally. Finally, the paper concludes with a comparison to related work and a discussion of the design tradeoffs in hierarchical reinforcement learning.", "" ] }
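The Options framework that the passage relates the learning nodes to can be illustrated with a toy sketch: an option is a sub-policy plus a termination condition, executed as one temporally extended step that returns the discounted reward and cumulative discount an SMDP learner would use. The chain MDP and all names below are illustrative assumptions, not the paper's setup:

```python
# Toy sketch of an "option" (temporally extended action): a sub-policy
# with a termination condition, executed as a single macro-step in the
# surrounding decision process.

class Option:
    def __init__(self, policy, terminate):
        self.policy = policy          # state -> primitive action
        self.terminate = terminate    # state -> bool

def run_option(option, state, step, gamma=0.9):
    """Execute the option to termination; return (next_state, discounted
    reward, cumulative discount), the quantities an SMDP learner needs."""
    total, discount = 0.0, 1.0
    while not option.terminate(state):
        action = option.policy(state)
        state, reward = step(state, action)
        total += discount * reward
        discount *= gamma
    return state, total, discount

# Chain MDP: states 0..5, action +1 moves right, reward 1 on reaching 5.
def step(state, action):
    nxt = min(state + action, 5)
    return nxt, 1.0 if nxt == 5 else 0.0

go_right = Option(policy=lambda s: 1, terminate=lambda s: s == 5)
s, r, d = run_option(go_right, 0, step)
print(s, round(r, 4))  # 5 0.6561 -- the goal reward discounted by gamma**4
```

A BT learning node behaves analogously: the surrounding tree treats its whole execution as one tick, just as the SMDP treats the option's execution as one transition.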
1506.02400
575684422
Accurate color reproduction is important in many applications of 3D printing, from design prototypes to 3D color copies or portraits. Although full color is available via other technologies, multi-jet printers have greater potential for graphical 3D printing, in terms of reproducing complex appearance properties. However, to date these printers cannot produce full color, and doing so poses substantial technical challenges, from the sheer amount of data to the translucency of the available color materials. In this article, we propose an error diffusion halftoning approach to achieve full color with multi-jet printers, which operates on multiple isosurfaces or layers within the object. We propose a novel traversal algorithm for voxel surfaces, which allows the transfer of existing error diffusion algorithms from 2D printing. The resulting prints faithfully reproduce colors, color gradients and fine-scale details.
Appearance reproduction and multi-material fabrication. Spec2Fab @cite_7 is a general reducer-tuner framework for specification-driven digital fabrication, which allows textures to be replicated on a 3D print. They use an error diffusion optimization of material layerings, effectively a contone representation, with a uniform error filter (error is pushed equally to all neighbors). Although an important first step in texture mapping for multi-material printers, it does not allow for anisotropic error diffusion filters, and the iterative optimization prohibits a streaming architecture. OpenFab @cite_0 is a programmable fabrication pipeline for 3D printers, which uses in-slice 3D error diffusion dithering @cite_33 . The authors observe that by dithering in 3D they could avoid streaks. Our approach treats the color signal where it is defined--on the surface--by mapping 2D filters into the tangent space of the surface. As discussed below, this allows us to colorimetrically characterize the 3D printer in a geometry-independent way.
{ "cite_N": [ "@cite_0", "@cite_33", "@cite_7" ], "mid": [ "2040005668", "", "1985025469" ], "abstract": [ "3D printing hardware is rapidly scaling up to output continuous mixtures of multiple materials at increasing resolution over ever larger print volumes. This poses an enormous computational challenge: large high-resolution prints comprise trillions of voxels and petabytes of data and simply modeling and describing the input with spatially varying material mixtures at this scale is challenging. Existing 3D printing software is insufficient; in particular, most software is designed to support only a few million primitives, with discrete material choices per object. We present OpenFab, a programmable pipeline for synthesis of multi-material 3D printed objects that is inspired by RenderMan and modern GPU pipelines. The pipeline supports procedural evaluation of geometric detail and material composition, using shader-like fablets, allowing models to be specified easily and efficiently. We describe a streaming architecture for OpenFab; only a small fraction of the final volume is stored in memory and output is fed to the printer with little startup delay. We demonstrate it on a variety of multi-material objects.", "", "Multi-material 3D printing allows objects to be composed of complex, heterogenous arrangements of materials. It is often more natural to define a functional goal than to define the material composition of an object. Translating these functional requirements to fabri-cable 3D prints is still an open research problem. Recently, several specific instances of this problem have been explored (e.g., appearance or elastic deformation), but they exist as isolated, monolithic algorithms. In this paper, we propose an abstraction mechanism that simplifies the design, development, implementation, and reuse of these algorithms. 
Our solution relies on two new data structures: a reducer tree that efficiently parameterizes the space of material assignments and a tuner network that describes the optimization process used to compute material arrangement. We provide an application programming interface for specifying the desired object and for defining parameters for the reducer tree and tuner network. We illustrate the utility of our framework by implementing several fabrication algorithms as well as demonstrating the manufactured results." ] }
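The uniform error filter attributed to Spec2Fab above (quantization error pushed equally to all neighbors) can be illustrated in 1D, where "equally to all neighbors" degenerates to passing the full error to the single unvisited neighbor. This is a minimal sketch of that filtering idea, not Spec2Fab's actual reducer-tuner optimizer:

```python
import numpy as np

def diffuse_uniform(contone):
    """1D error diffusion with a uniform filter: quantize each material
    ratio to 0 or 1 and push the quantization error to the unvisited
    neighbor (in 2D it would be split equally among unvisited neighbors).
    """
    values = contone.astype(float).copy()
    out = np.zeros(len(values), dtype=int)
    for i in range(len(values)):
        out[i] = 1 if values[i] >= 0.5 else 0
        error = values[i] - out[i]
        if i + 1 < len(values):
            values[i + 1] += error   # uniform filter: all error to the neighbor
    return out

contone = np.full(8, 0.25)           # target: 25% of voxels as material A
halftone = diffuse_uniform(contone)
print(halftone, halftone.mean())     # exactly one in four voxels set
```

Because the error is conserved along the traversal, the discrete output preserves the requested material ratio on average, which is the property the optimization above relies on.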
1506.02400
575684422
Accurate color reproduction is important in many applications of 3D printing, from design prototypes to 3D color copies or portraits. Although full color is available via other technologies, multi-jet printers have greater potential for graphical 3D printing, in terms of reproducing complex appearance properties. However, to date these printers cannot produce full color, and doing so poses substantial technical challenges, from the sheer amount of data to the translucency of the available color materials. In this article, we propose an error diffusion halftoning approach to achieve full color with multi-jet printers, which operates on multiple isosurfaces or layers within the object. We propose a novel traversal algorithm for voxel surfaces, which allows the transfer of existing error diffusion algorithms from 2D printing. The resulting prints faithfully reproduce colors, color gradients and fine-scale details.
Multi-material 3D printing has been used to reproduce specified subsurface scattering properties @cite_38 @cite_19 . Fabrication of directional BRDFs for planar or near-planar surfaces has been done using multi-material printing @cite_43 and photolithography @cite_47 .
{ "cite_N": [ "@cite_19", "@cite_38", "@cite_47", "@cite_43" ], "mid": [ "2132713314", "2088255295", "2056267614", "" ], "abstract": [ "Many real world surfaces exhibit translucent appearance due to subsurface scattering. Although various methods exists to measure, edit and render subsurface scattering effects, no solution exists for manufacturing physical objects with desired translucent appearance. In this paper, we present a complete solution for fabricating a material volume with a desired surface BSSRDF. We stack layers from a fixed set of manufacturing materials whose thickness is varied spatially to reproduce the heterogeneity of the input BSSRDF. Given an input BSSRDF and the optical properties of the manufacturing materials, our system efficiently determines the optimal order and thickness of the layers. We demonstrate our approach by printing a variety of homogenous and heterogenous BSSRDFs using two hardware setups: a milling machine and a 3D printer.", "We investigate a complete pipeline for measuring, modeling, and fabricating objects with specified subsurface scattering behaviors. The process starts with measuring the scattering properties of a given set of base materials, determining their radial reflection and transmission profiles. We describe a mathematical model that predicts the profiles of different stackings of base materials, at arbitrary thicknesses. In an inverse process, we can then specify a desired reflection profile and compute a layered composite material that best approximates it. Our algorithm efficiently searches the space of possible combinations of base materials, pruning unsatisfactory states imposed by physical constraints. We validate our process by producing both homogeneous and heterogeneous composites fabricated using a multi-material 3D printer. 
We demonstrate reproductions that have scattering properties approximating complex materials.", "Recent attempts to fabricate surfaces with custom reflectance functions boast impressive angular resolution, yet their spatial resolution is limited. In this paper we present a method to construct spatially varying reflectance at a high resolution of up to 220dpi, orders of magnitude greater than previous attempts, albeit with a lower angular resolution. The resolution of previous approaches is limited by the machining, but more fundamentally, by the geometric optics model on which they are built. Beyond a certain scale geometric optics models break down and wave effects must be taken into account. We present an analysis of incoherent reflectance based on wave optics and gain important insights into reflectance design. We further suggest and demonstrate a practical method, which takes into account the limitations of existing micro-fabrication techniques such as photolithography to design and fabricate a range of reflection effects, based on wave interference.", "" ] }
1506.02400
575684422
Accurate color reproduction is important in many applications of 3D printing, from design prototypes to 3D color copies or portraits. Although full color is available via other technologies, multi-jet printers have greater potential for graphical 3D printing, in terms of reproducing complex appearance properties. However, to date these printers cannot produce full color, and doing so poses substantial technical challenges, from the sheer amount of data to the translucency of the available color materials. In this article, we propose an error diffusion halftoning approach to achieve full color with multi-jet printers, which operates on multiple isosurfaces or layers within the object. We propose a novel traversal algorithm for voxel surfaces, which allows the transfer of existing error diffusion algorithms from 2D printing. The resulting prints faithfully reproduce colors, color gradients and fine-scale details.
Tone reproduction in FDM prints. Some recent work has focused on the challenging task of improving tone reproduction in fused deposition modeling (FDM) printing. Hergel and Lefebvre @cite_36 optimize seam placement in multi-filament FDM prints to hide or reduce artifacts from changing filaments. @cite_14 perform a type of halftoning for FDM printers while maintaining long filament paths. Switching filament heads not only creates artifacts but also increases print time. Both of these methods are specific to FDM printers.
{ "cite_N": [ "@cite_36", "@cite_14" ], "mid": [ "2169539576", "2165285514" ], "abstract": [ "Fused Filament Fabrication is an additive manufacturing process by which a 3D object is created from plastic filament. The filament is pushed through a hot nozzle where it melts. The nozzle deposits plastic layer after layer to create the final object. This process has been popularized by the RepRap community.", "In this work we detail a method that leverages the two color heads of recent low-end fused deposition modeling FDM 3D printers to produce continuous tone imagery. The challenge behind producing such two-tone imagery is how to finely interleave the two colors while minimizing the switching between print heads, making each color printed span as long and continuous as possible to avoid artifacts associated with printing short segments. The key insight behind our work is that by applying small geometric offsets, tone can be varied without the need to switch color print heads within a single layer. We can now effectively print two-tone texture mapped models capturing both geometric and color information in our output 3D prints." ] }
1506.02400
575684422
Accurate color reproduction is important in many applications of 3D printing, from design prototypes to 3D color copies or portraits. Although full color is available via other technologies, multi-jet printers have greater potential for graphical 3D printing, in terms of reproducing complex appearance properties. However, to date these printers cannot produce full color, and doing so poses substantial technical challenges, from the sheer amount of data to the translucency of the available color materials. In this article, we propose an error diffusion halftoning approach to achieve full color with multi-jet printers, which operates on multiple isosurfaces or layers within the object. We propose a novel traversal algorithm for voxel surfaces, which allows the transfer of existing error diffusion algorithms from 2D printing. The resulting prints faithfully reproduce colors, color gradients and fine-scale details.
3D Halftoning. 3D halftoning has been applied to material composition using 3D error diffusion filters @cite_49 @cite_29 and 3D dispersed-dot dithering @cite_40 . For color and appearance reproduction, 3D error diffusion is not appropriate because material assignments closer to the surface have a greater influence on the appearance of the object than material assignments deeper within the object. Thus, a 3D error diffusion filter would have to adjust its orientation during traversal to account for this and maintain a consistent orientation with respect to the surface. An isotropic filter would produce artifacts similar to those observed with isotropic filters in 2D error diffusion. In contrast, our approach of halftoning on multiple offset surfaces within the object, in addition to the surface itself, results in a halftone that inherently accounts for the geometry of the surface. The relative influence of voxels at different depths from the surface is calibrated in an offline process and built into an International Color Consortium (ICC) profile. Such an offline color calibration process would be very challenging for a 3D filter, because it would require calibrating every possible surface orientation.
{ "cite_N": [ "@cite_40", "@cite_29", "@cite_49" ], "mid": [ "1970597525", "2135129171", "1836705036" ], "abstract": [ "A dithering algorithm is presented for application to local composition control (LCC) with three-dimensional printing (3D printing) to convert continuous-tone representation of objects with LCC into discrete (pointwise) version of machine instructions. The algorithm presented effectively reduces undesirable low frequency textures of composition for individual 3D layers and also for 3D volumes. Peculiarities of the 3D printing machine, including anisotropic geometry of its picture elements (PELs) and uncertainties in droplet placement, are addressed by adapting a standard digital halftoning algorithm. Without loss of generality, our algorithm also accounts for technical limitations in the printing device, only generating lattices that can be represented within the finite memory limits of the hardware.", "We present a bitmap printing method and digital workflow using multi-material high resolution Additive Manufacturing (AM). Material composition is defined based on voxel resolution and used to fabricate?a design object?with locally varying material stiffness, aiming to?satisfy the design objective. In this workflow voxel resolution is set by the printer's native resolution, eliminating the need for slicing and path planning. Controlling geometry and material property variation at the resolution of the printer provides significantly greater control over structure-property-function relationships. To demonstrate the utility of the bitmap printing approach we apply it to the design of a?customized prosthetic socket. Pressure-sensing elements are concurrently fabricated with the socket, providing possibilities for evaluation of the socket's fit. 
The level of control demonstrated in this study?cannot be achieved using traditional CAD tools and volume-based AM workflows, implying that new CAD workflows must be developed in order to enable designers to harvest the capabilities of AM. Bitmap printing workflow enables digital fabrication in printer's native resolution.Voxel-based design and representation of objects for multi-material printing.Using 3D printed light guides, deformation of materials can be sensed.", "3D halftoning is a new technique that allows the approximation of digital volumetric objects of varying material density e.g. porous media for example, by an ensemble of binary material volume elements called vels. In theory, 3D halftoning is basically an extension of the well known 2D halftoning process, as widely used in binary printing applications. In practice, however, the development of 3D halftoning algorithms is strongly related to hardware specific boundary conditions, such as particular characteristics of additive volumetric object manufacturing procedures. This paper addresses theoretical as well as practical aspects of 3D halftoning that allow the rendition of digital volumetric objects of varying density using the stereolithographic additive fabrication technique. An ultimate application of 3D halftoning is the reproduction of volumetric objects in medicine that consist of a mixture of bone, cartilage and soft-tissues, for example." ] }
1506.02400
575684422
Accurate color reproduction is important in many applications of 3D printing, from design prototypes to 3D color copies or portraits. Although full color is available via other technologies, multi-jet printers have greater potential for graphical 3D printing, in terms of reproducing complex appearance properties. However, to date these printers cannot produce full color, and doing so poses substantial technical challenges, from the sheer amount of data to the translucency of the available color materials. In this article, we propose an error diffusion halftoning approach to achieve full color with multi-jet printers, which operates on multiple isosurfaces or layers within the object. We propose a novel traversal algorithm for voxel surfaces, which allows the transfer of existing error diffusion algorithms from 2D printing. The resulting prints faithfully reproduce colors, color gradients and fine-scale details.
2D Halftoning. Generations of researchers in the field of 2D halftoning have focused on methodologies for optimally arranging printed dots to maximize print quality (by preserving tone and structure and shifting quantization errors to the highest spatial frequencies possible - see Section ) subject to the technical limitations of printing systems (e.g. the ability to accurately deposit isolated dots) @cite_8 .
{ "cite_N": [ "@cite_8" ], "mid": [ "2919099581" ], "abstract": [ "Introduction AM Digital Halftoning FM Digital Halftoning AM-FM Hybrids AM Halftoning Dot Shape Screen Angles and Moire Screen Frequency Supercells Zero-Angle Dither Arrays Stochastic Halftone Analysis Point Processes Spatial Statistics Spectral Statistics Color Halftoning Halftone Visibility Campbell's CSF Model Nasanen (Exponential) Model Mixed Gaussian Models Alpha Stable HVS Models Blue-Noise Dithering Spatial and Spectral Characteristics Error Diffusion Blue-Noise Dither Arrays Simulated Annealing Void and Cluster BIPPSMA Dither Pattern Ordering Direct Binary Search Halftoning by DBS Efficient Implementation of DBS Effect of HVS model Hexagonal Grid Halftoning Spectral Aliasing Modified Blue-Noise Model Hexagonal Sampling Grids Printers: Distortions and Models Printer Distortion Dot Models Corrective Measures Green-Noise Dithering Spatial and Spectral Characteristics EDODF Green-Noise Masks BIPPCCA Optimal Green-Noise Masks Color Printing Generalized Error Diffusion Multichannel Green-Noise Masks Stochastic Moire Spatial Analysis of Periodic Moire Spatial Analysis of Aperiodic Moire Spectral Analysis of Aperiodic Moire Minimizing Stochastic Moire Stochastic Moire and Green-Noise Multitone Dithering Spectral Statistics of Multitones Multitone Blue-Noise Model Blue-Noise Multitoning Optimization Lenticular Halftoning Model Based Error Diffusion Iterative Tone Correction Conclusions Bibliography List of Figures Index" ] }
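The error-diffusion principle discussed above can be illustrated with a minimal 2D Floyd-Steinberg sketch (not the paper's 3D pipeline; function name and thresholds are illustrative):

```python
import numpy as np

def floyd_steinberg(gray):
    """Binarize a grayscale image (values in [0, 1]) by error diffusion:
    each pixel's quantization error is pushed to its unprocessed
    right/lower neighbors, shifting error energy toward high spatial
    frequencies while preserving local tone."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            # Classic Floyd-Steinberg weights: 7/16, 3/16, 5/16, 1/16
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

On a constant mid-gray input this produces a binary pattern whose mean stays close to 0.5, which is the tone-preservation property the paragraph refers to.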
1506.02400
575684422
Accurate color reproduction is important in many applications of 3D printing, from design prototypes to 3D color copies or portraits. Although full color is available via other technologies, multi-jet printers have greater potential for graphical 3D printing, in terms of reproducing complex appearance properties. However, to date these printers cannot produce full color, and doing so poses substantial technical challenges, from the sheer amount of data to the translucency of the available color materials. In this article, we propose an error diffusion halftoning approach to achieve full color with multi-jet printers, which operates on multiple isosurfaces or layers within the object. We propose a novel traversal algorithm for voxel surfaces, which allows the transfer of existing error diffusion algorithms from 2D printing. The resulting prints faithfully reproduce colors, color gradients and fine-scale details.
One category of algorithms, called point processes, allows very fast computation of the halftone screens by thresholding pixels against a precomputed threshold mask that is tiled over the 2D image. Traditional clustered-dot ordered dithering to create amplitude-modulated screens and dispersed-dot ordered dithering @cite_21 fall into this category. The latter technique was adapted to 3D printing @cite_40 , accounting in particular for the dot placement limitations of binder-jetting systems. One drawback of dispersed-dot ordered dithering is that the frequency components given by the screen period are visible, resulting in cross-hatch pattern artifacts. Avoiding such artifacts in point process techniques requires large threshold masks -- e.g. blue-noise masks @cite_15 or green-noise masks @cite_32 -- which are heavily distorted if applied on surface manifolds with non-zero Gaussian curvature.
{ "cite_N": [ "@cite_40", "@cite_21", "@cite_32", "@cite_15" ], "mid": [ "1970597525", "1240370484", "1990631152", "2102493018" ], "abstract": [ "A dithering algorithm is presented for application to local composition control (LCC) with three-dimensional printing (3D printing) to convert continuous-tone representation of objects with LCC into discrete (pointwise) version of machine instructions. The algorithm presented effectively reduces undesirable low frequency textures of composition for individual 3D layers and also for 3D volumes. Peculiarities of the 3D printing machine, including anisotropic geometry of its picture elements (PELs) and uncertainties in droplet placement, are addressed by adapting a standard digital halftoning algorithm. Without loss of generality, our algorithm also accounts for technical limitations in the printing device, only generating lattices that can be represented within the finite memory limits of the hardware.", "", "We introduce a novel technique for generating green-noise halftones—stochastic dither patterns composed of homogeneously distributed pixel clusters. Although techniques employing error diffusion have been proposed previously, the technique here employs a dither array referred to as a green-noise mask, which greatly reduces the computational complexity formerly associated with green noise. Compared with those generated with blue-noise masks, halftones generated with green-noise masks are less susceptible to printer distortions. Because green noise constitutes patterns with widely varying cluster sizes and shapes, the technique introduced here for constructing these green-noise masks is tunable; that is, it allows for specific printer traits, with small clusters reserved for printers with low distortion and large clusters reserved for printers with high distortion. 
Given that blue noise is a limiting case of green noise, this new technique can even create blue-noise masks.", "A novel digital halftoning technique, by which the halftoning is achieved by a pixelwise comparison of the gray-scale image to an array (halftone screen), the blue-noise mask, is presented. This mask is designed so that the halftone image has blue-noise (high-frequency) characteristics in the frequency domain. The algorithm for the construction of the blue-noise mask and an algorithm for the construction of binary patterns with the same first-order but different second-order statistics are presented. Two psychovisual tests in which human subjects rated halftone patterns and images according to various criteria are also described." ] }
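The point-process idea above — thresholding against a tiled, precomputed mask — can be sketched with a recursively built dispersed-dot (Bayer) mask; this is a minimal illustration, not any of the cited systems:

```python
import numpy as np

def bayer_mask(n):
    """Recursively build a 2^n x 2^n dispersed-dot (Bayer) threshold
    mask with values in [0, 1)."""
    m = np.array([[0, 2], [3, 1]], dtype=np.float64)
    for _ in range(n - 1):
        m = np.block([[4 * m,     4 * m + 2],
                      [4 * m + 3, 4 * m + 1]])
    return (m + 0.5) / m.size

def ordered_dither(gray, mask):
    """Point-process halftoning: tile the threshold mask over the image
    and compare pixelwise -- no neighborhood computation is needed,
    which is what makes this class of algorithms so fast."""
    h, w = gray.shape
    mh, mw = mask.shape
    tiled = np.tile(mask, (h // mh + 1, w // mw + 1))[:h, :w]
    return (gray >= tiled).astype(np.uint8)
```

The periodicity of the tiled mask is also what produces the cross-hatch artifacts mentioned above: the screen period shows up directly as visible frequency components.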
1506.02400
575684422
Accurate color reproduction is important in many applications of 3D printing, from design prototypes to 3D color copies or portraits. Although full color is available via other technologies, multi-jet printers have greater potential for graphical 3D printing, in terms of reproducing complex appearance properties. However, to date these printers cannot produce full color, and doing so poses substantial technical challenges, from the sheer amount of data to the translucency of the available color materials. In this article, we propose an error diffusion halftoning approach to achieve full color with multi-jet printers, which operates on multiple isosurfaces or layers within the object. We propose a novel traversal algorithm for voxel surfaces, which allows the transfer of existing error diffusion algorithms from 2D printing. The resulting prints faithfully reproduce colors, color gradients and fine-scale details.
Due to the low computational effort and high quality of 2D error diffusion achieved with small diffusion filters, we decided to adapt it to 3D color printing. One prior work addresses error diffusion on a surface @cite_11 . This approach operates on meshes and traverses the vertices based on the availability of subsequent moves or neighbors to diffuse error to. While this approach could be applied to any graph structure, including voxels, it is not clear that it can be applied in a streaming architecture. To the best of our knowledge, we are the first to consider error diffusion halftoning in the context of both non-Euclidean domains and highly translucent materials. Moreover, we are the first to propose such a technique demonstrated to be applicable in practice to tens of billions of elements.
{ "cite_N": [ "@cite_11" ], "mid": [ "2007290376" ], "abstract": [ "We consider the problem of quantization for surface graphs. In particular, we generalize the process of error diffusion to meshes and then compare different paths on the surface when used for error diffusion. We suggest paths for processing mesh elements that lead to better distributions of available neighbors for error diffusion. We demonstrate the potential benefit of error diffusion at several mesh processing applications: quantization of differential mesh coordinates, including an extension to animated geometry, and vertex subset selection for mesh simplification. These applications allow us to compare different paths objectively. We find that the linear time solution results in excellent overall performance, outperforming other traversals taken from the literature. We conclude that the proposed path can be taken as a starting point for any application of error diffusion on meshes. Graphical abstractDisplay Omitted HighlightsThe problem of quantization for surface graphs is considered.The process of error diffusion generalized to meshes.Paths for processing mesh elements are suggested.The benefit of error diffusion at mesh processing applications is demonstrated." ] }
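The generalization of error diffusion to non-grid domains described above can be sketched over an arbitrary neighbor graph. This is a toy sketch, not the algorithm of @cite_11 or the paper's voxel-surface traversal; the equal-weight error split and function name are illustrative assumptions:

```python
def graph_error_diffusion(values, order, neighbors):
    """Binarize per-node values on a graph (e.g. mesh vertices or
    surface voxels). Nodes are visited in the given traversal order;
    each node's quantization error is split equally among its
    not-yet-visited neighbors. Real systems typically weight
    neighbors, e.g. by distance."""
    vals = dict(values)
    out = {}
    visited = set()
    for node in order:
        v = vals[node]
        q = 1.0 if v >= 0.5 else 0.0
        out[node] = q
        visited.add(node)
        targets = [n for n in neighbors[node] if n not in visited]
        if targets:
            share = (v - q) / len(targets)
            for n in targets:
                vals[n] += share
    return out
```

The sketch makes the traversal problem concrete: when a node has no unvisited neighbors left, its error is simply dropped, which is why the choice of path over the surface matters so much in the cited mesh work.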
1506.02106
2949145768
The semantic image segmentation task presents a trade-off between test time accuracy and training-time annotation cost. Detailed per-pixel annotations enable training accurate models but are very time-consuming to obtain, image-level class labels are an order of magnitude cheaper but result in less accurate models. We take a natural step from image-level annotation towards stronger supervision: we ask annotators to point to an object if one exists. We incorporate this point supervision along with a novel objectness potential in the training loss function of a CNN model. Experimental results on the PASCAL VOC 2012 benchmark reveal that the combined effect of point-level supervision and objectness potential yields an improvement of 12.9 mIOU over image-level supervision. Further, we demonstrate that models trained with point-level supervision are more accurate than models trained with image-level, squiggle-level or full supervision given a fixed annotation budget.
Types of Supervision for Semantic Segmentation To reduce the up-front annotation time for semantic segmentation, recent works have focused on training models in a weakly- or semi-supervised setting. Many forms of supervision have been explored, such as eye tracks @cite_37 , free-form squiggles @cite_18 @cite_9 , noisy web tags @cite_26 , size constraints on objects @cite_23 or heterogeneous annotations @cite_15 . Common settings are image-level labels @cite_13 @cite_11 @cite_14 and bounding boxes @cite_13 @cite_1 . @cite_28 @cite_22 @cite_33 use co-segmentation methods trained from image-level labels to automatically infer the segmentations. @cite_23 @cite_11 @cite_14 train CNNs supervised only with image-level labels by extending the Multiple-Instance Learning (MIL) framework for semantic segmentation. @cite_13 @cite_1 use an EM procedure, which alternates between estimating pixel labels from bounding box annotations and optimizing the parameters of a CNN.
{ "cite_N": [ "@cite_37", "@cite_26", "@cite_18", "@cite_14", "@cite_22", "@cite_33", "@cite_28", "@cite_9", "@cite_1", "@cite_23", "@cite_15", "@cite_13", "@cite_11" ], "mid": [ "67472587", "2050952849", "1927251054", "1945608308", "2003033789", "2086052791", "2117741877", "2337429362", "2949086864", "2952004933", "2949847866", "2221898772", "1931270512" ], "abstract": [ "Training an object class detector typically requires a large set of images annotated with bounding-boxes, which is expensive and time consuming to create. We propose novel approach to annotate object locations which can substantially reduce annotation time. We first track the eye movements of annotators instructed to find the object and then propose a technique for deriving object bounding-boxes from these fixations. To validate our idea, we collected eye tracking data for the trainval part of 10 object classes of Pascal VOC 2012 (6,270 images, 5 observers). Our technique correctly produces bounding-boxes in 50 of the images, while reducing the total annotation time by factor 6.8× compared to drawing bounding-boxes. Any standard object class detector can be trained on the bounding-boxes predicted by our model. Our large scale eye tracking dataset is available at groups.inf.ed.ac.uk calvin eyetrackdataset .", "Interactive object segmentation has great practical importance in computer vision. Many interactive methods have been proposed utilizing user input in the form of mouse clicks and mouse strokes, and often requiring a lot of user intervention. In this paper, we present a system with a far simpler input method: the user needs only give the name of the desired object. With the tag provided by the user we do a text query of an image database to gather exemplars of the object. Using object proposals and borrowing ideas from image retrieval and object detection, the object is localized in the target image. 
An appearance model generated from the exemplars and the location prior are used in an energy minimization framework to select the object. Our method outperforms the state-of-the-art on existing datasets and on a more challenging dataset we collected.", "Despite the promising performance of conventional fully supervised algorithms, semantic segmentation has remained an important, yet challenging task. Due to the limited availability of complete annotations, it is of great interest to design solutions for semantic segmentation that take into account weakly labeled data, which is readily available at a much larger scale. Contrasting the common theme to develop a different algorithm for each type of weak annotation, in this work, we propose a unified approach that incorporates various forms of weak supervision - image level tags, bounding boxes, and partial labels - to produce a pixel-wise labeling. We conduct a rigorous evaluation on the challenging Siftflow dataset for various weakly labeled settings, and show that our approach outperforms the state-of-the-art by 12 on per-class accuracy, while maintaining comparable per-pixel accuracy.", "We are interested in inferring object segmentation by leveraging only object class information, and by considering only minimal priors on the object segmentation task. This problem could be viewed as a kind of weakly supervised segmentation task, and naturally fits the Multiple Instance Learning (MIL) framework: every training image is known to have (or not) at least one pixel corresponding to the image class label, and the segmentation task can be rewritten as inferring the pixels belonging to the class of the object (given one image, and its object class). We propose a Convolutional Neural Network-based model, which is constrained during training to put more weight on pixels which are important for classifying the image. 
We show that at test time, the model has learned to discriminate the right pixels well enough, such that it performs very well on an existing segmentation benchmark, by adding only few smoothing priors. Our system is trained using a subset of the Imagenet dataset and the segmentation experiments are performed on the challenging Pascal VOC dataset (with no fine-tuning of the model on Pascal VOC). Our model beats the state of the art results in weakly supervised object segmentation task by a large margin. We also compare the performance of our model with state of the art fully-supervised segmentation approaches.", "The objective of this paper is the unsupervised segmentation of image training sets into foreground and background in order to improve image classification performance. To this end we introduce a new scalable, alternation-based algorithm for co-segmentation, BiCoS, which is simpler than many of its predecessors, and yet has superior performance on standard benchmark image datasets.", "Purely bottom-up, unsupervised segmentation of a single image into foreground and background regions remains a challenging task for computer vision. Co-segmentation is the problem of simultaneously dividing multiple images into regions (segments) corresponding to different object classes. In this paper, we combine existing tools for bottom-up image segmentation such as normalized cuts, with kernel methods commonly used in object recognition. These two sets of techniques are used within a discriminative clustering framework: the goal is to assign foreground background labels jointly to all images, so that a supervised classifier trained with these labels leads to maximal separation of the two classes. In practice, we obtain a combinatorial optimization problem which is relaxed to a continuous convex optimization problem, that can itself be solved efficiently for up to dozens of images. 
We illustrate the proposed method on images with very similar foreground objects, as well as on more challenging problems with objects with higher intra-class variations.", "ImageNet is a large-scale hierarchical database of object classes with millions of images.We propose to automatically populate it with pixelwise object-background segmentations, by leveraging existing manual annotations in the form of class labels and bounding-boxes. The key idea is to recursively exploit images segmented so far to guide the segmentation of new images. At each stage this propagation process expands into the images which are easiest to segment at that point in time, e.g. by moving to the semantically most related classes to those segmented so far. The propagation of segmentation occurs both (a) at the image level, by transferring existing segmentations to estimate the probability of a pixel to be foreground, and (b) at the class level, by jointly segmenting images of the same class and by importing the appearance models of classes that are already segmented. Through experiments on 577 classes and 500k images we show that our technique (i) annotates a wide range of classes with accurate segmentations; (ii) effectively exploits the hierarchical structure of ImageNet; (iii) scales efficiently, especially when implemented on superpixels; (iv) outperforms a baseline GrabCut ( 2004) initialized on the image center, as well as segmentation transfer from a fixed source pool and run independently on each target image (Kuettel and Ferrari 2012). Moreover, our method also delivers state-of-the-art results on the recent iCoseg dataset for co-segmentation.", "Large-scale data is of crucial importance for learning semantic segmentation models, but annotating per-pixel masks is a tedious and inefficient procedure. 
We note that for the topic of interactive image segmentation, scribbles are very widely used in academic research and commercial software, and are recognized as one of the most userfriendly ways of interacting. In this paper, we propose to use scribbles to annotate images, and develop an algorithm to train convolutional networks for semantic segmentation supervised by scribbles. Our algorithm is based on a graphical model that jointly propagates information from scribbles to unmarked pixels and learns network parameters. We present competitive object semantic segmentation results on the PASCAL VOC dataset by using scribbles as annotations. Scribbles are also favored for annotating stuff (e.g., water, sky, grass) that has no well-defined shape, and our method shows excellent results on the PASCALCONTEXT dataset thanks to extra inexpensive scribble annotations. Our scribble annotations on PASCAL VOC are available at http: research.microsoft.com en-us um people jifdai downloads scribble_sup.", "Recent leading approaches to semantic segmentation rely on deep convolutional networks trained with human-annotated, pixel-level segmentation masks. Such pixel-accurate supervision demands expensive labeling effort and limits the performance of deep networks that usually benefit from more training data. In this paper, we propose a method that achieves competitive accuracy but only requires easily obtained bounding box annotations. The basic idea is to iterate between automatically generating region proposals and training convolutional networks. These two steps gradually recover segmentation masks for improving the networks, and vise versa. Our method, called BoxSup, produces competitive results supervised by boxes only, on par with strong baselines fully supervised by masks under the same setting. 
By leveraging a large amount of bounding boxes, BoxSup further unleashes the power of deep convolutional networks and yields state-of-the-art results on PASCAL VOC 2012 and PASCAL-CONTEXT.", "We present an approach to learn a dense pixel-wise labeling from image-level tags. Each image-level tag imposes constraints on the output labeling of a Convolutional Neural Network (CNN) classifier. We propose Constrained CNN (CCNN), a method which uses a novel loss function to optimize for any set of linear constraints on the output space (i.e. predicted label distribution) of a CNN. Our loss formulation is easy to optimize and can be incorporated directly into standard stochastic gradient descent optimization. The key idea is to phrase the training objective as a biconvex optimization for linear models, which we then relax to nonlinear deep networks. Extensive experiments demonstrate the generality of our new learning framework. The constrained loss yields state-of-the-art results on weakly supervised semantic image segmentation. We further demonstrate that adding slightly more supervision can greatly improve the performance of the learning algorithm.", "We propose a novel deep neural network architecture for semi-supervised semantic segmentation using heterogeneous annotations. Contrary to existing approaches posing semantic segmentation as a single task of region-based classification, our algorithm decouples classification and segmentation, and learns a separate network for each task. In this architecture, labels associated with an image are identified by classification network, and binary segmentation is subsequently performed for each identified label in segmentation network. The decoupled architecture enables us to learn classification and segmentation networks separately based on the training data with image-level and pixel-wise class labels, respectively. 
It facilitates to reduce search space for segmentation effectively by exploiting class-specific activation maps obtained from bridging layers. Our algorithm shows outstanding performance compared to other semi-supervised approaches even with much less training images with strong annotations in PASCAL VOC dataset.", "Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state-of-art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at https: bitbucket.org deeplab deeplab-public.", "Multiple instance learning (MIL) can reduce the need for costly annotation in tasks such as semantic segmentation by weakening the required degree of supervision. We propose a novel MIL formulation of multi-class semantic segmentation learning by a fully convolutional network. In this setting, we seek to learn a semantic segmentation model from just weak image-level labels. The model is trained end-to-end to jointly optimize the representation while disambiguating the pixel-image label assignment. Fully convolutional training accepts inputs of any size, does not need object proposal pre-processing, and offers a pixelwise loss map for selecting latent instances. 
Our multi-class MIL loss exploits the further supervision given by images with multiple labels. We evaluate this approach through preliminary experiments on the PASCAL VOC segmentation challenge." ] }
1506.02106
2949145768
The semantic image segmentation task presents a trade-off between test time accuracy and training-time annotation cost. Detailed per-pixel annotations enable training accurate models but are very time-consuming to obtain, image-level class labels are an order of magnitude cheaper but result in less accurate models. We take a natural step from image-level annotation towards stronger supervision: we ask annotators to point to an object if one exists. We incorporate this point supervision along with a novel objectness potential in the training loss function of a CNN model. Experimental results on the PASCAL VOC 2012 benchmark reveal that the combined effect of point-level supervision and objectness potential yields an improvement of 12.9 mIOU over image-level supervision. Further, we demonstrate that models trained with point-level supervision are more accurate than models trained with image-level, squiggle-level or full supervision given a fixed annotation budget.
There is a trade-off between annotation time and accuracy: models trained with higher levels of supervision perform far better than weakly-supervised models, but require large strongly-supervised datasets, which are costly and scarce. We propose an intermediate form of supervision, using points, which adds negligible additional annotation time to image-level labels, yet achieves better results. @cite_25 also uses point supervision during training, but it trains a patch-level CNN classifier to serve as a unary potential in a CRF, whereas we use point supervision directly during CNN training.
{ "cite_N": [ "@cite_25" ], "mid": [ "2950672966" ], "abstract": [ "Recognizing materials in real-world images is a challenging task. Real-world materials have rich surface texture, geometry, lighting conditions, and clutter, which combine to make the problem particularly difficult. In this paper, we introduce a new, large-scale, open dataset of materials in the wild, the Materials in Context Database (MINC), and combine this dataset with deep learning to achieve material recognition and segmentation of images in the wild. MINC is an order of magnitude larger than previous material databases, while being more diverse and well-sampled across its 23 categories. Using MINC, we train convolutional neural networks (CNNs) for two tasks: classifying materials from patches, and simultaneous material recognition and segmentation in full images. For patch-based classification on MINC we found that the best performing CNN architectures can achieve 85.2 mean class accuracy. We convert these trained CNN classifiers into an efficient fully convolutional framework combined with a fully connected conditional random field (CRF) to predict the material at every pixel in an image, achieving 73.1 mean class accuracy. Our experiments demonstrate that having a large, well-sampled dataset such as MINC is crucial for real-world material recognition and segmentation." ] }
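The core of using point supervision directly in training, as contrasted above with the patch-classifier-plus-CRF approach, can be sketched as a cross-entropy loss evaluated only at the annotated pixels. This is a toy numpy sketch of the idea; the paper's actual training loss also includes image-level and objectness terms:

```python
import numpy as np

def point_supervised_loss(probs, points):
    """Cross-entropy at annotator-clicked pixels only.
    probs: (H, W, C) per-pixel class probabilities (softmax output);
    points: iterable of (row, col, class_index) point annotations."""
    loss = 0.0
    for r, c, k in points:
        loss -= np.log(probs[r, c, k] + 1e-12)  # epsilon for stability
    return loss / max(len(points), 1)
```

Because the loss touches only a handful of pixels per image, the annotation cost stays close to that of image-level labels while still localizing each object.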
1506.02106
2949145768
The semantic image segmentation task presents a trade-off between test time accuracy and training-time annotation cost. Detailed per-pixel annotations enable training accurate models but are very time-consuming to obtain, image-level class labels are an order of magnitude cheaper but result in less accurate models. We take a natural step from image-level annotation towards stronger supervision: we ask annotators to point to an object if one exists. We incorporate this point supervision along with a novel objectness potential in the training loss function of a CNN model. Experimental results on the PASCAL VOC 2012 benchmark reveal that the combined effect of point-level supervision and objectness potential yields an improvement of 12.9 mIOU over image-level supervision. Further, we demonstrate that models trained with point-level supervision are more accurate than models trained with image-level, squiggle-level or full supervision given a fixed annotation budget.
CNNs for Segmentation Recent successes in semantic segmentation have been driven by methods that train CNNs originally built for image classification to assign semantic labels to each pixel in an image @cite_34 @cite_35 @cite_32 @cite_12 . One extension of the fully convolutional network (FCN) architecture developed by @cite_34 is to train a multi-layer deconvolution network end-to-end @cite_6 . More inventive forms of post-processing have also been developed, such as combining the responses at the final layer of the network with a fully-connected CRF @cite_12 . We develop our approach on top of the basic framework common to many of these methods.
{ "cite_N": [ "@cite_35", "@cite_32", "@cite_6", "@cite_34", "@cite_12" ], "mid": [ "2022508996", "2950612966", "2952637581", "1903029394", "1923697677" ], "abstract": [ "Scene labeling consists of labeling each pixel in an image with the category of the object it belongs to. We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. The method alleviates the need for engineered features, and produces a powerful representation that captures texture, shape, and contextual information. We report results using multiple postprocessing methods to produce the final labeling. Among those, we propose a technique to automatically retrieve, from a pool of segmentation components, an optimal set of components that best explain the scene; these components are arbitrary, for example, they can be taken from a segmentation tree or from any family of oversegmentations. The system yields record accuracies on the SIFT Flow dataset (33 classes) and the Barcelona dataset (170 classes) and near-record accuracy on Stanford background dataset (eight classes), while being an order of magnitude faster than competing approaches, producing a 320×240 image labeling in less than a second, including feature extraction.", "We aim to detect all instances of a category in an image and, for each instance, mark the pixels that belong to it. We call this task Simultaneous Detection and Segmentation (SDS). Unlike classical bounding box detection, SDS requires a segmentation and not just a box. Unlike classical semantic segmentation, we require individual object instances. We build on recent work that uses convolutional neural networks to classify category-independent region proposals (R-CNN [16]), introducing a novel architecture tailored for SDS. We then use category-specific, top- down figure-ground predictions to refine our bottom-up proposals. 
We show a 7 point boost (16 relative) over our baselines on SDS, a 5 point boost (10 relative) over state-of-the-art on semantic segmentation, and state-of-the-art performance in object detection. Finally, we provide diagnostic tools that unpack performance and provide directions for future work.", "We propose a novel semantic segmentation algorithm by learning a deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixel-wise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating deep deconvolution network and proposal-wise prediction; our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5 ) among the methods trained with no external data through ensemble with the fully convolutional network.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. 
We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU." ] }
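The FCN recipe summarized in the passage above (coarse per-class score maps, upsampling back to image resolution, then a per-pixel argmax) can be illustrated with a minimal NumPy sketch. The helper names `upsample_bilinear` and `segment` are hypothetical, and fixed bilinear weights stand in for the learned deconvolution filters the papers actually train.

```python
import numpy as np

def upsample_bilinear(scores, factor):
    """Bilinearly upsample a (C, h, w) score map by an integer factor,
    standing in for the learned deconvolution of FCN-style models."""
    C, h, w = scores.shape
    H, W = h * factor, w * factor
    # target pixel centers mapped back into source coordinates
    ys = (np.arange(H) + 0.5) / factor - 0.5
    xs = (np.arange(W) + 0.5) / factor - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    out = np.empty((C, H, W))
    for c in range(C):
        s = scores[c]
        top = s[y0][:, x0] * (1 - wx) + s[y0][:, x1] * wx
        bot = s[y1][:, x0] * (1 - wx) + s[y1][:, x1] * wx
        out[c] = top * (1 - wy) + bot * wy
    return out

def segment(scores, factor=8):
    """Per-pixel labels: upsample the coarse class scores, argmax over classes."""
    return np.argmax(upsample_bilinear(scores, factor), axis=0)
```

A CRF-based refinement, as in DeepLab, would further smooth these argmax labels using pairwise terms; that step is omitted here.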
1506.02106
2949145768
The semantic image segmentation task presents a trade-off between test time accuracy and training-time annotation cost. Detailed per-pixel annotations enable training accurate models but are very time-consuming to obtain, image-level class labels are an order of magnitude cheaper but result in less accurate models. We take a natural step from image-level annotation towards stronger supervision: we ask annotators to point to an object if one exists. We incorporate this point supervision along with a novel objectness potential in the training loss function of a CNN model. Experimental results on the PASCAL VOC 2012 benchmark reveal that the combined effect of point-level supervision and objectness potential yields an improvement of 12.9 mIOU over image-level supervision. Further, we demonstrate that models trained with point-level supervision are more accurate than models trained with image-level, squiggle-level or full supervision given a fixed annotation budget.
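As a rough illustration of the kind of objective the abstract describes, the NumPy sketch below combines a point-level cross-entropy, an image-level class-presence term, and an objectness-weighted background term. The function name, the weight `w_obj`, and the exact form of each term are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def point_supervision_loss(scores, points, image_classes, objectness, w_obj=0.5):
    """scores: (C, H, W) class scores; points: list of (y, x, cls) annotator clicks;
    image_classes: classes known to be present; objectness: (H, W) prior in [0, 1].
    Class 0 is taken to be background."""
    probs = softmax(scores, axis=0)
    # 1) supervised cross-entropy at the clicked pixels
    point_loss = -np.mean([np.log(probs[c, y, x]) for (y, x, c) in points]) if points else 0.0
    # 2) image-level term: each present class should fire strongly somewhere
    image_loss = -np.mean([np.log(probs[c].max()) for c in image_classes])
    # 3) objectness prior: pixels unlikely to be objects should prefer background
    obj_loss = -np.mean((1.0 - objectness) * np.log(probs[0]))
    return point_loss + image_loss + w_obj * obj_loss
```

The interplay of the three terms mirrors the abstract's claim: points pin down object locations cheaply, while the objectness potential keeps the background from being absorbed into object classes.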
Interactive Segmentation. Some semantic segmentation methods are interactive, in that they collect additional annotations at test time to refine the segmentation. These annotations can be collected as points @cite_30 or free-form squiggles @cite_10. Such methods require additional user input at test time; in contrast, we collect user points only once and use them only at training time.
{ "cite_N": [ "@cite_30", "@cite_10" ], "mid": [ "2169374938", "2124351162" ], "abstract": [ "We present TouchCut; a robust and efficient algorithm for segmenting image and video sequences with minimal user interaction. Our algorithm requires only a single finger touch to identify the object of interest in the image or first frame of video. Our approach is based on a level set framework, with an appearance model fusing edge, region texture and geometric information sampled local to the touched point. We first present our image segmentation solution, then extend this framework to progressive (per-frame) video segmentation, encouraging temporal coherence by incorporating motion estimation and a shape prior learned from previous frames. This new approach to visual object cut-out provides a practical solution for image and video segmentation on compact touch screen devices, facilitating spatially localized media manipulation. We describe such a case study, enabling users to selectively stylize video objects to create a hand-painted effect. We demonstrate the advantages of TouchCut by quantitatively comparing against the state of the art both in terms of accuracy, and run-time performance.", "The problem of efficient, interactive foreground background segmentation in still images is of great practical importance in image editing. Classical image segmentation tools use either texture (colour) information, e.g. Magic Wand, or edge (contrast) information, e.g. Intelligent Scissors. Recently, an approach based on optimization by graph-cut has been developed which successfully combines both types of information. In this paper we extend the graph-cut approach in three respects. First, we have developed a more powerful, iterative version of the optimisation. Secondly, the power of the iterative algorithm is used to simplify substantially the user interaction needed for a given quality of result. 
Thirdly, a robust algorithm for \"border matting\" has been developed to estimate simultaneously the alpha-matte around an object boundary and the colours of foreground pixels. We show that for moderately difficult examples the proposed method outperforms competitive tools." ] }
1506.02025
2951005624
Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.
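The core of the spatial transformer described above — an affine grid generator followed by a differentiable bilinear sampler — can be sketched in NumPy for a single-channel image. `affine_grid` and `bilinear_sample` are illustrative stand-ins for the module's two stages, not the paper's implementation.

```python
import numpy as np

def affine_grid(theta, H, W):
    """theta: (2, 3) affine matrix in normalized [-1, 1] coordinates.
    Returns an (H, W, 2) grid of source (x, y) coords for each target pixel."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
    grid = np.stack([xs, ys, np.ones_like(xs)], axis=-1)  # (H, W, 3) homogeneous
    return grid @ theta.T

def bilinear_sample(img, grid):
    """Sample img (H, W) at the grid's normalized coords with bilinear weights;
    the weights are piecewise-linear in theta, which is what makes the module
    differentiable end-to-end."""
    H, W = img.shape
    x = (grid[..., 0] + 1) * (W - 1) / 2   # back to pixel coordinates
    y = (grid[..., 1] + 1) * (H - 1) / 2
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = x0 + 1, y0 + 1
    wx, wy = x - x0, y - y0
    x0, x1 = np.clip(x0, 0, W - 1), np.clip(x1, 0, W - 1)
    y0, y1 = np.clip(y0, 0, H - 1), np.clip(y1, 0, H - 1)
    return (img[y0, x0] * (1 - wx) * (1 - wy) + img[y0, x1] * wx * (1 - wy)
            + img[y1, x0] * (1 - wx) * wy + img[y1, x1] * wx * wy)
```

In the full module, a small localization network predicts `theta` from the feature map itself, so the transformation is conditioned on the input.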
In this section we discuss the prior work related to the paper, covering the central ideas of modelling transformations with neural networks @cite_26 @cite_35 @cite_22 , learning and analysing transformation-invariant representations @cite_4 @cite_1 @cite_23 @cite_9 @cite_27 @cite_14 , as well as attention and detection mechanisms for feature selection @cite_25 @cite_33 @cite_2 @cite_17 @cite_5 @cite_3 .
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_4", "@cite_22", "@cite_14", "@cite_33", "@cite_9", "@cite_1", "@cite_3", "@cite_27", "@cite_23", "@cite_2", "@cite_5", "@cite_25", "@cite_17" ], "mid": [ "", "112688168", "2136026194", "2185466002", "1912570122", "2964036520", "252252322", "2963829960", "2949150497", "2072072671", "2952390042", "1724369340", "2102605133", "", "2962741254" ], "abstract": [ "", "A viewpoint-independent description of the shape of an object can be generated by imposing a canonical frame of reference on the object and describing the spatial dispositions of the parts relative to this object-based frame. When a familiar object is in an unusual orientation, the deciding factor in the choice of the canonical object-based frame may be the fact that relative to this frame the object has a familiar shape description. This may suggest that we first hypothesise an object-based frame and then test the resultant shape description for familiarity. However, it is possible to organise the interactions between units in a parallel network so that the pattern of activity in the network simultaneously converges on a representation of the shape and a representation of the object-based frame of reference. The connections in the network are determined by the constraints inherent in the image formation process.", "The chief difficulty in object recognition is that objects' classes are obscured by a large number of extraneous sources of variability, such as pose and part deformation. These sources of variation can be represented by symmetry groups, sets of composable transformations that preserve object identity. Convolutional neural networks (convnets) achieve a degree of translational invariance by computing feature maps over the translation group, but cannot handle other groups. As a result, these groups' effects have to be approximated by small translations, which often requires augmenting datasets and leads to high sample complexity. 
In this paper, we introduce deep symmetry networks (symnets), a generalization of convnets that forms feature maps over arbitrary symmetry groups. Symnets use kernel-based interpolation to tractably tie parameters and pool over symmetry spaces of any dimension. Like convnets, they are trained with backpropagation. The composition of feature transformations through the layers of a symnet provides a new approach to deep learning. Experiments on NORB and MNIST-rot show that symnets over the affine group greatly reduce sample complexity relative to convnets by better capturing the symmetries in the data.", "Optimizing Neural Networks that Generate Images Tijmen Tieleman Doctor of Philosophy Graduate Department of Computer Science University of Toronto 2014 Image recognition, also known as computer vision, is one of the most prominent applications of neural networks. The image recognition methods presented in this thesis are based on the reverse process: generating images. Generating images is easier than recognizing them, for the computer systems that we have today. This work leverages the ability to generate images, for the purpose of recognizing other", "Despite the importance of image representations such as histograms of oriented gradients and deep Convolutional Neural Networks (CNN), our theoretical understanding of them remains limited. Aiming at filling this gap, we investigate three key mathematical properties of representations: equivariance, invariance, and equivalence. Equivariance studies how transformations of the input image are encoded by the representation, invariance being a special case where a transformation has no effect. Equivalence studies whether two representations, for example two different parametrisations of a CNN, capture the same visual information or not. A number of methods to establish these properties empirically are proposed, including introducing transformation and stitching layers in CNNs. 
These methods are then applied to popular representations to reveal insightful aspects of their structure, including clarifying at which layers in a CNN certain geometric invariances are achieved. While the focus of the paper is theoretical, direct applications to structured-output regression are demonstrated too.", "Abstract: We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.", "Convolutional Neural Networks (ConvNets) have shown excellent results on many visual classification tasks. With the exception of ImageNet, these datasets are carefully crafted such that objects are well-aligned at similar scales. Naturally, the feature learning problem gets more challenging as the amount of variation in the data increases, as the models have to learn to be invariant to certain changes in appearance. Recent results on the ImageNet dataset show that given enough data, ConvNets can learn such invariances producing very discriminative features [1]. But could we do more: use less parameters, less data, learn more discriminative features, if certain invariances were built into the learning process? In this paper we present a simple model that allows ConvNets to learn features in a locally scale-invariant manner without increasing the number of model parameters. 
We show on a modified MNIST dataset that when faced with scale variation, building in scale-invariance allows ConvNets to learn more discriminative features with reduced chances of over-fitting.", "Abstract: When a three-dimensional object moves relative to an observer, a change occurs on the observer's image plane and in the visual representation computed by a learned model. Starting with the idea that a good visual representation is one that transforms linearly under scene motions, we show, using the theory of group representations, that any such representation is equivalent to a combination of the elementary irreducible representations. We derive a striking relationship between irreducibility and the statistical dependency structure of the representation, by showing that under restricted conditions, irreducible representations are decorrelated. Under partial observability, as induced by the perspective projection of a scene onto the image plane, the motion group does not have a linear action on the space of images, so that it becomes necessary to perform inference over a latent representation that does transform linearly. This idea is demonstrated in a model of rotating NORB objects that employs a latent representation of the non-commutative 3D rotation group SO(3).", "Deep convolutional neural networks have recently achieved state-of-the-art performance on a number of image recognition benchmarks, including the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC-2012). The winning model on the localization sub-task was a network that predicts a single bounding box and a confidence score for each object category in the image. Such a model captures the whole-image context around the objects but cannot handle multiple instances of the same object in the image without naively replicating the number of outputs for each instance. 
In this work, we propose a saliency-inspired neural network model for detection, which predicts a set of class-agnostic bounding boxes along with a single score for each box, corresponding to its likelihood of containing any object of interest. The model naturally handles a variable number of instances for each class and allows for cross-class generalization at the highest levels of the network. We are able to obtain competitive recognition performance on VOC2007 and ILSVRC2012, while using only the top few predicted locations in each image and a small number of neural network evaluations.", "A wavelet scattering network computes a translation invariant image representation which is stable to deformations and preserves high-frequency information for classification. It cascades wavelet transform convolutions with nonlinear modulus and averaging operators. The first network layer outputs SIFT-type descriptors, whereas the next layers provide complementary invariant information that improves classification. The mathematical analysis of wavelet scattering networks explains important properties of deep convolution networks for classification. A scattering representation of stationary processes incorporates higher order moments and can thus discriminate textures having the same Fourier power spectrum. State-of-the-art classification results are obtained for handwritten digits and texture discrimination, with a Gaussian kernel SVM and a generative PCA classifier.", "Learning invariant representations is an important problem in machine learning and pattern recognition. In this paper, we present a novel framework of transformation-invariant feature learning by incorporating linear transformations into the feature learning algorithms. For example, we present the transformation-invariant restricted Boltzmann machine that compactly represents data by its weights and their transformations, which achieves invariance of the feature representation via probabilistic max pooling. 
In addition, we show that our transformation-invariant feature learning framework can also be extended to other unsupervised learning methods, such as autoencoders or sparse coding. We evaluate our method on several image classification benchmark datasets, such as MNIST variations, CIFAR-10, and STL-10, and show competitive or superior classification performance when compared to the state-of-the-art. Furthermore, our method achieves state-of-the-art performance on phone classification tasks with the TIMIT dataset, which demonstrates wide applicability of our proposed algorithms to other domains.", "This paper presents experiments extending the work of (2014) on recurrent neural models for attention into less constrained visual environments, specifically fine-grained categorization on the Stanford Dogs data set. In this work we use an RNN of the same structure but substitute a more powerful visual network and perform large-scale pre-training of the visual network outside of the attention RNN. Most work in attention models to date focuses on tasks with toy or more constrained visual environments, whereas we present results for fine-grained categorization better than the state-of-the-art GoogLeNet classification model. We show that our model learns to direct high resolution attention to the most discriminative regions without any spatial supervision such as bounding boxes, and it is able to discriminate fine-grained dog breeds moderately well even when given only an initial low-resolution context image and narrow, inexpensive glimpses at faces and fur patterns. This and similar attention models have the major advantage of being trained end-to-end, as opposed to other current detection and recognition pipelines with hand-engineered components where information is lost. 
While our model is state-of-the-art, further work is needed to fully leverage the sequential input.", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "", "This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye." ] }
1506.02025
2951005624
Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.
Early work by Hinton @cite_26 looked at assigning canonical frames of reference to object parts, a theme which recurred in @cite_35, where 2D affine transformations were modeled to create a generative model composed of transformed parts. The targets of the generative training scheme are the transformed input images, with the transformations between inputs and targets given as an additional input to the network. The result is a generative model that can learn to generate transformed images of objects by composing parts. The notion of a composition of transformed parts is taken further by Tieleman @cite_22, where learnt parts are explicitly affine-transformed, with the transform predicted by the network. Such generative capsule models are able to learn discriminative features for classification from transformation supervision.
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_22" ], "mid": [ "", "112688168", "2185466002" ], "abstract": [ "", "A viewpoint-independent description of the shape of an object can be generated by imposing a canonical frame of reference on the object and describing the spatial dispositions of the parts relative to this object-based frame. When a familiar object is in an unusual orientation, the deciding factor in the choice of the canonical object-based frame may be the fact that relative to this frame the object has a familiar shape description. This may suggest that we first hypothesise an object-based frame and then test the resultant shape description for familiarity. However, it is possible to organise the interactions between units in a parallel network so that the pattern of activity in the network simultaneously converges on a representation of the shape and a representation of the object-based frame of reference. The connections in the network are determined by the constraints inherent in the image formation process.", "Optimizing Neural Networks that Generate Images Tijmen Tieleman Doctor of Philosophy Graduate Department of Computer Science University of Toronto 2014 Image recognition, also known as computer vision, is one of the most prominent applications of neural networks. The image recognition methods presented in this thesis are based on the reverse process: generating images. Generating images is easier than recognizing them, for the computer systems that we have today. This work leverages the ability to generate images, for the purpose of recognizing other" ] }
1506.02025
2951005624
Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.
The invariance and equivariance of CNN representations to input image transformations are studied in @cite_14 by estimating the linear relationships between representations of the original and transformed images. Cohen & Welling @cite_1 analyse this behaviour in relation to symmetry groups, which are also exploited in the architecture proposed by Gens & Domingos @cite_4, resulting in feature maps that are more invariant to symmetry groups. Other attempts to design transformation-invariant representations include scattering networks @cite_27 and CNNs that construct filter banks of transformed filters @cite_23 @cite_9. Stollenga et al. @cite_18 use a policy based on a network's activations to gate the responses of the network's filters for a subsequent forward pass of the same image, and so can allow attention on specific features. In this work, we aim to achieve invariant representations by manipulating the data rather than the feature extractors, something that was done for clustering in @cite_39.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_9", "@cite_1", "@cite_39", "@cite_27", "@cite_23" ], "mid": [ "2172010943", "1912570122", "2136026194", "252252322", "2963829960", "2096198554", "2072072671", "2952390042" ], "abstract": [ "Traditional convolutional neural networks (CNN) are stationary and feedforward. They neither change their parameters during evaluation nor use feedback from higher to lower layers. Real brains, however, do. So does our Deep Attention Selective Network (dasNet) architecture. DasNets feedback structure can dynamically alter its convolutional filter sensitivities during classification. It harnesses the power of sequential processing to improve classification performance, by allowing the network to iteratively focus its internal attention on some of its convolutional filters. Feedback is trained through direct policy search in a huge million-dimensional parameter space, through scalable natural evolution strategies (SNES). On the CIFAR-10 and CIFAR-100 datasets, dasNet outperforms the previous state-of-the-art model.", "Despite the importance of image representations such as histograms of oriented gradients and deep Convolutional Neural Networks (CNN), our theoretical understanding of them remains limited. Aiming at filling this gap, we investigate three key mathematical properties of representations: equivariance, invariance, and equivalence. Equivariance studies how transformations of the input image are encoded by the representation, invariance being a special case where a transformation has no effect. Equivalence studies whether two representations, for example two different parametrisations of a CNN, capture the same visual information or not. A number of methods to establish these properties empirically are proposed, including introducing transformation and stitching layers in CNNs. 
These methods are then applied to popular representations to reveal insightful aspects of their structure, including clarifying at which layers in a CNN certain geometric invariances are achieved. While the focus of the paper is theoretical, direct applications to structured-output regression are demonstrated too.", "The chief difficulty in object recognition is that objects' classes are obscured by a large number of extraneous sources of variability, such as pose and part deformation. These sources of variation can be represented by symmetry groups, sets of composable transformations that preserve object identity. Convolutional neural networks (convnets) achieve a degree of translational invariance by computing feature maps over the translation group, but cannot handle other groups. As a result, these groups' effects have to be approximated by small translations, which often requires augmenting datasets and leads to high sample complexity. In this paper, we introduce deep symmetry networks (symnets), a generalization of convnets that forms feature maps over arbitrary symmetry groups. Symnets use kernel-based interpolation to tractably tie parameters and pool over symmetry spaces of any dimension. Like convnets, they are trained with backpropagation. The composition of feature transformations through the layers of a symnet provides a new approach to deep learning. Experiments on NORB and MNIST-rot show that symnets over the affine group greatly reduce sample complexity relative to convnets by better capturing the symmetries in the data.", "Convolutional Neural Networks (ConvNets) have shown excellent results on many visual classification tasks. With the exception of ImageNet, these datasets are carefully crafted such that objects are well-aligned at similar scales. Naturally, the feature learning problem gets more challenging as the amount of variation in the data increases, as the models have to learn to be invariant to certain changes in appearance. 
Recent results on the ImageNet dataset show that given enough data, ConvNets can learn such invariances producing very discriminative features [1]. But could we do more: use less parameters, less data, learn more discriminative features, if certain invariances were built into the learning process? In this paper we present a simple model that allows ConvNets to learn features in a locally scale-invariant manner without increasing the number of model parameters. We show on a modified MNIST dataset that when faced with scale variation, building in scale-invariance allows ConvNets to learn more discriminative features with reduced chances of over-fitting.", "Abstract: When a three-dimensional object moves relative to an observer, a change occurs on the observer's image plane and in the visual representation computed by a learned model. Starting with the idea that a good visual representation is one that transforms linearly under scene motions, we show, using the theory of group representations, that any such representation is equivalent to a combination of the elementary irreducible representations. We derive a striking relationship between irreducibility and the statistical dependency structure of the representation, by showing that under restricted conditions, irreducible representations are decorrelated. Under partial observability, as induced by the perspective projection of a scene onto the image plane, the motion group does not have a linear action on the space of images, so that it becomes necessary to perform inference over a latent representation that does transform linearly. 
This idea is demonstrated in a model of rotating NORB objects that employs a latent representation of the non-commutative 3D rotation group SO(3).", "In previous work on \"transformed mixtures of Gaussians\" and \"transformed hidden Markov models\", we showed how the EM algorithm in a discrete latent variable model can be used to jointly normalize data (e.g., center images, pitch-normalize spectrograms) and learn a mixture model of the normalized data. The only input to the algorithm is the data, a list of possible transformations, and the number of clusters to find. The main criticism of this work was that the exhaustive computation of the posterior probabilities over transformations would make scaling up to large feature vectors and large sets of transformations intractable. Here, we describe how a tremendous speed-up is achieved through the use of a variational technique for decoupling transformations, and a fast Fourier transform method for computing posterior probabilities. For N × N images, learning C clusters under N rotations, N scales, N x-translations and N y-translations takes only (C + 2 log N)N^2 scalar operations per iteration. In contrast, the original algorithm takes CN^6 operations to account for these transformations. We give results on learning a 4-component mixture model from a video sequence with frames of size 320×240. The model accounts for 360 rotations and 76,800 translations. Each iteration of EM takes only 10 seconds per frame in MATLAB, which is over 5 million times faster than the original algorithm.", "A wavelet scattering network computes a translation invariant image representation which is stable to deformations and preserves high-frequency information for classification. It cascades wavelet transform convolutions with nonlinear modulus and averaging operators. The first network layer outputs SIFT-type descriptors, whereas the next layers provide complementary invariant information that improves classification.
The mathematical analysis of wavelet scattering networks explains important properties of deep convolution networks for classification. A scattering representation of stationary processes incorporates higher order moments and can thus discriminate textures having the same Fourier power spectrum. State-of-the-art classification results are obtained for handwritten digits and texture discrimination, with a Gaussian kernel SVM and a generative PCA classifier.", "Learning invariant representations is an important problem in machine learning and pattern recognition. In this paper, we present a novel framework of transformation-invariant feature learning by incorporating linear transformations into the feature learning algorithms. For example, we present the transformation-invariant restricted Boltzmann machine that compactly represents data by its weights and their transformations, which achieves invariance of the feature representation via probabilistic max pooling. In addition, we show that our transformation-invariant feature learning framework can also be extended to other unsupervised learning methods, such as autoencoders or sparse coding. We evaluate our method on several image classification benchmark datasets, such as MNIST variations, CIFAR-10, and STL-10, and show competitive or superior classification performance when compared to the state-of-the-art. Furthermore, our method achieves state-of-the-art performance on phone classification tasks with the TIMIT dataset, which demonstrates wide applicability of our proposed algorithms to other domains." ] }
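The scattering pipeline described above (wavelet convolution, pointwise modulus, then averaging) can be sketched for 1-D signals. The Haar-like filter and pooling sizes below are illustrative assumptions, not Mallat's actual filter bank:

```python
import numpy as np

def haar_wavelet(n):
    """Crude Haar-like bandpass filter of length n (illustrative only)."""
    h = np.zeros(n)
    h[: n // 2] = 1.0
    h[n // 2 :] = -1.0
    return h / np.sqrt(n)

def scattering_level1(x, filter_len=8, pool=4):
    """One scattering layer: convolve, take modulus, average-pool.

    The modulus discards phase (hence small shifts), and the averaging
    gives local translation stability, mirroring the cascade described
    in the abstract.
    """
    psi = haar_wavelet(filter_len)
    u = np.abs(np.convolve(x, psi, mode="valid"))  # wavelet modulus
    # low-pass averaging: non-overlapping mean pooling
    trim = len(u) - (len(u) % pool)
    return u[:trim].reshape(-1, pool).mean(axis=1)

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
s0 = scattering_level1(x)
```

A full scattering network cascades several such layers with a bank of oriented wavelets at multiple scales; this one-level version only shows the convolve/modulus/average pattern.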
1506.02025
2951005624
Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.
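The module described in this abstract reduces to two differentiable pieces: a grid generator that maps output pixel coordinates through a predicted affine transform, and a bilinear sampler. Below is a minimal NumPy forward pass for illustration only; the real module is trained end-to-end inside a network and backpropagates through both pieces:

```python
import numpy as np

def affine_grid(theta, H, W):
    """Sampling coordinates for a 2x3 affine matrix theta.

    Coordinates live in normalized [-1, 1] space, as in the paper.
    """
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W),
                         indexing="ij")
    grid = np.stack([xs, ys, np.ones_like(xs)], axis=-1)  # (H, W, 3)
    return grid @ theta.T  # (H, W, 2): source (x, y) per output pixel

def bilinear_sample(img, grid):
    """Sample img at (possibly fractional) grid locations."""
    H, W = img.shape
    x = (grid[..., 0] + 1) * (W - 1) / 2
    y = (grid[..., 1] + 1) * (H - 1) / 2
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    wx, wy = x - x0, y - y0
    return ((1 - wy) * (1 - wx) * img[y0, x0]
            + (1 - wy) * wx * img[y0, x0 + 1]
            + wy * (1 - wx) * img[y0 + 1, x0]
            + wy * wx * img[y0 + 1, x0 + 1])

# Sanity check: the identity transform returns the input (up to rounding).
img = np.arange(16.0).reshape(4, 4)
theta_id = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
out = bilinear_sample(img, affine_grid(theta_id, 4, 4))
```

Because bilinear weights are piecewise-linear in the grid coordinates, gradients flow from the output image back to theta, which is what lets the transform parameters be learned without extra supervision.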
Neural networks with selective attention manipulate the data by taking crops, and so are able to learn translation invariance. Works such as @cite_33 @cite_2 are trained with reinforcement learning to avoid the need for a differentiable attention mechanism, while @cite_17 use a differentiable attention mechanism by utilising Gaussian kernels in a generative model. The work by Girshick et al. @cite_5 uses a region proposal algorithm as a form of attention, and @cite_3 show that it is possible to regress salient regions with a CNN. The framework we present in this paper can be seen as a generalisation of differentiable attention to any spatial transformation.
{ "cite_N": [ "@cite_33", "@cite_3", "@cite_2", "@cite_5", "@cite_17" ], "mid": [ "2964036520", "2949150497", "1724369340", "2102605133", "2962741254" ], "abstract": [ "Abstract: We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.", "Deep convolutional neural networks have recently achieved state-of-the-art performance on a number of image recognition benchmarks, including the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC-2012). The winning model on the localization sub-task was a network that predicts a single bounding box and a confidence score for each object category in the image. Such a model captures the whole-image context around the objects but cannot handle multiple instances of the same object in the image without naively replicating the number of outputs for each instance. In this work, we propose a saliency-inspired neural network model for detection, which predicts a set of class-agnostic bounding boxes along with a single score for each box, corresponding to its likelihood of containing any object of interest. The model naturally handles a variable number of instances for each class and allows for cross-class generalization at the highest levels of the network. 
We are able to obtain competitive recognition performance on VOC2007 and ILSVRC2012, while using only the top few predicted locations in each image and a small number of neural network evaluations.", "This paper presents experiments extending the work of (2014) on recurrent neural models for attention into less constrained visual environments, specifically fine-grained categorization on the Stanford Dogs data set. In this work we use an RNN of the same structure but substitute a more powerful visual network and perform large-scale pre-training of the visual network outside of the attention RNN. Most work in attention models to date focuses on tasks with toy or more constrained visual environments, whereas we present results for fine-grained categorization better than the state-of-the-art GoogLeNet classification model. We show that our model learns to direct high resolution attention to the most discriminative regions without any spatial supervision such as bounding boxes, and it is able to discriminate fine-grained dog breeds moderately well even when given only an initial low-resolution context image and narrow, inexpensive glimpses at faces and fur patterns. This and similar attention models have the major advantage of being trained end-to-end, as opposed to other current detection and recognition pipelines with hand-engineered components where information is lost. While our model is state-of-the-art, further work is needed to fully leverage the sequential input.", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%.
Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye." ] }
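The DRAW-style Gaussian attention referenced in these abstracts can be sketched as follows: an N x N grid of 1-D Gaussian filters softly crops a region of the image, and because the output varies smoothly with the attention parameters, the mechanism is differentiable. The parameter names below are illustrative, not the paper's exact notation:

```python
import numpy as np

def gaussian_filterbank(center, stride, sigma, N, size):
    """N row-normalized Gaussian filters of length `size`, spaced by
    `stride` around `center`."""
    mus = center + (np.arange(N) - N / 2 + 0.5) * stride  # filter centers
    xs = np.arange(size)
    F = np.exp(-((xs[None, :] - mus[:, None]) ** 2) / (2 * sigma ** 2))
    return F / (F.sum(axis=1, keepdims=True) + 1e-8)

def read_glimpse(img, cx, cy, stride, sigma, N):
    """Extract an N x N soft crop: glimpse = Fy @ img @ Fx^T."""
    H, W = img.shape
    Fy = gaussian_filterbank(cy, stride, sigma, N, H)
    Fx = gaussian_filterbank(cx, stride, sigma, N, W)
    return Fy @ img @ Fx.T

img = np.zeros((16, 16))
img[4:8, 4:8] = 1.0  # bright patch in the upper-left quadrant
on_patch = read_glimpse(img, cx=6.0, cy=6.0, stride=1.0, sigma=0.5, N=4)
off_patch = read_glimpse(img, cx=13.0, cy=13.0, stride=1.0, sigma=0.5, N=4)
```

Attending at the patch yields a bright glimpse, attending elsewhere a near-zero one; in DRAW the center, stride, and sigma are themselves emitted by the network and trained by backpropagation.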
1506.01762
629236042
Verification of temporal logic properties plays a crucial role in proving the desired behaviors of hybrid systems. In this paper, we propose an interval method for verifying the properties described by a bounded linear temporal logic. We relax the problem to allow outputting an inconclusive result when the verification process cannot succeed with a prescribed precision, and present an efficient and rigorous monitoring algorithm that demonstrates that the problem is decidable. This algorithm performs a forward simulation of a hybrid automaton, detects a set of time intervals in which the atomic propositions hold, and validates the property by propagating the time intervals. A continuous state at a certain time computed in each step is enclosed by an interval vector that is proven to contain a unique solution. In the experiments, we show that the proposed method provides a useful tool for formal analysis of nonlinear and complex hybrid systems.
Many previous studies have applied interval methods to reachability analysis of hybrid systems @cite_5 @cite_13 @cite_2 @cite_25 @cite_22 @cite_26 @cite_9 . The outcome of these methods is an over-approximation of a set of reachable states with a set of boxes. In interval analysis, a computation often provides a proof of unique existence of a solution within a resulting interval. This technique also applies in interval-based reachability analysis @cite_25 @cite_15 , but it is not considered in most of the methods for hybrid systems. Our method enforces the use of the proof to verify more generic temporal properties.
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_9", "@cite_2", "@cite_5", "@cite_15", "@cite_13", "@cite_25" ], "mid": [ "", "2084859862", "1504003583", "1990733120", "2124397961", "1969710845", "2134675215", "2006705367" ], "abstract": [ "", "We propose an approach for verifying non-linear hybrid systems using higher-order Taylor models that are a combination of bounded degree polynomials over the initial conditions and time, bloated by an interval. Taylor models are an effective means for computing rigorous bounds on the complex time trajectories of non-linear differential equations. As a result, Taylor models have been successfully used to verify properties of non-linear continuous systems. However, the handling of discrete (controller) transitions remains a challenging problem. In this paper, we provide techniques for handling the effect of discrete transitions on Taylor model flow pipe construction. We explore various solutions based on two ideas: domain contraction and range over-approximation. Instead of explicitly computing the intersection of a Taylor model with a guard set, domain contraction makes the domain of a Taylor model smaller by cutting away parts for which the intersection is empty. It is complemented by range over-approximation that translates Taylor models into commonly used representations such as template polyhedra or zonotopes, on which intersections with guard sets have been previously studied. We provide an implementation of the techniques described in the paper and evaluate the various design choices over a set of challenging benchmarks.", "Abstract : We present the framework of delta-complete analysis for bounded reachability problems of general hybrid systems. We perform bounded reachability checking through solving delta-decision problems over the reals. The techniques take into account of robustness properties of the systems under numerical perturbations. 
We prove that the verification problems become much more mathematically tractable in this new framework. Our implementation of the techniques, an open-source tool dReach, scales well on several highly nonlinear hybrid system models that arise in biomedical and robotics applications.", "Abstract We investigate solution techniques for numerical constraint-satisfaction problems and validated numerical set integration methods for computing reachable sets of nonlinear hybrid dynamical systems in the presence of uncertainty. To use interval simulation tools with higher-dimensional hybrid systems, while assuming large domains for either initial continuous state or model parameter vectors, we need to solve the problem of flow sets intersection in an effective and reliable way. The main idea developed in this paper is first to derive an analytical expression for the boundaries of continuous flows, using interval Taylor methods and techniques for controlling the wrapping effect. Then, the event detection and localization problems underlying flow sets intersection are expressed as numerical constraint-satisfaction problems, which are solved using global search methods based on branch-and-prune algorithms, interval analysis and consistency techniques. The method is illustrated with hybrid systems with uncertain nonlinear continuous dynamics and nonlinear invariants and guards.", "In order to facilitate automated reasoning about large Boolean combinations of non-linear arithmetic constraints involving ordinary differential equations (ODEs), we provide a seamless integration of safe numeric overapproximation of initial-value problems into a SAT-modulo-theory (SMT) approach to interval-based arithmetic constraint solving. 
Interval-based safe numeric approximation of ODEs is used as an interval contractor being able to narrow candidate sets in phase space in both temporal directions: post-images of ODEs (i.e., sets of states reachable from a set of initial values) are narrowed based on partial information about the initial values and, vice versa, pre-images are narrowed based on partial knowledge about post-sets. In contrast to the related CLP(F) approach of Hickey and Wittenberg [12], we do (a) support coordinate transformations mitigating the wrapping effect encountered upon iterating interval-based overapproximations of reachable state sets and (b) embed the approach into an SMT framework, thus accelerating the solving process through the algorithmic enhancements of recent SAT solving technology.", "Computing a tight inner approximation of the range of a function over some set is notoriously difficult, way beyond obtaining outer approximations. We propose here a new method to compute a tight inner approximation of the set of reachable states of non-linear dynamical systems on a bounded time interval. This approach involves affine forms and Kaucher arithmetic, plus a number of extra ingredients from set-based methods. An implementation of the method is discussed, and illustrated on representative numerical schemes, discrete-time and continuous-time dynamical systems.", "This paper introduces a new algorithm dedicated to the rigorous reachability analysis of nonlinear dynamical systems. The algorithm is initially presented in the context of discrete time dynamical systems, and then extended to continuous time dynamical systems driven by ODEs. In continuous time, this algorithm is called the Reach and Evolve algorithm. The Reach and Evolve algorithm is based on interval analysis and a rigorous discretization of space and time. Promising numerical experiments are presented.", "This paper presents a bounded model checking tool called @math for hybrid systems. 
It translates a reachability problem of a nonlinear hybrid system into a predicate logic formula involving arithmetic constraints and checks the satisfiability of the formula based on a satisfiability modulo theories method. We tightly integrate (i) an incremental SAT solver to enumerate the possible sets of constraints and (ii) an interval-based solver for hybrid constraint systems (HCSs) to solve the constraints described in the formulas. The HCS solver verifies the occurrence of a discrete change by using a set of boxes to enclose continuous states that may cause the discrete change. We utilize the existence property of a unique solution in the boxes computed by the HCS solver as (i) a proof of the reachability of a model and (ii) a guide in the over-approximation refinement procedure. Our @math implementation successfully handled several examples including those with nonlinear constraints." ] }
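The interval enclosures these tools build on can be illustrated with a toy example: interval arithmetic over-approximates every operation, so an interval-valued Euler iteration yields a box guaranteed to contain the images of all initial states under the same iteration. This is a minimal sketch, not validated ODE integration (which also bounds the truncation error):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Over-approximate the product by taking all endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def scale(self, c):
        return Interval(min(c * self.lo, c * self.hi),
                        max(c * self.lo, c * self.hi))

    def contains(self, x):
        return self.lo <= x <= self.hi

def euler_enclosure(x, dt, steps):
    """Interval Euler steps for x' = -x: x_{k+1} = x_k + dt * (-x_k)."""
    for _ in range(steps):
        x = x + x.scale(-dt)
    return x

x0 = Interval(0.9, 1.1)
reach = euler_enclosure(x0, dt=0.1, steps=10)
```

Note the wrapping effect mentioned in the abstracts: the box contains every true Euler successor of the initial box, but its width grows each step because x and -dt*x are treated as independent; contractors and coordinate transformations in the cited work exist precisely to tame this.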
1506.01762
629236042
Verification of temporal logic properties plays a crucial role in proving the desired behaviors of hybrid systems. In this paper, we propose an interval method for verifying the properties described by a bounded linear temporal logic. We relax the problem to allow outputting an inconclusive result when the verification process cannot succeed with a prescribed precision, and present an efficient and rigorous monitoring algorithm that demonstrates that the problem is decidable. This algorithm performs a forward simulation of a hybrid automaton, detects a set of time intervals in which the atomic propositions hold, and validates the property by propagating the time intervals. A continuous state at a certain time computed in each step is enclosed by an interval vector that is proven to contain a unique solution. In the experiments, we show that the proposed method provides a useful tool for formal analysis of nonlinear and complex hybrid systems.
Reasoning about real-time temporal logic has been a research topic of interest @cite_0 @cite_12 . A numerical method for falsification of a temporal property is straightforward @cite_10 . It simulates a trajectory of a bounded length and checks the satisfiability of the negation of the property described by a bounded temporal logic. This paper presents an interval extension of this falsification method.
{ "cite_N": [ "@cite_0", "@cite_10", "@cite_12" ], "mid": [ "", "1547304883", "2038366595" ], "abstract": [ "", "In this paper we introduce a variant of temporal logic tailored for specifying desired properties of continuous signals. The logic is based on a bounded subset of the real-time logic mitl, augmented with a static mapping from continuous domains into propositions. From formulae in this logic we create automatically property monitors that can check whether a given signal of bounded length and finite variability satisfies the property. A prototype implementation of this procedure was used to check properties of simulation traces generated by Matlab Simulink.", "We demonstrate an automated method for proving temporal logic statements about solutions to ordinary differential equations (ODEs), even in the face of an incomplete specification of the ODE. The method combines an implemented, on-the-fly, model-checking algorithm for statements in the temporal logic CTL* [3, 7, 8] with the output of the qualitative simulation algorithm QSIM [13, 16]. Based on the QSIM Guaranteed Coverage Theorem, we prove that for certain CTL* statements, ), if is true for the temporal structure produced by QSIM, then a corresponding temporal statement, '', holds for the solution of any ODE consistent with the qualitative differential equation (QDE) that QSIM used to generate the temporal structure." ] }
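The falsification scheme described above (simulate a bounded trace, mark where atomic propositions hold, and propagate through the temporal operators) can be sketched for point samples; the paper itself works on validated interval enclosures instead. A minimal monitor for the bounded F and G operators:

```python
def holds(trace, pred):
    """Boolean signal: pred evaluated at each sample of the trace."""
    return [pred(x) for x in trace]

def eventually(sig):
    """F sig at step i: does sig hold at some j >= i of the bounded trace?"""
    out, acc = [], False
    for v in reversed(sig):
        acc = acc or v
        out.append(acc)
    return out[::-1]

def always(sig):
    """G sig at step i: does sig hold at every j >= i of the bounded trace?"""
    out, acc = [], True
    for v in reversed(sig):
        acc = acc and v
        out.append(acc)
    return out[::-1]

# A damped oscillation eventually settles below 0.1 in magnitude, so the
# bounded property F(|x| < 0.1) holds at time 0 while G(|x| < 0.1) does not.
trace = [(-0.8) ** k for k in range(20)]
p = holds(trace, lambda x: abs(x) < 0.1)
```

To falsify a specification phi, one monitors its negation on the simulated trace: a trace satisfying the negation is a counterexample to phi.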
1506.01911
633646897
Recent studies have demonstrated the power of recurrent neural networks for machine translation, image captioning and speech recognition. For the task of capturing temporal structure in video, however, there still remain numerous open research questions. Current research suggests using a simple temporal feature pooling strategy to take into account the temporal aspect of video. We demonstrate that this method is not sufficient for gesture recognition, where temporal information is more discriminative compared to general video classification tasks. We explore deep architectures for gesture recognition in video and propose a new end-to-end trainable neural network architecture incorporating temporal convolutions and bidirectional recurrence. Our main contributions are twofold; first, we show that recurrence is crucial for this task; second, we show that adding temporal convolutions leads to significant improvements. We evaluate the different approaches on the Montalbano gesture recognition dataset, where we achieve state-of-the-art results.
Another way to capture motion is to convert a video stream to a dense optical flow. This is a way to represent motion spatially by estimating displacement vectors of each pixel. It is a core component in the two-stream architecture described by @cite_6 and is used for human pose estimation, for global video descriptor learning and for video captioning. We have not experimented with optical flow, because (i) it has a greater computational preprocessing complexity and (ii) our models should implicitly learn to infer motion features in an end-to-end fashion, so we chose not to engineer them.
{ "cite_N": [ "@cite_6" ], "mid": [ "2952186347" ], "abstract": [ "We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification." ] }
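For illustration, dense motion estimation in the spirit of optical flow can be approximated by brute-force block matching: each block in the first frame searches a small neighbourhood of the second frame for its best displacement. Real two-stream pipelines use variational or learned flow estimators, so this is only a sketch:

```python
import numpy as np

def block_flow(f1, f2, block=4, search=2):
    """Per-block displacement (dy, dx) minimizing the sum of squared
    differences within a +/- search window."""
    H, W = f1.shape
    flow = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(H // block):
        for bx in range(W // block):
            y, x = by * block, bx * block
            ref = f1[y:y + block, x:x + block]
            best, best_d = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy and yy + block <= H and 0 <= xx and xx + block <= W:
                        cand = f2[yy:yy + block, xx:xx + block]
                        cost = ((ref - cand) ** 2).sum()
                        if cost < best:
                            best, best_d = cost, (dy, dx)
            flow[by, bx] = best_d
    return flow

# A patch shifted right by 2 pixels should produce dx = 2 on its block.
f1 = np.zeros((16, 16)); f1[4:8, 4:8] = 1.0
f2 = np.zeros((16, 16)); f2[4:8, 6:10] = 1.0
flow = block_flow(f1, f2)
```

This per-block field is the coarse analogue of the per-pixel displacement vectors the two-stream temporal network consumes.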
1506.01911
633646897
Recent studies have demonstrated the power of recurrent neural networks for machine translation, image captioning and speech recognition. For the task of capturing temporal structure in video, however, there still remain numerous open research questions. Current research suggests using a simple temporal feature pooling strategy to take into account the temporal aspect of video. We demonstrate that this method is not sufficient for gesture recognition, where temporal information is more discriminative compared to general video classification tasks. We explore deep architectures for gesture recognition in video and propose a new end-to-end trainable neural network architecture incorporating temporal convolutions and bidirectional recurrence. Our main contributions are twofold; first, we show that recurrence is crucial for this task; second, we show that adding temporal convolutions leads to significant improvements. We evaluate the different approaches on the Montalbano gesture recognition dataset, where we achieve state-of-the-art results.
@cite_7 present an extended overview of their winning solution for the ChaLearn LAP 2014 gesture recognition challenge and achieve a state-of-the-art score on the Montalbano dataset. They propose a multi-modal 'ModDrop' network operating at three temporal scales and use an ensemble method to merge the features at different scales. They also developed a new training strategy, ModDrop, that makes the network's predictions robust to missing or corrupted channels.
{ "cite_N": [ "@cite_7" ], "mid": [ "1533025524" ], "abstract": [ "We present a method for gesture detection and localisation based on multi-scale and multi-modal deep learning. Each visual modality captures spatial information at a particular spatial scale (such as motion of the upper body or a hand), and the whole system operates at three temporal scales. Key to our technique is a training strategy which exploits: i) careful initialization of individual modalities; and ii) gradual fusion involving random dropping of separate channels (dubbed ModDrop) for learning cross-modality correlations while preserving uniqueness of each modality-specific representation. We present experiments on the ChaLearn 2014 Looking at People Challenge gesture recognition track, in which we placed first out of 17 teams. Fusing multiple modalities at several spatial and temporal scales leads to a significant increase in recognition rates, allowing the model to compensate for errors of the individual classifiers as well as noise in the separate channels. Futhermore, the proposed ModDrop training technique ensures robustness of the classifier to missing signals in one or several channels to produce meaningful predictions from any number of available modalities. In addition, we demonstrate the applicability of the proposed fusion scheme to modalities of arbitrary nature by experiments on the same dataset augmented with audio." ] }
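The ModDrop training strategy amounts to independently zeroing whole modality streams during training, so fusion layers cannot come to depend on any single input channel. A hypothetical minimal version (the function and parameter names are ours, not the authors'):

```python
import numpy as np

def moddrop(modalities, p_drop=0.3, rng=None):
    """Zero out entire modalities independently with probability p_drop.

    `modalities` is a list of per-modality feature arrays (e.g. RGB,
    depth, skeleton). At test time, pass p_drop=0 to keep all streams.
    """
    if rng is None:
        rng = np.random.default_rng()
    kept = []
    for m in modalities:
        keep = rng.random() >= p_drop
        kept.append(m if keep else np.zeros_like(m))
    return kept

rng = np.random.default_rng(0)
feats = [np.ones((2, 8)), np.ones((2, 8)), np.ones((2, 8))]
dropped = moddrop(feats, p_drop=1.0, rng=rng)   # every stream dropped
kept = moddrop(feats, p_drop=0.0, rng=rng)      # every stream kept
```

Unlike elementwise dropout, the mask here is shared across an entire modality, which is what trains the fusion layers to produce sensible predictions from any subset of available channels.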
1506.01911
633646897
Recent studies have demonstrated the power of recurrent neural networks for machine translation, image captioning and speech recognition. For the task of capturing temporal structure in video, however, there still remain numerous open research questions. Current research suggests using a simple temporal feature pooling strategy to take into account the temporal aspect of video. We demonstrate that this method is not sufficient for gesture recognition, where temporal information is more discriminative compared to general video classification tasks. We explore deep architectures for gesture recognition in video and propose a new end-to-end trainable neural network architecture incorporating temporal convolutions and bidirectional recurrence. Our main contributions are twofold; first, we show that recurrence is crucial for this task; second, we show that adding temporal convolutions leads to significant improvements. We evaluate the different approaches on the Montalbano gesture recognition dataset, where we achieve state-of-the-art results.
Most of the constituent parts in our architectures have been used before in other work for different purposes. Learning motion features with three-dimensional convolution layers has been studied by @cite_0 and @cite_1 to classify short clips of human actions on the KTH dataset. @cite_3 proposed a two-step scheme to model the temporal evolution of learned features with an LSTM. Finally, the combination of a CNN with an RNN has been used for speech recognition, image captioning and video narration.
{ "cite_N": [ "@cite_0", "@cite_1", "@cite_3" ], "mid": [ "1983364832", "1586730761", "28988658" ], "abstract": [ "We consider the automated recognition of human actions in surveillance videos. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. Convolutional neural networks (CNNs) are a type of deep model that can act directly on the raw inputs. However, such models are currently limited to handling 2D inputs. In this paper, we develop a novel 3D CNN model for action recognition. This model extracts features from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The developed model generates multiple channels of information from the input frames, and the final feature representation combines information from all channels. To further boost the performance, we propose regularizing the outputs with high-level features and combining the predictions of a variety of different models. We apply the developed models to recognize human actions in the real-world environment of airport surveillance videos, and they achieve superior performance in comparison to baseline methods.", "We address the problem of learning good features for understanding video data. We introduce a model that learns latent representations of image sequences from pairs of successive images. The convolutional architecture of our model allows it to scale to realistic image sizes whilst using a compact parametrization. In experiments on the NORB dataset, we show our model extracts latent \"flow fields\" which correspond to the transformation between the pair of input frames. 
We also use our model to extract low-level motion features in a multi-stage architecture for action recognition, demonstrating competitive performance on both the KTH and Hollywood2 datasets.", "We propose in this paper a fully automated deep model, which learns to classify human actions without using any prior knowledge. The first step of our scheme, based on the extension of Convolutional Neural Networks to 3D, automatically learns spatio-temporal features. A Recurrent Neural Network is then trained to classify each sequence considering the temporal evolution of the learned features for each timestep. Experimental results on the KTH dataset show that the proposed approach outperforms existing deep models, and gives comparable results with the best related works." ] }
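A 3D convolution layer, as used in the cited action recognition models, slides a (time, height, width) kernel over the stacked frames so that motion and appearance are captured jointly. A naive single-channel 'valid' cross-correlation for illustration (no strides, padding, or channel dimensions):

```python
import numpy as np

def conv3d_valid(video, kernel):
    """Naive valid-mode 3D cross-correlation over (time, height, width)."""
    T, H, W = video.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = (video[i:i + t, j:j + h, k:k + w] * kernel).sum()
    return out

# A temporal-difference kernel responds only where pixels change over time.
video = np.zeros((3, 4, 4))
video[1, 1, 1] = 1.0  # a single pixel lights up in the middle frame
k_temporal = np.array([[[1.0]], [[-1.0]]])  # shape (2, 1, 1)
resp = conv3d_valid(video, k_temporal)
```

The (2, 1, 1) kernel computes frame differences, the simplest spatio-temporal feature; learned 3D kernels extend the same operation over small spatial extents as well.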
1506.02004
2949291345
Current distributed representations of words show little resemblance to theories of lexical semantics. The former are dense and uninterpretable, the latter largely based on familiar, discrete classes (e.g., supersenses) and relations (e.g., synonymy and hypernymy). We propose methods that transform word vectors into sparse (and optionally binary) vectors. The resulting representations are more similar to the interpretable features typically used in NLP, though they are discovered automatically from raw corpora. Because the vectors are highly sparse, they are computationally easy to work with. Most importantly, we find that they outperform the original vectors on benchmark tasks.
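Sparse overcomplete codes of the kind proposed here are commonly obtained by l1-regularized sparse coding, which can be solved with iterative soft-thresholding (ISTA). This is a generic sketch of that optimizer, not the paper's exact method:

```python
import numpy as np

def ista(D, x, lam=0.1, iters=300):
    """Minimize 0.5*||x - D a||^2 + lam*||a||_1 over the code a."""
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ a - x)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

# Recover a 2-sparse code against a random overcomplete dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)         # unit-norm overcomplete columns
a_true = np.zeros(50); a_true[[3, 17]] = [1.5, -2.0]
x = D @ a_true
a_hat = ista(D, x, lam=0.05)
```

The l1 penalty drives most coordinates exactly to zero, which is what makes the resulting overcomplete vectors both sparse and cheap to work with downstream.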
To the best of our knowledge, there has been no prior work on obtaining overcomplete word vector representations that are sparse and categorical. However, overcomplete features have been widely used in image processing, computer vision @cite_35 @cite_40 and signal processing @cite_0 . Nonnegative matrix factorization is often used for interpretable coding of information @cite_54 @cite_57 @cite_17 .
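The nonnegative matrix factorization mentioned above is often computed with the Lee-Seung multiplicative updates, which keep both factors nonnegative throughout. A minimal sketch under the Frobenius-norm objective:

```python
import numpy as np

def nmf(X, rank, iters=500, seed=0, eps=1e-9):
    """Factor nonnegative X as W @ H via multiplicative updates."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        # Multiplicative updates: ratios of nonnegative terms, so the
        # factors can never turn negative.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Recover an exactly rank-2 nonnegative matrix.
rng = np.random.default_rng(1)
X = rng.random((10, 2)) @ rng.random((2, 8))
W, H = nmf(X, rank=2)
err = np.abs(X - W @ H).max()
```

Because only additive combinations of parts are allowed, the learned factors tend to be interpretable, which is the property the cited work exploits.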
{ "cite_N": [ "@cite_35", "@cite_54", "@cite_0", "@cite_57", "@cite_40", "@cite_17" ], "mid": [ "2105464873", "1902027874", "2097323375", "2104457585", "2140499889", "1246381107" ], "abstract": [ "The spatial receptive fields of simple cells in mammalian striate cortex have been reasonably well described physiologically and can be characterized as being localized, oriented, and bandpass, comparable with the basis functions of wavelet transforms. Previously, we have shown that these receptive field properties may be accounted for in terms of a strategy for producing a sparse distribution of output activity in response to natural images. Here, in addition to describing this work in a more expansive fashion, we examine the neurobiological implications of sparse coding. Of particular interest is the case when the code is overcomplete--i.e., when the number of code elements is greater than the effective dimensionality of the input space. Because the basis functions are non-orthogonal and not linearly independent of each other, sparsifying the code will recruit only those basis functions necessary for representing a given input, and so the input-output function will deviate from being purely linear. These deviations from linearity provide a potential explanation for the weak forms of non-linearity observed in the response properties of cortical simple cells, and they further make predictions about the expected interactions among units in response to naturalistic stimuli. © 1997 Elsevier Science Ltd", "Is perception of the whole based on perception of its parts? There is psychological and physiological evidence for parts-based representations in the brain, and certain computational theories of object recognition rely on such representations. But little is known about how brains or computers might learn the parts of objects. Here we demonstrate an algorithm for non-negative matrix factorization that is able to learn parts of faces and semantic features of text. 
This is in contrast to other methods, such as principal components analysis and vector quantization, that learn holistic, not parts-based, representations. Non-negative matrix factorization is distinguished from the other methods by its use of non-negativity constraints. These constraints lead to a parts-based representation because they allow only additive, not subtractive, combinations. When non-negative matrix factorization is implemented as a neural network, parts-based representations emerge by virtue of two properties: the firing rates of neurons are never negative and synaptic strengths do not change sign.", "Overcomplete representations are attracting interest in signal processing theory, particularly due to their potential to generate sparse representations of signals. However, in general, the problem of finding sparse representations must be unstable in the presence of noise. This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system. Considering an ideal underlying signal that has a sufficiently sparse representation, it is assumed that only a noisy version of it can be observed. Assuming further that the overcomplete system is incoherent, it is shown that the optimally sparse approximation to the noisy data differs from the optimally sparse decomposition of the ideal noiseless signal by at most a constant multiple of the noise level. As this optimal-sparsity method requires heavy (combinatorial) computational effort, approximation algorithms are considered. It is shown that similar stability is also available using the basis and the matching pursuit algorithms. 
Furthermore, it is shown that these methods result in sparse approximation of the noisy data that contains only terms also appearing in the unique sparsest representation of the ideal noiseless sparse signal.", "This paper combines linear sparse coding and nonnegative matrix factorization into sparse non-negative matrix factorization. In contrast to non-negative matrix factorization, the new model can learn much sparser representation via imposing sparseness constraints explicitly; in contrast to a close model, non-negative sparse coding, the new model can learn parts-based representation via fully multiplicative updates because of adapting a generalized Kullback-Leibler divergence instead of the conventional mean error for approximation error. Experiments on MIT-CBCL training face data demonstrate the effectiveness of the proposed method.", "In an overcomplete basis, the number of basis vectors is greater than the dimensionality of the input, and the representation of an input is not a unique combination of basis vectors. Overcomplete representations have been advocated because they have greater robustness in the presence of noise, can be sparser, and can have greater flexibility in matching structure in the data. Overcomplete codes have also been proposed as a model of some of the response properties of neurons in primary visual cortex. Previous work has focused on finding the best representation of a signal using a fixed overcomplete basis (or dictionary). We present an algorithm for learning an overcomplete basis by viewing it as a probabilistic model of the observed data. We show that overcomplete bases can yield a better approximation of the underlying statistical distribution of the data and can thus lead to greater coding efficiency. 
This can be viewed as a generalization of the technique of independent component analysis and provides a method for Bayesian reconstruction of signals in the presence of noise and for blind source separation when there are more sources than mixtures.", "This book provides a broad survey of models and efficient algorithms for Nonnegative Matrix Factorization (NMF). This includes NMF's various extensions and modifications, especially Nonnegative Tensor Factorizations (NTF) and Nonnegative Tucker Decompositions (NTD). NMF/NTF and their extensions are increasingly used as tools in signal and image processing, and data analysis, having garnered interest due to their capability to provide new insights and relevant information about the complex latent relationships in experimental data sets. It is suggested that NMF can provide meaningful components with physical interpretations; for example, in bioinformatics, NMF and its extensions have been successfully applied to gene expression, sequence analysis, the functional characterization of genes, clustering and text mining. As such, the authors focus on the algorithms that are most useful in practice, looking at the fastest, most robust, and suitable for large-scale models. Key features: Acts as a single source reference guide to NMF, collating information that is widely dispersed in current literature, including the authors' own recently developed techniques in the subject area. Uses generalized cost functions such as Bregman, Alpha and Beta divergences, to present practical implementations of several types of robust algorithms, in particular Multiplicative, Alternating Least Squares, Projected Gradient and Quasi Newton algorithms. Provides a comparative analysis of the different methods in order to identify approximation error and complexity. Includes pseudo codes and optimized MATLAB source codes for almost all algorithms presented in the book. 
The increasing interest in nonnegative matrix and tensor factorizations, as well as decompositions and sparse representation of data, will ensure that this book is essential reading for engineers, scientists, researchers, industry practitioners and graduate students across signal and image processing; neuroscience; data mining and data analysis; computer science; bioinformatics; speech processing; biomedical engineering; and multimedia." ] }
1506.02004
2949291345
Current distributed representations of words show little resemblance to theories of lexical semantics. The former are dense and uninterpretable, the latter largely based on familiar, discrete classes (e.g., supersenses) and relations (e.g., synonymy and hypernymy). We propose methods that transform word vectors into sparse (and optionally binary) vectors. The resulting representations are more similar to the interpretable features typically used in NLP, though they are discovered automatically from raw corpora. Because the vectors are highly sparse, they are computationally easy to work with. Most importantly, we find that they outperform the original vectors on benchmark tasks.
Sparsity constraints are in general useful in NLP problems @cite_56 @cite_42 @cite_20 , like POS tagging @cite_48 , dependency parsing @cite_30 , text classification @cite_45 , and representation learning @cite_46 @cite_34 . Including sparsity constraints in Bayesian models of lexical semantics like LDA in the form of sparse Dirichlet priors has been shown to be useful for downstream tasks like POS-tagging @cite_24 , and improving interpretation @cite_43 @cite_52 .
{ "cite_N": [ "@cite_30", "@cite_48", "@cite_42", "@cite_52", "@cite_56", "@cite_24", "@cite_43", "@cite_45", "@cite_46", "@cite_34", "@cite_20" ], "mid": [ "", "2168820925", "2132555912", "1489737923", "2076467305", "", "2148830595", "2103385190", "2163922914", "1808991731", "1542491098" ], "abstract": [ "", "We address the problem of learning structured unsupervised models with moment sparsity typical in many natural language induction tasks. For example, in unsupervised part-of-speech (POS) induction using hidden Markov models, we introduce a bias for words to be labeled by a small number of tags. In order to express this bias of posterior sparsity as opposed to parametric sparsity, we extend the posterior regularization framework [7]. We evaluate our methods on three languages — English, Bulgarian and Portuguese — showing consistent and significant accuracy improvement over EM-trained HMMs, and HMMs with sparsity-inducing Dirichlet priors trained by variational EM. We increase accuracy with respect to EM by 2.3%-6.5% in a purely unsupervised setting as well as in a weakly-supervised setting where the closed-class words are provided. Finally, we show improvements using our method when using the induced clusters as features of a discriminative model in a semi-supervised setting.", "SUMMARY We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm—the graphical lasso—that is remarkably fast: It solves a 1000-node problem (∼500 000 parameters) in at most a minute and is 30–4000 times faster than competing methods. It also provides a conceptual link between the exact problem and the approximation suggested by Meinshausen and Bühlmann (2006). 
We illustrate the method on some cell-signaling data from proteomics.", "We present sparse topical coding (STC), a non-probabilistic formulation of topic models for discovering latent representations of large collections of data. Unlike probabilistic topic models, STC relaxes the normalization constraint of admixture proportions and the constraint of defining a normalized likelihood function. Such relaxations make STC amenable to: 1) directly control the sparsity of inferred representations by using sparsity-inducing regularizers; 2) be seamlessly integrated with a convex error function (e.g., SVM hinge loss) for supervised learning; and 3) be efficiently learned with a simply structured coordinate descent algorithm. Our results demonstrate the advantages of STC and supervised MedSTC on identifying topical meanings of words and improving classification accuracy and time efficiency.", "A maximum entropy (ME) model is usually estimated so that it conforms to equality constraints on feature expectations. However, the equality constraint is inappropriate for sparse and therefore unreliable features. This study explores an ME model with box-type inequality constraints, where the equality can be violated to reflect this unreliability. We evaluate the inequality ME model using text categorization datasets. We also propose an extension of the inequality ME model, which results in a natural integration with the Gaussian MAP estimation. Experimental results demonstrate the advantage of the inequality models and the proposed extension.", "", "Latent variable models can be enriched with a multi-dimensional structure to consider the many latent factors in a text corpus, such as topic, author perspective and sentiment. We introduce factorial LDA, a multi-dimensional model in which a document is influenced by K different factors, and each word token depends on a K-dimensional vector of latent variables. 
Our model incorporates structured word priors and learns a sparse product of factors. Experiments on research abstracts show that our model can learn latent factors such as research topic, scientific discipline, and focus (methods vs. applications). Our modeling improvements reduce test perplexity and improve human interpretability of the discovered factors.", "We introduce three linguistically motivated structured regularizers based on parse trees, topics, and hierarchical word clusters for text categorization. These regularizers impose linguistic bias in feature weights, enabling us to incorporate prior knowledge into conventional bag-of-words models. We show that our structured regularizers consistently improve classification accuracies compared to standard regularizers that penalize features in isolation (such as lasso, ridge, and elastic net regularizers) on a range of datasets for various text prediction problems: topic classification, sentiment analysis, and forecasting.
This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning.", "We propose a new method for learning word representations using hierarchical regularization in sparse coding inspired by the linguistic study of word meanings. We show an efficient learning algorithm based on stochastic proximal methods that is significantly faster than previous approaches, making it possible to perform hierarchical sparse coding on a corpus of billions of word tokens. Experiments on various benchmark tasks---word similarity ranking, analogies, sentence completion, and sentiment analysis---demonstrate that the method outperforms or is competitive with state-of-the-art methods. Our word representations are available at this http URL .", "The subject invention provides for systems and methods that facilitate optimizing one or mores sets of training data by utilizing an Exponential distribution as the prior on one or more parameters in connection with a maximum entropy (maxent) model to mitigate overfitting. Maxent is also known as logistic regression. More specifically, the systems and methods can facilitate optimizing probabilities that are assigned to the training data for later use in machine learning processes, for example. In practice, training data can be assigned their respective weights and then a probability distribution can be assigned to those weights." ] }
1506.02075
580074167
Training large-scale question answering systems is complicated because training sources usually cover a small portion of the range of possible questions. This paper studies the impact of multitask and transfer learning for simple question answering; a setting for which the reasoning required to answer is quite easy, as long as one can retrieve the correct evidence given a question, which can be difficult in large-scale conditions. To this end, we introduce a new dataset of 100k questions that we use in conjunction with existing benchmarks. We conduct our study within the framework of Memory Networks (, 2015) because this perspective allows us to eventually scale up to more complex reasoning, and show that Memory Networks can be successfully trained to achieve excellent performance.
The first approaches to open-domain QA were search engine-based systems, where keywords extracted from the question are sent to a search engine, and the answer is extracted from the top results @cite_8 @cite_12 . This method has been adapted to KB-based QA @cite_8 @cite_12 , and obtained competitive results with respect to semantic parsing and embedding-based approaches.
{ "cite_N": [ "@cite_12", "@cite_8" ], "mid": [ "2151149636", "2156233801" ], "abstract": [ "As an increasing amount of RDF data is published as Linked Data, intuitive ways of accessing this data become more and more important. Question answering approaches have been proposed as a good compromise between intuitiveness and expressivity. Most question answering systems translate questions into triples which are matched against the RDF data to retrieve an answer, typically relying on some similarity metric. However, in many cases, triples do not represent a faithful representation of the semantic structure of the natural language question, with the result that more expressive queries can not be answered. To circumvent this problem, we present a novel approach that relies on a parse of the question to produce a SPARQL template that directly mirrors the internal structure of the question. This template is then instantiated using statistical entity identification and predicate detection. We show that this approach is competitive and discuss cases of questions that can be answered with our approach but not with competing approaches.", "The Linked Data initiative comprises structured databases in the Semantic-Web data model RDF. Exploring this heterogeneous data by structured query languages is tedious and error-prone even for skilled users. To ease the task, this paper presents a methodology for translating natural language questions into structured SPARQL queries over linked-data sources. Our method is based on an integer linear program to solve several disambiguation tasks jointly: the segmentation of questions into phrases; the mapping of phrases to semantic entities, classes, and relations; and the construction of SPARQL triple patterns. Our solution harnesses the rich type system provided by knowledge bases in the web of linked data, to constrain our semantic-coherence objective function. 
We present experiments on both the question translation and the resulting query answering." ] }
1506.02075
580074167
Training large-scale question answering systems is complicated because training sources usually cover a small portion of the range of possible questions. This paper studies the impact of multitask and transfer learning for simple question answering; a setting for which the reasoning required to answer is quite easy, as long as one can retrieve the correct evidence given a question, which can be difficult in large-scale conditions. To this end, we introduce a new dataset of 100k questions that we use in conjunction with existing benchmarks. We conduct our study within the framework of Memory Networks (, 2015) because this perspective allows us to eventually scale up to more complex reasoning, and show that Memory Networks can be successfully trained to achieve excellent performance.
Like our work, embedding-based methods for QA can be seen as simple MemNNs. The algorithms of @cite_16 @cite_3 use an approach similar to ours but are based on rather than , and rely purely on bag-of-words representations for both questions and facts. The approach of @cite_13 uses a different representation of questions, in which recognized entities are replaced by an entity token, and different training data using entity mentions from Wikipedia. Our model is closest to the one presented in @cite_2 , which is discussed in more detail in the experiments.
{ "cite_N": [ "@cite_16", "@cite_13", "@cite_3", "@cite_2" ], "mid": [ "2952792693", "2251289180", "2951008357", "2951622387" ], "abstract": [ "Building computers able to answer questions on any subject is a long standing goal of artificial intelligence. Promising progress has recently been achieved by methods that learn to map questions to logical forms or database queries. Such approaches can be effective but at the cost of either large amounts of human-labeled data or by defining lexicons and grammars tailored by practitioners. In this paper, we instead take the radical approach of learning to map questions to vectorial feature representations. By mapping answers into the same space one can query any knowledge base independent of its schema, without requiring any grammar or lexicon. Our method is trained with a new optimization procedure combining stochastic gradient descent followed by a fine-tuning step using the weak supervision provided by blending automatically and collaboratively generated resources. We empirically demonstrate that our model can capture meaningful signals from its noisy supervision leading to major improvements over paralex, the only existing method able to be trained on similar weakly labeled data.", "Transforming a natural language (NL) question into a corresponding logical form (LF) is central to the knowledge-based question answering (KB-QA) task. Unlike most previous methods that achieve this goal based on mappings between lexicalized phrases and logical predicates, this paper goes one step further and proposes a novel embedding-based approach that maps NL-questions into LFs for KBQA by leveraging semantic associations between lexical representations and KBproperties in the latent space. Experimental results demonstrate that our proposed method outperforms three KB-QA baseline methods on two publicly released QA data sets.", "We introduce a neural network with a recurrent attention model over a possibly large external memory. 
The architecture is a form of Memory Network (, 2015) but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.", "This paper presents a system which learns to answer questions on a broad range of topics from a knowledge base using few hand-crafted features. Our model learns low-dimensional embeddings of words and knowledge base constituents; these representations are used to score natural language questions against candidate answers. Training our system using pairs of questions and structured representations of their answers, and pairs of question paraphrases, yields competitive results on a competitive benchmark of the literature." ] }
1506.01644
2951462697
The calculation of the SIR distribution at the typical receiver (or, equivalently, the success probability of transmissions over the typical link) in Poisson bipolar and cellular networks with Rayleigh fading is relatively straightforward, but it only provides limited information on the success probabilities of the individual links. This paper introduces the notion of the meta distribution of the SIR, which is the distribution of the conditional success probability @math given the point process, and provides bounds, an exact analytical expression, and a simple approximation for it. The meta distribution provides fine-grained information on the SIR and answers questions such as "What fraction of users in a Poisson cellular network achieve 90% link reliability if the required SIR is 5 dB?". Interestingly, in the bipolar model, if the transmit probability @math is reduced while increasing the network density @math such that the density of concurrent transmitters @math stays constant as @math , @math degenerates to a constant, i.e., all links have exactly the same success probability in the limit, which is the one of the typical link. In contrast, in the cellular case, if the interfering base stations are active independently with probability @math , the variance of @math approaches a non-zero constant when @math is reduced to @math while keeping the mean success probability constant.
The calculation of the (mean) success probability @math in Poisson bipolar networks is provided in @cite_0 but can be traced back to @cite_13 . In @cite_15 , the moments @math of the link success probabilities are calculated under the assumption of no MAC scheme (i.e., all nodes always transmit), and bounds on the distribution are obtained.
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_13" ], "mid": [ "2132987440", "", "2106334285" ], "abstract": [ "An Aloha-type access control mechanism for large mobile, multihop, wireless networks is defined and analyzed. This access scheme is designed for the multihop context, where it is important to find a compromise between the spatial density of communications and the range of each transmission. More precisely, the analysis aims at optimizing the product of the number of simultaneously successful transmissions per unit of space (spatial reuse) by the average range of each transmission. The optimization is obtained via an averaging over all Poisson configurations for the location of interfering mobiles, where an exact evaluation of signal over noise ratio is possible. The main mathematical tools stem from stochastic geometry and are spatial versions of the so-called additive and max shot noise processes. The resulting medium access control (MAC) protocol exhibits some interesting properties. First, it can be implemented in a decentralized way provided some local geographic information is available to the mobiles. In addition, its transport capacity is proportional to the square root of the density of mobiles which is the upper bound of Gupta and Kumar. Finally, this protocol is self-adapting to the node density and it does not require prior knowledge of this density.", "", "The evaluation of optimum transmission ranges in a packet radio network in a fading and shadowing environment is considered. It is shown that the optimal probability of transmission of each user is independent of the system model and is p_o ≃ 0.271. 
The optimum range should be chosen so that on the average there are χ(G/b)^(2/η) terminals closer to the transmitter than the receiver, where G is the spread spectrum processing gain, b is the outage signal-to-noise ratio threshold, η is the power loss factor and χ depends on the system parameters and the propagation model. The performance index is given in terms of the optimal normalized expected progress per slot, given by θ(G/b)^(1/η) where θ is proportional to the square root of χ. A comparison with the results obtained by using deterministic propagation models shows, for typical values of fading and shadowing parameters, a reduction of up to 40% of the performance index." ] }
1506.01644
2951462697
The calculation of the SIR distribution at the typical receiver (or, equivalently, the success probability of transmissions over the typical link) in Poisson bipolar and cellular networks with Rayleigh fading is relatively straightforward, but it only provides limited information on the success probabilities of the individual links. This paper introduces the notion of the meta distribution of the SIR, which is the distribution of the conditional success probability @math given the point process, and provides bounds, an exact analytical expression, and a simple approximation for it. The meta distribution provides fine-grained information on the SIR and answers questions such as "What fraction of users in a Poisson cellular network achieve 90% link reliability if the required SIR is 5 dB?". Interestingly, in the bipolar model, if the transmit probability @math is reduced while increasing the network density @math such that the density of concurrent transmitters @math stays constant as @math , @math degenerates to a constant, i.e., all links have exactly the same success probability in the limit, which is the one of the typical link. In contrast, in the cellular case, if the interfering base stations are active independently with probability @math , the variance of @math approaches a non-zero constant when @math is reduced to @math while keeping the mean success probability constant.
For Poisson cellular models, where the typical user is associated with the nearest base station (strongest base station on average), the result was derived in @cite_6 and extended to the multi-tier Poisson case (HIP model) in @cite_14 .
{ "cite_N": [ "@cite_14", "@cite_6" ], "mid": [ "2058477717", "2150166076" ], "abstract": [ "Motivated by the ongoing discussion on coordinated multipoint in wireless cellular standard bodies, this paper considers the problem of base station cooperation in the downlink of heterogeneous cellular networks. The focus of this paper is the joint transmission scenario, where an ideal backhaul network allows a set of randomly located base stations, possibly belonging to different network tiers, to jointly transmit data, to mitigate intercell interference and hence improve coverage and spectral efficiency. Using tools from stochastic geometry, an integral expression for the network coverage probability is derived in the scenario where the typical user located at an arbitrary location, i.e., the general user, receives data from a pool of base stations that are selected based on their average received power levels. An expression for the coverage probability is also derived for the typical user located at the point equidistant from three base stations, which we refer to as the worst case user. In the special case where cooperation is limited to two base stations, numerical evaluations illustrate absolute gains in coverage probability of up to 17% for the general user and 24% for the worst case user compared with the noncooperative case. It is also shown that no diversity gain is achieved using noncoherent joint transmission, whereas full diversity gain can be achieved at the receiver if the transmitting base stations have channel state information.", "Cellular networks are usually modeled by placing the base stations on a grid, with mobile users either randomly scattered or placed deterministically. These models have been used extensively but suffer from being both highly idealized and not very tractable, so complex system-level simulations are used to evaluate coverage/outage probability and rate. More tractable models have long been desirable. 
We develop new general models for the multi-cell signal-to-interference-plus-noise ratio (SINR) using stochastic geometry. Under very general assumptions, the resulting expressions for the downlink SINR CCDF (equivalent to the coverage probability) involve quickly computable integrals, and in some practical special cases can be simplified to common integrals (e.g., the Q-function) or even to simple closed-form expressions. We also derive the mean rate, and then the coverage gain (and mean rate loss) from static frequency reuse. We compare our coverage predictions to the grid model and an actual base station deployment, and observe that the proposed model is pessimistic (a lower bound on coverage) whereas the grid model is optimistic, and that both are about equally accurate. In addition to being more tractable, the proposed model may better capture the increasingly opportunistic and dense placement of base stations in future networks." ] }
1506.01644
2951462697
The calculation of the SIR distribution at the typical receiver (or, equivalently, the success probability of transmissions over the typical link) in Poisson bipolar and cellular networks with Rayleigh fading is relatively straightforward, but it only provides limited information on the success probabilities of the individual links. This paper introduces the notion of the meta distribution of the SIR, which is the distribution of the conditional success probability @math given the point process, and provides bounds, an exact analytical expression, and a simple approximation for it. The meta distribution provides fine-grained information on the SIR and answers questions such as "What fraction of users in a Poisson cellular network achieve 90% link reliability if the required SIR is 5 dB?". Interestingly, in the bipolar model, if the transmit probability @math is reduced while increasing the network density @math such that the density of concurrent transmitters @math stays constant as @math , @math degenerates to a constant, i.e., all links have exactly the same success probability in the limit, which is the one of the typical link. In contrast, in the cellular case, if the interfering base stations are active independently with probability @math , the variance of @math approaches a non-zero constant when @math is reduced to @math while keeping the mean success probability constant.
The joint success probability of multiple transmissions in Poisson bipolar networks is calculated in @cite_3 . Similarly, @cite_8 determined the joint success probabilities of multiple transmissions (or transmissions over multiple resource blocks) for Poisson cellular networks. As we shall see, these joint probabilities are related to the integer moments @math of the conditional success probabilities.
{ "cite_N": [ "@cite_3", "@cite_8" ], "mid": [ "2019715307", "2092923645" ], "abstract": [ "The interference in wireless networks is temporally correlated, since the node or user locations are correlated over time and the interfering transmitters are a subset of these nodes. For a wireless network where (potential) interferers form a Poisson point process and use ALOHA for channel access, we calculate the joint success and outage probabilities of n transmissions over a reference link. The results are based on the diversity polynomial, which captures the temporal interference correlation. The joint outage probability is used to determine the diversity gain (as the SIR goes to infinity), and it turns out that there is no diversity gain in simple retransmission schemes, even with independent Rayleigh fading over all links. We also determine the complete joint SIR distribution for two transmissions and the distribution of the local delay, which is the time until a repeated transmission over the reference link succeeds.", "Inter-cell interference coordination (ICIC) and intra-cell diversity (ICD) play important roles in improving cellular downlink coverage. By modeling cellular base stations (BSs) as a homogeneous Poisson point process (PPP), this paper provides explicit finite-integral expressions for the coverage probability with ICIC and ICD, taking into account the temporal spectral correlation of the signal and interference. In addition, we show that, in the high-reliability regime, where the user outage probability goes to zero, ICIC and ICD affect the network coverage in drastically different ways: ICD can provide order gain, whereas ICIC only offers linear gain. In the high-spectral efficiency regime where the SIR threshold goes to infinity, the order difference in the coverage probability does not exist; however, a linear difference makes ICIC a better scheme than ICD for realistic path loss exponents. 
Consequently, depending on the SIR requirements, different combinations of ICIC and ICD optimize the coverage probability." ] }
1506.01394
640676175
This paper presents a systematic approach to exploiting TV white space (TVWS) for device-to-device (D2D) communications with the aid of the existing cellular infrastructure. The goal is to build a location-specific TVWS database, which provides a lookup table service for any D2D link to determine its maximum permitted emission power (MPEP) in an unlicensed digital TV (DTV) band. To achieve this goal, the idea of mobile crowd sensing is first introduced to collect active spectrum measurements from massive personal mobile devices. Considering the incompleteness of crowd measurements, we formulate the problem of unknown measurements recovery as a matrix completion problem and apply a powerful fixed point continuation algorithm to reconstruct the unknown elements from the known elements. By joint exploitation of the big spectrum data in its vicinity, each cellular base station further implements a nonlinear support vector machine algorithm to perform irregular coverage boundary detection of a licensed DTV transmitter. With the knowledge of the detected coverage boundary, an opportunistic spatial reuse algorithm is developed for each D2D link to determine its MPEP. Simulation results show that the proposed approach can successfully enable D2D communications in TVWS while satisfying the interference constraint from the licensed DTV services. In addition, to the best of our knowledge, this is the first attempt to explore and exploit TVWS inside the DTV protection region resulting from the shadowing effect. Potential application scenarios include communications between Internet-of-Vehicles nodes in underground parking and D2D communications in hotspots such as subways, stadiums, and airports.
During the past few years, the idea of enabling D2D communications in cellular networks for handling local traffic has gained growing attention. The prior studies in @cite_49 @cite_14 @cite_51 @cite_10 have shown that better resource utilization can be achieved by non-orthogonal spectrum sharing between D2D communications and cellular networks. Among many others, the authors of @cite_5 @cite_59 @cite_52 @cite_18 have proposed various interference management schemes to coordinate D2D and cellular users for achieving improved spatial reuse of the cellular spectrum. The work of this paper is complementary to prior studies and the goal here is to explore unlicensed TV spectrum for D2D communications with the assistance from cellular networks. The sharing of TV spectrum between D2D and DTV users brings unique challenges, mainly due to the lack of explicit signaling cooperation.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_52", "@cite_49", "@cite_59", "@cite_51", "@cite_5", "@cite_10" ], "mid": [ "1988834815", "2136375880", "2054767147", "2044576662", "2066106876", "2140656373", "2130171753", "1968495205" ], "abstract": [ "We present a new architecture to handle the ongoing explosive increase in the demand for video content in wireless networks. It is based on distributed caching of the content in femtobasestations with small or non-existing backhaul capacity but with considerable storage space, called helper nodes. We also consider using the wireless terminals themselves as caching helpers, which can distribute video through device-todevice communications. This approach allows an improvement in the video throughput without deployment of any additional infrastructure. The new architecture can improve video throughput by one to two orders-of-magnitude.", "In this article we propose to facilitate local peer-to-peer communication by a Device-to-Device (D2D) radio that operates as an underlay network to an IMT-Advanced cellular network. It is expected that local services may utilize mobile peer-to-peer communication instead of central server based communication for rich multimedia services. The main challenge of the underlay radio in a multi-cell environment is to limit the interference to the cellular network while achieving a reasonable link budget for the D2D radio. We propose a novel power control mechanism for D2D connections that share cellular uplink resources. The mechanism limits the maximum D2D transmit power utilizing cellular power control information of the devices in D2D communication. Thereby it enables underlaying D2D communication even in interference-limited networks with full load and without degrading the performance of the cellular network. Secondly, we study a single cell scenario consisting of a device communicating with the base station and two devices that communicate with each other. 
The results demonstrate that the D2D radio, sharing the same resources as the cellular network, can provide higher capacity (sum rate) compared to pure cellular communication where all the data is transmitted through the base station.", "Device-to-device (D2D) communication as an underlay to cellular networks can bring significant benefits to users' throughput. However, as D2D user equipments (TIEs) can cause interference to cellular TIEs, the scheduling and allocation of channel resources and power to D2D communication need elaborate coordination. In this paper, we propose a joint scheduling and resource allocation scheme to improve the performance of D2D communication. We take network throughput and TIEs' fairness into account by performing interference management. Specifically, we develop a Stackelberg game framework in which we group a cellular TIE and a D2D TIE to form a leader-follower pair. The cellular user is the leader, and the D2D TIE is the follower who buys channel resources from the leader. We analyze the equilibrium of the game, and propose an algorithm for joint scheduling and resource allocation. Finally, we perform computer simulations to study the performance of the proposed algorithm.", "Spectrum sharing is a novel opportunistic strategy to improve spectral efficiency of wireless networks. Much of the research to quantify such a gain is done under the premise that the spectrum is being used inefficiently by the primary network. Our main result is that even in a spectrally efficient network, device to device users can exploit the network topology to render gains in additional throughput. The focus will be on providing ad-hoc multihop access to a network for device to device users, that are transparent to the primary wireless cellular network, while sharing the primary network's resources.", "A new interference management strategy is proposed to enhance the overall capacity of cellular networks (CNs) and device-to-device (D2D) systems. 
We consider M out of K cellular user equipments (CUEs) and one D2D pair exploiting the same resources in the uplink (UL) period under the assumption of M multiple antennas at the base station (BS). First, we use the conventional mechanism which limits the maximum transmit power of the D2D transmitter so as not to generate harmful interference from D2D systems to CNs. Second, we propose a δD-interference limited area (ILA) control scheme to manage interference from CNs to D2D systems. The method does not allow the coexistence (i.e., use of the same resources) of CUEs and a D2D pair if the CUEs are located in the δD-ILA defined as the area in which the interference to signal ratio (ISR) at the D2D receiver is greater than the predetermined threshold, δD. Next, we analyze the coverage of the δD-ILA and derive the lower bound of the ergodic capacity as a closed form. Numerical results show that the δD-ILA based D2D gain is much greater than the conventional D2D gain, whereas the capacity loss to the CNs caused by using the δD-ILA is negligibly small.", "In this article device-to-device (D2D) communication underlaying a 3GPP LTE-Advanced cellular network is studied as an enabler of local services with limited interference impact on the primary cellular network. The approach of the study is a tight integration of D2D communication into an LTE-Advanced network. In particular, we propose mechanisms for D2D communication session setup and management involving procedures in the LTE System Architecture Evolution. Moreover, we present numerical results based on system simulations in an interference limited local area scenario. Our results show that D2D communication can increase the total throughput observed in the cell area.", "We consider Device-to-Device (D2D) communication underlaying cellular networks to improve local services. The system aims to optimize the throughput over the shared resources while fulfilling prioritized cellular service constraints. 
Optimum resource allocation and power control between the cellular and D2D connections that share the same resources are analyzed for different resource sharing modes. Optimality is discussed under practical constraints such as minimum and maximum spectral efficiency restrictions, and maximum transmit power or energy limitation. It is found that in most of the considered cases, optimum power control and resource allocation for the considered resource sharing modes can either be solved in closed form or searched from a finite set. The performance of the D2D underlay system is evaluated in both a single-cell scenario, and a Manhattan grid environment with multiple WINNER II A1 office buildings. The results show that by proper resource management, D2D communication can effectively improve the total throughput without generating harmful interference to cellular networks.", "This article studies direct communications between user equipments in the LTE-advanced cellular networks. Different from traditional device-to-device communication technologies such as Bluetooth and WiFi-direct, the operator controls the communication process to provide better user experience and make profit accordingly. The related usage cases and business models are analyzed. Some technical considerations are discussed, and a resource allocation and data transmission procedure is provided." ] }
1506.01394
640676175
This paper presents a systematic approach to exploiting TV white space (TVWS) for device-to-device (D2D) communications with the aid of the existing cellular infrastructure. The goal is to build a location-specific TVWS database, which provides a lookup table service for any D2D link to determine its maximum permitted emission power (MPEP) in an unlicensed digital TV (DTV) band. To achieve this goal, the idea of mobile crowd sensing is first introduced to collect active spectrum measurements from massive personal mobile devices. Considering the incompleteness of crowd measurements, we formulate the problem of unknown measurements recovery as a matrix completion problem and apply a powerful fixed point continuation algorithm to reconstruct the unknown elements from the known elements. By joint exploitation of the big spectrum data in its vicinity, each cellular base station further implements a nonlinear support vector machine algorithm to perform irregular coverage boundary detection of a licensed DTV transmitter. With the knowledge of the detected coverage boundary, an opportunistic spatial reuse algorithm is developed for each D2D link to determine its MPEP. Simulation results show that the proposed approach can successfully enable D2D communications in TVWS while satisfying the interference constraint from the licensed DTV services. In addition, to the best of our knowledge, this is the first attempt to explore and exploit TVWS inside the DTV protection region resulting from the shadowing effect. Potential application scenarios include communications between Internet-of-Vehicles nodes in underground parking and D2D communications in hotspots such as subways, stadiums, and airports.
The idea of opportunistic access of TVWS for cellular networks has also received increasing interest. A related work in @cite_50 proposed a spectrum sensing-based mechanism to explore TVWS for cellular users, where a simplified circular DTV coverage model was assumed and collaborative sensing was performed among neighboring cellular BSs with a fixed topology. In contrast, our proposed approach introduces mobile crowd sensing to collect spectrum measurements from massive numbers of personal devices and exploits TVWS at a much finer granularity by considering the practical irregular DTV coverage.
{ "cite_N": [ "@cite_50" ], "mid": [ "2104532745" ], "abstract": [ "Motivated by the Federal Communications Commission's recent approval of commercial unlicensed operations of some television (TV) spectrum, we propose to integrate cognitive radios (CRs) that operate on unoccupied TV bands with an existing cellular network to increase bandwidth for mobile users. The existing cellular infrastructure is used to enable the operation of such CRs. Because base stations (BSs) can sense spectrum and exchange the sensed information for the reliable detection of primary users (PUs) and white spaces, we propose a collaborative sensing mechanism based on cell topology, where the BS declares its cell to be PU-free when neither the BS nor its neighboring BSs detect any PU. This way, in a PU-free cell, the following two types of channels are available: 1) channels that are originally licensed for the cellular system and 2) CR channels that are discovered through spectrum sensing. Because the CR channels that operate on TV bands usually suffer less path loss than the cellular channels, we derive two important results. First, each user gains more capacity when accessing a cellular channel than an empty TV channel, as long as intercell interferences are caused by the same sources. Second, assigning TV bands to cell-edge users is better in maximizing cell capacity. These two effects and the performance of the proposed sensing mechanism are verified through numerical evaluation." ] }
1506.01394
640676175
This paper presents a systematic approach to exploiting TV white space (TVWS) for device-to-device (D2D) communications with the aid of the existing cellular infrastructure. The goal is to build a location-specific TVWS database, which provides a lookup table service for any D2D link to determine its maximum permitted emission power (MPEP) in an unlicensed digital TV (DTV) band. To achieve this goal, the idea of mobile crowd sensing is first introduced to collect active spectrum measurements from massive personal mobile devices. Considering the incompleteness of crowd measurements, we formulate the problem of unknown measurements recovery as a matrix completion problem and apply a powerful fixed point continuation algorithm to reconstruct the unknown elements from the known elements. By joint exploitation of the big spectrum data in its vicinity, each cellular base station further implements a nonlinear support vector machine algorithm to perform irregular coverage boundary detection of a licensed DTV transmitter. With the knowledge of the detected coverage boundary, an opportunistic spatial reuse algorithm is developed for each D2D link to determine its MPEP. Simulation results show that the proposed approach can successfully enable D2D communications in TVWS while satisfying the interference constraint from the licensed DTV services. In addition, to the best of our knowledge, this is the first attempt to explore and exploit TVWS inside the DTV protection region resulting from the shadowing effect. Potential application scenarios include communications between Internet-of-Vehicles nodes in underground parking and D2D communications in hotspots such as subways, stadiums, and airports.
In contrast to propagation-model-based estimates, little related work uses actual measurements to build spatial TVWS maps or databases. Representative work has been done by the European project FARAMIR (2010-2012) (see, e.g., @cite_32 @cite_33 @cite_44 @cite_43 ), where extensive spectrum measurements were conducted at several locations in Europe to provide a valuable basis for modeling spectrum use in time, frequency, and space, and to increase the radio environmental and spectral awareness of future wireless systems. In @cite_46 and @cite_23 , a large set of active measurements was collected to evaluate the accuracy of propagation models in making radio link predictions; the conclusion was that these models can be used for nationwide coverage planning but perform poorly at predicting path loss even in relatively simple outdoor environments, and that more complex models that consider a larger number of variables (e.g., terrain, climatic conditions, soil conductivity) do not necessarily make better predictions. These studies reinforce the motivation of this paper, which extends them by developing effective data mining algorithms to build a TVWS database from actual measurements.
{ "cite_N": [ "@cite_33", "@cite_32", "@cite_44", "@cite_43", "@cite_23", "@cite_46" ], "mid": [ "1989881065", "", "1970435247", "1975068280", "2093540532", "2110002846" ], "abstract": [ "In this paper, we study the availability of TV white spaces in Europe. Specifically, we focus on the 470-790 MHz UHF band, which will predominantly remain in use for TV broadcasting after the analog-to-digital switch-over and the assignment of the 800 MHz band to licensed services have been completed. The expected number of unused, available TV channels in any location of the 11 countries we studied is 56 percent when we adopt the statistical channel model of the ITU-R. Similarly, a person residing in these countries can expect to enjoy 49 percent unused TV channels. If, in addition, restrictions apply to the use of adjacent TV channels, these numbers reduce to 25 and 18 percent, respectively. These figures are significantly smaller than those recently reported for the United States. We also study how these results change when we use the Longley-Rice irregular terrain model instead. We show that while the overall expected availability of white spaces is essentially the same, the local variability of the available spectrum shows significant changes. This underlines the importance of using appropriate system models before making far-reaching conclusions.", "", "In this paper we present results from a week long measurement campaign on spectrum use in London (UK). The measurements were conducted in order to understand the characteristics and especially the variability in spectrum use over different types of areas in a major metropolitan area. Three spectrum analyzers were used in the measurement campaign, one used for long-term measurements at a single location in a given area, with the other two used to sample spectrum use around the stationary measurement point. 
This measurement approach yields much more detailed information about spectrum use than the typical single-location campaigns reported in the literature. We give a detailed description of the measurement campaign, including the equipment setup and rationale for the choice of areas in which the measurements were conducted. We also present results from the first exploratory data analysis of the obtained data, and study in detail the correlation structures and dynamics in spectrum use in temporal, spatial and frequency domains.", "In this demonstration we show an approach for constructing radio environment maps from massive data sets. Unlike earlier approaches, the methods used in the demonstrator can scale to millions of measurement points, enabling coverage prediction and other spatial estimation problems to be solved at country-wide scales. The demonstrator GUI enables attendees to construct different types of simulated measurement data sets, and perform spatial estimation based on those. The framework also allows the attendees to study the accuracy of the obtained estimates, as well as the computational time required for data processing.", "In this paper we provide a thorough and up to date survey of path loss prediction methods, spanning more than 60 years of fairly continuous research. These methods take a variety of approaches to modeling the signal attenuation between wireless transceivers: purely theoretical models, empirically fitted (often statistical) models, deterministic ray-optical models, and measurement-directed methods. Our work here extends and updates excellent, but now dated prior surveys of this important field. 
We provide a new taxonomy for reasoning about the similarities and differences of the many approaches and provide a brief but complete overview of the various methods as well as describing insights into future directions for research in this area.", "In this paper we analyze the efficacy of basic path loss models at predicting median path loss in urban environments. We attempt to bound the practical error of these models and look at how they may hinder practical wireless applications, and in particular dynamic spectrum access networks. This analysis is made using a large set of measurements from production networks in two US cities. We are able to show quantitatively what many experienced radio engineers understand: these models perform poorly at predicting path loss in even relatively simple outdoor environments and are of little practical use aside from making crude estimates of coverage in the least demanding applications. As a solution, we advocate a renewed focus on measurement-based, adaptive path loss models built on appropriate statistical methods." ] }
1506.01062
2064677871
We describe Quizz, a gamified crowdsourcing system that simultaneously assesses the knowledge of users and acquires new knowledge from them. Quizz operates by asking users to complete short quizzes on specific topics; as a user answers the quiz questions, Quizz estimates the user's competence. To acquire new knowledge, Quizz also incorporates questions for which we do not have a known answer; the answers given by competent users provide useful signals for selecting the correct answers for these questions. Quizz actively tries to identify knowledgeable users on the Internet by running advertising campaigns, effectively leveraging the targeting capabilities of existing, publicly available, ad placement services. Quizz quantifies the contributions of the users using information theory and sends feedback to the advertising system about each user. The feedback allows the ad targeting mechanism to further optimize ad placement. Our experiments, which involve over ten thousand users, confirm that we can crowdsource knowledge curation for niche and specialized topics, as the advertising network can automatically identify users with the desired expertise and interest in the given topic. We present controlled experiments that examine the effect of various incentive mechanisms, highlighting the need for having short-term rewards as goals, which incentivize the users to contribute. Finally, our cost-quality analysis indicates that the cost of our approach is below that of hiring workers through paid-crowdsourcing platforms, while offering the additional advantage of giving access to billions of potential users all over the planet, and being able to reach users with specialized expertise that is not typically available through existing labor marketplaces.
Quizz crowdsources the acquisition of knowledge by asking users to participate in thematically-focused quizzes, which also contain "collection" questions with no known answer. ReCAPTCHA @cite_15 is conceptually close, as it asks users to type two digitized words, of which one is known and the other is unknown; these correspond to our calibration and collection questions, respectively. In terms of using advertising to recruit users, @cite_17 used advertising to attract participants for a Wikipedia-editing experiment; however, there was no discussion of, or experimentation with, targeting or with optimizing the ad campaigns to maximize user contributions.
{ "cite_N": [ "@cite_15", "@cite_17" ], "mid": [ "2022710553", "1970188685" ], "abstract": [ "CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart) are widespread security measures on the World Wide Web that prevent automated programs from abusing online services. They do so by asking humans to perform a task that computers cannot yet perform, such as deciphering distorted characters. Our research explored whether such human effort can be channeled into a useful purpose: helping to digitize old printed material by asking users to decipher scanned words from books that computerized optical character recognition failed to recognize. We showed that this method can transcribe text with a word accuracy exceeding 99 , matching the guarantee of professional human transcribers. Our apparatus is deployed in more than 40,000 Web sites and has transcribed over 440 million words.", "Although existing work has explored both information extraction and community content creation, most research has focused on them in isolation. In contrast, we see the greatest leverage in the synergistic pairing of these methods as two interlocking feedback cycles. This paper explores the potential synergy promised if these cycles can be made to accelerate each other by exploiting the same edits to advance both community content creation and learning-based information extraction. We examine our proposed synergy in the context of Wikipedia infoboxes and the Kylin information extraction system. After developing and refining a set of interfaces to present the verification of Kylin extractions as a non primary task in the context of Wikipedia articles, we develop an innovative use of Web search advertising services to study people engaged in some other primary task. 
We demonstrate our proposed synergy by analyzing our deployment from two complementary perspectives: (1) we show we accelerate community content creation by using Kylin's information extraction to significantly increase the likelihood that a person visiting a Wikipedia article as a part of some other primary task will spontaneously choose to help improve the article's infobox, and (2) we show we accelerate information extraction by using contributions collected from people interacting with our designs to significantly improve Kylin's extraction performance." ] }
1506.01062
2064677871
We describe Quizz, a gamified crowdsourcing system that simultaneously assesses the knowledge of users and acquires new knowledge from them. Quizz operates by asking users to complete short quizzes on specific topics; as a user answers the quiz questions, Quizz estimates the user's competence. To acquire new knowledge, Quizz also incorporates questions for which we do not have a known answer; the answers given by competent users provide useful signals for selecting the correct answers for these questions. Quizz actively tries to identify knowledgeable users on the Internet by running advertising campaigns, effectively leveraging the targeting capabilities of existing, publicly available, ad placement services. Quizz quantifies the contributions of the users using information theory and sends feedback to the advertising system about each user. The feedback allows the ad targeting mechanism to further optimize ad placement. Our experiments, which involve over ten thousand users, confirm that we can crowdsource knowledge curation for niche and specialized topics, as the advertising network can automatically identify users with the desired expertise and interest in the given topic. We present controlled experiments that examine the effect of various incentive mechanisms, highlighting the need for having short-term rewards as goals, which incentivize the users to contribute. Finally, our cost-quality analysis indicates that the cost of our approach is below that of hiring workers through paid-crowdsourcing platforms, while offering the additional advantage of giving access to billions of potential users all over the planet, and being able to reach users with specialized expertise that is not typically available through existing labor marketplaces.
In our work, we explicitly assess the competence of users with calibration questions. Alternatively, we could use unsupervised techniques that estimate the competence of users through redundancy. Dawid and Skene @cite_7 presented an EM algorithm to estimate the quality of the participants in the absence of known ground truth, and a large number of recent papers @cite_24 @cite_20 @cite_33 have examined the same topic, significantly improving the state of the art. Closer to our work, @cite_2 also uses a Markov Decision Process to decide whether the answers provided by a user are promising enough to warrant a hiring decision. In the future, we plan to use these algorithms for quality inference together with our exploration-exploitation approach, to decide optimally how to combine assessment with knowledge acquisition. A key challenge is providing immediate feedback to the users when the questions have no certain answer.
{ "cite_N": [ "@cite_33", "@cite_7", "@cite_24", "@cite_2", "@cite_20" ], "mid": [ "2149273804", "9014458", "2134305421", "2109021302", "2142518823" ], "abstract": [ "Distributing labeling tasks among hundreds or thousands of annotators is an increasingly important method for annotating large datasets. We present a method for estimating the underlying value (e.g. the class) of each image from (noisy) annotations provided by multiple annotators. Our method is based on a model of the image formation and annotation process. Each image has different characteristics that are represented in an abstract Euclidean space. Each annotator is modeled as a multidimensional entity with variables representing competence, expertise and bias. This allows the model to discover and represent groups of annotators that have different sets of skills and knowledge, as well as groups of images that differ qualitatively. We find that our model predicts ground truth labels on both synthetic and real data more accurately than state of the art methods. Experiments also show that our model, starting from a set of binary labels, may discover rich information, such as different \"schools of thought\" amongst the annotators, and can group together images belonging to separate categories.", "In compiling a patient record many facets are subject to errors of measurement. A model is presented which allows individual error-rates to be estimated for polytomous facets even when the patient's \"true\" response is not available. The EM algorithm is shown to provide a slow but sure way of obtaining maximum likelihood estimates of the parameters of interest. Some preliminary experience is reported and the limitations of the method are described.", "For many supervised learning tasks it may be infeasible (or very expensive) to obtain objective and reliable labels. Instead, we can collect subjective (possibly noisy) labels from multiple experts or annotators. 
In practice, there is a substantial amount of disagreement among the annotators, and hence it is of great practical interest to address conventional supervised learning problems in this scenario. In this paper we describe a probabilistic approach for supervised learning when we have multiple annotators providing (possibly noisy) labels but no absolute gold standard. The proposed algorithm evaluates the different experts and also gives an estimate of the actual hidden labels. Experimental results indicate that the proposed method is superior to the commonly used majority voting baseline.", "We show how machine learning and inference can be harnessed to leverage the complementary strengths of humans and computational agents to solve crowdsourcing tasks. We construct a set of Bayesian predictive models from data and describe how the models operate within an overall crowd-sourcing architecture that combines the efforts of people and machine vision on the task of classifying celestial bodies defined within a citizens' science project named Galaxy Zoo. We show how learned probabilistic models can be used to fuse human and machine contributions and to predict the behaviors of workers. We employ multiple inferences in concert to guide decisions on hiring and routing workers to tasks so as to maximize the efficiency of large-scale crowdsourcing processes based on expected utility.", "Modern machine learning-based approaches to computer vision require very large databases of hand labeled images. Some contemporary vision systems already require on the order of millions of images for training (e.g., Omron face detector [9]). New Internet-based services allow for a large number of labelers to collaborate around the world at very low cost. 
However, using these services brings interesting theoretical and practical challenges: (1) The labelers may have wide ranging levels of expertise which are unknown a priori, and in some cases may be adversarial; (2) images may vary in their level of difficulty; and (3) multiple labels for the same image must be combined to provide an estimate of the actual label of the image. Probabilistic approaches provide a principled way to approach these problems. In this paper we present a probabilistic model and use it to simultaneously infer the label of each image, the expertise of each labeler, and the difficulty of each image. On both simulated and real data, we demonstrate that the model outperforms the commonly used \"Majority Vote\" heuristic for inferring image labels, and is robust to both noisy and adversarial labelers." ] }
1506.01565
2479026286
The topological structure of complex networks has fascinated researchers for several decades, resulting in the discovery of many universal properties and reoccurring characteristics of different kinds of networks. However, much less is known today about the network dynamics: indeed, complex networks in reality are not static, but rather dynamically evolve over time. Our paper is motivated by the empirical observation that network evolution patterns seem far from random, but exhibit structure. Moreover, the specific patterns appear to depend on the network type, contradicting the existence of a "one fits it all" model. However, we still lack observables to quantify these intuitions, as well as metrics to compare graph evolutions. Such observables and metrics are needed for extrapolating or predicting evolutions, as well as for interpolating graph evolutions. To explore the many faces of graph dynamics and to quantify temporal changes, this paper suggests to build upon the concept of centrality, a measure of node importance in a network. In particular, we introduce the notion of centrality distance, a natural similarity measure for two graphs which depends on a given centrality, characterizing the graph type. Intuitively, centrality distances reflect the extent to which (non-anonymous) node roles are different or, in case of dynamic graphs, have changed over time, between two graphs. We evaluate the centrality distance approach for five evolutionary models and seven real-world social and physical networks. Our results empirically show the usefulness of centrality distances for characterizing graph dynamics compared to a null-model of random evolution, and highlight the differences between the considered scenarios. Interestingly, our approach allows us to compare the dynamics of very different networks, in terms of scale and evolution speed.
Graph structures are often characterized by the frequency of small patterns called motifs @cite_17 @cite_0 @cite_13 @cite_37 , also known as graphlets @cite_24 , or subgraphs @cite_10 . Another important graph characterization, which is studied in this paper, is centrality @cite_25 . Dozens of different centrality indices have been defined over recent years, and their study is still ongoing, with no unified theory yet. We believe that our centrality-distance framework can provide new input for this discussion.
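To make the centrality-distance idea concrete, the following sketch compares two graph snapshots over the same (non-anonymous) node set via the L1 distance between their centrality vectors, instantiated here with plain degree centrality; the function names and the choice of L1 aggregation are our own illustration, not the paper's exact definition.

```python
def degree_centrality(edges, nodes):
    """Degree of each node in an undirected graph given as an edge list."""
    deg = {v: 0 for v in nodes}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def centrality_distance(edges_a, edges_b, nodes, centrality=degree_centrality):
    """L1 distance between the centrality vectors of two graphs on the
    same node set -- a minimal sketch of a centrality distance,
    parameterized by the chosen centrality index."""
    ca = centrality(edges_a, nodes)
    cb = centrality(edges_b, nodes)
    return sum(abs(ca[v] - cb[v]) for v in nodes)
```

For example, adding a single edge between two snapshots changes the degree of both endpoints by one, so the degree-based distance between the snapshots is 2; swapping in another centrality index changes which kinds of evolution the distance is sensitive to.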
{ "cite_N": [ "@cite_37", "@cite_10", "@cite_0", "@cite_24", "@cite_13", "@cite_25", "@cite_17" ], "mid": [ "2161917555", "2170643162", "2153624566", "2104812688", "1507763859", "", "2070722739" ], "abstract": [ "Motifs in a given network are small connected subnetworks that occur in significantly higher frequencies than would be expected in random networks. They have recently gathered much attention as a concept to uncover structural design principles of complex networks. [Bioinformatics, 2004] proposed a sampling algorithm for performing the computationally challenging task of detecting network motifs. However, among other drawbacks, this algorithm suffers from a sampling bias and scales poorly with increasing subgraph size. Based on a detailed analysis of the previous algorithm, we present a new algorithm for network motif detection which overcomes these drawbacks. Furthermore, we present an efficient new approach for estimating the frequency of subgraphs in random networks that, in contrast to previous approaches, does not require the explicit generation of random networks. Experiments on a testbed of biological networks show our new algorithms to be orders of magnitude faster than previous approaches, allowing for the detection of larger motifs in bigger networks than previously possible and thus facilitating deeper insight into the field.", "Network forms of organization, unlike hierarchies or marketplaces, are agile and are constantly adapting as new links are added and dysfunctional ones dropped. We review some of the theoretical and methodological accomplishments and challenges of contemporary research on organizational networks. We then offer an analytic framework that can be used to specify and statistically test simultaneously multilevel, multitheoretical hypotheses about the structural tendencies of organizational networks. 
We conclude with an empirical study illustrating some of the capabilities of this framework.", "Complex networks are studied across many fields of science. To uncover their structural design principles, we defined “network motifs,” patterns of interconnections occurring in complex networks at numbers that are significantly higher than those in randomized networks. We found such motifs in networks from biochemistry, neurobiology, ecology, and engineering. The motifs shared by ecological food webs were distinct from the motifs shared by the genetic networks of Escherichia coli and Saccharomyces cerevisiae or from those found in the World Wide Web. Similar motifs were found in networks that perform information processing, even though they describe elements as different as biomolecules within a cell and synaptic connections between neurons in Caenorhabditis elegans. Motifs may thus define universal classes of networks. This", "Motivation: Analogous to biological sequence comparison, comparing cellular networks is an important problem that could provide insight into biological understanding and therapeutics. For technical reasons, comparing large networks is computationally infeasible, and thus heuristics, such as the degree distribution, clustering coefficient, diameter, and relative graphlet frequency distribution have been sought. It is easy to demonstrate that two networks are different by simply showing a short list of properties in which they differ. It is much harder to show that two networks are similar, as it requires demonstrating their similarity in all of their exponentially many properties. Clearly, it is computationally prohibitive to analyze all network properties, but the larger the number of constraints we impose in determining network similarity, the more likely it is that the networks will truly be similar. 
Results: We introduce a new systematic measure of a network's local structure that imposes a large number of similarity constraints on networks being compared. In particular, we generalize the degree distribution, which measures the number of nodes 'touching' k edges, into distributions measuring the number of nodes 'touching' k graphlets, where graphlets are small connected non-isomorphic subgraphs of a large network. Our new measure of network local structure consists of 73 graphlet degree distributions of graphlets with 2-5 nodes, but it is easily extendible to a greater number of constraints (i.e. graphlets), if necessary, and the extensions are limited only by the available CPU. Furthermore, we show a way to combine the 73 graphlet degree distributions into a network 'agreement' measure which is a number between 0 and 1, where 1 means that networks have identical distributions and 0 means that they are far apart. Based on this new network agreement measure, we show that almost all of the 14 eukaryotic PPI networks, including human, resulting from various high-throughput experimental techniques, as well as from curated databases, are better modeled by geometric random graphs than by Erdos-Renyi, random scale-free, or Barabasi-Albert scale-free networks. Availability: Software executables are available upon request. Contact: natasha@ics.uci.edu", "Network motifs, patterns of local interconnections with potential functional properties, are important for the analysis of biological networks. To analyse motifs in networks the first step is to find patterns of interest. This paper presents 1) three different concepts for the determination of pattern frequency and 2) an algorithm to compute these frequencies. The different concepts of pattern frequency depend on the reuse of network elements. The presented algorithm finds all or highly frequent patterns under consideration of these concepts. 
The utility of this method is demonstrated by applying it to biological data.", "", "Coupled biological and chemical systems, neural networks, social interacting species, the Internet and the World Wide Web, are only a few examples of systems composed by a large number of highly interconnected dynamical units. The first approach to capture the global properties of such systems is to model them as graphs whose nodes represent the dynamical units, and whose links stand for the interactions between them. On the one hand, scientists have to cope with structural issues, such as characterizing the topology of a complex wiring architecture, revealing the unifying principles that are at the basis of real networks, and developing models to mimic the growth of a network and reproduce its structural properties. On the other hand, many relevant questions arise when studying complex networks’ dynamics, such as learning how a large ensemble of dynamical systems that interact through a complex wiring topology can behave collectively. We review the major concepts and results recently achieved in the study of the structure and dynamics of complex networks, and summarize the relevant applications of these ideas in many different disciplines, ranging from nonlinear science to biology, from statistical mechanics to medicine and engineering. © 2005 Elsevier B.V. All rights reserved." ] }
Among the most well-known evolutionary patterns are shrinking diameters and densification @cite_33 . Much recent work studies link prediction algorithms @cite_26 @cite_19 @cite_29 . Other efforts focus on methods for finding frequent, coherent, or dense temporal structures @cite_4 @cite_3 @cite_16 , or on the evolution of communities and user behavior @cite_31 @cite_22 .
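As an illustration of the proximity measures underlying many link prediction algorithms, the sketch below scores each non-adjacent node pair by its number of common neighbors, one of the simplest measures studied in this line of work; the function name and data layout are our own.

```python
from itertools import combinations

def common_neighbors_scores(adj):
    """Score each non-adjacent node pair by its number of common
    neighbors; pairs with high scores are predicted as future links.

    adj maps each node to the set of its current neighbors.
    Returns (pair, score) tuples sorted by descending score.
    """
    scores = {}
    for u, v in combinations(sorted(adj), 2):
        if v not in adj[u]:  # only score pairs not yet linked
            scores[(u, v)] = len(adj[u] & adj[v])
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

The same loop structure accommodates other proximity measures from this literature (e.g. Jaccard similarity or Adamic-Adar weighting) by replacing the intersection count.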
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_33", "@cite_22", "@cite_29", "@cite_3", "@cite_19", "@cite_31", "@cite_16" ], "mid": [ "1559129532", "2036868363", "2108614537", "2949722302", "2149490995", "1964669181", "2420733993", "2033228235", "2155640700" ], "abstract": [ "Many real-world complex networks, like actor-movie or file-provider relations, have a bipartite nature and evolve over time. Predicting links that will appear in them is one of the main approach to understand their dynamics. Only few works address the bipartite case, though, despite its high practical interest and the specific challenges it raises. We define in this paper the notion of internal links in bipartite graphs and propose a link prediction method based on them. We thoroughly describe the method and its variations, and experimentally compare it to a basic collaborative filtering approach. We present results obtained for a typical practical case. We reach the conclusion that our method performs very well, and we study in details how its parameters may influence obtained results.", "The problem of finding frequent patterns from graph-based datasets is an important one that finds applications in drug discovery, protein structure analysis, XML querying, and social network analysis among others. In this paper we propose a framework to mine frequent large-scale structures, formally defined as frequent topological structures, from graph datasets. Key elements of our framework include, fast algorithms for discovering frequent topological patterns based on the well known notion of a topological minor, algorithms for specifying and pushing constraints deep into the mining process for discovering constrained topological patterns, and mechanisms for specifying approximate matches when discovering frequent topological patterns in noisy datasets. 
We demonstrate the viability and scalability of the proposed algorithms on real and synthetic datasets and also discuss the use of the framework to discover meaningful topological structures from protein structure data.", "How do real graphs evolve over time? What are normal growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network or in a very small number of snapshots; these include heavy tails for in- and out-degree distributions, communities, small-world phenomena, and others. However, given the lack of information about network evolution over long periods, it has been hard to convert these findings into statements about trends over time. Here we study a wide range of real graphs, and we observe some surprising phenomena. First, most of these graphs densify over time with the number of edges growing superlinearly in the number of nodes. Second, the average distance between nodes often shrinks over time in contrast to the conventional wisdom that such distance parameters should increase slowly as a function of the number of nodes (like O(log n) or O(log(log n))). Existing graph generation models do not exhibit these types of behavior even at a qualitative level. We provide a new graph generator, based on a forest fire spreading process that has a simple, intuitive justification, requires very few parameters (like the flammability of nodes), and produces graphs exhibiting the full range of properties observed both in prior work and in the present study. We also notice that the forest fire model exhibits a sharp transition between sparse graphs and graphs that are densifying. Graphs with decreasing distance between the nodes are generated around this transition point. Last, we analyze the connection between the temporal evolution of the degree distribution and densification of a graph. We find that the two are fundamentally related. 
We also observe that real networks exhibit this type of relation between densification and the degree distribution.", "Data confidentiality policies at major social network providers have severely limited researchers' access to large-scale datasets. The biggest impact has been on the study of network dynamics, where researchers have studied citation graphs and content-sharing networks, but few have analyzed detailed dynamics in the massive social networks that dominate the web today. In this paper, we present results of analyzing detailed dynamics in the Renren social network, covering a period of 2 years when the network grew from 1 user to 19 million users and 199 million edges. Rather than validate a single model of network dynamics, we analyze dynamics at different granularities (user-, community- and network- wide) to determine how much, if any, users are influenced by dynamics processes at different scales. We observe in- dependent predictable processes at each level, and find that while the growth of communities has moderate and sustained impact on users, significant events such as network merge events have a strong but short-lived impact that is quickly dominated by the continuous arrival of new users.", "Targeting interest to match a user with services (e.g. news, products, games, advertisements) and predicting friendship to build connections among users are two fundamental tasks for social network systems. In this paper, we show that the information contained in interest networks (i.e. user-service interactions) and friendship networks (i.e. user-user connections) is highly correlated and mutually helpful. We propose a framework that exploits homophily to establish an integrated network linking a user to interested services and connecting different users with common interests, upon which both friendship and interests could be efficiently propagated. 
The proposed friendship-interest propagation (FIP) framework devises a factor-based random walk model to explain friendship connections, and simultaneously it uses a coupled latent factor model to uncover interest interactions. We discuss the flexibility of the framework in the choices of loss objectives and regularization penalties and benchmark different variants on the Yahoo! Pulse social networking system. Experiments demonstrate that by coupling friendship with interest, FIP achieves much higher performance on both interest targeting and friendship prediction than systems using only one source of information.", "How can we describe a large, dynamic graph over time? Is it random? If not, what are the most apparent deviations from randomness -- a dense block of actors that persists over time, or perhaps a star with many satellite nodes that appears with some fixed periodicity? In practice, these deviations indicate patterns -- for example, botnet attackers forming a bipartite core with their victims over the duration of an attack, family members bonding in a clique-like fashion over a difficult period of time, or research collaborations forming and fading away over the years. Which patterns exist in real-world dynamic graphs, and how can we find and rank them in terms of importance? These are exactly the problems we focus on in this work. Our main contributions are (a) formulation: we show how to formalize this problem as minimizing the encoding cost in a data compression paradigm, (b) algorithm: we propose TIMECRUNCH, an effective, scalable and parameter-free method for finding coherent, temporal patterns in dynamic graphs and (c) practicality: we apply our method to several large, diverse real-world datasets with up to 36 million edges and 6.3 million nodes. 
We show that TIMECRUNCH is able to compress these graphs by summarizing important temporal structures and finds patterns that agree with intuition.", "Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link prediction problem, and develop approaches to link prediction based on measures the \"proximity\" of nodes in a network. Experiments on large co-authorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures.", "Given publication titles and authors, what can we say about the evolution of scientific topics and communities over time? Which communities shrunk, which emerged, and which split, over time? And, when in time were the turning points? We propose TimeFall, which can automatically answer these questions given a social network graph that evolves over time. The main novelty of the proposed approach is that it needs no user-defined parameters, relying instead on the principle of minimum description length (MDL), to extract the communities, and to find good cut-points in time when communities change abruptly: a cut-point is good, if it leads to shorter data description. We illustrate our algorithm on synthetic and large real datasets, and we show that the results of the TimeFall agree with human intuition.", "How can we find communities in dynamic networks of socialinteractions, such as who calls whom, who emails whom, or who sells to whom? How can we spot discontinuity time-points in such streams of graphs, in an on-line, any-time fashion? We propose GraphScope, that addresses both problems, using information theoretic principles. Contrary to the majority of earlier methods, it needs no user-defined parameters. Moreover, it is designed to operate on large graphs, in a streaming fashion. 
We demonstrate the efficiency and effectiveness of our GraphScope on real datasets from several diverse domains. In all cases it produces meaningful time-evolving patterns that agree with human intuition." ] }
Another line of research extends the concept of centrality to dynamic graphs @cite_20 @cite_5 @cite_21 @cite_8 . Some researchers study how the importance of nodes changes over time in dynamic networks @cite_8 . Others define temporal centralities that rank nodes in dynamic networks and study their distribution over time @cite_5 @cite_21 . Time centralities, which describe the relative importance of time instants in dynamic networks, are proposed in @cite_20 . In contrast to this existing body of work, our goal is to facilitate the direct comparison of entire networks and their dynamics, not only parts thereof.
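A minimal sketch of what such temporal centralities compute: scanning timestamped contacts in time order yields earliest-arrival times, from which a temporal closeness score can be derived. This is a simplified rendering of the time-ordered-graph idea; the function names and the reciprocal-arrival scoring are our own illustration, and contact times are assumed positive.

```python
import math

def earliest_arrival(temporal_edges, source, nodes):
    """Earliest-arrival time from `source` to every node, scanning
    timestamped undirected contacts (u, v, t) in time order. A node is
    reached at time t if one endpoint was already reached by time t."""
    arrival = {v: math.inf for v in nodes}
    arrival[source] = 0
    for u, v, t in sorted(temporal_edges, key=lambda e: e[2]):
        if arrival[u] <= t and t < arrival[v]:
            arrival[v] = t
        if arrival[v] <= t and t < arrival[u]:
            arrival[u] = t
    return arrival

def temporal_closeness(temporal_edges, nodes):
    """Temporal closeness per node: sum of reciprocal earliest-arrival
    times to all other reachable nodes (higher = reaches others sooner)."""
    return {s: sum(1.0 / t for v, t in
                   earliest_arrival(temporal_edges, s, nodes).items()
                   if v != s and t not in (0, math.inf))
            for s in nodes}
```

Note how the ranking depends on contact ordering, not just topology: a late contact cannot relay information that arrived even later, which is exactly the effect static aggregation of the snapshots loses.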
{ "cite_N": [ "@cite_5", "@cite_21", "@cite_20", "@cite_8" ], "mid": [ "2327974553", "2119075126", "2158580119", "2040899521" ], "abstract": [ "Many networks are dynamic in that their topology changes rapidly—on the same time scale as the communications of interest between network nodes. Examples are the human contact networks involved in the transmission of disease, ad hoc radio networks between moving vehicles, and the transactions between principals in a market. While we have good models of static networks, so far these have been lacking for the dynamic case. In this paper we present a simple but powerful model, the time-ordered graph, which reduces a dynamic network to a static network with directed flows. This enables us to extend network properties such as vertex degree, closeness, and betweenness centrality metrics in a very natural way to the dynamic case. We then demonstrate how our model applies to a number of interesting edge cases, such as where the network connectivity depends on a small number of highly mobile vertices or edges, and show that our centrality definition allows us to track the evolution of connectivity. Finally we apply our model and techniques to two real-world dynamic graphs of human contact networks and then discuss the implication of temporal centrality metrics in the real world.", "Centrality is an important notion in network analysis and is used to measure the degree to which network structure contributes to the importance of a node in a network. While many different centrality measures exist, most of them apply to static networks. Most networks, on the other hand, are dynamic in nature, evolving over time through the addition or deletion of nodes and edges. A popular approach to analyzing such networks represents them by a static network that aggregates all edges observed over some time period. This approach, however, under or overestimates centrality of some nodes. 
We address this problem by introducing a novel centrality metric for dynamic network analysis. This metric exploits an intuition that in order for one node in a dynamic network to influence another over some period of time, there must exist a path that connects the source and destination nodes through intermediaries at different times. We demonstrate on an example network that the proposed metric leads to a very different ranking than analysis of an equivalent static network. We use dynamic centrality to study a dynamic citations network and contrast results to those reached by static network analysis.", "There is an ever-increasing interest in investigating dynamics in time-varying graphs (TVGs). Nevertheless, so far, the notion of centrality in TVG scenarios usually refers to metrics that assess the relative importance of nodes along the temporal evolution of the dynamic complex network. For some TVG scenarios, however, more important than identifying the central nodes under a given node centrality definition is identifying the key time instants for taking certain actions. In this paper, we thus introduce and investigate the notion of time centrality in TVGs. Analogously to node centrality, time centrality evaluates the relative importance of time instants in dynamic complex networks. In this context, we present two time centrality metrics related to diffusion processes. We evaluate the two defined metrics using both a real-world dataset representing an in-person contact dynamic network and a synthetically generated randomized TVG. We validate the concept of time centrality showing that diffusion starting at the best ranked time instants (i.e., the most central ones), according to our metrics, can perform a faster and more efficient diffusion process.", "The article introduces the concept of snapshot dynamic indices as centrality measures to analyse how the importance of nodes changes over time in dynamic networks. 
In particular, the dynamic stress-snapshot and dynamic betweenness snapshot are investigated. We present theoretical results on dynamic shortest paths in first-in first-out dynamic networks, and then introduce some algorithms for computing these indices in the discrete-time case. Finally, we present some experimental results exploring the algorithms' efficiency and illustrating the variation of the dynamic betweenness snapshot index for some sample dynamic networks." ] }
1506.01565
2479026286
The topological structure of complex networks has fascinated researchers for several decades, resulting in the discovery of many universal properties and reoccurring characteristics of different kinds of networks. However, much less is known today about the network dynamics: indeed, complex networks in reality are not static, but rather dynamically evolve over time. Our paper is motivated by the empirical observation that network evolution patterns seem far from random, but exhibit structure. Moreover, the specific patterns appear to depend on the network type, contradicting the existence of a "one fits it all" model. However, we still lack observables to quantify these intuitions, as well as metrics to compare graph evolutions. Such observables and metrics are needed for extrapolating or predicting evolutions, as well as for interpolating graph evolutions. To explore the many faces of graph dynamics and to quantify temporal changes, this paper suggests to build upon the concept of centrality, a measure of node importance in a network. In particular, we introduce the notion of centrality distance, a natural similarity measure for two graphs which depends on a given centrality, characterizing the graph type. Intuitively, centrality distances reflect the extent to which (non-anonymous) node roles are different or, in case of dynamic graphs, have changed over time, between two graphs. We evaluate the centrality distance approach for five evolutionary models and seven real-world social and physical networks. Our results empirically show the usefulness of centrality distances for characterizing graph dynamics compared to a null-model of random evolution, and highlight the differences between the considered scenarios. Interestingly, our approach allows us to compare the dynamics of very different networks, in terms of scale and evolution speed.
A closely related work, albeit using a different approach, is that of Kunegis @cite_30, who studies the evolution of networks from a spectral graph theory perspective. He argues that the graph spectrum describes a network at the global level, whereas eigenvectors describe a network at the local level, and uses these results to devise link prediction algorithms.
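As a rough sketch of the general family of spectral link-prediction methods this describes — keep the eigenvectors fixed and transform only the spectrum — the following applies an exponential graph kernel to a toy graph. The kernel choice, the parameter 0.5, and the example graph are illustrative assumptions, not Kunegis's learned spectral transformation:

```python
import numpy as np

def spectral_link_scores(adj, f=lambda lam: np.exp(0.5 * lam)):
    """Score potential links by applying a function f to the graph
    spectrum while keeping the eigenvectors fixed: F(A) = U f(L) U^T.
    adj must be a symmetric (undirected) adjacency matrix."""
    lam, U = np.linalg.eigh(adj)
    return U @ np.diag(f(lam)) @ U.T

# Toy undirected path graph 0-1-2: the exponential kernel gives the
# unconnected pair (0, 2) a positive score via its length-2 paths.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
S = spectral_link_scores(A)
```

The highest-scoring non-edges of `S` would be the predicted links; different choices of `f` recover different known predictors (path counting, kernels, rank reduction).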
{ "cite_N": [ "@cite_30" ], "mid": [ "1834762374" ], "abstract": [ "In this thesis, I study the spectral characteristics of large dynamic networks and formulate the spectral evolution model. The spectral evolution model applies to networks that evolve over time, and describes their spectral decompositions such as the eigenvalue and singular value decomposition. The spectral evolution model states that over time, the eigenvalues of a network change while its eigenvectors stay approximately constant. I validate the spectral evolution model empirically on over a hundred network datasets, and theoretically by showing that it generalizes a certain number of known link prediction functions, including graph kernels, path counting methods, rank reduction and triangle closing. The collection of datasets I use contains 118 distinct network datasets. One dataset, the signed social network of the Slashdot Zoo, was specifically extracted during work on this thesis. I also show that the spectral evolution model can be understood as a generalization of the preferential attachment model, if we consider growth in latent dimensions of a network individually. As applications of the spectral evolution model, I introduce two new link prediction algorithms that can be used for recommender systems, search engines, collaborative filtering, rating prediction, link sign prediction and more. The first link prediction algorithm reduces to a one-dimensional curve fitting problem from which a spectral transformation is learned. The second method uses extrapolation of eigenvalues to predict future eigenvalues. As special cases, I show that the spectral evolution model applies to directed, undirected, weighted, unweighted, signed and bipartite networks. For signed graphs, I introduce new applications of the Laplacian matrix for graph drawing, spectral clustering, and describe new Laplacian graph kernels. 
I also define the algebraic conflict, a measure of the conflict present in a signed graph based on the signed graph Laplacian. I describe the problem of link sign prediction spectrally, and introduce the signed resistance distance. For bipartite and directed graphs, I introduce the hyperbolic sine and odd Neumann kernels, which generalize the exponential and Neumann kernels for undirected unipartite graphs. I show that the problem of directed and bipartite link prediction are related by the fact that both can be solved by considering spectral evolution in the singular value decomposition." ] }
1506.01186
2544860310
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
A review of the early work on adaptive learning rates can be found in George and Powell @cite_29 . Duchi et al. @cite_21 proposed AdaGrad, one of the early adaptive methods, which estimates per-parameter learning rates from the gradients.
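The core diagonal AdaGrad update can be sketched in a few lines (an illustration of the idea only, not Duchi et al.'s full proximal framework; the toy quadratic loss and step size are assumptions for demonstration):

```python
import numpy as np

def adagrad_step(theta, grad, accum, lr=0.5, eps=1e-8):
    """One diagonal AdaGrad update: each coordinate's step is scaled
    by the root of its accumulated squared gradients, so frequently
    updated parameters receive ever smaller steps."""
    accum += grad ** 2
    theta -= lr * grad / (np.sqrt(accum) + eps)
    return theta, accum

# Toy quadratic loss f(theta) = 0.5 * ||theta||^2, whose gradient is theta.
theta = np.array([1.0, -2.0])
accum = np.zeros_like(theta)
for _ in range(100):
    theta, accum = adagrad_step(theta, theta.copy(), accum)
```

Note that the effective rate decays per parameter as the squared-gradient sum grows, which is both AdaGrad's strength on sparse features and the motivation for windowed variants such as AdaDelta.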
{ "cite_N": [ "@cite_29", "@cite_21" ], "mid": [ "2146917784", "2146502635" ], "abstract": [ "We address the problem of determining optimal stepsizes for estimating parameters in the context of approximate dynamic programming. The sufficient conditions for convergence of the stepsize rules have been known for 50 years, but practical computational work tends to use formulas with parameters that have to be tuned for specific applications. The problem is that in most applications in dynamic programming, observations for estimating a value function typically come from a data series that can be initially highly transient. The degree of transience affects the choice of stepsize parameters that produce the fastest convergence. In addition, the degree of initial transience can vary widely among the value function parameters for the same dynamic program. This paper reviews the literature on deterministic and stochastic stepsize rules, and derives formulas for optimal stepsizes for minimizing estimation error. This formula assumes certain parameters are known, and an approximation is proposed for the case where the parameters are unknown. Experimental work shows that the approximation provides faster convergence than other popular formulas.", "We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and online learning which employ proximal functions to control the gradient steps of the algorithm. 
We describe and analyze an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight. We give several efficient algorithms for empirical risk minimization problems with common and important regularization functions and domain constraints. We experimentally study our theoretical analysis and show that adaptive subgradient methods outperform state-of-the-art, yet non-adaptive, subgradient algorithms." ] }
1506.01186
2544860310
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
Schaul et al. @cite_16 discuss an adaptive learning rate based on a diagonal estimate of the Hessian of the gradients. One notable feature of their method is that the automatic procedure is allowed to decrease or increase the learning rate. However, their paper seems to limit the idea of increasing the learning rate to non-stationary problems. This paper, on the other hand, demonstrates that a schedule of increasing the learning rate is more universally valuable.
{ "cite_N": [ "@cite_16" ], "mid": [ "2950351588" ], "abstract": [ "The performance of stochastic gradient descent (SGD) depends critically on how learning rates are tuned and decreased over time. We propose a method to automatically adjust multiple learning rates so as to minimize the expected error at any one time. The method relies on local gradient variations across samples. In our approach, learning rates can increase as well as decrease, making it suitable for non-stationary problems. Using a number of convex and non-convex learning tasks, we show that the resulting algorithm matches the performance of SGD or other adaptive approaches with their best settings obtained through systematic search, and effectively removes the need for learning rate tuning." ] }
1506.01186
2544860310
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
Zeiler @cite_15 describes his AdaDelta method, which improves on AdaGrad based on two ideas: limiting the sum of squared gradients to a decaying window over time, and making the parameter update rule consistent with a units analysis of the relationship between the update and the Hessian.
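Those two ideas translate into a compact update rule, sketched below (variable names and the one-step toy example are illustrative):

```python
import numpy as np

def adadelta_step(theta, grad, eg2, edx2, rho=0.95, eps=1e-6):
    """One ADADELTA update: decaying averages replace AdaGrad's
    ever-growing sum, and the RMS of past updates in the numerator
    keeps the step in the same units as the parameters."""
    eg2 = rho * eg2 + (1 - rho) * grad ** 2              # windowed E[g^2]
    dx = -np.sqrt(edx2 + eps) / np.sqrt(eg2 + eps) * grad
    edx2 = rho * edx2 + (1 - rho) * dx ** 2              # windowed E[dx^2]
    return theta + dx, eg2, edx2

# First step on f(theta) = 0.5 * theta^2 (gradient = theta):
theta = np.array([1.0])
eg2, edx2 = np.zeros(1), np.zeros(1)
theta, eg2, edx2 = adadelta_step(theta, theta.copy(), eg2, edx2)
```

Because `edx2` starts at zero, the first steps are tiny (of order `sqrt(eps)`); the method then self-scales as the update statistics accumulate, with no manually tuned global rate.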
{ "cite_N": [ "@cite_15" ], "mid": [ "6908809" ], "abstract": [ "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment." ] }
1506.01186
2544860310
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
More recently, several papers have appeared on adaptive learning rates. Gulcehre and Bengio @cite_3 propose an adaptive learning rate algorithm, called AdaSecant, that utilizes the root mean square statistics and variance of the gradients. Dauphin et al. @cite_13 show that RMSProp provides a biased estimate and go on to describe another estimator, named ESGD, that is unbiased. Kingma and Ba @cite_11 introduce Adam, which is designed to combine the advantages of AdaGrad and RMSProp. Bache et al. @cite_5 propose exploiting solutions to a multi-armed bandit problem for learning rate selection. A summary and tutorial of adaptive learning rates can be found in a recent paper by Ruder @cite_19 .
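For concreteness, Adam's published update rule (first- and second-moment estimates with bias correction) can be sketched as follows; the step size and one-step toy example are illustrative:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: momentum-style first moment plus an
    RMSProp-style second moment, both bias-corrected for their
    zero initialisation (t is the 1-based step count)."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# One bias-corrected step on f(theta) = 0.5 * theta^2 at theta = 1:
theta, m, v = adam_step(np.array([1.0]), np.array([1.0]),
                        np.zeros(1), np.zeros(1), t=1)
```

On the first step the bias correction makes `m_hat / sqrt(v_hat)` equal to the gradient's sign, so the initial move is almost exactly `lr`, regardless of the gradient's scale.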
{ "cite_N": [ "@cite_3", "@cite_19", "@cite_5", "@cite_13", "@cite_11" ], "mid": [ "1813485996", "", "1570413585", "2951037516", "1522301498" ], "abstract": [ "Stochastic gradient algorithms have been the main focus of large-scale learning problems and they led to important successes in machine learning. The convergence of SGD depends on the careful choice of learning rate and the amount of the noise in stochastic estimates of the gradients. In this paper, we propose a new adaptive learning rate algorithm, which utilizes curvature information for automatically tuning the learning rates. The information about the element-wise curvature of the loss function is estimated from the local statistics of the stochastic first order gradients. We further propose a new variance reduction technique to speed up the convergence. In our preliminary experiments with deep neural networks, we obtained better performance compared to the popular stochastic gradient algorithms.", "", "We describe a general framework for online adaptation of optimization hyperparameters by hot swapping' their values during learning. We investigate this approach in the context of adaptive learning rate selection using an explore-exploit strategy from the multi-armed bandit literature. Experiments on a benchmark neural network show that the hot swapping approach leads to consistently better solutions compared to well-known alternatives such as AdaDelta and stochastic gradient with exhaustive hyperparameter search.", "Parameter-specific adaptive learning rate methods are computationally efficient ways to reduce the ill-conditioning problems encountered when training large deep networks. Following recent work that strongly suggests that most of the critical points encountered when training such networks are saddle points, we find how considering the presence of negative eigenvalues of the Hessian could help us design better suited adaptive learning rate schemes. 
We show that the popular Jacobi preconditioner has undesirable behavior in the presence of both positive and negative curvature, and present theoretical and empirical evidence that the so-called equilibration preconditioner is comparatively better suited to non-convex problems. We introduce a novel adaptive learning rate scheme, called ESGD, based on the equilibration preconditioner. Our experiments show that ESGD performs as well or better than RMSProp in terms of convergence speed, always clearly improving over plain stochastic gradient descent.", "We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm." ] }
1506.01186
2544860310
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
Adaptive learning rates are fundamentally different from CLR policies, and CLR can be combined with adaptive learning rates, as shown in Section . In addition, CLR policies are computationally simpler than adaptive learning rates. Of the methods above, CLR is most similar to the recently proposed SGDR method @cite_8 .
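The computational simplicity is easy to see: the basic triangular CLR policy reduces to a closed-form function of the iteration count, following the formulation given for the triangular policy (the boundary values below are illustrative):

```python
import math

def triangular_clr(iteration, step_size, base_lr, max_lr):
    """Cyclical learning rate, triangular policy: the rate climbs
    linearly from base_lr to max_lr over step_size iterations,
    then descends back to base_lr, and the cycle repeats."""
    cycle = math.floor(1 + iteration / (2 * step_size))
    x = abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

# One full cycle (2 * step_size iterations) with illustrative bounds:
lrs = [triangular_clr(i, step_size=100, base_lr=0.001, max_lr=0.006)
       for i in range(201)]
```

Unlike adaptive methods, this requires no gradient statistics at all: the rate at any iteration is a pure function of the schedule's three hyper-parameters.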
{ "cite_N": [ "@cite_8" ], "mid": [ "2518108298" ], "abstract": [ "Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on CIFAR-10 and CIFAR-100 datasets where we demonstrate new state-of-the-art results below 4% and 19%, respectively. Our source code is available at this https URL" ] }
1506.01414
1722180796
High availability is no longer just a business continuity concern. Users are increasingly dependent on devices that consume and produce data in ever increasing volumes. A popular solution is to have a central repository which each device accesses after centrally managed authentication. This model of use is facilitated by cloud based file synchronisation services such as Dropbox, OneDrive, Google Drive and Apple iCloud. Cloud architecture allows the provisioning of storage space with "always-on" access. Recent concerns over unauthorised access to third party systems and large scale exposure of private data have made an alternative solution desirable. These events have caused users to assess their own security practices and the level of trust placed in third party storage services. One option is BitTorrent Sync, a cloudless synchronisation utility that provides data availability and redundancy. This utility replicates files stored in shares to remote peers with access controlled by keys and permissions. While lacking the economies brought about by scale, complete control over data access has made this a popular solution. The ability to replicate data without oversight introduces risk of abuse by users as well as difficulties for forensic investigators. This paper suggests a methodology for investigation and analysis of the protocol to assist in the control of data flow across security perimeters.
This paper focuses on the network communication protocol employed by BTSync and the investigation thereof. The work presented here builds upon that of @cite_9 , which outlines the forensic analysis of the BTSync client application on a host machine: it describes procedures for identifying a current or previous installation of the BTSync application and for extracting secrets by gaining physical access to a machine's hard drive and performing a regular digital forensic investigation on its image. At the time of publication, there are no other academic publications focusing on BTSync. However, since BTSync shares a number of attributes and functionalities with cloud synchronisation services, e.g., Dropbox, Google Drive, etc., and is largely based on the BitTorrent protocol, this section outlines a number of related case studies and investigative techniques for these technologies.
{ "cite_N": [ "@cite_9" ], "mid": [ "2093283549" ], "abstract": [ "Keywords: BitTorrent Sync; Peer-to-Peer; Synchronisation; Privacy; Digital forensics. With professional and home Internet users becoming increasingly concerned with data protection and privacy, the privacy afforded by popular cloud file synchronisation services, such as Dropbox, OneDrive and Google Drive, is coming under scrutiny in the press. A number of these services have recently been reported as sharing information with governmental security agencies without warrants. BitTorrent Sync is seen as an alternative by many and has gathered over two million users by December 2013 (doubling since the previous month). The service is completely decentralised, offers much of the same synchronisation functionality of cloud powered services and utilises encryption for data transmission (and optionally for remote storage). The importance of understanding BitTorrent Sync and its resulting digital investigative implications for law enforcement and forensic investigators will be paramount to future investigations. This paper outlines the client application, its detected network traffic and identifies artefacts that may be of value as evidence for future digital investigations. © 2014 The Authors. Published by Elsevier Ltd on behalf of DFRWS. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/)." ] }
1506.01414
1722180796
High availability is no longer just a business continuity concern. Users are increasingly dependent on devices that consume and produce data in ever increasing volumes. A popular solution is to have a central repository which each device accesses after centrally managed authentication. This model of use is facilitated by cloud based file synchronisation services such as Dropbox, OneDrive, Google Drive and Apple iCloud. Cloud architecture allows the provisioning of storage space with "always-on" access. Recent concerns over unauthorised access to third party systems and large scale exposure of private data have made an alternative solution desirable. These events have caused users to assess their own security practices and the level of trust placed in third party storage services. One option is BitTorrent Sync, a cloudless synchronisation utility that provides data availability and redundancy. This utility replicates files stored in shares to remote peers with access controlled by keys and permissions. While lacking the economies brought about by scale, complete control over data access has made this a popular solution. The ability to replicate data without oversight introduces risk of abuse by users as well as difficulties for forensic investigators. This paper suggests a methodology for investigation and analysis of the protocol to assist in the control of data flow across security perimeters.
Numerous investigations have been made into identifying the peer information of those involved in BitTorrent swarms. Most of these publications focus on the investigation of the unauthorised distribution of copyrighted material @cite_15 , @cite_18 and @cite_12 . Depending on the focus of the investigation, peer information may be recorded for a particular piece of material under investigation or for a larger, landscape view of peer activity across numerous pieces of content.
{ "cite_N": [ "@cite_15", "@cite_18", "@cite_12" ], "mid": [ "2165603508", "2219846289", "2952766239" ], "abstract": [ "The objective of this paper is to introduce a model to guide the analysis of the impact of churn in P2P networks. Using this model, a variety of node membership scenarios is created. These scenarios are used to capture and analyze the performance trends of chord, a distributed hash table (DHT) based resource lookup protocol for peer-to-peer overlay networks. The performance study focuses both on the performance of routing and content retrieval. This study also identifies the limitations of various churn-alleviating mechanisms, frequently proposed in the literature. The study highlights the importance of the content nature and access pattern on the performance of P2P, DHT-based overlay networks. The results show that the type of content being accessed and the way the content is accessed has a significant impact on the performance of P2P networks", "The 5th Annual Symposium on Information Assurance (ASIA '10): Academic Track of 13th Annual NYS Cyber Security Conference, Albany, New York, USA, 16 - 17 June 2010", "This paper presents a set of exploits an adversary can use to continuously spy on most BitTorrent users of the Internet from a single machine and for a long period of time. Using these exploits for a period of 103 days, we collected 148 million IPs downloading 2 billion copies of contents. We identify the IP address of the content providers for 70 of the BitTorrent contents we spied on. We show that a few content providers inject most contents into BitTorrent and that those content providers are located in foreign data centers. We also show that an adversary can compromise the privacy of any peer in BitTorrent and identify the big downloaders that we define as the peers who subscribe to a large number of contents. This infringement on users' privacy poses a significant impediment to the legal adoption of BitTorrent." ] }
1506.01414
1722180796
High availability is no longer just a business continuity concern. Users are increasingly dependent on devices that consume and produce data in ever increasing volumes. A popular solution is to have a central repository which each device accesses after centrally managed authentication. This model of use is facilitated by cloud based file synchronisation services such as Dropbox, OneDrive, Google Drive and Apple iCloud. Cloud architecture allows the provisioning of storage space with "always-on" access. Recent concerns over unauthorised access to third party systems and large scale exposure of private data have made an alternative solution desirable. These events have caused users to assess their own security practices and the level of trust placed in third party storage services. One option is BitTorrent Sync, a cloudless synchronisation utility that provides data availability and redundancy. This utility replicates files stored in shares to remote peers with access controlled by keys and permissions. While lacking the economies brought about by scale, complete control over data access has made this a popular solution. The ability to replicate data without oversight introduces risk of abuse by users as well as difficulties for forensic investigators. This paper suggests a methodology for investigation and analysis of the protocol to assist in the control of data flow across security perimeters.
Forensics of cloud storage utilities can prove challenging, as presented in a 2012 paper @cite_17 . The difficulty arises because, unless complete local synchronisation has been performed, the data can be stored across various distributed locations. For example, it may reside only in temporary local files, in volatile storage (such as the system's RAM), or dispersed across multiple datacentres of the service provider's cloud storage facility. Any digital forensic examination of these systems must pay particular attention to the method of access, usually the Internet browser connecting to the service provider's storage access page (https://www.dropbox.com/login for Dropbox, for example). This temporary access highlights the importance of live forensic techniques when investigating a suspect machine, as a "pull out the plug" anti-forensic technique would not only lose access to any currently opened documents but may also lose any currently stored sessions or other authentication tokens held in RAM.
{ "cite_N": [ "@cite_17" ], "mid": [ "1991458033" ], "abstract": [ "Abstract The demand for cloud computing is increasing because of the popularity of digital devices and the wide use of the Internet. Among cloud computing services, most consumers use cloud storage services that provide mass storage. This is because these services give them various additional functions as well as storage. It is easy to access cloud storage services using smartphones. With increasing utilization, it is possible for malicious users to abuse cloud storage services. Therefore, a study on digital forensic investigation of cloud storage services is necessary. This paper proposes new procedure for investigating and analyzing the artifacts of all accessible devices, such as Windows system, Mac system, iPhone, and Android smartphone." ] }
1506.01414
1722180796
High availability is no longer just a business continuity concern. Users are increasingly dependent on devices that consume and produce data in ever increasing volumes. A popular solution is to have a central repository which each device accesses after centrally managed authentication. This model of use is facilitated by cloud based file synchronisation services such as Dropbox, OneDrive, Google Drive and Apple iCloud. Cloud architecture allows the provisioning of storage space with "always-on" access. Recent concerns over unauthorised access to third party systems and large scale exposure of private data have made an alternative solution desirable. These events have caused users to assess their own security practices and the level of trust placed in third party storage services. One option is BitTorrent Sync, a cloudless synchronisation utility that provides data availability and redundancy. This utility replicates files stored in shares to remote peers with access controlled by keys and permissions. While lacking the economies brought about by scale, complete control over data access has made this a popular solution. The ability to replicate data without oversight introduces risk of abuse by users as well as difficulties for forensic investigators. This paper suggests a methodology for investigation and analysis of the protocol to assist in the control of data flow across security perimeters.
In 2013, Martini and Choo published the results of a cloud storage forensics investigation of the ownCloud service, covering both the client and the server elements of the service @cite_11 . They found artefacts on both the client machine and the server that facilitate the identification of files stored by different users. The mobile client application was found to store authentication data and file metadata relating to files stored on the device itself and to files stored only on the server. Using the client artefacts, the authors were able to decrypt the associated files stored on the server instance.
{ "cite_N": [ "@cite_11" ], "mid": [ "2020315857" ], "abstract": [ "The storage as a service (StaaS) cloud computing architecture is showing significant growth as users adopt the capability to store data in the cloud environment across a range of devices. Cloud (storage) forensics has recently emerged as a salient area of inquiry. Using a widely used open source cloud StaaS application - ownCloud - as a case study, we document a series of digital forensic experiments with the aim of providing forensic researchers and practitioners with an in-depth understanding of the artefacts required to undertake cloud storage forensics. Our experiments focus upon client and server artefacts, which are categories of potential evidential data specified before commencement of the experiments. A number of digital forensic artefacts are found as part of these experiments and are used to support the selection of artefact categories and provide a technical summary to practitioners of artefact types. Finally we provide some general guidelines for future forensic analysis on open source StaaS products and recommendations for future work." ] }
1506.01414
1722180796
High availability is no longer just a business continuity concern. Users are increasingly dependent on devices that consume and produce data in ever increasing volumes. A popular solution is to have a central repository which each device accesses after centrally managed authentication. This model of use is facilitated by cloud-based file synchronisation services such as Dropbox, OneDrive, Google Drive and Apple iCloud. Cloud architecture allows the provisioning of storage space with "always-on" access. Recent concerns over unauthorised access to third party systems and large scale exposure of private data have made an alternative solution desirable. These events have caused users to assess their own security practices and the level of trust placed in third party storage services. One option is BitTorrent Sync, a cloudless synchronisation utility that provides data availability and redundancy. This utility replicates files stored in shares to remote peers with access controlled by keys and permissions. While lacking the economies brought about by scale, complete control over data access has made this a popular solution. The ability to replicate data without oversight introduces risk of abuse by users as well as difficulties for forensic investigators. This paper suggests a methodology for investigation and analysis of the protocol to assist in the control of data flow across security perimeters.
A 2014 case study on BTSync showed that the remote recovery of evidence from a BTSync shared folder can enable the recovery of evidence that is no longer accessible on the local machine @cite_10 . This evidence may have been securely deleted, corrupted or overwritten on the local device, or viewed (but not stored) on a mobile device using the BitTorrent Sync app. The paper outlines a number of entry points from the local machine into the investigation and into the remote recovery of such evidence, covering both local and network sources.
{ "cite_N": [ "@cite_10" ], "mid": [ "1495154457" ], "abstract": [ "6th International Conference on Digital Forensics and Cyber Crime (ICDF2C 2014), New Haven, Connecticut, United States, 18-20 September 2014" ] }
1506.01432
2952358528
Markov logic uses weighted formulas to compactly encode a probability distribution over possible worlds. Despite the use of logical formulas, Markov logic networks (MLNs) can be difficult to interpret, due to the often counter-intuitive meaning of their weights. To address this issue, we propose a method to construct a possibilistic logic theory that exactly captures what can be derived from a given MLN using maximum a posteriori (MAP) inference. Unfortunately, the size of this theory is exponential in general. We therefore also propose two methods which can derive compact theories that still capture MAP inference, but only for specific types of evidence. These theories can be used, among others, to make explicit the hidden assumptions underlying an MLN or to explain the predictions it makes.
In this paper, we have mainly focused on MAP inference. An interesting question is whether it would be possible to construct a (possibilistic) logic base that captures the set of accepted beliefs encoded by a probability distribution, where @math is accepted if @math . Unfortunately, the results in @cite_18 show that this is only possible for the limited class of so-called big-stepped probability distributions. In practice, this means that we would have to define a partition of the set of possible worlds, such that the probability distribution over the partition classes is big-stepped, and only capture the beliefs that are encoded by the latter, less informative, probability distribution. A similar approach was taken in @cite_7 to learn default rules from data.
{ "cite_N": [ "@cite_18", "@cite_7" ], "mid": [ "2150172747", "2084365007" ], "abstract": [ "An accepted belief is a proposition considered likely enough by an agent, to be inferred from as if it were true. This paper bridges the gap between probabilistic and logical representations of accepted beliefs. To this end, natural properties of relations on propositions, describing relative strength of belief, are augmented with some conditions ensuring that accepted beliefs form a deductively closed set. This requirement turns out to be very restrictive. In particular, it is shown that the sets of accepted belief of an agent can always be derived from a family of possibility rankings of states. An agent accepts a proposition in a given context if this proposition is considered more possible than its negation in this context, for all possibility rankings in the family. These results are closely connected to the non-monotonic 'preferential' inference system of Kraus, Lehmann and Magidor and the so-called plausibility functions of Friedman and Halpern. The extent to which probability theory is compatible with acceptance relations is laid bare. A solution to the lottery paradox, which is considered as a major impediment to the use of non-monotonic inference, is proposed using a special kind of probabilities (called lexicographic, or big-stepped). The setting of acceptance relations also proposes another way of approaching the theory of belief change after the works of Gärdenfors and colleagues. Our view considers the acceptance relation as a primitive object from which belief sets are derived in various contexts.", "This paper deals with the extraction of default rules from a database of examples. The proposed approach is based on a special kind of probability distributions, called 'big-stepped probabilities', which are known to provide a semantics for non-monotonic reasoning. The rules which are learnt are genuine default rules, which could be used (under some conditions) in a non-monotonic reasoning system and can be encoded in possibilistic logic." ] }
1506.01077
2952516752
Biclustering involves the simultaneous clustering of objects and their attributes, thus defining local two-way clustering models. Recently, efficient algorithms were conceived to enumerate all biclusters in real-valued datasets. In this case, the solution composes a complete set of maximal and non-redundant biclusters. However, the ability to enumerate biclusters revealed a challenging scenario: in noisy datasets, each true bicluster may become highly fragmented, with a high degree of overlap. This prevents a direct analysis of the obtained results. To reverse the fragmentation, we propose here two approaches for properly aggregating the whole set of enumerated biclusters: one based on single linkage and the other directly exploiting the overlap rate. Both proposals were compared with each other and with the actual state of the art in several experiments, and they not only significantly reduced the number of biclusters but also consistently increased the quality of the solution.
Triclustering was proposed by Hanczar & Nadif @cite_15 as a biclustering ensemble algorithm. First, they transform each bicluster into a binary matrix. After that, they propose a triclustering algorithm to find the @math most relevant biclusters. As they were able to improve the biological relevance of biclustering for microarray data @cite_0 , we will use this method as a contender in this paper. One major point in ensemble methods is that we want to combine the results, reinforcing the biclusters that seem important for several components and discarding the ones that may come from noise. Due to the way the triclustering algorithm handles the optimization step, non-maximal biclusters can interfere in the final results. Bicluster aggregation is slightly different from bicluster ensemble. While in ensemble tasks we discard biclusters that seem unimportant and combine the ones that contribute the most to the solution, in bicluster aggregation we never discard any bicluster. Given this characteristic, the bicluster ensemble solution is expected to show a high with an impacted (see Section ), as it eliminates biclusters.
{ "cite_N": [ "@cite_0", "@cite_15" ], "mid": [ "1985928293", "2087977949" ], "abstract": [ "Biclustering has undoubtedly become a current tool for microarray data analysis. Its objective is to identify a set of biclusters, i.e. sub-matrices of the original data matrix, presenting a particular pattern. A large number of biclustering methods have already been proposed for gene expression data. Based on ensemble methods, we propose a new approach improving the performance of all existing biclustering algorithms. Further, we show that ensemble biclustering can be seen as a problem of binary triclustering and propose an algorithm to solve it. The results on three public microarray datasets show that the ensemble approach produces better biclusters than a single approach.", "Several biclustering algorithms have been proposed in different fields of microarray data analysis. We present a new approach that improves their performance by using ensemble methods. An ensemble biclustering is considered and formalized by a problem of binary triclustering. We propose a simple and efficient algorithm to solve it. To illustrate the interest of our ensemble approach, numerical experiments are performed on both artificial and real datasets with two biclustering algorithms commonly used in bioinformatics." ] }