aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1904.07312 | 2936027599 | Drowsiness can put lives of many drivers and workers in danger. It is important to design practical and easy-to-deploy real-world systems to detect the onset of drowsiness. In this paper, we address early drowsiness detection, which can provide early alerts and offer subjects ample time to react. We present a large and public real-life dataset of 60 subjects, with video segments labeled as alert, low vigilant, or drowsy. This dataset consists of around 30 hours of video, with contents ranging from subtle signs of drowsiness to more obvious ones. We also benchmark a temporal model for our dataset, which has low computational and storage demands. The core of our proposed method is a Hierarchical Multiscale Long Short-Term Memory (HM-LSTM) network that is fed by detected blink features in sequence. Our experiments demonstrate the relationship between the sequential blink features and drowsiness. In the experimental results, our baseline method produces higher accuracy than human judgment. | Park et al. @cite_23 fine-tune three CNNs and apply an SVM to the combined features of those three networks to classify each frame into four classes: alert, yawning, nodding, and drowsy with blinking. The model is trained on the NTHU drowsiness dataset, which is based on pretended drowsiness, and tested on the evaluation portion of the NTHU dataset, which includes 20 videos of only four people, resulting in 73.06% accuracy. Bhargava et al. @cite_15 show how a distilled deep network can be of use for embedded systems. This is relevant to the baseline method proposed in this paper, which also aims for low computational requirements. The reported accuracy in @cite_15 is 89.5%. | {
"cite_N": [
"@cite_15",
"@cite_23"
],
"mid": [
"2738749209",
"2604676963"
],
"abstract": [
"Driver’s status is crucial because one of the main reasons for motor vehicular accidents is related to driver’s inattention or drowsiness. A drowsiness detector on a car can reduce numerous accidents. Accidents occur because of a single moment of negligence, thus a driver monitoring system which works in real time is necessary. This detector should be deployable to an embedded device and perform at high accuracy. In this paper, a novel approach towards real-time drowsiness detection based on deep learning, which can be implemented on a low-cost embedded board and performs with high accuracy, is proposed. The main contribution of our paper is the compression of a heavy baseline model to a lightweight model deployable to an embedded board. Moreover, a minimized network structure was designed based on facial landmark input to recognize whether the driver is drowsy or not. The proposed model achieved an accuracy of 89.5% on 3-class classification and a speed of 14.9 frames per second (FPS) on Jetson TK1.",
"Statistics have shown that 20% of all road accidents are fatigue-related, and drowsiness detection is a car safety algorithm that can alert a snoozing driver in hopes of preventing an accident. This paper proposes a deep architecture referred to as deep drowsiness detection (DDD) network for learning effective features and detecting drowsiness given an RGB input video of a driver. The DDD network consists of three deep networks for attaining global robustness to background and environmental variations and learning local facial movements and head gestures important for reliable detection. The outputs of the three networks are integrated and fed to a softmax classifier for drowsiness detection. Experimental results show that DDD achieves 73.06% detection accuracy on the NTHU-drowsy driver detection benchmark dataset."
]
} |
1904.07271 | 2963761620 | We consider the problem of makespan minimization on unrelated machines when job sizes are stochastic. The goal is to find a fixed assignment of jobs to machines, to minimize the expected value of the maximum load over all the machines. For the identical machines special case when the size of a job is the same across all machines, a constant-factor approximation algorithm has long been known. Our main result is the first constant-factor approximation algorithm for the general case of unrelated machines. This is achieved by (i) formulating a lower bound using an exponential-size linear program that is efficiently computable, and (ii) rounding this linear program while satisfying only a specific subset of the constraints that still suffice to bound the expected makespan. We also consider two generalizations. The first is the budgeted makespan minimization problem, where the goal is to minimize the expected makespan subject to scheduling a target number (or reward) of jobs. We extend our main result to obtain a constant-factor approximation algorithm for this problem. The second problem involves @math -norm objectives, where we want to minimize the expected q-norm of the machine loads. Here we give an @math -approximation algorithm, which is a constant-factor approximation for any fixed @math . | Very recently (after the preliminary version of this paper appeared), Molinaro @cite_9 obtained an @math -approximation algorithm for the stochastic @math -norm problem for all @math , which improves over Theorem . In addition to the techniques in our paper, the main idea in @cite_9 is to use a different notion of effective size, based on the L-function method @cite_16 . We still present our algorithm analysis for Theorem as it is conceptually simpler and may provide better constant factors for small @math . | {
"cite_N": [
"@cite_9",
"@cite_16"
],
"mid": [
"2949276956",
"2012362409"
],
"abstract": [
"This paper considers stochastic optimization problems whose objective functions involve powers of random variables. For example, consider the classic Stochastic lp Load Balancing Problem (SLBp): There are @math machines and @math jobs, and known independent random variables @math describe the load incurred on machine @math if we assign job @math to it. The goal is to assign each job to machines in order to minimize the expected @math -norm of the total load on the machines. While convex relaxations represent one of the most powerful algorithmic tools, in problems such as SLBp the main difficulty is to capture the objective function in a way that only depends on each random variable separately. We show how to capture @math -power-type objectives in such a separable way by using the @math -function method, introduced by Latała to relate the moments of sums of random variables to the individual marginals. We show how this quickly leads to a constant-factor approximation for a very general subset selection problem with @math -moment objective. Moreover, we give a constant-factor approximation for SLBp, improving on the recent @math -approximation of [, SODA 18]. Here the application of the method is much more involved. In particular, we need to sharply connect the expected @math -norm of a random vector with the @math -moments of its marginals (machine loads), taking into account simultaneously the different scales of the loads that are incurred by an unknown assignment.",
"For the sum S = Σ_i X_i of a sequence (X_i) of independent symmetric (or nonnegative) random variables, we give lower and upper estimates of the moments of S. The estimates are exact, up to some universal constants, and extend the previous results for particular types of variables X_i."
]
} |
1904.06764 | 2937599623 | Physical agents that can autonomously generate engaging, life-like behaviour will lead to more responsive and interesting robots and other autonomous systems. Although many advances have been made for one-to-one interactions in well-controlled settings, future physical agents should be capable of interacting with humans in natural settings, including group interaction. In order to generate engaging behaviours, the autonomous system must first be able to estimate its human partners' engagement level. In this paper, we propose an approach for estimating engagement from behaviour and use the measure within a reinforcement learning framework to learn engaging interactive behaviours. The proposed approach is implemented in an interactive sculptural system in a museum setting. We compare the learning system to a baseline using pre-scripted interactive behaviours. Analysis based on sensory data and survey data shows that adaptable behaviours within a perceivable and understandable range can achieve higher engagement and likeability. | Interactions in natural settings, e.g., public spaces, are too complex to be simulated in controlled laboratory settings, so to understand natural HRI it is necessary to conduct field studies. Many human-robot interaction systems have been tested in public spaces such as hospitals @cite_18, train stations @cite_4, service points @cite_17, airports @cite_31, shopping malls @cite_12, hotels @cite_5 and museums @cite_24, among others. Although the robots investigated in these papers are sometimes surrounded by a group of people in a public place, they typically interact with only one person at a specific time, i.e., one-to-one interaction. Unlike these works, the larger scale of LAS enables it to simultaneously accommodate interactions with multiple people. In addition, the behaviours of robots studied in most of these works are pre-designed by researchers, without continuous learning and adaptability. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_24",
"@cite_5",
"@cite_31",
"@cite_12",
"@cite_17"
],
"mid": [
"2103530214",
"2132446817",
"2061309618",
"2899845710",
"2591977925",
"2768514369",
"2895990499"
],
"abstract": [
"Robots are becoming increasingly integrated into the workplace, impacting organizational structures and processes, and affecting products and services created by these organizations. While robots promise significant benefits to organizations, their introduction poses a variety of design challenges. In this paper, we use ethnographic data collected at a hospital using an autonomous delivery robot to examine how organizational factors affect the way its members respond to robots and the changes engendered by their use. Our analysis uncovered dramatic differences between the medical and post-partum units in how people integrated the robot into their workflow and their perceptions of and interactions with it. Different patient profiles in these units led to differences in workflow, goals, social dynamics, and the use of the physical environment. In medical units, low tolerance for interruptions, a discrepancy between the perceived cost and benefits of using the robot, and breakdowns due to high traffic and clutter in the robot's path caused the robot to have a negative impact on the workflow and staff resistance. On the contrary, post-partum units integrated the robot into their workflow and social context. Based on our findings, we provide design guidelines for the development of robots for organizations.",
"This paper reports a method that uses humanoid robots as a communication medium. There are many interactive robots under development, but due to their limited perception, their interactivity is still far poorer than that of humans. Our approach in this paper is to limit robots' purpose to a non-interactive medium and to look for a way to attract people's interest in the information that robots convey. We propose using robots as a passive-social medium, in which multiple robots converse with each other. We conducted a field experiment at a train station for eight days to investigate the effects of a passive-social medium.",
"This paper reports on a field trial with interactive humanoid robots at a science museum where visitors are supposed to study and develop an interest in science. In the trial, each visitor wore an RFID tag while looking around the museum's exhibits. Information obtained from the RFID tags was used to direct the robots' interaction with the visitors. The robots autonomously interacted with visitors via gestures and utterances resembling the free play of children [1]. In addition, they performed exhibit-guiding by moving around several exhibits and explaining the exhibits based on sensor information. The robots were highly evaluated by visitors during the two-month trial. Moreover, we conducted an experiment in the field trial to compare the detailed effects of exhibit-guiding and free-play interaction under three operating conditions. This revealed that the combination of the free-play interaction and exhibit-guiding positively affected visitors' experiences at the science museum.",
"This paper presents four exploratory studies of the potential use of robots for gathering customer feedback in the hospitality industry. To account for the viewpoints of both hotels and guests, we administered need finding interviews at five hotels and an online survey concerning hotel guest experiences with 60 participants. We then conducted the two deployment studies based on deploying software prototypes for Savioke Relay robots we designed to collect customer feedback: (i) a hotel deployment study (three hotels over three months) to explore the feasibility of robot use for gathering customer feedback as well as issues such deployment might pose and (ii) a hotel kitchen deployment study (at Savioke headquarters over three weeks) to explore the role of different robot behaviors (mobility and social attributes) in gathering feedback and understand the customers' thought process in the context that they experience a service. We found that hotels want to collect customer feedback in real-time to disseminate positive feedback immediately and to respond to unhappy customers while they are still on-site. Guests want to inform the hotel staff about their experiences without compromising their convenience and privacy. We also found that the robot users, e.g. hotel staff, use their domain knowledge to increase the response rate to customer feedback surveys at the hotels. Finally, environmental factors, such as robot's location in the building influenced customer response rates more than altering the behaviors of the robot collecting the feedback.",
"In order to be successful, guide robots in public space require socially-intelligent navigation behaviors. Evaluation of these behaviors can be done through lab studies, though these do not always capture the complexities of interactions in \"the wild\". In this extended abstract we present initial results of a field trial of a multi-year project in which we developed and deployed a robot which provided guiding services to real passengers at one of the top-20 busiest airports in the world. During this field trial 9 groups of passengers were guided by the robot. We will present initial results and implications for field studies.",
"This paper reports our research on developing a robot that distributes flyers to pedestrians. The difficulty is that, since the potential receivers are pedestrians who are not necessarily cooperative, the robot needs to appropriately plan its motions, making it easy and non-obstructive for the potential receivers to accept the flyers. We analyzed people's distributing behavior in a real shopping mall and found that successful distributors approach pedestrians from the front and only extend their arms near the target pedestrian. We also found that pedestrians tend to accept flyers if previous pedestrians took them. Based on these analyses, we developed a behavior model for the robot, implemented it in a humanoid robot, and confirmed its effectiveness in a field experiment.",
"Social robots are emerging as potentially useful tools for customer service in various contexts including retail, healthcare or education. Their capacity for human-like interaction could provide a satisfactory customer experience in simple and repetitive tasks, while allowing human staff to focus on issues that are more complex. In this paper, we report the initial findings of a two-day field study of using a social robot Pepper for guidance and edutainment at the service point of a medium-sized city in Finland. We collected observations and semi-structured interviews of altogether 89 customers visiting the service point. Fifteen specific experiences evoked by the interaction with the robot were identified and categorized under five basic needs of Autonomy, Competence, Relatedness, Stimulation and Security. We discuss implications for an experience-driven design process of applications for social service robots."
]
} |
1904.06764 | 2937599623 | Physical agents that can autonomously generate engaging, life-like behaviour will lead to more responsive and interesting robots and other autonomous systems. Although many advances have been made for one-to-one interactions in well-controlled settings, future physical agents should be capable of interacting with humans in natural settings, including group interaction. In order to generate engaging behaviours, the autonomous system must first be able to estimate its human partners' engagement level. In this paper, we propose an approach for estimating engagement from behaviour and use the measure within a reinforcement learning framework to learn engaging interactive behaviours. The proposed approach is implemented in an interactive sculptural system in a museum setting. We compare the learning system to a baseline using pre-scripted interactive behaviours. Analysis based on sensory data and survey data shows that adaptable behaviours within a perceivable and understandable range can achieve higher engagement and likeability. | Interactive artworks are outcomes of combining arts and engineering, and have brought a new research direction for understanding HRI, especially with non-anthropomorphic robots, not only from the robotics perspective but also from an artistic perspective. The LAS in this paper and previous installations @cite_47 are examples of such immersive interactive systems that promote roboticists' and architects' understanding of lifelike interactive behaviour. Other examples include @cite_57, which investigated visitors' interaction and observation behaviours with mobile pianos in a museum. The authors of @cite_48 @cite_30 studied flying cubes that play the role of living creatures in multiple publicly accessible spaces. In @cite_13, roboticists collaborated with a professional dancer to design interactive dancing robots. @cite_58 created spatial interactions with immersive virtual reality technology. 
@cite_22 studied dancing quadcopters, without an interactive component. Close collaboration between artists and roboticists has been fostering the creation of richer modes of interaction and extending the scope of studies in HRI. Most works mentioned here rely heavily on the design of choreographers, while in this paper we add adaptability on top of choreography by learning engaging behaviour based on the action space defined by choreographers. | {
"cite_N": [
"@cite_30",
"@cite_22",
"@cite_48",
"@cite_57",
"@cite_47",
"@cite_58",
"@cite_13"
],
"mid": [
"",
"2066811447",
"",
"2530125069",
"2609287760",
"2107632930",
"2885458004"
],
"abstract": [
"",
"Imagine a troupe of dancers flying together across a big open stage, their movement choreographed to the rhythm of the music. Their performance is both coordinated and skilled; the dancers are well rehearsed, and the choreography well suited to their abilities. They are no ordinary dancers, however, and this is not an ordinary stage. The performers are quadrocopters, and the stage is the ETH Zurich Flying Machine Arena, a state-of-the-art mobile testbed for aerial motion control research (Figure 1).",
"",
"Art installations involving robotic artifacts provide an opportunity to examine human relationships with robots designed solely for the purpose of sustaining evocative behaviours. In an attempt to determine the behavioural characteristics and personality traits attributed by a human to a robotic artifact, we investigated an audience’s experience of an installation that presented three robotic artifacts moving autonomously in an exhibition space. In order to describe the audience’s experience, we present two studies that revealed the psychological attributions spontaneously produced from observing the robots, and visitors’ physical exploration patterns inside the exhibition. We propose a psychological profile for the artwork, and a tentative organization for the attribution process. Using a cluster analysis performed on visitors’ trajectories inside the installation, we highlight four different exploration and interaction heuristics characterized by patterns of approach or withdrawal, passive observation and exploration.",
"Canada’s Living Architecture Systems Group (LASG) combines scientists, engineers, architects and artists working together to create large-scale prototypes of immersive architectural spaces with qualities that come strikingly close to those of living systems. Working in interdisciplinary groups combining architects, engineers and scientists, LASG is building environments that can move, respond, and learn; environments that renew themselves with chemical exchanges and that are adaptive and empathic toward their inhabitants. This paper provides a detailed review of the history and current research of the group, including a detailed case study of Epiphyte Chamber, a complex immersive environment presented at the new Museum of Modern and Contemporary Art in Seoul, 2014. We also include a context of precedents in the rapidly evolving field of responsive architecture, and a description of the current implementation of a new proprioceptive distributed control system employing a curiosity-based learning algorithm. Also included are detailed reviews of industrial design methods and digital fabrication of specialized structures and mechanisms. The paper is illustrated with industrial design drawings, interactive system design documents, and photography of a series of current environments and test-beds authored by the group. We hope that this presentation offers an inspiring vision, and associated working methods, for the future of the built environment.",
"The AlloSphere provides multiuser spatial interaction through a curved surround screen and surround sound. Two projects illustrate how researchers employed the AlloSphere to investigate the combined use of personal-device displays and the shared display. Another two projects combined multiuser interaction with multiagent systems. These projects point to directions for future ensemble-style collaborative interaction.",
"\"Time to Compile\" is the result of an extended in-house residency of an artist in a robotics lab. The piece explores the temporal and spatial dislocations enabled by digital technology and the internet and plays with human responses to articulated machines (robots) in that setting. The audience journeys through a suspended, disparate landscape that aims to reconcile these responses to technology and machines. This proposal offers to bring an excerpt of the piece, live dance performance surrounded by videos of robots created in the lab, to MOCO. Additionally, an interactive installation could be produced if MOCO has the timing bandwidth to offer this more involved setup."
]
} |
1904.07016 | 2951433779 | Future electricity distribution grids will host a considerable share of the renewable energy sources needed for enforcing the energy transition. Demand side management mechanisms play a key role in the integration of such renewable energy resources by exploiting the flexibility of elastic loads, generation or electricity storage technologies. In particular, local energy markets enable households to exchange energy with each other while increasing the amount of renewable energy that is consumed locally. Nevertheless, as with most ex-ante mechanisms, local market schedules rely on hour-ahead forecasts whose accuracy may be low. In this paper we cope with forecast errors by proposing a game theory approach to model the interactions among prosumers and distribution system operators for the control of electricity flows in real time. The presented game has an aggregative equilibrium which can be attained in a semi-distributed manner, driving prosumers towards a final exchange of energy with the grid that benefits both households and operators, favoring the enforcement of prosumers' local market commitments while respecting the constraints defined by the operator. The proposed mechanism requires only one-to-all broadcast of price signals, which depend neither on the number of players nor on their local objective functions and constraints, making the approach highly scalable. Its impact on distribution grid quality of supply was evaluated through load flow analysis and realistic load profiles, demonstrating the capacity of the mechanism to ensure that voltage deviation and thermal limit constraints are respected. | We focus on the literature related to real-time control of electricity flows on distribution grids. Several centralized approaches have been proposed for real-time energy management in distribution grid systems @cite_5 @cite_9 @cite_15. 
The difficulty in such approaches is that flexible DERs are owned and controlled by households, which hold the local information needed for centralized control and will release part of this information only if adequate incentives are provided. In @cite_29 and @cite_2, the authors propose decentralized mechanisms with the objective of flattening the aggregated demand of a large set of households. In @cite_6, the goal of the electricity supplier is to maximize social welfare, which is achieved in a distributed fashion by households optimizing their own benefits. | {
"cite_N": [
"@cite_9",
"@cite_29",
"@cite_6",
"@cite_2",
"@cite_5",
"@cite_15"
],
"mid": [
"2525707554",
"",
"2042454807",
"1934866980",
"2110449981",
""
],
"abstract": [
"Energy management in microgrids is typically formulated as an offline optimization problem for day-ahead scheduling by previous studies. Most of these offline approaches assume perfect forecasting of the renewables, the demands, and the market, which is difficult to achieve in practice. Existing online algorithms, on the other hand, oversimplify the microgrid model by only considering the aggregate supply-demand balance while omitting the underlying power distribution network and the associated power flow and system operational constraints. Consequently, such approaches may result in control decisions that violate the real-world constraints. This paper focuses on developing an online energy management strategy (EMS) for real-time operation of microgrids that takes into account the power flow and system operational constraints on a distribution network. We model the online energy management as a stochastic optimal power flow problem and propose an online EMS based on Lyapunov optimization. The proposed online EMS is subsequently applied to a real-microgrid system. The simulation results demonstrate that the performance of the proposed EMS exceeds a greedy algorithm and is close to an optimal offline algorithm. Lastly, the effect of the underlying network structure on energy management is observed and analyzed.",
"",
"Demand side management will be a key component of future smart grid that can help reduce peak load and adapt elastic demand to fluctuating generations. In this paper, we consider households that operate different appliances including PHEVs and batteries and propose a demand response approach based on utility maximization. Each appliance provides a certain benefit depending on the pattern or volume of power it consumes. Each household wishes to optimally schedule its power consumption so as to maximize its individual net benefit subject to various consumption and power flow constraints. We show that there exist time-varying prices that can align individual optimality with social optimality, i.e., under such prices, when the households selfishly optimize their own benefits, they automatically also maximize the social welfare. The utility company can thus use dynamic pricing to coordinate demand responses to the benefit of the overall system. We propose a distributed algorithm for the utility company and the customers to jointly compute this optimal prices and demand schedules. Finally, we present simulation results that illustrate several interesting properties of the proposed scheme.",
"Central to the vision of the smart grid is the deployment of smart meters that will allow autonomous software agents, representing the consumers, to optimise their use of devices and heating in the smart home while interacting with the grid. However, without some form of coordination, the population of agents may end up with overly-homogeneous optimised consumption patterns that may generate significant peaks in demand in the grid. These peaks, in turn, reduce the efficiency of the overall system, increase carbon emissions, and may even, in the worst case, cause blackouts. Hence, in this paper, we introduce a novel model of a Decentralised Demand Side Management (DDSM) mechanism that allows agents, by adapting the deferment of their loads based on grid prices, to coordinate in a decentralised manner. Specifically, using average UK consumption profiles for 26M homes, we demonstrate that, through an emergent coordination of the agents, the peak demand of domestic consumers in the grid can be reduced by up to 17% and carbon emissions by up to 6%. We also show that our DDSM mechanism is robust to the increasing electrification of heating in UK homes (i.e., it exhibits a similar efficiency).",
"The integration of renewable energy systems (RESs) in smart grids (SGs) is a challenging task, mainly due to the intermittent and unpredictable nature of the sources, typically wind or sun. Another issue concerns the way to support the consumers' participation in the electricity market aiming at minimizing the costs of the global energy consumption. This paper proposes an energy management system (EMS) aiming at optimizing the SG's operation. The EMS behaves as a sort of aggregator of distributed energy resources allowing the SG to participate in the open market. By integrating demand side management (DSM) and active management schemes (AMS), it allows a better exploitation of renewable energy sources and a reduction of the customers' energy consumption costs with both economic and environmental benefits. It can also improve the grid resilience and flexibility through the active participation of distribution system operators (DSOs) and electricity supply demand that, according to their preferences and costs, respond to real-time price signals using market processes. The efficiency of the proposed EMS is verified on a 23-bus 11-kV distribution network.",
""
]
} |
1904.07016 | 2951433779 | Future electricity distribution grids will host a considerable share of the renewable energy sources needed for enforcing the energy transition. Demand side management mechanisms play a key role in the integration of such renewable energy resources by exploiting the flexibility of elastic loads, generation or electricity storage technologies. In particular, local energy markets enable households to exchange energy with each other while increasing the amount of renewable energy that is consumed locally. Nevertheless, as with most ex-ante mechanisms, local market schedules rely on hour-ahead forecasts whose accuracy may be low. In this paper we cope with forecast errors by proposing a game theory approach to model the interactions among prosumers and distribution system operators for the control of electricity flows in real time. The presented game has an aggregative equilibrium which can be attained in a semi-distributed manner, driving prosumers towards a final exchange of energy with the grid that benefits both households and operators, favoring the enforcement of prosumers' local market commitments while respecting the constraints defined by the operator. The proposed mechanism requires only one-to-all broadcast of price signals, which depend neither on the number of players nor on their local objective functions and constraints, making the approach highly scalable. Its impact on distribution grid quality of supply was evaluated through load flow analysis and realistic load profiles, demonstrating the capacity of the mechanism to ensure that voltage deviation and thermal limit constraints are respected. | Game theory has been applied mainly to day-ahead energy scheduling rather than to real-time control, and particularly to modeling the interactions between the electricity supplier and its clients, with the goal of minimizing energy costs rather than enforcing DSO constraints @cite_19 @cite_21 @cite_22 @cite_6 @cite_26. 
In @cite_21 , a Stackelberg game model is proposed that allows electricity suppliers to define prices leading to an equilibrium that minimizes the Peak-to-Average Ratio. @cite_22 follows the same approach as @cite_21 , but with a strictly convex cost function. In @cite_23 , an online version of a scheduling mechanism for flexible appliances is proposed, which copes with price prediction errors. The literature on real-time control of electricity flows in distribution grids either does not consider RES and storage resources or does not take particular care of voltage deviations and thermal constraints. | {
"cite_N": [
"@cite_26",
"@cite_22",
"@cite_21",
"@cite_6",
"@cite_19",
"@cite_23"
],
"mid": [
"",
"2023865237",
"2068060907",
"2042454807",
"2045362395",
"2101060125"
],
"abstract": [
"",
"We study the demand side management (DSM) problem when customers are equipped with energy storage devices. Two games are discussed: the first is a non-cooperative one played between the residential energy consumers, while the second is a Stackelberg game played between the utility provider and the energy consumers. We introduce a new cost function applicable to the case of users selling back stored energy. The non-cooperative energy consumption game is played between users who schedule their energy use to minimize energy cost. The game is shown to have a unique Nash equilibrium, that is also the global system optimal point. In the Stackelberg game, the utility provider sets the prices to maximize its profit knowing that users will respond by minimizing their cost. We provide existence and uniqueness results for the Stackelberg equilibrium. The Stackelberg game is shown to be the general case of the minimum Peak-to-Average power ratio (PAR) problem. Two algorithms, centralized and distributed, are presented to solve the Stackelberg game. We present results that elucidate the interplay between storage capacity, energy requirements, number of users and system performance measured in total cost and peak-to-average power ratio (PAR).",
"Most of the existing demand-side management programs focus primarily on the interactions between a utility company and its customers users. In this paper, we present an autonomous and distributed demand-side energy management system among users that takes advantage of a two-way digital communication infrastructure which is envisioned in the future smart grid. We use game theory and formulate an energy consumption scheduling game, where the players are the users and their strategies are the daily schedules of their household appliances and loads. It is assumed that the utility company can adopt adequate pricing tariffs that differentiate the energy usage in time and level. We show that for a common scenario, with a single utility company serving multiple customers, the global optimal performance in terms of minimizing the energy costs is achieved at the Nash equilibrium of the formulated energy consumption scheduling game. The proposed distributed demand-side energy management strategy requires each user to simply apply its best response strategy to the current total load and tariffs in the power distribution system. The users can maintain privacy and do not need to reveal the details on their energy consumption schedules to other users. We also show that users will have the incentives to participate in the energy consumption scheduling game and subscribing to such services. Simulation results confirm that the proposed approach can reduce the peak-to-average ratio of the total energy demand, the total energy costs, as well as each user's individual daily electricity charges.",
"Demand side management will be a key component of future smart grid that can help reduce peak load and adapt elastic demand to fluctuating generations. In this paper, we consider households that operate different appliances including PHEVs and batteries and propose a demand response approach based on utility maximization. Each appliance provides a certain benefit depending on the pattern or volume of power it consumes. Each household wishes to optimally schedule its power consumption so as to maximize its individual net benefit subject to various consumption and power flow constraints. We show that there exist time-varying prices that can align individual optimality with social optimality, i.e., under such prices, when the households selfishly optimize their own benefits, they automatically also maximize the social welfare. The utility company can thus use dynamic pricing to coordinate demand responses to the benefit of the overall system. We propose a distributed algorithm for the utility company and the customers to jointly compute this optimal prices and demand schedules. Finally, we present simulation results that illustrate several interesting properties of the proposed scheme.",
"In the future smart grid, both users and power companies can potentially benefit from the economical and environmental advantages of smart pricing methods to more effectively reflect the fluctuations of the wholesale price into the customer side. In addition, smart pricing can be used to seek social benefits and to implement social objectives. To achieve social objectives, the utility company may need to collect various information about users and their energy consumption behavior, which can be challenging. In this paper, we propose an efficient pricing method to tackle this problem. We assume that each user is equipped with an energy consumption controller (ECC) as part of its smart meter. All smart meters are connected to not only the power grid but also a communication infrastructure. This allows two-way communication among smart meters and the utility company. We analytically model each user's preferences and energy consumption patterns in form of a utility function. Based on this model, we propose a Vickrey-Clarke-Groves (VCG) mechanism which aims to maximize the social welfare, i.e., the aggregate utility functions of all users minus the total energy cost. Our design requires that each user provides some information about its energy demand. In return, the energy provider will determine each user's electricity bill payment. Finally, we verify some important properties of our proposed VCG mechanism for demand side management such as efficiency, user truthfulness, and nonnegative transfer. Simulation results confirm that the proposed pricing method can benefit both users and utility companies.",
"This paper investigates the residential energy consumption scheduling problem, which is formulated as a coupled-constraint game by taking the interaction among users and the temporally-coupled constraint into consideration. The proposed solution consists of two parts. Firstly, dual decomposition is applied to transform the original coupled-constraint game into a decoupled one. Then, Nash equilibrium of the decoupled game is proven to be achievable via best response, which is computed by gradient projection. The proposed solution is also extended to an online version, which is able to alleviate the impact of the price prediction error. Numerical results demonstrate that the proposed approach can effectively shift the peak-hour demand to off-peak hours, enhance the welfare of each user, and minimize the peak-to-average ratio. The scalability of the approach and the impact of the user number are also investigated."
]
} |
1904.07069 | 2939704921 | We study the problem of efficiently disseminating authenticated blockchain information from blockchain nodes (servers) to Internet of Things (IoT) devices, through a wireless base station (BS). In existing blockchain protocols, upon generation of a new block, each IoT device receives a copy of the block header, authenticated via digital signature by one or more trusted servers. Since it relies on unicast transmissions, the required communication resources grow linearly with the number of IoT devices. We propose a more efficient scheme, in which a single copy of each block header is multicasted, together with the signatures of servers. In addition, if IoT devices tolerate a delay, we exploit the blockchain structure to amortize the authentication in time, by transmitting only a subset of signature in each block period. Finally, the BS sends redundant information, via a repetition code, to deal with the unreliable wireless channel, with the aim of decreasing the amount of feedback required from IoT devices. Our analysis shows the trade-off between timely authentication of blocks and reliability of the communication, depending on the channel quality (packet loss rate), the block interaction, and the presence of forks in the blockchain. The numerical results show the performance of the scheme, that is a viable starting point to design new blockchain lightweight protocols. | The problem of reliable and authenticated multicasting of data streams is a well-investigated subject @cite_2 . Here, a message is authenticated when it is digitally signed by a trusted party and the signature is received. However, message and signature are not necessarily sent together. Hence, when a message is received, but not authenticated, it is considered useless. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2160766950"
],
"abstract": [
"The success of next-generation mobile communication systems depends on the ability of service providers to engineer new added-value multimedia-rich services, which impose stringent constraints on the underlying delivery transport architecture. The reliability of real-time services is essential for the viability of any such service offering. The sporadic packet loss typical of wireless channels can be addressed using appropriate techniques such as the widely used packet-level forward error correction. In designing channel-aware media streaming applications, two interrelated and challenging issues should be tackled: accuracy of characterizing channel fluctuations and effectiveness of application-level adaptation. The first challenge requires thorough insight into channel fluctuations and their manifestations at the application level, while the second concerns the way those fluctuations are interpreted and dealt with by adaptive mechanisms such as FEC. In this article we review the major issues that arise when designing a reliable media streaming system for wireless networks."
]
} |
1904.07069 | 2939704921 | We study the problem of efficiently disseminating authenticated blockchain information from blockchain nodes (servers) to Internet of Things (IoT) devices, through a wireless base station (BS). In existing blockchain protocols, upon generation of a new block, each IoT device receives a copy of the block header, authenticated via digital signature by one or more trusted servers. Since it relies on unicast transmissions, the required communication resources grow linearly with the number of IoT devices. We propose a more efficient scheme, in which a single copy of each block header is multicasted, together with the signatures of servers. In addition, if IoT devices tolerate a delay, we exploit the blockchain structure to amortize the authentication in time, by transmitting only a subset of signature in each block period. Finally, the BS sends redundant information, via a repetition code, to deal with the unreliable wireless channel, with the aim of decreasing the amount of feedback required from IoT devices. Our analysis shows the trade-off between timely authentication of blocks and reliability of the communication, depending on the channel quality (packet loss rate), the block interaction, and the presence of forks in the blockchain. The numerical results show the performance of the scheme, that is a viable starting point to design new blockchain lightweight protocols. | When a delay is not acceptable, the signatures of different parties can be combined into a shorter one, generated by aggregate signature schemes @cite_3 . When the aggregate signature is valid, it means that all servers authenticated the message. However, the signature verification algorithm has a high computational cost and requires the storage of the public keys of all signers @cite_3 . For this reason, this solution is not suitable for the IoT system that we consider. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2811448169"
],
"abstract": [
"We construct new multi-signature schemes that provide new functionality. Our schemes are designed to reduce the size of the Bitcoin blockchain, but are useful in many other settings where multi-signatures are needed. All our constructions support both signature compression and public-key aggregation. Hence, to verify that a number of parties signed a common message m, the verifier only needs a short multi-signature, a short aggregation of their public keys, and the message m. We give new constructions that are derived from Schnorr signatures and from BLS signatures. Our constructions are in the plain public key model, meaning that users do not need to prove knowledge or possession of their secret key."
]
} |
1904.07069 | 2939704921 | We study the problem of efficiently disseminating authenticated blockchain information from blockchain nodes (servers) to Internet of Things (IoT) devices, through a wireless base station (BS). In existing blockchain protocols, upon generation of a new block, each IoT device receives a copy of the block header, authenticated via digital signature by one or more trusted servers. Since it relies on unicast transmissions, the required communication resources grow linearly with the number of IoT devices. We propose a more efficient scheme, in which a single copy of each block header is multicasted, together with the signatures of servers. In addition, if IoT devices tolerate a delay, we exploit the blockchain structure to amortize the authentication in time, by transmitting only a subset of signature in each block period. Finally, the BS sends redundant information, via a repetition code, to deal with the unreliable wireless channel, with the aim of decreasing the amount of feedback required from IoT devices. Our analysis shows the trade-off between timely authentication of blocks and reliability of the communication, depending on the channel quality (packet loss rate), the block interaction, and the presence of forks in the blockchain. The numerical results show the performance of the scheme, that is a viable starting point to design new blockchain lightweight protocols. | Finally, in the context of wireless multicasting of data streams, FEC techniques are used to limit the feedback from receivers, in case of packet loss @cite_2 . | {
"cite_N": [
"@cite_2"
],
"mid": [
"2160766950"
],
"abstract": [
"The success of next-generation mobile communication systems depends on the ability of service providers to engineer new added-value multimedia-rich services, which impose stringent constraints on the underlying delivery transport architecture. The reliability of real-time services is essential for the viability of any such service offering. The sporadic packet loss typical of wireless channels can be addressed using appropriate techniques such as the widely used packet-level forward error correction. In designing channel-aware media streaming applications, two interrelated and challenging issues should be tackled: accuracy of characterizing channel fluctuations and effectiveness of application-level adaptation. The first challenge requires thorough insight into channel fluctuations and their manifestations at the application level, while the second concerns the way those fluctuations are interpreted and dealt with by adaptive mechanisms such as FEC. In this article we review the major issues that arise when designing a reliable media streaming system for wireless networks."
]
} |
1904.06960 | 2936660777 | Automated hyperparameter tuning aspires to facilitate the application of machine learning for non-experts. In the literature, different optimization approaches are applied for that purpose. This paper investigates the performance of Differential Evolution for tuning hyperparameters of supervised learning algorithms for classification tasks. This empirical study involves a range of different machine learning algorithms and datasets with various characteristics to compare the performance of Differential Evolution with Sequential Model-based Algorithm Configuration (SMAC), a reference Bayesian Optimization approach. The results indicate that Differential Evolution outperforms SMAC for most datasets when tuning a given machine learning algorithm - particularly when breaking ties in a first-to-report fashion. Only for the tightest of computational budgets SMAC performs better. On small datasets, Differential Evolution outperforms SMAC by 19 (37 after tie-breaking). In a second experiment across a range of representative datasets taken from the literature, Differential Evolution scores 15 (23 after tie-breaking) more wins than SMAC. | Auto-sklearn @cite_6 is probably the most prominent example of applying Bayesian Optimization (through the use of SMAC @cite_14 ) for the automated configuration of machine learning pipelines. It supports reusing knowledge about well performing hyperparameter configurations when a given base learner is tested on similar datasets (denoted or ), ensembling, and data preprocessing. For base learner implementations @cite_6 leverages scikit-learn @cite_11 . @cite_6 studies individual base learners' performances on specific datasets, but does not focus exclusively on tuning hyperparameters in its experiments. This work takes inspiration from @cite_6 for Experiment 1 (base learner selection) and 2 (dataset and base learner selection). | {
"cite_N": [
"@cite_14",
"@cite_6",
"@cite_11"
],
"mid": [
"60686164",
"2182361439",
"2101234009"
],
"abstract": [
"State-of-the-art algorithms for hard computational problems often expose many parameters that can be modified to improve empirical performance. However, manually exploring the resulting combinatorial space of parameter settings is tedious and tends to lead to unsatisfactory outcomes. Recently, automated approaches for solving this algorithm configuration problem have led to substantial improvements in the state of the art for solving various problems. One promising approach constructs explicit regression models to describe the dependence of target algorithm performance on parameter settings; however, this approach has so far been limited to the optimization of few numerical algorithm parameters on single instances. In this paper, we extend this paradigm for the first time to general algorithm configuration problems, allowing many categorical parameters and optimization for sets of instances. We experimentally validate our new algorithm configuration procedure by optimizing a local search and a tree search solver for the propositional satisfiability problem (SAT), as well as the commercial mixed integer programming (MIP) solver CPLEX. In these experiments, our procedure yielded state-of-the-art performance, and in many cases outperformed the previous best configuration approach.",
"The success of machine learning in a broad range of applications has led to an ever-growing demand for machine learning systems that can be used off the shelf by non-experts. To be effective in practice, such systems need to automatically choose a good algorithm and feature preprocessing steps for a new dataset at hand, and also set their respective hyperparameters. Recent work has started to tackle this automated machine learning (AutoML) problem with the help of efficient Bayesian optimization methods. Building on this, we introduce a robust new AutoML system based on scikit-learn (using 15 classifiers, 14 feature preprocessing methods, and 4 data preprocessing methods, giving rise to a structured hypothesis space with 110 hyperparameters). This system, which we dub AUTO-SKLEARN, improves on existing AutoML methods by automatically taking into account past performance on similar datasets, and by constructing ensembles from the models evaluated during the optimization. Our system won the first phase of the ongoing ChaLearn AutoML challenge, and our comprehensive analysis on over 100 diverse datasets shows that it substantially outperforms the previous state of the art in AutoML. We also demonstrate the performance gains due to each of our contributions and derive insights into the effectiveness of the individual components of AUTO-SKLEARN.",
"Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings. Source code, binaries, and documentation can be downloaded from http: scikit-learn.sourceforge.net."
]
} |
1904.06960 | 2936660777 | Automated hyperparameter tuning aspires to facilitate the application of machine learning for non-experts. In the literature, different optimization approaches are applied for that purpose. This paper investigates the performance of Differential Evolution for tuning hyperparameters of supervised learning algorithms for classification tasks. This empirical study involves a range of different machine learning algorithms and datasets with various characteristics to compare the performance of Differential Evolution with Sequential Model-based Algorithm Configuration (SMAC), a reference Bayesian Optimization approach. The results indicate that Differential Evolution outperforms SMAC for most datasets when tuning a given machine learning algorithm - particularly when breaking ties in a first-to-report fashion. Only for the tightest of computational budgets SMAC performs better. On small datasets, Differential Evolution outperforms SMAC by 19 (37 after tie-breaking). In a second experiment across a range of representative datasets taken from the literature, Differential Evolution scores 15 (23 after tie-breaking) more wins than SMAC. | A recent approach to modeling base learner performance applies a concept from recommender systems denoted @cite_5 . It discretizes the space of machine learning pipeline configurations and - typical for recommender systems - establishes a matrix of tens of thousands of machine learning pipeline configurations' performances on hundreds of datasets. Factorizing this matrix allows estimating the performance of yet-to-be-tested pipeline-dataset combinations. On a hold out set of datasets @cite_5 outperforms @cite_6 . We do not include Probabilistic Matrix Factorization in this work as recommender systems only work well in settings with previously collected correlation data, which is at odds with our focus on cold start hyperparameter tuning settings. | {
"cite_N": [
"@cite_5",
"@cite_6"
],
"mid": [
"2616602896",
"2182361439"
],
"abstract": [
"In order to achieve state-of-the-art performance, modern machine learning techniques require careful data pre-processing and hyperparameter tuning. Moreover, given the ever increasing number of machine learning models being developed, model selection is becoming increasingly important. Automating the selection and tuning of machine learning pipelines, which can include different data pre-processing methods and machine learning models, has long been one of the goals of the machine learning community. In this paper, we propose to solve this meta-learning task by combining ideas from collaborative filtering and Bayesian optimization. Specifically, we use a probabilistic matrix factorization model to transfer knowledge across experiments performed in hundreds of different datasets and use an acquisition function to guide the exploration of the space of possible ML pipelines. In our experiments, we show that our approach quickly identifies high-performing pipelines across a wide range of datasets, significantly outperforming the current state-of-the-art.",
"The success of machine learning in a broad range of applications has led to an ever-growing demand for machine learning systems that can be used off the shelf by non-experts. To be effective in practice, such systems need to automatically choose a good algorithm and feature preprocessing steps for a new dataset at hand, and also set their respective hyperparameters. Recent work has started to tackle this automated machine learning (AutoML) problem with the help of efficient Bayesian optimization methods. Building on this, we introduce a robust new AutoML system based on scikit-learn (using 15 classifiers, 14 feature preprocessing methods, and 4 data preprocessing methods, giving rise to a structured hypothesis space with 110 hyperparameters). This system, which we dub AUTO-SKLEARN, improves on existing AutoML methods by automatically taking into account past performance on similar datasets, and by constructing ensembles from the models evaluated during the optimization. Our system won the first phase of the ongoing ChaLearn AutoML challenge, and our comprehensive analysis on over 100 diverse datasets shows that it substantially outperforms the previous state of the art in AutoML. We also demonstrate the performance gains due to each of our contributions and derive insights into the effectiveness of the individual components of AUTO-SKLEARN."
]
} |
1904.06960 | 2936660777 | Automated hyperparameter tuning aspires to facilitate the application of machine learning for non-experts. In the literature, different optimization approaches are applied for that purpose. This paper investigates the performance of Differential Evolution for tuning hyperparameters of supervised learning algorithms for classification tasks. This empirical study involves a range of different machine learning algorithms and datasets with various characteristics to compare the performance of Differential Evolution with Sequential Model-based Algorithm Configuration (SMAC), a reference Bayesian Optimization approach. The results indicate that Differential Evolution outperforms SMAC for most datasets when tuning a given machine learning algorithm - particularly when breaking ties in a first-to-report fashion. Only for the tightest of computational budgets SMAC performs better. On small datasets, Differential Evolution outperforms SMAC by 19 (37 after tie-breaking). In a second experiment across a range of representative datasets taken from the literature, Differential Evolution scores 15 (23 after tie-breaking) more wins than SMAC. | BOHB @cite_15 is an efficient combination of Bayesian Optimization with Hyperband @cite_7 . In each BOHB iteration, a multi-armed bandit (Hyperband) determines the number of hyperparameter configurations to evaluate and the associated computational budget. This way, configurations that are likely to perform poorly are stopped early. Consequently, promising configurations receive more computing resources. The identification of configurations at the beginning of each iteration relies on Bayesian Optimization. Instead of identifying ill-performing configurations early on, our work focuses on the hyperparameter tuning aspect. In particular, we study empirically whether alternative optimization heuristics such as evolutionary algorithms can outperform the widely used model-based hyperparameter tuning approaches. | {
"cite_N": [
"@cite_15",
"@cite_7"
],
"mid": [
"2804268694",
"2963815651"
],
"abstract": [
"Modern deep learning methods are very sensitive to many hyperparameters, and, due to the long training times of state-of-the-art models, vanilla Bayesian hyperparameter optimization is typically computationally infeasible. On the other hand, bandit-based configuration evaluation approaches based on random search lack guidance and do not converge to the best configurations as quickly. Here, we propose to combine the benefits of both Bayesian optimization and bandit-based methods, in order to achieve the best of both worlds: strong anytime performance and fast convergence to optimal configurations. We propose a new practical state-of-the-art hyperparameter optimization method, which consistently outperforms both Bayesian optimization and Hyperband on a wide range of problem types, including high-dimensional toy functions, support vector machines, feed-forward neural networks, Bayesian neural networks, deep reinforcement learning, and convolutional neural networks. Our method is robust and versatile, while at the same time being conceptually simple and easy to implement.",
"Performance of machine learning algorithms depends critically on identifying a good set of hyperparameters. While recent approaches use Bayesian optimization to adaptively select configurations, we focus on speeding up random search through adaptive resource allocation and early-stopping. We formulate hyperparameter optimization as a pure-exploration nonstochastic infinite-armed bandit problem where a predefined resource like iterations, data samples, or features is allocated to randomly sampled configurations. We introduce a novel algorithm, Hyperband, for this framework and analyze its theoretical properties, providing several desirable guarantees. Furthermore, we compare Hyperband with popular Bayesian optimization methods on a suite of hyperparameter optimization problems. We observe that Hyperband can provide over an order-of-magnitude speedup over our competitor set on a variety of deep-learning and kernel-based learning problems."
]
} |
1904.06960 | 2936660777 | Automated hyperparameter tuning aspires to facilitate the application of machine learning for non-experts. In the literature, different optimization approaches are applied for that purpose. This paper investigates the performance of Differential Evolution for tuning hyperparameters of supervised learning algorithms for classification tasks. This empirical study involves a range of different machine learning algorithms and datasets with various characteristics to compare the performance of Differential Evolution with Sequential Model-based Algorithm Configuration (SMAC), a reference Bayesian Optimization approach. The results indicate that Differential Evolution outperforms SMAC for most datasets when tuning a given machine learning algorithm - particularly when breaking ties in a first-to-report fashion. Only for the tightest of computational budgets SMAC performs better. On small datasets, Differential Evolution outperforms SMAC by 19 (37 after tie-breaking). In a second experiment across a range of representative datasets taken from the literature, Differential Evolution scores 15 (23 after tie-breaking) more wins than SMAC. | This work differs from the referenced articles in that it attempts to isolate the hyperparameter tuning methods' performances, e.g., by limiting CPU resources (a single CPU core) and tight computational budgets (smaller time frames than in @cite_6 and @cite_5 ). These tightly limited resources are vital to identifying the algorithmic advantages and drawbacks of different optimization approaches for hyperparameter tuning. Different to, e.g., @cite_5 we do not limit the invocation of individual hyperparameter configurations. That penalizes ill-performing but computationally expensive parameter choices. To the best of our knowledge, such scenarios have not been studied in the related literature. | {
"cite_N": [
"@cite_5",
"@cite_6"
],
"mid": [
"2616602896",
"2182361439"
],
"abstract": [
"In order to achieve state-of-the-art performance, modern machine learning techniques require careful data pre-processing and hyperparameter tuning. Moreover, given the ever increasing number of machine learning models being developed, model selection is becoming increasingly important. Automating the selection and tuning of machine learning pipelines, which can include different data pre-processing methods and machine learning models, has long been one of the goals of the machine learning community. In this paper, we propose to solve this meta-learning task by combining ideas from collaborative filtering and Bayesian optimization. Specifically, we use a probabilistic matrix factorization model to transfer knowledge across experiments performed in hundreds of different datasets and use an acquisition function to guide the exploration of the space of possible ML pipelines. In our experiments, we show that our approach quickly identifies high-performing pipelines across a wide range of datasets, significantly outperforming the current state-of-the-art.",
"The success of machine learning in a broad range of applications has led to an ever-growing demand for machine learning systems that can be used off the shelf by non-experts. To be effective in practice, such systems need to automatically choose a good algorithm and feature preprocessing steps for a new dataset at hand, and also set their respective hyperparameters. Recent work has started to tackle this automated machine learning (AutoML) problem with the help of efficient Bayesian optimization methods. Building on this, we introduce a robust new AutoML system based on scikit-learn (using 15 classifiers, 14 feature preprocessing methods, and 4 data preprocessing methods, giving rise to a structured hypothesis space with 110 hyperparameters). This system, which we dub AUTO-SKLEARN, improves on existing AutoML methods by automatically taking into account past performance on similar datasets, and by constructing ensembles from the models evaluated during the optimization. Our system won the first phase of the ongoing ChaLearn AutoML challenge, and our comprehensive analysis on over 100 diverse datasets shows that it substantially outperforms the previous state of the art in AutoML. We also demonstrate the performance gains due to each of our contributions and derive insights into the effectiveness of the individual components of AUTO-SKLEARN."
]
} |
1904.07110 | 2969770887 | Temperature sensing and control systems are widely used in the closed-loop control of critical processes such as maintaining the thermal stability of patients, or in alarm systems for detecting temperature-related hazards. However, the security of these systems has yet to be completely explored, leaving potential attack surfaces that can be exploited to take control over critical systems. In this paper we investigate the reliability of temperature-based control systems from a security and safety perspective. We show how unexpected consequences and safety risks can be induced by physical-level attacks on analog temperature sensing components. For instance, we demonstrate that an adversary could remotely manipulate the temperature sensor measurements of an infant incubator to cause potential safety issues, without tampering with the victim system or triggering automatic temperature alarms. This attack exploits the unintended rectification effect that can be induced in operational and instrumentation amplifiers to control the sensor output, tricking the internal control loop of the victim system to heat up or cool down. Furthermore, we show how the exploit of this hardware-level vulnerability could affect different classes of analog sensors that share similar signal conditioning processes. Our experimental results indicate that conventional defenses commonly deployed in these systems are not sufficient to mitigate the threat, so we propose a prototype design of a low-cost anomaly detector for critical applications to ensure the integrity of temperature sensor signals. | Analog circuits of sensors are especially susceptible to EMI. Various works show how it is possible to exploit different non-linearities of circuit components to cause sensor misreadings. Foo @cite_56 showed that bogus signals can be injected into analog sensors such as microphones and electrocardiogram (ECG) sensors in proximity through low-power EMI. 
Their amplitude-modulated EMI attack method exploited the generation of subharmonics caused by the passage of high-frequency signals through common circuit components (e.g., capacitors in the path between the microphones and the amplifier). | {
"cite_N": [
"@cite_56"
],
"mid": [
"2151370481"
],
"abstract": [
"Electromagnetic interference (EMI) affects circuits by inducing voltages on conductors. Analog sensing of signals on the order of a few millivolts is particularly sensitive to interference. This work (1) measures the susceptibility of analog sensor systems to signal injection attacks by intentional, low-power emission of chosen electromagnetic waveforms, and (2) proposes defense mechanisms to reduce the risks. Our experiments use specially crafted EMI at varying power and distance to measure susceptibility of sensors in implantable medical devices and consumer electronics. Results show that at distances of 1-2m, consumer electronic devices containing microphones are vulnerable to the injection of bogus audio signals. Our measurements show that in free air, intentional EMI under 10 W can inhibit pacing and induce defibrillation shocks at distances up to 1-2m on implantable cardiac electronic devices. However, with the sensing leads and medical devices immersed in a saline bath to better approximate the human body, the same experiment decreases to about 5 cm. Our defenses range from prevention with simple analog shielding to detection with a signal contamination metric based on the root mean square of waveform amplitudes. Our contribution to securing cardiac devices includes a novel defense mechanism that probes for forged pacing pulses inconsistent with the refractory period of cardiac tissue."
]
} |
1904.07110 | 2969770887 | Temperature sensing and control systems are widely used in the closed-loop control of critical processes such as maintaining the thermal stability of patients, or in alarm systems for detecting temperature-related hazards. However, the security of these systems has yet to be completely explored, leaving potential attack surfaces that can be exploited to take control over critical systems. In this paper we investigate the reliability of temperature-based control systems from a security and safety perspective. We show how unexpected consequences and safety risks can be induced by physical-level attacks on analog temperature sensing components. For instance, we demonstrate that an adversary could remotely manipulate the temperature sensor measurements of an infant incubator to cause potential safety issues, without tampering with the victim system or triggering automatic temperature alarms. This attack exploits the unintended rectification effect that can be induced in operational and instrumentation amplifiers to control the sensor output, tricking the internal control loop of the victim system to heat up or cool down. Furthermore, we show how the exploit of this hardware-level vulnerability could affect different classes of analog sensors that share similar signal conditioning processes. Our experimental results indicate that conventional defenses commonly deployed in these systems are not sufficient to mitigate the threat, so we propose a prototype design of a low-cost anomaly detector for critical applications to ensure the integrity of temperature sensor signals. | Sensors have become pervasive in control systems and IoT. Systems inherently trust sensors to measure physical properties and make automated decisions. 
However, the physical properties measured by sensors can be spoofed by an adversary @cite_4 @cite_54 @cite_30 @cite_24 @cite_17 @cite_36 , and the security of analog sensor signals before digitization has been recognized as an increasingly important concern @cite_55 @cite_33 @cite_43 . | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_33",
"@cite_36",
"@cite_54",
"@cite_55",
"@cite_24",
"@cite_43",
"@cite_17"
],
"mid": [
"2805972105",
"2497078867",
"2909789829",
"2751902866",
"2166920706",
"2018999076",
"2740323046",
"2913715209",
""
],
"abstract": [
"Embedded and cyber-physical systems are critically dependent on the integrity of input and output signals for proper operation. Input signals acquired from sensors are assumed to correspond to the phenomenon the system is monitoring and responding to. Similarly, when such systems issue an actuation signal it is expected that the mechanism being controlled will respond in a predictable manner. Recent work has shown that sensors can be manipulated through the use of intentional electromagnetic interference (IEMI). In this work, we demonstrate thatboth input and output signals, analog and digital, can be remotely manipulated via the physical layer---thus bypassing traditional integrity mechanisms. Through the use of specially crafted IEMI it is shown that the physical layer signaling used for sensor input to, and digital communications between, embedded systems may be undermined to an attacker's advantage. Three attack scenarios are analyzed and their efficacy demonstrated. In the first scenario the analog sensing channel is manipulated to produce arbitrary sensor readings, while in the second it is shown that an attacker may induce bit flips in serial communications. Finally, a commonly used actuation signal is shown to be vulnerable to IEMI. The attacks are effective over appreciable distances and at low power.",
"Sensors measure physical quantities of the environment for sensing and actuation systems, and are widely used in many commercial embedded systems such as smart devices, drones, and medical devices because they offer convenience and accuracy. As many sensing and actuation systems depend entirely on data from sensors, these systems are naturally vulnerable to sensor spoofing attacks that use fabricated physical stimuli. As a result, the systems become entirely insecure and unsafe. In this paper, we propose a new type of sensor spoofing attack based on saturation. A sensor shows a linear characteristic between its input physical stimuli and output sensor values in a typical operating region. However, if the input exceeds the upper bound of the operating region, the output is saturated and does not change as much as the corresponding changes of the input. Using saturation, our attack can make a sensor to ignore legitimate inputs. To demonstrate our sensor spoofing attack, we target two medical infusion pumps equipped with infrared (IR) drop sensors to control precisely the amount of medicine injected into a patients' body. Our experiments based on analyses of the drop sensors show that the output of them could be manipulated by saturating the sensors using an additional IR source. In addition, by analyzing the infusion pumps' firmware, we figure out the vulnerability in the mechanism handling the output of the drop sensors, and implement a sensor spoofing attack that can bypass the alarm systems of the targets. As a result, we show that both over-infusion and under-infusion are possible: our spoofing attack can inject up to 3.33 times the intended amount of fluid or 0.65 times of it for a 10 minute period.",
"Sensors are embedded in security-critical applications from medical devices to nuclear power plants, but their outputs can be spoofed through signals transmitted by attackers at a distance. To address the lack of a unifying framework for evaluating the effect of such transmissions, we introduce a system and threat model for signal injection attacks. Our model abstracts away from specific circuit-design issues, and highlights the need to characterize the response of Analog-to-Digital Converters (ADCs) beyond their Nyquist frequency. This ADC characterization can be conducted using direct power injections, reducing the amount of circuit-specific experiments to be performed. We further define the concepts of existential, selective, and universal security, which address attacker goals from mere disruptions of the sensor readings to precise waveform injections. As security in our framework is not binary, it allows for the direct comparison of the level of security between different systems. We additionally conduct extensive experiments across all major ADC types, and demonstrate that an attacker can inject complex waveforms such as human speech into ADCs by transmitting amplitude-modulated (AM) signals over carrier frequencies up to the GHz range. All ADCs we test are vulnerable and demodulate the injected AM signal, although some require a more fine-tuned selection of the carrier frequency. We finally introduce an algorithm which allows circuit designers to concretely calculate the security level of real systems, and we apply our definitions and algorithm in practice using measurements of injections against a smartphone microphone. Overall, our work highlights the importance of evaluating the susceptibility of systems against signal injection attacks, and introduces both the terminology and the methodology to do so.",
"Speech recognition (SR) systems such as Siri or Google Now have become an increasingly popular human-computer interaction method, and have turned various systems into voice controllable systems (VCS). Prior work on attacking VCS shows that the hidden voice commands that are incomprehensible to people can control the systems. Hidden voice commands, though \"hidden\", are nonetheless audible. In this work, we design a totally inaudible attack, DolphinAttack, that modulates voice commands on ultrasonic carriers (e.g., f > 20 kHz) to achieve inaudibility. By leveraging the nonlinearity of the microphone circuits, the modulated low-frequency audio commands can be successfully demodulated, recovered, and more importantly interpreted by the speech recognition systems. We validated DolphinAttack on popular speech recognition systems, including Siri, Google Now, Samsung S Voice, Huawei HiVoice, Cortana and Alexa. By injecting a sequence of inaudible voice commands, we show a few proof-of-concept attacks, which include activating Siri to initiate a FaceTime call on iPhone, activating Google Now to switch the phone to the airplane mode, and even manipulating the navigation system in an Audi automobile. We propose hardware and software defense solutions, and suggest to re-design voice controllable systems to be resilient to inaudible voice command attacks.",
"Abstract Hyperthermia at the time of or following a hypoxic-ischemic insult has been associated with adverse neurodevelopmental outcome. Moreover, an elevation in temperature during labor has been associated with a variety of other adverse neurologic sequelae such as neonatal seizures, encephalopathy, stroke, and cerebral palsy. These outcomes may be secondary to a number of deleterious effects of hyperthermia including an increase in cellular metabolic rate and cerebral blood flow alteration, release of excitotoxic products such as free radicals and glutamate, and hemostatic changes. There is also an association between chorioamnionitis at the time of delivery and cerebral palsy, which is thought to be secondary to cytokine-mediated injury. We review experimental and human studies demonstrating a link between hyperthermia and perinatal brain injury.",
"A fully functional radiant warmer induced rapid and continuous increases in regional skin temperatures, heart rate, mean arterial blood pressure and respiratory rate in a newborn patient without corrective action. We report this case of passive overheating to create awareness of the risks associated with regulating radiant heat output based upon a single servo-controlled temperature.",
"With the advancement in computing, sensing, and vehicle electronics, autonomous vehicles are being realized. For autonomous driving, environment perception sensors such as radars, lidars, and vision sensors play core roles as the eyes of a vehicle; therefore, their reliability cannot be compromised. In this work, we present a spoofing by relaying attack, which can not only induce illusions in the lidar output but can also cause the illusions to appear closer than the location of a spoofing device. In a recent work, the former attack is shown to be effective, but the latter one was never shown. Additionally, we present a novel saturation attack against lidars, which can completely incapacitate a lidar from sensing a certain direction. The effectiveness of both the approaches is experimentally verified against Velodyne’s VLP-16.",
"Research on how hardware imperfections impact security has primarily focused on side-channel leakage mechanisms produced by power consumption, electromagnetic emanations, acoustic vibrations, and optical emissions. However, with the proliferation of sensors in security-critical devices, the impact of attacks on sensor-to-microcontroller and microcontroller-to-actuator interfaces using the same channels is starting to become more than an academic curiosity. These out-of-band signal injection attacks target connections which transform physical quantities to analog properties and fundamentally cannot be authenticated, posing previously unexplored security risks. This paper contains the first survey of such out-of-band signal injection attacks, with a focus on unifying their terminology, and identifying commonalities in their causes and effects. The taxonomy presented contains a chronological, evolutionary, and thematic view of out-of-band signal injection attacks which highlights the cross-influences that exist and underscores the need for a common language irrespective of the method of injection. By placing attack and defense mechanisms in the wider context of their dual counterparts of side-channel leakage and electromagnetic interference, our paper identifies common threads and gaps that can help guide and inform future research. Overall, the ever-increasing reliance on sensors embedded in everyday commodity devices necessitates that a stronger focus be placed on improving the security of such systems against out-of-band signal injection attacks.",
""
]
} |
1904.06883 | 2937734424 | Traditional neural object detection methods use multi-scale features that allow multiple detectors to perform detection tasks independently and in parallel. At the same time, the handling of prior boxes enhances the algorithm's ability to deal with scale invariance. However, too many prior boxes and independent detectors increase the computational redundancy of the detection algorithm. In this study, we introduce Dubox, a new one-stage approach that detects objects without prior boxes. Working with multi-scale features, the designed dual-scale residual unit means the dual-scale detectors no longer run independently: the second-scale detector learns the residual of the first. Dubox also enhances heuristic guidance, further enabling the first-scale detector to maximize the detection of small targets and the second to detect objects that cannot be identified by the first. Besides, for each scale detector, the new classification-regression progressive strapped loss frees our process from prior boxes. Integrating these strategies, our detection algorithm achieves excellent performance in terms of both speed and accuracy. Extensive experiments on the VOC and COCO object detection benchmarks confirm the effectiveness of this algorithm. | Single-scale detectors detect targets at a typical scale and cannot identify objects at other scales. To overcome this drawback, many algorithms use image pyramids, feeding each scale of the pyramid into the detector. Such a framework is prevalent in algorithms that do not use deep learning, which often rely on handcrafted features such as HOG @cite_4 or SIFT @cite_12 . Some CNN-based algorithms, like @cite_17 , also use this approach. The detector only processes features within a specific range and, based on this feature map, classifies and regresses the object box. 
Although this may reduce the detection difficulty for the detector, it incurs a substantial overall computational cost, making it hard to use on devices with low computing capability and greatly limiting its practicality. | {
"cite_N": [
"@cite_4",
"@cite_12",
"@cite_17"
],
"mid": [
"2161969291",
"2124386111",
"1934410531"
],
"abstract": [
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.",
"An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.",
"In real-world face detection, large visual variations, such as those due to pose, expression, and lighting, demand an advanced discriminative model to accurately differentiate faces from the backgrounds. Consequently, effective models for the problem tend to be computationally prohibitive. To address these two conflicting challenges, we propose a cascade architecture built on convolutional neural networks (CNNs) with very powerful discriminative capability, while maintaining high performance. The proposed CNN cascade operates at multiple resolutions, quickly rejects the background regions in the fast low resolution stages, and carefully evaluates a small number of challenging candidates in the last high resolution stage. To improve localization effectiveness, and reduce the number of candidates at later stages, we introduce a CNN-based calibration stage after each of the detection stages in the cascade. The output of each calibration stage is used to adjust the detection window position for input to the subsequent stage. The proposed method runs at 14 FPS on a single CPU core for VGA-resolution images and 100 FPS using a GPU, and achieves state-of-the-art detection performance on two public face detection benchmarks."
]
} |
1904.06883 | 2937734424 | Traditional neural object detection methods use multi-scale features that allow multiple detectors to perform detection tasks independently and in parallel. At the same time, the handling of prior boxes enhances the algorithm's ability to deal with scale invariance. However, too many prior boxes and independent detectors increase the computational redundancy of the detection algorithm. In this study, we introduce Dubox, a new one-stage approach that detects objects without prior boxes. Working with multi-scale features, the designed dual-scale residual unit means the dual-scale detectors no longer run independently: the second-scale detector learns the residual of the first. Dubox also enhances heuristic guidance, further enabling the first-scale detector to maximize the detection of small targets and the second to detect objects that cannot be identified by the first. Besides, for each scale detector, the new classification-regression progressive strapped loss frees our process from prior boxes. Integrating these strategies, our detection algorithm achieves excellent performance in terms of both speed and accuracy. Extensive experiments on the VOC and COCO object detection benchmarks confirm the effectiveness of this algorithm. | The multi-scale detection algorithm only needs a fixed-scale input and detects objects of diverse sizes. YOLOv3 @cite_2 and RetinaNet @cite_16 have fixed input sizes and detect objects in parallel at various scales using multiple detectors. In general, detectors at the lower layers detect small targets, while those at the upper layers more easily identify large objects. This is a heuristic-guided strategy. The addition of the anchor design further strengthens this guidance. However, because multiple levels of detectors operate independently in parallel on each scale's features, there is no cooperation between them, resulting in a large amount of detection redundancy. 
Meanwhile, the common anchor design in these detectors dramatically increases the number of output channels and aggravates the computational burden. | {
"cite_N": [
"@cite_16",
"@cite_2"
],
"mid": [
"2743473392",
"2796347433"
],
"abstract": [
"The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: this https URL",
"We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that's pretty swell. It's a little bigger than last time but more accurate. It's still fast though, don't worry. At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 mAP@50 in 51 ms on a Titan X, compared to 57.5 mAP@50 in 198 ms by RetinaNet, similar performance but 3.8x faster. As always, all the code is online at this https URL"
]
} |
1904.06950 | 2938206182 | Recently, deep models have been successfully applied in several applications, especially with low-level representations. However, sparse, noisy samples and structured domains (with multiple objects and interactions) are some of the open challenges in most deep models. Column Networks, a deep architecture, can succinctly capture such domain structure and interactions, but may still be prone to sub-optimal learning from sparse and noisy samples. Inspired by the success of human-advice guided learning in AI, especially in data-scarce domains, we propose Knowledge-augmented Column Networks that leverage human advice knowledge for better learning with noisy sparse samples. Our experiments demonstrate that our approach leads to either superior overall performance or faster convergence (i.e., both effective and efficient). | The idea of several processing layers learning increasingly complex abstractions of the data was initiated by the perceptron model @cite_29 and further strengthened by the advent of the back-propagation algorithm @cite_18 . A deep architecture was proposed by @cite_0 and has since been adapted for different problems across the entire spectrum of domains, such as Atari games via deep reinforcement learning @cite_16 , sentiment classification @cite_47 and image super-resolution @cite_38 . | {
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_29",
"@cite_0",
"@cite_47",
"@cite_16"
],
"mid": [
"54257720",
"2310919327",
"2040870580",
"2163605009",
"22861983",
"1757796397"
],
"abstract": [
"We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage.",
"",
"",
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.",
"The exponential increase in the availability of online reviews and recommendations makes sentiment classification an interesting topic in academic and industrial research. Reviews can span so many different domains that it is difficult to gather annotated training data for all of them. Hence, this paper studies the problem of domain adaptation for sentiment classifiers, hereby a system is trained on labeled reviews from one source domain but is meant to be deployed on another. We propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion. Sentiment classifiers trained with this high-level feature representation clearly outperform state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products. Furthermore, this method scales well and allowed us to successfully perform domain adaptation on a larger industrial-strength dataset of 22 domains.",
"We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them."
]
} |
1904.06950 | 2938206182 | Recently, deep models have been successfully applied in several applications, especially with low-level representations. However, sparse, noisy samples and structured domains (with multiple objects and interactions) are some of the open challenges in most deep models. Column Networks, a deep architecture, can succinctly capture such domain structure and interactions, but may still be prone to sub-optimal learning from sparse and noisy samples. Inspired by the success of human-advice guided learning in AI, especially in data-scarce domains, we propose Knowledge-augmented Column Networks that leverage human advice knowledge for better learning with noisy sparse samples. Our experiments demonstrate that our approach leads to either superior overall performance or faster convergence (i.e., both effective and efficient). | Column networks transform relational structures into a deep architecture in a principled manner and are designed especially for collective classification tasks @cite_2 . The architecture and formulation of the column network are suited for adapting it to the advice framework. The GraphSAGE algorithm @cite_9 shares similarities with column networks, since both architectures operate by aggregating neighborhood information, but differs in the way the aggregation is performed. Graph convolutional networks @cite_17 are another architecture that operates very similarly to CLN, again differing in the aggregation method. @cite_41 presents a method of incorporating constraints, first-order logic statements with fuzzy semantics, as a regularization term in a neural model, which can be extended to collective classification problems. While it is similar in spirit to our proposed approach, it differs in its representation and problem setup. | {
"cite_N": [
"@cite_41",
"@cite_9",
"@cite_17",
"@cite_2"
],
"mid": [
"2201744460",
"2962767366",
"2519887557",
"2963920355"
],
"abstract": [
"Abstract This paper proposes a unified approach to learning from constraints, which integrates the ability of classical machine learning techniques to learn from continuous feature-based representations with the ability of reasoning using higher-level semantic knowledge typical of Statistical Relational Learning. Learning tasks are modeled in the general framework of multi-objective optimization, where a set of constraints must be satisfied in addition to the traditional smoothness regularization term. The constraints translate First Order Logic formulas, which can express learning-from-example supervisions and general prior knowledge about the environment by using fuzzy logic. By enforcing the constraints also on the test set, this paper presents a natural extension of the framework to perform collective classification. Interestingly, the theory holds for both the case of data represented by feature vectors and the case of data simply expressed by pattern identifiers, thus extending classic kernel machines and graph regularization, respectively. This paper also proposes a probabilistic interpretation of the proposed learning scheme, and highlights intriguing connections with probabilistic approaches like Markov Logic Networks. Experimental results on classic benchmarks provide clear evidence of the remarkable improvements that are obtained with respect to related approaches.",
"Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.",
"We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.",
"Relational learning deals with data that are characterized by relational structures. An important task is collective classification, which is to jointly classify networked objects. While it holds a great promise to produce a better accuracy than non-collective classifiers, collective classification is computationally challenging and has not leveraged on the recent breakthroughs of deep learning. We present Column Network (CLN), a novel deep learning model for collective classification in multi-relational domains. CLN has many desirable theoretical properties: (i) it encodes multi-relations between any two instances; (ii) it is deep and compact, allowing complex functions to be approximated at the network level with a small set of free parameters; (iii) local and relational features are learned simultaneously; (iv) long-range, higher-order dependencies between instances are supported naturally; and (v) crucially, learning and inference are efficient with linear complexity in the size of the network and the number of relations. We evaluate CLN on multiple real-world applications: (a) delay prediction in software projects, (b) PubMed Diabetes publication classification and (c) film genre classification. In all of these applications, CLN demonstrates a higher accuracy than state-of-the-art rivals."
]
} |
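The Column Network, GraphSAGE, and GCN abstracts above all share one core operation: each node's representation is updated by aggregating its neighbours' representations. A minimal sketch of one mean-aggregation layer in that spirit (the 4-node graph, weights, and nonlinearity here are hypothetical, not taken from any of the cited papers):

```python
import numpy as np

rng = np.random.default_rng(0)

features = rng.normal(size=(4, 3))            # 4 nodes, 3 input features
adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}  # toy undirected graph

W_self = rng.normal(size=(3, 2))   # transforms the node's own features
W_neigh = rng.normal(size=(3, 2))  # transforms the neighbourhood mean

def mean_agg_layer(h, adj, W_self, W_neigh):
    """Combine each node's features with the mean of its neighbours'."""
    out = np.zeros((h.shape[0], W_self.shape[1]))
    for v, neighbours in adj.items():
        neigh_mean = h[neighbours].mean(axis=0)
        out[v] = np.tanh(h[v] @ W_self + neigh_mean @ W_neigh)
    return out

h1 = mean_agg_layer(features, adj, W_self, W_neigh)
print(h1.shape)  # (4, 2)
```

Stacking such layers lets information flow over longer paths in the graph, which is how these architectures capture the higher-order dependencies the CLN abstract mentions; the papers differ mainly in how the aggregation itself is performed.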
1904.06950 | 2938206182 | Recently, deep models have been successfully applied in several applications, especially with low-level representations. However, sparse, noisy samples and structured domains (with multiple objects and interactions) are some of the open challenges in most deep models. Column Networks, a deep architecture, can succinctly capture such domain structure and interactions, but may still be prone to sub-optimal learning from sparse and noisy samples. Inspired by the success of human-advice guided learning in AI, especially in data-scarce domains, we propose Knowledge-augmented Column Networks that leverage human advice knowledge for better learning with noisy sparse samples. Our experiments demonstrate that our approach leads to either superior overall performance or faster convergence (i.e., both effective and efficient). | Several recent approaches aim to make deep architectures robust to label noise. They include (1.) learning from easy samples (w small loss) by using MentorNets which are neural architectures that estimate curriculum ( importance weight on samples) @cite_33 , (2.) noise-robust loss function via additional noise adaptation layers @cite_44 or via multiplicative modifiers over the error network parameters @cite_36 and (3) introduction of a regularizer in the loss function for smoothing in presence of adversarial randomizations on the distribution of the response variable @cite_6 . | {
"cite_N": [
"@cite_44",
"@cite_36",
"@cite_33",
"@cite_6"
],
"mid": [
"2752971446",
"2964292098",
"2963081269",
"2964159205"
],
"abstract": [
"The availability of large datsets has enabled neural networks to achieve impressive recognition results. However, the presence of inaccurate class labels is known to deteriorate the performance of even the best classifiers in a broad range of classification problems. Noisy labels also tend to be more harmful than noisy attributes. When the observed label is noisy, we can view the correct label as a latent random variable and model the noise processes by a communication channel with unknown parameters. Thus we can apply the EM algorithm to find the parameters of both the network and the noise and to estimate the correct label. In this study we present a neural-network approach that optimizes the same likelihood function as optimized by the EM algorithm. The noise is explicitly modeled by an additional softmax layer that connects the correct labels to the noisy ones. This scheme is then extended to the case where the noisy labels are dependent on the features in addition to the correct labels. Experimental results demonstrate that this approach outperforms previous methods.",
"We present a theoretically grounded approach to train deep neural networks, including recurrent networks, subject to class-dependent label noise. We propose two procedures for loss correction that are agnostic to both application domain and network architecture. They simply amount to at most a matrix inversion and multiplication, provided that we know the probability of each class being corrupted into another. We further show how one can estimate these probabilities, adapting a recent technique for noise estimation to the multi-class setting, and thus providing an end-to-end framework. Extensive experiments on MNIST, IMDB, CIFAR-10, CIFAR-100 and a large scale dataset of clothing images employing a diversity of architectures — stacking dense, convolutional, pooling, dropout, batch normalization, word embedding, LSTM and residual layers — demonstrate the noise robustness of our proposals. Incidentally, we also prove that, when ReLU is the only non-linearity, the loss curvature is immune to class-dependent label noise.",
"",
"We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only “virtually” adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10."
]
} |
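The loss-correction abstracts above assume a known class-corruption matrix T. A small sketch of the "forward" correction idea, in which the model's clean-class posterior is pushed through T before taking the cross-entropy; the 2-class T and the example posterior below are made up for illustration:

```python
import numpy as np

# T[i, j] = P(observed noisy label j | true label i); hypothetical values.
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def forward_corrected_nll(p_clean, noisy_label):
    """Negative log-likelihood of the observed noisy label under T."""
    p_noisy = T.T @ p_clean          # distribution over noisy labels
    return -np.log(p_noisy[noisy_label])

p = np.array([0.95, 0.05])           # model is confident in class 0
# A noisy "1" is cheaper to explain than under the uncorrected loss,
# because T admits that true-0 samples flip to label 1 with prob 0.1.
corrected = forward_corrected_nll(p, 1)
uncorrected = -np.log(p[1])
print(corrected, uncorrected)
```

The corrected loss penalizes a confident clean prediction less when the observed label is plausibly just noise, which is the mechanism behind the noise-robustness claims in the cited work.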
1904.06646 | 2937220922 | A general Intrusion Detection System (IDS) fundamentally acts based on an Anomaly Detection System (ADS) or a combination of anomaly detection and signature-based methods, gathering and analyzing observations and reporting possible suspicious cases to a system administrator or the other users for further investigation. One of the notorious challenges which even the state-of-the-art ADS and IDS have not overcome is the possibility of a very high false alarms rate. Especially in very large and complex system settings, the amount of low-level alarms easily overwhelms administrators and increases their tendency to ignore alerts.We can group the existing false alarm mitigation strategies into two main families: The first group covers the methods directly customized and applied toward higher quality anomaly scoring in ADS. The second group includes approaches utilized in the related contexts as a filtering method toward decreasing the possibility of false alarm rates.Given the lack of a comprehensive study regarding possible ways to mitigate the false alarm rates, in this paper, we review the existing techniques for false alarm mitigation in ADS and present the pros and cons of each technique. We also study a few promising techniques applied in the signature-based IDS and other related contexts like commercial Security Information and Event Management (SIEM) tools, which are applicable and promising in the ADS context.Finally, we conclude with some directions for future research. | For a long time, anomaly detection has been the topic of many surveys and books. A considerable amount of research on outlier and anomaly detection comes from statistics contexts like @cite_35 @cite_83 @cite_42 . 
Afterward, some other studies from the computer science field reviewed and surveyed anomaly detection concepts with varying attention to computational aspects @cite_78 @cite_62 @cite_88 , most of which are covered in the comprehensive survey by Chandola @cite_60 , which deeply analyzed the pros and cons of anomaly detection methods. Thereafter, further studies have collected and reviewed state-of-the-art methods in various contexts. For example, @cite_89 thoroughly analyzed and categorized time-series anomaly detection methods based on their fundamental strategy and the type of data taken as input. @cite_15 provides a comprehensive survey of deep learning-based methods for cyber-intrusion detection. A broad review of deep anomaly detection techniques for fraud detection is presented by @cite_40 , and Internet of Things (IoT) related anomaly detection has been reviewed by @cite_75 . | {
"cite_N": [
"@cite_35",
"@cite_62",
"@cite_78",
"@cite_60",
"@cite_42",
"@cite_89",
"@cite_40",
"@cite_83",
"@cite_88",
"@cite_15",
"@cite_75"
],
"mid": [
"2129249398",
"2007087405",
"2097627964",
"",
"2148610006",
"2026493302",
"2561283532",
"",
"2137130182",
"2756489700",
"2964248614"
],
"abstract": [
"1. Introduction. 2. Simple Regression. 3. Multiple Regression. 4. The Special Case of One-Dimensional Location. 5. Algorithms. 6. Outlier Diagnostics. 7. Related Statistical Techniques. References. Table of Data Sets. Index.",
"As advances in networking technology help to connect the distant corners of the globe and as the Internet continues to expand its influence as a medium for communications and commerce, the threat from spammers, attackers and criminal enterprises has also grown accordingly. It is the prevalence of such threats that has made intrusion detection systems-the cyberspace's equivalent to the burglar alarm-join ranks with firewalls as one of the fundamental technologies for network security. However, today's commercially available intrusion detection systems are predominantly signature-based intrusion detection systems that are designed to detect known attacks by utilizing the signatures of those attacks. Such systems require frequent rule-base updates and signature updates, and are not capable of detecting unknown attacks. In contrast, anomaly detection systems, a subset of intrusion detection systems, model the normal system network behavior which enables them to be extremely effective in finding and foiling both known as well as unknown or ''zero day'' attacks. While anomaly detection systems are attractive conceptually, a host of technological problems need to be overcome before they can be widely adopted. These problems include: high false alarm rate, failure to scale to gigabit speeds, etc. In this paper, we provide a comprehensive survey of anomaly detection systems and hybrid intrusion detection systems of the recent past and present. We also discuss recent technological trends in anomaly detection and identify open problems and challenges in this area.",
"Data that appear to have different characteristics than the rest of the population are called outliers. Identifying outliers from huge data repositories is a very complex task called outlier mining. Outlier mining has been akin to finding needles in a haystack. However, outlier mining has a number of practical applications in areas such as fraud detection, network intrusion detection, and identification of competitor and emerging business trends in e-commerce. This survey discuses practical applications of outlier mining, and provides a taxonomy for categorizing related mining techniques. A comprehensive review of these techniques with their advantages and disadvantages along with some current research issues are provided.",
"",
"Existing studies in data mining mostly focus on finding patterns in large datasets and further using it for organizational decision making. However, finding such exceptions and outliers has not yet received as much attention in the data mining field as some other topics have, such as association rules, classification and clustering. Thus, this paper describes the performance of control chart, linear regression, and Manhattan distance techniques for outlier detection in data mining. Experimental studies show that outlier detection technique using control chart is better than the technique modeled from linear regression because the number of outlier data detected by control chart is smaller than linear regression. Further, experimental studies shows that Manhattan distance technique outperformed compared with the other techniques when the threshold values increased.",
"In the statistics community, outlier detection for time series data has been studied for decades. Recently, with advances in hardware and software technology, there has been a large body of work on temporal outlier detection from a computational perspective within the computer science community. In particular, advances in hardware technology have enabled the availability of various forms of temporal data collection mechanisms, and advances in software technology have enabled a variety of data management mechanisms. This has fueled the growth of different kinds of data sets such as data streams, spatio-temporal data, distributed streams, temporal networks, and time series data, generated by a multitude of applications. There arises a need for an organized and detailed study of the work done in the area of outlier detection with respect to such temporal datasets. In this survey, we provide a comprehensive and structured overview of a large set of interesting outlier definitions for various forms of temporal data, novel techniques, and application scenarios in which specific definitions and techniques have been widely used.",
"Credit card is one of the popular modes of payment for electronic transactions in many developed and developing countries. Invention of credit cards has made online transactions seamless, easier, comfortable and convenient. However, it has also provided new fraud opportunities for criminals, and in turn, increased fraud rate. The global impact of credit card fraud is alarming, millions of US dollars have been lost by many companies and individuals. Furthermore, cybercriminals are innovating sophisticated techniques on a regular basis, hence, there is an urgent task to develop improved and dynamic techniques capable of adapting to rapidly evolving fraudulent patterns. Achieving this task is very challenging, primarily due to the dynamic nature of fraud and also due to lack of dataset for researchers. This paper presents a review of improved credit card fraud detection techniques. Precisely, this paper focused on recent Machine Learning based and Nature Inspired based credit card fraud detection techniques proposed in literature. This paper provides a picture of recent trend in credit card fraud detection. Moreover, this review outlines some limitations and contributions of existing credit card fraud detection techniques, it also provides necessary background information for researchers in this domain. Additionally, this review serves as a guide and stepping stone for financial institutions and individuals seeking for new and effective credit card fraud detection techniques.",
"",
"Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise due to mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error or simply through natural deviations in populations. Their detection can identify system faults and fraud before they escalate with potentially catastrophic consequences. It can identify errors and remove their contaminating effect on the data set and as such to purify the data for processing. The original outlier detection methods were arbitrary but now, principled and systematic techniques are used, drawn from the full gamut of Computer Science and Statistics. In this paper, we introduce a survey of contemporary techniques for outlier detection. We identify their respective motivations and distinguish their advantages and disadvantages in a comparative review.",
"A great deal of attention has been given to deep learning over the past several years, and new deep learning techniques are emerging with improved functionality. Many computer and network applications actively utilize such deep learning algorithms and report enhanced performance through them. In this study, we present an overview of deep learning methodologies, including restricted Bolzmann machine-based deep belief network, deep neural network, and recurrent neural network, as well as the machine learning techniques relevant to network anomaly detection. In addition, this article introduces the latest work that employed deep learning techniques with the focus on network anomaly detection through the extensive literature survey. We also discuss our local experiments showing the feasibility of the deep learning approach to network traffic analysis.",
"In the era of the Internet of Things (IoT), an enormous amount of sensing devices collect and or generate various sensory data over time for a wide range of fields and applications. Based on the nature of the application, these devices will result in big or fast real-time data streams. Applying analytics over such data streams to discover new information, predict future insights, and make control decisions is a crucial process that makes IoT a worthy paradigm for businesses and a quality-of-life improving technology. In this paper, we provide a thorough overview on using a class of advanced machine learning techniques, namely deep learning (DL), to facilitate the analytics and learning in the IoT domain. We start by articulating IoT data characteristics and identifying two major treatments for IoT data from a machine learning perspective, namely IoT big data analytics and IoT streaming data analytics. We also discuss why DL is a promising approach to achieve the desired analytics in these types of data and applications. The potential of using emerging DL techniques for IoT data analytics are then discussed, and its promises and challenges are introduced. We present a comprehensive background on different DL architectures and algorithms. We also analyze and summarize major reported research attempts that leveraged DL in the IoT domain. The smart IoT devices that have incorporated DL in their intelligence background are also discussed. DL implementation approaches on the fog and cloud centers in support of IoT applications are also surveyed. Finally, we shed light on some challenges and potential directions for future research. At the end of each section, we highlight the lessons learned based on our experiments and review of the recent literature."
]
} |
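One of the abstracts above compares control-chart, regression, and Manhattan-distance techniques for outlier detection. As a toy illustration of the simplest of these, here is a 3-sigma control-chart rule on a synthetic series with one injected anomaly; the data, seed, and threshold are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
series = rng.normal(loc=10.0, scale=1.0, size=200)
series[50] = 25.0                    # inject one obvious anomaly

# Flag points further than 3 standard deviations from the mean.
mu, sigma = series.mean(), series.std()
flags = np.abs(series - mu) > 3 * sigma
print(np.flatnonzero(flags))
```

Rules of this kind are cheap but brittle: the injected outlier inflates the estimated mean and standard deviation it is judged against, which is one reason the surveys above turn to robust statistics and learned models.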
1904.06505 | 2618902759 | Objective assessment of image quality is fundamentally important in many image processing tasks. In this paper, we focus on learning blind image quality assessment (BIQA) models, which predict the quality of a digital image with no access to its original pristine-quality counterpart as reference. One of the biggest challenges in learning BIQA models is the conflict between the gigantic image space (which is in the dimension of the number of image pixels) and the extremely limited reliable ground truth data for training. Such data are typically collected via subjective testing, which is cumbersome, slow, and expensive. Here, we first show that a vast amount of reliable training data in the form of quality-discriminable image pairs (DIPs) can be obtained automatically at low cost by exploiting large-scale databases with diverse image content. We then learn an opinion-unaware BIQA (OU-BIQA, meaning that no subjective opinions are used for training) model using RankNet, a pairwise learning-to-rank (L2R) algorithm, from millions of DIPs, each associated with a perceptual uncertainty level, leading to a DIP inferred quality (dipIQ) index. Extensive experiments on four benchmark IQA databases demonstrate that dipIQ outperforms the state-of-the-art OU-BIQA models. The robustness of dipIQ is also significantly improved as confirmed by the group MAximum Differentiation competition method. Furthermore, we extend the proposed framework by learning models with ListNet (a listwise L2R algorithm) on quality-discriminable image lists (DIL). The resulting DIL inferred quality index achieves an additional performance gain. | We first review existing BIQA models according to their two-stage structure: feature extraction and quality prediction model learning. We then review typical L2R algorithms. Details of RankNet @cite_75 are provided in . | {
"cite_N": [
"@cite_75"
],
"mid": [
"2143331230"
],
"abstract": [
"We investigate using gradient descent methods for learning ranking functions; we propose a simple probabilistic cost function, and we introduce RankNet, an implementation of these ideas using a neural network to model the underlying ranking function. We present test results on toy data and on data from a commercial internet search engine."
]
} |
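The RankNet abstract above describes a probabilistic pairwise cost: the modelled probability that item i outranks item j is the logistic of the score difference, trained with cross-entropy against the known pair ordering. A minimal sketch with scalar scores (in the paper these scores come from a neural network; the numbers here are illustrative):

```python
import numpy as np

def ranknet_loss(s_i, s_j, p_target):
    """Cross-entropy between the target pair preference and sigmoid(s_i - s_j)."""
    p_ij = 1.0 / (1.0 + np.exp(-(s_i - s_j)))
    return -(p_target * np.log(p_ij) + (1 - p_target) * np.log(1 - p_ij))

# Item i is labelled the higher-quality one (p_target = 1), so the loss
# falls as the model scores i further above j.
loss_good = ranknet_loss(2.0, 0.0, 1.0)   # model agrees with the label
loss_bad = ranknet_loss(0.0, 2.0, 1.0)    # model ranks the pair backwards
print(loss_good, loss_bad)
```

This pairwise form is what makes RankNet a natural fit for the quality-discriminable image pairs in the dipIQ abstract: each DIP supplies exactly the ordered pair the loss needs, without requiring absolute quality scores.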
1904.06505 | 2618902759 | Objective assessment of image quality is fundamentally important in many image processing tasks. In this paper, we focus on learning blind image quality assessment (BIQA) models, which predict the quality of a digital image with no access to its original pristine-quality counterpart as reference. One of the biggest challenges in learning BIQA models is the conflict between the gigantic image space (which is in the dimension of the number of image pixels) and the extremely limited reliable ground truth data for training. Such data are typically collected via subjective testing, which is cumbersome, slow, and expensive. Here, we first show that a vast amount of reliable training data in the form of quality-discriminable image pairs (DIPs) can be obtained automatically at low cost by exploiting large-scale databases with diverse image content. We then learn an opinion-unaware BIQA (OU-BIQA, meaning that no subjective opinions are used for training) model using RankNet, a pairwise learning-to-rank (L2R) algorithm, from millions of DIPs, each associated with a perceptual uncertainty level, leading to a DIP inferred quality (dipIQ) index. Extensive experiments on four benchmark IQA databases demonstrate that dipIQ outperforms the state-of-the-art OU-BIQA models. The robustness of dipIQ is also significantly improved as confirmed by the group MAximum Differentiation competition method. Furthermore, we extend the proposed framework by learning models with ListNet (a listwise L2R algorithm) on quality-discriminable image lists (DIL). The resulting DIL inferred quality index achieves an additional performance gain. | From the feature extraction point of view, three types of knowledge can be exploited to craft useful features for BIQA. The first is knowledge about our visual world that summarizes the statistical regularities of undistorted images. 
The second is knowledge about degradation, which can then be explicitly taken into account to build features for particular artifacts, such as blocking @cite_57 @cite_36 @cite_72 , blurring @cite_16 @cite_70 @cite_21 and ringing @cite_82 @cite_39 @cite_1 . The third is knowledge of the human visual system (HVS) @cite_25 , namely perceptual models derived from visual physiological and psychophysical studies @cite_45 @cite_90 @cite_11 @cite_68 . Natural scene statistics (NSS), which seek to capture the natural statistical behavior of images, embody the three-fold modeling in a rather elegant way @cite_60 . NSS can be extracted directly in the spatial domain or in transform domains such as DFT, DCT, and wavelets @cite_78 @cite_35 . | {
"cite_N": [
"@cite_35",
"@cite_11",
"@cite_78",
"@cite_60",
"@cite_36",
"@cite_70",
"@cite_90",
"@cite_21",
"@cite_1",
"@cite_39",
"@cite_57",
"@cite_68",
"@cite_72",
"@cite_45",
"@cite_16",
"@cite_25",
"@cite_82"
],
"mid": [
"2132984323",
"2085927826",
"2107790757",
"",
"2106775624",
"2120038204",
"2170319235",
"2163084851",
"2130172650",
"2143901157",
"2113045514",
"2144038638",
"2118344463",
"2116360511",
"2111992298",
"",
"2145300615"
],
"abstract": [
"Multiresolution representations are effective for analyzing the information content of images. The properties of the operator which approximates a signal at a given resolution were studied. It is shown that the difference of information between the approximation of a signal at the resolutions 2 sup j+1 and 2 sup j (where j is an integer) can be extracted by decomposing this signal on a wavelet orthonormal basis of L sup 2 (R sup n ), the vector space of measurable, square-integrable n-dimensional functions. In L sup 2 (R), a wavelet orthonormal basis is a family of functions which is built by dilating and translating a unique function psi (x). This decomposition defines an orthogonal multiresolution representation called a wavelet representation. It is computed with a pyramidal algorithm based on convolutions with quadrature mirror filters. Wavelet representation lies between the spatial and Fourier domains. For images, the wavelet representation differentiates several spatial orientations. The application of this representation to data compression in image coding, texture discrimination and fractal analysis is discussed. >",
"A number of recent attempts have been made to describe early sensory coding in terms of a general information processing strategy. In this paper, two strategies are contrasted. Both strategies take advantage of the redundancy in the environment to produce more effective representations. The first is described as a \"compact\" coding scheme. A compact code performs a transform that allows the input to be represented with a reduced number of vectors (cells) with minimal RMS error. This approach has recently become popular in the neural network literature and is related to a process called Principal Components Analysis (PCA). A number of recent papers have suggested that the optimal compact code for representing natural scenes will have units with receptive field profiles much like those found in the retina and primary visual cortex. However, in this paper, it is proposed that compact coding schemes are insufficient to account for the receptive field properties of cells in the mammalian visual pathway. In contrast, it is proposed that the visual system is near to optimal in representing natural scenes only if optimality is defined in terms of \"sparse distributed\" coding. In a sparse distributed code, all cells in the code have an equal response probability across the class of images but have a low response probability for any single image. In such a code, the dimensionality is not reduced. Rather, the redundancy of the input is transformed into the redundancy of the firing pattern of cells. It is proposed that the signature for a sparse code is found in the fourth moment of the response distribution (i.e., the kurtosis). In measurements with 55 calibrated natural scenes, the kurtosis was found to peak when the bandwidths of the visual code matched those of cells in the mammalian visual cortex. 
Codes resembling \"wavelet transforms\" are proposed to be effective because the response histograms of such codes are sparse (i.e., show high kurtosis) when presented with natural scenes. It is proposed that the structure of the image that allows sparse coding is found in the phase spectrum of the image. It is suggested that natural scenes, to a first approximation, can be considered as a sum of self-similar local functions (the inverse of a wavelet). Possible reasons for why sensory systems would evolve toward sparse coding are presented.",
"One of the major drawbacks of orthogonal wavelet transforms is their lack of translation invariance: the content of wavelet subbands is unstable under translations of the input signal. Wavelet transforms are also unstable with respect to dilations of the input signal and, in two dimensions, rotations of the input signal. The authors formalize these problems by defining a type of translation invariance called shiftability. In the spatial domain, shiftability corresponds to a lack of aliasing; thus, the conditions under which the property holds are specified by the sampling theorem. Shiftability may also be applied in the context of other domains, particularly orientation and scale. Jointly shiftable transforms that are simultaneously shiftable in more than one domain are explored. Two examples of jointly shiftable transforms are designed and implemented: a 1-D transform that is jointly shiftable in position and scale, and a 2-D transform that is jointly shiftable in position and orientation. The usefulness of these image representations for scale-space analysis, stereo disparity measurement, and image enhancement is demonstrated. >",
"",
"The objective measurement of blocking artifacts plays an important role in the design, optimization, and assessment of image and video coding systems. We propose a new approach that can blindly measure blocking artifacts in images without reference to the originals. The key idea is to model the blocky image as a non-blocky image interfered with a pure blocky signal. The task of the blocking effect measurement algorithm is then to detect and evaluate the power of the blocky signal. The proposed approach has the flexibility to integrate human visual system features such as the luminance and the texture masking effects.",
"Humans are able to detect blurring of visual images, but the mechanism by which they do so is not clear. A traditional view is that a blurred image looks \"unnatural\" because of the reduction in energy (either globally or locally) at high frequencies. In this paper, we propose that the disruption of local phase can provide an alternative explanation for blur perception. We show that precisely localized features such as step edges result in strong local phase coherence structures across scale and space in the complex wavelet transform domain, and blurring causes loss of such phase coherence. We propose a technique for coarse-to-fine phase prediction of wavelet coefficients, and observe that (1) such predictions are highly effective in natural images, (2) phase coherence increases with the strength of image features, and (3) blurring disrupts the phase coherence relationship in images. We thus lay the groundwork for a new theory of perceptual blur estimation, as well as a variety of algorithms for restoration and manipulation of photographic images.",
"",
"A no-reference objective sharpness metric detecting both blur and noise is proposed in this paper. This metric is based on the local gradients of the image and does not require any edge detection. Its value drops either when the test image becomes blurred or corrupted by random noise. It can be thought of as an indicator of the signal to noise ratio of the image. Experiments using synthetic, natural, and compressed images are presented to demonstrate the effectiveness and robustness of this metric. Its statistical properties are also provided.",
"A novel no-reference metric that can automatically quantify ringing annoyance in compressed images is presented. In the first step a recently proposed ringing region detection method extracts the regions which are likely to be impaired by ringing artifacts. To quantify ringing annoyance in these detected regions, the visibility of ringing artifacts is estimated, and is compared to the activity of the corresponding local background. The local annoyance score calculated for each individual ringing region is averaged over all ringing regions to yield a ringing annoyance score for the whole image. A psychovisual experiment is carried out to measure ringing annoyance subjectively and to validate the proposed metric. The performance of our metric is compared to existing alternatives in literature and shows to be highly consistent with subjective data.",
"Measurement of image or video quality is crucial for many image-processing algorithms, such as acquisition, compression, restoration, enhancement, and reproduction. Traditionally, image quality assessment (QA) algorithms interpret image quality as similarity with a \"reference\" or \"perfect\" image. The obvious limitation of this approach is that the reference image or video may not be available to the QA algorithm. The field of blind, or no-reference, QA, in which image quality is predicted without the reference image or video, has been largely unexplored, with algorithms focussing mostly on measuring the blocking artifacts. Emerging image and video compression technologies can avoid the dreaded blocking artifact by using various mechanisms, but they introduce other types of distortions, specifically blurring and ringing. In this paper, we propose to use natural scene statistics (NSS) to blindly measure the quality of images compressed by JPEG2000 (or any other wavelet based) image coder. We claim that natural scenes contain nonlinear dependencies that are disturbed by the compression process, and that this disturbance can be quantified and related to human perceptions of quality. We train and test our algorithm with data from human subjects, and show that reasonably comprehensive NSS models can help us in making blind, but accurate, predictions of quality. Our algorithm performs close to the limit imposed on useful prediction by the variability between human subjects.",
"A mew generalized block-edge impairment metric (GBIM) is presented in this paper as a quantitative distortion measure for blocking artifacts in digital video and image coding. This distortion measure does not require the original image sequence as a comparative reference, and is found to be consistent with subjective evaluation.",
"In recent years, there has been much interest in characterizing statistical properties of natural stimuli in order to better understand the design of perceptual systems. A fruitful approach has been to compare the processing of natural stimuli in real perceptual systems with that of ideal observers derived within the framework of Bayesian statistical decision theory. While this form of optimization theory has provided a deeper understanding of the information contained in natural stimuli as well as of the computational principles employed in perceptual systems, it does not directly consider the process of natural selection, which is ultimately responsible for design. Here we propose a formal framework for analysing how the statistics of natural stimuli and the process of natural selection interact to determine the design of perceptual systems. The framework consists of two complementary components. The first is a maximum fitness ideal observer, a standard Bayesian ideal observer with a utility function appropriate for natural selection. The second component is a formal version of natural selection based upon Bayesian statistical decision theory. Maximum fitness ideal observers and Bayesian natural selection are demonstrated in several examples. We suggest that the Bayesian approach is appropriate not only for the study of perceptual systems but also for the study of many other systems in biology.",
"Blocking artifacts continue to be among the most serious defects that occur in images and video streams compressed to low bit rates using block discrete cosine transform (DCT)-based compression standards (e.g., JPEG, MPEG, and H.263). It is of interest to be able to numerically assess the degree of blocking artifact in a visual signal, for example, in order to objectively determine the efficacy of a compression method, or to discover the quality of video content being delivered by a web server. We propose new methods for efficiently assessing, and subsequently reducing, the severity of blocking artifacts in compressed image bitstreams. The method is blind, and operates only in the DCT domain. Hence, it can be applied to unknown visual signals, and it is efficient since the signal need not be compressed or decompressed. In the algorithm, blocking artifacts are modeled as 2-D step functions. A fast DCT-domain algorithm extracts all parameters needed to detect the presence of, and estimate the amplitude of blocking artifacts, by exploiting several properties of the human vision system. Using the estimate of blockiness, a novel DCT-domain method is then developed which adaptively reduces detected blocking artifacts. Our experimental results show that the proposed method of measuring blocking artifacts is effective and stable across a wide variety of images. Moreover, the proposed blocking-artifact reduction method exhibits satisfactory performance as compared to other post-processing techniques. The proposed technique has a low computational cost hence can be used for real-time image video quality monitoring and control, especially in applications where it is desired that the image video data be processed directly in the DCT-domain.",
"",
"With the prevalence of digital cameras, the number of digital images increases quickly, which raises the demand for image quality assessment in terms of blur. Based on the edge type and sharpness analysis, using the Harr wavelet transform, a new blur detection scheme is proposed in this paper, which can determine whether an image is blurred or not and to what extent an image is blurred. Experimental results demonstrate the effectiveness of the proposed scheme.",
"",
"Ringing is an annoying artifact frequently encountered in low bit-rate transform and subband decomposition based compression of different media such as image, intra frame video and graphics. A mathematical morphology based post-processing algorithm is presented in this paper for image ringing artifact suppression. First, we use binary morphological operators to isolate the regions of an image where the ringing artifact is most prominent to the human visual system (HVS) while preserving genuine edges and other (high-frequency) fine details present in the image. Then, a gray-level morphological nonlinear smoothing filter is applied to the unmasked regions of the image under the filtering mask to eliminate ringing within this constraint region. To gauge the effectiveness of this approach, we propose an HVS compatible objective measure of the ringing artifact. Preliminary simulations indicate that the proposed method is capable of significantly reducing the ringing artifact on both subjective and objective basis."
]
} |
1904.06505 | 2618902759 | Objective assessment of image quality is fundamentally important in many image processing tasks. In this paper, we focus on learning blind image quality assessment (BIQA) models, which predict the quality of a digital image with no access to its original pristine-quality counterpart as reference. One of the biggest challenges in learning BIQA models is the conflict between the gigantic image space (which is in the dimension of the number of image pixels) and the extremely limited reliable ground truth data for training. Such data are typically collected via subjective testing, which is cumbersome, slow, and expensive. Here, we first show that a vast amount of reliable training data in the form of quality-discriminable image pairs (DIPs) can be obtained automatically at low cost by exploiting large-scale databases with diverse image content. We then learn an opinion-unaware BIQA (OU-BIQA, meaning that no subjective opinions are used for training) model using RankNet, a pairwise learning-to-rank (L2R) algorithm, from millions of DIPs, each associated with a perceptual uncertainty level, leading to a DIP inferred quality (dipIQ) index. Extensive experiments on four benchmark IQA databases demonstrate that dipIQ outperforms the state-of-the-art OU-BIQA models. The robustness of dipIQ is also significantly improved as confirmed by the group MAximum Differentiation competition method. Furthermore, we extend the proposed framework by learning models with ListNet (a listwise L2R algorithm) on quality-discriminable image lists (DIL). The resulting DIL inferred quality index achieves an additional performance gain. | In the spatial domain, edges are presumably the most important image features. The edge spread can be used to detect blurring @cite_41 @cite_93 , and the intensity variance in smooth regions close to edges can indicate ringing artifacts @cite_82 . 
Step edge detectors that operate at @math block boundaries measure the severity of discontinuities caused by JPEG compression @cite_57 . The sample entropy of intensity histograms is used to identify image anisotropy @cite_33 @cite_10 . The responses of image gradients and the Laplacian of Gaussian operators are jointly modeled to describe the destruction of statistical naturalness of images @cite_65 . The singular value decomposition of local image gradient matrices may provide a quantitative measure of image content @cite_17 . Mean-subtracted and contrast-normalized pixel value statistics have also been modeled using a generalized Gaussian distribution (GGD) @cite_48 @cite_79 @cite_63 @cite_42 , inspired by the adaptive gain control mechanism seen in neurons @cite_90 . | {
"cite_N": [
"@cite_33",
"@cite_41",
"@cite_48",
"@cite_90",
"@cite_42",
"@cite_65",
"@cite_17",
"@cite_57",
"@cite_79",
"@cite_63",
"@cite_93",
"@cite_10",
"@cite_82"
],
"mid": [
"2140094223",
"2112409076",
"1982471090",
"2170319235",
"1997974943",
"1977246677",
"2171125155",
"2113045514",
"2102166818",
"1975115580",
"2114338738",
"2068403421",
"2145300615"
],
"abstract": [
"We develop a no-reference image quality assessment (QA) algorithm that deploys a general regression neural network (GRNN). The new algorithm is trained on and successfully assesses image quality, relative to human subjectivity, across a range of distortion types. The features deployed for QA include the mean value of phase congruency image, the entropy of phase congruency image, the entropy of the distorted image, and the gradient of the distorted image. Image quality estimation is accomplished by approximating the functional relationship between these features and subjective mean opinion scores using a GRNN. Our experimental results show that the new method accords closely with human subjective judgment.",
"Blind image quality assessment refers to the problem of evaluating the visual quality of an image without any reference. It addresses a fundamental distinction between fidelity and quality, i.e. human vision system usually does not need any reference to determine the subjective quality of a target image. In this paper, we propose to appraise the image quality by three objective measures: edge sharpness level, random noise level and structural noise level. They jointly provide a heuristic approach of characterizing the most important aspects of visual quality. We investigate various mathematical tools (analytical, statistical and PDE-based) for accurately and robustly estimating those three levels. Extensive experiment results are used to justify the validity of our approach.",
"We propose a natural scene statistic-based distortion-generic blind no-reference (NR) image quality assessment (IQA) model that operates in the spatial domain. The new model, dubbed blind referenceless image spatial quality evaluator (BRISQUE) does not compute distortion-specific features, such as ringing, blur, or blocking, but instead uses scene statistics of locally normalized luminance coefficients to quantify possible losses of “naturalness” in the image due to the presence of distortions, thereby leading to a holistic measure of quality. The underlying features used derive from the empirical distribution of locally normalized luminances and products of locally normalized luminances under a spatial natural scene statistic model. No transformation to another coordinate frame (DCT, wavelet, etc.) is required, distinguishing it from prior NR IQA approaches. Despite its simplicity, we are able to show that BRISQUE is statistically better than the full-reference peak signal-to-noise ratio and the structural similarity index, and is highly competitive with respect to all present-day distortion-generic NR IQA algorithms. BRISQUE has very low computational complexity, making it well suited for real time applications. BRISQUE features may be used for distortion-identification as well. To illustrate a new practical application of BRISQUE, we describe how a nonblind image denoising algorithm can be augmented with BRISQUE in order to perform blind image denoising. Results show that BRISQUE augmentation leads to performance improvements over state-of-the-art methods. A software release of BRISQUE is available online: http: live.ece.utexas.edu research quality BRISQUE_release.zip for public use and evaluation.",
"",
"This paper addresses the problem of general-purpose No-Reference Image Quality Assessment (NR-IQA) with the goal of developing a real-time, cross-domain model that can predict the quality of distorted images without prior knowledge of non-distorted reference images and types of distortions present in these images. The contributions of our work are two-fold: first, the proposed method is highly efficient. NR-IQA measures are often used in real-time imaging or communication systems, therefore it is important to have a fast NR-IQA algorithm that can be used in these real-time applications. Second, the proposed method has the potential to be used in multiple image domains. Previous work on NR-IQA focus primarily on predicting quality of natural scene image with respect to human perception, yet, in other image domains, the final receiver of a digital image may not be a human. The proposed method consists of the following components: (1) a local feature extractor, (2) a global feature extractor and (3) a regression model. While previous approaches usually treat local feature extraction and regression model training independently, we propose a supervised method based on back-projection, which links the two steps by learning a compact set of filters which can be applied to local image patches to obtain discriminative local features. Using a small set of filters, the proposed method is extremely fast. We have tested this method on various natural scene and document image datasets and obtained state-of-the-art results.",
"Blind image quality assessment (BIQA) aims to evaluate the perceptual quality of a distorted image without information regarding its reference image. Existing BIQA models usually predict the image quality by analyzing the image statistics in some transformed domain, e.g., in the discrete cosine transform domain or wavelet domain. Though great progress has been made in recent years, BIQA is still a very challenging task due to the lack of a reference image. Considering that image local contrast features convey important structural information that is closely related to image perceptual quality, we propose a novel BIQA model that utilizes the joint statistics of two types of commonly used local contrast features: 1) the gradient magnitude (GM) map and 2) the Laplacian of Gaussian (LOG) response. We employ an adaptive procedure to jointly normalize the GM and LOG features, and show that the joint statistics of normalized GM and LOG features have desirable properties for the BIQA task. The proposed model is extensively evaluated on three large-scale benchmark databases, and shown to deliver highly competitive performance with state-of-the-art BIQA models, as well as with some well-known full reference image quality assessment models.",
"Across the field of inverse problems in image and video processing, nearly all algorithms have various parameters which need to be set in order to yield good results. In practice, usually the choice of such parameters is made empirically with trial and error if no “ground-truth” reference is available. Some analytical methods such as cross-validation and Stein's unbiased risk estimate (SURE) have been successfully used to set such parameters. However, these methods tend to be strongly reliant on restrictive assumptions on the noise, and also computationally heavy. In this paper, we propose a no-reference metric Q which is based upon singular value decomposition of local image gradient matrix, and provides a quantitative measure of true image content (i.e., sharpness and contrast as manifested in visually salient geometric features such as edges,) in the presence of noise and other disturbances. This measure 1) is easy to compute, 2) reacts reasonably to both blur and random noise, and 3) works well even when the noise is not Gaussian. The proposed measure is used to automatically and effectively set the parameters of two leading image denoising algorithms. Ample simulated and real data experiments support our claims. Furthermore, tests using the TID2008 database show that this measure correlates well with subjective quality evaluations for both blur and noise distortions.",
"A mew generalized block-edge impairment metric (GBIM) is presented in this paper as a quantitative distortion measure for blocking artifacts in digital video and image coding. This distortion measure does not require the original image sequence as a comparative reference, and is found to be consistent with subjective evaluation.",
"An important aim of research on the blind image quality assessment (IQA) problem is to devise perceptual models that can predict the quality of distorted images with as little prior knowledge of the images or their distortions as possible. Current state-of-the-art “general purpose” no reference (NR) IQA algorithms require knowledge about anticipated distortions in the form of training examples and corresponding human opinion scores. However we have recently derived a blind IQA model that only makes use of measurable deviations from statistical regularities observed in natural images, without training on human-rated distorted images, and, indeed without any exposure to distorted images. Thus, it is “completely blind.” The new IQA model, which we call the Natural Image Quality Evaluator (NIQE) is based on the construction of a “quality aware” collection of statistical features based on a simple and successful space domain natural scene statistic (NSS) model. These features are derived from a corpus of natural, undistorted images. Experimental results show that the new index delivers performance comparable to top performing NR IQA models that require training on large databases of human opinions of distorted images. A software release is available at http: live.ece.utexas.edu research quality niqe_release.zip.",
"We propose a highly unsupervised, training free, no reference image quality assessment (IQA) model that is based on the hypothesis that distorted images have certain latent characteristics that differ from those of “natural” or “pristine” images. These latent characteristics are uncovered by applying a “topic model” to visual words extracted from an assortment of pristine and distorted images. For the latent characteristics to be discriminatory between pristine and distorted images, the choice of the visual words is important. We extract quality-aware visual words that are based on natural scene statistic features [1]. We show that the similarity between the probability of occurrence of the different topics in an unseen image and the distribution of latent topics averaged over a large number of pristine natural images yields a quality measure. This measure correlates well with human difference mean opinion scores on the LIVE IQA database [2].",
"We present a full- and no-reference blur metric as well as a full-reference ringing metric. These metrics are based on an analysis of the edges and adjacent regions in an image and have very low computational complexity. As blur and ringing are typical artifacts of wavelet compression, the metrics are then applied to JPEG2000 coded images. Their perceptual significance is corroborated through a number of subjective experiments. The results show that the proposed metrics perform well over a wide range of image content and distortion levels. Potential applications include source coding optimization and network resource management.",
"Contrast distortion is often a determining factor in human perception of image quality, but little investigation has been dedicated to quality assessment of contrast-distorted images without assuming the availability of a perfect-quality reference image. In this letter, we propose a simple but effective method for no-reference quality assessment of contrast distorted images based on the principle of natural scene statistics (NSS). A large scale image database is employed to build NSS models based on moment and entropy features. The quality of a contrast-distorted image is then evaluated based on its unnaturalness characterized by the degree of deviation from the NSS models. Support vector regression (SVR) is employed to predict human mean opinion score (MOS) from multiple NSS features as the input. Experiments based on three publicly available databases demonstrate the promising performance of the proposed method.",
"Ringing is an annoying artifact frequently encountered in low bit-rate transform and subband decomposition based compression of different media such as image, intra frame video and graphics. A mathematical morphology based post-processing algorithm is presented in this paper for image ringing artifact suppression. First, we use binary morphological operators to isolate the regions of an image where the ringing artifact is most prominent to the human visual system (HVS) while preserving genuine edges and other (high-frequency) fine details present in the image. Then, a gray-level morphological nonlinear smoothing filter is applied to the unmasked regions of the image under the filtering mask to eliminate ringing within this constraint region. To gauge the effectiveness of this approach, we propose an HVS compatible objective measure of the ringing artifact. Preliminary simulations indicate that the proposed method is capable of significantly reducing the ringing artifact on both subjective and objective basis."
]
} |
1904.06505 | 2618902759 | Objective assessment of image quality is fundamentally important in many image processing tasks. In this paper, we focus on learning blind image quality assessment (BIQA) models, which predict the quality of a digital image with no access to its original pristine-quality counterpart as reference. One of the biggest challenges in learning BIQA models is the conflict between the gigantic image space (which is in the dimension of the number of image pixels) and the extremely limited reliable ground truth data for training. Such data are typically collected via subjective testing, which is cumbersome, slow, and expensive. Here, we first show that a vast amount of reliable training data in the form of quality-discriminable image pairs (DIPs) can be obtained automatically at low cost by exploiting large-scale databases with diverse image content. We then learn an opinion-unaware BIQA (OU-BIQA, meaning that no subjective opinions are used for training) model using RankNet, a pairwise learning-to-rank (L2R) algorithm, from millions of DIPs, each associated with a perceptual uncertainty level, leading to a DIP inferred quality (dipIQ) index. Extensive experiments on four benchmark IQA databases demonstrate that dipIQ outperforms the state-of-the-art OU-BIQA models. The robustness of dipIQ is also significantly improved as confirmed by the group MAximum Differentiation competition method. Furthermore, we extend the proposed framework by learning models with ListNet (a listwise L2R algorithm) on quality-discriminable image lists (DIL). The resulting DIL inferred quality index achieves an additional performance gain. | Statistical modeling in the wavelet domain resembles the early visual system @cite_45 , and natural images exhibit statistical regularities in the wavelet space. 
Specifically, it is widely acknowledged that the marginal distribution of wavelet coefficients of a natural image (regardless of content) has a sharp peak near zero and heavier than Gaussian tails. Therefore, statistics of raw @cite_61 @cite_87 @cite_12 @cite_46 and normalized @cite_51 @cite_62 wavelet coefficients, and wavelet coefficient correlations in the neighborhood @cite_39 @cite_47 @cite_56 @cite_58 @cite_54 can be individually or jointly modeled as image naturalness measurements. The phase information of wavelet coefficients, for example expressed as the local phase coherence, is exploited to describe the perception of blur @cite_70 and sharpness @cite_80 . | {
"cite_N": [
"@cite_61",
"@cite_47",
"@cite_62",
"@cite_87",
"@cite_70",
"@cite_54",
"@cite_39",
"@cite_56",
"@cite_45",
"@cite_80",
"@cite_46",
"@cite_58",
"@cite_51",
"@cite_12"
],
"mid": [
"2153582625",
"2129644086",
"1971155098",
"",
"2120038204",
"2061513831",
"2143901157",
"1981572319",
"2116360511",
"2117644767",
"2170947705",
"2052287864",
"2142731874",
"2163370434"
],
"abstract": [
"Reduced-reference (RR) image quality measures aim to predict the visual quality of distorted images with only partial information about the reference images. In this paper, we propose an RR quality assessment method based on a natural image statistic model in the wavelet transform domain. In particular, we observe that the marginal distribution of wavelet coefficients changes in different ways for different types of image distortions. To quantify such changes, we estimate the Kullback-Leibler distance between the marginal distributions of wavelet coefficients of the reference and distorted images. A generalized Gaussian model is employed to summarize the marginal distribution of wavelet coefficients of the reference image, so that only a relatively small number of RR features are needed for the evaluation of image quality. The proposed method is easy to implement and computationally efficient. In addition, we find that many well-known types of image distortion lead to significant changes in wavelet coefficient histograms, and thus are readily detectable by our measure. The algorithm is tested with subjective ratings of a large image database that contains images corrupted with a wide variety of distortion types.",
"Our approach to blind image quality assessment (IQA) is based on the hypothesis that natural scenes possess certain statistical properties which are altered in the presence of distortion, rendering them un-natural; and that by characterizing this un-naturalness using scene statistics, one can identify the distortion afflicting the image and perform no-reference (NR) IQA. Based on this theory, we propose an (NR) blind algorithm-the Distortion Identification-based Image Verity and INtegrity Evaluation (DIIVINE) index-that assesses the quality of a distorted image without need for a reference image. DIIVINE is based on a 2-stage framework involving distortion identification followed by distortion-specific quality assessment. DIIVINE is capable of assessing the quality of a distorted image across multiple distortion categories, as against most NR IQA algorithms that are distortion-specific in nature. DIIVINE is based on natural scene statistics which govern the behavior of natural images. In this paper, we detail the principles underlying DIIVINE, the statistical features extracted and their relevance to perception and thoroughly evaluate the algorithm on the popular LIVE IQA database. Further, we compare the performance of DIIVINE against leading full-reference (FR) IQA algorithms and demonstrate that DIIVINE is statistically superior to the often used measure of peak signal-to-noise ratio (PSNR) and statistically equivalent to the popular structural similarity index (SSIM). A software release of DIIVINE has been made available online: http: live.ece.utexas.edu research quality DIIVINE_release.zip for public use and evaluation.",
"Reduced-reference image quality assessment (RR-IQA) provides a practical solution for automatic image quality evaluations in various applications where only partial information about the original reference image is accessible. In this paper, we propose an RR-IQA method by estimating the structural similarity index (SSIM), which is a widely used full-reference (FR) image quality measure shown to be a good indicator of perceptual image quality. Specifically, we extract statistical features from a multiscale multiorientation divisive normalization transform and develop a distortion measure by following the philosophy in the construction of SSIM. We find an interesting linear relationship between the FR SSIM measure and our RR estimate when the image distortion type is fixed. A regression-by-discretization method is then applied to normalize our measure across image distortion types. We use six publicly available subject-rated databases to test the proposed RR-SSIM method, which shows strong correlations with both SSIM and subjective quality evaluations. Finally, we introduce the novel idea of partially repairing an image using RR features and use deblurring as an example to demonstrate its application.",
"",
"Humans are able to detect blurring of visual images, but the mechanism by which they do so is not clear. A traditional view is that a blurred image looks \"unnatural\" because of the reduction in energy (either globally or locally) at high frequencies. In this paper, we propose that the disruption of local phase can provide an alternative explanation for blur perception. We show that precisely localized features such as step edges result in strong local phase coherence structures across scale and space in the complex wavelet transform domain, and blurring causes loss of such phase coherence. We propose a technique for coarse-to-fine phase prediction of wavelet coefficients, and observe that (1) such predictions are highly effective in natural images, (2) phase coherence increases with the strength of image features, and (3) blurring disrupts the phase coherence relationship in images. We thus lay the groundwork for a new theory of perceptual blur estimation, as well as a variety of algorithms for restoration and manipulation of photographic images.",
"The goal of no-reference objective image quality assessment (NR-IQA) is to develop a computational model that can predict the human-perceived quality of distorted images accurately and automatically without any prior knowledge of reference images. Most existing NR-IQA approaches are distortion specific and are typically limited to one or two specific types of distortions. In most practical applications, however, information about the distortion type is not really available. In this paper, we propose a general-purpose NR-IQA approach based on visual codebooks. A visual codebook consisting of Gabor-filter-based local features extracted from local image patches is used to capture complex statistics of a natural image. The codebook encodes statistics by quantizing the feature space and accumulating histograms of patch appearances. This method does not assume any specific types of distortions; however, when evaluating images with a particular type of distortion, it does require examples with the same or similar distortion for training. Experimental results demonstrate that the predicted quality score using our method is consistent with human-perceived image quality. The proposed method is comparable to state-of-the-art general-purpose NR-IQA methods and outperforms the full-reference image quality metrics, peak signal-to-noise ratio and structural similarity index on the Laboratory for Image and Video Engineering IQA database.",
"Measurement of image or video quality is crucial for many image-processing algorithms, such as acquisition, compression, restoration, enhancement, and reproduction. Traditionally, image quality assessment (QA) algorithms interpret image quality as similarity with a \"reference\" or \"perfect\" image. The obvious limitation of this approach is that the reference image or video may not be available to the QA algorithm. The field of blind, or no-reference, QA, in which image quality is predicted without the reference image or video, has been largely unexplored, with algorithms focussing mostly on measuring the blocking artifacts. Emerging image and video compression technologies can avoid the dreaded blocking artifact by using various mechanisms, but they introduce other types of distortions, specifically blurring and ringing. In this paper, we propose to use natural scene statistics (NSS) to blindly measure the quality of images compressed by JPEG2000 (or any other wavelet based) image coder. We claim that natural scenes contain nonlinear dependencies that are disturbed by the compression process, and that this disturbance can be quantified and related to human perceptions of quality. We train and test our algorithm with data from human subjects, and show that reasonably comprehensive NSS models can help us in making blind, but accurate, predictions of quality. Our algorithm performs close to the limit imposed on useful prediction by the variability between human subjects.",
"It is often desirable to evaluate an image based on its quality. For many computer vision applications, a perceptually meaningful measure is the most relevant for evaluation; however, most commonly used measures do not map well to human judgements of image quality. A further complication of many existing image measures is that they require a reference image, which is often not available in practice. In this paper, we present a “blind” image quality measure, where potentially neither the groundtruth image nor the degradation process are known. Our method uses a set of novel low-level image features in a machine learning framework to learn a mapping from these features to subjective image quality scores. The image quality features stem from natural image measures and texture statistics. Experiments on a standard image quality benchmark dataset show that our method outperforms the current state of the art.",
"",
"Sharpness is an important determinant in visual assessment of image quality. The human visual system is able to effortlessly detect blur and evaluate sharpness of visual images, but the underlying mechanism is not fully understood. Existing blur/sharpness evaluation algorithms are mostly based on edge width, local gradient, or energy reduction of global/local high frequency content. Here we understand the subject from a different perspective, where sharpness is identified as strong local phase coherence (LPC) near distinctive image features evaluated in the complex wavelet transform domain. Previous LPC computation is restricted to be applied to complex coefficients spread in three consecutive dyadic scales in the scale-space. Here we propose a flexible framework that allows for LPC computation in arbitrary fractional scales. We then develop a new sharpness assessment algorithm without referencing the original image. We use four subject-rated publicly available image databases to test the proposed algorithm, which demonstrates competitive performance when compared with state-of-the-art algorithms.",
"This paper investigates how to blindly evaluate the visual quality of an image by learning rules from linguistic descriptions. Extensive psychological evidence shows that humans prefer to conduct evaluations qualitatively rather than numerically. The qualitative evaluations are then converted into the numerical scores to fairly benchmark objective image quality assessment (IQA) metrics. Recently, lots of learning-based IQA models are proposed by analyzing the mapping from the images to numerical ratings. However, the learnt mapping can hardly be accurate enough because some information has been lost in such an irreversible conversion from the linguistic descriptions to numerical scores. In this paper, we propose a blind IQA model, which learns qualitative evaluations directly and outputs numerical scores for general utilization and fair comparison. Images are represented by natural scene statistics features. A discriminative deep model is trained to classify the features into five grades, corresponding to five explicit mental concepts, i.e., excellent, good, fair, poor, and bad. A newly designed quality pooling is then applied to convert the qualitative labels into scores. The classification framework is not only much more natural than the regression-based models, but also robust to the small sample size problem. Thorough experiments are conducted on popular databases to verify the model’s effectiveness, efficiency, and robustness.",
"It is often desirable to evaluate images quality with a perceptually relevant measure that does not require a reference image. Recent approaches to this problem use human provided quality scores with machine learning to learn a measure. The biggest hurdles to these efforts are: 1) the difficulty of generalizing across diverse types of distortions and 2) collecting the enormity of human scored training data that is needed to learn the measure. We present a new blind image quality measure that addresses these difficulties by learning a robust, nonlinear kernel regression function using a rectifier neural network. The method is pre-trained with unlabeled data and fine-tuned with labeled data. It generalizes across a large set of images and distortion types without the need for a large amount of labeled data. We evaluate our approach on two benchmark datasets and show that it not only outperforms the current state of the art in blind image quality estimation, but also outperforms the state of the art in non-blind measures. Furthermore, we show that our semi-supervised approach is robust to using varying amounts of labeled data.",
"Reduced-reference image quality assessment (RRIQA) methods estimate image quality degradations with partial information about the “perfect-quality” reference image. In this paper, we propose an RRIQA algorithm based on a divisive normalization image representation. Divisive normalization has been recognized as a successful approach to model the perceptual sensitivity of biological vision. It also provides a useful image representation that significantly improves statistical independence for natural images. By using a Gaussian scale mixture statistical model of image wavelet coefficients, we compute a divisive normalization transformation (DNT) for images and evaluate the quality of a distorted image by comparing a set of reduced-reference statistical features extracted from DNT-domain representations of the reference and distorted images, respectively. This leads to a generic or general-purpose RRIQA method, in which no assumption is made about the types of distortions occurring in the image being evaluated. The proposed algorithm is cross-validated using two publicly-accessible subject-rated image databases (the UT-Austin LIVE database and the Cornell-VCL A57 database) and demonstrates good performance across a wide range of image distortions.",
"Present day no-reference image quality assessment (NR IQA) algorithms usually assume that the distortion affecting the image is known. This is a limiting assumption for practical applications, since in a majority of cases the distortions in the image are unknown. We propose a new two-step framework for no-reference image quality assessment based on natural scene statistics (NSS). Once trained, the framework does not require any knowledge of the distorting process and the framework is modular in that it can be extended to any number of distortions. We describe the framework for blind image quality assessment and a version of this framework-the blind image quality index (BIQI) is evaluated on the LIVE image quality assessment database. A software release of BIQI has been made available online: http://live.ece.utexas.edu/research/quality/BIQI_release.zip"
]
} |
1904.06505 | 2618902759 | Objective assessment of image quality is fundamentally important in many image processing tasks. In this paper, we focus on learning blind image quality assessment (BIQA) models, which predict the quality of a digital image with no access to its original pristine-quality counterpart as reference. One of the biggest challenges in learning BIQA models is the conflict between the gigantic image space (which is in the dimension of the number of image pixels) and the extremely limited reliable ground truth data for training. Such data are typically collected via subjective testing, which is cumbersome, slow, and expensive. Here, we first show that a vast amount of reliable training data in the form of quality-discriminable image pairs (DIPs) can be obtained automatically at low cost by exploiting large-scale databases with diverse image content. We then learn an opinion-unaware BIQA (OU-BIQA, meaning that no subjective opinions are used for training) model using RankNet, a pairwise learning-to-rank (L2R) algorithm, from millions of DIPs, each associated with a perceptual uncertainty level, leading to a DIP inferred quality (dipIQ) index. Extensive experiments on four benchmark IQA databases demonstrate that dipIQ outperforms the state-of-the-art OU-BIQA models. The robustness of dipIQ is also significantly improved as confirmed by the group MAximum Differentiation competition method. Furthermore, we extend the proposed framework by learning models with ListNet (a listwise L2R algorithm) on quality-discriminable image lists (DIL). The resulting DIL inferred quality index achieves an additional performance gain. | In the DFT domain, blur kernels can be efficiently estimated @cite_56 @cite_71 @cite_58 to quantify the degree of image blurring. The regular peaks at feature frequencies can be used to identify blocking artifacts @cite_36 @cite_8 .
Moreover, it is generally hypothesized that most perceptual information in an image is stored in the Fourier phase rather than the Fourier amplitude @cite_26 @cite_44 . Phase congruency @cite_0 is such a feature that identifies perceptually significant image features at spatial locations where Fourier components are maximally in-phase @cite_33 . | {
"cite_N": [
"@cite_26",
"@cite_33",
"@cite_8",
"@cite_36",
"@cite_56",
"@cite_44",
"@cite_0",
"@cite_71",
"@cite_58"
],
"mid": [
"1988839668",
"2140094223",
"2107476778",
"2106775624",
"1981572319",
"2122787031",
"1962010357",
"1598281290",
"2052287864"
],
"abstract": [
"We demonstrate that phase accuracy is extremely important in image processing filters and express the hope that more work will be done on the development of filter design techniques which use phase as well as magnitude specifications.",
"We develop a no-reference image quality assessment (QA) algorithm that deploys a general regression neural network (GRNN). The new algorithm is trained on and successfully assesses image quality, relative to human subjectivity, across a range of distortion types. The features deployed for QA include the mean value of phase congruency image, the entropy of phase congruency image, the entropy of the distorted image, and the gradient of the distorted image. Image quality estimation is accomplished by approximating the functional relationship between these features and subjective mean opinion scores using a GRNN. Our experimental results show that the new method accords closely with human subjective judgment.",
"Human observers can easily assess the quality of a distorted image without examining the original image as a reference. By contrast, designing objective No-Reference (NR) quality measurement algorithms is a very difficult task. Currently, NR quality assessment is feasible only when prior knowledge about the types of image distortion is available. This research aims to develop NR quality measurement algorithms for JPEG compressed images. First, we established a JPEG image database and subjective experiments were conducted on the database. We show that Peak Signal-to-Noise Ratio (PSNR), which requires the reference images, is a poor indicator of subjective quality. Therefore, tuning an NR measurement model towards PSNR is not an appropriate approach in designing NR quality metrics. Furthermore, we propose a computational and memory efficient NR quality assessment model for JPEG images. Subjective test results are used to train the model, which achieves good quality prediction performance.",
"The objective measurement of blocking artifacts plays an important role in the design, optimization, and assessment of image and video coding systems. We propose a new approach that can blindly measure blocking artifacts in images without reference to the originals. The key idea is to model the blocky image as a non-blocky image interfered with a pure blocky signal. The task of the blocking effect measurement algorithm is then to detect and evaluate the power of the blocky signal. The proposed approach has the flexibility to integrate human visual system features such as the luminance and the texture masking effects.",
"It is often desirable to evaluate an image based on its quality. For many computer vision applications, a perceptually meaningful measure is the most relevant for evaluation; however, most commonly used measures do not map well to human judgements of image quality. A further complication of many existing image measures is that they require a reference image, which is often not available in practice. In this paper, we present a “blind” image quality measure, where potentially neither the groundtruth image nor the degradation process are known. Our method uses a set of novel low-level image features in a machine learning framework to learn a mapping from these features to subjective image quality scores. The image quality features stem from natural image measures and texture statistics. Experiments on a standard image quality benchmark dataset show that our method outperforms the current state of the art.",
"In the Fourier representation of signals, spectral magnitude and phase tend to play different roles and in some situations many of the important features of a signal are preserved if only the phase is retained. Furthermore, under a variety of conditions, such as when a signal is of finite length, phase information alone is sufficient to completely reconstruct a signal to within a scale factor. In this paper, we review and discuss these observations and results in a number of different contexts and applications. Specifically, the intelligibility of phase-only reconstruction for images, speech, and crystallographic structures are illustrated. Several approaches to justifying the relative importance of phase through statistical arguments are presented, along with a number of informal arguments suggesting reasons for the importance of phase. Specific conditions under which a sequence can be exactly reconstructed from phase are reviewed, both for one-dimensional and multi-dimensional sequences, and algorithms for both approximate and exact reconstruction of signals from phase information are presented. A number of applications of the observations and results in this paper are suggested.",
"",
"We discuss a few new motion deblurring problems that are significant to kernel estimation and non-blind deconvolution. We found that strong edges do not always profit kernel estimation, but instead under certain circumstance degrade it. This finding leads to a new metric to measure the usefulness of image edges in motion deblurring and a gradient selection process to mitigate their possible adverse effect. We also propose an efficient and high-quality kernel estimation method based on using the spatial prior and the iterative support detection (ISD) kernel refinement, which avoids hard threshold of the kernel elements to enforce sparsity. We employ the TV-l1 deconvolution model, solved with a new variable substitution scheme to robustly suppress noise.",
"It is often desirable to evaluate images quality with a perceptually relevant measure that does not require a reference image. Recent approaches to this problem use human provided quality scores with machine learning to learn a measure. The biggest hurdles to these efforts are: 1) the difficulty of generalizing across diverse types of distortions and 2) collecting the enormity of human scored training data that is needed to learn the measure. We present a new blind image quality measure that addresses these difficulties by learning a robust, nonlinear kernel regression function using a rectifier neural network. The method is pre-trained with unlabeled data and fine-tuned with labeled data. It generalizes across a large set of images and distortion types without the need for a large amount of labeled data. We evaluate our approach on two benchmark datasets and show that it not only outperforms the current state of the art in blind image quality estimation, but also outperforms the state of the art in non-blind measures. Furthermore, we show that our semi-supervised approach is robust to using varying amounts of labeled data."
]
} |
1904.06505 | 2618902759 | Objective assessment of image quality is fundamentally important in many image processing tasks. In this paper, we focus on learning blind image quality assessment (BIQA) models, which predict the quality of a digital image with no access to its original pristine-quality counterpart as reference. One of the biggest challenges in learning BIQA models is the conflict between the gigantic image space (which is in the dimension of the number of image pixels) and the extremely limited reliable ground truth data for training. Such data are typically collected via subjective testing, which is cumbersome, slow, and expensive. Here, we first show that a vast amount of reliable training data in the form of quality-discriminable image pairs (DIPs) can be obtained automatically at low cost by exploiting large-scale databases with diverse image content. We then learn an opinion-unaware BIQA (OU-BIQA, meaning that no subjective opinions are used for training) model using RankNet, a pairwise learning-to-rank (L2R) algorithm, from millions of DIPs, each associated with a perceptual uncertainty level, leading to a DIP inferred quality (dipIQ) index. Extensive experiments on four benchmark IQA databases demonstrate that dipIQ outperforms the state-of-the-art OU-BIQA models. The robustness of dipIQ is also significantly improved as confirmed by the group MAximum Differentiation competition method. Furthermore, we extend the proposed framework by learning models with ListNet (a listwise L2R algorithm) on quality-discriminable image lists (DIL). The resulting DIL inferred quality index achieves an additional performance gain. | In the DCT domain, blocking artifacts can be identified in a shifted @math block @cite_72 . The ratio of AC coefficients to DC components can be interpreted as a measure of local contrast @cite_24 . The kurtosis of AC coefficients can be used to quantify the structure statistics. 
In addition, AC coefficients can also be jointly modeled using a GGD @cite_40 . | {
"cite_N": [
"@cite_24",
"@cite_40",
"@cite_72"
],
"mid": [
"2124562516",
"2162692770",
"2118344463"
],
"abstract": [
"The development of general-purpose no-reference approaches to image quality assessment still lags recent advances in full-reference methods. Additionally, most no-reference or blind approaches are distortion-specific, meaning they assess only a specific type of distortion assumed present in the test image (such as blockiness, blur, or ringing). This limits their application domain. Other approaches rely on training a machine learning algorithm. These methods however, are only as effective as the features used to train their learning machines. Towards ameliorating this we introduce the BLIINDS index (BLind Image Integrity Notator using DCT Statistics) which is a no-reference approach to image quality assessment that does not assume a specific type of distortion of the image. It is based on predicting image quality based on observing the statistics of local discrete cosine transform coefficients, and it requires only minimal training. The method is shown to correlate highly with human perception of quality.",
"We develop an efficient general-purpose blind no-reference image quality assessment (IQA) algorithm using a natural scene statistics (NSS) model of discrete cosine transform (DCT) coefficients. The algorithm is computationally appealing, given the availability of platforms optimized for DCT computation. The approach relies on a simple Bayesian inference model to predict image quality scores given certain extracted features. The features are based on an NSS model of the image DCT coefficients. The estimated parameters of the model are utilized to form features that are indicative of perceptual quality. These features are used in a simple Bayesian inference approach to predict quality scores. The resulting algorithm, which we name BLIINDS-II, requires minimal training and adopts a simple probabilistic model for score prediction. Given the extracted features from a test image, the quality score that maximizes the probability of the empirically determined inference model is chosen as the predicted quality score of that image. When tested on the LIVE IQA database, BLIINDS-II is shown to correlate highly with human judgments of quality, at a level that is competitive with the popular SSIM index.",
"Blocking artifacts continue to be among the most serious defects that occur in images and video streams compressed to low bit rates using block discrete cosine transform (DCT)-based compression standards (e.g., JPEG, MPEG, and H.263). It is of interest to be able to numerically assess the degree of blocking artifact in a visual signal, for example, in order to objectively determine the efficacy of a compression method, or to discover the quality of video content being delivered by a web server. We propose new methods for efficiently assessing, and subsequently reducing, the severity of blocking artifacts in compressed image bitstreams. The method is blind, and operates only in the DCT domain. Hence, it can be applied to unknown visual signals, and it is efficient since the signal need not be compressed or decompressed. In the algorithm, blocking artifacts are modeled as 2-D step functions. A fast DCT-domain algorithm extracts all parameters needed to detect the presence of, and estimate the amplitude of blocking artifacts, by exploiting several properties of the human vision system. Using the estimate of blockiness, a novel DCT-domain method is then developed which adaptively reduces detected blocking artifacts. Our experimental results show that the proposed method of measuring blocking artifacts is effective and stable across a wide variety of images. Moreover, the proposed blocking-artifact reduction method exhibits satisfactory performance as compared to other post-processing techniques. The proposed technique has a low computational cost hence can be used for real-time image video quality monitoring and control, especially in applications where it is desired that the image video data be processed directly in the DCT-domain."
]
} |
1904.06505 | 2618902759 | Objective assessment of image quality is fundamentally important in many image processing tasks. In this paper, we focus on learning blind image quality assessment (BIQA) models, which predict the quality of a digital image with no access to its original pristine-quality counterpart as reference. One of the biggest challenges in learning BIQA models is the conflict between the gigantic image space (which is in the dimension of the number of image pixels) and the extremely limited reliable ground truth data for training. Such data are typically collected via subjective testing, which is cumbersome, slow, and expensive. Here, we first show that a vast amount of reliable training data in the form of quality-discriminable image pairs (DIPs) can be obtained automatically at low cost by exploiting large-scale databases with diverse image content. We then learn an opinion-unaware BIQA (OU-BIQA, meaning that no subjective opinions are used for training) model using RankNet, a pairwise learning-to-rank (L2R) algorithm, from millions of DIPs, each associated with a perceptual uncertainty level, leading to a DIP inferred quality (dipIQ) index. Extensive experiments on four benchmark IQA databases demonstrate that dipIQ outperforms the state-of-the-art OU-BIQA models. The robustness of dipIQ is also significantly improved as confirmed by the group MAximum Differentiation competition method. Furthermore, we extend the proposed framework by learning models with ListNet (a listwise L2R algorithm) on quality-discriminable image lists (DIL). The resulting DIL inferred quality index achieves an additional performance gain. | There is a growing interest in learning features for BIQA. Ye et al. learned quality filters on image patches using K-means clustering and adopted filter responses as features @cite_89 . They then took one step further by supervised filter learning @cite_42 .
Xue et al. @cite_9 proposed a quality-aware clustering scheme on the high frequencies of raw patches, guided by an FR-IQA measure @cite_6 . Kang et al. investigated a convolutional neural network to jointly learn features and nonlinear mappings for BIQA @cite_86 . | {
"cite_N": [
"@cite_9",
"@cite_42",
"@cite_6",
"@cite_89",
"@cite_86"
],
"mid": [
"2162915697",
"1997974943",
"2141983208",
"1987489060",
"2051596736"
],
"abstract": [
"General purpose blind image quality assessment (BIQA) has been recently attracting significant attention in the fields of image processing, vision and machine learning. State-of-the-art BIQA methods usually learn to evaluate the image quality by regression from human subjective scores of the training samples. However, these methods need a large number of human scored images for training, and lack an explicit explanation of how the image quality is affected by image local features. An interesting question is then: can we learn for effective BIQA without using human scored images? This paper makes a good effort to answer this question. We partition the distorted images into overlapped patches, and use a percentile pooling strategy to estimate the local quality of each patch. Then a quality-aware clustering (QAC) method is proposed to learn a set of centroids on each quality level. These centroids are then used as a codebook to infer the quality of each patch in a given image, and subsequently a perceptual quality score of the whole image can be obtained. The proposed QAC based BIQA method is simple yet effective. It not only has comparable accuracy to those methods using human scored images in learning, but also has merits such as high linearity to human perception of image quality, real-time implementation and availability of image local quality map.",
"This paper addresses the problem of general-purpose No-Reference Image Quality Assessment (NR-IQA) with the goal of developing a real-time, cross-domain model that can predict the quality of distorted images without prior knowledge of non-distorted reference images and types of distortions present in these images. The contributions of our work are two-fold: first, the proposed method is highly efficient. NR-IQA measures are often used in real-time imaging or communication systems, therefore it is important to have a fast NR-IQA algorithm that can be used in these real-time applications. Second, the proposed method has the potential to be used in multiple image domains. Previous work on NR-IQA focus primarily on predicting quality of natural scene image with respect to human perception, yet, in other image domains, the final receiver of a digital image may not be a human. The proposed method consists of the following components: (1) a local feature extractor, (2) a global feature extractor and (3) a regression model. While previous approaches usually treat local feature extraction and regression model training independently, we propose a supervised method based on back-projection, which links the two steps by learning a compact set of filters which can be applied to local image patches to obtain discriminative local features. Using a small set of filters, the proposed method is extremely fast. We have tested this method on various natural scene and document image datasets and obtained state-of-the-art results.",
"Image quality assessment (IQA) aims to use computational models to measure the image quality consistently with subjective evaluations. The well-known structural similarity index brings IQA from pixel- to structure-based stage. In this paper, a novel feature similarity (FSIM) index for full reference IQA is proposed based on the fact that human visual system (HVS) understands an image mainly according to its low-level features. Specifically, the phase congruency (PC), which is a dimensionless measure of the significance of a local structure, is used as the primary feature in FSIM. Considering that PC is contrast invariant while the contrast information does affect HVS' perception of image quality, the image gradient magnitude (GM) is employed as the secondary feature in FSIM. PC and GM play complementary roles in characterizing the image local quality. After obtaining the local quality map, we use PC again as a weighting function to derive a single quality score. Extensive experiments performed on six benchmark IQA databases demonstrate that FSIM can achieve much higher consistency with the subjective evaluations than state-of-the-art IQA metrics.",
"In this paper, we present an efficient general-purpose objective no-reference (NR) image quality assessment (IQA) framework based on unsupervised feature learning. The goal is to build a computational model to automatically predict human perceived image quality without a reference image and without knowing the distortion present in the image. Previous approaches for this problem typically rely on hand-crafted features which are carefully designed based on prior knowledge. In contrast, we use raw-image-patches extracted from a set of unlabeled images to learn a dictionary in an unsupervised manner. We use soft-assignment coding with max pooling to obtain effective image representations for quality estimation. The proposed algorithm is very computationally appealing, using raw image patches as local descriptors and using soft-assignment for encoding. Furthermore, unlike previous methods, our unsupervised feature learning strategy enables our method to adapt to different domains. CORNIA (Codebook Representation for No-Reference Image Assessment) is tested on LIVE database and shown to perform statistically better than the full-reference quality measure, structural similarity index (SSIM) and is shown to be comparable to state-of-the-art general purpose NR-IQA algorithms.",
"In this work we describe a Convolutional Neural Network (CNN) to accurately predict image quality without a reference image. Taking image patches as input, the CNN works in the spatial domain without using hand-crafted features that are employed by most previous methods. The network consists of one convolutional layer with max and min pooling, two fully connected layers and an output node. Within the network structure, feature learning and regression are integrated into one optimization process, which leads to a more effective model for estimating image quality. This approach achieves state of the art performance on the LIVE dataset and shows excellent generalization ability in cross dataset experiments. Further experiments on images with local distortions demonstrate the local quality estimation ability of our CNN, which is rarely reported in previous literature."
]
} |
1904.06505 | 2618902759 | Objective assessment of image quality is fundamentally important in many image processing tasks. In this paper, we focus on learning blind image quality assessment (BIQA) models, which predict the quality of a digital image with no access to its original pristine-quality counterpart as reference. One of the biggest challenges in learning BIQA models is the conflict between the gigantic image space (which is in the dimension of the number of image pixels) and the extremely limited reliable ground truth data for training. Such data are typically collected via subjective testing, which is cumbersome, slow, and expensive. Here, we first show that a vast amount of reliable training data in the form of quality-discriminable image pairs (DIPs) can be obtained automatically at low cost by exploiting large-scale databases with diverse image content. We then learn an opinion-unaware BIQA (OU-BIQA, meaning that no subjective opinions are used for training) model using RankNet, a pairwise learning-to-rank (L2R) algorithm, from millions of DIPs, each associated with a perceptual uncertainty level, leading to a DIP inferred quality (dipIQ) index. Extensive experiments on four benchmark IQA databases demonstrate that dipIQ outperforms the state-of-the-art OU-BIQA models. The robustness of dipIQ is also significantly improved as confirmed by the group MAximum Differentiation competition method. Furthermore, we extend the proposed framework by learning models with ListNet (a listwise L2R algorithm) on quality-discriminable image lists (DIL). The resulting DIL inferred quality index achieves an additional performance gain. | Existing L2R algorithms can be broadly classified into three categories based on the training data format and loss function: pointwise, pairwise, and listwise approaches. An excellent survey of L2R algorithms can be found in @cite_84 . Here we only provide a brief overview. | {
"cite_N": [
"@cite_84"
],
"mid": [
"1622427076"
],
"abstract": [
"Originally published in Contemporary Psychology: APA Review of Books, 1997, Vol 42(7), 649-649. In Foundations of Vision (see record 1995-98050-000), Brian Wandell divides the study of vision into three parts: encoding, representation, and interpretation. Each part is designed to inform students on"
]
} |
1904.06505 | 2618902759 | Objective assessment of image quality is fundamentally important in many image processing tasks. In this paper, we focus on learning blind image quality assessment (BIQA) models, which predict the quality of a digital image with no access to its original pristine-quality counterpart as reference. One of the biggest challenges in learning BIQA models is the conflict between the gigantic image space (which is in the dimension of the number of image pixels) and the extremely limited reliable ground truth data for training. Such data are typically collected via subjective testing, which is cumbersome, slow, and expensive. Here, we first show that a vast amount of reliable training data in the form of quality-discriminable image pairs (DIPs) can be obtained automatically at low cost by exploiting large-scale databases with diverse image content. We then learn an opinion-unaware BIQA (OU-BIQA, meaning that no subjective opinions are used for training) model using RankNet, a pairwise learning-to-rank (L2R) algorithm, from millions of DIPs, each associated with a perceptual uncertainty level, leading to a DIP inferred quality (dipIQ) index. Extensive experiments on four benchmark IQA databases demonstrate that dipIQ outperforms the state-of-the-art OU-BIQA models. The robustness of dipIQ is also significantly improved as confirmed by the group MAximum Differentiation competition method. Furthermore, we extend the proposed framework by learning models with ListNet (a listwise L2R algorithm) on quality-discriminable image lists (DIL). The resulting DIL inferred quality index achieves an additional performance gain. | Pointwise approaches assume that each instance's importance degree is known. The loss function usually examines the prediction accuracy of each individual instance. In an early attempt on L2R, Fuhr @cite_55 adopted a linear regression with a polynomial feature expansion to learn the score function @math . 
Cossock and Zhang @cite_83 utilized a similar formulation with some theoretical justifications for the use of the least squares loss function. Nallapati @cite_43 formulated L2R as a classification problem and investigated the use of maximum entropy and support vector machines (SVMs) to classify each instance into two classes---relevant or irrelevant. Ordinal regression-based pointwise L2R algorithms have also been proposed such as PRanking @cite_66 and SVM-based large margin principles @cite_50 . | {
"cite_N": [
"@cite_55",
"@cite_43",
"@cite_83",
"@cite_50",
"@cite_66"
],
"mid": [
"2012318340",
"2067802667",
"1708221419",
"2124105163",
"2171541062"
],
"abstract": [
"We show that any approach to developing optimum retrieval functions is based on two kinds of assumptions: first, a certain form of representation for documents and requests, and second, additional simplifying assumptions that predefine the type of the retrieval function. Then we describe an approach for the development of optimum polynomial retrieval functions: request-document pairs (f_l, d_m) are mapped onto description vectors x(f_l, d_m), and a polynomial function e(x) is developed such that it yields estimates of the probability of relevance P(R | x(f_l, d_m)) with minimum square errors. We give experimental results for the application of this approach to documents with weighted indexing as well as to documents with complex representations. In contrast to other probabilistic models, our approach yields estimates of the actual probabilities, it can handle very complex representations of documents and requests, and it can be easily applied to multivalued relevance scales. On the other hand, this approach is not suited to log-linear probabilistic models and it needs large samples of relevance feedback data for its application.",
"Discriminative models have been preferred over generative models in many machine learning problems in the recent past owing to some of their attractive theoretical properties. In this paper, we explore the applicability of discriminative classifiers for IR. We have compared the performance of two popular discriminative models, namely the maximum entropy model and support vector machines with that of language modeling, the state-of-the-art generative model for IR. Our experiments on ad-hoc retrieval indicate that although maximum entropy is significantly worse than language models, support vector machines are on par with language models. We argue that the main reason to prefer SVMs over language models is their ability to learn arbitrary features automatically as demonstrated by our experiments on the home-page finding task of TREC-10.",
"We study the subset ranking problem, motivated by its important application in web-search. In this context, we consider the standard DCG criterion (discounted cumulated gain) that measures the quality of items near the top of the rank-list. Similar to error minimization for binary classification, the DCG criterion leads to a non-convex optimization problem that can be NP-hard. Therefore a computationally more tractable approach is needed. We present bounds that relate the approximate optimization of DCG to the approximate minimization of certain regression errors. These bounds justify the use of convex learning formulations for solving the subset ranking problem. The resulting estimation methods are not conventional, in that we focus on the estimation quality in the top-portion of the rank-list. We further investigate the generalization ability of these formulations. Under appropriate conditions, the consistency of the estimation schemes with respect to the DCG metric can be derived.",
"We discuss the problem of ranking k instances with the use of a \"large margin\" principle. We introduce two main approaches: the first is the \"fixed margin\" policy, in which the margin of the closest neighboring classes is maximized, which turns out to be a direct generalization of SVM to ranking learning. The second approach allows for k - 1 different margins where the sum of margins is maximized. This approach is shown to reduce to ν-SVM when the number of classes is k = 2. Both approaches are optimal in size of 2l where l is the total number of training examples. Experiments performed on visual classification and \"collaborative filtering\" show that both approaches outperform existing ordinal regression algorithms applied for ranking and multi-class SVM applied to general multi-class classification.",
"We discuss the problem of ranking instances. In our framework each instance is associated with a rank or a rating, which is an integer from 1 to k. Our goal is to find a rank-prediction rule that assigns each instance a rank which is as close as possible to the instance's true rank. We describe a simple and efficient online algorithm, analyze its performance in the mistake bound model, and prove its correctness. We describe two sets of experiments, with synthetic data and with the EachMovie dataset for collaborative filtering. In the experiments we performed, our algorithm outperforms online algorithms for regression and classification applied to ranking."
]
} |
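As an illustrative aside, the least-squares pointwise formulation referenced above (e.g., by Cossock and Zhang) amounts to ordinary regression on per-instance relevance labels, with the ranking recovered by sorting predicted scores. The sketch below is a minimal version of that idea; the features, labels, and hyperparameters are invented for illustration:

```python
# Pointwise L2R as least-squares regression: fit a linear score
# function f(x) = w . x to per-instance relevance labels, then
# rank instances by their predicted scores.

def fit_least_squares(X, y, lr=0.1, steps=2000):
    """Gradient descent on the squared error sum_i (w . x_i - y_i)^2."""
    w = [0.0] * len(X[0])
    n = len(X)
    for _ in range(steps):
        for i, x in enumerate(X):
            err = sum(wj * xj for wj, xj in zip(w, x)) - y[i]
            w = [wj - lr * 2.0 * err * xj / n for wj, xj in zip(w, x)]
    return w

# Two-feature instances with graded relevance labels (2 = most relevant).
X = [[1.0, 0.2], [0.5, 0.4], [0.1, 0.9]]
y = [2.0, 1.0, 0.0]
w = fit_least_squares(X, y)
scores = [sum(wj * xj for wj, xj in zip(w, x)) for x in X]
ranking = sorted(range(len(X)), key=lambda i: -scores[i])
print(ranking)
```

In the methods cited above the score function is a richer model and the loss may instead be a classification or ordinal-regression objective, but the pointwise structure — one loss term per instance — is the same.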
1904.06505 | 2618902759 | Objective assessment of image quality is fundamentally important in many image processing tasks. In this paper, we focus on learning blind image quality assessment (BIQA) models, which predict the quality of a digital image with no access to its original pristine-quality counterpart as reference. One of the biggest challenges in learning BIQA models is the conflict between the gigantic image space (which is in the dimension of the number of image pixels) and the extremely limited reliable ground truth data for training. Such data are typically collected via subjective testing, which is cumbersome, slow, and expensive. Here, we first show that a vast amount of reliable training data in the form of quality-discriminable image pairs (DIPs) can be obtained automatically at low cost by exploiting large-scale databases with diverse image content. We then learn an opinion-unaware BIQA (OU-BIQA, meaning that no subjective opinions are used for training) model using RankNet, a pairwise learning-to-rank (L2R) algorithm, from millions of DIPs, each associated with a perceptual uncertainty level, leading to a DIP inferred quality (dipIQ) index. Extensive experiments on four benchmark IQA databases demonstrate that dipIQ outperforms the state-of-the-art OU-BIQA models. The robustness of dipIQ is also significantly improved as confirmed by the group MAximum Differentiation competition method. Furthermore, we extend the proposed framework by learning models with ListNet (a listwise L2R algorithm) on quality-discriminable image lists (DIL). The resulting DIL inferred quality index achieves an additional performance gain. | Pairwise approaches assume that the relative order between two instances is known or can be inferred from other ground truth formats. The goal is to minimize the number of misclassified instance pairs. In the extreme case, if all instance pairs are correctly classified, they will be correctly ranked @cite_84 . 
In RankSVM @cite_49 , Joachims creatively generated training pairs from clickthrough data and reformulated SVM to learn the score function @math from instance pairs. Proposed in 2005, RankNet @cite_75 was probably the first L2R algorithm used by commercial search engines; its skeleton is a typical neural network with a weight-sharing scheme. Tsai @cite_31 replaced RankNet's loss function @cite_75 with a fidelity loss originating from quantum physics. In this paper, RankNet is adopted as the default pairwise L2R algorithm to learn OU-BIQA models for reasons that will be described later. RankBoost @cite_15 is another well-known pairwise L2R algorithm based on AdaBoost @cite_94 with an exponential loss. | {
"cite_N": [
"@cite_15",
"@cite_84",
"@cite_49",
"@cite_31",
"@cite_75",
"@cite_94"
],
"mid": [
"2107890099",
"1622427076",
"2047221353",
"2138790992",
"2143331230",
"1988790447"
],
"abstract": [
"We study the problem of learning to accurately rank a set of objects by combining a given collection of ranking or preference functions. This problem of combining preferences arises in several applications, such as that of combining the results of different search engines, or the \"collaborative-filtering\" problem of ranking movies for a user based on the movie rankings provided by other users. In this work, we begin by presenting a formal framework for this general problem. We then describe and analyze an efficient algorithm called RankBoost for combining preferences based on the boosting approach to machine learning. We give theoretical results describing the algorithm's behavior both on the training data, and on new test data not seen during training. We also describe an efficient implementation of the algorithm for a particular restricted but common case. We next discuss two experiments we carried out to assess the performance of RankBoost. In the first experiment, we used the algorithm to combine different web search strategies, each of which is a query expansion for a given domain. The second experiment is a collaborative-filtering task for making movie recommendations.",
"Originally published in Contemporary Psychology: APA Review of Books, 1997, Vol 42(7), 649-649. In Foundations of Vision (see record 1995-98050-000), Brian Wandell divides the study of vision into three parts: encoding, representation, and interpretation. Each part is designed to inform students on",
"This paper presents an approach to automatically optimizing the retrieval quality of search engines using clickthrough data. Intuitively, a good information retrieval system should present relevant documents high in the ranking, with less relevant documents following below. While previous approaches to learning retrieval functions from examples exist, they typically require training data generated from relevance judgments by experts. This makes them difficult and expensive to apply. The goal of this paper is to develop a method that utilizes clickthrough data for training, namely the query-log of the search engine in connection with the log of links the users clicked on in the presented ranking. Such clickthrough data is available in abundance and can be recorded at very low cost. Taking a Support Vector Machine (SVM) approach, this paper presents a method for learning retrieval functions. From a theoretical perspective, this method is shown to be well-founded in a risk minimization framework. Furthermore, it is shown to be feasible even for large sets of queries and features. The theoretical results are verified in a controlled experiment. It shows that the method can effectively adapt the retrieval function of a meta-search engine to a particular group of users, outperforming Google in terms of retrieval quality after only a couple of hundred training examples.",
"Ranking problem is becoming important in many fields, especially in information retrieval (IR). Many machine learning techniques have been proposed for ranking problem, such as RankSVM, RankBoost, and RankNet. Among them, RankNet, which is based on a probabilistic ranking framework, is leading to promising results and has been applied to a commercial Web search engine. In this paper we conduct further study on the probabilistic ranking framework and provide a novel loss function named fidelity loss for measuring loss of ranking. The fidelity loss not only inherits effective properties of the probabilistic ranking framework in RankNet, but also possesses new properties that are helpful for ranking. This includes the fidelity loss obtaining zero for each document pair, and having a finite upper bound that is necessary for conducting query-level normalization. We also propose an algorithm named FRank based on a generalized additive model for the sake of minimizing the fidelity loss and learning an effective ranking function. We evaluated the proposed algorithm for two datasets: TREC dataset and real Web search dataset. The experimental results show that the proposed FRank algorithm outperforms other learning-based ranking methods on both conventional IR problem and Web search.",
"We investigate using gradient descent methods for learning ranking functions; we propose a simple probabilistic cost function, and we introduce RankNet, an implementation of these ideas using a neural network to model the underlying ranking function. We present test results on toy data and on data from a commercial internet search engine.",
"In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update Littlestone-Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games, and prediction of points in R^n. In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line."
]
} |
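Since RankNet is adopted as the default pairwise algorithm in the surveyed paper, its pair probability and cross-entropy loss are worth making concrete. The sketch below shows only the loss on a pair of scalar scores; the underlying scoring network is omitted and the numbers are illustrative:

```python
import math

def pair_probability(s_i, s_j):
    """RankNet models P(instance i ranked above instance j) as
    a sigmoid of the score difference: sigmoid(s_i - s_j)."""
    return 1.0 / (1.0 + math.exp(-(s_i - s_j)))

def ranknet_loss(s_i, s_j, p_target):
    """Cross-entropy between the target pair probability and the
    model's pair probability; minimized by gradient descent."""
    p = pair_probability(s_i, s_j)
    return -p_target * math.log(p) - (1.0 - p_target) * math.log(1.0 - p)

# If i should rank above j (p_target = 1), the loss is small when
# s_i exceeds s_j and large when the predicted order is inverted.
good = ranknet_loss(2.0, 0.0, 1.0)  # correct order
bad = ranknet_loss(0.0, 2.0, 1.0)   # inverted order
assert good < bad
```

The weight-sharing scheme mentioned above means both scores in a pair come from the same network, so the pairwise loss trains a single score function usable on individual instances at test time.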
1904.06505 | 2618902759 | Objective assessment of image quality is fundamentally important in many image processing tasks. In this paper, we focus on learning blind image quality assessment (BIQA) models, which predict the quality of a digital image with no access to its original pristine-quality counterpart as reference. One of the biggest challenges in learning BIQA models is the conflict between the gigantic image space (which is in the dimension of the number of image pixels) and the extremely limited reliable ground truth data for training. Such data are typically collected via subjective testing, which is cumbersome, slow, and expensive. Here, we first show that a vast amount of reliable training data in the form of quality-discriminable image pairs (DIPs) can be obtained automatically at low cost by exploiting large-scale databases with diverse image content. We then learn an opinion-unaware BIQA (OU-BIQA, meaning that no subjective opinions are used for training) model using RankNet, a pairwise learning-to-rank (L2R) algorithm, from millions of DIPs, each associated with a perceptual uncertainty level, leading to a DIP inferred quality (dipIQ) index. Extensive experiments on four benchmark IQA databases demonstrate that dipIQ outperforms the state-of-the-art OU-BIQA models. The robustness of dipIQ is also significantly improved as confirmed by the group MAximum Differentiation competition method. Furthermore, we extend the proposed framework by learning models with ListNet (a listwise L2R algorithm) on quality-discriminable image lists (DIL). The resulting DIL inferred quality index achieves an additional performance gain. | Listwise approaches provide the opportunity to directly optimize ranking performance criteria @cite_84 . Representative algorithms include SoftRank @cite_13 , @math @cite_38 , and RankGP @cite_3 . Another subset of listwise approaches choose to optimize listwise ranking losses. 
For example, as a direct extension of RankNet, ListNet @cite_81 duplicates RankNet's structure to accommodate an instance list as input and optimizes a ranking loss based on the permutation probability distribution @cite_81 . In this paper, we also employ ListNet to learn OU-BIQA models as an extension of the proposed pairwise L2R approach. | {
"cite_N": [
"@cite_38",
"@cite_3",
"@cite_84",
"@cite_81",
"@cite_13"
],
"mid": [
"2127176025",
"196422389",
"1622427076",
"2108862644",
"2059001985"
],
"abstract": [
"Machine learning is commonly used to improve ranked retrieval systems. Due to computational difficulties, few learning techniques have been developed to directly optimize for mean average precision (MAP), despite its widespread use in evaluating such systems. Existing approaches optimizing MAP either do not find a globally optimal solution, or are computationally expensive. In contrast, we present a general SVM learning algorithm that efficiently finds a globally optimal solution to a straightforward relaxation of MAP. We evaluate our approach using the TREC 9 and TREC 10 Web Track corpora (WT10g), comparing against SVMs optimized for accuracy and ROCArea. In most cases we show our method to produce statistically significant improvements in MAP scores.",
"The central problem of information retrieval (IR) is to determine which documents are relevant to the user's information need and which are not. This problem is practically handled by a ranking function which defines an ordering among documents according to their degree of relevance to the user query. This paper discusses work on using machine learning to automatically generate an effective ranking function for IR. This task is referred to as \"learning to rank for IR\" in the field. In this paper, a learning method, RankGP, is presented to address this task. RankGP employs genetic programming to learn a ranking function by combining various types of evidences in IR, including content features, structure features, and query-independent features. The proposed method is evaluated using the LETOR benchmark datasets and found to be competitive with Ranking SVM and RankBoost.",
"Originally published in Contemporary Psychology: APA Review of Books, 1997, Vol 42(7), 649-649. In Foundations of Vision (see record 1995-98050-000), Brian Wandell divides the study of vision into three parts: encoding, representation, and interpretation. Each part is designed to inform students on",
"The paper is concerned with learning to rank, which is to construct a model or a function for ranking objects. Learning to rank is useful for document retrieval, collaborative filtering, and many other applications. Several methods for learning to rank have been proposed, which take object pairs as 'instances' in learning. We refer to them as the pairwise approach in this paper. Although the pairwise approach offers advantages, it ignores the fact that ranking is a prediction task on list of objects. The paper postulates that learning to rank should adopt the listwise approach in which lists of objects are used as 'instances' in learning. The paper proposes a new probabilistic method for the approach. Specifically it introduces two probability models, respectively referred to as permutation probability and top k probability, to define a listwise loss function for learning. Neural Network and Gradient Descent are then employed as model and algorithm in the learning method. Experimental results on information retrieval show that the proposed listwise approach performs better than the pairwise approach.",
"We address the problem of learning large complex ranking functions. Most IR applications use evaluation metrics that depend only upon the ranks of documents. However, most ranking functions generate document scores, which are sorted to produce a ranking. Hence IR metrics are innately non-smooth with respect to the scores, due to the sort. Unfortunately, many machine learning algorithms require the gradient of a training objective in order to perform the optimization of the model parameters, and because IR metrics are non-smooth, we need to find a smooth proxy objective that can be used for training. We present a new family of training objectives that are derived from the rank distributions of documents, induced by smoothed scores. We call this approach SoftRank. We focus on a smoothed approximation to Normalized Discounted Cumulative Gain (NDCG), called SoftNDCG, and we compare it with three other training objectives in the recent literature. We present two main results. First, SoftRank yields a very good way of optimizing NDCG. Second, we show that it is possible to achieve state-of-the-art test set NDCG results by optimizing a soft NDCG objective on the training set with a different discount function."
]
} |
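ListNet's permutation-based loss described above is usually instantiated with the top-one probability model: each list of scores is mapped to a softmax distribution over "which item ranks first", and the loss is the cross-entropy between the distributions induced by the predicted and ground-truth scores. A minimal sketch with illustrative numbers:

```python
import math

def top1_distribution(scores):
    """ListNet's top-one probability: a softmax over the list's scores,
    giving the probability that each item is ranked first."""
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def listnet_loss(predicted, ground_truth):
    """Cross-entropy between the top-one distributions induced by the
    ground-truth scores and by the predicted scores."""
    p = top1_distribution(ground_truth)
    q = top1_distribution(predicted)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

truth = [3.0, 2.0, 1.0]
# Agreeing with the ground-truth order yields a lower loss than
# predicting the reversed order.
assert listnet_loss([3.0, 2.0, 1.0], truth) < listnet_loss([1.0, 2.0, 3.0], truth)
```

Because the loss is defined over whole lists rather than instance pairs, gradients reflect the entire permutation distribution, which is the listwise advantage noted above.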
1904.06585 | 2936313222 | It has been a longstanding goal in computer vision to describe the 3D physical space in terms of parameterized volumetric models that would allow autonomous machines to understand and interact with their surroundings. Such models are typically motivated by human visual perception and aim to represent all elements of the physical world, ranging from individual objects to complex scenes, using a small set of parameters. One of the de facto standards for approaching this problem is superquadrics - volumetric models that define various 3D shape primitives and can be fitted to actual 3D data (either in the form of point clouds or range images). However, existing solutions to superquadric recovery involve costly iterative fitting procedures, which limit the applicability of such techniques in practice. To alleviate this problem, we explore in this paper the possibility of recovering superquadrics from range images without time-consuming iterative parameter estimation techniques by using contemporary deep-learning models, more specifically, convolutional neural networks (CNNs). We pose the superquadric recovery problem as a regression task and develop a CNN regressor that is able to estimate the parameters of a superquadric model from a given range image. We train the regressor on a large set of synthetic range images, each containing a single (unrotated) superquadric shape, and evaluate the learned model in comparative experiments with the current state-of-the-art. Additionally, we also present a qualitative analysis involving a dataset of real-world objects. The results of our experiments show that the proposed regressor not only outperforms the existing state-of-the-art, but also ensures a 270x faster execution time. | Pentland, who introduced superquadrics to computer vision, proposed to recover them from shading information derived from 2D intensity images @cite_9 . But this approach proved to be overly complicated and not successful in practice.
Solina and Bajcsy instead used explicit 3D information in the form of range images @cite_32 @cite_16 , which are a uniform 2D grid of 3D points as seen from a particular viewpoint. They designed a fitting function that must be minimized to fit a superquadric model to the 3D points. Since this fitting function is highly non-linear, an iterative procedure cannot be avoided for its minimization. | {
"cite_N": [
"@cite_9",
"@cite_16",
"@cite_32"
],
"mid": [
"2039005177",
"2123564600",
"1578500489"
],
"abstract": [
"Abstract To support our reasoning abilities perception must recover environmental regularities—e.g., rigidity, “objectness,” axes of symmetry—for later use by cognition. To create a theory of how our perceptual apparatus can produce meaningful cognitive primitives from an array of image intensities we require a representation whose elements may be lawfully related to important physical regularities, and that correctly describes the perceptual organization people impose on the stimulus. Unfortunately, the representations that are currently available were originally developed for other purposes (e.g., physics, engineering) and have so far proven unsuitable for the problems of perception or common-sense reasoning. In answer to this problem we present a representation that has proven competent to accurately describe an extensive variety of natural forms (e.g., people, mountains, clouds, trees), as well as man-made forms, in a succinct and natural manner. The approach taken in this representational system is to describe scene structure at a scale that is similar to our naive perceptual notion of “a part,” by use of descriptions that reflect a possible formative history of the object, e.g., how the object might have been constructed from lumps of clay. For this representation to be useful it must be possible to recover such descriptions from image data; we show that the primitive elements of such descriptions may be recovered in an overconstrained and therefore reliable manner. We believe that this descriptive system makes an important contribution towards solving current problems in perceiving and reasoning about natural forms by allowing us to construct accurate descriptions that are extremely compact and that capture people's intuitive notions about the part structure of three-dimensional forms.",
"A method for recovery of compact volumetric models for shape representation of single-part objects in computer vision is introduced. The models are superquadrics with parametric deformations (bending, tapering, and cavity deformation). The input for the model recovery is three-dimensional range points. Model recovery is formulated as a least-squares minimization of a cost function for all range points belonging to a single part. During an iterative gradient descent minimization process, all model parameters are adjusted simultaneously, recovery position, orientation, size, and shape of the model, such that most of the given range points lie close to the model's surface. A specific solution among several acceptable solutions, where are all minima in the parameter space, can be reached by constraining the search to a part of the parameter space. The many shallow local minima in the parameter space are avoided as a solution by using a stochastic technique during minimization. Results using real range data show that the recovered models are stable and that the recovery procedure is fast. >",
"Categories and shape prototypes are considered for a class of object recognition problems where rigid and detailed object models are not available or do not apply. We propose a modeling system for generic objects to recognize different objects from the same category with only one generic model. We base our design of the modeling system upon the current psychological theories of categorization and human visual perception. The representation consists of a prototype represented by parts and their configuration. Parts are modeled by superquadric volumetric primitives which can be combined via Boolean operations to form objects. Variations between objects within a category are described by changes in structure and shape deformations of prototypical parts. Recovery of deformed superquadric models from sparse 3-D points is developed and some results are shown."
]
} |
1904.06585 | 2936313222 | It has been a longstanding goal in computer vision to describe the 3D physical space in terms of parameterized volumetric models that would allow autonomous machines to understand and interact with their surroundings. Such models are typically motivated by human visual perception and aim to represent all elements of the physical world ranging from individual objects to complex scenes using a small set of parameters. One of the de facto standards to approach this problem is superquadrics - volumetric models that define various 3D shape primitives and can be fitted to actual 3D data (either in the form of point clouds or range images). However, existing solutions to superquadric recovery involve costly iterative fitting procedures, which limit the applicability of such techniques in practice. To alleviate this problem, we explore in this paper the possibility to recover superquadrics from range images without time consuming iterative parameter estimation techniques by using contemporary deep-learning models, more specifically, convolutional neural networks (CNNs). We pose the superquadric recovery problem as a regression task and develop a CNN regressor that is able to estimate the parameters of a superquadric model from a given range image. We train the regressor on a large set of synthetic range images, each containing a single (unrotated) superquadric shape and evaluate the learned model in comparative experiments with the current state-of-the-art. Additionally, we also present a qualitative analysis involving a dataset of real-world objects. The results of our experiments show that the proposed regressor not only outperforms the existing state-of-the-art, but also ensures a 270x faster execution time. 
| Other researchers have tried to improve this method in various ways, for example in modifying the fitting function @cite_19 , or using multiresolution @cite_7 but still essentially relying on iterative methods of minimizing the fitting function. Instead of gradient least-squares minimization, other methods of minimization have also been tried, such as genetic algorithms @cite_8 . Several extensions of superquadrics were proposed in the literature @cite_25 @cite_0 , however, the basic superquadric shape model and the recovery method of Solina and Bajcsy prevailed in most applications of superquadrics, in particular for path and grasp planning in robotics, for modelling and interpretation of medical images etc. Later, Leonardis and Solina expanded Solina and Bajcsy's method to simultaneously deconstruct the input range image into several superquadrics, resulting in a perceptually relevant segmentation @cite_18 . Nevertheless, the procedure still relied on an iterative fitting procedure. | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_8",
"@cite_0",
"@cite_19",
"@cite_25"
],
"mid": [
"2108889042",
"2024577416",
"2133416062",
"2070568264",
"1532860291",
"2113877705"
],
"abstract": [
"We present an approach to reliable and efficient recovery of part-descriptions in terms of superquadric models from range data. We show that superquadrics can directly be recovered from unsegmented data, thus avoiding any presegmentation steps (e.g. in terms of surfaces). The approach is based on the recover-and-select paradigm. We present several experiments on real and synthetic range images, where we demonstrate the stability of the results with respect to viewpoint and noise.",
"Rapidly acquiring the shape and pose information of unknown objects is an essential characteristic of modern robotic systems in order to perform efficient manipulation tasks. In this work, we present a framework for 3D geometric shape recovery and pose estimation from unorganized point cloud data. We propose a low latency multi-scale voxelization strategy that rapidly fits superquadrics to single view 3D point clouds. As a result, we are able to quickly and accurately estimate the shape and pose parameters of relevant objects in a scene. We evaluate our approach on two datasets of common household objects collected using Microsoft's Kinect sensor. We also compare our work to the state of the art and achieve comparable results in less computational time. Our experimental results demonstrate the efficacy of our approach.",
"Supershape model is a recent primitive that represents numerous 3D shapes with several symmetry axes. The main interest of this model is its capability to reconstruct more complex shapes than the superquadric model with only one implicit equation. In this paper we propose a genetic algorithm to reconstruct a point cloud using those primitives. We used the pseudo-Euclidean distance to introduce a threshold to handle real data imperfection and speed up the process. Simulations using our proposed fitness functions and a fitness function based on inside-outside function show that our fitness function based on the pseudo-Euclidean distance performs better.",
"We present a new approach to the problem of modeling smoothly deformable shapes with convex polyhedral bounds. Our hyperquadric modeling primitives, which include superquadrics as a special case, can be viewed as hyperplanar slices of deformed hyperspheres. As the original hypersphere is deformed to its bounding hypercube, the slices undergo corresponding smooth deformations to convex polytopes. The possible shape classes include arbitrary convex polygons and polyhedra, as well as taperings and distortions that are not naturally included within the conventional superquadric framework. By generalizing Blinn's “blobby” approach to modeling complex objects, we construct single equations for nonconvex, composite shapes starting with our basic convex primitives. Hyperquadrics are of potential interest for the generation of synthetic images, for automated image interpretation and for psychological models of geometric shape representation, manipulation, and perception.",
"",
"A physically-based approach is presented for fitting complex 3D shapes using a novel class of dynamic models. These models can deform both locally and globally. The authors formulate deformable superquadrics which incorporate the global shape parameters of a conventional superellipsoid with the local degrees of freedom of a spline. The local/global representational power of a deformable superquadric simultaneously satisfies the conflicting requirements of shape reconstruction and shape recognition. The model's six global deformational degrees of freedom capture gross shape features from visual data and provide salient part descriptors for efficient indexing into a database of stored models. Model fitting experiments involving 2D monocular image data and 3D range data are reported."
]
} |
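The iterative fitting procedures discussed above all minimize a cost built around the superquadric inside-outside function. A minimal Python sketch of that function follows (parameter names a1-a3 for the semi-axes and e1, e2 for the shape exponents follow the usual superquadric convention; Solina and Bajcsy's actual cost function additionally rescales F, so this is an illustrative sketch, not their implementation):

```python
def inside_outside(x, y, z, a1, a2, a3, e1, e2):
    """Superquadric inside-outside function F.

    F < 1: the point lies inside the superquadric,
    F = 1: on its surface, F > 1: outside.
    a1, a2, a3 are semi-axis lengths; e1, e2 control the shape
    (e1 = e2 = 1 yields an ordinary ellipsoid).
    """
    x, y, z = abs(x), abs(y), abs(z)  # fractional exponents need non-negative bases
    return ((x / a1) ** (2 / e2) + (y / a2) ** (2 / e2)) ** (e2 / e1) \
        + (z / a3) ** (2 / e1)

# Unit sphere (a1 = a2 = a3 = 1, e1 = e2 = 1):
F_in = inside_outside(0.5, 0.0, 0.0, 1, 1, 1, 1, 1)   # 0.25 -> inside
F_out = inside_outside(2.0, 0.0, 0.0, 1, 1, 1, 1, 1)  # 4.0  -> outside
```

A least-squares recovery procedure would sum a function of F over all range points and adjust the eight (plus pose) parameters iteratively, which is exactly the costly step the CNN regressor above seeks to avoid.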
1904.06585 | 2936313222 | It has been a longstanding goal in computer vision to describe the 3D physical space in terms of parameterized volumetric models that would allow autonomous machines to understand and interact with their surroundings. Such models are typically motivated by human visual perception and aim to represent all elements of the physical world ranging from individual objects to complex scenes using a small set of parameters. One of the de facto standards to approach this problem is superquadrics - volumetric models that define various 3D shape primitives and can be fitted to actual 3D data (either in the form of point clouds or range images). However, existing solutions to superquadric recovery involve costly iterative fitting procedures, which limit the applicability of such techniques in practice. To alleviate this problem, we explore in this paper the possibility to recover superquadrics from range images without time consuming iterative parameter estimation techniques by using contemporary deep-learning models, more specifically, convolutional neural networks (CNNs). We pose the superquadric recovery problem as a regression task and develop a CNN regressor that is able to estimate the parameters of a superquadric model from a given range image. We train the regressor on a large set of synthetic range images, each containing a single (unrotated) superquadric shape and evaluate the learned model in comparative experiments with the current state-of-the-art. Additionally, we also present a qualitative analysis involving a dataset of real-world objects. The results of our experiments show that the proposed regressor not only outperforms the existing state-of-the-art, but also ensures a 270x faster execution time. | CNNs have already been employed to process 3D visual data. 
A CNN regression approach was, for example, used in @cite_29 for real-time 2D/3D registration which was, similarly to superquadric recovery, traditionally solved by iterative computation. CNNs have also been used to estimate face normals from 2D intensity images instead of standard shape-from-shading methods @cite_21 or for fitting 3D morphable models to faces captured in unconstrained conditions @cite_10 @cite_5 @cite_15 @cite_31 . | {
"cite_N": [
"@cite_31",
"@cite_29",
"@cite_21",
"@cite_5",
"@cite_15",
"@cite_10"
],
"mid": [
"2599226450",
"2344328023",
"2767225062",
"",
"2605701576",
"2555254696"
],
"abstract": [
"3D face reconstruction is a fundamental Computer Vision problem of extraordinary difficulty. Current systems often assume the availability of multiple facial images (sometimes from the same subject) as input, and must address a number of methodological challenges such as establishing dense correspondences across large facial poses, expressions, and non-uniform illumination. In general these methods require complex and inefficient pipelines for model building and fitting. In this work, we propose to address many of these limitations by training a Convolutional Neural Network (CNN) on an appropriate dataset consisting of 2D images and 3D facial models or scans. Our CNN works with just a single 2D facial image, does not require accurate alignment nor establishes dense correspondence between images, works for arbitrary facial poses and expressions, and can be used to reconstruct the whole 3D facial geometry (including the non-visible parts of the face) bypassing the construction (during training) and fitting (during testing) of a 3D Morphable Model. We achieve this via a simple CNN architecture that performs direct regression of a volumetric representation of the 3D facial geometry from a single 2D image. We also demonstrate how the related task of facial landmark localization can be incorporated into the proposed framework and help improve reconstruction quality, especially for the cases of large poses and facial expressions. Code and models will be made available at http://aaronsplace.co.uk",
"In this paper, we present a Convolutional Neural Network (CNN) regression approach to address the two major limitations of existing intensity-based 2-D/3-D registration technology: 1) slow computation and 2) small capture range. Different from optimization-based methods, which iteratively optimize the transformation parameters over a scalar-valued metric function representing the quality of the registration, the proposed method exploits the information embedded in the appearances of the digitally reconstructed radiograph and X-ray images, and employs CNN regressors to directly estimate the transformation parameters. An automatic feature extraction step is introduced to calculate 3-D pose-indexed features that are sensitive to the variables to be regressed while robust to other factors. The CNN regressors are then trained for local zones and applied in a hierarchical manner to break down the complex regression task into multiple simpler sub-tasks that can be learned separately. Weight sharing is furthermore employed in the CNN regression model to reduce the memory footprint. The proposed approach has been quantitatively evaluated on 3 potential clinical applications, demonstrating its significant advantage in providing highly accurate real-time 2-D/3-D registration with a significantly enlarged capture range when compared to intensity-based methods.",
"In this work we pursue a data-driven approach to the problem of estimating surface normals from a single intensity image, focusing in particular on human faces. We introduce new methods to exploit the currently available facial databases for dataset construction and tailor a deep convolutional neural network to the task of estimating facial surface normals in-the-wild. We train a fully convolutional network that can accurately recover facial normals from images including a challenging variety of expressions and facial poses. We compare against state-of-the-art face Shape-from-Shading and 3D reconstruction techniques and show that the proposed network can recover substantially more accurate and realistic normals. Furthermore, in contrast to other existing face-specific surface recovery methods, we do not require the solving of an explicit alignment step due to the fully convolutional nature of our network.",
"",
"Monocular 3D facial shape reconstruction from a single 2D facial image has been an active research area due to its wide applications. Inspired by the success of deep neural networks (DNN), we propose a DNN-based approach for End-to-End 3D FAce Reconstruction (UH-E2FAR) from a single 2D image. Different from recent works that reconstruct and refine the 3D face in an iterative manner using both an RGB image and an initial 3D facial shape rendering, our DNN model is end-to-end, and thus the complicated 3D rendering process can be avoided. Moreover, we integrate in the DNN architecture two components, namely a multi-task loss function and a fusion convolutional neural network (CNN) to improve facial expression reconstruction. With the multi-task loss function, 3D face reconstruction is divided into neutral 3D facial shape reconstruction and expressive 3D facial shape reconstruction. The neutral 3D facial shape is class-specific. Therefore, higher layer features are useful. In comparison, the expressive 3D facial shape favors lower or intermediate layer features. With the fusion-CNN, features from different intermediate layers are fused and transformed for predicting the 3D expressive facial shape. Through extensive experiments, we demonstrate the superiority of our end-to-end framework in improving the accuracy of 3D face reconstruction.",
"During the last few years, Convolutional Neural Networks are slowly but surely becoming the default method to solve many computer vision related problems. This is mainly due to the continuous success that they have achieved when applied to certain tasks such as image, speech, or object recognition. Despite all the efforts, object class recognition methods based on deep learning techniques still have room for improvement. Most of the current approaches do not fully exploit 3D information, which has been proven to effectively improve the performance of other traditional object recognition methods. In this work, we propose PointNet, a new approach inspired by VoxNet and 3D ShapeNets, as an improvement over the existing methods by using density occupancy grids representations for the input data, and integrating them into a supervised Convolutional Neural Network architecture. An extensive experimentation was carried out, using ModelNet - a large-scale 3D CAD models dataset - to train and test the system, to prove that our approach is on par with state-of-the-art methods in terms of accuracy while being able to perform recognition under real-time constraints."
]
} |
1904.06585 | 2936313222 | It has been a longstanding goal in computer vision to describe the 3D physical space in terms of parameterized volumetric models that would allow autonomous machines to understand and interact with their surroundings. Such models are typically motivated by human visual perception and aim to represent all elements of the physical world ranging from individual objects to complex scenes using a small set of parameters. One of the de facto standards to approach this problem is superquadrics - volumetric models that define various 3D shape primitives and can be fitted to actual 3D data (either in the form of point clouds or range images). However, existing solutions to superquadric recovery involve costly iterative fitting procedures, which limit the applicability of such techniques in practice. To alleviate this problem, we explore in this paper the possibility to recover superquadrics from range images without time consuming iterative parameter estimation techniques by using contemporary deep-learning models, more specifically, convolutional neural networks (CNNs). We pose the superquadric recovery problem as a regression task and develop a CNN regressor that is able to estimate the parameters of a superquadric model from a given range image. We train the regressor on a large set of synthetic range images, each containing a single (unrotated) superquadric shape and evaluate the learned model in comparative experiments with the current state-of-the-art. Additionally, we also present a qualitative analysis involving a dataset of real-world objects. The results of our experiments show that the proposed regressor not only outperforms the existing state-of-the-art, but also ensures a 270x faster execution time. | There has been prior work on recovering volumetric models using deep neural networks @cite_14 @cite_17 @cite_27 @cite_22 . 
@cite_17 , for example, build voxel representations of objects, called 3D shapenets, from range images and use CNNs to complete the shape, determine the next best view, and recognize objects. @cite_14 extend shapenets into full convolutional volumetric auto-encoders by estimating voxel occupancy grids. @cite_27 use CNNs to predict volumes from previously unseen image data. @cite_22 deal with superquadric recovery and segmentation of 3D point clouds. | {
"cite_N": [
"@cite_27",
"@cite_14",
"@cite_22",
"@cite_17"
],
"mid": [
"2529969239",
"2338532005",
"2768623474",
"1920022804"
],
"abstract": [
"We introduce a convolutional neural network for inferring a compact disentangled graphical description of objects from 2D images that can be used for volumetric reconstruction. The network comprises an encoder and a twin-tailed decoder. The encoder generates a disentangled graphics code. The first decoder generates a volume, and the second decoder reconstructs the input image using a novel training regime that allows the graphics code to learn a separate representation of the 3D object and a description of its lighting and pose conditions. We demonstrate this method by generating volumes and disentangled graphical descriptions from images and videos of faces and chairs.",
"With the advent of affordable depth sensors, 3D capture becomes more and more ubiquitous and already has made its way into commercial products. Yet, capturing the geometry or complete shapes of everyday objects using scanning devices (e.g. Kinect) still comes with several challenges that result in noise or even incomplete shapes.",
"The need to model visual information with compact representations has existed since the early days of computer vision. We implemented in the past a segmentation and model recovery method for range images which is unfortunately too slow for current size of 3D point clouds and type of applications. Recently, neural networks have become the popular choice for quick and effective processing of visual data. In this article we demonstrate that with a convolutional neural network we could achieve comparable results, that is to determine and model all objects in a given 3D point cloud scene. We started off with a simple architecture that could predict the parameters of a single object in a scene. Then we expanded it with an architecture similar to Faster R-CNN, that could predict the parameters for any number of objects in a scene. The results of the initial neural network were satisfactory. The second network, that performed also segmentation, still gave decent results comparable to the original method, but compared to the initial one, performed somewhat worse. Results, however, are encouraging but further experiments are needed to build CNNs that will be able to replace the state-of-the-art method.",
"3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representation automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet - a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the state of the art in a variety of tasks."
]
} |
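The voxel occupancy grids used by the volumetric networks above can be sketched in a few lines of plain Python (a hypothetical helper for illustration, not the authors' code; real pipelines typically use tensor libraries and much finer grids):

```python
def occupancy_grid(points, n, lo, hi):
    """Quantize a 3D point cloud into an n x n x n binary occupancy grid.

    A voxel is set to 1 if at least one point falls inside it.
    Points are assumed to lie in the cube [lo, hi]^3.
    """
    step = (hi - lo) / n
    grid = [[[0] * n for _ in range(n)] for _ in range(n)]
    for x, y, z in points:
        i = min(int((x - lo) / step), n - 1)  # clamp the upper boundary
        j = min(int((y - lo) / step), n - 1)
        k = min(int((z - lo) / step), n - 1)
        grid[i][j][k] = 1
    return grid

g = occupancy_grid([(0.1, 0.1, 0.1), (0.9, 0.9, 0.9)], 4, 0.0, 1.0)
# the two points occupy opposite corner voxels: g[0][0][0] and g[3][3][3]
```

Such a grid is the natural input/output representation for 3D CNNs that complete shapes or predict volumes, as in the works cited above.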
1904.06652 | 2938205538 | Recently, a simple combination of passage retrieval using off-the-shelf IR techniques and a BERT reader was found to be very effective for question answering directly on Wikipedia, yielding a large improvement over the previous state of the art on a standard benchmark dataset. In this paper, we present a data augmentation technique using distant supervision that exploits positive as well as negative examples. We apply a stage-wise approach to fine tuning BERT on multiple datasets, starting with data that is "furthest" from the test data and ending with the "closest". Experimental results show large gains in effectiveness over previous approaches on English QA datasets, and we establish new baselines on two recent Chinese QA datasets. | The roots of the distant supervision techniques we use trace back to at least the 1990s @cite_11 @cite_21 , although the term had not yet been coined. Such techniques have recently become commonplace, especially as a way to gather large amounts of labeled examples for data-hungry neural networks and other machine learning algorithms. Specific recent applications in question answering include , , , as well as for building benchmark test collections. | {
"cite_N": [
"@cite_21",
"@cite_11"
],
"mid": [
"2124634352",
"2101210369"
],
"abstract": [
"Many corpus-based natural language processing systems rely on text corpora that have been manually annotated with syntactic or semantic tags. In particular, all previous dictionary construction systems for information extraction have used an annotated training corpus or some form of annotated input. We have developed a system called AutoSlog-TS that creates dictionaries of extraction patterns using only untagged text. AutoSlog-TS is based on the AutoSlog system, which generated extraction patterns using annotated text and a set of heuristic rules. By adapting AutoSlog and combining it with statistical techniques, we eliminated its dependency on tagged text. In experiments with the MUC-4 terrorism domain, AutoSlog-TS created a dictionary of extraction patterns that performed comparably to a dictionary created by AutoSlog, using only preclassified texts as input.",
"This paper presents an unsupervised learning algorithm for sense disambiguation that, when trained on unannotated English text, rivals the performance of supervised techniques that require time-consuming hand annotations. The algorithm is based on two powerful constraints---that words tend to have one sense per discourse and one sense per collocation---exploited in an iterative bootstrapping procedure. Tested accuracy exceeds 96%."
]
} |
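The distant-supervision labeling exploited in this line of QA work can be illustrated with a toy sketch (the function name and the simple case-insensitive matching rule are illustrative assumptions; the paper's exact heuristics may differ): a retrieved passage becomes a positive training example if it contains the known answer string, and a negative example otherwise.

```python
def distant_labels(answer, passages):
    """Split retrieved passages into positive / negative training
    examples by answer-string matching (case-insensitive)."""
    a = answer.lower()
    pos, neg = [], []
    for p in passages:
        (pos if a in p.lower() else neg).append(p)
    return pos, neg

pos, neg = distant_labels(
    "Warsaw",
    ["The capital of Poland is Warsaw.", "Krakow is a city in Poland."],
)
# pos -> the first passage; neg -> the second
```

This is what makes the technique "distant": no human labels individual passages, so large amounts of (noisy) training data can be generated automatically.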
1904.06546 | 2936174995 | Principal Component Analysis (PCA) is a popular tool for dimensionality reduction and feature extraction in data analysis. There is a probabilistic version of PCA, known as Probabilistic PCA (PPCA). However, standard PCA and PPCA are not robust, as they are sensitive to outliers. To alleviate this problem, this paper introduces the Self-Paced Learning mechanism into PPCA, and proposes a novel method called Self-Paced Probabilistic Principal Component Analysis (SP-PPCA). Furthermore, we design the corresponding optimization algorithm based on the alternative search strategy and the expectation-maximization algorithm. SP-PPCA looks for optimal projection vectors and filters out outliers iteratively. Experiments on both synthetic problems and real-world datasets clearly demonstrate that SP-PPCA is able to reduce or eliminate the impact of outliers. | Projection Pursuit @cite_27 based robust PCA attempts to search for the direction that maximizes a robust measure of the variance of the projected observations. In order to relieve the high computational cost problem in @cite_27 , a faster algorithm, named RAPCA, was presented in @cite_17 . Furthermore, @cite_40 combined projection pursuit ideas with a robust covariance estimator and proposed the method ROBPCA. | {
"cite_N": [
"@cite_40",
"@cite_27",
"@cite_17"
],
"mid": [
"2054834816",
"1979486458",
"2124449497"
],
"abstract": [
"We introduce a new method for robust principal component analysis (PCA). Classical PCA is based on the empirical covariance matrix of the data and hence is highly sensitive to outlying observations. Two robust approaches have been developed to date. The first approach is based on the eigenvectors of a robust scatter matrix such as the minimum covariance determinant or an S-estimator and is limited to relatively low-dimensional data. The second approach is based on projection pursuit and can handle high-dimensional data. Here we propose the ROBPCA approach, which combines projection pursuit ideas with robust scatter matrix estimation. ROBPCA yields more accurate estimates at noncontaminated datasets and more robust estimates at contaminated data. ROBPCA can be computed rapidly, and is able to detect exact-fit situations. As a by-product, ROBPCA produces a diagnostic plot that displays and classifies the outliers. We apply the algorithm to several datasets from chemometrics and engineering.",
"Abstract This article proposes and discusses a type of new robust estimators for covariance/correlation matrices and principal components via projection-pursuit techniques. The most attractive advantage of the new procedures is that they are of both rotational equivariance and high breakdown point. Besides, they are qualitatively robust and consistent at elliptic underlying distributions. The Monte Carlo study shows that the best of the new estimators compare favorably with other robust methods. They provide as good a performance as M-estimators and somewhat better empirical breakdown properties.",
"When faced with high-dimensional data, one often uses principal component analysis (PCA) for dimension reduction. Classical PCA constructs a set of uncorrelated variables, which correspond to eigenvectors of the sample covariance matrix. However, it is well-known that this covariance matrix is strongly affected by anomalous observations. It is therefore necessary to apply robust methods that are resistant to possible outliers. Li and Chen [J. Am. Stat. Assoc. 80 (1985) 759] proposed a solution based on projection pursuit (PP). The idea is to search for the direction in which the projected observations have the largest robust scale. In subsequent steps, each new direction is constrained to be orthogonal to all previous directions. This method is very well suited for high-dimensional data, even when the number of variables p is higher than the number of observations n. However, the algorithm of Li and Chen has a high computational cost. In the references [C. Croux, A. Ruiz-Gazen, in COMPSTAT: Proceedings in Computational Statistics 1996, Physica-Verlag, Heidelberg, 1996, pp. 211–217; C. Croux and A. Ruiz-Gazen, High Breakdown Estimators for Principal Components: the Projection-Pursuit Approach Revisited, 2000, submitted for publication.], a computationally much more attractive method is presented, but in high dimensions (large p) it has a numerical accuracy problem and still consumes much computation time. In this paper, we construct a faster two-step algorithm that is more stable numerically. The new algorithm is illustrated on a data set with four dimensions and on two chemometrical data sets with 1200 and 600 dimensions."
]
} |
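The projection-pursuit idea behind these robust PCA variants can be sketched directly: evaluate a robust scale of the data projected onto candidate directions and keep the maximizer. The sketch below uses the normalized data points themselves as candidates, as in the Croux and Ruiz-Gazen algorithm, and the MAD (omitting the usual consistency constant) as the robust scale; it is an illustrative simplification, not any of the cited implementations:

```python
def robust_pp_direction(points):
    """Return the candidate direction maximizing a robust scale (MAD)
    of the projected observations; candidates are the normalized data
    points themselves."""
    def upper_median(vals):
        return sorted(vals)[len(vals) // 2]

    def mad(vals):
        m = upper_median(vals)
        return upper_median([abs(v - m) for v in vals])

    best, best_scale = None, -1.0
    for p in points:
        norm = sum(c * c for c in p) ** 0.5
        if norm == 0:
            continue
        d = [c / norm for c in p]  # candidate unit direction
        proj = [sum(ci * di for ci, di in zip(x, d)) for x in points]
        s = mad(proj)
        if s > best_scale:
            best, best_scale = d, s
    return best

pts = [(3, 0), (-3, 0), (2, 0), (-2, 0), (1, 0), (-1, 0), (0, 0.1), (0, -0.1)]
d = robust_pp_direction(pts)
# the robust spread is dominated by the first axis: d == [1.0, 0.0]
```

Because the MAD ignores extreme values, a few gross outliers cannot pull the recovered direction the way they pull the classical variance-maximizing eigenvector, which is precisely the robustness property these methods target.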
1904.06546 | 2936174995 | Principal Component Analysis (PCA) is a popular tool for dimensionality reduction and feature extraction in data analysis. There is a probabilistic version of PCA, known as Probabilistic PCA (PPCA). However, standard PCA and PPCA are not robust, as they are sensitive to outliers. To alleviate this problem, this paper introduces the Self-Paced Learning mechanism into PPCA, and proposes a novel method called Self-Paced Probabilistic Principal Component Analysis (SP-PPCA). Furthermore, we design the corresponding optimization algorithm based on the alternative search strategy and the expectation-maximization algorithm. SP-PPCA looks for optimal projection vectors and filters out outliers iteratively. Experiments on both synthetic problems and real-world datasets clearly demonstrate that SP-PPCA is able to reduce or eliminate the impact of outliers. | In the Principal Component Pursuit (PCP) method @cite_24 , the observed data is decomposed into a low-rank matrix and a sparse matrix, of which the sparse matrix can be treated as noise. However, such a decomposition is not always realistic @cite_2 . Moreover, PCP is proposed for uniformly corrupted coordinates of data, rather than outlying samples @cite_35 . | {
"cite_N": [
"@cite_24",
"@cite_35",
"@cite_2"
],
"mid": [
"2145962650",
"2099953425",
"2928677486"
],
"abstract": [
"This article is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.",
"We study the basic problem of robust subspace recovery. That is, we assume a data set that some of its points are sampled around a fixed subspace and the rest of them are spread in the whole ambient space, and we aim to recover the fixed underlying subspace. We first estimate \"robust inverse sample covariance\" by solving a convex minimization procedure; we then recover the subspace by the bottom eigenvectors of this matrix (their number correspond to the number of eigenvalues close to 0). We guarantee exact subspace recovery under some conditions on the underlying data. Furthermore, we propose a fast iterative algorithm, which linearly converges to the matrix minimizing the convex problem. We also quantify the effect of noise and regularization and discuss many other practical and theoretical issues for improving the subspace recovery in various settings. When replacing the sum of terms in the convex energy function (that we minimize) with the sum of squares of terms, we obtain that the new minimizer is a scaled version of the inverse sample covariance (when exists). We thus interpret our minimizer and its subspace (spanned by its bottom eigenvectors) as robust versions of the empirical inverse covariance and the PCA subspace respectively. We compare our method with many other algorithms for robust PCA on synthetic and real data sets and demonstrate state-of-the-art speed and accuracy.",
"Principal component analysis (PCA) is a powerful tool for dimensionality reduction. Unfortunately, it is sensitive to outliers, so that various robust PCA variants were proposed in the literature. Among them the so-called rotational invariant @math -norm PCA is rather popular. In this paper, we reinterpret this robust method as a conditional gradient algorithm and show moreover that it coincides with a gradient descent algorithm on Grassmannian manifolds. Based on this point of view, we prove for the first time convergence of the whole series of iterates to a critical point using the Kurdyka-Łojasiewicz property of the energy functional."
]
} |
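The Principal Component Pursuit decomposition discussed in this row, observed data split into a low-rank matrix plus a sparse (noise) matrix, can be sketched with a standard alternating-direction solver. This is an illustrative sketch, not the cited implementation; the step size `mu`, its growth rate `rho`, and the default `lam = 1/sqrt(max(m, n))` follow common heuristics from the robust-PCA literature.

```python
import numpy as np

def svt(X, tau):
    # Singular-value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    # Soft thresholding: proximal operator of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def pcp(M, lam=None, mu=None, rho=1.05, iters=500):
    # Solve  min ||L||_* + lam * ||S||_1   s.t.  L + S = M
    # by alternating minimization with an augmented-Lagrangian multiplier Y.
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
    Y = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y += mu * (M - L - S)
        mu = min(mu * rho, 1e7)  # gradually tighten the equality constraint
        if np.linalg.norm(M - L - S) <= 1e-7 * np.linalg.norm(M):
            break
    return L, S
```

On small synthetic low-rank-plus-sparse matrices this typically separates the two components closely, which is the exact-recovery behavior the abstract above proves under its assumptions.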
1904.06546 | 2936174995 | Principal Component Analysis (PCA) is a popular tool for dimensionality reduction and feature extraction in data analysis. There is a probabilistic version of PCA, known as Probabilistic PCA (PPCA). However, standard PCA and PPCA are not robust, as they are sensitive to outliers. To alleviate this problem, this paper introduces the Self-Paced Learning mechanism into PPCA, and proposes a novel method called Self-Paced Probabilistic Principal Component Analysis (SP-PPCA). Furthermore, we design the corresponding optimization algorithm based on the alternative search strategy and the expectation-maximization algorithm. SP-PPCA looks for optimal projection vectors and filters out outliers iteratively. Experiments on both synthetic problems and real-world datasets clearly demonstrate that SP-PPCA is able to reduce or eliminate the impact of outliers. | In addition, based on statistical physics and self-organized rules, a robust PCA was proposed in @cite_37 . The authors generalized an energy function by introducing binary variables. We also adopt binary variables in our algorithm, however, our method is derived under the probability framework based on Self-Paced Learning and Probabilistic PCA. More importantly, we give an efficient strategy to solve the relevant optimization problem. | {
"cite_N": [
"@cite_37"
],
"mid": [
"2098884960"
],
"abstract": [
"This paper applies statistical physics to the problem of robust principal component analysis (PCA). The commonly used PCA learning rules are first related to energy functions. These functions are generalized by adding a binary decision field with a given prior distribution so that outliers in the data are dealt with explicitly in order to make PCA robust. Each of the generalized energy functions is then used to define a Gibbs distribution from which a marginal distribution is obtained by summing over the binary decision field. The marginal distribution defines an effective energy function, from which self-organizing rules have been developed for robust PCA. Under the presence of outliers, both the standard PCA methods and the existing self-organizing PCA rules studied in the literature of neural networks perform quite poorly. By contrast, the robust rules proposed here resist outliers well and perform excellently for fulfilling various PCA-like tasks such as obtaining the first principal component vector, the first k principal component vectors, and directly finding the subspace spanned by the first k principal component vectors without solving for each vector individually. Comparative experiments have been made, and the results show that the authors' robust rules improve the performances of the existing PCA algorithms significantly when outliers are present."
]
} |
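The alternation that SP-PPCA performs, optimizing projection vectors and then re-selecting which samples count as "easy" via binary variables, can be illustrated with a deliberately simplified, non-probabilistic sketch: plain SVD-based PCA stands in for PPCA's EM step, and a hard quantile threshold that admits a growing fraction of samples each round stands in for the self-paced pace parameter. All parameter names and defaults here are illustrative, not taken from the paper.

```python
import numpy as np

def self_paced_pca(X, k, frac0=0.5, step=0.1, rounds=5):
    # Alternate between (1) fitting PCA on the currently selected "easy"
    # samples and (2) re-selecting samples whose reconstruction error falls
    # below a threshold that loosens each round, so outliers enter late or never.
    v = np.ones(len(X), dtype=bool)   # binary selection variables
    frac = frac0
    for _ in range(rounds):
        mu = X[v].mean(axis=0)
        _, _, Vt = np.linalg.svd(X[v] - mu, full_matrices=False)
        W = Vt[:k].T                  # current projection vectors
        resid = (X - mu) - (X - mu) @ W @ W.T
        err = (resid ** 2).sum(axis=1)
        lam = np.quantile(err, min(frac, 1.0))
        v = err <= lam                # keep only the easiest fraction
        frac += step                  # self-paced "age": admit harder samples
    return W, v
```

With inliers near a low-dimensional subspace and a few distant outliers, the outliers accumulate large reconstruction error after the first refit and stay filtered out, mirroring the iterative outlier rejection described in the abstract.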
1904.06611 | 2957320304 | LiveSketch is a novel algorithm for searching large image collections using hand-sketched queries. LiveSketch tackles the inherent ambiguity of sketch search by creating visual suggestions that augment the query as it is drawn, making query specification an iterative rather than one-shot process that helps disambiguate users' search intent. Our technical contributions are: a triplet convnet architecture that incorporates an RNN based variational autoencoder to search for images using vector (stroke-based) queries; real-time clustering to identify likely search intents (and so, targets within the search embedding); and the use of backpropagation from those targets to perturb the input stroke sequence, so suggesting alterations to the query in order to guide the search. We show improvements in accuracy and time-to-task over contemporary baselines using a 67M image corpus. | Visual search is a long-standing problem within the computer vision and information retrieval communities, where the iterative presentation and refinement of results has been studied extensively as ‘relevance feedback’ (RF) @cite_23 @cite_20 @cite_25 although only sparsely for SBIR @cite_19 . RF is driven by interactive markup of results at each search iteration. Users tag results as relevant or irrelevant, so tuning internal search parameters to improve results. Our work differs in that we modify the query itself to affect subsequent search iterations; queries may be further augmented by the user at each iteration. Recognizing the ambiguity present in sketched queries we group putative results into semantic clusters and propose edits to the search query for each object class present. | {
"cite_N": [
"@cite_19",
"@cite_20",
"@cite_25",
"@cite_23"
],
"mid": [
"2092639952",
"2033365921",
"2155855695",
"194213661"
],
"abstract": [
"We present a new algorithm for searching video repositories using free-hand sketches. Our queries express both appearance (color, shape) and motion attributes, as well as semantic properties (object labels) enabling hybrid queries to be specified. Unlike existing sketch based video retrieval (SBVR) systems that enable hybrid queries of this form, we do not adopt a model fitting optimization approach to match at query-time. Rather, we create an efficiently searchable index via a novel space-time descriptor that encapsulates all these properties. The real-time performance yielded by our indexing approach enables interactive refinement of search results within a relevance feedback (RF) framework; a unique contribution to SBVR. We evaluate our system over 700 sports footage clips exhibiting a variety of clutter and motion conditions, demonstrating significant accuracy and speed gains over the state of the art.",
"We propose a novel mode of feedback for image search, where a user describes which properties of exemplar images should be adjusted in order to more closely match his her mental model of the image(s) sought. For example, perusing image results for a query “black shoes”, the user might state, “Show me shoe images like these, but sportier.” Offline, our approach first learns a set of ranking functions, each of which predicts the relative strength of a nameable attribute in an image (‘sportiness’, ‘furriness’, etc.). At query time, the system presents an initial set of reference images, and the user selects among them to provide relative attribute feedback. Using the resulting constraints in the multi-dimensional attribute space, our method updates its relevance function and re-ranks the pool of images. This procedure iterates using the accumulated constraints until the top ranked images are acceptably close to the user's envisioned target. In this way, our approach allows a user to efficiently “whittle away” irrelevant portions of the visual feature space, using semantic language to precisely communicate her preferences to the system. We demonstrate the technique for refining image search for people, products, and scenes, and show it outperforms traditional binary relevance feedback in terms of search speed and accuracy.",
"In interactive image search, a user iteratively refines his results by giving feedback on exemplar images. Active selection methods aim to elicit useful feedback, but traditional approaches suffer from expensive selection criteria and cannot predict informativeness reliably due to the imprecision of relevance feedback. To address these drawbacks, we propose to actively select \"pivot\" exemplars for which feedback in the form of a visual comparison will most reduce the system's uncertainty. For example, the system might ask, \"Is your target image more or less crowded than this image?\" Our approach relies on a series of binary search trees in relative attribute space, together with a selection function that predicts the information gain were the user to compare his envisioned target to the next node deeper in a given attribute's tree. It makes interactive search more efficient than existing strategies, both in terms of the system's selection time as well as the user's feedback effort.",
"Relevance Feedback is an interesting procedure to improve the performance of Content-Based Image Retrieval systems even when using low-level features alone. In this work we compare the efficiency of one class and two class Support Vector Machines in content-based image retrieval using Invariant Feature Histograms. We describe our methodology of performing Relevance Feedback in both cases and report encouraging results on a subset of MPEG-7 content dataset."
]
} |
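Relevance feedback of the kind surveyed in this row is often introduced via the classic Rocchio update, in which the query vector is pulled toward results the user marked relevant and pushed away from those marked irrelevant. The sketch below is a generic illustration; the weights `alpha`, `beta`, `gamma` are conventional defaults, not values from any cited system.

```python
import numpy as np

def rocchio(query, relevant, irrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    # Move the query toward the centroid of relevant results and away
    # from the centroid of irrelevant ones.
    q = alpha * query
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(irrelevant):
        q = q - gamma * np.mean(irrelevant, axis=0)
    return q

def rerank(q, index):
    # Cosine-similarity ranking over a small in-memory feature index.
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q) + 1e-9)
    return np.argsort(-sims)
```

Iterating `rocchio` and `rerank` gives the markup-driven feedback loop described above; LiveSketch differs precisely in that it perturbs the sketched query itself rather than such an internal query vector.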
1904.06574 | 2938348746 | Recently, Internet service providers (ISPs) have gained increased flexibility in how they configure their in-ground optical fiber into an IP network. This greater control has been made possible by (i) the maturation of software defined networking (SDN), and (ii) improvements in optical switching technology. Whereas traditionally, at network design time, each IP link was assigned a fixed optical path and bandwidth, modern colorless and directionless Reconfigurable Optical Add Drop Multiplexers (CD ROADMs) allow a remote SDN controller to remap the IP topology to the optical underlay on the fly. Consequently, ISPs face new opportunities and challenges in the design and operation of their backbone networks. Specifically, ISPs must determine how best to design their networks to take advantage of the new capabilities; they need an automated way to generate the least expensive network design that still delivers all offered traffic, even in the presence of equipment failures. This problem is difficult because of the physical constraints governing the placement of optical regenerators, a piece of optical equipment necessary for maintaining an optical signal over long stretches of fiber. As a solution, we present an integer linear program (ILP) which (1) solves the equipment-placement network design problem; (2) determines the optimal mapping of IP links to the optical infrastructure for any given failure scenario; and (3) determines how best to route the offered traffic over the IP topology. To scale to larger networks, we also describe an efficient heuristic that finds nearly optimal network designs in a fraction of the time. Further, in our experiments our ILP offers cost savings of up to 29 compared to traditional network design techniques. | At a high level, the Owan work by @cite_6 is similar to ours. 
Like our work, Owan is a centralized system that jointly optimizes the IP and optical topologies and configures network devices, including CD ROADMs, according to this global strategy. However, there are three key differences between Owan and our work. First, our objective differs from that of Owan. We aim to minimize the cost of tails and regens, placing the equipment such that, under all failure scenarios, we can set up IP links to carry all necessary traffic. The authors of Owan instead aim to minimize the transfer completion time or maximize the number of transfers that meet their deadlines. Second, our work applies in a different setting. Owan is designed for bulk transfers and depends on the network operator being able to control sending rates, possibly delaying traffic for several hours. We target all ISP traffic; we cannot rate-control any traffic, and we must route all demands, even in the case of failures, except during a brief transient period during IP link reconfiguration.
"cite_N": [
"@cite_6"
],
"mid": [
"2504195863"
],
"abstract": [
"Bulk transfer on the wide-area network (WAN) is a fundamental service to many globally-distributed applications. It is challenging to efficiently utilize expensive WAN bandwidth to achieve short transfer completion time and meet mission-critical deadlines. Advancements in software-defined networking (SDN) and optical hardware make it feasible and beneficial to quickly reconfigure optical devices in the optical layer, which brings a new opportunity for traffic management on the WAN. We present Owan, a novel traffic management system that optimizes wide-area bulk transfers with centralized joint control of the optical and network layers. Owan can dynamically change the network-layer topology by reconfiguring the optical devices. We develop efficient algorithms to jointly optimize optical circuit setup, routing and rate allocation, and dynamically adapt them to traffic demand changes. We have built a prototype of Owan with commodity optical and electrical hardware. Testbed experiments and large-scale simulations on two ISP topologies and one inter-DC topology show that Owan completes transfers up to 4.45x faster on average, and up to 1.36x more transfers meet their deadlines, as compared to prior methods that only control the network layer."
]
} |
1904.06574 | 2938348746 | Recently, Internet service providers (ISPs) have gained increased flexibility in how they configure their in-ground optical fiber into an IP network. This greater control has been made possible by (i) the maturation of software defined networking (SDN), and (ii) improvements in optical switching technology. Whereas traditionally, at network design time, each IP link was assigned a fixed optical path and bandwidth, modern colorless and directionless Reconfigurable Optical Add Drop Multiplexers (CD ROADMs) allow a remote SDN controller to remap the IP topology to the optical underlay on the fly. Consequently, ISPs face new opportunities and challenges in the design and operation of their backbone networks. Specifically, ISPs must determine how best to design their networks to take advantage of the new capabilities; they need an automated way to generate the least expensive network design that still delivers all offered traffic, even in the presence of equipment failures. This problem is difficult because of the physical constraints governing the placement of optical regenerators, a piece of optical equipment necessary for maintaining an optical signal over long stretches of fiber. As a solution, we present an integer linear program (ILP) which (1) solves the equipment-placement network design problem; (2) determines the optimal mapping of IP links to the optical infrastructure for any given failure scenario; and (3) determines how best to route the offered traffic over the IP topology. To scale to larger networks, we also describe an efficient heuristic that finds nearly optimal network designs in a fraction of the time. Further, in our experiments our ILP offers cost savings of up to 29 compared to traditional network design techniques. | Third, we make different assumptions about what parts of the infrastructure are given and fixed. 
The authors of Owan take the locations of optical equipment as an input constraint, while we solve for the optimal places to put tails and regens. This distinction is crucial; they do not need any notion of here-and-now decisions about where to place tails and regens separate from wait-and-see decisions about IP link configuration and routing. Other studies demonstrate that, to minimize delay, it is best to set up direct IP links between endpoints exchanging significant amounts of traffic, while relying on packet switching through multiple hops to handle lower demands @cite_3.
"cite_N": [
"@cite_3"
],
"mid": [
"1668143537"
],
"abstract": [
"We develop algorithms for joint IP layer routing and WDM logical topology reconfiguration in IP-over-WDM networks experiencing stochastic traffic. At the WDM layer, we associate a non-negligible tuning latency with WDM reconfiguration, during which time tuned transceivers cannot service backlogged data. The IP layer is modeled as a queueing system. We demonstrate that our algorithms achieve asymptotic throughput optimality by using frame-based maximum weight scheduling decisions. We study both deterministic and random frame durations. In addition to dynamically triggering WDM reconfiguration, our algorithms specify precisely how to route packets over the IP layer during the phases in which the WDM layer remains fixed. Our algorithms remain valid under a variety of optical layer constraints. We provide an analysis of the specific case of WDM networks with multiple ports per node. In order to gauge the delay properties of our algorithms, we conduct a simulation study and demonstrate an important tradeoff between WDM reconfiguration and IP layer routing. We find that multi-hop routing is extremely beneficial at low throughput levels, while single-hop routing achieves improved delay at high throughput levels. For a simple access network, we demonstrate through simulation the benefit of employing multi-hop IP layer routes."
]
} |
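A key physical constraint behind this row's network-design problem is that an optical signal must be regenerated before its path exceeds the optical reach. A minimal sketch of the resulting arithmetic, assuming a hypothetical 1500 km reach and regens placed greedily at ROADM nodes along a candidate IP link (neither the number nor the placement policy is taken from the paper):

```python
# Hypothetical optical reach (km) before a signal must be regenerated.
REACH_KM = 1500

def regens_needed(span_lengths_km):
    # Greedy placement: walk the fiber spans of a candidate IP link and
    # drop a regen at the last ROADM node reachable before the signal
    # would exceed its optical reach.
    regens, run = 0, 0.0
    for span in span_lengths_km:
        if span > REACH_KM:
            raise ValueError("single span exceeds optical reach")
        if run + span > REACH_KM:
            regens += 1      # regenerate at the node before this span
            run = 0.0
        run += span
    return regens
```

The paper's ILP chooses regen sites jointly with failure-scenario link mappings and routing; this greedy count only illustrates why long IP links consume regens and hence drive equipment cost.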
1904.06535 | 2939740641 | Previous scene text detection methods have progressed substantially over the past years. However, limited by the receptive field of CNNs and the simple representations like rectangle bounding box or quadrangle adopted to describe text, previous methods may fall short when dealing with more challenging text instances, such as extremely long text and arbitrarily shaped text. To address these two problems, we present a novel text detector namely LOMO, which localizes the text progressively for multiple times (or in other word, LOok More than Once). LOMO consists of a direct regressor (DR), an iterative refinement module (IRM) and a shape expression module (SEM). At first, text proposals in the form of quadrangle are generated by DR branch. Next, IRM progressively perceives the entire long text by iterative refinement based on the extracted feature blocks of preliminary proposals. Finally, a SEM is introduced to reconstruct more precise representation of irregular text by considering the geometry properties of text instance, including text region, text center line and border offsets. The state-of-the-art results on several public benchmarks including ICDAR2017-RCTW, SCUT-CTW1500, Total-Text, ICDAR2015 and ICDAR17-MLT confirm the striking robustness and effectiveness of LOMO. | @cite_26 @cite_10 @cite_3 @cite_37 @cite_35 @cite_19 @cite_15 first detect individual text parts or characters, and then group them into words with a set of post-processing steps. CTPN @cite_10 adopted the framework of Faster R-CNN @cite_3 to generate dense and compact text components. In @cite_37 , scene text is decomposed into two detectable elements namely text segments and links, where a link can indicate whether two adjacent segments belong to the same word and should be connected together. 
WordSup @cite_35 and WeText @cite_19 proposed two different weakly supervised learning methods for the character detector, which greatly ease the difficulty of training with insufficient character-level annotations. @cite_15 converted a text image into a stochastic flow graph and then performed Markov Clustering on it to predict instance-level bounding boxes. However, such methods are not robust in scenarios with complex backgrounds, owing to the limitations of their staged word/line generation.
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_26",
"@cite_3",
"@cite_19",
"@cite_15",
"@cite_10"
],
"mid": [
"2962804639",
"2605076167",
"1972065312",
"2613718673",
"2962935569",
"2798591388",
"2519818067"
],
"abstract": [
"Imagery texts are usually organized as a hierarchy of several visual elements, i.e. characters, words, text lines and text blocks. Among these elements, character is the most basic one for various languages such as Western, Chinese, Japanese, mathematical expression and etc. It is natural and convenient to construct a common text detection engine based on character detectors. However, training character detectors requires a vast of location annotated characters, which are expensive to obtain. Actually, the existing real text datasets are mostly annotated in word or line level. To remedy this dilemma, we propose a weakly supervised framework that can utilize word annotations, either in tight quadrangles or the more loose bounding boxes, for character detector training. When applied in scene text detection, we are thus able to train a robust character detector by exploiting word annotations in the rich large-scale real scene text datasets, e.g. ICDAR15 [19] and COCO-text [39]. The character detector acts as a key role in the pipeline of our text detection engine. It achieves the state-of-the-art performance on several challenging scene text detection benchmarks. We also demonstrate the flexibility of our pipeline by various scenarios, including deformed text detection and math expression recognition.",
"Most state-of-the-art text detection methods are specific to horizontal Latin text and are not fast enough for real-time applications. We introduce Segment Linking (SegLink), an oriented text detection method. The main idea is to decompose text into two locally detectable elements, namely segments and links. A segment is an oriented box covering a part of a word or text line, A link connects two adjacent segments, indicating that they belong to the same word or text line. Both elements are detected densely at multiple scales by an end-to-end trained, fully-convolutional neural network. Final detections are produced by combining segments connected by links. Compared with previous methods, SegLink improves along the dimensions of accuracy, speed, and ease of training. It achieves an f-measure of 75.0 on the standard ICDAR 2015 Incidental (Challenge 4) benchmark, outperforming the previous best by a large margin. It runs at over 20 FPS on 512x512 images. Moreover, without modification, SegLink is able to detect long lines of non-Latin text, such as Chinese.",
"With the increasing popularity of practical vision systems and smart phones, text detection in natural scenes becomes a critical yet challenging task. Most existing methods have focused on detecting horizontal or near-horizontal texts. In this paper, we propose a system which detects texts of arbitrary orientations in natural images. Our algorithm is equipped with a two-level classification scheme and two sets of features specially designed for capturing both the intrinsic characteristics of texts. To better evaluate our algorithm and compare it with other competing algorithms, we generate a new dataset, which includes various texts in diverse real-world scenarios; we also propose a protocol for performance evaluation. Experiments on benchmark datasets and the proposed dataset demonstrate that our algorithm compares favorably with the state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on texts of arbitrary orientations in complex natural scenes.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.",
"The requiring of large amounts of annotated training data has become a common constraint on various deep learning systems. In this paper, we propose a weakly supervised scene text detection method (WeText) that trains robust and accurate scene text detection models by learning from unannotated or weakly annotated data. With a \"light\" supervised model trained on a small fully annotated dataset, we explore semi-supervised and weakly supervised learning on a large unannotated dataset and a large weakly annotated dataset, respectively. For the unsupervised learning, the light supervised model is applied to the unannotated dataset to search for more character training samples, which are further combined with the small annotated dataset to retrain a superior character detection model. For the weakly supervised learning, the character searching is guided by high-level annotations of words text lines that are widely available and also much easier to prepare. In addition, we design an unified scene character detector by adapting regression based deep networks, which greatly relieves the error accumulation issue that widely exists in most traditional approaches. Extensive experiments across different unannotated and weakly annotated datasets show that the scene text detection performance can be clearly boosted under both scenarios, where the weakly supervised learning can achieve the state-of-the-art performance by using only 229 fully annotated scene text images.",
"A novel framework named Markov Clustering Network (MCN) is proposed for fast and robust scene text detection. MCN predicts instance-level bounding boxes by firstly converting an image into a Stochastic Flow Graph (SFG) and then performing Markov Clustering on this graph. Our method can detect text objects with arbitrary size and orientation without prior knowledge of object size. The stochastic flow graph encode objects' local correlation and semantic information. An object is modeled as strongly connected nodes, which allows flexible bottom-up detection for scale-varying and rotated objects. MCN generates bounding boxes without using Non-Maximum Suppression, and it can be fully parallelized on GPUs. The evaluation on public benchmarks shows that our method outperforms the existing methods by a large margin in detecting multioriented text objects. MCN achieves new state-of-art performance on challenging MSRA-TD500 dataset with precision of 0.88, recall of 0.79 and F-score of 0.83. Also, MCN achieves realtime inference with frame rate of 34 FPS, which is @math speedup when compared with the fastest scene text detection algorithm.",
"We propose a novel Connectionist Text Proposal Network (CTPN) that accurately localizes text lines in natural image. The CTPN detects a text line in a sequence of fine-scale text proposals directly in convolutional feature maps. We develop a vertical anchor mechanism that jointly predicts location and text non-text score of each fixed-width proposal, considerably improving localization accuracy. The sequential proposals are naturally connected by a recurrent neural network, which is seamlessly incorporated into the convolutional network, resulting in an end-to-end trainable model. This allows the CTPN to explore rich context information of image, making it powerful to detect extremely ambiguous text. The CTPN works reliably on multi-scale and multi-language text without further post-processing, departing from previous bottom-up methods requiring multi-step post filtering. It achieves 0.88 and 0.61 F-measure on the ICDAR 2013 and 2015 benchmarks, surpassing recent results [8, 35] by a large margin. The CTPN is computationally efficient with 0.14 s image, by using the very deep VGG16 model [27]. Online demo is available: http: textdet.com ."
]
} |
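Several of the detectors surveyed above (e.g. EAST and TextBoxes) produce dense per-pixel or per-anchor boxes and rely on standard non-maximum suppression to keep one box per text instance. A minimal axis-aligned sketch of that step; real systems often use locality-aware or rotated-box variants rather than this plain form:

```python
def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, thresh=0.5):
    # Keep the highest-scoring box, drop boxes overlapping it, repeat.
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep
```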
1904.06535 | 2939740641 | Previous scene text detection methods have progressed substantially over the past years. However, limited by the receptive field of CNNs and the simple representations like rectangle bounding box or quadrangle adopted to describe text, previous methods may fall short when dealing with more challenging text instances, such as extremely long text and arbitrarily shaped text. To address these two problems, we present a novel text detector namely LOMO, which localizes the text progressively for multiple times (or in other word, LOok More than Once). LOMO consists of a direct regressor (DR), an iterative refinement module (IRM) and a shape expression module (SEM). At first, text proposals in the form of quadrangle are generated by DR branch. Next, IRM progressively perceives the entire long text by iterative refinement based on the extracted feature blocks of preliminary proposals. Finally, a SEM is introduced to reconstruct more precise representation of irregular text by considering the geometry properties of text instance, including text region, text center line and border offsets. The state-of-the-art results on several public benchmarks including ICDAR2017-RCTW, SCUT-CTW1500, Total-Text, ICDAR2015 and ICDAR17-MLT confirm the striking robustness and effectiveness of LOMO. | @cite_30 @cite_32 @cite_18 @cite_2 @cite_7 usually adopt some popular object detection frameworks and models under the supervision of the word or line level annotations. TextBoxes @cite_30 and RRD @cite_32 adjusted the anchor ratios of SSD @cite_24 to handle different aspect ratios of text. RRPN @cite_18 proposed rotation region proposal to cover multi-oriented scene text. However, EAST @cite_2 and Deep Regression @cite_7 directly detected the quadrangles of words in a per-pixel manner without using anchors and proposals. Due to their end-to-end design, these approaches can maximize word-level annotation and easily achieve high performance on standard benchmarks. 
Owing to the huge variance of text aspect ratios (especially for non-Latin text) and the limited receptive field of CNNs, these methods cannot efficiently handle long text.
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_7",
"@cite_32",
"@cite_24",
"@cite_2"
],
"mid": [
"2962773189",
"2593539516",
"2952662639",
"",
"2193145675",
"2605982830"
],
"abstract": [
"This paper presents an end-to-end trainable fast scene text detector, named TextBoxes, which detects scene text with both high accuracy and efficiency in a single network forward pass, involving no post-process except for a standard non-maximum suppression. TextBoxes outperforms competing methods in terms of text localization accuracy and is much faster, taking only 0.09s per image in a fast implementation. Furthermore, combined with a text recognizer, TextBoxes significantly outperforms state-of-the-art approaches on word spotting and end-to-end text recognition tasks.",
"This paper introduces a novel rotation-based framework for arbitrary-oriented text detection in natural scene images. We present the Rotation Region Proposal Networks , which are designed to generate inclined proposals with text orientation angle information. The angle information is then adapted for bounding box regression to make the proposals more accurately fit into the text region in terms of the orientation. The Rotation Region-of-Interest pooling layer is proposed to project arbitrary-oriented proposals to a feature map for a text region classifier. The whole framework is built upon a region-proposal-based architecture, which ensures the computational efficiency of the arbitrary-oriented text detection compared with previous text detection systems. We conduct experiments using the rotation-based framework on three real-world scene text detection datasets and demonstrate its superiority in terms of effectiveness and efficiency over previous approaches.",
"In this paper, we first provide a new perspective to divide existing high performance object detection methods into direct and indirect regressions. Direct regression performs boundary regression by predicting the offsets from a given point, while indirect regression predicts the offsets from some bounding box proposals. Then we analyze the drawbacks of the indirect regression, which the recent state-of-the-art detection structures like Faster-RCNN and SSD follows, for multi-oriented scene text detection, and point out the potential superiority of direct regression. To verify this point of view, we propose a deep direct regression based method for multi-oriented scene text detection. Our detection framework is simple and effective with a fully convolutional network and one-step post processing. The fully convolutional network is optimized in an end-to-end way and has bi-task outputs where one is pixel-wise classification between text and non-text, and the other is direct regression to determine the vertex coordinates of quadrilateral text boundaries. The proposed method is particularly beneficial for localizing incidental scene texts. On the ICDAR2015 Incidental Scene Text benchmark, our method achieves the F1-measure of 81 , which is a new state-of-the-art and significantly outperforms previous approaches. On other standard datasets with focused scene texts, our method also reaches the state-of-the-art performance.",
"",
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.",
"Previous approaches for scene text detection have already achieved promising performances across various benchmarks. However, they usually fall short when dealing with challenging scenarios, even when equipped with deep neural network models, because the overall performance is determined by the interplay of multiple stages and components in the pipelines. In this work, we propose a simple yet powerful pipeline that yields fast and accurate text detection in natural scenes. The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps (e.g., candidate aggregation and word partitioning), with a single neural network. The simplicity of our pipeline allows concentrating efforts on designing loss functions and neural network architecture. Experiments on standard datasets including ICDAR 2015, COCO-Text and MSRA-TD500 demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both accuracy and efficiency. On the ICDAR 2015 dataset, the proposed algorithm achieves an F-score of 0.7820 at 13.2fps at 720p resolution."
]
} |
1904.06535 | 2939740641 | Previous scene text detection methods have progressed substantially over the past years. However, limited by the receptive field of CNNs and the simple representations like rectangle bounding box or quadrangle adopted to describe text, previous methods may fall short when dealing with more challenging text instances, such as extremely long text and arbitrarily shaped text. To address these two problems, we present a novel text detector namely LOMO, which localizes the text progressively for multiple times (or in other word, LOok More than Once). LOMO consists of a direct regressor (DR), an iterative refinement module (IRM) and a shape expression module (SEM). At first, text proposals in the form of quadrangle are generated by DR branch. Next, IRM progressively perceives the entire long text by iterative refinement based on the extracted feature blocks of preliminary proposals. Finally, a SEM is introduced to reconstruct more precise representation of irregular text by considering the geometry properties of text instance, including text region, text center line and border offsets. The state-of-the-art results on several public benchmarks including ICDAR2017-RCTW, SCUT-CTW1500, Total-Text, ICDAR2015 and ICDAR17-MLT confirm the striking robustness and effectiveness of LOMO. | @cite_38 @cite_31 @cite_23 @cite_16 mainly draw inspiration from semantic segmentation methods and regard all the pixels within text bounding boxes as positive regions. The greatest benefit of these methods is the ability to extract arbitrary-shape text. @cite_38 first used FCN @cite_29 to extract text blocks and then hunted text lines with the statistical information of MSERs @cite_39 . To better separate adjacent text instances, @cite_31 classified each pixel into three categories: non-text, text border and text. TextSnake @cite_23 and PSENet @cite_16 further provided a novel heat map, namely, text center line map to separate different text instances. 
These methods are based on proposal-free instance segmentation, whose performance is strongly affected by the robustness of the segmentation results. | {
"cite_N": [
"@cite_38",
"@cite_29",
"@cite_39",
"@cite_23",
"@cite_31",
"@cite_16"
],
"mid": [
"2339589954",
"1903029394",
"1488125194",
"2810028092",
"2776766448",
"2806327167"
],
"abstract": [
"In this paper, we propose a novel approach for text detection in natural images. Both local and global cues are taken into account for localizing text lines in a coarse-to-fine procedure. First, a Fully Convolutional Network (FCN) model is trained to predict the salient map of text regions in a holistic manner. Then, text line hypotheses are estimated by combining the salient map and character components. Finally, another FCN classifier is used to predict the centroid of each character, in order to remove the false hypotheses. The framework is general for handling text in multiple orientations, languages and fonts. The proposed method consistently achieves the state-of-the-art performance on three text detection benchmarks: MSRA-TD500, ICDAR2015 and ICDAR2013.",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.",
"A general method for text localization and recognition in real-world images is presented. The proposed method is novel, as it (i) departs from a strict feed-forward pipeline and replaces it by a hypothesesverification framework simultaneously processing multiple text line hypotheses, (ii) uses synthetic fonts to train the algorithm eliminating the need for time-consuming acquisition and labeling of real-world training data and (iii) exploits Maximally Stable Extremal Regions (MSERs) which provides robustness to geometric and illumination conditions. The performance of the method is evaluated on two standard datasets. On the Char74k dataset, a recognition rate of 72 is achieved, 18 higher than the state-of-the-art. The paper is first to report both text detection and recognition results on the standard and rather challenging ICDAR 2003 dataset. The text localization works for number of alphabets and the method is easily adapted to recognition of other scripts, e.g. cyrillics.",
"Driven by deep neural networks and large scale datasets, scene text detection methods have progressed substantially over the past years, continuously refreshing the performance records on various standard benchmarks. However, limited by the representations (axis-aligned rectangles, rotated rectangles or quadrangles) adopted to describe text, existing methods may fall short when dealing with much more free-form text instances, such as curved text, which are actually very common in real-world scenarios. To tackle this problem, we propose a more flexible representation for scene text, termed as TextSnake, which is able to effectively represent text instances in horizontal, oriented and curved forms. In TextSnake, a text instance is described as a sequence of ordered, overlapping disks centered at symmetric axes, each of which is associated with potentially variable radius and orientation. Such geometry attributes are estimated via a Fully Convolutional Network (FCN) model. In experiments, the text detector based on TextSnake achieves state-of-the-art or comparable performance on Total-Text and SCUT-CTW1500, the two newly published benchmarks with special emphasis on curved text in natural images, as well as the widely-used datasets ICDAR 2015 and MSRA-TD500. Specifically, TextSnake outperforms the baseline on Total-Text by more than 40 in F-measure.",
"In this paper we propose a new solution to the text detection problem via border learning. Specifically, we make four major contributions: 1) We analyze the insufficiencies of the classic non-text and text settings for text detection. 2) We introduce the border class to the text detection problem for the first time, and validate that the decoding process is largely simplified with the help of text border. 3) We collect and release a new text detection PPT dataset containing 10,692 images with non-text, border, and text annotations. 4) We develop a lightweight (only 0.28M parameters), fully convolutional network (FCN) to effectively learn borders in text images. The results of our extensive experiments show that the proposed solution achieves comparable performance, and often outperforms state-of-theart approaches on standard benchmarks–even though our solution only requires minimal post-processing to parse a bounding box from a detected text map, while others often require heavy post-processing.",
"Scene text detection has witnessed rapid progress especially with the recent development of convolutional neural networks. However, there still exists two challenges which prevent the algorithm into industry applications. On the one hand, most of the state-of-art algorithms require quadrangle bounding box which is in-accurate to locate the texts with arbitrary shape. On the other hand, two text instances which are close to each other may lead to a false detection which covers both instances. Traditionally, the segmentation-based approach can relieve the first problem but usually fail to solve the second challenge. To address these two challenges, in this paper, we propose a novel Progressive Scale Expansion Network (PSENet), which can precisely detect text instances with arbitrary shapes. More specifically, PSENet generates the different scale of kernels for each text instance, and gradually expands the minimal scale kernel to the text instance with the complete shape. Due to the fact that there are large geometrical margins among the minimal scale kernels, our method is effective to split the close text instances, making it easier to use segmentation-based methods to detect arbitrary-shaped text instances. Extensive experiments on CTW1500, Total-Text, ICDAR 2015 and ICDAR 2017 MLT validate the effectiveness of PSENet. Notably, on CTW1500, a dataset full of long curve texts, PSENet achieves a F-measure of 74.3 at 27 FPS, and our best F-measure (82.2 ) outperforms state-of-art algorithms by 6.6 . The code will be released in the future."
]
} |
1904.06576 | 2939155038 | As smart grids are getting popular and being widely implemented, preserving the privacy of consumers is becoming more substantial. Power generation and pricing in smart grids depends on the continuously gathered information from the consumers. However, having access to the data relevant to the electricity consumption of each individual consumer is in conflict with its privacy. One common approach for preserving privacy is to aggregate data of different consumers and to use their smart-meters for calculating the bills. But in this approach, malicious consumers who send erroneous data to take advantage or disrupt smart grid cannot be identified. In this paper, we propose a new statistical-based scheme for data gathering and billing in which the privacy of consumers is preserved, and at the same time, if any consumer with erroneous data can be detected. Our simulation results verify these matters. | In @cite_6 , the authors propose an algorithm for data collection with self-awareness protection. The paper considers data aggregators and consumers in a smart grid where some of the respondents may not participate in contributing their personal data or may submit erroneous data. To overcome this issue, a self-awareness protocol is proposed to enhance the trust of the respondents when sending their personal data to the data aggregators. In this scheme, all consumers collaborate with each other to preserve privacy. They adopt an idea that allows respondents to know the protection level before the data submission process is initiated. The work is motivated by @cite_23 and @cite_10 . In @cite_23 , co-privacy (co-operative privacy) is introduced. Co-privacy claims that the best solution to achieve privacy is to help other parties achieve their privacy. | {
"cite_N": [
"@cite_10",
"@cite_23",
"@cite_6"
],
"mid": [
"1553453440",
"1582127143",
"195135924"
],
"abstract": [
"We introduce the novel concept of coprivacy or co-operative privacy to make privacy preservation attractive. A protocol is coprivate if the best option for a player to preserve her privacy is to help another player in preserving his privacy. Coprivacy makes an individual�s privacy preservation a goal that rationally interests other individuals: it is a matter of helping oneself by helping someone else. We formally define coprivacy in terms of Nash equilibria. We then extend the concept to: i) general coprivacy, where a helping player�s utility (i.e. interest) may include earning functionality and security in addition to privacy; ii) mixed coprivacy, where mixed strategies and mixed Nash equilibria are allowed with some restrictions; iii) correlated coprivacy, in which Nash equilibria are replaced by correlated equilibria. Coprivacy can be applied to any peer-to-peer (P2P) protocol. We illustrate coprivacy in P2P anonymous keyword search, in content privacy in social networks, in vehicular network communications and in controlled content distribution and digital oblivion enforcement.",
"We introduce the novel concept of coprivacy or co-operative privacy to make privacy preservation attractive. A protocol is coprivate if the best option for a player to preserve her privacy is to help another player in preserving his privacy. Coprivacy makes an individual's privacy preservation a goal that rationally interests other individuals: it is a matter of helping oneself by helping someone else. We formally define coprivacy in terms of Nash equilibria. We then extend the concept to: i) general coprivacy, where a helping player's utility (i.e. interest) may include earning functionality and security in addition to privacy; ii) mixed coprivacy, where mixed strategies and mixed Nash equilibria are allowed with some restrictions; iii) correlated coprivacy, in which Nash equilibria are replaced by correlated equilibria. Coprivacy can be applied to any peer-to-peer (P2P) protocol. We illustrate coprivacy in P2P user-private information retrieval, and also in content privacy in on-line social networking.",
"Data privacy protection is an emerging issue in data collection due to increasing concerns related to security and privacy. In the current data collection approaches, data collector is a dominant player who enforces the secure protocol. In other words, privacy protection is only defined by the data collector without the participation of any respondents. Furthermore, the privacy protection becomes more crucial when the raw data analysis is performed by the data collector itself. In view of this, some of the respondents might refuse to contribute their personal data or submit inaccurate data. In this paper, we study a self-awareness protocol to raise the confidence of the respondents when submitting their personal data to the data collector. Our self-awareness protocol requires each respondent to help others in preserving his privacy. At the end of the protocol execution, respondents can verify the protection level (i.e., k-anonymity) they will receive from the data collector."
]
} |
1904.06576 | 2939155038 | As smart grids are getting popular and being widely implemented, preserving the privacy of consumers is becoming more substantial. Power generation and pricing in smart grids depends on the continuously gathered information from the consumers. However, having access to the data relevant to the electricity consumption of each individual consumer is in conflict with its privacy. One common approach for preserving privacy is to aggregate data of different consumers and to use their smart-meters for calculating the bills. But in this approach, malicious consumers who send erroneous data to take advantage or disrupt smart grid cannot be identified. In this paper, we propose a new statistical-based scheme for data gathering and billing in which the privacy of consumers is preserved, and at the same time, if any consumer with erroneous data can be detected. Our simulation results verify these matters. | In @cite_9 , a respondent-defined privacy protection (RDPP) scheme is introduced, meaning that respondents are allowed to determine their required privacy protection level before delivering data to the data collector. Unlike other methods, in which data aggregators decide on the privacy protection level, in this scheme the consumers can freely define the privacy protection level. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2028990530"
],
"abstract": [
"The massive amount of sensitive survey data about individuals that agencies collect and share through the Internet is causing a great deal of privacy concerns. These concerns may discourage individuals from revealing their sensitive information. Existing data collection techniques have serious downsides in terms of both efficiency and the levels of protection they offer against various realizations of threats. Moreover, they do not provide any flexibility to the users to be able to specify acceptable levels of privacy protection before deciding whether to participate in the surveys. In this paper, we propose a two-pronged privacy protection model corresponding to these two privacy concerns: these are a new efficient anonymity preserving data collection technique and a method to incorporate heterogeneous privacy constraints. Together, they help preserve the privacy of respondents both during and after data collection."
]
} |
1904.06576 | 2939155038 | As smart grids are getting popular and being widely implemented, preserving the privacy of consumers is becoming more substantial. Power generation and pricing in smart grids depends on the continuously gathered information from the consumers. However, having access to the data relevant to the electricity consumption of each individual consumer is in conflict with its privacy. One common approach for preserving privacy is to aggregate data of different consumers and to use their smart-meters for calculating the bills. But in this approach, malicious consumers who send erroneous data to take advantage or disrupt smart grid cannot be identified. In this paper, we propose a new statistical-based scheme for data gathering and billing in which the privacy of consumers is preserved, and at the same time, if any consumer with erroneous data can be detected. Our simulation results verify these matters. | There is also other research on privacy-preserving data collection. For instance, in @cite_0 the authors design a balanced anonymity and traceability scheme for outsourcing small-scale linear data aggregation (called BAT-LA) in the smart grid. Anonymity means that consumers’ identities should be kept secret, and traceability means that imposter consumers should be traceable. Here, an important challenge is that many devices are not capable of handling the required complicated computations. Hence, the authors adopt the idea of outsourcing computations with the help of a public cloud. The paper utilizes elliptic curve cryptography and proxy re-encryption to make BAT-LA secure. BAT-LA is evaluated by comparing it with two other schemes, RVK @cite_11 and LMO @cite_2 , and it is shown that BAT-LA is more efficient in terms of confidentiality. | {
"cite_N": [
"@cite_0",
"@cite_2",
"@cite_11"
],
"mid": [
"2473925076",
"2138272709",
"1514360972"
],
"abstract": [
"Along with the development of information technology, the traditional electrical grid is moving to smart grid technology. By using the smart grid, the users and utility providers can more efficiently manage and generate power. Along with the advantages, the smart grid is also faced with new security concerns. In the smart grid, the user's citizen identity information should be preserved and the offensive user should be traced. For some low-capacity devices, it is indispensable to perform complicated computation by using outsourcing computation. The authors provide the outsourcing computation through public cloud. Anonymity and traceability are two important security properties in the smart grid. They are the unity of opposites. On the basis of the security requirements, they propose the balanced anonymity and traceability for outsourcing small-scale data linear aggregation (BAT-LA) in the smart grid. The formal definition, system model and security model are presented. Then, a concrete BAT-LA protocol is designed by using the elliptic curve cryptography and proxy re-encryption. Through security analysis and performance analysis, the designed BAT-LA protocol is provably secure and efficient.",
"The widespread deployment of Automatic Metering Infrastructures in Smart Grid scenarios rises great concerns about privacy preservation of user-related data, from which detailed information about customer's habits and behaviors can be deduced. Therefore, the users' individual measurements should be aggregated before being provided to External Entities such as utilities, grid managers and third parties. This paper proposes a security architecture for distributed aggregation of additive data, in particular energy consumption metering data, relying on Gateways placed at the customers' premises, which collect the data generated by local Meters and provide communication and cryptographic capabilities. The Gateways communicate with one another and with the External Entities by means of a public data network. We propose a secure communication protocol aimed at preventing Gateways and External Entities from inferring information about individual data, in which privacy-preserving aggregation is performed by means of a cryptographic homomorphic scheme. The routing of information flows can be centralized or it can be performed in a distributed fashion using a protocol inspired by Chord. We compare the performance of both approaches to the optimal solution minimizing the data aggregation delay.",
"In vehicle-to-grid (V2G) networks, service providers are battery-powered vehicles, and the service consumer is the power grid. Security and privacy concerns are major obstacles for V2G networks to be extensively deployed. In 2011, proposed a very interesting privacy-preserving communication and precise reward architecture for V2G networks in smart grids. In this paper, we enhance ’s framework with the formal definitions of unforgeability and restrictiveness. Then, we propose a new traceable privacy-preserving communication and precise reward scheme with available cryptographic primitives. The proposed scheme is formally proven secure with well-established assumptions in the random oracle model. Thorough theoretical and experimental analyses demonstrate that our scheme is efficient and practical for secure V2G networks in smart grids."
]
} |
1904.06576 | 2939155038 | As smart grids are getting popular and being widely implemented, preserving the privacy of consumers is becoming more substantial. Power generation and pricing in smart grids depends on the continuously gathered information from the consumers. However, having access to the data relevant to the electricity consumption of each individual consumer is in conflict with its privacy. One common approach for preserving privacy is to aggregate data of different consumers and to use their smart-meters for calculating the bills. But in this approach, malicious consumers who send erroneous data to take advantage or disrupt smart grid cannot be identified. In this paper, we propose a new statistical-based scheme for data gathering and billing in which the privacy of consumers is preserved, and at the same time, if any consumer with erroneous data can be detected. Our simulation results verify these matters. | The papers @cite_0 and @cite_15 focus on outsourcing to clouds or distributed systems. For the encryption process, it is important to use a secure key management scheme. The cryptographic technique ensures that no privacy-sensitive information is revealed. However, there is still the challenge of how to efficiently query encrypted multidimensional metering data stored in an untrusted heterogeneous distributed system environment. @cite_17 addresses this issue and introduces a high-performance and privacy-preserving query (P2Q) scheme which provides confidentiality and privacy in a semi-trusted environment. | {
"cite_N": [
"@cite_0",
"@cite_15",
"@cite_17"
],
"mid": [
"2473925076",
"2790979291",
"2365029783"
],
"abstract": [
"Along with the development of information technology, the traditional electrical grid is moving to smart grid technology. By using the smart grid, the users and utility providers can more efficiently manage and generate power. Along with the advantages, the smart grid is also faced with new security concerns. In the smart grid, the user's citizen identity information should be preserved and the offensive user should be traced. For some low-capacity devices, it is indispensable to perform complicated computation by using outsourcing computation. The authors provide the outsourcing computation through public cloud. Anonymity and traceability are two important security properties in the smart grid. They are the unity of opposites. On the basis of the security requirements, they propose the balanced anonymity and traceability for outsourcing small-scale data linear aggregation (BAT-LA) in the smart grid. The formal definition, system model and security model are presented. Then, a concrete BAT-LA protocol is designed by using the elliptic curve cryptography and proxy re-encryption. Through security analysis and performance analysis, the designed BAT-LA protocol is provably secure and efficient.",
"Abstract In a cyber-physical system, the control component plays an essential role to make the cyber and physical components work harmoniously together. When information collected from the physical space contains private or sensitive data that cannot be passed onto the cyber space, properly controlling the cyber-physical system becomes a very challenging task. For instance, the smart grid systems, a replacement for the traditional power grid systems, have been widely used in the industries. To prevent power shortage, threshold-based power usage control (PUC) in a smart grid considers a situation where the utility company sets a threshold to control the total power usage or supply of a neighborhood. If the total power usage exceeds the threshold, either certain households need to reduce their power consumption or the utility company needs to buy additional power supplies to meet the increasing demand. In these scenarios, the utility company needs to frequently collect power usage data from smart meters. It has been well documented that these power usage data can reveal a person's daily activity and violate personal privacy. To mitigate the privacy concerns, the goal of this paper is to develop efficient and privacy-preserving power usage control protocols that allow a utility company to balance supply and demand in a smart grid without violating personal privacy of its customers. We will provide extensive empirical study to show the practicality of our proposed protocols.",
"Abstract With the proliferation of smart grids, traditional utilities are struggling to handle the increasing amount of metering data. Outsourcing the metering data to heterogeneous distributed systems has the potential to provide efficient data access and processing. In an untrusted heterogeneous distributed system environment, employing data encryption prior to outsourcing can be an effective way to preserve user privacy. However, how to efficiently query encrypted multidimensional metering data stored in an untrusted heterogeneous distributed system environment remains a research challenge. In this paper, we propose a high performance and privacy-preserving query (P2Q) scheme over encrypted multidimensional big metering data to address this challenge. In the proposed scheme, encrypted metering data are stored in the server of an untrusted heterogeneous distributed system environment. A Locality Sensitive Hashing (LSH) based similarity search approach is then used to realize the similarity query. To demonstrate utility of the proposed LSH-based search approach, we implement a prototype using MapReduce for the Hadoop distributed environment. More specifically, for a given query, the proxy server will return K top similar data object identifiers. An enhanced Ciphertext-Policy Attribute-based Encryption (CP-ABE) policy is then used to control access to the search results. Therefore, only the requester with an authorized query attribute can obtain the correct secret keys to retrieve the metering data. We then prove that the P2Q scheme achieves data confidentiality and preserves the data owner’s privacy in a semi-trusted cloud. In addition, our evaluations demonstrate that the P2Q scheme can significantly reduce response time and provide high search efficiency without compromising on search quality (i.e. suitable for multidimensional big data search in heterogeneous distributed system, such as cloud storage system)."
]
} |
1904.06576 | 2939155038 | As smart grids are getting popular and being widely implemented, preserving the privacy of consumers is becoming more substantial. Power generation and pricing in smart grids depend on the continuously gathered information from the consumers. However, having access to the data relevant to the electricity consumption of each individual consumer is in conflict with their privacy. One common approach for preserving privacy is to aggregate data of different consumers and to use their smart-meters for calculating the bills. But in this approach, malicious consumers who send erroneous data to take advantage of or disrupt the smart grid cannot be identified. In this paper, we propose a new statistical-based scheme for data gathering and billing in which the privacy of consumers is preserved, and at the same time, any consumer with erroneous data can be detected. Our simulation results verify these matters. | To protect the privacy of residential consumers, a scheme named APED is proposed in @cite_19 . It employs pairwise private stream aggregation, achieving privacy-preserving aggregation while also performing error detection when some nodes fail to function normally. DG-APED, an improved form of APED suggested in @cite_7 , proposes a diverse grouping-based protocol with error detection and adds a differential privacy technique to APED. Moreover, DG-APED is more efficient than APED in terms of communication and computation overhead. | {
"cite_N": [
"@cite_19",
"@cite_7"
],
"mid": [
"2091760837",
"1902335584"
],
"abstract": [
"Smart grid, as the next generation of power grid characterized with “two-way” communications, has been paid great attention to realize green, reliable and efficient electricity delivery for our future lives. In order to support the “two-way” communications in smart grid, a large number of smart meters should be deployed at customers to report their near real-time data to control center for monitoring. However, this kind of real-time report could disclose users' privacy, bringing down the users' willingness to participate in smart grid. In order to address the challenge, in this paper, we propose an efficient aggregation protocol with error detection, named APED, for secure smart grid communications. The proposed APED protocol employs a pairwise private stream aggregation scheme to not only achieve privacy-preserving aggregation but also perform error detection when some smart meters are malfunctioning. Detailed security analysis has shown that the proposed APED protocol can guarantee the security and privacy of smart grid communications. In addition, performance evaluation demonstrates its efficiency in terms of computation and communication cost.",
"Smart grid, as the next generation of power grid characterized by “two-way” communications, has been paid great attention to realizing green, reliable, and efficient electricity delivery for our future lives. In order to support the two-way communications in smart grid, a large number of smart meters (SMs) should be deployed to customers to report their near real-time data to control center for monitoring purpose. However, this kind of real-time report could disclose users’ privacy, bringing down the users’ willingness to participate in smart grid. In order to address the challenge, in this paper, by considering the lifetime of SMs as exponential distribution, we propose a diverse grouping-based aggregation protocol with error detection (DG-APED), which employs differential privacy technique into grouping-based private stream aggregation for secure smart grid communications. DG-APED can not only achieve privacy-preserving aggregation, but also perform error detection efficiently when some SMs are malfunctioning. Detailed security analysis shows that DG-APED can guarantee the security and privacy requirements of smart grid communications. In addition, extensive performance evaluation also verifies the effectiveness and efficiency of the proposed DG-APED."
]
} |
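The pairwise private stream aggregation that APED builds on can be illustrated with a toy sketch. All names here are hypothetical and the masking is simplified (the real scheme derives the pairwise masks cryptographically and adds error detection); the point is only that masks added by one meter and subtracted by its partner cancel in the aggregate, so the aggregator sees the exact sum but no individual reading.

```python
import random

# Toy pairwise-masking sketch (illustrative only, not the APED protocol).
# Each pair of meters (i, j) shares a random mask k_ij: meter i adds it,
# meter j subtracts it, so every mask cancels in the sum of all reports.

def mask_readings(readings, modulus=2**32, rng=random):
    masked = list(readings)
    n = len(readings)
    for i in range(n):
        for j in range(i + 1, n):
            k_ij = rng.randrange(modulus)          # pairwise shared secret
            masked[i] = (masked[i] + k_ij) % modulus
            masked[j] = (masked[j] - k_ij) % modulus
    return masked

readings = [17, 5, 42, 9]                          # per-meter consumption
masked = mask_readings(readings)
print(sum(masked) % 2**32)                         # 73: exact sum, inputs hidden
```

The aggregate is exact as long as the true sum stays below the modulus; a single masked report, by contrast, is statistically independent of the underlying reading.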
1904.06576 | 2939155038 | As smart grids are getting popular and being widely implemented, preserving the privacy of consumers is becoming more substantial. Power generation and pricing in smart grids depend on the continuously gathered information from the consumers. However, having access to the data relevant to the electricity consumption of each individual consumer is in conflict with their privacy. One common approach for preserving privacy is to aggregate data of different consumers and to use their smart-meters for calculating the bills. But in this approach, malicious consumers who send erroneous data to take advantage of or disrupt the smart grid cannot be identified. In this paper, we propose a new statistical-based scheme for data gathering and billing in which the privacy of consumers is preserved, and at the same time, any consumer with erroneous data can be detected. Our simulation results verify these matters. | In @cite_20 , the authors present a new kind of attack in which an adversary exploits information about the presence or absence of a consumer to infer that consumer's smart-meter readings. The attack is called human-factor-aware differential aggregation (HDA), and it is claimed that previously proposed schemes cannot handle it. To solve this issue, the authors introduce two privacy-preserving schemes that withstand the HDA attack by transmitting encrypted measurements to an aggregator in such a way that the aggregator cannot learn any information about human activities. | {
"cite_N": [
"@cite_20"
],
"mid": [
"1991871638"
],
"abstract": [
"Privacy-preserving metering aggregation is regarded as an important research topic in securing a smart grid. In this paper, we first identify and formalize a new attack, in which the attacker could exploit the information about the presence or absence of a specific person to infer his meter readings. This attack, coined as human-factor-aware differential aggregation (HDA) attack, cannot be addressed in existing privacy-preserving aggregation protocols proposed for smart grids. We give a formal definition on it and propose two novel protocols, including basic scheme and advanced scheme, to achieve privacy-preserving smart metering data aggregation and to resist the HDA attack. Our protocol ensures that smart meters periodically upload encrypted measurements to a (electricity) supplier aggregator such that the aggregator is able to derive the aggregated statistics of all meter measurements but is unable to learn any information about the human activities. We present the formal security analysis for the proposed protocol to guarantee the strong privacy. Moreover, we evaluate the performance of our protocol in a Java-based implementation under different parameters. The performance and utility analysis shows that our protocol is simple, efficient, and practical."
]
} |
1904.06576 | 2939155038 | As smart grids are getting popular and being widely implemented, preserving the privacy of consumers is becoming more substantial. Power generation and pricing in smart grids depend on the continuously gathered information from the consumers. However, having access to the data relevant to the electricity consumption of each individual consumer is in conflict with their privacy. One common approach for preserving privacy is to aggregate data of different consumers and to use their smart-meters for calculating the bills. But in this approach, malicious consumers who send erroneous data to take advantage of or disrupt the smart grid cannot be identified. In this paper, we propose a new statistical-based scheme for data gathering and billing in which the privacy of consumers is preserved, and at the same time, any consumer with erroneous data can be detected. Our simulation results verify these matters. | PDA is a privacy-preserving dual-functional aggregation scheme in which every consumer disseminates only a single data item, and the supplier then computes two statistics (mean and variance) over all consumers @cite_3 . The paper shows by simulation that PDA has low computational complexity and communication overhead. In another work, the authors introduce a privacy-preserving data aggregation scheme with fault tolerance called PDAFT @cite_28 . Under PDAFT, a strong adversary cannot gain any information, even after compromising a few servers at the supplier. Like PDA, PDAFT has relatively low communication overhead and is resilient against many security threats. Moreover, PDAFT can still work correctly even if some consumers or servers fail. | {
"cite_N": [
"@cite_28",
"@cite_3"
],
"mid": [
"2119609004",
"1531182675"
],
"abstract": [
"Smart grid, as the next generation of power grid featured with efficient, reliable, and flexible characteristics, has received considerable attention in recent years. However, the full flourish of smart grid is still hindered by how to efficiently and effectively tackle with its security and privacy challenges. In this paper, we propose a privacy-preserving data aggregation scheme with fault tolerance, named PDAFT, for secure smart grid communications. Specifically, PDAFT uses the homomorphic Paillier Encryption technique to encrypt sensitive user data such that the control center can obtain the aggregated data without knowing individual ones, and a strong adversary who aims to threaten user privacy can learn nothing even though he has already compromised a few servers at the control center. In addition, PDAFT also supports the fault-tolerant feature, i.e., PDAFT can still work well even when some user failures and server malfunctions occur. Through extensive analysis, we demonstrate that PDAFT not only resists various security threats and preserves user privacy, but also has significantly less communication overhead compared with those previously reported competitive approaches.",
"Privacy-preserving aggregation for smart grid communications, which precisely meets the requirement of periodically collecting users' electricity consumption while preserving privacy of each individual user, has been extensively studied in recent years. However, most of existing privacy-preserving aggregation schemes are only focused on the summation aggregation. In this paper, based on the lattice cryptographic technique, we propose a novel privacy-preserving dual-functional aggregation scheme PDA for smart grid communications. With our proposed PDA scheme, each individual user just reports one data, then multiple statistic values, that is, mean and variance, of all users can be computed by the data & control center in the smart grid, while the privacy of each individual user can still be protected. Detailed security analyses demonstrate that our proposed PDA scheme is secure and robust. In addition, extensive performance evaluations also show that our proposed PDA scheme is efficient in terms of computational and communication overheads when the number of considered users is within an acceptable range. Copyright © 2015 John Wiley & Sons, Ltd."
]
} |
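The additively homomorphic aggregation that PDAFT builds on (the Paillier cryptosystem, per its abstract) can be sketched as follows. This is a toy implementation with tiny, insecure parameters, meant only to show why multiplying ciphertexts lets the holder of the private key recover the sum of readings without ever seeing an individual one.

```python
import random
from math import gcd

# Toy Paillier cryptosystem (small fixed primes; illustration only, NOT secure).
# Multiplying ciphertexts adds plaintexts, so an aggregator can combine all
# encrypted meter readings and the key holder decrypts only the total.

def keygen(p=10007, q=10009):
    n = p * q
    lam = (p - 1) * (q - 1)          # works in place of lcm for g = n + 1
    mu = pow(lam, -1, n)             # modular inverse of lambda mod n
    return (n, n + 1), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    while True:
        r = random.randrange(1, n)
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n   # the "L function": L(u) = (u - 1) / n
    return (L * mu) % n

pub, priv = keygen()
readings = [13, 42, 7, 25]                       # per-household readings
aggregate = 1
for m in readings:
    aggregate = (aggregate * encrypt(pub, m)) % (pub[0] ** 2)
print(decrypt(pub, priv, aggregate))             # 87 = 13 + 42 + 7 + 25
```

The homomorphism holds because Enc(a)·Enc(b) mod n² is a valid encryption of a + b; the sum must stay below n for decryption to be exact.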
1904.06576 | 2939155038 | As smart grids are getting popular and being widely implemented, preserving the privacy of consumers is becoming more substantial. Power generation and pricing in smart grids depend on the continuously gathered information from the consumers. However, having access to the data relevant to the electricity consumption of each individual consumer is in conflict with their privacy. One common approach for preserving privacy is to aggregate data of different consumers and to use their smart-meters for calculating the bills. But in this approach, malicious consumers who send erroneous data to take advantage of or disrupt the smart grid cannot be identified. In this paper, we propose a new statistical-based scheme for data gathering and billing in which the privacy of consumers is preserved, and at the same time, any consumer with erroneous data can be detected. Our simulation results verify these matters. | DPAFT @cite_25 is another privacy-preserving data collection scheme, which supports both differential privacy and fault tolerance at the same time. It is claimed that DPAFT surpasses other schemes in many aspects, such as storage cost, computation complexity, utility of differential privacy, robustness of fault tolerance, and the efficiency of consumer addition or removal @cite_25 . A new multifunctional data aggregation scheme, named MuDA, is introduced in @cite_24 . The scheme is resistant to differential attacks and keeps consumers’ information secret with an acceptable noise rate. PDAFT, DPAFT @cite_25 , and MuDA @cite_24 show nearly the same characteristics, differing mainly in their cryptographic methods @cite_4 . PDAFT employs the homomorphic Paillier cryptosystem @cite_30 , while DPAFT and MuDA use the Boneh-Goh-Nissim cryptosystem @cite_33 . | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_33",
"@cite_24",
"@cite_25"
],
"mid": [
"2132172731",
"2553643700",
"1798609567",
"2028009605",
"2074609553"
],
"abstract": [
"This paper investigates a novel computational problem, namely the Composite Residuosity Class Problem, and its applications to public-key cryptography. We propose a new trapdoor mechanism and derive from this technique three encryption schemes : a trapdoor permutation and two homomorphic probabilistic encryption schemes computationally comparable to RSA. Our cryptosystems, based on usual modular arithmetics, are provably secure under appropriate assumptions in the standard model.",
"In this paper, we present a comprehensive survey of privacy-preserving schemes for Smart Grid communications. Specifically, we select and in-detail examine thirty privacy preserving schemes developed for or applied in the context of Smart Grids. Based on the communication and system models, we classify these schemes that are published between 2013 and 2016, in five categories, including, 1) Smart grid with the advanced metering infrastructure, 2) Data aggregation communications, 3) Smart grid marketing architecture, 4) Smart community of home gateways, and 5) Vehicle-to grid architecture. For each scheme, we survey the attacks of leaking privacy, countermeasures, and game theoretic approaches. In addition, we review the survey articles published in the recent years that deal with Smart Grids communications, applications, standardization, and security. Based on the current survey, several recommendations for further research are discussed at the end of this paper.",
"Let ψ be a 2-DNF formula on boolean variables x1,...,xn ∈ 0,1 . We present a homomorphic public key encryption scheme that allows the public evaluation of ψ given an encryption of the variables x1,...,xn. In other words, given the encryption of the bits x1,...,xn, anyone can create the encryption of ψ(x1,...,xn). More generally, we can evaluate quadratic multi-variate polynomials on ciphertexts provided the resulting value falls within a small set. We present a number of applications of the system: In a database of size n, the total communication in the basic step of the Kushilevitz-Ostrovsky PIR protocol is reduced from @math to @math . An efficient election system based on homomorphic encryption where voters do not need to include non-interactive zero knowledge proofs that their ballots are valid. The election system is proved secure without random oracles but still efficient. A protocol for universally verifiable computation.",
"Privacy-preserving data aggregation has been widely studied to meet the requirement of timely monitoring electricity consumption of users while protecting individual user’s data privacy in smart grid communications. In this paper, we propose a new multifunctional data aggregation scheme, named MuDA, for privacy-preserving smart grid communications. With MuDA, the smart grid control center can compute multiple statistical functions of users’ data in a privacy-preserving way to provide diversiform services. Moreover, MuDA is also designed to resist differential attacks that most secure data aggregation schemes may suffer. Through detailed security and utility analyses, we demonstrate that MuDA preserves users’ data privacy with acceptable noise rate. In addition, extensive performance evaluations are conducted to illustrate that our MuDA scheme is more efficient than a popular aggregation scheme in terms of communication overhead.",
"Privacy-preserving data aggregation has been widely studied to meet the requirement of timely monitoring measurements of users while protecting individual’s privacy in smart grid communications. In this paper, a new secure data aggregation scheme, named d ifferentially p rivate data a ggregation with f ault t olerance (DPAFT), is proposed, which can achieve differential privacy and fault tolerance simultaneously. Specifically, inspired by the idea of Diffie–Hellman key exchange protocol, an artful constraint relation is constructed for data aggregation. With this novel constraint, DPAFT can support fault tolerance of malfunctioning smart meters efficiently and flexibly. In addition, DPAFT is also enhanced to resist against differential attacks, which are suffered in most of the existing data aggregation schemes. By improving the basic Boneh–Goh–Nissim cryptosystem to be more applicable to the practical scenarios, DPAFT can resist much stronger adversaries, i.e., user’s privacy can be protected in the honest-but-curious model. Extensive performance evaluations are further conducted to illustrate that DPAFT outperforms the state-of-the-art data aggregation schemes in terms of storage cost, computation complexity, utility of differential privacy, robustness of fault tolerance, and the efficiency of user addition and removal."
]
} |
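The differential-privacy ingredient shared by DPAFT and MuDA can be illustrated with standard Laplace-mechanism noise. This is a generic sketch, not the papers' actual constructions: `epsilon` and `sensitivity` are the textbook DP parameters, and the noise here is added in the clear rather than folded into ciphertexts as the schemes do.

```python
import math
import random

# Generic Laplace-mechanism sketch: calibrated noise on the aggregate blunts
# differential attacks that compare two aggregates differing by one consumer.

def laplace_noise(scale, rng):
    u = rng.random() - 0.5                 # u in [-0.5, 0.5)
    sign = -1.0 if u < 0 else 1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_sum(readings, epsilon, sensitivity, rng=random):
    b = sensitivity / epsilon              # Laplace scale for epsilon-DP
    return sum(readings) + laplace_noise(b, rng)

rng = random.Random(7)
readings = [3, 4, 2, 5, 1]                 # true sum = 15
print(round(dp_sum(readings, epsilon=0.5, sensitivity=5, rng=rng), 2))
```

A single noisy aggregate can be far from the true sum, but the noise is zero-mean, so repeated or neighborhood-level aggregates remain useful to the utility while any one consumer's contribution is statistically hidden.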
1904.06576 | 2939155038 | As smart grids are getting popular and being widely implemented, preserving the privacy of consumers is becoming more substantial. Power generation and pricing in smart grids depend on the continuously gathered information from the consumers. However, having access to the data relevant to the electricity consumption of each individual consumer is in conflict with their privacy. One common approach for preserving privacy is to aggregate data of different consumers and to use their smart-meters for calculating the bills. But in this approach, malicious consumers who send erroneous data to take advantage of or disrupt the smart grid cannot be identified. In this paper, we propose a new statistical-based scheme for data gathering and billing in which the privacy of consumers is preserved, and at the same time, any consumer with erroneous data can be detected. Our simulation results verify these matters. | In @cite_27 , the authors present a secure power-usage data aggregation method for the smart grid in which the supplier learns the usage of each neighborhood and makes decisions about energy distribution, while knowing nothing about the individual electricity consumption of each consumer. This method is designed to block internal attacks and to provide batch verification. The authors of @cite_26 found that @cite_27 suffers from key leakage, so an impostor can easily obtain a consumer's private key. It is proved that the protocol in @cite_26 solves the key leakage problem and achieves better performance in terms of computational cost. Neglecting energy cost is the main disadvantage of this method. | {
"cite_N": [
"@cite_27",
"@cite_26"
],
"mid": [
"1976060656",
"2187738029"
],
"abstract": [
"According to related research, energy consumption can be effectively reduced by using energy management information of smart grids. In smart grid architecture, electricity suppliers can monitor, predicate, and control energy generation consumption in real time. Users can know the current price of electrical energy and obtain energy management information from smart meters. It helps users reduce home's energy use. However, electricity consumptions of users may divulge the privacy information of users. Therefore, privacy of users and communication security of the smart grid become crucial security issues. This paper presents a secure power-usage data aggregation scheme for smart grid. Electricity suppliers can learn about the current power usage of each neighborhood to arrange energy supply and distribution without knowing the individual electricity consumption of each user. This is the first scheme against internal attackers, and it provides secure batch verification. Additionally, the security of the proposed scheme is demonstrated by formal proofs.",
"With fast advancements of communication, systems and information technologies, a smart grid (SG) could bring much convenience to users because it could provide a reliable and efficient energy service. The data aggregation (DA) scheme for the SG plays an important role in evaluating information about current energy usage. To achieve the goal of preserving users' privacy, many DA schemes for the SG have been proposed in last decade. However, how to withstand attacks of internal adversaries is not considered in those schemes. To enhance preservation of privacy, proposed a DA scheme for the SG against internal adversaries. In 's DA scheme, blinding factors are used in evaluating information about current energy usage and the aggregator cannot get the consumption information of any individual user. demonstrated that their scheme was secure against various attacks. However, we find that their scheme suffers from the key leakage problem, i.e., the adversary could extract the user's private key through the public information. To overcome such serious weakness, this paper proposes an efficient and privacy-preserving DA scheme for the SG against internal attacks. Analysis shows that the proposed DA scheme not only overcome the key leakage problem in 's DA scheme, but also has better performance."
]
} |
1904.06576 | 2939155038 | As smart grids are getting popular and being widely implemented, preserving the privacy of consumers is becoming more substantial. Power generation and pricing in smart grids depend on the continuously gathered information from the consumers. However, having access to the data relevant to the electricity consumption of each individual consumer is in conflict with their privacy. One common approach for preserving privacy is to aggregate data of different consumers and to use their smart-meters for calculating the bills. But in this approach, malicious consumers who send erroneous data to take advantage of or disrupt the smart grid cannot be identified. In this paper, we propose a new statistical-based scheme for data gathering and billing in which the privacy of consumers is preserved, and at the same time, any consumer with erroneous data can be detected. Our simulation results verify these matters. | In @cite_15 , a privacy-preserving protocol for the smart grid is presented which outsources computations to cloud servers. In this protocol, the data is encrypted before outsourcing, so the cloud can perform computations without decryption. @cite_13 preserves privacy by combining perturbation techniques with cryptosystems, and it is designed to be suitable for hardware-limited devices. Evaluations show that it is resilient to two types of attack: the filtering attack and the true-value attack. | {
"cite_N": [
"@cite_15",
"@cite_13"
],
"mid": [
"2790979291",
"2801779015"
],
"abstract": [
"Abstract In a cyber-physical system, the control component plays an essential role to make the cyber and physical components work harmoniously together. When information collected from the physical space contains private or sensitive data that cannot be passed onto the cyber space, properly controlling the cyber-physical system becomes a very challenging task. For instance, the smart grid systems, a replacement for the traditional power grid systems, have been widely used in the industries. To prevent power shortage, threshold-based power usage control (PUC) in a smart grid considers a situation where the utility company sets a threshold to control the total power usage or supply of a neighborhood. If the total power usage exceeds the threshold, either certain households need to reduce their power consumption or the utility company needs to buy additional power supplies to meet the increasing demand. In these scenarios, the utility company needs to frequently collect power usage data from smart meters. It has been well documented that these power usage data can reveal a person's daily activity and violate personal privacy. To mitigate the privacy concerns, the goal of this paper is to develop efficient and privacy-preserving power usage control protocols that allow a utility company to balance supply and demand in a smart grid without violating personal privacy of its customers. We will provide extensive empirical study to show the practicality of our proposed protocols.",
"Abstract The electric industry's planned shift to smart grid metering infrastructure raised several concerns especially about preserving the privacy. Various data perturbation and aggregation solutions are being developed to address these concerns. The drawback of these solutions is that a simple random noise scheme cannot protect privacy, and there is a need for more advanced perturbation techniques to increase hardware costs of smart metering devices. The proposed data aggregation scheme combines the power of perturbation techniques with crypto-systems in an efficient and lightweight way so that it becomes applicable for devices with limited hardware, such as smart meters. We investigated the privacy preserving capabilities of the proposed aggregation scheme with Holt-Winters and Seasonal Trend Decomposition using Loess prediction methods. The results indicate that the proposed scheme is resilient to both filtering and true value attacks."
]
} |
1904.06576 | 2939155038 | As smart grids are getting popular and being widely implemented, preserving the privacy of consumers is becoming more substantial. Power generation and pricing in smart grids depend on the continuously gathered information from the consumers. However, having access to the data relevant to the electricity consumption of each individual consumer is in conflict with their privacy. One common approach for preserving privacy is to aggregate data of different consumers and to use their smart-meters for calculating the bills. But in this approach, malicious consumers who send erroneous data to take advantage of or disrupt the smart grid cannot be identified. In this paper, we propose a new statistical-based scheme for data gathering and billing in which the privacy of consumers is preserved, and at the same time, any consumer with erroneous data can be detected. Our simulation results verify these matters. | The authors of @cite_1 explain how, for privacy preservation, an individual meter can share its readings with multiple consumers, and how a consumer can receive readings from multiple meters. They propose a polynomial-based protocol for pricing. In @cite_5 , a security protocol called TPS3 is introduced which uses Temporal Perturbation and Shamir’s Secret Sharing (SSS) to guarantee the privacy and reliability of consumers’ data. In @cite_29 , the data collector tries to preserve privacy by adding random noise to its computation result. To overcome the resulting loss of computation accuracy, an approximation method is also proposed in @cite_29 , which yields a closed form of the aggregator’s decision problem. In @cite_14 , a slightly different scenario is considered in which a data aggregator collects data from consumers and then disseminates them to the supplier. The goal is to preserve consumers’ data privacy. Anonymization might be an answer, but it has its own challenges. To achieve a tradeoff between privacy protection and data utility, the interactions among the three parties of the scenario (consumers, data aggregator, and supplier) are modelled as a game and its Nash equilibria are found. | {
"cite_N": [
"@cite_5",
"@cite_29",
"@cite_14",
"@cite_1"
],
"mid": [
"2793901360",
"2782817518",
"2801383368",
"2550710304"
],
"abstract": [
"ABSTRACTSmart grid (SG) allows for two-way communication between the utility and its consumers and hence they are considered as an inevitable future of the traditional grid. Since consumers are the key component of SGs, providing security and privacy to their personal data is a critical problem. In this paper, a security protocol, namely TPS3, is based on Temporal Perturbation and Shamir’s Secret Sharing (SSS) schemes that are proposed to ensure the privacy of SG consumer’s data. Temporal perturbation is employed to provide temporal privacy, while the SSS scheme is used to ensure data confidentiality. Temporal perturbation adds random delays to the data collected by smart meters, whereas the SSS scheme fragments these data before transmitting them to the data collection server. Joint employment of both schemes makes it hard for attackers to obtain consumer data collected in the SG. The proposed protocol TPS3 is evaluated in terms of privacy, reliability, and communication cost using two different SG topol...",
"We study a mechanism design problem of privacy- preserving data collection with privacy protection uncertainty. A data collector wants to collect enough data to perform a certain computation that benefits the individuals who contribute the data, with the possibility of individual privacy leakage. The data collector adopts a privacy-preserving mechanism by adding some random noise to the computation result, which reduces the accuracy of the computation. Individuals decide whether to contribute data based on the potential benefit and the possible privacy cost induced by the mechanism. Due to the intrinsic uncertainty involved in privacy protection, we model individuals' privacy-aware participation using the prospect theory, which more accurately models individuals' behavior under uncertainty than the traditional expected utility theory. We show that the data collector's utility maximization problem involves a polynomial of high and fractional order, which is difficult to solve analytically. We get around this issue by proposing an approximation method, which allows us to obtain a closed form unique solution of the data collector's decision problem. We numerically show that the approximation error is small when the number of individuals is large. By comparing with the results under the expected utility theory, we conclude that a data collector who considers the more realistic prospect theory modeling should adopt a stricter privacy-preserving mechanism to boost her utility.",
"Collecting and publishing personal data may lead to the disclosure of individual privacy. In this chapter, we consider a scenario where a data collector collects data from data providers and then publish the data to a data miner. To protect data providers’ privacy, the data collector performs anonymization on the data. Anonymization usually causes a decline of data utility on which the data miner’s profit depends, meanwhile, data providers would provide more data if anonymity is strongly guaranteed. How to make a trade-off between privacy protection and data utility is an important question for data collector. We model the interactions among data providers, data collector and data miner as a game. A backward induction-based approach is proposed to find the Nash equilibria of the game. To elaborate the analysis, we also present a specific game formulation which uses k-anonymity as the privacy model. Simulation results show that the game theoretic analysis can help the data collector to achieve a better trade-off between privacy and utility.",
"Privacy-preserving billing protocols are useful in settings where a meter measures user consumption of some service, such as smart metering of utility consumption, pay-as-you-drive insurance and electronic toll collection. In such settings, service providers apply fine-grained tariff policies that require meters to provide a detailed account of user consumption. The protocols allow the user to pay to the service provider without revealing the user’s consumption measurements. Our contribution is twofold. First, we propose a general model where a meter can output meter readings to multiple users, and where a user receives meter readings from multiple meters. Unlike previous schemes, our model accommodates a wider variety of smart metering applications. Second, we describe a protocol based on polynomial commitments that improves the efficiency of previous protocols for tariff policies that employ splines to compute the price due."
]
} |
1904.06346 | 2941484865 | Accurate multi-organ abdominal CT segmentation is essential to many clinical applications such as computer-aided intervention. As data annotation requires massive human labor from experienced radiologists, it is common that training data are partially labeled, e.g., pancreas datasets only have the pancreas labeled while leaving the rest marked as background. However, these background labels can be misleading in multi-organ segmentation since the "background" usually contains some other organs of interest. To address the background ambiguity in these partially-labeled datasets, we propose Prior-aware Neural Network (PaNN) via explicitly incorporating anatomical priors on abdominal organ sizes, guiding the training process with domain-specific knowledge. More specifically, PaNN assumes that the average organ size distributions in the abdomen should approximate their empirical distributions, a prior statistics obtained from the fully-labeled dataset. As our training objective is difficult to be directly optimized using stochastic gradient descent [20], we propose to reformulate it in a min-max form and optimize it via the stochastic primal-dual gradient algorithm. PaNN achieves state-of-the-art performance on the MICCAI2015 challenge "Multi-Atlas Labeling Beyond the Cranial Vault", a competition on organ segmentation in the abdomen. We report an average Dice score of 84.97%, surpassing the prior art by a large margin of 3.27%. | Currently, the most successful deep learning techniques for semantic segmentation stem from a common forerunner, i.e., the Fully Convolutional Network (FCN) @cite_26 . Based on FCN, many recent advanced techniques have been proposed, such as DeepLab @cite_0 @cite_25 @cite_22 , SegNet @cite_20 , PSPNet @cite_33 , RefineNet @cite_32 , etc. Most of these methods are based on supervised learning, hence requiring a sufficient amount of labeled training data to train.
To cope with scenarios where supervision is limited, researchers have begun to investigate the weakly-supervised setting @cite_57 @cite_9 @cite_3 , i.e., only bounding-boxes or image-level labels are available, and the semi-supervised setting @cite_57 @cite_27 , i.e., unlabeled data are used to enlarge the training set. Papandreou et al. proposed EM-Adapt @cite_57 where the pseudo-labels of the unknown pixels are estimated in the expectation step and standard SGD is performed in the maximization step. Souly et al. @cite_39 demonstrated the usefulness of generative adversarial networks for semi-supervised segmentation. | {
"cite_N": [
"@cite_26",
"@cite_22",
"@cite_33",
"@cite_9",
"@cite_32",
"@cite_3",
"@cite_39",
"@cite_0",
"@cite_57",
"@cite_27",
"@cite_25",
"@cite_20"
],
"mid": [
"1903029394",
"",
"2560023338",
"1931270512",
"",
"1495267108",
"2778764040",
"2962872526",
"1529410181",
"",
"",
""
],
"abstract": [
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.",
"",
"Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.",
"Multiple instance learning (MIL) can reduce the need for costly annotation in tasks such as semantic segmentation by weakening the required degree of supervision. We propose a novel MIL formulation of multi-class semantic segmentation learning by a fully convolutional network. In this setting, we seek to learn a semantic segmentation model from just weak image-level labels. The model is trained end-to-end to jointly optimize the representation while disambiguating the pixel-image label assignment. Fully convolutional training accepts inputs of any size, does not need object proposal pre-processing, and offers a pixelwise loss map for selecting latent instances. Our multi-class MIL loss exploits the further supervision given by images with multiple labels. We evaluate this approach through preliminary experiments on the PASCAL VOC segmentation challenge.",
"",
"Recent leading approaches to semantic segmentation rely on deep convolutional networks trained with human-annotated, pixel-level segmentation masks. Such pixel-accurate supervision demands expensive labeling effort and limits the performance of deep networks that usually benefit from more training data. In this paper, we propose a method that achieves competitive accuracy but only requires easily obtained bounding box annotations. The basic idea is to iterate between automatically generating region proposals and training convolutional networks. These two steps gradually recover segmentation masks for improving the networks, and vice versa. Our method, called \"BoxSup\", produces competitive results (e.g., 62.0% mAP for validation) supervised by boxes only, on par with strong baselines (e.g., 63.8% mAP) fully supervised by masks under the same setting. By leveraging a large amount of bounding boxes, BoxSup further yields state-of-the-art results on PASCAL VOC 2012 and PASCAL-CONTEXT [26].",
"Semantic segmentation has been a long standing challenging task in computer vision. It aims at assigning a label to each image pixel and needs a significant number of pixel-level annotated data, which is often unavailable. To address this lack of annotations, in this paper, we leverage, on one hand, a massive amount of available unlabeled or weakly labeled data, and on the other hand, non-real images created through Generative Adversarial Networks. In particular, we propose a semi-supervised framework – based on Generative Adversarial Networks (GANs) – which consists of a generator network to provide extra training examples to a multi-class classifier, acting as discriminator in the GAN framework, that assigns sample a label y from the K possible classes or marks it as a fake sample (extra class). The underlying idea is that adding large fake visual data forces real samples to be close in the feature space, which, in turn, improves multiclass pixel classification. To ensure a higher quality of generated images by GANs with consequently improved pixel classification, we extend the above framework by adding weakly annotated data, i.e., we provide class level information to the generator. We test our approaches on several challenging benchmarking visual datasets, i.e. PASCAL, SiftFLow, Stanford and CamVid, achieving competitive performance compared to state-of-the-art semantic segmentation methods.",
"Deep convolutional neural networks (CNNs) are the backbone of state-of-art semantic image segmentation systems. Recent work has shown that complementing CNNs with fully-connected conditional random fields (CRFs) can significantly enhance their object localization accuracy, yet dense CRF inference is computationally expensive. We propose replacing the fully-connected CRF with domain transform (DT), a modern edge-preserving filtering method in which the amount of smoothing is controlled by a reference edge map. Domain transform filtering is several times faster than dense CRF inference and we show that it yields comparable semantic segmentation results, accurately capturing object boundaries. Importantly, our formulation allows learning the reference edge map from intermediate CNN features instead of using the image gradient magnitude as in standard DT filtering. This produces task-specific edges in an end-to-end trainable system optimizing the target semantic segmentation quality.",
"Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state-of-art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at this https URL",
"",
"",
""
]
} |
1904.06346 | 2941484865 | Accurate multi-organ abdominal CT segmentation is essential to many clinical applications such as computer-aided intervention. As data annotation requires massive human labor from experienced radiologists, it is common that training data are partially labeled, e.g., pancreas datasets only have the pancreas labeled while leaving the rest marked as background. However, these background labels can be misleading in multi-organ segmentation since the "background" usually contains some other organs of interest. To address the background ambiguity in these partially-labeled datasets, we propose Prior-aware Neural Network (PaNN) via explicitly incorporating anatomical priors on abdominal organ sizes, guiding the training process with domain-specific knowledge. More specifically, PaNN assumes that the average organ size distributions in the abdomen should approximate their empirical distributions, a prior statistics obtained from the fully-labeled dataset. As our training objective is difficult to be directly optimized using stochastic gradient descent [20], we propose to reformulate it in a min-max form and optimize it via the stochastic primal-dual gradient algorithm. PaNN achieves state-of-the-art performance on the MICCAI2015 challenge "Multi-Atlas Labeling Beyond the Cranial Vault", a competition on organ segmentation in the abdomen. We report an average Dice score of 84.97%, surpassing the prior art by a large margin of 3.27%. | In the medical imaging domain, it becomes more intractable to acquire sufficient labeled data due to the difficulty of annotation, as the annotation has to be done by experts.
Although fully-supervised methods (e.g., UNet @cite_45 , VoxResNet @cite_40 , DeepMedic @cite_4 , 3D-DSN @cite_55 , HNN @cite_23 ) have achieved remarkable performance improvement in tasks such as brain MR segmentation, abdominal single-organ segmentation and multi-organ segmentation, semi- or weakly-supervised learning is still a far more realistic solution. For example, Bai et al. @cite_15 proposed an EM-based iterative method, where a CNN is alternately trained on labeled and post-processed unlabeled sets. In @cite_18 , supervised and unsupervised adversarial costs are involved to address semi-supervised gland segmentation. DeepCut @cite_43 shows that weak annotations such as bounding-boxes in medical image segmentation can also be utilized by performing an iterative optimization scheme like @cite_57 .
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_55",
"@cite_57",
"@cite_43",
"@cite_40",
"@cite_45",
"@cite_23",
"@cite_15"
],
"mid": [
"2750925197",
"2301358467",
"2463818697",
"1529410181",
"2396622801",
"2608353599",
"1901129140",
"2585890928",
"2751665805"
],
"abstract": [
"Semantic segmentation is a fundamental problem in biomedical image analysis. In biomedical practice, it is often the case that only limited annotated data are available for model training. Unannotated images, on the other hand, are easier to acquire. How to utilize unannotated images for training effective segmentation models is an important issue. In this paper, we propose a new deep adversarial network (DAN) model for biomedical image segmentation, aiming to attain consistently good segmentation results on both annotated and unannotated images. Our model consists of two networks: (1) a segmentation network (SN) to conduct segmentation; (2) an evaluation network (EN) to assess segmentation quality. During training, EN is encouraged to distinguish between segmentation results of unannotated images and annotated ones (by giving them different scores), while SN is encouraged to produce segmentation results of unannotated images such that EN cannot distinguish these from the annotated ones. Through an iterative adversarial training process, because EN is constantly “criticizing” the segmentation results of unannotated images, SN can be trained to produce more and more accurate segmentation for unannotated and unseen samples. Experiments show that our proposed DAN model is effective in utilizing unannotated image data to obtain considerably better segmentation.",
"This work is supported by the EPSRC First Grant scheme (grant ref no. EP N023668 1) and partially funded under the 7th Framework Programme by the European Commission (TBIcare: http: www.tbicare.eu ; CENTER-TBI: https: www.center-tbi.eu ). This work was further supported by a Medical Research Council (UK) Program Grant (Acute brain injury: heterogeneity of mechanisms, therapeutic targets and outcome effects [G9439390 ID 65883]), the UK National Institute of Health Research Biomedical Research Centre at Cambridge and Technology Platform funding provided by the UK Department of Health. KK is supported by the Imperial College London PhD Scholarship Programme. VFJN is supported by a Health Foundation Academy of Medical Sciences Clinician Scientist Fellowship. DKM is supported by an NIHR Senior Investigator Award. We gratefully acknowledge the support of NVIDIA Corporation with the donation of two Titan X GPUs for our research.",
"Automatic liver segmentation from CT volumes is a crucial prerequisite yet challenging task for computer-aided hepatic disease diagnosis and treatment. In this paper, we present a novel 3D deeply supervised network (3D DSN) to address this challenging task. The proposed 3D DSN takes advantage of a fully convolutional architecture which performs efficient end-to-end learning and inference. More importantly, we introduce a deep supervision mechanism during the learning process to combat potential optimization difficulties, and thus the model can acquire a much faster convergence rate and more powerful discrimination capability. On top of the high-quality score map produced by the 3D DSN, a conditional random field model is further employed to obtain refined segmentation results. We evaluated our framework on the public MICCAI-SLiver07 dataset. Extensive experiments demonstrated that our method achieves competitive segmentation results to state-of-the-art approaches with a much faster processing speed.",
"Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state-of-art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at this https URL",
"In this paper, we propose DeepCut, a method to obtain pixelwise object segmentations given an image dataset labelled with weak annotations, in our case bounding boxes. It extends the approach of the well-known GrabCut[1] method to include machine learning by training a neural network classifier from bounding box annotations. We formulate the problem as an energy minimisation problem over a densely-connected conditional random field and iteratively update the training targets to obtain pixelwise object segmentations. Additionally, we propose variants of the DeepCut method and compare those to a naive approach to CNN training under weak supervision. We test its applicability to solve brain and lung segmentation problems on a challenging fetal magnetic resonance dataset and obtain encouraging results in terms of accuracy.",
"Abstract Segmentation of key brain tissues from 3D medical images is of great significance for brain disease diagnosis, progression assessment and monitoring of neurologic conditions. While manual segmentation is time-consuming, laborious, and subjective, automated segmentation is quite challenging due to the complicated anatomical environment of brain and the large variations of brain tissues. We propose a novel voxelwise residual network ( VoxResNet ) with a set of effective training schemes to cope with this challenging problem. The main merit of residual learning is that it can alleviate the degradation problem when training a deep network so that the performance gains achieved by increasing the network depth can be fully leveraged. With this technique, our VoxResNet is built with 25 layers, and hence can generate more representative features to deal with the large variations of brain tissues than its rivals using hand-crafted features or shallower networks. In order to effectively train such a deep network with limited training data for brain segmentation, we seamlessly integrate multi-modality and multi-level contextual information into our network, so that the complementary information of different modalities can be harnessed and features of different scales can be exploited. Furthermore, an auto-context version of the VoxResNet is proposed by combining the low-level image appearance features, implicit shape information, and high-level context together for further improving the segmentation performance. Extensive experiments on the well-known benchmark (i.e., MRBrainS ) of brain segmentation from 3D magnetic resonance (MR) images corroborated the efficacy of the proposed VoxResNet . Our method achieved the first place in the challenge out of 37 competitors including several state-of-the-art brain segmentation methods. 
Our method is inherently general and can be readily applied as a powerful tool to many brain-related studies, where accurate segmentation of brain structures is critical.",
"There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.uni-freiburg.de people ronneber u-net .",
"Abstract Accurate and automatic organ segmentation from 3D radiological scans is an important yet challenging problem for medical image analysis. Specifically, as a small, soft, and flexible abdominal organ, the pancreas demonstrates very high inter-patient anatomical variability in both its shape and volume. This inhibits traditional automated segmentation methods from achieving high accuracies, especially compared to the performance obtained for other organs, such as the liver, heart or kidneys. To fill this gap, we present an automated system from 3D computed tomography (CT) volumes that is based on a two-stage cascaded approach—pancreas localization and pancreas segmentation. For the first step, we localize the pancreas from the entire 3D CT scan, providing a reliable bounding box for the more refined segmentation step. We introduce a fully deep-learning approach, based on an efficient application of holistically-nested convolutional networks (HNNs) on the three orthogonal axial, sagittal, and coronal views. The resulting HNN per-pixel probability maps are then fused using pooling to reliably produce a 3D bounding box of the pancreas that maximizes the recall. We show that our introduced localizer compares favorably to both a conventional non-deep-learning method and a recent hybrid approach based on spatial aggregation of superpixels using random forest classification. The second, segmentation, phase operates within the computed bounding box and integrates semantic mid-level cues of deeply-learned organ interior and boundary maps, obtained by two additional and separate realizations of HNNs. By integrating these two mid-level cues, our method is capable of generating boundary-preserving pixel-wise class label maps that result in the final pancreas segmentation. Quantitative evaluation is performed on a publicly available dataset of 82 patient CT scans using 4-fold cross-validation (CV). We achieve a (mean ± std. dev.) 
Dice similarity coefficient (DSC) of 81.27 ± 6.27% in validation, which significantly outperforms both a previous state-of-the-art method and a preliminary version of this work that report DSCs of 71.80 ± 10.70% and 78.01 ± 8.20%, respectively, using the same dataset.",
"Training a fully convolutional network for pixel-wise (or voxel-wise) image segmentation normally requires a large number of training images with corresponding ground truth label maps. However, it is a challenge to obtain such a large training set in the medical imaging domain, where expert annotations are time-consuming and difficult to obtain. In this paper, we propose a semi-supervised learning approach, in which a segmentation network is trained from both labelled and unlabelled data. The network parameters and the segmentations for the unlabelled data are alternately updated. We evaluate the method for short-axis cardiac MR image segmentation and it has demonstrated a high performance, outperforming a baseline supervised method. The mean Dice overlap metric is 0.92 for the left ventricular cavity, 0.85 for the myocardium and 0.89 for the right ventricular cavity. It also outperforms a state-of-the-art multi-atlas segmentation method by a large margin and the speed is substantially faster."
]
} |
1904.06346 | 2941484865 | Accurate multi-organ abdominal CT segmentation is essential to many clinical applications such as computer-aided intervention. As data annotation requires massive human labor from experienced radiologists, it is common that training data are partially labeled, e.g., pancreas datasets only have the pancreas labeled while leaving the rest marked as background. However, these background labels can be misleading in multi-organ segmentation since the "background" usually contains some other organs of interest. To address the background ambiguity in these partially-labeled datasets, we propose Prior-aware Neural Network (PaNN) via explicitly incorporating anatomical priors on abdominal organ sizes, guiding the training process with domain-specific knowledge. More specifically, PaNN assumes that the average organ size distributions in the abdomen should approximate their empirical distributions, a prior statistics obtained from the fully-labeled dataset. As our training objective is difficult to be directly optimized using stochastic gradient descent [20], we propose to reformulate it in a min-max form and optimize it via the stochastic primal-dual gradient algorithm. PaNN achieves state-of-the-art performance on the MICCAI2015 challenge "Multi-Atlas Labeling Beyond the Cranial Vault", a competition on organ segmentation in the abdomen. We report an average Dice score of 84.97%, surpassing the prior art by a large margin of 3.27%. | However, these methods fail to capture the anatomical priors @cite_44 . Inclusion of priors in medical imaging could potentially have much more impact compared with their usage in natural images since anatomical objects in medical images are naturally more constrained in terms of shape, location, size, etc. Some recent works @cite_14 @cite_51 demonstrate that these priors can be learnt by a generative model. But these methods induce heavy computational overhead.
Kervadec et al. @cite_47 proposed that directly imposing inequality constraints on sizes is also an effective way of incorporating anatomical priors. Unlike these methods, we propose to learn from partial annotations by embedding the abdominal region statistics in the training objective, which requires no additional training budget. | {
"cite_N": [
"@cite_44",
"@cite_47",
"@cite_14",
"@cite_51"
],
"mid": [
"2592929672",
"2799738340",
"2798753173",
""
],
"abstract": [
"Abstract Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskeletal. We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research.",
"Abstract Weakly-supervised learning based on, e.g., partially labelled images or image-tags, is currently attracting significant attention in CNN segmentation as it can mitigate the need for full and laborious pixel voxel annotations. Enforcing high-order (global) inequality constraints on the network output (for instance, to constrain the size of the target region) can leverage unlabeled data, guiding the training process with domain-specific knowledge. Inequality constraints are very flexible because they do not assume exact prior knowledge. However, constrained Lagrangian dual optimization has been largely avoided in deep networks, mainly for computational tractability reasons. To the best of our knowledge, the method of (2015a) is the only prior work that addresses deep CNNs with linear constraints in weakly supervised segmentation. It uses the constraints to synthesize fully-labeled training masks (proposals) from weak labels, mimicking full supervision and facilitating dual optimization. We propose to introduce a differentiable penalty, which enforces inequality constraints directly in the loss function, avoiding expensive Lagrangian dual iterates and proposal generation. From constrained-optimization perspective, our simple penalty-based approach is not optimal as there is no guarantee that the constraints are satisfied. However, surprisingly, it yields substantially better results than the Lagrangian-based constrained CNNs in (2015a) , while reducing the computational demand for training. By annotating only a small fraction of the pixels, the proposed approach can reach a level of segmentation performance that is comparable to full supervision on three separate tasks. While our experiments focused on basic linear constraints such as the target-region size and image tags, our framework can be easily extended to other non-linear constraints, e.g., invariant shape moments (Klodt and Cremers, 2011) and other region statistics (, 2014). 
Therefore, it has the potential to close the gap between weakly and fully supervised learning in semantic medical image segmentation. Our code is publicly available.",
"We consider the problem of segmenting a biomedical image into anatomical regions of interest. We specifically address the frequent scenario where we have no paired training data that contains images and their manual segmentations. Instead, we employ unpaired segmentation images that we use to build an anatomical prior. Critically these segmentations can be derived from imaging data from a different dataset and imaging modality than the current task. We introduce a generative probabilistic model that employs the learned prior through a convolutional neural network to compute segmentations in an unsupervised setting. We conducted an empirical analysis of the proposed approach in the context of structural brain MRI segmentation, using a multi-study dataset of more than 14,000 scans. Our results show that an anatomical prior enables fast unsupervised segmentation which is typically not possible using standard convolutional networks. The integration of anatomical priors can facilitate CNN-based anatomical segmentation in a range of novel clinical problems, where few or no annotations are available and thus standard networks are not trainable. The code, model definitions and model weights are freely available at http: github.com adalca neuron.",
""
]
} |
1904.06400 | 2929084559 | In this paper, we propose a Distributed Intelligent Video Surveillance (DIVS) system using Deep Learning (DL) algorithms and deploy it in an edge computing environment. We establish a multi-layer edge computing architecture and a distributed DL training model for the DIVS system. The DIVS system can migrate computing workloads from the network center to network edges to reduce huge network communication overhead and provide low-latency and accurate video analysis solutions. We implement the proposed DIVS system and address the problems of parallel training, model synchronization, and workload balancing. Task-level parallel and model-level parallel training methods are proposed to further accelerate the video analysis process. In addition, we propose a model parameter updating method to achieve model synchronization of the global DL model in a distributed EC environment. Moreover, a dynamic data migration approach is proposed to address the imbalance of workload and computational power of edge nodes. Experimental results showed that the EC architecture can provide elastic and scalable computing power, and the proposed DIVS system can efficiently handle video surveillance and analysis tasks. | Various distributed AI and DL algorithms were proposed in distributed computing, cloud computing, fog computing, and edge computing environments to improve their performance and scalability @cite_32 @cite_22 @cite_25 @cite_29 @cite_33 @cite_35 . In our previous work, we proposed a two-layer parallel CNN training architecture in a distributed computing cluster @cite_32 . Li et al. discussed the application of Machine Learning (ML) in smart industry and introduced an efficient manufacture inspection system using fog computing @cite_5 . Diro et al. proposed a Long Short-Term Memory (LSTM) network for distributed attack detection in fog computing environments @cite_19 . Focusing on edge computing, Khelifi et al.
discussed the applicability of merging DL models in EC environments, such as CNN, RNN, and RL @cite_2 . In @cite_21 , Li et al. designed an offloading strategy to optimize the performance of IoT deep learning applications in EC environments. | {
"cite_N": [
"@cite_35",
"@cite_22",
"@cite_33",
"@cite_29",
"@cite_21",
"@cite_32",
"@cite_19",
"@cite_2",
"@cite_5",
"@cite_25"
],
"mid": [
"",
"2802446276",
"",
"",
"2786070938",
"2895891814",
"2889885925",
"2896880663",
"2805454539",
""
],
"abstract": [
"",
"Many distributed deep learning systems have been published over the past few years, often accompanied by impressive performance claims. In practice these figures are often achieved in high performance computing (HPC) environments with fast InfiniBand network connections. For average deep learning practitioners this is usually an unrealistic scenario, since they cannot afford access to these facilities. Simple re-implementations of algorithms such as EASGD [1] for standard Ethernet environments often fail to replicate the scalability and performance of the original works [2] . In this paper, we explore this particular problem domain and present MPCA SGD, a method for distributed training of deep neural networks that is specifically designed to run in low-budget environments. MPCA SGD tries to make the best possible use of available resources, and can operate well if network bandwidth is constrained. Furthermore, MPCA SGD runs on top of the popular Apache Spark [3] framework. Thus, it can easily be deployed in existing data centers and office environments where Spark is already used. When training large deep learning models in a gigabit Ethernet cluster, MPCA SGD achieves significantly faster convergence rates than many popular alternatives. For example, MPCA SGD can train ResNet-152 [4] up to 5.3x faster than state-of-the-art systems like MXNet [5] , up to 5.3x faster than bulk-synchronous systems like SparkNet [6] and up to 5.3x faster than decentral asynchronous systems like EASGD [1] .",
"",
"",
"Deep learning is a promising approach for extracting accurate information from raw sensor data from IoT devices deployed in complex environments. Because of its multilayer structure, deep learning is also appropriate for the edge computing environment. Therefore, in this article, we first introduce deep learning for IoTs into the edge computing environment. Since existing edge nodes have limited processing capability, we also design a novel offloading strategy to optimize the performance of IoT deep learning applications with edge computing. In the performance evaluation, we test the performance of executing multiple deep learning tasks in an edge computing environment with our strategy. The evaluation results show that our method outperforms other optimization solutions on deep learning for IoT.",
"Benefitting from large-scale training datasets and the complex training network, Convolutional Neural Networks (CNNs) are widely applied in various fields with high accuracy. However, the training process of CNNs is very time-consuming, where large amounts of training samples and iterative operations are required to obtain high-quality weight parameters. In this paper, we focus on the time-consuming training process of large-scale CNNs and propose a Bi-layered Parallel Training (BPT-CNN) architecture in distributed computing environments. BPT-CNN consists of two main components: (a) an outer-layer parallel training for multiple CNN subnetworks on separate data subsets, and (b) an inner-layer parallel training for each subnetwork. In the outer-layer parallelism, we address critical issues of distributed and parallel computing, including data communication, synchronization, and workload balance. A heterogeneous-aware Incremental Data Partitioning and Allocation (IDPA) strategy is proposed, where large-scale training datasets are partitioned and allocated to the computing nodes in batches according to their computing power. To minimize the synchronization waiting during the global weight update process, an Asynchronous Global Weight Update (AGWU) strategy is proposed. In the inner-layer parallelism, we further accelerate the training process for each CNN subnetwork on each computer, where computation steps of convolutional layer and the local weight training are parallelized based on task-parallelism. We introduce task decomposition and scheduling strategies with the objectives of thread-level load balancing and minimum waiting time for critical paths. Extensive experimental results indicate that the proposed BPT-CNN effectively improves the training performance of CNNs while maintaining the accuracy.",
"The evolution and sophistication of cyber-attacks need resilient and evolving cybersecurity schemes. As an emerging technology, the Internet of Things (IoT) inherits cyber-attacks and threats from the IT environment despite the existence of a layered defensive security mechanism. The extension of the digital world to the physical environment of IoT brings unseen attacks that require a novel lightweight and distributed attack detection mechanism due to their architecture and resource constraints. Architecturally, fog nodes can be leveraged to offload security functions from IoT and the cloud to mitigate the resource limitation issues of IoT and scalability bottlenecks of the cloud. Classical machine learning algorithms have been extensively used for intrusion detection, although scalability, feature engineering efforts, and accuracy have hindered their penetration into the security market. These shortcomings could be mitigated using the deep learning approach as it has been successful in big data fields. Apart from eliminating the need to craft features manually, deep learning is resilient against morphing attacks with high detection accuracy. This article proposes an LSTM network for distributed cyber-attack detection in fog-to-things communication. We identify and analyze critical attacks and threats targeting IoT devices, especially attacks exploiting vulnerabilities of wireless communications. The conducted experiments on two scenarios demonstrate the effectiveness and efficiency of deeper models over traditional machine learning models.",
"Various Internet solutions take their power processing and analysis from cloud computing services. Internet of Things (IoT) applications started discovering the benefits of computing, processing, and analysis on the device itself aiming to reduce latency for time-critical applications. However, on-device processing is not suitable for resource-constraints IoT devices. Edge computing (EC) came as an alternative solution that tends to move services and computation more closer to consumers, at the edge. In this letter, we study and discuss the applicability of merging deep learning (DL) models, i.e., convolutional neural network (CNN), recurrent neural network (RNN), and reinforcement learning (RL), with IoT and information-centric networking which is a promising future Internet architecture, combined all together with the EC concept. Therefore, a CNN model can be used in the IoT area to exploit reliably data from a complex environment. Moreover, RL and RNN have been recently integrated into IoT, which can be used to take the multi-modality of data in real-time applications into account.",
"With the rapid development of Internet of things devices and network infrastructure, there have been a lot of sensors adopted in the industrial productions, resulting in a large size of data. One of the most popular examples is the manufacture inspection, which is to detect the defects of the products. In order to implement a robust inspection system with higher accuracy, we propose a deep learning based classification model in this paper, which can find the possible defective products. As there may be many assembly lines in one factory, one huge problem in this scenario is how to process such big data in real time. Therefore, we design our system with the concept of fog computing. By offloading the computation burden from the central server to the fog nodes, the system obtains the ability to deal with extremely large data. There are two obvious advantages in our system. The first one is that we adapt the convolutional neural network model to the fog computing environment, which significantly improves its computing efficiency. The other one is that we work out an inspection model, which can simultaneously indicate the defect type and its degree. The experiments well prove that the proposed method is robust and efficient.",
""
]
} |
1904.06264 | 2942471690 | We introduce a method to infer a variational approximation to the posterior distribution of solutions in computational imaging inverse problems. Machine learning methods applied to computational imaging have proven very successful, but have so far largely focused on retrieving a single optimal solution for a given task. Such retrieval is arguably an incomplete description of the solution space, as in ill-posed inverse problems there may be many similarly likely reconstructions. We minimise an upper bound on the divergence between our approximate distribution and the true intractable posterior, thereby obtaining a probabilistic description of the solution space in imaging inverse problems with empirical prior. We demonstrate the advantage of our technique in quantitative simulations with the CelebA dataset and common image reconstruction tasks. We then apply our method to two of the currently most challenging problems in experimental optics: imaging through highly scattering media and imaging through multi-modal optical fibres. In both settings we report state of the art reconstructions, while providing new capabilities, such as estimation of error-bars and visualisation of multiple likely reconstructions. | However, despite its theoretical benefits and wide adoption, sparsity regularisation is often too generic to accurately model the desired assumptions in a given imaging setting @cite_37 . | {
"cite_N": [
"@cite_37"
],
"mid": [
"2604885021"
],
"abstract": [
"While deep learning methods have achieved state-of-theart performance in many challenging inverse problems like image inpainting and super-resolution, they invariably involve problem-specific training of the networks. Under this approach, each inverse problem requires its own dedicated network. In scenarios where we need to solve a wide variety of problems, e.g., on a mobile camera, it is inefficient and expensive to use these problem-specific networks. On the other hand, traditional methods using analytic signal priors can be used to solve any linear inverse problem; this often comes with a performance that is worse than learning-based methods. In this work, we provide a middle ground between the two kinds of methods — we propose a general framework to train a single deep neural network that solves arbitrary linear inverse problems. We achieve this by training a network that acts as a quasi-projection operator for the set of natural images and show that any linear inverse problem involving natural images can be solved using iterative methods. We empirically show that the proposed framework demonstrates superior performance over traditional methods using wavelet sparsity prior while achieving performance comparable to specially-trained networks on tasks including compressive sensing and pixel-wise inpainting."
]
} |
1904.06264 | 2942471690 | We introduce a method to infer a variational approximation to the posterior distribution of solutions in computational imaging inverse problems. Machine learning methods applied to computational imaging have proven very successful, but have so far largely focused on retrieving a single optimal solution for a given task. Such retrieval is arguably an incomplete description of the solution space, as in ill-posed inverse problems there may be many similarly likely reconstructions. We minimise an upper bound on the divergence between our approximate distribution and the true intractable posterior, thereby obtaining a probabilistic description of the solution space in imaging inverse problems with empirical prior. We demonstrate the advantage of our technique in quantitative simulations with the CelebA dataset and common image reconstruction tasks. We then apply our method to two of the currently most challenging problems in experimental optics: imaging through highly scattering media and imaging through multi-modal optical fibres. In both settings we report state of the art reconstructions, while providing new capabilities, such as estimation of error-bars and visualisation of multiple likely reconstructions. | A second emerging class of learning based methods in CI is that of maximum a posteriori (MAP) inference with generative models. In these frameworks, similarly to MAP inference with analytical priors, the solution is found by minimising the Euclidean distance to the observations @math . However, instead of regularising the solution with some penalty function, the signal of interest is assumed to be generated by a pre-trained generative model as @math , where @math is the generative model's latent variable @cite_18 . As the generative model is trained with an example set that the target signal is assumed to belong to, generated images are expected to retain empirically learned properties @cite_37 . 
The solution is then found by performing the following minimisation. In such a way, the solution is constrained to be within the domain of the generative model, as the recovered @math is by definition generated from @math , but at the same time agreement to the measurements is directly induced. Though the optimisation problem is not convex, certain theoretical bounds were derived for the cases of linear observation processes and, more recently, phase-less linear observation processes @cite_4 . | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_37"
],
"mid": [
"2949536516",
"",
"2604885021"
],
"abstract": [
"The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements, by making use of prior knowledge on the structure of vectors in the relevant domain. For almost all results in this literature, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all. Instead, we suppose that vectors lie near the range of a generative model @math . Our main theorem is that, if @math is @math -Lipschitz, then roughly @math random Gaussian measurements suffice for an @math recovery guarantee. We demonstrate our results using generative models from published variational autoencoder and generative adversarial networks. Our method can use @math - @math x fewer measurements than Lasso for the same accuracy.",
"",
"While deep learning methods have achieved state-of-theart performance in many challenging inverse problems like image inpainting and super-resolution, they invariably involve problem-specific training of the networks. Under this approach, each inverse problem requires its own dedicated network. In scenarios where we need to solve a wide variety of problems, e.g., on a mobile camera, it is inefficient and expensive to use these problem-specific networks. On the other hand, traditional methods using analytic signal priors can be used to solve any linear inverse problem; this often comes with a performance that is worse than learning-based methods. In this work, we provide a middle ground between the two kinds of methods — we propose a general framework to train a single deep neural network that solves arbitrary linear inverse problems. We achieve this by training a network that acts as a quasi-projection operator for the set of natural images and show that any linear inverse problem involving natural images can be solved using iterative methods. We empirically show that the proposed framework demonstrates superior performance over traditional methods using wavelet sparsity prior while achieving performance comparable to specially-trained networks on tasks including compressive sensing and pixel-wise inpainting."
]
} |
1904.06264 | 2942471690 | We introduce a method to infer a variational approximation to the posterior distribution of solutions in computational imaging inverse problems. Machine learning methods applied to computational imaging have proven very successful, but have so far largely focused on retrieving a single optimal solution for a given task. Such retrieval is arguably an incomplete description of the solution space, as in ill-posed inverse problems there may be many similarly likely reconstructions. We minimise an upper bound on the divergence between our approximate distribution and the true intractable posterior, thereby obtaining a probabilistic description of the solution space in imaging inverse problems with empirical prior. We demonstrate the advantage of our technique in quantitative simulations with the CelebA dataset and common image reconstruction tasks. We then apply our method to two of the currently most challenging problems in experimental optics: imaging through highly scattering media and imaging through multi-modal optical fibres. In both settings we report state of the art reconstructions, while providing new capabilities, such as estimation of error-bars and visualisation of multiple likely reconstructions. | MAP inference with generative models has been demonstrated in compressed sensing settings @cite_18 @cite_23 and with several common image processing tasks @cite_37 . Though it is iterative and requires the definition of a differentiable observation model, it is formally more consistent than inverse mappings; the solution is found by maximising data fidelity under empirically learned signal assumptions. | {
"cite_N": [
"@cite_18",
"@cite_37",
"@cite_23"
],
"mid": [
"2949536516",
"2604885021",
""
],
"abstract": [
"The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements, by making use of prior knowledge on the structure of vectors in the relevant domain. For almost all results in this literature, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all. Instead, we suppose that vectors lie near the range of a generative model @math . Our main theorem is that, if @math is @math -Lipschitz, then roughly @math random Gaussian measurements suffice for an @math recovery guarantee. We demonstrate our results using generative models from published variational autoencoder and generative adversarial networks. Our method can use @math - @math x fewer measurements than Lasso for the same accuracy.",
"While deep learning methods have achieved state-of-theart performance in many challenging inverse problems like image inpainting and super-resolution, they invariably involve problem-specific training of the networks. Under this approach, each inverse problem requires its own dedicated network. In scenarios where we need to solve a wide variety of problems, e.g., on a mobile camera, it is inefficient and expensive to use these problem-specific networks. On the other hand, traditional methods using analytic signal priors can be used to solve any linear inverse problem; this often comes with a performance that is worse than learning-based methods. In this work, we provide a middle ground between the two kinds of methods — we propose a general framework to train a single deep neural network that solves arbitrary linear inverse problems. We achieve this by training a network that acts as a quasi-projection operator for the set of natural images and show that any linear inverse problem involving natural images can be solved using iterative methods. We empirically show that the proposed framework demonstrates superior performance over traditional methods using wavelet sparsity prior while achieving performance comparable to specially-trained networks on tasks including compressive sensing and pixel-wise inpainting.",
""
]
} |
1904.06052 | 2938946739 | In this paper, we present COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations (this http URL). COCI is the first open citation index created by OpenCitations, in which we have applied the concept of citations as first-class data entities, and it contains more than 445 million DOI-to-DOI citation links derived from the data available in Crossref. These citations are described in RDF by means of the newly extended version of the OpenCitations Data Model (OCDM). We introduce the workflow we have developed for creating these data, and also show the additional services that facilitate the access to and querying of these data via different access points: a SPARQL endpoint, a REST API, bulk downloads, Web interfaces, and direct access to the citations via HTTP content negotiation. Finally, we present statistics regarding the use of COCI citation data, and we introduce several projects that have already started to use COCI data for different purposes. | ScholarlyData (http://www.scholarlydata.org) @cite_26 is a project that refactors the Semantic Web Dog Food so as to keep the dataset growing in good health. It uses the Conference Ontology, an improved version of the Semantic Web Conference Ontology, to describe metadata of documents (5,415, as of March 31, 2019), people (more than 1,100), and data about academic events (592) where such documents have been presented. | {
"cite_N": [
"@cite_26"
],
"mid": [
"2522413525"
],
"abstract": [
"The Semantic Web Dog Food (SWDF) is the reference linked dataset of the Semantic Web community about papers, people, organisations, and events related to its academic conferences. In this paper we analyse the existing problems of generating, representing and maintaining Linked Data for the SWDF. With this work (i) we provide a refactored and cleaned SWDF dataset; (ii) we use a novel data model which improves the Semantic Web Conference Ontology, adopting best ontology design practices and (iii) we provide an open source workflow to support a healthy growth of the dataset beyond the Semantic Web conferences."
]
} |
1904.06052 | 2938946739 | In this paper, we present COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations (this http URL). COCI is the first open citation index created by OpenCitations, in which we have applied the concept of citations as first-class data entities, and it contains more than 445 million DOI-to-DOI citation links derived from the data available in Crossref. These citations are described in RDF by means of the newly extended version of the OpenCitations Data Model (OCDM). We introduce the workflow we have developed for creating these data, and also show the additional services that facilitate the access to and querying of these data via different access points: a SPARQL endpoint, a REST API, bulk downloads, Web interfaces, and direct access to the citations via HTTP content negotiation. Finally, we present statistics regarding the use of COCI citation data, and we introduce several projects that have already started to use COCI data for different purposes. | Another important source of bibliographic data in RDF is OpenAIRE (https://www.openaire.eu) @cite_19 . Created with funding from the European Union, its RDF dataset makes available data for around 34 million research products created in the context of around 2.5 million research projects. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2612009452"
],
"abstract": [
"OpenAIRE, the Open Access Infrastructure for Research in Europe, enables search, discovery and monitoring of publications and datasets from more than 100,000 research projects. Increasing the reusability of the OpenAIRE research metadata, connecting it to other open data about projects, publications, people and organizations, and reaching out to further related domains requires better technical interoperability, which we aim at achieving by exposing the OpenAIRE Information Space as Linked Data. We present a scalable and maintainable architecture that converts the OpenAIRE data from its original HBase NoSQL source to RDF. We furthermore explore how this novel integration of data about research can facilitate scholarly communication."
]
} |
1904.06052 | 2938946739 | In this paper, we present COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations (this http URL). COCI is the first open citation index created by OpenCitations, in which we have applied the concept of citations as first-class data entities, and it contains more than 445 million DOI-to-DOI citation links derived from the data available in Crossref. These citations are described in RDF by means of the newly extended version of the OpenCitations Data Model (OCDM). We introduce the workflow we have developed for creating these data, and also show the additional services that facilitate the access to and querying of these data via different access points: a SPARQL endpoint, a REST API, bulk downloads, Web interfaces, and direct access to the citations via HTTP content negotiation. Finally, we present statistics regarding the use of COCI citation data, and we introduce several projects that have already started to use COCI data for different purposes. | The OpenCitations Corpus (OCC, https://w3id.org/oc/corpus) @cite_13 is a collection of open bibliographic and citation data created by ourselves, harvested from the open access literature available in PubMed Central. As of March 31, 2019, it contains information about almost 14 million citation links to more than 7.5 million cited bibliographic resources. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2766227634"
],
"abstract": [
"Reference lists from academic articles are core elements of scholarly communication that permit the attribution of credit and integrate our independent research endeavours. Hitherto, however, they have not been freely available in an appropriate machine-readable format such as RDF and in aggregate for use by scholars. To address this issue, one year ago we started ingesting citation data from the Open Access literature into the OpenCitations Corpus (OCC), creating an RDF dataset of scholarly citation data that is open to all. In this paper we introduce the OCC and we discuss its outcomes and uses after the first year of life."
]
} |
1904.06052 | 2938946739 | In this paper, we present COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations (this http URL). COCI is the first open citation index created by OpenCitations, in which we have applied the concept of citations as first-class data entities, and it contains more than 445 million DOI-to-DOI citation links derived from the data available in Crossref. These citations are described in RDF by means of the newly extended version of the OpenCitations Data Model (OCDM). We introduce the workflow we have developed for creating these data, and also show the additional services that facilitate the access to and querying of these data via different access points: a SPARQL endpoint, a REST API, bulk downloads, Web interfaces, and direct access to the citations via HTTP content negotiation. Finally, we present statistics regarding the use of COCI citation data, and we introduce several projects that have already started to use COCI data for different purposes. | WikiCite (https://meta.wikimedia.org/wiki/WikiCite) is a proposal, with a related series of workshops, which aims at building a bibliographic database in Wikidata @cite_10 to serve all Wikimedia projects. Currently Wikidata hosts (as of March 29, 2019) more than 170 million citations. | {
"cite_N": [
"@cite_10"
],
"mid": [
"150699991"
],
"abstract": [
"Wikidata is the central data management platform of Wikipedia. By the efforts of thousands of volunteers, the project has produced a large, open knowledge base with many interesting applications. The data is highly interlinked and connected to many other datasets, but it is also very rich, complex, and not available in RDF. To address this issue, we introduce new RDF exports that connect Wikidata to the Linked Data Web. We explain the data model of Wikidata and discuss its encoding in RDF. Moreover, we introduce several partial exports that provide more selective or simplified views on the data. This includes a class hierarchy and several other types of ontological axioms that we extract from the site. All datasets we discuss here are freely available online and updated regularly."
]
} |
1904.06052 | 2938946739 | In this paper, we present COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations (this http URL). COCI is the first open citation index created by OpenCitations, in which we have applied the concept of citations as first-class data entities, and it contains more than 445 million DOI-to-DOI citation links derived from the data available in Crossref. These citations are described in RDF by means of the newly extended version of the OpenCitations Data Model (OCDM). We introduce the workflow we have developed for creating these data, and also show the additional services that facilitate the access to and querying of these data via different access points: a SPARQL endpoint, a REST API, bulk downloads, Web interfaces, and direct access to the citations via HTTP content negotiation. Finally, we present statistics regarding the use of COCI citation data, and we introduce several projects that have already started to use COCI data for different purposes. | Biotea (https://biotea.github.io) @cite_15 is an RDF dataset containing information about some of the articles available in the Open Access subset of PubMed Central, which have been enhanced with specialized annotation pipelines. The last released dataset includes information extracted from 2,811 articles, including data on their citations. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2778697372"
],
"abstract": [
"A significant portion of biomedical literature is represented in a manner that makes it difficult for consumers to find or aggregate content through a computational query. One approach to facilitate reuse of the scientific literature is to structure this information as linked data using standardized web technologies. In this paper we present the second version of Biotea, a semantic, linked data version of the open-access subset of PubMed Central that has been enhanced with specialized annotation pipelines that uses existing infrastructure from the National Center for Biomedical Ontology. We expose our models, services, software and datasets. Our infrastructure enables manual and semi-automatic annotation, resulting data are represented as RDF-based linked data and can be readily queried using the SPARQL query language. We illustrate the utility of our system with several use cases. Our datasets, methods and techniques are available at http://biotea.github.io."
]
} |
1904.06052 | 2938946739 | In this paper, we present COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations (this http URL). COCI is the first open citation index created by OpenCitations, in which we have applied the concept of citations as first-class data entities, and it contains more than 445 million DOI-to-DOI citation links derived from the data available in Crossref. These citations are described in RDF by means of the newly extended version of the OpenCitations Data Model (OCDM). We introduce the workflow we have developed for creating these data, and also show the additional services that facilitate the access to and querying of these data via different access points: a SPARQL endpoint, a REST API, bulk downloads, Web interfaces, and direct access to the citations via HTTP content negotiation. Finally, we present statistics regarding the use of COCI citation data, and we introduce several projects that have already started to use COCI data for different purposes. | Finally, Semantic Lancet @cite_14 proposes to build a dataset of scholarly publication metadata and citations (including the specification of the citation functions) starting from articles published by Elsevier. To date it includes bibliographic metadata, abstract and citations of 291 articles published in the Journal of Web Semantics. | {
"cite_N": [
"@cite_14"
],
"mid": [
"633162274"
],
"abstract": [
"In this poster we introduce the Semantic Lancet Project, whose goal is to make available rich data about scholarly publications and to provide users with sophisticated services on top of those data."
]
} |
1904.06031 | 2937220865 | Batch normalization (BN) has been very effective for deep learning and is widely used. However, when training with small minibatches, models using BN exhibit a significant degradation in performance. In this paper we study this peculiar behavior of BN to gain a better understanding of the problem, and identify a potential cause based on a statistical insight. We propose 'EvalNorm' to address the issue by estimating corrected normalization statistics to use for BN during evaluation. EvalNorm supports online estimation of the corrected statistics while the model is being trained, and it does not affect the training scheme of the model. As a result, an added advantage of EvalNorm is that it can be used with existing pre-trained models allowing them to benefit from our method. EvalNorm yields large gains for models trained with smaller batches. Our experiments show that EvalNorm performs 6.18 (absolute) better than vanilla BN for a batchsize of 2 on ImageNet validation set and from 1.5 to 7.0 points (absolute) gain on the COCO object detection benchmark across a variety of setups. | Instead of dealing with the small minibatch problem, several normalization techniques have been proposed that do not utilize the construct of a 'minibatch.' Instance Normalization @cite_20 performs normalization similar to BN but only for a single sample and was shown to be effective on image style transfer applications. Similarly, Layer Normalization @cite_10 utilizes the entire layer (all channels) to estimate the normalization statistics. These approaches @cite_10 @cite_20 have not shown benefits on image recognition tasks, which is the application we focus on. Instead of normalizing the activations, Weight Normalization @cite_9 reparameterizes the weights in the neural network to accelerate convergence. Normalization Propagation @cite_24 uses data independent moment estimates in every layer, instead of computing them from minibatches during training.
Group Normalization (GN) @cite_2 divides the channels into groups and, within each group, computes the moments for normalization. GN alleviates the small minibatch problem to some extent, but it performs worse than BN for larger minibatches. Other work provides a unifying view of the different normalization approaches by characterizing them as the same transformation but along different dimensions (layers, samples, filters, etc.). | {
"cite_N": [
"@cite_9",
"@cite_24",
"@cite_2",
"@cite_10",
"@cite_20"
],
"mid": [
"2963685250",
"2292729293",
"2795783309",
"2951720195",
"2502312327"
],
"abstract": [
"We present weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction. By reparameterizing the weights in this way we improve the conditioning of the optimization problem and we speed up convergence of stochastic gradient descent. Our reparameterization is inspired by batch normalization but does not introduce any dependencies between the examples in a minibatch. This means that our method can also be applied successfully to recurrent models such as LSTMs and to noise-sensitive applications such as deep reinforcement learning or generative models, for which batch normalization is less well suited. Although our method is much simpler, it still provides much of the speed-up of full batch normalization. In addition, the computational overhead of our method is lower, permitting more optimization steps to be taken in the same amount of time. We demonstrate the usefulness of our method on applications in supervised image recognition, generative modelling, and deep reinforcement learning.",
"While the authors of Batch Normalization (BN) identify and address an important problem involved in training deep networks-- Internal Covariate Shift-- the current solution has certain drawbacks. Specifically, BN depends on batch statistics for layerwise input normalization during training which makes the estimates of mean and standard deviation of input (distribution) to hidden layers inaccurate for validation due to shifting parameter values (especially during initial training epochs). Also, BN cannot be used with batch-size 1 during training. We address these drawbacks by proposing a non-adaptive normalization technique for removing internal covariate shift, that we call Normalization Propagation. Our approach does not depend on batch statistics, but rather uses a data-independent parametric estimate of mean and standard-deviation in every layer thus being computationally faster compared with BN. We exploit the observation that the pre-activation before Rectified Linear Units follow Gaussian distribution in deep networks, and that once the first and second order statistics of any given dataset are normalized, we can forward propagate this normalization without the need for recalculating the approximate statistics for hidden layers.",
"Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the batch dimension introduces problems --- BN's error increases rapidly when the batch size becomes smaller, caused by inaccurate batch statistics estimation. This limits BN's usage for training larger models and transferring features to computer vision tasks including detection, segmentation, and video, which require small batches constrained by memory consumption. In this paper, we present Group Normalization (GN) as a simple alternative to BN. GN divides the channels into groups and computes within each group the mean and variance for normalization. GN's computation is independent of batch sizes, and its accuracy is stable in a wide range of batch sizes. On ResNet-50 trained in ImageNet, GN has 10.6 lower error than its BN counterpart when using a batch size of 2; when using typical batch sizes, GN is comparably good with BN and outperforms other normalization variants. Moreover, GN can be naturally transferred from pre-training to fine-tuning. GN can outperform its BN-based counterparts for object detection and segmentation in COCO, and for video classification in Kinetics, showing that GN can effectively replace the powerful BN in a variety of tasks. GN can be easily implemented by a few lines of code in modern libraries.",
"Normalization techniques have only recently begun to be exploited in supervised learning tasks. Batch normalization exploits mini-batch statistics to normalize the activations. This was shown to speed up training and result in better models. However its success has been very limited when dealing with recurrent neural networks. On the other hand, layer normalization normalizes the activations across all activities within a layer. This was shown to work well in the recurrent setting. In this paper we propose a unified view of normalization techniques, as forms of divisive normalization, which includes layer and batch normalization as special cases. Our second contribution is the finding that a small modification to these normalization schemes, in conjunction with a sparse regularizer on the activations, leads to significant benefits over standard normalization techniques. We demonstrate the effectiveness of our unified divisive normalization framework in the context of convolutional neural nets and recurrent neural networks, showing improvements over baselines in image classification, language modeling as well as super-resolution.",
"It this paper we revisit the fast stylization method introduced in Ulyanov et. al. (2016). We show how a small change in the stylization architecture results in a significant qualitative improvement in the generated images. The change is limited to swapping batch normalization with instance normalization, and to apply the latter both at training and testing times. The resulting method can be used to train high-performance architectures for real-time image generation. The code will is made available on github at this https URL. Full paper can be found at arXiv:1701.02096."
]
} |
1904.06152 | 2941859974 | Software model checking has experienced significant progress in the last two decades; however, one of its major bottlenecks for practical applications remains its scalability and adaptability. Here, we describe an approach to integrate software model checking techniques into the DevOps culture by exploiting practices such as continuous integration and regression tests. In particular, our proposed approach looks at the modifications to the software system since its last verification, and submits them to a continuous formal verification process, guided by a set of regression test cases. Our vision is to focus on the developer in order to integrate formal verification techniques into the developer workflow by using their main software development methodologies and tools. | For instance, Klein et al. @cite_12 show how to scale formal proofs based on software architecture to real systems at low cost; Godefroid, Levin, and Molnar @cite_7 describe the remarkable impact of the SAGE tool, which performs dynamic symbolic execution to hunt for security issues in Microsoft applications; Cordeiro, Fischer, and Marques-Silva @cite_11 as well as Yin and Knight @cite_13 propose approaches to conduct formal verification of large software systems. Furthermore, there are two important studies that tackle the combination of formal techniques with continuous integration, which led to promising results and reflect the need and scientific challenges in the industry to follow this road. First, Chudnov et al. @cite_19 describe how Amazon Web Services (AWS) proves the correctness of its Transport Layer Security (TLS) protocol implementation, and how it uses CI tools to keep proving the software properties during the implementation's lifetime. Similarly, O'Hearn @cite_10 presents Infer, a static analyzer used at Facebook following a continuous reasoning approach.
Neither Chudnov et al. nor O'Hearn tries to handle model checking in a continuous process; the latter states this as an open challenge for the community. | {
"cite_N": [
"@cite_7",
"@cite_10",
"@cite_19",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"2042033151",
"2810768857",
"2884325678",
"2185774137",
"2893106650",
"2100142439"
],
"abstract": [
"",
"This paper describes work in continuous reasoning, where formal reasoning about a (changing) codebase is done in a fashion which mirrors the iterative, continuous model of software development that is increasingly practiced in industry. We suggest that advances in continuous reasoning will allow formal reasoning to scale to more programs, and more programmers. The paper describes the rationale for continuous reasoning, outlines some success cases from within industry, and proposes directions for work by the scientific community.",
"We describe formal verification of s2n, the open source TLS implementation used in numerous Amazon services. A key aspect of this proof infrastructure is continuous checking, to ensure that properties remain proven during the lifetime of the software. At each change to the code, proofs are automatically re-established with little to no interaction from the developers. We describe the proof itself and the technical decisions that enabled integration into development.",
"We introduce a scalable proof structure to facilitate formal verification of large software systems. In our approach, we mechanically synthesize an abstract specification from the software implementation, match its static operational structure to that of the original specification, and organize the proof as the conjunction of a series of lemmas about the specification structure. By setting up a different lemma for each distinct element and proving each lemma independently, we obtain the important benefit that the proof scales easily for large systems. We present details of the approach and an illustration of its application on a challenge problem from the security domain.",
"Verified software secures the Unmanned Little Bird autonomous helicopter against mid-flight cyber attacks.",
"The complexity of software in embedded systems has increased significantly over the last years so that software verification now plays an important role in ensuring the overall product quality. In this context, bounded model checking has been successfully applied to discover subtle errors, but for larger applications, it often suffers from the state space explosion problem. This paper describes a new approach called continuous verification to detect design errors as quickly as possible by exploiting information from the software configuration management system and by combining dynamic and static verification to reduce the state space to be explored. We also give a set of encodings that provide accurate support for program verification and use different background theories in order to improve scalability and precision in a completely automatic way. A case study from the telecommunications domain shows that the proposed approach improves the error-detection capability and reduces the overall verification time by up to 50 ."
]
} |
1904.06039 | 2936954755 | Recent research on Software-Defined Networking (SDN) strongly promotes the adoption of distributed controller architectures. To achieve high network performance, designing a scheduling function (SF) to properly dispatch requests from each switch to suitable controllers becomes critical. However, existing literature tends to design the SF targeted at specific network settings. In this paper, a reinforcement-learning-based (RL) approach is proposed with the aim to automatically learn a general, effective, and efficient SF. In particular, a new dispatching system is introduced in which the SF is represented as a neural network that determines the priority of each controller. Based on the priorities, a controller is selected using our proposed probability selection scheme to balance the trade-off between exploration and exploitation during learning. In order to train a general SF, we first formulate the scheduling function design problem as an RL problem. Then a new training approach is developed based on a state-of-the-art deep RL algorithm. Our simulation results show that our RL approach can rapidly design (or learn) SFs with optimal performance. Apart from that, the trained SF can generalize well and outperforms commonly used scheduling heuristics under various network settings. | In recent years, distributed controller architectures @cite_20 @cite_8 have been widely adopted in SDN to enhance the network performance. Although multiple controllers can be deployed in the control plane, the network performance still heavily relies on effective utilization of the controller resources. Thus, designing an effective and efficient SF for request dispatching is of great importance. | {
"cite_N": [
"@cite_20",
"@cite_8"
],
"mid": [
"2074616737",
"2798915702"
],
"abstract": [
"We present our experiences to date building ONOS (Open Network Operating System), an experimental distributed SDN control platform motivated by the performance, scalability, and availability requirements of large operator networks. We describe and evaluate two ONOS prototypes. The first version implemented core features: a distributed, but logically centralized, global network view; scale-out; and fault tolerance. The second version focused on improving performance. Based on experience with these prototypes, we identify additional steps that will be required for ONOS to support use cases such as core network traffic engineering and scheduling, and to become a usable open source, distributed network OS platform that the SDN community can build upon.",
"Computer networks lack a general control paradigm, as traditional networks do not provide any network-wide management abstractions. As a result, each new function (such as routing) must provide its own state distribution, element discovery, and failure recovery mechanisms. We believe this lack of a common control platform has significantly hindered the development of flexible, reliable and feature-rich network control planes. To address this, we present Onix, a platform on top of which a network control plane can be implemented as a distributed system. Control planes written within Onix operate on a global view of the network, and use basic state distribution primitives provided by the platform. Thus Onix provides a general API for control plane implementations, while allowing them to make their own trade-offs among consistency, durability, and scalability."
]
} |
1904.06039 | 2936954755 | Recent research on Software-Defined Networking (SDN) strongly promotes the adoption of distributed controller architectures. To achieve high network performance, designing a scheduling function (SF) to properly dispatch requests from each switch to suitable controllers becomes critical. However, existing literature tends to design the SF targeted at specific network settings. In this paper, a reinforcement-learning-based (RL) approach is proposed with the aim to automatically learn a general, effective, and efficient SF. In particular, a new dispatching system is introduced in which the SF is represented as a neural network that determines the priority of each controller. Based on the priorities, a controller is selected using our proposed probability selection scheme to balance the trade-off between exploration and exploitation during learning. In order to train a general SF, we first formulate the scheduling function design problem as an RL problem. Then a new training approach is developed based on a state-of-the-art deep RL algorithm. Our simulation results show that our RL approach can rapidly design (or learn) SFs with optimal performance. Apart from that, the trained SF can generalize well and outperforms commonly used scheduling heuristics under various network settings. | In the literature, there exist SFs in the form of heuristics designed by human experts. For example, a weighted round-robin heuristic is designed to proportionally forward requests to controllers based on their processing capacities. BLAC @cite_22 randomly sampled a small number of controllers and sent requests to the least loaded one. Similar approaches can also be found in the literature @cite_23 @cite_3. Although such manually designed SFs are intuitive and simple in nature, the design process is time-consuming and requires substantial domain knowledge.
Moreover, the performance of manually designed SFs could also vary significantly, depending on specific network settings. | {
"cite_N": [
"@cite_3",
"@cite_22",
"@cite_23"
],
"mid": [
"2779533463",
"2770754945",
"2034677282"
],
"abstract": [
"One grand challenge in software defined networking is to select appropriate locations for controllers to shorten the latency between controllers and switches in wide area networks. In the literature, the majority of approaches are focused on the reduction of packet propagation latency, but propagation latency is only one of the contributors of the overall latency between controllers and their associated switches. In this paper, we explore and investigate more possible contributors of the latency, including the end-to-end latency and the queuing latency of controllers. In order to decrease the end-to-end latency, the concept of network partition is introduced and a clustering-based network partition algorithm (CNPA) is then proposed to partition the network. The CNPA can guarantee that each partition is able to shorten the maximum end-to-end latency between controllers and switches. To further decrease the queuing latency of controllers, appropriate multiple controllers are then placed in the subnetworks. Extensive simulations are conducted under two real network topologies from the Internet Topology Zoo. The results verify that the proposed algorithm can remarkably reduce the maximum latency between controllers and their associated switches.",
"Distributed controller architectures have been proposed for Software-Defined Networking (SDN) to ensure scalability and reliability. One major drawback of the existing architectures is the uneven load distribution among controllers stemming from the static binding between controllers and switches. To address this issue, several existing studies introduce dynamic binding by adopting some switch migration mechanisms that re-associate switches from overloaded controllers to underutilized controllers. However, the migration process adds a considerable amount of complexity to the system and may incur significant network latency. In this paper, we propose BLAC, a novel BindingLess Architecture for distributed Controllers (BLAC), in which load balance is achieved with the help of the proposed scheduling layer, which intercepts flow requests from switches and dispatches them to different controllers as determined by selected scheduling algorithms. The process is proceeded transparently with no extra modification required for off-the-shelf SDN switches. Besides, the scheduling layer can flexibly support various scheduling algorithms and causes neither disruption of service nor significant network delay. We build a prototype that can work with various distributed controller systems and conduct experiments to demonstrate its efficacy. The results show that our design outperforms the static-binding controller system in terms of both system throughput and response time without the complexity of the dynamic-binding controller system.",
"Software Defined Networking (SDN) has become a popular paradigm for centralized control in many modern networking scenarios such as data centers and cloud. For large data centers hosting many hundreds of thousands of servers, there are few thousands of switches that need to be managed in a centralized fashion, which cannot be done using a single controller node. Previous works have proposed distributed controller architectures to address scalability issues. A key limitation of these works, however, is that the mapping between a switch and a controller is statically configured, which may result in uneven load distribution among the controllers as traffic conditions change dynamically. To address this problem, we propose ElastiCon, an elastic distributed controller architecture in which the controller pool is dynamically grown or shrunk according to traffic conditions. To address the load imbalance caused due to spatial and temporal variations in the traffic conditions, ElastiCon automatically balances the load across controllers thus ensuring good performance at all times irrespective of the traffic dynamics. We propose a novel switch migration protocol for enabling such load shifting, which conforms with the Openflow standard. We further design the algorithms for controller load balancing and elasticity. We also build a prototype of ElastiCon and evaluate it extensively to demonstrate the efficacy of our design."
]
} |
1904.06039 | 2936954755 | Recent research on Software-Defined Networking (SDN) strongly promotes the adoption of distributed controller architectures. To achieve high network performance, designing a scheduling function (SF) to properly dispatch requests from each switch to suitable controllers becomes critical. However, existing literature tends to design the SF targeted at specific network settings. In this paper, a reinforcement-learning-based (RL) approach is proposed with the aim to automatically learn a general, effective, and efficient SF. In particular, a new dispatching system is introduced in which the SF is represented as a neural network that determines the priority of each controller. Based on the priorities, a controller is selected using our proposed probability selection scheme to balance the trade-off between exploration and exploitation during learning. In order to train a general SF, we first formulate the scheduling function design problem as an RL problem. Then a new training approach is developed based on a state-of-the-art deep RL algorithm. Our simulation results show that our RL approach can rapidly design (or learn) SFs with optimal performance. Apart from that, the trained SF can generalize well and outperforms commonly used scheduling heuristics under various network settings. | To address these limitations, EC techniques have been widely applied for automatically designing SFs. For instance, @cite_2 proposed a multi-objective Genetic Programming (GP) approach for handling dynamic job shop scheduling problems. Although promising results have been obtained in the literature, these SFs are generally designed offline and cannot easily and quickly adapt to the never-ending changes in the network environment @cite_15 . Besides, each newly evolved SF must be extensively tested in either simulated or real-world environments, which is time-consuming and costly. | {
"cite_N": [
"@cite_15",
"@cite_2"
],
"mid": [
"2275596639",
"2087376002"
],
"abstract": [
"Hyper-heuristics have recently emerged as a powerful approach to automate the design of heuristics for a number of different problems. Production scheduling is a particularly popular application area for which a number of different hyper-heuristics have been developed and are shown to be effective, efficient, easy to implement, and reusable in different shop conditions. In particular, they seem to be a promising way to tackle highly dynamic and stochastic scheduling problems, an aspect that is specifically emphasized in this survey. Despite their success and the substantial number of papers in this area, there is currently no systematic discussion of the design choices and critical issues involved in the process of developing such approaches. This paper strives to fill this gap by summarizing the state-of-the-art approaches, suggesting a taxonomy, and providing the interested researchers and practitioners with guidelines for the design of hyper-heuristics in production scheduling. This paper also identifies challenges and open questions and highlights various directions for future work.",
"A scheduling policy strongly influences the performance of a manufacturing system. However, the design of an effective scheduling policy is complicated and time consuming due to the complexity of each scheduling decision, as well as the interactions among these decisions. This paper develops four new multi-objective genetic programming-based hyperheuristic (MO-GPHH) methods for automatic design of scheduling policies, including dispatching rules and due-date assignment rules in job shop environments. In addition to using three existing search strategies, nondominated sorting genetic algorithm II, strength Pareto evolutionary algorithm 2, and harmonic distance-based multi-objective evolutionary algorithm, to develop new MO-GPHH methods, a new approach called diversified multi-objective cooperative evolution (DMOCC) is also proposed. The novelty of these MO-GPHH methods is that they are able to handle multiple scheduling decisions simultaneously. The experimental results show that the evolved Pareto fronts represent effective scheduling policies that can dominate scheduling policies from combinations of existing dispatching rules with dynamic regression-based due-date assignment rules. The evolved scheduling policies also show dominating performance on unseen simulation scenarios with different shop settings. In addition, the uniformity of the scheduling policies obtained from the proposed method of DMOCC is better than those evolved by other evolutionary approaches."
]
} |
1904.06039 | 2936954755 | Recent research on Software-Defined Networking (SDN) strongly promotes the adoption of distributed controller architectures. To achieve high network performance, designing a scheduling function (SF) to properly dispatch requests from each switch to suitable controllers becomes critical. However, existing literature tends to design the SF targeted at specific network settings. In this paper, a reinforcement-learning-based (RL) approach is proposed with the aim to automatically learn a general, effective, and efficient SF. In particular, a new dispatching system is introduced in which the SF is represented as a neural network that determines the priority of each controller. Based on the priorities, a controller is selected using our proposed probability selection scheme to balance the trade-off between exploration and exploitation during learning. In order to train a general SF, we first formulate the scheduling function design problem as an RL problem. Then a new training approach is developed based on a state-of-the-art deep RL algorithm. Our simulation results show that our RL approach can rapidly design (or learn) SFs with optimal performance. Apart from that, the trained SF can generalize well and outperforms commonly used scheduling heuristics under various network settings. | Recently, a completely different design approach based on RL has been studied in the literature @cite_4 @cite_13 @cite_1 . @cite_13 proposed an RL-based approach to automatically allocate the server resources in data centers. DeepRM @cite_4 tackled the multi-resource cluster scheduling problem using policy search to optimize various objectives, e.g., average job completion time and resource utilization. @cite_1 leveraged the delay-tolerant feature of IoT traffic and developed an RL-based scheduler to handle traffic variation so that the network utilization can be constantly optimized. | {
"cite_N": [
"@cite_13",
"@cite_1",
"@cite_4"
],
"mid": [
"2102558581",
"2788005034",
"2546571074"
],
"abstract": [
"Reinforcement Learning (RL) provides a promising new approach to systems performance management that differs radically from standard queuing-theoretic approaches making use of explicit system performance models. In principle, RL can automatically learn high-quality management policies without an explicit performance model or traffic model and with little or no built-in system specific knowledge. In our original work [1], [2], [3] we showed the feasibility of using online RL to learn resource valuation estimates (in lookup table form) which can be used to make high-quality server allocation decisions in a multi-application prototype Data Center scenario. The present work shows how to combine the strengths of both RL and queuing models in a hybrid approach in which RL trains offline on data collected while a queuing model policy controls the system. By training offline we avoid suffering potentially poor performance in live online training. We also now use RL to train nonlinear function approximators (e.g. multi-layer perceptrons) instead of lookup tables; this enables scaling to substantially larger state spaces. Our results now show that in both open-loop and closed-loop traffic, hybrid RL training can achieve significant performance improvements over a variety of initial model-based policies. We also find that, as expected, RL can deal effectively with both transients and switching delays, which lie outside the scope of traditional steady-state queuing theory.",
"",
"Resource management problems in systems and networking often manifest as difficult online decision making tasks where appropriate solutions depend on understanding the workload and environment. Inspired by recent advances in deep reinforcement learning for AI problems, we consider building systems that learn to manage resources directly from experience. We present DeepRM, an example solution that translates the problem of packing tasks with multiple resource demands into a learning problem. Our initial results show that DeepRM performs comparably to state-of-the-art heuristics, adapts to different conditions, converges quickly, and learns strategies that are sensible in hindsight."
]
} |
1904.06034 | 2937385938 | We propose a supervised anomaly detection method based on neural density estimators, where the negative log likelihood is used for the anomaly score. Density estimators have been widely used for unsupervised anomaly detection. By the recent advance of deep learning, the density estimation performance has been greatly improved. However, the neural density estimators cannot exploit anomaly label information, which would be valuable for improving the anomaly detection performance. The proposed method effectively utilizes the anomaly label information by training the neural density estimator so that the likelihood of normal instances is maximized and the likelihood of anomalous instances is lower than that of the normal instances. We employ an autoregressive model for the neural density estimator, which enables us to calculate the likelihood exactly. With the experiments using 16 datasets, we demonstrate that the proposed method improves the anomaly detection performance with a few labeled anomalous instances, and achieves better performance than existing unsupervised and supervised anomaly detection methods. | A number of unsupervised methods for anomaly detection, which is sometimes called outlier detection @cite_32 or novelty detection @cite_37 , have been proposed, such as the local outlier factor @cite_9 , one-class support vector machines @cite_49 , and the isolation forest @cite_45 . With density estimation based anomaly detection methods, Gaussian distributions @cite_16 , Gaussian mixtures @cite_2 and kernel density estimators @cite_13 have been used. The density estimation methods have been regarded as unsuitable for anomaly detection in high-dimensional data due to the difficulty of estimating multivariate probability distributions @cite_46 @cite_38 . 
Although some supervised anomaly detection methods have been proposed @cite_20 @cite_34 @cite_21 @cite_28 @cite_29 @cite_52 @cite_41 @cite_55 , they are not based on deep autoregressive density estimators, which can achieve high density estimation performance. | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_28",
"@cite_41",
"@cite_9",
"@cite_21",
"@cite_29",
"@cite_32",
"@cite_34",
"@cite_52",
"@cite_55",
"@cite_45",
"@cite_49",
"@cite_2",
"@cite_46",
"@cite_16",
"@cite_13",
"@cite_20"
],
"mid": [
"2118882002",
"175423134",
"2752155475",
"2803446235",
"2144182447",
"2584401436",
"2963046013",
"2137130182",
"117883395",
"2804914057",
"2922995320",
"",
"2132870739",
"1543388142",
"",
"2091311817",
"2100106957",
"2528067192"
],
"abstract": [
"We propose a new statistical approach to the problem of inlier-based outlier detection, i.e., finding outliers in the test set based on the training set consisting only of inliers. Our key idea is to use the ratio of training and test data densities as an outlier score. This approach is expected to have better performance even in high-dimensional problems since methods for directly estimating the density ratio without going through density estimation are available. Among various density ratio estimation methods, we employ the method called unconstrained least-squares importance fitting (uLSIF) since it is equipped with natural cross-validation procedures, allowing us to objectively optimize the value of tuning parameters such as the regularization parameter and the kernel width. Furthermore, uLSIF offers a closed-form solution as well as a closed-form formula for the leave-one-out error, so it is computationally very efficient and is scalable to massive datasets. Simulations with benchmark and real-world datasets illustrate the usefulness of the proposed approach.",
"",
"Anomaly detectors are often used to produce a ranked list of statistical anomalies, which are examined by human analysts in order to extract the actual anomalies of interest. Unfortunately, in real-world applications, this process can be exceedingly difficult for the analyst since a large fraction of high-ranking anomalies are false positives and not interesting from the application perspective. In this paper, we aim to make the analyst's job easier by allowing for analyst feedback during the investigation process. Ideally, the feedback influences the ranking of the anomaly detector in a way that reduces the number of false positives that must be examined before discovering the anomalies of interest. In particular, we introduce a novel technique for incorporating simple binary feedback into tree-based anomaly detectors. We focus on the Isolation Forest algorithm as a representative tree-based anomaly detector, and show that we can significantly improve its performance by incorporating feedback, when compared with the baseline algorithm that does not incorporate feedback. Our technique is simple and scales well as the size of the data increases, which makes it suitable for interactive discovery of anomalies in large datasets.",
"Anomaly detection is a classical problem in computer vision, namely the determination of the normal from the abnormal when datasets are highly biased towards one class (normal) due to the insufficient sample size of the other class (abnormal). While this can be addressed as a supervised learning problem, a significantly more challenging problem is that of detecting the unknown unseen anomaly case that takes us instead into the space of a one-class, semi-supervised learning paradigm. We introduce such a novel anomaly detection model, by using a conditional generative adversarial network that jointly learns the generation of high-dimensional image space and the inference of latent space. Employing encoder-decoder-encoder sub-networks in the generator network enables the model to map the input image to a lower dimension vector, which is then used to reconstruct the generated output image. The use of the additional encoder network maps this generated image to its latent representation. Minimizing the distance between these images and the latent vectors during training aids in learning the data distribution for the normal samples. As a result, a larger distance metric from this learned data distribution at inference time is indicative of an outlier from that distribution - an anomaly. Experimentation over several benchmark datasets, from varying domains, shows the model efficacy and superiority over previous state-of-the-art approaches.",
"For many KDD applications, such as detecting criminal activities in E-commerce, finding the rare instances or the outliers, can be more interesting than finding the common patterns. Existing work in outlier detection regards being an outlier as a binary property. In this paper, we contend that for many scenarios, it is more meaningful to assign to each object a degree of being an outlier. This degree is called the local outlier factor (LOF) of an object. It is local in that the degree depends on how isolated the object is with respect to the surrounding neighborhood. We give a detailed formal analysis showing that LOF enjoys many desirable properties. Using real-world datasets, we demonstrate that LOF can be used to find outliers which appear to be meaningful, but can otherwise not be identified with existing approaches. Finally, a careful performance evaluation of our algorithm confirms that our approach of finding local outliers can be practical.",
"Unsupervised anomaly detection algorithms search for outliers and then predict that these outliers are the anomalies. When deployed, however, these algorithms are often criticized for high false positive and high false negative rates. One cause of poor performance is that not all outliers are anomalies and not all anomalies are outliers. In this paper, we describe an Active Anomaly Discovery (AAD) method for incorporating expert feedback to adjust the anomaly detector so that the outliers it discovers are more in tune with the expert user's semantic understanding of the anomalies. The AAD approach is designed to operate in an interactive data exploration loop. In each iteration of this loop, our algorithm first selects a data instance to present to the expert as a potential anomaly and then the expert labels the instance as an anomaly or as a nominal data point. Our algorithm updates its internal model with the instance label and the loop continues until a budget of B queries is spent. The goal of our approach is to maximize the total number of true anomalies in the B instances presented to the expert. We show that when compared to other state-of-the-art algorithms, AAD is consistently one of the best performers.",
"Generative models are widely used for unsupervised learning with various applications, including data compression and signal restoration. Training methods for such systems focus on the generality of the network given limited amount of training data. A less researched type of techniques concerns generation of only a single type of input. This is useful for applications such as constraint handling, noise reduction and anomaly detection. In this paper we present a technique to limit the generative capability of the network using negative learning. The proposed method searches the solution in the gradient direction for the desired input and in the opposite direction for the undesired input. One of the applications can be anomaly detection where the undesired inputs are the anomalous data. We demonstrate the features of the algorithm using the MNIST handwritten digit dataset and later apply the technique to a real-world obstacle detection problem. The results clearly show that the proposed learning technique can significantly improve the performance for anomaly detection.",
"Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise due to mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error or simply through natural deviations in populations. Their detection can identify system faults and fraud before they escalate with potentially catastrophic consequences. It can identify errors and remove their contaminating effect on the data set and as such to purify the data for processing. The original outlier detection methods were arbitrary but now, principled and systematic techniques are used, drawn from the full gamut of Computer Science and Statistics. In this paper, we introduce a survey of contemporary techniques for outlier detection. We identify their respective motivations and distinguish their advantages and disadvantages in a comparative review.",
"This paper presents a principled approach for incorporating labeled examples into an anomaly detection task. We demonstrate that, with the addition of labeled examples, the anomaly detection algorithm can be guided to develop better models of the normal and abnormal behavior of the data, thus improving the detection rate and reducing the false alarm rate of the algorithm. A framework based on the finite mixture model is introduced to model the data as well as the constraints imposed by the labeled examples. Empirical studies conducted on real data sets show that significant improvements in detection rate and false alarm rate are achieved using our proposed framework.",
"This work formalizes the new framework for anomaly detection, called active anomaly detection. This framework has, in practice, the same cost of unsupervised anomaly detection but with the possibility of much better results. We show that unsupervised anomaly detection is an undecidable problem and that a prior needs to be assumed for the anomalies probability distribution in order to have performance guarantees. Finally, we also present a new layer that can be attached to any deep learning model designed for unsupervised anomaly detection to transform it into an active anomaly detection method, presenting results on both synthetic and real anomaly detection datasets.",
"We propose the Autoencoding Binary Classifiers (ABC), a novel supervised anomaly detector based on the Autoencoder (AE). There are two main approaches in anomaly detection: supervised and unsupervised. The supervised approach accurately detects the known anomalies included in training data, but it cannot detect the unknown anomalies. Meanwhile, the unsupervised approach can detect both known and unknown anomalies that are located away from normal data points. However, it does not detect known anomalies as accurately as the supervised approach. Furthermore, even if we have labeled normal data points and anomalies, the unsupervised approach cannot utilize these labels. The ABC is a probabilistic binary classifier that effectively exploits the label information, where normal data points are modeled using the AE as a component. By maximizing the likelihood, the AE in the proposed ABC is trained to minimize the reconstruction error for normal data points, and to maximize it for known anomalies. Since our approach becomes able to reconstruct the normal data points accurately and fails to reconstruct the known and unknown anomalies, it can accurately discriminate both known and unknown anomalies from normal data points. Experimental results show that the ABC achieves higher detection performance than existing supervised and unsupervised methods.",
"",
"Suppose you are given some data set drawn from an underlying probability distribution P and you want to estimate a \"simple\" subset S of input space such that the probability that a test point drawn from P lies outside of S equals some a priori specified value between 0 and 1. We propose a method to approach this problem by trying to estimate a function f that is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. The expansion coefficients are found by solving a quadratic programming problem, which we do by carrying out sequential optimization over pairs of input patterns. We also provide a theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabeled data.",
"",
"",
"",
"This paper presents a first attempt to evaluate two previously proposed methods for statistical anomaly detection in sea traffic, namely the Gaussian Mixture Model (GMM) and the adaptive Kernel Density Estimator (KDE). A novel performance measure related to anomaly detection, together with an intermediate performance measure related to normalcy modeling, are proposed and evaluated using recorded AIS data of vessel traffic and simulated anomalous trajectories. The normalcy modeling evaluation indicates that KDE more accurately captures finer details of normal data. Yet, results from anomaly detection show no significant difference between the two techniques and the performance of both is considered suboptimal. Part of the explanation is that the methods are based on a rather artificial division of data into geographical cells. The paper therefore discusses other clustering approaches based on more informed features of data and more background knowledge regarding the structure and natural classes of the data.",
"Network security is of vital importance for corporations and institutions. In order to protect valuable computer systems, network data needs to be analyzed so that possible network intrusions can be detected. Supervised machine learning methods achieve high accuracy at classifying network data as normal or malicious, but they require the availability of fully labeled data. The recently developed ladder network, which combines neural networks with unsupervised learning, shows promise in achieving a high accuracy while only requiring a small number of labeled examples. We applied the ladder network to classifying network data using the Third International Knowledge Discovery and Data Mining Tools Competition dataset (KDD 1999). Our experiments show the ladder network was able to achieve similar results compared to supervised classifiers while using a limited number of labeled samples."
]
} |
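The 1904.06034 record above trains a density estimator so that normal instances receive high likelihood while labeled anomalies are pushed below them. A minimal sketch of that objective, substituting a 1-D Gaussian fitted by numerical gradient descent for the paper's neural autoregressive estimator (the data, margin, and learning rate here are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, 200)   # labeled normal instances
anomal = rng.normal(6.0, 1.0, 10)    # a few labeled anomalies

mu, log_sigma = 3.0, 0.0             # deliberately poor initialization
lr, margin = 0.05, 2.0               # illustrative hyperparameters

def loglik(x, mu, log_sigma):
    """Gaussian log density, the stand-in for the neural estimator."""
    s = np.exp(log_sigma)
    return -0.5 * np.log(2 * np.pi) - log_sigma - 0.5 * ((x - mu) / s) ** 2

def loss(mu, log_sigma):
    ll_n = loglik(normal, mu, log_sigma).mean()
    ll_a = loglik(anomal, mu, log_sigma).mean()
    # maximize normal likelihood; the hinge keeps anomaly likelihood below it
    return -ll_n + np.maximum(0.0, ll_a - ll_n + margin)

eps = 1e-5
for _ in range(500):                 # numerical-gradient descent
    g_mu = (loss(mu + eps, log_sigma) - loss(mu - eps, log_sigma)) / (2 * eps)
    g_ls = (loss(mu, log_sigma + eps) - loss(mu, log_sigma - eps)) / (2 * eps)
    mu, log_sigma = mu - lr * g_mu, log_sigma - lr * g_ls

# anomaly score = negative log likelihood, as in the abstract
score_normal = loglik(normal, mu, log_sigma).mean()
score_anomal = loglik(anomal, mu, log_sigma).mean()
```

After training, normal points should score a markedly higher likelihood than the held anomalies, which is what makes the negative log likelihood usable as an anomaly score.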
1904.06025 | 2939449863 | In order to drive safely and efficiently under merging scenarios, autonomous vehicles should be aware of their surroundings and make decisions by interacting with other road participants. Moreover, different strategies should be made when the autonomous vehicle is interacting with drivers having different level of cooperativeness. Whether the vehicle is on the merge-lane or main-lane will also influence the driving maneuvers since drivers will behave differently when they have the right-of-way than otherwise. Many traditional methods have been proposed to solve decision making problems under merging scenarios. However, these works either are incapable of modeling complicated interactions or require implementing hand-designed rules which cannot properly handle the uncertainties in real-world scenarios. In this paper, we proposed an interaction-aware decision making with adaptive strategies (IDAS) approach that can let the autonomous vehicle negotiate the road with other drivers by leveraging their cooperativeness under merging scenarios. A single policy is learned under the multi-agent reinforcement learning (MARL) setting via the curriculum learning strategy, which enables the agent to automatically infer other drivers' various behaviors and make decisions strategically. A masking mechanism is also proposed to prevent the agent from exploring states that violate common sense of human judgment and increase the learning efficiency. An exemplar merging scenario was used to implement and examine the proposed method. | This paper @cite_11 : it requires extra information about the other vehicles' behavior during execution, which is hard to obtain without a V2V assumption. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2741086815"
],
"abstract": [
"Autonomous driving requires decision making in dynamic and uncertain environments. The uncertainty from the prediction originates from the noisy sensor data and from the fact that the intention of human drivers cannot be directly measured. This problem is formulated as a partially observable Markov decision process (POMDP) with the intention of the other vehicles as hidden variables. The solution of the POMDP is a policy determining the optimal acceleration of the ego vehicle along a preplanned path. Therefore, the policy is optimized for the most likely future scenarios resulting from an interactive, probabilistic motion model for the other vehicles. Considering possible future measurements of the surroundings allows the autonomous car to incorporate the estimated change in future prediction accuracy in the optimal policy. A compact representation allows a low-dimensional state-space so that the problem can be solved online for varying road layouts and number of other vehicles. This is done with a point-based solver in an anytime fashion on a continuous state-space. We show the results with simulations for the crossing of complex (unsignalized) intersections. Our approach performs nearly as good as with full prior information about the intentions of the other vehicles and clearly outperforms reactive approaches."
]
} |
1904.06025 | 2939449863 | In order to drive safely and efficiently under merging scenarios, autonomous vehicles should be aware of their surroundings and make decisions by interacting with other road participants. Moreover, different strategies should be made when the autonomous vehicle is interacting with drivers having different level of cooperativeness. Whether the vehicle is on the merge-lane or main-lane will also influence the driving maneuvers since drivers will behave differently when they have the right-of-way than otherwise. Many traditional methods have been proposed to solve decision making problems under merging scenarios. However, these works either are incapable of modeling complicated interactions or require implementing hand-designed rules which cannot properly handle the uncertainties in real-world scenarios. In this paper, we proposed an interaction-aware decision making with adaptive strategies (IDAS) approach that can let the autonomous vehicle negotiate the road with other drivers by leveraging their cooperativeness under merging scenarios. A single policy is learned under the multi-agent reinforcement learning (MARL) setting via the curriculum learning strategy, which enables the agent to automatically infer other drivers' various behaviors and make decisions strategically. A masking mechanism is also proposed to prevent the agent from exploring states that violate common sense of human judgment and increase the learning efficiency. An exemplar merging scenario was used to implement and examine the proposed method. | This paper @cite_0 (MADDPG): the nature of interaction between agents can be cooperative, competitive, or both, and many algorithms are designed only for a particular nature of interaction. It does not address the multi-agent credit assignment problem as @cite_6 does. Different from @cite_6 , it learns a separate centralized critic for each agent, allowing for agents with different reward functions, including competitive scenarios.
It uses the framework of centralized training with decentralized execution. | {
"cite_N": [
"@cite_0",
"@cite_6"
],
"mid": [
"2623431351",
"2617547828"
],
"abstract": [
"We explore deep reinforcement learning methods for multi-agent domains. We begin by analyzing the difficulty of traditional algorithms in the multi-agent case: Q-learning is challenged by an inherent non-stationarity of the environment, while policy gradient suffers from a variance that increases as the number of agents grows. We then present an adaptation of actor-critic methods that considers action policies of other agents and is able to successfully learn policies that require complex multi-agent coordination. Additionally, we introduce a training regimen utilizing an ensemble of policies for each agent that leads to more robust multi-agent policies. We show the strength of our approach compared to existing methods in cooperative as well as competitive scenarios, where agent populations are able to discover various physical and informational coordination strategies.",
"Cooperative multi-agent systems can be naturally used to model many real world problems, such as network packet routing and the coordination of autonomous vehicles. There is a great need for new reinforcement learning methods that can efficiently learn decentralised policies for such systems. To this end, we propose a new multi-agent actor-critic method called counterfactual multi-agent (COMA) policy gradients. COMA uses a centralised critic to estimate the Q-function and decentralised actors to optimise the agents' policies. In addition, to address the challenges of multi-agent credit assignment, it uses a counterfactual baseline that marginalises out a single agent's action, while keeping the other agents' actions fixed. COMA also uses a critic representation that allows the counterfactual baseline to be computed efficiently in a single forward pass. We evaluate COMA in the testbed of StarCraft unit micromanagement, using a decentralised variant with significant partial observability. COMA significantly improves average performance over other multi-agent actor-critic methods in this setting, and the best performing agents are competitive with state-of-the-art centralised controllers that get access to the full state."
]
} |
1904.06025 | 2939449863 | In order to drive safely and efficiently under merging scenarios, autonomous vehicles should be aware of their surroundings and make decisions by interacting with other road participants. Moreover, different strategies should be made when the autonomous vehicle is interacting with drivers having different level of cooperativeness. Whether the vehicle is on the merge-lane or main-lane will also influence the driving maneuvers since drivers will behave differently when they have the right-of-way than otherwise. Many traditional methods have been proposed to solve decision making problems under merging scenarios. However, these works either are incapable of modeling complicated interactions or require implementing hand-designed rules which cannot properly handle the uncertainties in real-world scenarios. In this paper, we proposed an interaction-aware decision making with adaptive strategies (IDAS) approach that can let the autonomous vehicle negotiate the road with other drivers by leveraging their cooperativeness under merging scenarios. A single policy is learned under the multi-agent reinforcement learning (MARL) setting via the curriculum learning strategy, which enables the agent to automatically infer other drivers' various behaviors and make decisions strategically. A masking mechanism is also proposed to prevent the agent from exploring states that violate common sense of human judgment and increase the learning efficiency. An exemplar merging scenario was used to implement and examine the proposed method. | This paper @cite_6 (COMA): it uses a centralized critic and a counterfactual baseline to address the issue of multi-agent credit assignment. In this way, each agent is able to deduce its own contribution to the team's success. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2617547828"
],
"abstract": [
"Cooperative multi-agent systems can be naturally used to model many real world problems, such as network packet routing and the coordination of autonomous vehicles. There is a great need for new reinforcement learning methods that can efficiently learn decentralised policies for such systems. To this end, we propose a new multi-agent actor-critic method called counterfactual multi-agent (COMA) policy gradients. COMA uses a centralised critic to estimate the Q-function and decentralised actors to optimise the agents' policies. In addition, to address the challenges of multi-agent credit assignment, it uses a counterfactual baseline that marginalises out a single agent's action, while keeping the other agents' actions fixed. COMA also uses a critic representation that allows the counterfactual baseline to be computed efficiently in a single forward pass. We evaluate COMA in the testbed of StarCraft unit micromanagement, using a decentralised variant with significant partial observability. COMA significantly improves average performance over other multi-agent actor-critic methods in this setting, and the best performing agents are competitive with state-of-the-art centralised controllers that get access to the full state."
]
} |
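The COMA note above hinges on a counterfactual baseline that marginalizes out a single agent's action while keeping the other agents' actions fixed. A toy sketch of that advantage computation for one agent (the Q-values and policy are made up; in COMA itself Q comes from a learned centralized critic):

```python
import numpy as np

# Q(s, a) for one agent's three actions, the other agents' actions held fixed
q = np.array([1.0, 2.5, 0.5])
# the agent's current stochastic policy pi(a | s)
pi = np.array([0.2, 0.5, 0.3])

baseline = float(np.dot(pi, q))   # counterfactual baseline E_{a~pi}[Q(s, a)]
advantage = q - baseline          # A(s, a) = Q(s, a) - baseline
```

Because the baseline is the policy-weighted mean of Q, the expected advantage under the policy is zero, so subtracting it reduces gradient variance without biasing which actions look better than the agent's current behavior.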
1904.06072 | 2936333464 | We consider a multipoint-to-point network in which sensors periodically send measurements to a gateway. The system uses Long Range (LoRa) communications in a frequency band with duty-cycle limits. Our aim is to enhance the reliability of the measurement transmissions. In this setting, retransmission protocols do not scale well with the number of sensors as the duty cycle limit prevents a gateway from acknowledging all receptions if there are many sensors. We thus intend to improve the reliability without acknowledgments by transmitting multiple copies of a measurement, so that the gateway is able to obtain this measurement as long as it receives at least one copy. Each frame includes the current and a few past measurements. We propose a strategy for redundancy allocation that takes into account the effects of fading and interference to determine the number of measurements to be included in a frame. Numerical results obtained using the simulation tool LoRaSim show that the allocation of redundancy provides up to six orders of magnitude decrease in the outage probability. Compared to a system that blindly allocates the maximum redundancy possible under duty-cycle and delay constraints of the gateway and memory constraints of the sensors, our technique provides up to 30% reduction in the average energy spent to successfully deliver a measurement to the gateway. | The transmission of redundant data for improved reliability of LoRaWAN is also considered in @cite_5 , where the redundancy is generated by performing application-layer coding on past data. To avoid increasing the computational load at the sensors, we do not consider coding. Unlike our work, the presence of multiple transmitters and the resulting interference are not considered in @cite_5 . | {
"cite_N": [
"@cite_5"
],
"mid": [
"2606902731"
],
"abstract": [
"Internet of Things (IoT) solutions are increasingly being deployed for smart applications. To provide good communication for the increasing number of smart applications, there is a need for low cost and long range Low Power Wide Area Network (LPWAN) technologies. LoRaWAN is an energy efficient and inexpensive LPWAN solution that is rapidly being adopted all around the world. However, LoRaWAN does not guarantee reliable communication in its basic configuration. Transmitted frames can be lost due to the channel effects and mobility of the end-devices. In this study, we perform extensive measurements on a new LoRaWAN network to characterise spatial and temporal properties of the LoRaWAN channel. The empirical outage probability for the farthest measured distance from the closest gateway of 7.5 km in our deployment is as low as 0.004, but the frame loss measured at this distance was up to 70%. Furthermore, we show that burstiness in frame loss can be expected for both mobile and stationary scenarios. Frame loss results in data loss, since in the basic configuration frames are only transmitted once. To reduce data loss in LoRaWAN, we design a novel coding scheme for data recovery called DaRe, which extends frames with redundant information that is calculated from the data from previous frames. DaRe combines techniques from convolutional codes and fountain codes. We develop an implementation for DaRe and show that 99% of the data can be recovered with a code rate of 1/2 for up to 40% frame loss. Compared to repetition coding DaRe provides 21% more data recovery, and can save up to 42% energy consumption on transmission for 10 byte data units. DaRe also provides better resilience to bursty frame loss. This study provides useful results to both LoRaWAN network operators as well as developers of LoRaWAN applications. 
Network operators can use the characterisation results to identify possible weaknesses in the network, and application developers are offered a tool to prevent possible data loss."
]
} |
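The LoRa record above improves reliability by sending each measurement in several frames, since the gateway only needs to receive one copy. Assuming independent per-copy losses (an idealization; the paper's allocation accounts for fading and interference), the outage probability falls geometrically with the number of copies — a short sketch with an invented loss probability:

```python
def outage(q: float, k: int) -> float:
    """Probability that all k copies of a measurement are lost,
    assuming each copy is lost independently with probability q."""
    return q ** k

q = 0.1                      # hypothetical per-copy loss probability
single = outage(q, 1)        # ~1e-1 outage with no redundancy
redundant = outage(q, 7)     # ~1e-7: six orders of magnitude lower
```

This is only the idealized intuition: in a congested LoRa band the losses of consecutive copies are correlated through interference, which is exactly why the paper's redundancy allocation is interference-aware rather than a fixed copy count.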
1904.06072 | 2936333464 | We consider a multipoint-to-point network in which sensors periodically send measurements to a gateway. The system uses Long Range (LoRa) communications in a frequency band with duty-cycle limits. Our aim is to enhance the reliability of the measurement transmissions. In this setting, retransmission protocols do not scale well with the number of sensors as the duty cycle limit prevents a gateway from acknowledging all receptions if there are many sensors. We thus intend to improve the reliability without acknowledgments by transmitting multiple copies of a measurement, so that the gateway is able to obtain this measurement as long as it receives at least one copy. Each frame includes the current and a few past measurements. We propose a strategy for redundancy allocation that takes into account the effects of fading and interference to determine the number of measurements to be included in a frame. Numerical results obtained using the simulation tool LoRaSim show that the allocation of redundancy provides up to six orders of magnitude decrease in the outage probability. Compared to a system that blindly allocates the maximum redundancy possible under duty-cycle and delay constraints of the gateway and memory constraints of the sensors, our technique provides up to 30% reduction in the average energy spent to successfully deliver a measurement to the gateway. | There is a wide range of results available on interference analysis based on tools from stochastic geometry @cite_8 @cite_9 . They consider the sum interference power as opposed to the power of the strongest interferer. Since only the latter is important for LoRa due to the capture effect @cite_2 , these results are not applicable for our purpose. Results on the outage probability of LoRa that take into account the capture effect are given in @cite_2 . The scenario analyzed in @cite_2 comprises a random number of nodes distributed over a circular region with the gateway at the center. 
In contrast, our work is motivated mainly by use cases pertaining to industrial monitoring in which a known number of sensors are placed over a small area. The area could be a production floor, a processing unit, a storage facility, or a collection of entities equipped with many sensors to periodically measure physical quantities of interest. The gateway could be located at an office building and wired to a computational unit where the sensor data are processed and monitored. | {
"cite_N": [
"@cite_9",
"@cite_2",
"@cite_8"
],
"mid": [
"2065627966",
"2537187694",
"2092609929"
],
"abstract": [
"We analyze the performance of an interference-limited decode-and-forward cooperative relaying system that comprises a source, a destination, and @math relays, arbitrarily placed on the plane and suffering from interference by a set of interferers placed according to a spatial Poisson process. In each transmission attempt, first, the transmitter sends a packet; subsequently, a single one of the relays that received the packet correctly, if such a relay exists, retransmits it. We consider both selection combining and maximal ratio combining at the destination, Rayleigh fading, and interferer mobility. We derive expressions for the probability that a single transmission attempt is successful, as well as for the distribution of the transmission attempts until a packet is successfully transmitted. Results provide design guidelines applicable to a wide range of systems. Overall, the temporal and spatial characteristics of the interference play a significant role in shaping the system performance. Maximal ratio combining is only helpful when relays are close to the destination; in harsh environments, having many relays is particularly helpful, and relay placement is critical; the performance improves when interferer mobility increases; and a tradeoff exists between energy efficiency and throughput.",
"Low power wide area (LPWA) networks are making spectacular progress from design, standardization, to commercialization. At this time of fast-paced adoption, it is of utmost importance to analyze how well these technologies will scale as the number of devices connected to the Internet of Things inevitably grows. In this letter, we provide a stochastic geometry framework for modeling the performance of a single gateway LoRa network, a leading LPWA technology. Our analysis formulates the unique peculiarities of LoRa, including its chirp spread-spectrum modulation technique, regulatory limitations on radio duty cycle, and use of ALOHA protocol on top, all of which are not as common in today’s commercial cellular networks. We show that the coverage probability drops exponentially as the number of end-devices grows due to interfering signals using the same spreading sequence. We conclude that this fundamental limiting factor is perhaps more significant toward LoRa scalability than for instance spectrum restrictions. Our derivations for co-spreading factor interference found in LoRa networks enables rigorous scalability analysis of such networks.",
"The temporal correlation of interference is a key performance factor of several technologies and protocols for wireless communications. A comprehensive understanding of interference correlation is especially important in the design of diversity schemes, whose performance can severely degrade in case of highly correlated interference. Taking into account three sources of correlation-node locations, channel, and traffic-and using common modeling assumptions-random homogeneous node positions, Rayleigh block fading, and slotted ALOHA traffic-we derive closed-form expressions and calculation rules for the correlation coefficient of the overall interference power received at a certain point in space. Plots give an intuitive understanding as to how model parameters influence the interference correlation."
]
} |
1904.06145 | 2940656073 | We build on recent advances in progressively growing generative autoencoder models. These models can encode and reconstruct existing images, and generate novel ones, at resolutions comparable to Generative Adversarial Networks (GANs), while consisting only of a single encoder and decoder network. The ability to reconstruct and arbitrarily modify existing samples such as images separates autoencoder models from GANs, but the output quality of image autoencoders has remained inferior. The recently proposed PIONEER autoencoder can reconstruct faces in the @math CelebAHQ dataset, but like IntroVAE, another recent method, it often loses the identity of the person in the process. We propose an improved and simplified version of PIONEER and show significantly improved quality and preservation of the face identity in CelebAHQ, both visually and quantitatively. We also show evidence of state-of-the-art disentanglement of the latent space of the model, both quantitatively and via realistic image feature manipulations. On the LSUN Bedrooms dataset, our model also improves the results of the original PIONEER. Overall, our results indicate that the PIONEER networks provide a way to photorealistic face manipulation. | During the recent years there has been rapid progress related to GAN-based models and applications. Many recent improvements improve the stability and robustness of the training process by suggesting new loss functions @cite_37 , regularization methods @cite_14 @cite_30 @cite_4 , multi-resolution training @cite_33 @cite_12 , architectures @cite_18 , or combinations of these. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_14",
"@cite_4",
"@cite_33",
"@cite_18",
"@cite_12"
],
"mid": [
"2953141406",
"2739748921",
"2785678896",
"2783391889",
"2766527293",
"2904367110",
""
],
"abstract": [
"Deep generative models based on Generative Adversarial Networks (GANs) have demonstrated impressive sample quality but in order to work they require a careful choice of architecture, parameter initialization, and selection of hyper-parameters. This fragility is in part due to a dimensional mismatch or non-overlapping support between the model distribution and the data distribution, causing their density ratio and the associated f-divergence to be undefined. We overcome this fundamental limitation and propose a new regularization approach with low computational cost that yields a stable GAN training procedure. We demonstrate the effectiveness of this regularizer across several architectures trained on common benchmark image generation tasks. Our regularization turns GAN models into reliable building blocks for deep learning.",
"",
"One of the challenges in the study of generative adversarial networks is the instability of its training. In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator. Our new normalization technique is computationally light and easy to incorporate into existing implementations. We tested the efficacy of spectral normalization on CIFAR10, STL-10, and ILSVRC2012 dataset, and we experimentally confirmed that spectrally normalized GANs (SN-GANs) is capable of generating images of better or equal quality relative to the previous training stabilization techniques.",
"Recent work has shown local convergence of GAN training for absolutely continuous data and generator distributions. In this note we show that the requirement of absolute continuity is necessary: we describe a simple yet prototypical counterexample showing that in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is generally not convergent. Furthermore, we discuss recent regularization strategies that were proposed to stabilize GAN training. Our analysis shows that while GAN training with instance noise or gradient penalties converges, Wasserstein-GANs and Wasserstein-GANs-GP with a finite number of discriminator updates per generator update do in general not converge to the equilibrium point. We explain these results and show that both instance noise and gradient penalties constitute solutions to the problem of purely imaginary eigenvalues of the Jacobian of the gradient vector field. Based on our analysis, we also propose a simplified gradient penalty with the same effects on local convergence as more complicated penalties.",
"We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.",
"We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.",
""
]
} |
1904.06145 | 2940656073 | We build on recent advances in progressively growing generative autoencoder models. These models can encode and reconstruct existing images, and generate novel ones, at resolutions comparable to Generative Adversarial Networks (GANs), while consisting only of a single encoder and decoder network. The ability to reconstruct and arbitrarily modify existing samples such as images separates autoencoder models from GANs, but the output quality of image autoencoders has remained inferior. The recently proposed PIONEER autoencoder can reconstruct faces in the @math CelebAHQ dataset, but like IntroVAE, another recent method, it often loses the identity of the person in the process. We propose an improved and simplified version of PIONEER and show significantly improved quality and preservation of the face identity in CelebAHQ, both visually and quantitatively. We also show evidence of state-of-the-art disentanglement of the latent space of the model, both quantitatively and via realistic image feature manipulations. On the LSUN Bedrooms dataset, our model also improves the results of the original PIONEER. Overall, our results indicate that the PIONEER networks provide a way to photorealistic face manipulation. | Thus, there have been many efforts to combine GANs with autoencoder models (e.g., @cite_0 @cite_10 @cite_38 @cite_15 ). For instance, @cite_38 and @cite_15 proposed utilizing three deep networks in order to learn functions that enable mappings between the data space and the latent space in both directions. That is, besides the typical autoencoder architecture, consisting of a decoder (i.e. generator) and encoder networks, their approach uses an additional discriminator network, which is trained to classify tuples of image samples with their latent codes. Other authors introduce additional discriminator networks besides the generator and encoder. For example, @cite_21 @cite_0 use a GAN-like discriminator in sample space and @cite_6 @cite_31 in latent space. Nevertheless, the image synthesis performance of these hybrid models has not yet been shown to match the state of the art of purely generative models @cite_33 @cite_1 . | {
"cite_N": [
"@cite_38",
"@cite_31",
"@cite_33",
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_0",
"@cite_15",
"@cite_10"
],
"mid": [
"2412320034",
"2952673310",
"2766527293",
"2202109488",
"",
"",
"2964144352",
"2411541852",
"2624918875"
],
"abstract": [
"The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution. Intuitively, models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their existing form, GANs have no means of learning the inverse mapping -- projecting data back into the latent space. We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.",
"Variational Autoencoders (VAEs) are expressive latent variable models that can be used to learn complex probability distributions from training data. However, the quality of the resulting model crucially relies on the expressiveness of the inference model. We introduce Adversarial Variational Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily expressive inference models. We achieve this by introducing an auxiliary discriminative network that allows to rephrase the maximum-likelihood-problem as a two-player game, hence establishing a principled connection between VAEs and Generative Adversarial Networks (GANs). We show that in the nonparametric limit our method yields an exact maximum-likelihood assignment for the parameters of the generative model, as well as the exact posterior distribution over the latent variables given an observation. Contrary to competing approaches which combine VAEs with GANs, our approach has a clear theoretical justification, retains most advantages of standard Variational Autoencoders and is easy to implement.",
"We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.",
"We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.",
"",
"",
"The increasingly photorealistic sample quality of generative image models suggests their feasibility in applications beyond image generation. We present the Neural Photo Editor, an interface that leverages the power of generative neural networks to make large, semantically coherent changes to existing images. To tackle the challenge of achieving accurate reconstructions without loss of feature quality, we introduce the Introspective Adversarial Network, a novel hybridization of the VAE and GAN. Our model efficiently captures long-range dependencies through use of a computational block based on weight-shared dilated convolutions, and improves generalization performance with Orthogonal Regularization, a novel weight regularization method. We validate our contributions on CelebA, SVHN, and CIFAR-100, and produce samples and reconstructions with high visual fidelity.",
"We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process. The generation network maps samples from stochastic latent variables to the data space while the inference network maps training examples in data space to the space of latent variables. An adversarial game is cast between these two networks and a discriminative network is trained to distinguish between joint latent data-space samples from the generative network and joint samples from the inference network. We illustrate the ability of the model to learn mutually coherent inference and generation networks through the inspections of model samples and reconstructions and confirm the usefulness of the learned representations by obtaining a performance competitive with state-of-the-art on the semi-supervised SVHN and CIFAR10 tasks.",
"Auto-encoding generative adversarial networks (GANs) combine the standard GAN algorithm, which discriminates between real and model-generated data, with a reconstruction loss given by an auto-encoder. Such models aim to prevent mode collapse in the learned generative model by ensuring that it is grounded in all the available training data. In this paper, we develop a principle upon which auto-encoders can be combined with generative adversarial networks by exploiting the hierarchical structure of the generative model. The underlying principle shows that variational inference can be used a basic tool for learning, but with the in- tractable likelihood replaced by a synthetic likelihood, and the unknown posterior distribution replaced by an implicit distribution; both synthetic likelihoods and implicit posterior distributions can be learned using discriminators. This allows us to develop a natural fusion of variational auto-encoders and generative adversarial networks, combining the best of both these methods. We describe a unified objective for optimization, discuss the constraints needed to guide learning, connect to the wide range of existing work, and use a battery of tests to systematically and quantitatively assess the performance of our method."
]
} |
1904.06145 | 2940656073 | We build on recent advances in progressively growing generative autoencoder models. These models can encode and reconstruct existing images, and generate novel ones, at resolutions comparable to Generative Adversarial Networks (GANs), while consisting only of a single encoder and decoder network. The ability to reconstruct and arbitrarily modify existing samples such as images separates autoencoder models from GANs, but the output quality of image autoencoders has remained inferior. The recently proposed PIONEER autoencoder can reconstruct faces in the @math CelebAHQ dataset, but like IntroVAE, another recent method, it often loses the identity of the person in the process. We propose an improved and simplified version of PIONEER and show significantly improved quality and preservation of the face identity in CelebAHQ, both visually and quantitatively. We also show evidence of state-of-the-art disentanglement of the latent space of the model, both quantitatively and via realistic image feature manipulations. On the LSUN Bedrooms dataset, our model also improves the results of the original PIONEER. Overall, our results indicate that the PIONEER networks provide a way to photorealistic face manipulation. | In this paper, we build upon @cite_3 , based on the adversarial generator--encoder (AGE) @cite_17 . In contrast to many other previous works, these two models consist of only two deep networks, a generator and an encoder, which represent the mappings between the image space and latent space. In addition, the method of progressive network growing, adapted from @cite_33 , is utilized in @cite_3 . | {
"cite_N": [
"@cite_33",
"@cite_3",
"@cite_17"
],
"mid": [
"2766527293",
"2881214865",
"2684734919"
],
"abstract": [
"We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.",
"We introduce a novel generative autoencoder network model that learns to encode and reconstruct images with high quality and resolution, and supports smooth random sampling from the latent space of the encoder. Generative adversarial networks (GANs) are known for their ability to simulate random high-quality images, but they cannot reconstruct existing images. Previous works have attempted to extend GANs to support such inference but, so far, have not delivered satisfactory high-quality results. Instead, we propose the Progressively Growing Generative Autoencoder (Pioneer) network which achieves high-quality reconstruction with (128 128 ) images without requiring a GAN discriminator. We merge recent techniques for progressively building up the parts of the network with the recently introduced adversarial encoder–generator network. The ability to reconstruct input images is crucial in many real-world applications, and allows for precise intelligent manipulation of existing images. We show promising results in image synthesis and inference, with state-of-the-art results in CelebA inference tasks.",
"We present a new autoencoder-type architecture that is trainable in an unsupervised mode, sustains both generation and inference, and has the quality of conditional and unconditional samples boosted by adversarial learning. Unlike previous hybrids of autoencoders and adversarial networks, the adversarial game in our approach is set up directly between the encoder and the generator, and no external mappings are trained in the process of learning. The game objective compares the divergences of each of the real and the generated data distributions with the prior distribution in the latent space. We show that direct generator-vs-encoder game leads to a tight coupling of the two components, resulting in samples and reconstructions of a comparable quality to some recently-proposed more complex architectures."
]
} |
1904.06145 | 2940656073 | We build on recent advances in progressively growing generative autoencoder models. These models can encode and reconstruct existing images, and generate novel ones, at resolutions comparable to Generative Adversarial Networks (GANs), while consisting only of a single encoder and decoder network. The ability to reconstruct and arbitrarily modify existing samples such as images separates autoencoder models from GANs, but the output quality of image autoencoders has remained inferior. The recently proposed PIONEER autoencoder can reconstruct faces in the @math CelebAHQ dataset, but like IntroVAE, another recent method, it often loses the identity of the person in the process. We propose an improved and simplified version of PIONEER and show significantly improved quality and preservation of the face identity in CelebAHQ, both visually and quantitatively. We also show evidence of state-of-the-art disentanglement of the latent space of the model, both quantitatively and via realistic image feature manipulations. On the LSUN Bedrooms dataset, our model also improves the results of the original PIONEER. Overall, our results indicate that the PIONEER networks provide a way to photorealistic face manipulation. | The results of @cite_3 are promising, and both synthesis and reconstruction have good quality in relatively high image resolutions. However, in this paper, we show that @cite_3 suffers from large fluctuations of the competing divergence terms of the adversarial loss, and this seems to hamper optimization and convergence thereby limiting performance. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2881214865"
],
"abstract": [
"We introduce a novel generative autoencoder network model that learns to encode and reconstruct images with high quality and resolution, and supports smooth random sampling from the latent space of the encoder. Generative adversarial networks (GANs) are known for their ability to simulate random high-quality images, but they cannot reconstruct existing images. Previous works have attempted to extend GANs to support such inference but, so far, have not delivered satisfactory high-quality results. Instead, we propose the Progressively Growing Generative Autoencoder (Pioneer) network which achieves high-quality reconstruction with (128 128 ) images without requiring a GAN discriminator. We merge recent techniques for progressively building up the parts of the network with the recently introduced adversarial encoder–generator network. The ability to reconstruct input images is crucial in many real-world applications, and allows for precise intelligent manipulation of existing images. We show promising results in image synthesis and inference, with state-of-the-art results in CelebA inference tasks."
]
} |
1904.06145 | 2940656073 | We build on recent advances in progressively growing generative autoencoder models. These models can encode and reconstruct existing images, and generate novel ones, at resolutions comparable to Generative Adversarial Networks (GANs), while consisting only of a single encoder and decoder network. The ability to reconstruct and arbitrarily modify existing samples such as images separates autoencoder models from GANs, but the output quality of image autoencoders has remained inferior. The recently proposed PIONEER autoencoder can reconstruct faces in the @math CelebAHQ dataset, but like IntroVAE, another recent method, it often loses the identity of the person in the process. We propose an improved and simplified version of PIONEER and show significantly improved quality and preservation of the face identity in CelebAHQ, both visually and quantitatively. We also show evidence of state-of-the-art disentanglement of the latent space of the model, both quantitatively and via realistic image feature manipulations. On the LSUN Bedrooms dataset, our model also improves the results of the original PIONEER. Overall, our results indicate that the PIONEER networks provide a way to photorealistic face manipulation. | Besides @cite_3 , other recent and related works are IntroVAE @cite_23 and GLOW @cite_24 . Being based on VAE, IntroVAE is fundamentally different from our model, which is based on AGE. Based on the contributions of our paper, we are able to show that AGE-based generative models are capable of producing competitive results at @math resolution and with performance comparable to IntroVAE. This is in contrast to the observations in @cite_23 , where the authors were not able to make AGE training converge with large image resolutions. This finding is particularly promising since the model has a simpler yet more powerful architecture than the corresponding purely generative model PGGAN @cite_33 . It shows that the conventional GAN paradigm of a separate discriminator network is not necessary for learning to infer and generate image data sets. | {
"cite_N": [
"@cite_24",
"@cite_23",
"@cite_33",
"@cite_3"
],
"mid": [
"",
"2884581909",
"2766527293",
"2881214865"
],
"abstract": [
"",
"We present a novel introspective variational autoencoder (IntroVAE) model for synthesizing high-resolution photographic images. IntroVAE is capable of self-evaluating the quality of its generated samples and improving itself accordingly. Its inference and generator models are jointly trained in an introspective way. On one hand, the generator is required to reconstruct the input images from the noisy outputs of the inference model as normal VAEs. On the other hand, the inference model is encouraged to classify between the generated and real samples while the generator tries to fool it as GANs. These two famous generative frameworks are integrated in a simple yet efficient single-stream architecture that can be trained in a single stage. IntroVAE preserves the advantages of VAEs, such as stable training and nice latent manifold. Unlike most other hybrid models of VAEs and GANs, IntroVAE requires no extra discriminators, because the inference model itself serves as a discriminator to distinguish between the generated and real samples. Experiments demonstrate that our method produces high-resolution photo-realistic images (e.g., CELEBA images at (1024^ 2 )), which are comparable to or better than the state-of-the-art GANs.",
"We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.",
"We introduce a novel generative autoencoder network model that learns to encode and reconstruct images with high quality and resolution, and supports smooth random sampling from the latent space of the encoder. Generative adversarial networks (GANs) are known for their ability to simulate random high-quality images, but they cannot reconstruct existing images. Previous works have attempted to extend GANs to support such inference but, so far, have not delivered satisfactory high-quality results. Instead, we propose the Progressively Growing Generative Autoencoder (Pioneer) network which achieves high-quality reconstruction with (128 128 ) images without requiring a GAN discriminator. We merge recent techniques for progressively building up the parts of the network with the recently introduced adversarial encoder–generator network. The ability to reconstruct input images is crucial in many real-world applications, and allows for precise intelligent manipulation of existing images. We show promising results in image synthesis and inference, with state-of-the-art results in CelebA inference tasks."
]
} |
1904.06145 | 2940656073 | We build on recent advances in progressively growing generative autoencoder models. These models can encode and reconstruct existing images, and generate novel ones, at resolutions comparable to Generative Adversarial Networks (GANs), while consisting only of a single encoder and decoder network. The ability to reconstruct and arbitrarily modify existing samples such as images separates autoencoder models from GANs, but the output quality of image autoencoders has remained inferior. The recently proposed PIONEER autoencoder can reconstruct faces in the @math CelebAHQ dataset, but like IntroVAE, another recent method, it often loses the identity of the person in the process. We propose an improved and simplified version of PIONEER and show significantly improved quality and preservation of the face identity in CelebAHQ, both visually and quantitatively. We also show evidence of state-of-the-art disentanglement of the latent space of the model, both quantitatively and via realistic image feature manipulations. On the LSUN Bedrooms dataset, our model also improves the results of the original PIONEER. Overall, our results indicate that the PIONEER networks provide a way to photorealistic face manipulation. | In terms of image manipulation, we point out that our model learns to manipulate image attributes in a fully unsupervised manner, in contrast to supervised approaches where the class information is provided during training ( . @cite_7 @cite_11 ), and to models only capable of specific discrete domain transformations @cite_19 @cite_34 @cite_35 @cite_25 @cite_22 . In the unsupervised line of work, @cite_36 uses only low resolution, whereas the GAN of @cite_18 shows high resolution, but has no encoder. | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_22",
"@cite_7",
"@cite_36",
"@cite_19",
"@cite_34",
"@cite_25",
"@cite_11"
],
"mid": [
"2608015370",
"2904367110",
"",
"2962752582",
"2787273002",
"2962793481",
"2951939904",
"2768626898",
""
],
"abstract": [
"Conditional Generative Adversarial Networks (GANs) for cross-domain image-to-image translation have made much progress recently. Depending on the task complexity, thousands to millions of labeled image pairs are needed to train a conditional GAN. However, human labeling is expensive, even impractical, and large quantities of data may not always be available. Inspired by dual learning from natural language translation, we develop a novel dual-GAN mechanism, which enables image translators to be trained from two sets of unlabeled images from two domains. In our architecture, the primal GAN learns to translate images from domain U to those in domain V, while the dual GAN learns to invert the task. The closed loop made by the primal and dual tasks allows images from either domain to be translated and then reconstructed. Hence a loss function that accounts for the reconstruction error of images can be used to train the translators. Experiments on multiple image translation tasks with unlabeled data show considerable performance gain of DualGAN over a single GAN. For some tasks, DualGAN can even achieve comparable or slightly better results than conditional GAN trained on fully labeled data.",
"We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.",
"",
"This paper introduces a new encoder-decoder architecture that is trained to reconstruct images by disentangling the salient information of the image and the values of attributes directly in the latent space. As a result, after training, our model can generate different realistic versions of an input image by varying the attribute values. By using continuous attribute values, we can choose how much a specific attribute is perceivable in the generated image. This property could allow for applications where users can modify an image using sliding knobs, like faders on a mixing console, to change the facial expression of a portrait, or to update the color of some objects. Compared to the state-of-the-art which mostly relies on training adversarial networks in pixel space by altering attribute values at train time, our approach results in much simpler training schemes and nicely scales to multiple attributes. We present evidence that our model can significantly change the perceived value of the attributes while preserving the naturalness of images.",
"We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate our @math -TCVAE (Total Correlation Variational Autoencoder), a refinement of the state-of-the-art @math -VAE objective for learning disentangled representations, requiring no additional hyperparameters during training. We further propose a principled classifier-free measure of disentanglement called the mutual information gap (MIG). We perform extensive quantitative and qualitative experiments, in both restricted and non-restricted settings, and show a strong relation between total correlation and disentanglement, when the latent variables model is trained using our framework.",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"While humans easily recognize relations between data from different domains without any supervision, learning to automatically discover them is in general very challenging and needs many ground-truth pairs that illustrate the relations. To avoid costly pairing, we address the task of discovering cross-domain relations given unpaired data. We propose a method based on generative adversarial networks that learns to discover relations between different domains (DiscoGAN). Using the discovered relations, our proposed network successfully transfers style from one domain to another while preserving key attributes such as orientation and face identity. Source code for official implementation is publicly available this https URL",
"Recent studies have shown remarkable success in image-to-image translation for two domains. However, existing approaches have limited scalability and robustness in handling more than two domains, since different models should be built independently for every pair of image domains. To address this limitation, we propose StarGAN, a novel and scalable approach that can perform image-to-image translations for multiple domains using only a single model. Such a unified model architecture of StarGAN allows simultaneous training of multiple datasets with different domains within a single network. This leads to StarGAN's superior quality of translated images compared to existing models as well as the novel capability of flexibly translating an input image to any desired target domain. We empirically demonstrate the effectiveness of our approach on a facial attribute transfer and a facial expression synthesis tasks.",
""
]
} |
1904.06040 | 2936075131 | Automated digital histopathology image segmentation is an important task to help pathologists diagnose tumors and cancer subtypes. For pathological diagnosis of cancer subtypes, pathologists usually change the magnification of whole-slide images (WSI) viewers. A key assumption is that the importance of the magnifications depends on the characteristics of the input image, such as cancer subtypes. In this paper, we propose a novel semantic segmentation method, called Adaptive-Weighting-Multi-Field-of-View-CNN (AWMF-CNN), that can adaptively use image features from images with different magnifications to segment multiple cancer subtype regions in the input image. The proposed method aggregates several expert CNNs for images of different magnifications by adaptively changing the weight of each expert depending on the input image. It leverages information in the images with different magnifications that might be useful for identifying the subtypes. It outperformed other state-of-the-art methods in experiments. | Weighting (Gating): We propose an aggregation method that can adaptively weight multiple CNNs trained with different field-of-view images. There are several methods that can adaptively weight the effective channels @cite_1 , pixels (location) @cite_16 , and scales @cite_33 @cite_43 in a single network. Hu @cite_1 proposed SENet which adaptively weights the channel-wise feature responses by explicitly modeling interdependencies between channels. Their network improves state-of-the-art deep learning architectures. Sam @cite_20 proposed a hard-switch-CNN network that chooses a single optimal regressor from among several independent regressors. It was used for counting the number of people in patch images. Kumagai @cite_12 proposed a mixture of counting CNNs for the regression problem of estimating the number of people in an image. | {
"cite_N": [
"@cite_33",
"@cite_1",
"@cite_43",
"@cite_16",
"@cite_12",
"@cite_20"
],
"mid": [
"2798791840",
"",
"2963284331",
"2951675964",
"2604785935",
"2741077351"
],
"abstract": [
"Scene segmentation is a challenging task as it needs to label every pixel in the image. It is crucial to exploit discriminative context and aggregate multi-scale features to achieve better segmentation. In this paper, we first propose a novel context contrasted local feature that not only leverages the informative context but also spotlights the local information in contrast to the context. The proposed context contrasted local feature greatly improves the parsing performance, especially for inconspicuous objects and background stuff. Furthermore, we propose a scheme of gated sum to selectively aggregate multi-scale features for each spatial position. The gates in this scheme control the information flow of different scale features. Their values are generated from the testing image by the proposed network learnt from the training data so that they are adaptive not only to the training data, but also to the specific testing image. Without bells and whistles, the proposed approach achieves the state-of-the-arts consistently on the three popular scene segmentation datasets, Pascal Context, SUN-RGBD and COCO Stuff.",
"",
"We propose the autofocus convolutional layer for semantic segmentation with the objective of enhancing the capabilities of neural networks for multi-scale processing. Autofocus layers adaptively change the size of the effective receptive field based on the processed context to generate more powerful features. This is achieved by parallelising multiple convolutional layers with different dilation rates, combined by an attention mechanism that learns to focus on the optimal scales driven by context. By sharing the weights of the parallel convolutions we make the network scale-invariant, with only a modest increase in the number of parameters. The proposed autofocus layer can be easily integrated into existing networks to improve a model’s representational power. We evaluate our mod els on the challenging tasks of multi-organ segmentation in pelvic CT and brain tumor segmentation in MRI and achieve very promising performance.",
"Lossy image compression is generally formulated as a joint rate-distortion optimization to learn encoder, quantizer, and decoder. However, the quantizer is non-differentiable, and discrete entropy estimation usually is required for rate control. These make it very challenging to develop a convolutional network (CNN)-based image compression system. In this paper, motivated by that the local information content is spatially variant in an image, we suggest that the bit rate of the different parts of the image should be adapted to local content. And the content aware bit rate is allocated under the guidance of a content-weighted importance map. Thus, the sum of the importance map can serve as a continuous alternative of discrete entropy estimation to control compression rate. And binarizer is adopted to quantize the output of encoder due to the binarization scheme is also directly defined by the importance map. Furthermore, a proxy function is introduced for binary operation in backward propagation to make it differentiable. Therefore, the encoder, decoder, binarizer and importance map can be jointly optimized in an end-to-end manner by using a subset of the ImageNet database. In low bit rate image compression, experiments show that our system significantly outperforms JPEG and JPEG 2000 by structural similarity (SSIM) index, and can produce the much better visual result with sharp edges, rich textures, and fewer artifacts.",
"This paper proposes a crowd counting method. Crowd counting is difficult because of large appearance changes of a target which are caused by density and scale changes. Conventional crowd counting methods generally utilize one predictor (e.g., regression and multi-class classifier). However, only one such predictor cannot count targets with large appearance changes well. In this paper, we propose to predict the number of targets using multiple CNNs specialized to a specific appearance, and those CNNs are adaptively selected according to the appearance of a test image. By integrating the selected CNNs, the proposed method has the robustness to large appearance changes. In experiments, we confirm that the proposed method can count crowds with lower counting error than a CNN and an integration of CNNs with fixed weights. Moreover, we confirm that each predictor is automatically specialized to a specific appearance.",
"We propose a novel crowd counting model that maps a given crowd scene to its density. Crowd analysis is compounded by myriad of factors like inter-occlusion between people due to extreme crowding, high similarity of appearance between people and background elements, and large variability of camera view-points. Current state-of-the art approaches tackle these factors by using multi-scale CNN architectures, recurrent networks and late fusion of features from multi-column CNN with different receptive fields. We propose switching convolutional neural network that leverages variation of crowd density within an image to improve the accuracy and localization of the predicted crowd count. Patches from a grid within a crowd scene are relayed to independent CNN regressors based on crowd count prediction quality of the CNN established during training. The independent CNN regressors are designed to have different receptive fields and a switch classifier is trained to relay the crowd scene patch to the best CNN regressor. We perform extensive experiments on all major crowd counting datasets and evidence better performance compared to current state-of-the-art methods. We provide interpretable representations of the multichotomy of space of crowd scene patches inferred from the switch. It is observed that the switch relays an image patch to a particular CNN column based on density of crowd."
]
} |
1904.06017 | 2937744573 | The condition assessment of road surfaces is essential to ensure their serviceability while still providing maximum road traffic safety. This paper presents a robust stereo vision system embedded in an unmanned aerial vehicle (UAV). The perspective view of the target image is first transformed into the reference view, and this not only improves the disparity accuracy, but also reduces the algorithm's computational complexity. The cost volumes generated from stereo matching are then filtered using a bilateral filter. The latter has been proved to be a feasible solution for the functional minimisation problem in a fully connected Markov random field model. Finally, the disparity maps are transformed by minimising an energy function with respect to the roll angle and disparity projection model. This makes the damaged road areas more distinguishable from the road surface. The proposed system is implemented on an NVIDIA Jetson TX2 GPU with CUDA for real-time purposes. It is demonstrated through experiments that the damaged road areas can be easily distinguished from the transformed disparity maps. | The two key aspects of computer stereo vision are speed and accuracy @cite_9 . A lot of research has been carried out over the past decades to improve either the disparity accuracy or the algorithm's computational complexity @cite_1 . The state-of-the-art stereo vision algorithms can be classified as convolutional neural network (CNN)-based @cite_12 @cite_35 @cite_22 @cite_11 @cite_17 and traditional @cite_1 @cite_38 @cite_24 @cite_37 @cite_27 @cite_15 . The former generally formulates disparity estimation as a binary classification problem and learns the probability distribution over all disparity values @cite_12 . For example, PSMNet @cite_32 generates the cost volumes by learning region-level features with different scales of receptive fields. 
Although these approaches have achieved some highly accurate disparity maps, they usually require a large amount of labelled training data to learn from. Therefore, it is impossible for them to work on the datasets without providing the disparity ground truth @cite_5 . Moreover, predicting disparities with CNNs is still a computationally intensive task, which usually takes seconds or even minutes to execute on state-of-the-art graphics cards @cite_9 . Therefore, the existing CNN-based stereo vision algorithms are not suitable for real-time applications. | {
"cite_N": [
"@cite_35",
"@cite_38",
"@cite_37",
"@cite_22",
"@cite_9",
"@cite_1",
"@cite_17",
"@cite_32",
"@cite_24",
"@cite_27",
"@cite_5",
"@cite_15",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"",
"",
"",
"2006943762",
"2792796963",
"",
"2794812000",
"",
"",
"2776033207",
"",
"2440384215",
""
],
"abstract": [
"",
"",
"",
"",
"A significant amount of research in the field of stereo vision has been published in the past decade. Considerable progress has been made in improving accuracy of results as well as achieving real-time performance in obtaining those results. This work provides a comprehensive review of stereo vision algorithms with specific emphasis on real-time performance to identify those suitable for resource-limited systems. An attempt has been made to compile and present accuracy and runtime performance data for all stereo vision algorithms developed in the past decade. Algorithms are grouped into three categories: (1) those that have published results of real-time or near real-time performance on standard processors, (2) those that have real-time performance on specialized hardware (i.e. GPU, FPGA, DSP, ASIC), and (3) those that have not been shown to obtain near real-time performance. This review is intended to aid those seeking algorithms suitable for real-time implementation on resource-limited systems, and to encourage further research and development of the same by providing a snapshot of the status quo.",
"Various 3D reconstruction methods have enabled civil engineers to detect damage on a road surface. To achieve the millimeter accuracy required for road condition assessment, a disparity map with subpixel resolution needs to be used. However, none of the existing stereo matching algorithms are specially suitable for the reconstruction of the road surface. Hence in this paper, we propose a novel dense subpixel disparity estimation algorithm with high computational efficiency and robustness. This is achieved by first transforming the perspective view of the target frame into the reference view, which not only increases the accuracy of the block matching for the road surface but also improves the processing speed. The disparities are then estimated iteratively using our previously published algorithm, where the search range is propagated from three estimated neighboring disparities. Since the search range is obtained from the previous iteration, errors may occur when the propagated search range is not sufficient. Therefore, a correlation maxima verification is performed to rectify this issue, and the subpixel resolution is achieved by conducting a parabola interpolation enhancement. Furthermore, a novel disparity global refinement approach developed from the Markov random fields and fast bilateral stereo is introduced to further improve the accuracy of the estimated disparity map, where disparities are updated iteratively by minimizing the energy function that is related to their interpolated correlation polynomials. The algorithm is implemented in C language with a near real-time performance. The experimental results illustrate that the absolute error of the reconstruction varies from 0.1 to 3 mm.",
"",
"Recent work has shown that depth estimation from a stereo pair of images can be formulated as a supervised learning task to be resolved with convolutional neural networks (CNNs). However, current architectures rely on patch-based Siamese networks, lacking the means to exploit context information for finding correspondence in illposed regions. To tackle this problem, we propose PSMNet, a pyramid stereo matching network consisting of two main modules: spatial pyramid pooling and 3D CNN. The spatial pyramid pooling module takes advantage of the capacity of global context information by aggregating context in different scales and locations to form a cost volume. The 3D CNN learns to regularize cost volume using stacked multiple hourglass networks in conjunction with intermediate supervision. The proposed approach was evaluated on several benchmark datasets. Our method ranked first in the KITTI 2012 and 2015 leaderboards before March 18, 2018. The codes of PSMNet are available at: this https URL",
"",
"",
"Convolutional neural networks showed the ability in stereo matching cost learning. Recent approaches learned parameters from public datasets that have ground truth disparity maps. Due to the difficulty of labeling ground truth depth, usable data for system training is rather limited, making it difficult to apply the system to real applications. In this paper, we present a framework for learning stereo matching costs without human supervision. Our method updates network parameters in an iterative manner. It starts with a randomly initialized network. Left-right check is adopted to guide the training. Suitable matching is then picked and used as training data in following iterations. Our system finally converges to a stable state and performs even comparably with other supervised methods.",
"",
"In the past year, convolutional neural networks have been shown to perform extremely well for stereo estimation. However, current architectures rely on siamese networks which exploit concatenation followed by further processing layers, requiring a minute of GPU computation per image pair. In contrast, in this paper we propose a matching network which is able to produce very accurate results in less than a second of GPU computation. Towards this goal, we exploit a product layer which simply computes the inner product between the two representations of a siamese architecture. We train our network by treating the problem as multi-class classification, where the classes are all possible disparities. This allows us to get calibrated scores, which result in much better matching performance when compared to existing approaches.",
""
]
} |
1904.06017 | 2937744573 | The condition assessment of road surfaces is essential to ensure their serviceability while still providing maximum road traffic safety. This paper presents a robust stereo vision system embedded in an unmanned aerial vehicle (UAV). The perspective view of the target image is first transformed into the reference view, and this not only improves the disparity accuracy, but also reduces the algorithm's computational complexity. The cost volumes generated from stereo matching are then filtered using a bilateral filter. The latter has been proved to be a feasible solution for the functional minimisation problem in a fully connected Markov random field model. Finally, the disparity maps are transformed by minimising an energy function with respect to the roll angle and disparity projection model. This makes the damaged road areas more distinguishable from the road surface. The proposed system is implemented on an NVIDIA Jetson TX2 GPU with CUDA for real-time purposes. It is demonstrated through experiments that the damaged road areas can be easily distinguished from the transformed disparity maps. | The traditional stereo vision algorithms can be classified as local, global and semi-global @cite_1 . The local algorithms typically select a series of blocks from the target image and match them with a constant block selected from the reference image @cite_1 . The disparities are then determined by finding the shifting distances corresponding to either the highest correlation or the lowest cost @cite_9 . This optimisation technique is also known as winner-take-all (WTA). | {
"cite_N": [
"@cite_9",
"@cite_1"
],
"mid": [
"2006943762",
"2792796963"
],
"abstract": [
"A significant amount of research in the field of stereo vision has been published in the past decade. Considerable progress has been made in improving accuracy of results as well as achieving real-time performance in obtaining those results. This work provides a comprehensive review of stereo vision algorithms with specific emphasis on real-time performance to identify those suitable for resource-limited systems. An attempt has been made to compile and present accuracy and runtime performance data for all stereo vision algorithms developed in the past decade. Algorithms are grouped into three categories: (1) those that have published results of real-time or near real-time performance on standard processors, (2) those that have real-time performance on specialized hardware (i.e. GPU, FPGA, DSP, ASIC), and (3) those that have not been shown to obtain near real-time performance. This review is intended to aid those seeking algorithms suitable for real-time implementation on resource-limited systems, and to encourage further research and development of the same by providing a snapshot of the status quo.",
"Various 3D reconstruction methods have enabled civil engineers to detect damage on a road surface. To achieve the millimeter accuracy required for road condition assessment, a disparity map with subpixel resolution needs to be used. However, none of the existing stereo matching algorithms are specially suitable for the reconstruction of the road surface. Hence in this paper, we propose a novel dense subpixel disparity estimation algorithm with high computational efficiency and robustness. This is achieved by first transforming the perspective view of the target frame into the reference view, which not only increases the accuracy of the block matching for the road surface but also improves the processing speed. The disparities are then estimated iteratively using our previously published algorithm, where the search range is propagated from three estimated neighboring disparities. Since the search range is obtained from the previous iteration, errors may occur when the propagated search range is not sufficient. Therefore, a correlation maxima verification is performed to rectify this issue, and the subpixel resolution is achieved by conducting a parabola interpolation enhancement. Furthermore, a novel disparity global refinement approach developed from the Markov random fields and fast bilateral stereo is introduced to further improve the accuracy of the estimated disparity map, where disparities are updated iteratively by minimizing the energy function that is related to their interpolated correlation polynomials. The algorithm is implemented in C language with a near real-time performance. The experimental results illustrate that the absolute error of the reconstruction varies from 0.1 to 3 mm."
]
} |
1904.06017 | 2937744573 | The condition assessment of road surfaces is essential to ensure their serviceability while still providing maximum road traffic safety. This paper presents a robust stereo vision system embedded in an unmanned aerial vehicle (UAV). The perspective view of the target image is first transformed into the reference view, and this not only improves the disparity accuracy, but also reduces the algorithm's computational complexity. The cost volumes generated from stereo matching are then filtered using a bilateral filter. The latter has been proved to be a feasible solution for the functional minimisation problem in a fully connected Markov random field model. Finally, the disparity maps are transformed by minimising an energy function with respect to the roll angle and disparity projection model. This makes the damaged road areas more distinguishable from the road surface. The proposed system is implemented on an NVIDIA Jetson TX2 GPU with CUDA for real-time purposes. It is demonstrated through experiments that the damaged road areas can be easily distinguished from the transformed disparity maps. | Unlike the local algorithms, the global algorithms generally translate stereo matching into an energy minimisation problem, which can later be addressed using sophisticated optimisation techniques, , belief propagation (BP) @cite_19 and graph cuts (GC) @cite_29 . These techniques are commonly developed based on the Markov random field (MRF) @cite_20 . Semi-global matching (SGM) @cite_28 approximates the MRF inference by performing cost aggregation along all directions in the image, and this greatly improves the accuracy and efficiency of stereo matching. However, finding the optimum smoothness values is a challenging task, due to the occlusion problem @cite_26 . Over-penalising the smoothness term can reduce ambiguities around the discontinuous areas, but on the other hand, can cause incorrect matches for the continuous areas @cite_1 . 
Furthermore, the computational complexities of the aforementioned optimisation techniques are significantly intensive, making these algorithms difficult to perform in real time @cite_9 . | {
"cite_N": [
"@cite_26",
"@cite_28",
"@cite_29",
"@cite_9",
"@cite_1",
"@cite_19",
"@cite_20"
],
"mid": [
"",
"2117248802",
"2143516773",
"2006943762",
"2792796963",
"2163364417",
"2169282664"
],
"abstract": [
"",
"This paper describes the semiglobal matching (SGM) stereo method. It uses a pixelwise, mutual information (MI)-based matching cost for compensating radiometric differences of input images. Pixelwise matching is supported by a smoothness constraint that is usually expressed as a global cost function. SGM performs a fast approximation by pathwise optimizations from all directions. The discussion also addresses occlusion detection, subpixel refinement, and multibaseline matching. Additionally, postprocessing steps for removing outliers, recovering from specific problems of structured environments, and the interpolation of gaps are presented. Finally, strategies for processing almost arbitrarily large images and fusion of disparity images using orthographic projection are proposed. A comparison on standard stereo images shows that SGM is among the currently top-ranked algorithms and is best, if subpixel accuracy is considered. The complexity is linear to the number of pixels and disparity range, which results in a runtime of just 1-2 seconds on typical test images. An in depth evaluation of the MI-based matching cost demonstrates a tolerance against a wide range of radiometric transformations. Finally, examples of reconstructions from huge aerial frame and pushbroom images demonstrate that the presented ideas are working well on practical problems.",
"Many tasks in computer vision involve assigning a label (such as disparity) to every pixel. A common constraint is that the labels should vary smoothly almost everywhere while preserving sharp discontinuities that may exist, e.g., at object boundaries. These tasks are naturally stated in terms of energy minimization. The authors consider a wide class of energies with various smoothness constraints. Global minimization of these energy functions is NP-hard even in the simplest discontinuity-preserving case. Therefore, our focus is on efficient approximation algorithms. We present two algorithms based on graph cuts that efficiently find a local minimum with respect to two types of large moves, namely expansion moves and swap moves. These moves can simultaneously change the labels of arbitrarily large sets of pixels. In contrast, many standard algorithms (including simulated annealing) use small moves where only one pixel changes its label at a time. Our expansion algorithm finds a labeling within a known factor of the global minimum, while our swap algorithm handles more general energy functions. Both of these algorithms allow important cases of discontinuity preserving energies. We experimentally demonstrate the effectiveness of our approach for image restoration, stereo and motion. On real data with ground truth, we achieve 98 percent accuracy.",
"A significant amount of research in the field of stereo vision has been published in the past decade. Considerable progress has been made in improving accuracy of results as well as achieving real-time performance in obtaining those results. This work provides a comprehensive review of stereo vision algorithms with specific emphasis on real-time performance to identify those suitable for resource-limited systems. An attempt has been made to compile and present accuracy and runtime performance data for all stereo vision algorithms developed in the past decade. Algorithms are grouped into three categories: (1) those that have published results of real-time or near real-time performance on standard processors, (2) those that have real-time performance on specialized hardware (i.e. GPU, FPGA, DSP, ASIC), and (3) those that have not been shown to obtain near real-time performance. This review is intended to aid those seeking algorithms suitable for real-time implementation on resource-limited systems, and to encourage further research and development of the same by providing a snapshot of the status quo.",
"Various 3D reconstruction methods have enabled civil engineers to detect damage on a road surface. To achieve the millimeter accuracy required for road condition assessment, a disparity map with subpixel resolution needs to be used. However, none of the existing stereo matching algorithms are specially suitable for the reconstruction of the road surface. Hence in this paper, we propose a novel dense subpixel disparity estimation algorithm with high computational efficiency and robustness. This is achieved by first transforming the perspective view of the target frame into the reference view, which not only increases the accuracy of the block matching for the road surface but also improves the processing speed. The disparities are then estimated iteratively using our previously published algorithm, where the search range is propagated from three estimated neighboring disparities. Since the search range is obtained from the previous iteration, errors may occur when the propagated search range is not sufficient. Therefore, a correlation maxima verification is performed to rectify this issue, and the subpixel resolution is achieved by conducting a parabola interpolation enhancement. Furthermore, a novel disparity global refinement approach developed from the Markov random fields and fast bilateral stereo is introduced to further improve the accuracy of the estimated disparity map, where disparities are updated iteratively by minimizing the energy function that is related to their interpolated correlation polynomials. The algorithm is implemented in C language with a near real-time performance. The experimental results illustrate that the absolute error of the reconstruction varies from 0.1 to 3 mm.",
"Belief propagation (BP) is an increasingly popular method of performing approximate inference on arbitrary graphical models. At times, even further approximations are required, whether due to quantization of the messages or model parameters, from other simplified message or model representations, or from stochastic approximation methods. The introduction of such errors into the BP message computations has the potential to affect the solution obtained adversely. We analyze the effect resulting from message approximation under two particular measures of error, and show bounds on the accumulation of errors in the system. This analysis leads to convergence conditions for traditional BP message passing, and both strict bounds and estimates of the resulting error in systems of approximate BP message passing.",
"Recent stereo algorithms have achieved impressive results by modelling the disparity image as a Markov Random Field (MRF). An important component of an MRF-based approach is the inference algorithm used to find the most likely setting of each node in the MRF. Algorithms have been proposed which use graph cuts or belief propagation for inference. These stereo algorithms differ in both the inference algorithm used and the formulation of the MRF. It is unknown whether to attribute the responsibility for differences in performance to the MRF or the inference algorithm. We address this through controlled experiments by comparing the belief propagation algorithm and the graph cuts algorithm on the same MRF's, which have been created for calculating stereo disparities. We find that the labellings produced by the two algorithms are comparable. The solutions produced by graph cuts have a lower energy than those produced with belief propagation, but this does not necessarily lead to increased performance relative to the ground truth."
]
} |
1904.05979 | 2939598386 | Sounds originate from object motions and vibrations of surrounding air. Inspired by the fact that humans are capable of interpreting sound sources from how objects move visually, we propose a novel system that explicitly captures such motion cues for the task of sound localization and separation. Our system is composed of an end-to-end learnable model called Deep Dense Trajectory (DDT), and a curriculum learning scheme. It exploits the inherent coherence of audio-visual signals from large quantities of unlabeled videos. Quantitative and qualitative evaluations show that compared to previous models that rely on visual appearance cues, our motion-based system improves performance in separating musical instrument sounds. Furthermore, it separates sound components from duets of the same category of instruments, a challenging problem that has not been addressed before. | Sound source separation is a challenging classic problem, and is known as the "cocktail party problem" @cite_6 @cite_33 in the speech area. Algorithms based on Non-negative Matrix Factorization (NMF) @cite_10 @cite_12 @cite_45 were the major solutions to this problem. More recently, several deep learning methods have been proposed, where Wang et al. gave an overview @cite_22 of this series of approaches. Simpson et al. @cite_27 and Chandna et al. @cite_31 used CNNs to predict time-frequency masks for music source separation and enhancement. To solve the identity permutation problem in speech separation, Hershey et al. @cite_26 proposed a deep learning-based clustering method, and Yu et al. @cite_28 proposed a speaker-independent training scheme. While these solutions are inspiring, our setting is different from the previous ones in that we use additional visual signals to help with sound source separation. | {
"cite_N": [
"@cite_26",
"@cite_33",
"@cite_22",
"@cite_28",
"@cite_6",
"@cite_27",
"@cite_45",
"@cite_31",
"@cite_10",
"@cite_12"
],
"mid": [
"2950354455",
"",
"2749131051",
"2951295643",
"2143169494",
"2953311167",
"2104298926",
"2587994092",
"",
"1246381107"
],
"abstract": [
"We address the problem of acoustic source separation in a deep learning framework we call \"deep clustering.\" Rather than directly estimating signals or masking functions, we train a deep network to produce spectrogram embeddings that are discriminative for partition labels given in training data. Previous deep network approaches provide great advantages in terms of learning power and speed, but previously it has been unclear how to use them to separate signals in a class-independent way. In contrast, spectral clustering approaches are flexible with respect to the classes and number of items to be segmented, but it has been unclear how to leverage the learning power and speed of deep networks. To obtain the best of both worlds, we use an objective function that trains embeddings to yield a low-rank approximation to an ideal pairwise affinity matrix, in a class-independent way. This avoids the high cost of spectral factorization and instead produces compact clusters that are amenable to simple clustering methods. The segmentations are therefore implicitly encoded in the embeddings, and can be \"decoded\" by clustering. Preliminary experiments show that the proposed method can separate speech: when trained on spectrogram features containing mixtures of two speakers, and tested on mixtures of a held-out set of speakers, it can infer masking functions that improve signal quality by around 6dB. We show that the model can generalize to three-speaker mixtures despite training only on two-speaker mixtures. The framework can be used without class labels, and therefore has the potential to be trained on a diverse set of sound types, and to generalize to novel sources. We hope that future work will lead to segmentation of arbitrary sounds, with extensions to microphone array methods as well as image segmentation and other domains.",
"",
"Speech separation is the task of separating target speech from background interference. Traditionally, speech separation is studied as a signal processing problem. A more recent approach formulates speech separation as a supervised learning problem, where the discriminative patterns of speech, speakers, and background noise are learned from training data. Over the past decade, many supervised separation algorithms have been put forward. In particular, the recent introduction of deep learning to supervised speech separation has dramatically accelerated progress and boosted separation performance. This article provides a comprehensive overview of the research on deep learning based supervised speech separation in the last several years. We first introduce the background of speech separation and the formulation of supervised separation. Then we discuss three main components of supervised separation: learning machines, training targets, and acoustic features. Much of the overview is on separation algorithms where we review monaural methods, including speech enhancement (speech-nonspeech separation), speaker separation (multi-talker separation), and speech dereverberation, as well as multi-microphone techniques. The important issue of generalization, unique to supervised learning, is discussed. This overview provides a historical perspective on how advances are made. In addition, we discuss a number of conceptual issues, including what constitutes the target source.",
"We propose a novel deep learning model, which supports permutation invariant training (PIT), for speaker independent multi-talker speech separation, commonly known as the cocktail-party problem. Different from most of the prior arts that treat speech separation as a multi-class regression problem and the deep clustering technique that considers it a segmentation (or clustering) problem, our model optimizes for the separation regression error, ignoring the order of mixing sources. This strategy cleverly solves the long-lasting label permutation problem that has prevented progress on deep learning based techniques for speech separation. Experiments on the equal-energy mixing setup of a Danish corpus confirms the effectiveness of PIT. We believe improvements built upon PIT can eventually solve the cocktail-party problem and enable real-world adoption of, e.g., automatic meeting transcription and multi-party human-computer interaction, where overlapping speech is common.",
"This review presents an overview of a challenging problem in auditory perception, the cocktail party phenomenon, the delineation of which goes back to a classic paper by Cherry in 1953. In this review, we address the following issues: (1) human auditory scene analysis, which is a general process carried out by the auditory system of a human listener; (2) insight into auditory perception, which is derived from Marr's vision theory; (3) computational auditory scene analysis, which focuses on specific approaches aimed at solving the machine cocktail party problem; (4) active audition, the proposal for which is motivated by analogy with active vision, and (5) discussion of brain theory and independent component analysis, on the one hand, and correlative neural firing, on the other.",
"Identification and extraction of singing voice from within musical mixtures is a key challenge in source separation and machine audition. Recently, deep neural networks (DNN) have been used to estimate 'ideal' binary masks for carefully controlled cocktail party speech separation problems. However, it is not yet known whether these methods are capable of generalizing to the discrimination of voice and non-voice in the context of musical mixtures. Here, we trained a convolutional DNN (of around a billion parameters) to provide probabilistic estimates of the ideal binary mask for separation of vocal sounds from real-world musical mixtures. We contrast our DNN results with more traditional linear methods. Our approach may be useful for automatic removal of vocal sounds from musical mixtures for 'karaoke' type applications.",
"We present a methodology for analyzing polyphonic musical passages comprised of notes that exhibit a harmonically fixed spectral profile (such as piano notes). Taking advantage of this unique note structure, we can model the audio content of the musical passage by a linear basis transform and use non-negative matrix decomposition methods to estimate the spectral profile and the temporal information of every note. This approach results in a very simple and compact system that is not knowledge-based, but rather learns notes by observation.",
"In this paper we introduce a low-latency monaural source separation framework using a Convolutional Neural Network (CNN). We use a CNN to estimate time-frequency soft masks which are applied for source separation. We evaluate the performance of the neural network on a database comprising of musical mixtures of three instruments: voice, drums, bass as well as other instruments which vary from song to song. The proposed architecture is compared to a Multilayer Perceptron (MLP), achieving on-par results and a significant improvement in processing time. The algorithm was submitted to source separation evaluation campaigns to test efficiency, and achieved competitive results.",
"",
"This book provides a broad survey of models and efficient algorithms for Nonnegative Matrix Factorization (NMF). This includes NMFs various extensions and modifications, especially Nonnegative Tensor Factorizations (NTF) and Nonnegative Tucker Decompositions (NTD). NMF NTF and their extensions are increasingly used as tools in signal and image processing, and data analysis, having garnered interest due to their capability to provide new insights and relevant information about the complex latent relationships in experimental data sets. It is suggested that NMF can provide meaningful components with physical interpretations; for example, in bioinformatics, NMF and its extensions have been successfully applied to gene expression, sequence analysis, the functional characterization of genes, clustering and text mining. As such, the authors focus on the algorithms that are most useful in practice, looking at the fastest, most robust, and suitable for large-scale models. Key features: Acts as a single source reference guide to NMF, collating information that is widely dispersed in current literature, including the authors own recently developed techniques in the subject area. Uses generalized cost functions such as Bregman, Alpha and Beta divergences, to present practical implementations of several types of robust algorithms, in particular Multiplicative, Alternating Least Squares, Projected Gradient and Quasi Newton algorithms. Provides a comparative analysis of the different methods in order to identify approximation error and complexity. Includes pseudo codes and optimized MATLAB source codes for almost all algorithms presented in the book. 
The increasing interest in nonnegative matrix and tensor factorizations, as well as decompositions and sparse representation of data, will ensure that this book is essential reading for engineers, scientists, researchers, industry practitioners and graduate students across signal and image processing; neuroscience; data mining and data analysis; computer science; bioinformatics; speech processing; biomedical engineering; and multimedia."
]
} |
1904.05979 | 2939598386 | Sounds originate from object motions and vibrations of surrounding air. Inspired by the fact that humans are capable of interpreting sound sources from how objects move visually, we propose a novel system that explicitly captures such motion cues for the task of sound localization and separation. Our system is composed of an end-to-end learnable model called Deep Dense Trajectory (DDT), and a curriculum learning scheme. It exploits the inherent coherence of audio-visual signals from large quantities of unlabeled videos. Quantitative and qualitative evaluations show that compared to previous models that rely on visual appearance cues, our motion-based system improves performance in separating musical instrument sounds. Furthermore, it separates sound components from duets of the same category of instruments, a challenging problem that has not been addressed before. | Learning the correspondences between vision and sound has become a popular topic recently. One line of work has explored representation learning from audio-visual training. Owens et al. @cite_51 used sound signals as supervision for vision model training; Aytar et al. @cite_5 used vision as supervision for sound models; Arandjelovic et al. @cite_30 and Korbar et al. @cite_25 trained vision and sound models jointly and achieved superior results. Another line of work explored sound localization in the visual input @cite_14 @cite_4 @cite_42 @cite_29 @cite_37 . More recently, researchers used voices and faces to do biometric matching @cite_21 , generated sounds for videos @cite_39 , generated talking faces @cite_54 , and predicted stereo sounds @cite_19 or 360 ambisonics @cite_23 from videos. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_14",
"@cite_4",
"@cite_29",
"@cite_42",
"@cite_21",
"@cite_54",
"@cite_39",
"@cite_19",
"@cite_23",
"@cite_5",
"@cite_51",
"@cite_25"
],
"mid": [
"2619697695",
"",
"2065274193",
"2164899449",
"2792330714",
"2777181663",
"2796292145",
"2883082281",
"2771079066",
"2905005291",
"2950388022",
"2544224704",
"2951353470",
"2809716557"
],
"abstract": [
"We consider the question: what can be learnt by looking at and listening to a large number of unlabelled videos? There is a valuable, but so far untapped, source of information contained in the video itself – the correspondence between the visual and the audio streams, and we introduce a novel “Audio-Visual Correspondence” learning task that makes use of this. Training visual and audio networks from scratch, without any additional supervision other than the raw unconstrained videos themselves, is shown to successfully solve this task, and, more interestingly, result in good visual and audio representations. These features set the new state-of-the-art on two sound classification benchmarks, and perform on par with the state-of-the-art selfsupervised approaches on ImageNet classification. We also demonstrate that the network is able to localize objects in both modalities, as well as perform fine-grained recognition tasks.",
"",
"In this paper, we propose a novel method that exploits correlation between audio-visual dynamics of a video to segment and localize objects that are the dominant source of audio. Our approach consists of a two-step spatiotemporal segmentation mechanism that relies on velocity and acceleration of moving objects as visual features. Each frame of the video is segmented into regions based on motion and appearance cues using the QuickShift algorithm, which are then clustered over time using K-means, so as to obtain a spatiotemporal video segmentation. The video is represented by motion features computed over individual segments. The Mel-Frequency Cepstral Coefficients (MFCC) of the audio signal, and their first order derivatives are exploited to represent audio. The proposed framework assumes there is a non-trivial correlation between these audio features and the velocity and acceleration of the moving and sounding objects. The canonical correlation analysis (CCA) is utilized to identify the moving objects which are most correlated to the audio signal. In addition to moving-sounding object identification, the same framework is also exploited to solve the problem of audio-video synchronization, and is used to aid interactive segmentation. We evaluate the performance of our proposed method on challenging videos. Our experiments demonstrate significant increase in performance over the state-of-the-art both qualitatively and quantitatively, and validate the feasibility and superiority of our approach.",
"Psychophysical and physiological evidence shows that sound localization of acoustic signals is strongly influenced by their synchrony with visual signals. This effect, known as ventriloquism, is at work when sound coming from the side of a TV set feels as if it were coming from the mouth of the actors. The ventriloquism effect suggests that there is important information about sound location encoded in the synchrony between the audio and video signals. In spite of this evidence, audiovisual synchrony is rarely used as a source of information in computer vision tasks. In this paper we explore the use of audio visual synchrony to locate sound sources. We developed a system that searches for regions of the visual landscape that correlate highly with the acoustic signals and tags them as likely to contain an acoustic source. We discuss our experience implementing the system, present results on a speaker localization task and discuss potential applications of the approach.",
"Visual events are usually accompanied by sounds in our daily lives. We pose the question: Can the machine learn the correspondence between visual scene and the sound, and localize the sound source only by observing sound and visual scene pairs like humans? In this paper, we propose a novel unsupervised algorithm to address the problem of localizing the sound source in visual scenes. A two-stream network structure, which handles each modality with an attention mechanism, is developed for sound source localization. Moreover, although our network is formulated within the unsupervised learning framework, it can be extended to a unified architecture with a simple modification for the supervised and semi-supervised learning settings as well. Meanwhile, a new sound source dataset is developed for performance evaluation. Our empirical evaluation shows that the unsupervised method can reach false conclusions in some cases. We show that even with a small amount of supervision, such false conclusions can be corrected and the source of sound in a visual scene can be localized effectively.",
"In this paper our objectives are, first, networks that can embed audio and visual inputs into a common space that is suitable for cross-modal retrieval; and second, a network that can localize the object that sounds in an image, given the audio signal. We achieve both these objectives by training from unlabelled video using only audio-visual correspondence (AVC) as the objective function. This is a form of cross-modal self-supervision from video. To this end, we design new network architectures that can be trained using the AVC task for these functionalities: for cross-modal retrieval, and for localizing the source of a sound in an image. We make the following contributions: (i) show that audio and visual embedding can be learnt that enable both within-mode (e.g. audio-to-audio) and between-mode retrieval; (ii) explore various architectures for the AVC task, including those for the visual stream that ingest a single image, or multiple images, or a single image and multi-frame optical flow; (iii) show that the semantic object that sounds within an image can be localized (using only the sound, no motion or flow information); and (iv) give a cautionary tale in how to avoid undesirable shortcuts in the data preparation.",
"We introduce a seemingly impossible task: given only an audio clip of someone speaking, decide which of two face images is the speaker. In this paper we study this, and a number of related cross-modal tasks, aimed at answering the question: how much can we infer from the voice about the face and vice versa? We study this task \"in the wild\", employing the datasets that are now publicly available for face recognition from static images (VGGFace) and speaker identification from audio (VoxCeleb). These provide training and testing scenarios for both static and dynamic testing of cross-modal matching. We make the following contributions: (i) we introduce CNN architectures for both binary and multi-way cross-modal face and audio matching, (ii) we compare dynamic testing (where video information is available, but the audio is not from the same video) with static testing (where only a single still image is available), and (iii) we use human testing as a baseline to calibrate the difficulty of the task. We show that a CNN can indeed be trained to solve this task in both the static and dynamic scenarios, and is even well above chance on 10-way classification of the face given the voice. The CNN matches human performance on easy examples (e.g. different gender across faces) but exceeds human performance on more challenging examples (e.g. faces with the same gender, age and nationality).",
"Talking face generation aims to synthesize a sequence of face images that correspond to a clip of speech. This is a challenging task because face appearance variation and semantics of speech are coupled together in the subtle movements of the talking face regions. Existing works either construct specific face appearance model on specific subjects or model the transformation between lip motion and speech. In this work, we integrate both aspects and enable arbitrary-subject talking face generation by learning disentangled audio-visual representation. We find that the talking face sequence is actually a composition of both subject-related information and speech-related information. These two spaces are then explicitly disentangled through a novel associative-and-adversarial training process. This disentangled representation has an advantage where both audio and video can serve as inputs for generation. Extensive experiments show that the proposed approach generates realistic talking face sequences on arbitrary subjects with much clearer lip motion patterns than previous work. We also demonstrate the learned audio-visual representation is extremely useful for the tasks of automatic lip reading and audio-video retrieval.",
"As two of the five traditional human senses (sight, hearing, taste, smell, and touch), vision and sound are basic sources through which humans understand the world. Often correlated during natural events, these two modalities combine to jointly affect human perception. In this paper, we pose the task of generating sound given visual input. Such capabilities could help enable applications in virtual reality (generating sound for virtual scenes automatically) or provide additional accessibility to images or videos for people with visual impairments. As a first step in this direction, we apply learning-based methods to generate raw waveform samples given input video frames. We evaluate our models on a dataset of videos containing a variety of sounds (such as ambient sounds and sounds from people animals). Our experiments show that the generated sounds are fairly realistic and have good temporal synchronization with the visual inputs.",
"Binaural audio provides a listener with 3D sound sensation, allowing a rich perceptual experience of the scene. However, binaural recordings are scarcely available and require nontrivial expertise and equipment to obtain. We propose to convert common monaural audio into binaural audio by leveraging video. The key idea is that visual frames reveal significant spatial cues that, while explicitly lacking in the accompanying single-channel audio, are strongly linked to it. Our multi-modal approach recovers this link from unlabeled video. We devise a deep convolutional neural network that learns to decode the monaural (single-channel) soundtrack into its binaural counterpart by injecting visual information about object and scene configurations. We call the resulting output 2.5D visual sound---the visual stream helps \"lift\" the flat single channel audio into spatialized sound. In addition to sound generation, we show the self-supervised representation learned by our network benefits audio-visual source separation. Our video results: this http URL",
"We introduce an approach to convert mono audio recorded by a 360 video camera into spatial audio, a representation of the distribution of sound over the full viewing sphere. Spatial audio is an important component of immersive 360 video viewing, but spatial audio microphones are still rare in current 360 video production. Our system consists of end-to-end trainable neural networks that separate individual sound sources and localize them on the viewing sphere, conditioned on multi-modal analysis of audio and 360 video frames. We introduce several datasets, including one filmed ourselves, and one collected in-the-wild from YouTube, consisting of 360 videos uploaded with spatial audio. During training, ground-truth spatial audio serves as self-supervision and a mixed down mono track forms the input to our network. Using our approach, we show that it is possible to infer the spatial location of sound sources based only on 360 video and a mono audio track.",
"We learn rich natural sound representations by capitalizing on large amounts of unlabeled sound data collected in the wild. We leverage the natural synchronization between vision and sound to learn an acoustic representation using two-million unlabeled videos. Unlabeled video has the advantage that it can be economically acquired at massive scales, yet contains useful signals about natural sound. We propose a student-teacher training procedure which transfers discriminative visual knowledge from well established visual recognition models into the sound modality using unlabeled video as a bridge. Our sound representation yields significant performance improvements over the state-of-the-art results on standard benchmarks for acoustic scene object classification. Visualizations suggest some high-level semantics automatically emerge in the sound network, even though it is trained without ground truth labels.",
"The sound of crashing waves, the roar of fast-moving cars -- sound conveys important information about the objects in our surroundings. In this work, we show that ambient sounds can be used as a supervisory signal for learning visual models. To demonstrate this, we train a convolutional neural network to predict a statistical summary of the sound associated with a video frame. We show that, through this process, the network learns a representation that conveys information about objects and scenes. We evaluate this representation on several recognition tasks, finding that its performance is comparable to that of other state-of-the-art unsupervised learning methods. Finally, we show through visualizations that the network learns units that are selective to objects that are often associated with characteristic sounds.",
"There is a natural correlation between the visual and auditive elements of a video. In this work we leverage this connection to learn general and effective features for both audio and video analysis from self-supervised temporal synchronization. We demonstrate that a calibrated curriculum learning scheme, a careful choice of negative examples, and the use of a contrastive loss are critical ingredients to obtain powerful multi-sensory representations from models optimized to discern temporal synchronization of audio-video pairs. Without further finetuning, the resulting audio features achieve performance superior or comparable to the state-of-the-art on established audio classification benchmarks (DCASE2014 and ESC-50). At the same time, our visual subnet provides a very effective initialization to improve the accuracy of video-based action recognition models: compared to learning from scratch, our self-supervised pretraining yields a remarkable gain of +16.7 in action recognition accuracy on UCF101 and a boost of +13.0 on HMDB51."
]
} |
1904.05979 | 2939598386 | Sounds originate from object motions and vibrations of the surrounding air. Inspired by the fact that humans are capable of interpreting sound sources from how objects move visually, we propose a novel system that explicitly captures such motion cues for the task of sound localization and separation. Our system is composed of an end-to-end learnable model called Deep Dense Trajectory (DDT), and a curriculum learning scheme. It exploits the inherent coherence of audio-visual signals from large quantities of unlabeled videos. Quantitative and qualitative evaluations show that, compared to previous models that rely on visual appearance cues, our motion-based system improves performance in separating musical instrument sounds. Furthermore, it separates sound components from duets of the same category of instruments, a challenging problem that has not been addressed before. | Early works in vision and audition have explored the strong relations between sounds and motions. Fisher et al. @cite_7 used a maximal mutual information approach, and Kidron et al. @cite_0 @cite_14 proposed variations of canonical correlation methods to discover such relations. | {
"cite_N": [
"@cite_0",
"@cite_14",
"@cite_7"
],
"mid": [
"2105582566",
"2065274193",
"2106488367"
],
"abstract": [
"People and animals fuse auditory and visual information to obtain robust perception. A particular benefit of such cross-modal analysis is the ability to localize visual events associated with sound sources. We aim to achieve this using computer-vision aided by a single microphone. Past efforts encountered problems stemming from the huge gap between the dimensions involved and the available data. This has led to solutions suffering from low spatio-temporal resolutions. We present a rigorous analysis of the fundamental problems associated with this task. Then, we present a stable and robust algorithm which overcomes past deficiencies. It grasps dynamic audio-visual events with high spatial resolution, and derives a unique solution. The algorithm effectively detects pixels that are associated with the sound, while filtering out other dynamic pixels. It is based on canonical correlation analysis (CCA), where we remove inherent ill-posedness by exploiting the typical spatial sparsity of audio-visual events. The algorithm is simple and efficient thanks to its reliance on linear programming and is free of user-defined parameters. To quantitatively assess the performance, we devise a localization criterion. The algorithm capabilities were demonstrated in experiments, where it overcame substantial visual distractions and audio noise.",
"In this paper, we propose a novel method that exploits correlation between audio-visual dynamics of a video to segment and localize objects that are the dominant source of audio. Our approach consists of a two-step spatiotemporal segmentation mechanism that relies on velocity and acceleration of moving objects as visual features. Each frame of the video is segmented into regions based on motion and appearance cues using the QuickShift algorithm, which are then clustered over time using K-means, so as to obtain a spatiotemporal video segmentation. The video is represented by motion features computed over individual segments. The Mel-Frequency Cepstral Coefficients (MFCC) of the audio signal, and their first order derivatives are exploited to represent audio. The proposed framework assumes there is a non-trivial correlation between these audio features and the velocity and acceleration of the moving and sounding objects. The canonical correlation analysis (CCA) is utilized to identify the moving objects which are most correlated to the audio signal. In addition to moving-sounding object identification, the same framework is also exploited to solve the problem of audio-video synchronization, and is used to aid interactive segmentation. We evaluate the performance of our proposed method on challenging videos. Our experiments demonstrate significant increase in performance over the state-of-the-art both qualitatively and quantitatively, and validate the feasibility and superiority of our approach.",
"People can understand complex auditory and visual information, often using one to disambiguate the other. Automated analysis, even at a low-level, faces severe challenges, including the lack of accurate statistical models for the signals, and their high-dimensionality and varied sampling rates. Previous approaches [6] assumed simple parametric models for the joint distribution which, while tractable, cannot capture the complex signal relationships. We learn the joint distribution of the visual and auditory signals using a non-parametric approach. First, we project the data into a maximally informative, low-dimensional subspace, suitable for density estimation. We then model the complicated stochastic relationships between the signals using a nonparametric density estimator. These learned densities allow processing across signal modalities. We demonstrate, on synthetic and real signals, localization in video of the face that is speaking in audio, and, conversely, audio enhancement of a particular speaker selected from the video."
]
} |
1904.05979 | 2939598386 | Sounds originate from object motions and vibrations of the surrounding air. Inspired by the fact that humans are capable of interpreting sound sources from how objects move visually, we propose a novel system that explicitly captures such motion cues for the task of sound localization and separation. Our system is composed of an end-to-end learnable model called Deep Dense Trajectory (DDT), and a curriculum learning scheme. It exploits the inherent coherence of audio-visual signals from large quantities of unlabeled videos. Quantitative and qualitative evaluations show that, compared to previous models that rely on visual appearance cues, our motion-based system improves performance in separating musical instrument sounds. Furthermore, it separates sound components from duets of the same category of instruments, a challenging problem that has not been addressed before. | Lip motion is a useful cue in the speech processing domain: Gabbay et al. @cite_20 used it for speech denoising; Chung et al. @cite_32 demonstrated lip reading from face videos. Ephrat et al. @cite_15 and Owens et al. @cite_38 demonstrated speech separation and enhancement from videos. | {
"cite_N": [
"@cite_38",
"@cite_15",
"@cite_32",
"@cite_20"
],
"mid": [
"2796992393",
"2797032258",
"2952746495",
"2767108501"
],
"abstract": [
"The thud of a bouncing ball, the onset of speech as lips open -- when visual and audio events occur together, it suggests that there might be a common, underlying event that produced both signals. In this paper, we argue that the visual and audio components of a video signal should be modeled jointly using a fused multisensory representation. We propose to learn such a representation in a self-supervised way, by training a neural network to predict whether video frames and audio are temporally aligned. We use this learned representation for three applications: (a) sound source localization, i.e. visualizing the source of sound in a video; (b) audio-visual action recognition; and (c) on off-screen audio source separation, e.g. removing the off-screen translator's voice from a foreign official's speech. Code, models, and video results are available on our webpage: this http URL",
"A trained, machine learning model that utilizes both the visual and auditory signals of an input video to separate the speech of different speakers in the video.",
"The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. Unlike previous works that have focussed on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem - unconstrained natural language sentences, and in the wild videos. Our key contributions are: (1) a 'Watch, Listen, Attend and Spell' (WLAS) network that learns to transcribe videos of mouth motion to characters; (2) a curriculum learning strategy to accelerate training and to reduce overfitting; (3) a 'Lip Reading Sentences' (LRS) dataset for visual speech recognition, consisting of over 100,000 natural sentences from British television. The WLAS model trained on the LRS dataset surpasses the performance of all previous work on standard lip reading benchmark datasets, often by a significant margin. This lip reading performance beats a professional lip reader on videos from BBC television, and we also demonstrate that visual information helps to improve speech recognition performance even when the audio is available.",
"Isolating the voice of a specific person while filtering out other voices or background noises is challenging when video is shot in noisy environments, using a single microphone. For example, video conferences from home or office are disturbed by other voices, TV reporting from city streets is mixed with traffic noise, etc. We propose audio-visual methods to isolate the voice of a single speaker and eliminate unrelated sounds. Face motions captured in the video are used to estimate the speaker's voice, which is applied as a filter on the input audio. This approach avoids using mixtures of sounds in the learning process, as the number of such possible mixtures is huge, and would inevitably bias the trained model."
]
} |
1904.05979 | 2939598386 | Sounds originate from object motions and vibrations of the surrounding air. Inspired by the fact that humans are capable of interpreting sound sources from how objects move visually, we propose a novel system that explicitly captures such motion cues for the task of sound localization and separation. Our system is composed of an end-to-end learnable model called Deep Dense Trajectory (DDT), and a curriculum learning scheme. It exploits the inherent coherence of audio-visual signals from large quantities of unlabeled videos. Quantitative and qualitative evaluations show that, compared to previous models that rely on visual appearance cues, our motion-based system improves performance in separating musical instrument sounds. Furthermore, it separates sound components from duets of the same category of instruments, a challenging problem that has not been addressed before. | The work most closely related to ours is @cite_3, which claimed tight associations between audio and visual onset signals and used these signals to perform audio-visual sound attribution. In this work, we generalize their idea by learning aligned audio-visual representations for sound separation. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2171819471"
],
"abstract": [
"Cross-modal analysis offers information beyond that extracted from individual modalities. Consider a camcorder having a single microphone in a cocktail-party: it captures several moving visual objects which emit sounds. A task for audio-visual analysis is to identify the number of independent audio-associated visual objects (AVOs), pinpoint the AVOs' spatial locations in the video and isolate each corresponding audio component. Part of these problems were considered by prior studies, which were limited to simple cases, e.g., a single AVO or stationary sounds. We describe an approach that seeks to overcome these challenges. It acknowledges the importance of temporal features that are based on significant changes in each modality. A probabilistic formalism identifies temporal coincidences between these features, yielding cross-modal association and visual localization. This association is of particular benefit in harmonic sounds, as it enables subsequent isolation of each audio source. We demonstrate this in challenging experiments, having multiple, simultaneous highly nonstationary AVOs."
]
} |
1904.05979 | 2939598386 | Sounds originate from object motions and vibrations of the surrounding air. Inspired by the fact that humans are capable of interpreting sound sources from how objects move visually, we propose a novel system that explicitly captures such motion cues for the task of sound localization and separation. Our system is composed of an end-to-end learnable model called Deep Dense Trajectory (DDT), and a curriculum learning scheme. It exploits the inherent coherence of audio-visual signals from large quantities of unlabeled videos. Quantitative and qualitative evaluations show that, compared to previous models that rely on visual appearance cues, our motion-based system improves performance in separating musical instrument sounds. Furthermore, it separates sound components from duets of the same category of instruments, a challenging problem that has not been addressed before. | Our work is in part related to motion representation learning for videos, as we are working on videos of actions. Traditional techniques mainly use handcrafted spatio-temporal features, such as space-time interest points @cite_40, HOG3D @cite_43, dense trajectories @cite_47, and improved dense trajectories @cite_49, as the motion representation of videos. Recently, work has shifted to learning representations using deep neural networks. There are three kinds of successful architectures for capturing motion and temporal information in videos: (1) two-stream CNNs @cite_52, where motion information is modeled by taking optical flow frames as network inputs; (2) 3D CNNs @cite_16, which perform 3D convolutions over the spatio-temporal video volume; (3) 2D CNNs with temporal models on top, such as LSTMs @cite_34, attention @cite_17 @cite_13, and graph CNNs @cite_8. More recently, researchers have proposed learning motion trajectory representations for action recognition @cite_18 @cite_35 @cite_36.
In contrast to action recognition or localization, our goal is to find correspondences between sound components and movements in videos. | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_8",
"@cite_36",
"@cite_52",
"@cite_34",
"@cite_43",
"@cite_40",
"@cite_49",
"@cite_47",
"@cite_16",
"@cite_13",
"@cite_17"
],
"mid": [
"2891446678",
"2795839309",
"2806331055",
"",
"2952186347",
"2951183276",
"2024868105",
"",
"2105101328",
"",
"2952633803",
"2747804831",
"2962899219"
],
"abstract": [
"How to leverage the temporal dimension is a key question in video analysis. Recent work suggests an efficient approach to video feature learning, namely, factorizing 3D convolutions into separate components respectively for spatial and temporal convolutions. The temporal convolution, however, comes with an implicit assumption – the feature maps across time steps are well aligned so that the features at the same locations can be aggregated. This assumption may be overly strong in practical applications, especially in action recognition where the motion of people or objects is a crucial aspect. In this work, we propose a new CNN architecture TrajectoryNet, which incorporates trajectory convolution, a new operation for integrating features along the temporal dimension, to replace the standard temporal convolution. This operation explicitly takes into account the location changes caused by deformation or motion, allowing the visual features to be aggregated along the the motion paths. On two very large-scale action recognition datasets, namely, Something-Something and Kinetics, the proposed network architecture achieves notable improvement over strong baselines.",
"Despite the recent success of end-to-end learned representations, hand-crafted optical flow features are still widely used in video analysis tasks. To fill this gap, we propose TVNet, a novel end-to-end trainable neural network, to learn optical-flow-like features from data. TVNet subsumes a specific optical flow solver, the TV-L1 method, and is initialized by unfolding its optimization iterations as neural layers. TVNet can therefore be used directly without any extra learning. Moreover, it can be naturally concatenated with other task-specific networks to formulate an end-to-end architecture, thus making our method more efficient than current multi-stage approaches by avoiding the need to pre-compute and store features on disk. Finally, the parameters of the TVNet can be further fine-tuned by end-to-end training. This enables TVNet to learn richer and task-specific patterns beyond exact optical flow. Extensive experiments on two action recognition benchmarks verify the effectiveness of the proposed approach. Our TVNet achieves better accuracies than all compared methods, while being competitive with the fastest counterpart in terms of features extraction time.",
"How do humans recognize the action “opening a book”? We argue that there are two important cues: modeling temporal shape dynamics and modeling functional relationships between humans and objects. In this paper, we propose to represent videos as space-time region graphs which capture these two important cues. Our graph nodes are defined by the object region proposals from different frames in a long range video. These nodes are connected by two types of relations: (i) similarity relations capturing the long range dependencies between correlated objects and (ii) spatial-temporal relations capturing the interactions between nearby objects. We perform reasoning on this graph representation via Graph Convolutional Networks. We achieve state-of-the-art results on the Charades and Something-Something datasets. Especially for Charades with complex environments, we obtain a huge (4.4 ) gain when our model is applied in complex environments.",
"",
"We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.",
"Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.",
"In this work, we present a novel local descriptor for video sequences. The proposed descriptor is based on histograms of oriented 3D spatio-temporal gradients. Our contribution is four-fold. (i) To compute 3D gradients for arbitrary scales, we develop a memory-efficient algorithm based on integral videos. (ii) We propose a generic 3D orientation quantization which is based on regular polyhedrons. (iii) We perform an in-depth evaluation of all descriptor parameters and optimize them for action recognition. (iv) We apply our descriptor to various action datasets (KTH, Weizmann, Hollywood) and show that we outperform the state-of-the-art.",
"",
"Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.",
"",
"We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.",
"This paper describes our solution for the video recognition task of ActivityNet Kinetics challenge that ranked the 1st place. Most of existing state-of-the-art video recognition approaches are in favor of an end-to-end pipeline. One exception is the framework of DevNet. The merit of DevNet is that they first use the video data to learn a network (i.e. fine-tuning or training from scratch). Instead of directly using the end-to-end classification scores (e.g. softmax scores), they extract the features from the learned network and then fed them into the off-the-shelf machine learning models to conduct video classification. However, the effectiveness of this line work has long-term been ignored and underestimated. In this submission, we extensively use this strategy. Particularly, we investigate four temporal modeling approaches using the learned features: Multi-group Shifting Attention Network, Temporal Xception Network, Multi-stream sequence Model and Fast-Forward Sequence Model. Experiment results on the challenging Kinetics dataset demonstrate that our proposed temporal modeling approaches can significantly improve existing approaches in the large-scale video recognition tasks. Most remarkably, our best single Multi-group Shifting Attention Network can achieve 77.7 in term of top-1 accuracy and 93.2 in term of top-5 accuracy on the validation set.",
"Recently, substantial research effort has focused on how to apply CNNs or RNNs to better capture temporal patterns in videos, so as to improve the accuracy of video classification. In this paper, however, we show that temporal information, especially longer-term patterns, may not be necessary to achieve competitive results on common trimmed video classification datasets. We investigate the potential of a purely attention based local feature integration. Accounting for the characteristics of such features in video classification, we propose a local feature integration framework based on attention clusters, and introduce a shifting operation to capture more diverse signals. We carefully analyze and compare the effect of different attention mechanisms, cluster sizes, and the use of the shifting operation, and also investigate the combination of attention clusters for multimodal integration. We demonstrate the effectiveness of our framework on three real-world video classification datasets. Our model achieves competitive results across all of these. In particular, on the large-scale Kinetics dataset, our framework obtains an excellent single model accuracy of 79.4 in terms of the top-1 and 94.0 in terms of the top-5 accuracy on the validation set."
]
} |
1904.06109 | 2938008799 | In recent decades, the 3D morphable model (3DMM) has been commonly used in image-based photorealistic 3D face reconstruction. However, face images are often corrupted by serious occlusion by non-face objects, including eyeglasses, masks, and hands. Such objects block the correct capture of landmarks and shading information. Therefore, the reconstructed 3D face model is hardly reusable. In this paper, a novel method is proposed to restore de-occluded face images based on inverse use of a 3DMM and a generative adversarial network. We incorporate the 3DMM prior into the proposed adversarial network and combine a global and a local adversarial convolutional neural network to learn a face de-occlusion model. The 3DMM serves not only as a geometric prior but also proposes the face region for the local discriminator. Experimental results confirm the effectiveness and robustness of the proposed algorithm in removing challenging types of occlusions with various head poses and illumination. Furthermore, the proposed method reconstructs the correct 3D face model with de-occluded textures. | Conventional face de-occlusion algorithms were developed to increase the performance of face recognition algorithms. Wright et al. @cite_9 proposed applying sparse representation to encode faces and demonstrated the robustness of the extracted features to occlusion. Cheng et al. @cite_21 introduced the double-channel SSDA (DC-SSDA) to detect noise by exploiting the difference between the activations of two channels. Recently, a deep learning-based approach was proposed by Zhao et al. @cite_31 to restore partially occluded faces in several successive steps using an LSTM auto-encoder. | {
"cite_N": [
"@cite_9",
"@cite_21",
"@cite_31"
],
"mid": [
"2129812935",
"2073575766",
"2565575913"
],
"abstract": [
"We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by C1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses certain threshold, predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims.",
"Occlusions by sunglasses, scarf, hats, beard, shadow etc, can significantly reduce the performance of face recognition systems. Although there exists a rich literature of researches focusing on face recognition with illuminations, poses and facial expression variations, there is very limited work reported for occlusion robust face recognition. In this paper, we present a method to restore occluded facial regions using deep learning technique to improve face recognition performance. Inspired by SSDA for facial occlusion removal with known occlusion type and explicit occlusion location detection from a preprocessing step, this paper further introduces Double Channel SSDA (DC-SSDA) which requires no prior knowledge of the types and the locations of occlusions. Experimental results based on CMU-PIE face database have showed that, the proposed method is robust to a variety of occlusion types and locations, and the restored faces could yield significant recognition performance improvements over occluded ones.",
"Face recognition techniques have been developed significantly in recent years. However, recognizing faces with partial occlusion is still challenging for existing face recognizers, which is heavily desired in real-world applications concerning surveillance and security. Although much research effort has been devoted to developing face de-occlusion methods, most of them can only work well under constrained conditions, such as all of faces are from a pre-defined closed set of subjects. In this paper, we propose a robust LSTM-Autoencoders (RLA) model to effectively restore partially occluded faces even in the wild. The RLA model consists of two LSTM components, which aims at occlusion-robust face encoding and recurrent occlusion removal respectively. The first one, named multi-scale spatial LSTM encoder, reads facial patches of various scales sequentially to output a latent representation, and occlusion-robustness is achieved owing to the fact that the influence of occlusion is only upon some of the patches. Receiving the representation learned by the encoder, the LSTM decoder with a dual channel architecture reconstructs the overall face and detects occlusion simultaneously, and by feat of LSTM, the decoder breaks down the task of face de-occlusion into restoring the occluded part step by step. Moreover, to minimize identify information loss and guarantee face recognition accuracy over recovered faces, we introduce an identity-preserving adversarial training scheme to further improve RLA. Extensive experiments on both synthetic and real data sets of faces with occlusion clearly demonstrate the effectiveness of our proposed RLA in removing different types of facial occlusion at various locations. The proposed method also provides significantly larger performance gain than other de-occlusion methods in promoting recognition performance over partially-occluded faces."
]
} |
1904.06109 | 2938008799 | In recent decades, 3D morphable model (3DMM) has been commonly used in image-based photorealistic 3D face reconstruction. However, face images are often corrupted by serious occlusion by non-face objects including eyeglasses, masks, and hands. Such objects block the correct capture of landmarks and shading information. Therefore, the reconstructed 3D face model is hardly reusable. In this paper, a novel method is proposed to restore de-occluded face images based on inverse use of 3DMM and generative adversarial network. We utilize the 3DMM prior to the proposed adversarial network and combine a global and local adversarial convolutional neural network to learn face de-occlusion model. The 3DMM serves not only as geometric prior but also proposes the face region for the local discriminator. Experiment results confirm the effectiveness and robustness of the proposed algorithm in removing challenging types of occlusions with various head poses and illumination. Furthermore, the proposed method reconstructs the correct 3D face model with de-occluded textures. | Yin al proposed a deep 3DMM-conditioned face frontalization method called FF–GAN @cite_10 , which incorporates 3DMM coefficients into the GAN structure to provide poses prior to the face frontalization task. This method utilizes 3DMM coefficients as a weak prior to reduce the artifacts during frontalization in extreme profile views. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2963100452"
],
"abstract": [
"Despite recent advances in face recognition using deep learning, severe accuracy drops are observed for large pose variations in unconstrained environments. Learning pose-invariant features is one solution, but it needs expensively labeled large-scale data and carefully designed feature learning algorithms. In this work, we focus on frontalizing faces in the wild under various head poses, including extreme profile views. We propose a novel deep 3D Morphable Model (3DMM)-conditioned Face Frontalization Generative Adversarial Network (GAN), termed FF-GAN, to generate neutral-head-pose face images. Our framework differs from both traditional GANs and 3DMM-based modeling. Incorporating the 3DMM into the GAN structure provides shape and appearance priors for fast convergence with less training data, while also supporting end-to-end training. The 3DMM-conditioned GAN employs not only the discriminator and generator losses but also a new masked symmetry loss to retain visual quality under occlusions, besides an identity loss to recover high-frequency information. Experiments on face recognition, landmark localization, and 3D reconstruction consistently show the advantage of our frontalization method on faces-in-the-wild datasets."
]
} |