id stringlengths 9 10 | submitter stringlengths 1 64 ⌀ | authors stringlengths 4 20.7k | title stringlengths 4 246 | comments stringlengths 1 523 ⌀ | journal-ref stringlengths 4 404 ⌀ | doi stringlengths 11 153 ⌀ | report-no stringlengths 2 254 ⌀ | categories stringlengths 5 98 | license stringclasses 9 values | orig_abstract stringlengths 14 3.35k | versions listlengths 1 60 | update_date stringlengths 10 10 | authors_parsed listlengths 1 1.35k | abstract stringlengths 11 3.34k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1509.05452 | Anant Baijal | Anant Baijal, Julia Kim, Carmen Branje, Frank Russo, Deborah I. Fels | Composing vibrotactile music: A multisensory experience with the
Emoti-chair | IEEE HAPTICS Symposium 2012 | null | 10.1109/HAPTIC.2012.6183839 | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Emoti-Chair is a novel technology to enhance entertainment through
vibrotactile stimulation. We assessed the experience of this technology in two
workshops. In the first workshop, deaf film-makers experimented with creating
vibetracks for a movie clip using professional movie-editing software. In the
second workshop, trained opera singers sang and felt their voice through the
Emoti-Chair. Participants in both workshops generally found the overall
experience to be exciting and they were motivated to use the Chair for upcoming
projects.
| [
{
"created": "Thu, 17 Sep 2015 21:52:12 GMT",
"version": "v1"
}
] | 2015-09-21 | [
[
"Baijal",
"Anant",
""
],
[
"Kim",
"Julia",
""
],
[
"Branje",
"Carmen",
""
],
[
"Russo",
"Frank",
""
],
[
"Fels",
"Deborah I.",
""
]
] | The Emoti-Chair is a novel technology to enhance entertainment through vibrotactile stimulation. We assessed the experience of this technology in two workshops. In the first workshop, deaf film-makers experimented with creating vibetracks for a movie clip using professional movie-editing software. In the second workshop, trained opera singers sang and felt their voice through the Emoti-Chair. Participants in both workshops generally found the overall experience to be exciting and they were motivated to use the Chair for upcoming projects. |
2306.17162 | Simeon Adebola | Simeon Adebola, Rishi Parikh, Mark Presten, Satvik Sharma, Shrey
Aeron, Ananth Rao, Sandeep Mukherjee, Tomson Qu, Christina Wistrom, Eugen
Solowjow, Ken Goldberg | Can Machines Garden? Systematically Comparing the AlphaGarden vs.
Professional Horticulturalists | International Conference on Robotics and Automation(ICRA) 2023 Oral | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The AlphaGarden is an automated testbed for indoor polyculture farming which
combines a first-order plant simulator, a gantry robot, a seed planting
algorithm, plant phenotyping and tracking algorithms, irrigation sensors and
algorithms, and custom pruning tools and algorithms. In this paper, we
systematically compare the performance of the AlphaGarden to professional
horticulturalists on the staff of the UC Berkeley Oxford Tract Greenhouse. The
humans and the machine tend side-by-side polyculture gardens with the same seed
arrangement. We compare performance in terms of canopy coverage, plant
diversity, and water consumption. Results from two 60-day cycles suggest that
the automated AlphaGarden performs comparably to professional horticulturalists
in terms of coverage and diversity, and reduces water consumption by as much as
44%. Code, videos, and datasets are available at
https://sites.google.com/berkeley.edu/systematiccomparison.
| [
{
"created": "Thu, 29 Jun 2023 17:59:05 GMT",
"version": "v1"
}
] | 2023-06-30 | [
[
"Adebola",
"Simeon",
""
],
[
"Parikh",
"Rishi",
""
],
[
"Presten",
"Mark",
""
],
[
"Sharma",
"Satvik",
""
],
[
"Aeron",
"Shrey",
""
],
[
"Rao",
"Ananth",
""
],
[
"Mukherjee",
"Sandeep",
""
],
[
"Qu",
"Tomson",
""
],
[
"Wistrom",
"Christina",
""
],
[
"Solowjow",
"Eugen",
""
],
[
"Goldberg",
"Ken",
""
]
] | The AlphaGarden is an automated testbed for indoor polyculture farming which combines a first-order plant simulator, a gantry robot, a seed planting algorithm, plant phenotyping and tracking algorithms, irrigation sensors and algorithms, and custom pruning tools and algorithms. In this paper, we systematically compare the performance of the AlphaGarden to professional horticulturalists on the staff of the UC Berkeley Oxford Tract Greenhouse. The humans and the machine tend side-by-side polyculture gardens with the same seed arrangement. We compare performance in terms of canopy coverage, plant diversity, and water consumption. Results from two 60-day cycles suggest that the automated AlphaGarden performs comparably to professional horticulturalists in terms of coverage and diversity, and reduces water consumption by as much as 44%. Code, videos, and datasets are available at https://sites.google.com/berkeley.edu/systematiccomparison. |
2203.16797 | Chuizheng Meng | Chuizheng Meng, Sungyong Seo, Defu Cao, Sam Griesemer, Yan Liu | When Physics Meets Machine Learning: A Survey of Physics-Informed
Machine Learning | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Physics-informed machine learning (PIML), referring to the combination of
prior knowledge of physics, which is the high-level abstraction of natural
phenomena and human behaviours accumulated over a long history, with data-driven machine
learning models, has emerged as an effective way to mitigate the shortage of
training data, to increase models' generalizability and to ensure the physical
plausibility of results. In this paper, we survey a large number of recent
works in PIML and summarize them from three aspects: (1) motivations of PIML,
(2) physics knowledge in PIML, (3) methods of physics knowledge integration in
PIML. We also discuss current challenges and corresponding research
opportunities in PIML.
| [
{
"created": "Thu, 31 Mar 2022 04:58:27 GMT",
"version": "v1"
}
] | 2022-04-01 | [
[
"Meng",
"Chuizheng",
""
],
[
"Seo",
"Sungyong",
""
],
[
"Cao",
"Defu",
""
],
[
"Griesemer",
"Sam",
""
],
[
"Liu",
"Yan",
""
]
] | Physics-informed machine learning (PIML), referring to the combination of prior knowledge of physics, which is the high-level abstraction of natural phenomena and human behaviours accumulated over a long history, with data-driven machine learning models, has emerged as an effective way to mitigate the shortage of training data, to increase models' generalizability and to ensure the physical plausibility of results. In this paper, we survey a large number of recent works in PIML and summarize them from three aspects: (1) motivations of PIML, (2) physics knowledge in PIML, (3) methods of physics knowledge integration in PIML. We also discuss current challenges and corresponding research opportunities in PIML. |
2208.03374 | Aleksandar Stanic | Aleksandar Stani\'c, Yujin Tang, David Ha, J\"urgen Schmidhuber | Learning to Generalize with Object-centric Agents in the Open World
Survival Game Crafter | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Reinforcement learning agents must generalize beyond their training
experience. Prior work has focused mostly on identical training and evaluation
environments. Starting from the recently introduced Crafter benchmark, a 2D
open world survival game, we introduce a new set of environments suitable for
evaluating an agent's ability to generalize to previously unseen (numbers of)
objects and to adapt quickly (meta-learning). In Crafter, the agents are
evaluated by the number of unlocked achievements (such as collecting resources)
when trained for 1M steps. We show that current agents struggle to generalize,
and introduce novel object-centric agents that improve over strong baselines.
We also provide critical insights of general interest for future work on
Crafter through several experiments. We show that careful hyper-parameter
tuning improves the PPO baseline agent by a large margin and that even
feedforward agents can unlock almost all achievements by relying on the
inventory display. We achieve new state-of-the-art performance on the original
Crafter environment. Additionally, when trained beyond 1M steps, our tuned
agents can unlock almost all achievements. We show that the recurrent PPO
agents improve over feedforward ones, even with the inventory information
removed. We introduce CrafterOOD, a set of 15 new environments that evaluate
OOD generalization. On CrafterOOD, we show that the current agents fail to
generalize, whereas our novel object-centric agents achieve state-of-the-art
OOD generalization while also being interpretable. Our code is public.
| [
{
"created": "Fri, 5 Aug 2022 20:05:46 GMT",
"version": "v1"
}
] | 2022-08-09 | [
[
"Stanić",
"Aleksandar",
""
],
[
"Tang",
"Yujin",
""
],
[
"Ha",
"David",
""
],
[
"Schmidhuber",
"Jürgen",
""
]
] | Reinforcement learning agents must generalize beyond their training experience. Prior work has focused mostly on identical training and evaluation environments. Starting from the recently introduced Crafter benchmark, a 2D open world survival game, we introduce a new set of environments suitable for evaluating an agent's ability to generalize to previously unseen (numbers of) objects and to adapt quickly (meta-learning). In Crafter, the agents are evaluated by the number of unlocked achievements (such as collecting resources) when trained for 1M steps. We show that current agents struggle to generalize, and introduce novel object-centric agents that improve over strong baselines. We also provide critical insights of general interest for future work on Crafter through several experiments. We show that careful hyper-parameter tuning improves the PPO baseline agent by a large margin and that even feedforward agents can unlock almost all achievements by relying on the inventory display. We achieve new state-of-the-art performance on the original Crafter environment. Additionally, when trained beyond 1M steps, our tuned agents can unlock almost all achievements. We show that the recurrent PPO agents improve over feedforward ones, even with the inventory information removed. We introduce CrafterOOD, a set of 15 new environments that evaluate OOD generalization. On CrafterOOD, we show that the current agents fail to generalize, whereas our novel object-centric agents achieve state-of-the-art OOD generalization while also being interpretable. Our code is public. |
2207.14745 | Maria Vittoria Minniti | Jessie van Dam, Andreea Tulbure, Maria Vittoria Minniti, Firas
Abi-Farraj, Marco Hutter | Collision detection and identification for a legged manipulator | International Conference on Intelligent Robots and Systems, IROS
2022, Kyoto, Japan, Oct 23 - Oct. 27, 2022 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To safely deploy legged robots in the real world it is necessary to provide
them with the ability to reliably detect unexpected contacts and accurately
estimate the corresponding contact force. In this paper, we propose a collision
detection and identification pipeline for a quadrupedal manipulator. We first
introduce an approach to estimate the collision time span based on band-pass
filtering and show that this information is key for obtaining accurate
collision force estimates. We then improve the accuracy of the identified force
magnitude by compensating for model inaccuracies, unmodeled loads, and any
other potential source of quasi-static disturbances acting on the robot. We
validate our framework with extensive hardware experiments in various
scenarios, including trotting and additional unmodeled load on the robot.
| [
{
"created": "Fri, 29 Jul 2022 15:37:23 GMT",
"version": "v1"
}
] | 2022-08-01 | [
[
"van Dam",
"Jessie",
""
],
[
"Tulbure",
"Andreea",
""
],
[
"Minniti",
"Maria Vittoria",
""
],
[
"Abi-Farraj",
"Firas",
""
],
[
"Hutter",
"Marco",
""
]
] | To safely deploy legged robots in the real world it is necessary to provide them with the ability to reliably detect unexpected contacts and accurately estimate the corresponding contact force. In this paper, we propose a collision detection and identification pipeline for a quadrupedal manipulator. We first introduce an approach to estimate the collision time span based on band-pass filtering and show that this information is key for obtaining accurate collision force estimates. We then improve the accuracy of the identified force magnitude by compensating for model inaccuracies, unmodeled loads, and any other potential source of quasi-static disturbances acting on the robot. We validate our framework with extensive hardware experiments in various scenarios, including trotting and additional unmodeled load on the robot. |
2001.09747 | Dirk Hartmann | Dirk Hartmann, Herman van der Auweraer | Digital Twins | null | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Digital Twins are one of the hottest digital trends. In this contribution we
will briefly review the concept of Digital Twins and the opportunities for novel
industrial applications. Mathematics is a key enabler and the impact will be
highlighted through four specific examples addressing Digital Product Twins
democratizing Design, Digital Production Twins enabling robots to mill, Digital
Production Twins driving industrialization of additive manufacturing, and
Digital Performance Twins boosting operations. We conclude the article with an
outlook on the next wave of Digital Twins, Executable Digital Twins, and will
review the associated challenges and opportunities for mathematics.
| [
{
"created": "Fri, 3 Jan 2020 19:20:47 GMT",
"version": "v1"
}
] | 2020-01-28 | [
[
"Hartmann",
"Dirk",
""
],
[
"van der Auweraer",
"Herman",
""
]
] | Digital Twins are one of the hottest digital trends. In this contribution we will briefly review the concept of Digital Twins and the opportunities for novel industrial applications. Mathematics is a key enabler and the impact will be highlighted through four specific examples addressing Digital Product Twins democratizing Design, Digital Production Twins enabling robots to mill, Digital Production Twins driving industrialization of additive manufacturing, and Digital Performance Twins boosting operations. We conclude the article with an outlook on the next wave of Digital Twins, Executable Digital Twins, and will review the associated challenges and opportunities for mathematics. |
2001.01000 | Thomas Drugman | Thomas Drugman, Thierry Dutoit | The Deterministic plus Stochastic Model of the Residual Signal and its
Applications | null | null | null | null | cs.SD cs.CL eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The modeling of speech production often relies on a source-filter approach.
Although methods parameterizing the filter have nowadays reached a certain
maturity, there is still a lot to be gained for several speech processing
applications in finding an appropriate excitation model. This manuscript
presents a Deterministic plus Stochastic Model (DSM) of the residual signal.
The DSM consists of two contributions acting in two distinct spectral bands
delimited by a maximum voiced frequency. Both components are extracted from an
analysis performed on a speaker-dependent dataset of pitch-synchronous residual
frames. The deterministic part models the low-frequency contents and arises
from an orthonormal decomposition of these frames. As for the stochastic
component, it is a high-frequency noise modulated both in time and frequency.
Some interesting phonetic and computational properties of the DSM are also
highlighted. The applicability of the DSM in two fields of speech processing is
then studied. First, it is shown that incorporating the DSM vocoder in
HMM-based speech synthesis enhances the delivered quality. The proposed
approach turns out to significantly outperform the traditional pulse excitation
and provides a quality equivalent to STRAIGHT. In a second application, the
potential of glottal signatures derived from the proposed DSM is investigated
for speaker identification purposes. Interestingly, these signatures are shown
to lead to better recognition rates than other glottal-based methods.
| [
{
"created": "Sun, 29 Dec 2019 07:52:37 GMT",
"version": "v1"
}
] | 2020-01-07 | [
[
"Drugman",
"Thomas",
""
],
[
"Dutoit",
"Thierry",
""
]
] | The modeling of speech production often relies on a source-filter approach. Although methods parameterizing the filter have nowadays reached a certain maturity, there is still a lot to be gained for several speech processing applications in finding an appropriate excitation model. This manuscript presents a Deterministic plus Stochastic Model (DSM) of the residual signal. The DSM consists of two contributions acting in two distinct spectral bands delimited by a maximum voiced frequency. Both components are extracted from an analysis performed on a speaker-dependent dataset of pitch-synchronous residual frames. The deterministic part models the low-frequency contents and arises from an orthonormal decomposition of these frames. As for the stochastic component, it is a high-frequency noise modulated both in time and frequency. Some interesting phonetic and computational properties of the DSM are also highlighted. The applicability of the DSM in two fields of speech processing is then studied. First, it is shown that incorporating the DSM vocoder in HMM-based speech synthesis enhances the delivered quality. The proposed approach turns out to significantly outperform the traditional pulse excitation and provides a quality equivalent to STRAIGHT. In a second application, the potential of glottal signatures derived from the proposed DSM is investigated for speaker identification purposes. Interestingly, these signatures are shown to lead to better recognition rates than other glottal-based methods. |
1907.12998 | Tomasz Arodz | Han Zhang, Xi Gao, Jacob Unterman, Tom Arodz | Approximation Capabilities of Neural ODEs and Invertible Residual
Networks | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural ODEs and i-ResNet are recently proposed methods for enforcing
invertibility of residual neural models. Having a generic technique for
constructing invertible models can open new avenues for advances in learning
systems, but so far the question of whether Neural ODEs and i-ResNets can model
any continuous invertible function remained unresolved. Here, we show that both
of these models are limited in their approximation capabilities. We then prove
that any homeomorphism on a $p$-dimensional Euclidean space can be approximated
by a Neural ODE operating on a $2p$-dimensional Euclidean space, and a similar
result for i-ResNets. We conclude by showing that capping a Neural ODE or an
i-ResNet with a single linear layer is sufficient to turn the model into a
universal approximator for non-invertible continuous functions.
| [
{
"created": "Tue, 30 Jul 2019 15:04:01 GMT",
"version": "v1"
},
{
"created": "Sun, 1 Mar 2020 03:28:45 GMT",
"version": "v2"
}
] | 2020-03-03 | [
[
"Zhang",
"Han",
""
],
[
"Gao",
"Xi",
""
],
[
"Unterman",
"Jacob",
""
],
[
"Arodz",
"Tom",
""
]
] | Neural ODEs and i-ResNet are recently proposed methods for enforcing invertibility of residual neural models. Having a generic technique for constructing invertible models can open new avenues for advances in learning systems, but so far the question of whether Neural ODEs and i-ResNets can model any continuous invertible function remained unresolved. Here, we show that both of these models are limited in their approximation capabilities. We then prove that any homeomorphism on a $p$-dimensional Euclidean space can be approximated by a Neural ODE operating on a $2p$-dimensional Euclidean space, and a similar result for i-ResNets. We conclude by showing that capping a Neural ODE or an i-ResNet with a single linear layer is sufficient to turn the model into a universal approximator for non-invertible continuous functions. |
2306.11104 | Lucia Cipolina-Kun | Lucia Cipolina-Kun | Markovian Embeddings for Coalitional Bargaining Games | null | null | null | null | cs.MA cs.GT cs.LG | http://creativecommons.org/licenses/by/4.0/ | We examine the Markovian properties of coalition bargaining games, in
particular, the case where past rejected proposals cannot be repeated. We
propose a Markovian embedding with filtrations to render the states Markovian
and thus fit into the framework of stochastic games.
| [
{
"created": "Mon, 19 Jun 2023 18:13:16 GMT",
"version": "v1"
}
] | 2023-06-21 | [
[
"Cipolina-Kun",
"Lucia",
""
]
] | We examine the Markovian properties of coalition bargaining games, in particular, the case where past rejected proposals cannot be repeated. We propose a Markovian embedding with filtrations to render the states Markovian and thus fit into the framework of stochastic games. |
2404.15475 | Robert Grossman | Robert L. Grossman | An Annotated Glossary for Data Commons, Data Meshes, and Other Data
Platforms | 6 pages | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | Cloud-based data commons, data meshes, data hubs, and other data platforms
are important ways to manage, analyze and share data to accelerate research and
to support reproducible research. This is an annotated glossary of some of the
more common terms used in articles and discussions about these platforms.
| [
{
"created": "Tue, 23 Apr 2024 19:26:28 GMT",
"version": "v1"
}
] | 2024-04-25 | [
[
"Grossman",
"Robert L.",
""
]
] | Cloud-based data commons, data meshes, data hubs, and other data platforms are important ways to manage, analyze and share data to accelerate research and to support reproducible research. This is an annotated glossary of some of the more common terms used in articles and discussions about these platforms. |
1507.05224 | Kiran Garimella | Kiran Garimella, Gianmarco De Francisci Morales, Aristides Gionis,
Michael Mathioudakis | Quantifying Controversy in Social Media | Accepted in the journal Transactions on Social Computing (TSC).
Extended version of the WSDM 2016 and CSCW 2016 demo paper. Please cite the
TSC/WSDM version and not the arxiv version | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Which topics spark the most heated debates on social media? Identifying those
topics is not only interesting from a societal point of view, but also allows
the filtering and aggregation of social media content for disseminating news
stories. In this paper, we perform a systematic methodological study of
controversy detection by using the content and the network structure of social
media.
Unlike previous work, rather than study controversy in a single hand-picked
topic and use domain specific knowledge, we take a general approach to study
topics in any domain. Our approach to quantifying controversy is based on a
graph-based three-stage pipeline, which involves (i) building a conversation
graph about a topic; (ii) partitioning the conversation graph to identify
potential sides of the controversy; and (iii) measuring the amount of
controversy from characteristics of the graph.
We perform an extensive comparison of controversy measures, different
graph-building approaches, and data sources. We use both controversial and
non-controversial topics on Twitter, as well as other external datasets. We
find that our new random-walk-based measure outperforms existing ones in
capturing the intuitive notion of controversy, and show that content features
are vastly less helpful in this task.
| [
{
"created": "Sat, 18 Jul 2015 20:50:42 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Aug 2015 04:12:06 GMT",
"version": "v2"
},
{
"created": "Mon, 14 Dec 2015 18:48:53 GMT",
"version": "v3"
},
{
"created": "Tue, 7 Jun 2016 15:28:51 GMT",
"version": "v4"
},
{
"created": "Wed, 20 Sep 2017 14:26:57 GMT",
"version": "v5"
}
] | 2017-09-21 | [
[
"Garimella",
"Kiran",
""
],
[
"Morales",
"Gianmarco De Francisci",
""
],
[
"Gionis",
"Aristides",
""
],
[
"Mathioudakis",
"Michael",
""
]
] | Which topics spark the most heated debates on social media? Identifying those topics is not only interesting from a societal point of view, but also allows the filtering and aggregation of social media content for disseminating news stories. In this paper, we perform a systematic methodological study of controversy detection by using the content and the network structure of social media. Unlike previous work, rather than study controversy in a single hand-picked topic and use domain specific knowledge, we take a general approach to study topics in any domain. Our approach to quantifying controversy is based on a graph-based three-stage pipeline, which involves (i) building a conversation graph about a topic; (ii) partitioning the conversation graph to identify potential sides of the controversy; and (iii) measuring the amount of controversy from characteristics of the graph. We perform an extensive comparison of controversy measures, different graph-building approaches, and data sources. We use both controversial and non-controversial topics on Twitter, as well as other external datasets. We find that our new random-walk-based measure outperforms existing ones in capturing the intuitive notion of controversy, and show that content features are vastly less helpful in this task. |
2111.13839 | Hanlin Zhang | Hanlin Zhang, Yi-Fan Zhang, Weiyang Liu, Adrian Weller, Bernhard
Sch\"olkopf, Eric P. Xing | Towards Principled Disentanglement for Domain Generalization | CVPR 2022 Oral | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A fundamental challenge for machine learning models is generalizing to
out-of-distribution (OOD) data, in part due to spurious correlations. To tackle
this challenge, we first formalize the OOD generalization problem as
constrained optimization, called Disentanglement-constrained Domain
Generalization (DDG). We relax this non-trivial constrained optimization
problem to a tractable form with finite-dimensional parameterization and
empirical approximation. Then a theoretical analysis of the extent to which the
above transformations deviate from the original problem is provided. Based on
the transformation, we propose a primal-dual algorithm for joint representation
disentanglement and domain generalization. In contrast to traditional
approaches based on domain adversarial training and domain labels, DDG jointly
learns semantic and variation encoders for disentanglement, enabling flexible
manipulation and augmentation on training data. DDG aims to learn intrinsic
representations of semantic concepts that are invariant to nuisance factors and
generalizable across domains. Comprehensive experiments on popular benchmarks
show that DDG can achieve competitive OOD performance and uncover interpretable
salient structures within data.
| [
{
"created": "Sat, 27 Nov 2021 07:36:32 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Jan 2022 21:23:57 GMT",
"version": "v2"
},
{
"created": "Tue, 29 Mar 2022 05:34:27 GMT",
"version": "v3"
},
{
"created": "Wed, 19 Oct 2022 04:26:12 GMT",
"version": "v4"
}
] | 2022-10-20 | [
[
"Zhang",
"Hanlin",
""
],
[
"Zhang",
"Yi-Fan",
""
],
[
"Liu",
"Weiyang",
""
],
[
"Weller",
"Adrian",
""
],
[
"Schölkopf",
"Bernhard",
""
],
[
"Xing",
"Eric P.",
""
]
] | A fundamental challenge for machine learning models is generalizing to out-of-distribution (OOD) data, in part due to spurious correlations. To tackle this challenge, we first formalize the OOD generalization problem as constrained optimization, called Disentanglement-constrained Domain Generalization (DDG). We relax this non-trivial constrained optimization problem to a tractable form with finite-dimensional parameterization and empirical approximation. Then a theoretical analysis of the extent to which the above transformations deviate from the original problem is provided. Based on the transformation, we propose a primal-dual algorithm for joint representation disentanglement and domain generalization. In contrast to traditional approaches based on domain adversarial training and domain labels, DDG jointly learns semantic and variation encoders for disentanglement, enabling flexible manipulation and augmentation on training data. DDG aims to learn intrinsic representations of semantic concepts that are invariant to nuisance factors and generalizable across domains. Comprehensive experiments on popular benchmarks show that DDG can achieve competitive OOD performance and uncover interpretable salient structures within data. |
2310.05882 | Erica Weng | Erica Weng, Kenta Mukoya, Deva Ramanan, Kris Kitani | Evaluating a VR System for Collecting Safety-Critical Vehicle-Pedestrian
Interactions | Spotlight paper in the Data Generation for Robotics Workshop at RSS
2024 | null | null | null | cs.HC | http://creativecommons.org/licenses/by/4.0/ | Autonomous vehicles (AVs) require comprehensive and reliable pedestrian
trajectory data to ensure safe operation. However, obtaining data of
safety-critical scenarios such as jaywalking and near-collisions, or uncommon
agents such as children, disabled pedestrians, and vulnerable road users poses
logistical and ethical challenges. This paper evaluates a Virtual Reality (VR)
system designed to collect pedestrian trajectory and body pose data in a
controlled, low-risk environment. We substantiate the usefulness of such a
system through semi-structured interviews with professionals in the AV field,
and validate the effectiveness of the system through two empirical studies: a
first-person user evaluation involving 62 participants, and a third-person
evaluative survey involving 290 respondents. Our findings demonstrate that the
VR-based data collection system elicits realistic responses for capturing
pedestrian data in safety-critical or uncommon vehicle-pedestrian interaction
scenarios.
| [
{
"created": "Mon, 9 Oct 2023 17:23:20 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Jul 2024 20:08:00 GMT",
"version": "v2"
},
{
"created": "Tue, 9 Jul 2024 18:42:09 GMT",
"version": "v3"
}
] | 2024-07-11 | [
[
"Weng",
"Erica",
""
],
[
"Mukoya",
"Kenta",
""
],
[
"Ramanan",
"Deva",
""
],
[
"Kitani",
"Kris",
""
]
] | Autonomous vehicles (AVs) require comprehensive and reliable pedestrian trajectory data to ensure safe operation. However, obtaining data of safety-critical scenarios such as jaywalking and near-collisions, or uncommon agents such as children, disabled pedestrians, and vulnerable road users poses logistical and ethical challenges. This paper evaluates a Virtual Reality (VR) system designed to collect pedestrian trajectory and body pose data in a controlled, low-risk environment. We substantiate the usefulness of such a system through semi-structured interviews with professionals in the AV field, and validate the effectiveness of the system through two empirical studies: a first-person user evaluation involving 62 participants, and a third-person evaluative survey involving 290 respondents. Our findings demonstrate that the VR-based data collection system elicits realistic responses for capturing pedestrian data in safety-critical or uncommon vehicle-pedestrian interaction scenarios. |
1812.01156 | Anik Islam | Anik Islam, Mohammed Belal Uddin, Md. Fazlul Kader, Soo Young Shin | Blockchain based secure data handover scheme in non-orthogonal multiple
access | Published in 2018 4th International Conference on Wireless and
Telematics (ICWT) | 2018 4th International Conference on Wireless and Telematics
(ICWT), Nusa Dua, 2018, pp. 1-5 | 10.1109/ICWT.2018.8527732 | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Non-orthogonal multiple access (NOMA) with successive interference
cancellation receiver is considered as one of the most potent multiple access
techniques to be adopted in future wireless communication networks. Data
security in the NOMA transmission scheme is an issue drawing much attention.
Blockchain is a distributed peer-to-peer network that enables protection of
information from unauthorized access, tampering, etc. By utilizing encryption
techniques of blockchain, a secure data communication scheme using blockchain
in NOMA is proposed in this paper. A two-phase encryption technique with key
generation using different parameters is proposed. In the first phase, data is
encrypted with the users' public keys, and in the second phase, a private key
of the base station (BS) is used for encryption. Finally, the superiority of
the proposed scheme over the existing scheme is demonstrated through a
comparative study based on the different features.
| [
{
"created": "Tue, 4 Dec 2018 01:15:27 GMT",
"version": "v1"
}
] | 2018-12-05 | [
[
"Islam",
"Anik",
""
],
[
"Uddin",
"Mohammed Belal",
""
],
[
"Kader",
"Md. Fazlul",
""
],
[
"Shin",
"Soo Young",
""
]
] | Non-orthogonal multiple access (NOMA) with a successive interference cancellation receiver is considered one of the most potent multiple access techniques for adoption in future wireless communication networks. Data security in the NOMA transmission scheme is an issue that has drawn much attention. Blockchain is a distributed peer-to-peer network that enables information to be protected from unauthorized access, tampering, and similar threats. By utilizing the encryption techniques of blockchain, this paper proposes a secure data communication scheme for NOMA based on blockchain. A two-phase encryption technique with key generation using different parameters is proposed. In the first phase, data is encrypted with the users' public keys, and in the second phase, a private key of the base station (BS) is used for encryption. Finally, the superiority of the proposed scheme over an existing scheme is demonstrated through a comparative study of different features. |
0902.2969 | Giorgi Japaridze | Giorgi Japaridze | Ptarithmetic | Substantially better versions are on their way. Hence the present
article probably will not be published | The Baltic International Yearbook on Cognition, Logic and
Communication 8 (2013), Article 5, pp. 1-186 | 10.4148/1944-3676.1074 | null | cs.LO cs.AI cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The present article introduces ptarithmetic (short for "polynomial time
arithmetic") -- a formal number theory similar to the well known Peano
arithmetic, but based on the recently born computability logic (see
http://www.cis.upenn.edu/~giorgi/cl.html) instead of classical logic. The
formulas of ptarithmetic represent interactive computational problems rather
than just true/false statements, and their "truth" is understood as existence
of a polynomial time solution. The system of ptarithmetic elaborated in this
article is shown to be sound and complete. Sound in the sense that every
theorem T of the system represents an interactive number-theoretic
computational problem with a polynomial time solution and, furthermore, such a
solution can be effectively extracted from a proof of T. And complete in the
sense that every interactive number-theoretic problem with a polynomial time
solution is represented by some theorem T of the system.
The paper is self-contained, and can be read without any previous familiarity
with computability logic.
| [
{
"created": "Tue, 17 Feb 2009 19:14:09 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Feb 2009 12:48:43 GMT",
"version": "v2"
},
{
"created": "Fri, 26 Feb 2010 10:17:31 GMT",
"version": "v3"
}
] | 2013-12-16 | [
[
"Japaridze",
"Giorgi",
""
]
] | The present article introduces ptarithmetic (short for "polynomial time arithmetic") -- a formal number theory similar to the well known Peano arithmetic, but based on the recently born computability logic (see http://www.cis.upenn.edu/~giorgi/cl.html) instead of classical logic. The formulas of ptarithmetic represent interactive computational problems rather than just true/false statements, and their "truth" is understood as existence of a polynomial time solution. The system of ptarithmetic elaborated in this article is shown to be sound and complete. Sound in the sense that every theorem T of the system represents an interactive number-theoretic computational problem with a polynomial time solution and, furthermore, such a solution can be effectively extracted from a proof of T. And complete in the sense that every interactive number-theoretic problem with a polynomial time solution is represented by some theorem T of the system. The paper is self-contained, and can be read without any previous familiarity with computability logic. |
2004.13629 | Masahiro Oda Dr. | Masahiro Oda, Holger R. Roth, Takayuki Kitasaka, Kazuhiro Furukawa,
Ryoji Miyahara, Yoshiki Hirooka, Hidemi Goto, Nassir Navab, Kensaku Mori | Colon Shape Estimation Method for Colonoscope Tracking using Recurrent
Neural Networks | Accepted paper as a poster presentation at MICCAI 2018 (International
Conference on Medical Image Computing and Computer-Assisted Intervention),
Granada, Spain | Published in Proceedings of MICCAI 2018, LNCS 11073, pp 176-184 | 10.1007/978-3-030-00937-3_21 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a method that uses a recurrent neural network (RNN) to estimate the
shape of the colon as it is deformed by colonoscope insertion. A colonoscope
tracking or navigation system that guides the physician to polyp positions is
needed to reduce complications such as colon perforation. Previous tracking
methods produced large tracking errors at the transverse and sigmoid colons
because these areas deform substantially during colonoscope insertion, so colon
deformation should be taken into account in the tracking process. Our method
estimates colon deformation using an RNN, obtaining the colonoscope shape from
electromagnetic sensors during its insertion into the colon. From this shape,
the method obtains position, direction, and insertion length, and also computes
relative features that represent the positional and directional relationships
between two points on the colonoscope. Long short-term memory is used to
estimate the current colon shape from the past transition of these colonoscope
shape features. In a phantom study, we correctly estimated colon shapes during
colonoscope insertion with a 12.39 mm estimation error.
| [
{
"created": "Mon, 20 Apr 2020 04:43:58 GMT",
"version": "v1"
}
] | 2020-04-29 | [
[
"Oda",
"Masahiro",
""
],
[
"Roth",
"Holger R.",
""
],
[
"Kitasaka",
"Takayuki",
""
],
[
"Furukawa",
"Kazuhiro",
""
],
[
"Miyahara",
"Ryoji",
""
],
[
"Hirooka",
"Yoshiki",
""
],
[
"Goto",
"Hidemi",
""
],
[
"Navab",
"Nassir",
""
],
[
"Mori",
"Kensaku",
""
]
] | We propose a method that uses a recurrent neural network (RNN) to estimate the shape of the colon as it is deformed by colonoscope insertion. A colonoscope tracking or navigation system that guides the physician to polyp positions is needed to reduce complications such as colon perforation. Previous tracking methods produced large tracking errors at the transverse and sigmoid colons because these areas deform substantially during colonoscope insertion, so colon deformation should be taken into account in the tracking process. Our method estimates colon deformation using an RNN, obtaining the colonoscope shape from electromagnetic sensors during its insertion into the colon. From this shape, the method obtains position, direction, and insertion length, and also computes relative features that represent the positional and directional relationships between two points on the colonoscope. Long short-term memory is used to estimate the current colon shape from the past transition of these colonoscope shape features. In a phantom study, we correctly estimated colon shapes during colonoscope insertion with a 12.39 mm estimation error. |
2205.06802 | Danial Toufani Movaghar | Danial Toufani-Movaghar, Mohammad-Reza Feizi-Derakhshi | Word Embeddings and Validity Indexes in Fuzzy Clustering | 10 Pages, 2 figures. arXiv admin note: substantial text overlap with
arXiv:1907.07672 by other authors | null | null | null | cs.CL cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | In the new era of internet systems and applications, the concept of detecting
distinct topics from huge amounts of text has gained a lot of attention. These
methods represent text in a numerical format -- called embeddings -- to imitate
human-perceived semantic similarity between words. In this study, we perform a
fuzzy-based analysis of various vector representations of words, i.e., word
embeddings. We also introduce new fuzzy clustering methods based on a hybrid
implementation of fuzzy clustering with an evolutionary algorithm named Forest
Optimization. We apply two popular fuzzy clustering algorithms to count-based
word embeddings with different methods and dimensionality. Words about COVID
gathered from a Kaggle dataset were converted into vectors and clustered. The
results indicate that fuzzy clustering algorithms are very sensitive to
high-dimensional data, and parameter tuning can dramatically change their
performance. We evaluate the experimental results with various clustering
validity indexes to compare the accuracy of different algorithm variations with
different embeddings.
| [
{
"created": "Tue, 26 Apr 2022 18:08:19 GMT",
"version": "v1"
}
] | 2022-05-16 | [
[
"Toufani-Movaghar",
"Danial",
""
],
[
"Feizi-Derakhshi",
"Mohammad-Reza",
""
]
] | In the new era of internet systems and applications, the concept of detecting distinct topics from huge amounts of text has gained a lot of attention. These methods represent text in a numerical format -- called embeddings -- to imitate human-perceived semantic similarity between words. In this study, we perform a fuzzy-based analysis of various vector representations of words, i.e., word embeddings. We also introduce new fuzzy clustering methods based on a hybrid implementation of fuzzy clustering with an evolutionary algorithm named Forest Optimization. We apply two popular fuzzy clustering algorithms to count-based word embeddings with different methods and dimensionality. Words about COVID gathered from a Kaggle dataset were converted into vectors and clustered. The results indicate that fuzzy clustering algorithms are very sensitive to high-dimensional data, and parameter tuning can dramatically change their performance. We evaluate the experimental results with various clustering validity indexes to compare the accuracy of different algorithm variations with different embeddings. |
1410.6513 | Yunan Gu | Yunan Gu and Walid Saad and Mehdi Bennis and Merouane Debbah and Zhu
Han | Matching Theory for Future Wireless Networks: Fundamentals and
Applications | null | null | null | null | cs.IT cs.NI math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The emergence of novel wireless networking paradigms such as small cell and
cognitive radio networks has forever transformed the way in which wireless
systems are operated. In particular, the need for self-organizing solutions to
manage the scarce spectral resources has become a prevalent theme in many
emerging wireless systems. In this paper, the first comprehensive tutorial on
the use of matching theory, a Nobel-prize-winning framework, for resource
management in wireless networks is developed. To cater for the unique features
of emerging wireless networks, a novel, wireless-oriented classification of
matching theory is proposed. Then, the key solution concepts and algorithmic
implementations of this framework are presented. Next, the developed concepts are
applied in three important wireless networking areas in order to demonstrate
the usefulness of this analytical tool. Results show how matching theory can
effectively improve the performance of resource allocation in all three
applications discussed.
| [
{
"created": "Thu, 23 Oct 2014 21:43:31 GMT",
"version": "v1"
},
{
"created": "Thu, 12 Mar 2015 22:22:27 GMT",
"version": "v2"
}
] | 2015-03-16 | [
[
"Gu",
"Yunan",
""
],
[
"Saad",
"Walid",
""
],
[
"Bennis",
"Mehdi",
""
],
[
"Debbah",
"Merouane",
""
],
[
"Han",
"Zhu",
""
]
] | The emergence of novel wireless networking paradigms such as small cell and cognitive radio networks has forever transformed the way in which wireless systems are operated. In particular, the need for self-organizing solutions to manage the scarce spectral resources has become a prevalent theme in many emerging wireless systems. In this paper, the first comprehensive tutorial on the use of matching theory, a Nobel-prize-winning framework, for resource management in wireless networks is developed. To cater for the unique features of emerging wireless networks, a novel, wireless-oriented classification of matching theory is proposed. Then, the key solution concepts and algorithmic implementations of this framework are presented. Next, the developed concepts are applied in three important wireless networking areas in order to demonstrate the usefulness of this analytical tool. Results show how matching theory can effectively improve the performance of resource allocation in all three applications discussed. |
2405.20446 | Maya Anderson | Maya Anderson, Guy Amit, Abigail Goldsteen | Is My Data in Your Retrieval Database? Membership Inference Attacks
Against Retrieval Augmented Generation | 16 pages, 3 figures | null | null | null | cs.CR cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Retrieval Augmented Generation (RAG) systems have shown great promise in
natural language processing. However, their reliance on data stored in a
retrieval database, which may contain proprietary or sensitive information,
introduces new privacy concerns. Specifically, an attacker may be able to infer
whether a certain text passage appears in the retrieval database by observing
the outputs of the RAG system, an attack known as a Membership Inference Attack
(MIA). Despite the significance of this threat, MIAs against RAG systems have
so far remained under-explored. This study addresses this gap by introducing an
efficient and easy-to-use method for conducting MIA against RAG systems. We
demonstrate the effectiveness of our attack using two benchmark datasets and
multiple generative models, showing that the membership of a document in the
retrieval database can be efficiently determined through the creation of an
appropriate prompt in both black-box and gray-box settings. Moreover, we
introduce an initial defense strategy based on adding instructions to the RAG
template, which shows high effectiveness for some datasets and models. Our
findings highlight the importance of implementing security countermeasures in
deployed RAG systems and developing more advanced defenses to protect the
privacy and security of retrieval databases.
| [
{
"created": "Thu, 30 May 2024 19:46:36 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Jun 2024 09:39:39 GMT",
"version": "v2"
}
] | 2024-06-10 | [
[
"Anderson",
"Maya",
""
],
[
"Amit",
"Guy",
""
],
[
"Goldsteen",
"Abigail",
""
]
] | Retrieval Augmented Generation (RAG) systems have shown great promise in natural language processing. However, their reliance on data stored in a retrieval database, which may contain proprietary or sensitive information, introduces new privacy concerns. Specifically, an attacker may be able to infer whether a certain text passage appears in the retrieval database by observing the outputs of the RAG system, an attack known as a Membership Inference Attack (MIA). Despite the significance of this threat, MIAs against RAG systems have so far remained under-explored. This study addresses this gap by introducing an efficient and easy-to-use method for conducting MIA against RAG systems. We demonstrate the effectiveness of our attack using two benchmark datasets and multiple generative models, showing that the membership of a document in the retrieval database can be efficiently determined through the creation of an appropriate prompt in both black-box and gray-box settings. Moreover, we introduce an initial defense strategy based on adding instructions to the RAG template, which shows high effectiveness for some datasets and models. Our findings highlight the importance of implementing security countermeasures in deployed RAG systems and developing more advanced defenses to protect the privacy and security of retrieval databases. |
2406.06258 | Xiaojie Li | Xiaojie Li, Chenghao Gu, Shuzhao Xie, Yunpeng Bai, Weixiang Zhang, Zhi
Wang | Tuning-Free Visual Customization via View Iterative Self-Attention
Control | Under review | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Fine-tuning diffusion models enables a wide range of personalized generation
and editing applications on diverse visual modalities. While Low-Rank
Adaptation (LoRA) accelerates the fine-tuning process, it still requires
multiple reference images and time-consuming training, which constrains its
scalability for large-scale and real-time applications. In this paper, we
propose \textit{View Iterative Self-Attention Control (VisCtrl)} to tackle this
challenge. Specifically, VisCtrl is a training-free method that injects the
appearance and structure of a user-specified subject into another subject in
the target image, unlike previous approaches that require fine-tuning the
model. Initially, we obtain the initial noise for both the reference and target
images through DDIM inversion. Then, during the denoising phase, features from
the reference image are injected into the target image via the self-attention
mechanism. Notably, by iteratively performing this feature injection process,
we ensure that the reference image features are gradually integrated into the
target image. This approach results in consistent and harmonious editing with
only one reference image in a few denoising steps. Moreover, benefiting from
our plug-and-play architecture design and the proposed Feature Gradual Sampling
strategy for multi-view editing, our method can be easily extended to edit in
complex visual domains. Extensive experiments show the efficacy of VisCtrl
across a spectrum of tasks, including personalized editing of images, videos,
and 3D scenes.
| [
{
"created": "Mon, 10 Jun 2024 13:41:10 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Jun 2024 03:06:14 GMT",
"version": "v2"
}
] | 2024-06-12 | [
[
"Li",
"Xiaojie",
""
],
[
"Gu",
"Chenghao",
""
],
[
"Xie",
"Shuzhao",
""
],
[
"Bai",
"Yunpeng",
""
],
[
"Zhang",
"Weixiang",
""
],
[
"Wang",
"Zhi",
""
]
] | Fine-tuning diffusion models enables a wide range of personalized generation and editing applications on diverse visual modalities. While Low-Rank Adaptation (LoRA) accelerates the fine-tuning process, it still requires multiple reference images and time-consuming training, which constrains its scalability for large-scale and real-time applications. In this paper, we propose \textit{View Iterative Self-Attention Control (VisCtrl)} to tackle this challenge. Specifically, VisCtrl is a training-free method that injects the appearance and structure of a user-specified subject into another subject in the target image, unlike previous approaches that require fine-tuning the model. Initially, we obtain the initial noise for both the reference and target images through DDIM inversion. Then, during the denoising phase, features from the reference image are injected into the target image via the self-attention mechanism. Notably, by iteratively performing this feature injection process, we ensure that the reference image features are gradually integrated into the target image. This approach results in consistent and harmonious editing with only one reference image in a few denoising steps. Moreover, benefiting from our plug-and-play architecture design and the proposed Feature Gradual Sampling strategy for multi-view editing, our method can be easily extended to edit in complex visual domains. Extensive experiments show the efficacy of VisCtrl across a spectrum of tasks, including personalized editing of images, videos, and 3D scenes. |
2307.09408 | Anne Cathrine Linder | Anne Cathrine Linder and David Lusseau | Resilience of the reported global human-nature interaction network to
pandemic conditions | null | null | null | null | cs.SI physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | Understanding human-nature interactions and the architecture of coupled
human-nature systems is crucial for sustainable development. Cultural ecosystem
services (CES), defined as intangible benefits derived from nature exposure,
contribute to maintaining and improving human well-being. However, we have
limited understanding of how well-being benefits emerge from CES co-production.
In this study, for the first time, we estimated the global CES network from
self-reported interactions between nature features and human activities
underpinning CES co-production using social media. First, we used a bottom-up
approach to define the global repertoire of nature features and human
activities used during CES co-production using 682,000 posts on Reddit. We then
sampled Twitter to estimate the co-occurrence of these features and activities
over the past five years, retrieving 41.7 million tweets. These tweets were
used to estimate the CES bipartite network, where each link was weighted by the
number of times nature features and human activities co-occurred in tweets. We
expected to observe large changes in the CES network topology in relation to
the global mobility restrictions during the COVID-19 pandemic. This was not the
case and the global CES network was generally resilient. However, a higher
order singular value decomposition of the CES tensor revealed an impulse on the
link between self-care activities and urban greenspace. This could be due to an
increased need for self-care during the pandemic and urban greenspace enabling
CES to be produced locally, thus providing resilience for maintaining
well-being during the pandemic. Our user-based analysis also indicated a shift
towards local CES production at the beginning of the pandemic, supporting the
conclusion that CES was produced locally. These findings suggest an overall
need for CES, and for access to features providing CES, in local communities.
| [
{
"created": "Tue, 18 Jul 2023 16:27:39 GMT",
"version": "v1"
}
] | 2023-07-19 | [
[
"Linder",
"Anne Cathrine",
""
],
[
"Lusseau",
"David",
""
]
] | Understanding human-nature interactions and the architecture of coupled human-nature systems is crucial for sustainable development. Cultural ecosystem services (CES), defined as intangible benefits derived from nature exposure, contribute to maintaining and improving human well-being. However, we have limited understanding of how well-being benefits emerge from CES co-production. In this study, for the first time, we estimated the global CES network from self-reported interactions between nature features and human activities underpinning CES co-production using social media. First, we used a bottom-up approach to define the global repertoire of nature features and human activities used during CES co-production using 682,000 posts on Reddit. We then sampled Twitter to estimate the co-occurrence of these features and activities over the past five years, retrieving 41.7 million tweets. These tweets were used to estimate the CES bipartite network, where each link was weighted by the number of times nature features and human activities co-occurred in tweets. We expected to observe large changes in the CES network topology in relation to the global mobility restrictions during the COVID-19 pandemic. This was not the case and the global CES network was generally resilient. However, a higher order singular value decomposition of the CES tensor revealed an impulse on the link between self-care activities and urban greenspace. This could be due to an increased need for self-care during the pandemic and urban greenspace enabling CES to be produced locally, thus providing resilience for maintaining well-being during the pandemic. Our user-based analysis also indicated a shift towards local CES production at the beginning of the pandemic, supporting the conclusion that CES was produced locally. These findings suggest an overall need for CES, and for access to features providing CES, in local communities. |
2404.01543 | Ziqian Bai | Ziqian Bai, Feitong Tan, Sean Fanello, Rohit Pandey, Mingsong Dou,
Shichen Liu, Ping Tan, Yinda Zhang | Efficient 3D Implicit Head Avatar with Mesh-anchored Hash Table
Blendshapes | In CVPR2024. Project page:
https://augmentedperception.github.io/monoavatar-plus | null | null | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D head avatars built with neural implicit volumetric representations have
achieved unprecedented levels of photorealism. However, the computational cost
of these methods remains a significant barrier to their widespread adoption,
particularly in real-time applications such as virtual reality and
teleconferencing. While attempts have been made to develop fast neural
rendering approaches for static scenes, these methods cannot be simply employed
to support realistic facial expressions, such as in the case of a dynamic
facial performance. To address these challenges, we propose a novel fast 3D
neural implicit head avatar model that achieves real-time rendering while
maintaining fine-grained controllability and high rendering quality. Our key
idea lies in the introduction of local hash table blendshapes, which are
learned and attached to the vertices of an underlying face parametric model.
These per-vertex hash-tables are linearly merged with weights predicted via a
CNN, resulting in expression-dependent embeddings. Our novel representation
enables efficient density and color predictions using a lightweight MLP, which
is further accelerated by a hierarchical nearest neighbor search method.
Extensive experiments show that our approach runs in real time while achieving
rendering quality comparable to the state of the art and decent results on
challenging expressions.
| [
{
"created": "Tue, 2 Apr 2024 00:55:50 GMT",
"version": "v1"
}
] | 2024-04-03 | [
[
"Bai",
"Ziqian",
""
],
[
"Tan",
"Feitong",
""
],
[
"Fanello",
"Sean",
""
],
[
"Pandey",
"Rohit",
""
],
[
"Dou",
"Mingsong",
""
],
[
"Liu",
"Shichen",
""
],
[
"Tan",
"Ping",
""
],
[
"Zhang",
"Yinda",
""
]
] | 3D head avatars built with neural implicit volumetric representations have achieved unprecedented levels of photorealism. However, the computational cost of these methods remains a significant barrier to their widespread adoption, particularly in real-time applications such as virtual reality and teleconferencing. While attempts have been made to develop fast neural rendering approaches for static scenes, these methods cannot be simply employed to support realistic facial expressions, such as in the case of a dynamic facial performance. To address these challenges, we propose a novel fast 3D neural implicit head avatar model that achieves real-time rendering while maintaining fine-grained controllability and high rendering quality. Our key idea lies in the introduction of local hash table blendshapes, which are learned and attached to the vertices of an underlying face parametric model. These per-vertex hash-tables are linearly merged with weights predicted via a CNN, resulting in expression-dependent embeddings. Our novel representation enables efficient density and color predictions using a lightweight MLP, which is further accelerated by a hierarchical nearest neighbor search method. Extensive experiments show that our approach runs in real time while achieving rendering quality comparable to the state of the art and decent results on challenging expressions. |
1307.1718 | Kyle Williams | Pucktada Treeratpituk, Madian Khabsa, C. Lee Giles | Graph-based Approach to Automatic Taxonomy Generation (GraBTax) | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel graph-based approach for constructing concept hierarchy
from a large text corpus. Our algorithm, GraBTax, incorporates both statistical
co-occurrences and lexical similarity in optimizing the structure of the
taxonomy. To automatically generate topic-dependent taxonomies from a large
text corpus, GraBTax first extracts topical terms and their relationships from
the corpus. The algorithm then constructs a weighted graph representing topics
and their associations. A graph partitioning algorithm is then used to
recursively partition the topic graph into a taxonomy. For evaluation, we apply
GraBTax to articles, primarily computer science, in the CiteSeerX digital
library and search engine. The quality of the resulting concept hierarchy is
assessed by both human judges and comparison with Wikipedia categories.
| [
{
"created": "Fri, 5 Jul 2013 21:05:20 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Apr 2014 20:50:22 GMT",
"version": "v2"
}
] | 2014-04-30 | [
[
"Treeratpituk",
"Pucktada",
""
],
[
"Khabsa",
"Madian",
""
],
[
"Giles",
"C. Lee",
""
]
] | We propose a novel graph-based approach for constructing concept hierarchy from a large text corpus. Our algorithm, GraBTax, incorporates both statistical co-occurrences and lexical similarity in optimizing the structure of the taxonomy. To automatically generate topic-dependent taxonomies from a large text corpus, GraBTax first extracts topical terms and their relationships from the corpus. The algorithm then constructs a weighted graph representing topics and their associations. A graph partitioning algorithm is then used to recursively partition the topic graph into a taxonomy. For evaluation, we apply GraBTax to articles, primarily computer science, in the CiteSeerX digital library and search engine. The quality of the resulting concept hierarchy is assessed by both human judges and comparison with Wikipedia categories. |
1204.6304 | Srijith Ravikumar | Sathya Narayanan Nagarajan, Srijith Ravikumar | Model for Predicting End User Web Page Response Time | null | null | null | null | cs.PF | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Perceived responsiveness of a web page is one of the most important and least
understood metrics of web page design, and is critical for attracting and
maintaining a large audience. Web pages can be designed to meet performance
SLAs early in the product lifecycle if there is a way to predict the apparent
responsiveness of a particular page layout. Response time of a web page is
largely influenced by page layout and various network characteristics. Since
the network characteristics vary widely from country to country, accurately
modeling and predicting the perceived responsiveness of a web page from the end
user's perspective has traditionally proven very difficult. We propose a model
for predicting end user web page response time based on web page, network,
browser download and browser rendering characteristics. We start by
understanding the key parameters that affect perceived response time. We then
model each of these parameters individually using experimental tests and
statistical techniques. Finally, we demonstrate the effectiveness of this model
by conducting an experimental study with Yahoo! web pages in two countries and
compare it with a third-party measurement application.
| [
{
"created": "Fri, 27 Apr 2012 19:21:30 GMT",
"version": "v1"
}
] | 2012-04-30 | [
[
"Nagarajan",
"Sathya Narayanan",
""
],
[
"Ravikumar",
"Srijith",
""
]
] | Perceived responsiveness of a web page is one of the most important and least understood metrics of web page design, and is critical for attracting and maintaining a large audience. Web pages can be designed to meet performance SLAs early in the product lifecycle if there is a way to predict the apparent responsiveness of a particular page layout. Response time of a web page is largely influenced by page layout and various network characteristics. Since the network characteristics vary widely from country to country, accurately modeling and predicting the perceived responsiveness of a web page from the end user's perspective has traditionally proven very difficult. We propose a model for predicting end user web page response time based on web page, network, browser download and browser rendering characteristics. We start by understanding the key parameters that affect perceived response time. We then model each of these parameters individually using experimental tests and statistical techniques. Finally, we demonstrate the effectiveness of this model by conducting an experimental study with Yahoo! web pages in two countries and compare it with a third-party measurement application. |
1407.0791 | Kiran Garimella | Venkata Rama Kiran Garimella, Ingmar Weber | Co-Following on Twitter | full version of a short paper at Hypertext 2014 | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an in-depth study of co-following on Twitter based on the
observation that two Twitter users whose followers have similar friends are
also similar, even though they might not share any direct links or a single
mutual follower. We show how this observation contributes to (i) a better
understanding of language-agnostic user classification on Twitter, (ii)
eliciting opportunities for Computational Social Science, and (iii) improving
online marketing by identifying cross-selling opportunities.
We start with a machine learning problem of predicting a user's preference
among two alternative choices of Twitter friends. We show that co-following
information provides strong signals for diverse classification tasks and that
these signals persist even when (i) the most discriminative features are
removed and (ii) only relatively "sparse" users with fewer than 152 but more
than 43 Twitter friends are considered.
Going beyond mere classification performance optimization, we present
applications of our methodology to Computational Social Science. Here we
confirm stereotypes such as that the country singer Kenny Chesney
(@kennychesney) is more popular among @GOP followers, whereas Lady Gaga
(@ladygaga) enjoys more support from @TheDemocrats followers.
In the domain of marketing we give evidence that celebrity endorsement is
reflected in co-following and we demonstrate how our methodology can be used to
reveal the audience similarities between Apple and Puma and, less obviously,
between Nike and Coca-Cola. Concerning a user's popularity we find a
statistically significant connection between having a more "average"
followership and having more followers than direct rivals. Interestingly, a
\emph{larger} audience also seems to be linked to a \emph{less diverse}
audience in terms of their co-following.
| [
{
"created": "Thu, 3 Jul 2014 06:07:59 GMT",
"version": "v1"
}
] | 2014-07-04 | [
[
"Garimella",
"Venkata Rama Kiran",
""
],
[
"Weber",
"Ingmar",
""
]
] | We present an in-depth study of co-following on Twitter based on the observation that two Twitter users whose followers have similar friends are also similar, even though they might not share any direct links or a single mutual follower. We show how this observation contributes to (i) a better understanding of language-agnostic user classification on Twitter, (ii) eliciting opportunities for Computational Social Science, and (iii) improving online marketing by identifying cross-selling opportunities. We start with a machine learning problem of predicting a user's preference among two alternative choices of Twitter friends. We show that co-following information provides strong signals for diverse classification tasks and that these signals persist even when (i) the most discriminative features are removed and (ii) only relatively "sparse" users with fewer than 152 but more than 43 Twitter friends are considered. Going beyond mere classification performance optimization, we present applications of our methodology to Computational Social Science. Here we confirm stereotypes such as that the country singer Kenny Chesney (@kennychesney) is more popular among @GOP followers, whereas Lady Gaga (@ladygaga) enjoys more support from @TheDemocrats followers. In the domain of marketing we give evidence that celebrity endorsement is reflected in co-following and we demonstrate how our methodology can be used to reveal the audience similarities between Apple and Puma and, less obviously, between Nike and Coca-Cola. Concerning a user's popularity we find a statistically significant connection between having a more "average" followership and having more followers than direct rivals. Interestingly, a \emph{larger} audience also seems to be linked to a \emph{less diverse} audience in terms of their co-following. |
2107.10249 | Feng Zhou | Sue Bai, Dakota Drake Legge, Ashley Young, Shan Bao, Feng Zhou | Investigating External Interaction Modality and Design Between Automated
Vehicles and Pedestrians at Crossings | null | null | null | null | cs.HC | http://creativecommons.org/licenses/by/4.0/ | In this study, we investigated the effectiveness and user acceptance of three
external interaction modalities (i.e., visual, auditory, and visual+auditory)
in promoting communications between automated vehicle systems (AVS) and
pedestrians at a crosswalk through a large number of combined designs. For this
purpose, an online survey was designed and distributed to 68 participants. All
participants reported their overall preferences for safety, comfort, trust,
ease of understanding, usability, and acceptance towards the systems. Results
showed that the visual+auditory interaction modality was the most preferred,
followed by the visual interaction modality and then the auditory one. We also
tested different visual and auditory interaction methods, and found that
"Pedestrian silhouette on the front of the vehicle" was the most preferred
option, while middle-aged participants liked "Chime" much more than young
participants, though it was preferred overall to the other options. Finally,
communication between the AVS and pedestrians' phones was not well received due
to privacy concerns. These results provided important interface design
recommendations for identifying better combinations of visual and auditory
designs and thereby helping AVS communicate their intentions to
pedestrians.
| [
{
"created": "Wed, 21 Jul 2021 17:53:47 GMT",
"version": "v1"
}
] | 2021-07-22 | [
[
"Bai",
"Sue",
""
],
[
"Legge",
"Dakota Drake",
""
],
[
"Young",
"Ashley",
""
],
[
"Bao",
"Shan",
""
],
[
"Zhou",
"Feng",
""
]
] | In this study, we investigated the effectiveness and user acceptance of three external interaction modalities (i.e., visual, auditory, and visual+auditory) in promoting communications between automated vehicle systems (AVS) and pedestrians at a crosswalk through a large number of combined designs. For this purpose, an online survey was designed and distributed to 68 participants. All participants reported their overall preferences for safety, comfort, trust, ease of understanding, usability, and acceptance towards the systems. Results showed that the visual+auditory interaction modality was the mostly preferred, followed by the visual interaction modality and then the auditory one. We also tested different visual and auditory interaction methods, and found that "Pedestrian silhouette on the front of the vehicle" was the best preferred option while middle-aged participants liked "Chime" much better than young participants though it was overall better preferred than others. Finally, communication between the AVS and pedestrians' phones was not well received due to privacy concerns. These results provided important interface design recommendations in identifying better combination of visual and auditory designs and therefore improving AVS communicating their intention with pedestrians. |
2106.02289 | Tiago Pimentel | Irene Nikkarinen, Tiago Pimentel, Dami\'an E. Blasi, Ryan Cotterell | Modeling the Unigram Distribution | Irene Nikkarinen and Tiago Pimentel contributed equally to this work.
Accepted to the findings of ACL 2021. Code available in
https://github.com/irenenikk/modelling-unigram | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The unigram distribution is the non-contextual probability of finding a
specific word form in a corpus. While of central importance to the study of
language, it is commonly approximated by each word's sample frequency in the
corpus. This approach, being highly dependent on sample size, assigns zero
probability to any out-of-vocabulary (oov) word form. As a result, it produces
negatively biased probabilities for any oov word form, while assigning
positively biased probabilities to in-corpus words. In this work, we argue in
favor of properly
modeling the unigram distribution -- claiming it should be a central task in
natural language processing. With this in mind, we present a novel model for
estimating it in a language (a neuralization of Goldwater et al.'s (2011)
model) and show it produces much better estimates across a diverse set of 7
languages than the na\"ive use of neural character-level language models.
| [
{
"created": "Fri, 4 Jun 2021 07:02:49 GMT",
"version": "v1"
}
] | 2021-06-07 | [
[
"Nikkarinen",
"Irene",
""
],
[
"Pimentel",
"Tiago",
""
],
[
"Blasi",
"Damián E.",
""
],
[
"Cotterell",
"Ryan",
""
]
] | The unigram distribution is the non-contextual probability of finding a specific word form in a corpus. While of central importance to the study of language, it is commonly approximated by each word's sample frequency in the corpus. This approach, being highly dependent on sample size, assigns zero probability to any out-of-vocabulary (oov) word form. As a result, it produces negatively biased probabilities for any oov word form, while positively biased probabilities to in-corpus words. In this work, we argue in favor of properly modeling the unigram distribution -- claiming it should be a central task in natural language processing. With this in mind, we present a novel model for estimating it in a language (a neuralization of Goldwater et al.'s (2011) model) and show it produces much better estimates across a diverse set of 7 languages than the na\"ive use of neural character-level language models. |
2311.09994 | Kathrin Grosse | Kathrin Grosse, Lukas Bieringer, Tarek Richard Besold, Alexandre Alahi | Towards more Practical Threat Models in Artificial Intelligence Security | 18 pages, 4 figures, 8 tables, accepted to Usenix Security,
incorporated external feedback | null | null | null | cs.CR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent works have identified a gap between research and practice in
artificial intelligence security: threats studied in academia do not always
reflect the practical use and security risks of AI. For example, while models
are often studied in isolation, they form part of larger ML pipelines in
practice. Recent works also brought forward that adversarial manipulations
introduced by academic attacks are impractical. We take a first step towards
describing the full extent of this disparity. To this end, we revisit the
threat models of the six most studied attacks in AI security research and match
them to AI usage in practice via a survey with 271 industrial practitioners. On
the one hand, we find that all existing threat models are indeed applicable. On
the other hand, there are significant mismatches: research is often too
generous with the attacker, assuming access to information not frequently
available in real-world settings. Our paper is thus a call for action to study
more practical threat models in artificial intelligence security.
| [
{
"created": "Thu, 16 Nov 2023 16:09:44 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Mar 2024 13:06:28 GMT",
"version": "v2"
}
] | 2024-03-27 | [
[
"Grosse",
"Kathrin",
""
],
[
"Bieringer",
"Lukas",
""
],
[
"Besold",
"Tarek Richard",
""
],
[
"Alahi",
"Alexandre",
""
]
] | Recent works have identified a gap between research and practice in artificial intelligence security: threats studied in academia do not always reflect the practical use and security risks of AI. For example, while models are often studied in isolation, they form part of larger ML pipelines in practice. Recent works also brought forward that adversarial manipulations introduced by academic attacks are impractical. We take a first step towards describing the full extent of this disparity. To this end, we revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice via a survey with 271 industrial practitioners. On the one hand, we find that all existing threat models are indeed applicable. On the other hand, there are significant mismatches: research is often too generous with the attacker, assuming access to information not frequently available in real-world settings. Our paper is thus a call for action to study more practical threat models in artificial intelligence security. |
2011.03114 | Henggang Cui | Henggang Cui, Fang-Chieh Chou, Jake Charland, Carlos
Vallespi-Gonzalez, Nemanja Djuric | Uncertainty-Aware Vehicle Orientation Estimation for Joint
Detection-Prediction Models | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object detection is a critical component of a self-driving system, tasked
with inferring the current states of the surrounding traffic actors. While
there exist a number of studies on the problem of inferring the position and
shape of vehicle actors, understanding actors' orientation remains a challenge
for existing state-of-the-art detectors. Orientation is an important property
for downstream modules of an autonomous system, particularly relevant for
motion prediction of stationary or reversing actors where current approaches
struggle. We focus on this task and present a method that extends the existing
models that perform joint object detection and motion prediction, allowing us
to more accurately infer vehicle orientations. In addition, the approach is
able to quantify prediction uncertainty, outputting the probability that the
inferred orientation is flipped, which allows for improved motion prediction
and safer autonomous operations. Empirical results show the benefits of the
approach, obtaining state-of-the-art performance on the open-sourced nuScenes
data set.
| [
{
"created": "Thu, 5 Nov 2020 21:59:44 GMT",
"version": "v1"
}
] | 2020-11-09 | [
[
"Cui",
"Henggang",
""
],
[
"Chou",
"Fang-Chieh",
""
],
[
"Charland",
"Jake",
""
],
[
"Vallespi-Gonzalez",
"Carlos",
""
],
[
"Djuric",
"Nemanja",
""
]
] | Object detection is a critical component of a self-driving system, tasked with inferring the current states of the surrounding traffic actors. While there exist a number of studies on the problem of inferring the position and shape of vehicle actors, understanding actors' orientation remains a challenge for existing state-of-the-art detectors. Orientation is an important property for downstream modules of an autonomous system, particularly relevant for motion prediction of stationary or reversing actors where current approaches struggle. We focus on this task and present a method that extends the existing models that perform joint object detection and motion prediction, allowing us to more accurately infer vehicle orientations. In addition, the approach is able to quantify prediction uncertainty, outputting the probability that the inferred orientation is flipped, which allows for improved motion prediction and safer autonomous operations. Empirical results show the benefits of the approach, obtaining state-of-the-art performance on the open-sourced nuScenes data set. |
2306.02659 | Zhengzhe Xu | Zhengzhe Xu, Yanbo Chen, Zhuozhu Jian, Junbo Tan, Xueqian Wang, Bin
Liang | Hybrid Trajectory Optimization for Autonomous Terrain Traversal of
Articulated Tracked Robots | IEEE Robotics and Automation Letters (RA-L) | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous terrain traversal of articulated tracked robots can reduce
operator cognitive load to enhance task efficiency and facilitate extensive
deployment. We present a novel hybrid trajectory optimization method aimed at
generating efficient, stable, and smooth traversal motions. To achieve this, we
develop a planar robot-terrain contact model and divide the robot's motion into
hybrid modes of driving and traversing. By using a generalized coordinate
description, the configuration space dimension is reduced, which facilitates
real-time planning. The hybrid trajectory optimization is transcribed into a
nonlinear programming problem and divided into subproblems to be solved in a
receding-horizon planning fashion. Mode switching is facilitated by associating
optimized motion durations with a predefined traversal sequence. A
multi-objective cost function is formulated to further improve the traversal
performance. Additionally, map sampling, terrain simplification, and tracking
controller modules are integrated into the autonomous terrain traversal system.
Our approach is validated in simulation and real-world scenarios with the
Searcher robotic platform. Comparative experiments with expert operator control
and state-of-the-art methods show advantages in terms of time and energy
efficiency, stability, and smoothness of motion.
| [
{
"created": "Mon, 5 Jun 2023 07:48:28 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Nov 2023 15:56:04 GMT",
"version": "v2"
},
{
"created": "Thu, 23 Nov 2023 05:22:42 GMT",
"version": "v3"
}
] | 2023-11-27 | [
[
"Xu",
"Zhengzhe",
""
],
[
"Chen",
"Yanbo",
""
],
[
"Jian",
"Zhuozhu",
""
],
[
"Tan",
"Junbo",
""
],
[
"Wang",
"Xueqian",
""
],
[
"Liang",
"Bin",
""
]
] | Autonomous terrain traversal of articulated tracked robots can reduce operator cognitive load to enhance task efficiency and facilitate extensive deployment. We present a novel hybrid trajectory optimization method aimed at generating efficient, stable, and smooth traversal motions. To achieve this, we develop a planar robot-terrain contact model and divide the robot's motion into hybrid modes of driving and traversing. By using a generalized coordinate description, the configuration space dimension is reduced, which facilitates real-time planning. The hybrid trajectory optimization is transcribed into a nonlinear programming problem and divided into subproblems to be solved in a receding-horizon planning fashion. Mode switching is facilitated by associating optimized motion durations with a predefined traversal sequence. A multi-objective cost function is formulated to further improve the traversal performance. Additionally, map sampling, terrain simplification, and tracking controller modules are integrated into the autonomous terrain traversal system. Our approach is validated in simulation and real-world scenarios with the Searcher robotic platform. Comparative experiments with expert operator control and state-of-the-art methods show advantages in terms of time and energy efficiency, stability, and smoothness of motion. |
2407.08787 | Wenshuo Peng | Wenshuo Peng, Kaipeng Zhang, Yue Yang, Hao Zhang, Yu Qiao | Data Adaptive Traceback for Vision-Language Foundation Models in Image
Classification | 9 pages,4 figures | null | 10.1609/aaai.v38i5.28249 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision-language foundation models have been incredibly successful in a wide
range of downstream computer vision tasks using adaptation methods. However,
due to the high cost of obtaining pre-training datasets, pairs with weak
image-text correlation in the data exist in large numbers. We call them
weak-paired samples. Due to the limitations of these weak-paired samples, the
pre-training model is unable to mine all the knowledge from the pre-training data.
The existing adaptation methods do not consider the missing knowledge, which
may lead to crucial task-related knowledge for the downstream tasks being
ignored. To address this issue, we propose a new adaptation framework called
Data Adaptive Traceback (DAT). Specifically, we utilize a zero-shot-based
method to extract the most downstream task-related subset of the pre-training
data to enable the downstream tasks. Furthermore, we adopt a pseudo-label-based
semi-supervised technique to reuse the pre-training images and a
vision-language contrastive learning method to address the confirmation bias
issue in semi-supervised learning. We conduct extensive experiments that show
our proposed DAT approach meaningfully improves performance on various
benchmark datasets over traditional adaptation methods.
| [
{
"created": "Thu, 11 Jul 2024 18:01:58 GMT",
"version": "v1"
}
] | 2024-07-15 | [
[
"Peng",
"Wenshuo",
""
],
[
"Zhang",
"Kaipeng",
""
],
[
"Yang",
"Yue",
""
],
[
"Zhang",
"Hao",
""
],
[
"Qiao",
"Yu",
""
]
] | Vision-language foundation models have been incredibly successful in a wide range of downstream computer vision tasks using adaptation methods. However, due to the high cost of obtaining pre-training datasets, pairs with weak image-text correlation in the data exist in large numbers. We call them weak-paired samples. Due to the limitations of these weak-paired samples, the pre-training model are unable to mine all the knowledge from pre-training data. The existing adaptation methods do not consider the missing knowledge, which may lead to crucial task-related knowledge for the downstream tasks being ignored. To address this issue, we propose a new adaptation framework called Data Adaptive Traceback (DAT). Specifically, we utilize a zero-shot-based method to extract the most downstream task-related subset of the pre-training data to enable the downstream tasks. Furthermore, we adopt a pseudo-label-based semi-supervised technique to reuse the pre-training images and a vision-language contrastive learning method to address the confirmation bias issue in semi-supervised learning. We conduct extensive experiments that show our proposed DAT approach meaningfully improves various benchmark datasets performance over traditional adaptation methods by simply. |
2112.03376 | Michael Rawson | Michael Rawson, Radu Balan | Convergence Guarantees for Deep Epsilon Greedy Policy Learning | null | null | null | null | cs.LG cs.IT math.IT math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Policy learning is a quickly growing area. As robotics and computers control
day-to-day life, their error rate needs to be minimized and controlled. There
are many policy learning methods and bandit methods with provable error rates
that accompany them. We show an error or regret bound and convergence of the
Deep Epsilon Greedy method which chooses actions with a neural network's
prediction. We also show that the Epsilon Greedy method's regret upper bound is
minimized with cubic-root exploration. In experiments with the real-world
dataset MNIST, we construct a nonlinear reinforcement learning problem. We
witness how with either high or low noise, some methods do and some do not
converge which agrees with our proof of convergence.
| [
{
"created": "Thu, 2 Dec 2021 04:05:54 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Jan 2022 23:16:36 GMT",
"version": "v2"
}
] | 2022-01-31 | [
[
"Rawson",
"Michael",
""
],
[
"Balan",
"Radu",
""
]
] | Policy learning is a quickly growing area. As robotics and computers control day-to-day life, their error rate needs to be minimized and controlled. There are many policy learning methods and bandit methods with provable error rates that accompany them. We show an error or regret bound and convergence of the Deep Epsilon Greedy method which chooses actions with a neural network's prediction. We also show that Epsilon Greedy method regret upper bound is minimized with cubic root exploration. In experiments with the real-world dataset MNIST, we construct a nonlinear reinforcement learning problem. We witness how with either high or low noise, some methods do and some do not converge which agrees with our proof of convergence. |
1407.3487 | Grigori Fursin | Grigori Fursin | Collective Tuning Initiative | GCC Developers' Summit'09, 14 June 2009, Montreal, Canada | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computing systems rarely deliver best possible performance due to ever
increasing hardware and software complexity and limitations of the current
optimization technology. Additional code and architecture optimizations are
often required to improve execution time, size, power consumption, reliability
and other important characteristics of computing systems. However, it is often
a tedious, repetitive, isolated and time consuming process. In order to
automate, simplify and systematize program optimization and architecture
design, we are developing open-source modular plugin-based Collective Tuning
Infrastructure (CTI, http://cTuning.org) that can distribute optimization
process and leverage optimization experience of multiple users. CTI provides a
novel fully integrated, collaborative, "one button" approach to improve
existing underperforming computing systems ranging from embedded architectures
to high-performance servers based on systematic iterative compilation,
statistical collective optimization and machine learning. Our experimental
results show that it is possible to reduce execution time (and code size) of
some programs from SPEC2006 and EEMBC among others by more than a factor of 2
automatically. It can also reduce development and testing time considerably.
Together with the first production quality machine learning enabled interactive
research compiler (MILEPOST GCC) this infrastructure opens up many research
opportunities to study and develop future realistic self-tuning and
self-organizing adaptive intelligent computing systems based on systematic
statistical performance evaluation and benchmarking. Finally, using common
optimization repository is intended to improve the quality and reproducibility
of the research on architecture and code optimization.
| [
{
"created": "Sun, 13 Jul 2014 17:13:17 GMT",
"version": "v1"
}
] | 2014-07-15 | [
[
"Fursin",
"Grigori",
""
]
] | Computing systems rarely deliver best possible performance due to ever increasing hardware and software complexity and limitations of the current optimization technology. Additional code and architecture optimizations are often required to improve execution time, size, power consumption, reliability and other important characteristics of computing systems. However, it is often a tedious, repetitive, isolated and time consuming process. In order to automate, simplify and systematize program optimization and architecture design, we are developing open-source modular plugin-based Collective Tuning Infrastructure (CTI, http://cTuning.org) that can distribute optimization process and leverage optimization experience of multiple users. CTI provides a novel fully integrated, collaborative, "one button" approach to improve existing underperfoming computing systems ranging from embedded architectures to high-performance servers based on systematic iterative compilation, statistical collective optimization and machine learning. Our experimental results show that it is possible to reduce execution time (and code size) of some programs from SPEC2006 and EEMBC among others by more than a factor of 2 automatically. It can also reduce development and testing time considerably. Together with the first production quality machine learning enabled interactive research compiler (MILEPOST GCC) this infrastructure opens up many research opportunities to study and develop future realistic self-tuning and self-organizing adaptive intelligent computing systems based on systematic statistical performance evaluation and benchmarking. Finally, using common optimization repository is intended to improve the quality and reproducibility of the research on architecture and code optimization. |
2205.10034 | Liang Shen | Dianhai Yu, Liang Shen, Hongxiang Hao, Weibao Gong, Huachao Wu, Jiang
Bian, Lirong Dai, Haoyi Xiong | MoESys: A Distributed and Efficient Mixture-of-Experts Training and
Inference System for Internet Services | null | null | null | null | cs.DC cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While modern internet services, such as chatbots, search engines, and online
advertising, demand the use of large-scale deep neural networks (DNNs),
distributed training and inference over heterogeneous computing systems are
desired to facilitate these DNN models. Mixture-of-Experts (MoE) is one of the
most common strategies to lower the cost of training subject to the overall
size of models/data through gating and parallelism in a divide-and-conquer
fashion. While DeepSpeed has made efforts in carrying out large-scale MoE
training over heterogeneous infrastructures, the efficiency of training and
inference could be further improved from several system aspects, including load
balancing, communication/computation efficiency, and memory footprint limits.
In this work, we present a novel MoESys that boosts efficiency in both
large-scale training and inference. Specifically, in the training procedure,
the proposed MoESys adopts an Elastic MoE training strategy with 2D prefetch
and Fusion communication over Hierarchical storage, so as to enjoy efficient
parallelisms. For scalable inference in a single node, especially when the
model size is larger than GPU memory, MoESys builds the CPU-GPU memory jointly
into a ring of sections to load the model, and executes the computation tasks
across the memory sections in a round-robin manner for efficient inference. We
carried out extensive experiments to evaluate MoESys, where MoESys successfully
trains a Unified Feature Optimization (UFO) model with a Sparsely-Gated
Mixture-of-Experts model of 12B parameters in 8 days on 48 A100 GPU cards. The
comparison against the state-of-the-art shows that MoESys outperformed
DeepSpeed with 33% higher throughput (tokens per second) in training and 13%
higher throughput in inference in general. Particularly, under unbalanced MoE
Tasks, e.g., UFO, MoESys achieved 64% higher throughput with 18% lower memory
footprints.
| [
{
"created": "Fri, 20 May 2022 09:09:27 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Jun 2023 12:07:22 GMT",
"version": "v2"
},
{
"created": "Mon, 12 Aug 2024 09:23:13 GMT",
"version": "v3"
}
] | 2024-08-13 | [
[
"Yu",
"Dianhai",
""
],
[
"Shen",
"Liang",
""
],
[
"Hao",
"Hongxiang",
""
],
[
"Gong",
"Weibao",
""
],
[
"Wu",
"Huachao",
""
],
[
"Bian",
"Jiang",
""
],
[
"Dai",
"Lirong",
""
],
[
"Xiong",
"Haoyi",
""
]
] | While modern internet services, such as chatbots, search engines, and online advertising, demand the use of large-scale deep neural networks (DNNs), distributed training and inference over heterogeneous computing systems are desired to facilitate these DNN models. Mixture-of-Experts (MoE) is one the most common strategies to lower the cost of training subject to the overall size of models/data through gating and parallelism in a divide-and-conquer fashion. While DeepSpeed has made efforts in carrying out large-scale MoE training over heterogeneous infrastructures, the efficiency of training and inference could be further improved from several system aspects, including load balancing, communication/computation efficiency, and memory footprint limits. In this work, we present a novel MoESys that boosts efficiency in both large-scale training and inference. Specifically, in the training procedure, the proposed MoESys adopts an Elastic MoE training strategy with 2D prefetch and Fusion communication over Hierarchical storage, so as to enjoy efficient parallelisms. For scalable inference in a single node, especially when the model size is larger than GPU memory, MoESys builds the CPU-GPU memory jointly into a ring of sections to load the model, and executes the computation tasks across the memory sections in a round-robin manner for efficient inference. We carried out extensive experiments to evaluate MoESys, where MoESys successfully trains a Unified Feature Optimization (UFO) model with a Sparsely-Gated Mixture-of-Experts model of 12B parameters in 8 days on 48 A100 GPU cards. The comparison against the state-of-the-art shows that MoESys outperformed DeepSpeed with 33% higher throughput (tokens per second) in training and 13% higher throughput in inference in general. Particularly, under unbalanced MoE Tasks, e.g., UFO, MoESys achieved 64% higher throughput with 18% lower memory footprints. |
2002.01886 | Jeremy Castagno | Jeremy Castagno, Ella Atkins | Polylidar -- Polygons from Triangular Meshes | 8 pages, 9 Figures | null | 10.1109/LRA.2020.3002212 | null | cs.CG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents Polylidar, an efficient algorithm to extract non-convex
polygons from 2D point sets, including interior holes. Plane segmented point
clouds can be input into Polylidar to extract their polygonal counterpart,
thereby reducing map size and improving visualization. The algorithm begins by
triangulating the point set and filtering triangles by user configurable
parameters such as triangle edge length. Next, connected triangles are
extracted into triangular mesh regions representing the shape of the point set.
Finally each region is converted to a polygon through a novel boundary
following method which accounts for holes. Real-world and synthetic benchmarks
are presented to comparatively evaluate Polylidar speed and accuracy. Results
show comparable accuracy and more than four times speedup compared to other
concave polygon extraction methods.
| [
{
"created": "Wed, 5 Feb 2020 17:50:57 GMT",
"version": "v1"
},
{
"created": "Sat, 6 Jun 2020 12:31:52 GMT",
"version": "v2"
}
] | 2020-07-24 | [
[
"Castagno",
"Jeremy",
""
],
[
"Atkins",
"Ella",
""
]
] | This paper presents Polylidar, an efficient algorithm to extract non-convex polygons from 2D point sets, including interior holes. Plane segmented point clouds can be input into Polylidar to extract their polygonal counterpart, thereby reducing map size and improving visualization. The algorithm begins by triangulating the point set and filtering triangles by user configurable parameters such as triangle edge length. Next, connected triangles are extracted into triangular mesh regions representing the shape of the point set. Finally each region is converted to a polygon through a novel boundary following method which accounts for holes. Real-world and synthetic benchmarks are presented to comparatively evaluate Polylidar speed and accuracy. Results show comparable accuracy and more than four times speedup compared to other concave polygon extraction methods. |
2011.14214 | Anadi Chaman | Anadi Chaman (1), Ivan Dokmani\'c (2) ((1) University of Illinois at
Urbana-Champaign, (2) University of Basel) | Truly shift-invariant convolutional neural networks | null | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Thanks to the use of convolution and pooling layers, convolutional neural
networks were for a long time thought to be shift-invariant. However, recent
works have shown that the output of a CNN can change significantly with small
shifts in input: a problem caused by the presence of downsampling (stride)
layers. The existing solutions rely either on data augmentation or on
anti-aliasing, both of which have limitations and neither of which enables
perfect shift invariance. Additionally, the gains obtained from these methods
do not extend to image patterns not seen during training. To address these
challenges, we propose adaptive polyphase sampling (APS), a simple sub-sampling
scheme that allows convolutional neural networks to achieve 100% consistency in
classification performance under shifts, without any loss in accuracy. With
APS, the networks exhibit perfect consistency to shifts even before training,
making it the first approach that makes convolutional neural networks truly
shift-invariant.
| [
{
"created": "Sat, 28 Nov 2020 20:57:35 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Dec 2020 12:46:12 GMT",
"version": "v2"
},
{
"created": "Fri, 4 Dec 2020 12:18:15 GMT",
"version": "v3"
},
{
"created": "Tue, 30 Mar 2021 19:47:57 GMT",
"version": "v4"
}
] | 2021-04-01 | [
[
"Chaman",
"Anadi",
""
],
[
"Dokmanić",
"Ivan",
""
]
] | Thanks to the use of convolution and pooling layers, convolutional neural networks were for a long time thought to be shift-invariant. However, recent works have shown that the output of a CNN can change significantly with small shifts in input: a problem caused by the presence of downsampling (stride) layers. The existing solutions rely either on data augmentation or on anti-aliasing, both of which have limitations and neither of which enables perfect shift invariance. Additionally, the gains obtained from these methods do not extend to image patterns not seen during training. To address these challenges, we propose adaptive polyphase sampling (APS), a simple sub-sampling scheme that allows convolutional neural networks to achieve 100% consistency in classification performance under shifts, without any loss in accuracy. With APS, the networks exhibit perfect consistency to shifts even before training, making it the first approach that makes convolutional neural networks truly shift-invariant. |
1501.02035 | EPTCS | Adri\'an Riesco (Universidad Complutense de Madrid), Juan
Rodr\'iguez-Hortal\'a (Lambdoop Solutions) | Lifting Term Rewriting Derivations in Constructor Systems by Using
Generators | In Proceedings PROLE 2014, arXiv:1501.01693 | EPTCS 173, 2015, pp. 87-99 | 10.4204/EPTCS.173.7 | null | cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Narrowing is a procedure that was first studied in the context of equational
E-unification and that has been used in a wide range of applications. The
classic completeness result due to Hullot states that any term rewriting
derivation starting from an instance of an expression can be "lifted" to a
narrowing derivation, whenever the substitution employed is normalized. In this
paper we adapt the generator-based extra-variables-elimination transformation
used in functional-logic programming to overcome that limitation, so we are
able to lift term rewriting derivations starting from arbitrary instances of
expressions. The proposed technique is limited to left-linear constructor
systems and to derivations reaching a ground expression. We also present a
Maude-based implementation of the technique, using natural rewriting for the
on-demand evaluation strategy.
| [
{
"created": "Fri, 9 Jan 2015 04:00:31 GMT",
"version": "v1"
}
] | 2019-08-15 | [
[
"Riesco",
"Adrián",
"",
"Universidad Complutense de Madrid"
],
[
"Rodríguez-Hortalá",
"Juan",
"",
"Lambdoop Solutions"
]
] | Narrowing is a procedure that was first studied in the context of equational E-unification and that has been used in a wide range of applications. The classic completeness result due to Hullot states that any term rewriting derivation starting from an instance of an expression can be "lifted" to a narrowing derivation, whenever the substitution employed is normalized. In this paper we adapt the generator-based extra-variables-elimination transformation used in functional-logic programming to overcome that limitation, so we are able to lift term rewriting derivations starting from arbitrary instances of expressions. The proposed technique is limited to left-linear constructor systems and to derivations reaching a ground expression. We also present a Maude-based implementation of the technique, using natural rewriting for the on-demand evaluation strategy. |
2203.15507 | Bhagyashri Telsang | Bhagyashri Telsang and Seddik Djouadi | Computation of Centroidal Voronoi Tessellations in High Dimensional
spaces | null | null | null | null | cs.CG | http://creativecommons.org/licenses/by/4.0/ | Owing to the natural interpretation and various desirable mathematical
properties, centroidal Voronoi tessellations (CVT) have found a wide range of
applications and correspondingly a vast development in their literature.
However, the computation of CVT in higher dimensional spaces still remains
difficult. In this paper, we exploit the non-uniqueness of CVTs in higher
dimensional spaces for their computation. We construct such high dimensional
tessellations from CVTs in one-dimensional spaces. We then prove that such a
tessellation is centroidal under the condition of independence among densities
over the one-dimensional spaces considered. Various numerical evaluations
back up the theoretical result through the low energy of the tessellations. The
resulting grid-like tessellations are obtained efficiently with minimal
computation time.
| [
{
"created": "Thu, 24 Mar 2022 20:31:27 GMT",
"version": "v1"
}
] | 2022-03-30 | [
[
"Telsang",
"Bhagyashri",
""
],
[
"Djouadi",
"Seddik",
""
]
] | Owing to the natural interpretation and various desirable mathematical properties, centroidal Voronoi tessellations (CVT) have found a wide range of applications and correspondingly a vast development in their literature. However, the computation of CVT in higher dimensional spaces still remains difficult. In this paper, we exploit the non-uniqueness of CVTs in higher dimensional spaces for their computation. We construct such high dimensional tessellations from CVTs in one-dimensional spaces. We then prove that such a tessellation is centroidal under the condition of independence among densities over the one-dimensional spaces considered. Various numerical evaluations back up the theoretical result through the low energy of the tessellations. The resulting grid-like tessellations are obtained efficiently with minimal computation time. |
1803.00259 | Jun Zhao | Jun Zhao, Guang Qiu, Ziyu Guan, Wei Zhao, Xiaofei He | Deep Reinforcement Learning for Sponsored Search Real-time Bidding | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bidding optimization is one of the most critical problems in online
advertising. Sponsored search (SS) auction, due to the randomness of user query
behavior and platform nature, usually adopts keyword-level bidding strategies.
In contrast, the display advertising (DA), as a relatively simpler scenario for
auction, has taken advantage of real-time bidding (RTB) to boost the
performance for advertisers. In this paper, we consider the RTB problem in
sponsored search auction, named SS-RTB. SS-RTB has a much more complex dynamic
environment, due to stochastic user query behavior and more complex bidding
policies based on multiple keywords of an ad. Most previous methods for DA
cannot be applied. We propose a reinforcement learning (RL) solution for
handling the complex dynamic environment. Although some RL methods have been
proposed for online advertising, they all fail to address the "environment
changing" problem: the state transition probabilities vary between two days.
Motivated by the observation that auction sequences of two days share similar
transition patterns at a proper aggregation level, we formulate a robust MDP
model at hour-aggregation level of the auction data and propose a
control-by-model framework for SS-RTB. Rather than generating bid prices
directly, we decide a bidding model for impressions of each hour and perform
real-time bidding accordingly. We also extend the method to handle the
multi-agent problem. We deployed the SS-RTB system in the e-commerce search
auction platform of Alibaba. Empirical experiments of offline evaluation and
online A/B test demonstrate the effectiveness of our method.
| [
{
"created": "Thu, 1 Mar 2018 09:04:37 GMT",
"version": "v1"
}
] | 2018-03-02 | [
[
"Zhao",
"Jun",
""
],
[
"Qiu",
"Guang",
""
],
[
"Guan",
"Ziyu",
""
],
[
"Zhao",
"Wei",
""
],
[
"He",
"Xiaofei",
""
]
] | Bidding optimization is one of the most critical problems in online advertising. Sponsored search (SS) auction, due to the randomness of user query behavior and platform nature, usually adopts keyword-level bidding strategies. In contrast, the display advertising (DA), as a relatively simpler scenario for auction, has taken advantage of real-time bidding (RTB) to boost the performance for advertisers. In this paper, we consider the RTB problem in sponsored search auction, named SS-RTB. SS-RTB has a much more complex dynamic environment, due to stochastic user query behavior and more complex bidding policies based on multiple keywords of an ad. Most previous methods for DA cannot be applied. We propose a reinforcement learning (RL) solution for handling the complex dynamic environment. Although some RL methods have been proposed for online advertising, they all fail to address the "environment changing" problem: the state transition probabilities vary between two days. Motivated by the observation that auction sequences of two days share similar transition patterns at a proper aggregation level, we formulate a robust MDP model at hour-aggregation level of the auction data and propose a control-by-model framework for SS-RTB. Rather than generating bid prices directly, we decide a bidding model for impressions of each hour and perform real-time bidding accordingly. We also extend the method to handle the multi-agent problem. We deployed the SS-RTB system in the e-commerce search auction platform of Alibaba. Empirical experiments of offline evaluation and online A/B test demonstrate the effectiveness of our method. |
2011.12599 | Valentina Anita Carriero | Valentina Anita Carriero, Marilena Daquino, Aldo Gangemi, Andrea
Giovanni Nuzzolese, Silvio Peroni, Valentina Presutti, Francesca Tomasi | The Landscape of Ontology Reuse Approaches | null | null | 10.3233/SSW200033 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Ontology reuse aims to foster interoperability and facilitate knowledge
reuse. Several approaches are typically evaluated by ontology engineers when
bootstrapping a new project. However, current practices are often motivated by
subjective, case-by-case decisions, which hamper the definition of a
recommended behaviour. In this chapter we argue that to date there are no
effective solutions for supporting developers' decision-making process when
deciding on an ontology reuse strategy. The objective is twofold: (i) to survey
current approaches to ontology reuse, presenting motivations, strategies,
benefits and limits, and (ii) to analyse two representative approaches and
discuss their merits.
| [
{
"created": "Wed, 25 Nov 2020 09:21:07 GMT",
"version": "v1"
}
] | 2021-01-01 | [
[
"Carriero",
"Valentina Anita",
""
],
[
"Daquino",
"Marilena",
""
],
[
"Gangemi",
"Aldo",
""
],
[
"Nuzzolese",
"Andrea Giovanni",
""
],
[
"Peroni",
"Silvio",
""
],
[
"Presutti",
"Valentina",
""
],
[
"Tomasi",
"Francesca",
""
]
] | Ontology reuse aims to foster interoperability and facilitate knowledge reuse. Several approaches are typically evaluated by ontology engineers when bootstrapping a new project. However, current practices are often motivated by subjective, case-by-case decisions, which hamper the definition of a recommended behaviour. In this chapter we argue that to date there are no effective solutions for supporting developers' decision-making process when deciding on an ontology reuse strategy. The objective is twofold: (i) to survey current approaches to ontology reuse, presenting motivations, strategies, benefits and limits, and (ii) to analyse two representative approaches and discuss their merits. |
1709.09662 | Jeffrey Johnson | Jeffrey Kane Johnson | Image Space Potential Fields: Constant Size Environment Representation
for Vision-based Subsumption Control Architectures | Maeve Automation Technical Report. arXiv admin note: text overlap
with arXiv:1709.03947 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This technical report presents an environment representation for use in
vision-based navigation. The representation has two useful properties: 1) it
has constant size, which can enable strong run-time guarantees to be made for
control algorithms using it, and 2) it is structurally similar to a camera
image space, which effectively allows control to operate in the sensor space
rather than employing difficult, and often inaccurate, projections into a
structurally different control space (e.g. Euclidean). The presented
representation is intended to form the basis of a vision-based subsumption
control architecture.
| [
{
"created": "Tue, 26 Sep 2017 22:02:53 GMT",
"version": "v1"
}
] | 2017-09-29 | [
[
"Johnson",
"Jeffrey Kane",
""
]
] | This technical report presents an environment representation for use in vision-based navigation. The representation has two useful properties: 1) it has constant size, which can enable strong run-time guarantees to be made for control algorithms using it, and 2) it is structurally similar to a camera image space, which effectively allows control to operate in the sensor space rather than employing difficult, and often inaccurate, projections into a structurally different control space (e.g. Euclidean). The presented representation is intended to form the basis of a vision-based subsumption control architecture. |
1511.04023 | Qian Ma | Qian Ma, Ya-Feng Liu, and Jianwei Huang | Time and Location Aware Mobile Data Pricing | This manuscript serves as the online technical report of the article
accepted by IEEE Transactions on Mobile Computing | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mobile users' correlated mobility and data consumption patterns often lead to
severe cellular network congestion in peak hours and hot spots. This paper
presents an optimal design of time and location aware mobile data pricing,
which incentivizes users to smooth traffic and reduce network congestion. We
derive the optimal pricing scheme through analyzing a two-stage decision
process, where the operator determines the time and location aware prices by
minimizing his total cost in Stage I, and each mobile user schedules his mobile
traffic by maximizing his payoff (i.e., utility minus payment) in Stage II. We
formulate the two-stage decision problem as a bilevel optimization problem, and
propose a derivative-free algorithm to solve the problem for any increasing
concave user utility functions. We further develop low complexity algorithms
for the commonly used logarithmic and linear utility functions. The optimal
pricing scheme ensures a win-win situation for the operator and users.
Simulations show that the operator can reduce the cost by up to 97.52% in the
logarithmic utility case and 98.70% in the linear utility case, and users can
increase their payoff by up to 79.69% and 106.10% for the two types of
utilities, respectively, compared with a time and location independent pricing
benchmark. Our study suggests that the operator should provide price discounts
at less crowded time slots and locations, and the discounts need to be
significant when the operator's cost of provisioning excessive traffic is high
or users' willingness to delay traffic is low.
| [
{
"created": "Thu, 12 Nov 2015 19:30:16 GMT",
"version": "v1"
}
] | 2015-11-13 | [
[
"Ma",
"Qian",
""
],
[
"Liu",
"Ya-Feng",
""
],
[
"Huang",
"Jianwei",
""
]
] | Mobile users' correlated mobility and data consumption patterns often lead to severe cellular network congestion in peak hours and hot spots. This paper presents an optimal design of time and location aware mobile data pricing, which incentivizes users to smooth traffic and reduce network congestion. We derive the optimal pricing scheme through analyzing a two-stage decision process, where the operator determines the time and location aware prices by minimizing his total cost in Stage I, and each mobile user schedules his mobile traffic by maximizing his payoff (i.e., utility minus payment) in Stage II. We formulate the two-stage decision problem as a bilevel optimization problem, and propose a derivative-free algorithm to solve the problem for any increasing concave user utility functions. We further develop low complexity algorithms for the commonly used logarithmic and linear utility functions. The optimal pricing scheme ensures a win-win situation for the operator and users. Simulations show that the operator can reduce the cost by up to 97.52% in the logarithmic utility case and 98.70% in the linear utility case, and users can increase their payoff by up to 79.69% and 106.10% for the two types of utilities, respectively, compared with a time and location independent pricing benchmark. Our study suggests that the operator should provide price discounts at less crowded time slots and locations, and the discounts need to be significant when the operator's cost of provisioning excessive traffic is high or users' willingness to delay traffic is low. |
1608.07952 | Alex Olieman | Alex Olieman, Jaap Kamps, Gleb Satyukov, Emil de Valk | Topical Generalization for Presentation of User Profiles | (to be) presented at DIR'16, November 25, 2016, Delft, The
Netherlands | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fine-grained user profile generation approaches have made it increasingly
feasible to display on a profile page in which topics a user has expertise or
interest. Earlier work on topical user profiling has been directed at enhancing
search and personalization functionality, but making such profiles useful for
human consumption presents new challenges. With this work, we have taken a
first step toward a semantic layout mode for topical user profiles. We have
developed a topical generalization approach which finds coherent groups of
topics and adds labels to them, based on their association with broader topics
in the Wikipedia category graph. A nested layout mode, employing topical
generalization, is compared with a simpler flat layout mode in our user study.
The results indicate that users favor the nested structure over flat profiles,
but tend to overlook the specific topics on the lower level. We propose a third
layout mode to address this issue.
| [
{
"created": "Mon, 29 Aug 2016 08:47:12 GMT",
"version": "v1"
},
{
"created": "Sat, 15 Oct 2016 05:24:51 GMT",
"version": "v2"
},
{
"created": "Sun, 20 Nov 2016 01:52:00 GMT",
"version": "v3"
}
] | 2016-11-22 | [
[
"Olieman",
"Alex",
""
],
[
"Kamps",
"Jaap",
""
],
[
"Satyukov",
"Gleb",
""
],
[
"de Valk",
"Emil",
""
]
] | Fine-grained user profile generation approaches have made it increasingly feasible to display on a profile page in which topics a user has expertise or interest. Earlier work on topical user profiling has been directed at enhancing search and personalization functionality, but making such profiles useful for human consumption presents new challenges. With this work, we have taken a first step toward a semantic layout mode for topical user profiles. We have developed a topical generalization approach which finds coherent groups of topics and adds labels to them, based on their association with broader topics in the Wikipedia category graph. A nested layout mode, employing topical generalization, is compared with a simpler flat layout mode in our user study. The results indicate that users favor the nested structure over flat profiles, but tend to overlook the specific topics on the lower level. We propose a third layout mode to address this issue. |
1904.01554 | Ali Payani | Ali Payani and Faramarz Fekri | Learning Algorithms via Neural Logic Networks | Under Review in ICLM2019 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel learning paradigm for Deep Neural Networks (DNN) by using
Boolean logic algebra. We first present the basic differentiable operators of a
Boolean system such as conjunction, disjunction and exclusive-OR and show how
these elementary operators can be combined in a simple and meaningful way to
form Neural Logic Networks (NLNs). We examine the effectiveness of the proposed
NLN framework in learning Boolean functions and discrete-algorithmic tasks. We
demonstrate that, in contrast to the implicit learning in MLP approach, the
proposed neural logic networks can learn the logical functions explicitly that
can be verified and interpreted by human. In particular, we propose a new
framework for learning the inductive logic programming (ILP) problems by
exploiting the explicit representational power of NLN. We show the proposed
neural ILP solver is capable of feats such as predicate invention and recursion
and can outperform the current state of the art neural ILP solvers using a
variety of benchmark tasks such as decimal addition and multiplication, and
sorting on ordered list.
| [
{
"created": "Tue, 2 Apr 2019 17:17:02 GMT",
"version": "v1"
}
] | 2019-04-10 | [
[
"Payani",
"Ali",
""
],
[
"Fekri",
"Faramarz",
""
]
] | We propose a novel learning paradigm for Deep Neural Networks (DNN) by using Boolean logic algebra. We first present the basic differentiable operators of a Boolean system such as conjunction, disjunction and exclusive-OR and show how these elementary operators can be combined in a simple and meaningful way to form Neural Logic Networks (NLNs). We examine the effectiveness of the proposed NLN framework in learning Boolean functions and discrete-algorithmic tasks. We demonstrate that, in contrast to the implicit learning in MLP approach, the proposed neural logic networks can learn the logical functions explicitly that can be verified and interpreted by human. In particular, we propose a new framework for learning the inductive logic programming (ILP) problems by exploiting the explicit representational power of NLN. We show the proposed neural ILP solver is capable of feats such as predicate invention and recursion and can outperform the current state of the art neural ILP solvers using a variety of benchmark tasks such as decimal addition and multiplication, and sorting on ordered list. |
cs/0702099 | Ruoheng Liu | Ruoheng Liu, Ivana Maric, Predrag Spasojevic, and Roy D. Yates | Discrete Memoryless Interference and Broadcast Channels with
Confidential Messages: Secrecy Rate Regions | to appear Special Issue of IEEE Transactions on Information Theory on
Information Theoretic Security | null | 10.1109/TIT.2008.921879 | null | cs.IT math.IT | null | We study information-theoretic security for discrete memoryless interference
and broadcast channels with independent confidential messages sent to two
receivers. Confidential messages are transmitted to their respective receivers
with information-theoretic secrecy. That is, each receiver is kept in total
ignorance with respect to the message intended for the other receiver. The
secrecy level is measured by the equivocation rate at the eavesdropping
receiver. In this paper, we present inner and outer bounds on secrecy capacity
regions for these two communication systems. The derived outer bounds have an
identical mutual information expression that applies to both channel models.
The difference is in the input distributions over which the expression is
optimized. The inner bound rate regions are achieved by random binning
techniques. For the broadcast channel, a double-binning coding scheme allows
for both joint encoding and preserving of confidentiality. Furthermore, we show
that, for a special case of the interference channel, referred to as the switch
channel, the two bounds meet. Finally, we describe several transmission
schemes for Gaussian interference channels and derive their achievable rate
regions while ensuring mutual information-theoretic secrecy. An encoding scheme
in which transmitters dedicate some of their power to create artificial noise
is proposed and shown to outperform both time-sharing and simple multiplexed
transmission of the confidential messages.
| [
{
"created": "Sat, 17 Feb 2007 21:02:37 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Dec 2007 23:12:34 GMT",
"version": "v2"
}
] | 2016-11-15 | [
[
"Liu",
"Ruoheng",
""
],
[
"Maric",
"Ivana",
""
],
[
"Spasojevic",
"Predrag",
""
],
[
"Yates",
"Roy D.",
""
]
] | We study information-theoretic security for discrete memoryless interference and broadcast channels with independent confidential messages sent to two receivers. Confidential messages are transmitted to their respective receivers with information-theoretic secrecy. That is, each receiver is kept in total ignorance with respect to the message intended for the other receiver. The secrecy level is measured by the equivocation rate at the eavesdropping receiver. In this paper, we present inner and outer bounds on secrecy capacity regions for these two communication systems. The derived outer bounds have an identical mutual information expression that applies to both channel models. The difference is in the input distributions over which the expression is optimized. The inner bound rate regions are achieved by random binning techniques. For the broadcast channel, a double-binning coding scheme allows for both joint encoding and preserving of confidentiality. Furthermore, we show that, for a special case of the interference channel, referred to as the switch channel, the two bounds meet. Finally, we describe several transmission schemes for Gaussian interference channels and derive their achievable rate regions while ensuring mutual information-theoretic secrecy. An encoding scheme in which transmitters dedicate some of their power to create artificial noise is proposed and shown to outperform both time-sharing and simple multiplexed transmission of the confidential messages. |
1211.6166 | Tao Zhu | Tao Zhu, David Phipps, Adam Pridgen, Jedidiah R. Crandall, Dan S.
Wallach | Tracking and Quantifying Censorship on a Chinese Microblogging Site | null | null | null | null | cs.IR cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present measurements and analysis of censorship on Weibo, a popular
microblogging site in China. Since we were limited in the rate at which we
could download posts, we identified users likely to participate in sensitive
topics and recursively followed their social contacts. We also leveraged new
natural language processing techniques to pick out trending topics despite the
use of neologisms, named entities, and informal language usage in Chinese
social media. We found that Weibo dynamically adapts to the changing interests
of its users through multiple layers of filtering. The filtering includes both
retroactively searching posts by keyword or repost links to delete them, and
rejecting posts as they are posted. The trend of sensitive topics is
short-lived, suggesting that the censorship is effective in stopping the
"viral" spread of sensitive issues. We also give evidence that sensitive topics
in Weibo only scarcely propagate beyond a core of sensitive posters.
| [
{
"created": "Mon, 26 Nov 2012 23:54:27 GMT",
"version": "v1"
}
] | 2012-11-28 | [
[
"Zhu",
"Tao",
""
],
[
"Phipps",
"David",
""
],
[
"Pridgen",
"Adam",
""
],
[
"Crandall",
"Jedidiah R.",
""
],
[
"Wallach",
"Dan S.",
""
]
] | We present measurements and analysis of censorship on Weibo, a popular microblogging site in China. Since we were limited in the rate at which we could download posts, we identified users likely to participate in sensitive topics and recursively followed their social contacts. We also leveraged new natural language processing techniques to pick out trending topics despite the use of neologisms, named entities, and informal language usage in Chinese social media. We found that Weibo dynamically adapts to the changing interests of its users through multiple layers of filtering. The filtering includes both retroactively searching posts by keyword or repost links to delete them, and rejecting posts as they are posted. The trend of sensitive topics is short-lived, suggesting that the censorship is effective in stopping the "viral" spread of sensitive issues. We also give evidence that sensitive topics in Weibo only scarcely propagate beyond a core of sensitive posters. |
2202.00598 | Sigrun May | Sigrun May, Sven Hartmann and Frank Klawonn | Combined Pruning for Nested Cross-Validation to Accelerate Automated
Hyperparameter Optimization for Embedded Feature Selection in
High-Dimensional Data with Very Small Sample Sizes | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Background: Embedded feature selection in high-dimensional data with very
small sample sizes requires optimized hyperparameters for the model building
process. For this hyperparameter optimization, nested cross-validation must be
applied to avoid a biased performance estimation. The resulting repeated
training with high-dimensional data leads to very long computation times.
Moreover, it is likely to observe a high variance in the individual performance
evaluation metrics caused by outliers in tiny validation sets. Therefore, early
stopping applying standard pruning algorithms to save time risks discarding
promising hyperparameter sets.
Result: To speed up feature selection for high-dimensional data with tiny
sample size, we adapt the use of a state-of-the-art asynchronous successive
halving pruner. In addition, we combine it with two complementary pruning
strategies based on domain or prior knowledge. One pruning strategy immediately
stops computing trials with semantically meaningless results for the selected
hyperparameter combinations. The other is a new extrapolating threshold pruning
strategy suitable for nested cross-validation with a high variance of
performance evaluation metrics. In repeated experiments, our combined pruning
strategy keeps all promising trials. At the same time, the calculation time is
substantially reduced compared to using a state-of-the-art asynchronous
successive halving pruner alone. Up to 81.3\% fewer models were trained
achieving the same optimization result.
Conclusion: The proposed combined pruning strategy accelerates data analysis
or enables deeper searches for hyperparameters within the same computation
time. This leads to significant savings in time, money and energy consumption,
opening the door to advanced, time-consuming analyses.
| [
{
"created": "Tue, 1 Feb 2022 17:42:37 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Sep 2022 14:59:00 GMT",
"version": "v2"
}
] | 2022-09-13 | [
[
"May",
"Sigrun",
""
],
[
"Hartmann",
"Sven",
""
],
[
"Klawonn",
"Frank",
""
]
] ] | Background: Embedded feature selection in high-dimensional data with very small sample sizes requires optimized hyperparameters for the model building process. For this hyperparameter optimization, nested cross-validation must be applied to avoid a biased performance estimation. The resulting repeated training with high-dimensional data leads to very long computation times. Moreover, it is likely to observe a high variance in the individual performance evaluation metrics caused by outliers in tiny validation sets. Therefore, early stopping applying standard pruning algorithms to save time risks discarding promising hyperparameter sets. Result: To speed up feature selection for high-dimensional data with tiny sample size, we adapt the use of a state-of-the-art asynchronous successive halving pruner. In addition, we combine it with two complementary pruning strategies based on domain or prior knowledge. One pruning strategy immediately stops computing trials with semantically meaningless results for the selected hyperparameter combinations. The other is a new extrapolating threshold pruning strategy suitable for nested cross-validation with a high variance of performance evaluation metrics. In repeated experiments, our combined pruning strategy keeps all promising trials. At the same time, the calculation time is substantially reduced compared to using a state-of-the-art asynchronous successive halving pruner alone. Up to 81.3\% fewer models were trained achieving the same optimization result. Conclusion: The proposed combined pruning strategy accelerates data analysis or enables deeper searches for hyperparameters within the same computation time. This leads to significant savings in time, money and energy consumption, opening the door to advanced, time-consuming analyses. |
2308.06111 | Lars Hillebrand | Lars Hillebrand, Armin Berger, Tobias Deu{\ss}er, Tim Dilmaghani,
Mohamed Khaled, Bernd Kliem, R\"udiger Loitz, Maren Pielka, David Leonhard,
Christian Bauckhage, Rafet Sifa | Improving Zero-Shot Text Matching for Financial Auditing with Large
Language Models | Accepted at DocEng 2023, 4 pages, 1 figure, 2 tables | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Auditing financial documents is a very tedious and time-consuming process. As
of today, it can already be simplified by employing AI-based solutions to
recommend relevant text passages from a report for each legal requirement of
rigorous accounting standards. However, these methods need to be fine-tuned
regularly, and they require abundant annotated data, which is often lacking in
industrial environments. Hence, we present ZeroShotALI, a novel recommender
system that leverages a state-of-the-art large language model (LLM) in
conjunction with a domain-specifically optimized transformer-based
text-matching solution. We find that a two-step approach of first retrieving a
number of best matching document sections per legal requirement with a custom
BERT-based model and second filtering these selections using an LLM yields
significant performance improvements over existing approaches.
| [
{
"created": "Fri, 11 Aug 2023 12:55:09 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Aug 2023 07:45:17 GMT",
"version": "v2"
}
] | 2023-08-15 | [
[
"Hillebrand",
"Lars",
""
],
[
"Berger",
"Armin",
""
],
[
"Deußer",
"Tobias",
""
],
[
"Dilmaghani",
"Tim",
""
],
[
"Khaled",
"Mohamed",
""
],
[
"Kliem",
"Bernd",
""
],
[
"Loitz",
"Rüdiger",
""
],
[
"Pielka",
"Maren",
""
],
[
"Leonhard",
"David",
""
],
[
"Bauckhage",
"Christian",
""
],
[
"Sifa",
"Rafet",
""
]
] | Auditing financial documents is a very tedious and time-consuming process. As of today, it can already be simplified by employing AI-based solutions to recommend relevant text passages from a report for each legal requirement of rigorous accounting standards. However, these methods need to be fine-tuned regularly, and they require abundant annotated data, which is often lacking in industrial environments. Hence, we present ZeroShotALI, a novel recommender system that leverages a state-of-the-art large language model (LLM) in conjunction with a domain-specifically optimized transformer-based text-matching solution. We find that a two-step approach of first retrieving a number of best matching document sections per legal requirement with a custom BERT-based model and second filtering these selections using an LLM yields significant performance improvements over existing approaches. |
1110.3470 | David Lievens | David Lievens and Bill Harrison | Symmetric Encapsulated Multi-Methods | This paper is a variant of David Lievens, William Harrison: Symmetric
encapsulated multi-methods to abstract over application structure. SAC 2009:
1873-1880 that includes full details of the proof of the type soundness
result stated in the original | null | null | null | cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In object systems, classes take the role of modules, and interfaces consist
of methods. Because methods are encapsulated in objects, interfaces in object
systems do not allow abstracting over \emph{where} methods are implemented.
This implies that any change to the implementation structure may cause a
rippling effect. Sometimes this unduly restricts the scope of software
evolution, in particular for methods with multiple parameters where there is no
clear owner. We propose a simple scheme where symmetric methods may be defined
in the classes of any of their parameters. This allows client code to be
oblivious of what class contains a method implementation, and therefore immune
against it changing. When combined with multiple dynamic dispatch, this scheme
allows for modular extensibility where a method defined in one class is
overridden by a method defined in a class that is not its subtype. In this
paper, we illustrate the scheme by extending a core calculus of class-based
languages with these symmetric encapsulated multi-methods, and prove the result
sound.
| [
{
"created": "Sun, 16 Oct 2011 10:52:53 GMT",
"version": "v1"
}
] | 2011-10-18 | [
[
"Lievens",
"David",
""
],
[
"Harrison",
"Bill",
""
]
] | In object systems, classes take the role of modules, and interfaces consist of methods. Because methods are encapsulated in objects, interfaces in object systems do not allow abstracting over \emph{where} methods are implemented. This implies that any change to the implementation structure may cause a rippling effect. Sometimes this unduly restricts the scope of software evolution, in particular for methods with multiple parameters where there is no clear owner. We propose a simple scheme where symmetric methods may be defined in the classes of any of their parameters. This allows client code to be oblivious of what class contains a method implementation, and therefore immune against it changing. When combined with multiple dynamic dispatch, this scheme allows for modular extensibility where a method defined in one class is overridden by a method defined in a class that is not its subtype. In this paper, we illustrate the scheme by extending a core calculus of class-based languages with these symmetric encapsulated multi-methods, and prove the result sound. |
2103.11061 | Abu Md Niamul Taufique | Abu Md Niamul Taufique, Navya Nagananda, Andreas Savakis | Visualization of Deep Transfer Learning In SAR Imagery | 4 pages, 5 figures | IGARSS 2020 - 2020 IEEE International Geoscience and Remote
Sensing Symposium | 10.1109/IGARSS39084.2020.9324490 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Synthetic Aperture Radar (SAR) imagery has diverse applications in land and
marine surveillance. Unlike electro-optical (EO) systems, these systems are not
affected by weather conditions and can be used during both day and night. With
the growing importance of SAR imagery, it would be desirable if models trained
on widely available EO datasets can also be used for SAR images. In this work,
we consider transfer learning to leverage deep features from a network trained
on an EO ships dataset and generate predictions on SAR imagery. Furthermore, by
exploring the network activations in the form of class-activation maps (CAMs),
we visualize the transfer learning process to SAR imagery and gain insight on
how a deep network interprets a new modality.
| [
{
"created": "Sat, 20 Mar 2021 00:16:15 GMT",
"version": "v1"
}
] | 2021-03-23 | [
[
"Taufique",
"Abu Md Niamul",
""
],
[
"Nagananda",
"Navya",
""
],
[
"Savakis",
"Andreas",
""
]
] ] | Synthetic Aperture Radar (SAR) imagery has diverse applications in land and marine surveillance. Unlike electro-optical (EO) systems, these systems are not affected by weather conditions and can be used during both day and night. With the growing importance of SAR imagery, it would be desirable if models trained on widely available EO datasets can also be used for SAR images. In this work, we consider transfer learning to leverage deep features from a network trained on an EO ships dataset and generate predictions on SAR imagery. Furthermore, by exploring the network activations in the form of class-activation maps (CAMs), we visualize the transfer learning process to SAR imagery and gain insight on how a deep network interprets a new modality. |
1911.13214 | Lionel Eyraud-Dubois | Julien Herrmann (UB, LaBRI, TADAAM), Olivier Beaumont (HiePACS, UB,
LaBRI), Lionel Eyraud-Dubois (HiePACS, UB, LaBRI), Julien Hermann, Alexis
Joly (ZENITH, LIRMM, UM), Alena Shilova (HiePACS, UB, LaBRI) | Optimal checkpointing for heterogeneous chains: how to train deep neural
networks with limited memory | null | null | null | null | cs.LG cs.DC cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a new activation checkpointing method which allows one to
significantly decrease memory usage when training Deep Neural Networks with the
back-propagation algorithm. Similarly to checkpointing techniques coming from
the literature on Automatic Differentiation, it consists in dynamically
selecting the forward activations that are saved during the training phase, and
then automatically recomputing missing activations from those previously
recorded. We propose an original computation model that combines two types of
activation savings: either only storing the layer inputs, or recording the
complete history of operations that produced the outputs (this uses more
memory, but requires fewer recomputations in the backward phase), and we
provide an algorithm to compute the optimal computation sequence for this
model. This paper also describes a PyTorch implementation that processes the
entire chain, dealing with any sequential DNN whose internal layers may be
arbitrarily complex and automatically executing it according to the optimal
checkpointing strategy computed given a memory limit. Through extensive
experiments, we show that our implementation consistently outperforms existing
checkpointing approaches for a large class of networks, image sizes and batch
sizes.
| [
{
"created": "Wed, 27 Nov 2019 13:05:11 GMT",
"version": "v1"
}
] | 2019-12-02 | [
[
"Herrmann",
"Julien",
"",
"UB, LaBRI, TADAAM"
],
[
"Beaumont",
"Olivier",
"",
"HiePACS, UB,\n LaBRI"
],
[
"Eyraud-Dubois",
"Lionel",
"",
"HiePACS, UB, LaBRI"
],
[
"Hermann",
"Julien",
"",
"ZENITH, LIRMM, UM"
],
[
"Joly",
"Alexis",
"",
"ZENITH, LIRMM, UM"
],
[
"Shilova",
"Alena",
"",
"HiePACS, UB, LaBRI"
]
] ] | This paper introduces a new activation checkpointing method which allows one to significantly decrease memory usage when training Deep Neural Networks with the back-propagation algorithm. Similarly to checkpointing techniques coming from the literature on Automatic Differentiation, it consists in dynamically selecting the forward activations that are saved during the training phase, and then automatically recomputing missing activations from those previously recorded. We propose an original computation model that combines two types of activation savings: either only storing the layer inputs, or recording the complete history of operations that produced the outputs (this uses more memory, but requires fewer recomputations in the backward phase), and we provide an algorithm to compute the optimal computation sequence for this model. This paper also describes a PyTorch implementation that processes the entire chain, dealing with any sequential DNN whose internal layers may be arbitrarily complex and automatically executing it according to the optimal checkpointing strategy computed given a memory limit. Through extensive experiments, we show that our implementation consistently outperforms existing checkpointing approaches for a large class of networks, image sizes and batch sizes. |
2111.03089 | Rinat Aynulin | Rinat Aynulin, Pavel Chebotarev | Measuring Proximity in Attributed Networks for Community Detection | null | Studies in Computational Intelligence 943 (2021) 27-37 | 10.1007/978-3-030-65347-7_3 | null | cs.SI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Proximity measures on graphs have a variety of applications in network
analysis, including community detection. Previously they have been mainly
studied in the context of networks without attributes. If node attributes are
taken into account, however, this can provide more insight into the network
structure. In this paper, we extend the definition of some well-studied
proximity measures to attributed networks. To account for attributes, several
attribute similarity measures are used. Finally, the obtained proximity
measures are applied to detect the community structure in some real-world
networks using the spectral clustering algorithm.
| [
{
"created": "Thu, 4 Nov 2021 18:07:05 GMT",
"version": "v1"
}
] | 2022-12-06 | [
[
"Aynulin",
"Rinat",
""
],
[
"Chebotarev",
"Pavel",
""
]
] | Proximity measures on graphs have a variety of applications in network analysis, including community detection. Previously they have been mainly studied in the context of networks without attributes. If node attributes are taken into account, however, this can provide more insight into the network structure. In this paper, we extend the definition of some well-studied proximity measures to attributed networks. To account for attributes, several attribute similarity measures are used. Finally, the obtained proximity measures are applied to detect the community structure in some real-world networks using the spectral clustering algorithm. |
1805.04956 | Moritz Lipp | Moritz Lipp and Misiker Tadesse Aga and Michael Schwarz and Daniel
Gruss and Cl\'ementine Maurice and Lukas Raab and Lukas Lamster | Nethammer: Inducing Rowhammer Faults through Network Requests | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A fundamental assumption in software security is that memory contents do not
change unless there is a legitimate deliberate modification. Classical fault
attacks show that this assumption does not hold if the attacker has physical
access. Rowhammer attacks showed that local code execution is already
sufficient to break this assumption. Rowhammer exploits parasitic effects in
DRAM to modify the content of a memory cell without accessing it. Instead,
other memory locations are accessed at a high frequency. All Rowhammer attacks
so far were local attacks, running either in a scripted language or native
code. In this paper, we present Nethammer. Nethammer is the first truly remote
Rowhammer attack, without a single attacker-controlled line of code on the
targeted system. Systems that use uncached memory or flush instructions while
handling network requests, e.g., for interaction with the network device, can
be attacked using Nethammer. Other systems can still be attacked if they are
protected with quality-of-service techniques like Intel CAT. We demonstrate
that the frequency of the cache misses is in all three cases high enough to
induce bit flips. We evaluated different bit flip scenarios. Depending on the
location, the bit flip compromises either the security and integrity of the
system and the data of its users, or it can leave persistent damage on the
system, i.e., persistent denial of service. We investigated Nethammer on
personal computers, servers, and mobile phones. Nethammer is a security
landslide, making the formerly local attack a remote attack.
| [
{
"created": "Sun, 13 May 2018 21:38:29 GMT",
"version": "v1"
}
] | 2018-05-15 | [
[
"Lipp",
"Moritz",
""
],
[
"Aga",
"Misiker Tadesse",
""
],
[
"Schwarz",
"Michael",
""
],
[
"Gruss",
"Daniel",
""
],
[
"Maurice",
"Clémentine",
""
],
[
"Raab",
"Lukas",
""
],
[
"Lamster",
"Lukas",
""
]
] | A fundamental assumption in software security is that memory contents do not change unless there is a legitimate deliberate modification. Classical fault attacks show that this assumption does not hold if the attacker has physical access. Rowhammer attacks showed that local code execution is already sufficient to break this assumption. Rowhammer exploits parasitic effects in DRAM to modify the content of a memory cell without accessing it. Instead, other memory locations are accessed at a high frequency. All Rowhammer attacks so far were local attacks, running either in a scripted language or native code. In this paper, we present Nethammer. Nethammer is the first truly remote Rowhammer attack, without a single attacker-controlled line of code on the targeted system. Systems that use uncached memory or flush instructions while handling network requests, e.g., for interaction with the network device, can be attacked using Nethammer. Other systems can still be attacked if they are protected with quality-of-service techniques like Intel CAT. We demonstrate that the frequency of the cache misses is in all three cases high enough to induce bit flips. We evaluated different bit flip scenarios. Depending on the location, the bit flip compromises either the security and integrity of the system and the data of its users, or it can leave persistent damage on the system, i.e., persistent denial of service. We investigated Nethammer on personal computers, servers, and mobile phones. Nethammer is a security landslide, making the formerly local attack a remote attack. |
2310.02439 | Naiming (Lucy) Liu | Naiming Liu, Shashank Sonkar, Zichao Wang, Simon Woodhead, Richard G.
Baraniuk | Novice Learner and Expert Tutor: Evaluating Math Reasoning Abilities of
Large Language Models with Misconceptions | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We propose novel evaluations for mathematical reasoning capabilities of Large
Language Models (LLMs) based on mathematical misconceptions. Our primary
approach is to simulate LLMs as a novice learner and an expert tutor, aiming to
identify the incorrect answer to a math question resulting from a specific
misconception and to recognize the misconception(s) behind an incorrect answer,
respectively. Contrary to traditional LLM-based mathematical evaluations that
focus on answering math questions correctly, our approach takes inspiration
from principles in educational learning sciences. We explicitly ask LLMs to
mimic a novice learner by answering questions in a specific incorrect manner
based on incomplete knowledge; and to mimic an expert tutor by identifying
misconception(s) corresponding to an incorrect answer to a question. Using
simple grade-school math problems, our experiments reveal that, while LLMs can
easily answer these questions correctly, they struggle to identify 1) the
incorrect answer corresponding to specific incomplete knowledge
(misconceptions); 2) the misconceptions that explain particular incorrect
answers. Our study indicates new opportunities for enhancing LLMs' math
reasoning capabilities, especially on developing robust student simulation and
expert tutoring models in the educational applications such as intelligent
tutoring systems.
| [
{
"created": "Tue, 3 Oct 2023 21:19:50 GMT",
"version": "v1"
}
] | 2023-10-05 | [
[
"Liu",
"Naiming",
""
],
[
"Sonkar",
"Shashank",
""
],
[
"Wang",
"Zichao",
""
],
[
"Woodhead",
"Simon",
""
],
[
"Baraniuk",
"Richard G.",
""
]
] ] | We propose novel evaluations for mathematical reasoning capabilities of Large Language Models (LLMs) based on mathematical misconceptions. Our primary approach is to simulate LLMs as a novice learner and an expert tutor, aiming to identify the incorrect answer to a math question resulting from a specific misconception and to recognize the misconception(s) behind an incorrect answer, respectively. Contrary to traditional LLM-based mathematical evaluations that focus on answering math questions correctly, our approach takes inspiration from principles in educational learning sciences. We explicitly ask LLMs to mimic a novice learner by answering questions in a specific incorrect manner based on incomplete knowledge; and to mimic an expert tutor by identifying misconception(s) corresponding to an incorrect answer to a question. Using simple grade-school math problems, our experiments reveal that, while LLMs can easily answer these questions correctly, they struggle to identify 1) the incorrect answer corresponding to specific incomplete knowledge (misconceptions); 2) the misconceptions that explain particular incorrect answers. Our study indicates new opportunities for enhancing LLMs' math reasoning capabilities, especially on developing robust student simulation and expert tutoring models in the educational applications such as intelligent tutoring systems. |
2407.19546 | Zeyu Zhang | Biao Wu, Yutong Xie, Zeyu Zhang, Minh Hieu Phan, Qi Chen, Ling Chen,
Qi Wu | XLIP: Cross-modal Attention Masked Modelling for Medical Language-Image
Pre-Training | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Vision-and-language pretraining (VLP) in the medical field utilizes
contrastive learning on image-text pairs to achieve effective transfer across
tasks. Yet, current VLP approaches with the masked modelling strategy face two
challenges when applied to the medical domain. First, current models struggle
to accurately reconstruct key pathological features due to the scarcity of
medical data. Second, most methods only adopt either paired image-text or
image-only data, failing to exploit the combination of both paired and unpaired
data. To this end, this paper proposes an XLIP (Masked modelling for medical
Language-Image Pre-training) framework to enhance pathological learning and
feature learning via unpaired data. First, we introduce the attention-masked
image modelling (AttMIM) and entity-driven masked language modelling module
(EntMLM), which learns to reconstruct pathological visual and textual tokens
via multi-modal feature interaction, thus improving medical-enhanced features.
The AttMIM module masks a portion of the image features that are highly
responsive to textual features. This allows XLIP to improve the reconstruction
of highly similar image data in medicine efficiently. Second, our XLIP
capitalizes on unpaired data to enhance multimodal learning by introducing
disease-kind prompts. The experimental results show that XLIP achieves SOTA for
zero-shot and fine-tuning classification performance on five datasets. Our code
will be available at https://github.com/White65534/XLIP
| [
{
"created": "Sun, 28 Jul 2024 17:38:21 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Aug 2024 10:53:37 GMT",
"version": "v2"
}
] | 2024-08-05 | [
[
"Wu",
"Biao",
""
],
[
"Xie",
"Yutong",
""
],
[
"Zhang",
"Zeyu",
""
],
[
"Phan",
"Minh Hieu",
""
],
[
"Chen",
"Qi",
""
],
[
"Chen",
"Ling",
""
],
[
"Wu",
"Qi",
""
]
] ] | Vision-and-language pretraining (VLP) in the medical field utilizes contrastive learning on image-text pairs to achieve effective transfer across tasks. Yet, current VLP approaches with the masked modelling strategy face two challenges when applied to the medical domain. First, current models struggle to accurately reconstruct key pathological features due to the scarcity of medical data. Second, most methods only adopt either paired image-text or image-only data, failing to exploit the combination of both paired and unpaired data. To this end, this paper proposes an XLIP (Masked modelling for medical Language-Image Pre-training) framework to enhance pathological learning and feature learning via unpaired data. First, we introduce the attention-masked image modelling (AttMIM) and entity-driven masked language modelling module (EntMLM), which learns to reconstruct pathological visual and textual tokens via multi-modal feature interaction, thus improving medical-enhanced features. The AttMIM module masks a portion of the image features that are highly responsive to textual features. This allows XLIP to improve the reconstruction of highly similar image data in medicine efficiently. Second, our XLIP capitalizes on unpaired data to enhance multimodal learning by introducing disease-kind prompts. The experimental results show that XLIP achieves SOTA for zero-shot and fine-tuning classification performance on five datasets. Our code will be available at https://github.com/White65534/XLIP |
2012.11067 | Alexey Ignatiev | Alexey Ignatiev, Nina Narodytska, Nicholas Asher, Joao Marques-Silva | On Relating 'Why?' and 'Why Not?' Explanations | null | null | null | null | cs.LG cs.AI cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explanations of Machine Learning (ML) models often address a 'Why?' question.
Such explanations can be related with selecting feature-value pairs which are
sufficient for the prediction. Recent work has investigated explanations that
address a 'Why Not?' question, i.e. finding a change of feature values that
guarantee a change of prediction. Given their goals, these two forms of
explaining predictions of ML models appear to be mostly unrelated. However,
this paper demonstrates otherwise, and establishes a rigorous formal
relationship between 'Why?' and 'Why Not?' explanations. Concretely, the paper
proves that, for any given instance, 'Why?' explanations are minimal hitting
sets of 'Why Not?' explanations and vice-versa. Furthermore, the paper devises
novel algorithms for extracting and enumerating both forms of explanations.
| [
{
"created": "Mon, 21 Dec 2020 01:07:13 GMT",
"version": "v1"
}
] | 2020-12-22 | [
[
"Ignatiev",
"Alexey",
""
],
[
"Narodytska",
"Nina",
""
],
[
"Asher",
"Nicholas",
""
],
[
"Marques-Silva",
"Joao",
""
]
] | Explanations of Machine Learning (ML) models often address a 'Why?' question. Such explanations can be related with selecting feature-value pairs which are sufficient for the prediction. Recent work has investigated explanations that address a 'Why Not?' question, i.e. finding a change of feature values that guarantee a change of prediction. Given their goals, these two forms of explaining predictions of ML models appear to be mostly unrelated. However, this paper demonstrates otherwise, and establishes a rigorous formal relationship between 'Why?' and 'Why Not?' explanations. Concretely, the paper proves that, for any given instance, 'Why?' explanations are minimal hitting sets of 'Why Not?' explanations and vice-versa. Furthermore, the paper devises novel algorithms for extracting and enumerating both forms of explanations. |
2310.20138 | Xinwei Wu | Xinwei Wu, Junzhuo Li, Minghui Xu, Weilong Dong, Shuangzhi Wu, Chao
Bian, Deyi Xiong | DEPN: Detecting and Editing Privacy Neurons in Pretrained Language
Models | EMNLP 2023 | null | null | null | cs.CR cs.CL | http://creativecommons.org/publicdomain/zero/1.0/ | Large language models pretrained on a huge amount of data capture rich
knowledge and information in the training data. The ability of data
memorization and regurgitation in pretrained language models, revealed in
previous studies, brings the risk of data leakage. In order to effectively
reduce these risks, we propose a framework DEPN to Detect and Edit Privacy
Neurons in pretrained language models, partially inspired by knowledge neurons
and model editing. In DEPN, we introduce a novel method, termed as privacy
neuron detector, to locate neurons associated with private information, and
then edit these detected privacy neurons by setting their activations to zero.
Furthermore, we propose a privacy neuron aggregator to dememorize private
information in a batch processing manner. Experimental results show that our
method can significantly and efficiently reduce the exposure of private data
leakage without deteriorating the performance of the model. Additionally, we
empirically demonstrate the relationship between model memorization and privacy
neurons, from multiple perspectives, including model size, training time,
prompts, privacy neuron distribution, illustrating the robustness of our
approach.
| [
{
"created": "Tue, 31 Oct 2023 03:09:36 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Dec 2023 16:14:24 GMT",
"version": "v2"
}
] | 2023-12-06 | [
[
"Wu",
"Xinwei",
""
],
[
"Li",
"Junzhuo",
""
],
[
"Xu",
"Minghui",
""
],
[
"Dong",
"Weilong",
""
],
[
"Wu",
"Shuangzhi",
""
],
[
"Bian",
"Chao",
""
],
[
"Xiong",
"Deyi",
""
]
] ] | Large language models pretrained on a huge amount of data capture rich knowledge and information in the training data. The ability of data memorization and regurgitation in pretrained language models, revealed in previous studies, brings the risk of data leakage. In order to effectively reduce these risks, we propose a framework DEPN to Detect and Edit Privacy Neurons in pretrained language models, partially inspired by knowledge neurons and model editing. In DEPN, we introduce a novel method, termed as privacy neuron detector, to locate neurons associated with private information, and then edit these detected privacy neurons by setting their activations to zero. Furthermore, we propose a privacy neuron aggregator to dememorize private information in a batch processing manner. Experimental results show that our method can significantly and efficiently reduce the exposure of private data leakage without deteriorating the performance of the model. Additionally, we empirically demonstrate the relationship between model memorization and privacy neurons, from multiple perspectives, including model size, training time, prompts, privacy neuron distribution, illustrating the robustness of our approach. |
2402.16029 | Nuo Chen | Nuo Chen, Yuhan Li, Jianheng Tang, Jia Li | GraphWiz: An Instruction-Following Language Model for Graph Problems | 27pages, 15 tables | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have achieved impressive success across several
fields, but their proficiency in understanding and resolving complex graph
problems is less explored. To bridge this gap, we introduce GraphInstruct, a
novel and comprehensive instruction-tuning dataset designed to equip language
models with the ability to tackle a broad spectrum of graph problems using
explicit reasoning paths. Utilizing GraphInstruct, we build GraphWiz, an
open-source language model capable of resolving various graph problem types
while generating clear reasoning processes. To enhance the model's capability
and reliability, we incorporate the Direct Preference Optimization (DPO)
framework into the graph problem-solving context. The enhanced model,
GraphWiz-DPO, achieves an average accuracy of 65% across nine tasks with
different complexity levels, surpassing GPT-4 which has an average accuracy of
43.8%. Moreover, our research delves into the delicate balance between training
data volume and model performance, highlighting the potential for overfitting
with increased data. We also explore the transferability of the model's
reasoning ability across different graph tasks, indicating the model's
adaptability and practical application potential. Our investigation offers a
new blueprint and valuable insights for developing LLMs specialized in graph
reasoning and problem-solving.
| [
{
"created": "Sun, 25 Feb 2024 08:41:32 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Mar 2024 13:52:12 GMT",
"version": "v2"
},
{
"created": "Mon, 1 Jul 2024 09:15:38 GMT",
"version": "v3"
},
{
"created": "Tue, 2 Jul 2024 06:40:30 GMT",
"version": "v4"
},
{
"created": "Wed, 3 Jul 2024 06:39:59 GMT",
"version": "v5"
}
] | 2024-07-04 | [
[
"Chen",
"Nuo",
""
],
[
"Li",
"Yuhan",
""
],
[
"Tang",
"Jianheng",
""
],
[
"Li",
"Jia",
""
]
] | Large language models (LLMs) have achieved impressive success across several fields, but their proficiency in understanding and resolving complex graph problems is less explored. To bridge this gap, we introduce GraphInstruct, a novel and comprehensive instruction-tuning dataset designed to equip language models with the ability to tackle a broad spectrum of graph problems using explicit reasoning paths. Utilizing GraphInstruct, we build GraphWiz, an open-source language model capable of resolving various graph problem types while generating clear reasoning processes. To enhance the model's capability and reliability, we incorporate the Direct Preference Optimization (DPO) framework into the graph problem-solving context. The enhanced model, GraphWiz-DPO, achieves an average accuracy of 65% across nine tasks with different complexity levels, surpassing GPT-4 which has an average accuracy of 43.8%. Moreover, our research delves into the delicate balance between training data volume and model performance, highlighting the potential for overfitting with increased data. We also explore the transferability of the model's reasoning ability across different graph tasks, indicating the model's adaptability and practical application potential. Our investigation offers a new blueprint and valuable insights for developing LLMs specialized in graph reasoning and problem-solving. |
2204.10982 | Pradeep Kr. Banerjee | Johannes Rauh, Pradeep Kr. Banerjee, Eckehard Olbrich, Guido
Mont\'ufar, J\"urgen Jost | Continuity and Additivity Properties of Information Decompositions | 17 pages | International Journal of Approximate Reasoning, 2023 | null | null | cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | Information decompositions quantify how the Shannon information about a given
random variable is distributed among several other random variables. Various
requirements have been proposed that such a decomposition should satisfy,
leading to different candidate solutions. Curiously, however, only two of the
original requirements that determined the Shannon information have been
considered, namely monotonicity and normalization. Two other important
properties, continuity and additivity, have not been considered. In this
contribution, we focus on the mutual information of two finite variables $Y,Z$
about a third finite variable $S$ and check which of the decompositions satisfy
these two properties. While most of them satisfy continuity, only one of them
is both continuous and additive.
| [
{
"created": "Sat, 23 Apr 2022 03:45:18 GMT",
"version": "v1"
},
{
"created": "Sun, 9 Jul 2023 15:00:06 GMT",
"version": "v2"
}
] | 2023-07-11 | [
[
"Rauh",
"Johannes",
""
],
[
"Banerjee",
"Pradeep Kr.",
""
],
[
"Olbrich",
"Eckehard",
""
],
[
"Montúfar",
"Guido",
""
],
[
"Jost",
"Jürgen",
""
]
] | Information decompositions quantify how the Shannon information about a given random variable is distributed among several other random variables. Various requirements have been proposed that such a decomposition should satisfy, leading to different candidate solutions. Curiously, however, only two of the original requirements that determined the Shannon information have been considered, namely monotonicity and normalization. Two other important properties, continuity and additivity, have not been considered. In this contribution, we focus on the mutual information of two finite variables $Y,Z$ about a third finite variable $S$ and check which of the decompositions satisfy these two properties. While most of them satisfy continuity, only one of them is both continuous and additive. |
2010.14565 | Li-Chia Yang | Li-Chia Yang, Alexander Lerch | Remixing Music with Visual Conditioning | null | 2020 IEEE International Symposium on Multimedia | null | null | cs.SD cs.MM eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a visually conditioned music remixing system by incorporating deep
visual and audio models. The method is based on a state of the art audio-visual
source separation model which performs music instrument source separation with
video information. We modified the model to work with user-selected images
instead of videos as visual input during inference to enable separation of
audio-only content. Furthermore, we propose a remixing engine that generalizes
the task of source separation into music remixing. The proposed method is able
to achieve improved audio quality compared to remixing performed by the
separate-and-add method with a state-of-the-art audio-visual source separation
model.
| [
{
"created": "Tue, 27 Oct 2020 19:12:08 GMT",
"version": "v1"
}
] | 2020-10-29 | [
[
"Yang",
"Li-Chia",
""
],
[
"Lerch",
"Alexander",
""
]
] | We propose a visually conditioned music remixing system by incorporating deep visual and audio models. The method is based on a state of the art audio-visual source separation model which performs music instrument source separation with video information. We modified the model to work with user-selected images instead of videos as visual input during inference to enable separation of audio-only content. Furthermore, we propose a remixing engine that generalizes the task of source separation into music remixing. The proposed method is able to achieve improved audio quality compared to remixing performed by the separate-and-add method with a state-of-the-art audio-visual source separation model. |
2109.04258 | Xiaodong Jia | Xiaodong Jia, Ashish Kumar, Gang Tan | A Derivative-based Parser Generator for Visibly Pushdown Grammars | null | null | null | null | cs.PL cs.FL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a derivative-based, functional recognizer and
parser generator for visibly pushdown grammars. The generated parser accepts
ambiguous grammars and produces a parse forest containing all valid parse trees
for an input string in linear time. Each parse tree in the forest can then be
extracted also in linear time. Besides the parser generator, to allow more
flexible forms of the visibly pushdown grammars, we also present a translator
that converts a tagged CFG to a visibly pushdown grammar in a sound way, and
the parse trees of the tagged CFG are further produced by running the semantic
actions embedded in the parse trees of the translated visibly pushdown grammar.
The performance of the parser is compared with a popular parsing tool ANTLR and
other popular hand-crafted parsers. The correctness of the core parsing
algorithm is formally verified in the proof assistant Coq.
| [
{
"created": "Thu, 9 Sep 2021 13:28:31 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Sep 2021 01:50:05 GMT",
"version": "v2"
}
] | 2021-09-13 | [
[
"Jia",
"Xiaodong",
""
],
[
"Kumar",
"Ashish",
""
],
[
"Tan",
"Gang",
""
]
] | In this paper, we present a derivative-based, functional recognizer and parser generator for visibly pushdown grammars. The generated parser accepts ambiguous grammars and produces a parse forest containing all valid parse trees for an input string in linear time. Each parse tree in the forest can then be extracted also in linear time. Besides the parser generator, to allow more flexible forms of the visibly pushdown grammars, we also present a translator that converts a tagged CFG to a visibly pushdown grammar in a sound way, and the parse trees of the tagged CFG are further produced by running the semantic actions embedded in the parse trees of the translated visibly pushdown grammar. The performance of the parser is compared with a popular parsing tool ANTLR and other popular hand-crafted parsers. The correctness of the core parsing algorithm is formally verified in the proof assistant Coq. |
1512.03022 | Robert Elsaesser | Chen Avin and Robert Els\"asser | Breaking the log n barrier on rumor spreading | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | $O(\log n)$ rounds has been a well known upper bound for rumor spreading
using push&pull in the random phone call model (i.e., uniform gossip in the
complete graph). A matching lower bound of $\Omega(\log n)$ is also known for
this special case. Under the assumption of this model and with a natural
addition that nodes can call a partner once they learn its address (e.g., its
IP address) we present a new distributed, address-oblivious and robust
algorithm that uses push&pull with pointer jumping to spread a rumor to all
nodes in only $O(\sqrt{\log n})$ rounds, w.h.p. This algorithm can also cope
with $F= O(n/2^{\sqrt{\log n}})$ node failures, in which case all but $O(F)$
nodes become informed within $O(\sqrt{\log n})$ rounds, w.h.p.
| [
{
"created": "Tue, 8 Dec 2015 16:56:19 GMT",
"version": "v1"
}
] | 2015-12-10 | [
[
"Avin",
"Chen",
""
],
[
"Elsässer",
"Robert",
""
]
] | $O(\log n)$ rounds has been a well known upper bound for rumor spreading using push&pull in the random phone call model (i.e., uniform gossip in the complete graph). A matching lower bound of $\Omega(\log n)$ is also known for this special case. Under the assumption of this model and with a natural addition that nodes can call a partner once they learn its address (e.g., its IP address) we present a new distributed, address-oblivious and robust algorithm that uses push&pull with pointer jumping to spread a rumor to all nodes in only $O(\sqrt{\log n})$ rounds, w.h.p. This algorithm can also cope with $F= O(n/2^{\sqrt{\log n}})$ node failures, in which case all but $O(F)$ nodes become informed within $O(\sqrt{\log n})$ rounds, w.h.p. |
1508.01006 | Dongxu Zhang | Dongxu Zhang and Dong Wang | Relation Classification via Recurrent Neural Network | null | null | null | null | cs.CL cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning has gained much success in sentence-level relation
classification. For example, convolutional neural networks (CNN) have delivered
competitive performance without much effort on feature engineering, compared to
conventional pattern-based methods. Thus a lot of work has been produced
based on CNN structures. However, a key issue that has not been well addressed
by the CNN-based method is the lack of capability to learn temporal features,
especially long-distance dependency between nominal pairs. In this paper, we
propose a simple framework based on recurrent neural networks (RNN) and compare
it with a CNN-based model. To show the limitations of the popularly used
SemEval-2010 Task 8 dataset, we introduce another dataset refined from MIMLRE
(Angeli et al., 2014). Experiments on two different datasets strongly indicate
that the RNN-based model can deliver better performance on relation
classification, and
it is particularly capable of learning long-distance relation patterns. This
makes it suitable for real-world applications where complicated expressions are
often involved.
| [
{
"created": "Wed, 5 Aug 2015 09:03:46 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Dec 2015 03:51:00 GMT",
"version": "v2"
}
] | 2015-12-29 | [
[
"Zhang",
"Dongxu",
""
],
[
"Wang",
"Dong",
""
]
] | Deep learning has gained much success in sentence-level relation classification. For example, convolutional neural networks (CNN) have delivered competitive performance without much effort on feature engineering, compared to conventional pattern-based methods. Thus a lot of work has been produced based on CNN structures. However, a key issue that has not been well addressed by the CNN-based method is the lack of capability to learn temporal features, especially long-distance dependency between nominal pairs. In this paper, we propose a simple framework based on recurrent neural networks (RNN) and compare it with a CNN-based model. To show the limitations of the popularly used SemEval-2010 Task 8 dataset, we introduce another dataset refined from MIMLRE (Angeli et al., 2014). Experiments on two different datasets strongly indicate that the RNN-based model can deliver better performance on relation classification, and it is particularly capable of learning long-distance relation patterns. This makes it suitable for real-world applications where complicated expressions are often involved. |
1103.3113 | Xiaojun Tang | Xiaojun Tang, Ruoheng Liu, Predrag Spasojevic, H. Vincent Poor | A Broadcast Approach To Secret Key Generation Over Slow Fading Channels | submitted to IEEE Trans. Information Theory | null | null | null | cs.IT cs.CR math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A secret-key generation scheme based on a layered broadcasting strategy is
introduced for slow-fading channels. In the model considered, Alice wants to
share a key with Bob while keeping the key secret from Eve, who is a passive
eavesdropper. Both Alice-Bob and Alice-Eve channels are assumed to undergo slow
fading, and perfect channel state information (CSI) is assumed to be known only
at the receivers during the transmission. In each fading slot, Alice broadcasts
a continuum of coded layers and, hence, allows Bob to decode at the rate
corresponding to the fading state (unknown to Alice). The index of a reliably
decoded layer is sent back from Bob to Alice via a public and error-free
channel and used to generate a common secret key. In this paper, the achievable
secrecy key rate is first derived for a given power distribution over coded
layers. The optimal power distribution is then characterized. It is shown that
layered broadcast coding can increase the secrecy key rate significantly
compared to single-level coding.
| [
{
"created": "Wed, 16 Mar 2011 07:03:51 GMT",
"version": "v1"
}
] | 2011-03-17 | [
[
"Tang",
"Xiaojun",
""
],
[
"Liu",
"Ruoheng",
""
],
[
"Spasojevic",
"Predrag",
""
],
[
"Poor",
"H. Vincent",
""
]
] | A secret-key generation scheme based on a layered broadcasting strategy is introduced for slow-fading channels. In the model considered, Alice wants to share a key with Bob while keeping the key secret from Eve, who is a passive eavesdropper. Both Alice-Bob and Alice-Eve channels are assumed to undergo slow fading, and perfect channel state information (CSI) is assumed to be known only at the receivers during the transmission. In each fading slot, Alice broadcasts a continuum of coded layers and, hence, allows Bob to decode at the rate corresponding to the fading state (unknown to Alice). The index of a reliably decoded layer is sent back from Bob to Alice via a public and error-free channel and used to generate a common secret key. In this paper, the achievable secrecy key rate is first derived for a given power distribution over coded layers. The optimal power distribution is then characterized. It is shown that layered broadcast coding can increase the secrecy key rate significantly compared to single-level coding. |
1203.0369 | Ak Rs | Abhijit Chowdhury (NSHM College of Management & Technology, Durgapur
West Bengal, INDIA), Angshu Kumar Sinha (NSHM College of Management &
Technology, Durgapur West Bengal, INDIA) Saurabh Dutta (Dr. B.C Roy
Engineering College West Bengal, INDIA) | Introduction of a Triple Prime Symmetric Key Block Cipher | null | International Journal of Computer Applications (0975 - 8887)
Volume 39 - No.7, February 2012 | 10.5120/4831-7089 | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes to put forward an innovative algorithm for symmetric key
block cipher named as "Triple Prime Symmetric Key Block Cipher with Variable
Key-Spaces (TPSKBCVK)" that employs triple prime integers as private key-spaces
of varying lengths to encrypt data files. Principles of modular arithmetic have
been elegantly used in the proposed idea of the cipher. Depending on
observations of the results of implementation of the proposed cipher on a set
of real data files of several types, all results are registered and analyzed.
The strength of the underlying design of the cipher and the liberty of using a
long key-space expectedly makes it reasonably non-susceptible against possible
cryptanalytic intrusions. As a future scope of the work, it is intended to
formulate and employ an improved scheme that will use a carrier media (image or
multimedia data file) for a secure transmission of the private keys.
| [
{
"created": "Fri, 2 Mar 2012 04:50:35 GMT",
"version": "v1"
}
] | 2012-03-19 | [
[
"Chowdhury",
"Abhijit",
"",
"NSHM College of Management & Technology, Durgapur\n West Bengal, INDIA"
],
[
"Sinha",
"Angshu Kumar",
"",
"NSHM College of Management &\n Technology, Durgapur West Bengal, INDIA"
],
[
"Dutta",
"Saurabh",
"",
"Dr. B.C Roy\n Engineering College West Bengal, INDIA"
]
] | This paper proposes to put forward an innovative algorithm for symmetric key block cipher named as "Triple Prime Symmetric Key Block Cipher with Variable Key-Spaces (TPSKBCVK)" that employs triple prime integers as private key-spaces of varying lengths to encrypt data files. Principles of modular arithmetic have been elegantly used in the proposed idea of the cipher. Depending on observations of the results of implementation of the proposed cipher on a set of real data files of several types, all results are registered and analyzed. The strength of the underlying design of the cipher and the liberty of using a long key-space expectedly makes it reasonably non-susceptible against possible cryptanalytic intrusions. As a future scope of the work, it is intended to formulate and employ an improved scheme that will use a carrier media (image or multimedia data file) for a secure transmission of the private keys. |
2206.13749 | Rongzhi Zhang | Rongzhi Zhang, Rebecca West, Xiquan Cui, Chao Zhang | Adaptive Multi-view Rule Discovery for Weakly-Supervised Compatible
Products Prediction | KDD 2022 Applied Data Science Track | null | 10.1145/3534678.3539208 | null | cs.LG cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | On e-commerce platforms, predicting if two products are compatible with each
other is an important functionality to achieve trustworthy product
recommendation and search experience for consumers. However, accurately
predicting product compatibility is difficult due to the heterogeneous product
data and the lack of manually curated training data. We study the problem of
discovering effective labeling rules that can enable weakly-supervised product
compatibility prediction. We develop AMRule, a multi-view rule discovery
framework that can (1) adaptively and iteratively discover novel rules that
can complement the current weakly-supervised model to improve compatibility
prediction; (2) discover interpretable rules from both structured attribute
tables and unstructured product descriptions. AMRule adaptively discovers
labeling rules from large-error instances via a boosting-style strategy; these
high-quality rules can remedy the current model's weak spots and refine the
model iteratively. For rule discovery from structured product attributes, we
generate composable high-order rules from decision trees; and for rule
discovery from unstructured product descriptions, we generate prompt-based
rules from a pre-trained language model. Experiments on 4 real-world datasets
show that AMRule outperforms the baselines by 5.98% on average and improves
rule quality and rule proposal efficiency.
| [
{
"created": "Tue, 28 Jun 2022 04:11:58 GMT",
"version": "v1"
}
] | 2022-06-29 | [
[
"Zhang",
"Rongzhi",
""
],
[
"West",
"Rebecca",
""
],
[
"Cui",
"Xiquan",
""
],
[
"Zhang",
"Chao",
""
]
] | On e-commerce platforms, predicting if two products are compatible with each other is an important functionality to achieve trustworthy product recommendation and search experience for consumers. However, accurately predicting product compatibility is difficult due to the heterogeneous product data and the lack of manually curated training data. We study the problem of discovering effective labeling rules that can enable weakly-supervised product compatibility prediction. We develop AMRule, a multi-view rule discovery framework that can (1) adaptively and iteratively discover novel rules that can complement the current weakly-supervised model to improve compatibility prediction; (2) discover interpretable rules from both structured attribute tables and unstructured product descriptions. AMRule adaptively discovers labeling rules from large-error instances via a boosting-style strategy; these high-quality rules can remedy the current model's weak spots and refine the model iteratively. For rule discovery from structured product attributes, we generate composable high-order rules from decision trees; and for rule discovery from unstructured product descriptions, we generate prompt-based rules from a pre-trained language model. Experiments on 4 real-world datasets show that AMRule outperforms the baselines by 5.98% on average and improves rule quality and rule proposal efficiency. |
1705.09552 | Seyed-Mohsen Moosavi-Dezfooli | Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard,
Stefano Soatto | Classification regions of deep neural networks | null | null | null | null | cs.CV cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of this paper is to analyze the geometric properties of deep neural
network classifiers in the input space. We specifically study the topology of
classification regions created by deep networks, as well as their associated
decision boundary. Through a systematic empirical investigation, we show that
state-of-the-art deep nets learn connected classification regions, and that the
decision boundary in the vicinity of datapoints is flat along most directions.
We further draw an essential connection between two seemingly unrelated
properties of deep networks: their sensitivity to additive perturbations in the
inputs, and the curvature of their decision boundary. The directions where the
decision boundary is curved in fact remarkably characterize the directions to
which the classifier is the most vulnerable. We finally leverage a fundamental
asymmetry in the curvature of the decision boundary of deep nets, and propose a
method to discriminate between original images, and images perturbed with small
adversarial examples. We show the effectiveness of this purely geometric
approach for detecting small adversarial perturbations in images, and for
recovering the labels of perturbed images.
| [
{
"created": "Fri, 26 May 2017 12:38:48 GMT",
"version": "v1"
}
] | 2017-05-29 | [
[
"Fawzi",
"Alhussein",
""
],
[
"Moosavi-Dezfooli",
"Seyed-Mohsen",
""
],
[
"Frossard",
"Pascal",
""
],
[
"Soatto",
"Stefano",
""
]
] | The goal of this paper is to analyze the geometric properties of deep neural network classifiers in the input space. We specifically study the topology of classification regions created by deep networks, as well as their associated decision boundary. Through a systematic empirical investigation, we show that state-of-the-art deep nets learn connected classification regions, and that the decision boundary in the vicinity of datapoints is flat along most directions. We further draw an essential connection between two seemingly unrelated properties of deep networks: their sensitivity to additive perturbations in the inputs, and the curvature of their decision boundary. The directions where the decision boundary is curved in fact remarkably characterize the directions to which the classifier is the most vulnerable. We finally leverage a fundamental asymmetry in the curvature of the decision boundary of deep nets, and propose a method to discriminate between original images, and images perturbed with small adversarial examples. We show the effectiveness of this purely geometric approach for detecting small adversarial perturbations in images, and for recovering the labels of perturbed images. |
2305.09221 | Nazatul Haque Sultan | Nazatul H. Sultan, Shabnam Kasra-Kermanshahi, Yen Tran, Shangqi Lai,
Vijay Varadharajan, Surya Nepal, and Xun Yi | A Multi-Client Searchable Encryption Scheme for IoT Environment | 22 pages, 5 figures, this version was submitted to ESORICS 2023 | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | The proliferation of connected devices through Internet connectivity presents
both opportunities for smart applications and risks to security and privacy. It
is vital to proactively address these concerns to fully leverage the potential
of the Internet of Things. IoT services where one data owner serves multiple
clients, like smart city transportation, smart building management and
healthcare can offer benefits but also bring cybersecurity and data privacy
risks. For example, in healthcare, a hospital may collect data from medical
devices and make it available to multiple clients such as researchers and
pharmaceutical companies. This data can be used to improve medical treatments
and research but if not protected, it can also put patients' personal
information at risk. To ensure the benefits of these services, it is important
to implement proper security and privacy measures. In this paper, we propose a
symmetric searchable encryption scheme with dynamic updates on a database that
has a single owner and multiple clients for IoT environments. Our proposed
scheme supports both forward and backward privacy. Additionally, our scheme
supports a decentralized storage environment in which data owners can outsource
data across multiple servers or even across multiple service providers to
improve security and privacy. Further, it takes a minimum amount of effort and
costs to revoke a client's access to our system at any time. The performance
and formal security analyses of the proposed scheme show that our scheme
provides better functionality and security, and is more efficient in terms of
computation and storage than the closely related works.
| [
{
"created": "Tue, 16 May 2023 06:53:39 GMT",
"version": "v1"
}
] | 2023-05-17 | [
[
"Sultan",
"Nazatul H.",
""
],
[
"Kasra-Kermanshahi",
"Shabnam",
""
],
[
"Tran",
"Yen",
""
],
[
"Lai",
"Shangqi",
""
],
[
"Varadharajan",
"Vijay",
""
],
[
"Nepal",
"Surya",
""
],
[
"Yi",
"Xun",
""
]
] | The proliferation of connected devices through Internet connectivity presents both opportunities for smart applications and risks to security and privacy. It is vital to proactively address these concerns to fully leverage the potential of the Internet of Things. IoT services where one data owner serves multiple clients, like smart city transportation, smart building management and healthcare can offer benefits but also bring cybersecurity and data privacy risks. For example, in healthcare, a hospital may collect data from medical devices and make it available to multiple clients such as researchers and pharmaceutical companies. This data can be used to improve medical treatments and research but if not protected, it can also put patients' personal information at risk. To ensure the benefits of these services, it is important to implement proper security and privacy measures. In this paper, we propose a symmetric searchable encryption scheme with dynamic updates on a database that has a single owner and multiple clients for IoT environments. Our proposed scheme supports both forward and backward privacy. Additionally, our scheme supports a decentralized storage environment in which data owners can outsource data across multiple servers or even across multiple service providers to improve security and privacy. Further, it takes a minimum amount of effort and costs to revoke a client's access to our system at any time. The performance and formal security analyses of the proposed scheme show that our scheme provides better functionality and security, and is more efficient in terms of computation and storage than the closely related works. |
2211.15656 | Hao Dong | Hao Dong, Xianjing Zhang, Jintao Xu, Rui Ai, Weihao Gu, Huimin Lu,
Juho Kannala and Xieyuanli Chen | SuperFusion: Multilevel LiDAR-Camera Fusion for Long-Range HD Map
Generation | null | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-definition (HD) semantic map generation of the environment is an
essential component of autonomous driving. Existing methods have achieved good
performance in this task by fusing different sensor modalities, such as LiDAR
and camera. However, current works are based on raw data or network
feature-level fusion and only consider short-range HD map generation, limiting
their deployment to realistic autonomous driving applications. In this paper,
we focus on the task of building the HD maps in both short ranges, i.e., within
30 m, and also predicting long-range HD maps up to 90 m, which is required by
downstream path planning and control tasks to improve the smoothness and safety
of autonomous driving. To this end, we propose a novel network named
SuperFusion, exploiting the fusion of LiDAR and camera data at multiple levels.
We use LiDAR depth to improve image depth estimation and use image features to
guide long-range LiDAR feature prediction. We benchmark our SuperFusion on the
nuScenes dataset and a self-recorded dataset and show that it outperforms the
state-of-the-art baseline methods with large margins on all intervals.
Additionally, we apply the generated HD map to a downstream path planning task,
demonstrating that the long-range HD maps predicted by our method can lead to
better path planning for autonomous vehicles. Our code and self-recorded
dataset will be available at https://github.com/haomo-ai/SuperFusion.
| [
{
"created": "Mon, 28 Nov 2022 18:59:02 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Mar 2023 16:01:22 GMT",
"version": "v2"
}
] | 2023-03-17 | [
[
"Dong",
"Hao",
""
],
[
"Zhang",
"Xianjing",
""
],
[
"Xu",
"Jintao",
""
],
[
"Ai",
"Rui",
""
],
[
"Gu",
"Weihao",
""
],
[
"Lu",
"Huimin",
""
],
[
"Kannala",
"Juho",
""
],
[
"Chen",
"Xieyuanli",
""
]
] | High-definition (HD) semantic map generation of the environment is an essential component of autonomous driving. Existing methods have achieved good performance in this task by fusing different sensor modalities, such as LiDAR and camera. However, current works are based on raw data or network feature-level fusion and only consider short-range HD map generation, limiting their deployment to realistic autonomous driving applications. In this paper, we focus on the task of building the HD maps in both short ranges, i.e., within 30 m, and also predicting long-range HD maps up to 90 m, which is required by downstream path planning and control tasks to improve the smoothness and safety of autonomous driving. To this end, we propose a novel network named SuperFusion, exploiting the fusion of LiDAR and camera data at multiple levels. We use LiDAR depth to improve image depth estimation and use image features to guide long-range LiDAR feature prediction. We benchmark our SuperFusion on the nuScenes dataset and a self-recorded dataset and show that it outperforms the state-of-the-art baseline methods with large margins on all intervals. Additionally, we apply the generated HD map to a downstream path planning task, demonstrating that the long-range HD maps predicted by our method can lead to better path planning for autonomous vehicles. Our code and self-recorded dataset will be available at https://github.com/haomo-ai/SuperFusion. |
1810.08691 | Dawei Liang | Dawei Liang, Edison Thomaz | Audio-Based Activities of Daily Living (ADL) Recognition with
Large-Scale Acoustic Embeddings from Online Videos | 18 pages,7 figures; new version: results updates | ACM IMWUT 3(1) 2019 Article 17 | 10.1145/3314404 | null | cs.HC cs.LG cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the years, activity sensing and recognition has been shown to play a key
enabling role in a wide range of applications, from sustainability and
human-computer interaction to health care. While many recognition tasks have
traditionally employed inertial sensors, acoustic-based methods offer the
benefit of capturing rich contextual information, which can be useful when
discriminating complex activities. Given the emergence of deep learning
techniques and leveraging new, large-scale multimedia datasets, this paper
revisits the opportunity of training audio-based classifiers without the
onerous and time-consuming task of annotating audio data. We propose a
framework for audio-based activity recognition that makes use of millions of
embedding features from public online video sound clips. Based on the
combination of oversampling and deep learning approaches, our framework does
not require further feature processing or outlier filtering as in prior work.
We evaluated our approach in the context of Activities of Daily Living (ADL) by
recognizing 15 everyday activities with 14 participants in their own homes,
achieving 64.2% and 83.6% averaged within-subject accuracy in terms of top-1
and top-3 classification respectively. Individual class performance was also
examined in the paper to further study the co-occurrence characteristics of the
activities and the robustness of the framework.
| [
{
"created": "Fri, 19 Oct 2018 21:19:16 GMT",
"version": "v1"
},
{
"created": "Sun, 18 Nov 2018 09:08:24 GMT",
"version": "v2"
}
] | 2019-04-09 | [
[
"Liang",
"Dawei",
""
],
[
"Thomaz",
"Edison",
""
]
] | Over the years, activity sensing and recognition has been shown to play a key enabling role in a wide range of applications, from sustainability and human-computer interaction to health care. While many recognition tasks have traditionally employed inertial sensors, acoustic-based methods offer the benefit of capturing rich contextual information, which can be useful when discriminating complex activities. Given the emergence of deep learning techniques and leveraging new, large-scale multimedia datasets, this paper revisits the opportunity of training audio-based classifiers without the onerous and time-consuming task of annotating audio data. We propose a framework for audio-based activity recognition that makes use of millions of embedding features from public online video sound clips. Based on the combination of oversampling and deep learning approaches, our framework does not require further feature processing or outlier filtering as in prior work. We evaluated our approach in the context of Activities of Daily Living (ADL) by recognizing 15 everyday activities with 14 participants in their own homes, achieving 64.2% and 83.6% averaged within-subject accuracy in terms of top-1 and top-3 classification respectively. Individual class performance was also examined in the paper to further study the co-occurrence characteristics of the activities and the robustness of the framework.
2303.16411 | Zheng Naishan | Man Zhou, Naishan Zheng, Jie Huang, Chunle Guo, Chongyi Li | Unlocking Masked Autoencoders as Loss Function for Image and Video
Restoration | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image and video restoration has achieved a remarkable leap with the advent of
deep learning. The success of the deep learning paradigm lies in three key
components: data, model, and loss. Currently, many efforts have been devoted to
the first two, while few studies focus on the loss function. With the question
``are the de facto optimization functions e.g., $L_1$, $L_2$, and perceptual
losses optimal?'', we explore the potential of loss and raise our belief
``learned loss function empowers the learning capability of neural networks for
image and video restoration''.
Concretely, we stand on the shoulders of the masked Autoencoders (MAE) and
formulate it as a `learned loss function', owing to the fact that the pre-trained
MAE innately inherits the prior of image reasoning. We investigate the efficacy
of our belief from three perspectives: 1) from task-customized MAE to native
MAE, 2) from image task to video task, and 3) from transformer structure to
convolutional neural network structure. Extensive experiments across multiple
image and video tasks, including image denoising, image super-resolution, image
enhancement, guided image super-resolution, video denoising, and video
enhancement, demonstrate the consistent performance improvements introduced by
the learned loss function. Besides, the learned loss function is preferable as
it can be directly plugged into existing networks during training without
involving computations in the inference stage. Code will be publicly available.
| [
{
"created": "Wed, 29 Mar 2023 02:41:08 GMT",
"version": "v1"
}
] | 2023-03-30 | [
[
"Zhou",
"Man",
""
],
[
"Zheng",
"Naishan",
""
],
[
"Huang",
"Jie",
""
],
[
"Guo",
"Chunle",
""
],
[
"Li",
"Chongyi",
""
]
] | Image and video restoration has achieved a remarkable leap with the advent of deep learning. The success of the deep learning paradigm lies in three key components: data, model, and loss. Currently, many efforts have been devoted to the first two, while few studies focus on the loss function. With the question ``are the de facto optimization functions e.g., $L_1$, $L_2$, and perceptual losses optimal?'', we explore the potential of loss and raise our belief ``learned loss function empowers the learning capability of neural networks for image and video restoration''. Concretely, we stand on the shoulders of the masked Autoencoders (MAE) and formulate it as a `learned loss function', owing to the fact that the pre-trained MAE innately inherits the prior of image reasoning. We investigate the efficacy of our belief from three perspectives: 1) from task-customized MAE to native MAE, 2) from image task to video task, and 3) from transformer structure to convolutional neural network structure. Extensive experiments across multiple image and video tasks, including image denoising, image super-resolution, image enhancement, guided image super-resolution, video denoising, and video enhancement, demonstrate the consistent performance improvements introduced by the learned loss function. Besides, the learned loss function is preferable as it can be directly plugged into existing networks during training without involving computations in the inference stage. Code will be publicly available.
2208.06111 | Charbel Toumieh | Charbel Toumieh, Alain Lambert | Shape-aware Safe Corridors Generation using Voxel Grids | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Safe Corridors (a series of overlapping convex shapes) have been used
recently in multiple state-of-the-art motion planning methods. They represent
the free space in the environment efficiently for collision
avoidance. In this paper, we propose a new framework for generating Safe
Corridors. We assume that we have a voxel grid representation of the
environment. The proposed framework improves on a previous state-of-the-art
voxel grid based Safe Corridor generation method. It also creates a
connectivity graph between polyhedra of a given Safe Corridor that indicates
which polyhedra intersect with each other. The connectivity graph can be
used in planning methods to reduce computation time. The method is compared to
other state-of-the-art methods in simulations in terms of computation time,
volume covered, safety, number of polyhedra per Safe Corridor, and number of
constraints per polyhedron.
| [
{
"created": "Fri, 12 Aug 2022 04:25:39 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Aug 2022 02:34:03 GMT",
"version": "v2"
}
] | 2022-08-17 | [
[
"Toumieh",
"Charbel",
""
],
[
"Lambert",
"Alain",
""
]
] | Safe Corridors (a series of overlapping convex shapes) have been used recently in multiple state-of-the-art motion planning methods. They represent the free space in the environment efficiently for collision avoidance. In this paper, we propose a new framework for generating Safe Corridors. We assume that we have a voxel grid representation of the environment. The proposed framework improves on a previous state-of-the-art voxel grid based Safe Corridor generation method. It also creates a connectivity graph between polyhedra of a given Safe Corridor that indicates which polyhedra intersect with each other. The connectivity graph can be used in planning methods to reduce computation time. The method is compared to other state-of-the-art methods in simulations in terms of computation time, volume covered, safety, number of polyhedra per Safe Corridor, and number of constraints per polyhedron.
1805.02165 | Xinghao Ding | Liyan Sun, Zhiwen Fan, Yue Huang, Xinghao Ding, John Paisley | Joint CS-MRI Reconstruction and Segmentation with a Unified Deep Network | 8 pages, 6 figures, 3 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The need for fast acquisition and automatic analysis of MRI data is growing
in the age of big data. Although compressed sensing magnetic resonance imaging
(CS-MRI) has been studied to accelerate MRI by reducing k-space measurements,
in current CS-MRI techniques MRI applications such as segmentation are
overlooked when doing image reconstruction. In this paper, we test the utility
of CS-MRI methods in automatic segmentation models and propose a unified deep
neural network architecture called SegNetMRI which we apply to the combined
CS-MRI reconstruction and segmentation problem. SegNetMRI is built upon an MRI
reconstruction network with multiple cascaded blocks each containing an
encoder-decoder unit and a data fidelity unit, and MRI segmentation networks
having the same encoder-decoder structure. The two subnetworks are pre-trained
and fine-tuned with shared reconstruction encoders. The outputs are merged into
the final segmentation. Our experiments show that SegNetMRI can improve both
the reconstruction and segmentation performance when using compressive
measurements.
| [
{
"created": "Sun, 6 May 2018 07:45:32 GMT",
"version": "v1"
}
] | 2018-05-08 | [
[
"Sun",
"Liyan",
""
],
[
"Fan",
"Zhiwen",
""
],
[
"Huang",
"Yue",
""
],
[
"Ding",
"Xinghao",
""
],
[
"Paisley",
"John",
""
]
] | The need for fast acquisition and automatic analysis of MRI data is growing in the age of big data. Although compressed sensing magnetic resonance imaging (CS-MRI) has been studied to accelerate MRI by reducing k-space measurements, in current CS-MRI techniques MRI applications such as segmentation are overlooked when doing image reconstruction. In this paper, we test the utility of CS-MRI methods in automatic segmentation models and propose a unified deep neural network architecture called SegNetMRI which we apply to the combined CS-MRI reconstruction and segmentation problem. SegNetMRI is built upon an MRI reconstruction network with multiple cascaded blocks each containing an encoder-decoder unit and a data fidelity unit, and MRI segmentation networks having the same encoder-decoder structure. The two subnetworks are pre-trained and fine-tuned with shared reconstruction encoders. The outputs are merged into the final segmentation. Our experiments show that SegNetMRI can improve both the reconstruction and segmentation performance when using compressive measurements.
2009.12185 | Tom\'a\v{s} Kroupa | Luk\'a\v{s} Adam, Rostislav Hor\v{c}\'ik, Tom\'a\v{s} Kasl,
Tom\'a\v{s} Kroupa | Double Oracle Algorithm for Computing Equilibria in Continuous Games | null | null | null | null | cs.GT math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many efficient algorithms have been designed to recover Nash equilibria of
various classes of finite games. Special classes of continuous games with
infinite strategy spaces, such as polynomial games, can be solved by
semidefinite programming. In general, however, continuous games are not
directly amenable to computational procedures. In this contribution, we develop
an iterative strategy generation technique for finding a Nash equilibrium in a
whole class of continuous two-person zero-sum games with compact strategy sets.
The procedure, which is called the double oracle algorithm, has been
successfully applied to large finite games in the past. We prove the
convergence of the double oracle algorithm to a Nash equilibrium. Moreover, the
algorithm is guaranteed to recover an approximate equilibrium in finitely-many
steps. Our numerical experiments show that it outperforms fictitious play on
several examples of games appearing in the literature. In particular, we
provide a detailed analysis of experiments with a version of the continuous
Colonel Blotto game.
| [
{
"created": "Fri, 25 Sep 2020 12:42:08 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Sep 2020 16:12:53 GMT",
"version": "v2"
}
] | 2020-10-01 | [
[
"Adam",
"Lukáš",
""
],
[
"Horčík",
"Rostislav",
""
],
[
"Kasl",
"Tomáš",
""
],
[
"Kroupa",
"Tomáš",
""
]
] | Many efficient algorithms have been designed to recover Nash equilibria of various classes of finite games. Special classes of continuous games with infinite strategy spaces, such as polynomial games, can be solved by semidefinite programming. In general, however, continuous games are not directly amenable to computational procedures. In this contribution, we develop an iterative strategy generation technique for finding a Nash equilibrium in a whole class of continuous two-person zero-sum games with compact strategy sets. The procedure, which is called the double oracle algorithm, has been successfully applied to large finite games in the past. We prove the convergence of the double oracle algorithm to a Nash equilibrium. Moreover, the algorithm is guaranteed to recover an approximate equilibrium in finitely-many steps. Our numerical experiments show that it outperforms fictitious play on several examples of games appearing in the literature. In particular, we provide a detailed analysis of experiments with a version of the continuous Colonel Blotto game. |
2010.00246 | Zheng Gu | Zheng Gu, Chuanqi Dong, Jing Huo, Wenbin Li, Yang Gao | CariMe: Unpaired Caricature Generation with Multiple Exaggerations | null | null | null | null | cs.CV cs.LG cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Caricature generation aims to translate real photos into caricatures with
artistic styles and shape exaggerations while maintaining the identity of the
subject. Different from the generic image-to-image translation, drawing a
caricature automatically is a more challenging task due to the existence of
various spatial deformations. Previous caricature generation methods are
obsessed with predicting definite image warping from a given photo while
ignoring the intrinsic representation and distribution for exaggerations in
caricatures. This limits their ability on diverse exaggeration generation. In
this paper, we generalize the caricature generation problem from instance-level
warping prediction to distribution-level deformation modeling. Based on this
assumption, we present the first exploration for unpaired CARIcature generation
with Multiple Exaggerations (CariMe). Technically, we propose a
Multi-exaggeration Warper network to learn the distribution-level mapping from
photo to facial exaggerations. This makes it possible to generate diverse and
reasonable exaggerations from randomly sampled warp codes given one input
photo. To better represent the facial exaggeration and produce fine-grained
warping, a deformation-field-based warping method is also proposed, which helps
us to capture more detailed exaggerations than other point-based warping
methods. Experiments and two perceptual studies prove the superiority of our
method compared with other state-of-the-art methods, showing the improvement
of our work on caricature generation.
| [
{
"created": "Thu, 1 Oct 2020 08:14:32 GMT",
"version": "v1"
}
] | 2020-10-02 | [
[
"Gu",
"Zheng",
""
],
[
"Dong",
"Chuanqi",
""
],
[
"Huo",
"Jing",
""
],
[
"Li",
"Wenbin",
""
],
[
"Gao",
"Yang",
""
]
] | Caricature generation aims to translate real photos into caricatures with artistic styles and shape exaggerations while maintaining the identity of the subject. Different from the generic image-to-image translation, drawing a caricature automatically is a more challenging task due to the existence of various spatial deformations. Previous caricature generation methods are obsessed with predicting definite image warping from a given photo while ignoring the intrinsic representation and distribution for exaggerations in caricatures. This limits their ability on diverse exaggeration generation. In this paper, we generalize the caricature generation problem from instance-level warping prediction to distribution-level deformation modeling. Based on this assumption, we present the first exploration for unpaired CARIcature generation with Multiple Exaggerations (CariMe). Technically, we propose a Multi-exaggeration Warper network to learn the distribution-level mapping from photo to facial exaggerations. This makes it possible to generate diverse and reasonable exaggerations from randomly sampled warp codes given one input photo. To better represent the facial exaggeration and produce fine-grained warping, a deformation-field-based warping method is also proposed, which helps us to capture more detailed exaggerations than other point-based warping methods. Experiments and two perceptual studies prove the superiority of our method compared with other state-of-the-art methods, showing the improvement of our work on caricature generation.
2101.10882 | Sidhanth Mohanty | Siqi Liu, Sidhanth Mohanty, Prasad Raghavendra | On statistical inference when fixed points of belief propagation are
unstable | Title changed. More detailed description of results and technical
overview in the intro | null | null | null | cs.DS math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many statistical inference problems correspond to recovering the values of a
set of hidden variables from sparse observations on them. For instance, in a
planted constraint satisfaction problem such as planted 3-SAT, the clauses are
sparse observations from which the hidden assignment is to be recovered. In the
problem of community detection in a stochastic block model, the community
labels are hidden variables that are to be recovered from the edges of the
graph.
Inspired by ideas from statistical physics, the presence of a stable fixed
point for belief propagation has been widely conjectured to characterize the
computational tractability of these problems. For community detection in
stochastic block models, many of these predictions have been rigorously
confirmed.
In this work, we consider a general model of statistical inference problems
that includes both community detection in stochastic block models, and all
planted constraint satisfaction problems as special cases. We carry out the
cavity method calculations from statistical physics to compute the regime of
parameters where detection and recovery should be algorithmically tractable. At
precisely the predicted tractable regime, we give:
(i) a general polynomial-time algorithm for the problem of detection:
distinguishing an input with a planted signal from one without;
(ii) a general polynomial-time algorithm for the problem of recovery:
outputting a vector that correlates with the hidden assignment significantly
better than a random guess would.
| [
{
"created": "Tue, 26 Jan 2021 15:51:44 GMT",
"version": "v1"
},
{
"created": "Sat, 17 Jul 2021 18:53:55 GMT",
"version": "v2"
}
] | 2021-07-20 | [
[
"Liu",
"Siqi",
""
],
[
"Mohanty",
"Sidhanth",
""
],
[
"Raghavendra",
"Prasad",
""
]
] | Many statistical inference problems correspond to recovering the values of a set of hidden variables from sparse observations on them. For instance, in a planted constraint satisfaction problem such as planted 3-SAT, the clauses are sparse observations from which the hidden assignment is to be recovered. In the problem of community detection in a stochastic block model, the community labels are hidden variables that are to be recovered from the edges of the graph. Inspired by ideas from statistical physics, the presence of a stable fixed point for belief propagation has been widely conjectured to characterize the computational tractability of these problems. For community detection in stochastic block models, many of these predictions have been rigorously confirmed. In this work, we consider a general model of statistical inference problems that includes both community detection in stochastic block models, and all planted constraint satisfaction problems as special cases. We carry out the cavity method calculations from statistical physics to compute the regime of parameters where detection and recovery should be algorithmically tractable. At precisely the predicted tractable regime, we give: (i) a general polynomial-time algorithm for the problem of detection: distinguishing an input with a planted signal from one without; (ii) a general polynomial-time algorithm for the problem of recovery: outputting a vector that correlates with the hidden assignment significantly better than a random guess would.
2404.09494 | Junfan Li | Junfan Li and Zenglin Xu and Zheshun Wu and Irwin King | On the Necessity of Collaboration in Online Model Selection with
Decentralized Data | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider online model selection with decentralized data over $M$ clients,
and study the necessity of collaboration among clients. Previous work proposed
various federated algorithms without demonstrating their necessity, while we
answer the question from a novel perspective of computational constraints. We
prove lower bounds on the regret, and propose a federated algorithm and analyze
the upper bound. Our results show (i) collaboration is unnecessary in the
absence of computational constraints on clients; (ii) collaboration is
necessary if the computational cost on each client is limited to $o(K)$, where
$K$ is the number of candidate hypothesis spaces. We clarify the unnecessary
nature of collaboration in previous federated algorithms for distributed online
multi-kernel learning, and improve the regret bounds at a smaller computational
and communication cost. Our algorithm relies on three new techniques including
an improved Bernstein's inequality for martingales, a federated online mirror
descent framework, and decoupling model selection and prediction, which might
be of independent interest.
| [
{
"created": "Mon, 15 Apr 2024 06:32:28 GMT",
"version": "v1"
},
{
"created": "Tue, 14 May 2024 11:29:13 GMT",
"version": "v2"
},
{
"created": "Wed, 22 May 2024 02:07:41 GMT",
"version": "v3"
}
] | 2024-05-24 | [
[
"Li",
"Junfan",
""
],
[
"Xu",
"Zenglin",
""
],
[
"Wu",
"Zheshun",
""
],
[
"King",
"Irwin",
""
]
] | We consider online model selection with decentralized data over $M$ clients, and study the necessity of collaboration among clients. Previous work proposed various federated algorithms without demonstrating their necessity, while we answer the question from a novel perspective of computational constraints. We prove lower bounds on the regret, and propose a federated algorithm and analyze the upper bound. Our results show (i) collaboration is unnecessary in the absence of computational constraints on clients; (ii) collaboration is necessary if the computational cost on each client is limited to $o(K)$, where $K$ is the number of candidate hypothesis spaces. We clarify the unnecessary nature of collaboration in previous federated algorithms for distributed online multi-kernel learning, and improve the regret bounds at a smaller computational and communication cost. Our algorithm relies on three new techniques including an improved Bernstein's inequality for martingales, a federated online mirror descent framework, and decoupling model selection and prediction, which might be of independent interest.
2110.01098 | Michael Coblenz | Michael Coblenz, Michelle Mazurek, Michael Hicks | Garbage Collection Makes Rust Easier to Use: A Randomized Controlled
Trial of the Bronze Garbage Collector | Michael Coblenz, Michelle L. Mazurek, and Michael Hicks. 2022.
Garbage Collection Makes Rust Easier to Use: A Randomized Controlled Trial of
the Bronze Garbage Collector. In 44th International Conference on Software
Engineering (ICSE '22), May 21-29, 2022, Pittsburgh, PA, USA. ACM, New York,
NY, USA, 12 pages. https://doi.org/10.1145/3510003.3510107 | null | 10.1145/3510003.3510107 | null | cs.SE cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rust is a general-purpose programming language that is both type- and
memory-safe. Rust does not use a garbage collector, but rather achieves these
properties through a sophisticated, but complex, type system. Doing so makes
Rust very efficient, but makes Rust relatively hard to learn and use. We
designed Bronze, an optional, library-based garbage collector for Rust. To see
whether Bronze could make Rust more usable, we conducted a randomized
controlled trial with volunteers from a 633-person class, collecting data from
428 students in total. We found that for a task that required managing complex
aliasing, Bronze users were more likely to complete the task in the time
available, and those who did so required only about a third as much time (4
hours vs. 12 hours). We found no significant difference in total time, even
though Bronze users re-did the task without Bronze afterward. Surveys indicated
that ownership, borrowing, and lifetimes were primary causes of the challenges
that users faced when using Rust.
| [
{
"created": "Sun, 3 Oct 2021 20:26:24 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Feb 2022 14:55:41 GMT",
"version": "v2"
}
] | 2022-02-16 | [
[
"Coblenz",
"Michael",
""
],
[
"Mazurek",
"Michelle",
""
],
[
"Hicks",
"Michael",
""
]
] | Rust is a general-purpose programming language that is both type- and memory-safe. Rust does not use a garbage collector, but rather achieves these properties through a sophisticated, but complex, type system. Doing so makes Rust very efficient, but makes Rust relatively hard to learn and use. We designed Bronze, an optional, library-based garbage collector for Rust. To see whether Bronze could make Rust more usable, we conducted a randomized controlled trial with volunteers from a 633-person class, collecting data from 428 students in total. We found that for a task that required managing complex aliasing, Bronze users were more likely to complete the task in the time available, and those who did so required only about a third as much time (4 hours vs. 12 hours). We found no significant difference in total time, even though Bronze users re-did the task without Bronze afterward. Surveys indicated that ownership, borrowing, and lifetimes were primary causes of the challenges that users faced when using Rust. |
2106.16085 | Jinwook Lee Ph.D. | Matthew J. Schneider and Jinwook Lee | Protecting Time Series Data with Minimal Forecast Loss | found better results | null | null | null | cs.CR cs.DM math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Forecasting could be negatively impacted due to anonymization requirements in
data protection legislation. To measure the potential severity of this problem,
we derive theoretical bounds for the loss to forecasts from additive
exponential smoothing models using protected data. Following the guidelines of
anonymization from the General Data Protection Regulation (GDPR) and California
Consumer Privacy Act (CCPA), we develop the $k$-nearest Time Series ($k$-nTS)
Swapping and $k$-means Time Series ($k$-mTS) Shuffling methods to create
protected time series data that minimizes the loss to forecasts while
preventing a data intruder from detecting privacy issues. For efficient and
effective decision making, we formally model an integer programming problem for
a perfect matching for simultaneous data swapping in each cluster. We call it a
two-party data privacy framework since our optimization model includes the
utilities of a data provider and data intruder. We apply our data protection
methods to thousands of time series and find that they maintain the forecasts
and patterns (level, trend, and seasonality) of time series well compared to
standard data protection methods suggested in legislation. Substantively, our
paper addresses the challenge of protecting time series data when used for
forecasting. Our findings suggest the managerial importance of incorporating
the concerns of forecasters into the data protection itself.
| [
{
"created": "Wed, 30 Jun 2021 14:20:02 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Mar 2023 15:14:44 GMT",
"version": "v2"
}
] | 2023-03-08 | [
[
"Schneider",
"Matthew J.",
""
],
[
"Lee",
"Jinwook",
""
]
] | Forecasting could be negatively impacted due to anonymization requirements in data protection legislation. To measure the potential severity of this problem, we derive theoretical bounds for the loss to forecasts from additive exponential smoothing models using protected data. Following the guidelines of anonymization from the General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA), we develop the $k$-nearest Time Series ($k$-nTS) Swapping and $k$-means Time Series ($k$-mTS) Shuffling methods to create protected time series data that minimizes the loss to forecasts while preventing a data intruder from detecting privacy issues. For efficient and effective decision making, we formally model an integer programming problem for a perfect matching for simultaneous data swapping in each cluster. We call it a two-party data privacy framework since our optimization model includes the utilities of a data provider and data intruder. We apply our data protection methods to thousands of time series and find that they maintain the forecasts and patterns (level, trend, and seasonality) of time series well compared to standard data protection methods suggested in legislation. Substantively, our paper addresses the challenge of protecting time series data when used for forecasting. Our findings suggest the managerial importance of incorporating the concerns of forecasters into the data protection itself.
2403.12463 | Mehmet Tekerek (PhD) | Mehmet Gok, Mehmet Tekerek, Hamza Aydemir | Reinforcement learning based local path planning for mobile robot | 5 Pages, 10 figures, Presented in; Interdisciplinary Conference on
Mechanics, Computers and Electrics ANKARA/TURKEY 27-28 November 2021 | Interdisciplinary Conference on Mechanics, Computers and
Electrics, 27-28 Nov. 2021, Ankara | null | null | cs.RO cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Different methods are used for a mobile robot to go to a specific target
location. These methods work in different ways for online and offline
scenarios. In the offline scenario, an environment map is created once, and
fixed path planning is made on this map to reach the target. Path planning
algorithms such as A* and RRT (Rapidly-Exploring Random Tree) are examples
of offline methods. The most obvious situation here is the need to re-plan the
path for changing conditions of the loaded map. On the other hand, in the
online scenario, the robot moves dynamically to a given target without using a
map by using the perceived data coming from the sensors. Approaches such as SFM
(Social Force Model) are used in online systems. However, these methods suffer
from requiring a large amount of dynamic sensing data. Thus, it can be said
that the need for re-planning and mapping in offline systems, and the various
system design requirements of online systems, are the main subjects of
autonomous mobile robot research. Recently, deep neural network powered
Q-Learning methods have emerged as a solution to the aforementioned problems in
mobile robot navigation. In this study, machine learning algorithms with deep
Q-Learning (DQN) and Deep DQN architectures are evaluated for the solution of
the problems presented above, realizing path planning of an autonomous mobile
robot that avoids obstacles.
| [
{
"created": "Tue, 24 Oct 2023 18:26:25 GMT",
"version": "v1"
}
] | 2024-03-20 | [
[
"Gok",
"Mehmet",
""
],
[
"Tekerek",
"Mehmet",
""
],
[
"Aydemir",
"Hamza",
""
]
] | Different methods are used for a mobile robot to go to a specific target location. These methods work in different ways for online and offline scenarios. In the offline scenario, an environment map is created once, and fixed path planning is made on this map to reach the target. Path planning algorithms such as A* and RRT (Rapidly-Exploring Random Tree) are examples of offline methods. The most obvious issue here is the need to re-plan the path for changing conditions of the loaded map. On the other hand, in the online scenario, the robot moves dynamically to a given target without using a map by using the perceived data coming from the sensors. Approaches such as SFM (Social Force Model) are used in online systems. However, these methods suffer from requiring a large amount of dynamic sensing data. Thus, it can be said that the need for re-planning and mapping in offline systems, and the various system design requirements of online systems, are the main subjects of autonomous mobile robot research. Recently, deep neural network powered Q-Learning methods have emerged as a solution to the aforementioned problems in mobile robot navigation. In this study, machine learning algorithms with deep Q-Learning (DQN) and Deep DQN architectures are evaluated for the solution of the problems presented above, realizing path planning of an autonomous mobile robot that avoids obstacles. |
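The record above describes DQN-based local path planning for a mobile robot. As an illustrative aside, a minimal sketch of the tabular precursor of that idea: Q-learning on a tiny grid with one obstacle and a fixed target. The grid layout, reward values, and hyperparameters are assumptions for illustration and are not taken from the paper.

```python
import random

SIZE = 4
GOAL = (3, 3)
OBSTACLE = (1, 1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < SIZE and 0 <= c < SIZE) or (r, c) == OBSTACLE:
        return state, -1.0, False          # blocked move: stay, small penalty
    if (r, c) == GOAL:
        return (r, c), 10.0, True          # reached the target
    return (r, c), -0.1, False             # step cost encourages short paths

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {((r, c), a): 0.0 for r in range(SIZE) for c in range(SIZE)
         for a in range(len(ACTIONS))}
    for _ in range(episodes):
        s, done = (0, 0), False
        for _ in range(100):
            # epsilon-greedy action selection
            a = (rng.randrange(4) if rng.random() < eps
                 else max(range(4), key=lambda a: q[(s, a)]))
            s2, r, done = step(s, ACTIONS[a])
            best_next = max(q[(s2, b)] for b in range(4))
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
            if done:
                break
    return q

def greedy_path(q):
    # follow the learned policy greedily from the start cell
    s, path = (0, 0), [(0, 0)]
    for _ in range(20):
        a = max(range(4), key=lambda a: q[(s, a)])
        s, _, done = step(s, ACTIONS[a])
        path.append(s)
        if done:
            break
    return path
```

With the learned table, the greedy policy reaches the target while steering around the obstacle; a DQN replaces the table with a neural network over sensed state.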
2011.02068 | Sichang Tu | Amir Zeldes, Lance Martin and Sichang Tu | Exhaustive Entity Recognition for Coptic: Challenges and Solutions | 9 pages, 2 figures, 5 tables. Accepted by The 4th Joint SIGHUM
Workshop on Computational Linguistics for Cultural Heritage, Social Sciences,
Humanities and Literature | null | null | null | cs.CL cs.DL | http://creativecommons.org/licenses/by/4.0/ | Entity recognition provides semantic access to ancient materials in the
Digital Humanities: it exposes people and places of interest in texts that
cannot be read exhaustively, facilitates linking resources and can provide a
window into text contents, even for texts with no translations. In this paper
we present entity recognition for Coptic, the language of Hellenistic era
Egypt. We evaluate NLP approaches to the task and lay out difficulties in
applying them to a low-resource, morphologically complex language. We present
solutions for named and non-named nested entity recognition and semi-automatic
entity linking to Wikipedia, relying on robust dependency parsing,
feature-based CRF models, and hand-crafted knowledge base resources, enabling
high-accuracy NER with orders of magnitude less data than those used for
high-resource languages. The results suggest avenues for research on other
languages in similar settings.
| [
{
"created": "Tue, 3 Nov 2020 23:49:42 GMT",
"version": "v1"
}
] | 2020-11-05 | [
[
"Zeldes",
"Amir",
""
],
[
"Martin",
"Lance",
""
],
[
"Tu",
"Sichang",
""
]
] | Entity recognition provides semantic access to ancient materials in the Digital Humanities: it exposes people and places of interest in texts that cannot be read exhaustively, facilitates linking resources and can provide a window into text contents, even for texts with no translations. In this paper we present entity recognition for Coptic, the language of Hellenistic era Egypt. We evaluate NLP approaches to the task and lay out difficulties in applying them to a low-resource, morphologically complex language. We present solutions for named and non-named nested entity recognition and semi-automatic entity linking to Wikipedia, relying on robust dependency parsing, feature-based CRF models, and hand-crafted knowledge base resources, enabling high-accuracy NER with orders of magnitude less data than those used for high-resource languages. The results suggest avenues for research on other languages in similar settings. |
1602.07808 | Junaid Qadir | Junaid Qadir, Arjuna Sathiaseelan, Liang Wang, Jon Crowcroft | "Resource Pooling" for Wireless Networks: Solutions for the Developing
World | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We live in a world in which there is a great disparity between the lives of
the rich and the poor. Technology offers great promise in bridging this gap. In
particular, wireless technology unfetters developing communities from the
constraints of infrastructure, providing a great opportunity to leapfrog years
of neglect and technological waywardness. In this paper, we highlight the role
of resource pooling for wireless networks in the developing world. Resource
pooling involves: (i) abstracting a collection of networked resources to behave
like a single unified resource pool and (ii) developing mechanisms for shifting
load between the various parts of the unified resource pool. The popularity of
resource pooling stems from its ability to provide resilience, high
utilization, and flexibility at an acceptable cost. We show that "resource
pooling", which is very popular in its various manifestations, is the key
unifying principle underlying a diverse number of successful wireless
technologies (such as white space networking, community networks, etc.). We
discuss various applications of resource pooled wireless technologies and
provide a discussion on open issues.
| [
{
"created": "Thu, 25 Feb 2016 05:57:51 GMT",
"version": "v1"
}
] | 2016-02-26 | [
[
"Qadir",
"Junaid",
""
],
[
"Sathiaseelan",
"Arjuna",
""
],
[
"Wang",
"Liang",
""
],
[
"Crowcroft",
"Jon",
""
]
] | We live in a world in which there is a great disparity between the lives of the rich and the poor. Technology offers great promise in bridging this gap. In particular, wireless technology unfetters developing communities from the constraints of infrastructure providing a great opportunity to leapfrog years of neglect and technological waywardness. In this paper, we highlight the role of resource pooling for wireless networks in the developing world. Resource pooling involves: (i) abstracting a collection of networked resources to behave like a single unified resource pool and (ii) developing mechanisms for shifting load between the various parts of the unified resource pool. The popularity of resource pooling stems from its ability to provide resilience, high utilization, and flexibility at an acceptable cost. We show that "resource pooling", which is very popular in its various manifestations, is the key unifying principle underlying a diverse number of successful wireless technologies (such as white space networking, community networks, etc.). We discuss various applications of resource pooled wireless technologies and provide a discussion on open issues. |
1211.2365 | Sylvester Eriksson-Bique | Sylvester Eriksson-Bique (University of Washington and University of
Helsinki) and David Kirkpatrick (University of British Columbia) and Valentin
Polishchuk (University of Helsinki) | Discrete Dubins Paths | 26 pages | null | null | null | cs.DM cs.CG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A Dubins path is a shortest path with bounded curvature. The seminal result
in non-holonomic motion planning is that (in the absence of obstacles) a Dubins
path consists either of a circular arc followed by a segment followed by
another arc, or of three circular arcs [Dubins, 1957]. Dubins' original proof
uses advanced calculus; later, Dubins' result was reproved using control theory
techniques [Reeds and Shepp, 1990], [Sussmann and Tang, 1991], [Boissonnat,
C\'er\'ezo, and Leblond, 1994].
We introduce and study a discrete analogue of curvature-constrained motion.
We show that shortest "bounded-curvature" polygonal paths have the same
structure as Dubins paths. The properties of Dubins paths follow from our
results as a limiting case---this gives a new, "discrete" proof of Dubins'
result.
| [
{
"created": "Sun, 11 Nov 2012 01:45:17 GMT",
"version": "v1"
}
] | 2012-11-14 | [
[
"Eriksson-Bique",
"Sylvester",
"",
"University of Washington and University of\n Helsinki"
],
[
"Kirkpatrick",
"David",
"",
"University of British Columbia"
],
[
"Polishchuk",
"Valentin",
"",
"University of Helsinki"
]
] | A Dubins path is a shortest path with bounded curvature. The seminal result in non-holonomic motion planning is that (in the absence of obstacles) a Dubins path consists either of a circular arc followed by a segment followed by another arc, or of three circular arcs [Dubins, 1957]. Dubins' original proof uses advanced calculus; later, Dubins' result was reproved using control theory techniques [Reeds and Shepp, 1990], [Sussmann and Tang, 1991], [Boissonnat, C\'er\'ezo, and Leblond, 1994]. We introduce and study a discrete analogue of curvature-constrained motion. We show that shortest "bounded-curvature" polygonal paths have the same structure as Dubins paths. The properties of Dubins paths follow from our results as a limiting case---this gives a new, "discrete" proof of Dubins' result. |
2209.15181 | Jianzong Wang | Wen Wang, Jianzong Wang, Shijing Si, Zhangcheng Huang, Jing Xiao | RL-MD: A Novel Reinforcement Learning Approach for DNA Motif Discovery | This paper is accepted by DSAA2022. The 9th IEEE International
Conference on Data Science and Advanced Analytics | null | null | null | cs.LG cs.AI q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The extraction of sequence patterns from a collection of functionally linked
unlabeled DNA sequences is known as DNA motif discovery, and it is a key task
in computational biology. Several deep learning-based techniques have recently
been introduced to address this issue. However, these algorithms cannot be
used in real-world situations because of the need for labeled data. Here, we
present RL-MD, a novel reinforcement learning based approach for the DNA motif
discovery task. RL-MD takes unlabelled data as input, employs a relative
information-based method to evaluate each proposed motif, and utilizes these
continuous evaluation results as the reward. The experiments show that RL-MD
can identify high-quality motifs in real-world data.
| [
{
"created": "Fri, 30 Sep 2022 02:07:37 GMT",
"version": "v1"
}
] | 2022-10-03 | [
[
"Wang",
"Wen",
""
],
[
"Wang",
"Jianzong",
""
],
[
"Si",
"Shijing",
""
],
[
"Huang",
"Zhangcheng",
""
],
[
"Xiao",
"Jing",
""
]
] | The extraction of sequence patterns from a collection of functionally linked unlabeled DNA sequences is known as DNA motif discovery, and it is a key task in computational biology. Several deep learning-based techniques have recently been introduced to address this issue. However, these algorithms cannot be used in real-world situations because of the need for labeled data. Here, we present RL-MD, a novel reinforcement learning based approach for the DNA motif discovery task. RL-MD takes unlabelled data as input, employs a relative information-based method to evaluate each proposed motif, and utilizes these continuous evaluation results as the reward. The experiments show that RL-MD can identify high-quality motifs in real-world data. |
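The RL-MD record above mentions a relative information-based reward for candidate motifs. A hedged sketch of one common such score: the KL divergence of each motif position's base frequencies against a uniform background. The exact reward used by RL-MD may differ, and the sequences below are toy assumptions.

```python
import math

def motif_relative_information(occurrences, background=0.25):
    """Sum over positions of KL(column frequencies || uniform background).

    A perfectly conserved column contributes log2(1/background) = 2 bits;
    a column matching the background contributes 0.
    """
    length = len(occurrences[0])
    score = 0.0
    for pos in range(length):
        counts = {b: 0 for b in "ACGT"}
        for occ in occurrences:
            counts[occ[pos]] += 1
        total = sum(counts.values())
        for b in "ACGT":
            p = counts[b] / total
            if p > 0:
                score += p * math.log2(p / background)  # KL term for base b
    return score
```

Under this score, conserved candidate motifs are rewarded and background-like ones score near zero, which is the shape of reward an RL motif-discovery agent can maximize on unlabeled sequences.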
1401.3839 | Silvia Richter | Silvia Richter, Matthias Westphal | The LAMA Planner: Guiding Cost-Based Anytime Planning with Landmarks | null | Journal Of Artificial Intelligence Research, Volume 39, pages
127-177, 2010 | 10.1613/jair.2972 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | LAMA is a classical planning system based on heuristic forward search. Its
core feature is the use of a pseudo-heuristic derived from landmarks,
propositional formulas that must be true in every solution of a planning task.
LAMA builds on the Fast Downward planning system, using finite-domain rather
than binary state variables and multi-heuristic search. The latter is employed
to combine the landmark heuristic with a variant of the well-known FF
heuristic. Both heuristics are cost-sensitive, focusing on high-quality
solutions in the case where actions have non-uniform cost. A weighted A* search
is used with iteratively decreasing weights, so that the planner continues to
search for plans of better quality until the search is terminated. LAMA showed
best performance among all planners in the sequential satisficing track of the
International Planning Competition 2008. In this paper we present the system in
detail and investigate which features of LAMA are crucial for its performance.
We present individual results for some of the domains used at the competition,
demonstrating good and bad cases for the techniques implemented in LAMA.
Overall, we find that using landmarks improves performance, whereas the
incorporation of action costs into the heuristic estimators proves not to be
beneficial. We show that in some domains a search that ignores cost solves far
more problems, raising the question of how to deal with action costs more
effectively in the future. The iterated weighted A* search greatly improves
results, and shows synergy effects with the use of landmarks.
| [
{
"created": "Thu, 16 Jan 2014 04:52:55 GMT",
"version": "v1"
}
] | 2014-01-17 | [
[
"Richter",
"Silvia",
""
],
[
"Westphal",
"Matthias",
""
]
] | LAMA is a classical planning system based on heuristic forward search. Its core feature is the use of a pseudo-heuristic derived from landmarks, propositional formulas that must be true in every solution of a planning task. LAMA builds on the Fast Downward planning system, using finite-domain rather than binary state variables and multi-heuristic search. The latter is employed to combine the landmark heuristic with a variant of the well-known FF heuristic. Both heuristics are cost-sensitive, focusing on high-quality solutions in the case where actions have non-uniform cost. A weighted A* search is used with iteratively decreasing weights, so that the planner continues to search for plans of better quality until the search is terminated. LAMA showed best performance among all planners in the sequential satisficing track of the International Planning Competition 2008. In this paper we present the system in detail and investigate which features of LAMA are crucial for its performance. We present individual results for some of the domains used at the competition, demonstrating good and bad cases for the techniques implemented in LAMA. Overall, we find that using landmarks improves performance, whereas the incorporation of action costs into the heuristic estimators proves not to be beneficial. We show that in some domains a search that ignores cost solves far more problems, raising the question of how to deal with action costs more effectively in the future. The iterated weighted A* search greatly improves results, and shows synergy effects with the use of landmarks. |
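The LAMA record above describes an iterated weighted A* search with decreasing weights. A minimal sketch of that anytime scheme on a toy grid domain; the weight schedule and the domain are illustrative assumptions, not LAMA's actual configuration or heuristics.

```python
import heapq

def weighted_astar(start, goal, neighbors, h, w):
    """Best-first search with evaluation f = g + w * h; returns (cost, path)."""
    open_heap = [(w * h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        _, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return g, path
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry
        for n, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(n, float("inf")):
                best_g[n] = g2
                heapq.heappush(open_heap, (g2 + w * h(n), g2, n, path + [n]))
    return None

def anytime_search(start, goal, neighbors, h, weights=(5, 3, 2, 1)):
    """Run weighted A* with decreasing weights, keeping the cheapest plan."""
    best = None
    for w in weights:
        result = weighted_astar(start, goal, neighbors, h, w)
        if result and (best is None or result[0] < best[0]):
            best = result
    return best

# toy domain: 5x5 grid with unit-cost moves and a Manhattan-distance heuristic
def grid_neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < 5 and 0 <= y + dy < 5:
            yield (x + dx, y + dy), 1

def manhattan(p):
    return abs(4 - p[0]) + abs(4 - p[1])
```

Early iterations with a large w find some plan quickly; the final w = 1 iteration guarantees the retained plan is optimal for this admissible heuristic, mirroring the anytime behavior described in the abstract.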
1807.01857 | Mohammad Masudur Rahman | Mohammad Masudur Rahman, Shamima Yeasmin and Chanchal K. Roy | An IDE-Based Context-Aware Meta Search Engine | 20th Working Conference on Reverse Engineering (WCRE 2013), Koblenz,
Germany, October 2013, pp. 467--471 | null | 10.1109/WCRE.2013.6671324 | null | cs.SE cs.IR | http://creativecommons.org/licenses/by/4.0/ | Traditional web search forces the developers to leave their working
environments and look for solutions in the web browsers. It often does not
consider the context of their programming problems. The context-switching
between the web browser and the working environment is time-consuming and
distracting, and the keyword-based traditional search often does not help much
in problem solving. In this paper, we propose an Eclipse IDE-based web search
solution that collects the data from three web search APIs-- Google, Yahoo,
Bing and a programming Q & A site-- Stack Overflow. It then provides search
results within the IDE, taking not only the content of the selected error into
account but also the problem context, popularity and search engine
recommendation of the result links. Experiments with 25 run time errors and
exceptions show that the proposed approach outperforms the keyword-based search
approaches with a recommendation accuracy of 96%. We also validate the results
with a user study involving five prospective participants where we get a result
agreement of 64.28%. While the preliminary results are promising, the approach
needs to be further validated with more errors and exceptions followed by a
user study with more participants to establish itself as a complete IDE-based
web search solution.
| [
{
"created": "Thu, 5 Jul 2018 06:05:46 GMT",
"version": "v1"
}
] | 2018-07-06 | [
[
"Rahman",
"Mohammad Masudur",
""
],
[
"Yeasmin",
"Shamima",
""
],
[
"Roy",
"Chanchal K.",
""
]
] | Traditional web search forces the developers to leave their working environments and look for solutions in the web browsers. It often does not consider the context of their programming problems. The context-switching between the web browser and the working environment is time-consuming and distracting, and the keyword-based traditional search often does not help much in problem solving. In this paper, we propose an Eclipse IDE-based web search solution that collects the data from three web search APIs-- Google, Yahoo, Bing and a programming Q & A site-- Stack Overflow. It then provides search results within the IDE, taking not only the content of the selected error into account but also the problem context, popularity and search engine recommendation of the result links. Experiments with 25 run time errors and exceptions show that the proposed approach outperforms the keyword-based search approaches with a recommendation accuracy of 96%. We also validate the results with a user study involving five prospective participants where we get a result agreement of 64.28%. While the preliminary results are promising, the approach needs to be further validated with more errors and exceptions followed by a user study with more participants to establish itself as a complete IDE-based web search solution. |
1705.08277 | Pieter Leyman | Pieter Leyman and Patrick De Causmaecker | Optimization in large graphs: Toward a better future? | null | null | null | null | cs.DS cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Finding groups of connected individuals in large graphs with tens of
thousands or more nodes has received considerable attention in academic
research. In this paper, we analyze three main issues with respect to the
recent influx of papers on community detection in (large) graphs, highlight the
specific problems with the current research avenues, and propose a first step
towards a better approach.
First, in spite of the strong interest in community detection, a strong
conceptual and theoretical foundation of connectedness in large graphs is
missing. Yet, it is crucial to be able to determine the specific features that
we aim to analyze in large networks, to avoid a purely black-or-white view.
Second, in the literature, commonly employed (meta)heuristic frameworks are
applied to large graph problems. Currently, it is, however, unclear
whether these techniques are even viable options, and what the added value of
the constituent parts is. Additionally, the manner in which different
algorithms are compared is also ambiguous.
Finally, no analyses of the impact of data parameters on the reported
clusters are performed. Nonetheless, it would be interesting to evaluate which
characteristics lead to which type of communities and what their effect is on
computational difficulty.
| [
{
"created": "Tue, 23 May 2017 14:02:15 GMT",
"version": "v1"
}
] | 2017-05-24 | [
[
"Leyman",
"Pieter",
""
],
[
"De Causmaecker",
"Patrick",
""
]
] | Finding groups of connected individuals in large graphs with tens of thousands or more nodes has received considerable attention in academic research. In this paper, we analyze three main issues with respect to the recent influx of papers on community detection in (large) graphs, highlight the specific problems with the current research avenues, and propose a first step towards a better approach. First, in spite of the strong interest in community detection, a strong conceptual and theoretical foundation of connectedness in large graphs is missing. Yet, it is crucial to be able to determine the specific features that we aim to analyze in large networks, to avoid a purely black-or-white view. Second, in the literature, commonly employed (meta)heuristic frameworks are applied to large graph problems. Currently, it is, however, unclear whether these techniques are even viable options, and what the added value of the constituent parts is. Additionally, the manner in which different algorithms are compared is also ambiguous. Finally, no analyses of the impact of data parameters on the reported clusters are performed. Nonetheless, it would be interesting to evaluate which characteristics lead to which type of communities and what their effect is on computational difficulty. |
1205.3247 | Zohreh Sanaei | Zohreh Sanaei, Saeid Abolfazli, Abdullah Gani, Rashid Hafeez Khokhar | Tripod of Requirements in Horizontal Heterogeneous Mobile Cloud
Computing | null | Z. Sanaei, S. Abolfazli, A. Gani, and R. H. Khokhar, "Tripod of
requirements in horizontal heterogeneous mobile cloud computing," in Proc.1st
Int'l Conf. Computing, Information Systems, and Communications, 2012 | null | null | cs.DC | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The recent trend in mobile computing is toward executing
resource-intensive applications in mobile devices regardless of underlying
resource restrictions (e.g. limited processor and energy) that necessitate
imminent technologies. Prosperity of cloud computing in stationary computers
breeds Mobile Cloud Computing (MCC) technology that aims to augment computing
and storage capabilities of mobile devices besides conserving energy. However,
MCC is more heterogeneous and unreliable (due to wireless connectivity)
compared to cloud computing. Problems like variations in OS, data
fragmentation, and
security and privacy discourage and decelerate implementation and pervasiveness
of MCC. In this paper, we describe MCC as a horizontal heterogeneous ecosystem
and identify thirteen critical metrics and approaches that influence
mobile-cloud solutions and the success of MCC. We divide them into three major
classes, namely ubiquity, trust, and energy efficiency and devise a tripod of
requirements in MCC. Our proposed tripod shows that success of MCC is
achievable by reducing mobility challenges (e.g. seamless connectivity,
fragmentation), increasing trust, and enhancing energy efficiency.
| [
{
"created": "Tue, 15 May 2012 03:29:23 GMT",
"version": "v1"
}
] | 2012-05-16 | [
[
"Sanaei",
"Zohreh",
""
],
[
"Abolfazli",
"Saeid",
""
],
[
"Gani",
"Abdullah",
""
],
[
"Khokhar",
"Rashid Hafeez",
""
]
] | The recent trend in mobile computing is toward executing resource-intensive applications in mobile devices regardless of underlying resource restrictions (e.g. limited processor and energy) that necessitate imminent technologies. Prosperity of cloud computing in stationary computers breeds Mobile Cloud Computing (MCC) technology that aims to augment computing and storage capabilities of mobile devices besides conserving energy. However, MCC is more heterogeneous and unreliable (due to wireless connectivity) compared to cloud computing. Problems like variations in OS, data fragmentation, and security and privacy discourage and decelerate implementation and pervasiveness of MCC. In this paper, we describe MCC as a horizontal heterogeneous ecosystem and identify thirteen critical metrics and approaches that influence mobile-cloud solutions and the success of MCC. We divide them into three major classes, namely ubiquity, trust, and energy efficiency and devise a tripod of requirements in MCC. Our proposed tripod shows that success of MCC is achievable by reducing mobility challenges (e.g. seamless connectivity, fragmentation), increasing trust, and enhancing energy efficiency. |
cs/0501011 | Sergei Fedorenko | Sergei Fedorenko | A simple algorithm for decoding Reed-Solomon codes and its relation to
the Welch-Berlekamp algorithm | 7 pages. Submitted to IEEE Transactions on Information Theory | IEEE Transactions on Information Theory, vol. IT-51, no. 3, pp.
1196-1198, 2005. | 10.1109/TIT.2004.842738 | null | cs.IT math.IT | null | A simple and natural Gao algorithm for decoding algebraic codes is described.
Its relation to the Welch-Berlekamp and Euclidean algorithms is given.
| [
{
"created": "Thu, 6 Jan 2005 18:55:37 GMT",
"version": "v1"
}
] | 2007-07-16 | [
[
"Fedorenko",
"Sergei",
""
]
] | A simple and natural Gao algorithm for decoding algebraic codes is described. Its relation to the Welch-Berlekamp and Euclidean algorithms is given. |
2205.01006 | Shi Xian | Xian Shi, Xun Xu, Wanyue Zhang, Xiatian Zhu, Chuan Sheng Foo, Kui Jia | Open-Set Semi-Supervised Learning for 3D Point Cloud Understanding | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic understanding of 3D point cloud relies on learning models with
massively annotated data, which, in many cases, are expensive or difficult to
collect. This has led to an emerging research interest in semi-supervised
learning (SSL) for 3D point cloud. It is commonly assumed in SSL that the
unlabeled data are drawn from the same distribution as that of the labeled
ones; this assumption, however, rarely holds true in realistic environments.
Blindly using out-of-distribution (OOD) unlabeled data could harm SSL
performance. In this work, we propose to selectively utilize unlabeled data
through sample weighting, so that only conducive unlabeled data would be
prioritized. To estimate the weights, we adopt a bi-level optimization
framework which iteratively optimizes a meta-objective on a held-out validation
set and a task-objective on a training set. Faced with the instability of
efficient bi-level optimizers, we further propose three regularization
techniques to enhance the training stability. Extensive experiments on 3D point
cloud classification and segmentation tasks verify the effectiveness of our
proposed method. We also demonstrate the feasibility of a more efficient
training strategy.
| [
{
"created": "Mon, 2 May 2022 16:09:17 GMT",
"version": "v1"
}
] | 2022-05-03 | [
[
"Shi",
"Xian",
""
],
[
"Xu",
"Xun",
""
],
[
"Zhang",
"Wanyue",
""
],
[
"Zhu",
"Xiatian",
""
],
[
"Foo",
"Chuan Sheng",
""
],
[
"Jia",
"Kui",
""
]
] | Semantic understanding of 3D point cloud relies on learning models with massively annotated data, which, in many cases, are expensive or difficult to collect. This has led to an emerging research interest in semi-supervised learning (SSL) for 3D point cloud. It is commonly assumed in SSL that the unlabeled data are drawn from the same distribution as that of the labeled ones; this assumption, however, rarely holds true in realistic environments. Blindly using out-of-distribution (OOD) unlabeled data could harm SSL performance. In this work, we propose to selectively utilize unlabeled data through sample weighting, so that only conducive unlabeled data would be prioritized. To estimate the weights, we adopt a bi-level optimization framework which iteratively optimizes a meta-objective on a held-out validation set and a task-objective on a training set. Faced with the instability of efficient bi-level optimizers, we further propose three regularization techniques to enhance the training stability. Extensive experiments on 3D point cloud classification and segmentation tasks verify the effectiveness of our proposed method. We also demonstrate the feasibility of a more efficient training strategy. |
1903.09734 | Kamyar Azizzadenesheli Ph.D. | Kamyar Azizzadenesheli, Anqi Liu, Fanny Yang, Animashree Anandkumar | Regularized Learning for Domain Adaptation under Label Shifts | International Conference on Learning Representations (ICLR) 2019 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose Regularized Learning under Label shifts (RLLS), a principled and
practical domain-adaptation algorithm to correct for shifts in the label
distribution between a source and a target domain. We first estimate importance
weights using labeled source data and unlabeled target data, and then train a
classifier on the weighted source samples. We derive a generalization bound for
the classifier on the target domain which is independent of the (ambient) data
dimensions, and instead only depends on the complexity of the function class.
To the best of our knowledge, this is the first generalization bound for the
label-shift problem where the labels in the target domain are not available.
Based on this bound, we propose a regularized estimator for the small-sample
regime which accounts for the uncertainty in the estimated weights. Experiments
on the CIFAR-10 and MNIST datasets show that RLLS improves classification
accuracy, especially in the low sample and large-shift regimes, compared to
previous methods.
| [
{
"created": "Fri, 22 Mar 2019 23:46:24 GMT",
"version": "v1"
}
] | 2020-08-10 | [
[
"Azizzadenesheli",
"Kamyar",
""
],
[
"Liu",
"Anqi",
""
],
[
"Yang",
"Fanny",
""
],
[
"Anandkumar",
"Animashree",
""
]
] | We propose Regularized Learning under Label shifts (RLLS), a principled and practical domain-adaptation algorithm to correct for shifts in the label distribution between a source and a target domain. We first estimate importance weights using labeled source data and unlabeled target data, and then train a classifier on the weighted source samples. We derive a generalization bound for the classifier on the target domain which is independent of the (ambient) data dimensions, and instead only depends on the complexity of the function class. To the best of our knowledge, this is the first generalization bound for the label-shift problem where the labels in the target domain are not available. Based on this bound, we propose a regularized estimator for the small-sample regime which accounts for the uncertainty in the estimated weights. Experiments on the CIFAR-10 and MNIST datasets show that RLLS improves classification accuracy, especially in the low sample and large-shift regimes, compared to previous methods. |
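The RLLS record above builds on estimating importance weights from labeled source and unlabeled target data. A hedged sketch of the basic moment-matching estimator behind such weights for two classes: solve C w = mu, where C[i][j] = P_source(predict = i, y = j) and mu[i] = P_target(predict = i), so w approximates q(y)/p(y). RLLS adds regularization on top of this step; the data and the perfect stub classifier in the test are synthetic assumptions.

```python
def estimate_shift_weights(src_preds, src_labels, tgt_preds):
    """Binary-label weight estimation: w = C^{-1} mu via Cramer's rule.

    src_preds/src_labels: classifier predictions and true labels on source data.
    tgt_preds: classifier predictions on (unlabeled) target data.
    """
    n = len(src_preds)
    # joint distribution of (prediction, true label) on the source domain
    C = [[0.0, 0.0], [0.0, 0.0]]
    for p, y in zip(src_preds, src_labels):
        C[p][y] += 1.0 / n
    # predicted-label distribution on the target domain
    m = len(tgt_preds)
    mu = [tgt_preds.count(0) / m, tgt_preds.count(1) / m]
    # solve the 2x2 linear system C w = mu (for k classes: a k x k solve)
    det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
    w0 = ( C[1][1] * mu[0] - C[0][1] * mu[1]) / det
    w1 = (-C[1][0] * mu[0] + C[0][0] * mu[1]) / det
    return [w0, w1]
```

With a balanced source (50/50) and a target that is 80/20, a perfect classifier yields weights close to [1.6, 0.4], i.e. the true label-distribution ratios q(y)/p(y).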
0810.2865 | Krzysztof R. Apt | Krzysztof R. Apt, Vincent Conitzer, Mingyu Guo and Evangelos Markakis | Welfare Undominated Groves Mechanisms | 12 pages. To appear in Proceedings of the 4th International Workshop
On Internet And Network Economics (WINE 2008). Springer Lecture Notes in
Computer Science, 2008 | null | null | null | cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A common objective in mechanism design is to choose the outcome (for example,
allocation of resources) that maximizes the sum of the agents' valuations,
without introducing incentives for agents to misreport their preferences. The
class of Groves mechanisms achieves this; however, these mechanisms require the
agents to make payments, thereby reducing the agents' total welfare.
In this paper we introduce a measure for comparing two mechanisms with
respect to the final welfare they generate. This measure induces a partial
order on mechanisms and we study the question of finding minimal elements with
respect to this partial order. In particular, we say a non-deficit Groves
mechanism is welfare undominated if there exists no other non-deficit Groves
mechanism that always has a smaller or equal sum of payments. We focus on two
domains: (i) auctions with multiple identical units and unit-demand bidders,
and (ii) mechanisms for public project problems. In the first domain we
analytically characterize all welfare undominated Groves mechanisms that are
anonymous and have linear payment functions, by showing that the family of
optimal-in-expectation linear redistribution mechanisms, which were introduced
in [6] and include the Bailey-Cavallo mechanism [1,2], coincides with the
family of welfare undominated Groves mechanisms that are anonymous and linear
in the setting we study. In the second domain we show that the classic VCG
(Clarke) mechanism is welfare undominated for the class of public project
problems with equal participation costs, but is not undominated for a more
general class.
| [
{
"created": "Thu, 16 Oct 2008 08:25:11 GMT",
"version": "v1"
}
] | 2008-10-17 | [
[
"Apt",
"Krzysztof R.",
""
],
[
"Conitzer",
"Vincent",
""
],
[
"Guo",
"Mingyu",
""
],
[
"Markakis",
"Evangelos",
""
]
] | A common objective in mechanism design is to choose the outcome (for example, allocation of resources) that maximizes the sum of the agents' valuations, without introducing incentives for agents to misreport their preferences. The class of Groves mechanisms achieves this; however, these mechanisms require the agents to make payments, thereby reducing the agents' total welfare. In this paper we introduce a measure for comparing two mechanisms with respect to the final welfare they generate. This measure induces a partial order on mechanisms and we study the question of finding minimal elements with respect to this partial order. In particular, we say a non-deficit Groves mechanism is welfare undominated if there exists no other non-deficit Groves mechanism that always has a smaller or equal sum of payments. We focus on two domains: (i) auctions with multiple identical units and unit-demand bidders, and (ii) mechanisms for public project problems. In the first domain we analytically characterize all welfare undominated Groves mechanisms that are anonymous and have linear payment functions, by showing that the family of optimal-in-expectation linear redistribution mechanisms, which were introduced in [6] and include the Bailey-Cavallo mechanism [1,2], coincides with the family of welfare undominated Groves mechanisms that are anonymous and linear in the setting we study. In the second domain we show that the classic VCG (Clarke) mechanism is welfare undominated for the class of public project problems with equal participation costs, but is not undominated for a more general class. |
2310.16624 | Peter Sorrenson | Felix Draxler, Peter Sorrenson, Lea Zimmermann, Armand Rousselot,
Ullrich K\"othe | Free-form Flows: Make Any Architecture a Normalizing Flow | Camera-ready version: accepted at AISTATS 2024 | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Normalizing Flows are generative models that directly maximize the
likelihood. Previously, the design of normalizing flows was largely constrained
by the need for analytical invertibility. We overcome this constraint by a
training procedure that uses an efficient estimator for the gradient of the
change of variables formula. This enables any dimension-preserving neural
network to serve as a generative model through maximum likelihood training. Our
approach allows placing the emphasis on tailoring inductive biases precisely to
the task at hand. Specifically, we achieve excellent results in molecule
generation benchmarks utilizing $E(n)$-equivariant networks. Moreover, our
method is competitive in an inverse problem benchmark, while employing
off-the-shelf ResNet architectures.
| [
{
"created": "Wed, 25 Oct 2023 13:23:08 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Apr 2024 10:05:18 GMT",
"version": "v2"
}
] | 2024-04-25 | [
[
"Draxler",
"Felix",
""
],
[
"Sorrenson",
"Peter",
""
],
[
"Zimmermann",
"Lea",
""
],
[
"Rousselot",
"Armand",
""
],
[
"Köthe",
"Ullrich",
""
]
] | Normalizing Flows are generative models that directly maximize the likelihood. Previously, the design of normalizing flows was largely constrained by the need for analytical invertibility. We overcome this constraint by a training procedure that uses an efficient estimator for the gradient of the change of variables formula. This enables any dimension-preserving neural network to serve as a generative model through maximum likelihood training. Our approach allows placing the emphasis on tailoring inductive biases precisely to the task at hand. Specifically, we achieve excellent results in molecule generation benchmarks utilizing $E(n)$-equivariant networks. Moreover, our method is competitive in an inverse problem benchmark, while employing off-the-shelf ResNet architectures. |
2402.06500 | Charles Assaad | Lei Zan, Charles K. Assaad, Emilie Devijver, Eric Gaussier, Ali
A\"it-Bachir | On the Fly Detection of Root Causes from Observed Data with Application
to IT Systems | Accepted to CIKM 2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper introduces a new structural causal model tailored for representing
threshold-based IT systems and presents a new algorithm designed to rapidly
detect root causes of anomalies in such systems. When root causes are not
causally related, the method is proven to be correct; while an extension is
proposed based on the intervention of an agent to relax this assumption. Our
algorithm and its agent-based extension leverage causal discovery from offline
data and engage in subgraph traversal when encountering new anomalies in online
data. Our extensive experiments demonstrate the superior performance of our
methods, even when applied to data generated from alternative structural causal
models or real IT monitoring data.
| [
{
"created": "Fri, 9 Feb 2024 16:10:19 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Jul 2024 13:13:30 GMT",
"version": "v2"
}
] | 2024-07-30 | [
[
"Zan",
"Lei",
""
],
[
"Assaad",
"Charles K.",
""
],
[
"Devijver",
"Emilie",
""
],
[
"Gaussier",
"Eric",
""
],
[
"Aït-Bachir",
"Ali",
""
]
] | This paper introduces a new structural causal model tailored for representing threshold-based IT systems and presents a new algorithm designed to rapidly detect root causes of anomalies in such systems. When root causes are not causally related, the method is proven to be correct; while an extension is proposed based on the intervention of an agent to relax this assumption. Our algorithm and its agent-based extension leverage causal discovery from offline data and engage in subgraph traversal when encountering new anomalies in online data. Our extensive experiments demonstrate the superior performance of our methods, even when applied to data generated from alternative structural causal models or real IT monitoring data. |
2205.13764 | Chunhua Shen | Zhi Tian, Xiangxiang Chu, Xiaoming Wang, Xiaolin Wei, Chunhua Shen | Fully Convolutional One-Stage 3D Object Detection on LiDAR Range Images | Accepted to: Proc. Thirty-sixth Conference on Neural Information
Processing Systems (NeurIPS) 2022. 14 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We present a simple yet effective fully convolutional one-stage 3D object
detector for LiDAR point clouds of autonomous driving scenes, termed
FCOS-LiDAR. Unlike the dominant methods that use the bird-eye view (BEV), our
proposed detector detects objects from the range view (RV, a.k.a. range image)
of the LiDAR points. Due to the range view's compactness and compatibility with
the LiDAR sensors' sampling process on self-driving cars, the range view-based
object detector can be realized by solely exploiting the vanilla 2D
convolutions, departing from the BEV-based methods which often involve
complicated voxelization operations and sparse convolutions.
For the first time, we show that an RV-based 3D detector with standard 2D
convolutions alone can achieve comparable performance to state-of-the-art
BEV-based detectors while being significantly faster and simpler. More
importantly, almost all previous range view-based detectors only focus on
single-frame point clouds, since it is challenging to fuse multi-frame point
clouds into a single range view. In this work, we tackle this challenging issue
with a novel range view projection mechanism, and for the first time
demonstrate the benefits of fusing multi-frame point clouds for a range-view
based detector. Extensive experiments on nuScenes show the superiority of our
proposed method and we believe that our work can be strong evidence that an
RV-based 3D detector can compare favourably with the current mainstream
BEV-based detectors.
| [
{
"created": "Fri, 27 May 2022 05:42:16 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Sep 2022 03:06:12 GMT",
"version": "v2"
}
] | 2022-09-21 | [
[
"Tian",
"Zhi",
""
],
[
"Chu",
"Xiangxiang",
""
],
[
"Wang",
"Xiaoming",
""
],
[
"Wei",
"Xiaolin",
""
],
[
"Shen",
"Chunhua",
""
]
] | We present a simple yet effective fully convolutional one-stage 3D object detector for LiDAR point clouds of autonomous driving scenes, termed FCOS-LiDAR. Unlike the dominant methods that use the bird-eye view (BEV), our proposed detector detects objects from the range view (RV, a.k.a. range image) of the LiDAR points. Due to the range view's compactness and compatibility with the LiDAR sensors' sampling process on self-driving cars, the range view-based object detector can be realized by solely exploiting the vanilla 2D convolutions, departing from the BEV-based methods which often involve complicated voxelization operations and sparse convolutions. For the first time, we show that an RV-based 3D detector with standard 2D convolutions alone can achieve comparable performance to state-of-the-art BEV-based detectors while being significantly faster and simpler. More importantly, almost all previous range view-based detectors only focus on single-frame point clouds, since it is challenging to fuse multi-frame point clouds into a single range view. In this work, we tackle this challenging issue with a novel range view projection mechanism, and for the first time demonstrate the benefits of fusing multi-frame point clouds for a range-view based detector. Extensive experiments on nuScenes show the superiority of our proposed method and we believe that our work can be strong evidence that an RV-based 3D detector can compare favourably with the current mainstream BEV-based detectors. |
2306.04274 | Richard Blythman | Richard Blythman, Mohamed Arshath, Salvatore Vivona, Jakub Sm\'ekal,
Hithesh Shaji | Decentralized Technologies for AI Hubs | arXiv admin note: substantial text overlap with arXiv:2210.16651 | 2022 Conference on Neural Information Processing Systems Workshops | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | AI requires heavy amounts of storage and compute with assets that are
commonly stored in AI Hubs. AI Hubs have contributed significantly to the
democratization of AI. However, existing implementations are associated with
certain benefits and limitations that stem from the underlying infrastructure
and governance systems with which they are built. These limitations include
high costs, lack of monetization and reward, lack of control and difficulty of
reproducibility. In the current work, we explore the potential of decentralized
technologies - such as Web3 wallets, peer-to-peer marketplaces, storage and
compute, and DAOs - to address some of these issues. We suggest that these
infrastructural components can be used in combination in the design and
construction of decentralized AI Hubs.
| [
{
"created": "Wed, 7 Jun 2023 09:18:56 GMT",
"version": "v1"
}
] | 2023-06-08 | [
[
"Blythman",
"Richard",
""
],
[
"Arshath",
"Mohamed",
""
],
[
"Vivona",
"Salvatore",
""
],
[
"Smékal",
"Jakub",
""
],
[
"Shaji",
"Hithesh",
""
]
] | AI requires heavy amounts of storage and compute with assets that are commonly stored in AI Hubs. AI Hubs have contributed significantly to the democratization of AI. However, existing implementations are associated with certain benefits and limitations that stem from the underlying infrastructure and governance systems with which they are built. These limitations include high costs, lack of monetization and reward, lack of control and difficulty of reproducibility. In the current work, we explore the potential of decentralized technologies - such as Web3 wallets, peer-to-peer marketplaces, storage and compute, and DAOs - to address some of these issues. We suggest that these infrastructural components can be used in combination in the design and construction of decentralized AI Hubs. |
1904.02809 | Xuanrui Qi | Reynald Affeldt, Jacques Garrigue, Xuanrui Qi, Kazunari Tanaka | Proving tree algorithms for succinct data structures | Accepted to the 10th International Conference on Interactive Theorem
Proving (ITP 2019) | null | null | null | cs.PL cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Succinct data structures give space-efficient representations of large
amounts of data without sacrificing performance. They rely on cleverly
designed data representations and algorithms. We present here the formalization
in Coq/SSReflect of two different tree-based succinct representations and their
accompanying algorithms. One is the Level-Order Unary Degree Sequence, which
encodes the structure of a tree in breadth-first order as a sequence of bits,
where access operations can be defined in terms of Rank and Select, which work
in constant time for static bit sequences. The other represents dynamic bit
sequences as binary balanced trees, where Rank and Select present a low
logarithmic overhead compared to their static versions, and with efficient
insertion and deletion. The two can be stacked to provide a dynamic
representation of dictionaries for instance. While both representations are
well-known, we believe this to be their first formalization and a needed step
towards provably-safe implementations of big data.
| [
{
"created": "Thu, 4 Apr 2019 22:20:12 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Jul 2019 09:49:48 GMT",
"version": "v2"
}
] | 2019-07-03 | [
[
"Affeldt",
"Reynald",
""
],
[
"Garrigue",
"Jacques",
""
],
[
"Qi",
"Xuanrui",
""
],
[
"Tanaka",
"Kazunari",
""
]
] | Succinct data structures give space-efficient representations of large amounts of data without sacrificing performance. They rely on cleverly designed data representations and algorithms. We present here the formalization in Coq/SSReflect of two different tree-based succinct representations and their accompanying algorithms. One is the Level-Order Unary Degree Sequence, which encodes the structure of a tree in breadth-first order as a sequence of bits, where access operations can be defined in terms of Rank and Select, which work in constant time for static bit sequences. The other represents dynamic bit sequences as binary balanced trees, where Rank and Select present a low logarithmic overhead compared to their static versions, and with efficient insertion and deletion. The two can be stacked to provide a dynamic representation of dictionaries for instance. While both representations are well-known, we believe this to be their first formalization and a needed step towards provably-safe implementations of big data. |
2303.12048 | Etai Sella | Etai Sella, Gal Fiebelman, Peter Hedman, Hadar Averbuch-Elor | Vox-E: Text-guided Voxel Editing of 3D Objects | ICCV 2023. Project webpage: https://tau-vailab.github.io/Vox-E/ | null | null | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large scale text-guided diffusion models have garnered significant attention
due to their ability to synthesize diverse images that convey complex visual
concepts. This generative power has more recently been leveraged to perform
text-to-3D synthesis. In this work, we present a technique that harnesses the
power of latent diffusion models for editing existing 3D objects. Our method
takes oriented 2D images of a 3D object as input and learns a grid-based
volumetric representation of it. To guide the volumetric representation to
conform to a target text prompt, we follow unconditional text-to-3D methods and
optimize a Score Distillation Sampling (SDS) loss. However, we observe that
combining this diffusion-guided loss with an image-based regularization loss
that encourages the representation not to deviate too strongly from the input
object is challenging, as it requires achieving two conflicting goals while
viewing only structure-and-appearance coupled 2D projections. Thus, we
introduce a novel volumetric regularization loss that operates directly in 3D
space, utilizing the explicit nature of our 3D representation to enforce
correlation between the global structure of the original and edited object.
Furthermore, we present a technique that optimizes cross-attention volumetric
grids to refine the spatial extent of the edits. Extensive experiments and
comparisons demonstrate the effectiveness of our approach in creating a myriad
of edits which cannot be achieved by prior works.
| [
{
"created": "Tue, 21 Mar 2023 17:36:36 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Aug 2023 13:45:55 GMT",
"version": "v2"
},
{
"created": "Tue, 19 Sep 2023 05:41:59 GMT",
"version": "v3"
}
] | 2023-09-20 | [
[
"Sella",
"Etai",
""
],
[
"Fiebelman",
"Gal",
""
],
[
"Hedman",
"Peter",
""
],
[
"Averbuch-Elor",
"Hadar",
""
]
] | Large scale text-guided diffusion models have garnered significant attention due to their ability to synthesize diverse images that convey complex visual concepts. This generative power has more recently been leveraged to perform text-to-3D synthesis. In this work, we present a technique that harnesses the power of latent diffusion models for editing existing 3D objects. Our method takes oriented 2D images of a 3D object as input and learns a grid-based volumetric representation of it. To guide the volumetric representation to conform to a target text prompt, we follow unconditional text-to-3D methods and optimize a Score Distillation Sampling (SDS) loss. However, we observe that combining this diffusion-guided loss with an image-based regularization loss that encourages the representation not to deviate too strongly from the input object is challenging, as it requires achieving two conflicting goals while viewing only structure-and-appearance coupled 2D projections. Thus, we introduce a novel volumetric regularization loss that operates directly in 3D space, utilizing the explicit nature of our 3D representation to enforce correlation between the global structure of the original and edited object. Furthermore, we present a technique that optimizes cross-attention volumetric grids to refine the spatial extent of the edits. Extensive experiments and comparisons demonstrate the effectiveness of our approach in creating a myriad of edits which cannot be achieved by prior works. |
2211.04402 | Vaclav Skala | Vaclav Skala | Summation Problem Revisited -- More Robust Computation | 9 pages, 3 Figs, 3 Tabs. Presented at Recent Advances in Computer
Science Conf, 2013 | null | null | null | cs.DS cs.NA math.NA | http://creativecommons.org/licenses/by/4.0/ | Numerical data processing is a key task across different fields of computer
technology use. However, even simple summation of values is not precise due to
the floating point representation use. This paper presents a practical
algorithm for summation of values convenient for medium and large data sets.
The proposed algorithm is simple, easy to implement. Its computational
complexity is O(N) in the contrary of the Exact Sign Summation Algorithm (ESSA)
approach with O(N^2) run-time complexity. The proposed algorithm is especially
convenient for cases when exponent data differ significantly and many small
values are summed with higher values
| [
{
"created": "Tue, 8 Nov 2022 17:45:06 GMT",
"version": "v1"
}
] | 2022-11-09 | [
[
"Skala",
"Vaclav",
""
]
] | Numerical data processing is a key task across different fields of computer technology use. However, even simple summation of values is not precise due to the floating point representation use. This paper presents a practical algorithm for summation of values convenient for medium and large data sets. The proposed algorithm is simple, easy to implement. Its computational complexity is O(N) in the contrary of the Exact Sign Summation Algorithm (ESSA) approach with O(N^2) run-time complexity. The proposed algorithm is especially convenient for cases when exponent data differ significantly and many small values are summed with higher values |
1605.03651 | Dinh Hoa Nguyen | Dinh Hoa Nguyen | Minimum-Rank Dynamic Output Consensus Design for Heterogeneous Nonlinear
Multi-Agent Systems | Revised version submitted to IEEE Transactions on Control of Network
Systems | null | 10.1109/TCNS.2016.2580908 | null | cs.SY math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a new and systematic design framework for output
consensus in heterogeneous Multi-Input Multi-Output (MIMO) general nonlinear
Multi-Agent Systems (MASs) subjected to directed communication topology. First,
the input-output feedback linearization method is utilized assuming that the
internal dynamics is Input-to-State Stable (ISS) to obtain linearized
subsystems of agents. Consequently, we propose local dynamic controllers for
agents such that the linearized subsystems have an identical closed-loop
dynamics which has a single pole at the origin whereas other poles are on the
open left half complex plane. This allows us to deal with distinct agents
having arbitrarily vector relative degrees and to derive rank-$1$ cooperative
control inputs for those homogeneous linearized dynamics which results in a
minimum rank distributed dynamic consensus controller for the initial nonlinear
MAS. Moreover, we prove that the coupling strength in the consensus protocol
can be arbitrarily small but positive and hence our consensus design is
non-conservative. Next, our design approach is further strengthened by tackling
the problem of randomly switching communication topologies among agents where
we relax the assumption on the balance of each switched graph and derive a
distributed rank-$1$ dynamic consensus controller. Lastly, a numerical example
is introduced to illustrate the effectiveness of our proposed framework.
| [
{
"created": "Thu, 12 May 2016 01:53:27 GMT",
"version": "v1"
}
] | 2016-11-15 | [
[
"Nguyen",
"Dinh Hoa",
""
]
] | In this paper, we propose a new and systematic design framework for output consensus in heterogeneous Multi-Input Multi-Output (MIMO) general nonlinear Multi-Agent Systems (MASs) subjected to directed communication topology. First, the input-output feedback linearization method is utilized assuming that the internal dynamics is Input-to-State Stable (ISS) to obtain linearized subsystems of agents. Consequently, we propose local dynamic controllers for agents such that the linearized subsystems have an identical closed-loop dynamics which has a single pole at the origin whereas other poles are on the open left half complex plane. This allows us to deal with distinct agents having arbitrarily vector relative degrees and to derive rank-$1$ cooperative control inputs for those homogeneous linearized dynamics which results in a minimum rank distributed dynamic consensus controller for the initial nonlinear MAS. Moreover, we prove that the coupling strength in the consensus protocol can be arbitrarily small but positive and hence our consensus design is non-conservative. Next, our design approach is further strengthened by tackling the problem of randomly switching communication topologies among agents where we relax the assumption on the balance of each switched graph and derive a distributed rank-$1$ dynamic consensus controller. Lastly, a numerical example is introduced to illustrate the effectiveness of our proposed framework. |