| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1907.01727 | Darion Cassel | Darion Cassel, Yan Huang, Limin Jia | Uncovering Information Flow Policy Violations in C Programs | null | null | null | null | cs.CR cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Programmers of cryptographic applications written in C need to avoid common
mistakes such as sending private data over public channels, modifying trusted
data with untrusted functions, or improperly ordering protocol steps. These
secrecy, integrity, and sequencing policies can be cumbersome to check with
existing general-purpose tools. We have developed a novel means of specifying
and uncovering violations of these policies that allows for a much
lighter-weight approach than previous tools. We embed the policy annotations in
C's type system via a source-to-source translation and leverage existing C
compilers to check for policy violations, achieving high performance and
scalability. We show through case studies of recent cryptographic libraries and
applications that our work is able to express detailed policies for large
bodies of C code and can find subtle policy violations. To gain formal
understanding of our policy annotations, we show formal connections between the
policy annotations and an information flow type system and prove a
noninterference guarantee.
| [
{
"created": "Wed, 3 Jul 2019 04:12:11 GMT",
"version": "v1"
}
] | 2019-07-04 | [
[
"Cassel",
"Darion",
""
],
[
"Huang",
"Yan",
""
],
[
"Jia",
"Limin",
""
]
] | Programmers of cryptographic applications written in C need to avoid common mistakes such as sending private data over public channels, modifying trusted data with untrusted functions, or improperly ordering protocol steps. These secrecy, integrity, and sequencing policies can be cumbersome to check with existing general-purpose tools. We have developed a novel means of specifying and uncovering violations of these policies that allows for a much lighter-weight approach than previous tools. We embed the policy annotations in C's type system via a source-to-source translation and leverage existing C compilers to check for policy violations, achieving high performance and scalability. We show through case studies of recent cryptographic libraries and applications that our work is able to express detailed policies for large bodies of C code and can find subtle policy violations. To gain formal understanding of our policy annotations, we show formal connections between the policy annotations and an information flow type system and prove a noninterference guarantee. |
1906.09463 | Lucas Gren | Emanuel Mellblom, Isar Arason, Lucas Gren and Richard Torkar | The Connection Between Burnout and Personality Types in Software
Developers | null | IEEE Software, 36(5), 2019 | 10.1109/MS.2019.2924769 | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper examines the connection between the Five Factor Model personality
traits and burnout in software developers. This study aims to validate
generalizations of findings in other fields. An online survey consisting of a
miniaturized International Personality Item Pool questionnaire for measuring
the Five Factor Model personality traits, and the Shirom-Melamed Burnout
Measure for measuring burnout, was distributed to open source developer
mailing lists, obtaining 47 valid responses. The results from a Bayesian Linear
Regression analysis indicate a strong link between neuroticism and burnout,
confirming previous work, while the other Five Factor Model traits did not
add power to the model. It is important to note that we did not investigate
the quality of work in connection to personality, nor did we take any other
confounding factors into account, such as teamwork. Nonetheless,
employers could be aware of, and support, software developers with high
neuroticism.
| [
{
"created": "Sat, 22 Jun 2019 15:40:54 GMT",
"version": "v1"
}
] | 2019-06-25 | [
[
"Mellblom",
"Emanuel",
""
],
[
"Arason",
"Isar",
""
],
[
"Gren",
"Lucas",
""
],
[
"Torkar",
"Richard",
""
]
] | This paper examines the connection between the Five Factor Model personality traits and burnout in software developers. This study aims to validate generalizations of findings in other fields. An online survey consisting of a miniaturized International Personality Item Pool questionnaire for measuring the Five Factor Model personality traits, and the Shirom-Melamed Burnout Measure for measuring burnout, was distributed to open source developer mailing lists, obtaining 47 valid responses. The results from a Bayesian Linear Regression analysis indicate a strong link between neuroticism and burnout, confirming previous work, while the other Five Factor Model traits did not add power to the model. It is important to note that we did not investigate the quality of work in connection to personality, nor did we take any other confounding factors into account, such as teamwork. Nonetheless, employers could be aware of, and support, software developers with high neuroticism. |
1703.05486 | Oscar De Somer | Oscar De Somer, Ana Soares, Tristan Kuijpers, Koen Vossen, Koen
Vanthournout and Fred Spiessens | Using Reinforcement Learning for Demand Response of Domestic Hot Water
Buffers: a Real-Life Demonstration | Submitted to IEEE ISGT Europe 2017 | null | null | null | cs.SY cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper demonstrates a data-driven control approach for demand response in
real-life residential buildings. The objective is to optimally schedule the
heating cycles of the Domestic Hot Water (DHW) buffer to maximize the
self-consumption of the local photovoltaic (PV) production. A model-based
reinforcement learning technique is used to tackle the underlying sequential
decision-making problem. The proposed algorithm learns the stochastic occupant
behavior, predicts the PV production and takes into account the dynamics of the
system. A real-life experiment with six residential buildings is performed
using this algorithm. The results show that the self-consumption of the PV
production is significantly increased, compared to the default thermostat
control.
| [
{
"created": "Thu, 16 Mar 2017 06:42:07 GMT",
"version": "v1"
}
] | 2017-03-17 | [
[
"De Somer",
"Oscar",
""
],
[
"Soares",
"Ana",
""
],
[
"Kuijpers",
"Tristan",
""
],
[
"Vossen",
"Koen",
""
],
[
"Vanthournout",
"Koen",
""
],
[
"Spiessens",
"Fred",
""
]
] | This paper demonstrates a data-driven control approach for demand response in real-life residential buildings. The objective is to optimally schedule the heating cycles of the Domestic Hot Water (DHW) buffer to maximize the self-consumption of the local photovoltaic (PV) production. A model-based reinforcement learning technique is used to tackle the underlying sequential decision-making problem. The proposed algorithm learns the stochastic occupant behavior, predicts the PV production and takes into account the dynamics of the system. A real-life experiment with six residential buildings is performed using this algorithm. The results show that the self-consumption of the PV production is significantly increased, compared to the default thermostat control. |
2406.16629 | Melanie JI M\"uller | Melanie J.I. M\"uller | Meta-experiments: Improving experimentation through experimentation | 6 pages, 2 figures, 1 table | null | null | null | cs.IR stat.AP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | A/B testing is widely used in industry to optimize customer-facing
websites. Many companies employ experimentation specialists to facilitate and
improve the process of A/B testing. Here, we present the application of A/B
testing to this improvement effort itself, by running experiments on the
experimentation process, which we call 'meta-experiments'. We discuss the
challenges of this approach using the example of one of our meta-experiments,
which helped experimenters to run more sufficiently powered A/B tests. We also
point out the benefits of 'dogfooding' for the experimentation specialists
when running their own experiments.
| [
{
"created": "Mon, 24 Jun 2024 13:23:00 GMT",
"version": "v1"
}
] | 2024-06-25 | [
[
"Müller",
"Melanie J. I.",
""
]
] | A/B testing is widely used in industry to optimize customer-facing websites. Many companies employ experimentation specialists to facilitate and improve the process of A/B testing. Here, we present the application of A/B testing to this improvement effort itself, by running experiments on the experimentation process, which we call 'meta-experiments'. We discuss the challenges of this approach using the example of one of our meta-experiments, which helped experimenters to run more sufficiently powered A/B tests. We also point out the benefits of 'dogfooding' for the experimentation specialists when running their own experiments. |
2203.17191 | Daniel Gehrig | Stepan Tulyakov, Alfredo Bochicchio, Daniel Gehrig, Stamatios
Georgoulis, Yuanyou Li, and Davide Scaramuzza | Time Lens++: Event-based Frame Interpolation with Parametric Non-linear
Flow and Multi-scale Fusion | IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
New Orleans, 2022 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, video frame interpolation using a combination of frame- and
event-based cameras has surpassed traditional image-based methods both in terms
of performance and memory efficiency. However, current methods still suffer
from (i) brittle image-level fusion of complementary interpolation results,
that fails in the presence of artifacts in the fused image, (ii) potentially
temporally inconsistent and inefficient motion estimation procedures, that run
for every inserted frame and (iii) low contrast regions that do not trigger
events, and thus cause events-only motion estimation to generate artifacts.
Moreover, previous methods were only tested on datasets consisting of planar
and faraway scenes, which do not capture the full complexity of the real world.
In this work, we address the above problems by introducing multi-scale
feature-level fusion and computing one-shot non-linear inter-frame motion from
events and images, which can be efficiently sampled for image warping. We also
collect the first large-scale events and frames dataset consisting of more than
100 challenging scenes with depth variations, captured with a new experimental
setup based on a beamsplitter. We show that our method improves the
reconstruction quality by up to 0.2 dB in terms of PSNR and up to 15% in LPIPS
score.
| [
{
"created": "Thu, 31 Mar 2022 17:14:58 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Apr 2022 09:39:00 GMT",
"version": "v2"
}
] | 2022-04-26 | [
[
"Tulyakov",
"Stepan",
""
],
[
"Bochicchio",
"Alfredo",
""
],
[
"Gehrig",
"Daniel",
""
],
[
"Georgoulis",
"Stamatios",
""
],
[
"Li",
"Yuanyou",
""
],
[
"Scaramuzza",
"Davide",
""
]
] | Recently, video frame interpolation using a combination of frame- and event-based cameras has surpassed traditional image-based methods both in terms of performance and memory efficiency. However, current methods still suffer from (i) brittle image-level fusion of complementary interpolation results, that fails in the presence of artifacts in the fused image, (ii) potentially temporally inconsistent and inefficient motion estimation procedures, that run for every inserted frame and (iii) low contrast regions that do not trigger events, and thus cause events-only motion estimation to generate artifacts. Moreover, previous methods were only tested on datasets consisting of planar and faraway scenes, which do not capture the full complexity of the real world. In this work, we address the above problems by introducing multi-scale feature-level fusion and computing one-shot non-linear inter-frame motion from events and images, which can be efficiently sampled for image warping. We also collect the first large-scale events and frames dataset consisting of more than 100 challenging scenes with depth variations, captured with a new experimental setup based on a beamsplitter. We show that our method improves the reconstruction quality by up to 0.2 dB in terms of PSNR and up to 15% in LPIPS score. |
2004.04305 | Shahin Shayandeh | Swadheen Shukla, Lars Liden, Shahin Shayandeh, Eslam Kamal, Jinchao
Li, Matt Mazzola, Thomas Park, Baolin Peng, Jianfeng Gao | Conversation Learner -- A Machine Teaching Tool for Building Dialog
Managers for Task-Oriented Dialog Systems | Accepted to ACL 2020 Demonstration Track | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditionally, industry solutions for building a task-oriented dialog system
have relied on helping dialog authors define rule-based dialog managers,
represented as dialog flows. While dialog flows are intuitively interpretable
and good for simple scenarios, they fall short of the
flexibility needed to handle complex dialogs. On the other hand, purely
machine-learned models can handle complex dialogs, but they are considered to
be black boxes and require large amounts of training data. In this
demonstration, we showcase Conversation Learner, a machine teaching tool for
building dialog managers. It combines the best of both approaches by enabling
dialog authors to create a dialog flow using familiar tools, converting the
dialog flow into a parametric model (e.g., neural networks), and allowing
dialog authors to improve the dialog manager (i.e., the parametric model) over
time by leveraging user-system dialog logs as training data through a machine
teaching interface.
| [
{
"created": "Thu, 9 Apr 2020 00:10:54 GMT",
"version": "v1"
},
{
"created": "Fri, 1 May 2020 20:14:05 GMT",
"version": "v2"
}
] | 2020-05-05 | [
[
"Shukla",
"Swadheen",
""
],
[
"Liden",
"Lars",
""
],
[
"Shayandeh",
"Shahin",
""
],
[
"Kamal",
"Eslam",
""
],
[
"Li",
"Jinchao",
""
],
[
"Mazzola",
"Matt",
""
],
[
"Park",
"Thomas",
""
],
[
"Peng",
"Baolin",
""
],
[
"Gao",
"Jianfeng",
""
]
] | Traditionally, industry solutions for building a task-oriented dialog system have relied on helping dialog authors define rule-based dialog managers, represented as dialog flows. While dialog flows are intuitively interpretable and good for simple scenarios, they fall short of the flexibility needed to handle complex dialogs. On the other hand, purely machine-learned models can handle complex dialogs, but they are considered to be black boxes and require large amounts of training data. In this demonstration, we showcase Conversation Learner, a machine teaching tool for building dialog managers. It combines the best of both approaches by enabling dialog authors to create a dialog flow using familiar tools, converting the dialog flow into a parametric model (e.g., neural networks), and allowing dialog authors to improve the dialog manager (i.e., the parametric model) over time by leveraging user-system dialog logs as training data through a machine teaching interface. |
2105.11989 | Rushabh Patel | Rushabh Patel, Yanhui Guo | Graph Based Link Prediction between Human Phenotypes and Genes | null | null | null | null | cs.AI cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: The learning of genotype-phenotype associations and history of
human disease through detailed and precise analysis of phenotypic
abnormalities can be defined as deep phenotyping. To understand and detect this
interaction between phenotype and genotype is a fundamental step when
translating precision medicine into clinical practice. Recent machine
learning methods can efficiently predict these interactions between
abnormal human phenotypes and genes.
Methods: In this study, we developed a framework to predict links between
human phenotype ontology (HPO) and genes. The annotation data from the
heterogeneous knowledge resource, i.e., Orphanet, is used to parse human
phenotype-gene associations. To generate the embeddings for the nodes (HPO &
genes), an algorithm called node2vec was used. It performs node sampling on
this graph based on random walks, then learns features over these sampled nodes
to generate embeddings. These embeddings were used to perform the downstream
task to predict the presence of the link between these nodes using 5 different
supervised machine learning algorithms.
Results: The downstream link prediction task shows that the Gradient Boosting
Decision Tree based model (LightGBM) achieved an optimal AUROC 0.904 and AUCPR
0.784. In addition, LightGBM achieved an optimal weighted F1 score of 0.87.
Compared to the other four methods, LightGBM is able to find more accurate
interactions/links between human phenotype & gene pairs.
| [
{
"created": "Tue, 25 May 2021 14:47:07 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Jun 2021 18:28:30 GMT",
"version": "v2"
}
] | 2021-06-04 | [
[
"Patel",
"Rushabh",
""
],
[
"Guo",
"Yanhui",
""
]
] | Background: The learning of genotype-phenotype associations and history of human disease through detailed and precise analysis of phenotypic abnormalities can be defined as deep phenotyping. To understand and detect this interaction between phenotype and genotype is a fundamental step when translating precision medicine into clinical practice. Recent machine learning methods can efficiently predict these interactions between abnormal human phenotypes and genes. Methods: In this study, we developed a framework to predict links between human phenotype ontology (HPO) and genes. The annotation data from the heterogeneous knowledge resource, i.e., Orphanet, is used to parse human phenotype-gene associations. To generate the embeddings for the nodes (HPO & genes), an algorithm called node2vec was used. It performs node sampling on this graph based on random walks, then learns features over these sampled nodes to generate embeddings. These embeddings were used to perform the downstream task to predict the presence of the link between these nodes using 5 different supervised machine learning algorithms. Results: The downstream link prediction task shows that the Gradient Boosting Decision Tree based model (LightGBM) achieved an optimal AUROC 0.904 and AUCPR 0.784. In addition, LightGBM achieved an optimal weighted F1 score of 0.87. Compared to the other four methods, LightGBM is able to find more accurate interactions/links between human phenotype & gene pairs. |
2205.06266 | Jonas Pfeiffer | Jonas Pfeiffer, Naman Goyal, Xi Victoria Lin, Xian Li, James Cross,
Sebastian Riedel, Mikel Artetxe | Lifting the Curse of Multilinguality by Pre-training Modular
Transformers | NAACL 2022 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multilingual pre-trained models are known to suffer from the curse of
multilinguality, which causes per-language performance to drop as they cover
more languages. We address this issue by introducing language-specific modules,
which allows us to grow the total capacity of the model, while keeping the
total number of trainable parameters per language constant. In contrast with
prior work that learns language-specific components post-hoc, we pre-train the
modules of our Cross-lingual Modular (X-Mod) models from the start. Our
experiments on natural language inference, named entity recognition and
question answering show that our approach not only mitigates the negative
interference between languages, but also enables positive transfer, resulting
in improved monolingual and cross-lingual performance. Furthermore, our
approach enables adding languages post-hoc with no measurable drop in
performance, no longer limiting the model usage to the set of pre-trained
languages.
| [
{
"created": "Thu, 12 May 2022 17:59:56 GMT",
"version": "v1"
}
] | 2022-05-13 | [
[
"Pfeiffer",
"Jonas",
""
],
[
"Goyal",
"Naman",
""
],
[
"Lin",
"Xi Victoria",
""
],
[
"Li",
"Xian",
""
],
[
"Cross",
"James",
""
],
[
"Riedel",
"Sebastian",
""
],
[
"Artetxe",
"Mikel",
""
]
] | Multilingual pre-trained models are known to suffer from the curse of multilinguality, which causes per-language performance to drop as they cover more languages. We address this issue by introducing language-specific modules, which allows us to grow the total capacity of the model, while keeping the total number of trainable parameters per language constant. In contrast with prior work that learns language-specific components post-hoc, we pre-train the modules of our Cross-lingual Modular (X-Mod) models from the start. Our experiments on natural language inference, named entity recognition and question answering show that our approach not only mitigates the negative interference between languages, but also enables positive transfer, resulting in improved monolingual and cross-lingual performance. Furthermore, our approach enables adding languages post-hoc with no measurable drop in performance, no longer limiting the model usage to the set of pre-trained languages. |
2402.05687 | Gabriel Martins de Jesus | Gabriel Martins de Jesus, Onel Luis Alcaraz Lopez, Richard Demo Souza,
Nurul Huda Mahmood, Markku Juntti, Matti Latva-Aho | Assessment of the Sparsity-Diversity Trade-offs in Active Users
Detection for mMTC | 5 pages, 5 figures. Manuscript submitted to IEEE Wireless
Communications Letters for review | null | null | null | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Wireless communication systems must increasingly support a multitude of
machine-type communications (MTC) devices, thus calling for advanced strategies
for active user detection (AUD). Recent literature has delved into AUD
techniques based on compressed sensing, highlighting the critical role of
signal sparsity. This study investigates the relationship between frequency
diversity and signal sparsity in the AUD problem. Single-antenna users transmit
multiple copies of non-orthogonal pilots across multiple frequency channels and
the base station independently performs AUD in each channel using the
orthogonal matching pursuit algorithm. We note that, although frequency
diversity may improve the likelihood of successful reception of the signals, it
may also damage the channel sparsity level, leading to important trade-offs. We
show that a sparser signal significantly benefits AUD, surpassing the
advantages brought by frequency diversity in scenarios with limited temporal
resources and/or high numbers of receive antennas. Conversely, with longer
pilots and fewer receive antennas, investing in frequency diversity becomes
more impactful, resulting in a tenfold AUD performance improvement.
| [
{
"created": "Thu, 8 Feb 2024 14:06:01 GMT",
"version": "v1"
}
] | 2024-02-09 | [
[
"de Jesus",
"Gabriel Martins",
""
],
[
"Lopez",
"Onel Luis Alcaraz",
""
],
[
"Souza",
"Richard Demo",
""
],
[
"Mahmood",
"Nurul Huda",
""
],
[
"Juntti",
"Markku",
""
],
[
"Latva-Aho",
"Matti",
""
]
] | Wireless communication systems must increasingly support a multitude of machine-type communications (MTC) devices, thus calling for advanced strategies for active user detection (AUD). Recent literature has delved into AUD techniques based on compressed sensing, highlighting the critical role of signal sparsity. This study investigates the relationship between frequency diversity and signal sparsity in the AUD problem. Single-antenna users transmit multiple copies of non-orthogonal pilots across multiple frequency channels and the base station independently performs AUD in each channel using the orthogonal matching pursuit algorithm. We note that, although frequency diversity may improve the likelihood of successful reception of the signals, it may also damage the channel sparsity level, leading to important trade-offs. We show that a sparser signal significantly benefits AUD, surpassing the advantages brought by frequency diversity in scenarios with limited temporal resources and/or high numbers of receive antennas. Conversely, with longer pilots and fewer receive antennas, investing in frequency diversity becomes more impactful, resulting in a tenfold AUD performance improvement. |
1809.10842 | Yi Wu | Yi Wu, Yuxin Wu, Aviv Tamar, Stuart Russell, Georgia Gkioxari,
Yuandong Tian | Learning and Planning with a Semantic Model | submitted to ICLR 2019 | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Building deep reinforcement learning agents that can generalize and adapt to
unseen environments remains a fundamental challenge for AI. This paper
describes progress on this challenge in the context of man-made environments,
which are visually diverse but contain intrinsic semantic regularities. We
propose a hybrid model-based and model-free approach, LEArning and Planning
with Semantics (LEAPS), consisting of a multi-target sub-policy that acts on
visual inputs, and a Bayesian model over semantic structures. When placed in an
unseen environment, the agent plans with the semantic model to make high-level
decisions, proposes the next sub-target for the sub-policy to execute, and
updates the semantic model based on new observations. We perform experiments in
visual navigation tasks using House3D, a 3D environment that contains diverse
human-designed indoor scenes with real-world objects. LEAPS outperforms strong
baselines that do not explicitly plan using the semantic content.
| [
{
"created": "Fri, 28 Sep 2018 03:30:37 GMT",
"version": "v1"
}
] | 2018-10-01 | [
[
"Wu",
"Yi",
""
],
[
"Wu",
"Yuxin",
""
],
[
"Tamar",
"Aviv",
""
],
[
"Russell",
"Stuart",
""
],
[
"Gkioxari",
"Georgia",
""
],
[
"Tian",
"Yuandong",
""
]
] | Building deep reinforcement learning agents that can generalize and adapt to unseen environments remains a fundamental challenge for AI. This paper describes progress on this challenge in the context of man-made environments, which are visually diverse but contain intrinsic semantic regularities. We propose a hybrid model-based and model-free approach, LEArning and Planning with Semantics (LEAPS), consisting of a multi-target sub-policy that acts on visual inputs, and a Bayesian model over semantic structures. When placed in an unseen environment, the agent plans with the semantic model to make high-level decisions, proposes the next sub-target for the sub-policy to execute, and updates the semantic model based on new observations. We perform experiments in visual navigation tasks using House3D, a 3D environment that contains diverse human-designed indoor scenes with real-world objects. LEAPS outperforms strong baselines that do not explicitly plan using the semantic content. |
1810.11530 | Pascal Lamblin | Bart van Merri\"enboer, Olivier Breuleux, Arnaud Bergeron, Pascal
Lamblin | Automatic differentiation in ML: Where we are and where we should be
going | null | null | null | null | cs.LG cs.PL stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We review the current state of automatic differentiation (AD) for array
programming in machine learning (ML), including the different approaches such
as operator overloading (OO) and source transformation (ST) used for AD,
graph-based intermediate representations for programs, and source languages.
Based on these insights, we introduce a new graph-based intermediate
representation (IR) which specifically aims to efficiently support
fully-general AD for array programming. Unlike existing dataflow programming
representations in ML frameworks, our IR naturally supports function calls,
higher-order functions and recursion, making ML models easier to implement. The
ability to represent closures allows us to perform AD using ST without a tape,
making the resulting derivative (adjoint) program amenable to ahead-of-time
optimization using tools from functional language compilers, and enabling
higher-order derivatives. Lastly, we introduce a proof of concept compiler
toolchain called Myia which uses a subset of Python as a front end.
| [
{
"created": "Fri, 26 Oct 2018 21:09:07 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Jan 2019 21:16:54 GMT",
"version": "v2"
}
] | 2019-01-04 | [
[
"van Merriënboer",
"Bart",
""
],
[
"Breuleux",
"Olivier",
""
],
[
"Bergeron",
"Arnaud",
""
],
[
"Lamblin",
"Pascal",
""
]
] | We review the current state of automatic differentiation (AD) for array programming in machine learning (ML), including the different approaches such as operator overloading (OO) and source transformation (ST) used for AD, graph-based intermediate representations for programs, and source languages. Based on these insights, we introduce a new graph-based intermediate representation (IR) which specifically aims to efficiently support fully-general AD for array programming. Unlike existing dataflow programming representations in ML frameworks, our IR naturally supports function calls, higher-order functions and recursion, making ML models easier to implement. The ability to represent closures allows us to perform AD using ST without a tape, making the resulting derivative (adjoint) program amenable to ahead-of-time optimization using tools from functional language compilers, and enabling higher-order derivatives. Lastly, we introduce a proof of concept compiler toolchain called Myia which uses a subset of Python as a front end. |
2108.04197 | Haokai Hong | Haokai Hong, Kai Ye, Min Jiang, Donglin Cao, Kay Chen Tan | Solving Large-Scale Multi-Objective Optimization via Probabilistic
Prediction Model | 17 pages, 2 figures | null | null | null | cs.NE cs.AI | http://creativecommons.org/licenses/by/4.0/ | The main feature of large-scale multi-objective optimization problems (LSMOP)
is to optimize multiple conflicting objectives while considering thousands of
decision variables at the same time. An efficient LSMOP algorithm should have
the ability to escape local optima in the huge search space and
find the global optimum. Most current research focuses on how to deal
with decision variables. However, the large number of decision
variables easily leads to high computational cost. Maintaining the
diversity of the population is one of the effective ways to improve search
efficiency. In this paper, we propose a probabilistic prediction model based on
a trend prediction model and a generating-filtering strategy, called LT-PPM, to
tackle the LSMOP. The proposed method enhances the diversity of the population
through importance sampling. At the same time, due to the adoption of an
individual-based evolution mechanism, the computational cost of the proposed
method is independent of the number of decision variables, thus avoiding the
problem of exponential growth of the search space. We compared the proposed
algorithm with several state-of-the-art algorithms for different benchmark
functions. The experimental results and complexity analysis demonstrate
that the proposed algorithm achieves significant improvements in
performance and computational efficiency in large-scale multi-objective
optimization.
| [
{
"created": "Fri, 16 Jul 2021 09:43:35 GMT",
"version": "v1"
}
] | 2021-08-10 | [
[
"Hong",
"Haokai",
""
],
[
"Ye",
"Kai",
""
],
[
"Jiang",
"Min",
""
],
[
"Cao",
"Donglin",
""
],
[
"Tan",
"Kay Chen",
""
]
] | The main feature of large-scale multi-objective optimization problems (LSMOP) is to optimize multiple conflicting objectives while considering thousands of decision variables at the same time. An efficient LSMOP algorithm should have the ability to escape local optima in the huge search space and find the global optimum. Most current research focuses on how to deal with decision variables. However, the large number of decision variables easily leads to high computational cost. Maintaining the diversity of the population is one of the effective ways to improve search efficiency. In this paper, we propose a probabilistic prediction model based on a trend prediction model and a generating-filtering strategy, called LT-PPM, to tackle the LSMOP. The proposed method enhances the diversity of the population through importance sampling. At the same time, due to the adoption of an individual-based evolution mechanism, the computational cost of the proposed method is independent of the number of decision variables, thus avoiding the problem of exponential growth of the search space. We compared the proposed algorithm with several state-of-the-art algorithms for different benchmark functions. The experimental results and complexity analysis demonstrate that the proposed algorithm achieves significant improvements in performance and computational efficiency in large-scale multi-objective optimization. |
1911.08706 | Xing Niu | Xing Niu, Marine Carpuat | Controlling Neural Machine Translation Formality with Synthetic
Supervision | Accepted at AAAI 2020 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work aims to produce translations that convey source language content at
a formality level that is appropriate for a particular audience. Framing this
problem as a neural sequence-to-sequence task ideally requires training
triplets consisting of a bilingual sentence pair labeled with target language
formality. However, in practice, available training examples are limited to
English sentence pairs of different styles, and bilingual parallel sentences of
unknown formality. We introduce a novel training scheme for multi-task models
that automatically generates synthetic training triplets by inferring the
missing element on the fly, thus enabling end-to-end training. Comprehensive
automatic and human assessments show that our best model outperforms existing
models by producing translations that better match desired formality levels
while preserving the source meaning.
| [
{
"created": "Wed, 20 Nov 2019 04:54:21 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Nov 2019 00:55:36 GMT",
"version": "v2"
}
] | 2019-12-02 | [
[
"Niu",
"Xing",
""
],
[
"Carpuat",
"Marine",
""
]
] | This work aims to produce translations that convey source language content at a formality level that is appropriate for a particular audience. Framing this problem as a neural sequence-to-sequence task ideally requires training triplets consisting of a bilingual sentence pair labeled with target language formality. However, in practice, available training examples are limited to English sentence pairs of different styles, and bilingual parallel sentences of unknown formality. We introduce a novel training scheme for multi-task models that automatically generates synthetic training triplets by inferring the missing element on the fly, thus enabling end-to-end training. Comprehensive automatic and human assessments show that our best model outperforms existing models by producing translations that better match desired formality levels while preserving the source meaning. |
1805.12393 | Yuyu Zhang | Yuyu Zhang, Hanjun Dai, Kamil Toraman, Le Song | KG^2: Learning to Reason Science Exam Questions with Contextual
Knowledge Graph Embeddings | null | null | null | null | cs.LG cs.AI cs.CL stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The AI2 Reasoning Challenge (ARC), a new benchmark dataset for question
answering (QA), has recently been released. ARC contains only natural science
questions authored for human exams, which are hard to answer and require
advanced logic reasoning. On the ARC Challenge Set, existing state-of-the-art
QA systems fail to significantly outperform a random baseline, reflecting the
difficult nature of this task. In this paper, we propose a novel framework for
answering science exam questions, which mimics the human solving process in an
open-book exam. To address the reasoning challenge, we construct contextual
knowledge graphs respectively for the question itself and supporting sentences.
Our model learns to reason with neural embeddings of both knowledge graphs.
Experiments on the ARC Challenge Set show that our model outperforms the
previous state-of-the-art QA systems.
| [
{
"created": "Thu, 31 May 2018 09:39:14 GMT",
"version": "v1"
}
] | 2018-06-01 | [
[
"Zhang",
"Yuyu",
""
],
[
"Dai",
"Hanjun",
""
],
[
"Toraman",
"Kamil",
""
],
[
"Song",
"Le",
""
]
] | The AI2 Reasoning Challenge (ARC), a new benchmark dataset for question answering (QA), has recently been released. ARC contains only natural science questions authored for human exams, which are hard to answer and require advanced logic reasoning. On the ARC Challenge Set, existing state-of-the-art QA systems fail to significantly outperform a random baseline, reflecting the difficult nature of this task. In this paper, we propose a novel framework for answering science exam questions, which mimics the human solving process in an open-book exam. To address the reasoning challenge, we construct contextual knowledge graphs respectively for the question itself and supporting sentences. Our model learns to reason with neural embeddings of both knowledge graphs. Experiments on the ARC Challenge Set show that our model outperforms the previous state-of-the-art QA systems. |
1307.3054 | Sucheta Shrivastava | Sayali Nimkar, Sanal Varghese, Sucheta Shrivastava | Contrast Enhancement And Brightness Preservation Using Multi-
Decomposition Histogram Equalization | 9 pages,13 figures | SIPIJ, Vol.4, Issue.3, pp. 85-93 | 10.5121/sipij.2013.4308 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Histogram Equalization (HE) has been an essential addition to the Image
Enhancement world. Enhancement techniques like Classical Histogram Equalization
(CHE), Adaptive Histogram Equalization (ADHE), Bi-Histogram Equalization (BHE)
and Recursive Mean Separate Histogram Equalization (RMSHE) methods enhance
contrast; however, brightness is not well preserved with these methods, which
gives an unpleasant look to the final image obtained. Thus, we introduce a
novel technique Multi-Decomposition Histogram Equalization (MDHE) to eliminate
the drawbacks of the earlier methods. In MDHE, we have decomposed the input
into sixty-four parts, applied CHE in each of the sub-images and then finally
interpolated them in the correct order. The final image after MDHE results in a
contrast-enhanced and brightness-preserved image compared to all other
techniques mentioned above. We have calculated the various parameters like
PSNR, SNR, RMSE, MSE, etc. for every technique. Our results are well supported
by bar graphs, histograms and the parameter calculations at the end.
| [
{
"created": "Thu, 11 Jul 2013 11:02:57 GMT",
"version": "v1"
}
] | 2013-07-12 | [
[
"Nimkar",
"Sayali",
""
],
[
"Varghese",
"Sanal",
""
],
[
"Shrivastava",
"Sucheta",
""
]
] | Histogram Equalization (HE) has been an essential addition to the Image Enhancement world. Enhancement techniques like Classical Histogram Equalization (CHE), Adaptive Histogram Equalization (ADHE), Bi-Histogram Equalization (BHE) and Recursive Mean Separate Histogram Equalization (RMSHE) methods enhance contrast; however, brightness is not well preserved with these methods, which gives an unpleasant look to the final image obtained. Thus, we introduce a novel technique Multi-Decomposition Histogram Equalization (MDHE) to eliminate the drawbacks of the earlier methods. In MDHE, we have decomposed the input into sixty-four parts, applied CHE in each of the sub-images and then finally interpolated them in the correct order. The final image after MDHE results in a contrast-enhanced and brightness-preserved image compared to all other techniques mentioned above. We have calculated the various parameters like PSNR, SNR, RMSE, MSE, etc. for every technique. Our results are well supported by bar graphs, histograms and the parameter calculations at the end. |
1011.3170 | James Aspnes | James Aspnes | Slightly smaller splitter networks | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The classic renaming protocol of Moir and Anderson (1995) uses a network of
Theta(n^2) splitters to assign unique names to n processes with unbounded
initial names. We show how to reduce this bound to Theta(n^{3/2}) splitters.
| [
{
"created": "Sun, 14 Nov 2010 00:52:14 GMT",
"version": "v1"
}
] | 2010-11-16 | [
[
"Aspnes",
"James",
""
]
] | The classic renaming protocol of Moir and Anderson (1995) uses a network of Theta(n^2) splitters to assign unique names to n processes with unbounded initial names. We show how to reduce this bound to Theta(n^{3/2}) splitters. |
1907.11034 | Petra Van Den Bos | Petra van den Bos and Frits Vaandrager | State Identification for Labeled Transition Systems with Inputs and
Outputs | null | null | null | null | cs.FL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For Finite State Machines (FSMs) a rich testing theory has been developed to
discover aspects of their behavior and ensure their correct functioning.
Although this theory is widely used, e.g., to check conformance of protocol
implementations, its applicability is limited by restrictions of the FSM
framework: the fact that inputs and outputs alternate in an FSM, and outputs
are fully determined by the previous input and state. Labeled Transition
Systems with inputs and outputs (LTSs), as studied in ioco testing theory,
provide a richer framework for testing component oriented systems, but lack the
algorithms for test generation from FSM theory.
In this article, we propose an algorithm for the fundamental problem of state
identification during testing of LTSs. Our algorithm is a direct generalization
of the well-known algorithm for computing adaptive distinguishing sequences for
FSMs proposed by Lee & Yannakakis. Our algorithm has to deal with so-called
compatible states, states that cannot be distinguished in case of an
adversarial system-under-test. Analogous to the result of Lee & Yannakakis, we
prove that if an (adaptive) test exists that distinguishes all pairs of
incompatible states of an LTS, our algorithm will find one. In practice, such
adaptive tests typically do not exist. However, in experiments with an
implementation of our algorithm on an industrial benchmark, we find that tests
produced by our algorithm still distinguish more than 99% of the incompatible
state pairs.
| [
{
"created": "Thu, 25 Jul 2019 13:28:31 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Oct 2019 14:43:22 GMT",
"version": "v2"
}
] | 2019-10-23 | [
[
"Bos",
"Petra van den",
""
],
[
"Vaandrager",
"Frits",
""
]
] | For Finite State Machines (FSMs) a rich testing theory has been developed to discover aspects of their behavior and ensure their correct functioning. Although this theory is widely used, e.g., to check conformance of protocol implementations, its applicability is limited by restrictions of the FSM framework: the fact that inputs and outputs alternate in an FSM, and outputs are fully determined by the previous input and state. Labeled Transition Systems with inputs and outputs (LTSs), as studied in ioco testing theory, provide a richer framework for testing component oriented systems, but lack the algorithms for test generation from FSM theory. In this article, we propose an algorithm for the fundamental problem of state identification during testing of LTSs. Our algorithm is a direct generalization of the well-known algorithm for computing adaptive distinguishing sequences for FSMs proposed by Lee & Yannakakis. Our algorithm has to deal with so-called compatible states, states that cannot be distinguished in case of an adversarial system-under-test. Analogous to the result of Lee & Yannakakis, we prove that if an (adaptive) test exists that distinguishes all pairs of incompatible states of an LTS, our algorithm will find one. In practice, such adaptive tests typically do not exist. However, in experiments with an implementation of our algorithm on an industrial benchmark, we find that tests produced by our algorithm still distinguish more than 99% of the incompatible state pairs. |
1509.02796 | Johannes K\"oster | Johannes K\"oster | Rust-Bio - a fast and safe bioinformatics library | null | null | 10.1093/bioinformatics/btv573 | null | cs.MS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Rust-Bio, the first general purpose bioinformatics library for the
innovative Rust programming language. Rust-Bio leverages the unique combination
of speed, memory safety and high-level syntax offered by Rust to provide a fast
and safe set of bioinformatics algorithms and data structures with a focus on
sequence analysis.
| [
{
"created": "Wed, 9 Sep 2015 14:53:02 GMT",
"version": "v1"
}
] | 2015-10-08 | [
[
"Köster",
"Johannes",
""
]
] | We present Rust-Bio, the first general purpose bioinformatics library for the innovative Rust programming language. Rust-Bio leverages the unique combination of speed, memory safety and high-level syntax offered by Rust to provide a fast and safe set of bioinformatics algorithms and data structures with a focus on sequence analysis. |
2406.13188 | Naiming (Lucy) Liu | Naiming Liu, Zichao Wang, Richard Baraniuk | Synthetic Context Generation for Question Generation | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Despite rapid advancements in large language models (LLMs), question generation (QG) remains a
challenging problem due to its complicated process, open-ended nature, and the
diverse settings in which question generation occurs. A common approach to
address these challenges involves fine-tuning smaller, custom models using
datasets containing background context, question, and answer. However,
obtaining suitable domain-specific datasets with appropriate context is often
more difficult than acquiring question-answer pairs. In this paper, we
investigate training QG models using synthetic contexts generated by LLMs from
readily available question-answer pairs. We conduct a comprehensive study to
answer critical research questions related to the performance of models trained
on synthetic contexts and their potential impact on QG research and
applications. Our empirical results reveal: 1) contexts are essential for QG
tasks, even if they are synthetic; 2) fine-tuning smaller language models can
achieve better performance than prompting larger language models; and 3)
synthetic and real contexts can achieve comparable performance. These findings
highlight the effectiveness of synthetic contexts in QG and pave the way for
future advancements in the
field.
| [
{
"created": "Wed, 19 Jun 2024 03:37:52 GMT",
"version": "v1"
}
] | 2024-06-21 | [
[
"Liu",
"Naiming",
""
],
[
"Wang",
"Zichao",
""
],
[
"Baraniuk",
"Richard",
""
]
] | Despite rapid advancements in large language models (LLMs), question generation (QG) remains a challenging problem due to its complicated process, open-ended nature, and the diverse settings in which question generation occurs. A common approach to address these challenges involves fine-tuning smaller, custom models using datasets containing background context, question, and answer. However, obtaining suitable domain-specific datasets with appropriate context is often more difficult than acquiring question-answer pairs. In this paper, we investigate training QG models using synthetic contexts generated by LLMs from readily available question-answer pairs. We conduct a comprehensive study to answer critical research questions related to the performance of models trained on synthetic contexts and their potential impact on QG research and applications. Our empirical results reveal: 1) contexts are essential for QG tasks, even if they are synthetic; 2) fine-tuning smaller language models can achieve better performance than prompting larger language models; and 3) synthetic and real contexts can achieve comparable performance. These findings highlight the effectiveness of synthetic contexts in QG and pave the way for future advancements in the field. |
2103.16185 | Jakub Ruszil | Jakub Ruszil | Approximation algorithm for finding short synchronizing words in
weighted automata | 9 pages, 2 figures | null | null | null | cs.FL cs.DM cs.DS math.CO | http://creativecommons.org/licenses/by/4.0/ | In this paper we are dealing with the issue of finding possibly short
synchronizing words in automata with weight assigned to each letter in the
alphabet $\Sigma$. First we discuss some complexity problems, and then we
present a new approximation algorithm in four variations.
| [
{
"created": "Tue, 30 Mar 2021 09:07:57 GMT",
"version": "v1"
}
] | 2021-03-31 | [
[
"Ruszil",
"Jakub",
""
]
] | In this paper we are dealing with the issue of finding possibly short synchronizing words in automata with weight assigned to each letter in the alphabet $\Sigma$. First we discuss some complexity problems, and then we present a new approximation algorithm in four variations. |
1206.3634 | Ankur Sahai | Ankur Sahai | Balls into Bins: strict Capacities and Edge Weights | null | null | null | null | cs.DS cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore a novel theoretical model for studying the performance of
distributed storage management systems where the data-centers have limited
capacities (as compared to storage space requested by the users). Prior schemes
such as Balls-into-bins (used for load balancing) neither consider bin
(consumer) capacities (multiple balls into a bin) nor the future performance of
the system after balls (producer requests) are allocated to bins, and they
restrict the number of balls as a function of the number of bins. Our problem consists of
finding an optimal assignment of the online producer requests to consumers (via
weighted edges) in a complete bipartite graph while ensuring that the total
size of request assigned on a consumer is limited by its capacity. The metric
used to measure the performance in this model is the (minimization of) weighted
sum of the requests assigned on the edges (loads) and their corresponding
weights. We first explore the optimal offline algorithms followed by
competitive analysis of different online techniques under an oblivious
adversary. LP and Primal-Dual algorithms are used for calculating the optimal offline
solution in O(r*n) time (where r and n are the number of requests and consumers
respectively) while randomized algorithms are used for the online case.
For the simplified model with equal consumer capacities, an average-case
competitive ratio of AVG(d) / MIN(d) (where d is the edge weight / distance) is
achieved using an algorithm that has equal probability for selecting any of the
available edges with a running time of $O(r)$. In extending the model to
arbitrary consumer capacities, we show an average-case competitive ratio of
AVG(d*c) / (AVG(c) * MIN(d)).
| [
{
"created": "Sat, 16 Jun 2012 07:33:30 GMT",
"version": "v1"
}
] | 2012-06-19 | [
[
"Sahai",
"Ankur",
""
]
] | We explore a novel theoretical model for studying the performance of distributed storage management systems where the data-centers have limited capacities (as compared to storage space requested by the users). Prior schemes such as Balls-into-bins (used for load balancing) neither consider bin (consumer) capacities (multiple balls into a bin) nor the future performance of the system after balls (producer requests) are allocated to bins, and they restrict the number of balls as a function of the number of bins. Our problem consists of finding an optimal assignment of the online producer requests to consumers (via weighted edges) in a complete bipartite graph while ensuring that the total size of request assigned on a consumer is limited by its capacity. The metric used to measure the performance in this model is the (minimization of) weighted sum of the requests assigned on the edges (loads) and their corresponding weights. We first explore the optimal offline algorithms followed by competitive analysis of different online techniques under an oblivious adversary. LP and Primal-Dual algorithms are used for calculating the optimal offline solution in O(r*n) time (where r and n are the number of requests and consumers respectively) while randomized algorithms are used for the online case. For the simplified model with equal consumer capacities, an average-case competitive ratio of AVG(d) / MIN(d) (where d is the edge weight / distance) is achieved using an algorithm that has equal probability for selecting any of the available edges with a running time of $O(r)$. In extending the model to arbitrary consumer capacities, we show an average-case competitive ratio of AVG(d*c) / (AVG(c) * MIN(d)). |
2403.12031 | Xiuyu Li | Qitian Jason Hu, Jacob Bieker, Xiuyu Li, Nan Jiang, Benjamin Keigwin,
Gaurav Ranganath, Kurt Keutzer, Shriyash Kaustubh Upadhyay | RouterBench: A Benchmark for Multi-LLM Routing System | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As the range of applications for Large Language Models (LLMs) continues to
grow, the demand for effective serving solutions becomes increasingly critical.
Despite the versatility of LLMs, no single model can optimally address all
tasks and applications, particularly when balancing performance with cost. This
limitation has led to the development of LLM routing systems, which combine the
strengths of various models to overcome the constraints of individual LLMs.
Yet, the absence of a standardized benchmark for evaluating the performance of
LLM routers hinders progress in this area. To bridge this gap, we present
RouterBench, a novel evaluation framework designed to systematically assess the
efficacy of LLM routing systems, along with a comprehensive dataset comprising
over 405k inference outcomes from representative LLMs to support the
development of routing strategies. We further propose a theoretical framework
for LLM routing, and deliver a comparative analysis of various routing
approaches through RouterBench, highlighting their potentials and limitations
within our evaluation framework. This work not only formalizes and advances the
development of LLM routing systems but also sets a standard for their
assessment, paving the way for more accessible and economically viable LLM
deployments. The code and data are available at
https://github.com/withmartian/routerbench.
| [
{
"created": "Mon, 18 Mar 2024 17:59:04 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Mar 2024 17:56:28 GMT",
"version": "v2"
}
] | 2024-03-29 | [
[
"Hu",
"Qitian Jason",
""
],
[
"Bieker",
"Jacob",
""
],
[
"Li",
"Xiuyu",
""
],
[
"Jiang",
"Nan",
""
],
[
"Keigwin",
"Benjamin",
""
],
[
"Ranganath",
"Gaurav",
""
],
[
"Keutzer",
"Kurt",
""
],
[
"Upadhyay",
"Shriyash Kaustubh",
""
]
] | As the range of applications for Large Language Models (LLMs) continues to grow, the demand for effective serving solutions becomes increasingly critical. Despite the versatility of LLMs, no single model can optimally address all tasks and applications, particularly when balancing performance with cost. This limitation has led to the development of LLM routing systems, which combine the strengths of various models to overcome the constraints of individual LLMs. Yet, the absence of a standardized benchmark for evaluating the performance of LLM routers hinders progress in this area. To bridge this gap, we present RouterBench, a novel evaluation framework designed to systematically assess the efficacy of LLM routing systems, along with a comprehensive dataset comprising over 405k inference outcomes from representative LLMs to support the development of routing strategies. We further propose a theoretical framework for LLM routing, and deliver a comparative analysis of various routing approaches through RouterBench, highlighting their potentials and limitations within our evaluation framework. This work not only formalizes and advances the development of LLM routing systems but also sets a standard for their assessment, paving the way for more accessible and economically viable LLM deployments. The code and data are available at https://github.com/withmartian/routerbench. |
2403.13258 | Zengqiang Yan | Xian Lin, Yangyang Xiang, Zhehao Wang, Kwang-Ting Cheng, Zengqiang
Yan, Li Yu | SAMCT: Segment Any CT Allowing Labor-Free Task-Indicator Prompts | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Segment anything model (SAM), a foundation model with superior versatility
and generalization across diverse segmentation tasks, has attracted widespread
attention in medical imaging. However, it has been shown that SAM encounters
severe performance degradation due to the lack of medical knowledge
in training and local feature encoding. Though several SAM-based models have
been proposed for tuning SAM in medical imaging, they still suffer from
insufficient feature extraction and rely heavily on high-quality prompts. In
this paper, we construct a large CT dataset consisting of 1.1M CT images and 5M
masks from public datasets and propose a powerful foundation model SAMCT
allowing labor-free prompts. Specifically, based on SAM, SAMCT is further
equipped with a U-shaped CNN image encoder, a cross-branch interaction module,
and a task-indicator prompt encoder. The U-shaped CNN image encoder works in
parallel with the ViT image encoder in SAM to supplement local features.
Cross-branch interaction enhances the feature expression capability of the CNN
image encoder and the ViT image encoder by exchanging global perception and
local features from one to the other. The task-indicator prompt encoder is a
plug-and-play component to effortlessly encode task-related indicators into
prompt embeddings. In this way, SAMCT can work in an automatic manner in
addition to the semi-automatic interactive strategy in SAM. Extensive
experiments demonstrate the superiority of SAMCT against the state-of-the-art
task-specific and SAM-based medical foundation models on various tasks. The
code, data, and models are released at https://github.com/xianlin7/SAMCT.
| [
{
"created": "Wed, 20 Mar 2024 02:39:15 GMT",
"version": "v1"
}
] | 2024-03-21 | [
[
"Lin",
"Xian",
""
],
[
"Xiang",
"Yangyang",
""
],
[
"Wang",
"Zhehao",
""
],
[
"Cheng",
"Kwang-Ting",
""
],
[
"Yan",
"Zengqiang",
""
],
[
"Yu",
"Li",
""
]
] | Segment anything model (SAM), a foundation model with superior versatility and generalization across diverse segmentation tasks, has attracted widespread attention in medical imaging. However, it has been shown that SAM encounters severe performance degradation due to the lack of medical knowledge in training and local feature encoding. Though several SAM-based models have been proposed for tuning SAM in medical imaging, they still suffer from insufficient feature extraction and rely heavily on high-quality prompts. In this paper, we construct a large CT dataset consisting of 1.1M CT images and 5M masks from public datasets and propose a powerful foundation model SAMCT allowing labor-free prompts. Specifically, based on SAM, SAMCT is further equipped with a U-shaped CNN image encoder, a cross-branch interaction module, and a task-indicator prompt encoder. The U-shaped CNN image encoder works in parallel with the ViT image encoder in SAM to supplement local features. Cross-branch interaction enhances the feature expression capability of the CNN image encoder and the ViT image encoder by exchanging global perception and local features from one to the other. The task-indicator prompt encoder is a plug-and-play component to effortlessly encode task-related indicators into prompt embeddings. In this way, SAMCT can work in an automatic manner in addition to the semi-automatic interactive strategy in SAM. Extensive experiments demonstrate the superiority of SAMCT against the state-of-the-art task-specific and SAM-based medical foundation models on various tasks. The code, data, and models are released at https://github.com/xianlin7/SAMCT. |
1609.05876 | Roberto Alonso | Roberto Alonso, Ra\'ul Monroy, Eduardo Aguirre | On the Phase Transition of Finding a Biclique in a larger Bipartite
Graph | null | null | null | null | cs.AI cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We report on the phase transition of finding a complete subgraph, of
specified dimensions, in a bipartite graph. Finding a complete subgraph in a
bipartite graph is a problem that has attracted growing attention in several domains,
including bioinformatics, social network analysis and domain clustering. A key
step for a successful phase transition study is identifying a suitable order
parameter, when none is known. To this purpose, we have applied a decision tree
classifier to real-world instances of this problem, in order to understand what
problem features separate instances that are hard to solve from those that are
not. We have successfully identified one such order parameter and with it the
phase transition of finding a complete bipartite subgraph of specified
dimensions. Our phase transition study shows an
easy-to-hard-to-easy-to-hard-to-easy pattern. Further, our results indicate
that the hardest instances are in a region where it is more likely that the
corresponding bipartite graph will have a complete subgraph of specified
dimensions, i.e., a positive answer. By contrast, instances with a negative answer
are more likely to appear in a region where the computational cost is
negligible. This behaviour is remarkably similar for problems of a number of
different sizes.
| [
{
"created": "Mon, 19 Sep 2016 19:22:08 GMT",
"version": "v1"
}
] | 2016-09-20 | [
[
"Alonso",
"Roberto",
""
],
[
"Monroy",
"Raúl",
""
],
[
"Aguirre",
"Eduardo",
""
]
] | We report on the phase transition of finding a complete subgraph, of specified dimensions, in a bipartite graph. Finding a complete subgraph in a bipartite graph is a problem that has attracted growing attention in several domains, including bioinformatics, social network analysis and domain clustering. A key step for a successful phase transition study is identifying a suitable order parameter, when none is known. To this purpose, we have applied a decision tree classifier to real-world instances of this problem, in order to understand what problem features separate instances that are hard to solve from those that are not. We have successfully identified one such order parameter and with it the phase transition of finding a complete bipartite subgraph of specified dimensions. Our phase transition study shows an easy-to-hard-to-easy-to-hard-to-easy pattern. Further, our results indicate that the hardest instances are in a region where it is more likely that the corresponding bipartite graph will have a complete subgraph of specified dimensions, i.e., a positive answer. By contrast, instances with a negative answer are more likely to appear in a region where the computational cost is negligible. This behaviour is remarkably similar for problems of a number of different sizes. |
2110.11240 | Nathaniel Hoy | Nathaniel Hoy, Theodora Koulouri | A Systematic Review on the Detection of Fake News Articles | 22 Pages, 16 Figures, Currently submitted to ACM TIST - Awaiting
Peer-Review | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | It has been argued that fake news and the spread of false information pose a
threat to societies throughout the world, from influencing the results of
elections to hindering the efforts to manage the COVID-19 pandemic. To combat
this threat, a number of Natural Language Processing (NLP) approaches have been
developed. These leverage a number of datasets, feature extraction/selection
techniques and machine learning (ML) algorithms to detect fake news before it
spreads. While these methods are well-documented, there is less evidence
regarding their efficacy in this domain. By systematically reviewing the
literature, this paper aims to delineate the approaches for fake news detection
that are most performant, identify limitations with existing approaches, and
suggest ways these can be mitigated. The analysis of the results indicates that
Ensemble Methods using a combination of news content and socially-based
features are currently the most effective. Finally, it is proposed that future
research should focus on developing approaches that address generalisability
issues (which, in part, arise from limitations with current datasets),
explainability and bias.
| [
{
"created": "Mon, 18 Oct 2021 21:29:11 GMT",
"version": "v1"
}
] | 2021-10-22 | [
[
"Hoy",
"Nathaniel",
""
],
[
"Koulouri",
"Theodora",
""
]
] | It has been argued that fake news and the spread of false information pose a threat to societies throughout the world, from influencing the results of elections to hindering the efforts to manage the COVID-19 pandemic. To combat this threat, a number of Natural Language Processing (NLP) approaches have been developed. These leverage a number of datasets, feature extraction/selection techniques and machine learning (ML) algorithms to detect fake news before it spreads. While these methods are well-documented, there is less evidence regarding their efficacy in this domain. By systematically reviewing the literature, this paper aims to delineate the approaches for fake news detection that are most performant, identify limitations with existing approaches, and suggest ways these can be mitigated. The analysis of the results indicates that Ensemble Methods using a combination of news content and socially-based features are currently the most effective. Finally, it is proposed that future research should focus on developing approaches that address generalisability issues (which, in part, arise from limitations with current datasets), explainability and bias. |
1511.08952 | Ndapa Nakashole | Ndapandula Nakashole | Bootstrapping Ternary Relation Extractors | 6 pages | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Binary relation extraction methods have been widely studied in recent years.
However, few methods have been developed for higher n-ary relation extraction.
One limiting factor is the effort required to generate training data. For
binary relations, one only has to provide a few dozen pairs of entities per
relation, as training data. For ternary relations (n=3), each training instance
is a triplet of entities, placing a greater cognitive load on people. For
example, many people know that Google acquired Youtube but not the dollar
amount or the date of the acquisition, and many people know that Hillary
Clinton is married to Bill Clinton but not the location or date of their
wedding. This makes higher n-ary training data generation a time-consuming
exercise in
searching the Web. We present a resource for training ternary relation
extractors. This was generated using a minimally supervised yet effective
approach. We present statistics on the size and the quality of the dataset.
| [
{
"created": "Sun, 29 Nov 2015 00:49:13 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Jul 2019 02:52:34 GMT",
"version": "v2"
}
] | 2019-07-18 | [
[
"Nakashole",
"Ndapandula",
""
]
] | Binary relation extraction methods have been widely studied in recent years. However, few methods have been developed for higher n-ary relation extraction. One limiting factor is the effort required to generate training data. For binary relations, one only has to provide a few dozen pairs of entities per relation, as training data. For ternary relations (n=3), each training instance is a triplet of entities, placing a greater cognitive load on people. For example, many people know that Google acquired Youtube but not the dollar amount or the date of the acquisition, and many people know that Hillary Clinton is married to Bill Clinton but not the location or date of their wedding. This makes higher n-ary training data generation a time-consuming exercise in searching the Web. We present a resource for training ternary relation extractors. This was generated using a minimally supervised yet effective approach. We present statistics on the size and the quality of the dataset.
2301.03728 | Armen Aghajanyan | Armen Aghajanyan, Lili Yu, Alexis Conneau, Wei-Ning Hsu, Karen
Hambardzumyan, Susan Zhang, Stephen Roller, Naman Goyal, Omer Levy, Luke
Zettlemoyer | Scaling Laws for Generative Mixed-Modal Language Models | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Generative language models define distributions over sequences of tokens that
can represent essentially any combination of data modalities (e.g., any
permutation of image tokens from VQ-VAEs, speech tokens from HuBERT, BPE tokens
for language or code, and so on). To better understand the scaling properties
of such mixed-modal models, we conducted over 250 experiments using seven
different modalities and model sizes ranging from 8 million to 30 billion,
trained on 5-100 billion tokens. We report new mixed-modal scaling laws that
unify the contributions of individual modalities and the interactions between
them. Specifically, we explicitly model the optimal synergy and competition due
to data and model size as an additive term to previous uni-modal scaling laws.
We also find four empirical phenomena observed during the training, such as
emergent coordinate-ascent style training that naturally alternates between
modalities, guidelines for selecting critical hyper-parameters, and connections
between mixed-modal competition and training stability. Finally, we test our
scaling law by training a 30B speech-text model, which significantly
outperforms the corresponding unimodal models. Overall, our research provides
valuable insights into the design and training of mixed-modal generative
models, an important new class of unified models that have unique
distributional properties.
| [
{
"created": "Tue, 10 Jan 2023 00:20:06 GMT",
"version": "v1"
}
] | 2023-01-11 | [
[
"Aghajanyan",
"Armen",
""
],
[
"Yu",
"Lili",
""
],
[
"Conneau",
"Alexis",
""
],
[
"Hsu",
"Wei-Ning",
""
],
[
"Hambardzumyan",
"Karen",
""
],
[
"Zhang",
"Susan",
""
],
[
"Roller",
"Stephen",
""
],
[
"Goyal",
"Naman",
""
],
[
"Levy",
"Omer",
""
],
[
"Zettlemoyer",
"Luke",
""
]
] | Generative language models define distributions over sequences of tokens that can represent essentially any combination of data modalities (e.g., any permutation of image tokens from VQ-VAEs, speech tokens from HuBERT, BPE tokens for language or code, and so on). To better understand the scaling properties of such mixed-modal models, we conducted over 250 experiments using seven different modalities and model sizes ranging from 8 million to 30 billion, trained on 5-100 billion tokens. We report new mixed-modal scaling laws that unify the contributions of individual modalities and the interactions between them. Specifically, we explicitly model the optimal synergy and competition due to data and model size as an additive term to previous uni-modal scaling laws. We also find four empirical phenomena observed during the training, such as emergent coordinate-ascent style training that naturally alternates between modalities, guidelines for selecting critical hyper-parameters, and connections between mixed-modal competition and training stability. Finally, we test our scaling law by training a 30B speech-text model, which significantly outperforms the corresponding unimodal models. Overall, our research provides valuable insights into the design and training of mixed-modal generative models, an important new class of unified models that have unique distributional properties. |
1602.00430 | Biao Sun | Biao Sun, Wenfeng Zhao, Xinshan Zhu | Compressed Sensing for Implantable Neural Recordings Using Co-sparse
Analysis Model and Weighted $\ell_1$-Optimization | 22 pages, 11 figures | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reliable and energy-efficient wireless data transmission remains a major
challenge in resource-constrained wireless neural recording tasks, where data
compression is generally adopted to relax the burdens on the wireless data
link. Recently, Compressed Sensing (CS) theory has successfully demonstrated
its potential in neural recording applications. The main limitation of CS,
however, is that the neural signals have no good sparse representation with
commonly used dictionaries and learning a reliable dictionary is often data
dependent and computationally demanding. In this paper, a novel CS approach for
implantable neural recording is proposed. The main contributions are: 1) The
co-sparse analysis model is adopted to enforce co-sparsity of the neural
signals, therefore overcoming the drawbacks of the conventional synthesis model
and
enhancing the reconstruction performance. 2) A multi-fractional-order
difference matrix is constructed as the analysis dictionary, thus avoiding the
dictionary learning procedure and reducing the need for previously acquired
data and computational resources. 3) By exploiting the statistical priors of
the analysis coefficients, a weighted analysis $\ell_1$-minimization (WALM)
algorithm is proposed to reconstruct the neural signals. Experimental results
on the Leicester neural signal database reveal that the proposed approach
outperforms the state-of-the-art CS-based methods. On the challenging high
compression ratio task, the proposed approach still achieves high
reconstruction performance and spike classification accuracy.
| [
{
"created": "Mon, 1 Feb 2016 08:57:14 GMT",
"version": "v1"
}
] | 2016-02-02 | [
[
"Sun",
"Biao",
""
],
[
"Zhao",
"Wenfeng",
""
],
[
"Zhu",
"Xinshan",
""
]
] | Reliable and energy-efficient wireless data transmission remains a major challenge in resource-constrained wireless neural recording tasks, where data compression is generally adopted to relax the burdens on the wireless data link. Recently, Compressed Sensing (CS) theory has successfully demonstrated its potential in neural recording applications. The main limitation of CS, however, is that the neural signals have no good sparse representation with commonly used dictionaries and learning a reliable dictionary is often data dependent and computationally demanding. In this paper, a novel CS approach for implantable neural recording is proposed. The main contributions are: 1) The co-sparse analysis model is adopted to enforce co-sparsity of the neural signals, therefore overcoming the drawbacks of the conventional synthesis model and enhancing the reconstruction performance. 2) A multi-fractional-order difference matrix is constructed as the analysis dictionary, thus avoiding the dictionary learning procedure and reducing the need for previously acquired data and computational resources. 3) By exploiting the statistical priors of the analysis coefficients, a weighted analysis $\ell_1$-minimization (WALM) algorithm is proposed to reconstruct the neural signals. Experimental results on the Leicester neural signal database reveal that the proposed approach outperforms the state-of-the-art CS-based methods. On the challenging high compression ratio task, the proposed approach still achieves high reconstruction performance and spike classification accuracy.
2208.11188 | Vincent Cicirello | Vincent A. Cicirello | On Fitness Landscape Analysis of Permutation Problems: From Distance
Metrics to Mutation Operator Selection | Accepted by: Mobile Networks and Applications, Special Issue on
Foundational Studies in Bio-Inspired Technologies | Mobile Networks and Applications, 28(2): 507-517, April 2023 | 10.1007/s11036-022-02060-z | null | cs.NE cs.AI cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we explore the theory and expand upon the practice of fitness
landscape analysis for optimization problems over the space of permutations.
Many of the computational and analytical tools for fitness landscape analysis,
such as fitness distance correlation, require identifying a distance metric for
measuring the similarity of different solutions to the problem. We begin with a
survey of the available distance metrics for permutations, and then use
principal component analysis to classify these metrics. The result of this
analysis aligns with existing classifications of permutation problem types
produced through less formal means, including the A-permutation, R-permutation,
and P-permutation types, which classify problems by whether absolute position
of permutation elements, relative positions of elements, or general precedence
of pairs of elements, is the dominant influence over solution fitness.
Additionally, the formal analysis identifies subtypes within these problem
categories. We see that the classification can assist in identifying
appropriate metrics based on optimization problem features for use in fitness
landscape analysis. Using optimization problems of each class, we also
demonstrate how the classification scheme can subsequently inform the choice of
mutation operator within an evolutionary algorithm. From this, we present a
classification of a variety of mutation operators as a counterpart to that of
the metrics. Our implementations of the permutation metrics, permutation
mutation operators, and associated evolutionary algorithm, are available in a
pair of open source Java libraries. All of the code necessary to recreate our
analysis and experimental results are also available as open source.
| [
{
"created": "Tue, 23 Aug 2022 20:46:49 GMT",
"version": "v1"
}
] | 2023-11-10 | [
[
"Cicirello",
"Vincent A.",
""
]
] | In this paper, we explore the theory and expand upon the practice of fitness landscape analysis for optimization problems over the space of permutations. Many of the computational and analytical tools for fitness landscape analysis, such as fitness distance correlation, require identifying a distance metric for measuring the similarity of different solutions to the problem. We begin with a survey of the available distance metrics for permutations, and then use principal component analysis to classify these metrics. The result of this analysis aligns with existing classifications of permutation problem types produced through less formal means, including the A-permutation, R-permutation, and P-permutation types, which classify problems by whether absolute position of permutation elements, relative positions of elements, or general precedence of pairs of elements, is the dominant influence over solution fitness. Additionally, the formal analysis identifies subtypes within these problem categories. We see that the classification can assist in identifying appropriate metrics based on optimization problem features for use in fitness landscape analysis. Using optimization problems of each class, we also demonstrate how the classification scheme can subsequently inform the choice of mutation operator within an evolutionary algorithm. From this, we present a classification of a variety of mutation operators as a counterpart to that of the metrics. Our implementations of the permutation metrics, permutation mutation operators, and associated evolutionary algorithm, are available in a pair of open source Java libraries. All of the code necessary to recreate our analysis and experimental results are also available as open source.
1804.05258 | Ross M. McConnell | Pavol Hell and Jing Huang and Ross M. McConnell and Arash Rafiey | Interval-Like Graphs and Digraphs | null | null | null | null | cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We unify several seemingly different graph and digraph classes under one
umbrella. These classes are all broadly speaking different generalizations of
interval graphs, and include, in addition to interval graphs, also adjusted
interval digraphs, threshold graphs, complements of threshold tolerance graphs
(known as `co-TT' graphs), bipartite interval containment graphs, bipartite
co-circular arc graphs, and two-directional orthogonal ray graphs. (The last
three classes coincide, but have been investigated in different contexts.) This
common view is made possible by introducing loops. We also show that all the
above classes are united by a common ordering characterization, the existence
of a min ordering. We propose a common generalization of all these graph and
digraph classes, namely signed-interval digraphs, and show that they are
precisely the digraphs that are characterized by the existence of a min
ordering. We also offer an alternative geometric characterization of these
digraphs. For most of the above example graph and digraph classes, we show that
they are exactly those signed-interval digraphs that satisfy a suitable natural
restriction on the digraph, like having all loops, or having a symmetric
edge-set, or being bipartite. (For instance co-TT graphs are precisely those
signed-interval digraphs that have each edge symmetric.) We also offer some
discussion of recognition algorithms and characterizations, saving the details
for future papers.
| [
{
"created": "Sat, 14 Apr 2018 18:19:33 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Jun 2018 21:41:10 GMT",
"version": "v2"
}
] | 2018-06-28 | [
[
"Hell",
"Pavol",
""
],
[
"Huang",
"Jing",
""
],
[
"McConnell",
"Ross M.",
""
],
[
"Rafiey",
"Arash",
""
]
] | We unify several seemingly different graph and digraph classes under one umbrella. These classes are all broadly speaking different generalizations of interval graphs, and include, in addition to interval graphs, also adjusted interval digraphs, threshold graphs, complements of threshold tolerance graphs (known as `co-TT' graphs), bipartite interval containment graphs, bipartite co-circular arc graphs, and two-directional orthogonal ray graphs. (The last three classes coincide, but have been investigated in different contexts.) This common view is made possible by introducing loops. We also show that all the above classes are united by a common ordering characterization, the existence of a min ordering. We propose a common generalization of all these graph and digraph classes, namely signed-interval digraphs, and show that they are precisely the digraphs that are characterized by the existence of a min ordering. We also offer an alternative geometric characterization of these digraphs. For most of the above example graph and digraph classes, we show that they are exactly those signed-interval digraphs that satisfy a suitable natural restriction on the digraph, like having all loops, or having a symmetric edge-set, or being bipartite. (For instance co-TT graphs are precisely those signed-interval digraphs that have each edge symmetric.) We also offer some discussion of recognition algorithms and characterizations, saving the details for future papers. |
2210.12181 | Weizi Li | Weizi Li | Urban Socio-Technical Systems: An Autonomy and Mobility Perspective | null | null | null | null | cs.CY | http://creativecommons.org/licenses/by/4.0/ | The future of the human race is urban. The world's population is projected to
grow an additional 2.5 billion by 2050, with all expected to live in urban
areas. This will increase the percentage of urban population from 55% today to
70% within three decades and further strengthen the role of cities as the hub
for information, transportation, and overall socio-economic development. Unlike
any other time in human history, the increasing levels of autonomy and machine
intelligence are transforming cities to be no longer just human agglomerations
but a fusion of humans, machines, and algorithms making collective decisions,
thus complex socio-technical systems. This manuscript summarizes and discusses
my efforts from the urban autonomy and mobility perspective to develop the
urban socio-technical system.
| [
{
"created": "Fri, 21 Oct 2022 18:15:41 GMT",
"version": "v1"
}
] | 2022-10-25 | [
[
"Li",
"Weizi",
""
]
] | The future of the human race is urban. The world's population is projected to grow an additional 2.5 billion by 2050, with all expected to live in urban areas. This will increase the percentage of urban population from 55% today to 70% within three decades and further strengthen the role of cities as the hub for information, transportation, and overall socio-economic development. Unlike any other time in human history, the increasing levels of autonomy and machine intelligence are transforming cities to be no longer just human agglomerations but a fusion of humans, machines, and algorithms making collective decisions, thus complex socio-technical systems. This manuscript summarizes and discusses my efforts from the urban autonomy and mobility perspective to develop the urban socio-technical system. |
2006.09873 | Debanjan Ghosh | Debanjan Ghosh, Beata Beigman Klebanov, Yi Song | An Exploratory Study of Argumentative Writing by Young Students: A
Transformer-based Approach | 15th Workshop on Innovative Use of NLP for Building Educational
Applications, ACL 2020 | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a computational exploration of argument critique writing by young
students. Middle school students were asked to criticize an argument presented
in the prompt, focusing on identifying and explaining the reasoning flaws. This
task resembles an established college-level argument critique task. Lexical and
discourse features that utilize detailed domain knowledge to identify critiques
exist for the college task but do not perform well on the young students' data.
Instead, a transformer-based architecture (e.g., BERT) fine-tuned on a large
corpus of critique essays from the college task performs much better (over 20%
improvement in F1 score). Analysis of the performance of various configurations
of the system suggests that while children's writing does not exhibit the
standard discourse structure of an argumentative essay, it does share basic
local sequential structures with the more mature writers.
| [
{
"created": "Wed, 17 Jun 2020 13:55:31 GMT",
"version": "v1"
}
] | 2020-06-18 | [
[
"Ghosh",
"Debanjan",
""
],
[
"Klebanov",
"Beata Beigman",
""
],
[
"Song",
"Yi",
""
]
] | We present a computational exploration of argument critique writing by young students. Middle school students were asked to criticize an argument presented in the prompt, focusing on identifying and explaining the reasoning flaws. This task resembles an established college-level argument critique task. Lexical and discourse features that utilize detailed domain knowledge to identify critiques exist for the college task but do not perform well on the young students' data. Instead, a transformer-based architecture (e.g., BERT) fine-tuned on a large corpus of critique essays from the college task performs much better (over 20% improvement in F1 score). Analysis of the performance of various configurations of the system suggests that while children's writing does not exhibit the standard discourse structure of an argumentative essay, it does share basic local sequential structures with the more mature writers.
2204.01690 | Rahim Taheri | Meysam Ghahramani, Rahim Taheri, Mohammad Shojafar, Reza Javidan,
Shaohua Wan | Deep Image: A precious image based deep learning method for online
malware detection in IoT Environment | 10 pages, 17 figures, SUBMITTED TO IEEE INTERNET OF THINGS JOURNAL,
MARCH 2022 | null | null | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The volume of malware and the number of attacks in IoT devices are rising
every day, which encourages security professionals to continually enhance their
malware analysis tools. Researchers in the field of cyber security have
extensively explored the usage of sophisticated analytics and the efficiency of
malware detection. With the introduction of new malware kinds and attack
routes, security experts confront considerable challenges in developing
efficient malware detection and analysis solutions. In this paper, a different
view of malware analysis is considered and the risk level of each sample
feature is computed, and based on that the risk level of that sample is
calculated. In this way, a criterion is introduced that is used together with
accuracy and FPR criteria for malware analysis in the IoT environment. In this
paper, three malware detection methods based on visualization techniques called
the clustering approach, the probabilistic approach, and the deep learning
approach are proposed. Then, in addition to the usual machine learning
criteria, namely accuracy and FPR, a proposed criterion based on the risk of
samples has also been used for comparison, with the results showing that the
deep learning approach performed better in detecting malware.
| [
{
"created": "Mon, 4 Apr 2022 17:56:55 GMT",
"version": "v1"
}
] | 2022-04-05 | [
[
"Ghahramani",
"Meysam",
""
],
[
"Taheri",
"Rahim",
""
],
[
"Shojafar",
"Mohammad",
""
],
[
"Javidan",
"Reza",
""
],
[
"Wan",
"Shaohua",
""
]
] | The volume of malware and the number of attacks in IoT devices are rising every day, which encourages security professionals to continually enhance their malware analysis tools. Researchers in the field of cyber security have extensively explored the usage of sophisticated analytics and the efficiency of malware detection. With the introduction of new malware kinds and attack routes, security experts confront considerable challenges in developing efficient malware detection and analysis solutions. In this paper, a different view of malware analysis is considered and the risk level of each sample feature is computed, and based on that the risk level of that sample is calculated. In this way, a criterion is introduced that is used together with accuracy and FPR criteria for malware analysis in the IoT environment. In this paper, three malware detection methods based on visualization techniques called the clustering approach, the probabilistic approach, and the deep learning approach are proposed. Then, in addition to the usual machine learning criteria, namely accuracy and FPR, a proposed criterion based on the risk of samples has also been used for comparison, with the results showing that the deep learning approach performed better in detecting malware.
1701.05973 | Amirhossein Reisizadeh | Amirhossein Reisizadeh, Saurav Prakash, Ramtin Pedarsani, Amir Salman
Avestimehr | Coded Computation over Heterogeneous Clusters | This work is published in IEEE Transaction on Information Theory
(2019). A preliminary version of this work was published in IEEE
International Symposium on Information Theory (ISIT) 2017 | null | null | null | cs.DC cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In large-scale distributed computing clusters, such as Amazon EC2, there are
several types of "system noise" that can result in major degradation of
performance: bottlenecks due to limited communication bandwidth, latency due to
straggler nodes, etc. On the other hand, these systems enjoy an abundance of
redundancy - a vast number of computing nodes and large storage capacity. There
have been recent results that demonstrate the impact of coding for efficient
utilization of computation and storage redundancy to alleviate the effect of
stragglers and communication bottlenecks in homogeneous clusters. In this
paper, we focus on general heterogeneous distributed computing clusters
consisting of a variety of computing machines with different capabilities. We
propose a coding framework for speeding up distributed computing in
heterogeneous clusters by trading redundancy for reducing the latency of
computation. In particular, we propose Heterogeneous Coded Matrix
Multiplication (HCMM) algorithm for performing distributed matrix
multiplication over heterogeneous clusters that is provably asymptotically
optimal for a broad class of processing time distributions. Moreover, we show
that HCMM is unboundedly faster than any uncoded scheme. To demonstrate
practicality of HCMM, we carry out experiments over Amazon EC2 clusters where
HCMM is found to be up to $61\%$, $46\%$ and $36\%$ respectively faster than
three benchmark load allocation schemes - Uniform Uncoded, Load-balanced
Uncoded, and Uniform Coded. Additionally, we provide a generalization to the
problem of optimal load allocation in heterogeneous settings, where we take
into account the monetary costs associated with the clusters. We argue that
HCMM is asymptotically optimal for budget-constrained scenarios as well, and we
develop a heuristic algorithm for HCMM load allocation for budget-limited
computation tasks.
| [
{
"created": "Sat, 21 Jan 2017 03:11:47 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Jun 2017 23:06:31 GMT",
"version": "v2"
},
{
"created": "Mon, 2 Apr 2018 21:43:11 GMT",
"version": "v3"
},
{
"created": "Tue, 9 Oct 2018 06:03:03 GMT",
"version": "v4"
},
{
"created": "Wed, 19 Jun 2019 23:58:44 GMT",
"version": "v5"
}
] | 2019-06-21 | [
[
"Reisizadeh",
"Amirhossein",
""
],
[
"Prakash",
"Saurav",
""
],
[
"Pedarsani",
"Ramtin",
""
],
[
"Avestimehr",
"Amir Salman",
""
]
] | In large-scale distributed computing clusters, such as Amazon EC2, there are several types of "system noise" that can result in major degradation of performance: bottlenecks due to limited communication bandwidth, latency due to straggler nodes, etc. On the other hand, these systems enjoy an abundance of redundancy - a vast number of computing nodes and large storage capacity. There have been recent results that demonstrate the impact of coding for efficient utilization of computation and storage redundancy to alleviate the effect of stragglers and communication bottlenecks in homogeneous clusters. In this paper, we focus on general heterogeneous distributed computing clusters consisting of a variety of computing machines with different capabilities. We propose a coding framework for speeding up distributed computing in heterogeneous clusters by trading redundancy for reducing the latency of computation. In particular, we propose Heterogeneous Coded Matrix Multiplication (HCMM) algorithm for performing distributed matrix multiplication over heterogeneous clusters that is provably asymptotically optimal for a broad class of processing time distributions. Moreover, we show that HCMM is unboundedly faster than any uncoded scheme. To demonstrate practicality of HCMM, we carry out experiments over Amazon EC2 clusters where HCMM is found to be up to $61\%$, $46\%$ and $36\%$ respectively faster than three benchmark load allocation schemes - Uniform Uncoded, Load-balanced Uncoded, and Uniform Coded. Additionally, we provide a generalization to the problem of optimal load allocation in heterogeneous settings, where we take into account the monetary costs associated with the clusters. We argue that HCMM is asymptotically optimal for budget-constrained scenarios as well, and we develop a heuristic algorithm for HCMM load allocation for budget-limited computation tasks.
1904.08562 | Jonathan Sterling | Jonathan Sterling, Carlo Angiuli, Daniel Gratzer | Cubical Syntax for Reflection-Free Extensional Equality | Extended version; International Conference on Formal Structures for
Computation and Deduction (FSCD), 2019 | null | 10.4230/LIPIcs.FSCD.2019.31 | null | cs.LO math.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We contribute XTT, a cubical reconstruction of Observational Type Theory
which extends Martin-L\"of's intensional type theory with a dependent equality
type that enjoys function extensionality and a judgmental version of the
unicity of identity types principle (UIP): any two elements of the same
equality type are judgmentally equal. Moreover, we conjecture that the typing
relation can be decided in a practical way. In this paper, we establish an
algebraic canonicity theorem using a novel cubical extension (independently
proposed by Awodey) of the logical families or categorical gluing argument
inspired by Coquand and Shulman: every closed element of boolean type is
derivably equal to either 'true' or 'false'.
| [
{
"created": "Thu, 18 Apr 2019 01:52:13 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Jun 2019 19:43:41 GMT",
"version": "v2"
}
] | 2021-04-20 | [
[
"Sterling",
"Jonathan",
""
],
[
"Angiuli",
"Carlo",
""
],
[
"Gratzer",
"Daniel",
""
]
] | We contribute XTT, a cubical reconstruction of Observational Type Theory which extends Martin-L\"of's intensional type theory with a dependent equality type that enjoys function extensionality and a judgmental version of the unicity of identity types principle (UIP): any two elements of the same equality type are judgmentally equal. Moreover, we conjecture that the typing relation can be decided in a practical way. In this paper, we establish an algebraic canonicity theorem using a novel cubical extension (independently proposed by Awodey) of the logical families or categorical gluing argument inspired by Coquand and Shulman: every closed element of boolean type is derivably equal to either 'true' or 'false'. |
2403.15786 | Kaiwen Wang | Kaiwen Wang, Yinzhe Shen, Martin Lauer | Adversarial Defense Teacher for Cross-Domain Object Detection under Poor
Visibility Conditions | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing object detectors encounter challenges in handling domain shifts
between training and real-world data, particularly under poor visibility
conditions like fog and night. Cutting-edge cross-domain object detection
methods use teacher-student frameworks and compel teacher and student models to
produce consistent predictions under weak and strong augmentations,
respectively. In this paper, we reveal that manually crafted augmentations are
insufficient for optimal teaching and present a simple yet effective framework
named Adversarial Defense Teacher (ADT), leveraging adversarial defense to
enhance teaching quality. Specifically, we employ adversarial attacks,
encouraging the model to generalize on subtly perturbed inputs that effectively
deceive the model. To address small objects under poor visibility conditions,
we propose a Zoom-in Zoom-out strategy, which zooms-in images for better
pseudo-labels and zooms-out images and pseudo-labels to learn refined features.
Our results demonstrate that ADT achieves superior performance, reaching 54.5%
mAP on Foggy Cityscapes, surpassing the previous state-of-the-art by 2.6% mAP.
| [
{
"created": "Sat, 23 Mar 2024 10:16:05 GMT",
"version": "v1"
}
] | 2024-03-26 | [
[
"Wang",
"Kaiwen",
""
],
[
"Shen",
"Yinzhe",
""
],
[
"Lauer",
"Martin",
""
]
] | Existing object detectors encounter challenges in handling domain shifts between training and real-world data, particularly under poor visibility conditions like fog and night. Cutting-edge cross-domain object detection methods use teacher-student frameworks and compel teacher and student models to produce consistent predictions under weak and strong augmentations, respectively. In this paper, we reveal that manually crafted augmentations are insufficient for optimal teaching and present a simple yet effective framework named Adversarial Defense Teacher (ADT), leveraging adversarial defense to enhance teaching quality. Specifically, we employ adversarial attacks, encouraging the model to generalize on subtly perturbed inputs that effectively deceive the model. To address small objects under poor visibility conditions, we propose a Zoom-in Zoom-out strategy, which zooms-in images for better pseudo-labels and zooms-out images and pseudo-labels to learn refined features. Our results demonstrate that ADT achieves superior performance, reaching 54.5% mAP on Foggy Cityscapes, surpassing the previous state-of-the-art by 2.6% mAP. |
1903.07290 | Wonseok Ha | Wonseok Ha and Juhoon Back | An Output Feedback Stabilizer for MIMO Nonlinear Systems with Uncertain
Input Gain: Nonlinear Nominal Model | 8 pages, 6 figures | null | null | null | cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper deals with the output feedback stabilization problem of nonlinear
multi-input multi-output systems having an uncertain input gain matrix. It is
assumed that the system has a well-defined vector relative degree and that the
zero dynamics is input-to-state stable. Based on the assumption that there
exists a state feedback controller which globally asymptotically stabilizes the
origin of the nominal closed-loop system, we present an output feedback
stabilizer which recovers the stability of the nominal closed-loop system in
the semi-global practical sense. Compared to previous results, we allow that
the nominal system can have a nonlinear input gain matrix that is a function of
state and this is done by modifying the structure of the disturbance
observer-based robust output feedback controller. It is expected that the
proposed controller can be well applied to the case when the system's
nonlinearity is to be exploited rather than canceled.
| [
{
"created": "Mon, 18 Mar 2019 07:56:01 GMT",
"version": "v1"
}
] | 2019-03-19 | [
[
"Ha",
"Wonseok",
""
],
[
"Back",
"Juhoon",
""
]
] | This paper deals with the output feedback stabilization problem of nonlinear multi-input multi-output systems having an uncertain input gain matrix. It is assumed that the system has a well-defined vector relative degree and that the zero dynamics is input-to-state stable. Based on the assumption that there exists a state feedback controller which globally asymptotically stabilizes the origin of the nominal closed-loop system, we present an output feedback stabilizer which recovers the stability of the nominal closed-loop system in the semi-global practical sense. Compared to previous results, we allow that the nominal system can have a nonlinear input gain matrix that is a function of state and this is done by modifying the structure of the disturbance observer-based robust output feedback controller. It is expected that the proposed controller can be well applied to the case when the system's nonlinearity is to be exploited rather than canceled. |
2404.07963 | Songlin Xu | Songlin Xu, Xinyu Zhang, Lianhui Qin | EduAgent: Generative Student Agents in Learning | null | null | null | null | cs.CY cs.AI cs.CL cs.HC cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Student simulation in online education is important to address dynamic
learning behaviors of students with diverse backgrounds. Existing simulation
models based on deep learning usually need massive training data, lacking prior
knowledge in educational contexts. Large language models (LLMs) may contain
such prior knowledge since they are pre-trained from a large corpus. However,
because student behaviors are dynamic and multifaceted with individual
differences, directly prompting LLMs is not robust nor accurate enough to
capture fine-grained interactions among diverse student personas, learning
behaviors, and learning outcomes. This work tackles this problem by presenting
a newly annotated fine-grained large-scale dataset and proposing EduAgent, a
novel generative agent framework incorporating cognitive prior knowledge (i.e.,
theoretical findings revealed in cognitive science) to guide LLMs to first
reason correlations among various behaviors and then make simulations. Our two
experiments show that EduAgent could not only mimic and predict learning
behaviors of real students but also generate realistic learning behaviors of
virtual students without real data.
| [
{
"created": "Sat, 23 Mar 2024 18:19:17 GMT",
"version": "v1"
}
] | 2024-04-12 | [
[
"Xu",
"Songlin",
""
],
[
"Zhang",
"Xinyu",
""
],
[
"Qin",
"Lianhui",
""
]
] | Student simulation in online education is important to address dynamic learning behaviors of students with diverse backgrounds. Existing simulation models based on deep learning usually need massive training data, lacking prior knowledge in educational contexts. Large language models (LLMs) may contain such prior knowledge since they are pre-trained from a large corpus. However, because student behaviors are dynamic and multifaceted with individual differences, directly prompting LLMs is not robust nor accurate enough to capture fine-grained interactions among diverse student personas, learning behaviors, and learning outcomes. This work tackles this problem by presenting a newly annotated fine-grained large-scale dataset and proposing EduAgent, a novel generative agent framework incorporating cognitive prior knowledge (i.e., theoretical findings revealed in cognitive science) to guide LLMs to first reason correlations among various behaviors and then make simulations. Our two experiments show that EduAgent could not only mimic and predict learning behaviors of real students but also generate realistic learning behaviors of virtual students without real data. |
2209.07386 | Mete \c{S}eref Ahunbay | Mete \c{S}eref Ahunbay and Martin Bichler and Johannes Kn\"orr | Pricing Optimal Outcomes in Coupled and Non-Convex Markets: Theory and
Applications to Electricity Markets | 44 pages, 2 figures | null | null | null | cs.GT | http://creativecommons.org/licenses/by/4.0/ | According to the fundamental theorems of welfare economics, any competitive
equilibrium is Pareto efficient. Unfortunately, competitive equilibrium prices
only exist under strong assumptions such as perfectly divisible goods and
convex preferences. In many real-world markets, participants have non-convex
preferences and the allocation problem needs to consider complex constraints.
Electricity markets are a prime example, but similar problems appear in many
real-world markets, which has led to a growing literature in market design.
Power markets use heuristic pricing rules based on the dual of a relaxed
allocation problem today. With increasing levels of renewables, these rules
have come under scrutiny as they lead to high out-of-market side-payments to
some participants and to inadequate congestion signals. We show that existing
pricing heuristics optimize specific design goals that can be conflicting. The
trade-offs can be substantial, and we establish that the design of pricing
rules is fundamentally a multi-objective optimization problem addressing
different incentives. In addition to traditional multi-objective optimization
techniques using weighing of individual objectives, we introduce a novel
parameter-free pricing rule that minimizes incentives for market participants
to deviate locally. Our theoretical and experimental findings show how the new
pricing rule capitalizes on the upsides of existing pricing rules under
scrutiny today. It leads to prices that incur low make-whole payments while
providing adequate congestion signals and low lost opportunity costs. Our
suggested pricing rule does not require weighing of objectives, it is
computationally scalable, and balances trade-offs in a principled manner,
addressing an important policy issue in electricity markets.
| [
{
"created": "Thu, 15 Sep 2022 15:51:46 GMT",
"version": "v1"
},
{
"created": "Tue, 23 May 2023 13:43:12 GMT",
"version": "v2"
}
] | 2023-05-24 | [
[
"Ahunbay",
"Mete Şeref",
""
],
[
"Bichler",
"Martin",
""
],
[
"Knörr",
"Johannes",
""
]
] | According to the fundamental theorems of welfare economics, any competitive equilibrium is Pareto efficient. Unfortunately, competitive equilibrium prices only exist under strong assumptions such as perfectly divisible goods and convex preferences. In many real-world markets, participants have non-convex preferences and the allocation problem needs to consider complex constraints. Electricity markets are a prime example, but similar problems appear in many real-world markets, which has led to a growing literature in market design. Power markets use heuristic pricing rules based on the dual of a relaxed allocation problem today. With increasing levels of renewables, these rules have come under scrutiny as they lead to high out-of-market side-payments to some participants and to inadequate congestion signals. We show that existing pricing heuristics optimize specific design goals that can be conflicting. The trade-offs can be substantial, and we establish that the design of pricing rules is fundamentally a multi-objective optimization problem addressing different incentives. In addition to traditional multi-objective optimization techniques using weighing of individual objectives, we introduce a novel parameter-free pricing rule that minimizes incentives for market participants to deviate locally. Our theoretical and experimental findings show how the new pricing rule capitalizes on the upsides of existing pricing rules under scrutiny today. It leads to prices that incur low make-whole payments while providing adequate congestion signals and low lost opportunity costs. Our suggested pricing rule does not require weighing of objectives, it is computationally scalable, and balances trade-offs in a principled manner, addressing an important policy issue in electricity markets. |
2306.14142 | Eugene T.Y. Ang | Eugene Ang, Prasanta Bhattacharya, Andrew Lim | Estimating Policy Effects in a Social Network with Independent Set
Sampling | null | null | null | null | cs.SI stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evaluating the impact of policy interventions on respondents who are embedded
in a social network is often challenging due to the presence of network
interference within the treatment groups, as well as between treatment and
non-treatment groups throughout the network. In this paper, we propose a
modeling strategy that combines existing work on stochastic actor-oriented
models (SAOM) with a novel network sampling method based on the identification
of independent sets. By assigning respondents from an independent set to the
treatment, we are able to block any spillover of the treatment and network
influence, thereby allowing us to isolate the direct effect of the treatment
from the indirect network-induced effects, in the immediate term. As a result,
our method allows for the estimation of both the \textit{direct} as well as the
\textit{net effect} of a chosen policy intervention, in the presence of network
effects in the population. We perform a comparative simulation analysis to show
that our proposed sampling technique leads to distinct direct and net effects
of the policy, as well as significant network effects driven by policy-linked
homophily. This study highlights the importance of network sampling techniques
in improving policy evaluation studies and has the potential to help
researchers and policymakers with better planning, designing, and anticipating
policy responses in a networked society.
| [
{
"created": "Sun, 25 Jun 2023 06:30:58 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Sep 2023 12:30:40 GMT",
"version": "v2"
},
{
"created": "Mon, 26 Feb 2024 03:58:58 GMT",
"version": "v3"
}
] | 2024-02-27 | [
[
"Ang",
"Eugene",
""
],
[
"Bhattacharya",
"Prasanta",
""
],
[
"Lim",
"Andrew",
""
]
] | Evaluating the impact of policy interventions on respondents who are embedded in a social network is often challenging due to the presence of network interference within the treatment groups, as well as between treatment and non-treatment groups throughout the network. In this paper, we propose a modeling strategy that combines existing work on stochastic actor-oriented models (SAOM) with a novel network sampling method based on the identification of independent sets. By assigning respondents from an independent set to the treatment, we are able to block any spillover of the treatment and network influence, thereby allowing us to isolate the direct effect of the treatment from the indirect network-induced effects, in the immediate term. As a result, our method allows for the estimation of both the \textit{direct} as well as the \textit{net effect} of a chosen policy intervention, in the presence of network effects in the population. We perform a comparative simulation analysis to show that our proposed sampling technique leads to distinct direct and net effects of the policy, as well as significant network effects driven by policy-linked homophily. This study highlights the importance of network sampling techniques in improving policy evaluation studies and has the potential to help researchers and policymakers with better planning, designing, and anticipating policy responses in a networked society. |
2312.15964 | Hyun Kang | Hyun Kang, Dohae Lee, Myungjin Shin, In-Kwon Lee | Semantic Guidance Tuning for Text-To-Image Diffusion Models | Rework is being done | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in Text-to-Image (T2I) diffusion models have demonstrated
impressive success in generating high-quality images with zero-shot
generalization capabilities. Yet, current models struggle to closely adhere to
prompt semantics, often misrepresenting or overlooking specific attributes. To
address this, we propose a simple, training-free approach that modulates the
guidance direction of diffusion models during inference. We first decompose the
prompt semantics into a set of concepts, and monitor the guidance trajectory in
relation to each concept. Our key observation is that deviations in model's
adherence to prompt semantics are highly correlated with divergence of the
guidance from one or more of these concepts. Based on this observation, we
devise a technique to steer the guidance direction towards any concept from
which the model diverges. Extensive experimentation validates that our method
improves the semantic alignment of images generated by diffusion models in
response to prompts. Project page is available at: https://korguy.github.io/
| [
{
"created": "Tue, 26 Dec 2023 09:02:17 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Jan 2024 04:55:29 GMT",
"version": "v2"
}
] | 2024-01-31 | [
[
"Kang",
"Hyun",
""
],
[
"Lee",
"Dohae",
""
],
[
"Shin",
"Myungjin",
""
],
[
"Lee",
"In-Kwon",
""
]
] | Recent advancements in Text-to-Image (T2I) diffusion models have demonstrated impressive success in generating high-quality images with zero-shot generalization capabilities. Yet, current models struggle to closely adhere to prompt semantics, often misrepresenting or overlooking specific attributes. To address this, we propose a simple, training-free approach that modulates the guidance direction of diffusion models during inference. We first decompose the prompt semantics into a set of concepts, and monitor the guidance trajectory in relation to each concept. Our key observation is that deviations in model's adherence to prompt semantics are highly correlated with divergence of the guidance from one or more of these concepts. Based on this observation, we devise a technique to steer the guidance direction towards any concept from which the model diverges. Extensive experimentation validates that our method improves the semantic alignment of images generated by diffusion models in response to prompts. Project page is available at: https://korguy.github.io/ |
0710.4905 | Oliver Kosut | Oliver Kosut, Lang Tong | Distributed Source Coding in the Presence of Byzantine Sensors | 14 pages. Submitted to IEEE Trans. on IT, Information-theoretic
Security special issue, February 2007. Revised October 2007 | null | 10.1109/TIT.2008.921867 | null | cs.IT math.IT | null | The distributed source coding problem is considered when the sensors, or
encoders, are under Byzantine attack; that is, an unknown group of sensors have
been reprogrammed by a malicious intruder to undermine the reconstruction at
the fusion center. Three different forms of the problem are considered. The
first is a variable-rate setup, in which the decoder adaptively chooses the
rates at which the sensors transmit. An explicit characterization of the
variable-rate achievable sum rates is given for any number of sensors and any
groups of traitors. The converse is proved constructively by letting the
traitors simulate a fake distribution and report the generated values as the
true ones. This fake distribution is chosen so that the decoder cannot
determine which sensors are traitors while maximizing the required rate to
decode every value. Achievability is proved using a scheme in which the decoder
receives small packets of information from a sensor until its message can be
decoded, before moving on to the next sensor. The sensors use randomization to
choose from a set of coding functions, which makes it probabilistically
impossible for the traitors to cause the decoder to make an error. Two forms of
the fixed-rate problem are considered, one with deterministic coding and one
with randomized coding. The achievable rate regions are given for both these
problems, and it is shown that lower rates can be achieved with randomized
coding.
| [
{
"created": "Thu, 25 Oct 2007 16:22:59 GMT",
"version": "v1"
}
] | 2016-11-15 | [
[
"Kosut",
"Oliver",
""
],
[
"Tong",
"Lang",
""
]
] | The distributed source coding problem is considered when the sensors, or encoders, are under Byzantine attack; that is, an unknown group of sensors have been reprogrammed by a malicious intruder to undermine the reconstruction at the fusion center. Three different forms of the problem are considered. The first is a variable-rate setup, in which the decoder adaptively chooses the rates at which the sensors transmit. An explicit characterization of the variable-rate achievable sum rates is given for any number of sensors and any groups of traitors. The converse is proved constructively by letting the traitors simulate a fake distribution and report the generated values as the true ones. This fake distribution is chosen so that the decoder cannot determine which sensors are traitors while maximizing the required rate to decode every value. Achievability is proved using a scheme in which the decoder receives small packets of information from a sensor until its message can be decoded, before moving on to the next sensor. The sensors use randomization to choose from a set of coding functions, which makes it probabilistically impossible for the traitors to cause the decoder to make an error. Two forms of the fixed-rate problem are considered, one with deterministic coding and one with randomized coding. The achievable rate regions are given for both these problems, and it is shown that lower rates can be achieved with randomized coding. |
0906.1182 | Paolo Mancarella | P. Mancarella, G. Terreni, F. Sadri, F. Toni, U. Endriss | The CIFF Proof Procedure for Abductive Logic Programming with
Constraints: Theory, Implementation and Experiments | null | null | null | null | cs.AI cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the CIFF proof procedure for abductive logic programming with
constraints, and we prove its correctness. CIFF is an extension of the IFF
proof procedure for abductive logic programming, relaxing the original
restrictions over variable quantification (allowedness conditions) and
incorporating a constraint solver to deal with numerical constraints as in
constraint logic programming. Finally, we describe the CIFF system, comparing
it with state of the art abductive systems and answer set solvers and showing
how to use it to program some applications. (To appear in Theory and Practice
of Logic Programming - TPLP).
| [
{
"created": "Fri, 5 Jun 2009 16:13:23 GMT",
"version": "v1"
}
] | 2009-06-08 | [
[
"Mancarella",
"P.",
""
],
[
"Terreni",
"G.",
""
],
[
"Sadri",
"F.",
""
],
[
"Toni",
"F.",
""
],
[
"Endriss",
"U.",
""
]
] | We present the CIFF proof procedure for abductive logic programming with constraints, and we prove its correctness. CIFF is an extension of the IFF proof procedure for abductive logic programming, relaxing the original restrictions over variable quantification (allowedness conditions) and incorporating a constraint solver to deal with numerical constraints as in constraint logic programming. Finally, we describe the CIFF system, comparing it with state of the art abductive systems and answer set solvers and showing how to use it to program some applications. (To appear in Theory and Practice of Logic Programming - TPLP). |
2408.04092 | Steven Xia | Siyuan Xia, Chris Zhu, Tapan Srivastava, Bridget Fahey, Raul Castro
Fernandez | Programmable Dataflows: Abstraction and Programming Model for Data
Sharing | null | null | null | null | cs.DB | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Data sharing is central to a wide variety of applications such as fraud
detection, ad matching, and research. The lack of data sharing abstractions
makes the solution to each data sharing problem bespoke and cost-intensive,
hampering value generation. In this paper, we first introduce a data sharing
model to represent every data sharing problem with a sequence of dataflows.
From the model, we distill an abstraction, the contract, which agents use to
communicate the intent of a dataflow and evaluate its consequences, before the
dataflow takes place. This helps agents move towards a common sharing goal
without violating any regulatory and privacy constraints. Then, we design and
implement the contract programming model (CPM), which allows agents to program
data sharing applications catered to each problem's needs.
Contracts permit data sharing, but their interactive nature may introduce
inefficiencies. To mitigate those inefficiencies, we extend the CPM so that it
can save intermediate outputs of dataflows, and skip computation if a dataflow
tries to access data that it does not have access to. In our evaluation, we
show that 1) the contract abstraction is general enough to represent a wide
range of sharing problems, 2) we can write programs for complex data sharing
problems and exhibit qualitative improvements over other alternate
technologies, and 3) quantitatively, our optimizations make sharing programs
written with the CPM efficient.
| [
{
"created": "Wed, 7 Aug 2024 21:15:57 GMT",
"version": "v1"
}
] | 2024-08-09 | [
[
"Xia",
"Siyuan",
""
],
[
"Zhu",
"Chris",
""
],
[
"Srivastava",
"Tapan",
""
],
[
"Fahey",
"Bridget",
""
],
[
"Fernandez",
"Raul Castro",
""
]
] | Data sharing is central to a wide variety of applications such as fraud detection, ad matching, and research. The lack of data sharing abstractions makes the solution to each data sharing problem bespoke and cost-intensive, hampering value generation. In this paper, we first introduce a data sharing model to represent every data sharing problem with a sequence of dataflows. From the model, we distill an abstraction, the contract, which agents use to communicate the intent of a dataflow and evaluate its consequences, before the dataflow takes place. This helps agents move towards a common sharing goal without violating any regulatory and privacy constraints. Then, we design and implement the contract programming model (CPM), which allows agents to program data sharing applications catered to each problem's needs. Contracts permit data sharing, but their interactive nature may introduce inefficiencies. To mitigate those inefficiencies, we extend the CPM so that it can save intermediate outputs of dataflows, and skip computation if a dataflow tries to access data that it does not have access to. In our evaluation, we show that 1) the contract abstraction is general enough to represent a wide range of sharing problems, 2) we can write programs for complex data sharing problems and exhibit qualitative improvements over other alternate technologies, and 3) quantitatively, our optimizations make sharing programs written with the CPM efficient. |
1901.09002 | Tai Sing Lee | Jielin Qiu, Ge Huang, Tai Sing Lee | A Neurally-Inspired Hierarchical Prediction Network for Spatiotemporal
Sequence Learning and Prediction | Some of the results are not replicable | null | null | null | cs.NE cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we developed a hierarchical network model, called Hierarchical
Prediction Network (HPNet), to understand how spatiotemporal memories might be
learned and encoded in the recurrent circuits in the visual cortical hierarchy
for predicting future video frames. This neurally inspired model operates in
the analysis-by-synthesis framework. It contains a feed-forward path that
computes and encodes spatiotemporal features of successive complexity and a
feedback path for the successive levels to project their interpretations to the
level below. Within each level, the feed-forward path and the feedback path
intersect in a recurrent gated circuit, instantiated in a LSTM module, to
generate a prediction or explanation of the incoming signals. The network
learns its internal model of the world by minimizing the errors of its
prediction of the incoming signals at each level of the hierarchy. We found
that hierarchical interaction in the network increases semantic clustering of
global movement patterns in the population codes of the units along the
hierarchy, even in the earliest module. This facilitates the learning of
relationships among movement patterns, yielding state-of-the-art performance in
long range video sequence predictions in the benchmark datasets. The network
model automatically reproduces a variety of prediction suppression and
familiarity suppression neurophysiological phenomena observed in the visual
cortex, suggesting that hierarchical prediction might indeed be an important
principle for representational learning in the visual cortex.
| [
{
"created": "Fri, 25 Jan 2019 18:03:17 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Oct 2021 12:59:31 GMT",
"version": "v2"
}
] | 2021-10-04 | [
[
"Qiu",
"Jielin",
""
],
[
"Huang",
"Ge",
""
],
[
"Lee",
"Tai Sing",
""
]
] | In this paper we developed a hierarchical network model, called Hierarchical Prediction Network (HPNet), to understand how spatiotemporal memories might be learned and encoded in the recurrent circuits in the visual cortical hierarchy for predicting future video frames. This neurally inspired model operates in the analysis-by-synthesis framework. It contains a feed-forward path that computes and encodes spatiotemporal features of successive complexity and a feedback path for the successive levels to project their interpretations to the level below. Within each level, the feed-forward path and the feedback path intersect in a recurrent gated circuit, instantiated in a LSTM module, to generate a prediction or explanation of the incoming signals. The network learns its internal model of the world by minimizing the errors of its prediction of the incoming signals at each level of the hierarchy. We found that hierarchical interaction in the network increases semantic clustering of global movement patterns in the population codes of the units along the hierarchy, even in the earliest module. This facilitates the learning of relationships among movement patterns, yielding state-of-the-art performance in long range video sequence predictions in the benchmark datasets. The network model automatically reproduces a variety of prediction suppression and familiarity suppression neurophysiological phenomena observed in the visual cortex, suggesting that hierarchical prediction might indeed be an important principle for representational learning in the visual cortex. |
1207.1257 | Anton Belov | Anton Belov and Joao Marques-Silva | Generalizing Redundancy in Propositional Logic: Foundations and Hitting
Sets Duality | 13 pages; first part of series on labelled CNF formulas; fixed some
references | null | null | null | cs.LO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detection and elimination of redundant clauses from propositional formulas in
Conjunctive Normal Form (CNF) is a fundamental problem with numerous
application domains, including AI, and has been the subject of extensive
research. Moreover, a number of recent applications motivated various
extensions of this problem. For example, unsatisfiable formulas partitioned
into disjoint subsets of clauses (so-called groups) often need to be simplified
by removing redundant groups, or may contain redundant variables, rather than
clauses. In this report we present a generalized theoretical framework of
labelled CNF formulas that unifies various extensions of the redundancy
detection and removal problem and allows to derive a number of results that
subsume and extend previous work. The follow-up reports contain a number of
additional theoretical results and algorithms for various computational
problems in the context of the proposed framework.
| [
{
"created": "Thu, 5 Jul 2012 13:57:40 GMT",
"version": "v1"
},
{
"created": "Tue, 10 Jul 2012 01:15:08 GMT",
"version": "v2"
}
] | 2012-07-11 | [
[
"Belov",
"Anton",
""
],
[
"Marques-Silva",
"Joao",
""
]
] | Detection and elimination of redundant clauses from propositional formulas in Conjunctive Normal Form (CNF) is a fundamental problem with numerous application domains, including AI, and has been the subject of extensive research. Moreover, a number of recent applications motivated various extensions of this problem. For example, unsatisfiable formulas partitioned into disjoint subsets of clauses (so-called groups) often need to be simplified by removing redundant groups, or may contain redundant variables, rather than clauses. In this report we present a generalized theoretical framework of labelled CNF formulas that unifies various extensions of the redundancy detection and removal problem and allows to derive a number of results that subsume and extend previous work. The follow-up reports contain a number of additional theoretical results and algorithms for various computational problems in the context of the proposed framework. |
2011.10534 | Olivier Carton | Olivier Carton | Ambiguity through the lens of measure theory | null | null | null | null | cs.FL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we establish a strong link between the ambiguity for finite
words of a B\"uchi automaton and the ambiguity for infinite words of the same
automaton. This link is based on measure theory. More precisely, we show that
such an automaton is unambiguous, in the sense that no finite word labels two
runs with the same starting state and the same ending state if and only if for
each state, the set of infinite sequences labelling two runs starting from that
state has measure zero. The measure used to define these negligible sets, that
is sets of measure zero, can be any measure computed by a weighted automaton
which is compatible with the B\"uchi automaton. This latter condition is very
natural: the measure must put weight on cylinders [w] where w is the label of
some run in the B\"uchi automaton.
| [
{
"created": "Fri, 20 Nov 2020 18:06:10 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Jan 2022 17:48:57 GMT",
"version": "v2"
},
{
"created": "Tue, 8 Feb 2022 14:26:46 GMT",
"version": "v3"
},
{
"created": "Fri, 22 Apr 2022 16:13:46 GMT",
"version": "v4"
}
] | 2022-04-25 | [
[
"Carton",
"Olivier",
""
]
] | In this paper, we establish a strong link between the ambiguity for finite words of a B\"uchi automaton and the ambiguity for infinite words of the same automaton. This link is based on measure theory. More precisely, we show that such an automaton is unambiguous, in the sense that no finite word labels two runs with the same starting state and the same ending state if and only if for each state, the set of infinite sequences labelling two runs starting from that state has measure zero. The measure used to define these negligible sets, that is sets of measure zero, can be any measure computed by a weighted automaton which is compatible with the B\"uchi automaton. This latter condition is very natural: the measure must put weight on cylinders [w] where w is the label of some run in the B\"uchi automaton. |
2406.13355 | Umberto Mart\'inez-Pe\~nas | Umberto Mart\'inez-Pe\~nas and Rub\'en Rodr\'iguez-Ballesteros | Linear codes in the folded Hamming distance and the quasi MDS property | null | null | null | null | cs.IT math.IT | http://creativecommons.org/publicdomain/zero/1.0/ | In this work, we study linear codes with the folded Hamming distance, or
equivalently, codes with the classical Hamming distance that are linear over a
subfield. This includes additive codes. We study MDS codes in this setting and
define quasi MDS (QMDS) codes and dually QMDS codes, which attain a more
relaxed variant of the classical Singleton bound. We provide several general
results concerning these codes, including restriction, shortening, weight
distributions, existence, density, geometric description and bounds on their
lengths relative to their field sizes. We provide explicit examples and a
binary construction with optimal lengths relative to their field sizes, which
beats any MDS code.
| [
{
"created": "Wed, 19 Jun 2024 09:01:11 GMT",
"version": "v1"
}
] | 2024-06-21 | [
[
"Martínez-Peñas",
"Umberto",
""
],
[
"Rodríguez-Ballesteros",
"Rubén",
""
]
] | In this work, we study linear codes with the folded Hamming distance, or equivalently, codes with the classical Hamming distance that are linear over a subfield. This includes additive codes. We study MDS codes in this setting and define quasi MDS (QMDS) codes and dually QMDS codes, which attain a more relaxed variant of the classical Singleton bound. We provide several general results concerning these codes, including restriction, shortening, weight distributions, existence, density, geometric description and bounds on their lengths relative to their field sizes. We provide explicit examples and a binary construction with optimal lengths relative to their field sizes, which beats any MDS code. |
1904.12058 | Muhan Zhang | Muhan Zhang, Yixin Chen | Inductive Matrix Completion Based on Graph Neural Networks | Accepted as a spotlight presentation at ICLR-2020 | null | null | null | cs.IR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an inductive matrix completion model without using side
information. By factorizing the (rating) matrix into the product of
low-dimensional latent embeddings of rows (users) and columns (items), a
majority of existing matrix completion methods are transductive, since the
learned embeddings cannot generalize to unseen rows/columns or to new matrices.
To make matrix completion inductive, most previous works use content (side
information), such as user's age or movie's genre, to make predictions.
However, high-quality content is not always available, and can be hard to
extract. Under the extreme setting where no side information is available
other than the matrix to complete, can we still learn an inductive matrix
completion model? In this paper, we propose an Inductive Graph-based Matrix
Completion (IGMC) model to address this problem. IGMC trains a graph neural
network (GNN) based purely on 1-hop subgraphs around (user, item) pairs
generated from the rating matrix and maps these subgraphs to their
corresponding ratings. It achieves highly competitive performance with
state-of-the-art transductive baselines. In addition, IGMC is inductive -- it
can generalize to users/items unseen during the training (given that their
interactions exist), and can even transfer to new tasks. Our transfer learning
experiments show that a model trained out of the MovieLens dataset can be
directly used to predict Douban movie ratings with surprisingly good
performance. Our work demonstrates that: 1) it is possible to train inductive
matrix completion models without using side information while achieving similar
or better performances than state-of-the-art transductive methods; 2) local
graph patterns around a (user, item) pair are effective predictors of the
rating this user gives to the item; and 3) Long-range dependencies might not be
necessary for modeling recommender systems.
| [
{
"created": "Fri, 26 Apr 2019 21:58:46 GMT",
"version": "v1"
},
{
"created": "Sun, 6 Oct 2019 03:08:54 GMT",
"version": "v2"
},
{
"created": "Sun, 16 Feb 2020 04:27:14 GMT",
"version": "v3"
}
] | 2020-02-18 | [
[
"Zhang",
"Muhan",
""
],
[
"Chen",
"Yixin",
""
]
] | We propose an inductive matrix completion model without using side information. By factorizing the (rating) matrix into the product of low-dimensional latent embeddings of rows (users) and columns (items), a majority of existing matrix completion methods are transductive, since the learned embeddings cannot generalize to unseen rows/columns or to new matrices. To make matrix completion inductive, most previous works use content (side information), such as user's age or movie's genre, to make predictions. However, high-quality content is not always available, and can be hard to extract. Under the extreme setting where not any side information is available other than the matrix to complete, can we still learn an inductive matrix completion model? In this paper, we propose an Inductive Graph-based Matrix Completion (IGMC) model to address this problem. IGMC trains a graph neural network (GNN) based purely on 1-hop subgraphs around (user, item) pairs generated from the rating matrix and maps these subgraphs to their corresponding ratings. It achieves highly competitive performance with state-of-the-art transductive baselines. In addition, IGMC is inductive -- it can generalize to users/items unseen during the training (given that their interactions exist), and can even transfer to new tasks. Our transfer learning experiments show that a model trained out of the MovieLens dataset can be directly used to predict Douban movie ratings with surprisingly good performance. Our work demonstrates that: 1) it is possible to train inductive matrix completion models without using side information while achieving similar or better performances than state-of-the-art transductive methods; 2) local graph patterns around a (user, item) pair are effective predictors of the rating this user gives to the item; and 3) Long-range dependencies might not be necessary for modeling recommender systems. |
2203.10744 | Chen Wu | Changran Hu, Akshara Reddi Methukupalli, Yutong Zhou, Chen Wu, Yubo
Chen | Programming Language Agnostic Mining of Code and Language Pairs with
Sequence Labeling Based Question Answering | null | null | null | null | cs.CL cs.PL | http://creativecommons.org/licenses/by/4.0/ | Mining aligned natural language (NL) and programming language (PL) pairs is a
critical task for NL-PL understanding. Existing methods applied specialized
hand-crafted features or separately-trained models for each PL. However, they
usually suffered from low transferability across multiple PLs, especially for
niche PLs with less annotated data. Fortunately, a Stack Overflow answer post
is essentially a sequence of text and code blocks and its global textual
context can provide PL-agnostic supplementary information. In this paper, we
propose a Sequence Labeling based Question Answering (SLQA) method to mine
NL-PL pairs in a PL-agnostic manner. In particular, we propose to apply the BIO
tagging scheme instead of the conventional binary scheme to mine the code
solutions which are often composed of multiple blocks of a post. Experiments on
current single-PL single-block benchmarks and a manually-labeled cross-PL
multi-block benchmark prove the effectiveness and transferability of SLQA. We
further present a parallel NL-PL corpus named Lang2Code automatically mined
with SLQA, which contains about 1.4M pairs on 6 PLs. Under statistical analysis
and downstream evaluation, we demonstrate that Lang2Code is a large-scale
high-quality data resource for further NL-PL research.
| [
{
"created": "Mon, 21 Mar 2022 05:33:59 GMT",
"version": "v1"
}
] | 2022-03-22 | [
[
"Hu",
"Changran",
""
],
[
"Methukupalli",
"Akshara Reddi",
""
],
[
"Zhou",
"Yutong",
""
],
[
"Wu",
"Chen",
""
],
[
"Chen",
"Yubo",
""
]
] | Mining aligned natural language (NL) and programming language (PL) pairs is a critical task to NL-PL understanding. Existing methods applied specialized hand-crafted features or separately-trained models for each PL. However, they usually suffered from low transferability across multiple PLs, especially for niche PLs with less annotated data. Fortunately, a Stack Overflow answer post is essentially a sequence of text and code blocks and its global textual context can provide PL-agnostic supplementary information. In this paper, we propose a Sequence Labeling based Question Answering (SLQA) method to mine NL-PL pairs in a PL-agnostic manner. In particular, we propose to apply the BIO tagging scheme instead of the conventional binary scheme to mine the code solutions which are often composed of multiple blocks of a post. Experiments on current single-PL single-block benchmarks and a manually-labeled cross-PL multi-block benchmark prove the effectiveness and transferability of SLQA. We further present a parallel NL-PL corpus named Lang2Code automatically mined with SLQA, which contains about 1.4M pairs on 6 PLs. Under statistical analysis and downstream evaluation, we demonstrate that Lang2Code is a large-scale high-quality data resource for further NL-PL research. |
2302.04529 | Martijn Goorden | Martijn A. Goorden, Kim G. Larsen, Axel Legay, Florian Lorber, Ulrik
Nyman, Andrzej Wasowski | Timed I/O Automata: It is never too late to complete your timed
specification theory | Version submitted for review | null | null | null | cs.FL cs.SE | http://creativecommons.org/licenses/by/4.0/ | A specification theory combines notions of specifications and implementations
with a satisfaction relation, a refinement relation and a set of operators
supporting stepwise design. We develop a complete specification framework for
real-time systems using Timed I/O Automata as the specification formalism, with
the semantics expressed in terms of Timed I/O Transition Systems. We provide
constructs for refinement, consistency checking, logical and structural
composition, and quotient of specifications -- all indispensable ingredients of
a compositional design methodology. The theory is backed by rigorous proofs and
is being implemented in the open-source tool ECDAR.
| [
{
"created": "Thu, 9 Feb 2023 09:41:48 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Jul 2023 07:50:12 GMT",
"version": "v2"
}
] | 2023-07-14 | [
[
"Goorden",
"Martijn A.",
""
],
[
"Larsen",
"Kim G.",
""
],
[
"Legay",
"Axel",
""
],
[
"Lorber",
"Florian",
""
],
[
"Nyman",
"Ulrik",
""
],
[
"Wasowski",
"Andrzej",
""
]
] | A specification theory combines notions of specifications and implementations with a satisfaction relation, a refinement relation and a set of operators supporting stepwise design. We develop a complete specification framework for real-time systems using Timed I/O Automata as the specification formalism, with the semantics expressed in terms of Timed I/O Transition Systems. We provide constructs for refinement, consistency checking, logical and structural composition, and quotient of specifications -- all indispensable ingredients of a compositional design methodology. The theory is backed by rigorous proofs and is being implemented in the open-source tool ECDAR. |
2104.04474 | Chavit Denninnart | Chavit Denninnart, Mohsen Amini Salehi | Harnessing the Potential of Function-Reuse in Multimedia Cloud Systems | null | null | null | null | cs.DC cs.MM | http://creativecommons.org/licenses/by/4.0/ | Cloud-based computing systems can get oversubscribed due to the budget
constraints of their users or limitations in certain resource types. The
oversubscription can, in turn, degrade the users' perceived Quality of Service
(QoS). The approach we investigate to mitigate both the oversubscription and
the incurred cost is based on smart reusing of the computation needed to
process the service requests (i.e., tasks). We propose a reusing paradigm for
the tasks that are waiting for execution. This paradigm can be particularly
impactful in serverless platforms where multiple users can request similar
services simultaneously. Our motivation is a multimedia streaming engine that
processes the media segments in an on-demand manner. We propose a mechanism to
identify various types of "mergeable" tasks and aggregate them to improve the
QoS and mitigate the incurred cost. We develop novel approaches to determine
when and how to perform task aggregation such that the QoS of other tasks is
not affected. Evaluation results show that the proposed mechanism can improve
the QoS by significantly reducing the percentage of tasks missing their
deadlines. In addition, it can reduce the overall time (and subsequently
the incurred cost) of utilizing cloud services by more than 9%.
| [
{
"created": "Fri, 9 Apr 2021 16:45:53 GMT",
"version": "v1"
}
] | 2021-04-12 | [
[
"Denninnart",
"Chavit",
""
],
[
"Salehi",
"Mohsen Amini",
""
]
] | Cloud-based computing systems can get oversubscribed due to the budget constraints of their users or limitations in certain resource types. The oversubscription can, in turn, degrade the users' perceived Quality of Service (QoS). The approach we investigate to mitigate both the oversubscription and the incurred cost is based on smart reusing of the computation needed to process the service requests (i.e., tasks). We propose a reusing paradigm for the tasks that are waiting for execution. This paradigm can be particularly impactful in serverless platforms where multiple users can request similar services simultaneously. Our motivation is a multimedia streaming engine that processes the media segments in an on-demand manner. We propose a mechanism to identify various types of "mergeable" tasks and aggregate them to improve the QoS and mitigate the incurred cost. We develop novel approaches to determine when and how to perform task aggregation such that the QoS of other tasks is not affected. Evaluation results show that the proposed mechanism can improve the QoS by significantly reducing the percentage of tasks missing their deadlines. In addition, it can reduce the overall time (and subsequently the incurred cost) of utilizing cloud services by more than 9%.
2108.05876 | Emiliano De Cristofaro | Pujan Paudel, Jeremy Blackburn, Emiliano De Cristofaro, Savvas
Zannettou, and Gianluca Stringhini | An Early Look at the Gettr Social Network | null | null | null | null | cs.CY cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents the first data-driven analysis of Gettr, a new social
network platform launched by former US President Donald Trump's team. Among
other things, we find that users on the platform heavily discuss politics, with
a focus on the Trump campaign in the US and Bolsonaro's in Brazil. Activity on
the platform has steadily been decreasing since its launch, although a core of
verified users and early adopters kept posting and became central to it.
Finally, although toxicity has been increasing over time, the average level of
toxicity is still lower than the one recently observed on other fringe social
networks like Gab and 4chan. Overall, we provide a first quantitative look at
this new community, observing a lack of organic engagement and activity.
| [
{
"created": "Thu, 12 Aug 2021 17:49:30 GMT",
"version": "v1"
}
] | 2021-08-13 | [
[
"Paudel",
"Pujan",
""
],
[
"Blackburn",
"Jeremy",
""
],
[
"De Cristofaro",
"Emiliano",
""
],
[
"Zannettou",
"Savvas",
""
],
[
"Stringhini",
"Gianluca",
""
]
] | This paper presents the first data-driven analysis of Gettr, a new social network platform launched by former US President Donald Trump's team. Among other things, we find that users on the platform heavily discuss politics, with a focus on the Trump campaign in the US and Bolsonaro's in Brazil. Activity on the platform has steadily been decreasing since its launch, although a core of verified users and early adopters kept posting and became central to it. Finally, although toxicity has been increasing over time, the average level of toxicity is still lower than the one recently observed on other fringe social networks like Gab and 4chan. Overall, we provide a first quantitative look at this new community, observing a lack of organic engagement and activity.
1712.07642 | Fereshteh Sadeghi | Fereshteh Sadeghi, Alexander Toshev, Eric Jang, Sergey Levine | Sim2Real View Invariant Visual Servoing by Recurrent Control | Supplementary video:
https://fsadeghi.github.io/Sim2RealViewInvariantServo | null | null | null | cs.CV cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Humans are remarkably proficient at controlling their limbs and tools from a
wide range of viewpoints and angles, even in the presence of optical
distortions. In robotics, this ability is referred to as visual servoing:
moving a tool or end-point to a desired location using primarily visual
feedback. In this paper, we study how viewpoint-invariant visual servoing
skills can be learned automatically in a robotic manipulation scenario. To this
end, we train a deep recurrent controller that can automatically determine
which actions move the end-point of a robotic arm to a desired object. The
problem that must be solved by this controller is fundamentally ambiguous:
under severe variation in viewpoint, it may be impossible to determine the
actions in a single feedforward operation. Instead, our visual servoing system
must use its memory of past movements to understand how the actions affect the
robot motion from the current viewpoint, correcting mistakes and gradually
moving closer to the target. This ability is in stark contrast to most visual
servoing methods, which either assume known dynamics or require a calibration
phase. We show how we can learn this recurrent controller using simulated data
and a reinforcement learning objective. We then describe how the resulting
model can be transferred to a real-world robot by disentangling perception from
control and only adapting the visual layers. The adapted model can servo to
previously unseen objects from novel viewpoints on a real-world Kuka IIWA
robotic arm. For supplementary videos, see:
https://fsadeghi.github.io/Sim2RealViewInvariantServo
| [
{
"created": "Wed, 20 Dec 2017 18:54:29 GMT",
"version": "v1"
}
] | 2017-12-21 | [
[
"Sadeghi",
"Fereshteh",
""
],
[
"Toshev",
"Alexander",
""
],
[
"Jang",
"Eric",
""
],
[
"Levine",
"Sergey",
""
]
] | Humans are remarkably proficient at controlling their limbs and tools from a wide range of viewpoints and angles, even in the presence of optical distortions. In robotics, this ability is referred to as visual servoing: moving a tool or end-point to a desired location using primarily visual feedback. In this paper, we study how viewpoint-invariant visual servoing skills can be learned automatically in a robotic manipulation scenario. To this end, we train a deep recurrent controller that can automatically determine which actions move the end-point of a robotic arm to a desired object. The problem that must be solved by this controller is fundamentally ambiguous: under severe variation in viewpoint, it may be impossible to determine the actions in a single feedforward operation. Instead, our visual servoing system must use its memory of past movements to understand how the actions affect the robot motion from the current viewpoint, correcting mistakes and gradually moving closer to the target. This ability is in stark contrast to most visual servoing methods, which either assume known dynamics or require a calibration phase. We show how we can learn this recurrent controller using simulated data and a reinforcement learning objective. We then describe how the resulting model can be transferred to a real-world robot by disentangling perception from control and only adapting the visual layers. The adapted model can servo to previously unseen objects from novel viewpoints on a real-world Kuka IIWA robotic arm. For supplementary videos, see: https://fsadeghi.github.io/Sim2RealViewInvariantServo |
2403.13433 | Zhouhong Gu | Zhouhong Gu, Xiaoxuan Zhu, Haoran Guo, Lin Zhang, Yin Cai, Hao Shen,
Jiangjie Chen, Zheyu Ye, Yifei Dai, Yan Gao, Yao Hu, Hongwei Feng, Yanghua
Xiao | AgentGroupChat: An Interactive Group Chat Simulacra For Better Eliciting
Emergent Behavior | null | null | null | null | cs.AI cs.CL cs.CY | http://creativecommons.org/licenses/by/4.0/ | Language significantly influences the formation and evolution of Human
emergent behavior, which is crucial in understanding collective intelligence
within human societies. Considering that the study of how language affects
human behavior needs to put it into the dynamic scenarios in which it is used,
we introduce AgentGroupChat in this paper, a simulation that delves into the
complex role of language in shaping collective behavior through interactive
debate scenarios. Central to this simulation are characters engaging in dynamic
conversation interactions. To enable simulation, we introduce the Verbal
Strategist Agent, utilizing large language models to enhance interaction
strategies by incorporating elements of persona and action. We set four
narrative scenarios based on AgentGroupChat to demonstrate the simulation's
capacity to mimic complex language use in group dynamics. Evaluations focus on
aligning agent behaviors with human expectations and the emergence of
collective behaviors within the simulation. Results reveal that emergent
behaviors materialize from a confluence of factors: a conducive environment for
extensive information exchange, characters with diverse traits, high linguistic
comprehension, and strategic adaptability. During discussions on ``the impact
of AI on humanity'' in AgentGroupChat simulation, philosophers commonly agreed
that ``AI could enhance societal welfare with judicious limitations'' and even
come to a conclusion that ``the essence of true intelligence encompasses
understanding the necessity to constrain self abilities''. Additionally, in the
competitive domain of casting for primary roles in films in AgentGroupChat,
certain actors were ready to reduce their remuneration or accept lesser roles,
motivated by their deep-seated desire to contribute to the project.
| [
{
"created": "Wed, 20 Mar 2024 09:21:32 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Apr 2024 07:40:31 GMT",
"version": "v2"
}
] | 2024-04-05 | [
[
"Gu",
"Zhouhong",
""
],
[
"Zhu",
"Xiaoxuan",
""
],
[
"Guo",
"Haoran",
""
],
[
"Zhang",
"Lin",
""
],
[
"Cai",
"Yin",
""
],
[
"Shen",
"Hao",
""
],
[
"Chen",
"Jiangjie",
""
],
[
"Ye",
"Zheyu",
""
],
[
"Dai",
"Yifei",
""
],
[
"Gao",
"Yan",
""
],
[
"Hu",
"Yao",
""
],
[
"Feng",
"Hongwei",
""
],
[
"Xiao",
"Yanghua",
""
]
] | Language significantly influences the formation and evolution of Human emergent behavior, which is crucial in understanding collective intelligence within human societies. Considering that the study of how language affects human behavior needs to put it into the dynamic scenarios in which it is used, we introduce AgentGroupChat in this paper, a simulation that delves into the complex role of language in shaping collective behavior through interactive debate scenarios. Central to this simulation are characters engaging in dynamic conversation interactions. To enable simulation, we introduce the Verbal Strategist Agent, utilizing large language models to enhance interaction strategies by incorporating elements of persona and action. We set four narrative scenarios based on AgentGroupChat to demonstrate the simulation's capacity to mimic complex language use in group dynamics. Evaluations focus on aligning agent behaviors with human expectations and the emergence of collective behaviors within the simulation. Results reveal that emergent behaviors materialize from a confluence of factors: a conducive environment for extensive information exchange, characters with diverse traits, high linguistic comprehension, and strategic adaptability. During discussions on ``the impact of AI on humanity'' in AgentGroupChat simulation, philosophers commonly agreed that ``AI could enhance societal welfare with judicious limitations'' and even come to a conclusion that ``the essence of true intelligence encompasses understanding the necessity to constrain self abilities''. Additionally, in the competitive domain of casting for primary roles in films in AgentGroupChat, certain actors were ready to reduce their remuneration or accept lesser roles, motivated by their deep-seated desire to contribute to the project. |
2006.07123 | Roman Pogodin | Roman Pogodin and Peter E. Latham | Kernelized information bottleneck leads to biologically plausible
3-factor Hebbian learning in deep networks | Accepted to NeurIPS 2020 | null | null | null | cs.LG q-bio.NC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The state-of-the-art machine learning approach to training deep neural
networks, backpropagation, is implausible for real neural networks: neurons
need to know their outgoing weights; training alternates between a bottom-up
forward pass (computation) and a top-down backward pass (learning); and the
algorithm often needs precise labels of many data points. Biologically
plausible approximations to backpropagation, such as feedback alignment, solve
the weight transport problem, but not the other two. Thus, fully biologically
plausible learning rules have so far remained elusive. Here we present a family
of learning rules that does not suffer from any of these problems. It is
motivated by the information bottleneck principle (extended with kernel
methods), in which networks learn to compress the input as much as possible
without sacrificing prediction of the output. The resulting rules have a
3-factor Hebbian structure: they require pre- and post-synaptic firing rates
and an error signal - the third factor - consisting of a global teaching signal
and a layer-specific term, both available without a top-down pass. They do not
require precise labels; instead, they rely on the similarity between pairs of
desired outputs. Moreover, to obtain good performance on hard problems and
retain biological plausibility, our rules need divisive normalization - a known
feature of biological networks. Finally, simulations show that our rules
perform nearly as well as backpropagation on image classification tasks.
| [
{
"created": "Fri, 12 Jun 2020 12:30:53 GMT",
"version": "v1"
},
{
"created": "Fri, 23 Oct 2020 17:00:59 GMT",
"version": "v2"
}
] | 2020-10-26 | [
[
"Pogodin",
"Roman",
""
],
[
"Latham",
"Peter E.",
""
]
] | The state-of-the art machine learning approach to training deep neural networks, backpropagation, is implausible for real neural networks: neurons need to know their outgoing weights; training alternates between a bottom-up forward pass (computation) and a top-down backward pass (learning); and the algorithm often needs precise labels of many data points. Biologically plausible approximations to backpropagation, such as feedback alignment, solve the weight transport problem, but not the other two. Thus, fully biologically plausible learning rules have so far remained elusive. Here we present a family of learning rules that does not suffer from any of these problems. It is motivated by the information bottleneck principle (extended with kernel methods), in which networks learn to compress the input as much as possible without sacrificing prediction of the output. The resulting rules have a 3-factor Hebbian structure: they require pre- and post-synaptic firing rates and an error signal - the third factor - consisting of a global teaching signal and a layer-specific term, both available without a top-down pass. They do not require precise labels; instead, they rely on the similarity between pairs of desired outputs. Moreover, to obtain good performance on hard problems and retain biological plausibility, our rules need divisive normalization - a known feature of biological networks. Finally, simulations show that our rules perform nearly as well as backpropagation on image classification tasks. |
1706.00399 | Veit Elser | Veit Elser, Ti-Yen Lan and Tamir Bendory | Benchmark problems for phase retrieval | 27 pages, 10 figures, new references, modified appendix, improved
presentation | null | null | null | cs.IT math.IT physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, the mathematical and algorithmic aspects of the phase
retrieval problem have received considerable attention. Many papers in this
area mention crystallography as a principal application. In crystallography,
the signal to be recovered is periodic and comprised of atomic distributions
arranged homogeneously in the unit cell of the crystal. The crystallographic
problem is both the leading application and one of the hardest forms of phase
retrieval. We have constructed a graded set of benchmark problems for
evaluating algorithms that perform this type of phase retrieval. The data,
publicly available online, is provided in an easily interpretable format. We
also propose a simple and unambiguous success/failure criterion based on the
actual needs in crystallography. Baseline runtimes were obtained with an
iterative algorithm that is similar but more transparent than those used in
crystallography. Empirically, the runtimes grow exponentially with respect to a
new hardness parameter: the sparsity of the signal autocorrelation. We also
review the algorithms used by the leading software packages. This set of
benchmark problems, we hope, will encourage the development of new algorithms
for the phase retrieval problem in general, and crystallography in particular.
| [
{
"created": "Thu, 1 Jun 2017 17:22:43 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Feb 2018 20:31:32 GMT",
"version": "v2"
},
{
"created": "Thu, 14 Jun 2018 15:28:00 GMT",
"version": "v3"
}
] | 2018-06-15 | [
[
"Elser",
"Veit",
""
],
[
"Lan",
"Ti-Yen",
""
],
[
"Bendory",
"Tamir",
""
]
] | In recent years, the mathematical and algorithmic aspects of the phase retrieval problem have received considerable attention. Many papers in this area mention crystallography as a principal application. In crystallography, the signal to be recovered is periodic and comprised of atomic distributions arranged homogeneously in the unit cell of the crystal. The crystallographic problem is both the leading application and one of the hardest forms of phase retrieval. We have constructed a graded set of benchmark problems for evaluating algorithms that perform this type of phase retrieval. The data, publicly available online, is provided in an easily interpretable format. We also propose a simple and unambiguous success/failure criterion based on the actual needs in crystallography. Baseline runtimes were obtained with an iterative algorithm that is similar but more transparent than those used in crystallography. Empirically, the runtimes grow exponentially with respect to a new hardness parameter: the sparsity of the signal autocorrelation. We also review the algorithms used by the leading software packages. This set of benchmark problems, we hope, will encourage the development of new algorithms for the phase retrieval problem in general, and crystallography in particular. |
2312.09087 | J\"ames M\'en\'etrey | J\"ames M\'en\'etrey and Marcelo Pasin and Pascal Felber and Valerio
Schiavoni and Giovanni Mazzeo and Arne Hollum and Darshan Vaydia | A Comprehensive Trusted Runtime for WebAssembly with Intel SGX | This publication incorporates results from the VEDLIoT project, which
received funding from the European Union's Horizon 2020 research and
innovation programme under grant agreement No 957197. arXiv admin note: text
overlap with arXiv:2103.15860 | TDSC: IEEE Transactions on Dependable and Secure Computing,
November, 2023 | 10.1109/TDSC.2023.3334516 | null | cs.CR cs.PF cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In real-world scenarios, trusted execution environments (TEEs) frequently
host applications that lack the trust of the infrastructure provider, as well
as data owners who have specifically outsourced their data for remote
processing. We present Twine, a trusted runtime for running
WebAssembly-compiled applications within TEEs, establishing a two-way sandbox.
Twine leverages memory safety guarantees of WebAssembly (Wasm) and abstracts
the complexity of TEEs, empowering the execution of legacy and
language-agnostic applications. It extends the standard WebAssembly system
interface (WASI), providing controlled OS services, focusing on I/O.
Additionally, through built-in TEE mechanisms, Twine delivers attestation
capabilities to ensure the integrity of the runtime and the OS services
supplied to the application. We evaluate its performance using general-purpose
benchmarks and real-world applications, showing it compares on par with
state-of-the-art solutions. A case study involving fintech company Credora
reveals that Twine can be deployed in production with reasonable performance
trade-offs, ranging from a 0.7x slowdown to a 1.17x speedup compared to native
run time. Finally, we identify potential performance improvements through
library optimisation, showcasing one such adjustment that yields up to a 4.1x
speedup.
Twine is open-source and has been upstreamed into the original Wasm runtime,
WAMR.
| [
{
"created": "Thu, 14 Dec 2023 16:19:00 GMT",
"version": "v1"
}
] | 2023-12-15 | [
[
"Ménétrey",
"Jämes",
""
],
[
"Pasin",
"Marcelo",
""
],
[
"Felber",
"Pascal",
""
],
[
"Schiavoni",
"Valerio",
""
],
[
"Mazzeo",
"Giovanni",
""
],
[
"Hollum",
"Arne",
""
],
[
"Vaydia",
"Darshan",
""
]
] | In real-world scenarios, trusted execution environments (TEEs) frequently host applications that lack the trust of the infrastructure provider, as well as data owners who have specifically outsourced their data for remote processing. We present Twine, a trusted runtime for running WebAssembly-compiled applications within TEEs, establishing a two-way sandbox. Twine leverages memory safety guarantees of WebAssembly (Wasm) and abstracts the complexity of TEEs, empowering the execution of legacy and language-agnostic applications. It extends the standard WebAssembly system interface (WASI), providing controlled OS services, focusing on I/O. Additionally, through built-in TEE mechanisms, Twine delivers attestation capabilities to ensure the integrity of the runtime and the OS services supplied to the application. We evaluate its performance using general-purpose benchmarks and real-world applications, showing it compares on par with state-of-the-art solutions. A case study involving fintech company Credora reveals that Twine can be deployed in production with reasonable performance trade-offs, ranging from a 0.7x slowdown to a 1.17x speedup compared to native run time. Finally, we identify potential performance improvements through library optimisation, showcasing one such adjustment that yields up to a 4.1x speedup. Twine is open-source and has been upstreamed into the original Wasm runtime, WAMR. |
0712.4075 | Vitaly Skachek | Vitaly Skachek, Mark F. Flanagan, Eimear Byrne, Marcus Greferath | Polytope Representations for Linear-Programming Decoding of Non-Binary
Linear Codes | 5 pages, to appear in 2008 IEEE International Symposium on
Information Theory | null | 10.1109/ISIT.2008.4595239 | null | cs.IT math.IT | null | In previous work, we demonstrated how decoding of a non-binary linear code
could be formulated as a linear-programming problem. In this paper, we study
different polytopes for use with linear-programming decoding, and show that for
many classes of codes these polytopes yield a complexity advantage for
decoding. These representations lead to polynomial-time decoders for a wide
variety of classical non-binary linear codes.
| [
{
"created": "Tue, 25 Dec 2007 15:44:01 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Apr 2008 17:35:15 GMT",
"version": "v2"
}
] | 2016-11-18 | [
[
"Skachek",
"Vitaly",
""
],
[
"Flanagan",
"Mark F.",
""
],
[
"Byrne",
"Eimear",
""
],
[
"Greferath",
"Marcus",
""
]
] | In previous work, we demonstrated how decoding of a non-binary linear code could be formulated as a linear-programming problem. In this paper, we study different polytopes for use with linear-programming decoding, and show that for many classes of codes these polytopes yield a complexity advantage for decoding. These representations lead to polynomial-time decoders for a wide variety of classical non-binary linear codes. |
2405.03089 | Ismail Alkhouri | Xitong Zhang, Ismail R. Alkhouri, Rongrong Wang | Structure-Preserving Network Compression Via Low-Rank Induced Training
Through Linear Layers Composition | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Deep Neural Networks (DNNs) have achieved remarkable success in addressing
many previously unsolvable tasks. However, the storage and computational
requirements associated with DNNs pose a challenge for deploying these trained
models on resource-limited devices. Therefore, a plethora of compression and
pruning techniques have been proposed in recent years. Low-rank decomposition
techniques are among the approaches most utilized to address this problem.
Compared to post-training compression, compression-promoted training is still
under-explored. In this paper, we present a theoretically-justified novel
approach, termed Low-Rank Induced Training (LoRITa), that promotes low-rankness
through the composition of linear layers and compresses by using singular value
truncation. This is achieved without the need to change the structure at
inference time or require constrained and/or additional optimization, other
than the standard weight decay regularization. Moreover, LoRITa eliminates the
need to (i) initialize with pre-trained models and (ii) specify rank selection
prior to training. Our experimental results (i) demonstrate the effectiveness
of our approach using MNIST on Fully Connected Networks, CIFAR10 on Vision
Transformers, and CIFAR10/100 on Convolutional Neural Networks, and (ii)
illustrate that we achieve either competitive or SOTA results when compared to
leading structured pruning methods in terms of FLOPs and parameters drop.
| [
{
"created": "Mon, 6 May 2024 00:58:23 GMT",
"version": "v1"
}
] | 2024-05-07 | [
[
"Zhang",
"Xitong",
""
],
[
"Alkhouri",
"Ismail R.",
""
],
[
"Wang",
"Rongrong",
""
]
] | Deep Neural Networks (DNNs) have achieved remarkable success in addressing many previously unsolvable tasks. However, the storage and computational requirements associated with DNNs pose a challenge for deploying these trained models on resource-limited devices. Therefore, a plethora of compression and pruning techniques have been proposed in recent years. Low-rank decomposition techniques are among the approaches most utilized to address this problem. Compared to post-training compression, compression-promoted training is still under-explored. In this paper, we present a theoretically-justified novel approach, termed Low-Rank Induced Training (LoRITa), that promotes low-rankness through the composition of linear layers and compresses by using singular value truncation. This is achieved without the need to change the structure at inference time or require constrained and/or additional optimization, other than the standard weight decay regularization. Moreover, LoRITa eliminates the need to (i) initialize with pre-trained models and (ii) specify rank selection prior to training. Our experimental results (i) demonstrate the effectiveness of our approach using MNIST on Fully Connected Networks, CIFAR10 on Vision Transformers, and CIFAR10/100 on Convolutional Neural Networks, and (ii) illustrate that we achieve either competitive or SOTA results when compared to leading structured pruning methods in terms of FLOPs and parameters drop. |
2306.08925 | Xiaoyi Bao | Xiaoyi Bao, Xiaotong Jiang, Zhongqing Wang, Yue Zhang, and Guodong
Zhou | Opinion Tree Parsing for Aspect-based Sentiment Analysis | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Extracting sentiment elements using pre-trained generative models has
recently led to large improvements in aspect-based sentiment analysis
benchmarks. However, these models always need large-scale computing resources,
and they also ignore explicit modeling of structure between sentiment elements.
To address these challenges, we propose an opinion tree parsing model, aiming
to parse all the sentiment elements from an opinion tree, which is much faster,
and can explicitly reveal a more comprehensive and complete aspect-level
sentiment structure. In particular, we first introduce a novel context-free
opinion grammar to normalize the opinion tree structure. We then employ a
neural chart-based opinion tree parser to fully explore the correlations among
sentiment elements and parse them into an opinion tree structure. Extensive
experiments show the superiority of our proposed model and the capacity of the
opinion tree parser with the proposed context-free opinion grammar. More
importantly, the results also prove that our model is much faster than previous
models.
| [
{
"created": "Thu, 15 Jun 2023 07:53:14 GMT",
"version": "v1"
}
] | 2023-06-16 | [
[
"Bao",
"Xiaoyi",
""
],
[
"Jiang",
"Xiaotong",
""
],
[
"Wang",
"Zhongqing",
""
],
[
"Zhang",
"Yue",
""
],
[
"Zhou",
"Guodong",
""
]
] | Extracting sentiment elements using pre-trained generative models has recently led to large improvements in aspect-based sentiment analysis benchmarks. However, these models always need large-scale computing resources, and they also ignore explicit modeling of structure between sentiment elements. To address these challenges, we propose an opinion tree parsing model, aiming to parse all the sentiment elements from an opinion tree, which is much faster, and can explicitly reveal a more comprehensive and complete aspect-level sentiment structure. In particular, we first introduce a novel context-free opinion grammar to normalize the opinion tree structure. We then employ a neural chart-based opinion tree parser to fully explore the correlations among sentiment elements and parse them into an opinion tree structure. Extensive experiments show the superiority of our proposed model and the capacity of the opinion tree parser with the proposed context-free opinion grammar. More importantly, the results also prove that our model is much faster than previous models. |
1911.09325 | Yafeng Liu | Liu Yafeng, Chen Tian, Liu Zhongyu, Zhang Lei, Hu Yanjun and Ding
Enjie | Simultaneous Implementation Features Extraction and Recognition Using
C3D Network for WiFi-based Human Activity Recognition | 11 pages, 8 figures, 5 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human action recognition has attracted increasing attention. Many
technologies have been developed to represent human action features, such as
image-based, skeleton-based, and channel state information (CSI) approaches.
Among them, CSI has gained growing attention in certain scenarios because it is
easy to deploy and insensitive to lighting conditions. However, the
relationship between CSI signals and human actions is very complex, and
preliminary processing is needed to make CSI features interpretable for
computers. Most existing work splits CSI-based action recognition into two
parts: one for feature extraction and dimension reduction, and the other for
the time-series problem; some work even omits one of the two parts. As a
result, the accuracies of current recognition systems are far from
satisfactory. In this paper, we propose a new deep-learning-based approach,
i.e., a C3D network and a C3D network with an attention mechanism, for human
action recognition using CSI signals. This kind of network performs feature
extraction through spatial and temporal convolution simultaneously, so the two
parts of CSI-based human action recognition mentioned above can be realized at
the same time, and the overall algorithm structure is simplified. The
experimental results show that our proposed C3D network achieves the best
recognition performance for all activities when compared with several
benchmark approaches.
| [
{
"created": "Thu, 21 Nov 2019 07:45:46 GMT",
"version": "v1"
}
] | 2019-11-22 | [
[
"Yafeng",
"Liu",
""
],
[
"Tian",
"Chen",
""
],
[
"Zhongyu",
"Liu",
""
],
[
"Lei",
"Zhang",
""
],
[
"Yanjun",
"Hu",
""
],
[
"Enjie",
"Ding",
""
]
] | Human action recognition has attracted increasing attention. Many technologies have been developed to represent human action features, such as image-based, skeleton-based, and channel state information (CSI) approaches. Among them, CSI has gained growing attention in certain scenarios because it is easy to deploy and insensitive to lighting conditions. However, the relationship between CSI signals and human actions is very complex, and preliminary processing is needed to make CSI features interpretable for computers. Most existing work splits CSI-based action recognition into two parts: one for feature extraction and dimension reduction, and the other for the time-series problem; some work even omits one of the two parts. As a result, the accuracies of current recognition systems are far from satisfactory. In this paper, we propose a new deep-learning-based approach, i.e., a C3D network and a C3D network with an attention mechanism, for human action recognition using CSI signals. This kind of network performs feature extraction through spatial and temporal convolution simultaneously, so the two parts of CSI-based human action recognition mentioned above can be realized at the same time, and the overall algorithm structure is simplified. The experimental results show that our proposed C3D network achieves the best recognition performance for all activities when compared with several benchmark approaches. |
1607.00595 | Datong Zhou | Datong Zhou, Maximilian Balandat, Claire Tomlin | Residential Demand Response Targeting Using Machine Learning with
Observational Data | 8 pages | null | null | null | cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The large scale deployment of Advanced Metering Infrastructure among
residential energy customers has served as a boon for energy systems research
relying on granular consumption data. Residential Demand Response aims to
utilize the flexibility of consumers to reduce their energy usage during times
when the grid is strained. Suitable incentive mechanisms to encourage customers
to deviate from their usual behavior have to be implemented to correctly
control the bids into the wholesale electricity market as a Demand Response
provider. In this paper, we present a framework for short term load forecasting
on an individual user level, and relate nonexperimental estimates of Demand
Response efficacy, i.e. the estimated reduction of consumption during Demand
Response events, to the variability of user consumption. We apply our framework
on a data set from a residential Demand Response program in the Western United
States. Our results suggest that users with more variable consumption patterns
are more likely to reduce their consumption compared to users with a more
regular consumption behavior.
| [
{
"created": "Sun, 3 Jul 2016 05:32:50 GMT",
"version": "v1"
}
] | 2016-07-05 | [
[
"Zhou",
"Datong",
""
],
[
"Balandat",
"Maximilian",
""
],
[
"Tomlin",
"Claire",
""
]
] | The large scale deployment of Advanced Metering Infrastructure among residential energy customers has served as a boon for energy systems research relying on granular consumption data. Residential Demand Response aims to utilize the flexibility of consumers to reduce their energy usage during times when the grid is strained. Suitable incentive mechanisms to encourage customers to deviate from their usual behavior have to be implemented to correctly control the bids into the wholesale electricity market as a Demand Response provider. In this paper, we present a framework for short term load forecasting on an individual user level, and relate nonexperimental estimates of Demand Response efficacy, i.e. the estimated reduction of consumption during Demand Response events, to the variability of user consumption. We apply our framework on a data set from a residential Demand Response program in the Western United States. Our results suggest that users with more variable consumption patterns are more likely to reduce their consumption compared to users with a more regular consumption behavior. |
2404.00357 | Tao Li | Tao Li, Qinghua Tao, Weihao Yan, Zehao Lei, Yingwen Wu, Kun Fang,
Mingzhen He, Xiaolin Huang | Revisiting Random Weight Perturbation for Efficiently Improving
Generalization | Accepted to TMLR 2024 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Improving the generalization ability of modern deep neural networks (DNNs) is
a fundamental challenge in machine learning. Two branches of methods have been
proposed to seek flat minima and improve generalization: one led by
sharpness-aware minimization (SAM) minimizes the worst-case neighborhood loss
through adversarial weight perturbation (AWP), and the other minimizes the
expected Bayes objective with random weight perturbation (RWP). While RWP
offers advantages in computation and is closely linked to AWP on a mathematical
basis, its empirical performance has consistently lagged behind that of AWP. In
this paper, we revisit the use of RWP for improving generalization and propose
improvements from two perspectives: i) the trade-off between generalization and
convergence and ii) the random perturbation generation. Through extensive
experimental evaluations, we demonstrate that our enhanced RWP methods achieve
greater efficiency in enhancing generalization, particularly in large-scale
problems, while also offering comparable or even superior performance to SAM.
The code is released at https://github.com/nblt/mARWP.
| [
{
"created": "Sat, 30 Mar 2024 13:18:27 GMT",
"version": "v1"
}
] | 2024-04-02 | [
[
"Li",
"Tao",
""
],
[
"Tao",
"Qinghua",
""
],
[
"Yan",
"Weihao",
""
],
[
"Lei",
"Zehao",
""
],
[
"Wu",
"Yingwen",
""
],
[
"Fang",
"Kun",
""
],
[
"He",
"Mingzhen",
""
],
[
"Huang",
"Xiaolin",
""
]
] | Improving the generalization ability of modern deep neural networks (DNNs) is a fundamental challenge in machine learning. Two branches of methods have been proposed to seek flat minima and improve generalization: one led by sharpness-aware minimization (SAM) minimizes the worst-case neighborhood loss through adversarial weight perturbation (AWP), and the other minimizes the expected Bayes objective with random weight perturbation (RWP). While RWP offers advantages in computation and is closely linked to AWP on a mathematical basis, its empirical performance has consistently lagged behind that of AWP. In this paper, we revisit the use of RWP for improving generalization and propose improvements from two perspectives: i) the trade-off between generalization and convergence and ii) the random perturbation generation. Through extensive experimental evaluations, we demonstrate that our enhanced RWP methods achieve greater efficiency in enhancing generalization, particularly in large-scale problems, while also offering comparable or even superior performance to SAM. The code is released at https://github.com/nblt/mARWP. |
2312.10714 | Siqi Liu | Siqi Liu, Yong-Lu Li, Zhou Fang, Xinpeng Liu, Yang You, Cewu Lu | Primitive-based 3D Human-Object Interaction Modelling and Programming | AAAI2024 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Embedding Human and Articulated Object Interaction (HAOI) in 3D is an
important direction for a deeper human activity understanding. Different from
previous works that use parametric and CAD models to represent humans and
objects, in this work, we propose a novel 3D geometric primitive-based language
to encode both humans and objects. Given our new paradigm, humans and objects
are all compositions of primitives instead of heterogeneous entities. Thus,
mutual information learning may be achieved between the limited 3D data of
humans and different object categories. Moreover, considering the simplicity of
the expression and the richness of the information it contains, we choose the
superquadric as the primitive representation. To explore an effective embedding
of HAOI for the machine, we build a new benchmark on 3D HAOI consisting of
primitives together with their images and propose a task requiring machines to
recover 3D HAOI using primitives from images. Moreover, we propose a baseline
of single-view 3D reconstruction on HAOI. We believe this primitive-based 3D
HAOI representation would pave the way for 3D HAOI studies. Our code and data
are available at https://mvig-rhos.com/p3haoi.
| [
{
"created": "Sun, 17 Dec 2023 13:16:49 GMT",
"version": "v1"
}
] | 2023-12-19 | [
[
"Liu",
"Siqi",
""
],
[
"Li",
"Yong-Lu",
""
],
[
"Fang",
"Zhou",
""
],
[
"Liu",
"Xinpeng",
""
],
[
"You",
"Yang",
""
],
[
"Lu",
"Cewu",
""
]
] | Embedding Human and Articulated Object Interaction (HAOI) in 3D is an important direction for a deeper human activity understanding. Different from previous works that use parametric and CAD models to represent humans and objects, in this work, we propose a novel 3D geometric primitive-based language to encode both humans and objects. Given our new paradigm, humans and objects are all compositions of primitives instead of heterogeneous entities. Thus, mutual information learning may be achieved between the limited 3D data of humans and different object categories. Moreover, considering the simplicity of the expression and the richness of the information it contains, we choose the superquadric as the primitive representation. To explore an effective embedding of HAOI for the machine, we build a new benchmark on 3D HAOI consisting of primitives together with their images and propose a task requiring machines to recover 3D HAOI using primitives from images. Moreover, we propose a baseline of single-view 3D reconstruction on HAOI. We believe this primitive-based 3D HAOI representation would pave the way for 3D HAOI studies. Our code and data are available at https://mvig-rhos.com/p3haoi. |
2206.04935 | Max M\"uller-Eberstein | Max M\"uller-Eberstein, Rob van der Goot and Barbara Plank | Sort by Structure: Language Model Ranking as Dependency Probing | Accepted at NAACL 2022 (Main Conference) | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Making an informed choice of pre-trained language model (LM) is critical for
performance, yet environmentally costly, and as such widely underexplored. The
field of Computer Vision has begun to tackle encoder ranking, with promising
forays into Natural Language Processing; however, these efforts lack coverage of
linguistic tasks such as structured prediction. We propose probing to rank LMs,
specifically for parsing dependencies in a given language, by measuring the
degree to which labeled trees are recoverable from an LM's contextualized
embeddings. Across 46 typologically and architecturally diverse LM-language
pairs, our probing approach predicts the best LM choice 79% of the time using
orders of magnitude less compute than training a full parser. Within this
study, we identify and analyze one recently proposed decoupled LM - RemBERT -
and find it strikingly contains less inherent dependency information, but often
yields the best parser after full fine-tuning. Without this outlier our
approach identifies the best LM in 89% of cases.
| [
{
"created": "Fri, 10 Jun 2022 08:10:29 GMT",
"version": "v1"
}
] | 2022-06-13 | [
[
"Müller-Eberstein",
"Max",
""
],
[
"van der Goot",
"Rob",
""
],
[
"Plank",
"Barbara",
""
]
] | Making an informed choice of pre-trained language model (LM) is critical for performance, yet environmentally costly, and as such widely underexplored. The field of Computer Vision has begun to tackle encoder ranking, with promising forays into Natural Language Processing; however, these efforts lack coverage of linguistic tasks such as structured prediction. We propose probing to rank LMs, specifically for parsing dependencies in a given language, by measuring the degree to which labeled trees are recoverable from an LM's contextualized embeddings. Across 46 typologically and architecturally diverse LM-language pairs, our probing approach predicts the best LM choice 79% of the time using orders of magnitude less compute than training a full parser. Within this study, we identify and analyze one recently proposed decoupled LM - RemBERT - and find it strikingly contains less inherent dependency information, but often yields the best parser after full fine-tuning. Without this outlier our approach identifies the best LM in 89% of cases. |
2403.18555 | Philip Kenneweg | Philip Kenneweg, Sarah Schr\"oder, Alexander Schulz, Barbara Hammer | Debiasing Sentence Embedders through Contrastive Word Pairs | null | null | 10.5220/0011615300003411 | null | cs.CL | http://creativecommons.org/publicdomain/zero/1.0/ | Over the last years, various sentence embedders have been an integral part in
the success of current machine learning approaches to Natural Language
Processing (NLP). Unfortunately, multiple sources have shown that the bias,
inherent in the datasets upon which these embedding methods are trained, is
learned by them. A variety of different approaches to remove biases in
embeddings exists in the literature. Most of these approaches are applicable to
word embeddings and in fewer cases to sentence embeddings. It is problematic
that most debiasing approaches are directly transferred from word embeddings,
therefore these approaches fail to take into account the nonlinear nature of
sentence embedders and the embeddings they produce. It has been shown in
literature that bias information is still present if sentence embeddings are
debiased using such methods. In this contribution, we explore an approach to
remove linear and nonlinear bias information for NLP solutions, without
impacting downstream performance. We compare our approach to common debiasing
methods on classical bias metrics and on bias metrics which take nonlinear
information into account.
| [
{
"created": "Wed, 27 Mar 2024 13:34:59 GMT",
"version": "v1"
}
] | 2024-03-28 | [
[
"Kenneweg",
"Philip",
""
],
[
"Schröder",
"Sarah",
""
],
[
"Schulz",
"Alexander",
""
],
[
"Hammer",
"Barbara",
""
]
] | Over the last years, various sentence embedders have been an integral part in the success of current machine learning approaches to Natural Language Processing (NLP). Unfortunately, multiple sources have shown that the bias, inherent in the datasets upon which these embedding methods are trained, is learned by them. A variety of different approaches to remove biases in embeddings exists in the literature. Most of these approaches are applicable to word embeddings and in fewer cases to sentence embeddings. It is problematic that most debiasing approaches are directly transferred from word embeddings, therefore these approaches fail to take into account the nonlinear nature of sentence embedders and the embeddings they produce. It has been shown in literature that bias information is still present if sentence embeddings are debiased using such methods. In this contribution, we explore an approach to remove linear and nonlinear bias information for NLP solutions, without impacting downstream performance. We compare our approach to common debiasing methods on classical bias metrics and on bias metrics which take nonlinear information into account. |
2309.02011 | Pascal Mattia Esser | Pascal Esser, Satyaki Mukherjee, Debarghya Ghoshdastidar | Representation Learning Dynamics of Self-Supervised Models | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Self-Supervised Learning (SSL) is an important paradigm for learning
representations from unlabelled data, and SSL with neural networks has been
highly successful in practice. However, current theoretical analysis of SSL is
mostly restricted to generalisation error bounds. In contrast, learning
dynamics often provide a precise characterisation of the behaviour of
neural-network-based models but, so far, are mainly known in supervised
settings. In this paper, we study the learning dynamics of SSL models,
specifically representations obtained by minimising contrastive and
non-contrastive losses. We show that a naive extension of the dynamics of
multivariate regression to SSL leads to learning trivial scalar representations
that demonstrate dimension collapse in SSL. Consequently, we formulate SSL
objectives with orthogonality constraints on the weights, and derive the exact
(network width independent) learning dynamics of the SSL models trained using
gradient descent on the Grassmannian manifold. We also argue that the infinite
width approximation of SSL models significantly deviates from the neural tangent
kernel approximations of supervised models. We numerically illustrate the
validity of our theoretical findings, and discuss how the presented results
provide a framework for further theoretical analysis of contrastive and
non-contrastive SSL.
| [
{
"created": "Tue, 5 Sep 2023 07:48:45 GMT",
"version": "v1"
}
] | 2023-09-06 | [
[
"Esser",
"Pascal",
""
],
[
"Mukherjee",
"Satyaki",
""
],
[
"Ghoshdastidar",
"Debarghya",
""
]
] | Self-Supervised Learning (SSL) is an important paradigm for learning representations from unlabelled data, and SSL with neural networks has been highly successful in practice. However, current theoretical analysis of SSL is mostly restricted to generalisation error bounds. In contrast, learning dynamics often provide a precise characterisation of the behaviour of neural-network-based models but, so far, are mainly known in supervised settings. In this paper, we study the learning dynamics of SSL models, specifically representations obtained by minimising contrastive and non-contrastive losses. We show that a naive extension of the dynamics of multivariate regression to SSL leads to learning trivial scalar representations that demonstrate dimension collapse in SSL. Consequently, we formulate SSL objectives with orthogonality constraints on the weights, and derive the exact (network width independent) learning dynamics of the SSL models trained using gradient descent on the Grassmannian manifold. We also argue that the infinite width approximation of SSL models significantly deviates from the neural tangent kernel approximations of supervised models. We numerically illustrate the validity of our theoretical findings, and discuss how the presented results provide a framework for further theoretical analysis of contrastive and non-contrastive SSL. |
2408.07966 | Shunxin Guo | Shunxin Guo, Hongsong Wang, Shuxia Lin, Zhiqiang Kou, Xin Geng | Addressing Skewed Heterogeneity via Federated Prototype Rectification
with Personalization | null | null | null | null | cs.LG cs.DC | http://creativecommons.org/licenses/by/4.0/ | Federated learning is an efficient framework designed to facilitate
collaborative model training across multiple distributed devices while
preserving user data privacy. A significant challenge of federated learning is
data-level heterogeneity, i.e., skewed or long-tailed distribution of private
data. Although various methods have been proposed to address this challenge,
most of them assume that the underlying global data is uniformly distributed
across all clients. This paper investigates federated learning with data-level
heterogeneity, provides a brief review, and redefines a more practical and challenging
setting called Skewed Heterogeneous Federated Learning (SHFL). Accordingly, we
propose a novel Federated Prototype Rectification with Personalization which
consists of two parts: Federated Personalization and Federated Prototype
Rectification. The former aims to construct balanced decision boundaries
between dominant and minority classes based on private data, while the latter
exploits both inter-class discrimination and intra-class consistency to rectify
empirical prototypes. Experiments on three popular benchmarks show that the
proposed approach outperforms current state-of-the-art methods and achieves
balanced performance in both personalization and generalization.
| [
{
"created": "Thu, 15 Aug 2024 06:26:46 GMT",
"version": "v1"
}
] | 2024-08-16 | [
[
"Guo",
"Shunxin",
""
],
[
"Wang",
"Hongsong",
""
],
[
"Lin",
"Shuxia",
""
],
[
"Kou",
"Zhiqiang",
""
],
[
"Geng",
"Xin",
""
]
] | Federated learning is an efficient framework designed to facilitate collaborative model training across multiple distributed devices while preserving user data privacy. A significant challenge of federated learning is data-level heterogeneity, i.e., skewed or long-tailed distribution of private data. Although various methods have been proposed to address this challenge, most of them assume that the underlying global data is uniformly distributed across all clients. This paper investigates data-level heterogeneity federated learning with a brief review and redefines a more practical and challenging setting called Skewed Heterogeneous Federated Learning (SHFL). Accordingly, we propose a novel Federated Prototype Rectification with Personalization which consists of two parts: Federated Personalization and Federated Prototype Rectification. The former aims to construct balanced decision boundaries between dominant and minority classes based on private data, while the latter exploits both inter-class discrimination and intra-class consistency to rectify empirical prototypes. Experiments on three popular benchmarks show that the proposed approach outperforms current state-of-the-art methods and achieves balanced performance in both personalization and generalization. |
2309.12708 | Yuxiang Yan | Yuxiang Yan, Boda Liu, Jianfei Ai, Qinbu Li, Ru Wan, Jian Pu | PointSSC: A Cooperative Vehicle-Infrastructure Point Cloud Benchmark for
Semantic Scene Completion | ICRA2024, oral & poster | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic Scene Completion (SSC) aims to jointly generate space occupancies
and semantic labels for complex 3D scenes. Most existing SSC models focus on
volumetric representations, which are memory-inefficient for large outdoor
spaces. Point clouds provide a lightweight alternative but existing benchmarks
lack outdoor point cloud scenes with semantic labels. To address this, we
introduce PointSSC, the first cooperative vehicle-infrastructure point cloud
benchmark for semantic scene completion. These scenes exhibit long-range
perception and minimal occlusion. We develop an automated annotation pipeline
leveraging Semantic Segment Anything to efficiently assign semantics. To
benchmark progress, we propose a LiDAR-based model with a Spatial-Aware
Transformer for global and local feature extraction and a Completion and
Segmentation Cooperative Module for joint completion and segmentation. PointSSC
provides a challenging testbed to drive advances in semantic point cloud
completion for real-world navigation. The code and datasets are available at
https://github.com/yyxssm/PointSSC.
| [
{
"created": "Fri, 22 Sep 2023 08:39:16 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Mar 2024 02:50:04 GMT",
"version": "v2"
}
] | 2024-03-08 | [
[
"Yan",
"Yuxiang",
""
],
[
"Liu",
"Boda",
""
],
[
"Ai",
"Jianfei",
""
],
[
"Li",
"Qinbu",
""
],
[
"Wan",
"Ru",
""
],
[
"Pu",
"Jian",
""
]
] | Semantic Scene Completion (SSC) aims to jointly generate space occupancies and semantic labels for complex 3D scenes. Most existing SSC models focus on volumetric representations, which are memory-inefficient for large outdoor spaces. Point clouds provide a lightweight alternative but existing benchmarks lack outdoor point cloud scenes with semantic labels. To address this, we introduce PointSSC, the first cooperative vehicle-infrastructure point cloud benchmark for semantic scene completion. These scenes exhibit long-range perception and minimal occlusion. We develop an automated annotation pipeline leveraging Semantic Segment Anything to efficiently assign semantics. To benchmark progress, we propose a LiDAR-based model with a Spatial-Aware Transformer for global and local feature extraction and a Completion and Segmentation Cooperative Module for joint completion and segmentation. PointSSC provides a challenging testbed to drive advances in semantic point cloud completion for real-world navigation. The code and datasets are available at https://github.com/yyxssm/PointSSC. |
1508.05189 | Hartmut Klauck | Ralph C. Bottesch, Dmitry Gavinsky, Hartmut Klauck | Correlation in Hard Distributions in Communication Complexity | null | null | null | null | cs.CC quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the effect that the amount of correlation in a bipartite
distribution has on the communication complexity of a problem under that
distribution. We introduce a new family of complexity measures that
interpolates between the two previously studied extreme cases: the (standard)
randomised communication complexity and the case of distributional complexity
under product distributions.
We give a tight characterisation of the randomised complexity of Disjointness
under distributions with mutual information $k$, showing that it is
$\Theta(\sqrt{n(k+1)})$ for all $0\leq k\leq n$. This smoothly interpolates
between the lower bounds of Babai, Frankl and Simon for the product
distribution case ($k=0$), and the bound of Razborov for the randomised case.
The upper bounds improve and generalise what was known for product
distributions, and imply that any tight bound for Disjointness needs
$\Omega(n)$ bits of mutual information in the corresponding distribution.
We study the same question in the distributional quantum setting, and show a
lower bound of $\Omega((n(k+1))^{1/4})$, and an upper bound, matching up to a
logarithmic factor.
We show that there are total Boolean functions $f_d$ on $2n$ inputs that have
distributional communication complexity $O(\log n)$ under all distributions of
information up to $o(n)$, while the (interactive) distributional complexity
maximised over all distributions is $\Theta(\log d)$ for $6n\leq d\leq
2^{n/100}$.
We show that in the setting of one-way communication under product
distributions, the dependence of communication cost on the allowed error
$\epsilon$ is multiplicative in $\log(1/\epsilon)$ -- the previous upper bounds
had the dependence of more than $1/\epsilon$.
| [
{
"created": "Fri, 21 Aug 2015 07:05:39 GMT",
"version": "v1"
}
] | 2015-08-25 | [
[
"Bottesch",
"Ralph C.",
""
],
[
"Gavinsky",
"Dmitry",
""
],
[
"Klauck",
"Hartmut",
""
]
] | We study the effect that the amount of correlation in a bipartite distribution has on the communication complexity of a problem under that distribution. We introduce a new family of complexity measures that interpolates between the two previously studied extreme cases: the (standard) randomised communication complexity and the case of distributional complexity under product distributions. We give a tight characterisation of the randomised complexity of Disjointness under distributions with mutual information $k$, showing that it is $\Theta(\sqrt{n(k+1)})$ for all $0\leq k\leq n$. This smoothly interpolates between the lower bounds of Babai, Frankl and Simon for the product distribution case ($k=0$), and the bound of Razborov for the randomised case. The upper bounds improve and generalise what was known for product distributions, and imply that any tight bound for Disjointness needs $\Omega(n)$ bits of mutual information in the corresponding distribution. We study the same question in the distributional quantum setting, and show a lower bound of $\Omega((n(k+1))^{1/4})$, and an upper bound, matching up to a logarithmic factor. We show that there are total Boolean functions $f_d$ on $2n$ inputs that have distributional communication complexity $O(\log n)$ under all distributions of information up to $o(n)$, while the (interactive) distributional complexity maximised over all distributions is $\Theta(\log d)$ for $6n\leq d\leq 2^{n/100}$. We show that in the setting of one-way communication under product distributions, the dependence of communication cost on the allowed error $\epsilon$ is multiplicative in $\log(1/\epsilon)$ -- the previous upper bounds had the dependence of more than $1/\epsilon$. |
1802.06157 | Matthieu Lequesne | Matthieu Lequesne, Jean-Pierre Tillich | Attack on the Edon-K Key Encapsulation Mechanism | Submitted to ISIT 2018 | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The key encapsulation mechanism Edon-K was proposed in response to the call
for post-quantum cryptography standardization issued by the National Institute
of Standards and Technology (NIST). This scheme is inspired by the McEliece
scheme but uses another family of codes defined over $\mathbb{F}_{2^{128}}$
instead of $\mathbb{F}_2$ and is not based on the Hamming metric. It allows
significantly shorter public keys than the McEliece scheme. In this paper, we
give a polynomial time algorithm that recovers the encapsulated secret. This
attack makes the scheme insecure for the intended use. We obtain this result by
observing that recovering the error in the McEliece scheme corresponding to
Edon-K can be viewed as a decoding problem for the rank-metric. We show that
the code used in Edon-K is in fact a super-code of a Low Rank Parity Check
(LRPC) code of very small rank (1 or 2). A suitable parity-check matrix for the
super-code of such low rank can be easily derived from the public key. We
then use this parity-check matrix in a decoding algorithm that was devised for
LRPC codes to recover the error. Finally we explain how we decapsulate the
secret once we have found the error.
| [
{
"created": "Fri, 16 Feb 2018 23:06:50 GMT",
"version": "v1"
}
] | 2018-02-20 | [
[
"Lequesne",
"Matthieu",
""
],
[
"Tillich",
"Jean-Pierre",
""
]
] | The key encapsulation mechanism Edon-K was proposed in response to the call for post-quantum cryptography standardization issued by the National Institute of Standards and Technology (NIST). This scheme is inspired by the McEliece scheme but uses another family of codes defined over $\mathbb{F}_{2^{128}}$ instead of $\mathbb{F}_2$ and is not based on the Hamming metric. It allows significantly shorter public keys than the McEliece scheme. In this paper, we give a polynomial time algorithm that recovers the encapsulated secret. This attack makes the scheme insecure for the intended use. We obtain this result by observing that recovering the error in the McEliece scheme corresponding to Edon-K can be viewed as a decoding problem for the rank-metric. We show that the code used in Edon-K is in fact a super-code of a Low Rank Parity Check (LRPC) code of very small rank (1 or 2). A suitable parity-check matrix for the super-code of such low rank can be easily derived from the public key. We then use this parity-check matrix in a decoding algorithm that was devised for LRPC codes to recover the error. Finally we explain how we decapsulate the secret once we have found the error. |
cs/0412095 | Mirela Damian | Mirela Damian and Joseph O'Rourke | Partitioning Regular Polygons into Circular Pieces II: Nonconvex
Partitions | 13 pages, 11 figures | null | null | null | cs.CG cs.DM | null | We explore optimal circular nonconvex partitions of regular k-gons. The
circularity of a polygon is measured by its aspect ratio: the ratio of the
radii of the smallest circumscribing circle to the largest inscribed disk. An
optimal circular partition minimizes the maximum ratio over all pieces in the
partition. We show that the equilateral triangle has an optimal 4-piece
nonconvex partition, the square an optimal 13-piece nonconvex partition, and
the pentagon has an optimal nonconvex partition with more than 20 thousand
pieces. For hexagons and beyond, we provide a general algorithm that approaches
optimality, but does not achieve it.
| [
{
"created": "Tue, 21 Dec 2004 05:41:11 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Damian",
"Mirela",
""
],
[
"O'Rourke",
"Joseph",
""
]
] | We explore optimal circular nonconvex partitions of regular k-gons. The circularity of a polygon is measured by its aspect ratio: the ratio of the radii of the smallest circumscribing circle to the largest inscribed disk. An optimal circular partition minimizes the maximum ratio over all pieces in the partition. We show that the equilateral triangle has an optimal 4-piece nonconvex partition, the square an optimal 13-piece nonconvex partition, and the pentagon has an optimal nonconvex partition with more than 20 thousand pieces. For hexagons and beyond, we provide a general algorithm that approaches optimality, but does not achieve it. |
2109.10750 | Hongbo Zhang | Hongbo Zhang, Yunshuang Li, Yipin Guo, Xinyi Chen, Qinyuan Ren | Control of Pneumatic Artificial Muscles with SNN-based Cerebellar-like
Model | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Soft robotics technologies have gained growing interest in recent years,
enabling various applications from manufacturing to human-robot
interaction. Pneumatic artificial muscle (PAM), a typical soft actuator, has
been widely applied to soft robots. The compliance and resilience of soft
actuators allow soft robots to behave compliantly when interacting with
unstructured environments, while the utilization of soft actuators also
introduces nonlinearity and uncertainty. Inspired by the cerebellum's vital
functions in controlling human physical movement, a neural network model of
the cerebellum based on spiking neural networks (SNNs) is designed. This model is
used as a feed-forward controller in controlling a 1-DOF robot arm driven by
PAMs. The simulation results show that this cerebellum-based system achieves
good performance and improves the system's responsiveness.
| [
{
"created": "Wed, 22 Sep 2021 14:11:05 GMT",
"version": "v1"
}
] | 2021-09-23 | [
[
"Zhang",
"Hongbo",
""
],
[
"Li",
"Yunshuang",
""
],
[
"Guo",
"Yipin",
""
],
[
"Chen",
"Xinyi",
""
],
[
"Ren",
"Qinyuan",
""
]
] | Soft robotics technologies have gained growing interest in recent years, which allows various applications from manufacturing to human-robot interaction. Pneumatic artificial muscle (PAM), a typical soft actuator, has been widely applied to soft robots. The compliance and resilience of soft actuators allow soft robots to behave compliant when interacting with unstructured environments, while the utilization of soft actuators also introduces nonlinearity and uncertainty. Inspired by Cerebellum's vital functions in control of human's physical movement, a neural network model of Cerebellum based on spiking neuron networks (SNNs) is designed. This model is used as a feed-forward controller in controlling a 1-DOF robot arm driven by PAMs. The simulation results show that this Cerebellar-based system achieves good performance and increases the system's response. |
cs/0511016 | Santo Fortunato Dr | Santo Fortunato, Marian Boguna, Alessandro Flammini, Filippo Menczer | How to make the top ten: Approximating PageRank from in-degree | 8 pages, 7 figures, 2 tables | null | null | null | cs.IR physics.soc-ph | null | PageRank has become a key element in the success of search engines, allowing
them to rank the most important hits in the top screen of results. One key aspect
that distinguishes PageRank from other prestige measures such as in-degree is
its global nature. From the information provider perspective, this makes it
difficult or impossible to predict how their pages will be ranked. Consequently
a market has emerged for the optimization of search engine results. Here we
study the accuracy with which PageRank can be approximated by in-degree, a
local measure made freely available by search engines. Theoretical and
empirical analyses lead us to conclude that, given the weak degree correlations in
the Web link graph, the approximation can be relatively accurate, giving
service and information providers an effective new marketing tool.
| [
{
"created": "Thu, 3 Nov 2005 23:01:50 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Fortunato",
"Santo",
""
],
[
"Boguna",
"Marian",
""
],
[
"Flammini",
"Alessandro",
""
],
[
"Menczer",
"Filippo",
""
]
] | PageRank has become a key element in the success of search engines, allowing to rank the most important hits in the top screen of results. One key aspect that distinguishes PageRank from other prestige measures such as in-degree is its global nature. From the information provider perspective, this makes it difficult or impossible to predict how their pages will be ranked. Consequently a market has emerged for the optimization of search engine results. Here we study the accuracy with which PageRank can be approximated by in-degree, a local measure made freely available by search engines. Theoretical and empirical analyses lead to conclude that given the weak degree correlations in the Web link graph, the approximation can be relatively accurate, giving service and information providers an effective new marketing tool. |
2105.03137 | Karl-Ludwig Besser | Eduard Jorswieck, Andrew Lonnstrom, Karl-Ludwig Besser, Stefan Rothe,
Juergen W. Czarske | Achievable Physical-Layer Secrecy in Multi-Mode Fiber Channels using
Artificial Noise | 5 pages, 2 figures | null | 10.1109/ISWCS49558.2021.9562176 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reliable and secure communication is an important aspect of modern fiber
optic communication. In this work we consider a multi-mode fiber (MMF) channel
wiretapped by an eavesdropper. We assume the transmitter knows the legitimate
channel, but statistical knowledge of the eavesdropper's channel only. We
propose a transmission scheme with artificial noise (AN) for such a channel. In
particular, we formulate the corresponding optimization problem which aims to
maximize the average secrecy rate and develop an algorithm to solve it. We
apply this algorithm to actual measured MMF channels. As real fiber
measurements show, for a 55 mode MMF we can achieve positive average secrecy
rates with the proper use of AN. Furthermore, the gain compared to standard
precoding and power allocation schemes is illustrated.
| [
{
"created": "Fri, 7 May 2021 09:29:25 GMT",
"version": "v1"
}
] | 2021-10-22 | [
[
"Jorswieck",
"Eduard",
""
],
[
"Lonnstrom",
"Andrew",
""
],
[
"Besser",
"Karl-Ludwig",
""
],
[
"Rothe",
"Stefan",
""
],
[
"Czarske",
"Juergen W.",
""
]
] | Reliable and secure communication is an important aspect of modern fiber optic communication. In this work we consider a multi-mode fiber (MMF) channel wiretapped by an eavesdropper. We assume the transmitter knows the legitimate channel, but statistical knowledge of the eavesdropper's channel only. We propose a transmission scheme with artificial noise (AN) for such a channel. In particular, we formulate the corresponding optimization problem which aims to maximize the average secrecy rate and develop an algorithm to solve it. We apply this algorithm to actual measured MMF channels. As real fiber measurements show, for a 55 mode MMF we can achieve positive average secrecy rates with the proper use of AN. Furthermore, the gain compared to standard precoding and power allocation schemes is illustrated. |
1602.05568 | Mohammad Taha Bahadori | Edward Choi, Mohammad Taha Bahadori, Elizabeth Searles, Catherine
Coffey, Jimeng Sun | Multi-layer Representation Learning for Medical Concepts | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning efficient representations for concepts has been proven to be an
important basis for many applications such as machine translation or document
classification. Proper representations of medical concepts such as diagnosis,
medication, procedure codes and visits will have broad applications in
healthcare analytics. However, in Electronic Health Records (EHR) the visit
sequences of patients include multiple concepts (diagnosis, procedure, and
medication codes) per visit. This structure provides two types of relational
information, namely sequential order of visits and co-occurrence of the codes
within each visit. In this work, we propose Med2Vec, which not only learns
distributed representations for both medical codes and visits from a large EHR
dataset with over 3 million visits, but also allows us to interpret the learned
representations confirmed positively by clinical experts. In the experiments,
Med2Vec displays significant improvement in key medical applications compared
to popular baselines such as Skip-gram, GloVe and stacked autoencoder, while
providing clinically meaningful interpretation.
| [
{
"created": "Wed, 17 Feb 2016 20:55:40 GMT",
"version": "v1"
}
] | 2016-02-18 | [
[
"Choi",
"Edward",
""
],
[
"Bahadori",
"Mohammad Taha",
""
],
[
"Searles",
"Elizabeth",
""
],
[
"Coffey",
"Catherine",
""
],
[
"Sun",
"Jimeng",
""
]
] | Learning efficient representations for concepts has been proven to be an important basis for many applications such as machine translation or document classification. Proper representations of medical concepts such as diagnosis, medication, procedure codes and visits will have broad applications in healthcare analytics. However, in Electronic Health Records (EHR) the visit sequences of patients include multiple concepts (diagnosis, procedure, and medication codes) per visit. This structure provides two types of relational information, namely sequential order of visits and co-occurrence of the codes within each visit. In this work, we propose Med2Vec, which not only learns distributed representations for both medical codes and visits from a large EHR dataset with over 3 million visits, but also allows us to interpret the learned representations confirmed positively by clinical experts. In the experiments, Med2Vec displays significant improvement in key medical applications compared to popular baselines such as Skip-gram, GloVe and stacked autoencoder, while providing clinically meaningful interpretation. |
2309.08776 | Josselin Somerville Roberts | Josselin Somerville Roberts, Julia Di | Projected Task-Specific Layers for Multi-Task Reinforcement Learning | null | ICRA 2024 | null | null | cs.LG cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | Multi-task reinforcement learning could enable robots to scale across a wide
variety of manipulation tasks in homes and workplaces. However, generalizing
from one task to another and mitigating negative task interference still
remains a challenge. Addressing this challenge by successfully sharing
information across tasks will depend on how well the structure underlying the
tasks is captured. In this work, we introduce our new architecture, Projected
Task-Specific Layers (PTSL), that leverages a common policy with dense
task-specific corrections through task-specific layers to better express shared
and variable task information. We then show that our model outperforms the
state of the art on the MT10 and MT50 benchmarks of Meta-World consisting of 10
and 50 goal-conditioned tasks for a Sawyer arm.
| [
{
"created": "Fri, 15 Sep 2023 21:42:06 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Mar 2024 18:51:45 GMT",
"version": "v2"
}
] | 2024-03-07 | [
[
"Roberts",
"Josselin Somerville",
""
],
[
"Di",
"Julia",
""
]
] | Multi-task reinforcement learning could enable robots to scale across a wide variety of manipulation tasks in homes and workplaces. However, generalizing from one task to another and mitigating negative task interference still remains a challenge. Addressing this challenge by successfully sharing information across tasks will depend on how well the structure underlying the tasks is captured. In this work, we introduce our new architecture, Projected Task-Specific Layers (PTSL), that leverages a common policy with dense task-specific corrections through task-specific layers to better express shared and variable task information. We then show that our model outperforms the state of the art on the MT10 and MT50 benchmarks of Meta-World consisting of 10 and 50 goal-conditioned tasks for a Sawyer arm. |
2308.06672 | Yong Wang | Yong Wang and Yanzhong Yao and Jiawei Guo and Zhiming Gao | A practical PINN framework for multi-scale problems with multi-magnitude
loss terms | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For multi-scale problems, the conventional physics-informed neural networks
(PINNs) face some challenges in obtaining available predictions. In this paper,
based on PINNs, we propose a practical deep learning framework for multi-scale
problems by reconstructing the loss function and associating it with special
neural network architectures. New PINN methods derived from the improved PINN
framework differ from the conventional PINN method mainly in two aspects.
First, the new methods use a novel loss function by modifying the standard loss
function through a (grouping) regularization strategy. The regularization
strategy implements a different power operation on each loss term so that all
loss terms composing the loss function are of approximately the same order of
magnitude, which allows all loss terms to be optimized synchronously during the
optimization process. Second, for the multi-frequency or high-frequency
problems, in addition to using the modified loss function, new methods upgrade
the neural network architecture from the common fully-connected neural network
to special network architectures such as the Fourier feature architecture, and
the integrated architecture developed by us. The combination of the above two
techniques leads to a significant improvement in the computational accuracy of
multi-scale problems. Several challenging numerical examples demonstrate the
effectiveness of the proposed methods. The proposed methods not only
significantly outperform the conventional PINN method in terms of computational
efficiency and computational accuracy, but also compare favorably with the
state-of-the-art methods in the recent literature. The improved PINN framework
facilitates better application of PINNs to multi-scale problems.
| [
{
"created": "Sun, 13 Aug 2023 03:26:01 GMT",
"version": "v1"
},
{
"created": "Sun, 29 Oct 2023 16:22:20 GMT",
"version": "v2"
}
] | 2023-10-31 | [
[
"Wang",
"Yong",
""
],
[
"Yao",
"Yanzhong",
""
],
[
"Guo",
"Jiawei",
""
],
[
"Gao",
"Zhiming",
""
]
] | For multi-scale problems, the conventional physics-informed neural networks (PINNs) face some challenges in obtaining available predictions. In this paper, based on PINNs, we propose a practical deep learning framework for multi-scale problems by reconstructing the loss function and associating it with special neural network architectures. New PINN methods derived from the improved PINN framework differ from the conventional PINN method mainly in two aspects. First, the new methods use a novel loss function by modifying the standard loss function through a (grouping) regularization strategy. The regularization strategy implements a different power operation on each loss term so that all loss terms composing the loss function are of approximately the same order of magnitude, which makes all loss terms be optimized synchronously during the optimization process. Second, for the multi-frequency or high-frequency problems, in addition to using the modified loss function, new methods upgrade the neural network architecture from the common fully-connected neural network to special network architectures such as the Fourier feature architecture, and the integrated architecture developed by us. The combination of the above two techniques leads to a significant improvement in the computational accuracy of multi-scale problems. Several challenging numerical examples demonstrate the effectiveness of the proposed methods. The proposed methods not only significantly outperform the conventional PINN method in terms of computational efficiency and computational accuracy, but also compare favorably with the state-of-the-art methods in the recent literature. The improved PINN framework facilitates better application of PINNs to multi-scale problems. |
1106.3858 | Yi Gai | Yi Gai, Hua Liu, and Bhaskar Krishnamachari | A Packet Dropping Mechanism for Efficient Operation of M/M/1 Queues with
Selfish Users | This work is an extended version of the conference paper: Y. Gai, H.
Liu and B. Krishnamachari, "A packet dropping-based incentive mechanism for
M/M/1 queues with selfish users", the 30th IEEE International Conference on
Computer Communications (IEEE INFOCOM 2011), China, April, 2011 | null | 10.1016/j.comnet.2015.12.009 | null | cs.GT cs.NI math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a fundamental game theoretic problem concerning selfish users
contributing packets to an M/M/1 queue. In this game, each user controls its
own input rate so as to optimize a desired tradeoff between throughput and
delay. We first show that the original game has an inefficient Nash Equilibrium
(NE), with a Price of Anarchy (PoA) that scales linearly or worse in the number
of users. In order to improve the outcome efficiency, we propose an easily
implementable mechanism design whereby the server randomly drops packets with a
probability that is a function of the total arrival rate. We show that this
results in a modified M/M/1 queueing game that is an ordinal potential game
with at least one NE. In particular, for a linear packet dropping function,
which is similar to the Random Early Detection (RED) algorithm used in Internet
Congestion Control, we prove that there is a unique NE. We also show that the
simple best response dynamic converges to this unique equilibrium. Finally, for
this scheme, we prove that the social welfare (expressed either as the
summation of utilities of all players, or as the summation of the logarithm of
utilities of all players) at the equilibrium point can be arbitrarily close to
the social welfare at the global optimal point, i.e. the PoA can be made
arbitrarily close to 1. We also study the impact of arrival rate estimation
error on the PoA through simulations.
| [
{
"created": "Mon, 20 Jun 2011 10:40:21 GMT",
"version": "v1"
}
] | 2016-11-17 | [
[
"Gai",
"Yi",
""
],
[
"Liu",
"Hua",
""
],
[
"Krishnamachari",
"Bhaskar",
""
]
] | We consider a fundamental game theoretic problem concerning selfish users contributing packets to an M/M/1 queue. In this game, each user controls its own input rate so as to optimize a desired tradeoff between throughput and delay. We first show that the original game has an inefficient Nash Equilibrium (NE), with a Price of Anarchy (PoA) that scales linearly or worse in the number of users. In order to improve the outcome efficiency, we propose an easily implementable mechanism design whereby the server randomly drops packets with a probability that is a function of the total arrival rate. We show that this results in a modified M/M/1 queueing game that is an ordinal potential game with at least one NE. In particular, for a linear packet dropping function, which is similar to the Random Early Detection (RED) algorithm used in Internet Congestion Control, we prove that there is a unique NE. We also show that the simple best response dynamic converges to this unique equilibrium. Finally, for this scheme, we prove that the social welfare (expressed either as the summation of utilities of all players, or as the summation of the logarithm of utilities of all players) at the equilibrium point can be arbitrarily close to the social welfare at the global optimal point, i.e. the PoA can be made arbitrarily close to 1. We also study the impact of arrival rate estimation error on the PoA through simulations. |
2008.10038 | Pengfei Ge | Pengfei Ge, Chuan-Xian Ren, Jiashi Feng, Shuicheng Yan | Dual Adversarial Auto-Encoders for Clustering | null | null | 10.1109/TNNLS.2019.2919948 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a powerful approach for exploratory data analysis, unsupervised clustering
is a fundamental task in computer vision and pattern recognition. Many
clustering algorithms have been developed, but most of them perform
unsatisfactorily on the data with complex structures. Recently, Adversarial
Auto-Encoder (AAE) shows effectiveness on tackling such data by combining
Auto-Encoder (AE) and adversarial training, but it cannot effectively extract
classification information from the unlabeled data. In this work, we propose
Dual Adversarial Auto-encoder (Dual-AAE) which simultaneously maximizes the
likelihood function and mutual information between observed examples and a
subset of latent variables. By performing variational inference on the
objective function of Dual-AAE, we derive a new reconstruction loss which can
be optimized by training a pair of Auto-encoders. Moreover, to avoid mode
collapse, we introduce the clustering regularization term for the category
variable. Experiments on four benchmarks show that Dual-AAE achieves superior
performance over state-of-the-art clustering methods. Besides, by adding a
reject option, the clustering accuracy of Dual-AAE can reach that of supervised
CNN algorithms. Dual-AAE can also be used for disentangling style and content
of images without using supervised information.
| [
{
"created": "Sun, 23 Aug 2020 13:16:34 GMT",
"version": "v1"
}
] | 2020-08-25 | [
[
"Ge",
"Pengfei",
""
],
[
"Ren",
"Chuan-Xian",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Yan",
"Shuicheng",
""
]
] | As a powerful approach for exploratory data analysis, unsupervised clustering is a fundamental task in computer vision and pattern recognition. Many clustering algorithms have been developed, but most of them perform unsatisfactorily on the data with complex structures. Recently, Adversarial Auto-Encoder (AAE) shows effectiveness on tackling such data by combining Auto-Encoder (AE) and adversarial training, but it cannot effectively extract classification information from the unlabeled data. In this work, we propose Dual Adversarial Auto-encoder (Dual-AAE) which simultaneously maximizes the likelihood function and mutual information between observed examples and a subset of latent variables. By performing variational inference on the objective function of Dual-AAE, we derive a new reconstruction loss which can be optimized by training a pair of Auto-encoders. Moreover, to avoid mode collapse, we introduce the clustering regularization term for the category variable. Experiments on four benchmarks show that Dual-AAE achieves superior performance over state-of-the-art clustering methods. Besides, by adding a reject option, the clustering accuracy of Dual-AAE can reach that of supervised CNN algorithms. Dual-AAE can also be used for disentangling style and content of images without using supervised information. |
1607.03830 | Jian Du | Jian Du and Yik-Chung Wu | Distributed Clock Skew and Offset Estimation in Wireless Sensor
Networks: Asynchronous Algorithm and Convergence Analysis | null | null | null | null | cs.DC cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a fully distributed algorithm for joint clock skew
and offset estimation in wireless sensor networks based on belief propagation.
In the proposed algorithm, each node can estimate its clock skew and offset in
a completely distributed and asynchronous way: some nodes may update their
estimates more frequently than others using outdated messages from neighboring
nodes. In addition, the proposed algorithm is robust to random packet loss.
Such an algorithm does not require any centralized information processing or
coordination, and is scalable with network size. The proposed algorithm
represents a unified framework that encompasses both classes of synchronous and
asynchronous algorithms for network-wide clock synchronization. It is shown
analytically that the proposed asynchronous algorithm converges to the optimal
estimates with estimation mean-square-error at each node approaching the
centralized Cram\'er-Rao bound under any network topology. Simulation results
further show that the convergence speed is faster than that corresponding to a
synchronous algorithm.
| [
{
"created": "Sun, 10 Jul 2016 15:38:22 GMT",
"version": "v1"
}
] | 2016-07-14 | [
[
"Du",
"Jian",
""
],
[
"Wu",
"Yik-Chung",
""
]
] | In this paper, we propose a fully distributed algorithm for joint clock skew and offset estimation in wireless sensor networks based on belief propagation. In the proposed algorithm, each node can estimate its clock skew and offset in a completely distributed and asynchronous way: some nodes may update their estimates more frequently than others using outdated messages from neighboring nodes. In addition, the proposed algorithm is robust to random packet loss. Such an algorithm does not require any centralized information processing or coordination, and is scalable with network size. The proposed algorithm represents a unified framework that encompasses both classes of synchronous and asynchronous algorithms for network-wide clock synchronization. It is shown analytically that the proposed asynchronous algorithm converges to the optimal estimates with estimation mean-square-error at each node approaching the centralized Cram\'er-Rao bound under any network topology. Simulation results further show that the convergence speed is faster than that corresponding to a synchronous algorithm.
1912.01438 | Zirui Wang | Zirui Wang, Shuda Li, Henry Howard-Jenkins, Victor Adrian Prisacariu,
Min Chen | FlowNet3D++: Geometric Losses For Deep Scene Flow Estimation | WACV 2020 | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present FlowNet3D++, a deep scene flow estimation network. Inspired by
classical methods, FlowNet3D++ incorporates geometric constraints in the form
of point-to-plane distance and angular alignment between individual vectors in
the flow field into FlowNet3D. We demonstrate that the addition of these
geometric loss terms improves the previous state-of-the-art FlowNet3D accuracy from
57.85% to 63.43%. To further demonstrate the effectiveness of our geometric
constraints, we propose a benchmark for flow estimation on the task of dynamic
3D reconstruction, thus providing a more holistic and practical measure of
performance than the breakdown of individual metrics previously used to
evaluate scene flow. This is made possible through the contribution of a novel
pipeline to integrate point-based scene flow predictions into a global dense
volume. FlowNet3D++ achieves up to a 15.0% reduction in reconstruction error
over FlowNet3D, and up to a 35.2% improvement over KillingFusion alone. We will
release our scene flow estimation code later.
| [
{
"created": "Tue, 3 Dec 2019 14:53:56 GMT",
"version": "v1"
},
{
"created": "Tue, 10 Dec 2019 11:16:06 GMT",
"version": "v2"
},
{
"created": "Mon, 26 Apr 2021 14:00:11 GMT",
"version": "v3"
}
] | 2021-04-27 | [
[
"Wang",
"Zirui",
""
],
[
"Li",
"Shuda",
""
],
[
"Howard-Jenkins",
"Henry",
""
],
[
"Prisacariu",
"Victor Adrian",
""
],
[
"Chen",
"Min",
""
]
] | We present FlowNet3D++, a deep scene flow estimation network. Inspired by classical methods, FlowNet3D++ incorporates geometric constraints in the form of point-to-plane distance and angular alignment between individual vectors in the flow field into FlowNet3D. We demonstrate that the addition of these geometric loss terms improves the previous state-of-the-art FlowNet3D accuracy from 57.85% to 63.43%. To further demonstrate the effectiveness of our geometric constraints, we propose a benchmark for flow estimation on the task of dynamic 3D reconstruction, thus providing a more holistic and practical measure of performance than the breakdown of individual metrics previously used to evaluate scene flow. This is made possible through the contribution of a novel pipeline to integrate point-based scene flow predictions into a global dense volume. FlowNet3D++ achieves up to a 15.0% reduction in reconstruction error over FlowNet3D, and up to a 35.2% improvement over KillingFusion alone. We will release our scene flow estimation code later.
1901.08479 | Jaehoon Cha | Jaehoon Cha, Kyeong Soo Kim, Sanghyuk Lee | On the Transformation of Latent Space in Autoencoders | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Noting the importance of the latent variables in inference and learning, we
propose a novel framework for autoencoders based on the homeomorphic
transformation of latent variables, which could reduce the distance between
vectors in the transformed space, while preserving the topological properties
of the original space, and investigate the effect of the latent space
transformation on learning generative models and denoising corrupted data. The
experimental results demonstrate that our generative and denoising models based
on the proposed framework can provide better performance than conventional
variational and denoising autoencoders due to the transformation, where we
evaluate the performance of generative and denoising models in terms of the
Hausdorff distance between the sets of training and processed (i.e., either
generated or denoised) images, which can objectively measure their differences,
as well as through direct comparison of the visual characteristics of the
processed images.
| [
{
"created": "Thu, 24 Jan 2019 16:13:24 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Jun 2019 14:35:57 GMT",
"version": "v2"
}
] | 2019-06-04 | [
[
"Cha",
"Jaehoon",
""
],
[
"Kim",
"Kyeong Soo",
""
],
[
"Lee",
"Sanghyuk",
""
]
] | Noting the importance of the latent variables in inference and learning, we propose a novel framework for autoencoders based on the homeomorphic transformation of latent variables, which could reduce the distance between vectors in the transformed space, while preserving the topological properties of the original space, and investigate the effect of the latent space transformation on learning generative models and denoising corrupted data. The experimental results demonstrate that our generative and denoising models based on the proposed framework can provide better performance than conventional variational and denoising autoencoders due to the transformation, where we evaluate the performance of generative and denoising models in terms of the Hausdorff distance between the sets of training and processed (i.e., either generated or denoised) images, which can objectively measure their differences, as well as through direct comparison of the visual characteristics of the processed images.
1708.08552 | Xuanqing Liu | Xuanqing Liu, Cho-Jui Hsieh, Jason D. Lee and Yuekai Sun | An inexact subsampled proximal Newton-type method for large-scale
machine learning | null | null | null | null | cs.LG cs.NA stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a fast proximal Newton-type algorithm for minimizing regularized
finite sums that returns an $\epsilon$-suboptimal point in
$\tilde{\mathcal{O}}(d(n + \sqrt{\kappa d})\log(\frac{1}{\epsilon}))$ FLOPS,
where $n$ is the number of samples, $d$ is the feature dimension, and $\kappa$ is the
condition number. As long as $n > d$, the proposed method is more efficient
than state-of-the-art accelerated stochastic first-order methods for non-smooth
regularizers, which require $\tilde{\mathcal{O}}(d(n + \sqrt{\kappa
n})\log(\frac{1}{\epsilon}))$ FLOPS. The key idea is to form the subsampled
Newton subproblem in a way that preserves the finite sum structure of the
objective, thereby allowing us to leverage recent developments in stochastic
first-order methods to solve the subproblem. Experimental results verify that
the proposed algorithm outperforms previous algorithms for $\ell_1$-regularized
logistic regression on real datasets.
| [
{
"created": "Mon, 28 Aug 2017 22:47:48 GMT",
"version": "v1"
}
] | 2017-08-30 | [
[
"Liu",
"Xuanqing",
""
],
[
"Hsieh",
"Cho-Jui",
""
],
[
"Lee",
"Jason D.",
""
],
[
"Sun",
"Yuekai",
""
]
] | We propose a fast proximal Newton-type algorithm for minimizing regularized finite sums that returns an $\epsilon$-suboptimal point in $\tilde{\mathcal{O}}(d(n + \sqrt{\kappa d})\log(\frac{1}{\epsilon}))$ FLOPS, where $n$ is the number of samples, $d$ is the feature dimension, and $\kappa$ is the condition number. As long as $n > d$, the proposed method is more efficient than state-of-the-art accelerated stochastic first-order methods for non-smooth regularizers, which require $\tilde{\mathcal{O}}(d(n + \sqrt{\kappa n})\log(\frac{1}{\epsilon}))$ FLOPS. The key idea is to form the subsampled Newton subproblem in a way that preserves the finite sum structure of the objective, thereby allowing us to leverage recent developments in stochastic first-order methods to solve the subproblem. Experimental results verify that the proposed algorithm outperforms previous algorithms for $\ell_1$-regularized logistic regression on real datasets.
2405.00028 | Pavan L. Veluvali | Pavan L. Veluvali, Jan Heiland, Peter Benner | MaRDIFlow: A CSE workflow framework for abstracting meta-data from FAIR
computational experiments | 13 pages, 7 figures | null | null | null | cs.DC | http://creativecommons.org/licenses/by/4.0/ | Numerical algorithms and computational tools are instrumental in navigating
and addressing complex simulation and data processing tasks. The exponential
growth of metadata and parameter-driven simulations has led to an increasing
demand for automated workflows that can replicate computational experiments
across platforms. In general, a computational workflow is defined as a
sequential description for accomplishing a scientific objective, often
described by tasks and their associated data dependencies. If characterized
through input-output relation, workflow components can be structured to allow
interchangeable utilization of individual tasks and their accompanying
metadata. In the present work, we develop a novel computational framework,
namely, MaRDIFlow, that focuses on the automation of abstracting meta-data
embedded in an ontology of mathematical objects. This framework also
effectively addresses the inherent execution and environmental dependencies by
incorporating them into multi-layered descriptions. Additionally, we
demonstrate a working prototype with example use cases and methodically
integrate them into our workflow tool and data provenance framework.
Furthermore, we show how to best apply the FAIR principles to computational
workflows, such that abstracted components are Findable, Accessible,
Interoperable, and Reusable in nature.
| [
{
"created": "Wed, 28 Feb 2024 16:13:17 GMT",
"version": "v1"
}
] | 2024-05-02 | [
[
"Veluvali",
"Pavan L.",
""
],
[
"Heiland",
"Jan",
""
],
[
"Benner",
"Peter",
""
]
] | Numerical algorithms and computational tools are instrumental in navigating and addressing complex simulation and data processing tasks. The exponential growth of metadata and parameter-driven simulations has led to an increasing demand for automated workflows that can replicate computational experiments across platforms. In general, a computational workflow is defined as a sequential description for accomplishing a scientific objective, often described by tasks and their associated data dependencies. If characterized through input-output relation, workflow components can be structured to allow interchangeable utilization of individual tasks and their accompanying metadata. In the present work, we develop a novel computational framework, namely, MaRDIFlow, that focuses on the automation of abstracting meta-data embedded in an ontology of mathematical objects. This framework also effectively addresses the inherent execution and environmental dependencies by incorporating them into multi-layered descriptions. Additionally, we demonstrate a working prototype with example use cases and methodically integrate them into our workflow tool and data provenance framework. Furthermore, we show how to best apply the FAIR principles to computational workflows, such that abstracted components are Findable, Accessible, Interoperable, and Reusable in nature. |
2103.13814 | Ni Xiao | Ni Xiao and Lei Zhang | Dynamic Weighted Learning for Unsupervised Domain Adaptation | This paper has been accepted by CVPR2021 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unsupervised domain adaptation (UDA) aims to improve the classification
performance on an unlabeled target domain by leveraging information from a
fully labeled source domain. Recent approaches explore domain-invariant and
class-discriminant representations to tackle this task. These methods, however,
ignore the interaction between domain alignment learning and class
discrimination learning. As a result, the missing or inadequate tradeoff
between domain alignment and class discrimination are prone to the problem of
negative transfer. In this paper, we propose Dynamic Weighted Learning (DWL) to
avoid the discriminability vanishing problem caused by excessive alignment
learning and domain misalignment problem caused by excessive discriminant
learning. Technically, DWL dynamically weights the learning losses of alignment
and discriminability by introducing the degree of alignment and
discriminability. Besides, the problem of sample imbalance across domains is
first considered in our work, and we solve the problem by weighting the samples
to guarantee information balance across domains. Extensive experiments
demonstrate that DWL has an excellent performance in several benchmark
datasets.
| [
{
"created": "Mon, 22 Mar 2021 16:11:04 GMT",
"version": "v1"
}
] | 2021-03-26 | [
[
"Xiao",
"Ni",
""
],
[
"Zhang",
"Lei",
""
]
] | Unsupervised domain adaptation (UDA) aims to improve the classification performance on an unlabeled target domain by leveraging information from a fully labeled source domain. Recent approaches explore domain-invariant and class-discriminant representations to tackle this task. These methods, however, ignore the interaction between domain alignment learning and class discrimination learning. As a result, the missing or inadequate tradeoff between domain alignment and class discrimination leads to the problem of negative transfer. In this paper, we propose Dynamic Weighted Learning (DWL) to avoid the discriminability vanishing problem caused by excessive alignment learning and domain misalignment problem caused by excessive discriminant learning. Technically, DWL dynamically weights the learning losses of alignment and discriminability by introducing the degree of alignment and discriminability. Besides, the problem of sample imbalance across domains is first considered in our work, and we solve the problem by weighting the samples to guarantee information balance across domains. Extensive experiments demonstrate that DWL has an excellent performance in several benchmark datasets.
1603.01954 | Nan Wu | Nan Wu, Zheyu Liu, Fei Qiao, Xiaojun Guo, Qi Wei, Yuan Xie, Huazhong
Yang | A Real-Time and Energy-Efficient Implementation of
Difference-of-Gaussian with Flexible Thin-Film Transistors | null | null | null | null | cs.ET | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With many advantageous features, such as softness and better biocompatibility,
flexible electronic devices have developed rapidly and increasingly attracted
attention. Many current applications of flexible devices are sensors and
drivers, while there is almost no use of them for complex computation,
since flexible devices have lower electron mobility, simple structures, and large
process variation. In this paper, we propose an innovative method that enables
flexible devices to implement real-time and energy-efficient
Difference-of-Gaussian, illustrating the feasibility and potential for them
to achieve complicated real-time computation in future-generation products.
| [
{
"created": "Mon, 7 Mar 2016 06:34:19 GMT",
"version": "v1"
}
] | 2016-03-08 | [
[
"Wu",
"Nan",
""
],
[
"Liu",
"Zheyu",
""
],
[
"Qiao",
"Fei",
""
],
[
"Guo",
"Xiaojun",
""
],
[
"Wei",
"Qi",
""
],
[
"Xie",
"Yuan",
""
],
[
"Yang",
"Huazhong",
""
]
] | With many advantageous features, such as softness and better biocompatibility, flexible electronic devices have developed rapidly and increasingly attracted attention. Many current applications of flexible devices are sensors and drivers, while there is almost no use of them for complex computation, since flexible devices have lower electron mobility, simple structures, and large process variation. In this paper, we propose an innovative method that enables flexible devices to implement real-time and energy-efficient Difference-of-Gaussian, illustrating the feasibility and potential for them to achieve complicated real-time computation in future-generation products.
0804.4753 | Icius Committee | C. E. Huang, and J. S. Chen | Wavelet Based Iterative Learning Control with Fuzzy PD Feedback for
Position Tracking of A Pneumatic Servo System | Uploaded by ICIUS2007 Conference Organizer on behalf of the
author(s). 8 pages, 9 figures, 1 tables | Proceedings of the International Conference on Intelligent
Unmanned System (ICIUS 2007), Bali, Indonesia, October 24-25, 2007, Paper No.
ICIUS2007-A001 | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a wavelet-based iterative learning control (WILC) scheme with
Fuzzy PD feedback is presented for a pneumatic control system with nonsmooth
nonlinearities and uncertain parameters. The wavelet transform is employed to
extract the learnable dynamics from the measured output signal before it can be
used to update the control profile. The wavelet transform is adopted to
decompose the original signal into many low-resolution signals that contain the
learnable and unlearnable parts. The desired control profile is then compared
with the learnable part of the transformed signal. Thus, the effects from
unlearnable dynamics on the controlled system can be attenuated by a Fuzzy PD
feedback controller. As for the rules of the Fuzzy PD controller in the feedback
loop, a genetic algorithm (GA) is employed to search for the optimal inference
rules. A proportional-valve controlled pneumatic cylinder actuator
system is used as the control target for simulation. Simulation results have
shown a much-improved position-tracking performance.
| [
{
"created": "Wed, 30 Apr 2008 08:22:19 GMT",
"version": "v1"
}
] | 2008-05-01 | [
[
"Huang",
"C. E.",
""
],
[
"Chen",
"J. S.",
""
]
] | In this paper, a wavelet-based iterative learning control (WILC) scheme with Fuzzy PD feedback is presented for a pneumatic control system with nonsmooth nonlinearities and uncertain parameters. The wavelet transform is employed to extract the learnable dynamics from the measured output signal before it can be used to update the control profile. The wavelet transform is adopted to decompose the original signal into many low-resolution signals that contain the learnable and unlearnable parts. The desired control profile is then compared with the learnable part of the transformed signal. Thus, the effects from unlearnable dynamics on the controlled system can be attenuated by a Fuzzy PD feedback controller. As for the rules of the Fuzzy PD controller in the feedback loop, a genetic algorithm (GA) is employed to search for the optimal inference rules. A proportional-valve controlled pneumatic cylinder actuator system is used as the control target for simulation. Simulation results have shown a much-improved position-tracking performance.
1209.2079 | Ilgin \c{S}afak | Ilg{\i}n \c{S}afak, Emre Akta\c{s} and Ali \"Ozg\"ur Y{\i}lmaz | Error Rate Analysis of GF(q) Network Coded Detect-and-Forward Wireless
Relay Networks Using Equivalent Relay Channel Models | 28 pages, 10 figures. This work has been submitted to the IEEE for
possible publication. Copyright may be transferred without notice, after
which this version may no longer be accessible | null | 10.1109/TWC.2013.051613.121309 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper investigates simple means of analyzing the error rate performance
of a general q-ary Galois Field network coded detect-and-forward cooperative
relay network with known relay error statistics at the destination. Equivalent
relay channels are used in obtaining an approximate error rate of the relay
network, from which the diversity order is found. Error rate analyses using
equivalent relay channel models are shown to be closely matched with simulation
results. Using the equivalent relay channels, low complexity receivers are
developed whose performances are close to that of the optimal maximum
likelihood receiver.
| [
{
"created": "Mon, 10 Sep 2012 18:04:45 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Feb 2013 21:52:38 GMT",
"version": "v2"
}
] | 2013-11-25 | [
[
"Şafak",
"Ilgın",
""
],
[
"Aktaş",
"Emre",
""
],
[
"Yılmaz",
"Ali Özgür",
""
]
] | This paper investigates simple means of analyzing the error rate performance of a general q-ary Galois Field network coded detect-and-forward cooperative relay network with known relay error statistics at the destination. Equivalent relay channels are used in obtaining an approximate error rate of the relay network, from which the diversity order is found. Error rate analyses using equivalent relay channel models are shown to be closely matched with simulation results. Using the equivalent relay channels, low complexity receivers are developed whose performances are close to that of the optimal maximum likelihood receiver. |
2301.10966 | Xuan Quang Ngo | Thai Nguyen Chau, Xuan Quang Ngo, Van Tu Duong, Trong Trung Nguyen,
Huy Hung Nguyen, Tan Tien Nguyen | Design of Mobile Manipulator for Fire Extinguisher Testing. Part II:
Design and Simulation | 10 pages, 15 figures, the 7th International Conference on Advanced
Engineering, Theory and Applications | null | null | null | cs.RO cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | All flames must be extinguished as early as possible, or fire services have to
deal with major conflagrations. As a result, the quality of fire
extinguishers has become a very sensitive and important issue in firefighting.
Inspired by the development of automatic fire fighting systems, this paper
presents a mobile manipulator to evaluate the power of fire extinguishers,
which is designed according to the fire extinguisher standards ISO
7165:2009 and ISO 11601:2008. A detailed discussion of key specifications,
solutions, and the mechanical design of the chassis of the mobile manipulator has
been presented in Part I: Key Specifications and Conceptual Design. The focus
of this part is on the rest of the mechanical design and controller design of
the mobile manipulator.
| [
{
"created": "Thu, 26 Jan 2023 07:19:20 GMT",
"version": "v1"
}
] | 2023-01-31 | [
[
"Chau",
"Thai Nguyen",
""
],
[
"Ngo",
"Xuan Quang",
""
],
[
"Duong",
"Van Tu",
""
],
[
"Nguyen",
"Trong Trung",
""
],
[
"Nguyen",
"Huy Hung",
""
],
[
"Nguyen",
"Tan Tien",
""
]
] | All flames must be extinguished as early as possible, or fire services have to deal with major conflagrations. As a result, the quality of fire extinguishers has become a very sensitive and important issue in firefighting. Inspired by the development of automatic fire fighting systems, this paper presents a mobile manipulator to evaluate the power of fire extinguishers, which is designed according to the fire extinguisher standards ISO 7165:2009 and ISO 11601:2008. A detailed discussion of key specifications, solutions, and the mechanical design of the chassis of the mobile manipulator has been presented in Part I: Key Specifications and Conceptual Design. The focus of this part is on the rest of the mechanical design and controller design of the mobile manipulator.
0904.1645 | Cedric Chauve | Cedric Chauve, A\"ida Ouangraoua (LaBRI) | A 3-approximation algorithm for computing a parsimonious first
speciation in the gene duplication model | null | null | null | null | cs.DM cs.DS q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the following problem: from a given set of gene family trees on
a set of genomes, find a first speciation that splits these genomes into two
subsets and minimizes the number of gene duplications that happened before
this speciation. We call this problem the Minimum Duplication Bipartition
Problem. Using a generalization of the Minimum Edge-Cut Problem, known as
Submodular Function Minimization, we propose a polynomial time and space
3-approximation algorithm for the Minimum Duplication Bipartition Problem.
| [
{
"created": "Fri, 10 Apr 2009 06:28:07 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Apr 2009 06:29:22 GMT",
"version": "v2"
}
] | 2009-04-15 | [
[
"Chauve",
"Cedric",
"",
"LaBRI"
],
[
"Ouangraoua",
"Aïda",
"",
"LaBRI"
]
] | We consider the following problem: from a given set of gene family trees on a set of genomes, find a first speciation that splits these genomes into two subsets and minimizes the number of gene duplications that happened before this speciation. We call this problem the Minimum Duplication Bipartition Problem. Using a generalization of the Minimum Edge-Cut Problem, known as Submodular Function Minimization, we propose a polynomial time and space 3-approximation algorithm for the Minimum Duplication Bipartition Problem.
1702.08726 | Lenz Belzner | Lenz Belzner, Thomas Gabor | Stacked Thompson Bandits | Accepted at SEsCPS @ ICSE 2017 | null | null | null | cs.SE cs.AI cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce Stacked Thompson Bandits (STB) for efficiently generating plans
that are likely to satisfy a given bounded temporal logic requirement. STB uses
a simulation for evaluation of plans, and takes a Bayesian approach to using
the resulting information to guide its search. In particular, we show that
stacking multiarmed bandits and using Thompson sampling to guide the action
selection process for each bandit enables STB to generate plans that satisfy
requirements with a high probability while only searching a fraction of the
search space.
| [
{
"created": "Tue, 28 Feb 2017 10:19:30 GMT",
"version": "v1"
}
] | 2017-03-01 | [
[
"Belzner",
"Lenz",
""
],
[
"Gabor",
"Thomas",
""
]
] | We introduce Stacked Thompson Bandits (STB) for efficiently generating plans that are likely to satisfy a given bounded temporal logic requirement. STB uses a simulation for evaluation of plans, and takes a Bayesian approach to using the resulting information to guide its search. In particular, we show that stacking multiarmed bandits and using Thompson sampling to guide the action selection process for each bandit enables STB to generate plans that satisfy requirements with a high probability while only searching a fraction of the search space. |
2104.14544 | Deqing Sun | Deqing Sun, Daniel Vlasic, Charles Herrmann, Varun Jampani, Michael
Krainin, Huiwen Chang, Ramin Zabih, William T. Freeman, Ce Liu | AutoFlow: Learning a Better Training Set for Optical Flow | CVPR 2021 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Synthetic datasets play a critical role in pre-training CNN models for
optical flow, but they are painstaking to generate and hard to adapt to new
applications. To automate the process, we present AutoFlow, a simple and
effective method to render training data for optical flow that optimizes the
performance of a model on a target dataset. AutoFlow takes a layered approach
to render synthetic data, where the motion, shape, and appearance of each layer
are controlled by learnable hyperparameters. Experimental results show that
AutoFlow achieves state-of-the-art accuracy in pre-training both PWC-Net and
RAFT. Our code and data are available at https://autoflow-google.github.io .
| [
{
"created": "Thu, 29 Apr 2021 17:55:23 GMT",
"version": "v1"
}
] | 2021-04-30 | [
[
"Sun",
"Deqing",
""
],
[
"Vlasic",
"Daniel",
""
],
[
"Herrmann",
"Charles",
""
],
[
"Jampani",
"Varun",
""
],
[
"Krainin",
"Michael",
""
],
[
"Chang",
"Huiwen",
""
],
[
"Zabih",
"Ramin",
""
],
[
"Freeman",
"William T.",
""
],
[
"Liu",
"Ce",
""
]
] | Synthetic datasets play a critical role in pre-training CNN models for optical flow, but they are painstaking to generate and hard to adapt to new applications. To automate the process, we present AutoFlow, a simple and effective method to render training data for optical flow that optimizes the performance of a model on a target dataset. AutoFlow takes a layered approach to render synthetic data, where the motion, shape, and appearance of each layer are controlled by learnable hyperparameters. Experimental results show that AutoFlow achieves state-of-the-art accuracy in pre-training both PWC-Net and RAFT. Our code and data are available at https://autoflow-google.github.io . |
2003.13141 | Jie Chen | Jie Chen, Zhiheng Li, Jiebo Luo, and Chenliang Xu | Learning a Weakly-Supervised Video Actor-Action Segmentation Model with
a Wise Selection | 11 pages, 8 figures, cvpr-2020 supplementary video:
https://youtu.be/CX1hEOV9tlo | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address weakly-supervised video actor-action segmentation (VAAS), which
extends general video object segmentation (VOS) to additionally consider action
labels of the actors. The most successful methods on VOS synthesize a pool of
pseudo-annotations (PAs) and then refine them iteratively. However, they face
challenges as to how to select from a massive amount of PAs high-quality ones,
how to set an appropriate stop condition for weakly-supervised training, and
how to initialize PAs pertaining to VAAS. To overcome these challenges, we
propose a general Weakly-Supervised framework with a Wise Selection of training
samples and model evaluation criterion (WS^2). Instead of blindly trusting
quality-inconsistent PAs, WS^2 employs a learning-based selection to select
effective PAs and a novel region integrity criterion as a stopping condition
for weakly-supervised training. In addition, a 3D-Conv GCAM is devised to adapt
to the VAAS task. Extensive experiments show that WS^2 achieves
state-of-the-art performance on both weakly-supervised VOS and VAAS tasks and
is on par with the best fully-supervised method on VAAS.
| [
{
"created": "Sun, 29 Mar 2020 21:15:18 GMT",
"version": "v1"
}
] | 2020-03-31 | [
[
"Chen",
"Jie",
""
],
[
"Li",
"Zhiheng",
""
],
[
"Luo",
"Jiebo",
""
],
[
"Xu",
"Chenliang",
""
]
] | We address weakly-supervised video actor-action segmentation (VAAS), which extends general video object segmentation (VOS) to additionally consider action labels of the actors. The most successful methods on VOS synthesize a pool of pseudo-annotations (PAs) and then refine them iteratively. However, they face challenges as to how to select from a massive amount of PAs high-quality ones, how to set an appropriate stop condition for weakly-supervised training, and how to initialize PAs pertaining to VAAS. To overcome these challenges, we propose a general Weakly-Supervised framework with a Wise Selection of training samples and model evaluation criterion (WS^2). Instead of blindly trusting quality-inconsistent PAs, WS^2 employs a learning-based selection to select effective PAs and a novel region integrity criterion as a stopping condition for weakly-supervised training. In addition, a 3D-Conv GCAM is devised to adapt to the VAAS task. Extensive experiments show that WS^2 achieves state-of-the-art performance on both weakly-supervised VOS and VAAS tasks and is on par with the best fully-supervised method on VAAS. |
2210.02719 | Chenxi Sun | Chenxi Sun and Hongyan Li and Moxian Song and Derun Cai and Baofeng
Zhang and Shenda Hong | Continuous Diagnosis and Prognosis by Controlling the Update Process of
Deep Neural Networks | 41 pages, 15 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Continuous diagnosis and prognosis are essential for intensive care patients.
It can provide more opportunities for timely treatment and rational resource
allocation, especially for sepsis, a main cause of death in ICU, and COVID-19,
a new worldwide epidemic. Although deep learning methods have shown their great
superiority in many medical tasks, they tend to catastrophically forget, overfit, and get results too late when performing diagnosis and prognosis in the
continuous mode. In this work, we summarized the three requirements of this
task, proposed a new concept, continuous classification of time series (CCTS),
and designed a novel model training method, restricted update strategy of
neural networks (RU). In the context of continuous prognosis, our method
outperformed all baselines and achieved the average accuracy of 90%, 97%, and
85% on sepsis prognosis, COVID-19 mortality prediction, and eight diseases
classification. Superiorly, our method can also endow deep learning with
interpretability, having the potential to explore disease mechanisms and
provide a new horizon for medical research. We have achieved disease staging
for sepsis and COVID-19, discovering four stages and three stages with their
typical biomarkers respectively. Further, our method is a data-agnostic and
model-agnostic plug-in, it can be used to continuously prognose other diseases
with staging and even implement CCTS in other fields.
| [
{
"created": "Thu, 6 Oct 2022 07:06:45 GMT",
"version": "v1"
}
] | 2022-10-07 | [
[
"Sun",
"Chenxi",
""
],
[
"Li",
"Hongyan",
""
],
[
"Song",
"Moxian",
""
],
[
"Cai",
"Derun",
""
],
[
"Zhang",
"Baofeng",
""
],
[
"Hong",
"Shenda",
""
]
] | Continuous diagnosis and prognosis are essential for intensive care patients. It can provide more opportunities for timely treatment and rational resource allocation, especially for sepsis, a main cause of death in ICU, and COVID-19, a new worldwide epidemic. Although deep learning methods have shown their great superiority in many medical tasks, they tend to catastrophically forget, overfit, and get results too late when performing diagnosis and prognosis in the continuous mode. In this work, we summarized the three requirements of this task, proposed a new concept, continuous classification of time series (CCTS), and designed a novel model training method, restricted update strategy of neural networks (RU). In the context of continuous prognosis, our method outperformed all baselines and achieved the average accuracy of 90%, 97%, and 85% on sepsis prognosis, COVID-19 mortality prediction, and eight diseases classification. Superiorly, our method can also endow deep learning with interpretability, having the potential to explore disease mechanisms and provide a new horizon for medical research. We have achieved disease staging for sepsis and COVID-19, discovering four stages and three stages with their typical biomarkers respectively. Further, our method is a data-agnostic and model-agnostic plug-in, it can be used to continuously prognose other diseases with staging and even implement CCTS in other fields. |
2405.04271 | Arne Rubehn | Arne Rubehn, Jessica Nieder, Robert Forkel, Johann-Mattis List | Generating Feature Vectors from Phonetic Transcriptions in
Cross-Linguistic Data Formats | To appear in the Proceedings of the 2024 Meeting of the Society for
Computation in Linguistics (SCiL) | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | When comparing speech sounds across languages, scholars often make use of
feature representations of individual sounds in order to determine fine-grained
sound similarities. Although binary feature systems for large numbers of speech
sounds have been proposed, large-scale computational applications often face
the challenges that the proposed feature systems -- even if they list features
for several thousand sounds -- only cover a smaller part of the numerous speech
sounds reflected in actual cross-linguistic data. In order to address the
problem of missing data for attested speech sounds, we propose a new approach
that can create binary feature vectors dynamically for all sounds that can be
represented in the standardized version of the International Phonetic
Alphabet proposed by the Cross-Linguistic Transcription Systems (CLTS)
reference catalog. Since CLTS is actively used in large data collections,
covering more than 2,000 distinct language varieties, our procedure for the
generation of binary feature vectors provides immediate access to a very large
collection of multilingual wordlists. Testing our feature system in different
ways on different datasets proves that the system is not only useful to provide
a straightforward means to compare the similarity of speech sounds, but also
illustrates its potential to be used in future cross-linguistic machine
learning applications.
| [
{
"created": "Tue, 7 May 2024 12:40:59 GMT",
"version": "v1"
}
] | 2024-05-08 | [
[
"Rubehn",
"Arne",
""
],
[
"Nieder",
"Jessica",
""
],
[
"Forkel",
"Robert",
""
],
[
"List",
"Johann-Mattis",
""
]
] | When comparing speech sounds across languages, scholars often make use of feature representations of individual sounds in order to determine fine-grained sound similarities. Although binary feature systems for large numbers of speech sounds have been proposed, large-scale computational applications often face the challenges that the proposed feature systems -- even if they list features for several thousand sounds -- only cover a smaller part of the numerous speech sounds reflected in actual cross-linguistic data. In order to address the problem of missing data for attested speech sounds, we propose a new approach that can create binary feature vectors dynamically for all sounds that can be represented in the standardized version of the International Phonetic Alphabet proposed by the Cross-Linguistic Transcription Systems (CLTS) reference catalog. Since CLTS is actively used in large data collections, covering more than 2,000 distinct language varieties, our procedure for the generation of binary feature vectors provides immediate access to a very large collection of multilingual wordlists. Testing our feature system in different ways on different datasets proves that the system is not only useful to provide a straightforward means to compare the similarity of speech sounds, but also illustrates its potential to be used in future cross-linguistic machine learning applications. |
2310.12505 | Boyi Deng | Boyi Deng, Wenjie Wang, Fuli Feng, Yang Deng, Qifan Wang, Xiangnan He | Attack Prompt Generation for Red Teaming and Defending Large Language
Models | Accepted to EMNLP 2023 (Findings) | null | null | null | cs.CL cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) are susceptible to red teaming attacks, which
can induce LLMs to generate harmful content. Previous research constructs
attack prompts via manual or automatic methods, which have their own
limitations on construction cost and quality. To address these issues, we
propose an integrated approach that combines manual and automatic methods to
economically generate high-quality attack prompts. Specifically, considering
the impressive capabilities of newly emerged LLMs, we propose an attack
framework to instruct LLMs to mimic human-generated prompts through in-context
learning. Furthermore, we propose a defense framework that fine-tunes victim
LLMs through iterative interactions with the attack framework to enhance their
safety against red teaming attacks. Extensive experiments on different LLMs
validate the effectiveness of our proposed attack and defense frameworks.
Additionally, we release a series of attack prompts datasets named SAP with
varying sizes, facilitating the safety evaluation and enhancement of more LLMs.
Our code and dataset are available on https://github.com/Aatrox103/SAP .
| [
{
"created": "Thu, 19 Oct 2023 06:15:05 GMT",
"version": "v1"
}
] | 2023-10-20 | [
[
"Deng",
"Boyi",
""
],
[
"Wang",
"Wenjie",
""
],
[
"Feng",
"Fuli",
""
],
[
"Deng",
"Yang",
""
],
[
"Wang",
"Qifan",
""
],
[
"He",
"Xiangnan",
""
]
] | Large language models (LLMs) are susceptible to red teaming attacks, which can induce LLMs to generate harmful content. Previous research constructs attack prompts via manual or automatic methods, which have their own limitations on construction cost and quality. To address these issues, we propose an integrated approach that combines manual and automatic methods to economically generate high-quality attack prompts. Specifically, considering the impressive capabilities of newly emerged LLMs, we propose an attack framework to instruct LLMs to mimic human-generated prompts through in-context learning. Furthermore, we propose a defense framework that fine-tunes victim LLMs through iterative interactions with the attack framework to enhance their safety against red teaming attacks. Extensive experiments on different LLMs validate the effectiveness of our proposed attack and defense frameworks. Additionally, we release a series of attack prompts datasets named SAP with varying sizes, facilitating the safety evaluation and enhancement of more LLMs. Our code and dataset are available on https://github.com/Aatrox103/SAP . |
2210.10515 | Pouria Mehrabi | Pouria Mehrabi, Hamid D. Taghirad | A Segment-Wise Gaussian Process-Based Ground Segmentation With Local
Smoothness Estimation | null | null | null | null | cs.LG cs.SY eess.SY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Both in terrestrial and extraterrestrial environments, the precise and
informative model of the ground and the surface ahead is crucial for navigation
and obstacle avoidance. The ground surface is not always flat and it may be sloped, bumpy and rough, especially in off-road terrestrial scenes. In bumpy and
rough scenes the functional relationship of the surface-related features may
vary in different areas of the ground, as the structure of the ground surface
may vary suddenly and further the measured point cloud of the ground does not
bear smoothness. Thus, the ground-related features must be obtained based on
local estimates or even point estimates. To tackle this problem, the
segment-wise GP-based ground segmentation method with local smoothness
estimation is proposed. This method is an extension of our previous method, in which a realistic measurement of the length-scale values was provided for the
covariance kernel in each line-segment to give precise estimation of the ground
for sloped terrains. In this extension, the value of the length-scale is
estimated locally for each data point which makes it much more precise for the
rough scenes while not being computationally complex and more robust to under-segmentation, sparsity and under-representability. The segment-wise task
is performed to estimate a partial continuous model of the ground for each
radial range segment. Simulation results show the effectiveness of the proposed
method to give a continuous and precise estimation of the ground surface in
rough and bumpy scenes while being fast enough for real-world applications.
| [
{
"created": "Wed, 19 Oct 2022 12:42:21 GMT",
"version": "v1"
}
] | 2022-10-20 | [
[
"Mehrabi",
"Pouria",
""
],
[
"Taghirad",
"Hamid D.",
""
]
] | Both in terrestrial and extraterrestrial environments, the precise and informative model of the ground and the surface ahead is crucial for navigation and obstacle avoidance. The ground surface is not always flat and it may be sloped, bumpy and rough, especially in off-road terrestrial scenes. In bumpy and rough scenes the functional relationship of the surface-related features may vary in different areas of the ground, as the structure of the ground surface may vary suddenly and further the measured point cloud of the ground does not bear smoothness. Thus, the ground-related features must be obtained based on local estimates or even point estimates. To tackle this problem, the segment-wise GP-based ground segmentation method with local smoothness estimation is proposed. This method is an extension of our previous method, in which a realistic measurement of the length-scale values was provided for the covariance kernel in each line-segment to give precise estimation of the ground for sloped terrains. In this extension, the value of the length-scale is estimated locally for each data point which makes it much more precise for the rough scenes while not being computationally complex and more robust to under-segmentation, sparsity and under-representability. The segment-wise task is performed to estimate a partial continuous model of the ground for each radial range segment. Simulation results show the effectiveness of the proposed method to give a continuous and precise estimation of the ground surface in rough and bumpy scenes while being fast enough for real-world applications. |
2405.02150 | Manoel Horta Ribeiro | Giuseppe Russo Latona, Manoel Horta Ribeiro, Tim R. Davidson, Veniamin
Veselovsky, Robert West | The AI Review Lottery: Widespread AI-Assisted Peer Reviews Boost Paper
Scores and Acceptance Rates | Manoel Horta Ribeiro, Tim R. Davidson, and Veniamin Veselovsky
contributed equally to this work | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Journals and conferences worry that peer reviews assisted by artificial
intelligence (AI), in particular, large language models (LLMs), may negatively
influence the validity and fairness of the peer-review system, a cornerstone of
modern science. In this work, we address this concern with a quasi-experimental
study of the prevalence and impact of AI-assisted peer reviews in the context
of the 2024 International Conference on Learning Representations (ICLR), a
large and prestigious machine-learning conference. Our contributions are
threefold. Firstly, we obtain a lower bound for the prevalence of AI-assisted
reviews at ICLR 2024 using the GPTZero LLM detector, estimating that at least
$15.8\%$ of reviews were written with AI assistance. Secondly, we estimate the
impact of AI-assisted reviews on submission scores. Considering pairs of
reviews with different scores assigned to the same paper, we find that in
$53.4\%$ of pairs the AI-assisted review scores higher than the human review
($p = 0.002$; relative difference in probability of scoring higher: $+14.4\%$
in favor of AI-assisted reviews). Thirdly, we assess the impact of receiving an
AI-assisted peer review on submission acceptance. In a matched study,
submissions near the acceptance threshold that received an AI-assisted peer
review were $4.9$ percentage points ($p = 0.024$) more likely to be accepted
than submissions that did not. Overall, we show that AI-assisted reviews are
consequential to the peer-review process and offer a discussion on future
implications of current trends.
| [
{
"created": "Fri, 3 May 2024 14:56:43 GMT",
"version": "v1"
}
] | 2024-05-06 | [
[
"Latona",
"Giuseppe Russo",
""
],
[
"Ribeiro",
"Manoel Horta",
""
],
[
"Davidson",
"Tim R.",
""
],
[
"Veselovsky",
"Veniamin",
""
],
[
"West",
"Robert",
""
]
] | Journals and conferences worry that peer reviews assisted by artificial intelligence (AI), in particular, large language models (LLMs), may negatively influence the validity and fairness of the peer-review system, a cornerstone of modern science. In this work, we address this concern with a quasi-experimental study of the prevalence and impact of AI-assisted peer reviews in the context of the 2024 International Conference on Learning Representations (ICLR), a large and prestigious machine-learning conference. Our contributions are threefold. Firstly, we obtain a lower bound for the prevalence of AI-assisted reviews at ICLR 2024 using the GPTZero LLM detector, estimating that at least $15.8\%$ of reviews were written with AI assistance. Secondly, we estimate the impact of AI-assisted reviews on submission scores. Considering pairs of reviews with different scores assigned to the same paper, we find that in $53.4\%$ of pairs the AI-assisted review scores higher than the human review ($p = 0.002$; relative difference in probability of scoring higher: $+14.4\%$ in favor of AI-assisted reviews). Thirdly, we assess the impact of receiving an AI-assisted peer review on submission acceptance. In a matched study, submissions near the acceptance threshold that received an AI-assisted peer review were $4.9$ percentage points ($p = 0.024$) more likely to be accepted than submissions that did not. Overall, we show that AI-assisted reviews are consequential to the peer-review process and offer a discussion on future implications of current trends. |