Dataset schema (one row per arXiv record; ⌀ marks fields that may be null; length
ranges are the observed minimum and maximum):

- id: string, 9-10 chars
- submitter: string, 1-64 chars, nullable (⌀)
- authors: string, 4-20.7k chars
- title: string, 4-246 chars
- comments: string, 1-523 chars, nullable (⌀)
- journal-ref: string, 4-404 chars, nullable (⌀)
- doi: string, 11-153 chars, nullable (⌀)
- report-no: string, 2-254 chars, nullable (⌀)
- categories: string, 5-98 chars
- license: string, 9 distinct values
- orig_abstract: string, 14-3.35k chars
- versions: list, 1-60 entries
- update_date: string, 10 chars
- authors_parsed: list, 1-1.35k entries
- abstract: string, 11-3.34k chars
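A minimal Python sketch of how one record in this schema can be handled. The record literal below is abridged from the first row of the dump; the helper function names are illustrative, not part of the dataset.

```python
# Sketch: working with one record of the schema above (abridged from the first row).
record = {
    "id": "2305.16867",
    "submitter": "Elif Akata",
    "title": "Playing repeated games with Large Language Models",
    "categories": "cs.CL",
    "versions": [{"created": "Fri, 26 May 2023 12:17:59 GMT", "version": "v1"}],
    "update_date": "2023-05-29",
    "authors_parsed": [["Akata", "Elif", ""], ["Schulz", "Lion", ""]],
}

def format_authors(parsed):
    """Join [last, first, suffix] triples into 'First Last suffix' strings."""
    return ", ".join(
        " ".join(part for part in (first, last, suffix) if part)
        for last, first, suffix in parsed
    )

def primary_category(rec):
    """categories is a space-separated string; the first entry is the primary one."""
    return rec["categories"].split()[0]

print(format_authors(record["authors_parsed"]))  # Elif Akata, Lion Schulz
print(primary_category(record))                  # cs.CL
print(record["versions"][-1]["version"])         # latest version tag: v1
```

Note that `authors_parsed` stores `[last, first, suffix]` triples, so the helper reorders them into the conventional "First Last" form used by the plain `authors` field.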
id: 2305.16867
submitter: Elif Akata
authors: Elif Akata, Lion Schulz, Julian Coda-Forno, Seong Joon Oh, Matthias Bethge, Eric Schulz
title: Playing repeated games with Large Language Models
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Large Language Models (LLMs) are transforming society and permeating into
diverse applications. As a result, LLMs will frequently interact with us and
other agents. It is, therefore, of great societal value to understand how LLMs
behave in interactive social settings. Here, we propose to use behavioral game
theory to study LLMs' cooperation and coordination behavior. To do so, we let
different LLMs (GPT-3, GPT-3.5, and GPT-4) play finitely repeated games with
each other and with other, human-like strategies. Our results show that LLMs
generally perform well in such tasks and also uncover persistent behavioral
signatures. In a large set of two-player, two-strategy games, we find that
LLMs are particularly good at games where valuing their own self-interest pays
off, like the iterated Prisoner's Dilemma family. However, they behave
sub-optimally in games that require coordination. We, therefore, further focus
on two games from these distinct families. In the canonical iterated Prisoner's
Dilemma, we find that GPT-4 acts particularly unforgivingly, always defecting
after another agent has defected only once. In the Battle of the Sexes, we find
that GPT-4 cannot match the behavior of the simple convention to alternate
between options. We verify that these behavioral signatures are stable across
robustness checks. Finally, we show how GPT-4's behavior can be modified by
providing further information about the other player as well as by asking it to
predict the other player's actions before making a choice. These results enrich
our understanding of LLMs' social behavior and pave the way for a behavioral
game theory for machines.
versions: v1 (Fri, 26 May 2023 12:17:59 GMT)
update_date: 2023-05-29
authors_parsed: [["Akata", "Elif", ""], ["Schulz", "Lion", ""], ["Coda-Forno", "Julian", ""], ["Oh", "Seong Joon", ""], ["Bethge", "Matthias", ""], ["Schulz", "Eric", ""]]

id: 2009.02775
submitter: Suvam Mukherjee
authors: Suvam Mukherjee, Oded Padon, Sharon Shoham, Deepak D'Souza, Noam Rinetzky
title: A Thread-Local Semantics and Efficient Static Analyses for Race Free Programs
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.PL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Data race free (DRF) programs constitute an important class of concurrent
programs. In this paper we provide a framework for designing and proving the
correctness of data flow analyses that target this class of programs. These
analyses are in the same spirit as the "sync-CFG" analysis proposed in earlier
literature. To achieve this, we first propose a novel concrete semantics for
DRF programs, called L-DRF, that is thread-local in nature---each thread
operates on its own copy of the data state. We show that abstractions of our
semantics allow us to reduce the analysis of DRF programs to a sequential
analysis. This aids in rapidly porting existing sequential analyses to sound
and scalable analyses for DRF programs. Next, we parameterize L-DRF with a
partitioning of the program variables into "regions" which are accessed
atomically. Abstractions of the region-parameterized semantics yield more
precise analyses for "region-race" free concurrent programs. We instantiate
these abstractions to devise efficient relational analyses for race free
programs, which we have implemented in a prototype tool called RATCOP. On the
benchmarks, RATCOP was able to prove up to 65% of the assertions, in comparison
to 25% proved by our baseline. Moreover, in a comparative study with a recent
concurrent static analyzer, RATCOP was up to 5 orders of magnitude faster.
versions: v1 (Sun, 6 Sep 2020 17:01:51 GMT)
update_date: 2020-09-08
authors_parsed: [["Mukherjee", "Suvam", ""], ["Padon", "Oded", ""], ["Shoham", "Sharon", ""], ["D'Souza", "Deepak", ""], ["Rinetzky", "Noam", ""]]

id: 2203.00733
submitter: Kazuhiro Sasabuchi
authors: Daichi Saito, Kazuhiro Sasabuchi, Naoki Wake, Jun Takamatsu, Hideki Koike, Katsushi Ikeuchi
title: Task-grasping from human demonstration
comments: 7 pages, 8 figures
journal-ref: null
doi: null
report-no: null
categories: cs.RO
license: http://creativecommons.org/licenses/by/4.0/
abstract:
A challenge in robot grasping is to achieve task-grasping, i.e., selecting a
grasp that is advantageous to the success of the tasks before and after the grasp.
One of the frameworks to address this difficulty is Learning-from-Observation
(LfO), which obtains various hints from human demonstrations. This paper solves
three issues in the grasping skills in the LfO framework: 1) how to
functionally mimic human-demonstrated grasps to robots with limited grasp
capability, 2) how to coordinate grasp skills with reaching body mimicking, 3)
how to robustly perform grasps under object pose and shape uncertainty. A deep
reinforcement learning using contact-web based rewards and domain randomization
of approach directions is proposed to achieve such robust mimicked grasping
skills. Experiment results show that the trained grasping skills can be applied
in an LfO system and executed on a real robot. In addition, it is shown that
the trained skill is robust to errors in the object pose and to the uncertainty
of the object shape, and can be combined with various reach-coordination skills.
versions: v1 (Tue, 1 Mar 2022 20:38:41 GMT)
update_date: 2022-03-03
authors_parsed: [["Saito", "Daichi", ""], ["Sasabuchi", "Kazuhiro", ""], ["Wake", "Naoki", ""], ["Takamatsu", "Jun", ""], ["Koike", "Hideki", ""], ["Ikeuchi", "Katsushi", ""]]

id: 1704.06420
submitter: Xiaofang Sun
authors: Xiaofang Sun and Shihao Yan and Nan Yang and Zhiguo Ding and Chao Shen and Zhangdui Zhong
title: Short-Packet Communications in Non-Orthogonal Multiple Access Systems
comments: 6 pages, 4 figures. This paper has already been submitted to IEEE ICC 2017
journal-ref: null
doi: null
report-no: null
categories: cs.IT math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
This work introduces, for the first time, non-orthogonal multiple access
(NOMA) into short-packet communications to achieve low latency in wireless
networks. Specifically, we address the optimization of transmission rates and
power allocation to maximize the effective throughput of the user with a higher
channel gain while guaranteeing that the other user achieves a certain level of
effective throughput. To demonstrate the benefits of NOMA, we analyze the
performance of orthogonal multiple access (OMA) as a benchmark. Our examination
shows that NOMA can significantly outperform OMA by achieving a higher
effective throughput with the same latency or incurring a lower latency to
achieve the same effective throughput targets. Surprisingly, we find that the
performance gap between NOMA and OMA becomes more prominent when the effective
throughput targets at the two users become closer to each other. This
demonstrates that NOMA can significantly reduce the latency in the context of
short-packet communications with practical constraints.
versions: v1 (Fri, 21 Apr 2017 07:21:15 GMT); v2 (Tue, 19 Sep 2017 04:36:47 GMT)
update_date: 2017-09-20
authors_parsed: [["Sun", "Xiaofang", ""], ["Yan", "Shihao", ""], ["Yang", "Nan", ""], ["Ding", "Zhiguo", ""], ["Shen", "Chao", ""], ["Zhong", "Zhangdui", ""]]

id: 2404.19379
submitter: Zhigang Sun
authors: Zhigang Sun, Zixu Wang, Lavdim Halilaj, Juergen Luettin
title: SemanticFormer: Holistic and Semantic Traffic Scene Representation for Trajectory Prediction using Knowledge Graphs
comments: 8 pages, 7 figures; accepted for publication in IEEE Robotics and Automation Letters (RA-L)
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.RO
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract:
Trajectory prediction in autonomous driving relies on accurate representation
of all relevant contexts of the driving scene, including traffic participants,
road topology, traffic signs, as well as their semantic relations to each
other. Despite increased attention to this issue, most approaches in trajectory
prediction do not consider all of these factors sufficiently. We present
SemanticFormer, an approach for predicting multimodal trajectories by reasoning
over a semantic traffic scene graph using a hybrid approach. It utilizes
high-level information in the form of meta-paths, i.e., trajectories on which an
agent is allowed to drive, from a knowledge graph; this information is then processed
by a novel pipeline based on multiple attention mechanisms to predict accurate
trajectories. SemanticFormer comprises a hierarchical heterogeneous graph
encoder to capture spatio-temporal and relational information across agents as
well as between agents and road elements. Further, it includes a predictor to
fuse different encodings and decode trajectories with probabilities. Finally, a
refinement module assesses permitted meta-paths of trajectories and speed
profiles to obtain final predicted trajectories. Evaluation on the nuScenes
benchmark demonstrates improved performance compared to several SOTA methods.
In addition, we demonstrate that our knowledge graph can be easily added to two
existing graph-based SOTA methods, namely VectorNet and Laformer, replacing
their original homogeneous graphs. The evaluation results suggest that by
adding our knowledge graph the performance of the original methods is enhanced
by 5% and 4%, respectively.
versions: v1 (Tue, 30 Apr 2024 09:11:04 GMT); v2 (Mon, 27 May 2024 14:56:12 GMT); v3 (Mon, 1 Jul 2024 04:51:21 GMT)
update_date: 2024-07-02
authors_parsed: [["Sun", "Zhigang", ""], ["Wang", "Zixu", ""], ["Halilaj", "Lavdim", ""], ["Luettin", "Juergen", ""]]

id: 1804.05290
submitter: Tengchan Zeng
authors: Tengchan Zeng, Omid Semiari, Walid Saad, and Mehdi Bennis
title: Joint Communication and Control for Wireless Autonomous Vehicular Platoon Systems
comments: Accepted in IEEE Transactions on Communications
journal-ref: null
doi: null
report-no: null
categories: cs.IT math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Autonomous vehicular platoons will play an important role in improving
on-road safety in tomorrow's smart cities. Vehicles in an autonomous platoon
can exploit vehicle-to-vehicle (V2V) communications to collect information,
such as velocity and acceleration, from surrounding vehicles so as to maintain
the target velocity and inter-vehicle distance. However, due to the dynamic
on-vehicle data processing rate and the uncertainty of the wireless channel,
V2V communications within a platoon will experience a delay. Such delay can
impair the vehicles' ability to stabilize the operation of the platoon. In this
paper, a novel framework is proposed to optimize a platoon's operation while
jointly considering the delay of the wireless network and the stability of the
vehicle's control system. First, stability analysis for the control system is
performed and the maximum wireless system delay requirements which can prevent
the instability of the control system are derived. Then, delay analysis is
conducted to determine the end-to-end delay, including queuing, processing, and
transmission delay for the V2V link in the wireless network. Subsequently,
using the derived delay, a lower bound and an approximated expression of the
reliability for the wireless system, defined as the probability that the
wireless system meets the control system's delay needs, are derived. Then, the
control parameters are optimized to maximize the derived wireless system
reliability. Simulation results corroborate the analytical derivations and
study the impact of parameters, such as the platoon size, on the reliability
performance of the vehicular platoon. More importantly, the simulation results
disclose the benefits of integrating control system and wireless network design
while providing guidelines for designing autonomous platoons so as to realize
the required wireless network reliability and control system stability.
versions: v1 (Sun, 15 Apr 2018 00:25:13 GMT); v2 (Fri, 19 Jul 2019 01:29:49 GMT)
update_date: 2019-07-22
authors_parsed: [["Zeng", "Tengchan", ""], ["Semiari", "Omid", ""], ["Saad", "Walid", ""], ["Bennis", "Mehdi", ""]]

id: 2110.02121
submitter: Adejuyigbe Fajemisin
authors: Adejuyigbe Fajemisin, Donato Maragno, Dick den Hertog
title: Optimization with Constraint Learning: A Framework and Survey
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG math.OC
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
abstract:
Many real-life optimization problems contain one or more constraints or
objectives for which there are no explicit formulas. If data are available,
however, they can be used to learn the constraints. The benefits of this
approach are clear, but the process needs to be carried out in a structured
manner. This paper therefore provides a framework for Optimization with
Constraint Learning (OCL), which we believe will help to formalize and direct
the process of learning constraints from data. This framework includes the
following steps: (i) setup of the conceptual
optimization model, (ii) data gathering and preprocessing, (iii) selection and
training of predictive models, (iv) resolution of the optimization model, and
(v) verification and improvement of the optimization model. We then review the
recent OCL literature in light of this framework, and highlight current trends,
as well as areas for future research.
versions: v1 (Tue, 5 Oct 2021 15:42:06 GMT); v2 (Thu, 22 Sep 2022 15:00:48 GMT)
update_date: 2022-09-23
authors_parsed: [["Fajemisin", "Adejuyigbe", ""], ["Maragno", "Donato", ""], ["Hertog", "Dick den", ""]]

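The five OCL steps listed in the abstract above can be made concrete with a stdlib-only toy instance: a constraint with no explicit formula is sampled, learned by least squares, and then used inside an optimization. All numbers here are synthetic illustrations, not the paper's implementation.

```python
# Toy OCL loop: (ii) gather data, (iii) train a predictive model for the unknown
# constraint, (iv) solve the optimization model with it, (v) verify the result.

# (ii) data gathering: pairs (x, maximum feasible y), generated here from a
# hidden ground truth y_max = 4 - 0.5 * x that the modeler cannot see directly.
data = [(k / 10, 4.0 - 0.5 * (k / 10)) for k in range(11)]

# (iii) train a predictive model: closed-form least squares for y_max ~ a*x + b.
n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * y for x, y in data)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# (iv) solve the optimization model: maximize x + y subject to the *learned*
# constraint y <= a*x + b, over 0 <= x <= 1 (coarse grid search for brevity).
obj, x_opt = max((x / 100 + (a * (x / 100) + b), x / 100) for x in range(101))

# (v) verification: the learned constraint should recover the hidden one.
assert abs(a + 0.5) < 1e-6 and abs(b - 4.0) < 1e-6
```

Because the objective x + y grows in x under the learned constraint, the grid search lands at x_opt = 1.0 with objective close to 4.5, matching the hidden ground truth.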
id: 1907.10936
submitter: Zhijie Zhang
authors: Zhijie Zhang and Huazhu Fu and Hang Dai and Jianbing Shen and Yanwei Pang and Ling Shao
title: ET-Net: A Generic Edge-aTtention Guidance Network for Medical Image Segmentation
comments: MICCAI 2019
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Segmentation is a fundamental task in medical image analysis. However, most
existing methods focus on primary region extraction and ignore edge
information, which is useful for obtaining accurate segmentation. In this
paper, we propose a generic medical segmentation method, called Edge-aTtention
guidance Network (ET-Net), which embeds edge-attention representations to guide
the segmentation network. Specifically, an edge guidance module is utilized to
learn the edge-attention representations in the early encoding layers, which
are then transferred to the multi-scale decoding layers and fused using a weighted
aggregation module. The experimental results on four segmentation tasks (i.e.,
optic disc/cup and vessel segmentation in retinal images, and lung segmentation
in chest X-Ray and CT images) demonstrate that preserving edge-attention
representations contributes to the final segmentation accuracy, and our
proposed method outperforms current state-of-the-art segmentation methods. The
source code of our method is available at https://github.com/ZzzJzzZ/ETNet.
versions: v1 (Thu, 25 Jul 2019 10:00:08 GMT)
update_date: 2019-07-26
authors_parsed: [["Zhang", "Zhijie", ""], ["Fu", "Huazhu", ""], ["Dai", "Hang", ""], ["Shen", "Jianbing", ""], ["Pang", "Yanwei", ""], ["Shao", "Ling", ""]]

id: 2011.11317
submitter: Jochen Meyer
authors: Jochen Meyer, Thomas Fröhlich, Kai von Holdt
title: Corona-Warn-App: Erste Ergebnisse einer Onlineumfrage zur (Nicht-)Nutzung und Gebrauch
comments: In German. In the original version, a minor bug in calculating the percentages for reasons of non-use (page 6) resulted in wrong figures; these have been corrected.
journal-ref: null
doi: null
report-no: null
categories: cs.HC
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
abstract:
In this study, the German "Corona-Warn-App" of the German Federal Government
and the Robert-Koch-Institute is examined by means of a non-representative
online survey with 1482 participants for reasons of use and non-use. The study
provides insights into user behavior with the app during the Corona pandemic,
highlights the topic of data protection and how the app is used in general. Our
results show that the app is often not used due to privacy concerns, but that
there are also technical problems and doubts about its usefulness. In addition,
the app is mainly used for altruistic reasons and is often opened to view
one's own risk assessment and to confirm that it is working. To better understand
the results, we compare them with an infas 360 sample of 10553
participants. It is shown that the results of this study can be compared to a
larger population. Finally, the results are discussed and recommendations for
action are derived.
versions: v1 (Mon, 23 Nov 2020 10:38:28 GMT); v2 (Tue, 24 Nov 2020 16:15:04 GMT)
update_date: 2020-11-25
authors_parsed: [["Meyer", "Jochen", ""], ["Fröhlich", "Thomas", ""], ["von Holdt", "Kai", ""]]

id: 2402.07114
submitter: Rudrajit Das
authors: Rudrajit Das, Naman Agarwal, Sujay Sanghavi, Inderjit S. Dhillon
title: Towards Quantifying the Preconditioning Effect of Adam
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.NA math.NA math.OC stat.ML
license: http://creativecommons.org/licenses/by/4.0/
abstract:
There is a notable dearth of results characterizing the preconditioning
effect of Adam and showing how it may alleviate the curse of ill-conditioning
-- an issue plaguing gradient descent (GD). In this work, we perform a detailed
analysis of Adam's preconditioning effect for quadratic functions and quantify
to what extent Adam can mitigate the dependence on the condition number of the
Hessian. Our key finding is that Adam can suffer less from the condition number
but at the expense of a dimension-dependent quantity. Specifically,
for a $d$-dimensional quadratic with a diagonal Hessian having condition number
$\kappa$, we show that the effective condition number-like quantity controlling
the iteration complexity of Adam without momentum is $\mathcal{O}(\min(d,
\kappa))$. For a diagonally dominant Hessian, we obtain a bound of
$\mathcal{O}(\min(d \sqrt{d \kappa}, \kappa))$ for the corresponding quantity.
Thus, when $d < \mathcal{O}(\kappa^p)$ where $p = 1$ for a diagonal Hessian and
$p = 1/3$ for a diagonally dominant Hessian, Adam can outperform GD (which has
an $\mathcal{O}(\kappa)$ dependence). On the negative side, our results suggest
that Adam can be worse than GD for a sufficiently non-diagonal Hessian even if
$d \ll \mathcal{O}(\kappa^{1/3})$; we corroborate this with empirical evidence.
Finally, we extend our analysis to functions satisfying per-coordinate
Lipschitz smoothness and a modified version of the Polyak-\L ojasiewicz
condition.
versions: v1 (Sun, 11 Feb 2024 06:21:18 GMT)
update_date: 2024-02-14
authors_parsed: [["Das", "Rudrajit", ""], ["Agarwal", "Naman", ""], ["Sanghavi", "Sujay", ""], ["Dhillon", "Inderjit S.", ""]]

1204.1718
|
Nathan Collier Nathan Collier
|
Nathan Collier, David Pardo, Maciej Paszynski, Victor M. Calo
|
Computational complexity and memory usage for multi-frontal direct
solvers in structured mesh finite elements
|
8 pages, 2 figures
| null | null | null |
cs.NA math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The multi-frontal direct solver is the state-of-the-art algorithm for the
direct solution of sparse linear systems. This paper provides computational
complexity and memory usage estimates for the application of the multi-frontal
direct solver algorithm on linear systems resulting from B-spline-based
isogeometric finite elements, where the mesh is a structured grid. Specifically
we provide the estimates for systems resulting from $C^{p-1}$ polynomial
B-spline spaces and compare them to those obtained using $C^0$ spaces.
|
[
{
"created": "Sun, 8 Apr 2012 08:07:47 GMT",
"version": "v1"
}
] |
2012-04-10
|
[
[
"Collier",
"Nathan",
""
],
[
"Pardo",
"David",
""
],
[
"Paszynski",
"Maciej",
""
],
[
"Calo",
"Victor M.",
""
]
] |
The multi-frontal direct solver is the state-of-the-art algorithm for the direct solution of sparse linear systems. This paper provides computational complexity and memory usage estimates for the application of the multi-frontal direct solver algorithm on linear systems resulting from B-spline-based isogeometric finite elements, where the mesh is a structured grid. Specifically we provide the estimates for systems resulting from $C^{p-1}$ polynomial B-spline spaces and compare them to those obtained using $C^0$ spaces.
|
1907.11893
|
Sabah Al-Fedaghi Dr.
|
Sabah Al-Fedaghi
|
Five Generic Processes for Behavior Description in Software Engineering
|
12 pages, 35 figures
|
International Journal of Computer Science and Information
 Security, Vol. 17, No. 7, July 2019
| null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Behavior modeling and software architecture specification are attracting more
attention in software engineering. Describing both of them in integrated models
yields numerous advantages for coping with complexity since the models are
platform independent. They can be decomposed to be developed independently by
experts of the respective fields, and they are highly reusable and may be
subjected to formal analysis. Typically, behavior is defined as the occurrence
of an action, a pattern over time, or any change in or movement of an object.
In systems studies, there are many different approaches to modeling behavior,
such as grounding behavior simultaneously on state transitions, natural
language, and flowcharts. These different descriptions make it difficult to
compare objects with each other for consistency. This paper attempts to propose
some conceptual preliminaries to a definition of behavior in software
engineering. The main objective is to clarify the research area concerned with
system behavior aspects and to create a common platform for future research.
Five generic elementary processes (creating, processing, releasing, receiving,
and transferring) are used to form a unifying higher-order process called a
thinging machine (TM) that is utilized as a template in modeling behavior of
systems. Additionally, a TM includes memory and triggering relations among
stages of processes (machines). A TM is applied to many examples from the
literature to examine their behavioristic aspects. The results show that a TM
is a valuable tool for analyzing and modeling behavior in a system.
|
[
{
"created": "Sat, 27 Jul 2019 11:02:50 GMT",
"version": "v1"
}
] |
2019-07-30
|
[
[
"Al-Fedaghi",
"Sabah",
""
]
] |
Behavior modeling and software architecture specification are attracting more attention in software engineering. Describing both of them in integrated models yields numerous advantages for coping with complexity since the models are platform independent. They can be decomposed to be developed independently by experts of the respective fields, and they are highly reusable and may be subjected to formal analysis. Typically, behavior is defined as the occurrence of an action, a pattern over time, or any change in or movement of an object. In systems studies, there are many different approaches to modeling behavior, such as grounding behavior simultaneously on state transitions, natural language, and flowcharts. These different descriptions make it difficult to compare objects with each other for consistency. This paper attempts to propose some conceptual preliminaries to a definition of behavior in software engineering. The main objective is to clarify the research area concerned with system behavior aspects and to create a common platform for future research. Five generic elementary processes (creating, processing, releasing, receiving, and transferring) are used to form a unifying higher-order process called a thinging machine (TM) that is utilized as a template in modeling behavior of systems. Additionally, a TM includes memory and triggering relations among stages of processes (machines). A TM is applied to many examples from the literature to examine their behavioristic aspects. The results show that a TM is a valuable tool for analyzing and modeling behavior in a system.
|
2009.13454
|
Mubariz Zaffar
|
Mihnea-Alexandru Tomit\u{a}, Mubariz Zaffar, Michael Milford, Klaus
McDonald-Maier and Shoaib Ehsan
|
ConvSequential-SLAM: A Sequence-based, Training-less Visual Place
Recognition Technique for Changing Environments
|
10 pages, currently under review
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual Place Recognition (VPR) is the ability to correctly recall a
previously visited place under changing viewpoints and appearances. A large
number of handcrafted and deep-learning-based VPR techniques exist, where the
former suffer from appearance changes and the latter have significant
computational needs. In this paper, we present a new handcrafted VPR technique
that achieves state-of-the-art place matching performance under challenging
conditions. Our technique combines the best of two existing trainingless VPR
techniques, SeqSLAM and CoHOG, which are robust to condition and viewpoint
changes, respectively. This blend, namely ConvSequential-SLAM,
utilises sequential information and block-normalisation to handle appearance
changes, while using regional-convolutional matching to achieve
viewpoint-invariance. We analyse content-overlap in-between query frames to
find a minimum sequence length, while also re-using the image entropy
information for environment-based sequence length tuning. State-of-the-art
performance is reported in contrast to 8 contemporary VPR techniques on 4
public datasets. Qualitative insights and an ablation study on sequence length
are also provided.
|
[
{
"created": "Mon, 28 Sep 2020 16:31:29 GMT",
"version": "v1"
}
] |
2020-09-29
|
[
[
"Tomită",
"Mihnea-Alexandru",
""
],
[
"Zaffar",
"Mubariz",
""
],
[
"Milford",
"Michael",
""
],
[
"McDonald-Maier",
"Klaus",
""
],
[
"Ehsan",
"Shoaib",
""
]
] |
Visual Place Recognition (VPR) is the ability to correctly recall a previously visited place under changing viewpoints and appearances. A large number of handcrafted and deep-learning-based VPR techniques exist, where the former suffer from appearance changes and the latter have significant computational needs. In this paper, we present a new handcrafted VPR technique that achieves state-of-the-art place matching performance under challenging conditions. Our technique combines the best of two existing trainingless VPR techniques, SeqSLAM and CoHOG, which are robust to condition and viewpoint changes, respectively. This blend, namely ConvSequential-SLAM, utilises sequential information and block-normalisation to handle appearance changes, while using regional-convolutional matching to achieve viewpoint-invariance. We analyse content-overlap in-between query frames to find a minimum sequence length, while also re-using the image entropy information for environment-based sequence length tuning. State-of-the-art performance is reported in contrast to 8 contemporary VPR techniques on 4 public datasets. Qualitative insights and an ablation study on sequence length are also provided.
|
2404.15588
|
Xiangci Li
|
Xiangci Li, Sihao Chen, Rajvi Kapadia, Jessica Ouyang, Fan Zhang
|
Minimal Evidence Group Identification for Claim Verification
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Claim verification in real-world settings (e.g., against a large collection of
candidate evidence retrieved from the web) typically requires identifying and
aggregating a complete set of evidence pieces that collectively provide full
support to the claim. The problem becomes particularly challenging when there
exist distinct sets of evidence that could be used to verify the claim from
different perspectives. In this paper, we formally define and study the problem
of identifying such minimal evidence groups (MEGs) for claim verification. We
show that MEG identification can be reduced from the Set Cover problem, based on
entailment inference of whether a given evidence group provides full/partial
support to a claim. Our proposed approach achieves 18.4% and 34.8% absolute
improvements on the WiCE and SciFact datasets over LLM prompting. Finally, we
demonstrate the benefits of MEGs in downstream applications such as claim
generation.
|
[
{
"created": "Wed, 24 Apr 2024 01:44:09 GMT",
"version": "v1"
}
] |
2024-04-25
|
[
[
"Li",
"Xiangci",
""
],
[
"Chen",
"Sihao",
""
],
[
"Kapadia",
"Rajvi",
""
],
[
"Ouyang",
"Jessica",
""
],
[
"Zhang",
"Fan",
""
]
] |
Claim verification in real-world settings (e.g., against a large collection of candidate evidence retrieved from the web) typically requires identifying and aggregating a complete set of evidence pieces that collectively provide full support to the claim. The problem becomes particularly challenging when there exist distinct sets of evidence that could be used to verify the claim from different perspectives. In this paper, we formally define and study the problem of identifying such minimal evidence groups (MEGs) for claim verification. We show that MEG identification can be reduced from the Set Cover problem, based on entailment inference of whether a given evidence group provides full/partial support to a claim. Our proposed approach achieves 18.4% and 34.8% absolute improvements on the WiCE and SciFact datasets over LLM prompting. Finally, we demonstrate the benefits of MEGs in downstream applications such as claim generation.
|
2407.20798
|
Norman Di Palo
|
Norman Di Palo, Leonard Hasenclever, Jan Humplik, Arunkumar Byravan
|
Diffusion Augmented Agents: A Framework for Efficient Exploration and
Transfer Learning
|
Published at 3rd Conference on Lifelong Learning Agents (CoLLAs),
2024
| null | null | null |
cs.LG cs.AI cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce Diffusion Augmented Agents (DAAG), a novel framework that
leverages large language models, vision language models, and diffusion models
to improve sample efficiency and transfer learning in reinforcement learning
for embodied agents. DAAG relabels the agent's past experience in hindsight by
using diffusion models to transform videos in a temporally and geometrically
consistent way to align with target instructions, a technique we call
Hindsight Experience Augmentation. A large language model orchestrates this
autonomous process without requiring human supervision, making it well-suited
for lifelong learning scenarios. The framework reduces the amount of
reward-labeled data needed to 1) finetune a vision language model that acts as
a reward detector, and 2) train RL agents on new tasks. We demonstrate the
sample efficiency gains of DAAG in simulated robotics environments involving
manipulation and navigation. Our results show that DAAG improves learning of
reward detectors, transferring past experience, and acquiring new tasks - key
abilities for developing efficient lifelong learning agents. Supplementary
material and visualizations are available on our website
https://sites.google.com/view/diffusion-augmented-agents/
|
[
{
"created": "Tue, 30 Jul 2024 13:01:31 GMT",
"version": "v1"
}
] |
2024-07-31
|
[
[
"Di Palo",
"Norman",
""
],
[
"Hasenclever",
"Leonard",
""
],
[
"Humplik",
"Jan",
""
],
[
"Byravan",
"Arunkumar",
""
]
] |
We introduce Diffusion Augmented Agents (DAAG), a novel framework that leverages large language models, vision language models, and diffusion models to improve sample efficiency and transfer learning in reinforcement learning for embodied agents. DAAG relabels the agent's past experience in hindsight by using diffusion models to transform videos in a temporally and geometrically consistent way to align with target instructions, a technique we call Hindsight Experience Augmentation. A large language model orchestrates this autonomous process without requiring human supervision, making it well-suited for lifelong learning scenarios. The framework reduces the amount of reward-labeled data needed to 1) finetune a vision language model that acts as a reward detector, and 2) train RL agents on new tasks. We demonstrate the sample efficiency gains of DAAG in simulated robotics environments involving manipulation and navigation. Our results show that DAAG improves learning of reward detectors, transferring past experience, and acquiring new tasks - key abilities for developing efficient lifelong learning agents. Supplementary material and visualizations are available on our website https://sites.google.com/view/diffusion-augmented-agents/
|
2406.00359
|
Hannaneh Barahouei Pasandi
|
Hannah B. Pasandi, Faith Parastar
|
Location Privacy in B5G/6G: Systematization of Knowledge
|
13 pages; 7 Figures
| null | null | null |
cs.NI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
As we transition into the era of B5G/6G networks, the promise of seamless,
high-speed connectivity brings unprecedented opportunities and challenges.
Among the most critical concerns is the preservation of location privacy, given
the enhanced precision and pervasive connectivity of these advanced networks.
This paper systematically reviews the state of knowledge on location privacy in
B5G/6G networks, highlighting the architectural advancements and
infrastructural complexities that contribute to increased privacy risks. The
urgency of studying these technologies is underscored by the rapid adoption of
B5G/6G and the growing sophistication of location tracking methods. We evaluate
current and emerging privacy-preserving mechanisms, exploring the implications
of sophisticated tracking methods and the challenges posed by the complex
network infrastructures. Our findings reveal the effectiveness of various
mitigation strategies and emphasize the important role of physical layer
security. Additionally, we propose innovative approaches, including
decentralized authentication systems and the potential of satellite
communications, to enhance location privacy. By addressing these challenges,
this paper provides a comprehensive perspective on preserving user privacy in
the rapidly evolving landscape of modern communication networks.
|
[
{
"created": "Sat, 1 Jun 2024 08:25:07 GMT",
"version": "v1"
}
] |
2024-06-04
|
[
[
"Pasandi",
"Hannah B.",
""
],
[
"Parastar",
"Faith",
""
]
] |
As we transition into the era of B5G/6G networks, the promise of seamless, high-speed connectivity brings unprecedented opportunities and challenges. Among the most critical concerns is the preservation of location privacy, given the enhanced precision and pervasive connectivity of these advanced networks. This paper systematically reviews the state of knowledge on location privacy in B5G/6G networks, highlighting the architectural advancements and infrastructural complexities that contribute to increased privacy risks. The urgency of studying these technologies is underscored by the rapid adoption of B5G/6G and the growing sophistication of location tracking methods. We evaluate current and emerging privacy-preserving mechanisms, exploring the implications of sophisticated tracking methods and the challenges posed by the complex network infrastructures. Our findings reveal the effectiveness of various mitigation strategies and emphasize the important role of physical layer security. Additionally, we propose innovative approaches, including decentralized authentication systems and the potential of satellite communications, to enhance location privacy. By addressing these challenges, this paper provides a comprehensive perspective on preserving user privacy in the rapidly evolving landscape of modern communication networks.
|
2404.06201
|
Wei Ma
|
Zhihao Lin and Wei Ma and Tao Lin and Yaowen Zheng and Jingquan Ge and
Jun Wang and Jacques Klein and Tegawende Bissyande and Yang Liu and Li Li
|
Open-Source AI-based SE Tools: Opportunities and Challenges of
Collaborative Software Learning
| null | null | null | null |
cs.SE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large Language Models (LLMs) have become instrumental in advancing software
engineering (SE) tasks, showcasing their efficacy in code understanding and
beyond. Like traditional SE tools, open-source collaboration is key to
realising excellent products. With AI models, however, the essential need is
data. The collaboration of these AI-based SE models hinges on maximising the
sources of high-quality data. However, data, especially of high quality, often
holds commercial or sensitive value, making it less accessible for open-source
AI-based SE projects. This reality presents a significant barrier to the
development and enhancement of AI-based SE tools within the software
engineering community. Therefore, researchers need to find solutions for
enabling open-source AI-based SE models to tap into resources held by different
organisations. Addressing this challenge, our position paper investigates one
solution to facilitate access to diverse organizational resources for
open-source AI models, ensuring privacy and commercial sensitivities are
respected. We introduce a governance framework centered on federated learning
(FL), designed to foster the joint development and maintenance of open-source
AI code models while safeguarding data privacy and security. Additionally, we
present guidelines for developers on AI-based SE tool collaboration, covering
data requirements, model architecture, updating strategies, and version
control. Given the significant influence of data characteristics on FL, our
research examines the effect of code data heterogeneity on FL performance.
|
[
{
"created": "Tue, 9 Apr 2024 10:47:02 GMT",
"version": "v1"
}
] |
2024-04-10
|
[
[
"Lin",
"Zhihao",
""
],
[
"Ma",
"Wei",
""
],
[
"Lin",
"Tao",
""
],
[
"Zheng",
"Yaowen",
""
],
[
"Ge",
"Jingquan",
""
],
[
"Wang",
"Jun",
""
],
[
"Klein",
"Jacques",
""
],
[
"Bissyande",
"Tegawende",
""
],
[
"Liu",
"Yang",
""
],
[
"Li",
"Li",
""
]
] |
Large Language Models (LLMs) have become instrumental in advancing software engineering (SE) tasks, showcasing their efficacy in code understanding and beyond. Like traditional SE tools, open-source collaboration is key to realising excellent products. With AI models, however, the essential need is data. The collaboration of these AI-based SE models hinges on maximising the sources of high-quality data. However, data, especially of high quality, often holds commercial or sensitive value, making it less accessible for open-source AI-based SE projects. This reality presents a significant barrier to the development and enhancement of AI-based SE tools within the software engineering community. Therefore, researchers need to find solutions for enabling open-source AI-based SE models to tap into resources held by different organisations. Addressing this challenge, our position paper investigates one solution to facilitate access to diverse organizational resources for open-source AI models, ensuring privacy and commercial sensitivities are respected. We introduce a governance framework centered on federated learning (FL), designed to foster the joint development and maintenance of open-source AI code models while safeguarding data privacy and security. Additionally, we present guidelines for developers on AI-based SE tool collaboration, covering data requirements, model architecture, updating strategies, and version control. Given the significant influence of data characteristics on FL, our research examines the effect of code data heterogeneity on FL performance.
|
2005.13303
|
Junqi Zhang
|
Junqi Zhang, Bing Bai, Ye Lin, Jian Liang, Kun Bai, Fei Wang
|
General-Purpose User Embeddings based on Mobile App Usage
|
To be published in the KDD2020 proceedings as a full paper
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we report our recent practice at Tencent for user modeling
based on mobile app usage. User behaviors on mobile app usage, including
retention, installation, and uninstallation, can be a good indicator for both
long-term and short-term interests of users. For example, if a user recently
installed Snapseed, she might have a growing interest in photography. Such
information is valuable for numerous downstream applications, including
advertising, recommendations, etc. Traditionally, user modeling from mobile app
usage heavily relies on handcrafted feature engineering, which requires onerous
human work for different downstream applications, and could be sub-optimal
without domain experts. However, automatic user modeling based on mobile app
usage faces unique challenges, including (1) retention, installation, and
uninstallation are heterogeneous but need to be modeled collectively, (2) user
behaviors are distributed unevenly over time, and (3) many long-tailed apps
suffer from serious sparsity. In this paper, we present a tailored
AutoEncoder-coupled Transformer Network (AETN), by which we overcome these
challenges and achieve the goals of reducing manual efforts and boosting
performance. We have deployed the model at Tencent, and both online/offline
experiments from multiple domains of downstream applications have demonstrated
the effectiveness of the output user embeddings.
|
[
{
"created": "Wed, 27 May 2020 12:01:50 GMT",
"version": "v1"
}
] |
2020-05-28
|
[
[
"Zhang",
"Junqi",
""
],
[
"Bai",
"Bing",
""
],
[
"Lin",
"Ye",
""
],
[
"Liang",
"Jian",
""
],
[
"Bai",
"Kun",
""
],
[
"Wang",
"Fei",
""
]
] |
In this paper, we report our recent practice at Tencent for user modeling based on mobile app usage. User behaviors on mobile app usage, including retention, installation, and uninstallation, can be a good indicator for both long-term and short-term interests of users. For example, if a user recently installed Snapseed, she might have a growing interest in photography. Such information is valuable for numerous downstream applications, including advertising, recommendations, etc. Traditionally, user modeling from mobile app usage heavily relies on handcrafted feature engineering, which requires onerous human work for different downstream applications, and could be sub-optimal without domain experts. However, automatic user modeling based on mobile app usage faces unique challenges, including (1) retention, installation, and uninstallation are heterogeneous but need to be modeled collectively, (2) user behaviors are distributed unevenly over time, and (3) many long-tailed apps suffer from serious sparsity. In this paper, we present a tailored AutoEncoder-coupled Transformer Network (AETN), by which we overcome these challenges and achieve the goals of reducing manual efforts and boosting performance. We have deployed the model at Tencent, and both online/offline experiments from multiple domains of downstream applications have demonstrated the effectiveness of the output user embeddings.
|
1705.04828
|
Yiluan Guo
|
Yiluan Guo, Hossein Nejati, Ngai-Man Cheung
|
Deep neural networks on graph signals for brain imaging analysis
|
Accepted by ICIP 2017
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Brain imaging data such as EEG or MEG are high-dimensional spatiotemporal
data often degraded by complex, non-Gaussian noise. For reliable analysis of
brain imaging data, it is important to extract discriminative, low-dimensional
intrinsic representations of the recorded data. This work proposes a new method
to learn the low-dimensional representations from the noise-degraded
measurements. In particular, our work proposes a new deep neural network design
that integrates graph information such as brain connectivity with
fully-connected layers. Our work leverages efficient graph filter design using
Chebyshev polynomials and recent work on convolutional nets on graph-structured
data. Our approach exploits graph structure as the prior side information,
localized graph filter for feature extraction and neural networks for high
capacity learning. Experiments on real MEG datasets show that our approach can
extract more discriminative representations, leading to improved accuracy in a
supervised classification task.
|
[
{
"created": "Sat, 13 May 2017 13:50:47 GMT",
"version": "v1"
}
] |
2017-05-16
|
[
[
"Guo",
"Yiluan",
""
],
[
"Nejati",
"Hossein",
""
],
[
"Cheung",
"Ngai-Man",
""
]
] |
Brain imaging data such as EEG or MEG are high-dimensional spatiotemporal data often degraded by complex, non-Gaussian noise. For reliable analysis of brain imaging data, it is important to extract discriminative, low-dimensional intrinsic representations of the recorded data. This work proposes a new method to learn the low-dimensional representations from the noise-degraded measurements. In particular, our work proposes a new deep neural network design that integrates graph information such as brain connectivity with fully-connected layers. Our work leverages efficient graph filter design using Chebyshev polynomials and recent work on convolutional nets on graph-structured data. Our approach exploits graph structure as the prior side information, localized graph filter for feature extraction and neural networks for high capacity learning. Experiments on real MEG datasets show that our approach can extract more discriminative representations, leading to improved accuracy in a supervised classification task.
|
1806.01175
|
Artemij Amiranashvili
|
Artemij Amiranashvili, Alexey Dosovitskiy, Vladlen Koltun, Thomas Brox
|
TD or not TD: Analyzing the Role of Temporal Differencing in Deep
Reinforcement Learning
| null | null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Our understanding of reinforcement learning (RL) has been shaped by
theoretical and empirical results that were obtained decades ago using tabular
representations and linear function approximators. These results suggest that
RL methods that use temporal differencing (TD) are superior to direct Monte
Carlo estimation (MC). How do these results hold up in deep RL, which deals
with perceptually complex environments and deep nonlinear models? In this
paper, we re-examine the role of TD in modern deep RL, using specially designed
environments that control for specific factors that affect performance, such as
reward sparsity, reward delay, and the perceptual complexity of the task. When
comparing TD with infinite-horizon MC, we are able to reproduce classic results
in modern settings. Yet we also find that finite-horizon MC is not inferior to
TD, even when rewards are sparse or delayed. This makes MC a viable alternative
to TD in deep RL.
|
[
{
"created": "Mon, 4 Jun 2018 16:16:51 GMT",
"version": "v1"
}
] |
2018-06-05
|
[
[
"Amiranashvili",
"Artemij",
""
],
[
"Dosovitskiy",
"Alexey",
""
],
[
"Koltun",
"Vladlen",
""
],
[
"Brox",
"Thomas",
""
]
] |
Our understanding of reinforcement learning (RL) has been shaped by theoretical and empirical results that were obtained decades ago using tabular representations and linear function approximators. These results suggest that RL methods that use temporal differencing (TD) are superior to direct Monte Carlo estimation (MC). How do these results hold up in deep RL, which deals with perceptually complex environments and deep nonlinear models? In this paper, we re-examine the role of TD in modern deep RL, using specially designed environments that control for specific factors that affect performance, such as reward sparsity, reward delay, and the perceptual complexity of the task. When comparing TD with infinite-horizon MC, we are able to reproduce classic results in modern settings. Yet we also find that finite-horizon MC is not inferior to TD, even when rewards are sparse or delayed. This makes MC a viable alternative to TD in deep RL.
|
2207.03047
|
Qian Ye
|
Qian Ye, Masanori Suganuma, Takayuki Okatani
|
Single-image Defocus Deblurring by Integration of Defocus Map Prediction
Tracing the Inverse Problem Computation
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we consider the problem of defocus image deblurring. Previous
classical methods follow a two-step approach, i.e., first defocus map
estimation and then non-blind deblurring. In the era of deep learning, some
researchers have tried to address these two problems with CNNs. However, the
simple concatenation of the defocus map, which represents the blur level, leads
to suboptimal performance. Considering the spatially variant property of
defocus blur and the blur level indicated in the defocus map, we employ the
defocus map as conditional guidance to adjust the features from the input
blurry images instead of simple concatenation. Then we propose a simple but
effective network
with spatial modulation based on the defocus map. To achieve this, we design a
network consisting of three sub-networks, including the defocus map estimation
network, a condition network that encodes the defocus map into condition
features, and the defocus deblurring network that performs spatially dynamic
modulation based on the condition features. Moreover, the spatially dynamic
modulation is based on an affine transform function to adjust the features from
the input blurry images. Experimental results show that our method can achieve
better quantitative and qualitative evaluation performance than the existing
state-of-the-art methods on the commonly used public test datasets.
|
[
{
"created": "Thu, 7 Jul 2022 02:15:33 GMT",
"version": "v1"
}
] |
2022-07-08
|
[
[
"Ye",
"Qian",
""
],
[
"Suganuma",
"Masanori",
""
],
[
"Okatani",
"Takayuki",
""
]
] |
In this paper, we consider the problem of defocus image deblurring. Previous classical methods follow a two-step approach, i.e., first defocus map estimation and then non-blind deblurring. In the era of deep learning, some researchers have tried to address these two problems with CNNs. However, the simple concatenation of the defocus map, which represents the blur level, leads to suboptimal performance. Considering the spatially variant property of defocus blur and the blur level indicated in the defocus map, we employ the defocus map as conditional guidance to adjust the features from the input blurry images instead of simple concatenation. Then we propose a simple but effective network with spatial modulation based on the defocus map. To achieve this, we design a network consisting of three sub-networks, including the defocus map estimation network, a condition network that encodes the defocus map into condition features, and the defocus deblurring network that performs spatially dynamic modulation based on the condition features. Moreover, the spatially dynamic modulation is based on an affine transform function to adjust the features from the input blurry images. Experimental results show that our method can achieve better quantitative and qualitative evaluation performance than the existing state-of-the-art methods on the commonly used public test datasets.
|
2205.02468
|
Jiongyu Guo
|
Jiongyu Guo, Defang Chen, Can Wang
|
Alignahead: Online Cross-Layer Knowledge Extraction on Graph Neural
Networks
|
Accepted to IJCNN-2022
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Existing knowledge distillation methods on graph neural networks (GNNs) are
almost offline, where the student model extracts knowledge from a powerful
teacher model to improve its performance. However, a pre-trained teacher model
is not always accessible due to training cost, privacy, etc. In this paper, we
propose a novel online knowledge distillation framework to resolve this
problem. Specifically, each student GNN model learns the extracted local
structure from another simultaneously trained counterpart in an alternating
training procedure. We further develop a cross-layer distillation strategy by
aligning ahead one student layer with a layer at a different depth of another
student model, which theoretically makes the structure information spread over
all layers. Experimental results on five datasets including PPI,
Coauthor-CS/Physics and Amazon-Computer/Photo demonstrate that the student
performance is consistently boosted in our collaborative training framework
without the supervision of a pre-trained teacher model. In addition, we also
find that our alignahead technique can accelerate the model convergence speed
and its effectiveness can be generally improved by increasing the number of
students in training. Code is available:
https://github.com/GuoJY-eatsTG/Alignahead
|
[
{
"created": "Thu, 5 May 2022 06:48:13 GMT",
"version": "v1"
}
] |
2022-05-06
|
[
[
"Guo",
"Jiongyu",
""
],
[
"Chen",
"Defang",
""
],
[
"Wang",
"Can",
""
]
] |
Existing knowledge distillation methods on graph neural networks (GNNs) are almost offline, where the student model extracts knowledge from a powerful teacher model to improve its performance. However, a pre-trained teacher model is not always accessible due to training cost, privacy, etc. In this paper, we propose a novel online knowledge distillation framework to resolve this problem. Specifically, each student GNN model learns the extracted local structure from another simultaneously trained counterpart in an alternating training procedure. We further develop a cross-layer distillation strategy by aligning ahead one student layer with a layer at a different depth of another student model, which theoretically makes the structure information spread over all layers. Experimental results on five datasets including PPI, Coauthor-CS/Physics and Amazon-Computer/Photo demonstrate that the student performance is consistently boosted in our collaborative training framework without the supervision of a pre-trained teacher model. In addition, we also find that our alignahead technique can accelerate the model convergence speed and its effectiveness can be generally improved by increasing the number of students in training. Code is available: https://github.com/GuoJY-eatsTG/Alignahead
|
0807.3582
|
Shashi Kiran Chilappagari
|
Shashi Kiran Chilappagari, Dung Viet Nguyen, Bane Vasic and Michael W.
Marcellin
|
Error Correction Capability of Column-Weight-Three LDPC Codes: Part II
|
7 pages, 7 figures, submitted to IEEE Transactions on Information
Theory (July 2008)
| null |
10.1109/TIT.2009.2015990
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The relation between the girth and the error correction capability of
column-weight-three LDPC codes is investigated. Specifically, it is shown that
the Gallager A algorithm can correct $g/2-1$ errors in $g/2$ iterations on a
Tanner graph of girth $g \geq 10$.
|
[
{
"created": "Wed, 23 Jul 2008 01:04:27 GMT",
"version": "v1"
}
] |
2016-11-15
|
[
[
"Chilappagari",
"Shashi Kiran",
""
],
[
"Nguyen",
"Dung Viet",
""
],
[
"Vasic",
"Bane",
""
],
[
"Marcellin",
"Michael W.",
""
]
] |
The relation between the girth and the error correction capability of column-weight-three LDPC codes is investigated. Specifically, it is shown that the Gallager A algorithm can correct $g/2-1$ errors in $g/2$ iterations on a Tanner graph of girth $g \geq 10$.
|
1401.2503
|
Tao Xiong
|
Tao Xiong, Yukun Bao, Zhongyi Hu
|
Does Restraining End Effect Matter in EMD-Based Modeling Framework for
Time Series Prediction? Some Experimental Evidences
|
28 pages
|
Neurocomputing. 123, 2013: 174-184
|
10.1016/j.neucom.2013.07.004
| null |
cs.AI stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Following the "decomposition-and-ensemble" principle, the empirical mode
decomposition (EMD)-based modeling framework has been widely used as a
promising alternative for nonlinear and nonstationary time series modeling and
prediction. The end effect, which occurs during the sifting process of EMD and
is apt to distort the decomposed sub-series and hurt the subsequent modeling
process, has, however, been ignored in previous studies. Addressing the end
effect issue, this study proposes to incorporate end condition methods into the
EMD-based decomposition-and-ensemble modeling framework for one- and multi-step
ahead time series prediction. Four well-established end condition methods,
Mirror method, Coughlin's method, Slope-based method, and Rato's method, are
selected, and support vector regression (SVR) is employed as the modeling
technique. For the purpose of justification and comparison, well-known NN3
competition data sets are used and four well-established prediction models are
selected as benchmarks. The experimental results demonstrated that significant
improvement can be achieved by the proposed EMD-based SVR models with end
condition methods. The EMD-SBM-SVR and EMD-Rato-SVR models, in particular,
achieved the best prediction performance in terms of goodness-of-forecast
measures and the test of equality of accuracy of competing forecasts.
|
[
{
"created": "Sat, 11 Jan 2014 06:08:04 GMT",
"version": "v1"
}
] |
2014-01-14
|
[
[
"Xiong",
"Tao",
""
],
[
"Bao",
"Yukun",
""
],
[
"Hu",
"Zhongyi",
""
]
] |
Following the "decomposition-and-ensemble" principle, the empirical mode decomposition (EMD)-based modeling framework has been widely used as a promising alternative for nonlinear and nonstationary time series modeling and prediction. The end effect, which occurs during the sifting process of EMD and is apt to distort the decomposed sub-series and hurt the modeling process followed, however, has been ignored in previous studies. Addressing the end effect issue, this study proposes to incorporate end condition methods into EMD-based decomposition and ensemble modeling framework for one- and multi-step ahead time series prediction. Four well-established end condition methods, Mirror method, Coughlin's method, Slope-based method, and Rato's method, are selected, and support vector regression (SVR) is employed as the modeling technique. For the purpose of justification and comparison, well-known NN3 competition data sets are used and four well-established prediction models are selected as benchmarks. The experimental results demonstrated that significant improvement can be achieved by the proposed EMD-based SVR models with end condition methods. The EMD-SBM-SVR model and EMD-Rato-SVR model, in particular, achieved the best prediction performances in terms of goodness of forecast measures and equality of accuracy of competing forecasts test.
|
1905.03410
|
Jonathan Scarlett
|
Zihan Li, Matthias Fresacher, Jonathan Scarlett
|
Learning Erd\H{o}s-R\'enyi Random Graphs via Edge Detecting Queries
|
NeurIPS 2019
| null | null | null |
cs.IT cs.DM cs.LG math.IT math.PR stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we consider the problem of learning an unknown graph via
queries on groups of nodes, with the result indicating whether or not at least
one edge is present among those nodes. While learning arbitrary graphs with $n$
nodes and $k$ edges is known to be hard in the sense of requiring $\Omega(
\min\{ k^2 \log n, n^2\})$ tests (even when a small probability of error is
allowed), we show that learning an Erd\H{o}s-R\'enyi random graph with an
average of $\bar{k}$ edges is much easier; namely, one can attain
asymptotically vanishing error probability with only $O(\bar{k}\log n)$ tests.
We establish such bounds for a variety of algorithms inspired by the group
testing problem, with explicit constant factors indicating a near-optimal
number of tests, and in some cases asymptotic optimality including constant
factors. In addition, we present an alternative design that permits a
near-optimal sublinear decoding time of $O(\bar{k} \log^2 \bar{k} + \bar{k}
\log n)$.
|
[
{
"created": "Thu, 9 May 2019 02:10:17 GMT",
"version": "v1"
},
{
"created": "Sat, 11 May 2019 01:13:51 GMT",
"version": "v2"
},
{
"created": "Sun, 6 Oct 2019 11:02:45 GMT",
"version": "v3"
},
{
"created": "Fri, 3 Jan 2020 22:47:29 GMT",
"version": "v4"
}
] |
2020-01-07
|
[
[
"Li",
"Zihan",
""
],
[
"Fresacher",
"Matthias",
""
],
[
"Scarlett",
"Jonathan",
""
]
] |
In this paper, we consider the problem of learning an unknown graph via queries on groups of nodes, with the result indicating whether or not at least one edge is present among those nodes. While learning arbitrary graphs with $n$ nodes and $k$ edges is known to be hard in the sense of requiring $\Omega( \min\{ k^2 \log n, n^2\})$ tests (even when a small probability of error is allowed), we show that learning an Erd\H{o}s-R\'enyi random graph with an average of $\bar{k}$ edges is much easier; namely, one can attain asymptotically vanishing error probability with only $O(\bar{k}\log n)$ tests. We establish such bounds for a variety of algorithms inspired by the group testing problem, with explicit constant factors indicating a near-optimal number of tests, and in some cases asymptotic optimality including constant factors. In addition, we present an alternative design that permits a near-optimal sublinear decoding time of $O(\bar{k} \log^2 \bar{k} + \bar{k} \log n)$.
|
1712.00433
|
Zhishuai Zhang
|
Zhishuai Zhang, Siyuan Qiao, Cihang Xie, Wei Shen, Bo Wang, Alan L.
Yuille
|
Single-Shot Object Detection with Enriched Semantics
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel single shot object detection network named Detection with
Enriched Semantics (DES). Our motivation is to enrich the semantics of object
detection features within a typical deep detector, by a semantic segmentation
branch and a global activation module. The segmentation branch is supervised by
weak segmentation ground-truth, i.e., no extra annotation is required. In
conjunction with that, we employ a global activation module which learns the
relationship between channels and object classes in a self-supervised manner.
Comprehensive experimental results on both PASCAL VOC and MS COCO detection
datasets demonstrate the effectiveness of the proposed method. In particular,
with a VGG16 based DES, we achieve an mAP of 81.7 on VOC2007 test and an mAP of
32.8 on COCO test-dev with an inference speed of 31.5 milliseconds per image on
a Titan Xp GPU. With a lower resolution version, we achieve an mAP of 79.7 on
VOC2007 with an inference speed of 13.0 milliseconds per image.
|
[
{
"created": "Fri, 1 Dec 2017 18:18:42 GMT",
"version": "v1"
},
{
"created": "Sun, 8 Apr 2018 01:01:25 GMT",
"version": "v2"
}
] |
2018-04-10
|
[
[
"Zhang",
"Zhishuai",
""
],
[
"Qiao",
"Siyuan",
""
],
[
"Xie",
"Cihang",
""
],
[
"Shen",
"Wei",
""
],
[
"Wang",
"Bo",
""
],
[
"Yuille",
"Alan L.",
""
]
] |
We propose a novel single shot object detection network named Detection with Enriched Semantics (DES). Our motivation is to enrich the semantics of object detection features within a typical deep detector, by a semantic segmentation branch and a global activation module. The segmentation branch is supervised by weak segmentation ground-truth, i.e., no extra annotation is required. In conjunction with that, we employ a global activation module which learns the relationship between channels and object classes in a self-supervised manner. Comprehensive experimental results on both PASCAL VOC and MS COCO detection datasets demonstrate the effectiveness of the proposed method. In particular, with a VGG16 based DES, we achieve an mAP of 81.7 on VOC2007 test and an mAP of 32.8 on COCO test-dev with an inference speed of 31.5 milliseconds per image on a Titan Xp GPU. With a lower resolution version, we achieve an mAP of 79.7 on VOC2007 with an inference speed of 13.0 milliseconds per image.
|
2309.16729
|
Frederic Jurie
|
Sidney Besnard, Fr\'ed\'eric Jurie (UNICAEN), Jalal M. Fadili (NU,
ENSICAEN, GREYC)
|
SimPINNs: Simulation-Driven Physics-Informed Neural Networks for
Enhanced Performance in Nonlinear Inverse Problems
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a novel approach to solve inverse problems by
leveraging deep learning techniques. The objective is to infer unknown
parameters that govern a physical system based on observed data. We focus on
scenarios where the underlying forward model demonstrates pronounced nonlinear
behaviour, and where the dimensionality of the unknown parameter space is
substantially smaller than that of the observations. Our proposed method builds
upon physics-informed neural networks (PINNs) trained with a hybrid loss
function that combines observed data with simulated data generated by a known
(approximate) physical model. Experimental results on an orbit restitution
problem demonstrate that our approach surpasses the performance of standard
PINNs, providing improved accuracy and robustness.
|
[
{
"created": "Wed, 27 Sep 2023 06:34:55 GMT",
"version": "v1"
}
] |
2023-10-02
|
[
[
"Besnard",
"Sidney",
"",
"UNICAEN"
],
[
"Jurie",
"Frédéric",
"",
"UNICAEN"
],
[
"Fadili",
"Jalal M.",
"",
"NU,\n ENSICAEN, GREYC"
]
] |
This paper introduces a novel approach to solve inverse problems by leveraging deep learning techniques. The objective is to infer unknown parameters that govern a physical system based on observed data. We focus on scenarios where the underlying forward model demonstrates pronounced nonlinear behaviour, and where the dimensionality of the unknown parameter space is substantially smaller than that of the observations. Our proposed method builds upon physics-informed neural networks (PINNs) trained with a hybrid loss function that combines observed data with simulated data generated by a known (approximate) physical model. Experimental results on an orbit restitution problem demonstrate that our approach surpasses the performance of standard PINNs, providing improved accuracy and robustness.
|
2002.02831
|
Dongwei Chen
|
Dongwei Chen, Daliang Xu, Dong Tong, Kang Sun, Xuetao Guan, Chun Yang,
Xu Cheng
|
Saturation Memory Access: Mitigating Memory Spatial Errors without
Terminating Programs
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Memory spatial errors, i.e., buffer overflow vulnerabilities, have been a
well-known issue in computer security for a long time and remain one of the
root causes of exploitable vulnerabilities. Most of the existing mitigation
tools adopt a fail-stop strategy to protect programs from intrusions, which
means the victim program will be terminated upon detecting a memory safety
violation. Unfortunately, the fail-stop strategy harms the availability of
software.
In this paper, we propose Saturation Memory Access (SMA), a memory spatial
error mitigation mechanism that prevents out-of-bounds access without
terminating a program. SMA is based on a key observation that developers
generally do not rely on out-of-bounds accesses to implement program logic. SMA
modifies dynamic memory allocators and adds padding to objects to form an
enlarged object boundary. By dynamically correcting all the out-of-bounds
accesses to operate on the enlarged protecting boundaries, SMA can tolerate
out-of-bounds accesses. For the sake of compatibility, we chose tagged pointers
to record the boundary metadata of a memory object in the pointer itself, and
correct the address upon detecting out-of-bounds access.
We have implemented the prototype of SMA on LLVM 10.0. Our results show that
our compiler enables the programs to execute successfully through buffer
overflow attacks. Experiments on MiBench show that our prototype incurs an
overhead of 78\%. Further optimizations would require ISA support.
|
[
{
"created": "Fri, 7 Feb 2020 15:07:00 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Apr 2020 07:43:00 GMT",
"version": "v2"
}
] |
2020-04-07
|
[
[
"Chen",
"Dongwei",
""
],
[
"Xu",
"Daliang",
""
],
[
"Tong",
"Dong",
""
],
[
"Sun",
"Kang",
""
],
[
"Guan",
"Xuetao",
""
],
[
"Yang",
"Chun",
""
],
[
"Cheng",
"Xu",
""
]
] |
Memory spatial errors, i.e., buffer overflow vulnerabilities, have been a well-known issue in computer security for a long time and remain one of the root causes of exploitable vulnerabilities. Most of the existing mitigation tools adopt a fail-stop strategy to protect programs from intrusions, which means the victim program will be terminated upon detecting a memory safety violation. Unfortunately, the fail-stop strategy harms the availability of software. In this paper, we propose Saturation Memory Access (SMA), a memory spatial error mitigation mechanism that prevents out-of-bounds access without terminating a program. SMA is based on a key observation that developers generally do not rely on out-of-bounds accesses to implement program logic. SMA modifies dynamic memory allocators and adds padding to objects to form an enlarged object boundary. By dynamically correcting all the out-of-bounds accesses to operate on the enlarged protecting boundaries, SMA can tolerate out-of-bounds accesses. For the sake of compatibility, we chose tagged pointers to record the boundary metadata of a memory object in the pointer itself, and correct the address upon detecting out-of-bounds access. We have implemented the prototype of SMA on LLVM 10.0. Our results show that our compiler enables the programs to execute successfully through buffer overflow attacks. Experiments on MiBench show that our prototype incurs an overhead of 78\%. Further optimizations would require ISA support.
|
2302.01820
|
Thomas Nindel
|
Thomas K. Nindel, Mohcen Hafidi, Tom\'a\v{s} Iser and Alexander Wilkie
|
Automatic inference of an anatomically meaningful solid wood texture from
a single photograph
| null | null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Wood is a volumetric material with a very large appearance gamut that is
further enlarged by numerous finishing techniques. Computer graphics has made
considerable progress in creating sophisticated and flexible appearance models
that allow convincing renderings of wooden materials.
However, these do not yet allow fully automatic appearance matching to a
concrete exemplar piece of wood, and have to be fine-tuned by hand. More
general appearance matching strategies are incapable of reconstructing
anatomically meaningful volumetric information. This is essential for
applications where the internal structure of wood is significant, such as
non-planar furniture parts machined from a solid block of wood, translucent
appearance of thin wooden layers, or in the field of dendrochronology.
In this paper, we provide the two key ingredients for automatic matching of a
procedural wood appearance model to exemplar photographs: a good
initialization, built on detecting and modelling the ring structure, and a
phase-based loss function that allows growth ring deformations to be
accurately recovered and gives anatomically meaningful results.
Our ring-detection technique is based on curved Gabor filters, and robustly
works for a considerable range of wood types.
|
[
{
"created": "Fri, 3 Feb 2023 15:54:24 GMT",
"version": "v1"
}
] |
2023-02-06
|
[
[
"Nindel",
"Thomas K.",
""
],
[
"Hafidi",
"Mohcen",
""
],
[
"Iser",
"Tomáš",
""
],
[
"Wilkie",
"Alexander",
""
]
] |
Wood is a volumetric material with a very large appearance gamut that is further enlarged by numerous finishing techniques. Computer graphics has made considerable progress in creating sophisticated and flexible appearance models that allow convincing renderings of wooden materials. However, these do not yet allow fully automatic appearance matching to a concrete exemplar piece of wood, and have to be fine-tuned by hand. More general appearance matching strategies are incapable of reconstructing anatomically meaningful volumetric information. This is essential for applications where the internal structure of wood is significant, such as non-planar furniture parts machined from a solid block of wood, translucent appearance of thin wooden layers, or in the field of dendrochronology. In this paper, we provide the two key ingredients for automatic matching of a procedural wood appearance model to exemplar photographs: a good initialization, built on detecting and modelling the ring structure, and a phase-based loss function that allows growth ring deformations to be accurately recovered and gives anatomically meaningful results. Our ring-detection technique is based on curved Gabor filters, and robustly works for a considerable range of wood types.
|
1007.3835
|
Nuno P. Lopes
|
Nuno P. Lopes, Juan A. Navarro, Andrey Rybalchenko, Atul Singh
|
Applying Prolog to Develop Distributed Systems
| null |
Theory and Practice of Logic Programming, 26th Int'l. Conference
on Logic Programming (ICLP'10) Special Issue, 10(4-6):691-707, July 2010
|
10.1017/S1471068410000360
| null |
cs.PL cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Development of distributed systems is a difficult task. Declarative
programming techniques hold promising potential for effectively supporting
programmers in this challenge. While Datalog-based languages have been actively
explored for programming distributed systems, Prolog has received relatively
little attention in this application area so far. In this paper we present a
Prolog-based programming system, called DAHL, for the declarative development
of distributed systems. DAHL extends Prolog with an event-driven control
mechanism and built-in networking procedures. Our experimental evaluation using
a distributed hash-table data structure, a protocol for achieving Byzantine
fault tolerance, and a distributed software model checker - all implemented in
DAHL - indicates the viability of the approach.
|
[
{
"created": "Thu, 22 Jul 2010 09:28:10 GMT",
"version": "v1"
}
] |
2010-07-23
|
[
[
"Lopes",
"Nuno P.",
""
],
[
"Navarro",
"Juan A.",
""
],
[
"Rybalchenko",
"Andrey",
""
],
[
"Singh",
"Atul",
""
]
] |
Development of distributed systems is a difficult task. Declarative programming techniques hold promising potential for effectively supporting programmers in this challenge. While Datalog-based languages have been actively explored for programming distributed systems, Prolog has received relatively little attention in this application area so far. In this paper we present a Prolog-based programming system, called DAHL, for the declarative development of distributed systems. DAHL extends Prolog with an event-driven control mechanism and built-in networking procedures. Our experimental evaluation using a distributed hash-table data structure, a protocol for achieving Byzantine fault tolerance, and a distributed software model checker - all implemented in DAHL - indicates the viability of the approach.
|
1001.4119
|
Stephane Gaubert
|
Xavier Allamigeon, Stephane Gaubert, Eric Goubault
|
The tropical double description method
|
12 pages, prepared for the Proceedings of the Symposium on
Theoretical Aspects of Computer Science, 2010, Nancy, France
| null |
10.4230/LIPIcs.STACS.2010.2443
| null |
cs.CG cs.DM
|
http://creativecommons.org/licenses/by/3.0/
|
We develop a tropical analogue of the classical double description method
allowing one to compute an internal representation (in terms of vertices) of a
polyhedron defined externally (by inequalities). The heart of the tropical
algorithm is a characterization of the extreme points of a polyhedron in terms
of a system of constraints which define it. We show that checking the
extremality of a point reduces to checking whether there is only one minimal
strongly connected component in a hypergraph. The latter problem can be solved
in almost linear time, which allows us to eliminate quickly redundant
generators. We report extensive tests (including benchmarks from an application
to static analysis) showing that the method outperforms experimentally the
previous ones by orders of magnitude. The present tools also lead to worst case
bounds which improve the ones provided by previous methods.
|
[
{
"created": "Sat, 23 Jan 2010 02:01:06 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Feb 2010 14:55:37 GMT",
"version": "v2"
}
] |
2011-12-30
|
[
[
"Allamigeon",
"Xavier",
""
],
[
"Gaubert",
"Stephane",
""
],
[
"Goubault",
"Eric",
""
]
] |
We develop a tropical analogue of the classical double description method allowing one to compute an internal representation (in terms of vertices) of a polyhedron defined externally (by inequalities). The heart of the tropical algorithm is a characterization of the extreme points of a polyhedron in terms of a system of constraints which define it. We show that checking the extremality of a point reduces to checking whether there is only one minimal strongly connected component in a hypergraph. The latter problem can be solved in almost linear time, which allows us to eliminate quickly redundant generators. We report extensive tests (including benchmarks from an application to static analysis) showing that the method outperforms experimentally the previous ones by orders of magnitude. The present tools also lead to worst case bounds which improve the ones provided by previous methods.
|
2301.01104
|
Yang Tian
|
Wei Xiong, Muyuan Ma, Xiaomeng Huang, Ziyang Zhang, Pei Sun, Yang Tian
|
KoopmanLab: machine learning for solving complex physics equations
| null | null | null | null |
cs.LG cs.NA math.NA physics.comp-ph physics.flu-dyn
|
http://creativecommons.org/licenses/by/4.0/
|
Numerous physics theories are rooted in partial differential equations
(PDEs). However, the increasingly intricate physics equations, especially those
that lack analytic solutions or closed forms, have impeded the further
development of physics. Computationally solving PDEs by classic numerical
approaches suffers from the trade-off between accuracy and efficiency and is
not applicable to the empirical data generated by unknown latent PDEs. To
overcome this challenge, we present KoopmanLab, an efficient module of the
Koopman neural operator family, for learning PDEs without analytic solutions or
closed forms. Our module consists of multiple variants of the Koopman neural
operator (KNO), a kind of mesh-independent neural-network-based PDE solver
developed following dynamical systems theory. The compact variants of KNO can
accurately solve PDEs with small model sizes, while the large variants of KNO
are more competitive in predicting highly complicated dynamic systems governed
by unknown, high-dimensional, and non-linear PDEs. All variants are validated by
mesh-independent and long-term prediction experiments implemented on
representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers
equation in fluid mechanics) and ERA5 (i.e., one of the largest high-resolution
global-scale climate data sets in earth physics). These demonstrations suggest
the potential of KoopmanLab to be a fundamental tool in diverse physics studies
related to equations or dynamic systems.
|
[
{
"created": "Tue, 3 Jan 2023 13:58:39 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Jan 2023 05:04:24 GMT",
"version": "v2"
},
{
"created": "Sun, 19 Mar 2023 13:44:49 GMT",
"version": "v3"
}
] |
2023-03-21
|
[
[
"Xiong",
"Wei",
""
],
[
"Ma",
"Muyuan",
""
],
[
"Huang",
"Xiaomeng",
""
],
[
"Zhang",
"Ziyang",
""
],
[
"Sun",
"Pei",
""
],
[
"Tian",
"Yang",
""
]
] |
Numerous physics theories are rooted in partial differential equations (PDEs). However, the increasingly intricate physics equations, especially those that lack analytic solutions or closed forms, have impeded the further development of physics. Computationally solving PDEs by classic numerical approaches suffers from the trade-off between accuracy and efficiency and is not applicable to the empirical data generated by unknown latent PDEs. To overcome this challenge, we present KoopmanLab, an efficient module of the Koopman neural operator family, for learning PDEs without analytic solutions or closed forms. Our module consists of multiple variants of the Koopman neural operator (KNO), a kind of mesh-independent neural-network-based PDE solver developed following dynamical systems theory. The compact variants of KNO can accurately solve PDEs with small model sizes, while the large variants of KNO are more competitive in predicting highly complicated dynamic systems governed by unknown, high-dimensional, and non-linear PDEs. All variants are validated by mesh-independent and long-term prediction experiments implemented on representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers equation in fluid mechanics) and ERA5 (i.e., one of the largest high-resolution global-scale climate data sets in earth physics). These demonstrations suggest the potential of KoopmanLab to be a fundamental tool in diverse physics studies related to equations or dynamic systems.
|
2112.06571
|
Takeyoshi Nagasato
|
Takeyoshi Nagasato, Kei Ishida, Ali Ercan, Tongbi Tu, Masato Kiyama,
Motoki Amagasaki, Kazuki Yokoo
|
Extension of Convolutional Neural Network along Temporal and Vertical
Directions for Precipitation Downscaling
| null | null | null | null |
cs.LG physics.ao-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning has been utilized for the statistical downscaling of climate
data. Specifically, a two-dimensional (2D) convolutional neural network (CNN)
has been successfully applied to precipitation estimation. This study
implements a three-dimensional (3D) CNN to estimate watershed-scale daily
precipitation from 3D atmospheric data and compares the results with those for
a 2D CNN. The 2D CNN is extended along the time direction (3D-CNN-Time) and the
vertical direction (3D-CNN-Vert). The precipitation estimates of these extended
CNNs are compared with those of the 2D CNN in terms of the root-mean-square
error (RMSE), Nash-Sutcliffe efficiency (NSE), and 99th percentile RMSE. It is
found that both 3D-CNN-Time and 3D-CNN-Vert improve the model accuracy for
precipitation estimation compared to the 2D CNN. 3D-CNN-Vert provided the best
estimates during the training and test periods in terms of RMSE and NSE.
|
[
{
"created": "Mon, 13 Dec 2021 11:26:12 GMT",
"version": "v1"
}
] |
2021-12-14
|
[
[
"Nagasato",
"Takeyoshi",
""
],
[
"Ishida",
"Kei",
""
],
[
"Ercan",
"Ali",
""
],
[
"Tu",
"Tongbi",
""
],
[
"Kiyama",
"Masato",
""
],
[
"Amagasaki",
"Motoki",
""
],
[
"Yokoo",
"Kazuki",
""
]
] |
Deep learning has been utilized for the statistical downscaling of climate data. Specifically, a two-dimensional (2D) convolutional neural network (CNN) has been successfully applied to precipitation estimation. This study implements a three-dimensional (3D) CNN to estimate watershed-scale daily precipitation from 3D atmospheric data and compares the results with those for a 2D CNN. The 2D CNN is extended along the time direction (3D-CNN-Time) and the vertical direction (3D-CNN-Vert). The precipitation estimates of these extended CNNs are compared with those of the 2D CNN in terms of the root-mean-square error (RMSE), Nash-Sutcliffe efficiency (NSE), and 99th percentile RMSE. It is found that both 3D-CNN-Time and 3D-CNN-Vert improve the model accuracy for precipitation estimation compared to the 2D CNN. 3D-CNN-Vert provided the best estimates during the training and test periods in terms of RMSE and NSE.
|
2205.05304
|
Xinyu Bian
|
Xinyu Bian, Yuyi Mao, Jun Zhang
|
Error Rate Analysis for Grant-free Massive Random Access with
Short-Packet Transmission
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Grant-free massive random access (RA) is a promising protocol to support the
massive machine-type communications (mMTC) scenario in 5G and beyond networks.
In this paper, we focus on the error rate analysis in grant-free massive RA,
which is critical for practical deployment but has not been well studied. We
consider a two-phase frame structure, with a pilot transmission phase for
activity detection and channel estimation, followed by a data transmission
phase with coded data symbols. Considering the characteristics of short-packet
transmission, we analyze the block error rate (BLER) in the finite blocklength
regime to characterize the data transmission performance. The analysis involves
characterizing the activity detection and channel estimation errors as well as
applying the random matrix theory (RMT) to analyze the distribution of the
post-processing signal-to-noise ratio (SNR). As a case study, the derived BLER
expression is further simplified to optimize the pilot length. Simulation
results verify our analysis and demonstrate its effectiveness in pilot length
optimization.
|
[
{
"created": "Wed, 11 May 2022 07:13:48 GMT",
"version": "v1"
}
] |
2022-05-12
|
[
[
"Bian",
"Xinyu",
""
],
[
"Mao",
"Yuyi",
""
],
[
"Zhang",
"Jun",
""
]
] |
Grant-free massive random access (RA) is a promising protocol to support the massive machine-type communications (mMTC) scenario in 5G and beyond networks. In this paper, we focus on the error rate analysis in grant-free massive RA, which is critical for practical deployment but has not been well studied. We consider a two-phase frame structure, with a pilot transmission phase for activity detection and channel estimation, followed by a data transmission phase with coded data symbols. Considering the characteristics of short-packet transmission, we analyze the block error rate (BLER) in the finite blocklength regime to characterize the data transmission performance. The analysis involves characterizing the activity detection and channel estimation errors as well as applying the random matrix theory (RMT) to analyze the distribution of the post-processing signal-to-noise ratio (SNR). As a case study, the derived BLER expression is further simplified to optimize the pilot length. Simulation results verify our analysis and demonstrate its effectiveness in pilot length optimization.
|
2207.12271
|
Gopiram Roshan Lal
|
G Roshan Lal and Varun Mithal
|
NN2Rules: Extracting Rule List from Neural Networks
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We present an algorithm, NN2Rules, to convert a trained neural network into a
rule list. Rule lists are more interpretable since they align better with the
way humans make decisions. NN2Rules is a decompositional approach to rule
extraction, i.e., it extracts a set of decision rules from the parameters of
the trained neural network model. We show that the decision rules extracted
have the same prediction as the neural network on any input presented to it,
and hence the same accuracy. A key contribution of NN2Rules is that it allows
hidden neuron behavior to be either soft-binary (e.g., sigmoid activation) or
rectified linear (ReLU) as opposed to existing decompositional approaches that
were developed with the assumption of soft-binary activation.
|
[
{
"created": "Mon, 4 Jul 2022 09:19:47 GMT",
"version": "v1"
}
] |
2022-07-26
|
[
[
"Lal",
"G Roshan",
""
],
[
"Mithal",
"Varun",
""
]
] |
We present an algorithm, NN2Rules, to convert a trained neural network into a rule list. Rule lists are more interpretable since they align better with the way humans make decisions. NN2Rules is a decompositional approach to rule extraction, i.e., it extracts a set of decision rules from the parameters of the trained neural network model. We show that the decision rules extracted have the same prediction as the neural network on any input presented to it, and hence the same accuracy. A key contribution of NN2Rules is that it allows hidden neuron behavior to be either soft-binary (e.g., sigmoid activation) or rectified linear (ReLU) as opposed to existing decompositional approaches that were developed with the assumption of soft-binary activation.
|
2309.16520
|
Wenqi Jiang
|
Wenqi Jiang, Martin Parvanov, Gustavo Alonso
|
SwiftSpatial: Spatial Joins on Modern Hardware
| null | null | null | null |
cs.DB cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Spatial joins are among the most time-consuming queries in spatial data
management systems. In this paper, we propose SwiftSpatial, a specialized
accelerator architecture tailored for spatial joins. SwiftSpatial contains
multiple high-performance join units with innovative hybrid parallelism,
several efficient memory management units, and an integrated on-chip join
scheduler. We prototype SwiftSpatial on an FPGA and incorporate the R-tree
synchronous traversal algorithm as the control flow. Benchmarked against
various CPU and GPU-based spatial data processing systems, SwiftSpatial
demonstrates a latency reduction of up to 5.36x relative to the best-performing
baseline, while requiring 6.16x less power. The remarkable performance and
energy efficiency of SwiftSpatial lay a solid foundation for its future
integration into spatial data management systems, both in data centers and at
the edge.
|
[
{
"created": "Thu, 28 Sep 2023 15:26:36 GMT",
"version": "v1"
}
] |
2023-09-29
|
[
[
"Jiang",
"Wenqi",
""
],
[
"Parvanov",
"Martin",
""
],
[
"Alonso",
"Gustavo",
""
]
] |
Spatial joins are among the most time-consuming queries in spatial data management systems. In this paper, we propose SwiftSpatial, a specialized accelerator architecture tailored for spatial joins. SwiftSpatial contains multiple high-performance join units with innovative hybrid parallelism, several efficient memory management units, and an integrated on-chip join scheduler. We prototype SwiftSpatial on an FPGA and incorporate the R-tree synchronous traversal algorithm as the control flow. Benchmarked against various CPU and GPU-based spatial data processing systems, SwiftSpatial demonstrates a latency reduction of up to 5.36x relative to the best-performing baseline, while requiring 6.16x less power. The remarkable performance and energy efficiency of SwiftSpatial lay a solid foundation for its future integration into spatial data management systems, both in data centers and at the edge.
|
2403.10833
|
Guillaume Sartoretti
|
Yuhong Cao and Rui Zhao and Yizhuo Wang and Bairan Xiang and Guillaume
Sartoretti
|
Deep Reinforcement Learning-based Large-scale Robot Exploration
|
© 20XX IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or
future media, including reprinting/republishing this material for advertising
or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of
this work in other works
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we propose a deep reinforcement learning (DRL) based reactive
planner to solve large-scale Lidar-based autonomous robot exploration problems
in 2D action space. Our DRL-based planner allows the agent to reactively plan
its exploration path by making implicit predictions about unknown areas, based
on a learned estimation of the underlying transition model of the environment.
To this end, our approach relies on learned attention mechanisms for their
powerful ability to capture long-term dependencies at different spatial scales
to reason about the robot's entire belief over known areas. Our approach relies
on ground truth information (i.e., privileged learning) to guide the
environment estimation during training, as well as on a graph rarefaction
algorithm, which allows models trained in small-scale environments to scale to
large-scale ones. Simulation results show that our model exhibits better
exploration efficiency (12% in path length, 6% in makespan) and lower planning
time (60%) than the state-of-the-art planners in a 130m x 100m benchmark
scenario. We also validate our learned model on hardware.
|
[
{
"created": "Sat, 16 Mar 2024 06:56:32 GMT",
"version": "v1"
}
] |
2024-03-19
|
[
[
"Cao",
"Yuhong",
""
],
[
"Zhao",
"Rui",
""
],
[
"Wang",
"Yizhuo",
""
],
[
"Xiang",
"Bairan",
""
],
[
"Sartoretti",
"Guillaume",
""
]
] |
In this work, we propose a deep reinforcement learning (DRL) based reactive planner to solve large-scale Lidar-based autonomous robot exploration problems in 2D action space. Our DRL-based planner allows the agent to reactively plan its exploration path by making implicit predictions about unknown areas, based on a learned estimation of the underlying transition model of the environment. To this end, our approach relies on learned attention mechanisms for their powerful ability to capture long-term dependencies at different spatial scales to reason about the robot's entire belief over known areas. Our approach relies on ground truth information (i.e., privileged learning) to guide the environment estimation during training, as well as on a graph rarefaction algorithm, which allows models trained in small-scale environments to scale to large-scale ones. Simulation results show that our model exhibits better exploration efficiency (12% in path length, 6% in makespan) and lower planning time (60%) than the state-of-the-art planners in a 130m x 100m benchmark scenario. We also validate our learned model on hardware.
|
1510.07273
|
Hampei Sasahara
|
Hampei Sasahara and Kazunori Hayashi and Masaaki Nagahara
|
Multiuser Detection by MAP Estimation with Sum-of-Absolute-Values
Relaxation
|
submitted; 6 pages, 7 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this article, we consider multiuser detection that copes with multiple
access interference caused in star-topology machine-to-machine (M2M)
communications. We assume that the transmitted signals are discrete-valued
(e.g. binary signals taking values of $\pm 1$), which is taken into account as
prior information in detection. We formulate the detection problem as the
maximum a posteriori (MAP) estimation, which is relaxed to a convex
optimization called the sum-of-absolute-values (SOAV) optimization. The SOAV
optimization can be efficiently solved by a proximal splitting algorithm, for
which we give the proximity operator in a closed form. Numerical simulations
are shown to illustrate the effectiveness of the proposed approach compared
with the linear minimum mean-square-error (LMMSE) and the least absolute
shrinkage and selection operator (LASSO) methods.
|
[
{
"created": "Sun, 25 Oct 2015 17:29:13 GMT",
"version": "v1"
}
] |
2015-10-27
|
[
[
"Sasahara",
"Hampei",
""
],
[
"Hayashi",
"Kazunori",
""
],
[
"Nagahara",
"Masaaki",
""
]
] |
In this article, we consider multiuser detection that copes with multiple access interference caused in star-topology machine-to-machine (M2M) communications. We assume that the transmitted signals are discrete-valued (e.g. binary signals taking values of $\pm 1$), which is taken into account as prior information in detection. We formulate the detection problem as the maximum a posteriori (MAP) estimation, which is relaxed to a convex optimization called the sum-of-absolute-values (SOAV) optimization. The SOAV optimization can be efficiently solved by a proximal splitting algorithm, for which we give the proximity operator in a closed form. Numerical simulations are shown to illustrate the effectiveness of the proposed approach compared with the linear minimum mean-square-error (LMMSE) and the least absolute shrinkage and selection operator (LASSO) methods.
|
1808.10062
|
Andrew Lohn
|
Andrew J. Lohn
|
Timelines for In-Code Discovery of Zero-Day Vulnerabilities and
Supply-Chain Attacks
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Zero-day vulnerabilities can be accidentally or maliciously placed in code
and can remain in place for years. In this study, we address an aspect of their
longevity by considering the likelihood that they will be discovered in the
code across versions. We approximate well-disguised vulnerabilities as only
being discoverable if the relevant lines of code are explicitly examined, and
obvious vulnerabilities as being discoverable if any part of the relevant file
is examined. We analyze the version-to-version changes in three types of open
source software (Mozilla Firefox, GNU/Linux, and glibc) to understand the rate
at which the various pieces of code are amended and find that much of the
revision behavior can be captured with a simple intuitive model. We use that
model and the data from over a billion unique lines of code in 87 different
versions of software to specify the bounds for in-code discoverability of
vulnerabilities - from expertly hidden to obviously observable.
|
[
{
"created": "Wed, 29 Aug 2018 23:05:08 GMT",
"version": "v1"
},
{
"created": "Fri, 31 Aug 2018 16:57:46 GMT",
"version": "v2"
}
] |
2018-09-03
|
[
[
"Lohn",
"Andrew J.",
""
]
] |
Zero-day vulnerabilities can be accidentally or maliciously placed in code and can remain in place for years. In this study, we address an aspect of their longevity by considering the likelihood that they will be discovered in the code across versions. We approximate well-disguised vulnerabilities as only being discoverable if the relevant lines of code are explicitly examined, and obvious vulnerabilities as being discoverable if any part of the relevant file is examined. We analyze the version-to-version changes in three types of open source software (Mozilla Firefox, GNU/Linux, and glibc) to understand the rate at which the various pieces of code are amended and find that much of the revision behavior can be captured with a simple intuitive model. We use that model and the data from over a billion unique lines of code in 87 different versions of software to specify the bounds for in-code discoverability of vulnerabilities - from expertly hidden to obviously observable.
|
0904.3648
|
Florentina Pintea
|
Tiberiu Marius Karnyanszky, Mihai Titu
|
Computer Aided Optimization of the Unconventional Processing
|
6 pages,exposed on 1st "European Conference on Computer Sciences &
Applications" - XA2006, Timisoara, Romania
|
Ann. Univ. Tibiscus Comp. Sci. Series IV (2006), 85-90
| null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unconventional technologies, currently applied to a certain category of
materials that are difficult to process through usual techniques, have passed
through all stages during the last 60 years, from their discovery to their use
on a large scale. They are based on the elementary mechanisms that drive
processing in classic methods, yet they additionally exploit the
interconnections of these methods. This leads to a gain in performance by
increasing the precision of the outcomes, reducing the processing time,
increasing the quality of the finished product, etc. This performance can be
increased much further by using a computer and a software product to assist
the human operator in processing by an unconventional method such as electric
or electro-chemical erosion, complex electric-electro-chemical erosion,
laser-beam processing, and so on. The present work presents such an
application, based on a database combining previous experimental results,
which proposes a method for optimizing the outcomes.
|
[
{
"created": "Thu, 23 Apr 2009 10:38:24 GMT",
"version": "v1"
}
] |
2009-04-24
|
[
[
"Karnyanszky",
"Tiberiu Marius",
""
],
[
"Titu",
"Mihai",
""
]
] |
Unconventional technologies, currently applied to a certain category of materials that are difficult to process through usual techniques, have passed through all stages during the last 60 years, from their discovery to their use on a large scale. They are based on the elementary mechanisms that drive processing in classic methods, yet they additionally exploit the interconnections of these methods. This leads to a gain in performance by increasing the precision of the outcomes, reducing the processing time, increasing the quality of the finished product, etc. This performance can be increased much further by using a computer and a software product to assist the human operator in processing by an unconventional method such as electric or electro-chemical erosion, complex electric-electro-chemical erosion, laser-beam processing, and so on. The present work presents such an application, based on a database combining previous experimental results, which proposes a method for optimizing the outcomes.
|
1705.07687
|
Aitor Garcia Pablos
|
Aitor Garc\'ia-Pablos, Montse Cuadros, German Rigau
|
W2VLDA: Almost Unsupervised System for Aspect Based Sentiment Analysis
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the increase of online customer opinions in specialised websites and
social networks, the necessity of automatic systems to help to organise and
classify customer reviews by domain-specific aspect/categories and sentiment
polarity is more important than ever. Supervised approaches to Aspect Based
Sentiment Analysis obtain good results for the domain/language they are
trained on, but having manually labelled data for training supervised systems
for all domains and languages is usually very costly and time consuming. In
this work we describe W2VLDA, an almost unsupervised system based on topic
modelling, that combined with some other unsupervised methods and a minimal
configuration, performs aspect/category classification,
aspect-terms/opinion-words separation and sentiment polarity classification for
any given domain and language. We evaluate the performance of the aspect and
sentiment classification in the multilingual SemEval 2016 task 5 (ABSA)
dataset. We show competitive results for several languages (English, Spanish,
French and Dutch) and domains (hotels, restaurants, electronic-devices).
|
[
{
"created": "Mon, 22 May 2017 12:01:10 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Jul 2017 07:36:48 GMT",
"version": "v2"
}
] |
2017-07-19
|
[
[
"García-Pablos",
"Aitor",
""
],
[
"Cuadros",
"Montse",
""
],
[
"Rigau",
"German",
""
]
] |
With the increase of online customer opinions in specialised websites and social networks, the necessity of automatic systems to help to organise and classify customer reviews by domain-specific aspect/categories and sentiment polarity is more important than ever. Supervised approaches to Aspect Based Sentiment Analysis obtain good results for the domain/language they are trained on, but having manually labelled data for training supervised systems for all domains and languages is usually very costly and time consuming. In this work we describe W2VLDA, an almost unsupervised system based on topic modelling, that combined with some other unsupervised methods and a minimal configuration, performs aspect/category classification, aspect-terms/opinion-words separation and sentiment polarity classification for any given domain and language. We evaluate the performance of the aspect and sentiment classification in the multilingual SemEval 2016 task 5 (ABSA) dataset. We show competitive results for several languages (English, Spanish, French and Dutch) and domains (hotels, restaurants, electronic-devices).
|
1908.11248
|
Josef Mal\'ik
|
Josef Mal\'ik, Ond\v{r}ej Such\'y, Tom\'a\v{s} Valla
|
Efficient Implementation of Color Coding Algorithm for Subgraph
Isomorphism Problem
|
Extended abstract of this paper will appear in the proceedings of the
Special Event on Analysis of Experimental Algorithms, SEA2 2019, Lecture
Notes in Computer Science, Springer
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the subgraph isomorphism problem where, given two graphs G
(source graph) and F (pattern graph), one is to decide whether there is a (not
necessarily induced) subgraph of G isomorphic to F. While many practical
heuristic algorithms have been developed for the problem, as pointed out by
McCreesh et al. [JAIR 2018], for each of them there are rather small instances
with which they cannot cope. Therefore, developing an alternative approach that
could possibly cope with these hard instances would be of interest.
A seminal paper by Alon, Yuster and Zwick [J. ACM 1995] introduced the color
coding approach to solve the problem, where the main part is a dynamic
programming over color subsets and partial mappings. As with many
exponential-time dynamic programming algorithms, the memory requirements
constitute the main limiting factor for its usage. Because these requirements
grow exponentially with the treewidth of the pattern graph, all existing
implementations based on the color coding principle restrict themselves to
specific pattern graphs, e.g., paths or trees. In contrast, we provide an
efficient implementation of the algorithm significantly reducing its memory
requirements so that it can be used for pattern graphs of larger treewidth.
Moreover, our implementation not only decides the existence of an isomorphic
subgraph, but it also enumerates all such subgraphs (or given number of them).
We provide an extensive experimental comparison of our implementation to
other available solvers for the problem.
|
[
{
"created": "Thu, 29 Aug 2019 14:15:11 GMT",
"version": "v1"
}
] |
2019-08-30
|
[
[
"Malík",
"Josef",
""
],
[
"Suchý",
"Ondřej",
""
],
[
"Valla",
"Tomáš",
""
]
] |
We consider the subgraph isomorphism problem where, given two graphs G (source graph) and F (pattern graph), one is to decide whether there is a (not necessarily induced) subgraph of G isomorphic to F. While many practical heuristic algorithms have been developed for the problem, as pointed out by McCreesh et al. [JAIR 2018], for each of them there are rather small instances with which they cannot cope. Therefore, developing an alternative approach that could possibly cope with these hard instances would be of interest. A seminal paper by Alon, Yuster and Zwick [J. ACM 1995] introduced the color coding approach to solve the problem, where the main part is a dynamic programming over color subsets and partial mappings. As with many exponential-time dynamic programming algorithms, the memory requirements constitute the main limiting factor for its usage. Because these requirements grow exponentially with the treewidth of the pattern graph, all existing implementations based on the color coding principle restrict themselves to specific pattern graphs, e.g., paths or trees. In contrast, we provide an efficient implementation of the algorithm significantly reducing its memory requirements so that it can be used for pattern graphs of larger treewidth. Moreover, our implementation not only decides the existence of an isomorphic subgraph, but it also enumerates all such subgraphs (or given number of them). We provide an extensive experimental comparison of our implementation to other available solvers for the problem.
|
0908.0570
|
Piyush Rai
|
Piyush Rai and Hal Daum\'e III
|
The Infinite Hierarchical Factor Regression Model
| null |
NIPS 2008
| null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a nonparametric Bayesian factor regression model that accounts for
uncertainty in the number of factors, and the relationship between factors. To
accomplish this, we propose a sparse variant of the Indian Buffet Process and
couple this with a hierarchical model over factors, based on Kingman's
coalescent. We apply this model to two problems (factor analysis and factor
regression) in gene-expression data analysis.
|
[
{
"created": "Wed, 5 Aug 2009 01:10:09 GMT",
"version": "v1"
}
] |
2009-08-06
|
[
[
"Rai",
"Piyush",
""
],
[
"Daumé",
"Hal",
"III"
]
] |
We propose a nonparametric Bayesian factor regression model that accounts for uncertainty in the number of factors, and the relationship between factors. To accomplish this, we propose a sparse variant of the Indian Buffet Process and couple this with a hierarchical model over factors, based on Kingman's coalescent. We apply this model to two problems (factor analysis and factor regression) in gene-expression data analysis.
|
2110.14844
|
Haonan Wang
|
Yao Zhou, Haonan Wang, Jingrui He, Haixun Wang
|
From Intrinsic to Counterfactual: On the Explainability of
Contextualized Recommender Systems
| null | null | null | null |
cs.IR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the prevalence of deep learning based embedding approaches, recommender
systems have become a proven and indispensable tool in various information
filtering applications. However, for many of them it remains difficult to
diagnose what aspects of the deep models' input drive the final ranking
decision, and thus they often cannot be understood by human stakeholders. In this paper, we investigate
the dilemma between recommendation and explainability, and show that by
utilizing the contextual features (e.g., item reviews from users), we can
design a series of explainable recommender systems without sacrificing their
performance. In particular, we propose three types of explainable
recommendation strategies with gradual change of model transparency: whitebox,
graybox, and blackbox. Each strategy explains its ranking decisions via
different mechanisms: attention weights, adversarial perturbations, and
counterfactual perturbations. We apply these explainable models on five
real-world data sets under the contextualized setting where users and items
have explicit interactions. The empirical results show that our model achieves
highly competitive ranking performance, and generates accurate and effective
explanations in terms of numerous quantitative metrics and qualitative
visualizations.
|
[
{
"created": "Thu, 28 Oct 2021 01:54:04 GMT",
"version": "v1"
}
] |
2021-10-29
|
[
[
"Zhou",
"Yao",
""
],
[
"Wang",
"Haonan",
""
],
[
"He",
"Jingrui",
""
],
[
"Wang",
"Haixun",
""
]
] |
With the prevalence of deep learning based embedding approaches, recommender systems have become a proven and indispensable tool in various information filtering applications. However, for many of them it remains difficult to diagnose what aspects of the deep models' input drive the final ranking decision, and thus they often cannot be understood by human stakeholders. In this paper, we investigate the dilemma between recommendation and explainability, and show that by utilizing the contextual features (e.g., item reviews from users), we can design a series of explainable recommender systems without sacrificing their performance. In particular, we propose three types of explainable recommendation strategies with gradual change of model transparency: whitebox, graybox, and blackbox. Each strategy explains its ranking decisions via different mechanisms: attention weights, adversarial perturbations, and counterfactual perturbations. We apply these explainable models on five real-world data sets under the contextualized setting where users and items have explicit interactions. The empirical results show that our model achieves highly competitive ranking performance, and generates accurate and effective explanations in terms of numerous quantitative metrics and qualitative visualizations.
|
2207.00308
|
Fabrizio d'Amore
|
Fabrizio d'Amore
|
Quality increases as the error rate decreases
|
6 pages + 1 page author info
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this paper we propose an approach to the design of processes and software
that aims at decreasing human and software errors, which happen so frequently
and cause the people affected to waste a great deal of time fixing them. We
base our statements on the natural relationship between quality and error
rate: the former increases as the error rate decreases. We try to classify
errors into several types and address techniques to reduce the likelihood of
making mistakes, depending on the type of error.
We focus on this approach in relation to organization, management and
software design, which will allow us to be more effective and efficient in
this period in which mankind has been affected by a severe pandemic and in
which we need to be more efficient and effective in all processes, aiming at
an industrial renaissance which we know to be not too far away and easily
reachable once the path to follow has been characterized, also in the light
of experience.
|
[
{
"created": "Fri, 1 Jul 2022 09:55:46 GMT",
"version": "v1"
}
] |
2022-07-04
|
[
[
"d'Amore",
"Fabrizio",
""
]
] |
In this paper we propose an approach to the design of processes and software that aims at decreasing human and software errors, which happen so frequently and cause the people affected to waste a great deal of time fixing them. We base our statements on the natural relationship between quality and error rate: the former increases as the error rate decreases. We try to classify errors into several types and address techniques to reduce the likelihood of making mistakes, depending on the type of error. We focus on this approach in relation to organization, management and software design, which will allow us to be more effective and efficient in this period in which mankind has been affected by a severe pandemic and in which we need to be more efficient and effective in all processes, aiming at an industrial renaissance which we know to be not too far away and easily reachable once the path to follow has been characterized, also in the light of experience.
|
1708.04202
|
Marwin Segler
|
Marwin H.S. Segler, Mike Preuss, Mark P. Waller
|
Learning to Plan Chemical Syntheses
| null |
Nature 555 (2018), 604-610
|
10.1038/nature25978
| null |
cs.AI cs.LG physics.chem-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
From medicines to materials, small organic molecules are indispensable for
human well-being. To plan their syntheses, chemists employ a problem solving
technique called retrosynthesis. In retrosynthesis, target molecules are
recursively transformed into increasingly simpler precursor compounds until a
set of readily available starting materials is obtained. Computer-aided
retrosynthesis would be a highly valuable tool; however, past approaches were
slow and provided results of unsatisfactory quality. Here, we employ Monte
Carlo Tree Search (MCTS) to efficiently discover retrosynthetic routes. MCTS
was combined with an expansion policy network that guides the search, and an
"in-scope" filter network to pre-select the most promising retrosynthetic
steps. These deep neural networks were trained on 12 million reactions, which
represents essentially all reactions ever published in organic chemistry. Our
system solves almost twice as many molecules and is 30 times faster in
comparison to the traditional search method based on extracted rules and
hand-coded heuristics. Finally, after a 60-year history of computer-aided
synthesis planning, chemists can no longer distinguish between routes generated
by a computer system and real routes taken from the scientific literature. We
anticipate that our method will accelerate drug and materials discovery by
assisting chemists to plan better syntheses faster, and by enabling fully
automated robot synthesis.
|
[
{
"created": "Mon, 14 Aug 2017 16:46:08 GMT",
"version": "v1"
}
] |
2018-04-17
|
[
[
"Segler",
"Marwin H. S.",
""
],
[
"Preuss",
"Mike",
""
],
[
"Waller",
"Mark P.",
""
]
] |
From medicines to materials, small organic molecules are indispensable for human well-being. To plan their syntheses, chemists employ a problem solving technique called retrosynthesis. In retrosynthesis, target molecules are recursively transformed into increasingly simpler precursor compounds until a set of readily available starting materials is obtained. Computer-aided retrosynthesis would be a highly valuable tool; however, past approaches were slow and provided results of unsatisfactory quality. Here, we employ Monte Carlo Tree Search (MCTS) to efficiently discover retrosynthetic routes. MCTS was combined with an expansion policy network that guides the search, and an "in-scope" filter network to pre-select the most promising retrosynthetic steps. These deep neural networks were trained on 12 million reactions, which represents essentially all reactions ever published in organic chemistry. Our system solves almost twice as many molecules and is 30 times faster in comparison to the traditional search method based on extracted rules and hand-coded heuristics. Finally, after a 60-year history of computer-aided synthesis planning, chemists can no longer distinguish between routes generated by a computer system and real routes taken from the scientific literature. We anticipate that our method will accelerate drug and materials discovery by assisting chemists to plan better syntheses faster, and by enabling fully automated robot synthesis.
|
2009.08270
|
Saloni Dash
|
Saloni Dash, Vineeth N Balasubramanian, Amit Sharma
|
Evaluating and Mitigating Bias in Image Classifiers: A Causal
Perspective Using Counterfactuals
|
Accepted for Publication at WACV 2022
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Counterfactual examples for an input -- perturbations that change specific
features but not others -- have been shown to be useful for evaluating bias of
machine learning models, e.g., against specific demographic groups. However,
generating counterfactual examples for images is non-trivial due to the
underlying causal structure on the various features of an image. To be
meaningful, generated perturbations need to satisfy constraints implied by the
causal model. We present a method for generating counterfactuals by
incorporating a structural causal model (SCM) in an improved variant of
Adversarially Learned Inference (ALI), that generates counterfactuals in
accordance with the causal relationships between attributes of an image. Based
on the generated counterfactuals, we show how to explain a pre-trained machine
learning classifier, evaluate its bias, and mitigate the bias using a
counterfactual regularizer. On the Morpho-MNIST dataset, our method generates
counterfactuals comparable in quality to prior work on SCM-based
counterfactuals (DeepSCM), while on the more complex CelebA dataset our method
outperforms DeepSCM in generating high-quality valid counterfactuals. Moreover,
generated counterfactuals are indistinguishable from reconstructed images in a
human evaluation experiment and we subsequently use them to evaluate the
fairness of a standard classifier trained on CelebA data. We show that the
classifier is biased w.r.t. skin and hair color, and how counterfactual
regularization can remove those biases.
|
[
{
"created": "Thu, 17 Sep 2020 13:19:31 GMT",
"version": "v1"
},
{
"created": "Fri, 18 Dec 2020 07:52:28 GMT",
"version": "v2"
},
{
"created": "Wed, 3 Feb 2021 09:12:31 GMT",
"version": "v3"
},
{
"created": "Thu, 6 Jan 2022 12:40:39 GMT",
"version": "v4"
}
] |
2022-01-07
|
[
[
"Dash",
"Saloni",
""
],
[
"Balasubramanian",
"Vineeth N",
""
],
[
"Sharma",
"Amit",
""
]
] |
Counterfactual examples for an input -- perturbations that change specific features but not others -- have been shown to be useful for evaluating bias of machine learning models, e.g., against specific demographic groups. However, generating counterfactual examples for images is non-trivial due to the underlying causal structure on the various features of an image. To be meaningful, generated perturbations need to satisfy constraints implied by the causal model. We present a method for generating counterfactuals by incorporating a structural causal model (SCM) in an improved variant of Adversarially Learned Inference (ALI), that generates counterfactuals in accordance with the causal relationships between attributes of an image. Based on the generated counterfactuals, we show how to explain a pre-trained machine learning classifier, evaluate its bias, and mitigate the bias using a counterfactual regularizer. On the Morpho-MNIST dataset, our method generates counterfactuals comparable in quality to prior work on SCM-based counterfactuals (DeepSCM), while on the more complex CelebA dataset our method outperforms DeepSCM in generating high-quality valid counterfactuals. Moreover, generated counterfactuals are indistinguishable from reconstructed images in a human evaluation experiment and we subsequently use them to evaluate the fairness of a standard classifier trained on CelebA data. We show that the classifier is biased w.r.t. skin and hair color, and how counterfactual regularization can remove those biases.
|
2406.09553
|
Umur Aybars Ciftci
|
Umur Aybars Ciftci, Ali Kemal Tanriverdi, Ilke Demir
|
My Body My Choice: Human-Centric Full-Body Anonymization
|
AI for Content Creation Workshop @ CVPR 2024
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In an era of increasing privacy concerns for our online presence, we propose
that the decision to appear in a piece of content should only belong to the
owner of the body. Although some automatic approaches for full-body
anonymization have been proposed, human-guided anonymization can adapt to
various contexts, such as cultural norms, personal relations, esthetic
concerns, and security issues. ''My Body My Choice'' (MBMC) enables physical
and adversarial anonymization by removal and swapping approaches aimed for four
tasks, designed by single or multi, ControlNet or GAN modules, combining
several diffusion models. We evaluate anonymization on seven datasets; compare
with SOTA inpainting and anonymization methods; evaluate by image, adversarial,
and generative metrics; and conduct reidentification experiments.
|
[
{
"created": "Thu, 13 Jun 2024 19:40:30 GMT",
"version": "v1"
}
] |
2024-06-17
|
[
[
"Ciftci",
"Umur Aybars",
""
],
[
"Tanriverdi",
"Ali Kemal",
""
],
[
"Demir",
"Ilke",
""
]
] |
In an era of increasing privacy concerns for our online presence, we propose that the decision to appear in a piece of content should only belong to the owner of the body. Although some automatic approaches for full-body anonymization have been proposed, human-guided anonymization can adapt to various contexts, such as cultural norms, personal relations, esthetic concerns, and security issues. ''My Body My Choice'' (MBMC) enables physical and adversarial anonymization by removal and swapping approaches aimed for four tasks, designed by single or multi, ControlNet or GAN modules, combining several diffusion models. We evaluate anonymization on seven datasets; compare with SOTA inpainting and anonymization methods; evaluate by image, adversarial, and generative metrics; and conduct reidentification experiments.
|
2407.07742
|
Ekram Hossain
|
Atefeh Termehchi, Ekram Hossain, and Isaac Woungang
|
Science-Informed Deep Learning (ScIDL) With Applications to Wireless
Communications
| null | null | null | null |
cs.IT cs.LG cs.NI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given the extensive and growing capabilities offered by deep learning (DL),
more researchers are turning to DL to address complex challenges in
next-generation (xG) communications. However, despite its progress, DL also
reveals several limitations that are becoming increasingly evident. One
significant issue is its lack of interpretability, which is especially critical
for safety-sensitive applications. Another significant consideration is that DL
may not comply with the constraints set by physics laws or given security
standards, which are essential for reliable DL. Additionally, DL models often
struggle outside their training data distributions, which is known as poor
generalization. Moreover, there is a scarcity of theoretical guidance on
designing DL algorithms. These challenges have prompted the emergence of a
burgeoning field known as science-informed DL (ScIDL). ScIDL aims to integrate
existing scientific knowledge with DL techniques to develop more powerful
algorithms. The core objective of this article is to provide a brief tutorial
on ScIDL that illustrates its building blocks and distinguishes it from
conventional DL. Furthermore, we discuss both recent applications of ScIDL and
potential future research directions in the field of wireless communications.
|
[
{
"created": "Sat, 29 Jun 2024 02:35:39 GMT",
"version": "v1"
}
] |
2024-07-11
|
[
[
"Termehchi",
"Atefeh",
""
],
[
"Hossain",
"Ekram",
""
],
[
"Woungang",
"Isaac",
""
]
] |
Given the extensive and growing capabilities offered by deep learning (DL), more researchers are turning to DL to address complex challenges in next-generation (xG) communications. However, despite its progress, DL also reveals several limitations that are becoming increasingly evident. One significant issue is its lack of interpretability, which is especially critical for safety-sensitive applications. Another significant consideration is that DL may not comply with the constraints set by physics laws or given security standards, which are essential for reliable DL. Additionally, DL models often struggle outside their training data distributions, which is known as poor generalization. Moreover, there is a scarcity of theoretical guidance on designing DL algorithms. These challenges have prompted the emergence of a burgeoning field known as science-informed DL (ScIDL). ScIDL aims to integrate existing scientific knowledge with DL techniques to develop more powerful algorithms. The core objective of this article is to provide a brief tutorial on ScIDL that illustrates its building blocks and distinguishes it from conventional DL. Furthermore, we discuss both recent applications of ScIDL and potential future research directions in the field of wireless communications.
|
2204.08883
|
Liwei Yuan
|
Liwei Yuan and Hideaki Ishii
|
Event-triggered Approximate Byzantine Consensus with Multi-hop
Communication
|
arXiv admin note: text overlap with arXiv:2201.03214
| null |
10.1109/TSP.2023.3266975
| null |
cs.MA cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we consider a resilient consensus problem for the multi-agent
network where some of the agents are subject to Byzantine attacks and may
transmit erroneous state values to their neighbors. In particular, we develop
an event-triggered update rule to tackle this problem as well as reduce the
communication for each agent. Our approach is based on the mean subsequence
reduced (MSR) algorithm with agents being capable of communicating with multi-hop
neighbors. Since delays are critical in such an environment, we provide
necessary graph conditions for the proposed algorithm to perform well with
delays in the communication. We highlight that through multi-hop communication,
the network connectivity can be reduced, especially in comparison with the
common one-hop communication case. Lastly, we show the effectiveness of the
proposed algorithm by a numerical example.
|
[
{
"created": "Tue, 19 Apr 2022 13:29:02 GMT",
"version": "v1"
}
] |
2023-06-07
|
[
[
"Yuan",
"Liwei",
""
],
[
"Ishii",
"Hideaki",
""
]
] |
In this paper, we consider a resilient consensus problem for the multi-agent network where some of the agents are subject to Byzantine attacks and may transmit erroneous state values to their neighbors. In particular, we develop an event-triggered update rule to tackle this problem as well as reduce the communication for each agent. Our approach is based on the mean subsequence reduced (MSR) algorithm with agents being capable of communicating with multi-hop neighbors. Since delays are critical in such an environment, we provide necessary graph conditions for the proposed algorithm to perform well with delays in the communication. We highlight that through multi-hop communication, the network connectivity can be reduced, especially in comparison with the common one-hop communication case. Lastly, we show the effectiveness of the proposed algorithm by a numerical example.
|
1505.00908
|
Ludovic Denoyer
|
Aur\'elia L\'eon and Ludovic Denoyer
|
Reinforced Decision Trees
| null | null | null |
Accepted as a poster at EWRL 2015
|
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to speed up classification models when facing a large number of
categories, one usual approach consists in organizing the categories in a
particular structure, this structure then being used as a way to speed up the
prediction computation. This is, for example, the case when using
error-correcting codes or even hierarchies of categories. But in the majority
of approaches, this structure is chosen \textit{by hand}, or during a
preliminary step, and not integrated in the learning process. We propose a new
model called Reinforced Decision Tree which simultaneously learns how to
organize categories in a tree structure and how to classify any input based on
this structure. This approach keeps the advantages of existing techniques (low
inference complexity) but allows one to build efficient classifiers in one
learning step. The learning algorithm is inspired by reinforcement learning and
policy-gradient techniques which allows us to integrate the two steps (building
the tree, and learning the classifier) in one single algorithm.
|
[
{
"created": "Tue, 5 May 2015 07:58:40 GMT",
"version": "v1"
}
] |
2015-11-26
|
[
[
"Léon",
"Aurélia",
""
],
[
"Denoyer",
"Ludovic",
""
]
] |
In order to speed up classification models when facing a large number of categories, one usual approach consists in organizing the categories in a particular structure, this structure then being used as a way to speed up the prediction computation. This is, for example, the case when using error-correcting codes or even hierarchies of categories. But in the majority of approaches, this structure is chosen \textit{by hand}, or during a preliminary step, and not integrated in the learning process. We propose a new model called Reinforced Decision Tree which simultaneously learns how to organize categories in a tree structure and how to classify any input based on this structure. This approach keeps the advantages of existing techniques (low inference complexity) but allows one to build efficient classifiers in one learning step. The learning algorithm is inspired by reinforcement learning and policy-gradient techniques which allows us to integrate the two steps (building the tree, and learning the classifier) in one single algorithm.
|
0802.2869
|
Pascal Weil
|
Wouter Gelade, Frank Neven
|
Succinctness of the Complement and Intersection of Regular Expressions
| null |
Dans Proceedings of the 25th Annual Symposium on the Theoretical
Aspects of Computer Science - STACS 2008, Bordeaux : France (2008)
| null | null |
cs.CC
| null |
We study the succinctness of the complement and intersection of regular
expressions. In particular, we show that when constructing a regular expression
defining the complement of a given regular expression, a double exponential
size increase cannot be avoided. Similarly, when constructing a regular
expression defining the intersection of a fixed and an arbitrary number of
regular expressions, an exponential and double exponential size increase,
respectively, cannot be avoided in the worst case. All mentioned lower bounds
improve the existing ones by one exponential and are tight in the sense that
the target expression can be constructed in the corresponding time class, i.e.,
exponential or double exponential time. As a by-product, we generalize a
theorem by Ehrenfeucht and Zeiger stating that there is a class of DFAs which
are exponentially more succinct than regular expressions, to a fixed
four-letter alphabet. When the given regular expressions are one-unambiguous,
as for instance required by the XML Schema specification, the complement can be
computed in polynomial time whereas the bounds concerning intersection continue
to hold. For the subclass of single-occurrence regular expressions, we prove a
tight exponential lower bound for intersection.
|
[
{
"created": "Wed, 20 Feb 2008 14:40:53 GMT",
"version": "v1"
}
] |
2008-02-21
|
[
[
"Gelade",
"Wouter",
""
],
[
"Neven",
"Frank",
""
]
] |
We study the succinctness of the complement and intersection of regular expressions. In particular, we show that when constructing a regular expression defining the complement of a given regular expression, a double exponential size increase cannot be avoided. Similarly, when constructing a regular expression defining the intersection of a fixed and an arbitrary number of regular expressions, an exponential and double exponential size increase, respectively, cannot be avoided in the worst case. All mentioned lower bounds improve the existing ones by one exponential and are tight in the sense that the target expression can be constructed in the corresponding time class, i.e., exponential or double exponential time. As a by-product, we generalize a theorem by Ehrenfeucht and Zeiger stating that there is a class of DFAs which are exponentially more succinct than regular expressions, to a fixed four-letter alphabet. When the given regular expressions are one-unambiguous, as for instance required by the XML Schema specification, the complement can be computed in polynomial time whereas the bounds concerning intersection continue to hold. For the subclass of single-occurrence regular expressions, we prove a tight exponential lower bound for intersection.
|
2104.03634
|
Pablo Pueyo
|
Pablo Pueyo, Eduardo Montijano, Ana C. Murillo and Mac Schwager
|
CineMPC: Controlling Camera Intrinsics and Extrinsics for Autonomous
Cinematography
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present CineMPC, an algorithm to autonomously control a UAV-borne video
camera in a nonlinear Model Predictive Control (MPC) loop. CineMPC controls both
the position and orientation of the camera -- the camera extrinsics -- as well
as the lens focal length, focal distance, and aperture -- the camera
intrinsics. While some existing solutions autonomously control the position and
orientation of the camera, no existing solutions also control the intrinsic
parameters, which are essential tools for rich cinematographic expression. The
intrinsic parameters control the parts of the scene that are focused or
blurred, the viewers' perception of depth in the scene and the position of the
targets in the image. CineMPC closes the loop from camera images to UAV
trajectory and lens parameters in order to follow the desired relative
trajectory and image composition as the targets move through the scene.
Experiments using a photo-realistic environment demonstrate the capabilities
of the proposed control framework to successfully achieve a full array of
cinematographic effects not possible without full camera control.
|
[
{
"created": "Thu, 8 Apr 2021 09:36:24 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Feb 2022 10:32:46 GMT",
"version": "v2"
},
{
"created": "Tue, 22 Feb 2022 12:09:11 GMT",
"version": "v3"
}
] |
2022-02-23
|
[
[
"Pueyo",
"Pablo",
""
],
[
"Montijano",
"Eduardo",
""
],
[
"Murillo",
"Ana C.",
""
],
[
"Schwager",
"Mac",
""
]
] |
We present CineMPC, an algorithm to autonomously control a UAV-borne video camera in a nonlinear Model Predictive Control (MPC) loop. CineMPC controls both the position and orientation of the camera -- the camera extrinsics -- as well as the lens focal length, focal distance, and aperture -- the camera intrinsics. While some existing solutions autonomously control the position and orientation of the camera, no existing solutions also control the intrinsic parameters, which are essential tools for rich cinematographic expression. The intrinsic parameters control the parts of the scene that are focused or blurred, the viewers' perception of depth in the scene and the position of the targets in the image. CineMPC closes the loop from camera images to UAV trajectory and lens parameters in order to follow the desired relative trajectory and image composition as the targets move through the scene. Experiments using a photo-realistic environment demonstrate the capabilities of the proposed control framework to successfully achieve a full array of cinematographic effects not possible without full camera control.
|
2404.04344
|
Tom Hanika
|
Tom Hanika and Robert J\"aschke
|
A Repository for Formal Contexts
|
16 pages
| null | null | null |
cs.AI cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data is always at the center of the theoretical development and investigation
of the applicability of formal concept analysis. It is therefore not surprising
that a large number of data sets are repeatedly used in scholarly articles and
software tools, acting as de facto standard data sets. However, the
distribution of the data sets poses a problem for the sustainable development
of the research field. There is a lack of a central location that provides and
describes FCA data sets and links them to already known analysis results. This
article analyses the current state of the dissemination of FCA data sets,
presents the requirements for a central FCA repository, and highlights the
challenges for this.
|
[
{
"created": "Fri, 5 Apr 2024 18:27:04 GMT",
"version": "v1"
}
] |
2024-04-09
|
[
[
"Hanika",
"Tom",
""
],
[
"Jäschke",
"Robert",
""
]
] |
Data is always at the center of the theoretical development and investigation of the applicability of formal concept analysis. It is therefore not surprising that a large number of data sets are repeatedly used in scholarly articles and software tools, acting as de facto standard data sets. However, the distribution of the data sets poses a problem for the sustainable development of the research field. There is a lack of a central location that provides and describes FCA data sets and links them to already known analysis results. This article analyses the current state of the dissemination of FCA data sets, presents the requirements for a central FCA repository, and highlights the challenges for this.
|
2312.09501
|
Longzhong Lin
|
Longzhong Lin, Xuewu Lin, Tianwei Lin, Lichao Huang, Rong Xiong, Yue
Wang
|
EDA: Evolving and Distinct Anchors for Multimodal Motion Prediction
|
Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI2024)
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motion prediction is a crucial task in autonomous driving, and one of its
major challenges lies in the multimodality of future behaviors. Many
successful works have utilized mixture models which require identification of
positive mixture components, and correspondingly fall into two main lines:
prediction-based and anchor-based matching. The prediction clustering
phenomenon in prediction-based matching makes it difficult to pick
representative trajectories for downstream tasks, while the anchor-based
matching suffers from a limited regression capability. In this paper, we
introduce a novel paradigm, named Evolving and Distinct Anchors (EDA), to
define the positive and negative components for multimodal motion prediction
based on mixture models. We enable anchors to evolve and redistribute
themselves under specific scenes for an enlarged regression capacity.
Furthermore, we select distinct anchors before matching them with the ground
truth, which results in impressive scoring performance. Our approach enhances
all metrics compared to the baseline MTR, particularly with a notable relative
reduction of 13.5% in Miss Rate, resulting in state-of-the-art performance on
the Waymo Open Motion Dataset. Code is available at
https://github.com/Longzhong-Lin/EDA.
|
[
{
"created": "Fri, 15 Dec 2023 02:55:24 GMT",
"version": "v1"
}
] |
2023-12-18
|
[
[
"Lin",
"Longzhong",
""
],
[
"Lin",
"Xuewu",
""
],
[
"Lin",
"Tianwei",
""
],
[
"Huang",
"Lichao",
""
],
[
"Xiong",
"Rong",
""
],
[
"Wang",
"Yue",
""
]
] |
Motion prediction is a crucial task in autonomous driving, and one of its major challenges lies in the multimodality of future behaviors. Many successful works have utilized mixture models which require identification of positive mixture components, and correspondingly fall into two main lines: prediction-based and anchor-based matching. The prediction clustering phenomenon in prediction-based matching makes it difficult to pick representative trajectories for downstream tasks, while the anchor-based matching suffers from a limited regression capability. In this paper, we introduce a novel paradigm, named Evolving and Distinct Anchors (EDA), to define the positive and negative components for multimodal motion prediction based on mixture models. We enable anchors to evolve and redistribute themselves under specific scenes for an enlarged regression capacity. Furthermore, we select distinct anchors before matching them with the ground truth, which results in impressive scoring performance. Our approach enhances all metrics compared to the baseline MTR, particularly with a notable relative reduction of 13.5% in Miss Rate, resulting in state-of-the-art performance on the Waymo Open Motion Dataset. Code is available at https://github.com/Longzhong-Lin/EDA.
|
1804.10331
|
Ankur Mallick
|
Ankur Mallick, Malhar Chaudhari, Utsav Sheth, Ganesh Palanikumar,
Gauri Joshi
|
Rateless Codes for Near-Perfect Load Balancing in Distributed
Matrix-Vector Multiplication
| null | null | null | null |
cs.DC cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large-scale machine learning and data mining applications require computer
systems to perform massive matrix-vector and matrix-matrix multiplication
operations that need to be parallelized across multiple nodes. The presence of
straggling nodes -- computing nodes that unpredictably slow down or fail -- is a
major bottleneck in such distributed computations. Ideal load balancing
strategies that dynamically allocate more tasks to faster nodes require
knowledge or monitoring of node speeds as well as the ability to quickly move
data. Recently proposed fixed-rate erasure coding strategies can handle
unpredictable node slowdown, but they ignore partial work done by straggling
nodes thus resulting in a lot of redundant computation. We propose a
\emph{rateless fountain coding} strategy that achieves the best of both worlds
-- we prove that its latency is asymptotically equal to ideal load balancing,
and it performs asymptotically zero redundant computations. Our idea is to
create linear combinations of the $m$ rows of the matrix and assign these
encoded rows to different worker nodes. The original matrix-vector product can
be decoded as soon as slightly more than $m$ row-vector products are
collectively finished by the nodes. We conduct experiments in three computing
environments: local parallel computing, Amazon EC2, and Amazon Lambda, which
show that rateless coding gives as much as $3\times$ speed-up over uncoded
schemes.
|
[
{
"created": "Fri, 27 Apr 2018 03:41:04 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Apr 2018 15:06:01 GMT",
"version": "v2"
},
{
"created": "Tue, 30 Oct 2018 02:15:33 GMT",
"version": "v3"
},
{
"created": "Wed, 31 Oct 2018 23:45:53 GMT",
"version": "v4"
},
{
"created": "Wed, 30 Oct 2019 21:39:14 GMT",
"version": "v5"
}
] |
2019-11-01
|
[
[
"Mallick",
"Ankur",
""
],
[
"Chaudhari",
"Malhar",
""
],
[
"Sheth",
"Utsav",
""
],
[
"Palanikumar",
"Ganesh",
""
],
[
"Joshi",
"Gauri",
""
]
] |
Large-scale machine learning and data mining applications require computer systems to perform massive matrix-vector and matrix-matrix multiplication operations that need to be parallelized across multiple nodes. The presence of straggling nodes -- computing nodes that unpredictably slow down or fail -- is a major bottleneck in such distributed computations. Ideal load balancing strategies that dynamically allocate more tasks to faster nodes require knowledge or monitoring of node speeds as well as the ability to quickly move data. Recently proposed fixed-rate erasure coding strategies can handle unpredictable node slowdown, but they ignore partial work done by straggling nodes thus resulting in a lot of redundant computation. We propose a \emph{rateless fountain coding} strategy that achieves the best of both worlds -- we prove that its latency is asymptotically equal to ideal load balancing, and it performs asymptotically zero redundant computations. Our idea is to create linear combinations of the $m$ rows of the matrix and assign these encoded rows to different worker nodes. The original matrix-vector product can be decoded as soon as slightly more than $m$ row-vector products are collectively finished by the nodes. We conduct experiments in three computing environments: local parallel computing, Amazon EC2, and Amazon Lambda, which show that rateless coding gives as much as $3\times$ speed-up over uncoded schemes.
|
1809.08004
|
Francesco Tudisco
|
Francesca Arrigo and Francesco Tudisco
|
Multi-Dimensional, Multilayer, Nonlinear and Dynamic HITS
| null | null | null | null |
cs.SI cs.LG math.NA physics.data-an
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a ranking model for temporal multi-dimensional weighted and
directed networks based on the Perron eigenvector of a multi-homogeneous
order-preserving map. The model extends to the temporal multilayer setting the
HITS algorithm and defines five centrality vectors: two for the nodes, two for
the layers, and one for the temporal stamps. Nonlinearity is introduced in the
standard HITS model in order to guarantee existence and uniqueness of these
centrality vectors for any network, without any requirement on its connectivity
structure. We introduce a globally convergent power-iteration-like algorithm
for the computation of the centrality vectors. Numerical experiments on
real-world networks are performed in order to assess the effectiveness of the
proposed model and showcase the performance of the accompanying algorithm.
|
[
{
"created": "Fri, 21 Sep 2018 09:27:59 GMT",
"version": "v1"
}
] |
2018-09-24
|
[
[
"Arrigo",
"Francesca",
""
],
[
"Tudisco",
"Francesco",
""
]
] |
We introduce a ranking model for temporal multi-dimensional weighted and directed networks based on the Perron eigenvector of a multi-homogeneous order-preserving map. The model extends to the temporal multilayer setting the HITS algorithm and defines five centrality vectors: two for the nodes, two for the layers, and one for the temporal stamps. Nonlinearity is introduced in the standard HITS model in order to guarantee existence and uniqueness of these centrality vectors for any network, without any requirement on its connectivity structure. We introduce a globally convergent power-iteration-like algorithm for the computation of the centrality vectors. Numerical experiments on real-world networks are performed in order to assess the effectiveness of the proposed model and showcase the performance of the accompanying algorithm.
|
2105.03074
|
Ivan Tjuawinata
|
Ivan Tjuawinata and Chaoping Xing
|
Leakage-Resilient Secret Sharing with Constant Share Size
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the leakage resilience of AG code-based ramp secret sharing
schemes extending the leakage resilience of linear threshold secret sharing
schemes over prime fields done by Benhamouda et al. Since there is no
explicit efficient construction of AG codes over prime fields, we consider
constructions over prime fields with the help of the concatenation method and those
over field extensions. Extending the Fourier analysis done by Benhamouda et
al., concatenated algebraic geometric codes over prime fields do produce some
nice leakage-resilient secret sharing schemes. One natural and curious question
is whether AG codes over extension fields produce better leakage-resilient
secret sharing schemes than the construction based on concatenated AG codes.
Such a construction provides several advantages compared to the construction over
prime fields using concatenation method. First, AG codes over extension fields
give secret sharing schemes with smaller reconstruction for a fixed privacy
parameter t. Second, concatenated AG codes do not enjoy strong multiplicity and
hence they are not applicable to secure MPC schemes. It is also confirmed that
indeed AG codes over extension fields have stronger leakage-resilience under
some reasonable assumptions. These three advantages strongly motivate the study
of secret sharing schemes from AG codes over extension fields. The current
paper has two main contributions: 1, we obtain leakage-resilient secret sharing
schemes with constant share sizes and unbounded numbers of players. Like the
Shamir secret sharing scheme, our schemes enjoy multiplicity and hence can be applied to MPC.
2, via a sophisticated Fourier Analysis, we analyze the leakage-resilience of
secret sharing schemes from codes over extension fields. This is of its own
theoretical interest independent of its application to secret sharing schemes
from algebraic geometric codes over extension fields.
|
[
{
"created": "Fri, 7 May 2021 05:59:22 GMT",
"version": "v1"
},
{
"created": "Sun, 30 May 2021 02:03:26 GMT",
"version": "v2"
}
] |
2021-06-01
|
[
[
"Tjuawinata",
"Ivan",
""
],
[
"Xing",
"Chaoping",
""
]
] |
We consider the leakage resilience of AG code-based ramp secret sharing schemes, extending the leakage resilience of linear threshold secret sharing schemes over prime fields established by Benhamouda et al. Since there is no explicit efficient construction of AG codes over prime fields, we consider constructions over prime fields via the concatenation method as well as constructions over field extensions. Extending the Fourier analysis of Benhamouda et al., concatenated algebraic geometric codes over prime fields do produce some nice leakage-resilient secret sharing schemes. One natural and curious question is whether AG codes over extension fields produce better leakage-resilient secret sharing schemes than the construction based on concatenated AG codes. Such a construction provides several advantages over the construction over prime fields using the concatenation method. First, AG codes over extension fields give secret sharing schemes with smaller reconstruction for a fixed privacy parameter t. Second, concatenated AG codes do not enjoy strong multiplicity and hence are not applicable to secure MPC schemes. Third, it is confirmed that AG codes over extension fields indeed have stronger leakage-resilience under some reasonable assumptions. These three advantages strongly motivate the study of secret sharing schemes from AG codes over extension fields. The current paper has two main contributions: (1) we obtain leakage-resilient secret sharing schemes with constant share sizes and unbounded numbers of players; like the Shamir secret sharing scheme, our schemes enjoy multiplicity and hence can be applied to MPC; (2) via a sophisticated Fourier analysis, we analyze the leakage-resilience of secret sharing schemes from codes over extension fields, which is of independent theoretical interest beyond its application to secret sharing schemes from algebraic geometric codes over extension fields.
|
2307.05916
|
Peter Kim
|
Peter Yongho Kim, Junbeom Kwon, Sunghwan Joo, Sangyoon Bae, Donggyu
Lee, Yoonho Jung, Shinjae Yoo, Jiook Cha, Taesup Moon
|
SwiFT: Swin 4D fMRI Transformer
|
NeurIPS 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Modeling spatiotemporal brain dynamics from high-dimensional data, such as
functional Magnetic Resonance Imaging (fMRI), is a formidable task in
neuroscience. Existing approaches for fMRI analysis utilize hand-crafted
features, but the process of feature extraction risks losing essential
information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D
fMRI Transformer), a Swin Transformer architecture that can learn brain
dynamics directly from fMRI volumes in a memory and computation-efficient
manner. SwiFT achieves this by implementing a 4D window multi-head
self-attention mechanism and absolute positional embeddings. We evaluate SwiFT
using multiple large-scale resting-state fMRI datasets, including the Human
Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK
Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our
experimental outcomes reveal that SwiFT consistently outperforms recent
state-of-the-art models. Furthermore, by leveraging its end-to-end learning
capability, we show that contrastive loss-based self-supervised pre-training of
SwiFT can enhance performance on downstream tasks. Additionally, we employ an
explainable AI method to identify the brain regions associated with sex
classification. To our knowledge, SwiFT is the first Swin Transformer
architecture to process 4-dimensional spatiotemporal brain functional data in an
end-to-end fashion. Our work holds substantial potential in facilitating
scalable learning of functional brain imaging in neuroscience research by
reducing the hurdles associated with applying Transformer models to
high-dimensional fMRI.
|
[
{
"created": "Wed, 12 Jul 2023 04:53:36 GMT",
"version": "v1"
},
{
"created": "Tue, 31 Oct 2023 04:54:00 GMT",
"version": "v2"
}
] |
2023-11-01
|
[
[
"Kim",
"Peter Yongho",
""
],
[
"Kwon",
"Junbeom",
""
],
[
"Joo",
"Sunghwan",
""
],
[
"Bae",
"Sangyoon",
""
],
[
"Lee",
"Donggyu",
""
],
[
"Jung",
"Yoonho",
""
],
[
"Yoo",
"Shinjae",
""
],
[
"Cha",
"Jiook",
""
],
[
"Moon",
"Taesup",
""
]
] |
Modeling spatiotemporal brain dynamics from high-dimensional data, such as functional Magnetic Resonance Imaging (fMRI), is a formidable task in neuroscience. Existing approaches for fMRI analysis utilize hand-crafted features, but the process of feature extraction risks losing essential information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that can learn brain dynamics directly from fMRI volumes in a memory and computation-efficient manner. SwiFT achieves this by implementing a 4D window multi-head self-attention mechanism and absolute positional embeddings. We evaluate SwiFT using multiple large-scale resting-state fMRI datasets, including the Human Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our experimental outcomes reveal that SwiFT consistently outperforms recent state-of-the-art models. Furthermore, by leveraging its end-to-end learning capability, we show that contrastive loss-based self-supervised pre-training of SwiFT can enhance performance on downstream tasks. Additionally, we employ an explainable AI method to identify the brain regions associated with sex classification. To our knowledge, SwiFT is the first Swin Transformer architecture to process 4-dimensional spatiotemporal brain functional data in an end-to-end fashion. Our work holds substantial potential in facilitating scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI.
|
2310.13307
|
Soyeong Jeong
|
Soyeong Jeong, Jinheon Baek, Sukmin Cho, Sung Ju Hwang, Jong C. Park
|
Test-Time Self-Adaptive Small Language Models for Question Answering
|
EMNLP Findings 2023
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent instruction-finetuned large language models (LMs) have achieved
notable performance on various tasks, such as question answering (QA).
However, despite their ability to memorize a vast amount of general
knowledge across diverse tasks, they might be suboptimal on specific tasks
due to their limited capacity to transfer and adapt knowledge to target
tasks. Moreover, further finetuning LMs with labeled datasets is often
infeasible due to their absence, and it is also questionable whether we can
adapt smaller LMs, which have limited knowledge, using only unlabeled test
data. In this work, we show and investigate the capabilities of smaller
self-adaptive LMs using only unlabeled test data. In particular, we first
stochastically generate multiple answers, and then ensemble them while
filtering out low-quality samples to mitigate noise from inaccurate labels.
Our proposed self-adaption strategy demonstrates significant performance
improvements on benchmark QA datasets with higher robustness across diverse
prompts, enabling LMs to remain stable. Code is available at:
https://github.com/starsuzi/T-SAS.
|
[
{
"created": "Fri, 20 Oct 2023 06:49:32 GMT",
"version": "v1"
}
] |
2023-10-23
|
[
[
"Jeong",
"Soyeong",
""
],
[
"Baek",
"Jinheon",
""
],
[
"Cho",
"Sukmin",
""
],
[
"Hwang",
"Sung Ju",
""
],
[
"Park",
"Jong C.",
""
]
] |
Recent instruction-finetuned large language models (LMs) have achieved notable performance on various tasks, such as question answering (QA). However, despite their ability to memorize a vast amount of general knowledge across diverse tasks, they might be suboptimal on specific tasks due to their limited capacity to transfer and adapt knowledge to target tasks. Moreover, further finetuning LMs with labeled datasets is often infeasible due to their absence, and it is also questionable whether we can adapt smaller LMs, which have limited knowledge, using only unlabeled test data. In this work, we show and investigate the capabilities of smaller self-adaptive LMs using only unlabeled test data. In particular, we first stochastically generate multiple answers, and then ensemble them while filtering out low-quality samples to mitigate noise from inaccurate labels. Our proposed self-adaption strategy demonstrates significant performance improvements on benchmark QA datasets with higher robustness across diverse prompts, enabling LMs to remain stable. Code is available at: https://github.com/starsuzi/T-SAS.
|
2102.00785
|
Akrati Saxena
|
Akrati Saxena, George Fletcher, Mykola Pechenizkiy
|
NodeSim: Node Similarity based Network Embedding for Diverse Link
Prediction
| null | null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In real-world complex networks, understanding the dynamics of their
evolution has been of great interest to the scientific community. Predicting
future links is an essential task of social network analysis, as the
addition or removal of links over time drives network evolution. In a
network, a link is categorized as an intra-community link if both of its end
nodes belong to the same community, and as an inter-community link
otherwise. Existing link-prediction methods have mainly focused on achieving
high accuracy for intra-community link prediction. In this work, we propose
a network embedding method, called NodeSim, which captures both the
similarities between nodes and the community structure while learning the
low-dimensional representation of the network. The embedding is learned
using the proposed NodeSim random walk, which efficiently explores the
diverse neighborhood of a node while keeping more similar nodes closer in
its context. We verify the efficacy of the proposed embedding method against
state-of-the-art methods using diverse link prediction. We propose a machine
learning model for link prediction that considers both the nodes' embeddings
and their community information to predict the link between two given nodes.
Extensive experimental results on several real-world networks demonstrate
the effectiveness of the proposed framework for both inter- and
intra-community link prediction.
|
[
{
"created": "Mon, 1 Feb 2021 11:50:29 GMT",
"version": "v1"
}
] |
2021-02-02
|
[
[
"Saxena",
"Akrati",
""
],
[
"Fletcher",
"George",
""
],
[
"Pechenizkiy",
"Mykola",
""
]
] |
In real-world complex networks, understanding the dynamics of their evolution has been of great interest to the scientific community. Predicting future links is an essential task of social network analysis, as the addition or removal of links over time drives network evolution. In a network, a link is categorized as an intra-community link if both of its end nodes belong to the same community, and as an inter-community link otherwise. Existing link-prediction methods have mainly focused on achieving high accuracy for intra-community link prediction. In this work, we propose a network embedding method, called NodeSim, which captures both the similarities between nodes and the community structure while learning the low-dimensional representation of the network. The embedding is learned using the proposed NodeSim random walk, which efficiently explores the diverse neighborhood of a node while keeping more similar nodes closer in its context. We verify the efficacy of the proposed embedding method against state-of-the-art methods using diverse link prediction. We propose a machine learning model for link prediction that considers both the nodes' embeddings and their community information to predict the link between two given nodes. Extensive experimental results on several real-world networks demonstrate the effectiveness of the proposed framework for both inter- and intra-community link prediction.
|
2312.01468
|
Bo Yang
|
Bo Yang, Xiaoyu Ji, Zizhi Jin, Yushi Cheng, Wenyuan Xu
|
Exploring Adversarial Robustness of LiDAR-Camera Fusion Model in
Autonomous Driving
| null | null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Our study assesses the adversarial robustness of LiDAR-camera fusion models
in 3D object detection. We introduce an attack technique that, by simply adding
a limited number of physically constrained adversarial points above a car, can
make the car undetectable by the fusion model. Experimental results reveal that
even without changes to the image data channel, the fusion model can be
deceived solely by manipulating the LiDAR data channel. This finding raises
safety concerns in the field of autonomous driving. Further, we explore how the
quantity of adversarial points, the distance between the front-near car and the
LiDAR-equipped car, and various angular factors affect the attack success rate.
We believe our research can contribute to the understanding of multi-sensor
robustness, offering insights and guidance to enhance the safety of autonomous
driving.
|
[
{
"created": "Sun, 3 Dec 2023 17:48:40 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Jan 2024 06:36:23 GMT",
"version": "v2"
}
] |
2024-01-10
|
[
[
"Yang",
"Bo",
""
],
[
"Ji",
"Xiaoyu",
""
],
[
"Jin",
"Zizhi",
""
],
[
"Cheng",
"Yushi",
""
],
[
"Xu",
"Wenyuan",
""
]
] |
Our study assesses the adversarial robustness of LiDAR-camera fusion models in 3D object detection. We introduce an attack technique that, by simply adding a limited number of physically constrained adversarial points above a car, can make the car undetectable by the fusion model. Experimental results reveal that even without changes to the image data channel, the fusion model can be deceived solely by manipulating the LiDAR data channel. This finding raises safety concerns in the field of autonomous driving. Further, we explore how the quantity of adversarial points, the distance between the front-near car and the LiDAR-equipped car, and various angular factors affect the attack success rate. We believe our research can contribute to the understanding of multi-sensor robustness, offering insights and guidance to enhance the safety of autonomous driving.
|
2304.08981
|
Zheng Lian
|
Zheng Lian, Haiyang Sun, Licai Sun, Kang Chen, Mingyu Xu, Kexin Wang,
Ke Xu, Yu He, Ying Li, Jinming Zhao, Ye Liu, Bin Liu, Jiangyan Yi, Meng Wang,
Erik Cambria, Guoying Zhao, Bj\"orn W. Schuller, Jianhua Tao
|
MER 2023: Multi-label Learning, Modality Robustness, and Semi-Supervised
Learning
| null | null | null | null |
cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The first Multimodal Emotion Recognition Challenge (MER 2023) was
successfully held at ACM Multimedia. The challenge focuses on system robustness
and consists of three distinct tracks: (1) MER-MULTI, where participants are
required to recognize both discrete and dimensional emotions; (2) MER-NOISE, in
which noise is added to test videos for modality robustness evaluation; (3)
MER-SEMI, which provides a large number of unlabeled samples for
semi-supervised learning. In this paper, we introduce the motivation behind
this challenge, describe the benchmark dataset, and provide some statistics
about participants. To continue using this dataset after MER 2023, please sign
a new End User License Agreement and send it to our official email address
merchallenge.contact@gmail.com. We believe this high-quality dataset can become
a new benchmark in multimodal emotion recognition, especially for the Chinese
research community.
|
[
{
"created": "Tue, 18 Apr 2023 13:23:42 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Sep 2023 04:03:28 GMT",
"version": "v2"
}
] |
2023-09-15
|
[
[
"Lian",
"Zheng",
""
],
[
"Sun",
"Haiyang",
""
],
[
"Sun",
"Licai",
""
],
[
"Chen",
"Kang",
""
],
[
"Xu",
"Mingyu",
""
],
[
"Wang",
"Kexin",
""
],
[
"Xu",
"Ke",
""
],
[
"He",
"Yu",
""
],
[
"Li",
"Ying",
""
],
[
"Zhao",
"Jinming",
""
],
[
"Liu",
"Ye",
""
],
[
"Liu",
"Bin",
""
],
[
"Yi",
"Jiangyan",
""
],
[
"Wang",
"Meng",
""
],
[
"Cambria",
"Erik",
""
],
[
"Zhao",
"Guoying",
""
],
[
"Schuller",
"Björn W.",
""
],
[
"Tao",
"Jianhua",
""
]
] |
The first Multimodal Emotion Recognition Challenge (MER 2023) was successfully held at ACM Multimedia. The challenge focuses on system robustness and consists of three distinct tracks: (1) MER-MULTI, where participants are required to recognize both discrete and dimensional emotions; (2) MER-NOISE, in which noise is added to test videos for modality robustness evaluation; (3) MER-SEMI, which provides a large number of unlabeled samples for semi-supervised learning. In this paper, we introduce the motivation behind this challenge, describe the benchmark dataset, and provide some statistics about participants. To continue using this dataset after MER 2023, please sign a new End User License Agreement and send it to our official email address merchallenge.contact@gmail.com. We believe this high-quality dataset can become a new benchmark in multimodal emotion recognition, especially for the Chinese research community.
|
2403.17752
|
Wangyue Li
|
Wangyue Li, Liangzhi Li, Tong Xiang, Xiao Liu, Wei Deng, Noa Garcia
|
Can multiple-choice questions really be useful in detecting the
abilities of LLMs?
|
LREC-COLING 2024
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Multiple-choice questions (MCQs) are widely used in the evaluation of large
language models (LLMs) due to their simplicity and efficiency. However, there
are concerns about whether MCQs can truly measure LLMs' capabilities,
particularly in knowledge-intensive scenarios where long-form generation (LFG)
answers are required. The misalignment between the task and the evaluation
method demands a thoughtful analysis of MCQ's efficacy, which we undertake in
this paper by evaluating nine LLMs on four question-answering (QA) datasets in
two languages: Chinese and English. We identify a significant issue: LLMs
exhibit order sensitivity in bilingual MCQs, favoring answers located at
specific positions, i.e., the first position. We further quantify the gap
between MCQs and long-form generation questions (LFGQs) by comparing their
direct outputs, token logits, and embeddings. Our results reveal a relatively
low correlation between answers from MCQs and LFGQs for identical questions.
Additionally, we propose two methods to quantify the consistency and confidence
of LLMs' output, which can be generalized to other QA evaluation benchmarks.
Notably, our analysis challenges the idea that the higher the consistency, the
greater the accuracy. We also find MCQs to be less reliable than LFGQs in terms
of expected calibration error. Finally, the misalignment between MCQs and LFGQs
is not only reflected in the evaluation performance but also in the embedding
space. Our code and models can be accessed at
https://github.com/Meetyou-AI-Lab/Can-MC-Evaluate-LLMs.
|
[
{
"created": "Tue, 26 Mar 2024 14:43:48 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Mar 2024 09:57:05 GMT",
"version": "v2"
},
{
"created": "Thu, 23 May 2024 13:32:25 GMT",
"version": "v3"
}
] |
2024-05-24
|
[
[
"Li",
"Wangyue",
""
],
[
"Li",
"Liangzhi",
""
],
[
"Xiang",
"Tong",
""
],
[
"Liu",
"Xiao",
""
],
[
"Deng",
"Wei",
""
],
[
"Garcia",
"Noa",
""
]
] |
Multiple-choice questions (MCQs) are widely used in the evaluation of large language models (LLMs) due to their simplicity and efficiency. However, there are concerns about whether MCQs can truly measure LLMs' capabilities, particularly in knowledge-intensive scenarios where long-form generation (LFG) answers are required. The misalignment between the task and the evaluation method demands a thoughtful analysis of MCQ's efficacy, which we undertake in this paper by evaluating nine LLMs on four question-answering (QA) datasets in two languages: Chinese and English. We identify a significant issue: LLMs exhibit order sensitivity in bilingual MCQs, favoring answers located at specific positions, i.e., the first position. We further quantify the gap between MCQs and long-form generation questions (LFGQs) by comparing their direct outputs, token logits, and embeddings. Our results reveal a relatively low correlation between answers from MCQs and LFGQs for identical questions. Additionally, we propose two methods to quantify the consistency and confidence of LLMs' output, which can be generalized to other QA evaluation benchmarks. Notably, our analysis challenges the idea that the higher the consistency, the greater the accuracy. We also find MCQs to be less reliable than LFGQs in terms of expected calibration error. Finally, the misalignment between MCQs and LFGQs is not only reflected in the evaluation performance but also in the embedding space. Our code and models can be accessed at https://github.com/Meetyou-AI-Lab/Can-MC-Evaluate-LLMs.
|
1511.02325
|
Zhenyu Xiao
|
Zhenyu Xiao, Lin Bai, Jinho Choi
|
Iterative Joint Beamforming Training with Constant-Amplitude Phased
Arrays in Millimeter-Wave Communication
|
4 pages
|
IEEE Communications Letters, vol. 18, no. 5, pp. 829-832, May 2014
|
10.1109/LCOMM.2014.040214.140351
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In millimeter-wave communications (MMWC), in order to compensate for high
propagation attenuation, phased arrays are favored to achieve array gain by
beamforming, where transmitting and receiving antenna arrays need to be
jointly trained to obtain appropriate antenna weight vectors (AWVs). Since
the amplitude of each element of the AWV is usually constrained to be
constant to simplify the design of phased arrays in MMWC, the existing
singular-vector-based beamforming training scheme cannot be used for such
devices. Thus, in this letter, a steering-vector-based iterative beamforming
training scheme, which exploits the directional feature of MMWC channels, is
proposed for devices with constant-amplitude phased arrays. Performance
evaluations show that the proposed scheme achieves a fast convergence rate
as well as a near-optimal array gain.
|
[
{
"created": "Sat, 7 Nov 2015 08:59:39 GMT",
"version": "v1"
}
] |
2016-11-15
|
[
[
"Xiao",
"Zhenyu",
""
],
[
"Bai",
"Lin",
""
],
[
"Choi",
"Jinho",
""
]
] |
In millimeter-wave communications (MMWC), in order to compensate for high propagation attenuation, phased arrays are favored to achieve array gain by beamforming, where transmitting and receiving antenna arrays need to be jointly trained to obtain appropriate antenna weight vectors (AWVs). Since the amplitude of each element of the AWV is usually constrained to be constant to simplify the design of phased arrays in MMWC, the existing singular-vector-based beamforming training scheme cannot be used for such devices. Thus, in this letter, a steering-vector-based iterative beamforming training scheme, which exploits the directional feature of MMWC channels, is proposed for devices with constant-amplitude phased arrays. Performance evaluations show that the proposed scheme achieves a fast convergence rate as well as a near-optimal array gain.
|
2110.12243
|
Michael Kranzlein
|
Michael Kranzlein, Emma Manning, Siyao Peng, Shira Wein, Aryaman
Arora, Bradford Salen, Nathan Schneider
|
PASTRIE: A Corpus of Prepositions Annotated with Supersense Tags in
Reddit International English
|
Expanded from the version published at the Linguistic Annotation
Workshop 2020
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the Prepositions Annotated with Supersense Tags in Reddit
International English ("PASTRIE") corpus, a new dataset containing manually
annotated preposition supersenses of English data from presumed speakers of
four L1s: English, French, German, and Spanish. The annotations are
comprehensive, covering all preposition types and tokens in the sample. Along
with the corpus, we provide analysis of distributional patterns across the
included L1s and a discussion of the influence of L1s on L2 preposition choice.
|
[
{
"created": "Sat, 23 Oct 2021 15:22:45 GMT",
"version": "v1"
}
] |
2021-10-26
|
[
[
"Kranzlein",
"Michael",
""
],
[
"Manning",
"Emma",
""
],
[
"Peng",
"Siyao",
""
],
[
"Wein",
"Shira",
""
],
[
"Arora",
"Aryaman",
""
],
[
"Salen",
"Bradford",
""
],
[
"Schneider",
"Nathan",
""
]
] |
We present the Prepositions Annotated with Supersense Tags in Reddit International English ("PASTRIE") corpus, a new dataset containing manually annotated preposition supersenses of English data from presumed speakers of four L1s: English, French, German, and Spanish. The annotations are comprehensive, covering all preposition types and tokens in the sample. Along with the corpus, we provide analysis of distributional patterns across the included L1s and a discussion of the influence of L1s on L2 preposition choice.
|
1902.05795
|
Yuhui Wang
|
Yuhui Wang, Hao He, Xiaoyang Tan
|
Robust Reinforcement Learning in POMDPs with Incomplete and Noisy
Observations
| null | null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In real-world scenarios, the observation data for reinforcement learning
with continuous control is commonly noisy, and part of it may be dynamically
missing over time, which violates the assumptions of many current methods
developed for this setting. We address the issue within the framework of the
partially observable Markov decision process (POMDP) using a model-based
method, in which the transition model is estimated from the incomplete and
noisy observations using a newly proposed surrogate loss function with local
approximation, while the policy and value function are learned with the help
of belief imputation. For the latter purpose, a generative model is
constructed and seamlessly incorporated into the belief updating procedure
of the POMDP, which enables robust execution even under significant
incompleteness and noise. The effectiveness of the proposed method is
verified on a collection of benchmark tasks, showing that our approach
outperforms several competing methods under various challenging scenarios.
|
[
{
"created": "Fri, 15 Feb 2019 12:47:50 GMT",
"version": "v1"
}
] |
2019-02-18
|
[
[
"Wang",
"Yuhui",
""
],
[
"He",
"Hao",
""
],
[
"Tan",
"Xiaoyang",
""
]
] |
In real-world scenarios, the observation data for reinforcement learning with continuous control is commonly noisy, and part of it may be dynamically missing over time, which violates the assumptions of many current methods developed for this setting. We address the issue within the framework of the partially observable Markov decision process (POMDP) using a model-based method, in which the transition model is estimated from the incomplete and noisy observations using a newly proposed surrogate loss function with local approximation, while the policy and value function are learned with the help of belief imputation. For the latter purpose, a generative model is constructed and seamlessly incorporated into the belief updating procedure of the POMDP, which enables robust execution even under significant incompleteness and noise. The effectiveness of the proposed method is verified on a collection of benchmark tasks, showing that our approach outperforms several competing methods under various challenging scenarios.
|
2307.05260
|
Ashutosh Modi
|
Abhinav Joshi and Akshat Sharma and Sai Kiran Tanikella and Ashutosh
Modi
|
U-CREAT: Unsupervised Case Retrieval using Events extrAcTion
|
Accepted at ACL 2023, 15 pages (12 main + 3 Appendix)
| null | null | null |
cs.IR cs.AI cs.CL cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The task of Prior Case Retrieval (PCR) in the legal domain concerns
automatically citing relevant (based on facts and precedence) prior legal
cases for a given query case. To further promote research in PCR, in this
paper, we propose a new large benchmark (in English) for the PCR task: the
IL-PCR (Indian Legal Prior Case Retrieval) corpus. Given the complex nature
of case relevance and the considerable length of legal documents, BM25
remains a strong baseline for ranking the cited prior documents. In this
work, we explore the role of events in legal case retrieval and propose an
unsupervised retrieval pipeline, U-CREAT (Unsupervised Case Retrieval using
Events Extraction). We find that the proposed unsupervised retrieval method
significantly increases performance compared to BM25 and makes retrieval
faster by a considerable margin, making it applicable to real-time case
retrieval systems. Our proposed system is generic: we show that it
generalizes across two different legal systems (Indian and Canadian) and
achieves state-of-the-art performance on the benchmarks for both legal
systems (the IL-PCR and COLIEE corpora).
|
[
{
"created": "Tue, 11 Jul 2023 13:51:12 GMT",
"version": "v1"
}
] |
2023-07-12
|
[
[
"Joshi",
"Abhinav",
""
],
[
"Sharma",
"Akshat",
""
],
[
"Tanikella",
"Sai Kiran",
""
],
[
"Modi",
"Ashutosh",
""
]
] |
The task of Prior Case Retrieval (PCR) in the legal domain concerns automatically citing relevant (based on facts and precedence) prior legal cases for a given query case. To further promote research in PCR, in this paper, we propose a new large benchmark (in English) for the PCR task: the IL-PCR (Indian Legal Prior Case Retrieval) corpus. Given the complex nature of case relevance and the considerable length of legal documents, BM25 remains a strong baseline for ranking the cited prior documents. In this work, we explore the role of events in legal case retrieval and propose an unsupervised retrieval pipeline, U-CREAT (Unsupervised Case Retrieval using Events Extraction). We find that the proposed unsupervised retrieval method significantly increases performance compared to BM25 and makes retrieval faster by a considerable margin, making it applicable to real-time case retrieval systems. Our proposed system is generic: we show that it generalizes across two different legal systems (Indian and Canadian) and achieves state-of-the-art performance on the benchmarks for both legal systems (the IL-PCR and COLIEE corpora).
|
1205.2350
|
Samir Medjiah
|
Samir Medjiah (LaBRI), Toufik Ahmed (LaBRI), Francine Krief (LaBRI)
|
AGEM: Adaptive Greedy-Compass Energy-aware Multipath Routing Protocol
for WMSNs
| null |
7th IEEE Consumer Communications and Networking Conference (CCNC),
2010, Las Vegas : United States (2010)
| null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the Adaptive Greedy-compass Energy-aware Multipath
protocol (AGEM), a novel routing protocol for wireless multimedia sensor
networks (WMSNs). AGEM uses sensor node positions to make packet forwarding
decisions. These decisions are made online, at each forwarding node, so that
there is no need for global network topology knowledge and maintenance. The
AGEM routing protocol performs load balancing to minimize energy consumption
among nodes using a twofold policy: (1) smart greedy forwarding, based on an
adaptive compass, and (2) walking-back forwarding to avoid holes.
Performance evaluations of AGEM compared to GPSR (Greedy Perimeter Stateless
Routing) show that AGEM can: (a) maximize the network lifetime, (b)
guarantee quality of service for video stream transmission, and (c) scale
better on densely deployed wireless sensor networks.
|
[
{
"created": "Thu, 10 May 2012 19:15:46 GMT",
"version": "v1"
}
] |
2012-05-11
|
[
[
"Medjiah",
"Samir",
"",
"LaBRI"
],
[
"Ahmed",
"Toufik",
"",
"LaBRI"
],
[
"Krief",
"Francine",
"",
"LaBRI"
]
] |
This paper presents the Adaptive Greedy-compass Energy-aware Multipath protocol (AGEM), a novel routing protocol for wireless multimedia sensor networks (WMSNs). AGEM uses sensor node positions to make packet forwarding decisions. These decisions are made online, at each forwarding node, so that there is no need for global network topology knowledge and maintenance. The AGEM routing protocol performs load balancing to minimize energy consumption among nodes using a twofold policy: (1) smart greedy forwarding, based on an adaptive compass, and (2) walking-back forwarding to avoid holes. Performance evaluations of AGEM compared to GPSR (Greedy Perimeter Stateless Routing) show that AGEM can: (a) maximize the network lifetime, (b) guarantee quality of service for video stream transmission, and (c) scale better on densely deployed wireless sensor networks.
|
1409.4977
|
Meghana Nasre Ms.
|
Pratik Ghoshal, Meghana Nasre, Prajakta Nimbhorkar
|
Rank Maximal Matchings -- Structure and Algorithms
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Let G = (A U P, E) be a bipartite graph where A denotes a set of agents, P
denotes a set of posts and ranks on the edges denote preferences of the agents
over posts. A matching M in G is rank-maximal if it matches the maximum number
of applicants to their top-rank post; subject to this, the maximum number of
applicants to their second-rank post; and so on.
In this paper, we develop a switching graph characterization of rank-maximal
matchings, which is a useful tool that encodes all rank-maximal matchings in an
instance. The characterization leads to simple and efficient algorithms for
several interesting problems. In particular, we give an efficient algorithm to
compute the set of rank-maximal pairs in an instance. We show that the problem
of counting the number of rank-maximal matchings is #P-complete and also give
an FPRAS for the problem. Finally, we consider the problem of deciding whether
a rank-maximal matching is popular among all the rank-maximal matchings in a
given instance, and give an efficient algorithm for the problem.
|
[
{
"created": "Wed, 17 Sep 2014 12:57:52 GMT",
"version": "v1"
}
] |
2014-09-18
|
[
[
"Ghoshal",
"Pratik",
""
],
[
"Nasre",
"Meghana",
""
],
[
"Nimbhorkar",
"Prajakta",
""
]
] |
Let G = (A U P, E) be a bipartite graph where A denotes a set of agents, P denotes a set of posts and ranks on the edges denote preferences of the agents over posts. A matching M in G is rank-maximal if it matches the maximum number of applicants to their top-rank post; subject to this, the maximum number of applicants to their second-rank post; and so on. In this paper, we develop a switching graph characterization of rank-maximal matchings, which is a useful tool that encodes all rank-maximal matchings in an instance. The characterization leads to simple and efficient algorithms for several interesting problems. In particular, we give an efficient algorithm to compute the set of rank-maximal pairs in an instance. We show that the problem of counting the number of rank-maximal matchings is #P-complete and also give an FPRAS for the problem. Finally, we consider the problem of deciding whether a rank-maximal matching is popular among all the rank-maximal matchings in a given instance, and give an efficient algorithm for the problem.
|
2403.05063
|
Jianxun Lian
|
Wensheng Lu, Jianxun Lian, Wei Zhang, Guanghua Li, Mingyang Zhou, Hao
Liao, Xing Xie
|
Aligning Large Language Models for Controllable Recommendations
|
14 pages; Accepted by ACL 2024 main conference
| null | null | null |
cs.IR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inspired by the exceptional general intelligence of Large Language Models
(LLMs), researchers have begun to explore their application in pioneering the
next generation of recommender systems - systems that are conversational,
explainable, and controllable. However, existing literature primarily
concentrates on integrating domain-specific knowledge into LLMs to enhance
accuracy, often neglecting the ability to follow instructions. To address this
gap, we initially introduce a collection of supervised learning tasks,
augmented with labels derived from a conventional recommender model, aimed at
explicitly improving LLMs' proficiency in adhering to recommendation-specific
instructions. Subsequently, we develop a reinforcement learning-based alignment
procedure to further strengthen LLMs' aptitude in responding to users'
intentions and mitigating formatting errors. Through extensive experiments on
two real-world datasets, our method markedly advances the capability of LLMs to
comply with instructions within recommender systems, while sustaining a high
level of accuracy performance.
|
[
{
"created": "Fri, 8 Mar 2024 05:23:27 GMT",
"version": "v1"
},
{
"created": "Sun, 4 Aug 2024 11:49:48 GMT",
"version": "v2"
}
] |
2024-08-06
|
[
[
"Lu",
"Wensheng",
""
],
[
"Lian",
"Jianxun",
""
],
[
"Zhang",
"Wei",
""
],
[
"Li",
"Guanghua",
""
],
[
"Zhou",
"Mingyang",
""
],
[
"Liao",
"Hao",
""
],
[
"Xie",
"Xing",
""
]
] |
Inspired by the exceptional general intelligence of Large Language Models (LLMs), researchers have begun to explore their application in pioneering the next generation of recommender systems - systems that are conversational, explainable, and controllable. However, existing literature primarily concentrates on integrating domain-specific knowledge into LLMs to enhance accuracy, often neglecting the ability to follow instructions. To address this gap, we initially introduce a collection of supervised learning tasks, augmented with labels derived from a conventional recommender model, aimed at explicitly improving LLMs' proficiency in adhering to recommendation-specific instructions. Subsequently, we develop a reinforcement learning-based alignment procedure to further strengthen LLMs' aptitude in responding to users' intentions and mitigating formatting errors. Through extensive experiments on two real-world datasets, our method markedly advances the capability of LLMs to comply with instructions within recommender systems, while sustaining a high level of accuracy performance.
|
2406.14162
|
Jingwei Ni
|
Jingwei Ni, Tobias Schimanski, Meihong Lin, Mrinmaya Sachan, Elliott
Ash, Markus Leippold
|
DIRAS: Efficient LLM-Assisted Annotation of Document Relevance in
Retrieval Augmented Generation
| null | null | null | null |
cs.IR cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Retrieval Augmented Generation (RAG) is widely employed to ground responses
to queries on domain-specific documents. But do RAG implementations leave out
important information or excessively include irrelevant information? To allay
these concerns, it is necessary to annotate domain-specific benchmarks to
evaluate information retrieval (IR) performance, as relevance definitions vary
across queries and domains. Furthermore, such benchmarks should be
cost-efficiently annotated to avoid annotation selection bias. In this paper,
we propose DIRAS (Domain-specific Information Retrieval Annotation with
Scalability), a manual-annotation-free schema that fine-tunes open-source LLMs
to annotate relevance labels with calibrated relevance probabilities. Extensive
evaluation shows that DIRAS fine-tuned models achieve GPT-4-level performance
on annotating and ranking unseen (query, document) pairs and are helpful for
real-world RAG development.
|
[
{
"created": "Thu, 20 Jun 2024 10:04:09 GMT",
"version": "v1"
}
] |
2024-06-21
|
[
[
"Ni",
"Jingwei",
""
],
[
"Schimanski",
"Tobias",
""
],
[
"Lin",
"Meihong",
""
],
[
"Sachan",
"Mrinmaya",
""
],
[
"Ash",
"Elliott",
""
],
[
"Leippold",
"Markus",
""
]
] |
Retrieval Augmented Generation (RAG) is widely employed to ground responses to queries on domain-specific documents. But do RAG implementations leave out important information or excessively include irrelevant information? To allay these concerns, it is necessary to annotate domain-specific benchmarks to evaluate information retrieval (IR) performance, as relevance definitions vary across queries and domains. Furthermore, such benchmarks should be cost-efficiently annotated to avoid annotation selection bias. In this paper, we propose DIRAS (Domain-specific Information Retrieval Annotation with Scalability), a manual-annotation-free schema that fine-tunes open-source LLMs to annotate relevance labels with calibrated relevance probabilities. Extensive evaluation shows that DIRAS fine-tuned models achieve GPT-4-level performance on annotating and ranking unseen (query, document) pairs and are helpful for real-world RAG development.
|
1606.05210
|
Lene M. Favrholdt
|
Joan Boyar, Lene M. Favrholdt, Christian Kudahl and Jesper W.
Mikkelsen
|
Weighted Online Problems with Advice
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, the first online complexity class, AOC, was introduced. The class
consists of many online problems where each request must be either accepted or
rejected, and the aim is to either minimize or maximize the number of accepted
requests, while maintaining a feasible solution. All AOC-complete problems
(including Independent Set, Vertex Cover, Dominating Set, and Set Cover) have
essentially the same advice complexity. In this paper, we study weighted
versions of problems in AOC, i.e., each request comes with a weight and the aim
is to either minimize or maximize the total weight of the accepted requests. In
contrast to the unweighted versions, we show that there is a significant
difference in the advice complexity of complete minimization and maximization
problems. We also show that our algorithmic techniques for dealing with
weighted requests can be extended to work for non-complete AOC problems such as
maximum matching (giving better results than what follows from the general AOC
results) and even non-AOC problems such as scheduling.
|
[
{
"created": "Thu, 16 Jun 2016 14:47:04 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Aug 2017 11:43:21 GMT",
"version": "v2"
}
] |
2017-08-15
|
[
[
"Boyar",
"Joan",
""
],
[
"Favrholdt",
"Lene M.",
""
],
[
"Kudahl",
"Christian",
""
],
[
"Mikkelsen",
"Jesper W.",
""
]
] |
Recently, the first online complexity class, AOC, was introduced. The class consists of many online problems where each request must be either accepted or rejected, and the aim is to either minimize or maximize the number of accepted requests, while maintaining a feasible solution. All AOC-complete problems (including Independent Set, Vertex Cover, Dominating Set, and Set Cover) have essentially the same advice complexity. In this paper, we study weighted versions of problems in AOC, i.e., each request comes with a weight and the aim is to either minimize or maximize the total weight of the accepted requests. In contrast to the unweighted versions, we show that there is a significant difference in the advice complexity of complete minimization and maximization problems. We also show that our algorithmic techniques for dealing with weighted requests can be extended to work for non-complete AOC problems such as maximum matching (giving better results than what follows from the general AOC results) and even non-AOC problems such as scheduling.
|
2112.01315
|
Christoph Derks
|
Christoph Derks, Daniel Str\"uber, Thorsten Berger
|
A Generator Framework For Evolving Variant-Rich Software
|
9 pages, 5 figures
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Evolving software is challenging, even more so when it exists in many
different variants. Such software evolves not only in time, but also in
space--another dimension of complexity. While evolution in space is supported
by a variety of product-line and variability management tools, many of which
originate from research, their level of evaluation varies significantly,
which threatens their relevance for practitioners and future research. Many
tools have only been evaluated on ad hoc datasets, minimal examples, or
available preprocessor-based product lines, missing the early clone & own
phases and the re-engineering into configurable platforms--large parts of the
actual evolution lifecycle of variant-rich systems. Our long-term goal is to
provide benchmarks to increase the maturity of evaluating such tools.
However, providing manually curated benchmarks that cover the whole evolution
lifecycle and that are detailed enough to serve as ground truths is
challenging. We present the framework vpbench, which generates source-code
histories of variant-rich systems. Vpbench comprises several modular
generators relying on evolution operators that systematically and
automatically evolve real codebases and document the evolution in detail. We
provide simple and more advanced generators--e.g., relying on code
transplantation techniques to obtain whole features from external, real-world
projects. We define requirements and demonstrate how vpbench addresses them
for the generated version histories, focusing on support for evolution in
time and space and the generation of detailed metadata about the evolution,
while also considering compilability and extensibility.
|
[
{
"created": "Thu, 2 Dec 2021 15:19:25 GMT",
"version": "v1"
}
] |
2021-12-03
|
[
[
"Derks",
"Christoph",
""
],
[
"Strüber",
"Daniel",
""
],
[
"Berger",
"Thorsten",
""
]
] |
Evolving software is challenging, even more so when it exists in many different variants. Such software evolves not only in time, but also in space--another dimension of complexity. While evolution in space is supported by a variety of product-line and variability management tools, many of which originate from research, their level of evaluation varies significantly, which threatens their relevance for practitioners and future research. Many tools have only been evaluated on ad hoc datasets, minimal examples, or available preprocessor-based product lines, missing the early clone & own phases and the re-engineering into configurable platforms--large parts of the actual evolution lifecycle of variant-rich systems. Our long-term goal is to provide benchmarks to increase the maturity of evaluating such tools. However, providing manually curated benchmarks that cover the whole evolution lifecycle and that are detailed enough to serve as ground truths is challenging. We present the framework vpbench, which generates source-code histories of variant-rich systems. Vpbench comprises several modular generators relying on evolution operators that systematically and automatically evolve real codebases and document the evolution in detail. We provide simple and more advanced generators--e.g., relying on code transplantation techniques to obtain whole features from external, real-world projects. We define requirements and demonstrate how vpbench addresses them for the generated version histories, focusing on support for evolution in time and space and the generation of detailed metadata about the evolution, while also considering compilability and extensibility.
|
2201.02626
|
Zhiming Lin
|
Zhiming Lin
|
Neighbor2vec: an efficient and effective method for Graph Embedding
| null | null | null | null |
cs.SI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph embedding techniques have led to significant progress in recent years.
However, present techniques are not effective enough to capture the patterns
of networks. This paper proposes neighbor2vec, an algorithm that uses a
neighbor-based sampling strategy to learn the neighborhood representations of
nodes, and a framework to gather structural information by feature
propagation between a node and its neighbors. We claim that neighbor2vec is a
simple and effective approach to enhancing the scalability as well as the
quality of graph embedding, and that it breaks the limits of the existing
state-of-the-art unsupervised techniques. We conduct experiments on several
node classification and link prediction tasks for networks such as
ogbn-arxiv, ogbn-products, ogbn-proteins, ogbl-ppa, ogbl-collab and
ogbl-citation2. The results show that neighbor2vec's representations provide
average accuracy scores up to 6.8 percent higher than competing methods in
node classification tasks and 3.0 percent higher in link prediction tasks.
Neighbor2vec's representations outperform all baseline methods and two
classical GNN models in all six experiments.
|
[
{
"created": "Fri, 7 Jan 2022 16:08:26 GMT",
"version": "v1"
}
] |
2022-01-11
|
[
[
"Lin",
"Zhiming",
""
]
] |
Graph embedding techniques have led to significant progress in recent years. However, present techniques are not effective enough to capture the patterns of networks. This paper proposes neighbor2vec, an algorithm that uses a neighbor-based sampling strategy to learn the neighborhood representations of nodes, and a framework to gather structural information by feature propagation between a node and its neighbors. We claim that neighbor2vec is a simple and effective approach to enhancing the scalability as well as the quality of graph embedding, and that it breaks the limits of the existing state-of-the-art unsupervised techniques. We conduct experiments on several node classification and link prediction tasks for networks such as ogbn-arxiv, ogbn-products, ogbn-proteins, ogbl-ppa, ogbl-collab and ogbl-citation2. The results show that neighbor2vec's representations provide average accuracy scores up to 6.8 percent higher than competing methods in node classification tasks and 3.0 percent higher in link prediction tasks. Neighbor2vec's representations outperform all baseline methods and two classical GNN models in all six experiments.
|
2111.05196
|
David Alfonso-Hermelo
|
David Alfonso-Hermelo, Ahmad Rashid, Abbas Ghaddar, Philippe Langlais,
Mehdi Rezagholizadeh
|
NATURE: Natural Auxiliary Text Utterances for Realistic Spoken Language
Evaluation
|
20 pages, 4 figures, accepted to NeurIPS 2021 Track Datasets and
Benchmarks
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Slot-filling and intent detection are the backbone of conversational agents
such as voice assistants, and are active areas of research. Even though
state-of-the-art techniques on publicly available benchmarks show impressive
performance, their ability to generalize to realistic scenarios is yet to be
demonstrated. In this work, we present NATURE, a set of simple spoken-language
oriented transformations, applied to the evaluation set of datasets, to
introduce human spoken language variations while preserving the semantics of an
utterance. We apply NATURE to common slot-filling and intent detection
benchmarks and demonstrate that simple perturbations of the standard
evaluation set by NATURE can deteriorate model performance significantly.
Through our experiments, we demonstrate that when NATURE operators are
applied to the evaluation sets of popular benchmarks, model accuracy can drop
by up to 40%.
|
[
{
"created": "Tue, 9 Nov 2021 15:09:06 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Jan 2022 17:40:16 GMT",
"version": "v2"
}
] |
2022-01-31
|
[
[
"Alfonso-Hermelo",
"David",
""
],
[
"Rashid",
"Ahmad",
""
],
[
"Ghaddar",
"Abbas",
""
],
[
"Langlais",
"Philippe",
""
],
[
"Rezagholizadeh",
"Mehdi",
""
]
] |
Slot-filling and intent detection are the backbone of conversational agents such as voice assistants, and are active areas of research. Even though state-of-the-art techniques on publicly available benchmarks show impressive performance, their ability to generalize to realistic scenarios is yet to be demonstrated. In this work, we present NATURE, a set of simple spoken-language oriented transformations, applied to the evaluation set of datasets, to introduce human spoken language variations while preserving the semantics of an utterance. We apply NATURE to common slot-filling and intent detection benchmarks and demonstrate that simple perturbations of the standard evaluation set by NATURE can deteriorate model performance significantly. Through our experiments, we demonstrate that when NATURE operators are applied to the evaluation sets of popular benchmarks, model accuracy can drop by up to 40%.
|
2204.04372
|
Edward Raff
|
Edward Raff, Andrew L. Farris
|
A Siren Song of Open Source Reproducibility
|
To be presented at the ML Evaluation Standards Workshop at ICLR 2022
| null | null | null |
cs.LG cs.AI cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As reproducibility becomes a greater concern, conferences have largely
converged to a strategy of asking reviewers to indicate whether code was
attached to a submission. This is part of a larger trend of taking action based
on assumed ideals, without studying if those actions will yield the desired
outcome. Our argument is that this focus on code for replication is misguided
if we want to improve the state of reproducible research. This focus can be
harmful -- we should not force code to be submitted. There is a lack of
evidence for effective actions taken by conferences to encourage and reward
reproducibility. We argue that venues must take more action to advance
reproducible machine learning research today.
|
[
{
"created": "Sat, 9 Apr 2022 03:06:40 GMT",
"version": "v1"
}
] |
2022-04-12
|
[
[
"Raff",
"Edward",
""
],
[
"Farris",
"Andrew L.",
""
]
] |
As reproducibility becomes a greater concern, conferences have largely converged to a strategy of asking reviewers to indicate whether code was attached to a submission. This is part of a larger trend of taking action based on assumed ideals, without studying if those actions will yield the desired outcome. Our argument is that this focus on code for replication is misguided if we want to improve the state of reproducible research. This focus can be harmful -- we should not force code to be submitted. There is a lack of evidence for effective actions taken by conferences to encourage and reward reproducibility. We argue that venues must take more action to advance reproducible machine learning research today.
|
1911.08743
|
Preslav Nakov
|
Todor Mihaylov, Preslav Nakov
|
SemanticZ at SemEval-2016 Task 3: Ranking Relevant Answers in Community
Question Answering Using Semantic Similarity Based on Fine-tuned Word
Embeddings
|
community question answering, semantic similarity
|
SemEval-2016
| null | null |
cs.CL cs.AI cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe our system for finding good answers in a community forum, as
defined in SemEval-2016, Task 3 on Community Question Answering. Our approach
relies on several semantic similarity features based on fine-tuned word
embeddings and topics similarities. In the main Subtask C, our primary
submission was ranked third, with a MAP of 51.68 and accuracy of 69.94. In
Subtask A, our primary submission was also third, with MAP of 77.58 and
accuracy of 73.39.
|
[
{
"created": "Wed, 20 Nov 2019 07:16:16 GMT",
"version": "v1"
}
] |
2019-11-21
|
[
[
"Mihaylov",
"Todor",
""
],
[
"Nakov",
"Preslav",
""
]
] |
We describe our system for finding good answers in a community forum, as defined in SemEval-2016, Task 3 on Community Question Answering. Our approach relies on several semantic similarity features based on fine-tuned word embeddings and topics similarities. In the main Subtask C, our primary submission was ranked third, with a MAP of 51.68 and accuracy of 69.94. In Subtask A, our primary submission was also third, with MAP of 77.58 and accuracy of 73.39.
|
1712.05247
|
Matthew Piekenbrock
|
Matthew Piekenbrock, Derek Doran
|
Intrinsic Point of Interest Discovery from Trajectory Data
|
10 pages, 9 figures
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a framework for intrinsic point of interest discovery
from trajectory databases. Intrinsic points of interest are regions of a
geospatial area innately defined by the spatial and temporal aspects of
trajectory data, and can be of varying size, shape, and resolution. Any
trajectory database exhibits such points of interest, which are hence
intrinsic, in contrast to most other point of interest definitions, which are
said to be extrinsic, as they require trajectory metadata, external knowledge
about the region in which the trajectories are observed, or other
application-specific information. Spatial and temporal aspects are qualities
of any trajectory database, making the framework applicable to data from any
domain and of any resolution. The framework builds on recent developments on
the consistency of nonparametric hierarchical density estimators and enables
the possibility of formal statistical inference and evaluation over such
intrinsic points of interest. Comparisons of the POIs uncovered by the
framework in synthetic truth data to thousands of parameter settings for
common POI discovery methods show a marked improvement in fidelity without
the need to tune any parameters by hand.
|
[
{
"created": "Thu, 14 Dec 2017 14:26:39 GMT",
"version": "v1"
}
] |
2017-12-15
|
[
[
"Piekenbrock",
"Matthew",
""
],
[
"Doran",
"Derek",
""
]
] |
This paper presents a framework for intrinsic point of interest discovery from trajectory databases. Intrinsic points of interest are regions of a geospatial area innately defined by the spatial and temporal aspects of trajectory data, and can be of varying size, shape, and resolution. Any trajectory database exhibits such points of interest, which are hence intrinsic, in contrast to most other point of interest definitions, which are said to be extrinsic, as they require trajectory metadata, external knowledge about the region in which the trajectories are observed, or other application-specific information. Spatial and temporal aspects are qualities of any trajectory database, making the framework applicable to data from any domain and of any resolution. The framework builds on recent developments on the consistency of nonparametric hierarchical density estimators and enables the possibility of formal statistical inference and evaluation over such intrinsic points of interest. Comparisons of the POIs uncovered by the framework in synthetic truth data to thousands of parameter settings for common POI discovery methods show a marked improvement in fidelity without the need to tune any parameters by hand.
|
1603.02532
|
Antti Honkela
|
Otte Hein\"avaara, Janne Lepp\"a-aho, Jukka Corander and Antti Honkela
|
On the inconsistency of $\ell_1$-penalised sparse precision matrix
estimation
|
9 pages, 10 figures
| null | null | null |
cs.LG stat.CO stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Various $\ell_1$-penalised estimation methods such as graphical lasso and
CLIME are widely used for sparse precision matrix estimation. Many of these
methods have been shown to be consistent under various quantitative assumptions
about the underlying true covariance matrix. Intuitively, these conditions are
related to situations where the penalty term will dominate the optimisation. In
this paper, we explore the consistency of $\ell_1$-based methods for a class of
sparse latent-variable-like models, which are strongly motivated by several
types of applications. We show that all $\ell_1$-based methods fail
dramatically for models with nearly linear dependencies between the variables.
We also study the consistency on models derived from real gene expression data
and note that the assumptions needed for consistency never hold even for modest
sized gene networks and $\ell_1$-based methods also become unreliable in
practice for larger networks.
|
[
{
"created": "Tue, 8 Mar 2016 14:24:11 GMT",
"version": "v1"
}
] |
2016-03-09
|
[
[
"Heinävaara",
"Otte",
""
],
[
"Leppä-aho",
"Janne",
""
],
[
"Corander",
"Jukka",
""
],
[
"Honkela",
"Antti",
""
]
] |
Various $\ell_1$-penalised estimation methods such as graphical lasso and CLIME are widely used for sparse precision matrix estimation. Many of these methods have been shown to be consistent under various quantitative assumptions about the underlying true covariance matrix. Intuitively, these conditions are related to situations where the penalty term will dominate the optimisation. In this paper, we explore the consistency of $\ell_1$-based methods for a class of sparse latent-variable-like models, which are strongly motivated by several types of applications. We show that all $\ell_1$-based methods fail dramatically for models with nearly linear dependencies between the variables. We also study the consistency on models derived from real gene expression data and note that the assumptions needed for consistency never hold even for modest sized gene networks and $\ell_1$-based methods also become unreliable in practice for larger networks.
|
1303.5841
|
Laghrouche Salah
|
Jianxing Liu, Salah Laghrouche, M.Harmouche and Maxime Wack
|
Adaptive-Gain Second Order Sliding Mode Observer Design for Switching
Power Converters
| null | null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, a novel adaptive-gain Second Order Sliding Mode (SOSM)
observer is proposed for multicell converters, considering them as a class of
hybrid systems. The aim is to reduce the number of voltage sensors by
estimating the capacitor voltages only from the measurement of the load
current. The proposed observer is proven to be robust in the presence of
perturbations with \emph{unknown} boundary. However, the states of the system
are only partially observable in the sense of the observability rank
condition. Due to its switching behavior, a recent concept of $Z(T_N)$
observability is used to analyze its hybrid observability, since its
observability depends upon the switching control signals. Under certain
conditions on the switching sequences, the voltage across each capacitor
becomes observable. Simulation results and comparisons with a Luenberger
switched observer highlight the effectiveness and robustness of the proposed
observer with respect to output measurement noise and system uncertainties
(load variations).
|
[
{
"created": "Sat, 23 Mar 2013 12:46:03 GMT",
"version": "v1"
},
{
"created": "Thu, 30 May 2013 16:50:47 GMT",
"version": "v2"
}
] |
2013-05-31
|
[
[
"Liu",
"Jianxing",
""
],
[
"Laghrouche",
"Salah",
""
],
[
"Harmouche",
"M.",
""
],
[
"Wack",
"Maxime",
""
]
] |
In this paper, a novel adaptive-gain Second Order Sliding Mode (SOSM) observer is proposed for multicell converters, considering them as a class of hybrid systems. The aim is to reduce the number of voltage sensors by estimating the capacitor voltages only from the measurement of the load current. The proposed observer is proven to be robust in the presence of perturbations with \emph{unknown} boundary. However, the states of the system are only partially observable in the sense of the observability rank condition. Due to its switching behavior, a recent concept of $Z(T_N)$ observability is used to analyze its hybrid observability, since its observability depends upon the switching control signals. Under certain conditions on the switching sequences, the voltage across each capacitor becomes observable. Simulation results and comparisons with a Luenberger switched observer highlight the effectiveness and robustness of the proposed observer with respect to output measurement noise and system uncertainties (load variations).
|
2407.00031
|
Holger R. Roth
|
Holger R. Roth, Daniel J. Beutel, Yan Cheng, Javier Fernandez Marques,
Heng Pan, Chester Chen, Zhihong Zhang, Yuhong Wen, Sean Yang, Isaac
(Te-Chung) Yang, Yuan-Ting Hsieh, Ziyue Xu, Daguang Xu, Nicholas D. Lane,
Andrew Feng
|
Supercharging Federated Learning with Flower and NVIDIA FLARE
|
Added a figure comparing running a Flower application natively or
within FLARE
| null | null | null |
cs.DC cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Several open-source systems, such as Flower and NVIDIA FLARE, have been
developed in recent years while focusing on different aspects of federated
learning (FL). Flower is dedicated to implementing a cohesive approach to FL,
analytics, and evaluation. Over time, Flower has cultivated extensive
strategies and algorithms tailored for FL application development, fostering a
vibrant FL community in research and industry. Conversely, FLARE has
prioritized the creation of an enterprise-ready, resilient runtime environment
explicitly designed for FL applications in production environments. In this
paper, we describe our initial integration of both frameworks and show how they
can work together to supercharge the FL ecosystem as a whole. Through the
seamless integration of Flower and FLARE, applications crafted within the
Flower framework can effortlessly operate within the FLARE runtime environment
without necessitating any modifications. This initial integration streamlines
the process, eliminating complexities and ensuring smooth interoperability
between the two platforms, thus enhancing the overall efficiency and
accessibility of FL applications.
|
[
{
"created": "Tue, 21 May 2024 21:22:16 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Jul 2024 07:01:48 GMT",
"version": "v2"
}
] |
2024-07-23
|
[
[
"Roth",
"Holger R.",
""
],
[
"Beutel",
"Daniel J.",
""
],
[
"Cheng",
"Yan",
""
],
[
"Marques",
"Javier Fernandez",
""
],
[
"Pan",
"Heng",
""
],
[
"Chen",
"Chester",
""
],
[
"Zhang",
"Zhihong",
""
],
[
"Wen",
"Yuhong",
""
],
[
"Yang",
"Sean",
""
],
[
"Yang",
"Isaac (Te-Chung)",
""
],
[
"Hsieh",
"Yuan-Ting",
""
],
[
"Xu",
"Ziyue",
""
],
[
"Xu",
"Daguang",
""
],
[
"Lane",
"Nicholas D.",
""
],
[
"Feng",
"Andrew",
""
]
] |
Several open-source systems, such as Flower and NVIDIA FLARE, have been developed in recent years while focusing on different aspects of federated learning (FL). Flower is dedicated to implementing a cohesive approach to FL, analytics, and evaluation. Over time, Flower has cultivated extensive strategies and algorithms tailored for FL application development, fostering a vibrant FL community in research and industry. Conversely, FLARE has prioritized the creation of an enterprise-ready, resilient runtime environment explicitly designed for FL applications in production environments. In this paper, we describe our initial integration of both frameworks and show how they can work together to supercharge the FL ecosystem as a whole. Through the seamless integration of Flower and FLARE, applications crafted within the Flower framework can effortlessly operate within the FLARE runtime environment without necessitating any modifications. This initial integration streamlines the process, eliminating complexities and ensuring smooth interoperability between the two platforms, thus enhancing the overall efficiency and accessibility of FL applications.
|
2306.17140
|
Weihao Cheng
|
Weihao Cheng, Yan-Pei Cao, Ying Shan
|
ID-Pose: Sparse-view Camera Pose Estimation by Inverting Diffusion
Models
|
Github: https://xt4d.github.io/id-pose-web/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Given sparse views of a 3D object, estimating their camera poses is a
long-standing and intractable problem. Toward this goal, we consider harnessing
the pre-trained diffusion model of novel views conditioned on viewpoints
(Zero-1-to-3). We present ID-Pose, which inverts the denoising diffusion
process to estimate the relative pose given two input images. ID-Pose adds
noise to one image, and predicts the noise conditioned on the other image and a
hypothesis of the relative pose. The prediction error is used as the
minimization objective to find the optimal pose with the gradient descent
method. We extend ID-Pose to handle more than two images and estimate each pose
with multiple image pairs from triangular relations. ID-Pose requires no
training and generalizes to open-world images. We conduct extensive experiments
using casually captured photos and rendered images with random viewpoints. The
results demonstrate that ID-Pose significantly outperforms state-of-the-art
methods.
|
[
{
"created": "Thu, 29 Jun 2023 17:41:41 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Nov 2023 18:33:12 GMT",
"version": "v2"
}
] |
2023-12-01
|
[
[
"Cheng",
"Weihao",
""
],
[
"Cao",
"Yan-Pei",
""
],
[
"Shan",
"Ying",
""
]
] |
Given sparse views of a 3D object, estimating their camera poses is a long-standing and intractable problem. Toward this goal, we consider harnessing the pre-trained diffusion model of novel views conditioned on viewpoints (Zero-1-to-3). We present ID-Pose, which inverts the denoising diffusion process to estimate the relative pose given two input images. ID-Pose adds noise to one image, and predicts the noise conditioned on the other image and a hypothesis of the relative pose. The prediction error is used as the minimization objective to find the optimal pose with the gradient descent method. We extend ID-Pose to handle more than two images and estimate each pose with multiple image pairs from triangular relations. ID-Pose requires no training and generalizes to open-world images. We conduct extensive experiments using casually captured photos and rendered images with random viewpoints. The results demonstrate that ID-Pose significantly outperforms state-of-the-art methods.
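The loop described in the abstract (hypothesize a pose, score it by the noise-prediction error, descend the gradient) can be sketched as follows. This is a toy illustration only: the quadratic `error` stands in for the diffusion model's noise-prediction loss, and `true_pose`, the step size, and the finite-difference gradient are assumptions, not details from the paper.

```python
import numpy as np

# Ground-truth relative pose (hypothetical azimuth/elevation offsets).
true_pose = np.array([0.6, -0.3])

def error(pose):
    # Stand-in for the noise-prediction error
    # || eps - eps_theta(noisy_image, cond_image, pose) ||^2.
    return float(np.sum((pose - true_pose) ** 2))

def grad(f, x, h=1e-5):
    # Finite-difference gradient; the real method would backpropagate
    # through the denoising network instead.
    g = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = h
        g[i] = (f(x + step) - f(x - step)) / (2 * h)
    return g

pose = np.zeros(2)            # initial pose hypothesis
for _ in range(200):          # gradient descent on the prediction error
    pose -= 0.1 * grad(error, pose)

print(np.round(pose, 3))      # converges toward the true relative pose
```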
|
2108.10265
|
Xuyang Shen
|
Xuyang Shen, Jo Plested, Sabrina Caldwell, Tom Gedeon
|
Exploring Biases and Prejudice of Facial Synthesis via Semantic Latent
Space
|
8 pages, 11 figures; accepted by IJCNN2021
| null | null | null |
cs.CV cs.CY cs.LG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Deep learning (DL) models are widely used to provide a more convenient and
smarter life. However, biased algorithms will negatively influence us. For
instance, groups targeted by biased algorithms will feel unfairly treated and
even fearful of negative consequences of these biases. This work targets biased
generative models' behaviors, identifying the cause of the biases and
eliminating them. We can (as expected) conclude that biased data causes biased
predictions of face frontalization models. Varying the proportions of male and
female faces in the training data can have a substantial effect on behavior on
the test data: we found that the seemingly obvious choice of 50:50 proportions
was not the best for this dataset to reduce biased behavior on female faces,
which was 71% unbiased as compared to our top unbiased rate of 84%. Failure in
generation and generating incorrect gender faces are two behaviors of these
models. In addition, only some layers in face frontalization models are
vulnerable to biased datasets. Optimizing the skip-connections of the generator
in face frontalization models can make models less biased. We conclude that it
is likely to be impossible to eliminate all training bias without an unlimited
size dataset, and our experiments show that the bias can be reduced and
quantified. We believe the next best to a perfect unbiased predictor is one
that has minimized the remaining known bias.
|
[
{
"created": "Mon, 23 Aug 2021 16:09:18 GMT",
"version": "v1"
}
] |
2021-08-24
|
[
[
"Shen",
"Xuyang",
""
],
[
"Plested",
"Jo",
""
],
[
"Caldwell",
"Sabrina",
""
],
[
"Gedeon",
"Tom",
""
]
] |
Deep learning (DL) models are widely used to provide a more convenient and smarter life. However, biased algorithms will negatively influence us. For instance, groups targeted by biased algorithms will feel unfairly treated and even fearful of negative consequences of these biases. This work targets biased generative models' behaviors, identifying the cause of the biases and eliminating them. We can (as expected) conclude that biased data causes biased predictions of face frontalization models. Varying the proportions of male and female faces in the training data can have a substantial effect on behavior on the test data: we found that the seemingly obvious choice of 50:50 proportions was not the best for this dataset to reduce biased behavior on female faces, which was 71% unbiased as compared to our top unbiased rate of 84%. Failure in generation and generating incorrect gender faces are two behaviors of these models. In addition, only some layers in face frontalization models are vulnerable to biased datasets. Optimizing the skip-connections of the generator in face frontalization models can make models less biased. We conclude that it is likely to be impossible to eliminate all training bias without an unlimited size dataset, and our experiments show that the bias can be reduced and quantified. We believe the next best to a perfect unbiased predictor is one that has minimized the remaining known bias.
|
1303.2140
|
Jeroen Ooms
|
Jeroen Ooms
|
Possible Directions for Improving Dependency Versioning in R
| null |
The R Journal Vol. 5/1, June 2013
| null | null |
cs.SE cs.MS stat.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the most powerful features of R is its infrastructure for contributed
code. The built-in package manager and complementary repositories provide a
great system for development and exchange of code, and have played an important
role in the growth of the platform towards the de-facto standard in statistical
computing that it is today. However, the number of packages on CRAN and other
repositories has increased beyond what might have been foreseen, and is
revealing some limitations of the current design. One such problem is the
general lack of dependency versioning in the infrastructure. This paper
explores this problem in greater detail, and suggests approaches taken by other
open source communities that might work for R as well. Three use cases are
defined that exemplify the issue, and illustrate how improving this aspect of
package management could increase reliability while supporting further growth
of the R community.
|
[
{
"created": "Fri, 8 Mar 2013 22:32:22 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Nov 2013 18:47:18 GMT",
"version": "v2"
}
] |
2013-11-04
|
[
[
"Ooms",
"Jeroen",
""
]
] |
One of the most powerful features of R is its infrastructure for contributed code. The built-in package manager and complementary repositories provide a great system for development and exchange of code, and have played an important role in the growth of the platform towards the de-facto standard in statistical computing that it is today. However, the number of packages on CRAN and other repositories has increased beyond what might have been foreseen, and is revealing some limitations of the current design. One such problem is the general lack of dependency versioning in the infrastructure. This paper explores this problem in greater detail, and suggests approaches taken by other open source communities that might work for R as well. Three use cases are defined that exemplify the issue, and illustrate how improving this aspect of package management could increase reliability while supporting further growth of the R community.
|
2209.09188
|
Conor Corbin
|
Conor K. Corbin, Michael Baiocchi, Jonathan H. Chen
|
Avoiding Biased Clinical Machine Learning Model Performance Estimates in
the Presence of Label Selection
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
When evaluating the performance of clinical machine learning models, one must
consider the deployment population. When the population of patients with
observed labels is only a subset of the deployment population (label
selection), standard model performance estimates on the observed population may
be misleading. In this study we describe three classes of label selection and
simulate five causally distinct scenarios to assess how particular selection
mechanisms bias a suite of commonly reported binary machine learning model
performance metrics. Simulations reveal that when selection is affected by
observed features, naive estimates of model discrimination may be misleading.
When selection is affected by labels, naive estimates of calibration fail to
reflect reality. We borrow traditional weighting estimators from the causal
inference literature and find that when selection probabilities are properly
specified, they recover full population estimates. We then tackle the
real-world task of monitoring the performance of deployed machine learning
models whose interactions with clinicians feed back and affect the selection
mechanism of the labels. We train three machine learning models to flag
low-yield laboratory diagnostics, and simulate their intended consequence of
reducing wasteful laboratory utilization. We find that naive estimates of AUROC
on the observed population undershoot actual performance by up to 20%. Such a
disparity could be large enough to lead to the wrongful termination of a
successful clinical decision support tool. We propose an altered deployment
procedure, one that combines injected randomization with traditional weighted
estimates, and find it recovers true model performance.
|
[
{
"created": "Thu, 15 Sep 2022 22:30:14 GMT",
"version": "v1"
}
] |
2022-09-20
|
[
[
"Corbin",
"Conor K.",
""
],
[
"Baiocchi",
"Michael",
""
],
[
"Chen",
"Jonathan H.",
""
]
] |
When evaluating the performance of clinical machine learning models, one must consider the deployment population. When the population of patients with observed labels is only a subset of the deployment population (label selection), standard model performance estimates on the observed population may be misleading. In this study we describe three classes of label selection and simulate five causally distinct scenarios to assess how particular selection mechanisms bias a suite of commonly reported binary machine learning model performance metrics. Simulations reveal that when selection is affected by observed features, naive estimates of model discrimination may be misleading. When selection is affected by labels, naive estimates of calibration fail to reflect reality. We borrow traditional weighting estimators from the causal inference literature and find that when selection probabilities are properly specified, they recover full population estimates. We then tackle the real-world task of monitoring the performance of deployed machine learning models whose interactions with clinicians feed back and affect the selection mechanism of the labels. We train three machine learning models to flag low-yield laboratory diagnostics, and simulate their intended consequence of reducing wasteful laboratory utilization. We find that naive estimates of AUROC on the observed population undershoot actual performance by up to 20%. Such a disparity could be large enough to lead to the wrongful termination of a successful clinical decision support tool. We propose an altered deployment procedure, one that combines injected randomization with traditional weighted estimates, and find it recovers true model performance.
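The weighting idea the study borrows can be illustrated with a small simulation: when selection probabilities are known, inverse-probability weighting recovers the full-population accuracy that a naive estimate on the observed subset misses. The data-generating process, selection model, and metric below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

# Full deployment population: one feature, labels, and a model's scores.
x = rng.normal(size=n)
y = (x + rng.normal(size=n) > 0).astype(int)
score = x + rng.normal(scale=0.5, size=n)
correct = (score > 0).astype(int) == y

# Label selection driven by the observed feature, with known probabilities:
# high-x patients are far more likely to have observed labels.
p_sel = 0.05 + 0.9 / (1.0 + np.exp(-(3.0 * x - 1.0)))
observed = rng.random(n) < p_sel

true_acc = correct.mean()              # full-population accuracy
naive_acc = correct[observed].mean()   # biased: easy cases are over-selected
# Inverse-probability weighting recovers the full-population estimate.
ipw_acc = np.average(correct[observed], weights=1.0 / p_sel[observed])

print(round(true_acc, 3), round(naive_acc, 3), round(ipw_acc, 3))
```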
|
2404.07503
|
Ruibo Liu
|
Ruibo Liu, Jerry Wei, Fangyu Liu, Chenglei Si, Yanzhe Zhang, Jinmeng
Rao, Steven Zheng, Daiyi Peng, Diyi Yang, Denny Zhou, Andrew M. Dai
|
Best Practices and Lessons Learned on Synthetic Data
|
In COLM 2024
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The success of AI models relies on the availability of large, diverse, and
high-quality datasets, which can be challenging to obtain due to data scarcity,
privacy concerns, and high costs. Synthetic data has emerged as a promising
solution by generating artificial data that mimics real-world patterns. This
paper provides an overview of synthetic data research, discussing its
applications, challenges, and future directions. We present empirical evidence
from prior art to demonstrate its effectiveness and highlight the importance of
ensuring its factuality, fidelity, and unbiasedness. We emphasize the need for
responsible use of synthetic data to build more powerful, inclusive, and
trustworthy language models.
|
[
{
"created": "Thu, 11 Apr 2024 06:34:17 GMT",
"version": "v1"
},
{
"created": "Sat, 10 Aug 2024 20:46:47 GMT",
"version": "v2"
}
] |
2024-08-13
|
[
[
"Liu",
"Ruibo",
""
],
[
"Wei",
"Jerry",
""
],
[
"Liu",
"Fangyu",
""
],
[
"Si",
"Chenglei",
""
],
[
"Zhang",
"Yanzhe",
""
],
[
"Rao",
"Jinmeng",
""
],
[
"Zheng",
"Steven",
""
],
[
"Peng",
"Daiyi",
""
],
[
"Yang",
"Diyi",
""
],
[
"Zhou",
"Denny",
""
],
[
"Dai",
"Andrew M.",
""
]
] |
The success of AI models relies on the availability of large, diverse, and high-quality datasets, which can be challenging to obtain due to data scarcity, privacy concerns, and high costs. Synthetic data has emerged as a promising solution by generating artificial data that mimics real-world patterns. This paper provides an overview of synthetic data research, discussing its applications, challenges, and future directions. We present empirical evidence from prior art to demonstrate its effectiveness and highlight the importance of ensuring its factuality, fidelity, and unbiasedness. We emphasize the need for responsible use of synthetic data to build more powerful, inclusive, and trustworthy language models.
|
2102.03861
|
Fares Abu-Dakka Dr.
|
Matteo Saveriano, Fares J. Abu-Dakka, Aljaz Kramberger, and Luka
Peternel
|
Dynamic Movement Primitives in Robotics: A Tutorial Survey
|
43 pages, 21 figures, 5 tables
| null |
10.1177/02783649231201196
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Biological systems, including human beings, have the innate ability to
perform complex tasks in a versatile and agile manner. Researchers in
sensorimotor control have tried to understand and formally define this innate
property. The idea, supported by several experimental findings, that biological
systems are able to combine and adapt basic units of motion into complex tasks
finally led to the formulation of the motor primitives theory. In this
respect, Dynamic Movement Primitives (DMPs) represent an elegant mathematical
formulation of the motor primitives as stable dynamical systems, and are well
suited to generate motor commands for artificial systems like robots. In the
last decades, DMPs have inspired researchers in different robotic fields
including imitation and reinforcement learning, optimal control, physical
interaction, and human-robot co-working, resulting in a considerable amount of
published papers. The goal of this tutorial survey is two-fold. On one side, we
present the existing DMP formulations in rigorous mathematical terms, and
discuss advantages and limitations of each approach as well as practical
implementation details. In the tutorial vein, we also search for existing
implementations of presented approaches and release several others. On the
other side, we provide a systematic and comprehensive review of existing
literature and categorize state-of-the-art work on DMPs. The paper concludes
with a discussion on the limitations of DMPs and an outline of possible
research directions.
|
[
{
"created": "Sun, 7 Feb 2021 17:43:51 GMT",
"version": "v1"
}
] |
2023-09-27
|
[
[
"Saveriano",
"Matteo",
""
],
[
"Abu-Dakka",
"Fares J.",
""
],
[
"Kramberger",
"Aljaz",
""
],
[
"Peternel",
"Luka",
""
]
] |
Biological systems, including human beings, have the innate ability to perform complex tasks in a versatile and agile manner. Researchers in sensorimotor control have tried to understand and formally define this innate property. The idea, supported by several experimental findings, that biological systems are able to combine and adapt basic units of motion into complex tasks finally led to the formulation of the motor primitives theory. In this respect, Dynamic Movement Primitives (DMPs) represent an elegant mathematical formulation of the motor primitives as stable dynamical systems, and are well suited to generate motor commands for artificial systems like robots. In the last decades, DMPs have inspired researchers in different robotic fields including imitation and reinforcement learning, optimal control, physical interaction, and human-robot co-working, resulting in a considerable amount of published papers. The goal of this tutorial survey is two-fold. On one side, we present the existing DMP formulations in rigorous mathematical terms, and discuss advantages and limitations of each approach as well as practical implementation details. In the tutorial vein, we also search for existing implementations of presented approaches and release several others. On the other side, we provide a systematic and comprehensive review of existing literature and categorize state-of-the-art work on DMPs. The paper concludes with a discussion on the limitations of DMPs and an outline of possible research directions.
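A minimal rollout of the standard DMP formulation (a critically damped spring-damper pulled to the goal, modulated by a phase-driven forcing term) looks like the following sketch; the gains, time step, and the zero forcing term are illustrative choices rather than values from the survey.

```python
# One-dimensional discrete DMP:
#   tau * z' = alpha_z * (beta_z * (g - y) - z) + f(s)
#   tau * y' = z
#   tau * s' = -alpha_s * s        (canonical phase system)
alpha_z, beta_z, alpha_s, tau = 25.0, 25.0 / 4.0, 3.0, 1.0
y0, g = 0.0, 1.0
dt, T = 0.002, 2.0

y, z, s = y0, 0.0, 1.0                  # position, scaled velocity, phase
for _ in range(int(T / dt)):
    f = 0.0                             # learned forcing term; zero = pure point attractor
    dz = (alpha_z * (beta_z * (g - y) - z) + f) / tau
    dy = z / tau
    ds = -alpha_s * s / tau
    z, y, s = z + dz * dt, y + dy * dt, s + ds * dt

print(round(y, 3))                      # the trajectory settles at the goal g
```

With a learned forcing term (a weighted sum of basis functions of the phase `s`), the same attractor reproduces an arbitrary demonstrated shape while keeping the goal-convergence guarantee.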
|
2110.07719
|
Eric Wong
|
Hadi Salman, Saachi Jain, Eric Wong, Aleksander M\k{a}dry
|
Certified Patch Robustness via Smoothed Vision Transformers
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Certified patch defenses can guarantee robustness of an image classifier to
arbitrary changes within a bounded contiguous region. But, currently, this
robustness comes at a cost of degraded standard accuracies and slower inference
times. We demonstrate how using vision transformers enables significantly
better certified patch robustness that is also more computationally efficient
and does not incur a substantial drop in standard accuracy. These improvements
stem from the inherent ability of the vision transformer to gracefully handle
largely masked images. Our code is available at
https://github.com/MadryLab/smoothed-vit.
|
[
{
"created": "Mon, 11 Oct 2021 17:44:05 GMT",
"version": "v1"
}
] |
2021-10-18
|
[
[
"Salman",
"Hadi",
""
],
[
"Jain",
"Saachi",
""
],
[
"Wong",
"Eric",
""
],
[
"Mądry",
"Aleksander",
""
]
] |
Certified patch defenses can guarantee robustness of an image classifier to arbitrary changes within a bounded contiguous region. But, currently, this robustness comes at a cost of degraded standard accuracies and slower inference times. We demonstrate how using vision transformers enables significantly better certified patch robustness that is also more computationally efficient and does not incur a substantial drop in standard accuracy. These improvements stem from the inherent ability of the vision transformer to gracefully handle largely masked images. Our code is available at https://github.com/MadryLab/smoothed-vit.
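The certification logic behind smoothing-based patch defenses can be sketched independently of the vision transformer: classify many column ablations of the image, take the majority vote, and certify it when the vote margin exceeds twice the number of ablations any single patch could corrupt. The stand-in base classifier and all sizes below are illustrative assumptions, not values from the paper.

```python
import numpy as np

img_w, band_w, patch_w = 224, 19, 32
n_ablations = img_w                  # one column band per starting position
delta = patch_w + band_w - 1         # ablations a patch of this width can touch

def base_classifier(start):
    # Stand-in for the ViT evaluated on one column ablation; here class 0
    # simply wins on most bands.
    return 0 if start < 200 else 1

votes = np.bincount([base_classifier(s) for s in range(n_ablations)],
                    minlength=2)
top, second = np.sort(votes)[::-1][:2]
certified = bool(top - second > 2 * delta)   # no patch can flip the majority
print(votes.tolist(), certified)
```

The abstract's efficiency point enters through `base_classifier`: a ViT can drop the fully masked tokens of each band, so the many ablation passes stay cheap.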
|
2312.13240
|
Amit Rozner
|
Amit Rozner, Barak Battash, Ofir Lindenbaum, Lior Wolf
|
Efficient Verification-Based Face Identification
|
10 pages, 5 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We study the problem of performing face verification with an efficient neural
model $f$. The efficiency of $f$ stems from simplifying the face verification
problem from an embedding nearest neighbor search into a binary problem; each
user has its own neural network $f$. To allow information sharing between
different individuals in the training set, we do not train $f$ directly but
instead generate the model weights using a hypernetwork $h$. This leads to the
generation of a compact personalized model for face identification that can be
deployed on edge devices. Key to the method's success is a novel way of
generating hard negatives and carefully scheduling the training objectives. Our
model leads to a substantially smaller $f$, requiring only 23k parameters and 5M
floating point operations (FLOPS). We use six face verification datasets to
demonstrate that our method is on par or better than state-of-the-art models,
with a significantly reduced number of parameters and computational burden.
Furthermore, we perform an extensive ablation study to demonstrate the
importance of each element in our method.
|
[
{
"created": "Wed, 20 Dec 2023 18:08:02 GMT",
"version": "v1"
},
{
"created": "Sat, 25 May 2024 17:57:41 GMT",
"version": "v2"
}
] |
2024-05-28
|
[
[
"Rozner",
"Amit",
""
],
[
"Battash",
"Barak",
""
],
[
"Lindenbaum",
"Ofir",
""
],
[
"Wolf",
"Lior",
""
]
] |
We study the problem of performing face verification with an efficient neural model $f$. The efficiency of $f$ stems from simplifying the face verification problem from an embedding nearest neighbor search into a binary problem; each user has its own neural network $f$. To allow information sharing between different individuals in the training set, we do not train $f$ directly but instead generate the model weights using a hypernetwork $h$. This leads to the generation of a compact personalized model for face identification that can be deployed on edge devices. Key to the method's success is a novel way of generating hard negatives and carefully scheduling the training objectives. Our model leads to a substantially smaller $f$, requiring only 23k parameters and 5M floating point operations (FLOPS). We use six face verification datasets to demonstrate that our method is on par or better than state-of-the-art models, with a significantly reduced number of parameters and computational burden. Furthermore, we perform an extensive ablation study to demonstrate the importance of each element in our method.
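The core mechanism (a shared hypernetwork $h$ emitting the weights of a per-user verifier $f$) can be sketched as follows; the linear hypernetwork, the tiny dimensions, and the logistic verifier are illustrative assumptions, far smaller and simpler than the paper's models.

```python
import numpy as np

rng = np.random.default_rng(1)
d_user, d_face = 8, 8                      # embedding sizes (assumptions)

# Shared hypernetwork h: here a single linear map from a user's enrollment
# embedding to the weights (w, b) of that user's tiny logistic verifier f.
H = rng.normal(scale=0.1, size=(d_user, d_face + 1))

def make_verifier(user_embedding):
    theta = user_embedding @ H             # per-user weights: generated, not trained
    w, b = theta[:-1], theta[-1]
    return lambda face: 1.0 / (1.0 + np.exp(-(face @ w + b)))   # P(same user)

user = rng.normal(size=d_user)
f_user = make_verifier(user)               # compact binary model, deployable on-device
p = f_user(rng.normal(size=d_face))
print(0.0 < p < 1.0)
```

Training the shared `H` across all enrolled users is what lets information flow between individuals even though each deployed `f_user` is a standalone binary classifier.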
|
1409.4097
|
Tryphon Georgiou
|
Lipeng Ning and Tryphon T. Georgiou
|
Metrics for matrix-valued measures via test functions
| null | null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is perhaps not widely recognized that certain common notions of distance
between probability measures have an alternative dual interpretation which
compares corresponding functionals against suitable families of test functions.
This dual viewpoint extends in a straightforward manner to suggest metrics
between matrix-valued measures. Our main interest has been in developing
weakly-continuous metrics that are suitable for comparing matrix-valued power
spectral density functions. To this end, and following the suggested recipe of
utilizing suitable families of test functions, we develop a weakly-continuous
metric that is analogous to the Wasserstein metric and applies to matrix-valued
densities. We use a numerical example to compare this metric to certain
standard alternatives including a different version of a matricial Wasserstein
metric developed earlier.
|
[
{
"created": "Sun, 14 Sep 2014 20:19:41 GMT",
"version": "v1"
}
] |
2014-09-16
|
[
[
"Ning",
"Lipeng",
""
],
[
"Georgiou",
"Tryphon T.",
""
]
] |
It is perhaps not widely recognized that certain common notions of distance between probability measures have an alternative dual interpretation which compares corresponding functionals against suitable families of test functions. This dual viewpoint extends in a straightforward manner to suggest metrics between matrix-valued measures. Our main interest has been in developing weakly-continuous metrics that are suitable for comparing matrix-valued power spectral density functions. To this end, and following the suggested recipe of utilizing suitable families of test functions, we develop a weakly-continuous metric that is analogous to the Wasserstein metric and applies to matrix-valued densities. We use a numerical example to compare this metric to certain standard alternatives including a different version of a matricial Wasserstein metric developed earlier.
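The "test function" duality the paper builds on can be illustrated for scalar measures on a 1-D grid, where the dual value $\sup_{f\ 1\text{-Lipschitz}} \int f\,d\mu - \int f\,d\nu$ coincides with the $L_1$ distance between the CDFs. The grid and the two densities below are illustrative stand-ins (the paper's construction targets matrix-valued spectral densities).

```python
import numpy as np

# Two scalar densities on a 1-D grid (stand-ins for power spectra).
x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
mu = np.exp(-(x - 0.3) ** 2 / 0.01); mu /= mu.sum()
nu = np.exp(-(x - 0.6) ** 2 / 0.01); nu /= nu.sum()

# Primal/CDF form of the Wasserstein-1 distance.
w1 = np.abs(np.cumsum(mu - nu)).sum() * dx

# Dual ("test function") form: any 1-Lipschitz f gives the lower bound
# E_mu[f] - E_nu[f]; for these ordered measures f(x) = x is optimal,
# so the bound is tight.
dual_bound = abs(float(x @ (mu - nu)))

print(round(w1, 3), round(dual_bound, 3))
```

Choosing a different family of admissible test functions is exactly the recipe the paper follows to obtain weakly-continuous metrics in the matrix-valued case.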
|
2102.10287
|
Siwen Luo
|
Siwen Luo, Mengting Wu, Yiwen Gong, Wanying Zhou, Josiah Poon
|
Deep Structured Feature Networks for Table Detection and Tabular Data
Extraction from Scanned Financial Document Images
|
Works need further review
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic table detection in PDF documents has achieved great success, but
tabular data extraction is still challenging due to integrity and noise
issues in detected table areas. Accurate data extraction is extremely
crucial in the finance area. Inspired by this, the aim of this research is
to propose automated table detection and tabular data extraction from
financial PDF documents. We propose a method that consists of three main
processes, which are detecting table areas with a Faster R-CNN (Region-based
Convolutional Neural Network) model with Feature Pyramid Network (FPN) on each
page image, extracting contents and structures by a compounded layout
segmentation technique based on optical character recognition (OCR) and
formulating regular expression rules for table header separation. The tabular
data extraction feature is embedded with rule-based filtering and restructuring
functions that are highly scalable. We annotate a new Financial Documents
dataset with table regions for the experiment. The detection model achieves
excellent table detection performance on our customized dataset. The
main contributions of this paper are proposing the Financial Documents dataset
with table-area annotations, the superior detection model and the rule-based
layout segmentation technique for the tabular data extraction from PDF files.
|
[
{
"created": "Sat, 20 Feb 2021 08:21:17 GMT",
"version": "v1"
},
{
"created": "Mon, 23 May 2022 02:37:47 GMT",
"version": "v2"
}
] |
2022-05-24
|
[
[
"Luo",
"Siwen",
""
],
[
"Wu",
"Mengting",
""
],
[
"Gong",
"Yiwen",
""
],
[
"Zhou",
"Wanying",
""
],
[
"Poon",
"Josiah",
""
]
] |
Automatic table detection in PDF documents has achieved great success, but tabular data extraction is still challenging due to integrity and noise issues in detected table areas. Accurate data extraction is extremely crucial in the finance area. Inspired by this, the aim of this research is to propose automated table detection and tabular data extraction from financial PDF documents. We propose a method that consists of three main processes, which are detecting table areas with a Faster R-CNN (Region-based Convolutional Neural Network) model with Feature Pyramid Network (FPN) on each page image, extracting contents and structures by a compounded layout segmentation technique based on optical character recognition (OCR) and formulating regular expression rules for table header separation. The tabular data extraction feature is embedded with rule-based filtering and restructuring functions that are highly scalable. We annotate a new Financial Documents dataset with table regions for the experiment. The detection model achieves excellent table detection performance on our customized dataset. The main contributions of this paper are proposing the Financial Documents dataset with table-area annotations, the superior detection model and the rule-based layout segmentation technique for the tabular data extraction from PDF files.
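The regular-expression header-separation step can be illustrated with a single hypothetical rule; the pattern and the sample string below are assumptions in the spirit of the method, not rules taken from the paper.

```python
import re

# Hypothetical rule: split a flattened header cell into a text label and
# its trailing year columns.
header_re = re.compile(r"^(?P<label>.+?)\s+(?P<years>(?:(?:19|20)\d{2}\s*)+)$")

m = header_re.match("Net revenue 2020 2021")
label = m.group("label")            # text part of the header
years = m.group("years").split()    # one column per year
print(label, years)
```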
|
1802.09653
|
Lujo Bauer
|
Mahmood Sharif, Lujo Bauer, and Michael K. Reiter
|
On the Suitability of $L_p$-norms for Creating and Preventing
Adversarial Examples
|
Appeared in CV-COPS/CVPRW 2018
| null | null | null |
cs.CR cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Much research effort has been devoted to better understanding adversarial
examples, which are specially crafted inputs to machine-learning models that
are perceptually similar to benign inputs, but are classified differently
(i.e., misclassified). Both algorithms that create adversarial examples and
strategies for defending against them typically use $L_p$-norms to measure the
perceptual similarity between an adversarial input and its benign original.
Prior work has already shown, however, that two images need not be close to
each other as measured by an $L_p$-norm to be perceptually similar. In this
work, we show that nearness according to an $L_p$-norm is not just unnecessary
for perceptual similarity, but is also insufficient. Specifically, focusing on
datasets (CIFAR10 and MNIST), $L_p$-norms, and thresholds used in prior work,
we show through online user studies that "adversarial examples" that are closer
to their benign counterparts than required by commonly used $L_p$-norm
thresholds can nevertheless be perceptually different to humans from the
corresponding benign examples. Namely, the perceptual distance between two
images that are "near" each other according to an $L_p$-norm can be high enough
that participants frequently classify the two images as representing different
objects or digits. Combined with prior work, we thus demonstrate that nearness
of inputs as measured by $L_p$-norms is neither necessary nor sufficient for
perceptual similarity, which has implications for both creating and defending
against adversarial examples. We propose and discuss alternative similarity
metrics to stimulate future research in the area.
|
[
{
"created": "Tue, 27 Feb 2018 00:04:12 GMT",
"version": "v1"
},
{
"created": "Sat, 14 Jul 2018 03:54:59 GMT",
"version": "v2"
},
{
"created": "Fri, 27 Jul 2018 13:48:36 GMT",
"version": "v3"
}
] |
2018-07-30
|
[
[
"Sharif",
"Mahmood",
""
],
[
"Bauer",
"Lujo",
""
],
[
"Reiter",
"Michael K.",
""
]
] |
Much research effort has been devoted to better understanding adversarial examples, which are specially crafted inputs to machine-learning models that are perceptually similar to benign inputs, but are classified differently (i.e., misclassified). Both algorithms that create adversarial examples and strategies for defending against them typically use $L_p$-norms to measure the perceptual similarity between an adversarial input and its benign original. Prior work has already shown, however, that two images need not be close to each other as measured by an $L_p$-norm to be perceptually similar. In this work, we show that nearness according to an $L_p$-norm is not just unnecessary for perceptual similarity, but is also insufficient. Specifically, focusing on datasets (CIFAR10 and MNIST), $L_p$-norms, and thresholds used in prior work, we show through online user studies that "adversarial examples" that are closer to their benign counterparts than required by commonly used $L_p$-norm thresholds can nevertheless be perceptually different to humans from the corresponding benign examples. Namely, the perceptual distance between two images that are "near" each other according to an $L_p$-norm can be high enough that participants frequently classify the two images as representing different objects or digits. Combined with prior work, we thus demonstrate that nearness of inputs as measured by $L_p$-norms is neither necessary nor sufficient for perceptual similarity, which has implications for both creating and defending against adversarial examples. We propose and discuss alternative similarity metrics to stimulate future research in the area.
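The abstract's point that an $L_p$-ball can contain perceptually conspicuous changes is easy to reproduce numerically: a perturbation concentrated in a small patch stays within a plausible $L_2$ budget. The budget, image size, and patch below are illustrative assumptions, not the thresholds studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((28, 28))            # a hypothetical grayscale image in [0, 1]

adv = img.copy()
adv[10:14, 10:14] += 0.3              # concentrated, clearly visible 4x4 patch
adv = np.clip(adv, 0.0, 1.0)

l2 = float(np.linalg.norm((adv - img).ravel()))
linf = float(np.abs(adv - img).max())
print(l2 < 2.0, round(linf, 2))       # inside a modest L2 budget, yet localized
```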
|
2109.12045
|
Dimitris Panagopoulos
|
Dimitris Panagopoulos, Giannis Petousakis, Rustam Stolkin, Grigoris
Nikolaou, Manolis Chiou
|
A Bayesian-Based Approach to Human Operator Intent Recognition in Remote
Mobile Robot Navigation
|
7 pages, 3 figures, 2 Tables, IEEE International Conference SMC 2021
| null | null | null |
cs.RO math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper addresses the problem of human operator intent recognition during
teleoperated robot navigation. In this context, recognition of the operator's
intended navigational goal could enable an artificial intelligence (AI) agent
to assist the operator in an advanced human-robot interaction framework. We
propose a Bayesian Operator Intent Recognition (BOIR) probabilistic method that
utilizes: (i) an observation model that fuses information as a weighted
combination of multiple observation sources providing geometric information;
(ii) a transition model that indicates the evolution of the state; and (iii) an
action model, the Active Intent Recognition Model (AIRM), that enables the
operator to communicate their explicit intent asynchronously. The proposed
method is evaluated in an experiment where operators controlling a remote
mobile robot are tasked with navigation and exploration under various scenarios
with different map and obstacle layouts. Results demonstrate that BOIR
outperforms two related methods from the literature in terms of accuracy and
uncertainty of the intent recognition.
|
[
{
"created": "Fri, 24 Sep 2021 16:12:13 GMT",
"version": "v1"
}
] |
2021-09-27
|
[
[
"Panagopoulos",
"Dimitris",
""
],
[
"Petousakis",
"Giannis",
""
],
[
"Stolkin",
"Rustam",
""
],
[
"Nikolaou",
"Grigoris",
""
],
[
"Chiou",
"Manolis",
""
]
] |
This paper addresses the problem of human operator intent recognition during teleoperated robot navigation. In this context, recognition of the operator's intended navigational goal could enable an artificial intelligence (AI) agent to assist the operator in an advanced human-robot interaction framework. We propose a Bayesian Operator Intent Recognition (BOIR) probabilistic method that utilizes: (i) an observation model that fuses information as a weighted combination of multiple observation sources providing geometric information; (ii) a transition model that indicates the evolution of the state; and (iii) an action model, the Active Intent Recognition Model (AIRM), that enables the operator to communicate their explicit intent asynchronously. The proposed method is evaluated in an experiment where operators controlling a remote mobile robot are tasked with navigation and exploration under various scenarios with different map and obstacle layouts. Results demonstrate that BOIR outperforms two related methods from the literature in terms of accuracy and uncertainty of the intent recognition.
|
2209.08874
|
Kengo Hashimoto
|
Kengo Hashimoto and Ken-ichi Iwata
|
Optimality of Huffman Code in the Class of 1-bit Delay Decodable Codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For a given independent and identically distributed (i.i.d.) source, the
Huffman code achieves the optimal average codeword length in the class of
instantaneous codes with a single code table. However, it is known that there
exist time-variant encoders, which achieve a shorter average codeword length
than the Huffman code, using multiple code tables and allowing at most k-bit
decoding delay for k = 2, 3, 4, .... On the other hand, it is not known whether
there exists a 1-bit delay decodable code, which achieves a shorter average
length than the Huffman code. This paper proves that for a given i.i.d. source,
a Huffman code achieves the optimal average codeword length in the class of
1-bit delay decodable codes with a finite number of code tables.
|
[
{
"created": "Mon, 19 Sep 2022 09:29:03 GMT",
"version": "v1"
}
] |
2022-09-20
|
[
[
"Hashimoto",
"Kengo",
""
],
[
"Iwata",
"Ken-ichi",
""
]
] |
For a given independent and identically distributed (i.i.d.) source, the Huffman code achieves the optimal average codeword length in the class of instantaneous codes with a single code table. However, it is known that there exist time-variant encoders, which achieve a shorter average codeword length than the Huffman code, using multiple code tables and allowing at most k-bit decoding delay for k = 2, 3, 4, .... On the other hand, it is not known whether there exists a 1-bit delay decodable code, which achieves a shorter average length than the Huffman code. This paper proves that for a given i.i.d. source, a Huffman code achieves the optimal average codeword length in the class of 1-bit delay decodable codes with a finite number of code tables.
|
1905.08740
|
Konrad Simon Ph.D.
|
Konrad Simon and J\"orn Behrens
|
Semi-Lagrangian Subgrid Reconstruction for Advection-Dominant Multiscale
Problems
| null | null | null | null |
cs.CE cs.NA math.NA physics.ao-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a new framework of numerical multiscale methods for
advection-dominated problems motivated by climate sciences. Current numerical
multiscale methods (MsFEM) work well on stationary elliptic problems but have
difficulties when the model involves dominant lower order terms. Our idea to
overcome the associated difficulties is a semi-Lagrangian based reconstruction
of subgrid variability into a multiscale basis by solving many local inverse
problems. Globally the method looks like an Eulerian method with a multiscale
stabilized basis. We show example runs in one and two dimensions and a
comparison to standard methods to support our ideas and discuss possible
extensions to other types of Galerkin methods, higher dimensions and nonlinear
problems.
|
[
{
"created": "Tue, 21 May 2019 16:36:34 GMT",
"version": "v1"
},
{
"created": "Fri, 24 May 2019 18:25:49 GMT",
"version": "v2"
}
] |
2019-05-28
|
[
[
"Simon",
"Konrad",
""
],
[
"Behrens",
"Jörn",
""
]
] |
We introduce a new framework of numerical multiscale methods for advection-dominated problems motivated by climate sciences. Current numerical multiscale methods (MsFEM) work well on stationary elliptic problems but have difficulties when the model involves dominant lower order terms. Our idea to overcome the associated difficulties is a semi-Lagrangian based reconstruction of subgrid variability into a multiscale basis by solving many local inverse problems. Globally the method looks like an Eulerian method with a multiscale stabilized basis. We show example runs in one and two dimensions and a comparison to standard methods to support our ideas and discuss possible extensions to other types of Galerkin methods, higher dimensions and nonlinear problems.
|
1907.10178
|
Luca Anthony Thiede
|
Luca Anthony Thiede and Pratik Prabhanjan Brahma
|
Analyzing the Variety Loss in the Context of Probabilistic Trajectory
Prediction
|
Accepted for publication at ICCV 2019
| null | null | null |
cs.LG cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Trajectory or behavior prediction of traffic agents is an important component
of autonomous driving and robot planning in general. It can be framed as a
probabilistic future sequence generation problem and recent literature has
studied the applicability of generative models in this context. The variety or
Minimum over N (MoN) loss, which tries to minimize the error between the ground
truth and the closest of N output predictions, has been used in these recent
learning models to improve the diversity of predictions. In this work, we
present a proof to show that the MoN loss does not lead to the ground truth
probability density function, but approximately to its square root instead. We
validate this finding with extensive experiments on both simulated toy and
real-world datasets. We also propose multiple solutions to compensate for the
dilation to show improvement of log likelihood of the ground truth samples in
the corrected probability density function.
|
[
{
"created": "Tue, 23 Jul 2019 23:56:02 GMT",
"version": "v1"
}
] |
2019-07-25
|
[
[
"Thiede",
"Luca Anthony",
""
],
[
"Brahma",
"Pratik Prabhanjan",
""
]
] |
Trajectory or behavior prediction of traffic agents is an important component of autonomous driving and robot planning in general. It can be framed as a probabilistic future sequence generation problem and recent literature has studied the applicability of generative models in this context. The variety or Minimum over N (MoN) loss, which tries to minimize the error between the ground truth and the closest of N output predictions, has been used in these recent learning models to improve the diversity of predictions. In this work, we present a proof to show that the MoN loss does not lead to the ground truth probability density function, but approximately to its square root instead. We validate this finding with extensive experiments on both simulated toy and real-world datasets. We also propose multiple solutions to compensate for the dilation to show improvement of log likelihood of the ground truth samples in the corrected probability density function.
|
2306.05036
|
Jonas Oppenlaender
|
Jonas Oppenlaender, Joonas H\"am\"al\"ainen
|
Mapping the Challenges of HCI: An Application and Evaluation of ChatGPT
and GPT-4 for Mining Insights at Scale
|
42 pages, 5 figures, 4 tables
| null | null | null |
cs.HC cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs), such as ChatGPT and GPT-4, are gaining
widespread real-world use. Yet, these LLMs are closed source, and little is
known about their performance in real-world use cases. In this paper, we apply
and evaluate the combination of ChatGPT and GPT-4 for the real-world task of
mining insights from a text corpus in order to identify research challenges in
the field of HCI. We extract 4,392 research challenges in over 100 topics from
the 2023 CHI conference proceedings and visualize the research challenges for
interactive exploration. We critically evaluate the LLMs on this practical task
and conclude that the combination of ChatGPT and GPT-4 makes an excellent
cost-efficient means for analyzing a text corpus at scale. Cost-efficiency is
key for flexibly prototyping research ideas and analyzing text corpora from
different perspectives, with implications for applying LLMs for mining insights
in academia and practice.
|
[
{
"created": "Thu, 8 Jun 2023 08:41:30 GMT",
"version": "v1"
},
{
"created": "Sat, 7 Oct 2023 14:56:40 GMT",
"version": "v2"
},
{
"created": "Tue, 12 Dec 2023 11:57:22 GMT",
"version": "v3"
},
{
"created": "Thu, 4 Jul 2024 11:47:23 GMT",
"version": "v4"
}
] |
2024-07-08
|
[
[
"Oppenlaender",
"Jonas",
""
],
[
"Hämäläinen",
"Joonas",
""
]
] |
Large language models (LLMs), such as ChatGPT and GPT-4, are gaining widespread real-world use. Yet, these LLMs are closed source, and little is known about their performance in real-world use cases. In this paper, we apply and evaluate the combination of ChatGPT and GPT-4 for the real-world task of mining insights from a text corpus in order to identify research challenges in the field of HCI. We extract 4,392 research challenges in over 100 topics from the 2023 CHI conference proceedings and visualize the research challenges for interactive exploration. We critically evaluate the LLMs on this practical task and conclude that the combination of ChatGPT and GPT-4 makes an excellent cost-efficient means for analyzing a text corpus at scale. Cost-efficiency is key for flexibly prototyping research ideas and analyzing text corpora from different perspectives, with implications for applying LLMs for mining insights in academia and practice.
|
1007.3250
|
Laurent Hubert
|
Elvira Albert, Miguel G\'omez-Zamalloa, Laurent Hubert, German Puebla
|
Verification of Java Bytecode using Analysis and Transformation of Logic
Programs
| null |
The International Symposium on Practical Aspects of Declarative
Languages 4354 (2007) 124-139
|
10.1007/978-3-540-69611-7_8
| null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
State-of-the-art analyzers in the Logic Programming (LP) paradigm are nowadays
mature and sophisticated. They allow inferring a wide variety of global
properties including termination, bounds on resource consumption, etc. The aim
of this work is to automatically transfer the power of such analysis tools for
LP to the analysis and verification of Java bytecode (JVML). In order to
achieve our goal, we rely on well-known techniques for meta-programming and
program specialization. More precisely, we propose to partially evaluate a JVML
interpreter implemented in LP together with (an LP representation of) a JVML
program and then analyze the residual program. Interestingly, at least for the
examples we have studied, our approach produces very simple LP representations
of the original JVML programs. This can be seen as a decompilation from JVML to
high-level LP source. By reasoning about such residual programs, we can
automatically prove in the CiaoPP system some non-trivial properties of JVML
programs such as termination and run-time error freeness, and infer bounds on
their resource consumption. We are not aware of any other system which is able
to verify such advanced properties of Java bytecode.
|
[
{
"created": "Mon, 19 Jul 2010 19:46:43 GMT",
"version": "v1"
}
] |
2010-11-22
|
[
[
"Albert",
"Elvira",
""
],
[
"Gómez-Zamalloa",
"Miguel",
""
],
[
"Hubert",
"Laurent",
""
],
[
"Puebla",
"German",
""
]
] |
State-of-the-art analyzers in the Logic Programming (LP) paradigm are nowadays mature and sophisticated. They allow inferring a wide variety of global properties including termination, bounds on resource consumption, etc. The aim of this work is to automatically transfer the power of such analysis tools for LP to the analysis and verification of Java bytecode (JVML). In order to achieve our goal, we rely on well-known techniques for meta-programming and program specialization. More precisely, we propose to partially evaluate a JVML interpreter implemented in LP together with (an LP representation of) a JVML program and then analyze the residual program. Interestingly, at least for the examples we have studied, our approach produces very simple LP representations of the original JVML programs. This can be seen as a decompilation from JVML to high-level LP source. By reasoning about such residual programs, we can automatically prove in the CiaoPP system some non-trivial properties of JVML programs such as termination and run-time error freeness, and infer bounds on their resource consumption. We are not aware of any other system which is able to verify such advanced properties of Java bytecode.
|
cs/9902002
|
Kuang-hua Chen
|
Kuang-hua Chen
|
Automatic Identification of Subjects for Textual Documents in Digital
Libraries
|
7 pages, 6 tables
| null | null | null |
cs.DL cs.CL
| null |
The number of electronic documents on the Internet is growing very quickly.
How to effectively identify subjects for documents has therefore become an
important issue. In the past, research focused on the behavior of nouns in
documents. Although subjects are composed of nouns, nouns are not the only
constituents that determine which nouns are subjects. Based on the assumption
that texts are well-organized and event-driven, nouns and verbs together
contribute to the process of subject identification. This paper considers four
factors: 1) word importance, 2) word frequency, 3) word co-occurrence, and 4)
word distance, and proposes a model to identify subjects for textual documents.
Preliminary experiments show that the performance of the proposed model is
close to that of human beings.
|
[
{
"created": "Mon, 1 Feb 1999 11:01:23 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Chen",
"Kuang-hua",
""
]
] |
The number of electronic documents on the Internet is growing very quickly. How to effectively identify subjects for documents has therefore become an important issue. In the past, research focused on the behavior of nouns in documents. Although subjects are composed of nouns, nouns are not the only constituents that determine which nouns are subjects. Based on the assumption that texts are well-organized and event-driven, nouns and verbs together contribute to the process of subject identification. This paper considers four factors: 1) word importance, 2) word frequency, 3) word co-occurrence, and 4) word distance, and proposes a model to identify subjects for textual documents. Preliminary experiments show that the performance of the proposed model is close to that of human beings.
|