| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2309.04975 | Eduardo Noboro Tominaga | Eduardo Noboro Tominaga, Hsuan-Jung Su, Jinfeng Du, Sivarama
Venkatesan, Richard Demo Souza and Hirley Alves | Trade-Off Between Beamforming and Macro-Diversity Gains in Distributed
mMIMO | 6 pages, 3 figures. Manuscript submitted to the IEEE Wireless
Communications and Networking Conference (WCNC) 2024, Dubai, United Arab
Emirates | null | null | null | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Industry and academia have been working towards the evolution from
Centralized massive Multiple-Input Multiple-Output (CmMIMO) to Distributed
mMIMO (DmMIMO) architectures. Instead of splitting a coverage area into many
cells, each served by a single Base Station equipped with several antennas, the
whole coverage area is jointly covered by several Access Points (AP) equipped
with few or single antennas. Nevertheless, when choosing between deploying more
APs with few or single antennas or fewer APs equipped with many antennas, one
observes an inherent trade-off between the beamforming and macro-diversity
gains that has not been investigated in the literature. Given a total number of
antenna elements and total downlink power, under a channel model that takes
into account a probability of Line-of-Sight (LoS) as a function of the distance
between the User Equipments (UEs) and APs, our numerical results show that
there exists a ``sweet spot'' on the optimal number of APs and of antenna
elements per AP which is a function of the physical dimensions of the coverage
area.
| [
{
"created": "Sun, 10 Sep 2023 09:39:16 GMT",
"version": "v1"
}
] | 2023-09-12 | [
[
"Tominaga",
"Eduardo Noboro",
""
],
[
"Su",
"Hsuan-Jung",
""
],
[
"Du",
"Jinfeng",
""
],
[
"Venkatesan",
"Sivarama",
""
],
[
"Souza",
"Richard Demo",
""
],
[
"Alves",
"Hirley",
""
]
] | Industry and academia have been working towards the evolution from Centralized massive Multiple-Input Multiple-Output (CmMIMO) to Distributed mMIMO (DmMIMO) architectures. Instead of splitting a coverage area into many cells, each served by a single Base Station equipped with several antennas, the whole coverage area is jointly covered by several Access Points (AP) equipped with few or single antennas. Nevertheless, when choosing between deploying more APs with few or single antennas or fewer APs equipped with many antennas, one observes an inherent trade-off between the beamforming and macro-diversity gains that has not been investigated in the literature. Given a total number of antenna elements and total downlink power, under a channel model that takes into account a probability of Line-of-Sight (LoS) as a function of the distance between the User Equipments (UEs) and APs, our numerical results show that there exists a ``sweet spot'' on the optimal number of APs and of antenna elements per AP which is a function of the physical dimensions of the coverage area. |
2403.05543 | Pilar Aparicio-Mart\'inez Dr. | Olga Maria Luque Alcaraz, Pilar Aparicio-Mart\'inez, Antonio Gomera,
Manuel Vaquero-Abell\'an | Nurses as agents for achieving Environmentally Sustainable Health
Systems: A bibliometric analysis | 9 pages, 4 figures, 2 tables | null | 10.1111/jonm.13798 | null | cs.CY cs.DL cs.SI | http://creativecommons.org/licenses/by/4.0/ | Objective: To analyze the current scientific knowledge and research lines
focused on environmentally sustainable health systems, including the role of
nurses. Background: There seem to be differences between creating interventions
focused on environmentally sustainable health systems, including nurses, and
the scarcity of research on this topic, framed on the Sustainable Development
Goals. Methods: A bibliometric analysis was carried out, via three databases
(Web of Science, Scopus, and Pubmed), and the guideline recommendations were
followed to select bibliometric data. Results: The search resulted in 159
publications, significantly increasing the trends from 2017 to 2021 (p=0.028).
The most relevant countries in this area were the United States of America, the
United Kingdom, and Sweden. Also, the top articles were from relevant journals,
indexed in Journal Citation Report, and the first and the second quartile
linked to the nursing field and citations (p<0.001). Conclusion: Education is
key to achieving environmentally sustainable health systems via institutions
and policies. Implications for nursing management: There is a lack of
experimental data and policies on achieving or maintaining environmentally
sustainable health care systems, indicating that nurses have an important role
and should be consulted and included in decision-making policies regarding
sustainability in the healthcare systems.
| [
{
"created": "Mon, 5 Feb 2024 12:14:04 GMT",
"version": "v1"
}
] | 2024-03-12 | [
[
"Alcaraz",
"Olga Maria Luque",
""
],
[
"Aparicio-Martínez",
"Pilar",
""
],
[
"Gomera",
"Antonio",
""
],
[
"Vaquero-Abellán",
"Manuel",
""
]
] | Objective: To analyze the current scientific knowledge and research lines focused on environmentally sustainable health systems, including the role of nurses. Background: There seem to be differences between creating interventions focused on environmentally sustainable health systems, including nurses, and the scarcity of research on this topic, framed on the Sustainable Development Goals. Methods: A bibliometric analysis was carried out, via three databases (Web of Science, Scopus, and Pubmed), and the guideline recommendations were followed to select bibliometric data. Results: The search resulted in 159 publications, significantly increasing the trends from 2017 to 2021 (p=0.028). The most relevant countries in this area were the United States of America, the United Kingdom, and Sweden. Also, the top articles were from relevant journals, indexed in Journal Citation Report, and the first and the second quartile linked to the nursing field and citations (p<0.001). Conclusion: Education is key to achieving environmentally sustainable health systems via institutions and policies. Implications for nursing management: There is a lack of experimental data and policies on achieving or maintaining environmentally sustainable health care systems, indicating that nurses have an important role and should be consulted and included in decision-making policies regarding sustainability in the healthcare systems. |
1503.06375 | Bharath Sankaran | Bharath Sankaran, Jeannette Bohg, Nathan Ratliff and Stefan Schaal | Policy Learning with Hypothesis based Local Action Selection | RLDM abstract | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For robots to be able to manipulate in unknown and unstructured environments
the robot should be capable of operating under partial observability of the
environment. Object occlusions and unmodeled environments are some of the
factors that result in partial observability. A common scenario where this is
encountered is manipulation in clutter. In the case that the robot needs to
locate an object of interest and manipulate it, it needs to perform a series of
decluttering actions to accurately detect the object of interest. To perform
such a series of actions, the robot also needs to account for the dynamics of
objects in the environment and how they react to contact. This is a non-trivial
problem since one needs to reason not only about robot-object interactions but
also object-object interactions in the presence of contact. In the example
scenario of manipulation in clutter, the state vector would have to account for
the pose of the object of interest and the structure of the surrounding
environment. The process model would have to account for all the aforementioned
robot-object, object-object interactions. The complexity of the process model
grows exponentially as the number of objects in the scene increases. This is
commonly the case in unstructured environments. Hence it is not reasonable to
attempt to model all object-object and robot-object interactions explicitly.
Under this setting we propose a hypothesis based action selection algorithm
where we construct a hypothesis set of the possible poses of an object of
interest given the current evidence in the scene and select actions based on
our current set of hypotheses. This hypothesis set tends to represent the
belief about the structure of the environment and the number of poses the
object of interest can take. The agent's only stopping criterion is when the
uncertainty regarding the pose of the object is fully resolved.
| [
{
"created": "Sun, 22 Mar 2015 02:36:49 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Mar 2015 00:24:43 GMT",
"version": "v2"
},
{
"created": "Fri, 8 May 2015 05:25:47 GMT",
"version": "v3"
}
] | 2015-05-11 | [
[
"Sankaran",
"Bharath",
""
],
[
"Bohg",
"Jeannette",
""
],
[
"Ratliff",
"Nathan",
""
],
[
"Schaal",
"Stefan",
""
]
] | For robots to be able to manipulate in unknown and unstructured environments the robot should be capable of operating under partial observability of the environment. Object occlusions and unmodeled environments are some of the factors that result in partial observability. A common scenario where this is encountered is manipulation in clutter. In the case that the robot needs to locate an object of interest and manipulate it, it needs to perform a series of decluttering actions to accurately detect the object of interest. To perform such a series of actions, the robot also needs to account for the dynamics of objects in the environment and how they react to contact. This is a non-trivial problem since one needs to reason not only about robot-object interactions but also object-object interactions in the presence of contact. In the example scenario of manipulation in clutter, the state vector would have to account for the pose of the object of interest and the structure of the surrounding environment. The process model would have to account for all the aforementioned robot-object, object-object interactions. The complexity of the process model grows exponentially as the number of objects in the scene increases. This is commonly the case in unstructured environments. Hence it is not reasonable to attempt to model all object-object and robot-object interactions explicitly. Under this setting we propose a hypothesis based action selection algorithm where we construct a hypothesis set of the possible poses of an object of interest given the current evidence in the scene and select actions based on our current set of hypotheses. This hypothesis set tends to represent the belief about the structure of the environment and the number of poses the object of interest can take. The agent's only stopping criterion is when the uncertainty regarding the pose of the object is fully resolved. |
2402.03818 | O Duranthon | O. Duranthon, L. Zdeborov\'a | Asymptotic generalization error of a single-layer graph convolutional
network | null | null | null | null | cs.LG cond-mat.dis-nn | http://creativecommons.org/licenses/by-sa/4.0/ | While graph convolutional networks show great practical promise, the
theoretical understanding of their generalization properties as a function of
the number of samples is still in its infancy compared to the more broadly
studied case of supervised fully connected neural networks. In this article, we
predict the performance of a single-layer graph convolutional network (GCN)
trained on data produced by attributed stochastic block models (SBMs) in the
high-dimensional limit. Previously, only ridge regression on contextual-SBM
(CSBM) has been considered in Shi et al. 2022; we generalize the analysis to
arbitrary convex loss and regularization for the CSBM and add the analysis for
another data model, the neural-prior SBM. We also study the high
signal-to-noise ratio limit, detail the convergence rates of the GCN and show
that, while consistent, it does not reach the Bayes-optimal rate for any of the
considered cases.
| [
{
"created": "Tue, 6 Feb 2024 09:07:26 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Mar 2024 15:08:27 GMT",
"version": "v2"
}
] | 2024-03-21 | [
[
"Duranthon",
"O.",
""
],
[
"Zdeborová",
"L.",
""
]
] | While graph convolutional networks show great practical promise, the theoretical understanding of their generalization properties as a function of the number of samples is still in its infancy compared to the more broadly studied case of supervised fully connected neural networks. In this article, we predict the performance of a single-layer graph convolutional network (GCN) trained on data produced by attributed stochastic block models (SBMs) in the high-dimensional limit. Previously, only ridge regression on contextual-SBM (CSBM) has been considered in Shi et al. 2022; we generalize the analysis to arbitrary convex loss and regularization for the CSBM and add the analysis for another data model, the neural-prior SBM. We also study the high signal-to-noise ratio limit, detail the convergence rates of the GCN and show that, while consistent, it does not reach the Bayes-optimal rate for any of the considered cases. |
1908.04924 | Mingyuan Bai | Mingyuan Bai, S.T. Boris Choy, Xin Song, Junbin Gao | Tensor-Train Parameterization for Ultra Dimensionality Reduction | null | null | null | null | cs.LG eess.IV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Locality preserving projections (LPP) are a classical dimensionality
reduction method based on data graph information. However, LPP is still
responsive to extreme outliers. LPP aiming for vectorial data may undermine
data structural information when it is applied to multidimensional data.
Besides, it assumes the dimension of data to be smaller than the number of
instances, which is not suitable for high-dimensional data. For
high-dimensional data analysis, the tensor-train decomposition is proved to be
able to efficiently and effectively capture the spatial relations. Thus, we
propose a tensor-train parameterization for ultra dimensionality reduction
(TTPUDR) in which the traditional LPP mapping is tensorized in terms of
tensor-trains and the LPP objective is replaced with the Frobenius norm to
increase the robustness of the model. The manifold optimization technique is
utilized to solve the new model. The performance of TTPUDR is assessed on
classification problems and TTPUDR significantly outperforms past methods and
several state-of-the-art methods.
| [
{
"created": "Wed, 14 Aug 2019 02:04:34 GMT",
"version": "v1"
}
] | 2019-08-15 | [
[
"Bai",
"Mingyuan",
""
],
[
"Choy",
"S. T. Boris",
""
],
[
"Song",
"Xin",
""
],
[
"Gao",
"Junbin",
""
]
] | Locality preserving projections (LPP) are a classical dimensionality reduction method based on data graph information. However, LPP is still responsive to extreme outliers. LPP aiming for vectorial data may undermine data structural information when it is applied to multidimensional data. Besides, it assumes the dimension of data to be smaller than the number of instances, which is not suitable for high-dimensional data. For high-dimensional data analysis, the tensor-train decomposition is proved to be able to efficiently and effectively capture the spatial relations. Thus, we propose a tensor-train parameterization for ultra dimensionality reduction (TTPUDR) in which the traditional LPP mapping is tensorized in terms of tensor-trains and the LPP objective is replaced with the Frobenius norm to increase the robustness of the model. The manifold optimization technique is utilized to solve the new model. The performance of TTPUDR is assessed on classification problems and TTPUDR significantly outperforms past methods and several state-of-the-art methods. |
2009.05502 | Johannes Knittel | Johannes Knittel, Andres Lalama, Steffen Koch, and Thomas Ertl | Visual Neural Decomposition to Explain Multivariate Data Sets | To appear in IEEE Transactions on Visualization and Computer Graphics
and IEEE VIS 2020 (VAST) | null | null | null | cs.LG cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Investigating relationships between variables in multi-dimensional data sets
is a common task for data analysts and engineers. More specifically, it is
often valuable to understand which ranges of which input variables lead to
particular values of a given target variable. Unfortunately, with an increasing
number of independent variables, this process may become cumbersome and
time-consuming due to the many possible combinations that have to be explored.
In this paper, we propose a novel approach to visualize correlations between
input variables and a target output variable that scales to hundreds of
variables. We developed a visual model based on neural networks that can be
explored in a guided way to help analysts find and understand such
correlations. First, we train a neural network to predict the target from the
input variables. Then, we visualize the inner workings of the resulting model
to help understand relations within the data set. We further introduce a new
regularization term for the backpropagation algorithm that encourages the
neural network to learn representations that are easier to interpret visually.
We apply our method to artificial and real-world data sets to show its utility.
| [
{
"created": "Fri, 11 Sep 2020 15:53:37 GMT",
"version": "v1"
}
] | 2020-09-14 | [
[
"Knittel",
"Johannes",
""
],
[
"Lalama",
"Andres",
""
],
[
"Koch",
"Steffen",
""
],
[
"Ertl",
"Thomas",
""
]
] | Investigating relationships between variables in multi-dimensional data sets is a common task for data analysts and engineers. More specifically, it is often valuable to understand which ranges of which input variables lead to particular values of a given target variable. Unfortunately, with an increasing number of independent variables, this process may become cumbersome and time-consuming due to the many possible combinations that have to be explored. In this paper, we propose a novel approach to visualize correlations between input variables and a target output variable that scales to hundreds of variables. We developed a visual model based on neural networks that can be explored in a guided way to help analysts find and understand such correlations. First, we train a neural network to predict the target from the input variables. Then, we visualize the inner workings of the resulting model to help understand relations within the data set. We further introduce a new regularization term for the backpropagation algorithm that encourages the neural network to learn representations that are easier to interpret visually. We apply our method to artificial and real-world data sets to show its utility. |
1605.05106 | Kaelon Lloyd | Kaelon Lloyd, David Marshall, Simon C. Moore, Paul L. Rosin | Detecting Violent and Abnormal Crowd activity using Temporal Analysis of
Grey Level Co-occurrence Matrix (GLCM) Based Texture Measures | Published under open access, 9 pages, 12 Figures | Machine Vision and Applications (2017) | 10.1007/s00138-017-0830-x | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The severity of sustained injury resulting from assault-related violence can
be minimised by reducing detection time. However, it has been shown that human
operators perform poorly at detecting events found in video footage when
presented with simultaneous feeds. We utilise computer vision techniques to
develop an automated method of abnormal crowd detection that can aid a human
operator in the detection of violent behaviour. We observed that behaviour in
city centre environments often occurs in crowded areas, resulting in individual
actions being occluded by other crowd members. We propose a real-time
descriptor that models crowd dynamics by encoding changes in crowd texture
using temporal summaries of Grey Level Co-Occurrence Matrix (GLCM) features. We
introduce a measure of inter-frame uniformity (IFU) and demonstrate that the
appearance of violent behaviour changes in a less uniform manner when compared
to other types of crowd behaviour. Our proposed method is computationally cheap
and offers real-time description. Evaluating our method using a privately held
CCTV dataset and the publicly available Violent Flows, UCF Web Abnormality, and
UMN Abnormal Crowd datasets, we report a receiver operating characteristic
score of 0.9782, 0.9403, 0.8218 and 0.9956 respectively.
| [
{
"created": "Tue, 17 May 2016 10:53:07 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Apr 2017 10:39:02 GMT",
"version": "v2"
}
] | 2017-04-04 | [
[
"Lloyd",
"Kaelon",
""
],
[
"Marshall",
"David",
""
],
[
"Moore",
"Simon C.",
""
],
[
"Rosin",
"Paul L.",
""
]
] | The severity of sustained injury resulting from assault-related violence can be minimised by reducing detection time. However, it has been shown that human operators perform poorly at detecting events found in video footage when presented with simultaneous feeds. We utilise computer vision techniques to develop an automated method of abnormal crowd detection that can aid a human operator in the detection of violent behaviour. We observed that behaviour in city centre environments often occurs in crowded areas, resulting in individual actions being occluded by other crowd members. We propose a real-time descriptor that models crowd dynamics by encoding changes in crowd texture using temporal summaries of Grey Level Co-Occurrence Matrix (GLCM) features. We introduce a measure of inter-frame uniformity (IFU) and demonstrate that the appearance of violent behaviour changes in a less uniform manner when compared to other types of crowd behaviour. Our proposed method is computationally cheap and offers real-time description. Evaluating our method using a privately held CCTV dataset and the publicly available Violent Flows, UCF Web Abnormality, and UMN Abnormal Crowd datasets, we report a receiver operating characteristic score of 0.9782, 0.9403, 0.8218 and 0.9956 respectively. |
2405.05688 | Dipankar Srirag | Dipankar Srirag and Aditya Joshi | Evaluating Dialect Robustness of Language Models via Conversation
Understanding | 13 pages, 7 figures, 6 tables | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | With an evergrowing number of LLMs reporting superlative performance for
English, their ability to perform equitably for different dialects of English
(i.e., dialect robustness) needs to be ascertained. Specifically, we use
English language (US English or Indian English) conversations between humans
who play the word-guessing game of `taboo'. We formulate two evaluative tasks:
target word prediction (TWP) (i.e., predict the masked target word in a
conversation) and target word selection (TWS) (i.e., select the most likely
masked target word in a conversation, from among a set of candidate words).
Extending MD3, an existing dialectic dataset of taboo-playing conversations, we
introduce M-MD3, a target-word-masked version of MD3 with the USEng and IndEng
subsets. We add two subsets: AITrans (where dialectic information is removed
from IndEng) and AIGen (where LLMs are prompted to generate conversations). Our
evaluation uses pre-trained and fine-tuned versions of two closed-source
(GPT-4/3.5) and two open-source LLMs (Mistral and Gemma). LLMs perform
significantly better for US English than Indian English for both TWP and TWS,
for all settings. While GPT-based models perform the best, the comparatively
smaller models work more equitably for short conversations (<8 turns). Our
results on AIGen and AITrans (the best and worst-performing subset)
respectively show that LLMs may learn a dialect of their own based on the
composition of the training data, and that dialect robustness is indeed a
challenging task. Our evaluation methodology exhibits a novel way to examine
attributes of language models using pre-existing dialogue datasets.
| [
{
"created": "Thu, 9 May 2024 11:38:23 GMT",
"version": "v1"
}
] | 2024-05-10 | [
[
"Srirag",
"Dipankar",
""
],
[
"Joshi",
"Aditya",
""
]
] | With an evergrowing number of LLMs reporting superlative performance for English, their ability to perform equitably for different dialects of English (i.e., dialect robustness) needs to be ascertained. Specifically, we use English language (US English or Indian English) conversations between humans who play the word-guessing game of `taboo'. We formulate two evaluative tasks: target word prediction (TWP) (i.e., predict the masked target word in a conversation) and target word selection (TWS) (i.e., select the most likely masked target word in a conversation, from among a set of candidate words). Extending MD3, an existing dialectic dataset of taboo-playing conversations, we introduce M-MD3, a target-word-masked version of MD3 with the USEng and IndEng subsets. We add two subsets: AITrans (where dialectic information is removed from IndEng) and AIGen (where LLMs are prompted to generate conversations). Our evaluation uses pre-trained and fine-tuned versions of two closed-source (GPT-4/3.5) and two open-source LLMs (Mistral and Gemma). LLMs perform significantly better for US English than Indian English for both TWP and TWS, for all settings. While GPT-based models perform the best, the comparatively smaller models work more equitably for short conversations (<8 turns). Our results on AIGen and AITrans (the best and worst-performing subset) respectively show that LLMs may learn a dialect of their own based on the composition of the training data, and that dialect robustness is indeed a challenging task. Our evaluation methodology exhibits a novel way to examine attributes of language models using pre-existing dialogue datasets. |
2007.08025 | Hakim Hafidi | Hakim Hafidi, Mounir Ghogho, Philippe Ciblat and Ananthram Swami | GraphCL: Contrastive Self-Supervised Learning of Graph Representations | Under review for Neurips 2020 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose Graph Contrastive Learning (GraphCL), a general framework for
learning node representations in a self-supervised manner. GraphCL learns node
embeddings by maximizing the similarity between the representations of two
randomly perturbed versions of the intrinsic features and link structure of the
same node's local subgraph. We use graph neural networks to produce two
representations of the same node and leverage a contrastive learning loss to
maximize agreement between them. In both transductive and inductive learning
setups, we demonstrate that our approach significantly outperforms the
state-of-the-art in unsupervised learning on a number of node classification
benchmarks.
| [
{
"created": "Wed, 15 Jul 2020 22:36:53 GMT",
"version": "v1"
}
] | 2020-07-17 | [
[
"Hafidi",
"Hakim",
""
],
[
"Ghogho",
"Mounir",
""
],
[
"Ciblat",
"Philippe",
""
],
[
"Swami",
"Ananthram",
""
]
] | We propose Graph Contrastive Learning (GraphCL), a general framework for learning node representations in a self-supervised manner. GraphCL learns node embeddings by maximizing the similarity between the representations of two randomly perturbed versions of the intrinsic features and link structure of the same node's local subgraph. We use graph neural networks to produce two representations of the same node and leverage a contrastive learning loss to maximize agreement between them. In both transductive and inductive learning setups, we demonstrate that our approach significantly outperforms the state-of-the-art in unsupervised learning on a number of node classification benchmarks. |
2209.01292 | Bargav Jayaraman | Bargav Jayaraman and David Evans | Are Attribute Inference Attacks Just Imputation? | 13 (main body) + 4 (references and appendix) pages. To appear in
CCS'22 | null | null | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Models can expose sensitive information about their training data. In an
attribute inference attack, an adversary has partial knowledge of some training
records and access to a model trained on those records, and infers the unknown
values of a sensitive feature of those records. We study a fine-grained variant
of attribute inference we call \emph{sensitive value inference}, where the
adversary's goal is to identify with high confidence some records from a
candidate set where the unknown attribute has a particular sensitive value. We
explicitly compare attribute inference with data imputation that captures the
training distribution statistics, under various assumptions about the training
data available to the adversary. Our main conclusions are: (1) previous
attribute inference methods do not reveal more about the training data from the
model than can be inferred by an adversary without access to the trained model,
but with the same knowledge of the underlying distribution as needed to train
the attribute inference attack; (2) black-box attribute inference attacks
rarely learn anything that cannot be learned without the model; but (3)
white-box attacks, which we introduce and evaluate in the paper, can reliably
identify some records with the sensitive value attribute that would not be
predicted without having access to the model. Furthermore, we show that
proposed defenses such as differentially private training and removing
vulnerable records from training do not mitigate this privacy risk. The code
for our experiments is available at
\url{https://github.com/bargavj/EvaluatingDPML}.
| [
{
"created": "Fri, 2 Sep 2022 23:13:36 GMT",
"version": "v1"
}
] | 2022-09-07 | [
[
"Jayaraman",
"Bargav",
""
],
[
"Evans",
"David",
""
]
] | Models can expose sensitive information about their training data. In an attribute inference attack, an adversary has partial knowledge of some training records and access to a model trained on those records, and infers the unknown values of a sensitive feature of those records. We study a fine-grained variant of attribute inference we call \emph{sensitive value inference}, where the adversary's goal is to identify with high confidence some records from a candidate set where the unknown attribute has a particular sensitive value. We explicitly compare attribute inference with data imputation that captures the training distribution statistics, under various assumptions about the training data available to the adversary. Our main conclusions are: (1) previous attribute inference methods do not reveal more about the training data from the model than can be inferred by an adversary without access to the trained model, but with the same knowledge of the underlying distribution as needed to train the attribute inference attack; (2) black-box attribute inference attacks rarely learn anything that cannot be learned without the model; but (3) white-box attacks, which we introduce and evaluate in the paper, can reliably identify some records with the sensitive value attribute that would not be predicted without having access to the model. Furthermore, we show that proposed defenses such as differentially private training and removing vulnerable records from training do not mitigate this privacy risk. The code for our experiments is available at \url{https://github.com/bargavj/EvaluatingDPML}. |
1810.01256 | Yang Chen | Guanxiong Zeng, Yang Chen, Bo Cui, Shan Yu | Continual Learning of Context-dependent Processing in Neural Networks | null | null | 10.1038/s42256-019-0080-x | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks (DNNs) are powerful tools in learning sophisticated but
fixed mapping rules between inputs and outputs, thereby limiting their
application in more complex and dynamic situations in which the mapping rules
are not kept the same but changing according to different contexts. To lift
such limits, we developed a novel approach involving a learning algorithm,
called orthogonal weights modification (OWM), with the addition of a
context-dependent processing (CDP) module. We demonstrated that with OWM to
overcome the problem of catastrophic forgetting, and the CDP module to learn
how to reuse a feature representation and a classifier for different contexts,
a single network can acquire numerous context-dependent mapping rules in an
online and continual manner, with as few as $\sim$10 samples to learn each.
This should enable highly compact systems to gradually learn myriad
regularities of the real world and eventually behave appropriately within it.
| [
{
"created": "Sat, 29 Sep 2018 09:45:08 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Oct 2018 15:36:51 GMT",
"version": "v2"
},
{
"created": "Sun, 27 Jun 2021 13:38:39 GMT",
"version": "v3"
}
] | 2021-06-29 | [
[
"Zeng",
"Guanxiong",
""
],
[
"Chen",
"Yang",
""
],
[
"Cui",
"Bo",
""
],
[
"Yu",
"Shan",
""
]
] | Deep neural networks (DNNs) are powerful tools in learning sophisticated but fixed mapping rules between inputs and outputs, thereby limiting their application in more complex and dynamic situations in which the mapping rules are not kept the same but changing according to different contexts. To lift such limits, we developed a novel approach involving a learning algorithm, called orthogonal weights modification (OWM), with the addition of a context-dependent processing (CDP) module. We demonstrated that with OWM to overcome the problem of catastrophic forgetting, and the CDP module to learn how to reuse a feature representation and a classifier for different contexts, a single network can acquire numerous context-dependent mapping rules in an online and continual manner, with as few as $\sim$10 samples to learn each. This should enable highly compact systems to gradually learn myriad regularities of the real world and eventually behave appropriately within it. |
2401.15246 | Chiyuan Zhang | Lynn Chua, Qiliang Cui, Badih Ghazi, Charlie Harrison, Pritish Kamath,
Walid Krichene, Ravi Kumar, Pasin Manurangsi, Krishna Giri Narra, Amer Sinha,
Avinash Varadarajan, Chiyuan Zhang | Training Differentially Private Ad Prediction Models with Semi-Sensitive
Features | 7 pages, 4 figures | null | null | null | cs.LG cs.CR cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivated by problems arising in digital advertising, we introduce the task
of training differentially private (DP) machine learning models with
semi-sensitive features. In this setting, a subset of the features is known to
the attacker (and thus need not be protected) while the remaining features as
well as the label are unknown to the attacker and should be protected by the DP
guarantee. This task interpolates between training the model with full DP
(where the label and all features should be protected) or with label DP (where
all the features are considered known, and only the label should be protected).
We present a new algorithm for training DP models with semi-sensitive features.
Through an empirical evaluation on real ads datasets, we demonstrate that our
algorithm surpasses in utility the baselines of (i) DP stochastic gradient
descent (DP-SGD) run on all features (known and unknown), and (ii) a label DP
algorithm run only on the known features (while discarding the unknown ones).
| [
{
"created": "Fri, 26 Jan 2024 23:41:28 GMT",
"version": "v1"
}
] | 2024-01-30 | [
[
"Chua",
"Lynn",
""
],
[
"Cui",
"Qiliang",
""
],
[
"Ghazi",
"Badih",
""
],
[
"Harrison",
"Charlie",
""
],
[
"Kamath",
"Pritish",
""
],
[
"Krichene",
"Walid",
""
],
[
"Kumar",
"Ravi",
""
],
[
"Manurangsi",
"Pasin",
""
],
[
"Narra",
"Krishna Giri",
""
],
[
"Sinha",
"Amer",
""
],
[
"Varadarajan",
"Avinash",
""
],
[
"Zhang",
"Chiyuan",
""
]
] | Motivated by problems arising in digital advertising, we introduce the task of training differentially private (DP) machine learning models with semi-sensitive features. In this setting, a subset of the features is known to the attacker (and thus need not be protected) while the remaining features as well as the label are unknown to the attacker and should be protected by the DP guarantee. This task interpolates between training the model with full DP (where the label and all features should be protected) or with label DP (where all the features are considered known, and only the label should be protected). We present a new algorithm for training DP models with semi-sensitive features. Through an empirical evaluation on real ads datasets, we demonstrate that our algorithm surpasses in utility the baselines of (i) DP stochastic gradient descent (DP-SGD) run on all features (known and unknown), and (ii) a label DP algorithm run only on the known features (while discarding the unknown ones). |
2006.01222 | Tanvi Dadu Miss | Tanvi Dadu, Kartikey Pant and Radhika Mamidi | BERT-based Ensembles for Modeling Disclosure and Support in
Conversational Social Media Text | Accepted at the Affective Content workshop held at AAAI 2020 as the
Best System Paper | null | null | null | cs.CL cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is a growing interest in understanding how humans initiate and hold
conversations. The affective understanding of conversations focuses on the
problem of how speakers use emotions to react to a situation and to each other.
In the CL-Aff Shared Task, the organizers released Get it #OffMyChest dataset,
which contains Reddit comments from casual and confessional conversations,
labeled for their disclosure and supportiveness characteristics. In this paper,
we introduce a predictive ensemble model exploiting the finetuned
contextualized word embeddings, RoBERTa and ALBERT. We show that our model
outperforms the base models in all considered metrics, achieving an improvement
of $3\%$ in the F1 score. We further conduct statistical analysis and outline
deeper insights into the given dataset while providing a new characterization
of impact for the dataset.
| [
{
"created": "Mon, 1 Jun 2020 19:52:01 GMT",
"version": "v1"
}
] | 2020-06-03 | [
[
"Dadu",
"Tanvi",
""
],
[
"Pant",
"Kartikey",
""
],
[
"Mamidi",
"Radhika",
""
]
] | There is a growing interest in understanding how humans initiate and hold conversations. The affective understanding of conversations focuses on the problem of how speakers use emotions to react to a situation and to each other. In the CL-Aff Shared Task, the organizers released Get it #OffMyChest dataset, which contains Reddit comments from casual and confessional conversations, labeled for their disclosure and supportiveness characteristics. In this paper, we introduce a predictive ensemble model exploiting the finetuned contextualized word embeddings, RoBERTa and ALBERT. We show that our model outperforms the base models in all considered metrics, achieving an improvement of $3\%$ in the F1 score. We further conduct statistical analysis and outline deeper insights into the given dataset while providing a new characterization of impact for the dataset. |
2304.14377 | Hengyi Wang | Hengyi Wang, Jingwen Wang, Lourdes Agapito | Co-SLAM: Joint Coordinate and Sparse Parametric Encodings for Neural
Real-Time SLAM | CVPR2023. First two authors contributed equally. Project page:
https://hengyiwang.github.io/projects/CoSLAM | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Co-SLAM, a neural RGB-D SLAM system based on a hybrid
representation, that performs robust camera tracking and high-fidelity surface
reconstruction in real time. Co-SLAM represents the scene as a multi-resolution
hash-grid to exploit its high convergence speed and ability to represent
high-frequency local features. In addition, Co-SLAM incorporates one-blob
encoding, to encourage surface coherence and completion in unobserved areas.
This joint parametric-coordinate encoding enables real-time and robust
performance by bringing the best of both worlds: fast convergence and surface
hole filling. Moreover, our ray sampling strategy allows Co-SLAM to perform
global bundle adjustment over all keyframes instead of requiring keyframe
selection to maintain a small number of active keyframes as competing neural
SLAM approaches do. Experimental results show that Co-SLAM runs at 10-17Hz and
achieves state-of-the-art scene reconstruction results, and competitive
tracking performance in various datasets and benchmarks (ScanNet, TUM, Replica,
Synthetic RGBD). Project page: https://hengyiwang.github.io/projects/CoSLAM
| [
{
"created": "Thu, 27 Apr 2023 17:46:45 GMT",
"version": "v1"
}
] | 2023-04-28 | [
[
"Wang",
"Hengyi",
""
],
[
"Wang",
"Jingwen",
""
],
[
"Agapito",
"Lourdes",
""
]
] | We present Co-SLAM, a neural RGB-D SLAM system based on a hybrid representation, that performs robust camera tracking and high-fidelity surface reconstruction in real time. Co-SLAM represents the scene as a multi-resolution hash-grid to exploit its high convergence speed and ability to represent high-frequency local features. In addition, Co-SLAM incorporates one-blob encoding, to encourage surface coherence and completion in unobserved areas. This joint parametric-coordinate encoding enables real-time and robust performance by bringing the best of both worlds: fast convergence and surface hole filling. Moreover, our ray sampling strategy allows Co-SLAM to perform global bundle adjustment over all keyframes instead of requiring keyframe selection to maintain a small number of active keyframes as competing neural SLAM approaches do. Experimental results show that Co-SLAM runs at 10-17Hz and achieves state-of-the-art scene reconstruction results, and competitive tracking performance in various datasets and benchmarks (ScanNet, TUM, Replica, Synthetic RGBD). Project page: https://hengyiwang.github.io/projects/CoSLAM |
2403.08334 | Cheng Huang Dr. | Cheng Huang (1), Nannan Wang (1), Ziyan Wang (1), Siqi Sun (1), Lingzi
Li (1), Junren Chen (1), Qianchong Zhao (1), Jiaxuan Han (1), Zhen Yang (1),
Lei Shi (2) ((1) Sichuan University, (2) Huawei Technologies) | DONAPI: Malicious NPM Packages Detector using Behavior Sequence
Knowledge Mapping | 18 pages, accepted for publication at USENIX Security 2024 | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the growing popularity of modularity in software development comes the
rise of package managers and language ecosystems. Among them, npm stands out as
the most extensive package manager, hosting more than 2 million third-party
open-source packages that greatly simplify the process of building code.
However, this openness also brings security risks, as evidenced by numerous
package poisoning incidents.
In this paper, we synchronize a local package cache containing more than 3.4
million packages in near real-time to give us access to more package code
details. Further, we perform manual inspection and API call sequence analysis
on packages collected from public datasets and security reports to build a
hierarchical classification framework and behavioral knowledge base covering
different sensitive behaviors. In addition, we propose the DONAPI, an automatic
malicious npm packages detector that combines static and dynamic analysis. It
makes preliminary judgments on the degree of maliciousness of packages by code
reconstruction techniques and static analysis, extracts dynamic API call
sequences to confirm and identify obfuscated content that static analysis can
not handle alone, and finally tags malicious software packages based on the
constructed behavior knowledge base. To date, we have identified and manually
confirmed 325 malicious samples and discovered 2 unusual API calls and 246 API
call sequences that have not appeared in known samples.
| [
{
"created": "Wed, 13 Mar 2024 08:38:21 GMT",
"version": "v1"
}
] | 2024-03-14 | [
[
"Huang",
"Cheng",
"",
"Sichuan University"
],
[
"Wang",
"Nannan",
"",
"Sichuan University"
],
[
"Wang",
"Ziyan",
"",
"Sichuan University"
],
[
"Sun",
"Siqi",
"",
"Sichuan University"
],
[
"Li",
"Lingzi",
"",
"Sichuan University"
],
[
"Chen",
"Junren",
"",
"Sichuan University"
],
[
"Zhao",
"Qianchong",
"",
"Sichuan University"
],
[
"Han",
"Jiaxuan",
"",
"Sichuan University"
],
[
"Yang",
"Zhen",
"",
"Sichuan University"
],
[
"Shi",
"Lei",
"",
"Huawei Technologies"
]
] | With the growing popularity of modularity in software development comes the rise of package managers and language ecosystems. Among them, npm stands out as the most extensive package manager, hosting more than 2 million third-party open-source packages that greatly simplify the process of building code. However, this openness also brings security risks, as evidenced by numerous package poisoning incidents. In this paper, we synchronize a local package cache containing more than 3.4 million packages in near real-time to give us access to more package code details. Further, we perform manual inspection and API call sequence analysis on packages collected from public datasets and security reports to build a hierarchical classification framework and behavioral knowledge base covering different sensitive behaviors. In addition, we propose the DONAPI, an automatic malicious npm packages detector that combines static and dynamic analysis. It makes preliminary judgments on the degree of maliciousness of packages by code reconstruction techniques and static analysis, extracts dynamic API call sequences to confirm and identify obfuscated content that static analysis can not handle alone, and finally tags malicious software packages based on the constructed behavior knowledge base. To date, we have identified and manually confirmed 325 malicious samples and discovered 2 unusual API calls and 246 API call sequences that have not appeared in known samples. |
2305.17115 | Mateo Perez | Rajeev Alur, Osbert Bastani, Kishor Jothimurugan, Mateo Perez, Fabio
Somenzi, Ashutosh Trivedi | Policy Synthesis and Reinforcement Learning for Discounted LTL | null | null | null | null | cs.LO cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The difficulty of manually specifying reward functions has led to an interest
in using linear temporal logic (LTL) to express objectives for reinforcement
learning (RL). However, LTL has the downside that it is sensitive to small
perturbations in the transition probabilities, which prevents probably
approximately correct (PAC) learning without additional assumptions. Time
discounting provides a way of removing this sensitivity, while retaining the
high expressivity of the logic. We study the use of discounted LTL for policy
synthesis in Markov decision processes with unknown transition probabilities,
and show how to reduce discounted LTL to discounted-sum reward via a reward
machine when all discount factors are identical.
| [
{
"created": "Fri, 26 May 2023 17:32:38 GMT",
"version": "v1"
},
{
"created": "Mon, 29 May 2023 23:43:19 GMT",
"version": "v2"
}
] | 2023-05-31 | [
[
"Alur",
"Rajeev",
""
],
[
"Bastani",
"Osbert",
""
],
[
"Jothimurugan",
"Kishor",
""
],
[
"Perez",
"Mateo",
""
],
[
"Somenzi",
"Fabio",
""
],
[
"Trivedi",
"Ashutosh",
""
]
] | The difficulty of manually specifying reward functions has led to an interest in using linear temporal logic (LTL) to express objectives for reinforcement learning (RL). However, LTL has the downside that it is sensitive to small perturbations in the transition probabilities, which prevents probably approximately correct (PAC) learning without additional assumptions. Time discounting provides a way of removing this sensitivity, while retaining the high expressivity of the logic. We study the use of discounted LTL for policy synthesis in Markov decision processes with unknown transition probabilities, and show how to reduce discounted LTL to discounted-sum reward via a reward machine when all discount factors are identical. |
2407.12684 | Yu-Jie Yuan | Yu-Jie Yuan, Leif Kobbelt, Jiwen Liu, Yuan Zhang, Pengfei Wan, Yu-Kun
Lai, Lin Gao | 4Dynamic: Text-to-4D Generation with Hybrid Priors | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to the fascinating generative performance of text-to-image diffusion
models, growing text-to-3D generation works explore distilling the 2D
generative priors into 3D, using the score distillation sampling (SDS) loss, to
bypass the data scarcity problem. The existing text-to-3D methods have achieved
promising results in realism and 3D consistency, but text-to-4D generation
still faces challenges, including lack of realism and insufficient dynamic
motions. In this paper, we propose a novel method for text-to-4D generation,
which ensures the dynamic amplitude and authenticity through direct supervision
provided by a video prior. Specifically, we adopt a text-to-video diffusion
model to generate a reference video and divide 4D generation into two stages:
static generation and dynamic generation. The static 3D generation is achieved
under the guidance of the input text and the first frame of the reference
video, while in the dynamic generation stage, we introduce a customized SDS
loss to ensure multi-view consistency, a video-based SDS loss to improve
temporal consistency, and most importantly, direct priors from the reference
video to ensure the quality of geometry and texture. Moreover, we design a
prior-switching training strategy to avoid conflicts between different priors
and fully leverage the benefits of each prior. In addition, to enrich the
generated motion, we further introduce a dynamic modeling representation
composed of a deformation network and a topology network, which ensures dynamic
continuity while modeling topological changes. Our method not only supports
text-to-4D generation but also enables 4D generation from monocular videos. The
comparison experiments demonstrate the superiority of our method compared to
existing methods.
| [
{
"created": "Wed, 17 Jul 2024 16:02:55 GMT",
"version": "v1"
}
] | 2024-07-18 | [
[
"Yuan",
"Yu-Jie",
""
],
[
"Kobbelt",
"Leif",
""
],
[
"Liu",
"Jiwen",
""
],
[
"Zhang",
"Yuan",
""
],
[
"Wan",
"Pengfei",
""
],
[
"Lai",
"Yu-Kun",
""
],
[
"Gao",
"Lin",
""
]
] | Due to the fascinating generative performance of text-to-image diffusion models, growing text-to-3D generation works explore distilling the 2D generative priors into 3D, using the score distillation sampling (SDS) loss, to bypass the data scarcity problem. The existing text-to-3D methods have achieved promising results in realism and 3D consistency, but text-to-4D generation still faces challenges, including lack of realism and insufficient dynamic motions. In this paper, we propose a novel method for text-to-4D generation, which ensures the dynamic amplitude and authenticity through direct supervision provided by a video prior. Specifically, we adopt a text-to-video diffusion model to generate a reference video and divide 4D generation into two stages: static generation and dynamic generation. The static 3D generation is achieved under the guidance of the input text and the first frame of the reference video, while in the dynamic generation stage, we introduce a customized SDS loss to ensure multi-view consistency, a video-based SDS loss to improve temporal consistency, and most importantly, direct priors from the reference video to ensure the quality of geometry and texture. Moreover, we design a prior-switching training strategy to avoid conflicts between different priors and fully leverage the benefits of each prior. In addition, to enrich the generated motion, we further introduce a dynamic modeling representation composed of a deformation network and a topology network, which ensures dynamic continuity while modeling topological changes. Our method not only supports text-to-4D generation but also enables 4D generation from monocular videos. The comparison experiments demonstrate the superiority of our method compared to existing methods. |
1102.2768 | Ananthanarayanan Chockalingam | Suresh Chandrasekaran, Saif K. Mohammed, and A. Chockalingam | Achievable Rate Region of Quantized Broadcast and MAC Channels | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study the achievable rate region of Gaussian multiuser
channels with the messages transmitted being from finite input alphabets and
the outputs being {\em quantized at the receiver}. In particular, we focus on
the achievable rate region of $i)$ Gaussian broadcast channel (GBC) and $ii)$
Gaussian multiple access channel (GMAC). First, we study the achievable rate
region of two-user GBC when the messages to be transmitted to both the users
take values from finite signal sets and the received signal is quantized at
both the users. We refer to this channel as {\em quantized broadcast channel
(QBC)}. We observe that the capacity region defined for a GBC does not carry
over as such to QBC. We show that the optimal decoding scheme for GBC (i.e.,
high SNR user doing successive decoding and low SNR user decoding its message
alone) is not optimal for QBC. We then propose an achievable rate region for
QBC based on two different schemes. We present achievable rate region results
for the case of uniform quantization at the receivers. Next, we investigate the
achievable rate region of two-user GMAC with finite input alphabet and
quantized receiver output. We refer to this channel as {\em quantized multiple
access channel (QMAC)}. We derive expressions for the achievable rate region of
a two-user QMAC. We show that, with finite input alphabet, the achievable rate
region with the commonly used uniform receiver quantizer has a significant loss
compared to the achievable rate region without receiver quantization. We
propose a {\em non-uniform quantizer} which has a significantly larger rate
region compared to what is achieved with a uniform quantizer in QMAC.
| [
{
"created": "Mon, 14 Feb 2011 13:49:31 GMT",
"version": "v1"
}
] | 2011-02-15 | [
[
"Chandrasekaran",
"Suresh",
""
],
[
"Mohammed",
"Saif K.",
""
],
[
"Chockalingam",
"A.",
""
]
] | In this paper, we study the achievable rate region of Gaussian multiuser channels with the messages transmitted being from finite input alphabets and the outputs being {\em quantized at the receiver}. In particular, we focus on the achievable rate region of $i)$ Gaussian broadcast channel (GBC) and $ii)$ Gaussian multiple access channel (GMAC). First, we study the achievable rate region of two-user GBC when the messages to be transmitted to both the users take values from finite signal sets and the received signal is quantized at both the users. We refer to this channel as {\em quantized broadcast channel (QBC)}. We observe that the capacity region defined for a GBC does not carry over as such to QBC. We show that the optimal decoding scheme for GBC (i.e., high SNR user doing successive decoding and low SNR user decoding its message alone) is not optimal for QBC. We then propose an achievable rate region for QBC based on two different schemes. We present achievable rate region results for the case of uniform quantization at the receivers. Next, we investigate the achievable rate region of two-user GMAC with finite input alphabet and quantized receiver output. We refer to this channel as {\em quantized multiple access channel (QMAC)}. We derive expressions for the achievable rate region of a two-user QMAC. We show that, with finite input alphabet, the achievable rate region with the commonly used uniform receiver quantizer has a significant loss compared to the achievable rate region without receiver quantization. We propose a {\em non-uniform quantizer} which has a significantly larger rate region compared to what is achieved with a uniform quantizer in QMAC. |
2403.16239 | Savinay Nagendra | Savinay Nagendra | Thermal Analysis for NVIDIA GTX480 Fermi GPU Architecture | null | null | null | null | cs.AR | http://creativecommons.org/licenses/by/4.0/ | In this project, we design a four-layer (Silicon|TIM|Silicon|TIM), 3D floor
plan for NVIDIA GTX480 Fermi GPU architecture and compare heat dissipation and
power trends for matrix multiplication and Needleman-Wunsch kernels. First,
cuda kernels for the two algorithms are written. These kernels are compiled and
executed with the GPGPU Simulator to extract power logs for varying tensor
sizes. These power logs are converted to ptrace files with an automation script
written in Python. The 3D floor plan, along with the generated ptrace files are
given to HotSpot, which generates thermal heat maps to show heat dissipation
for various components of the Fermi architecture. These heat dissipation trends
for both the kernels are observed for multiple tensor sizes to draw qualitative
conclusions. The behavioral and execution patterns of both kernels are also
observed with these varying heat dissipation trends. With this project, we
observe that an increase in tensor size results in an increase of heat
dissipation in components of the Fermi Architecture. However, the temperature
of the chip remains saturated after a particular tensor size and remains
constant thereafter. Heat dissipation is non-uniform with smaller tensor sizes,
and becomes more uniform after a certain tensor size. This means, that after a
particular tensor size, more cores of the architecture get activated in the
computations, thereby resulting in an almost constant temperature. We also
observe that Needleman Wunsch uses more data movement between DRAM and caches,
thereby showing higher heat dissipation patterns in DRAMs when compared to
Matrix multiplication for the same tensor size. Our observations are in
accordance with the theoretical concepts behind the working of the two
algorithms, thereby making our results consistent.
| [
{
"created": "Sun, 24 Mar 2024 17:06:45 GMT",
"version": "v1"
}
] | 2024-03-26 | [
[
"Nagendra",
"Savinay",
""
]
] | In this project, we design a four-layer (Silicon|TIM|Silicon|TIM), 3D floor plan for NVIDIA GTX480 Fermi GPU architecture and compare heat dissipation and power trends for matrix multiplication and Needleman-Wunsch kernels. First, cuda kernels for the two algorithms are written. These kernels are compiled and executed with the GPGPU Simulator to extract power logs for varying tensor sizes. These power logs are converted to ptrace files with an automation script written in Python. The 3D floor plan, along with the generated ptrace files are given to HotSpot, which generates thermal heat maps to show heat dissipation for various components of the Fermi architecture. These heat dissipation trends for both the kernels are observed for multiple tensor sizes to draw qualitative conclusions. The behavioral and execution patterns of both kernels are also observed with these varying heat dissipation trends. With this project, we observe that an increase in tensor size results in an increase of heat dissipation in components of the Fermi Architecture. However, the temperature of the chip remains saturated after a particular tensor size and remains constant thereafter. Heat dissipation is non-uniform with smaller tensor sizes, and becomes more uniform after a certain tensor size. This means, that after a particular tensor size, more cores of the architecture get activated in the computations, thereby resulting in an almost constant temperature. We also observe that Needleman Wunsch uses more data movement between DRAM and caches, thereby showing higher heat dissipation patterns in DRAMs when compared to Matrix multiplication for the same tensor size. Our observations are in accordance with the theoretical concepts behind the working of the two algorithms, thereby making our results consistent. |
2105.06603 | Emily Allaway | Emily Allaway, Malavika Srikanth, and Kathleen McKeown | Adversarial Learning for Zero-Shot Stance Detection on Social Media | To appear in NAACL 2021 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Stance detection on social media can help to identify and understand slanted
news or commentary in everyday life. In this work, we propose a new model for
zero-shot stance detection on Twitter that uses adversarial learning to
generalize across topics. Our model achieves state-of-the-art performance on a
number of unseen test topics with minimal computational costs. In addition, we
extend zero-shot stance detection to new topics, highlighting future directions
for zero-shot transfer.
| [
{
"created": "Fri, 14 May 2021 01:08:48 GMT",
"version": "v1"
}
] | 2021-05-17 | [
[
"Allaway",
"Emily",
""
],
[
"Srikanth",
"Malavika",
""
],
[
"McKeown",
"Kathleen",
""
]
] | Stance detection on social media can help to identify and understand slanted news or commentary in everyday life. In this work, we propose a new model for zero-shot stance detection on Twitter that uses adversarial learning to generalize across topics. Our model achieves state-of-the-art performance on a number of unseen test topics with minimal computational costs. In addition, we extend zero-shot stance detection to new topics, highlighting future directions for zero-shot transfer. |
1812.04905 | Frederic Bour | Fr\'ed\'eric Bour | CAMLroot: revisiting the OCaml FFI | null | null | null | null | cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The OCaml language comes with a facility for interfacing with C code -- the
Foreign Function Interface or FFI. The primitives for working with the OCaml
runtime -- and, in particular, with the garbage collector (GC) -- strive for a
minimal overhead: they avoid unnecessary work and allow for calls to C code to
be very cheap. But they are also hard to use properly. Satisfying the GC
invariants leads to counter-intuitive C code and there are hardly any safety
checks to warn the developer. In this work, we explore two complementary
approaches to mitigate these issues. First, simply adding an indirection to the
API manipulating OCaml values let us write safer code amenable to optional
runtime tests that assert proper use of the API. Second, a notion of region for
tracking lifetimes of OCaml values on C side let us trade some performance for
simpler code.
| [
{
"created": "Wed, 12 Dec 2018 11:37:02 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Apr 2019 13:52:54 GMT",
"version": "v2"
}
] | 2019-04-29 | [
[
"Bour",
"Frédéric",
""
]
] | The OCaml language comes with a facility for interfacing with C code -- the Foreign Function Interface or FFI. The primitives for working with the OCaml runtime -- and, in particular, with the garbage collector (GC) -- strive for a minimal overhead: they avoid unnecessary work and allow for calls to C code to be very cheap. But they are also hard to use properly. Satisfying the GC invariants leads to counter-intuitive C code and there are hardly any safety checks to warn the developer. In this work, we explore two complementary approaches to mitigate these issues. First, simply adding an indirection to the API manipulating OCaml values let us write safer code amenable to optional runtime tests that assert proper use of the API. Second, a notion of region for tracking lifetimes of OCaml values on C side let us trade some performance for simpler code. |
2003.00126 | Zhe Zeng Miss | Zhe Zeng, Paolo Morettin, Fanqi Yan, Antonio Vergari, Guy Van den
Broeck | Scaling up Hybrid Probabilistic Inference with Logical and Arithmetic
Constraints via Message Passing | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Weighted model integration (WMI) is a very appealing framework for
probabilistic inference: it allows to express the complex dependencies of
real-world problems where variables are both continuous and discrete, via the
language of Satisfiability Modulo Theories (SMT), as well as to compute
probabilistic queries with complex logical and arithmetic constraints. Yet,
existing WMI solvers are not ready to scale to these problems. They either
ignore the intrinsic dependency structure of the problem altogether, or they
are limited to overly restrictive structures. To narrow this gap, we derive a
factorized formalism of WMI enabling us to devise a scalable WMI solver based
on message passing, MP-WMI. Namely, MP-WMI is the first WMI solver which allows
one to: 1) perform exact inference on the full class of tree-structured WMI
problems; 2) compute all marginal densities in linear time; 3) amortize
inference across queries. Experimental results show that our solver dramatically
outperforms the existing WMI solvers on a large set of benchmarks.
| [
{
"created": "Fri, 28 Feb 2020 23:51:45 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Aug 2020 22:41:13 GMT",
"version": "v2"
}
] | 2020-08-21 | [
[
"Zeng",
"Zhe",
""
],
[
"Morettin",
"Paolo",
""
],
[
"Yan",
"Fanqi",
""
],
[
"Vergari",
"Antonio",
""
],
[
"Broeck",
"Guy Van den",
""
]
] | Weighted model integration (WMI) is a very appealing framework for probabilistic inference: it allows one to express the complex dependencies of real-world problems where variables are both continuous and discrete, via the language of Satisfiability Modulo Theories (SMT), as well as to compute probabilistic queries with complex logical and arithmetic constraints. Yet, existing WMI solvers are not ready to scale to these problems. They either ignore the intrinsic dependency structure of the problem altogether, or they are limited to overly restrictive structures. To narrow this gap, we derive a factorized formalism of WMI enabling us to devise a scalable WMI solver based on message passing, MP-WMI. Namely, MP-WMI is the first WMI solver which allows one to: 1) perform exact inference on the full class of tree-structured WMI problems; 2) compute all marginal densities in linear time; 3) amortize inference across queries. Experimental results show that our solver dramatically outperforms the existing WMI solvers on a large set of benchmarks. |
2201.11897 | Yuekai Huang | Yuekai Huang, Ye Yang, Junjie Wang, Wei Zheng, Qing Wang | Identifying Emergent Leadership in OSS Projects Based on Communication
Styles | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In open source software (OSS) communities, existing leadership indicators are
predominantly measured by code contribution or community influence. Recent studies
on emergent leadership shed light on additional dimensions such as intellectual
stimulation in collaborative communications. To that end, this paper proposes
an automated approach, named iLead, to mine communication styles and identify
emergent leadership behaviors in OSS communities, using issue comments data. We
start with the construction of 6 categories of leadership behaviors based on
existing leadership studies. Then, we manually label leadership behaviors in
10,000 issue comments from 10 OSS projects, and extract 304 heuristic
linguistic patterns which represent different types of emergent leadership
behaviors in a flexible and concise manner. Next, an automated algorithm is
developed to merge and consolidate different pattern sets extracted from
multiple projects into a final pattern ranking list, which can be applied for
automatic leadership identification. The evaluation results show that iLead
can achieve a median precision of 0.82 and recall of 0.78, outperforming ten
machine/deep learning baselines. To demonstrate practical usefulness, we also
conduct empirical analysis and human evaluation of the identified leadership
behaviors from iLead. We argue that emergent leadership behaviors in issue
discussion should be taken into consideration to broaden existing OSS
leadership viewpoints. Practical insights on community building and leadership
skill development are offered for OSS community and individual developers,
respectively.
| [
{
"created": "Fri, 28 Jan 2022 02:20:44 GMT",
"version": "v1"
}
] | 2022-01-31 | [
[
"Huang",
"Yuekai",
""
],
[
"Yang",
"Ye",
""
],
[
"Wang",
"Junjie",
""
],
[
"Zheng",
"Wei",
""
],
[
"Wang",
"Qing",
""
]
] | In open source software (OSS) communities, existing leadership indicators are predominantly measured by code contribution or community influence. Recent studies on emergent leadership shed light on additional dimensions such as intellectual stimulation in collaborative communications. To that end, this paper proposes an automated approach, named iLead, to mine communication styles and identify emergent leadership behaviors in OSS communities, using issue comments data. We start with the construction of 6 categories of leadership behaviors based on existing leadership studies. Then, we manually label leadership behaviors in 10,000 issue comments from 10 OSS projects, and extract 304 heuristic linguistic patterns which represent different types of emergent leadership behaviors in a flexible and concise manner. Next, an automated algorithm is developed to merge and consolidate different pattern sets extracted from multiple projects into a final pattern ranking list, which can be applied for automatic leadership identification. The evaluation results show that iLead can achieve a median precision of 0.82 and recall of 0.78, outperforming ten machine/deep learning baselines. To demonstrate practical usefulness, we also conduct empirical analysis and human evaluation of the identified leadership behaviors from iLead. We argue that emergent leadership behaviors in issue discussion should be taken into consideration to broaden existing OSS leadership viewpoints. Practical insights on community building and leadership skill development are offered for OSS community and individual developers, respectively. |
2009.13724 | Fajie Yuan | Fajie Yuan, Guoxiao Zhang, Alexandros Karatzoglou, Joemon Jose, Beibei
Kong, Yudong Li | One Person, One Model, One World: Learning Continual User Representation
without Forgetting | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning user representations is a vital technique toward effective user
modeling and personalized recommender systems. Existing approaches often derive
an individual set of model parameters for each task by training on separate
data. However, the representation of the same user potentially has some
commonalities, such as preference and personality, even in different tasks. As
such, these separately trained representations could be suboptimal in
performance as well as inefficient in terms of parameter sharing.
In this paper, we delve into research on continually learning user representations
task by task, whereby new tasks are learned while using partial parameters from
old ones. A new problem arises: when new tasks are trained, previously
learned parameters are very likely to be modified, and as a result, an
artificial neural network (ANN)-based model may lose its capacity to serve
well-trained previous tasks forever; this issue is termed catastrophic
forgetting. To address this issue, we present \emph{Conure}, the first
\underline{con}tinual, or lifelong, \underline{u}ser \underline{re}presentation
learner -- i.e., learning new tasks over time without forgetting old ones.
Specifically, we propose iteratively removing less important weights of old
tasks in a deep user representation model, motivated by the fact that neural
network models are usually over-parameterized. In this way, we could learn many
tasks with a single model by reusing the important weights, and modifying the
less important weights to adapt to new tasks. We conduct extensive experiments
on two real-world datasets with nine tasks and show that \emph{Conure} largely
exceeds the standard model that does not purposely preserve such old
"knowledge", and performs competitively or sometimes better than models which
are trained either individually for each task or simultaneously by merging all
task data.
| [
{
"created": "Tue, 29 Sep 2020 01:49:14 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Oct 2020 14:37:16 GMT",
"version": "v2"
},
{
"created": "Sun, 9 May 2021 10:07:55 GMT",
"version": "v3"
}
] | 2021-05-11 | [
[
"Yuan",
"Fajie",
""
],
[
"Zhang",
"Guoxiao",
""
],
[
"Karatzoglou",
"Alexandros",
""
],
[
"Jose",
"Joemon",
""
],
[
"Kong",
"Beibei",
""
],
[
"Li",
"Yudong",
""
]
] | Learning user representations is a vital technique toward effective user modeling and personalized recommender systems. Existing approaches often derive an individual set of model parameters for each task by training on separate data. However, the representation of the same user potentially has some commonalities, such as preference and personality, even in different tasks. As such, these separately trained representations could be suboptimal in performance as well as inefficient in terms of parameter sharing. In this paper, we delve into research on continually learning user representations task by task, whereby new tasks are learned while using partial parameters from old ones. A new problem arises: when new tasks are trained, previously learned parameters are very likely to be modified, and as a result, an artificial neural network (ANN)-based model may lose its capacity to serve well-trained previous tasks forever; this issue is termed catastrophic forgetting. To address this issue, we present \emph{Conure}, the first \underline{con}tinual, or lifelong, \underline{u}ser \underline{re}presentation learner -- i.e., learning new tasks over time without forgetting old ones. Specifically, we propose iteratively removing less important weights of old tasks in a deep user representation model, motivated by the fact that neural network models are usually over-parameterized. In this way, we could learn many tasks with a single model by reusing the important weights, and modifying the less important weights to adapt to new tasks. We conduct extensive experiments on two real-world datasets with nine tasks and show that \emph{Conure} largely exceeds the standard model that does not purposely preserve such old "knowledge", and performs competitively or sometimes better than models which are trained either individually for each task or simultaneously by merging all task data. |
1204.4209 | Venkatesan Guruswami | Venkatesan Guruswami and Chaoping Xing | Folded Codes from Function Field Towers and Improved Optimal Rate List
Decoding | Conference version appears at STOC 2012 | null | null | null | cs.IT cs.DS math.AG math.IT math.NT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We give a new construction of algebraic codes which are efficiently list
decodable from a fraction $1-R-\eps$ of adversarial errors where $R$ is the
rate of the code, for any desired positive constant $\eps$. The worst-case list
size output by the algorithm is $O(1/\eps)$, matching the existential bound for
random codes up to constant factors. Further, the alphabet size of the codes is
a constant depending only on $\eps$ - it can be made
$\exp(\tilde{O}(1/\eps^2))$ which is not much worse than the lower bound of
$\exp(\Omega(1/\eps))$. The parameters we achieve are thus quite close to the
existential bounds in all three aspects - error-correction radius, alphabet
size, and list-size - simultaneously. Our code construction is Monte Carlo and
has the claimed list decoding property with high probability. Once the code is
(efficiently) sampled, the encoding/decoding algorithms are deterministic with
a running time $O_\eps(N^c)$ for an absolute constant $c$, where $N$ is the
code's block length.
Our construction is based on a linear-algebraic approach to list decoding
folded codes from towers of function fields, and combining it with a special
form of subspace-evasive sets. Instantiating this with the explicit
"asymptotically good" Garcia-Stichtenoth tower of function fields yields the
above parameters. To illustrate the method in a simpler setting, we also
present a construction based on Hermitian function fields, which offers similar
guarantees with a list and alphabet size polylogarithmic in the block length
$N$. Along the way, we shed light on how to use automorphisms of certain
function fields to enable list decoding of the folded version of the associated
algebraic-geometric codes.
| [
{
"created": "Wed, 18 Apr 2012 21:02:36 GMT",
"version": "v1"
}
] | 2015-03-20 | [
[
"Guruswami",
"Venkatesan",
""
],
[
"Xing",
"Chaoping",
""
]
] | We give a new construction of algebraic codes which are efficiently list decodable from a fraction $1-R-\eps$ of adversarial errors where $R$ is the rate of the code, for any desired positive constant $\eps$. The worst-case list size output by the algorithm is $O(1/\eps)$, matching the existential bound for random codes up to constant factors. Further, the alphabet size of the codes is a constant depending only on $\eps$ - it can be made $\exp(\tilde{O}(1/\eps^2))$ which is not much worse than the lower bound of $\exp(\Omega(1/\eps))$. The parameters we achieve are thus quite close to the existential bounds in all three aspects - error-correction radius, alphabet size, and list-size - simultaneously. Our code construction is Monte Carlo and has the claimed list decoding property with high probability. Once the code is (efficiently) sampled, the encoding/decoding algorithms are deterministic with a running time $O_\eps(N^c)$ for an absolute constant $c$, where $N$ is the code's block length. Our construction is based on a linear-algebraic approach to list decoding folded codes from towers of function fields, and combining it with a special form of subspace-evasive sets. Instantiating this with the explicit "asymptotically good" Garcia-Stichtenoth tower of function fields yields the above parameters. To illustrate the method in a simpler setting, we also present a construction based on Hermitian function fields, which offers similar guarantees with a list and alphabet size polylogarithmic in the block length $N$. Along the way, we shed light on how to use automorphisms of certain function fields to enable list decoding of the folded version of the associated algebraic-geometric codes. |
2304.14358 | \v{S}imon Bil\'ik | Juraj Lagin, Simon Bilik | Structure Analysis of the FRP Rebar Using Computer Vision Techniques | null | null | null | null | cs.CV cs.NA math.NA | http://creativecommons.org/licenses/by/4.0/ | In this paper we present a method to analyze the inner structure of the
composite FRP rebar, namely the shift of the real center of gravity with
respect to the geometrical center of the rebar and changes of cross-sectional
characteristics. We propose an automated pipeline based on classical computer
vision techniques and on the ratio between the glass fibers and epoxy filament
in the analyzed cross-section to compute the shift vector of the real center of
gravity with respect to the geometrical center, together with the cross-section
area and its principal moments. We discuss the achieved results over two cross
sections in different portions of the rebar and, in the end, we suggest
possible directions and improvements for our future work. We also made our code
publicly available.
| [
{
"created": "Thu, 27 Apr 2023 17:37:23 GMT",
"version": "v1"
}
] | 2023-04-28 | [
[
"Lagin",
"Juraj",
""
],
[
"Bilik",
"Simon",
""
]
] | In this paper we present a method to analyze the inner structure of the composite FRP rebar, namely the shift of the real center of gravity with respect to the geometrical center of the rebar and changes of cross-sectional characteristics. We propose an automated pipeline based on classical computer vision techniques and on the ratio between the glass fibers and epoxy filament in the analyzed cross-section to compute the shift vector of the real center of gravity with respect to the geometrical center, together with the cross-section area and its principal moments. We discuss the achieved results over two cross sections in different portions of the rebar and, in the end, we suggest possible directions and improvements for our future work. We also made our code publicly available. |
1807.07432 | Thomas Mitchel | Thomas W. Mitchel, Sipu Ruan, Gregory S. Chirikjian | Signal Alignment for Humanoid Skeletons via the Globally Optimal
Reparameterization Algorithm | Humanoids 2018 initial submission; companion paper to
arXiv:1807.05485 | null | null | null | cs.CV math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The general ability to analyze and classify the 3D kinematics of the human
form is an essential step in the development of socially adept humanoid robots.
A variety of different types of signals can be used by machines to represent
and characterize actions such as RGB videos, infrared maps, and optical flow.
In particular, skeleton sequences provide a natural 3D kinematic description of
human motions and can be acquired in real time using RGB+D cameras. Moreover,
skeleton sequences are generalizable to characterize the motions of both humans
and humanoid robots. The Globally Optimal Reparameterization Algorithm (GORA)
is a novel, recently proposed algorithm for signal alignment in which signals
are reparameterized to a globally optimal universal standard timescale (UST).
Here, we introduce a variant of GORA for humanoid action recognition with
skeleton sequences, which we call GORA-S. We briefly review the algorithm's
mathematical foundations and contextualize them in the problem of action
recognition with skeleton sequences. Subsequently, we introduce GORA-S and
discuss parameters and numerical techniques for its effective implementation.
We then compare its performance with that of the DTW and FastDTW algorithms, in
terms of computational efficiency and accuracy in matching skeletons. Our
results show that GORA-S attains a complexity that is significantly less than
that of any tested DTW method. In addition, it displays a favorable balance
between speed and accuracy that remains invariant under changes in skeleton
sampling frequency, lending it a degree of versatility that could make it
well-suited for a variety of action recognition tasks.
| [
{
"created": "Wed, 18 Jul 2018 03:16:39 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Jul 2018 15:14:55 GMT",
"version": "v2"
}
] | 2018-07-23 | [
[
"Mitchel",
"Thomas W.",
""
],
[
"Ruan",
"Sipu",
""
],
[
"Chirikjian",
"Gregory S.",
""
]
] | The general ability to analyze and classify the 3D kinematics of the human form is an essential step in the development of socially adept humanoid robots. A variety of different types of signals can be used by machines to represent and characterize actions such as RGB videos, infrared maps, and optical flow. In particular, skeleton sequences provide a natural 3D kinematic description of human motions and can be acquired in real time using RGB+D cameras. Moreover, skeleton sequences are generalizable to characterize the motions of both humans and humanoid robots. The Globally Optimal Reparameterization Algorithm (GORA) is a novel, recently proposed algorithm for signal alignment in which signals are reparameterized to a globally optimal universal standard timescale (UST). Here, we introduce a variant of GORA for humanoid action recognition with skeleton sequences, which we call GORA-S. We briefly review the algorithm's mathematical foundations and contextualize them in the problem of action recognition with skeleton sequences. Subsequently, we introduce GORA-S and discuss parameters and numerical techniques for its effective implementation. We then compare its performance with that of the DTW and FastDTW algorithms, in terms of computational efficiency and accuracy in matching skeletons. Our results show that GORA-S attains a complexity that is significantly less than that of any tested DTW method. In addition, it displays a favorable balance between speed and accuracy that remains invariant under changes in skeleton sampling frequency, lending it a degree of versatility that could make it well-suited for a variety of action recognition tasks. |
2310.13361 | Wenyu Guo | Wenyu Guo, Qingkai Fang, Dong Yu, Yang Feng | Bridging the Gap between Synthetic and Authentic Images for Multimodal
Machine Translation | Accepted to EMNLP 2023 main conference | null | null | null | cs.CV cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Multimodal machine translation (MMT) simultaneously takes the source sentence
and a relevant image as input for translation. Since there is no paired image
available for the input sentence in most cases, recent studies suggest
utilizing powerful text-to-image generation models to provide image inputs.
Nevertheless, synthetic images generated by these models often follow different
distributions compared to authentic images. Consequently, using authentic
images for training and synthetic images for inference can introduce a
distribution shift, resulting in performance degradation during inference. To
tackle this challenge, in this paper, we feed synthetic and authentic images to
the MMT model, respectively. Then we minimize the gap between the synthetic and
authentic images by drawing close the input image representations of the
Transformer Encoder and the output distributions of the Transformer Decoder.
Therefore, we mitigate the distribution disparity introduced by the synthetic
images during inference, thereby freeing the authentic images from the
inference process. Experimental results show that our approach achieves
state-of-the-art performance on the Multi30K En-De and En-Fr datasets, while
remaining independent of authentic images during inference.
| [
{
"created": "Fri, 20 Oct 2023 09:06:30 GMT",
"version": "v1"
}
] | 2023-10-23 | [
[
"Guo",
"Wenyu",
""
],
[
"Fang",
"Qingkai",
""
],
[
"Yu",
"Dong",
""
],
[
"Feng",
"Yang",
""
]
] | Multimodal machine translation (MMT) simultaneously takes the source sentence and a relevant image as input for translation. Since there is no paired image available for the input sentence in most cases, recent studies suggest utilizing powerful text-to-image generation models to provide image inputs. Nevertheless, synthetic images generated by these models often follow different distributions compared to authentic images. Consequently, using authentic images for training and synthetic images for inference can introduce a distribution shift, resulting in performance degradation during inference. To tackle this challenge, in this paper, we feed synthetic and authentic images to the MMT model, respectively. Then we minimize the gap between the synthetic and authentic images by drawing close the input image representations of the Transformer Encoder and the output distributions of the Transformer Decoder. Therefore, we mitigate the distribution disparity introduced by the synthetic images during inference, thereby freeing the authentic images from the inference process. Experimental results show that our approach achieves state-of-the-art performance on the Multi30K En-De and En-Fr datasets, while remaining independent of authentic images during inference. |
1603.00418 | Ahmad Al- Shamailh | Ahmad Al- Shamailh | Visualizing source code in 3D Maya software | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we clarify how source code can be summarized for programmers
through three-dimensional shapes, addressing programmers, developers, scholars
and researchers in the field of software engineering, as well as researchers
interested in three-dimensional representation. Using Maya scripts, which are
based on drawing three-dimensional, stereoscopic shapes, every part of the
code, for example classes, methods, coherence and homogeneity, is shown in
these drawings in a clear and useful way.
| [
{
"created": "Tue, 1 Mar 2016 19:26:42 GMT",
"version": "v1"
}
] | 2016-03-02 | [
[
"Shamailh",
"Ahmad Al-",
""
]
] | In this paper we clarify how source code can be summarized for programmers through three-dimensional shapes, addressing programmers, developers, scholars and researchers in the field of software engineering, as well as researchers interested in three-dimensional representation. Using Maya scripts, which are based on drawing three-dimensional, stereoscopic shapes, every part of the code, for example classes, methods, coherence and homogeneity, is shown in these drawings in a clear and useful way. |
2109.11541 | Han Wu | Han Wu, Kun Xu, Linqi Song | CSAGN: Conversational Structure Aware Graph Network for Conversational
Semantic Role Labeling | To appear in EMNLP 2021 | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Conversational semantic role labeling (CSRL) is believed to be a crucial step
towards dialogue understanding. However, it remains a major challenge for
existing CSRL parsers to handle conversational structural information. In this
paper, we present a simple and effective architecture for CSRL which aims to
address this problem. Our model is based on a conversational structure-aware
graph network which explicitly encodes the speaker dependent information. We
also propose a multi-task learning method to further improve the model.
Experimental results on benchmark datasets show that our model with our
proposed training objectives significantly outperforms previous baselines.
| [
{
"created": "Thu, 23 Sep 2021 07:47:28 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Nov 2021 06:57:06 GMT",
"version": "v2"
}
] | 2021-11-05 | [
[
"Wu",
"Han",
""
],
[
"Xu",
"Kun",
""
],
[
"Song",
"Linqi",
""
]
] | Conversational semantic role labeling (CSRL) is believed to be a crucial step towards dialogue understanding. However, it remains a major challenge for existing CSRL parsers to handle conversational structural information. In this paper, we present a simple and effective architecture for CSRL which aims to address this problem. Our model is based on a conversational structure-aware graph network which explicitly encodes the speaker dependent information. We also propose a multi-task learning method to further improve the model. Experimental results on benchmark datasets show that our model with our proposed training objectives significantly outperforms previous baselines. |
1910.12800 | Xing Zhao | Xing Zhao, Ping Lu, Yanyan Zhang, Jianxiong Chen, and Xiaoyang Li | Attenuating Random Noise in Seismic Data by a Deep Learning Approach | 33 pages, 11 figures | null | null | null | cs.LG cs.CV eess.IV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the geophysical field, seismic noise attenuation has been considered as a
critical and long-standing problem, especially for the pre-stack data
processing. Here, we propose to leverage a deep-learning model for
this task. Rather than directly applying an existing de-noising model from
ordinary images to the seismic data, we have designed a particular
deep-learning model, based on residual neural networks. It is named
N2N-Seismic, which has a strong ability to recover the seismic signals back to
intact condition with the preservation of primary signals. The proposed model,
which achieves great success in attenuating noise, has been tested on two
different seismic datasets. Several metrics show that our method outperforms
conventional approaches in terms of Signal-to-Noise-Ratio, Mean-Squared-Error,
Phase Spectrum, etc. Moreover, robustness tests on effectively removing
random noise from datasets with strong and weak noise have been extensively
conducted to make sure that the proposed model is able to maintain a good
level of adaptation while dealing with large variations of noise
characteristics and intensities.
| [
{
"created": "Mon, 28 Oct 2019 16:53:26 GMT",
"version": "v1"
}
] | 2019-10-29 | [
[
"Zhao",
"Xing",
""
],
[
"Lu",
"Ping",
""
],
[
"Zhang",
"Yanyan",
""
],
[
"Chen",
"Jianxiong",
""
],
[
"Li",
"Xiaoyang",
""
]
] | In the geophysical field, seismic noise attenuation has been considered as a critical and long-standing problem, especially for the pre-stack data processing. Here, we propose to leverage a deep-learning model for this task. Rather than directly applying an existing de-noising model from ordinary images to the seismic data, we have designed a particular deep-learning model, based on residual neural networks. It is named N2N-Seismic, which has a strong ability to recover the seismic signals back to intact condition with the preservation of primary signals. The proposed model, which achieves great success in attenuating noise, has been tested on two different seismic datasets. Several metrics show that our method outperforms conventional approaches in terms of Signal-to-Noise-Ratio, Mean-Squared-Error, Phase Spectrum, etc. Moreover, robustness tests on effectively removing random noise from datasets with strong and weak noise have been extensively conducted to make sure that the proposed model is able to maintain a good level of adaptation while dealing with large variations of noise characteristics and intensities. |
2305.00735 | Roel Bouman | Roel Bouman, Zaharah Bukhsh, Tom Heskes | Unsupervised anomaly detection algorithms on real-world data: how many
do we need? | The associated Git repository can be found at:
https://github.com/RoelBouman/outlierdetection | Journal of Machine Learning Research 25.105 (2024): 1-34 | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this study we evaluate 32 unsupervised anomaly detection algorithms on 52
real-world multivariate tabular datasets, performing the largest comparison of
unsupervised anomaly detection algorithms to date. On this collection of
datasets, the $k$-thNN (distance to the $k$-th nearest neighbor) algorithm
significantly outperforms most other algorithms. Visualizing and then
clustering the relative performance of the considered algorithms on all
datasets, we identify two clear clusters: one with ``local'' datasets, and
another with ``global'' datasets. ``Local'' anomalies occupy a region with low
density when compared to nearby samples, while ``global'' anomalies occupy an overall low
density region in the feature space. On the local datasets the $k$NN
($k$-nearest neighbor) algorithm comes out on top. On the global datasets, the
EIF (extended isolation forest) algorithm performs the best. Also taking into
consideration the algorithms' computational complexity, a toolbox with these
three unsupervised anomaly detection algorithms suffices for finding anomalies
in this representative collection of multivariate datasets. By providing access
to code and datasets, our study can be easily reproduced and extended with more
algorithms and/or datasets.
| [
{
"created": "Mon, 1 May 2023 09:27:42 GMT",
"version": "v1"
}
] | 2024-05-28 | [
[
"Bouman",
"Roel",
""
],
[
"Bukhsh",
"Zaharah",
""
],
[
"Heskes",
"Tom",
""
]
] | In this study we evaluate 32 unsupervised anomaly detection algorithms on 52 real-world multivariate tabular datasets, performing the largest comparison of unsupervised anomaly detection algorithms to date. On this collection of datasets, the $k$-thNN (distance to the $k$-th nearest neighbor) algorithm significantly outperforms most other algorithms. Visualizing and then clustering the relative performance of the considered algorithms on all datasets, we identify two clear clusters: one with ``local'' datasets, and another with ``global'' datasets. ``Local'' anomalies occupy a region with low density when compared to nearby samples, while ``global'' anomalies occupy an overall low density region in the feature space. On the local datasets the $k$NN ($k$-nearest neighbor) algorithm comes out on top. On the global datasets, the EIF (extended isolation forest) algorithm performs the best. Also taking into consideration the algorithms' computational complexity, a toolbox with these three unsupervised anomaly detection algorithms suffices for finding anomalies in this representative collection of multivariate datasets. By providing access to code and datasets, our study can be easily reproduced and extended with more algorithms and/or datasets. |
2106.05003 | Guanchen Ding | Jingyuan Chen, Guanchen Ding, Yuchen Yang, Wenwei Han, Kangmin Xu,
Tianyi Gao, Zhe Zhang, Wanping Ouyang, Hao Cai, Zhenzhong Chen | Dual-Modality Vehicle Anomaly Detection via Bilateral Trajectory Tracing | 9 pages, 5 figures, accepted to CVPRW 2021 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traffic anomaly detection has played a crucial role in Intelligent
Transportation System (ITS). The main challenges of this task lie in the highly
diversified anomaly scenes and variational lighting conditions. Although much
work has managed to identify the anomaly in homogeneous weather and scene, few
resolved to cope with complex ones. In this paper, we proposed a dual-modality
modularized methodology for the robust detection of abnormal vehicles. We
introduced an integrated anomaly detection framework comprising the following
modules: background modeling, vehicle tracking with detection, mask
construction, Region of Interest (ROI) backtracking, and dual-modality tracing.
Concretely, we employed background modeling to filter the motion information
and left the static information for later vehicle detection. For the vehicle
detection and tracking module, we adopted YOLOv5 and multi-scale tracking to
localize the anomalies. Besides, we utilized the frame difference and tracking
results to identify the road and obtain the mask. In addition, we introduced
multiple similarity estimation metrics to refine the anomaly period via
backtracking. Finally, we proposed a dual-modality bilateral tracing module to
refine the time further. The experiments conducted on the Track 4 testset of
the NVIDIA 2021 AI City Challenge yielded a result of 0.9302 F1-Score and
3.4039 root mean square error (RMSE), indicating the effectiveness of our
framework.
| [
{
"created": "Wed, 9 Jun 2021 12:04:25 GMT",
"version": "v1"
}
] | 2021-06-10 | [
[
"Chen",
"Jingyuan",
""
],
[
"Ding",
"Guanchen",
""
],
[
"Yang",
"Yuchen",
""
],
[
"Han",
"Wenwei",
""
],
[
"Xu",
"Kangmin",
""
],
[
"Gao",
"Tianyi",
""
],
[
"Zhang",
"Zhe",
""
],
[
"Ouyang",
"Wanping",
""
],
[
"Cai",
"Hao",
""
],
[
"Chen",
"Zhenzhong",
""
]
] | Traffic anomaly detection has played a crucial role in Intelligent Transportation System (ITS). The main challenges of this task lie in the highly diversified anomaly scenes and variational lighting conditions. Although much work has managed to identify the anomaly in homogeneous weather and scene, few resolved to cope with complex ones. In this paper, we proposed a dual-modality modularized methodology for the robust detection of abnormal vehicles. We introduced an integrated anomaly detection framework comprising the following modules: background modeling, vehicle tracking with detection, mask construction, Region of Interest (ROI) backtracking, and dual-modality tracing. Concretely, we employed background modeling to filter the motion information and left the static information for later vehicle detection. For the vehicle detection and tracking module, we adopted YOLOv5 and multi-scale tracking to localize the anomalies. Besides, we utilized the frame difference and tracking results to identify the road and obtain the mask. In addition, we introduced multiple similarity estimation metrics to refine the anomaly period via backtracking. Finally, we proposed a dual-modality bilateral tracing module to refine the time further. The experiments conducted on the Track 4 testset of the NVIDIA 2021 AI City Challenge yielded a result of 0.9302 F1-Score and 3.4039 root mean square error (RMSE), indicating the effectiveness of our framework. |
0903.1061 | Tiberiu Marius Karnyanszky | Ovidiu Crista, Tiberiu Marius Karnyanszky | Application for Evaluation of the Professional Competencies of the
Teaching Staff | 6 pages (71-76), 5th "Actualities and Perspectives in Hard and Soft",
2007 | Ann. Univ. Tibiscus Comp. Sci. Series 5 (2007), 71-76 | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of the presented application is to offer full support to
universities in retrieving feedback from their students with regard to
their teachers. This is the main reason we described it in this paper. To build
this application the following tools have been used: Microsoft Notepad 5.1 (to
make the source files), Adobe Photoshop CS3 (to make the background image) and
Adobe Flash Media Encoder 8 (to render the video clips).
| [
{
"created": "Thu, 5 Mar 2009 18:41:47 GMT",
"version": "v1"
}
] | 2009-03-10 | [
[
"Crista",
"Ovidiu",
""
],
[
"Karnyanszky",
"Tiberiu Marius",
""
]
] | The goal of the presented application is to offer full support to universities in retrieving feedback from their students with regard to their teachers. This is the main reason we described it in this paper. To build this application the following tools have been used: Microsoft Notepad 5.1 (to make the source files), Adobe Photoshop CS3 (to make the background image) and Adobe Flash Media Encoder 8 (to render the video clips). |
1307.7211 | Giovanni Geraci | Giovanni Geraci, Harpreet S. Dhillon, Jeffrey G. Andrews, Jinhong
Yuan, and Iain B. Collings | Physical Layer Security in Downlink Multi-Antenna Cellular Networks | submitted to IEEE Transactions on Communications, July 2013 | null | 10.1109/TCOMM.2014.2314664 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study physical layer security for the downlink of cellular
networks, where the confidential messages transmitted to each mobile user can
be eavesdropped by both (i) the other users in the same cell and (ii) the users
in the other cells. The locations of base stations and mobile users are modeled
as two independent two-dimensional Poisson point processes. Using the proposed
model, we analyze the secrecy rates achievable by regularized channel inversion
(RCI) precoding by performing a large-system analysis that combines tools from
stochastic geometry and random matrix theory. We obtain approximations for the
probability of secrecy outage and the mean secrecy rate, and characterize
regimes where RCI precoding achieves a nonzero secrecy rate. We find that
unlike isolated cells, the secrecy rate in a cellular network does not grow
monotonically with the transmit power, and the network tends to be in secrecy
outage if the transmit power grows unbounded. Furthermore, we show that there
is an optimal value for the base station deployment density that maximizes the
secrecy rate, and this value is a decreasing function of the signal-to-noise
ratio.
| [
{
"created": "Sat, 27 Jul 2013 02:42:14 GMT",
"version": "v1"
}
] | 2016-11-15 | [
[
"Geraci",
"Giovanni",
""
],
[
"Dhillon",
"Harpreet S.",
""
],
[
"Andrews",
"Jeffrey G.",
""
],
[
"Yuan",
"Jinhong",
""
],
[
"Collings",
"Iain B.",
""
]
] | In this paper, we study physical layer security for the downlink of cellular networks, where the confidential messages transmitted to each mobile user can be eavesdropped by both (i) the other users in the same cell and (ii) the users in the other cells. The locations of base stations and mobile users are modeled as two independent two-dimensional Poisson point processes. Using the proposed model, we analyze the secrecy rates achievable by regularized channel inversion (RCI) precoding by performing a large-system analysis that combines tools from stochastic geometry and random matrix theory. We obtain approximations for the probability of secrecy outage and the mean secrecy rate, and characterize regimes where RCI precoding achieves a nonzero secrecy rate. We find that unlike isolated cells, the secrecy rate in a cellular network does not grow monotonically with the transmit power, and the network tends to be in secrecy outage if the transmit power grows unbounded. Furthermore, we show that there is an optimal value for the base station deployment density that maximizes the secrecy rate, and this value is a decreasing function of the signal-to-noise ratio. |
1905.04093 | Estefania Talavera | Estefania Talavera, Nicolai Petkov and Petia Radeva | Towards Unsupervised Familiar Scene Recognition in Egocentric Videos | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays, there is an upsurge of interest in using lifelogging devices. Such
devices generate huge amounts of image data; consequently, the need for
automatic methods for analyzing and summarizing these data is drastically
increasing. We present a new method for familiar scene recognition in
egocentric videos, based on background pattern detection through automatically
configurable COSFIRE filters. We present some experiments over egocentric data
acquired with the Narrative Clip.
| [
{
"created": "Fri, 10 May 2019 11:58:10 GMT",
"version": "v1"
}
] | 2019-05-13 | [
[
"Talavera",
"Estefania",
""
],
[
"Petkov",
"Nicolai",
""
],
[
"Radeva",
"Petia",
""
]
] | Nowadays, there is an upsurge of interest in using lifelogging devices. Such devices generate huge amounts of image data; consequently, the need for automatic methods for analyzing and summarizing these data is drastically increasing. We present a new method for familiar scene recognition in egocentric videos, based on background pattern detection through automatically configurable COSFIRE filters. We present some experiments over egocentric data acquired with the Narrative Clip. |
1701.02044 | Abhishek Gupta | Abhishek K. Gupta, Jeffrey G. Andrews, Robert W. Heath Jr | Macro diversity in Cellular Networks with Random Blockages | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Blocking objects (blockages) between a transmitter and receiver cause
wireless communication links to transition from line-of-sight (LOS) to
non-line-of-sight (NLOS) propagation, which can greatly reduce the received
power, particularly at higher frequencies such as millimeter wave (mmWave). We
consider a cellular network in which a mobile user attempts to connect to two
or more base stations (BSs) simultaneously, to increase the probability of at
least one LOS link, which is a form of macrodiversity. We develop a framework
for determining the LOS probability as a function of the number of BSs, when
taking into account the correlation between blockages: for example, a single
blockage close to the device -- including the user's own body -- could block
multiple BSs. We consider the impact of the size of blocking objects on the
system reliability probability and show that macrodiversity gains are higher
when the blocking objects are small. We also show that the BS density must
scale as the square of the blockage density to maintain a given level of
reliability.
| [
{
"created": "Mon, 9 Jan 2017 01:17:51 GMT",
"version": "v1"
}
] | 2017-01-10 | [
[
"Gupta",
"Abhishek K.",
""
],
[
"Andrews",
"Jeffrey G.",
""
],
[
"Heath",
"Robert W.",
"Jr"
]
] | Blocking objects (blockages) between a transmitter and receiver cause wireless communication links to transition from line-of-sight (LOS) to non-line-of-sight (NLOS) propagation, which can greatly reduce the received power, particularly at higher frequencies such as millimeter wave (mmWave). We consider a cellular network in which a mobile user attempts to connect to two or more base stations (BSs) simultaneously, to increase the probability of at least one LOS link, which is a form of macrodiversity. We develop a framework for determining the LOS probability as a function of the number of BSs, when taking into account the correlation between blockages: for example, a single blockage close to the device -- including the user's own body -- could block multiple BSs. We consider the impact of the size of blocking objects on the system reliability probability and show that macrodiversity gains are higher when the blocking objects are small. We also show that the BS density must scale as the square of the blockage density to maintain a given level of reliability. |
2209.10042 | Hyeon Jeon | Hyeon Jeon, Michael Aupetit, DongHwa Shin, Aeri Cho, Seokhyeon Park,
Jinwook Seo | Sanity Check for External Clustering Validation Benchmarks using
Internal Validation Measures | Datasets available on https://github.com/hj-n/labeled-datasets | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We address the lack of reliability in benchmarking clustering techniques
based on labeled datasets. A standard scheme in external clustering validation
is to use class labels as ground truth clusters, based on the assumption that
each class forms a single, clearly separated cluster. However, as this
cluster-label matching (CLM) assumption often breaks, the lack of a
sanity check for the CLM of benchmark datasets casts doubt on the validity of
external validations. Still, evaluating the degree of CLM is challenging. For
example, internal clustering validation measures can be used to quantify CLM
within the same dataset to evaluate its different clusterings but are not
designed to compare clusterings of different datasets. In this work, we propose
a principled way to generate between-dataset internal measures that enable the
comparison of CLM across datasets. We first determine four axioms for
between-dataset internal measures, complementing Ackerman and Ben-David's
within-dataset axioms. We then propose processes to generalize internal
measures to fulfill these new axioms, and use them to extend the widely used
Calinski-Harabasz index for between-dataset CLM evaluation. Through
quantitative experiments, we (1) verify the validity and necessity of the
generalization processes and (2) show that the proposed between-dataset
Calinski-Harabasz index accurately evaluates CLM across datasets. Finally, we
demonstrate the importance of evaluating CLM of benchmark datasets before
conducting external validation.
| [
{
"created": "Tue, 20 Sep 2022 23:32:18 GMT",
"version": "v1"
}
] | 2022-09-22 | [
[
"Jeon",
"Hyeon",
""
],
[
"Aupetit",
"Michael",
""
],
[
"Shin",
"DongHwa",
""
],
[
"Cho",
"Aeri",
""
],
[
"Park",
"Seokhyeon",
""
],
[
"Seo",
"Jinwook",
""
]
] | We address the lack of reliability in benchmarking clustering techniques based on labeled datasets. A standard scheme in external clustering validation is to use class labels as ground truth clusters, based on the assumption that each class forms a single, clearly separated cluster. However, as this cluster-label matching (CLM) assumption often breaks, the lack of a sanity check for the CLM of benchmark datasets casts doubt on the validity of external validations. Still, evaluating the degree of CLM is challenging. For example, internal clustering validation measures can be used to quantify CLM within the same dataset to evaluate its different clusterings but are not designed to compare clusterings of different datasets. In this work, we propose a principled way to generate between-dataset internal measures that enable the comparison of CLM across datasets. We first determine four axioms for between-dataset internal measures, complementing Ackerman and Ben-David's within-dataset axioms. We then propose processes to generalize internal measures to fulfill these new axioms, and use them to extend the widely used Calinski-Harabasz index for between-dataset CLM evaluation. Through quantitative experiments, we (1) verify the validity and necessity of the generalization processes and (2) show that the proposed between-dataset Calinski-Harabasz index accurately evaluates CLM across datasets. Finally, we demonstrate the importance of evaluating CLM of benchmark datasets before conducting external validation. |
2110.01616 | Vikram Ramesh | Vikram Ramesh, Vighnesh Natarajan and Anil Prabhakar | A spatial-photonic Ising machine to solve the two-way
number-partitioning problem | null | null | null | null | cs.ET physics.optics | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We evaluate the performance of different algorithms in minimizing the
Hamiltonian of a spatial-photonic Ising machine (SPIM). We then encode the
number-partitioning problem on the SPIM and adiabatically arrive at good
solutions for the problem for over 16000 spins, with a time complexity that
only scales linearly with problem size. Finally, we benchmark our machine
performance against the classical solver, Gurobi, and also a D-Wave 5000+
quantum annealer. With just one spatial light modulator and an adiabatic
evolution scheme for the phase, our results surpass current state-of-the-art
SPIMs. We reduce hardware costs, and can solve larger problems more
efficiently.
| [
{
"created": "Sun, 3 Oct 2021 22:00:58 GMT",
"version": "v1"
}
] | 2021-10-06 | [
[
"Ramesh",
"Vikram",
""
],
[
"Natarajan",
"Vighnesh",
""
],
[
"Prabhakar",
"Anil",
""
]
] | We evaluate the performance of different algorithms in minimizing the Hamiltonian of a spatial-photonic Ising machine (SPIM). We then encode the number-partitioning problem on the SPIM and adiabatically arrive at good solutions for the problem for over 16000 spins, with a time complexity that only scales linearly with problem size. Finally, we benchmark our machine performance against the classical solver, Gurobi, and also a D-Wave 5000+ quantum annealer. With just one spatial light modulator and an adiabatic evolution scheme for the phase, our results surpass current state-of-the-art SPIMs. We reduce hardware costs, and can solve larger problems more efficiently. |
2007.10934 | Sujit P B Dr | Sarthak Bhagat and Sujit PB | UAV Target Tracking in Urban Environments Using Deep Reinforcement
Learning | null | null | null | null | cs.RO cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Persistent target tracking in urban environments using a UAV is a difficult
task due to the limited field of view, visibility obstruction from obstacles
and uncertain target motion. The vehicle needs to plan intelligently in 3D such
that the target visibility is maximized. In this paper, we introduce Target
Following DQN (TF-DQN), a deep reinforcement learning technique based on Deep
Q-Networks with a curriculum training framework for the UAV to persistently
track the target in the presence of obstacles and target motion uncertainty.
The algorithm is evaluated through several simulation experiments qualitatively
as well as quantitatively. The results show that the UAV tracks the target
persistently in diverse environments while avoiding obstacles in the trained
environments as well as in unseen environments.
| [
{
"created": "Tue, 21 Jul 2020 16:52:48 GMT",
"version": "v1"
}
] | 2020-07-22 | [
[
"Bhagat",
"Sarthak",
""
],
[
"PB",
"Sujit",
""
]
] | Persistent target tracking in urban environments using a UAV is a difficult task due to the limited field of view, visibility obstruction from obstacles and uncertain target motion. The vehicle needs to plan intelligently in 3D such that the target visibility is maximized. In this paper, we introduce Target Following DQN (TF-DQN), a deep reinforcement learning technique based on Deep Q-Networks with a curriculum training framework for the UAV to persistently track the target in the presence of obstacles and target motion uncertainty. The algorithm is evaluated through several simulation experiments qualitatively as well as quantitatively. The results show that the UAV tracks the target persistently in diverse environments while avoiding obstacles in the trained environments as well as in unseen environments. |
2009.12046 | Lei Shu | Lei Shu, Alexandros Papangelis, Yi-Chia Wang, Gokhan Tur, Hu Xu,
Zhaleh Feizollahi, Bing Liu, Piero Molino | Controllable Text Generation with Focused Variation | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work introduces Focused-Variation Network (FVN), a novel model to
control language generation. The main problems in previous controlled language
generation models range from the difficulty of generating text according to the
given attributes, to the lack of diversity of the generated texts. FVN
addresses these issues by learning disjoint discrete latent spaces for each
attribute inside codebooks, which allows for both controllability and
diversity, while at the same time generating fluent text. We evaluate FVN on
two text generation datasets with annotated content and style, and show
state-of-the-art performance as assessed by automatic and human evaluations.
| [
{
"created": "Fri, 25 Sep 2020 06:31:06 GMT",
"version": "v1"
}
] | 2020-09-28 | [
[
"Shu",
"Lei",
""
],
[
"Papangelis",
"Alexandros",
""
],
[
"Wang",
"Yi-Chia",
""
],
[
"Tur",
"Gokhan",
""
],
[
"Xu",
"Hu",
""
],
[
"Feizollahi",
"Zhaleh",
""
],
[
"Liu",
"Bing",
""
],
[
"Molino",
"Piero",
""
]
] | This work introduces Focused-Variation Network (FVN), a novel model to control language generation. The main problems in previous controlled language generation models range from the difficulty of generating text according to the given attributes, to the lack of diversity of the generated texts. FVN addresses these issues by learning disjoint discrete latent spaces for each attribute inside codebooks, which allows for both controllability and diversity, while at the same time generating fluent text. We evaluate FVN on two text generation datasets with annotated content and style, and show state-of-the-art performance as assessed by automatic and human evaluations. |
2407.16200 | Milan Tomy | Milan Tomy, Konstantin M. Seiler, Andrew J. Hill | MCTS Based Dispatch of Autonomous Vehicles under Operational Constraints
for Continuous Transportation | International Conference on Automation Science and Engineering
(CASE), 2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Continuous transportation of material in the mining industry is achieved by
the dispatch of autonomous haul-trucks with discrete haulage capacities.
Recently, Monte Carlo Tree Search (MCTS) was successfully deployed in tackling
challenges of long-run optimality, scalability and adaptability in haul-truck
dispatch. Typically, operational constraints imposed on the mine site are
satisfied by heuristic controllers or human operators independent of the
dispatch planning. This article incorporates operational constraint
satisfaction into the dispatch planning by utilising the MCTS based dispatch
planner Flow-Achieving Scheduling Tree (FAST). Operational constraint violation
and satisfaction are modelled as opportunity costs in the combinatorial
optimisation problem of dispatch. Explicit cost formulations are avoided by
utilising MCTS generator models to derive opportunity costs. Experimental
studies with four types of operational constraints demonstrate the success of
utilising opportunity costs for constraint satisfaction, and the effectiveness
of integrating constraints into dispatch planning.
| [
{
"created": "Tue, 23 Jul 2024 06:06:16 GMT",
"version": "v1"
}
] | 2024-07-24 | [
[
"Tomy",
"Milan",
""
],
[
"Seiler",
"Konstantin M.",
""
],
[
"Hill",
"Andrew J.",
""
]
] | Continuous transportation of material in the mining industry is achieved by the dispatch of autonomous haul-trucks with discrete haulage capacities. Recently, Monte Carlo Tree Search (MCTS) was successfully deployed in tackling challenges of long-run optimality, scalability and adaptability in haul-truck dispatch. Typically, operational constraints imposed on the mine site are satisfied by heuristic controllers or human operators independent of the dispatch planning. This article incorporates operational constraint satisfaction into the dispatch planning by utilising the MCTS based dispatch planner Flow-Achieving Scheduling Tree (FAST). Operational constraint violation and satisfaction are modelled as opportunity costs in the combinatorial optimisation problem of dispatch. Explicit cost formulations are avoided by utilising MCTS generator models to derive opportunity costs. Experimental studies with four types of operational constraints demonstrate the success of utilising opportunity costs for constraint satisfaction, and the effectiveness of integrating constraints into dispatch planning. |
2404.06139 | Pengfei Zhou | Pengfei Zhou, Fangxiang Feng, Xiaojie Wang | DiffHarmony: Latent Diffusion Model Meets Image Harmonization | Accepted by ICMR 2024 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image harmonization, which involves adjusting the foreground of a composite
image to attain a unified visual consistency with the background, can be
conceptualized as an image-to-image translation task. Diffusion models have
recently promoted the rapid development of image-to-image translation tasks.
However, training diffusion models from scratch is computationally intensive.
Fine-tuning pre-trained latent diffusion models entails dealing with the
reconstruction error induced by the image compression autoencoder, making it
unsuitable for image generation tasks that involve pixel-level evaluation
metrics. To deal with these issues, in this paper, we first adapt a pre-trained
latent diffusion model to the image harmonization task to generate the
harmonious but potentially blurry initial images. Then we implement two
strategies: utilizing higher-resolution images during inference and
incorporating an additional refinement stage, to further enhance the clarity of
the initially harmonized images. Extensive experiments on iHarmony4 datasets
demonstrate the superiority of our proposed method. The code and model will be
made publicly available at https://github.com/nicecv/DiffHarmony .
| [
{
"created": "Tue, 9 Apr 2024 09:05:23 GMT",
"version": "v1"
}
] | 2024-04-10 | [
[
"Zhou",
"Pengfei",
""
],
[
"Feng",
"Fangxiang",
""
],
[
"Wang",
"Xiaojie",
""
]
] | Image harmonization, which involves adjusting the foreground of a composite image to attain a unified visual consistency with the background, can be conceptualized as an image-to-image translation task. Diffusion models have recently promoted the rapid development of image-to-image translation tasks. However, training diffusion models from scratch is computationally intensive. Fine-tuning pre-trained latent diffusion models entails dealing with the reconstruction error induced by the image compression autoencoder, making it unsuitable for image generation tasks that involve pixel-level evaluation metrics. To deal with these issues, in this paper, we first adapt a pre-trained latent diffusion model to the image harmonization task to generate the harmonious but potentially blurry initial images. Then we implement two strategies: utilizing higher-resolution images during inference and incorporating an additional refinement stage, to further enhance the clarity of the initially harmonized images. Extensive experiments on iHarmony4 datasets demonstrate the superiority of our proposed method. The code and model will be made publicly available at https://github.com/nicecv/DiffHarmony . |
2212.08817 | Jun-Gi Jang | Jun-Gi Jang, Sooyeon Shim, Vladimir Egay, Jeeyong Lee, Jongmin Park,
Suhyun Chae, U Kang | Accurate Open-set Recognition for Memory Workload | 15 pages, 5 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How can we accurately identify new memory workloads while classifying known
memory workloads? Verifying DRAM (Dynamic Random Access Memory) using various
workloads is an important task to guarantee the quality of DRAM. A crucial
component in the process is open-set recognition which aims to detect new
workloads not seen in the training phase. Despite its importance, however,
existing open-set recognition methods are unsatisfactory in terms of accuracy
since they fail to exploit the characteristics of workload sequences. In this
paper, we propose Acorn, an accurate open-set recognition method capturing the
characteristics of workload sequences. Acorn extracts two types of feature
vectors to capture sequential patterns and spatial locality patterns in memory
access. Acorn then uses the feature vectors to accurately classify a
subsequence into one of the known classes or identify it as the unknown class.
Experiments show that Acorn achieves state-of-the-art accuracy, giving up to
37 percentage points higher unknown class detection accuracy while achieving
known class classification accuracy comparable to that of existing methods.
| [
{
"created": "Sat, 17 Dec 2022 07:37:40 GMT",
"version": "v1"
}
] | 2022-12-20 | [
[
"Jang",
"Jun-Gi",
""
],
[
"Shim",
"Sooyeon",
""
],
[
"Egay",
"Vladimir",
""
],
[
"Lee",
"Jeeyong",
""
],
[
"Park",
"Jongmin",
""
],
[
"Chae",
"Suhyun",
""
],
[
"Kang",
"U",
""
]
] | How can we accurately identify new memory workloads while classifying known memory workloads? Verifying DRAM (Dynamic Random Access Memory) using various workloads is an important task to guarantee the quality of DRAM. A crucial component in the process is open-set recognition which aims to detect new workloads not seen in the training phase. Despite its importance, however, existing open-set recognition methods are unsatisfactory in terms of accuracy since they fail to exploit the characteristics of workload sequences. In this paper, we propose Acorn, an accurate open-set recognition method capturing the characteristics of workload sequences. Acorn extracts two types of feature vectors to capture sequential patterns and spatial locality patterns in memory access. Acorn then uses the feature vectors to accurately classify a subsequence into one of the known classes or identify it as the unknown class. Experiments show that Acorn achieves state-of-the-art accuracy, giving up to 37 percentage points higher unknown class detection accuracy while achieving known class classification accuracy comparable to that of existing methods. |
2402.18698 | Ziyun Yang | Ziyun Yang, Kevin Choy, and Sina Farsiu | Spatial Coherence Loss: All Objects Matter in Salient and Camouflaged
Object Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generic object detection is a category-independent task that relies on
accurate modeling of objectness. We show that for accurate semantic analysis,
the network needs to learn all object-level predictions that appear at any
stage of learning, including the pre-defined ground truth (GT) objects and the
ambiguous decoy objects that the network misidentifies as foreground. Yet, most
relevant models focused mainly on improving the learning of the GT objects. A
few methods that consider decoy objects utilize loss functions that only focus
on the single-response, i.e., the loss response of a single ambiguous pixel,
and thus do not benefit from the wealth of information that an object-level
ambiguity learning design can provide. Inspired by the human visual system,
which first discerns the boundaries of ambiguous regions before delving into
the semantic meaning, we propose a novel loss function, Spatial Coherence Loss
(SCLoss), that incorporates the mutual response between adjacent pixels into
the widely-used single-response loss functions. We demonstrate that the
proposed SCLoss can gradually learn the ambiguous regions by detecting and
emphasizing their boundaries in a self-adaptive manner. Through comprehensive
experiments, we demonstrate that replacing popular loss functions with SCLoss
can improve the performance of current state-of-the-art (SOTA) salient or
camouflaged object detection (SOD or COD) models. We also demonstrate that
combining SCLoss with other loss functions can further improve performance and
result in SOTA outcomes for different applications.
| [
{
"created": "Wed, 28 Feb 2024 20:27:49 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Jul 2024 20:23:30 GMT",
"version": "v2"
}
] | 2024-07-18 | [
[
"Yang",
"Ziyun",
""
],
[
"Choy",
"Kevin",
""
],
[
"Farsiu",
"Sina",
""
]
] | Generic object detection is a category-independent task that relies on accurate modeling of objectness. We show that for accurate semantic analysis, the network needs to learn all object-level predictions that appear at any stage of learning, including the pre-defined ground truth (GT) objects and the ambiguous decoy objects that the network misidentifies as foreground. Yet, most relevant models focused mainly on improving the learning of the GT objects. A few methods that consider decoy objects utilize loss functions that only focus on the single-response, i.e., the loss response of a single ambiguous pixel, and thus do not benefit from the wealth of information that an object-level ambiguity learning design can provide. Inspired by the human visual system, which first discerns the boundaries of ambiguous regions before delving into the semantic meaning, we propose a novel loss function, Spatial Coherence Loss (SCLoss), that incorporates the mutual response between adjacent pixels into the widely-used single-response loss functions. We demonstrate that the proposed SCLoss can gradually learn the ambiguous regions by detecting and emphasizing their boundaries in a self-adaptive manner. Through comprehensive experiments, we demonstrate that replacing popular loss functions with SCLoss can improve the performance of current state-of-the-art (SOTA) salient or camouflaged object detection (SOD or COD) models. We also demonstrate that combining SCLoss with other loss functions can further improve performance and result in SOTA outcomes for different applications. |
1504.01218 | Mohammad Shahedul Karim | Mohammad S. Karim, Parastoo Sadeghi, Sameh Sorour, Neda Aboutorab | Instantly Decodable Network Coding for Real-Time Scalable Video
Broadcast over Wireless Networks | null | null | 10.1186/s13634-015-0299-6 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study a real-time scalable video broadcast over wireless
networks in instantly decodable network coded (IDNC) systems. Such real-time
scalable video has a hard deadline and imposes a decoding order on the video
layers. We first derive the upper bound on the probability that the individual
completion times of all receivers meet the deadline. Using this probability, we
design two prioritized IDNC algorithms, namely the expanding window IDNC
(EW-IDNC) algorithm and the non-overlapping window IDNC (NOW-IDNC) algorithm.
These algorithms provide a high level of protection to the most important video
layer before considering additional video layers in coding decisions. Moreover,
in these algorithms, we select an appropriate packet combination over a given
number of video layers so that these video layers are decoded by the maximum
number of receivers before the deadline. We formulate this packet selection
problem as a two-stage maximal clique selection problem over an IDNC graph.
Simulation results over a real scalable video stream show that our proposed
EW-IDNC and NOW-IDNC algorithms improve the received video quality compared to
the existing IDNC algorithms.
| [
{
"created": "Mon, 6 Apr 2015 07:14:46 GMT",
"version": "v1"
}
] | 2016-02-17 | [
[
"Karim",
"Mohammad S.",
""
],
[
"Sadeghi",
"Parastoo",
""
],
[
"Sorour",
"Sameh",
""
],
[
"Aboutorab",
"Neda",
""
]
] | In this paper, we study a real-time scalable video broadcast over wireless networks in instantly decodable network coded (IDNC) systems. Such real-time scalable video has a hard deadline and imposes a decoding order on the video layers. We first derive the upper bound on the probability that the individual completion times of all receivers meet the deadline. Using this probability, we design two prioritized IDNC algorithms, namely the expanding window IDNC (EW-IDNC) algorithm and the non-overlapping window IDNC (NOW-IDNC) algorithm. These algorithms provide a high level of protection to the most important video layer before considering additional video layers in coding decisions. Moreover, in these algorithms, we select an appropriate packet combination over a given number of video layers so that these video layers are decoded by the maximum number of receivers before the deadline. We formulate this packet selection problem as a two-stage maximal clique selection problem over an IDNC graph. Simulation results over a real scalable video stream show that our proposed EW-IDNC and NOW-IDNC algorithms improve the received video quality compared to the existing IDNC algorithms.
2209.01755 | Atsushi Kawamoto | Yuki Sato, Teppei Deguchi, Tsuyoshi Nomura and Atsushi Kawamoto | Free material optimization of thermal conductivity tensors with
asymmetric components | 12 pages, 9 figures | null | null | null | cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Free Material Optimization (FMO), a branch of topology optimization, in which
the design variables are the full constitutive tensors, can provide the most
general form of the design problems. Considering the microstructure composed of
isotropic materials, the constitutive tensors are nevertheless positive definite and
symmetric. On the other hand, it has been reported that the symmetry of this
constitutive tensor can be broken in appearance by considering other physical
phenomena. In the present study, we focus on the thermal Hall effect, which is
explained as the phenomenon that induces a temperature gradient orthogonal to
a given temperature gradient across a solid when a magnetic field is applied to
the solid. This effect makes the thermal conductivity tensor asymmetric and
justifies extending the space of the constitutive tensors to be an asymmetric
domain. We propose the FMO for asymmetric constitutive tensors, parameterizing
the design space so that the physically available property could be naturally
satisfied. Several numerical experiments are provided to show the validity and
the utility of the proposed method.
| [
{
"created": "Mon, 5 Sep 2022 04:20:58 GMT",
"version": "v1"
}
] | 2022-09-07 | [
[
"Sato",
"Yuki",
""
],
[
"Deguchi",
"Teppei",
""
],
[
"Nomura",
"Tsuyoshi",
""
],
[
"Kawamoto",
"Atsushi",
""
]
] | Free Material Optimization (FMO), a branch of topology optimization, in which the design variables are the full constitutive tensors, can provide the most general form of the design problems. Considering the microstructure composed of isotropic materials, the constitutive tensors are nevertheless positive definite and symmetric. On the other hand, it has been reported that the symmetry of this constitutive tensor can be broken in appearance by considering other physical phenomena. In the present study, we focus on the thermal Hall effect, which is explained as the phenomenon that induces a temperature gradient orthogonal to a given temperature gradient across a solid when a magnetic field is applied to the solid. This effect makes the thermal conductivity tensor asymmetric and justifies extending the space of the constitutive tensors to be an asymmetric domain. We propose the FMO for asymmetric constitutive tensors, parameterizing the design space so that the physically available property could be naturally satisfied. Several numerical experiments are provided to show the validity and the utility of the proposed method.
1811.02062 | Xuankai Chang | Xuankai Chang and Yanmin Qian and Kai Yu and Shinji Watanabe | End-to-End Monaural Multi-speaker ASR System without Pretraining | submitted to ICASSP2019 | null | null | null | cs.CL cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, end-to-end models have become a popular approach as an alternative
to traditional hybrid models in automatic speech recognition (ASR). The
multi-speaker speech separation and recognition task is a central task in
the cocktail party problem. In this paper, we present a state-of-the-art monaural
multi-speaker end-to-end automatic speech recognition model. In contrast to
previous studies on the monaural multi-speaker speech recognition, this
end-to-end framework is trained to recognize multiple label sequences
completely from scratch. The system only requires the speech mixture and
corresponding label sequences, without needing any indeterminate supervisions
obtained from non-mixture speech or corresponding labels/alignments. Moreover,
we exploit an individual attention module for each separated speaker
and scheduled sampling to further improve performance. Finally, we
evaluate the proposed model on the 2-speaker mixed speech generated from the
WSJ corpus and the wsj0-2mix dataset, which is a speech separation and
recognition benchmark. The experiments demonstrate that the proposed methods
can improve the performance of the end-to-end model in separating the
overlapping speech and recognizing the separated streams. From the results, the
proposed model leads to ~10.0% relative performance gains in terms of both CER
and WER.
| [
{
"created": "Mon, 5 Nov 2018 22:21:51 GMT",
"version": "v1"
}
] | 2018-11-07 | [
[
"Chang",
"Xuankai",
""
],
[
"Qian",
"Yanmin",
""
],
[
"Yu",
"Kai",
""
],
[
"Watanabe",
"Shinji",
""
]
] | Recently, end-to-end models have become a popular approach as an alternative to traditional hybrid models in automatic speech recognition (ASR). The multi-speaker speech separation and recognition task is a central task in the cocktail party problem. In this paper, we present a state-of-the-art monaural multi-speaker end-to-end automatic speech recognition model. In contrast to previous studies on the monaural multi-speaker speech recognition, this end-to-end framework is trained to recognize multiple label sequences completely from scratch. The system only requires the speech mixture and corresponding label sequences, without needing any indeterminate supervisions obtained from non-mixture speech or corresponding labels/alignments. Moreover, we exploit an individual attention module for each separated speaker and scheduled sampling to further improve performance. Finally, we evaluate the proposed model on the 2-speaker mixed speech generated from the WSJ corpus and the wsj0-2mix dataset, which is a speech separation and recognition benchmark. The experiments demonstrate that the proposed methods can improve the performance of the end-to-end model in separating the overlapping speech and recognizing the separated streams. From the results, the proposed model leads to ~10.0% relative performance gains in terms of both CER and WER.
2104.01355 | Ermin Sakic | Ermin Sakic, Petra Vizarreta, Wolfgang Kellerer | SEER: Performance-Aware Leader Election in Single-Leader Consensus | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern stateful web services and distributed SDN controllers rely on log
replication to avoid data loss in case of fail-stop failures. In single-leader
execution, the leader replica is responsible for ordering log updates and the
initiation of distributed commits, in order to guarantee log consistency.
Network congestion, resource-heavy computation, and imbalanced resource
allocations may, however, result in inappropriate leader election and increased
cluster response times.
We present SEER, a logically centralized approach to performance prediction
and efficient leader election in leader-based consensus systems. SEER
autonomously identifies the replica that minimizes the average cluster response
time, using prediction models trained dynamically at runtime. To balance
exploration and exploitation, SEER explores replicas' performance and updates
their prediction models only after detecting significant system changes. We
evaluate SEER in a traffic management scenario comprising [3..7] Raft replicas,
and well-known data-center and WAN topologies. Compared to Raft's uniform
leader election, SEER decreases the mean control plane response time by up to
~32%. The benefit comes at the expense of a minimal adaptation of the Raft
election procedure and a slight increase in leader reconfiguration frequency,
the latter being tunable with a guaranteed upper bound. No safety properties of
Raft are invalidated by SEER.
| [
{
"created": "Sat, 3 Apr 2021 09:15:16 GMT",
"version": "v1"
}
] | 2021-04-06 | [
[
"Sakic",
"Ermin",
""
],
[
"Vizarreta",
"Petra",
""
],
[
"Kellerer",
"Wolfgang",
""
]
] | Modern stateful web services and distributed SDN controllers rely on log replication to avoid data loss in case of fail-stop failures. In single-leader execution, the leader replica is responsible for ordering log updates and the initiation of distributed commits, in order to guarantee log consistency. Network congestion, resource-heavy computation, and imbalanced resource allocations may, however, result in inappropriate leader election and increased cluster response times. We present SEER, a logically centralized approach to performance prediction and efficient leader election in leader-based consensus systems. SEER autonomously identifies the replica that minimizes the average cluster response time, using prediction models trained dynamically at runtime. To balance exploration and exploitation, SEER explores replicas' performance and updates their prediction models only after detecting significant system changes. We evaluate SEER in a traffic management scenario comprising [3..7] Raft replicas, and well-known data-center and WAN topologies. Compared to Raft's uniform leader election, SEER decreases the mean control plane response time by up to ~32%. The benefit comes at the expense of a minimal adaptation of the Raft election procedure and a slight increase in leader reconfiguration frequency, the latter being tunable with a guaranteed upper bound. No safety properties of Raft are invalidated by SEER.
1806.03639 | Ali Esswie Dr. | Ali A. Esswie, Octavia A. Dobre, and Salama Ikki | Directional Spatial Channel Estimation For Massive FD-MIMO in Next
Generation 5G Networks | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Full-dimensional (FD) channel state information at the transmitter (CSIT) has
always been a major limitation of the spectral efficiency of cellular
multi-input multi-output (MIMO) networks. This letter proposes an
FD-directional spatial channel estimation algorithm for frequency division
duplex massive FD-MIMO systems. The proposed algorithm uses the statistical
spatial correlation between the uplink (UL) and downlink (DL) channels of each
user equipment. It spatially decomposes the UL channel into azimuthal and
elevation dimensions to estimate the array principal receive responses. An FD
spatial rotation matrix is constructed to estimate the corresponding transmit
responses of the DL channel, in terms of the frequency band gap between the UL
and DL channels. The proposed algorithm shows significantly promising
performance, approaching the ideal perfect-CSIT case without UL feedback
overhead.
| [
{
"created": "Sun, 10 Jun 2018 12:02:49 GMT",
"version": "v1"
}
] | 2018-06-12 | [
[
"Esswie",
"Ali A.",
""
],
[
"Dobre",
"Octavia A.",
""
],
[
"Ikki",
"Salama",
""
]
] | Full-dimensional (FD) channel state information at the transmitter (CSIT) has always been a major limitation of the spectral efficiency of cellular multi-input multi-output (MIMO) networks. This letter proposes an FD-directional spatial channel estimation algorithm for frequency division duplex massive FD-MIMO systems. The proposed algorithm uses the statistical spatial correlation between the uplink (UL) and downlink (DL) channels of each user equipment. It spatially decomposes the UL channel into azimuthal and elevation dimensions to estimate the array principal receive responses. An FD spatial rotation matrix is constructed to estimate the corresponding transmit responses of the DL channel, in terms of the frequency band gap between the UL and DL channels. The proposed algorithm shows significantly promising performance, approaching the ideal perfect-CSIT case without UL feedback overhead.
1601.02071 | Eduardo Graells-Garrido | Eduardo Graells-Garrido, Mounia Lalmas, Ricardo Baeza-Yates | Sentiment Visualisation Widgets for Exploratory Search | Presented at the Social Personalization Workshop held jointly with
ACM Hypertext 2014. 6 pages | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes the usage of \emph{visualisation widgets} for exploratory
search with \emph{sentiment} as a facet. Starting from specific design goals
for depiction of ambivalence in sentiment, two visualization widgets were
implemented: \emph{scatter plot} and \emph{parallel coordinates}. Those widgets
were evaluated against a text baseline in a small-scale usability study with
exploratory tasks using Wikipedia as the dataset. The study results indicate that
users spend more time browsing with scatter plots in a positive way. A post-hoc
analysis of individual differences in behavior revealed that when considering
two types of users, \emph{explorers} and \emph{achievers}, engagement with
scatter plots is positive and significantly greater \textit{when users are
explorers}. We discuss the implications of these findings for sentiment-based
exploratory search and personalised user interfaces.
| [
{
"created": "Sat, 9 Jan 2016 03:48:07 GMT",
"version": "v1"
}
] | 2016-01-12 | [
[
"Graells-Garrido",
"Eduardo",
""
],
[
"Lalmas",
"Mounia",
""
],
[
"Baeza-Yates",
"Ricardo",
""
]
] | This paper proposes the usage of \emph{visualisation widgets} for exploratory search with \emph{sentiment} as a facet. Starting from specific design goals for depiction of ambivalence in sentiment, two visualization widgets were implemented: \emph{scatter plot} and \emph{parallel coordinates}. Those widgets were evaluated against a text baseline in a small-scale usability study with exploratory tasks using Wikipedia as the dataset. The study results indicate that users spend more time browsing with scatter plots in a positive way. A post-hoc analysis of individual differences in behavior revealed that when considering two types of users, \emph{explorers} and \emph{achievers}, engagement with scatter plots is positive and significantly greater \textit{when users are explorers}. We discuss the implications of these findings for sentiment-based exploratory search and personalised user interfaces.
1702.01695 | EPTCS | Dan R Ghica (University of Birmingham), Aliaume Lopez (ENS Cachan) | A Structural and Nominal Syntax for Diagrams | In Proceedings QPL 2017, arXiv:1802.09737 | EPTCS 266, 2018, pp. 71-83 | 10.4204/EPTCS.266.4 | null | cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The correspondence between monoidal categories and graphical languages of
diagrams has been studied extensively, leading to applications in quantum
computing and communication, systems theory, circuit design and more. From the
categorical perspective, diagrams can be specified using (name-free)
combinators which enjoy elegant equational properties. However, conventional
notations for diagrammatic structures, such as hardware description languages
(VHDL, Verilog) or graph languages (Dot), use a different style, which is flat,
relational, and reliant on extensive use of names (labels). Such languages are
not known to enjoy nice syntactic equational properties. However, since they
make it relatively easy to specify (and modify) arbitrary diagrammatic
structures they are more popular than the combinator style. In this paper we
show how the two approaches to diagram syntax can be reconciled and unified in
a way that does not change the semantics and the existing equational theory.
Additionally, we give sound and complete equational theories for the combined
syntax.
| [
{
"created": "Mon, 6 Feb 2017 16:40:00 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Mar 2018 03:45:32 GMT",
"version": "v2"
}
] | 2018-03-05 | [
[
"Ghica",
"Dan R",
"",
"University of Birmingham"
],
[
"Lopez",
"Aliaume",
"",
"ENS Cachan"
]
] | The correspondence between monoidal categories and graphical languages of diagrams has been studied extensively, leading to applications in quantum computing and communication, systems theory, circuit design and more. From the categorical perspective, diagrams can be specified using (name-free) combinators which enjoy elegant equational properties. However, conventional notations for diagrammatic structures, such as hardware description languages (VHDL, Verilog) or graph languages (Dot), use a different style, which is flat, relational, and reliant on extensive use of names (labels). Such languages are not known to enjoy nice syntactic equational properties. However, since they make it relatively easy to specify (and modify) arbitrary diagrammatic structures they are more popular than the combinator style. In this paper we show how the two approaches to diagram syntax can be reconciled and unified in a way that does not change the semantics and the existing equational theory. Additionally, we give sound and complete equational theories for the combined syntax. |
2308.11300 | Thomas O'Connell | Thomas P. O'Connell, Tyler Bonnen, Yoni Friedman, Ayush Tewari, Josh
B. Tenenbaum, Vincent Sitzmann, Nancy Kanwisher | Approaching human 3D shape perception with neurally mappable models | null | null | null | null | cs.CV cs.GT | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Humans effortlessly infer the 3D shape of objects. What computations underlie
this ability? Although various computational models have been proposed, none of
them capture the human ability to match object shape across viewpoints. Here,
we ask whether and how this gap might be closed. We begin with a relatively
novel class of computational models, 3D neural fields, which encapsulate the
basic principles of classic analysis-by-synthesis in a deep neural network
(DNN). First, we find that a 3D Light Field Network (3D-LFN) supports 3D
matching judgments well aligned to humans for within-category comparisons,
adversarially-defined comparisons that accentuate the 3D failure cases of
standard DNN models, and adversarially-defined comparisons for algorithmically
generated shapes with no category structure. We then investigate the source of
the 3D-LFN's ability to achieve human-aligned performance through a series of
computational experiments. Exposure to multiple viewpoints of objects during
training and a multi-view learning objective are the primary factors behind
model-human alignment; even conventional DNN architectures come much closer to
human behavior when trained with multi-view objectives. Finally, we find that
while the models trained with multi-view learning objectives are able to
partially generalize to new object categories, they fall short of human
alignment. This work provides a foundation for understanding human shape
inferences within neurally mappable computational architectures.
| [
{
"created": "Tue, 22 Aug 2023 09:29:05 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Sep 2023 21:18:15 GMT",
"version": "v2"
}
] | 2023-09-11 | [
[
"O'Connell",
"Thomas P.",
""
],
[
"Bonnen",
"Tyler",
""
],
[
"Friedman",
"Yoni",
""
],
[
"Tewari",
"Ayush",
""
],
[
"Tenenbaum",
"Josh B.",
""
],
[
"Sitzmann",
"Vincent",
""
],
[
"Kanwisher",
"Nancy",
""
]
] | Humans effortlessly infer the 3D shape of objects. What computations underlie this ability? Although various computational models have been proposed, none of them capture the human ability to match object shape across viewpoints. Here, we ask whether and how this gap might be closed. We begin with a relatively novel class of computational models, 3D neural fields, which encapsulate the basic principles of classic analysis-by-synthesis in a deep neural network (DNN). First, we find that a 3D Light Field Network (3D-LFN) supports 3D matching judgments well aligned to humans for within-category comparisons, adversarially-defined comparisons that accentuate the 3D failure cases of standard DNN models, and adversarially-defined comparisons for algorithmically generated shapes with no category structure. We then investigate the source of the 3D-LFN's ability to achieve human-aligned performance through a series of computational experiments. Exposure to multiple viewpoints of objects during training and a multi-view learning objective are the primary factors behind model-human alignment; even conventional DNN architectures come much closer to human behavior when trained with multi-view objectives. Finally, we find that while the models trained with multi-view learning objectives are able to partially generalize to new object categories, they fall short of human alignment. This work provides a foundation for understanding human shape inferences within neurally mappable computational architectures. |
1511.04808 | Mengyi Liu | Mengyi Liu, Ruiping Wang, Shiguang Shan, Xilin Chen | Learning Mid-level Words on Riemannian Manifold for Action Recognition | 10 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human action recognition remains a challenging task due to the various
sources of video data and large intra-class variations. It thus becomes one of
the key issues in recent research to explore effective and robust
representation to handle such challenges. In this paper, we propose a novel
representation approach by constructing mid-level words in videos and encoding
them on a Riemannian manifold. Specifically, we first conduct a global alignment
on the densely extracted low-level features to build a bank of corresponding
feature groups, each of which can be statistically modeled as a mid-level word
lying on some specific Riemannian manifold. Based on these mid-level words, we
construct intrinsic Riemannian codebooks by employing K-Karcher-means
clustering and Riemannian Gaussian Mixture Model, and consequently extend the
Riemannian manifold version of three well studied encoding methods in Euclidean
space, i.e. Bag of Visual Words (BoVW), Vector of Locally Aggregated
Descriptors (VLAD), and Fisher Vector (FV), to obtain the final action video
representations. Our method is evaluated in two tasks on four popular realistic
datasets: action recognition on YouTube, UCF50, HMDB51 databases, and action
similarity labeling on ASLAN database. In all cases, the reported results
achieve very competitive performance with those most recent state-of-the-art
works.
| [
{
"created": "Mon, 16 Nov 2015 03:18:06 GMT",
"version": "v1"
}
] | 2015-11-17 | [
[
"Liu",
"Mengyi",
""
],
[
"Wang",
"Ruiping",
""
],
[
"Shan",
"Shiguang",
""
],
[
"Chen",
"Xilin",
""
]
] | Human action recognition remains a challenging task due to the various sources of video data and large intra-class variations. It thus becomes one of the key issues in recent research to explore effective and robust representation to handle such challenges. In this paper, we propose a novel representation approach by constructing mid-level words in videos and encoding them on a Riemannian manifold. Specifically, we first conduct a global alignment on the densely extracted low-level features to build a bank of corresponding feature groups, each of which can be statistically modeled as a mid-level word lying on some specific Riemannian manifold. Based on these mid-level words, we construct intrinsic Riemannian codebooks by employing K-Karcher-means clustering and Riemannian Gaussian Mixture Model, and consequently extend the Riemannian manifold version of three well-studied encoding methods in Euclidean space, i.e. Bag of Visual Words (BoVW), Vector of Locally Aggregated Descriptors (VLAD), and Fisher Vector (FV), to obtain the final action video representations. Our method is evaluated in two tasks on four popular realistic datasets: action recognition on YouTube, UCF50, HMDB51 databases, and action similarity labeling on ASLAN database. In all cases, the reported results achieve very competitive performance with those most recent state-of-the-art works.
2407.08227 | Joaquim Jorge | Chihcheng Hsieh, Catarina Moreira, Isabel Blanco Nobre, Sandra Costa
Sousa, Chun Ouyang, Margot Brereton, Joaquim Jorge and Jacinto C. Nascimento | DALL-M: Context-Aware Clinical Data Augmentation with LLMs | we introduce a pioneering approach to clinical data augmentation that
employs large language models (LLMs) to generate patient contextual synthetic
data. It preserves the integrity of real patient data while enriching the
dataset with contextually relevant synthetic features, significantly
enhancing model performance | null | null | null | cs.AI cs.IR cs.LG | http://creativecommons.org/licenses/by/4.0/ | X-ray images are vital in medical diagnostics, but their effectiveness is
limited without clinical context. Radiologists often find chest X-rays
insufficient for diagnosing underlying diseases, necessitating comprehensive
clinical features and data integration. We present a novel technique that
enhances the clinical context through augmentation with clinical tabular data,
thereby improving applicability and reliability in AI medical
diagnostics. To address this, we introduce a pioneering approach to clinical
data augmentation that employs large language models (LLMs) to generate patient
contextual synthetic data. This methodology is crucial for training more robust
deep learning models in healthcare. It preserves the integrity of real patient
data while enriching the dataset with contextually relevant synthetic features,
significantly enhancing model performance. DALL-M uses a three-phase feature
generation process: (i) clinical context storage, (ii) expert query generation,
and (iii) context-aware feature augmentation. DALL-M generates new, clinically
relevant features by synthesizing chest X-ray images and reports. Applied to
799 cases using nine features from the MIMIC-IV dataset, it created an
augmented set of 91 features. This is the first work to generate contextual
values for existing and new features based on patients' X-ray reports, gender,
and age and to produce new contextual knowledge during data augmentation.
Empirical validation with machine learning models, including Decision Trees,
Random Forests, XGBoost, and TabNET, showed significant performance
improvements. Incorporating augmented features increased the F1 score by 16.5%
and Precision and Recall by approximately 25%. DALL-M addresses a critical gap
in clinical data augmentation, offering a robust framework for generating
contextually enriched datasets.
| [
{
"created": "Thu, 11 Jul 2024 07:01:50 GMT",
"version": "v1"
}
] | 2024-07-12 | [
[
"Hsieh",
"Chihcheng",
""
],
[
"Moreira",
"Catarina",
""
],
[
"Nobre",
"Isabel Blanco",
""
],
[
"Sousa",
"Sandra Costa",
""
],
[
"Ouyang",
"Chun",
""
],
[
"Brereton",
"Margot",
""
],
[
"Jorge",
"Joaquim",
""
],
[
"Nascimento",
"Jacinto C.",
""
]
] | X-ray images are vital in medical diagnostics, but their effectiveness is limited without clinical context. Radiologists often find chest X-rays insufficient for diagnosing underlying diseases, necessitating comprehensive clinical features and data integration. We present a novel technique that enhances the clinical context through augmentation with clinical tabular data, thereby improving applicability and reliability in AI medical diagnostics. To address this, we introduce a pioneering approach to clinical data augmentation that employs large language models (LLMs) to generate patient contextual synthetic data. This methodology is crucial for training more robust deep learning models in healthcare. It preserves the integrity of real patient data while enriching the dataset with contextually relevant synthetic features, significantly enhancing model performance. DALL-M uses a three-phase feature generation process: (i) clinical context storage, (ii) expert query generation, and (iii) context-aware feature augmentation. DALL-M generates new, clinically relevant features by synthesizing chest X-ray images and reports. Applied to 799 cases using nine features from the MIMIC-IV dataset, it created an augmented set of 91 features. This is the first work to generate contextual values for existing and new features based on patients' X-ray reports, gender, and age and to produce new contextual knowledge during data augmentation. Empirical validation with machine learning models, including Decision Trees, Random Forests, XGBoost, and TabNET, showed significant performance improvements. Incorporating augmented features increased the F1 score by 16.5% and Precision and Recall by approximately 25%. DALL-M addresses a critical gap in clinical data augmentation, offering a robust framework for generating contextually enriched datasets.
2204.10245 | Ping Li | Jinxing Yu, Yunfeng Cai, Mingming Sun, Ping Li | SpaceE: Knowledge Graph Embedding by Relational Linear Transformation in
the Entity Space | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Translation distance based knowledge graph embedding (KGE) methods, such as
TransE and RotatE, model the relation in knowledge graphs as translation or
rotation in the vector space. Both translation and rotation are injective; that
is, the translation or rotation of different vectors yields different
results. In knowledge graphs, different entities may have a relation with the
same entity; for example, many actors starred in one movie. Such a
non-injective relation pattern cannot be well modeled by the translation or
rotation operations in existing translation distance based KGE methods. To
tackle the challenge, we propose a translation distance-based KGE method called
SpaceE to model relations as linear transformations. The proposed SpaceE embeds
both entities and relations in knowledge graphs as matrices and SpaceE
naturally models non-injective relations with singular linear transformations.
We theoretically demonstrate that SpaceE is a fully expressive model with the
ability to infer multiple desired relation patterns, including symmetry,
skew-symmetry, inversion, Abelian composition, and non-Abelian composition.
Experimental results on link prediction datasets illustrate that SpaceE
substantially outperforms many previous translation distance based knowledge
graph embedding methods, especially on datasets with many non-injective
relations. The code is available based on the PaddlePaddle deep learning
platform https://www.paddlepaddle.org.cn.
| [
{
"created": "Thu, 21 Apr 2022 16:26:20 GMT",
"version": "v1"
}
] | 2022-04-22 | [
[
"Yu",
"Jinxing",
""
],
[
"Cai",
"Yunfeng",
""
],
[
"Sun",
"Mingming",
""
],
[
"Li",
"Ping",
""
]
] | Translation distance based knowledge graph embedding (KGE) methods, such as TransE and RotatE, model the relation in knowledge graphs as translation or rotation in the vector space. Both translation and rotation are injective; that is, the translation or rotation of different vectors results in different results. In knowledge graphs, different entities may have a relation with the same entity; for example, many actors starred in one movie. Such a non-injective relation pattern cannot be well modeled by the translation or rotation operations in existing translation distance based KGE methods. To tackle the challenge, we propose a translation distance-based KGE method called SpaceE to model relations as linear transformations. The proposed SpaceE embeds both entities and relations in knowledge graphs as matrices and SpaceE naturally models non-injective relations with singular linear transformations. We theoretically demonstrate that SpaceE is a fully expressive model with the ability to infer multiple desired relation patterns, including symmetry, skew-symmetry, inversion, Abelian composition, and non-Abelian composition. Experimental results on link prediction datasets illustrate that SpaceE substantially outperforms many previous translation distance based knowledge graph embedding methods, especially on datasets with many non-injective relations. The code is available based on the PaddlePaddle deep learning platform https://www.paddlepaddle.org.cn. |
2207.13988 | Matej Ul\v{c}ar | Matej Ul\v{c}ar, Marko Robnik-\v{S}ikonja | Sequence to sequence pretraining for a less-resourced Slovenian language | 19 pages | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Large pretrained language models have recently conquered the area of natural
language processing. As an alternative to predominant masked language modelling
introduced in BERT, the T5 model has introduced a more general training
objective, namely sequence to sequence transformation, which includes masked
language model but more naturally fits text generation tasks such as machine
translation, summarization, question answering, text simplification, dialogue
systems, etc. The monolingual variants of T5 models have been limited to
well-resourced languages, while the massively multilingual T5 model supports
101 languages. In contrast, we trained two different sized T5-type sequence to
sequence models for morphologically rich Slovene language with much less
resources and analyzed their behavior on 11 tasks. Concerning classification
tasks, the SloT5 models mostly lag behind the monolingual Slovene SloBERTa
model but are useful for the generative tasks.
| [
{
"created": "Thu, 28 Jul 2022 10:08:50 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Jan 2023 15:57:04 GMT",
"version": "v2"
}
] | 2023-01-03 | [
[
"Ulčar",
"Matej",
""
],
[
"Robnik-Šikonja",
"Marko",
""
]
] | Large pretrained language models have recently conquered the area of natural language processing. As an alternative to predominant masked language modelling introduced in BERT, the T5 model has introduced a more general training objective, namely sequence to sequence transformation, which includes masked language model but more naturally fits text generation tasks such as machine translation, summarization, question answering, text simplification, dialogue systems, etc. The monolingual variants of T5 models have been limited to well-resourced languages, while the massively multilingual T5 model supports 101 languages. In contrast, we trained two different sized T5-type sequence to sequence models for morphologically rich Slovene language with much less resources and analyzed their behavior on 11 tasks. Concerning classification tasks, the SloT5 models mostly lag behind the monolingual Slovene SloBERTa model but are useful for the generative tasks. |
1408.2034 | Vicenc Gomez | Vicenc Gomez, Hilbert Kappen, Michael Chertkov | Approximate inference on planar graphs using Loop Calculus and Belief
Propagation | Appears in Proceedings of the Twenty-Fifth Conference on Uncertainty
in Artificial Intelligence (UAI2009) | null | null | UAI-P-2009-PG-195-202 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce novel results for approximate inference on planar graphical
models using the loop calculus framework. The loop calculus (Chertkov and
Chernyak, 2006b) allows to express the exact partition function Z of a
graphical model as a finite sum of terms that can be evaluated once the belief
propagation (BP) solution is known. In general, full summation over all
correction terms is intractable. We develop an algorithm for the approach
presented in Chertkov et al. (2008) which represents an efficient truncation
scheme on planar graphs and a new representation of the series in terms of
Pfaffians of matrices. We analyze in detail both the loop series and the
Pfaffian series for models with binary variables and pairwise interactions, and
show that the first term of the Pfaffian series can provide very accurate
approximations. The algorithm outperforms previous truncation schemes of the
loop series and is competitive with other state-of-the-art methods for
approximate inference.
| [
{
"created": "Sat, 9 Aug 2014 05:28:26 GMT",
"version": "v1"
}
] | 2014-08-12 | [
[
"Gomez",
"Vicenc",
""
],
[
"Kappen",
"Hilbert",
""
],
[
"Chertkov",
"Michael",
""
]
] | We introduce novel results for approximate inference on planar graphical models using the loop calculus framework. The loop calculus (Chertkov and Chernyak, 2006b) allows to express the exact partition function Z of a graphical model as a finite sum of terms that can be evaluated once the belief propagation (BP) solution is known. In general, full summation over all correction terms is intractable. We develop an algorithm for the approach presented in Chertkov et al. (2008) which represents an efficient truncation scheme on planar graphs and a new representation of the series in terms of Pfaffians of matrices. We analyze in detail both the loop series and the Pfaffian series for models with binary variables and pairwise interactions, and show that the first term of the Pfaffian series can provide very accurate approximations. The algorithm outperforms previous truncation schemes of the loop series and is competitive with other state-of-the-art methods for approximate inference. |
0904.0313 | Petar Kormushev | Petar Kormushev | Visual approach for data mining on medical information databases using
Fastmap algorithm | Master's Thesis in Bio- and Medical Informatics, 76 pages, in
Bulgarian. Submitted to Faculty of Mathematics and Informatics, Sofia
University, 2006 | null | null | null | cs.IR cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid development of tools for acquisition and storage of information has
led to the formation of enormous medical databases. The large quantity of data
definitely surpasses the abilities of humans for efficient usage without
specialized tools for analysis. The situation is described as rich in data, but
poor in information. In order to fill this growing gap, different approaches
from the field of Data Mining are applied. These methods perform analysis of
large sets of observed data in order to find new dependencies or concise
representation of the data, which is more meaningful to humans. One of the
possible approaches for discovery of dependencies is the visual approach, in
which data is processed and visualized in a way suitable for analysis by a
domain expert. This work proposes a visual approach, in which data is processed
and visualized in a way suitable for analysis by a domain expert. We design and
implement a software solution for visualization of multi-dimensional,
classified medical data using the FastMap algorithm for graduate reduction of
dimensions. The implementation of the graphical user interface is described in
detail since it is the most important factor for the ease of use of these tools
by non-professionals in data mining.
| [
{
"created": "Thu, 2 Apr 2009 06:14:42 GMT",
"version": "v1"
}
] | 2009-04-03 | [
[
"Kormushev",
"Petar",
""
]
] | The rapid development of tools for acquisition and storage of information has led to the formation of enormous medical databases. The large quantity of data definitely surpasses the abilities of humans for efficient usage without specialized tools for analysis. The situation is described as rich in data, but poor in information. In order to fill this growing gap, different approaches from the field of Data Mining are applied. These methods perform analysis of large sets of observed data in order to find new dependencies or concise representation of the data, which is more meaningful to humans. One of the possible approaches for discovery of dependencies is the visual approach, in which data is processed and visualized in a way suitable for analysis by a domain expert. This work proposes a visual approach, in which data is processed and visualized in a way suitable for analysis by a domain expert. We design and implement a software solution for visualization of multi-dimensional, classified medical data using the FastMap algorithm for graduate reduction of dimensions. The implementation of the graphical user interface is described in detail since it is the most important factor for the ease of use of these tools by non-professionals in data mining. |
2307.09642 | Wei-Lun Huang | Wei-Lun Huang, Davood Tashayyod, Jun Kang, Amir Gandjbakhche, Michael
Kazhdan, Mehran Armand | Skin Lesion Correspondence Localization in Total Body Photography | MICCAI-2023 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Longitudinal tracking of skin lesions - finding correspondence, changes in
morphology, and texture - is beneficial to the early detection of melanoma.
However, it has not been well investigated in the context of full-body imaging.
We propose a novel framework combining geometric and texture information to
localize skin lesion correspondence from a source scan to a target scan in
total body photography (TBP). Body landmarks or sparse correspondence are first
created on the source and target 3D textured meshes. Every vertex on each of
the meshes is then mapped to a feature vector characterizing the geodesic
distances to the landmarks on that mesh. Then, for each lesion of interest
(LOI) on the source, its corresponding location on the target is first coarsely
estimated using the geometric information encoded in the feature vectors and
then refined using the texture information. We evaluated the framework
quantitatively on both a public and a private dataset, for which our success
rates (at 10 mm criterion) are comparable to the only reported longitudinal
study. As full-body 3D capture becomes more prevalent and has higher quality,
we expect the proposed method to constitute a valuable step in the longitudinal
tracking of skin lesions.
| [
{
"created": "Tue, 18 Jul 2023 21:10:59 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Aug 2023 17:19:34 GMT",
"version": "v2"
}
] | 2023-08-23 | [
[
"Huang",
"Wei-Lun",
""
],
[
"Tashayyod",
"Davood",
""
],
[
"Kang",
"Jun",
""
],
[
"Gandjbakhche",
"Amir",
""
],
[
"Kazhdan",
"Michael",
""
],
[
"Armand",
"Mehran",
""
]
] | Longitudinal tracking of skin lesions - finding correspondence, changes in morphology, and texture - is beneficial to the early detection of melanoma. However, it has not been well investigated in the context of full-body imaging. We propose a novel framework combining geometric and texture information to localize skin lesion correspondence from a source scan to a target scan in total body photography (TBP). Body landmarks or sparse correspondence are first created on the source and target 3D textured meshes. Every vertex on each of the meshes is then mapped to a feature vector characterizing the geodesic distances to the landmarks on that mesh. Then, for each lesion of interest (LOI) on the source, its corresponding location on the target is first coarsely estimated using the geometric information encoded in the feature vectors and then refined using the texture information. We evaluated the framework quantitatively on both a public and a private dataset, for which our success rates (at 10 mm criterion) are comparable to the only reported longitudinal study. As full-body 3D capture becomes more prevalent and has higher quality, we expect the proposed method to constitute a valuable step in the longitudinal tracking of skin lesions. |
1810.06659 | Yifan Wang | Yifan Wang, Shaoshan Liu, Xiaopei Wu, Weisong Shi | CAVBench: A Benchmark Suite for Connected and Autonomous Vehicles | 13 pages, The Third ACM/IEEE Symposium on Edge Computing 2018 SEC | null | null | null | cs.DC cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Connected and autonomous vehicles (CAVs) have recently attracted a
significant amount of attention both from researchers and industry. Numerous
studies targeting algorithms, software frameworks, and applications on the CAVs
scenario have emerged. Meanwhile, several pioneer efforts have focused on the
edge computing system and architecture design for the CAVs scenario and
provided various heterogeneous platform prototypes for CAVs. However, a
standard and comprehensive application benchmark for CAVs is missing, hindering
the study of these emerging computing systems. To address this challenging
problem, we present CAVBench, the first benchmark suite for the edge computing
system in the CAVs scenario. CAVBench is comprised of six typical applications
covering four dominant CAVs scenarios and takes four datasets as standard
input. CAVBench provides quantitative evaluation results via application and
system perspective output metrics. We perform a series of experiments and
acquire three systemic characteristics of the applications in CAVBench. First,
the operation intensity of the applications is polarized, which explains why
heterogeneous hardware is important for a CAVs computing system. Second, all
applications in CAVBench consume high memory bandwidth, so the system should be
equipped with high bandwidth memory or leverage good memory bandwidth
management to avoid the performance degradation caused by memory bandwidth
competition. Third, some applications have worse data/instruction locality
based on the cache miss observation, so the computing system targeting these
applications should optimize the cache architecture. Last, we use the CAVBench
to evaluate a typical edge computing platform and present the quantitative and
qualitative analysis of the benchmarking results.
| [
{
"created": "Mon, 15 Oct 2018 20:07:33 GMT",
"version": "v1"
}
] | 2018-10-17 | [
[
"Wang",
"Yifan",
""
],
[
"Liu",
"Shaoshan",
""
],
[
"Wu",
"Xiaopei",
""
],
[
"Shi",
"Weisong",
""
]
] | Connected and autonomous vehicles (CAVs) have recently attracted a significant amount of attention both from researchers and industry. Numerous studies targeting algorithms, software frameworks, and applications on the CAVs scenario have emerged. Meanwhile, several pioneer efforts have focused on the edge computing system and architecture design for the CAVs scenario and provided various heterogeneous platform prototypes for CAVs. However, a standard and comprehensive application benchmark for CAVs is missing, hindering the study of these emerging computing systems. To address this challenging problem, we present CAVBench, the first benchmark suite for the edge computing system in the CAVs scenario. CAVBench is comprised of six typical applications covering four dominant CAVs scenarios and takes four datasets as standard input. CAVBench provides quantitative evaluation results via application and system perspective output metrics. We perform a series of experiments and acquire three systemic characteristics of the applications in CAVBench. First, the operation intensity of the applications is polarized, which explains why heterogeneous hardware is important for a CAVs computing system. Second, all applications in CAVBench consume high memory bandwidth, so the system should be equipped with high bandwidth memory or leverage good memory bandwidth management to avoid the performance degradation caused by memory bandwidth competition. Third, some applications have worse data/instruction locality based on the cache miss observation, so the computing system targeting these applications should optimize the cache architecture. Last, we use the CAVBench to evaluate a typical edge computing platform and present the quantitative and qualitative analysis of the benchmarking results. |
1801.05086 | Hung La | Huy X. Pham, Hung M. La, David Feil-Seifer, Luan V. Nguyen | Autonomous UAV Navigation Using Reinforcement Learning | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unmanned aerial vehicles (UAV) are commonly used for missions in unknown
environments, where an exact mathematical model of the environment may not be
available. This paper provides a framework for using reinforcement learning to
allow the UAV to navigate successfully in such environments. We conducted our
simulation and real implementation to show how the UAVs can successfully learn
to navigate through an unknown environment. Technical aspects regarding to
applying reinforcement learning algorithm to a UAV system and UAV flight
control were also addressed. This will enable continuing research using a UAV
with learning capabilities in more important applications, such as wildfire
monitoring, or search and rescue missions.
| [
{
"created": "Tue, 16 Jan 2018 01:14:12 GMT",
"version": "v1"
}
] | 2018-01-17 | [
[
"Pham",
"Huy X.",
""
],
[
"La",
"Hung M.",
""
],
[
"Feil-Seifer",
"David",
""
],
[
"Nguyen",
"Luan V.",
""
]
] | Unmanned aerial vehicles (UAV) are commonly used for missions in unknown environments, where an exact mathematical model of the environment may not be available. This paper provides a framework for using reinforcement learning to allow the UAV to navigate successfully in such environments. We conducted our simulation and real implementation to show how the UAVs can successfully learn to navigate through an unknown environment. Technical aspects regarding to applying reinforcement learning algorithm to a UAV system and UAV flight control were also addressed. This will enable continuing research using a UAV with learning capabilities in more important applications, such as wildfire monitoring, or search and rescue missions. |
1910.04392 | Yonglin Tian | Yonglin Tian, Kunfeng Wang, Yuang Wang, Yulin Tian, Zilei Wang,
Fei-Yue Wang | Adaptive and Azimuth-Aware Fusion Network of Multimodal Local Features
for 3D Object Detection | Accepted by Neurocomputing | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper focuses on the construction of stronger local features and the
effective fusion of image and LiDAR data. We adopt different modalities of
LiDAR data to generate richer features and present an adaptive and
azimuth-aware network to aggregate local features from image, bird's eye view
maps and point cloud. Our network mainly consists of three subnetworks: ground
plane estimation network, region proposal network and adaptive fusion network.
The ground plane estimation network extracts features of point cloud and
predicts the parameters of a plane which are used for generating abundant 3D
anchors. The region proposal network generates features of image and bird's eye
view maps to output region proposals. To integrate heterogeneous image and
point cloud features, the adaptive fusion network explicitly adjusts the
intensity of multiple local features and achieves the orientation consistency
between image and LiDAR data by introducing an azimuth-aware fusion module.
Experiments are conducted on KITTI dataset and the results validate the
advantages of our aggregation of multimodal local features and the adaptive
fusion network.
| [
{
"created": "Thu, 10 Oct 2019 07:07:01 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Jun 2020 10:13:08 GMT",
"version": "v2"
}
] | 2020-06-04 | [
[
"Tian",
"Yonglin",
""
],
[
"Wang",
"Kunfeng",
""
],
[
"Wang",
"Yuang",
""
],
[
"Tian",
"Yulin",
""
],
[
"Wang",
"Zilei",
""
],
[
"Wang",
"Fei-Yue",
""
]
] | This paper focuses on the construction of stronger local features and the effective fusion of image and LiDAR data. We adopt different modalities of LiDAR data to generate richer features and present an adaptive and azimuth-aware network to aggregate local features from image, bird's eye view maps and point cloud. Our network mainly consists of three subnetworks: ground plane estimation network, region proposal network and adaptive fusion network. The ground plane estimation network extracts features of point cloud and predicts the parameters of a plane which are used for generating abundant 3D anchors. The region proposal network generates features of image and bird's eye view maps to output region proposals. To integrate heterogeneous image and point cloud features, the adaptive fusion network explicitly adjusts the intensity of multiple local features and achieves the orientation consistency between image and LiDAR data by introducing an azimuth-aware fusion module. Experiments are conducted on KITTI dataset and the results validate the advantages of our aggregation of multimodal local features and the adaptive fusion network. |
cs/0003022 | Horacio Arlo-Costa | Horacio Arlo-Costa | Hypothetical revision and matter-of-fact supposition | 9 pages. Presented at the Special Session on Belief change: theory
and practice of the 8th Intl. Workshop on Non-Monotonic Reasoning NMR'2000 | null | null | null | cs.AI cs.CL | null | The paper studies the notion of supposition encoded in non-Archimedean
conditional probability (and revealed in the acceptance of the so-called
indicative conditionals). The notion of qualitative change of view that thus
arises is axiomatized and compared with standard notions like AGM and UPDATE.
Applications in the following fields are discussed: (1) theory of games and
decisions, (2) causal models, (3) non-monotonic logic.
| [
{
"created": "Wed, 8 Mar 2000 16:06:58 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Arlo-Costa",
"Horacio",
""
]
] | The paper studies the notion of supposition encoded in non-Archimedean conditional probability (and revealed in the acceptance of the so-called indicative conditionals). The notion of qualitative change of view that thus arises is axiomatized and compared with standard notions like AGM and UPDATE. Applications in the following fields are discussed: (1) theory of games and decisions, (2) causal models, (3) non-monotonic logic. |
1902.10191 | Jie Chen | Aldo Pareja, Giacomo Domeniconi, Jie Chen, Tengfei Ma, Toyotaro
Suzumura, Hiroki Kanezashi, Tim Kaler, Tao B. Schardl, Charles E. Leiserson | EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs | AAAI 2020. The code is available at https://github.com/IBM/EvolveGCN | null | null | null | cs.LG cs.SI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph representation learning resurges as a trending research subject owing
to the widespread use of deep learning for Euclidean data, which inspire
various creative designs of neural networks in the non-Euclidean domain,
particularly graphs. With the success of these graph neural networks (GNN) in
the static setting, we approach further practical scenarios where the graph
dynamically evolves. Existing approaches typically resort to node embeddings
and use a recurrent neural network (RNN, broadly speaking) to regulate the
embeddings and learn the temporal dynamics. These methods require the knowledge
of a node in the full time span (including both training and testing) and are
less applicable to the frequent change of the node set. In some extreme
scenarios, the node sets at different time steps may completely differ. To
resolve this challenge, we propose EvolveGCN, which adapts the graph
convolutional network (GCN) model along the temporal dimension without
resorting to node embeddings. The proposed approach captures the dynamism of
the graph sequence through using an RNN to evolve the GCN parameters. Two
architectures are considered for the parameter evolution. We evaluate the
proposed approach on tasks including link prediction, edge classification, and
node classification. The experimental results indicate a generally higher
performance of EvolveGCN compared with related approaches. The code is
available at \url{https://github.com/IBM/EvolveGCN}.
| [
{
"created": "Tue, 26 Feb 2019 20:07:34 GMT",
"version": "v1"
},
{
"created": "Sun, 8 Sep 2019 03:33:46 GMT",
"version": "v2"
},
{
"created": "Mon, 18 Nov 2019 18:42:50 GMT",
"version": "v3"
}
] | 2019-11-19 | [
[
"Pareja",
"Aldo",
""
],
[
"Domeniconi",
"Giacomo",
""
],
[
"Chen",
"Jie",
""
],
[
"Ma",
"Tengfei",
""
],
[
"Suzumura",
"Toyotaro",
""
],
[
"Kanezashi",
"Hiroki",
""
],
[
"Kaler",
"Tim",
""
],
[
"Schardl",
"Tao B.",
""
],
[
"Leiserson",
"Charles E.",
""
]
] | Graph representation learning resurges as a trending research subject owing to the widespread use of deep learning for Euclidean data, which inspire various creative designs of neural networks in the non-Euclidean domain, particularly graphs. With the success of these graph neural networks (GNN) in the static setting, we approach further practical scenarios where the graph dynamically evolves. Existing approaches typically resort to node embeddings and use a recurrent neural network (RNN, broadly speaking) to regulate the embeddings and learn the temporal dynamics. These methods require the knowledge of a node in the full time span (including both training and testing) and are less applicable to the frequent change of the node set. In some extreme scenarios, the node sets at different time steps may completely differ. To resolve this challenge, we propose EvolveGCN, which adapts the graph convolutional network (GCN) model along the temporal dimension without resorting to node embeddings. The proposed approach captures the dynamism of the graph sequence through using an RNN to evolve the GCN parameters. Two architectures are considered for the parameter evolution. We evaluate the proposed approach on tasks including link prediction, edge classification, and node classification. The experimental results indicate a generally higher performance of EvolveGCN compared with related approaches. The code is available at \url{https://github.com/IBM/EvolveGCN}. |
1808.07198 | Wei Li | Min Chen, Wei Li, Giancarlo Fortino, Yixue Hao, Long Hu, and Iztok
Humar | A Dynamic Service-Migration Mechanism in Edge Cognitive Computing | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Driven by the vision of edge computing and the success of rich cognitive
services based on artificial intelligence, a new computing paradigm, edge
cognitive computing (ECC), is a promising approach that applies cognitive
computing at the edge of the network. ECC has the potential to provide the
cognition of users and network environmental information, and further to
provide elastic cognitive computing services to achieve a higher energy
efficiency and a higher Quality of Experience (QoE) compared to edge computing.
This paper firstly introduces our architecture of the ECC and then describes
its design issues in detail. Moreover, we propose an ECC-based dynamic service
migration mechanism to provide an insight into how cognitive computing is
combined with edge computing. In order to evaluate the proposed mechanism, a
practical platform for dynamic service migration is built up, where the
services are migrated based on the behavioral cognition of a mobile user. The
experimental results show that the proposed ECC architecture has ultra-low
latency and a high user experience, while providing better service to the user,
saving computing resources, and achieving a high energy efficiency.
| [
{
"created": "Wed, 22 Aug 2018 03:07:27 GMT",
"version": "v1"
}
] | 2018-08-23 | [
[
"Chen",
"Min",
""
],
[
"Li",
"Wei",
""
],
[
"Fortino",
"Giancarlo",
""
],
[
"Hao",
"Yixue",
""
],
[
"Hu",
"Long",
""
],
[
"Humar",
"Iztok",
""
]
] | Driven by the vision of edge computing and the success of rich cognitive services based on artificial intelligence, a new computing paradigm, edge cognitive computing (ECC), is a promising approach that applies cognitive computing at the edge of the network. ECC has the potential to provide the cognition of users and network environmental information, and further to provide elastic cognitive computing services to achieve a higher energy efficiency and a higher Quality of Experience (QoE) compared to edge computing. This paper firstly introduces our architecture of the ECC and then describes its design issues in detail. Moreover, we propose an ECC-based dynamic service migration mechanism to provide an insight into how cognitive computing is combined with edge computing. In order to evaluate the proposed mechanism, a practical platform for dynamic service migration is built up, where the services are migrated based on the behavioral cognition of a mobile user. The experimental results show that the proposed ECC architecture has ultra-low latency and a high user experience, while providing better service to the user, saving computing resources, and achieving a high energy efficiency. |
2209.06257 | Teddy Lazebnik Dr. | Liron Simon Keren, Alex Liberzon, Teddy Lazebnik | A computational framework for physics-informed symbolic regression with
straightforward integration of domain knowledge | null | Sci Rep 13, 1249 (2023) | 10.1038/s41598-023-28328-2 | null | cs.LG cs.CE cs.HC cs.IR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Discovering a meaningful symbolic expression that explains experimental data
is a fundamental challenge in many scientific fields. We present a novel,
open-source computational framework called Scientist-Machine Equation Detector
(SciMED), which integrates scientific discipline wisdom in a
scientist-in-the-loop approach, with state-of-the-art symbolic regression (SR)
methods. SciMED combines a wrapper selection method, that is based on a genetic
algorithm, with automatic machine learning and two levels of SR methods. We
test SciMED on five configurations of a settling sphere, with and without
aerodynamic non-linear drag force, and with excessive noise in the
measurements. We show that SciMED is sufficiently robust to discover the
correct physically meaningful symbolic expressions from the data, and
demonstrate how the integration of domain knowledge enhances its performance.
Our results indicate better performance on these tasks than the
state-of-the-art SR software packages , even in cases where no knowledge is
integrated. Moreover, we demonstrate how SciMED can alert the user about
possible missing features, unlike the majority of current SR systems.
| [
{
"created": "Tue, 13 Sep 2022 18:31:23 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Dec 2022 04:54:11 GMT",
"version": "v2"
},
{
"created": "Mon, 23 Jan 2023 19:10:13 GMT",
"version": "v3"
}
] | 2023-03-02 | [
[
"Keren",
"Liron Simon",
""
],
[
"Liberzon",
"Alex",
""
],
[
"Lazebnik",
"Teddy",
""
]
] | Discovering a meaningful symbolic expression that explains experimental data is a fundamental challenge in many scientific fields. We present a novel, open-source computational framework called Scientist-Machine Equation Detector (SciMED), which integrates scientific discipline wisdom in a scientist-in-the-loop approach, with state-of-the-art symbolic regression (SR) methods. SciMED combines a wrapper selection method, that is based on a genetic algorithm, with automatic machine learning and two levels of SR methods. We test SciMED on five configurations of a settling sphere, with and without aerodynamic non-linear drag force, and with excessive noise in the measurements. We show that SciMED is sufficiently robust to discover the correct physically meaningful symbolic expressions from the data, and demonstrate how the integration of domain knowledge enhances its performance. Our results indicate better performance on these tasks than the state-of-the-art SR software packages, even in cases where no knowledge is integrated. Moreover, we demonstrate how SciMED can alert the user about possible missing features, unlike the majority of current SR systems. |
1902.04694 | Thomas Gabor | Thomas Gabor, Marie Kiermeier, Andreas Sedlmeier, Bernhard Kempter,
Cornel Klein, Horst Sauer, Reiner Schmid, Jan Wieghardt | Adapting Quality Assurance to Adaptive Systems: The Scenario Coevolution
Paradigm | 17 pages, published at ISOLA 2018 | International Symposium on Leveraging Applications of Formal
Methods (ISOLA). Springer, 2018 | 10.1007/978-3-030-03424-5_10 | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | From formal and practical analysis, we identify new challenges that
self-adaptive systems pose to the process of quality assurance. When tackling
these, the effort spent on various tasks in the process of software engineering
is naturally re-distributed. We claim that all steps related to testing need to
become self-adaptive to match the capabilities of the self-adaptive
system-under-test. Otherwise, the adaptive system's behavior might elude
traditional variants of quality assurance. We thus propose the paradigm of
scenario coevolution, which describes a pool of test cases and other
constraints on system behavior that evolves in parallel to the (in part
autonomous) development of behavior in the system-under-test. Scenario
coevolution offers a simple structure for the organization of adaptive testing
that allows for both human-controlled and autonomous intervention, supporting
software engineering for adaptive systems on a procedural as well as technical
level.
| [
{
"created": "Wed, 13 Feb 2019 01:27:18 GMT",
"version": "v1"
}
] | 2019-02-14 | [
[
"Gabor",
"Thomas",
""
],
[
"Kiermeier",
"Marie",
""
],
[
"Sedlmeier",
"Andreas",
""
],
[
"Kempter",
"Bernhard",
""
],
[
"Klein",
"Cornel",
""
],
[
"Sauer",
"Horst",
""
],
[
"Schmid",
"Reiner",
""
],
[
"Wieghardt",
"Jan",
""
]
] | From formal and practical analysis, we identify new challenges that self-adaptive systems pose to the process of quality assurance. When tackling these, the effort spent on various tasks in the process of software engineering is naturally re-distributed. We claim that all steps related to testing need to become self-adaptive to match the capabilities of the self-adaptive system-under-test. Otherwise, the adaptive system's behavior might elude traditional variants of quality assurance. We thus propose the paradigm of scenario coevolution, which describes a pool of test cases and other constraints on system behavior that evolves in parallel to the (in part autonomous) development of behavior in the system-under-test. Scenario coevolution offers a simple structure for the organization of adaptive testing that allows for both human-controlled and autonomous intervention, supporting software engineering for adaptive systems on a procedural as well as technical level. |
1907.06853 | Wenqi Fan | Wenqi Fan, Yao Ma, Dawei Yin, Jianping Wang, Jiliang Tang, Qing Li | Deep Social Collaborative Filtering | Accepted by 13th ACM Conference on Recommender Systems (RecSys 2019,
Long Paper) | null | null | null | cs.IR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommender systems are crucial to alleviate the information overload problem
in online worlds. Most of the modern recommender systems capture users'
preference towards items via their interactions based on collaborative
filtering techniques. In addition to the user-item interactions, social
networks can also provide useful information to understand users' preference as
suggested by the social theories such as homophily and influence. Recently,
deep neural networks have been utilized for social recommendations, which
facilitate both the user-item interactions and the social network information.
However, most of these models cannot take full advantage of the social network
information. They only use information from direct neighbors, but distant
neighbors can also provide helpful information. Meanwhile, most of these models
treat neighbors' information equally without considering the specific
recommendations. However, for a specific recommendation case, the information
relevant to the specific item would be helpful. Besides, most of these models
do not explicitly capture the neighbor's opinions to items for social
recommendations, while different opinions could affect the user differently. In
this paper, to address the aforementioned challenges, we propose DSCF, a Deep
Social Collaborative Filtering framework, which can exploit the social
relations with various aspects for recommender systems. Comprehensive
experiments on two real-world datasets show the effectiveness of the proposed
framework.
| [
{
"created": "Tue, 16 Jul 2019 06:05:29 GMT",
"version": "v1"
}
] | 2019-07-17 | [
[
"Fan",
"Wenqi",
""
],
[
"Ma",
"Yao",
""
],
[
"Yin",
"Dawei",
""
],
[
"Wang",
"Jianping",
""
],
[
"Tang",
"Jiliang",
""
],
[
"Li",
"Qing",
""
]
] | Recommender systems are crucial to alleviate the information overload problem in online worlds. Most of the modern recommender systems capture users' preference towards items via their interactions based on collaborative filtering techniques. In addition to the user-item interactions, social networks can also provide useful information to understand users' preference as suggested by the social theories such as homophily and influence. Recently, deep neural networks have been utilized for social recommendations, which facilitate both the user-item interactions and the social network information. However, most of these models cannot take full advantage of the social network information. They only use information from direct neighbors, but distant neighbors can also provide helpful information. Meanwhile, most of these models treat neighbors' information equally without considering the specific recommendations. However, for a specific recommendation case, the information relevant to the specific item would be helpful. Besides, most of these models do not explicitly capture the neighbor's opinions to items for social recommendations, while different opinions could affect the user differently. In this paper, to address the aforementioned challenges, we propose DSCF, a Deep Social Collaborative Filtering framework, which can exploit the social relations with various aspects for recommender systems. Comprehensive experiments on two real-world datasets show the effectiveness of the proposed framework. |
1608.05864 | Ahmad Masoud Dr | Ahmad A. Masoud | A Hybrid, PDE-ODE Control Strategy for Intercepting an Intelligent,
well-informed Target in a Stationary, Cluttered Environment | 22 pages, 20 figures, Journal paper | Applied Mathematical Sciences, HIKARI Ltd, Vol. 1, 2007, No. 48,
2345-2371 | null | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In [1,2] a new class of intelligent controllers that can semantically embed
an agent in a spatial context constraining its behavior in a goal-oriented
manner was suggested. A controller of such a class can guide an agent in a
stationary unknown environment to a fixed target zone along an obstacle-free
trajectory. Here, an extension is suggested that would enable the interception
of an intelligent target that is maneuvering to evade capture amidst stationary
clutter (i.e. the target zone is moving). This is achieved by forcing the
differential properties of the potential field used to induce the control
action to satisfy the wave equation. Background of the problem, theoretical
developments, as well as, proofs of the ability of the modified control to
intercept the target along an obstacle-free trajectory are supplied. Simulation
results are also provided.
| [
{
"created": "Sat, 20 Aug 2016 19:02:19 GMT",
"version": "v1"
}
] | 2016-08-23 | [
[
"Masoud",
"Ahmad A.",
""
]
] | In [1,2] a new class of intelligent controllers that can semantically embed an agent in a spatial context constraining its behavior in a goal-oriented manner was suggested. A controller of such a class can guide an agent in a stationary unknown environment to a fixed target zone along an obstacle-free trajectory. Here, an extension is suggested that would enable the interception of an intelligent target that is maneuvering to evade capture amidst stationary clutter (i.e. the target zone is moving). This is achieved by forcing the differential properties of the potential field used to induce the control action to satisfy the wave equation. Background of the problem, theoretical developments, as well as, proofs of the ability of the modified control to intercept the target along an obstacle-free trajectory are supplied. Simulation results are also provided. |
1202.4406 | Oleg Verbitsky | Johannes K\"obler, Sebastian Kuhnert, Oleg Verbitsky | Solving the Canonical Representation and Star System Problems for Proper
Circular-Arc Graphs in Log-Space | 19 pages, 3 figures, major revision | null | null | null | cs.CC cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a logspace algorithm that constructs a canonical intersection
model for a given proper circular-arc graph, where `canonical' means that
models of isomorphic graphs are equal. This implies that the recognition and
the isomorphism problems for this class of graphs are solvable in logspace. For
a broader class of concave-round graphs, that still possess (not necessarily
proper) circular-arc models, we show that those can also be constructed
canonically in logspace. As a building block for these results, we show how to
compute canonical models of circular-arc hypergraphs in logspace, which are
also known as matrices with the circular-ones property. Finally, we consider
the search version of the Star System Problem that consists in reconstructing a
graph from its closed neighborhood hypergraph. We solve it in logspace for the
classes of proper circular-arc, concave-round, and co-convex graphs.
| [
{
"created": "Mon, 20 Feb 2012 18:06:08 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Mar 2012 10:05:17 GMT",
"version": "v2"
},
{
"created": "Thu, 15 Mar 2012 09:23:07 GMT",
"version": "v3"
},
{
"created": "Tue, 8 May 2012 08:07:27 GMT",
"version": "v4"
},
{
"created": "Thu, 5 Dec 2013 12:22:31 GMT",
"version": "v5"
}
] | 2013-12-06 | [
[
"Köbler",
"Johannes",
""
],
[
"Kuhnert",
"Sebastian",
""
],
[
"Verbitsky",
"Oleg",
""
]
] | We present a logspace algorithm that constructs a canonical intersection model for a given proper circular-arc graph, where `canonical' means that models of isomorphic graphs are equal. This implies that the recognition and the isomorphism problems for this class of graphs are solvable in logspace. For a broader class of concave-round graphs, that still possess (not necessarily proper) circular-arc models, we show that those can also be constructed canonically in logspace. As a building block for these results, we show how to compute canonical models of circular-arc hypergraphs in logspace, which are also known as matrices with the circular-ones property. Finally, we consider the search version of the Star System Problem that consists in reconstructing a graph from its closed neighborhood hypergraph. We solve it in logspace for the classes of proper circular-arc, concave-round, and co-convex graphs. |
1908.03176 | Sobhan Soleymani | Sobhan Soleymani, Ali Dabouei, Jeremy Dawson, Nasser M. Nasrabadi | Defending Against Adversarial Iris Examples Using Wavelet Decomposition | The Tenth IEEE International Conference on Biometrics: Theory,
Applications, and Systems (BTAS 2019) | null | null | null | cs.CV cs.LG eess.IV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks have presented impressive performance in biometric
applications. However, their performance is highly at risk when facing
carefully crafted input samples known as adversarial examples. In this paper,
we present three defense strategies to detect adversarial iris examples. These
defense strategies are based on wavelet domain denoising of the input examples
by investigating each wavelet sub-band and removing the sub-bands that are most
affected by the adversary. The first proposed defense strategy reconstructs
multiple denoised versions of the input example through manipulating the mid-
and high-frequency components of the wavelet domain representation of the input
example and makes a decision upon the classification result of the majority of
the denoised examples. The second and third proposed defense strategies aim to
denoise each wavelet domain sub-band and determine the sub-bands that are most
likely affected by the adversary using the reconstruction error computed for
each sub-band. We test the performance of the proposed defense strategies
against several attack scenarios and compare the results with five state of the
art defense strategies.
| [
{
"created": "Thu, 8 Aug 2019 17:08:25 GMT",
"version": "v1"
}
] | 2019-08-09 | [
[
"Soleymani",
"Sobhan",
""
],
[
"Dabouei",
"Ali",
""
],
[
"Dawson",
"Jeremy",
""
],
[
"Nasrabadi",
"Nasser M.",
""
]
] | Deep neural networks have presented impressive performance in biometric applications. However, their performance is highly at risk when facing carefully crafted input samples known as adversarial examples. In this paper, we present three defense strategies to detect adversarial iris examples. These defense strategies are based on wavelet domain denoising of the input examples by investigating each wavelet sub-band and removing the sub-bands that are most affected by the adversary. The first proposed defense strategy reconstructs multiple denoised versions of the input example through manipulating the mid- and high-frequency components of the wavelet domain representation of the input example and makes a decision upon the classification result of the majority of the denoised examples. The second and third proposed defense strategies aim to denoise each wavelet domain sub-band and determine the sub-bands that are most likely affected by the adversary using the reconstruction error computed for each sub-band. We test the performance of the proposed defense strategies against several attack scenarios and compare the results with five state of the art defense strategies. |
1707.01344 | Wen Chean Teh | Wen Chean Teh | Compositions of Functions and Permutations Specified by Minimal Reaction
Systems | 10 pages, preprint | null | null | null | cs.FL math.CO q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studies mathematical properties of reaction systems that were
introduced by Ehrenfeucht and Rozenberg as computational models inspired by
biochemical reactions in living cells. In particular, we continue the study
on the generative power of functions specified by minimal reaction systems
under composition initiated by Salomaa. Allowing degenerate reaction systems,
functions specified by minimal reaction systems over a quaternary alphabet
that are permutations generate the alternating group on the power set of the
background set.
| [
{
"created": "Tue, 4 Jul 2017 02:53:52 GMT",
"version": "v1"
}
] | 2017-07-06 | [
[
"Teh",
"Wen Chean",
""
]
] | This paper studies mathematical properties of reaction systems that were introduced by Ehrenfeucht and Rozenberg as computational models inspired by biochemical reactions in living cells. In particular, we continue the study on the generative power of functions specified by minimal reaction systems under composition initiated by Salomaa. Allowing degenerate reaction systems, functions specified by minimal reaction systems over a quaternary alphabet that are permutations generate the alternating group on the power set of the background set. |
1811.00839 | Jiankai Sun | Jiankai Sun, Bortik Bandyopadhyay, Armin Bashizade, Jiongqian Liang,
P. Sadayappan and Srinivasan Parthasarathy | ATP: Directed Graph Embedding with Asymmetric Transitivity Preservation | has been accepted to the Thirty-Third AAAI Conference on Artificial
Intelligence (AAAI 2019), acceptance rate: 1150/7095 = 16.2% | null | null | null | cs.AI cs.IR cs.LG cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Directed graphs have been widely used in Community Question Answering
services (CQAs) to model asymmetric relationships among different types of
nodes in CQA graphs, e.g., question, answer, user. Asymmetric transitivity is
an essential property of directed graphs, since it can play an important role
in downstream graph inference and analysis. Question difficulty and user
expertise follow the characteristic of asymmetric transitivity. Maintaining
such properties, while reducing the graph to a lower dimensional vector
embedding space, has been the focus of much recent research. In this paper, we
tackle the challenge of directed graph embedding with asymmetric transitivity
preservation and then leverage the proposed embedding method to solve a
fundamental task in CQAs: how to appropriately route and assign newly posted
questions to users with the suitable expertise and interest in CQAs. The
technique incorporates graph hierarchy and reachability information naturally
by relying on a non-linear transformation that operates on the core
reachability and implicit hierarchy within such graphs. Subsequently, the
methodology leverages a factorization-based approach to generate two embedding
vectors for each node within the graph, to capture the asymmetric transitivity.
Extensive experiments show that our framework consistently and significantly
outperforms the state-of-the-art baselines on two diverse real-world tasks:
link prediction, and question difficulty estimation and expert finding in
online forums like Stack Exchange. Particularly, our framework can support
inductive embedding learning for newly posted questions (unseen nodes during
training), and therefore can properly route and assign these kinds of questions
to experts in CQAs.
| [
{
"created": "Fri, 2 Nov 2018 12:45:16 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Nov 2018 14:25:49 GMT",
"version": "v2"
}
] | 2018-11-07 | [
[
"Sun",
"Jiankai",
""
],
[
"Bandyopadhyay",
"Bortik",
""
],
[
"Bashizade",
"Armin",
""
],
[
"Liang",
"Jiongqian",
""
],
[
"Sadayappan",
"P.",
""
],
[
"Parthasarathy",
"Srinivasan",
""
]
] | Directed graphs have been widely used in Community Question Answering services (CQAs) to model asymmetric relationships among different types of nodes in CQA graphs, e.g., question, answer, user. Asymmetric transitivity is an essential property of directed graphs, since it can play an important role in downstream graph inference and analysis. Question difficulty and user expertise follow the characteristic of asymmetric transitivity. Maintaining such properties, while reducing the graph to a lower dimensional vector embedding space, has been the focus of much recent research. In this paper, we tackle the challenge of directed graph embedding with asymmetric transitivity preservation and then leverage the proposed embedding method to solve a fundamental task in CQAs: how to appropriately route and assign newly posted questions to users with the suitable expertise and interest in CQAs. The technique incorporates graph hierarchy and reachability information naturally by relying on a non-linear transformation that operates on the core reachability and implicit hierarchy within such graphs. Subsequently, the methodology leverages a factorization-based approach to generate two embedding vectors for each node within the graph, to capture the asymmetric transitivity. Extensive experiments show that our framework consistently and significantly outperforms the state-of-the-art baselines on two diverse real-world tasks: link prediction, and question difficulty estimation and expert finding in online forums like Stack Exchange. Particularly, our framework can support inductive embedding learning for newly posted questions (unseen nodes during training), and therefore can properly route and assign these kinds of questions to experts in CQAs. |
1206.1299 | John Sun | John Z. Sun, Vinith Misra and Vivek K Goyal | Distributed Functional Scalar Quantization Simplified | null | IEEE Trans. on Signal Processing, vol. 61, no. 14, pp. 3495-3508,
July 2013 | 10.1109/TSP.2013.2259483 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distributed functional scalar quantization (DFSQ) theory provides optimality
conditions and predicts performance of data acquisition systems in which a
computation on acquired data is desired. We address two limitations of previous
works: prohibitively expensive decoder design and a restriction to sources with
bounded distributions. We rigorously show that a much simpler decoder has
equivalent asymptotic performance as the conditional expectation estimator
previously explored, thus reducing decoder design complexity. The simpler
decoder has the feature of decoupled communication and computation blocks.
Moreover, we extend the DFSQ framework with the simpler decoder to acquire
sources with infinite-support distributions such as Gaussian or exponential
distributions. Finally, through simulation results we demonstrate that
performance at moderate coding rates is well predicted by the asymptotic
analysis, and we give new insight on the rate of convergence.
| [
{
"created": "Wed, 6 Jun 2012 19:00:45 GMT",
"version": "v1"
}
] | 2015-03-24 | [
[
"Sun",
"John Z.",
""
],
[
"Misra",
"Vinith",
""
],
[
"Goyal",
"Vivek K",
""
]
] | Distributed functional scalar quantization (DFSQ) theory provides optimality conditions and predicts performance of data acquisition systems in which a computation on acquired data is desired. We address two limitations of previous works: prohibitively expensive decoder design and a restriction to sources with bounded distributions. We rigorously show that a much simpler decoder has equivalent asymptotic performance as the conditional expectation estimator previously explored, thus reducing decoder design complexity. The simpler decoder has the feature of decoupled communication and computation blocks. Moreover, we extend the DFSQ framework with the simpler decoder to acquire sources with infinite-support distributions such as Gaussian or exponential distributions. Finally, through simulation results we demonstrate that performance at moderate coding rates is well predicted by the asymptotic analysis, and we give new insight on the rate of convergence. |
2302.08674 | Tianyi Zheng | Tianyi Zheng | EnfoMax: Domain Entropy and Mutual Information Maximization for Domain
Generalized Face Anti-spoofing | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The face anti-spoofing (FAS) method performs well under intra-domain setups.
However, its cross-domain performance is unsatisfactory. As a result, the
domain generalization (DG) method has gained more attention in FAS. Existing
methods treat FAS as a simple binary classification task and propose a
heuristic training objective to learn domain-invariant features. However, there
is no theoretical explanation of what a domain-invariant feature is.
Additionally, the lack of theoretical support makes domain generalization
techniques such as adversarial training lack training stability. To address
these issues, this paper proposes the EnfoMax framework, which uses information
theory to analyze cross-domain FAS tasks. This framework provides theoretical
guarantees and optimization objectives for domain-generalized FAS tasks.
EnfoMax maximizes the domain entropy and mutual information of live samples in
source domains without using adversarial learning. Experimental results
demonstrate that our approach performs well on extensive public datasets and
outperforms state-of-the-art methods.
| [
{
"created": "Fri, 17 Feb 2023 03:54:18 GMT",
"version": "v1"
},
{
"created": "Sun, 4 Jun 2023 11:28:48 GMT",
"version": "v2"
}
] | 2023-06-06 | [
[
"Zheng",
"Tianyi",
""
]
] | The face anti-spoofing (FAS) method performs well under intra-domain setups. However, its cross-domain performance is unsatisfactory. As a result, the domain generalization (DG) method has gained more attention in FAS. Existing methods treat FAS as a simple binary classification task and propose a heuristic training objective to learn domain-invariant features. However, there is no theoretical explanation of what a domain-invariant feature is. Additionally, the lack of theoretical support makes domain generalization techniques such as adversarial training lack training stability. To address these issues, this paper proposes the EnfoMax framework, which uses information theory to analyze cross-domain FAS tasks. This framework provides theoretical guarantees and optimization objectives for domain-generalized FAS tasks. EnfoMax maximizes the domain entropy and mutual information of live samples in source domains without using adversarial learning. Experimental results demonstrate that our approach performs well on extensive public datasets and outperforms state-of-the-art methods. |
2312.04572 | Feifan Yu | Feifan Yu, Wenyuan Cong, Xinmin Chen, Yue Lin and Jiqiang Wang | Harnessing LSTM for Nonlinear Ship Deck Motion Prediction in UAV
Autonomous Landing amidst High Sea States | 11 pages, 7 figures, accept by ICANDVC2023 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous landing of UAVs in high sea states requires the UAV to land
exclusively during the ship deck's "rest period," coinciding with minimal
movement. Given this scenario, determining the ship's "rest period" based on
its movement patterns becomes a fundamental prerequisite for addressing this
challenge. This study employs the Long Short-Term Memory (LSTM) neural network
to predict the ship's motion across three dimensions: longitudinal,
transverse, and vertical waves. In the absence of actual ship data under high
sea states, this paper employs a composite sine wave model to simulate ship
deck motion. Through this approach, a highly accurate model is established,
exhibiting promising outcomes within various stochastic sine wave combination
models.
| [
{
"created": "Wed, 15 Nov 2023 07:29:10 GMT",
"version": "v1"
}
] | 2023-12-11 | [
[
"Yu",
"Feifan",
""
],
[
"Cong",
"Wenyuan",
""
],
[
"Chen",
"Xinmin",
""
],
[
"Lin",
"Yue",
""
],
[
"Wang",
"Jiqiang",
""
]
] | Autonomous landing of UAVs in high sea states requires the UAV to land exclusively during the ship deck's "rest period," coinciding with minimal movement. Given this scenario, determining the ship's "rest period" based on its movement patterns becomes a fundamental prerequisite for addressing this challenge. This study employs the Long Short-Term Memory (LSTM) neural network to predict the ship's motion across three dimensions: longitudinal, transverse, and vertical waves. In the absence of actual ship data under high sea states, this paper employs a composite sine wave model to simulate ship deck motion. Through this approach, a highly accurate model is established, exhibiting promising outcomes within various stochastic sine wave combination models. |
2111.01086 | Stefanie Scherzinger | Stefanie Scherzinger and Andreas Thor | AutoShard -- Declaratively Managing Hot Spot Data Objects in NoSQL
Document Stores | Published at WebDB 2014 | WebDB 2014 | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | NoSQL document stores are becoming increasingly popular as backends in web
development. Not only do they scale out to large volumes of data, many systems
are even custom-tailored for this domain: NoSQL document stores like Google
Cloud Datastore have been designed to support massively parallel reads, and
even guarantee strong consistency in updating single data objects. However,
strongly consistent updates cannot be implemented arbitrarily fast in
large-scale distributed systems. Consequently, data objects that experience
high-frequency writes can turn into severe performance bottlenecks. In this
paper, we present AutoShard, a ready-to-use object mapper for Java applications
running against NoSQL document stores. AutoShard's unique feature is its
capability to gracefully shard hot spot data objects to avoid write contention.
Using AutoShard, developers can easily handle hot spot data objects by adding
minimally intrusive annotations to their application code. Our experiments show
the significant impact of sharding on both the write throughput and the
execution time.
| [
{
"created": "Mon, 1 Nov 2021 16:55:43 GMT",
"version": "v1"
}
] | 2021-11-02 | [
[
"Scherzinger",
"Stefanie",
""
],
[
"Thor",
"Andreas",
""
]
] | NoSQL document stores are becoming increasingly popular as backends in web development. Not only do they scale out to large volumes of data, many systems are even custom-tailored for this domain: NoSQL document stores like Google Cloud Datastore have been designed to support massively parallel reads, and even guarantee strong consistency in updating single data objects. However, strongly consistent updates cannot be implemented arbitrarily fast in large-scale distributed systems. Consequently, data objects that experience high-frequency writes can turn into severe performance bottlenecks. In this paper, we present AutoShard, a ready-to-use object mapper for Java applications running against NoSQL document stores. AutoShard's unique feature is its capability to gracefully shard hot spot data objects to avoid write contention. Using AutoShard, developers can easily handle hot spot data objects by adding minimally intrusive annotations to their application code. Our experiments show the significant impact of sharding on both the write throughput and the execution time. |
2404.18117 | Jing Yang | Jing Yang and Wei Yang | A Basis-preserving Algorithm for Computing the Bezout Matrix of Newton
Polynomials | null | null | null | null | cs.SC cs.NA math.NA | http://creativecommons.org/licenses/by/4.0/ | This paper tackles the problem of constructing Bezout matrices for Newton
polynomials in a basis-preserving approach that operates directly with the
given Newton basis, thus avoiding the need for transformation from Newton basis
to monomial basis. This approach significantly reduces the computational cost
and also mitigates numerical instability caused by basis transformation. For
this purpose, we investigate the internal structure of Bezout matrices in
Newton basis and design a basis-preserving algorithm that generates the Bezout
matrix in the specified basis used to formulate the input polynomials.
Furthermore, we show an application of the proposed algorithm on constructing
confederate resultant matrices for Newton polynomials. Experimental results
demonstrate that the proposed methods perform better than the
basis-transformation-based ones.
| [
{
"created": "Sun, 28 Apr 2024 08:54:57 GMT",
"version": "v1"
}
] | 2024-04-30 | [
[
"Yang",
"Jing",
""
],
[
"Yang",
"Wei",
""
]
] | This paper tackles the problem of constructing Bezout matrices for Newton polynomials in a basis-preserving approach that operates directly with the given Newton basis, thus avoiding the need for transformation from Newton basis to monomial basis. This approach significantly reduces the computational cost and also mitigates numerical instability caused by basis transformation. For this purpose, we investigate the internal structure of Bezout matrices in Newton basis and design a basis-preserving algorithm that generates the Bezout matrix in the specified basis used to formulate the input polynomials. Furthermore, we show an application of the proposed algorithm on constructing confederate resultant matrices for Newton polynomials. Experimental results demonstrate that the proposed methods perform superior to the basis-transformation-based ones. |
2007.10529 | Jinyue Song | Jinyue Song, Tianbo Gu, Zheng Fang, Xiaotao Feng, Yunjie Ge, Hao Fu,
Pengfei Hu, Prasant Mohapatra | Blockchain Meets COVID-19: A Framework for Contact Information Sharing
and Risk Notification System | 11 pages, 7 figures, this work has been accepted by IEEE
International Conference on Mobile Ad-Hoc and Smart Systems (MASS) 2021 | null | null | null | cs.CR cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | COVID-19 is a severe global epidemic in human history. Even though there are
particular medications and vaccines to curb the epidemic, tracing and isolating
the infection source is the best option to slow the virus spread and reduce
infection and death rates. There are three disadvantages to the existing
contact tracing system: 1. User data is stored in a centralized database that
could be stolen and tampered with, 2. User's confidential personal identity may
be revealed to a third party or organization, 3. Existing contact tracing
systems only focus on information sharing from one dimension, such as
location-based tracing, which significantly limits the effectiveness of such
systems.
We propose a global COVID-19 information sharing and risk notification system
that utilizes the Blockchain, Smart Contract, and Bluetooth. To protect user
privacy, we design a novel Blockchain-based platform that can share consistent
and non-tampered contact tracing information from multiple dimensions, such as
location-based for indirect contact and Bluetooth-based for direct contact.
Hierarchical smart contract architecture is also designed to achieve global
agreements from users about how to process and utilize user data, thereby
enhancing the data usage transparency. Furthermore, we propose a mechanism to
protect user identity privacy from multiple aspects. More importantly, our
system can notify the users about the exposure risk via smart contracts. We
implement a prototype system to conduct extensive measurements to demonstrate
the feasibility and effectiveness of our system.
| [
{
"created": "Mon, 20 Jul 2020 23:36:46 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Feb 2022 19:14:55 GMT",
"version": "v2"
}
] | 2022-02-03 | [
[
"Song",
"Jinyue",
""
],
[
"Gu",
"Tianbo",
""
],
[
"Fang",
"Zheng",
""
],
[
"Feng",
"Xiaotao",
""
],
[
"Ge",
"Yunjie",
""
],
[
"Fu",
"Hao",
""
],
[
"Hu",
"Pengfei",
""
],
[
"Mohapatra",
"Prasant",
""
]
] | COVID-19 is a severe global epidemic in human history. Even though there are particular medications and vaccines to curb the epidemic, tracing and isolating the infection source is the best option to slow the virus spread and reduce infection and death rates. There are three disadvantages to the existing contact tracing system: 1. User data is stored in a centralized database that could be stolen and tampered with, 2. User's confidential personal identity may be revealed to a third party or organization, 3. Existing contact tracing systems only focus on information sharing from one dimension, such as location-based tracing, which significantly limits the effectiveness of such systems. We propose a global COVID-19 information sharing and risk notification system that utilizes the Blockchain, Smart Contract, and Bluetooth. To protect user privacy, we design a novel Blockchain-based platform that can share consistent and non-tampered contact tracing information from multiple dimensions, such as location-based for indirect contact and Bluetooth-based for direct contact. Hierarchical smart contract architecture is also designed to achieve global agreements from users about how to process and utilize user data, thereby enhancing the data usage transparency. Furthermore, we propose a mechanism to protect user identity privacy from multiple aspects. More importantly, our system can notify the users about the exposure risk via smart contracts. We implement a prototype system to conduct extensive measurements to demonstrate the feasibility and effectiveness of our system. |
1802.02629 | David Minnen | David Minnen, George Toderici, Michele Covell, Troy Chinen, Nick
Johnston, Joel Shor, Sung Jin Hwang, Damien Vincent, Saurabh Singh | Spatially adaptive image compression using a tiled deep network | null | International Conference on Image Processing 2017 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks represent a powerful class of function approximators
that can learn to compress and reconstruct images. Existing image compression
algorithms based on neural networks learn quantized representations with a
constant spatial bit rate across each image. While entropy coding introduces
some spatial variation, traditional codecs have benefited significantly by
explicitly adapting the bit rate based on local image complexity and visual
saliency. This paper introduces an algorithm that combines deep neural networks
with quality-sensitive bit rate adaptation using a tiled network. We
demonstrate the importance of spatial context prediction and show improved
quantitative (PSNR) and qualitative (subjective rater assessment) results
compared to a non-adaptive baseline and a recently published image compression
model based on fully-convolutional neural networks.
| [
{
"created": "Wed, 7 Feb 2018 20:59:39 GMT",
"version": "v1"
}
] | 2018-02-09 | [
[
"Minnen",
"David",
""
],
[
"Toderici",
"George",
""
],
[
"Covell",
"Michele",
""
],
[
"Chinen",
"Troy",
""
],
[
"Johnston",
"Nick",
""
],
[
"Shor",
"Joel",
""
],
[
"Hwang",
"Sung Jin",
""
],
[
"Vincent",
"Damien",
""
],
[
"Singh",
"Saurabh",
""
]
] | Deep neural networks represent a powerful class of function approximators that can learn to compress and reconstruct images. Existing image compression algorithms based on neural networks learn quantized representations with a constant spatial bit rate across each image. While entropy coding introduces some spatial variation, traditional codecs have benefited significantly by explicitly adapting the bit rate based on local image complexity and visual saliency. This paper introduces an algorithm that combines deep neural networks with quality-sensitive bit rate adaptation using a tiled network. We demonstrate the importance of spatial context prediction and show improved quantitative (PSNR) and qualitative (subjective rater assessment) results compared to a non-adaptive baseline and a recently published image compression model based on fully-convolutional neural networks. |
1610.08717 | Kashyap Thimmaraju | Kashyap Thimmaraju, Bhargava Shastry, Tobias Fiebig, Felicitas
Hetzelt, Jean-Pierre Seifert, Anja Feldmann, Stefan Schmid | Reins to the Cloud: Compromising Cloud Systems via the Data Plane | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Virtual switches have become popular among cloud operating systems to
interconnect virtual machines in a more flexible manner. However, this paper
demonstrates that virtual switches introduce new attack surfaces in cloud
setups, whose effects can be disastrous. Our analysis shows that these
vulnerabilities are caused by: (1) inappropriate security assumptions
(privileged virtual switch execution in kernel and user space), (2) the logical
centralization of such networks (e.g., OpenStack or SDN), (3) the presence of
bi-directional communication channels between data plane systems and the
centralized controller, and (4) non-standard protocol parsers.
Our work highlights the need to accommodate the data plane(s) in our threat
models. In particular, it forces us to revisit today's assumption that the data
plane can only be compromised by a sophisticated attacker: we show that
compromising the data plane of modern computer networks can actually be
performed by a very simple attacker with limited resources only and at low cost
(i.e., at the cost of renting a virtual machine in the Cloud). As a case study,
we fuzzed only 2\% of the code-base of a production quality virtual switch's
packet processor (namely OvS), identifying serious vulnerabilities leading to
unauthenticated remote code execution. In particular, we present the "rein
worm" which allows us to fully compromise test-setups in less than 100 seconds.
We also evaluate the performance overhead of existing mitigations such as ASLR,
PIEs, and unconditional stack canaries on OvS. We find that while applying
these countermeasures in kernel-space incurs a significant overhead, in
user-space the performance overhead is negligible.
| [
{
"created": "Thu, 27 Oct 2016 11:39:47 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Feb 2017 16:24:25 GMT",
"version": "v2"
}
] | 2017-02-13 | [
[
"Thimmaraju",
"Kashyap",
""
],
[
"Shastry",
"Bhargava",
""
],
[
"Fiebig",
"Tobias",
""
],
[
"Hetzelt",
"Felicitas",
""
],
[
"Seifert",
"Jean-Pierre",
""
],
[
"Feldmann",
"Anja",
""
],
[
"Schmid",
"Stefan",
""
]
] | Virtual switches have become popular among cloud operating systems to interconnect virtual machines in a more flexible manner. However, this paper demonstrates that virtual switches introduce new attack surfaces in cloud setups, whose effects can be disastrous. Our analysis shows that these vulnerabilities are caused by: (1) inappropriate security assumptions (privileged virtual switch execution in kernel and user space), (2) the logical centralization of such networks (e.g., OpenStack or SDN), (3) the presence of bi-directional communication channels between data plane systems and the centralized controller, and (4) non-standard protocol parsers. Our work highlights the need to accommodate the data plane(s) in our threat models. In particular, it forces us to revisit today's assumption that the data plane can only be compromised by a sophisticated attacker: we show that compromising the data plane of modern computer networks can actually be performed by a very simple attacker with limited resources only and at low cost (i.e., at the cost of renting a virtual machine in the Cloud). As a case study, we fuzzed only 2\% of the code-base of a production quality virtual switch's packet processor (namely OvS), identifying serious vulnerabilities leading to unauthenticated remote code execution. In particular, we present the "rein worm" which allows us to fully compromise test-setups in less than 100 seconds. We also evaluate the performance overhead of existing mitigations such as ASLR, PIEs, and unconditional stack canaries on OvS. We find that while applying these countermeasures in kernel-space incurs a significant overhead, in user-space the performance overhead is negligible. |
2403.13843 | Hamza Kheddar | Yassine Habchi, Hamza Kheddar, Yassine Himeur, Abdelkrim Boukabou,
Ammar Chouchane, Abdelmalik Ouamane, Shadi Atalla, Wathiq Mansoor | Machine Learning and Vision Transformers for Thyroid Carcinoma
Diagnosis: A review | null | null | null | null | cs.LG cs.AI eess.IV | http://creativecommons.org/licenses/by/4.0/ | The growing interest in developing smart diagnostic systems to help medical
experts process extensive data for treating incurable diseases has been
notable. In particular, the challenge of identifying thyroid cancer (TC) has
seen progress with the use of machine learning (ML) and big data analysis,
incorporating transformers to evaluate TC prognosis and determine the risk of
malignancy in individuals. This review article presents a summary of various
studies on AI-based approaches, especially those employing transformers, for
diagnosing TC. It introduces a new categorization system for these methods
based on artificial intelligence (AI) algorithms, the goals of the framework,
and the computing environments used. Additionally, it scrutinizes and contrasts
the available TC datasets by their features. The paper highlights the
importance of AI instruments in aiding the diagnosis and treatment of TC
through supervised, unsupervised, or mixed approaches, with a special focus on
the ongoing importance of transformers in medical diagnostics and disease
management. It further discusses the progress made and the continuing obstacles
in this area. Lastly, it explores future directions and focuses within this
research field.
| [
{
"created": "Sun, 17 Mar 2024 17:45:04 GMT",
"version": "v1"
}
] | 2024-03-22 | [
[
"Habchi",
"Yassine",
""
],
[
"Kheddar",
"Hamza",
""
],
[
"Himeur",
"Yassine",
""
],
[
"Boukabou",
"Abdelkrim",
""
],
[
"Chouchane",
"Ammar",
""
],
[
"Ouamane",
"Abdelmalik",
""
],
[
"Atalla",
"Shadi",
""
],
[
"Mansoor",
"Wathiq",
""
]
] | The growing interest in developing smart diagnostic systems to help medical experts process extensive data for treating incurable diseases has been notable. In particular, the challenge of identifying thyroid cancer (TC) has seen progress with the use of machine learning (ML) and big data analysis, incorporating transformers to evaluate TC prognosis and determine the risk of malignancy in individuals. This review article presents a summary of various studies on AI-based approaches, especially those employing transformers, for diagnosing TC. It introduces a new categorization system for these methods based on artificial intelligence (AI) algorithms, the goals of the framework, and the computing environments used. Additionally, it scrutinizes and contrasts the available TC datasets by their features. The paper highlights the importance of AI instruments in aiding the diagnosis and treatment of TC through supervised, unsupervised, or mixed approaches, with a special focus on the ongoing importance of transformers in medical diagnostics and disease management. It further discusses the progress made and the continuing obstacles in this area. Lastly, it explores future directions and focuses within this research field. |
1210.1572 | Hector Zenil | Hector Zenil | Turing Patterns with Turing Machines: Emergence and Low-level Structure
Formation | 27 pages, 14 figures. Forthcoming in Natural Computing | null | null | null | cs.CC nlin.CG nlin.PS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite having advanced a reaction-diffusion model of ODE's in his 1952 paper
on morphogenesis, reflecting his interest in mathematical biology, Alan Turing
has never been considered to have approached a definition of Cellular Automata.
However, his treatment of morphogenesis, and in particular a difficulty he
identified relating to the uneven distribution of certain forms as a result of
symmetry breaking, are key to connecting his theory of universal computation
with his theory of biological pattern formation. Making such a connection would
not overcome the particular difficulty that Turing was concerned about, which
has in any case been resolved in biology. But instead the approach developed
here captures Turing's initial concern and provides a low-level solution to a
more general question by way of the concept of algorithmic probability, thus
bridging two of his most important contributions to science: Turing pattern
formation and universal computation. I will provide experimental results of
one-dimensional patterns using this approach, with no loss of generality to a
n-dimensional pattern generalisation.
| [
{
"created": "Thu, 4 Oct 2012 19:51:48 GMT",
"version": "v1"
}
] | 2012-10-08 | [
[
"Zenil",
"Hector",
""
]
] | Despite having advanced a reaction-diffusion model of ODE's in his 1952 paper on morphogenesis, reflecting his interest in mathematical biology, Alan Turing has never been considered to have approached a definition of Cellular Automata. However, his treatment of morphogenesis, and in particular a difficulty he identified relating to the uneven distribution of certain forms as a result of symmetry breaking, are key to connecting his theory of universal computation with his theory of biological pattern formation. Making such a connection would not overcome the particular difficulty that Turing was concerned about, which has in any case been resolved in biology. But instead the approach developed here captures Turing's initial concern and provides a low-level solution to a more general question by way of the concept of algorithmic probability, thus bridging two of his most important contributions to science: Turing pattern formation and universal computation. I will provide experimental results of one-dimensional patterns using this approach, with no loss of generality to a n-dimensional pattern generalisation. |
2212.14678 | Princy Chahal | Princy Chahal | Exploring Transformer Backbones for Image Diffusion Models | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an end-to-end Transformer based Latent Diffusion model for image
synthesis. On the ImageNet class conditioned generation task we show that a
Transformer based Latent Diffusion model achieves a 14.1FID which is comparable
to the 13.1FID score of a UNet based architecture. In addition to showing the
application of Transformer models for Diffusion based image synthesis this
simplification in architecture allows easy fusion and modeling of text and
image data. The multi-head attention mechanism of Transformers enables
simplified interaction between the image and text features which removes the
requirement for cross-attention mechanism in UNet based Diffusion models.
| [
{
"created": "Tue, 27 Dec 2022 07:05:14 GMT",
"version": "v1"
}
] | 2023-01-02 | [
[
"Chahal",
"Princy",
""
]
] | We present an end-to-end Transformer based Latent Diffusion model for image synthesis. On the ImageNet class conditioned generation task we show that a Transformer based Latent Diffusion model achieves a 14.1FID which is comparable to the 13.1FID score of a UNet based architecture. In addition to showing the application of Transformer models for Diffusion based image synthesis this simplification in architecture allows easy fusion and modeling of text and image data. The multi-head attention mechanism of Transformers enables simplified interaction between the image and text features which removes the requirement for cross-attention mechanism in UNet based Diffusion models. |
1907.08610 | Michael Zhang | Michael R. Zhang, James Lucas, Geoffrey Hinton, Jimmy Ba | Lookahead Optimizer: k steps forward, 1 step back | Accepted to Neural Information Processing Systems 2019. Code
available at: https://github.com/michaelrzhang/lookahead | null | null | null | cs.LG cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The vast majority of successful deep neural networks are trained using
variants of stochastic gradient descent (SGD) algorithms. Recent attempts to
improve SGD can be broadly categorized into two approaches: (1) adaptive
learning rate schemes, such as AdaGrad and Adam, and (2) accelerated schemes,
such as heavy-ball and Nesterov momentum. In this paper, we propose a new
optimization algorithm, Lookahead, that is orthogonal to these previous
approaches and iteratively updates two sets of weights. Intuitively, the
algorithm chooses a search direction by looking ahead at the sequence of fast
weights generated by another optimizer. We show that Lookahead improves the
learning stability and lowers the variance of its inner optimizer with
negligible computation and memory cost. We empirically demonstrate Lookahead
can significantly improve the performance of SGD and Adam, even with their
default hyperparameter settings on ImageNet, CIFAR-10/100, neural machine
translation, and Penn Treebank.
| [
{
"created": "Fri, 19 Jul 2019 17:59:50 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Dec 2019 15:55:38 GMT",
"version": "v2"
}
] | 2019-12-04 | [
[
"Zhang",
"Michael R.",
""
],
[
"Lucas",
"James",
""
],
[
"Hinton",
"Geoffrey",
""
],
[
"Ba",
"Jimmy",
""
]
] | The vast majority of successful deep neural networks are trained using variants of stochastic gradient descent (SGD) algorithms. Recent attempts to improve SGD can be broadly categorized into two approaches: (1) adaptive learning rate schemes, such as AdaGrad and Adam, and (2) accelerated schemes, such as heavy-ball and Nesterov momentum. In this paper, we propose a new optimization algorithm, Lookahead, that is orthogonal to these previous approaches and iteratively updates two sets of weights. Intuitively, the algorithm chooses a search direction by looking ahead at the sequence of fast weights generated by another optimizer. We show that Lookahead improves the learning stability and lowers the variance of its inner optimizer with negligible computation and memory cost. We empirically demonstrate Lookahead can significantly improve the performance of SGD and Adam, even with their default hyperparameter settings on ImageNet, CIFAR-10/100, neural machine translation, and Penn Treebank. |
2004.14010 | Laura Zabawa | Laura Zabawa, Anna Kicherer, Lasse Klingbeil, Reinhard T\"opfer,
Heiner Kuhlmann, Ribana Roscher | Counting of Grapevine Berries in Images via Semantic Segmentation using
Convolutional Neural Networks | null | Journal of Photogrammetry and Remote Sensing, vol. 164,
pp.73-83,2020 | 10.1016/j.isprsjprs.2020.04.002 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The extraction of phenotypic traits is often very time and labour intensive.
Especially the investigation in viticulture is restricted to an on-site
analysis due to the perennial nature of grapevine. Traditionally skilled
experts examine small samples and extrapolate the results to a whole plot.
Thereby different grapevine varieties and training systems, e.g. vertical shoot
positioning (VSP) and semi minimal pruned hedges (SMPH) pose different
challenges. In this paper we present an objective framework based on automatic
image analysis which works on two different training systems. The images are
collected semi-automatically by a camera system which is installed in a modified
grape harvester. The system produces overlapping images from the sides of the
plants. Our framework uses a convolutional neural network to detect single
berries in images by performing a semantic segmentation. Each berry is then
counted with a connected component algorithm. We compare our results with the
Mask-RCNN, a state-of-the-art network for instance segmentation and with a
regression approach for counting. The experiments presented in this paper show
that we are able to detect green berries in images despite different
training systems. We achieve an accuracy for the berry detection of 94.0% in
the VSP and 85.6% in the SMPH.
| [
{
"created": "Wed, 29 Apr 2020 08:10:19 GMT",
"version": "v1"
}
] | 2020-04-30 | [
[
"Zabawa",
"Laura",
""
],
[
"Kicherer",
"Anna",
""
],
[
"Klingbeil",
"Lasse",
""
],
[
"Töpfer",
"Reinhard",
""
],
[
"Kuhlmann",
"Heiner",
""
],
[
"Roscher",
"Ribana",
""
]
] | The extraction of phenotypic traits is often very time and labour intensive. Especially the investigation in viticulture is restricted to an on-site analysis due to the perennial nature of grapevine. Traditionally skilled experts examine small samples and extrapolate the results to a whole plot. Thereby different grapevine varieties and training systems, e.g. vertical shoot positioning (VSP) and semi minimal pruned hedges (SMPH) pose different challenges. In this paper we present an objective framework based on automatic image analysis which works on two different training systems. The images are collected semi-automatically by a camera system which is installed in a modified grape harvester. The system produces overlapping images from the sides of the plants. Our framework uses a convolutional neural network to detect single berries in images by performing a semantic segmentation. Each berry is then counted with a connected component algorithm. We compare our results with the Mask-RCNN, a state-of-the-art network for instance segmentation and with a regression approach for counting. The experiments presented in this paper show that we are able to detect green berries in images despite different training systems. We achieve an accuracy for the berry detection of 94.0% in the VSP and 85.6% in the SMPH. |
2311.01423 | Jen-Hao Cheng | Jen-Hao Cheng, Sheng-Yao Kuan, Hugo Latapie, Gaowen Liu, Jenq-Neng
Hwang | CenterRadarNet: Joint 3D Object Detection and Tracking Framework using
4D FMCW Radar | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robust perception is a vital component for ensuring safe autonomous and
assisted driving. Automotive radar (77 to 81 GHz), which offers
weather-resilient sensing, provides a complementary capability to the vision-
or LiDAR-based autonomous driving systems. Raw radio-frequency (RF) radar
tensors contain rich spatiotemporal semantics besides 3D location information.
The majority of previous methods take in 3D (Doppler-range-azimuth) RF radar
tensors, allowing prediction of an object's location, heading angle, and size
in bird's-eye-view (BEV). However, they lack the ability to at the same time
infer objects' size, orientation, and identity in the 3D space. To overcome
this limitation, we propose an efficient joint architecture called
CenterRadarNet, designed to facilitate high-resolution representation learning
from 4D (Doppler-range-azimuth-elevation) radar data for 3D object detection
and re-identification (re-ID) tasks. As a single-stage 3D object detector,
CenterRadarNet directly infers the BEV object distribution confidence maps,
corresponding 3D bounding box attributes, and appearance embedding for each
pixel. Moreover, we build an online tracker utilizing the learned appearance
embedding for re-ID. CenterRadarNet achieves the state-of-the-art result on the
K-Radar 3D object detection benchmark. In addition, we present the first 3D
object-tracking result using radar on the K-Radar dataset V2. In diverse
driving scenarios, CenterRadarNet shows consistent, robust performance,
emphasizing its wide applicability.
| [
{
"created": "Thu, 2 Nov 2023 17:36:40 GMT",
"version": "v1"
},
{
"created": "Sat, 4 Nov 2023 21:30:42 GMT",
"version": "v2"
}
] | 2023-11-07 | [
[
"Cheng",
"Jen-Hao",
""
],
[
"Kuan",
"Sheng-Yao",
""
],
[
"Latapie",
"Hugo",
""
],
[
"Liu",
"Gaowen",
""
],
[
"Hwang",
"Jenq-Neng",
""
]
] | Robust perception is a vital component for ensuring safe autonomous and assisted driving. Automotive radar (77 to 81 GHz), which offers weather-resilient sensing, provides a complementary capability to the vision- or LiDAR-based autonomous driving systems. Raw radio-frequency (RF) radar tensors contain rich spatiotemporal semantics besides 3D location information. The majority of previous methods take in 3D (Doppler-range-azimuth) RF radar tensors, allowing prediction of an object's location, heading angle, and size in bird's-eye-view (BEV). However, they lack the ability to at the same time infer objects' size, orientation, and identity in the 3D space. To overcome this limitation, we propose an efficient joint architecture called CenterRadarNet, designed to facilitate high-resolution representation learning from 4D (Doppler-range-azimuth-elevation) radar data for 3D object detection and re-identification (re-ID) tasks. As a single-stage 3D object detector, CenterRadarNet directly infers the BEV object distribution confidence maps, corresponding 3D bounding box attributes, and appearance embedding for each pixel. Moreover, we build an online tracker utilizing the learned appearance embedding for re-ID. CenterRadarNet achieves the state-of-the-art result on the K-Radar 3D object detection benchmark. In addition, we present the first 3D object-tracking result using radar on the K-Radar dataset V2. In diverse driving scenarios, CenterRadarNet shows consistent, robust performance, emphasizing its wide applicability. |
1603.04726 | Amir Kiperwas | Amir Kiperwas, Daniel Rosenfeld and Yonina C. Eldar | The SPURS Algorithm for Resampling an Irregularly Sampled Signal onto a
Cartesian Grid | 16 pages, 21 figures | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an algorithm for resampling a function from its values on a
non-Cartesian grid onto a Cartesian grid. This problem arises in many
applications such as MRI, CT, radio astronomy and geophysics. Our algorithm,
termed SParse Uniform ReSampling (SPURS), employs methods from modern sampling
theory to achieve a small approximation error while maintaining low
computational cost. The given non-Cartesian samples are projected onto a
selected intermediate subspace, spanned by integer translations of a compactly
supported kernel function. This produces a sparse system of equations
describing the relation between the nonuniformly spaced samples and a vector of
coefficients representing the projection of the signal onto the chosen
subspace. This sparse system of equations can be solved efficiently using
available sparse equation solvers. The result is then projected onto the
subspace in which the sampled signal is known to reside. The second projection
is implemented efficiently using a digital linear shift invariant (LSI) filter
and produces uniformly spaced values of the signal on a Cartesian grid. The
method can be iterated to improve the reconstruction results. We then apply
SPURS to reconstruction of MRI data from nonuniformly spaced k-space samples.
Simulations demonstrate that SPURS outperforms other reconstruction methods
while maintaining a similar computational complexity over a range of sampling
densities and trajectories as well as various input SNR levels.
| [
{
"created": "Tue, 15 Mar 2016 15:49:27 GMT",
"version": "v1"
},
{
"created": "Wed, 16 Mar 2016 12:13:01 GMT",
"version": "v2"
}
] | 2016-03-17 | [
[
"Kiperwas",
"Amir",
""
],
[
"Rosenfeld",
"Daniel",
""
],
[
"Eldar",
"Yonina C.",
""
]
] | We present an algorithm for resampling a function from its values on a non-Cartesian grid onto a Cartesian grid. This problem arises in many applications such as MRI, CT, radio astronomy and geophysics. Our algorithm, termed SParse Uniform ReSampling (SPURS), employs methods from modern sampling theory to achieve a small approximation error while maintaining low computational cost. The given non-Cartesian samples are projected onto a selected intermediate subspace, spanned by integer translations of a compactly supported kernel function. This produces a sparse system of equations describing the relation between the nonuniformly spaced samples and a vector of coefficients representing the projection of the signal onto the chosen subspace. This sparse system of equations can be solved efficiently using available sparse equation solvers. The result is then projected onto the subspace in which the sampled signal is known to reside. The second projection is implemented efficiently using a digital linear shift invariant (LSI) filter and produces uniformly spaced values of the signal on a Cartesian grid. The method can be iterated to improve the reconstruction results. We then apply SPURS to reconstruction of MRI data from nonuniformly spaced k-space samples. Simulations demonstrate that SPURS outperforms other reconstruction methods while maintaining a similar computational complexity over a range of sampling densities and trajectories as well as various input SNR levels. |
2212.09219 | Mengying Chen | Mengying Chen, Wannian An, Yang Liu, Chen Dong, Xiaodong Xu, Boxiao
Han, Ping Zhang | Modeling and Performance Analysis of Single-Server Database Over
Quasi-static Rayleigh Fading Channel | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cloud database is the key technology in cloud computing. The effective and
efficient service quality of the cloud database is inseparable from
communication technology, just as improving communication quality will reduce
the concurrency phenomenon in the ticketing system. In order to visually
observe the impact of communication on the cloud database, we propose a
Communication-Database (C-D) Model with a single-server database over the
quasi-static Rayleigh fading channel, which consists of three parts: CLIENTS
SOURCE, COMMUNICATION SYSTEM and DATABASE SYSTEM. This paper uses the queuing
model, M/G/1//K, to model the whole system. The C-D Model is analyzed in two
cases: nonlinearity and linearity, which correspond to some instances of SISO
and MIMO. The simulation results of average staying time, average number of
transactions and other performance characteristics are basically consistent
with the theoretical results, which verifies the validity of the C-D Model. The
comparison of these experimental results also proves that poor communication
quality does lead to the reduction in the quality of service.
| [
{
"created": "Mon, 19 Dec 2022 02:44:03 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Dec 2022 03:39:41 GMT",
"version": "v2"
},
{
"created": "Tue, 17 Jan 2023 09:07:16 GMT",
"version": "v3"
}
] | 2023-01-18 | [
[
"Chen",
"Mengying",
""
],
[
"An",
"Wannian",
""
],
[
"Liu",
"Yang",
""
],
[
"Dong",
"Chen",
""
],
[
"Xu",
"Xiaodong",
""
],
[
"Han",
"Boxiao",
""
],
[
"Zhang",
"Ping",
""
]
] | Cloud database is the key technology in cloud computing. The effective and efficient service quality of the cloud database is inseparable from communication technology, just as improving communication quality will reduce the concurrency phenomenon in the ticketing system. In order to visually observe the impact of communication on the cloud database, we propose a Communication-Database (C-D) Model with a single-server database over the quasi-static Rayleigh fading channel, which consists of three parts: CLIENTS SOURCE, COMMUNICATION SYSTEM and DATABASE SYSTEM. This paper uses the queuing model, M/G/1//K, to model the whole system. The C-D Model is analyzed in two cases: nonlinearity and linearity, which correspond to some instances of SISO and MIMO. The simulation results of average staying time, average number of transactions and other performance characteristics are basically consistent with the theoretical results, which verifies the validity of the C-D Model. The comparison of these experimental results also proves that poor communication quality does lead to the reduction in the quality of service. |
2306.12274 | Enes Yigitbas | Enes Yigitbas, Alexander Nowosad, Gregor Engels | Supporting Construction and Architectural Visualization through BIM and
AR/VR: A Systematic Literature Review | Preprint of accepted paper at INTERACT'23 | null | null | null | cs.HC | http://creativecommons.org/licenses/by-sa/4.0/ | The Architecture, Engineering, Construction, and Facility Management (AEC/FM)
industry deals with the design, construction, and operation of complex
buildings. Today, Building Information Modeling (BIM) is used to represent
information about a building in a single, non-redundant representation. Here,
Augmented Reality (AR) and Virtual Reality (VR) can improve the visualization
and interaction with the resulting model by augmenting the real world with
information from the BIM model or allowing a user to immerse themselves in a virtual world
generated from the BIM model. This can improve the design, construction, and
operation of buildings. While an increasing number of studies in HCI,
construction, or engineering have shown the potential of using AR and VR
technology together with BIM, often research remains focused on individual
explorations and key design strategies. In addition, a systematic
overview and discussion of recent works combining AR/VR with BIM is still
missing. Therefore, this paper systematically reviews recent approaches
combining AR/VR with BIM and categorizes the literature by the building's
lifecycle phase while systematically describing relevant use cases. In total,
32 out of 447 papers between 2017 and 2022 were categorized. The categorization
shows that most approaches focus on the construction phase and the use case of
review and quality assurance. In the design phase, most approaches use VR,
while in the construction and operation phases, AR is prevalent.
| [
{
"created": "Wed, 21 Jun 2023 13:52:37 GMT",
"version": "v1"
}
] | 2023-06-22 | [
[
"Yigitbas",
"Enes",
""
],
[
"Nowosad",
"Alexander",
""
],
[
"Engels",
"Gregor",
""
]
] | The Architecture, Engineering, Construction, and Facility Management (AEC/FM) industry deals with the design, construction, and operation of complex buildings. Today, Building Information Modeling (BIM) is used to represent information about a building in a single, non-redundant representation. Here, Augmented Reality (AR) and Virtual Reality (VR) can improve the visualization and interaction with the resulting model by augmenting the real world with information from the BIM model or allowing a user to immerse themselves in a virtual world generated from the BIM model. This can improve the design, construction, and operation of buildings. While an increasing number of studies in HCI, construction, or engineering have shown the potential of using AR and VR technology together with BIM, often research remains focused on individual explorations and key design strategies. In addition, a systematic overview and discussion of recent works combining AR/VR with BIM is still missing. Therefore, this paper systematically reviews recent approaches combining AR/VR with BIM and categorizes the literature by the building's lifecycle phase while systematically describing relevant use cases. In total, 32 out of 447 papers between 2017 and 2022 were categorized. The categorization shows that most approaches focus on the construction phase and the use case of review and quality assurance. In the design phase, most approaches use VR, while in the construction and operation phases, AR is prevalent. |
2204.06450 | Soroosh Tayebi Arasteh | Soroosh Tayebi Arasteh, Tobias Weise, Maria Schuster, Elmar Noeth,
Andreas Maier, Seung Hee Yang | The effect of speech pathology on automatic speaker verification -- a
large-scale study | Published in Scientific Reports | Sci Rep 13, 20476 (2023) | 10.1038/s41598-023-47711-7 | null | cs.SD cs.LG eess.AS | http://creativecommons.org/licenses/by/4.0/ | Navigating the challenges of data-driven speech processing, one of the
primary hurdles is accessing reliable pathological speech data. While public
datasets appear to offer solutions, they come with inherent risks of potential
unintended exposure of patient health information via re-identification
attacks. Using a comprehensive real-world pathological speech corpus, with over
n=3,800 test subjects spanning various age groups and speech disorders, we
employed a deep-learning-driven automatic speaker verification (ASV) approach.
This resulted in a notable mean equal error rate (EER) of 0.89% with a standard
deviation of 0.06%, outstripping traditional benchmarks. Our comprehensive
assessments demonstrate that pathological speech overall faces heightened
privacy breach risks compared to healthy speech. Specifically, adults with
dysphonia are at heightened re-identification risks, whereas conditions like
dysarthria yield results comparable to those of healthy speakers. Crucially,
speech intelligibility does not influence the ASV system's performance metrics.
In pediatric cases, particularly those with cleft lip and palate, the recording
environment plays a decisive role in re-identification. Merging data across
pathological types led to a marked EER decrease, suggesting the potential
benefits of pathological diversity in ASV, accompanied by a logarithmic boost
in ASV effectiveness. In essence, this research sheds light on the dynamics
between pathological speech and speaker verification, emphasizing its crucial
role in safeguarding patient confidentiality in our increasingly digitized
healthcare era.
| [
{
"created": "Wed, 13 Apr 2022 15:17:00 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Dec 2022 00:20:50 GMT",
"version": "v2"
},
{
"created": "Wed, 22 Nov 2023 14:10:56 GMT",
"version": "v3"
}
] | 2023-11-23 | [
[
"Arasteh",
"Soroosh Tayebi",
""
],
[
"Weise",
"Tobias",
""
],
[
"Schuster",
"Maria",
""
],
[
"Noeth",
"Elmar",
""
],
[
"Maier",
"Andreas",
""
],
[
"Yang",
"Seung Hee",
""
]
] | Navigating the challenges of data-driven speech processing, one of the primary hurdles is accessing reliable pathological speech data. While public datasets appear to offer solutions, they come with inherent risks of potential unintended exposure of patient health information via re-identification attacks. Using a comprehensive real-world pathological speech corpus, with over n=3,800 test subjects spanning various age groups and speech disorders, we employed a deep-learning-driven automatic speaker verification (ASV) approach. This resulted in a notable mean equal error rate (EER) of 0.89% with a standard deviation of 0.06%, outstripping traditional benchmarks. Our comprehensive assessments demonstrate that pathological speech overall faces heightened privacy breach risks compared to healthy speech. Specifically, adults with dysphonia are at heightened re-identification risks, whereas conditions like dysarthria yield results comparable to those of healthy speakers. Crucially, speech intelligibility does not influence the ASV system's performance metrics. In pediatric cases, particularly those with cleft lip and palate, the recording environment plays a decisive role in re-identification. Merging data across pathological types led to a marked EER decrease, suggesting the potential benefits of pathological diversity in ASV, accompanied by a logarithmic boost in ASV effectiveness. In essence, this research sheds light on the dynamics between pathological speech and speaker verification, emphasizing its crucial role in safeguarding patient confidentiality in our increasingly digitized healthcare era. |
2302.13825 | Marco Favorito | Marco Favorito | Forward LTLf Synthesis: DPLL At Work | null | null | null | null | cs.LO cs.AI cs.FL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a new AND-OR graph search framework for synthesis of
Linear Temporal Logic on finite traces (\LTLf), that overcomes some limitations
of previous approaches. Within this framework, we devise a procedure inspired
by the Davis-Putnam-Logemann-Loveland (DPLL) algorithm to generate the next
available agent-environment moves in a truly depth-first fashion, possibly
avoiding exhaustive enumeration or costly compilations. We also propose a novel
equivalence check for search nodes based on syntactic equivalence of state
formulas. Since the resulting procedure is not guaranteed to terminate, we
identify a stopping condition to abort execution and restart the search with
state-equivalence checking based on Binary Decision Diagrams (BDD), which we
show to be correct. The experimental results show that in many cases the
proposed techniques outperform other state-of-the-art approaches. Our
implementation Nike competed in the LTLf Realizability Track in the 2023
edition of SYNTCOMP, and won the competition.
| [
{
"created": "Mon, 27 Feb 2023 14:33:50 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Jun 2023 17:02:21 GMT",
"version": "v2"
}
] | 2023-06-21 | [
[
"Favorito",
"Marco",
""
]
] | This paper proposes a new AND-OR graph search framework for synthesis of Linear Temporal Logic on finite traces (\LTLf), that overcomes some limitations of previous approaches. Within this framework, we devise a procedure inspired by the Davis-Putnam-Logemann-Loveland (DPLL) algorithm to generate the next available agent-environment moves in a truly depth-first fashion, possibly avoiding exhaustive enumeration or costly compilations. We also propose a novel equivalence check for search nodes based on syntactic equivalence of state formulas. Since the resulting procedure is not guaranteed to terminate, we identify a stopping condition to abort execution and restart the search with state-equivalence checking based on Binary Decision Diagrams (BDD), which we show to be correct. The experimental results show that in many cases the proposed techniques outperform other state-of-the-art approaches. Our implementation Nike competed in the LTLf Realizability Track in the 2023 edition of SYNTCOMP, and won the competition. |
2103.08368 | Hongxiang Yu | Hongxiang Yu, Dashun Guo, Huan Yin, Anzhe Chen, Kechun Xu, Yue Wang
and Rong Xiong | Neural Motion Prediction for In-flight Uneven Object Catching | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | In-flight object capture is extremely challenging. The robot is required to
complete trajectory prediction, interception position calculation and motion
planning in sequence within tens of milliseconds. As in-flight uneven objects
are affected by various kinds of forces, motion prediction is difficult for a
time-varying acceleration. In order to compensate for the system's non-linearity,
we introduce the Neural Acceleration Estimator (NAE) that estimates the varying
acceleration by observing a small fragment of previous deflected trajectory.
Moreover, end-to-end training with a Differentiable Filter (NAE-DF) provides
supervision for measurement uncertainty and further improves the prediction
accuracy. Experimental results show that motion prediction with NAE and NAE-DF
is superior to other methods and has good generalization performance on
unseen objects. We test our methods on a robot performing velocity control in
the real world and achieve success rates of 83.3% and 86.7% on a polyurethane
banana and a gourd, respectively. We also release an in-flight object dataset
containing 1,500 trajectories for uneven objects.
| [
{
"created": "Mon, 15 Mar 2021 13:16:28 GMT",
"version": "v1"
}
] | 2021-03-16 | [
[
"Yu",
"Hongxiang",
""
],
[
"Guo",
"Dashun",
""
],
[
"Yin",
"Huan",
""
],
[
"Chen",
"Anzhe",
""
],
[
"Xu",
"Kechun",
""
],
[
"Wang",
"Yue",
""
],
[
"Xiong",
"Rong",
""
]
] | In-flight object capture is extremely challenging. The robot is required to complete trajectory prediction, interception position calculation and motion planning in sequence within tens of milliseconds. As in-flight uneven objects are affected by various kinds of forces, motion prediction is difficult for a time-varying acceleration. In order to compensate for the system's non-linearity, we introduce the Neural Acceleration Estimator (NAE) that estimates the varying acceleration by observing a small fragment of previous deflected trajectory. Moreover, end-to-end training with a Differentiable Filter (NAE-DF) provides supervision for measurement uncertainty and further improves the prediction accuracy. Experimental results show that motion prediction with NAE and NAE-DF is superior to other methods and has good generalization performance on unseen objects. We test our methods on a robot performing velocity control in the real world and achieve success rates of 83.3% and 86.7% on a polyurethane banana and a gourd, respectively. We also release an in-flight object dataset containing 1,500 trajectories for uneven objects. |
2004.01832 | Avery Ma | Avery Ma, Fartash Faghri, Nicolas Papernot, Amir-massoud Farahmand | SOAR: Second-Order Adversarial Regularization | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adversarial training is a common approach to improving the robustness of deep
neural networks against adversarial examples. In this work, we propose a novel
regularization approach as an alternative. To derive the regularizer, we
formulate the adversarial robustness problem under the robust optimization
framework and approximate the loss function using a second-order Taylor series
expansion. Our proposed second-order adversarial regularizer (SOAR) is an upper
bound based on the Taylor approximation of the inner-max in the robust
optimization objective. We empirically show that the proposed method
significantly improves the robustness of networks against the $\ell_\infty$ and
$\ell_2$ bounded perturbations generated using cross-entropy-based PGD on
CIFAR-10 and SVHN.
| [
{
"created": "Sat, 4 Apr 2020 01:35:07 GMT",
"version": "v1"
},
{
"created": "Sun, 7 Feb 2021 22:52:49 GMT",
"version": "v2"
}
] | 2021-02-09 | [
[
"Ma",
"Avery",
""
],
[
"Faghri",
"Fartash",
""
],
[
"Papernot",
"Nicolas",
""
],
[
"Farahmand",
"Amir-massoud",
""
]
] | Adversarial training is a common approach to improving the robustness of deep neural networks against adversarial examples. In this work, we propose a novel regularization approach as an alternative. To derive the regularizer, we formulate the adversarial robustness problem under the robust optimization framework and approximate the loss function using a second-order Taylor series expansion. Our proposed second-order adversarial regularizer (SOAR) is an upper bound based on the Taylor approximation of the inner-max in the robust optimization objective. We empirically show that the proposed method significantly improves the robustness of networks against the $\ell_\infty$ and $\ell_2$ bounded perturbations generated using cross-entropy-based PGD on CIFAR-10 and SVHN. |
2404.08691 | Koffka Khan | Koffka Khan | Enhancing Adaptive Video Streaming through Fuzzy Logic-Based Content
Recommendation Systems: A Comprehensive Review and Future Directions | 7 pages | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | As the demand for high-quality video content continues to rise, adaptive
video streaming plays a pivotal role in delivering an optimal viewing
experience. However, traditional content recommendation systems face challenges
in dynamically adapting to users' preferences, content features, and contextual
information. This review paper explores the integration of fuzzy logic into
content recommendation systems for adaptive video streaming. Fuzzy logic, known
for handling uncertainty and imprecision, provides a promising framework for
modeling and accommodating the dynamic nature of user preferences and
contextual factors. The paper discusses the evolution of adaptive video
streaming, reviews traditional content recommendation algorithms, and
introduces fuzzy logic as a solution to enhance the adaptability of these
systems. Through a comprehensive exploration of case studies and applications,
the effectiveness of fuzzy logic in improving user satisfaction and system
performance is highlighted. The review also addresses challenges associated
with the integration of fuzzy logic and suggests future research directions to
further advance this approach. The proposed framework offers insights into a
dynamic and context-aware content recommendation system, contributing to the
evolution of adaptive video streaming technologies.
| [
{
"created": "Wed, 10 Apr 2024 02:58:54 GMT",
"version": "v1"
}
] | 2024-04-16 | [
[
"Khan",
"Koffka",
""
]
] | As the demand for high-quality video content continues to rise, adaptive video streaming plays a pivotal role in delivering an optimal viewing experience. However, traditional content recommendation systems face challenges in dynamically adapting to users' preferences, content features, and contextual information. This review paper explores the integration of fuzzy logic into content recommendation systems for adaptive video streaming. Fuzzy logic, known for handling uncertainty and imprecision, provides a promising framework for modeling and accommodating the dynamic nature of user preferences and contextual factors. The paper discusses the evolution of adaptive video streaming, reviews traditional content recommendation algorithms, and introduces fuzzy logic as a solution to enhance the adaptability of these systems. Through a comprehensive exploration of case studies and applications, the effectiveness of fuzzy logic in improving user satisfaction and system performance is highlighted. The review also addresses challenges associated with the integration of fuzzy logic and suggests future research directions to further advance this approach. The proposed framework offers insights into a dynamic and context-aware content recommendation system, contributing to the evolution of adaptive video streaming technologies. |
2407.06720 | Moritz Schubotz | David Carliste, Paul Libbrecht, Moritz Schubotz, Neil Soiffer | Author Intent: Eliminating Ambiguity in MathML | This preprint has not undergone peer review or any post-submission
improvements or corrections. The Version of Record of this contribution is
published in Int. Conf. on Computers Helping People with Special Needs will
be available online at TBD | null | null | null | cs.DL cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | MathML has been successful in improving the accessibility of mathematical
notation on the web. All major screen readers support MathML to generate
speech, allow navigation of the math, and generate braille. A troublesome area
remains: handling ambiguous notations such as \( \vert x\vert\). While it is
possible to speak this syntactically, anecdotal evidence indicates most people
prefer semantic speech such as ``absolute value of x'' or ``determinant of x''
instead of ``vertical bar x vertical bar'' when first hearing an expression.
Several heuristics to infer semantics have improved speech, but ultimately, the
author is the one who definitively knows how an expression is meant to be
spoken. The W3C Math Working Group is in the process of allowing authors to
convey their intent in MathML markup via an intent attribute. This paper
describes that work.
| [
{
"created": "Tue, 9 Jul 2024 09:50:16 GMT",
"version": "v1"
}
] | 2024-07-10 | [
[
"Carliste",
"David",
""
],
[
"Libbrecht",
"Paul",
""
],
[
"Schubotz",
"Moritz",
""
],
[
"Soiffer",
"Neil",
""
]
] | MathML has been successful in improving the accessibility of mathematical notation on the web. All major screen readers support MathML to generate speech, allow navigation of the math, and generate braille. A troublesome area remains: handling ambiguous notations such as \( \vert x\vert\). While it is possible to speak this syntactically, anecdotal evidence indicates most people prefer semantic speech such as ``absolute value of x'' or ``determinant of x'' instead of ``vertical bar x vertical bar'' when first hearing an expression. Several heuristics to infer semantics have improved speech, but ultimately, the author is the one who definitively knows how an expression is meant to be spoken. The W3C Math Working Group is in the process of allowing authors to convey their intent in MathML markup via an intent attribute. This paper describes that work. |
2212.11140 | Hammond Pearce | Shailja Thakur, Baleegh Ahmad, Zhenxing Fan, Hammond Pearce, Benjamin
Tan, Ramesh Karri, Brendan Dolan-Gavitt, Siddharth Garg | Benchmarking Large Language Models for Automated Verilog RTL Code
Generation | Accepted in DATE 2023. 7 pages, 4 tables, 7 figures | null | null | null | cs.PL cs.LG cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automating hardware design could eliminate a significant amount of human error
from the engineering process and lead to fewer defects. Verilog is a popular
hardware description language to model and design digital systems; thus,
generating Verilog code is a critical first step. Emerging large language
models (LLMs) are able to write high-quality code in other programming
languages. In this paper, we characterize the ability of LLMs to generate
useful Verilog. For this, we fine-tune pre-trained LLMs on Verilog datasets
collected from GitHub and Verilog textbooks. We construct an evaluation
framework comprising test-benches for functional analysis and a flow to test
the syntax of Verilog code generated in response to problems of varying
difficulty. Our findings show that across our problem scenarios, the
fine-tuning results in LLMs more capable of producing syntactically correct
code (25.9% overall). Further, when analyzing functional correctness, a
fine-tuned open-source CodeGen LLM can outperform the state-of-the-art
commercial Codex LLM (6.5% overall). Training/evaluation scripts and LLM
checkpoints are available: https://github.com/shailja-thakur/VGen.
| [
{
"created": "Tue, 13 Dec 2022 16:34:39 GMT",
"version": "v1"
}
] | 2022-12-22 | [
[
"Thakur",
"Shailja",
""
],
[
"Ahmad",
"Baleegh",
""
],
[
"Fan",
"Zhenxing",
""
],
[
"Pearce",
"Hammond",
""
],
[
"Tan",
"Benjamin",
""
],
[
"Karri",
"Ramesh",
""
],
[
"Dolan-Gavitt",
"Brendan",
""
],
[
"Garg",
"Siddharth",
""
]
] | Automating hardware design could eliminate a significant amount of human error from the engineering process and lead to fewer defects. Verilog is a popular hardware description language to model and design digital systems; thus, generating Verilog code is a critical first step. Emerging large language models (LLMs) are able to write high-quality code in other programming languages. In this paper, we characterize the ability of LLMs to generate useful Verilog. For this, we fine-tune pre-trained LLMs on Verilog datasets collected from GitHub and Verilog textbooks. We construct an evaluation framework comprising test-benches for functional analysis and a flow to test the syntax of Verilog code generated in response to problems of varying difficulty. Our findings show that across our problem scenarios, the fine-tuning results in LLMs more capable of producing syntactically correct code (25.9% overall). Further, when analyzing functional correctness, a fine-tuned open-source CodeGen LLM can outperform the state-of-the-art commercial Codex LLM (6.5% overall). Training/evaluation scripts and LLM checkpoints are available: https://github.com/shailja-thakur/VGen. |
2101.06968 | Javier Fumanal-Idocin Mr. | Javier Fumanal-Idocin, Yu-Kai Wang, Chin-Teng Lin, Javier Fern\'andez,
Jose Antonio Sanz, Humberto Bustince | Motor-Imagery-Based Brain Computer Interface using Signal Derivation and
Aggregation Functions | IEEE Transactions on Cybernetics (2021) | null | 10.1109/TCYB.2021.3073210 | null | cs.HC cs.AI cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | Brain Computer Interface (BCI) technologies are popular methods of communication
between the human brain and external devices. One of the most popular
approaches to BCI is Motor Imagery (MI). In BCI applications,
ElectroEncephaloGraphy (EEG) is a very popular measurement for brain dynamics because
of its non-invasive nature. Although there is a high interest in the BCI topic,
the performance of existing systems is still far from ideal, due to the
difficulty of performing pattern recognition tasks in EEG signals. BCI systems
are composed of a wide range of components that perform signal pre-processing,
feature extraction and decision making. In this paper, we define a BCI
Framework, named Enhanced Fusion Framework, where we propose three different
ideas to improve the existing MI-based BCI frameworks. Firstly, we include an
additional pre-processing step of the signal: a differentiation of the EEG
signal that makes it time-invariant. Secondly, we add an additional frequency
band as a feature for the system and show its effect on the performance of the
system. Finally, we make a profound study of how to make the final decision in
the system. We propose using both up to six different types of classifiers
and a wide range of aggregation functions (including classical
aggregations, Choquet and Sugeno integrals and their extensions and overlap
functions) to fuse the information given by the considered classifiers. We have
tested this new system on a dataset of 20 volunteers performing motor
imagery-based brain-computer interface experiments. On this dataset, the new
system achieved an accuracy of 88.80%. We also propose an optimized version of
our system that is able to reach up to 90.76%. Furthermore, we find that the
pair Choquet/Sugeno integrals and overlap functions are the ones providing the
best results.
| [
{
"created": "Mon, 18 Jan 2021 10:14:01 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Jun 2021 08:41:37 GMT",
"version": "v2"
}
] | 2021-06-03 | [
[
"Fumanal-Idocin",
"Javier",
""
],
[
"Wang",
"Yu-Kai",
""
],
[
"Lin",
"Chin-Teng",
""
],
[
"Fernández",
"Javier",
""
],
[
"Sanz",
"Jose Antonio",
""
],
[
"Bustince",
"Humberto",
""
]
] | Brain Computer Interface (BCI) technologies are popular methods of communication between the human brain and external devices. One of the most popular approaches to BCI is Motor Imagery (MI). In BCI applications, ElectroEncephaloGraphy (EEG) is a very popular measurement for brain dynamics because of its non-invasive nature. Although there is a high interest in the BCI topic, the performance of existing systems is still far from ideal, due to the difficulty of performing pattern recognition tasks in EEG signals. BCI systems are composed of a wide range of components that perform signal pre-processing, feature extraction and decision making. In this paper, we define a BCI Framework, named Enhanced Fusion Framework, where we propose three different ideas to improve the existing MI-based BCI frameworks. Firstly, we include an additional pre-processing step of the signal: a differentiation of the EEG signal that makes it time-invariant. Secondly, we add an additional frequency band as a feature for the system and show its effect on the performance of the system. Finally, we make a profound study of how to make the final decision in the system. We propose using both up to six different types of classifiers and a wide range of aggregation functions (including classical aggregations, Choquet and Sugeno integrals and their extensions and overlap functions) to fuse the information given by the considered classifiers. We have tested this new system on a dataset of 20 volunteers performing motor imagery-based brain-computer interface experiments. On this dataset, the new system achieved an accuracy of 88.80%. We also propose an optimized version of our system that is able to reach up to 90.76%. Furthermore, we find that the pair Choquet/Sugeno integrals and overlap functions are the ones providing the best results. |
2401.14420 | Md Arif Hassan | Md Arif Hassan, Cong T. Nguyen, Chi-Hieu Nguyen, Dinh Thai Hoang, Diep
N. Nguyen and Eryk Dutkiewicz | A Novel Blockchain Based Information Management Framework for Web 3.0 | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Web 3.0 is the third generation of the World Wide Web (WWW), concentrating on
the critical concepts of decentralization, availability, and increasing client
usability. Although Web 3.0 is undoubtedly an essential component of the future
Internet, it currently faces critical challenges, including decentralized data
collection and management. To overcome these challenges, blockchain has emerged
as one of the core technologies for the future development of Web 3.0. In this
paper, we propose a novel blockchain-based information management framework,
namely Smart Blockchain-based Web, to manage information in Web 3.0
effectively, enhance the security and privacy of users data, bring additional
profits, and incentivize users to contribute information to the websites.
Particularly, SBW utilizes blockchain technology and smart contracts to manage
the decentralized data collection process for Web 3.0 effectively. Moreover, in
this framework, we develop an effective consensus mechanism based on
Proof-of-Stake to reward the user's information contribution and conduct game
theoretical analysis to analyze the users' behavior in the considered system.
Additionally, we conduct simulations to assess the performance of SBW and
investigate the impact of critical parameters on information contribution. The
findings confirm our theoretical analysis and demonstrate that our proposed
consensus mechanism can incentivize the nodes and users to contribute more
information to our systems.
| [
{
"created": "Tue, 23 Jan 2024 12:34:02 GMT",
"version": "v1"
}
] | 2024-01-29 | [
[
"Hassan",
"Md Arif",
""
],
[
"Nguyen",
"Cong T.",
""
],
[
"Nguyen",
"Chi-Hieu",
""
],
[
"Hoang",
"Dinh Thai",
""
],
[
"Nguyen",
"Diep N.",
""
],
[
"Dutkiewicz",
"Eryk",
""
]
] | Web 3.0 is the third generation of the World Wide Web (WWW), concentrating on the critical concepts of decentralization, availability, and increasing client usability. Although Web 3.0 is undoubtedly an essential component of the future Internet, it currently faces critical challenges, including decentralized data collection and management. To overcome these challenges, blockchain has emerged as one of the core technologies for the future development of Web 3.0. In this paper, we propose a novel blockchain-based information management framework, namely Smart Blockchain-based Web (SBW), to manage information in Web 3.0 effectively, enhance the security and privacy of users' data, bring additional profits, and incentivize users to contribute information to the websites. Particularly, SBW utilizes blockchain technology and smart contracts to manage the decentralized data collection process for Web 3.0 effectively. Moreover, in this framework, we develop an effective consensus mechanism based on Proof-of-Stake to reward the user's information contribution and conduct game theoretical analysis to analyze the users' behavior in the considered system. Additionally, we conduct simulations to assess the performance of SBW and investigate the impact of critical parameters on information contribution. The findings confirm our theoretical analysis and demonstrate that our proposed consensus mechanism can incentivize the nodes and users to contribute more information to our systems. |
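The stake-weighted selection at the heart of Proof-of-Stake schemes such as the one this abstract describes can be sketched as follows. This is a generic illustration, not the SBW consensus itself; the node names and stake values are hypothetical, and stake here stands in for a node's recorded information contribution.

```python
import random

def select_proposer(stakes, rng=None):
    """Pick the next block proposer with probability proportional to stake.

    stakes: dict mapping node id -> non-negative stake (a stand-in for
    the node's recorded information contribution in this sketch).
    """
    rng = rng or random.Random()
    nodes, weights = zip(*stakes.items())
    return rng.choices(nodes, weights=weights, k=1)[0]

# Hypothetical nodes: 'b' holds all the stake, so it is always selected.
print(select_proposer({"a": 0.0, "b": 5.0}))  # b
```

Rewarding contribution then amounts to increasing a node's stake when it adds information, which raises its future selection probability under this rule.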