id stringlengths 9 14 | submitter stringlengths 1 64 ⌀ | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 1 609 ⌀ | journal-ref stringlengths 4 404 ⌀ | doi stringlengths 12 153 ⌀ | report-no stringlengths 2 254 ⌀ | categories stringlengths 5 112 | license stringclasses 9 values | orig_abstract stringlengths 14 3.76k | versions listlengths 1 60 | update_date stringlengths 10 10 | authors_parsed listlengths 1 535 | abstract stringlengths 11 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0907.3734 | Nikolay Nikolov | Nikolay M. Nikolov | Renormalization theory of Feynman amplitudes on configuration spaces | 20 pages | null | null | null | hep-th math-ph math.MP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a previous paper "Anomalies in Quantum Field Theory and Cohomologies of
Configuration Spaces" (arXiv:0903.0187) we presented a new method for
renormalization in Euclidean configuration spaces based on certain
renormalization maps. This approach is intended to serve as the basis of an
algebraic algorithm for computing the Gell--Mann--Low renormalization group
action. In the present work we introduce a modification of the theory of
renormalization maps for the case of Minkowski space and show how it is
combined with causal perturbation theory.
| [
{
"created": "Wed, 22 Jul 2009 19:57:56 GMT",
"version": "v1"
}
] | 2009-07-23 | [
[
"Nikolov",
"Nikolay M.",
""
]
] | In a previous paper "Anomalies in Quantum Field Theory and Cohomologies of Configuration Spaces" (arXiv:0903.0187) we presented a new method for renormalization in Euclidean configuration spaces based on certain renormalization maps. This approach is intended to serve as the basis of an algebraic algorithm for computing the Gell--Mann--Low renormalization group action. In the present work we introduce a modification of the theory of renormalization maps for the case of Minkowski space and show how it is combined with causal perturbation theory. |
2009.12678 | Benjamin Busam | Benjamin Busam and Hyun Jun Jung and Nassir Navab | I Like to Move It: 6D Pose Estimation as an Action Decision Process | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object pose estimation is an integral part of robot vision and AR. Previous
6D pose retrieval pipelines treat the problem either as a regression task or
discretize the pose space to classify. We change this paradigm and reformulate
the problem as an action decision process where an initial pose is updated in
incremental discrete steps that sequentially move a virtual 3D rendering
towards the correct solution. A neural network iteratively estimates likely
moves from a single RGB image and thereby determines an acceptable final pose. In
comparison to other approaches that train object-specific pose models, we learn
a decision process. This allows for a lightweight architecture while it
naturally generalizes to unseen objects. A coherent stop action for process
termination enables dynamic reduction of the computation cost if there are
insignificant changes in a video sequence. Instead of a static inference time,
we thereby automatically increase the runtime depending on the object motion.
Robustness and accuracy of our action decision network are evaluated on Laval
and YCB video scenes where we significantly improve the state-of-the-art.
| [
{
"created": "Sat, 26 Sep 2020 20:05:42 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Nov 2020 19:03:28 GMT",
"version": "v2"
}
] | 2020-12-02 | [
[
"Busam",
"Benjamin",
""
],
[
"Jung",
"Hyun Jun",
""
],
[
"Navab",
"Nassir",
""
]
] | Object pose estimation is an integral part of robot vision and AR. Previous 6D pose retrieval pipelines treat the problem either as a regression task or discretize the pose space to classify. We change this paradigm and reformulate the problem as an action decision process where an initial pose is updated in incremental discrete steps that sequentially move a virtual 3D rendering towards the correct solution. A neural network iteratively estimates likely moves from a single RGB image and thereby determines an acceptable final pose. In comparison to other approaches that train object-specific pose models, we learn a decision process. This allows for a lightweight architecture while it naturally generalizes to unseen objects. A coherent stop action for process termination enables dynamic reduction of the computation cost if there are insignificant changes in a video sequence. Instead of a static inference time, we thereby automatically increase the runtime depending on the object motion. Robustness and accuracy of our action decision network are evaluated on Laval and YCB video scenes where we significantly improve the state-of-the-art. |
2307.00495 | Xunlian Luo | Xunlian Luo, Chunjiang Zhu, Detian Zhang, Qing Li | STG4Traffic: A Survey and Benchmark of Spatial-Temporal Graph Neural
Networks for Traffic Prediction | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traffic prediction has been an active research topic in the domain of
spatial-temporal data mining. Accurate real-time traffic prediction is
essential to improve the safety, stability, and versatility of smart city
systems, e.g., traffic control and optimal routing. The complex and highly
dynamic spatial-temporal dependencies, however, still pose many challenges to
effective prediction. Recent studies have shown that spatial-temporal graph
neural networks exhibit great potential for traffic prediction: they combine
sequential models with graph convolutional networks to jointly model temporal
and spatial correlations. However, a survey of graph learning and
spatial-temporal graph models for traffic, as well as a fair comparison of
baseline models, remain open and pressing issues. In this paper, we first
provide a systematic review of graph learning strategies and commonly used
graph convolution algorithms. Then we conduct a comprehensive analysis of the
strengths and weaknesses of recently proposed spatial-temporal graph network
models. Furthermore, we build a study called STG4Traffic using the deep
learning framework PyTorch to establish a standardized and scalable benchmark
on two types of traffic datasets. Model performance can then be evaluated with
uniform metrics while personalizing each model's settings. Finally, we point out
some problems in the current study and discuss future directions. Source codes
are available at https://github.com/trainingl/STG4Traffic.
| [
{
"created": "Sun, 2 Jul 2023 06:56:52 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Jun 2024 14:11:01 GMT",
"version": "v2"
}
] | 2024-06-19 | [
[
"Luo",
"Xunlian",
""
],
[
"Zhu",
"Chunjiang",
""
],
[
"Zhang",
"Detian",
""
],
[
"Li",
"Qing",
""
]
] | Traffic prediction has been an active research topic in the domain of spatial-temporal data mining. Accurate real-time traffic prediction is essential to improve the safety, stability, and versatility of smart city systems, e.g., traffic control and optimal routing. The complex and highly dynamic spatial-temporal dependencies, however, still pose many challenges to effective prediction. Recent studies have shown that spatial-temporal graph neural networks exhibit great potential for traffic prediction: they combine sequential models with graph convolutional networks to jointly model temporal and spatial correlations. However, a survey of graph learning and spatial-temporal graph models for traffic, as well as a fair comparison of baseline models, remain open and pressing issues. In this paper, we first provide a systematic review of graph learning strategies and commonly used graph convolution algorithms. Then we conduct a comprehensive analysis of the strengths and weaknesses of recently proposed spatial-temporal graph network models. Furthermore, we build a study called STG4Traffic using the deep learning framework PyTorch to establish a standardized and scalable benchmark on two types of traffic datasets. Model performance can then be evaluated with uniform metrics while personalizing each model's settings. Finally, we point out some problems in the current study and discuss future directions. Source codes are available at https://github.com/trainingl/STG4Traffic. |
2407.12512 | Fengyu Cai | Fengyu Cai, Xinran Zhao, Hongming Zhang, Iryna Gurevych, Heinz Koeppl | $\textit{GeoHard}$: Towards Measuring Class-wise Hardness through
Modelling Class Semantics | Findings of ACL 2024 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Recent advances in measuring hardness-wise properties of data guide language
models in sample selection within low-resource scenarios. However,
class-specific properties are overlooked for task setup and learning. How do
these properties influence model learning, and is their effect generalizable across
datasets? To answer this question, this work formally initiates the concept of
$\textit{class-wise hardness}$. Experiments across eight natural language
understanding (NLU) datasets demonstrate a consistent hardness distribution
across learning paradigms, models, and human judgment. Subsequent experiments
unveil a notable challenge in measuring such class-wise hardness with
instance-level metrics in previous works. To address this, we propose
$\textit{GeoHard}$ for class-wise hardness measurement by modeling class
geometry in the semantic embedding space. $\textit{GeoHard}$ surpasses
instance-level metrics by over 59 percent in $\textit{Pearson}$'s correlation
for measuring class-wise hardness. Our analysis theoretically and empirically
underscores the generality of $\textit{GeoHard}$ as a fresh perspective on data
diagnosis. Additionally, we showcase how understanding class-wise hardness can
practically aid in improving task learning.
| [
{
"created": "Wed, 17 Jul 2024 11:53:39 GMT",
"version": "v1"
}
] | 2024-07-18 | [
[
"Cai",
"Fengyu",
""
],
[
"Zhao",
"Xinran",
""
],
[
"Zhang",
"Hongming",
""
],
[
"Gurevych",
"Iryna",
""
],
[
"Koeppl",
"Heinz",
""
]
] | Recent advances in measuring hardness-wise properties of data guide language models in sample selection within low-resource scenarios. However, class-specific properties are overlooked for task setup and learning. How do these properties influence model learning, and is their effect generalizable across datasets? To answer this question, this work formally initiates the concept of $\textit{class-wise hardness}$. Experiments across eight natural language understanding (NLU) datasets demonstrate a consistent hardness distribution across learning paradigms, models, and human judgment. Subsequent experiments unveil a notable challenge in measuring such class-wise hardness with instance-level metrics in previous works. To address this, we propose $\textit{GeoHard}$ for class-wise hardness measurement by modeling class geometry in the semantic embedding space. $\textit{GeoHard}$ surpasses instance-level metrics by over 59 percent in $\textit{Pearson}$'s correlation for measuring class-wise hardness. Our analysis theoretically and empirically underscores the generality of $\textit{GeoHard}$ as a fresh perspective on data diagnosis. Additionally, we showcase how understanding class-wise hardness can practically aid in improving task learning. |
2302.07654 | Marcel Wasserer | Anton R. Fuxj\"ager, Kristian Kozak, Matthias Dorfer, Patrick M.
Blies, Marcel Wasserer (enliteAI) | Reinforcement Learning Based Power Grid Day-Ahead Planning and
AI-Assisted Control | null | null | null | null | cs.AI cs.LG cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ongoing transition to renewable energy is increasing the share of
fluctuating power sources like wind and solar, raising power grid volatility
and making grid operation increasingly complex and costly. In our prior work,
we have introduced a congestion management approach consisting of a
redispatching optimizer combined with a machine learning-based topology
optimization agent. Compared to a typical redispatching-only agent, it was able
to keep a simulated grid in operation longer while at the same time reducing
operational cost. Our approach also ranked 1st in the L2RPN 2022 competition
initiated by RTE, Europe's largest grid operator. The aim of this paper is to
bring this promising technology closer to the real world of power grid
operation. We deploy RL-based agents in two settings resembling established
workflows, AI-assisted day-ahead planning and real-time control, in an attempt
to show the benefits and caveats of this new technology. We then analyse
congestion, redispatching, and switching profiles, and perform an elementary
sensitivity analysis that provides a glimpse of operational robustness. While there is still a
long way to a real control room, we believe that this paper and the associated
prototypes help to narrow the gap and pave the way for a safe deployment of RL
agents in tomorrow's power grids.
| [
{
"created": "Wed, 15 Feb 2023 13:38:40 GMT",
"version": "v1"
}
] | 2023-02-16 | [
[
"Fuxjäger",
"Anton R.",
"",
"enliteAI"
],
[
"Kozak",
"Kristian",
"",
"enliteAI"
],
[
"Dorfer",
"Matthias",
"",
"enliteAI"
],
[
"Blies",
"Patrick M.",
"",
"enliteAI"
],
[
"Wasserer",
"Marcel",
"",
"enliteAI"
]
] | The ongoing transition to renewable energy is increasing the share of fluctuating power sources like wind and solar, raising power grid volatility and making grid operation increasingly complex and costly. In our prior work, we have introduced a congestion management approach consisting of a redispatching optimizer combined with a machine learning-based topology optimization agent. Compared to a typical redispatching-only agent, it was able to keep a simulated grid in operation longer while at the same time reducing operational cost. Our approach also ranked 1st in the L2RPN 2022 competition initiated by RTE, Europe's largest grid operator. The aim of this paper is to bring this promising technology closer to the real world of power grid operation. We deploy RL-based agents in two settings resembling established workflows, AI-assisted day-ahead planning and real-time control, in an attempt to show the benefits and caveats of this new technology. We then analyse congestion, redispatching, and switching profiles, and perform an elementary sensitivity analysis that provides a glimpse of operational robustness. While there is still a long way to a real control room, we believe that this paper and the associated prototypes help to narrow the gap and pave the way for a safe deployment of RL agents in tomorrow's power grids. |
2310.00981 | Xixi Lu | Bart J. Verhoef and Xixi Lu | Using Reinforcement Learning to Optimize Responses in Care Processes: A
Case Study on Aggression Incidents | null | null | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Previous studies have used prescriptive process monitoring to find actionable
policies in business processes and conducted case studies in similar domains,
such as the loan application process and the traffic fine process. However,
care processes tend to be more dynamic and complex. For example, at any stage
of a care process, a multitude of actions is possible. In this paper, we follow
the reinforcement learning approach and train a Markov decision process using event data
from a care process. The goal was to find optimal policies for staff members
when clients are displaying any type of aggressive behavior. We used the
reinforcement learning algorithms Q-learning and SARSA to find optimal
policies. Results showed that the policies derived from these algorithms are
similar to the most frequent actions currently used but provide the staff
members with a few more options in certain situations.
| [
{
"created": "Mon, 2 Oct 2023 08:43:29 GMT",
"version": "v1"
}
] | 2023-10-03 | [
[
"Verhoef",
"Bart J.",
""
],
[
"Lu",
"Xixi",
""
]
] | Previous studies have used prescriptive process monitoring to find actionable policies in business processes and conducted case studies in similar domains, such as the loan application process and the traffic fine process. However, care processes tend to be more dynamic and complex. For example, at any stage of a care process, a multitude of actions is possible. In this paper, we follow the reinforcement learning approach and train a Markov decision process using event data from a care process. The goal was to find optimal policies for staff members when clients are displaying any type of aggressive behavior. We used the reinforcement learning algorithms Q-learning and SARSA to find optimal policies. Results showed that the policies derived from these algorithms are similar to the most frequent actions currently used but provide the staff members with a few more options in certain situations. |
2110.10481 | Guanjie Huang | Guanjie Huang, Hongjian He, Xiang Li, Xingchen Li, Ziang Liu | Unified Style Transfer | 9 pages, 5 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Currently, it is hard to compare and evaluate different style transfer
algorithms due to chaotic definitions of style and the absence of agreed
objective validation methods in the study of style transfer. In this paper, a
novel approach, the Unified Style Transfer (UST) model, is proposed. With the
introduction of a generative model for internal style representation, UST can
transfer images in two approaches, i.e., Domain-based and Image-based,
simultaneously. At the same time, a new philosophy for evaluating the transfer
model, called Statistical Style Analysis, based on the human sense of art and
style distributions, is presented and demonstrated. It provides a new path to
validate style transfer models' feasibility by validating the general
consistency between internal style representation and art facts. Besides, the
translation-invariance of AdaIN features is also discussed.
| [
{
"created": "Wed, 20 Oct 2021 10:45:38 GMT",
"version": "v1"
}
] | 2021-10-22 | [
[
"Huang",
"Guanjie",
""
],
[
"He",
"Hongjian",
""
],
[
"Li",
"Xiang",
""
],
[
"Li",
"Xingchen",
""
],
[
"Liu",
"Ziang",
""
]
] | Currently, it is hard to compare and evaluate different style transfer algorithms due to chaotic definitions of style and the absence of agreed objective validation methods in the study of style transfer. In this paper, a novel approach, the Unified Style Transfer (UST) model, is proposed. With the introduction of a generative model for internal style representation, UST can transfer images in two approaches, i.e., Domain-based and Image-based, simultaneously. At the same time, a new philosophy for evaluating the transfer model, called Statistical Style Analysis, based on the human sense of art and style distributions, is presented and demonstrated. It provides a new path to validate style transfer models' feasibility by validating the general consistency between internal style representation and art facts. Besides, the translation-invariance of AdaIN features is also discussed. |
2003.04195 | Piji Li | Piji Li | An Empirical Investigation of Pre-Trained Transformer Language Models
for Open-Domain Dialogue Generation | 26 pages | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an empirical investigation of pre-trained Transformer-based
auto-regressive language models for the task of open-domain dialogue
generation. The training paradigm of pre-training and fine-tuning is employed to
conduct the parameter learning. Corpora of News and Wikipedia in Chinese and
English are collected for the pre-training stage respectively. Dialogue context
and response are concatenated into a single sequence utilized as the input of
the models during the fine-tuning stage. A weighted joint prediction paradigm
for both context and response is designed to evaluate the performance of models
with or without the loss term for context prediction. Various decoding
strategies, such as greedy search, beam search, and top-k sampling, are
employed to conduct the response text generation. Extensive experiments are
conducted on the typical single-turn and multi-turn dialogue corpora such as
Weibo, Douban, Reddit, DailyDialog, and Persona-Chat. Detailed numbers of
automatic evaluation metrics on relevance and diversity of the generated
results for the language models as well as the baseline approaches are
reported.
| [
{
"created": "Mon, 9 Mar 2020 15:20:21 GMT",
"version": "v1"
}
] | 2020-03-10 | [
[
"Li",
"Piji",
""
]
] | We present an empirical investigation of pre-trained Transformer-based auto-regressive language models for the task of open-domain dialogue generation. The training paradigm of pre-training and fine-tuning is employed to conduct the parameter learning. Corpora of News and Wikipedia in Chinese and English are collected for the pre-training stage respectively. Dialogue context and response are concatenated into a single sequence utilized as the input of the models during the fine-tuning stage. A weighted joint prediction paradigm for both context and response is designed to evaluate the performance of models with or without the loss term for context prediction. Various decoding strategies, such as greedy search, beam search, and top-k sampling, are employed to conduct the response text generation. Extensive experiments are conducted on the typical single-turn and multi-turn dialogue corpora such as Weibo, Douban, Reddit, DailyDialog, and Persona-Chat. Detailed numbers of automatic evaluation metrics on relevance and diversity of the generated results for the language models as well as the baseline approaches are reported. |
1507.08467 | David Sousa-Rodrigues | Cristian Jimenez-Romero and David Sousa-Rodrigues and Jeffrey H.
Johnson and Vitorino Ramos | A Model for Foraging Ants, Controlled by Spiking Neural Networks and
Double Pheromones | This work has been accepted for presentation at the UK Workshop on
Computational Intelligence --- University of Exeter, September 2015
http://www.ukci2015.ex.ac.uk/ | null | null | null | cs.NE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A model of an Ant System where ants are controlled by a spiking neural
circuit and a second order pheromone mechanism in a foraging task is presented.
A neural circuit is trained for individual ants and subsequently the ants are
exposed to a virtual environment where a swarm of ants performs a resource
foraging task. The model comprises an associative and unsupervised learning
strategy for the neural circuit of the ant. The neural circuit adapts to the
environment by means of classical conditioning. The initially unknown
environment includes different types of stimuli representing food and obstacles
which, when they come in direct contact with the ant, elicit a reflex response
in the motor neural system of the ant: moving towards or away from the source
of the stimulus. The ants are released on a landscape with multiple food
sources where one ant alone would have difficulty harvesting the landscape to
maximum efficiency. The introduction of a double pheromone mechanism yields
better results than traditional ant colony optimization strategies. Traditional
ant systems include mainly a positive reinforcement pheromone. This approach
uses a second pheromone that acts as a marker for forbidden paths (negative
feedback). This blockade is not permanent and is controlled by the evaporation
rate of the pheromones. The combined action of both pheromones acts as a
collective stigmergic memory of the swarm, which reduces the search space of
the problem. This paper explores how the adaptation and learning abilities
observed in biologically inspired cognitive architectures are synergistically
enhanced by swarm optimization strategies. The model portrays two forms of
artificially intelligent behaviour: at the individual level the spiking neural
network is the main controller and at the collective level the pheromone
distribution is a map towards the solution that emerges from the colony.
| [
{
"created": "Thu, 30 Jul 2015 11:57:54 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Aug 2015 09:25:03 GMT",
"version": "v2"
},
{
"created": "Fri, 18 Sep 2015 14:17:39 GMT",
"version": "v3"
}
] | 2015-09-21 | [
[
"Jimenez-Romero",
"Cristian",
""
],
[
"Sousa-Rodrigues",
"David",
""
],
[
"Johnson",
"Jeffrey H.",
""
],
[
"Ramos",
"Vitorino",
""
]
] | A model of an Ant System where ants are controlled by a spiking neural circuit and a second order pheromone mechanism in a foraging task is presented. A neural circuit is trained for individual ants and subsequently the ants are exposed to a virtual environment where a swarm of ants performs a resource foraging task. The model comprises an associative and unsupervised learning strategy for the neural circuit of the ant. The neural circuit adapts to the environment by means of classical conditioning. The initially unknown environment includes different types of stimuli representing food and obstacles which, when they come in direct contact with the ant, elicit a reflex response in the motor neural system of the ant: moving towards or away from the source of the stimulus. The ants are released on a landscape with multiple food sources where one ant alone would have difficulty harvesting the landscape to maximum efficiency. The introduction of a double pheromone mechanism yields better results than traditional ant colony optimization strategies. Traditional ant systems include mainly a positive reinforcement pheromone. This approach uses a second pheromone that acts as a marker for forbidden paths (negative feedback). This blockade is not permanent and is controlled by the evaporation rate of the pheromones. The combined action of both pheromones acts as a collective stigmergic memory of the swarm, which reduces the search space of the problem. This paper explores how the adaptation and learning abilities observed in biologically inspired cognitive architectures are synergistically enhanced by swarm optimization strategies. The model portrays two forms of artificially intelligent behaviour: at the individual level the spiking neural network is the main controller and at the collective level the pheromone distribution is a map towards the solution that emerges from the colony. |
2306.00188 | Guangyao Zheng | Guangyao Zheng, Shuhao Lai, Vladimir Braverman, Michael A. Jacobs,
Vishwa S. Parekh | Multi-environment lifelong deep reinforcement learning for medical
imaging | null | null | null | null | cs.LG cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Deep reinforcement learning (DRL) is increasingly being explored in medical
imaging. However, the environments for medical imaging tasks are constantly
evolving in terms of imaging orientations, imaging sequences, and pathologies.
To that end, we developed a Lifelong DRL framework, SERIL, to continually learn
new tasks in changing imaging environments without catastrophic forgetting.
SERIL was developed using a selective experience replay-based lifelong learning
technique for the localization of five anatomical landmarks in brain MRI on a
sequence of twenty-four different imaging environments. SERIL, when compared to
two baseline setups, MERT (multi-environment-best-case) and
SERT (single-environment-worst-case), demonstrated excellent performance with
an average distance of $9.90\pm7.35$ pixels from the desired landmark across
all 120 tasks, compared to $10.29\pm9.07$ for MERT and $36.37\pm22.41$ for
SERT ($p<0.05$), demonstrating the excellent potential for continuously learning
multiple tasks across dynamically changing imaging environments.
| [
{
"created": "Wed, 31 May 2023 21:06:42 GMT",
"version": "v1"
}
] | 2023-06-02 | [
[
"Zheng",
"Guangyao",
""
],
[
"Lai",
"Shuhao",
""
],
[
"Braverman",
"Vladimir",
""
],
[
"Jacobs",
"Michael A.",
""
],
[
"Parekh",
"Vishwa S.",
""
]
] | Deep reinforcement learning (DRL) is increasingly being explored in medical imaging. However, the environments for medical imaging tasks are constantly evolving in terms of imaging orientations, imaging sequences, and pathologies. To that end, we developed a Lifelong DRL framework, SERIL, to continually learn new tasks in changing imaging environments without catastrophic forgetting. SERIL was developed using a selective experience replay-based lifelong learning technique for the localization of five anatomical landmarks in brain MRI on a sequence of twenty-four different imaging environments. SERIL, when compared to two baseline setups, MERT (multi-environment-best-case) and SERT (single-environment-worst-case), demonstrated excellent performance with an average distance of $9.90\pm7.35$ pixels from the desired landmark across all 120 tasks, compared to $10.29\pm9.07$ for MERT and $36.37\pm22.41$ for SERT ($p<0.05$), demonstrating the excellent potential for continuously learning multiple tasks across dynamically changing imaging environments. |
1403.0379 | Marcin Skwark | Christoph Feinauer, Marcin J. Skwark, Andrea Pagnani and Erik Aurell | Improving contact prediction along three dimensions | 19 pages, 8 figures in main text; 7 pages, 6 figures in supporting
information | null | 10.1371/journal.pcbi.1003847 | null | q-bio.BM cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Correlation patterns in multiple sequence alignments of homologous proteins
can be exploited to infer information on the three-dimensional structure of
their members. The typical pipeline to address this task, which we in this
paper refer to as the three dimensions of contact prediction, is to: (i) filter
and align the raw sequence data representing the evolutionarily related
proteins; (ii) choose a predictive model to describe a sequence alignment;
(iii) infer the model parameters and interpret them in terms of structural
properties, such as an accurate contact map. We show here that all three
dimensions are important for overall prediction success. In particular, we show
that it is possible to improve significantly along the second dimension by
going beyond the pair-wise Potts models from statistical physics, which have
hitherto been the focus of the field. These (simple) extensions are motivated
by multiple sequence alignments often containing long stretches of gaps which,
as a data feature, would be rather untypical for independent samples drawn from
a Potts model. Using a large test set of proteins we show that the combined
improvements along the three dimensions are as large as any reported to date.
| [
{
"created": "Mon, 3 Mar 2014 10:46:01 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Mar 2014 10:02:31 GMT",
"version": "v2"
}
] | 2015-06-18 | [
[
"Feinauer",
"Christoph",
""
],
[
"Skwark",
"Marcin J.",
""
],
[
"Pagnani",
"Andrea",
""
],
[
"Aurell",
"Erik",
""
]
] | Correlation patterns in multiple sequence alignments of homologous proteins can be exploited to infer information on the three-dimensional structure of their members. The typical pipeline to address this task, which we in this paper refer to as the three dimensions of contact prediction, is to: (i) filter and align the raw sequence data representing the evolutionarily related proteins; (ii) choose a predictive model to describe a sequence alignment; (iii) infer the model parameters and interpret them in terms of structural properties, such as an accurate contact map. We show here that all three dimensions are important for overall prediction success. In particular, we show that it is possible to improve significantly along the second dimension by going beyond the pair-wise Potts models from statistical physics, which have hitherto been the focus of the field. These (simple) extensions are motivated by multiple sequence alignments often containing long stretches of gaps which, as a data feature, would be rather untypical for independent samples drawn from a Potts model. Using a large test set of proteins we show that the combined improvements along the three dimensions are as large as any reported to date. |
2109.03999 | Zhifeng Jiang | Zhifeng Jiang, Wei Wang, Bo Li, Qiang Yang | Towards Efficient Synchronous Federated Training: A Survey on System
Optimization Strategies | This article has been accepted for publication in IEEE Transactions
on Big Data. This is the author's version which has not been fully edited and
content may change prior to final publication | null | 10.1109/TBDATA.2022.3177222 | null | cs.DC cs.LG cs.NI | http://creativecommons.org/licenses/by/4.0/ | The increasing demand for privacy-preserving collaborative learning has given
rise to a new computing paradigm called federated learning (FL), in which
clients collaboratively train a machine learning (ML) model without revealing
their private training data. Given an acceptable level of privacy guarantee,
the goal of FL is to minimize the time-to-accuracy of model training. Compared
with distributed ML in data centers, there are four distinct challenges to
achieving short time-to-accuracy in FL training, namely the lack of information
for optimization, the tradeoff between statistical and system utility, client
heterogeneity, and large configuration space. In this paper, we survey recent
works in addressing these challenges and present them following a typical
training workflow through three phases: client selection, configuration, and
reporting. We also review system works including measurement studies and
benchmarking tools that aim to support FL developers.
| [
{
"created": "Thu, 9 Sep 2021 02:31:29 GMT",
"version": "v1"
},
{
"created": "Sun, 12 Sep 2021 17:17:01 GMT",
"version": "v2"
},
{
"created": "Mon, 30 May 2022 13:44:21 GMT",
"version": "v3"
}
] | 2022-05-31 | [
[
"Jiang",
"Zhifeng",
""
],
[
"Wang",
"Wei",
""
],
[
"Li",
"Bo",
""
],
[
"Yang",
"Qiang",
""
]
] | The increasing demand for privacy-preserving collaborative learning has given rise to a new computing paradigm called federated learning (FL), in which clients collaboratively train a machine learning (ML) model without revealing their private training data. Given an acceptable level of privacy guarantee, the goal of FL is to minimize the time-to-accuracy of model training. Compared with distributed ML in data centers, there are four distinct challenges to achieving short time-to-accuracy in FL training, namely the lack of information for optimization, the tradeoff between statistical and system utility, client heterogeneity, and large configuration space. In this paper, we survey recent works in addressing these challenges and present them following a typical training workflow through three phases: client selection, configuration, and reporting. We also review system works including measurement studies and benchmarking tools that aim to support FL developers. |
1511.07878 | Junggi Yoon | Antal Jevicki and Junggi Yoon | $S_N$ Orbifolds and String Interactions | 34 pages, 3 figures; v2: minor changes, references added | null | 10.1088/1751-8113/49/20/205401 | BROWN-HET-1688 | hep-th | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study interacting features of $S_N$ Orbifold CFTs. Concentrating on
characters (associated with $S_N$ Orbifold primaries) we first formulate a
novel procedure for evaluating them through $GL(\infty)_+$ tracing. The result
is a polynomial formula which we show gives results equivalent to those found
by Bantay. From this we deduce a hierarchy of commuting Hamiltonians featuring
locality in the induced space, and nonlinear string-type interactions.
| [
{
"created": "Tue, 24 Nov 2015 21:00:06 GMT",
"version": "v1"
},
{
"created": "Wed, 9 Dec 2015 20:28:43 GMT",
"version": "v2"
}
] | 2016-05-04 | [
[
"Jevicki",
"Antal",
""
],
[
"Yoon",
"Junggi",
""
]
] | We study interacting features of $S_N$ Orbifold CFTs. Concentrating on characters (associated with $S_N$ Orbifold primaries) we first formulate a novel procedure for evaluating them through $GL(\infty)_+$ tracing. The result is a polynomial formula which we show gives results equivalent to those found by Bantay. From this we deduce a hierarchy of commuting Hamiltonians featuring locality in the induced space, and nonlinear string-type interactions. |
1704.00577 | Jakub Gizbert-Studnicki | Jakub Gizbert-Studnicki | Phase structure of Causal Dynamical Triangulations in 4D | To appear in Acta Physica Polonica B Proceedings Supplement.
Presented at the 3rd Conference of the Polish Society on Relativity. 5 pages,
1 figure | null | null | null | hep-th | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Causal Dynamical Triangulations (CDT) is a lattice approach to quantum
gravity. CDT has a rich phase structure, including a semiclassical phase
consistent with Einstein's general relativity. Some of the observed phase
transitions are second (or higher) order, which opens the possibility of
investigating the ultraviolet continuum limit. Recently a new phase with
intriguing geometric properties has been discovered and the new phase
transition is also second (or higher) order.
| [
{
"created": "Mon, 3 Apr 2017 13:37:21 GMT",
"version": "v1"
}
] | 2017-04-04 | [
[
"Gizbert-Studnicki",
"Jakub",
""
]
] | Causal Dynamical Triangulations (CDT) is a lattice approach to quantum gravity. CDT has a rich phase structure, including a semiclassical phase consistent with Einstein's general relativity. Some of the observed phase transitions are second (or higher) order, which opens the possibility of investigating the ultraviolet continuum limit. Recently a new phase with intriguing geometric properties has been discovered and the new phase transition is also second (or higher) order. |
2101.03285 | Yu Tian | Yu Tian, Leonardo Zorron Cheng Tao Pu, Yuyuan Liu, Gabriel Maicas,
Johan W. Verjans, Alastair D. Burt, Seon Ho Shin, Rajvinder Singh, Gustavo
Carneiro | Detecting, Localising and Classifying Polyps from Colonoscopy Videos
using Deep Learning | Preprint to submit to IEEE journals | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose and analyse a system that can automatically detect,
localise and classify polyps from colonoscopy videos. The detection of frames
with polyps is formulated as a few-shot anomaly classification problem, where
the training set is highly imbalanced with the large majority of frames
consisting of normal images and a small minority comprising frames with polyps.
Colonoscopy videos may contain blurry images and frames displaying feces and
water jet sprays to clean the colon -- such frames can mistakenly be detected
as anomalies, so we have implemented a classifier to reject these two types of
frames before polyp detection takes place. Next, given a frame containing a
polyp, our method localises (with a bounding box around the polyp) and
classifies it into five different classes. Furthermore, we study a method to
improve the reliability and interpretability of the classification result using
uncertainty estimation and classification calibration. Classification
uncertainty and calibration not only help improve classification accuracy by
rejecting low-confidence and highly uncertain results, but can be used by doctors
to decide on the classification of a polyp. All the proposed
detection, localisation and classification methods are tested using large data
sets and compared with relevant baseline approaches.
| [
{
"created": "Sat, 9 Jan 2021 04:25:34 GMT",
"version": "v1"
}
] | 2021-01-12 | [
[
"Tian",
"Yu",
""
],
[
"Pu",
"Leonardo Zorron Cheng Tao",
""
],
[
"Liu",
"Yuyuan",
""
],
[
"Maicas",
"Gabriel",
""
],
[
"Verjans",
"Johan W.",
""
],
[
"Burt",
"Alastair D.",
""
],
[
"Shin",
"Seon Ho",
""
]... | In this paper, we propose and analyse a system that can automatically detect, localise and classify polyps from colonoscopy videos. The detection of frames with polyps is formulated as a few-shot anomaly classification problem, where the training set is highly imbalanced with the large majority of frames consisting of normal images and a small minority comprising frames with polyps. Colonoscopy videos may contain blurry images and frames displaying feces and water jet sprays to clean the colon -- such frames can mistakenly be detected as anomalies, so we have implemented a classifier to reject these two types of frames before polyp detection takes place. Next, given a frame containing a polyp, our method localises (with a bounding box around the polyp) and classifies it into five different classes. Furthermore, we study a method to improve the reliability and interpretability of the classification result using uncertainty estimation and classification calibration. Classification uncertainty and calibration not only help improve classification accuracy by rejecting low-confidence and highly uncertain results, but can be used by doctors to decide on the classification of a polyp. All the proposed detection, localisation and classification methods are tested using large data sets and compared with relevant baseline approaches. |
1206.4704 | Vidas Regelskis | Marius de Leeuw and Vidas Regelskis | Integrable boundaries in AdS/CFT: revisiting the Z=0 giant graviton and
D7-brane | 36 pages. v2: minor typos corrected, references updated; v3:
published version | JHEP 03 (2013) 030 | 10.1007/JHEP03(2013)030 | null | hep-th math-ph math.MP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the worldsheet boundary scattering and the corresponding boundary
algebras for the Z=0 giant graviton and the Z=0 D7-brane in the AdS/CFT
correspondence. We consider two approaches to the boundary scattering, the
usual one governed by the (generalized) twisted Yangians and the q-deformed
model of these boundaries governed by the quantum affine coideal subalgebras.
We show that the q-deformed approach leads to boundary algebras that are of a
more compact form than the corresponding twisted Yangians, and thus are
favourable to use for explicit calculations. We obtain the q-deformed
reflection matrices for both boundaries which in the q->1 limit specialize to
the ones obtained using twisted Yangians.
| [
{
"created": "Wed, 20 Jun 2012 20:01:43 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Jul 2012 22:18:58 GMT",
"version": "v2"
},
{
"created": "Thu, 7 Mar 2013 22:01:02 GMT",
"version": "v3"
}
] | 2015-06-05 | [
[
"de Leeuw",
"Marius",
""
],
[
"Regelskis",
"Vidas",
""
]
] | We consider the worldsheet boundary scattering and the corresponding boundary algebras for the Z=0 giant graviton and the Z=0 D7-brane in the AdS/CFT correspondence. We consider two approaches to the boundary scattering, the usual one governed by the (generalized) twisted Yangians and the q-deformed model of these boundaries governed by the quantum affine coideal subalgebras. We show that the q-deformed approach leads to boundary algebras that are of a more compact form than the corresponding twisted Yangians, and thus are favourable to use for explicit calculations. We obtain the q-deformed reflection matrices for both boundaries which in the q->1 limit specialize to the ones obtained using twisted Yangians. |
cs/0504010 | P. Oscar Boykin | P. Oscar Boykin, Vwani P. Roychowdhury | Reversible Fault-Tolerant Logic | 10 pages, to appear in DSN 2005 | null | null | null | cs.IT math.IT quant-ph | null | It is now widely accepted that the CMOS technology implementing irreversible
logic will hit a scaling limit beyond 2016, and that the increased power
dissipation is a major limiting factor. Reversible computing can potentially
require arbitrarily small amounts of energy. Recently several nano-scale
devices which have the potential to scale, and which naturally perform
reversible logic, have emerged. This paper addresses several fundamental issues
that need to be addressed before any nano-scale reversible computing systems
can be realized, including reliability and performance trade-offs and
architecture optimization. Many nano-scale devices will be limited to only near
neighbor interactions, requiring careful optimization of circuits. We provide
efficient fault-tolerant (FT) circuits when restricted to both 2D and 1D.
Finally, we compute bounds on the entropy (and hence, heat) generated by our FT
circuits and provide quantitative estimates on how large we can make our
circuits before we lose any advantage over irreversible computing.
| [
{
"created": "Mon, 4 Apr 2005 21:44:42 GMT",
"version": "v1"
}
] | 2007-07-13 | [
[
"Boykin",
"P. Oscar",
""
],
[
"Roychowdhury",
"Vwani P.",
""
]
] | It is now widely accepted that the CMOS technology implementing irreversible logic will hit a scaling limit beyond 2016, and that the increased power dissipation is a major limiting factor. Reversible computing can potentially require arbitrarily small amounts of energy. Recently several nano-scale devices which have the potential to scale, and which naturally perform reversible logic, have emerged. This paper addresses several fundamental issues that need to be addressed before any nano-scale reversible computing systems can be realized, including reliability and performance trade-offs and architecture optimization. Many nano-scale devices will be limited to only near neighbor interactions, requiring careful optimization of circuits. We provide efficient fault-tolerant (FT) circuits when restricted to both 2D and 1D. Finally, we compute bounds on the entropy (and hence, heat) generated by our FT circuits and provide quantitative estimates on how large we can make our circuits before we lose any advantage over irreversible computing. |
1207.1385 | Vibhav Gogate | Vibhav Gogate, Rina Dechter | Approximate Inference Algorithms for Hybrid Bayesian Networks with
Discrete Constraints | Appears in Proceedings of the Twenty-First Conference on Uncertainty
in Artificial Intelligence (UAI2005) | null | null | UAI-P-2005-PG-209-216 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we consider Hybrid Mixed Networks (HMN) which are Hybrid
Bayesian Networks that allow discrete deterministic information to be modeled
explicitly in the form of constraints. We present two approximate inference
algorithms for HMNs that integrate and adjust well known algorithmic principles
such as Generalized Belief Propagation, Rao-Blackwellised Importance Sampling
and Constraint Propagation to address the complexity of modeling and reasoning
in HMNs. We demonstrate the performance of our approximate inference algorithms
on randomly generated HMNs.
| [
{
"created": "Wed, 4 Jul 2012 16:12:59 GMT",
"version": "v1"
}
] | 2012-07-09 | [
[
"Gogate",
"Vibhav",
""
],
[
"Dechter",
"Rina",
""
]
] | In this paper, we consider Hybrid Mixed Networks (HMN) which are Hybrid Bayesian Networks that allow discrete deterministic information to be modeled explicitly in the form of constraints. We present two approximate inference algorithms for HMNs that integrate and adjust well known algorithmic principles such as Generalized Belief Propagation, Rao-Blackwellised Importance Sampling and Constraint Propagation to address the complexity of modeling and reasoning in HMNs. We demonstrate the performance of our approximate inference algorithms on randomly generated HMNs. |
0907.0303 | Yoshinori Matsuo | Yoshinori Matsuo, Takuya Tsukioka and Chul-Moon Yoo | Another Realization of Kerr/CFT Correspondence | 13pages | Nucl.Phys.B825:231-241,2010 | 10.1016/j.nuclphysb.2009.09.025 | APCTP-Pre2009-008 | hep-th | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study another realization of the Kerr/CFT correspondence. By imposing new
asymptotic conditions for the near horizon geometry of Kerr black hole, an
asymptotic symmetry which contains all of the exact isometries can be obtained.
In particular, the Virasoro algebra can be realized as an enhancement of
SL(2,R) symmetry of the AdS geometry. By using this asymptotic symmetry, we
discuss finite temperature effects and show the correspondence concretely.
| [
{
"created": "Thu, 2 Jul 2009 07:49:10 GMT",
"version": "v1"
}
] | 2009-11-09 | [
[
"Matsuo",
"Yoshinori",
""
],
[
"Tsukioka",
"Takuya",
""
],
[
"Yoo",
"Chul-Moon",
""
]
] | We study another realization of the Kerr/CFT correspondence. By imposing new asymptotic conditions for the near horizon geometry of Kerr black hole, an asymptotic symmetry which contains all of the exact isometries can be obtained. In particular, the Virasoro algebra can be realized as an enhancement of SL(2,R) symmetry of the AdS geometry. By using this asymptotic symmetry, we discuss finite temperature effects and show the correspondence concretely. |
2304.09201 | Mattia Cesaro | Andr\'es Anabal\'on, Mattia Ces\`aro, Antonio Gallerati, Alfredo
Giambrone and Mario Trigiante | A Positive Energy Theorem for AdS Solitons | 10 pages, title changed, new author added, additional remarks | null | null | IFT-UAM/CSIC-23-42 | hep-th | http://creativecommons.org/licenses/by/4.0/ | The uncharged AdS$_4$ soliton has been recently shown to be continuously
connected to a magnetic, supersymmetric AdS$_4$ soliton within $\mathcal{N}=8$
gauged supergravity. By constructing the asymptotic superalgebra, we establish
a positive energy theorem for the magnetic AdS$_4$ solitons admitting
well-defined asymptotic Killing spinors, antiperiodic on a contractible $S^1$.
We show that there exists only one discrete solution endowed with these
boundary conditions satisfying the bound, the latter being saturated by the
null energy supersymmetric configuration. Despite having negative energy, the
uncharged AdS$_4$ soliton does not contradict the positive energy theorem, as
it does not admit well-defined asymptotic Killing spinors.
| [
{
"created": "Tue, 18 Apr 2023 18:00:05 GMT",
"version": "v1"
},
{
"created": "Mon, 31 Jul 2023 07:12:32 GMT",
"version": "v2"
}
] | 2023-08-01 | [
[
"Anabalón",
"Andrés",
""
],
[
"Cesàro",
"Mattia",
""
],
[
"Gallerati",
"Antonio",
""
],
[
"Giambrone",
"Alfredo",
""
],
[
"Trigiante",
"Mario",
""
]
] | The uncharged AdS$_4$ soliton has been recently shown to be continuously connected to a magnetic, supersymmetric AdS$_4$ soliton within $\mathcal{N}=8$ gauged supergravity. By constructing the asymptotic superalgebra, we establish a positive energy theorem for the magnetic AdS$_4$ solitons admitting well-defined asymptotic Killing spinors, antiperiodic on a contractible $S^1$. We show that there exists only one discrete solution endowed with these boundary conditions satisfying the bound, the latter being saturated by the null energy supersymmetric configuration. Despite having negative energy, the uncharged AdS$_4$ soliton does not contradict the positive energy theorem, as it does not admit well-defined asymptotic Killing spinors. |
2406.12272 | Jindong Jiang | Jindong Jiang, Fei Deng, Gautam Singh, Minseung Lee, Sungjin Ahn | Slot State Space Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent State Space Models (SSMs) such as S4, S5, and Mamba have shown
remarkable computational benefits in long-range temporal dependency modeling.
However, in many sequence modeling problems, the underlying process is
inherently modular and it is of interest to have inductive biases that mimic
this modular structure. In this paper, we introduce SlotSSMs, a novel framework
for incorporating independent mechanisms into SSMs to preserve or encourage
separation of information. Unlike conventional SSMs that maintain a monolithic
state vector, SlotSSMs maintains the state as a collection of multiple vectors
called slots. Crucially, the state transitions are performed independently per
slot with sparse interactions across slots implemented via the bottleneck of
self-attention. In experiments, we evaluate our model in object-centric video
understanding, 3D visual reasoning, and video prediction tasks, which involve
modeling multiple objects and their long-range temporal dependencies. We find
that our proposed design offers substantial performance gains over existing
sequence modeling methods.
| [
{
"created": "Tue, 18 Jun 2024 04:59:14 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Jun 2024 22:53:36 GMT",
"version": "v2"
},
{
"created": "Wed, 26 Jun 2024 03:04:04 GMT",
"version": "v3"
},
{
"created": "Sun, 30 Jun 2024 22:25:01 GMT",
"version": "v4"
}
] | 2024-07-02 | [
[
"Jiang",
"Jindong",
""
],
[
"Deng",
"Fei",
""
],
[
"Singh",
"Gautam",
""
],
[
"Lee",
"Minseung",
""
],
[
"Ahn",
"Sungjin",
""
]
] | Recent State Space Models (SSMs) such as S4, S5, and Mamba have shown remarkable computational benefits in long-range temporal dependency modeling. However, in many sequence modeling problems, the underlying process is inherently modular and it is of interest to have inductive biases that mimic this modular structure. In this paper, we introduce SlotSSMs, a novel framework for incorporating independent mechanisms into SSMs to preserve or encourage separation of information. Unlike conventional SSMs that maintain a monolithic state vector, SlotSSMs maintains the state as a collection of multiple vectors called slots. Crucially, the state transitions are performed independently per slot with sparse interactions across slots implemented via the bottleneck of self-attention. In experiments, we evaluate our model in object-centric video understanding, 3D visual reasoning, and video prediction tasks, which involve modeling multiple objects and their long-range temporal dependencies. We find that our proposed design offers substantial performance gains over existing sequence modeling methods. |
2004.12920 | Rumit Kumar | Rumit Kumar, Aditya M. Deshpande, James Z. Wells, Manish Kumar | Flight Control of Sliding Arm Quadcopter with Dynamic Structural
Parameters | 6 Pages | null | null | null | cs.RO cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The conceptual design and flight controller of a novel kind of quadcopter are
presented. This design is capable of morphing the shape of the UAV during
flight to achieve position and attitude control. We consider a dynamic center
of gravity (CoG) which causes continuous variation in a moment of inertia (MoI)
parameters of the UAV in this design. These dynamic structural parameters play
a vital role in the stability and control of the system. The length of
quadcopter arms is a variable parameter, and it is actuated using attitude
feedback-based control law. The MoI parameters are computed in real-time and
incorporated in the equations of motion of the system. The UAV utilizes the
angular motion of propellers and variable quadcopter arm lengths for position
and navigation control. The movement space of the CoG is a design parameter and
it is bounded by actuator limitations and stability requirements of the system.
Detailed information on equations of motion, flight controller design and
possible applications of this system is provided. Further, the proposed
shape-changing UAV system is evaluated by comparative numerical simulations for
way point navigation mission and complex trajectory tracking.
| [
{
"created": "Mon, 27 Apr 2020 16:32:58 GMT",
"version": "v1"
}
] | 2020-04-28 | [
[
"Kumar",
"Rumit",
""
],
[
"Deshpande",
"Aditya M.",
""
],
[
"Wells",
"James Z.",
""
],
[
"Kumar",
"Manish",
""
]
] | The conceptual design and flight controller of a novel kind of quadcopter are presented. This design is capable of morphing the shape of the UAV during flight to achieve position and attitude control. We consider a dynamic center of gravity (CoG) which causes continuous variation in a moment of inertia (MoI) parameters of the UAV in this design. These dynamic structural parameters play a vital role in the stability and control of the system. The length of quadcopter arms is a variable parameter, and it is actuated using attitude feedback-based control law. The MoI parameters are computed in real-time and incorporated in the equations of motion of the system. The UAV utilizes the angular motion of propellers and variable quadcopter arm lengths for position and navigation control. The movement space of the CoG is a design parameter and it is bounded by actuator limitations and stability requirements of the system. Detailed information on equations of motion, flight controller design and possible applications of this system is provided. Further, the proposed shape-changing UAV system is evaluated by comparative numerical simulations for way point navigation mission and complex trajectory tracking. |
hep-th/9801038 | Harvendra Singh | Harvendra Singh | New Supersymmetric Vacua for N=4, D=4 Gauged Supergravity | 14 pages, Latex, v1: one reference and an important note added, v2:
minor text modifications to incorporate more references, (to appear in
Physics Letters) | Phys.Lett. B429 (1998) 304-312 | 10.1016/S0370-2693(98)00463-8 | DFPD/98/TH/03 | hep-th hep-ph | null | In this paper we obtain supersymmetric brane-like configurations in the
vacuum of N=4 gauged $SU(2)\times SU(2)$ supergravity theory in four spacetime
dimensions. Almost all of these vacuum solutions preserve either half or one
quarter of the supersymmetry in the theory. We also study the solutions in
presence of nontrivial axionic and gauge field backgrounds. In the case of pure
gravity with axionic charge the geometry of the spacetime is $AdS_3\times R^1$
with N=1 supersymmetry. An interesting observation is that the domain walls of
this theory cannot be given an interpretation of a 2-brane in four dimensions.
But it still exists as a stable vacuum. This feature is quite distinct from the
domain-wall configuration in massive type IIA supergravity in ten dimensions.
| [
{
"created": "Thu, 8 Jan 1998 11:10:17 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Jan 1998 22:03:01 GMT",
"version": "v2"
},
{
"created": "Tue, 26 May 1998 12:03:43 GMT",
"version": "v3"
}
] | 2009-10-31 | [
[
"Singh",
"Harvendra",
""
]
] | In this paper we obtain supersymmetric brane-like configurations in the vacuum of N=4 gauged $SU(2)\times SU(2)$ supergravity theory in four spacetime dimensions. Almost all of these vacuum solutions preserve either half or one quarter of the supersymmetry in the theory. We also study the solutions in presence of nontrivial axionic and gauge field backgrounds. In the case of pure gravity with axionic charge the geometry of the spacetime is $AdS_3\times R^1$ with N=1 supersymmetry. An interesting observation is that the domain walls of this theory cannot be given an interpretation of a 2-brane in four dimensions. But it still exists as a stable vacuum. This feature is quite distinct from the domain-wall configuration in massive type IIA supergravity in ten dimensions. |
2405.20852 | Xuxin Cheng | Xuxin Cheng, Wanshi Xu, Zhihong Zhu, Hongxiang Li, Yuexian Zou | Towards Spoken Language Understanding via Multi-level Multi-grained
Contrastive Learning | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spoken language understanding (SLU) is a core task in task-oriented dialogue
systems, which aims at understanding the user's current goal through
constructing semantic frames. SLU usually consists of two subtasks, including
intent detection and slot filling. Although there are some SLU frameworks jointly
modeling the two subtasks and achieving high performance, most of them still
overlook the inherent relationships between intents and slots and fail to
achieve mutual guidance between the two subtasks. To solve the problem, we
propose a multi-level multi-grained SLU framework MMCL to apply contrastive
learning at three levels, including utterance level, slot level, and word level
to enable intent and slot to mutually guide each other. For the utterance
level, our framework implements coarse granularity contrastive learning and
fine granularity contrastive learning simultaneously. Besides, we also apply
the self-distillation method to improve the robustness of the model.
Experimental results and further analysis demonstrate that our proposed model
achieves new state-of-the-art results on two public multi-intent SLU datasets,
obtaining a 2.6 overall accuracy improvement on the MixATIS dataset compared to
previous best models.
| [
{
"created": "Fri, 31 May 2024 14:34:23 GMT",
"version": "v1"
}
] | 2024-06-03 | [
[
"Cheng",
"Xuxin",
""
],
[
"Xu",
"Wanshi",
""
],
[
"Zhu",
"Zhihong",
""
],
[
"Li",
"Hongxiang",
""
],
[
"Zou",
"Yuexian",
""
]
] | Spoken language understanding (SLU) is a core task in task-oriented dialogue systems, which aims at understanding the user's current goal through constructing semantic frames. SLU usually consists of two subtasks, including intent detection and slot filling. Although there are some SLU frameworks jointly modeling the two subtasks and achieving high performance, most of them still overlook the inherent relationships between intents and slots and fail to achieve mutual guidance between the two subtasks. To solve the problem, we propose a multi-level multi-grained SLU framework MMCL to apply contrastive learning at three levels, including utterance level, slot level, and word level to enable intent and slot to mutually guide each other. For the utterance level, our framework implements coarse granularity contrastive learning and fine granularity contrastive learning simultaneously. Besides, we also apply the self-distillation method to improve the robustness of the model. Experimental results and further analysis demonstrate that our proposed model achieves new state-of-the-art results on two public multi-intent SLU datasets, obtaining a 2.6 overall accuracy improvement on the MixATIS dataset compared to previous best models. |
hep-th/9805052 | Heinrich Saller | Heinrich Saller (MPI Physics, M\"unchen) | The External-Internal Group Quotient Structure for the Standard Model in
Analogy to General Relativity | 25 pages, LATEX, all macros included | Int.J.Theor.Phys. 37 (1998) 2333-2361 | null | MPI-PhT/98-35 | hep-th | null | In analogy to the class structure $\GL(\R^4)/\O(1,3)$ for general relativity
with a local Lorentz group as stabilizer and a basic tetrad field for the
parametrization, a corresponding class structure $\GL(\C^2)/\U(2)$ is
investigated for the standard model with a local hyperisospin group $\U(2)$.
The lepton, quark, Higgs and gauge fields, used in the standard model, cannot
be basic in a coset interpretation; they may be taken as first order terms
in a flat spacetime, particle oriented expansion of a basic field (as the
analogue to the tetrad) and its products.
| [
{
"created": "Mon, 11 May 1998 11:33:10 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Saller",
"Heinrich",
"",
"MPI Physics, München"
]
] | In analogy to the class structure $\GL(\R^4)/\O(1,3)$ for general relativity with a local Lorentz group as stabilizer and a basic tetrad field for the parametrization, a corresponding class structure $\GL(\C^2)/\U(2)$ is investigated for the standard model with a local hyperisospin group $\U(2)$. The lepton, quark, Higgs and gauge fields, used in the standard model, cannot be basic in a coset interpretation; they may be taken as first order terms in a flat spacetime, particle oriented expansion of a basic field (as the analogue to the tetrad) and its products. |
1710.09516 | Veronika E. Hubeny | Matthew Headrick, Veronika E. Hubeny | Riemannian and Lorentzian flow-cut theorems | 34 pages, 9 figures | Class. Quant. Grav. 35: 10 (2018) | 10.1088/1361-6382/aab83c | BRX-TH-6325, MIT-CTP/4941 | hep-th gr-qc math.DG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We prove several geometric theorems using tools from the theory of convex
optimization. In the Riemannian setting, we prove the max flow-min cut theorem
for boundary regions, applied recently to develop a "bit-thread" interpretation
of holographic entanglement entropies. We also prove various properties of the
max flow and min cut, including respective nesting properties. In the
Lorentzian setting, we prove the analogous min flow-max cut theorem, which
states that the volume of a maximal slice equals the flux of a minimal flow,
where a flow is defined as a divergenceless timelike vector field with norm at
least 1. This theorem includes as a special case a continuum version of
Dilworth's theorem from the theory of partially ordered sets. We include a
brief review of the necessary tools from the theory of convex optimization, in
particular Lagrangian duality and convex relaxation.
| [
{
"created": "Thu, 26 Oct 2017 02:42:12 GMT",
"version": "v1"
}
] | 2018-08-23 | [
[
"Headrick",
"Matthew",
""
],
[
"Hubeny",
"Veronika E.",
""
]
] | We prove several geometric theorems using tools from the theory of convex optimization. In the Riemannian setting, we prove the max flow-min cut theorem for boundary regions, applied recently to develop a "bit-thread" interpretation of holographic entanglement entropies. We also prove various properties of the max flow and min cut, including respective nesting properties. In the Lorentzian setting, we prove the analogous min flow-max cut theorem, which states that the volume of a maximal slice equals the flux of a minimal flow, where a flow is defined as a divergenceless timelike vector field with norm at least 1. This theorem includes as a special case a continuum version of Dilworth's theorem from the theory of partially ordered sets. We include a brief review of the necessary tools from the theory of convex optimization, in particular Lagrangian duality and convex relaxation. |
2406.03665 | Jihyeon Seong | Jihyeon Seong, Sekwang Oh, Jaesik Choi | Towards Dynamic Trend Filtering through Trend Point Detection with
Reinforcement Learning | 18 pages, 11 figures | IJCAI 2024 | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Trend filtering simplifies complex time series data by applying smoothness to
filter out noise while emphasizing proximity to the original data. However,
existing trend filtering methods fail to reflect abrupt changes in the trend
due to `approximateness,' resulting in constant smoothness. This
approximateness uniformly filters out the tail distribution of time series
data, characterized by extreme values, including both abrupt changes and noise.
In this paper, we propose Trend Point Detection formulated as a Markov Decision
Process (MDP), a novel approach to identifying essential points that should be
reflected in the trend, departing from approximations. We refer to these essential
points as Dynamic Trend Points (DTPs) and extract trends by interpolating them.
To identify DTPs, we utilize Reinforcement Learning (RL) within a discrete
action space and a forecasting sum-of-squares loss function as a reward,
referred to as the Dynamic Trend Filtering network (DTF-net). DTF-net
integrates flexible noise filtering, preserving critical original subsequences
while removing noise as required for other subsequences. We demonstrate that
DTF-net excels at capturing abrupt changes compared to other trend filtering
algorithms and enhances forecasting performance, as abrupt changes are
predicted rather than smoothed out.
| [
{
"created": "Thu, 6 Jun 2024 00:50:22 GMT",
"version": "v1"
}
] | 2024-07-12 | [
[
"Seong",
"Jihyeon",
""
],
[
"Oh",
"Sekwang",
""
],
[
"Choi",
"Jaesik",
""
]
] | Trend filtering simplifies complex time series data by applying smoothness to filter out noise while emphasizing proximity to the original data. However, existing trend filtering methods fail to reflect abrupt changes in the trend due to `approximateness,' resulting in constant smoothness. This approximateness uniformly filters out the tail distribution of time series data, characterized by extreme values, including both abrupt changes and noise. In this paper, we propose Trend Point Detection formulated as a Markov Decision Process (MDP), a novel approach to identifying essential points that should be reflected in the trend, departing from approximations. We refer to these essential points as Dynamic Trend Points (DTPs) and extract trends by interpolating them. To identify DTPs, we utilize Reinforcement Learning (RL) within a discrete action space and a forecasting sum-of-squares loss function as a reward, referred to as the Dynamic Trend Filtering network (DTF-net). DTF-net integrates flexible noise filtering, preserving critical original subsequences while removing noise as required for other subsequences. We demonstrate that DTF-net excels at capturing abrupt changes compared to other trend filtering algorithms and enhances forecasting performance, as abrupt changes are predicted rather than smoothed out.
2102.13455 | Arnaud Mazier | Arnaud Mazier (1), Alexandre Bilger (1), Antonio E. Forte (2 and 3),
Igor Peterlik (4), Jack S. Hale (1) and St\'ephane P.A. Bordas (1 and 5). (1)
Institute of Computational Engineering, Department of Engineering, University
of Luxembourg, Esch-sur-Alzette, Luxembourg. (2) Harvard University,
Cambridge, USA. (3) Department of Electronics, Information and
Bioengineering, Politecnico di Milano, Milan, Italy. (4) Institute of
Computer Science, Masaryk University, Czech Republic. (5) Institute of
Research and Development Duy Tan University, Danang, Vietnam | Inverse deformation analysis: an experimental and numerical assessment
using the FEniCS Project | 33 pages, 12 figures, submitted to Elsevier | null | null | null | cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we develop a framework for solving inverse deformation
problems using the FEniCS Project finite element software. We validate our
approach with experimental imaging data acquired from a soft silicone beam
under gravity. In contrast with inverse iterative algorithms that require
multiple solutions of a standard elasticity problem, the proposed method can
compute the undeformed configuration by solving only one modified elasticity
problem. This modified problem has a complexity comparable to the standard one.
The framework is implemented within an open-source pipeline enabling the direct
and inverse deformation simulation directly from imaging data. We use the
high-level Unified Form Language (UFL) of the FEniCS Project to express the
finite element model in variational form and to automatically derive the
consistent Jacobian. Consequently, the design of the pipeline is flexible: for
example, it allows the modification of the constitutive models by changing a
single line of code. We include a complete working example showing the inverse
deformation of a beam deformed by gravity as supplementary material.
| [
{
"created": "Fri, 26 Feb 2021 13:20:36 GMT",
"version": "v1"
}
] | 2021-03-01 | [
[
"Mazier",
"Arnaud",
"",
"2 and 3"
],
[
"Bilger",
"Alexandre",
"",
"2 and 3"
],
[
"Forte",
"Antonio E.",
"",
"2 and 3"
],
[
"Peterlik",
"Igor",
"",
"1 and 5"
],
[
"Hale",
"Jack S.",
"",
"1 and 5"
],
[
"Bordas",
... | In this paper, we develop a framework for solving inverse deformation problems using the FEniCS Project finite element software. We validate our approach with experimental imaging data acquired from a soft silicone beam under gravity. In contrast with inverse iterative algorithms that require multiple solutions of a standard elasticity problem, the proposed method can compute the undeformed configuration by solving only one modified elasticity problem. This modified problem has a complexity comparable to the standard one. The framework is implemented within an open-source pipeline enabling the direct and inverse deformation simulation directly from imaging data. We use the high-level Unified Form Language (UFL) of the FEniCS Project to express the finite element model in variational form and to automatically derive the consistent Jacobian. Consequently, the design of the pipeline is flexible: for example, it allows the modification of the constitutive models by changing a single line of code. We include a complete working example showing the inverse deformation of a beam deformed by gravity as supplementary material. |
hep-th/9806034 | Pierre van Baal | Thomas C. Kraan and Pierre van Baal | Monopole Constituents inside SU(n) Calorons | 8 pages, 1 figure (in three parts), latex | Phys.Lett. B435 (1998) 389-395 | 10.1016/S0370-2693(98)00799-0 | INLO-PUB-9/98 | hep-th hep-lat | null | We present a simple result for the action density of the SU(n) charge one
periodic instantons - or calorons - with arbitrary non-trivial Polyakov loop
P_oo at spatial infinity. It is shown explicitly that there are n lumps inside
the caloron, each of which represents a BPS monopole, their masses being
related to the eigenvalues of P_oo. A suitable combination of the ADHM
construction and the Nahm transformation is used to obtain this result.
| [
{
"created": "Thu, 4 Jun 1998 13:34:06 GMT",
"version": "v1"
}
] | 2009-10-31 | [
[
"Kraan",
"Thomas C.",
""
],
[
"van Baal",
"Pierre",
""
]
] | We present a simple result for the action density of the SU(n) charge one periodic instantons - or calorons - with arbitrary non-trivial Polyakov loop P_oo at spatial infinity. It is shown explicitly that there are n lumps inside the caloron, each of which represents a BPS monopole, their masses being related to the eigenvalues of P_oo. A suitable combination of the ADHM construction and the Nahm transformation is used to obtain this result. |
1709.07658 | Jordan Ivanchev | Jordan Ivanchev, Alois Knoll, Daniel Zehe, Suraj Nair, David Eckhoff | Potentials and Implications of Dedicated Highway Lanes for Autonomous
Vehicles | 12 pages, 7 figures | null | null | null | cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The introduction of autonomous vehicles (AVs) will have far-reaching effects
on road traffic in cities and on highways. The implementation of an automated
highway system (AHS), possibly with a dedicated lane only for AVs, is believed
to be a requirement to maximise the benefit from the advantages of AVs. We
study the ramifications of an increasing percentage of AVs on the traffic
system with and without the introduction of a dedicated AV lane on highways. We
conduct an analytical evaluation of a simplified scenario and a macroscopic
simulation of the city of Singapore under user equilibrium conditions with a
realistic traffic demand. We present findings regarding average travel time,
fuel consumption, throughput and road usage. Instead of only considering the
highways, we also focus on the effects on the remaining road network. Our
results show a reduction of average travel time and fuel consumption as a
result of increasing the portion of AVs in the system. We show that the
introduction of an AV lane is not beneficial in terms of average commute time.
Examining the effects of the AV population only, however, the AV lane provides
a considerable reduction of travel time (approx. 25%) at the price of delaying
conventional vehicles (approx. 7%). Furthermore, a notable shift of travel
demand away from the highways towards major and small roads is noticed in early
stages of AV penetration of the system. Finally, our findings show that after a
certain threshold percentage of AVs the differences between AV and no AV lane
scenarios become negligible.
| [
{
"created": "Fri, 22 Sep 2017 09:45:40 GMT",
"version": "v1"
}
] | 2017-09-25 | [
[
"Ivanchev",
"Jordan",
""
],
[
"Knoll",
"Alois",
""
],
[
"Zehe",
"Daniel",
""
],
[
"Nair",
"Suraj",
""
],
[
"Eckhoff",
"David",
""
]
] | The introduction of autonomous vehicles (AVs) will have far-reaching effects on road traffic in cities and on highways. The implementation of an automated highway system (AHS), possibly with a dedicated lane only for AVs, is believed to be a requirement to maximise the benefit from the advantages of AVs. We study the ramifications of an increasing percentage of AVs on the traffic system with and without the introduction of a dedicated AV lane on highways. We conduct an analytical evaluation of a simplified scenario and a macroscopic simulation of the city of Singapore under user equilibrium conditions with a realistic traffic demand. We present findings regarding average travel time, fuel consumption, throughput and road usage. Instead of only considering the highways, we also focus on the effects on the remaining road network. Our results show a reduction of average travel time and fuel consumption as a result of increasing the portion of AVs in the system. We show that the introduction of an AV lane is not beneficial in terms of average commute time. Examining the effects of the AV population only, however, the AV lane provides a considerable reduction of travel time (approx. 25%) at the price of delaying conventional vehicles (approx. 7%). Furthermore, a notable shift of travel demand away from the highways towards major and small roads is noticed in early stages of AV penetration of the system. Finally, our findings show that after a certain threshold percentage of AVs the differences between AV and no AV lane scenarios become negligible.
2205.06313 | William Poole | William Poole, Thomas Ouldridge, Manoj Gopalkrishnan, and Erik Winfree | Detailed Balanced Chemical Reaction Networks as Generalized Boltzmann
Machines | Based on work in William Poole's Thesis "Compilation and Inference
with Chemical Reaction Networks" available at:
https://www.dna.caltech.edu/Papers/William_Poole_2022_thesis.pdf | null | null | null | q-bio.MN cond-mat.stat-mech cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Can a micron-sized sack of interacting molecules understand and adapt to a
constantly fluctuating environment? Cellular life provides an existence proof
in the affirmative, but the principles that allow for life's existence are far
from being proven. One challenge in engineering and understanding biochemical
computation is the intrinsic noise due to chemical fluctuations. In this paper,
we draw insights from machine learning theory, chemical reaction network
theory, and statistical physics to show that the broad and biologically
relevant class of detailed balanced chemical reaction networks is capable of
representing and conditioning complex distributions. These results illustrate
how a biochemical computer can use intrinsic chemical noise to perform complex
computations. Furthermore, we use our explicit physical model to derive
thermodynamic costs of inference.
| [
{
"created": "Thu, 12 May 2022 18:59:43 GMT",
"version": "v1"
}
] | 2022-05-16 | [
[
"Poole",
"William",
""
],
[
"Ouldridge",
"Thomas",
""
],
[
"Gopalkrishnan",
"Manoj",
""
],
[
"Winfree",
"Erik",
""
]
] | Can a micron-sized sack of interacting molecules understand and adapt to a constantly fluctuating environment? Cellular life provides an existence proof in the affirmative, but the principles that allow for life's existence are far from being proven. One challenge in engineering and understanding biochemical computation is the intrinsic noise due to chemical fluctuations. In this paper, we draw insights from machine learning theory, chemical reaction network theory, and statistical physics to show that the broad and biologically relevant class of detailed balanced chemical reaction networks is capable of representing and conditioning complex distributions. These results illustrate how a biochemical computer can use intrinsic chemical noise to perform complex computations. Furthermore, we use our explicit physical model to derive thermodynamic costs of inference.
1609.02141 | Antonino Sciarrino | Diego Cocurullo and Antonino Sciarrino | Correlations in Usage Frequencies and Shannon Entropy for Codons | 42 pages, 12 figures, 33 Tables | null | null | Dipartimento di Scienze Fisiche, Naples, Italy, DSF-Th-2/08-v2 | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The usage frequencies for codons belonging to quartets are analyzed, over the
whole exonic region, for 92 biological species. Correlation is put into
evidence between the usage frequencies of synonymous codons with third
nucleotide A and C and between the usage frequencies of non-synonymous codons,
belonging to suitable subsets of the quartets, with the same third nucleotide.
A correlation is pointed out between amino acids belonging to subsets of the
set encoded by quartets of codons. It is remarked that the computed Shannon
entropy for quartets is weakly dependent on the biological species. The
observed correlations well fit in the mathematical scheme of the crystal basis
model of the genetic code.
| [
{
"created": "Wed, 7 Sep 2016 19:03:09 GMT",
"version": "v1"
}
] | 2016-09-09 | [
[
"Cocurullo",
"Diego",
""
],
[
"Sciarrino",
"Antonino",
""
]
] | The usage frequencies for codons belonging to quartets are analyzed, over the whole exonic region, for 92 biological species. Correlation is put into evidence between the usage frequencies of synonymous codons with third nucleotide A and C and between the usage frequencies of non-synonymous codons, belonging to suitable subsets of the quartets, with the same third nucleotide. A correlation is pointed out between amino acids belonging to subsets of the set encoded by quartets of codons. It is remarked that the computed Shannon entropy for quartets is weakly dependent on the biological species. The observed correlations fit well into the mathematical scheme of the crystal basis model of the genetic code.
2303.17338 | Kaya Turgut | Kaya Turgut and Helin Dutagaci | Local region-learning modules for point cloud classification | null | Machine Vision and Applications 35, 16 (2024) | 10.1007/s00138-023-01495-y | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Data organization via forming local regions is an integral part of deep
learning networks that process 3D point clouds in a hierarchical manner. At
each level, the point cloud is sampled to extract representative points and
these points serve as centers of local regions. The organization of local
regions is of considerable importance since it determines the location and size
of the receptive field at a particular layer of feature aggregation. In this
paper, we present two local region-learning modules: Center Shift Module to
infer the appropriate shift for each center point, and Radius Update Module to
alter the radius of each local region. The parameters of the modules are
learned through optimizing the loss associated with the particular task within
an end-to-end network. We present alternatives for these modules through
various ways of modeling the interactions of the features and locations of 3D
points in the point cloud. We integrated both modules independently and
together into the PointNet++ and PointCNN object classification architectures,
and demonstrated that the modules contributed to a significant increase in
classification accuracy for the ScanObjectNN data set consisting of scans of
real-world objects. Our further experiments on the ShapeNet data set showed that
the modules are also effective on 3D CAD models.
| [
{
"created": "Thu, 30 Mar 2023 12:45:46 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Dec 2023 10:06:08 GMT",
"version": "v2"
}
] | 2023-12-27 | [
[
"Turgut",
"Kaya",
""
],
[
"Dutagaci",
"Helin",
""
]
] | Data organization via forming local regions is an integral part of deep learning networks that process 3D point clouds in a hierarchical manner. At each level, the point cloud is sampled to extract representative points and these points serve as centers of local regions. The organization of local regions is of considerable importance since it determines the location and size of the receptive field at a particular layer of feature aggregation. In this paper, we present two local region-learning modules: Center Shift Module to infer the appropriate shift for each center point, and Radius Update Module to alter the radius of each local region. The parameters of the modules are learned through optimizing the loss associated with the particular task within an end-to-end network. We present alternatives for these modules through various ways of modeling the interactions of the features and locations of 3D points in the point cloud. We integrated both modules independently and together into the PointNet++ and PointCNN object classification architectures, and demonstrated that the modules contributed to a significant increase in classification accuracy for the ScanObjectNN data set consisting of scans of real-world objects. Our further experiments on the ShapeNet data set showed that the modules are also effective on 3D CAD models.
2208.05911 | Aristomenis Donos | Aristomenis Donos, Polydoros Kailidis, Christiana Pantelidou | Holographic Dissipation from the Symplectic Current | 32 pages,1 figure, Version to appear on JHEP | null | 10.1007/JHEP10(2022)058 | null | hep-th | http://creativecommons.org/licenses/by/4.0/ | We develop analytic techniques to construct the leading dissipative terms in
a derivative expansion of holographic fluids. Our basic ingredient is the
Crnkovic-Witten symplectic current of classical gravity which we use to extract
the dissipative transport coefficients of holographic fluids, assuming
knowledge of the thermodynamics and the near horizon geometries of the bulk
black hole geometries. We apply our techniques to non-conformal neutral fluids
to reproduce previous results on the shear viscosity and generalise a known
expression for the bulk viscosity.
| [
{
"created": "Thu, 11 Aug 2022 16:24:59 GMT",
"version": "v1"
},
{
"created": "Sun, 21 Aug 2022 10:06:21 GMT",
"version": "v2"
},
{
"created": "Sat, 8 Oct 2022 15:53:46 GMT",
"version": "v3"
}
] | 2022-10-26 | [
[
"Donos",
"Aristomenis",
""
],
[
"Kailidis",
"Polydoros",
""
],
[
"Pantelidou",
"Christiana",
""
]
] | We develop analytic techniques to construct the leading dissipative terms in a derivative expansion of holographic fluids. Our basic ingredient is the Crnkovic-Witten symplectic current of classical gravity which we use to extract the dissipative transport coefficients of holographic fluids, assuming knowledge of the thermodynamics and the near horizon geometries of the bulk black hole geometries. We apply our techniques to non-conformal neutral fluids to reproduce previous results on the shear viscosity and generalise a known expression for the bulk viscosity. |
2402.11168 | Amit Dhurandhar | Amit Dhurandhar, Swagatam Haldar, Dennis Wei and Karthikeyan Natesan
Ramamurthy | Trust Regions for Explanations via Black-Box Probabilistic Certification | Accepted to ICML 2024 | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Given the black box nature of machine learning models, a plethora of
explainability methods have been developed to decipher the factors behind
individual decisions. In this paper, we introduce a novel problem of black box
(probabilistic) explanation certification. We ask the question: Given a black
box model with only query access, an explanation for an example and a quality
metric (viz. fidelity, stability), can we find the largest hypercube (i.e.,
$\ell_{\infty}$ ball) centered at the example such that when the explanation is
applied to all examples within the hypercube, (with high probability) a quality
criterion is met (viz. fidelity greater than some value)? Being able to
efficiently find such a \emph{trust region} has multiple benefits: i) insight
into model behavior in a \emph{region}, with a \emph{guarantee}; ii)
ascertained \emph{stability} of the explanation; iii) \emph{explanation reuse},
which can save time, energy and money by not having to find explanations for
every example; and iv) a possible \emph{meta-metric} to compare explanation
methods. Our contributions include formalizing this problem, proposing
solutions, providing theoretical guarantees for these solutions that are
computable, and experimentally showing their efficacy on synthetic and real
data.
| [
{
"created": "Sat, 17 Feb 2024 02:26:14 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Feb 2024 00:05:25 GMT",
"version": "v2"
},
{
"created": "Wed, 5 Jun 2024 16:36:21 GMT",
"version": "v3"
}
] | 2024-06-06 | [
[
"Dhurandhar",
"Amit",
""
],
[
"Haldar",
"Swagatam",
""
],
[
"Wei",
"Dennis",
""
],
[
"Ramamurthy",
"Karthikeyan Natesan",
""
]
] | Given the black box nature of machine learning models, a plethora of explainability methods have been developed to decipher the factors behind individual decisions. In this paper, we introduce a novel problem of black box (probabilistic) explanation certification. We ask the question: Given a black box model with only query access, an explanation for an example and a quality metric (viz. fidelity, stability), can we find the largest hypercube (i.e., $\ell_{\infty}$ ball) centered at the example such that when the explanation is applied to all examples within the hypercube, (with high probability) a quality criterion is met (viz. fidelity greater than some value)? Being able to efficiently find such a \emph{trust region} has multiple benefits: i) insight into model behavior in a \emph{region}, with a \emph{guarantee}; ii) ascertained \emph{stability} of the explanation; iii) \emph{explanation reuse}, which can save time, energy and money by not having to find explanations for every example; and iv) a possible \emph{meta-metric} to compare explanation methods. Our contributions include formalizing this problem, proposing solutions, providing theoretical guarantees for these solutions that are computable, and experimentally showing their efficacy on synthetic and real data. |
1001.5100 | Xiwang Cao | Xiwang Cao and Lei Hu | On Exponential Sums, Newton identities and Dickson Polynomials over
Finite Fields | 18 pages | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Let $\mathbb{F}_{q}$ be a finite field, $\mathbb{F}_{q^s}$ be an extension of
$\mathbb{F}_q$, let $f(x)\in \mathbb{F}_q[x]$ be a polynomial of degree $n$
with $\gcd(n,q)=1$. We present a recursive formula for evaluating the
exponential sum $\sum_{c\in \mathbb{F}_{q^s}}\chi^{(s)}(f(x))$. Let $a$ and $b$
be two elements in $\mathbb{F}_q$ with $a\neq 0$, and let $u$ be a positive integer. We
obtain an estimate for the exponential sum $\sum_{c\in
\mathbb{F}^*_{q^s}}\chi^{(s)}(ac^u+bc^{-1})$, where $\chi^{(s)}$ is the lifting
of an additive character $\chi$ of $\mathbb{F}_q$. Some properties of the
sequences constructed from these exponential sums are also provided.
| [
{
"created": "Thu, 28 Jan 2010 04:50:32 GMT",
"version": "v1"
}
] | 2010-01-29 | [
[
"Cao",
"Xiwang",
""
],
[
"Hu",
"Lei",
""
]
] | Let $\mathbb{F}_{q}$ be a finite field, $\mathbb{F}_{q^s}$ be an extension of $\mathbb{F}_q$, let $f(x)\in \mathbb{F}_q[x]$ be a polynomial of degree $n$ with $\gcd(n,q)=1$. We present a recursive formula for evaluating the exponential sum $\sum_{c\in \mathbb{F}_{q^s}}\chi^{(s)}(f(x))$. Let $a$ and $b$ be two elements in $\mathbb{F}_q$ with $a\neq 0$, and let $u$ be a positive integer. We obtain an estimate for the exponential sum $\sum_{c\in \mathbb{F}^*_{q^s}}\chi^{(s)}(ac^u+bc^{-1})$, where $\chi^{(s)}$ is the lifting of an additive character $\chi$ of $\mathbb{F}_q$. Some properties of the sequences constructed from these exponential sums are also provided.
2012.11087 | Maxime Tr\'epanier | Nadav Drukker and Maxime Tr\'epanier | Observations on BPS observables in 6d | 23 pages | J.Phys.A 54 (2021) 20, 205401 | 10.1088/1751-8121/abf38d | null | hep-th | http://creativecommons.org/licenses/by/4.0/ | We study possible geometries and R-symmetry breaking patterns that lead to
globally BPS surface operators in the six dimensional $\mathcal{N}=(2,0)$
theory. We find four main classes of solutions in different subspaces of
$\mathbb{R}^6$ and a multitude of subclasses and specific examples. We prove
that these constructions lead to supersymmetry-preserving observables and count
the number of preserved supercharges. We discuss the underlying geometry,
calculate their anomalies and present analogous structures for the holographic
dual M2-branes in $AdS_7\times S^4$. We also comment on the dimensional
reduction of these observables to line and surface operators in 4d and 5d
theories. This rich spectrum of operators is proposed as the simplest and most
natural observables of this mysterious theory.
| [
{
"created": "Mon, 21 Dec 2020 02:37:33 GMT",
"version": "v1"
}
] | 2021-04-27 | [
[
"Drukker",
"Nadav",
""
],
[
"Trépanier",
"Maxime",
""
]
] | We study possible geometries and R-symmetry breaking patterns that lead to globally BPS surface operators in the six dimensional $\mathcal{N}=(2,0)$ theory. We find four main classes of solutions in different subspaces of $\mathbb{R}^6$ and a multitude of subclasses and specific examples. We prove that these constructions lead to supersymmetry-preserving observables and count the number of preserved supercharges. We discuss the underlying geometry, calculate their anomalies and present analogous structures for the holographic dual M2-branes in $AdS_7\times S^4$. We also comment on the dimensional reduction of these observables to line and surface operators in 4d and 5d theories. This rich spectrum of operators is proposed as the simplest and most natural observables of this mysterious theory.
2105.13514 | Duong Dung | Tri Dung Duong, Qian Li, Guandong Xu | Stochastic Intervention for Causal Inference via Reinforcement Learning | Under review for Neurocomputing. arXiv admin note: substantial text
overlap with arXiv:2105.12898 | null | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Causal inference methods are widely applied in various decision-making
domains such as precision medicine, optimal policy and economics. Central to
causal inference is the treatment effect estimation of intervention strategies,
such as changes in drug dosing and increases in financial aid. Existing methods
are mostly restricted to the deterministic treatment and compare outcomes under
different treatments. However, they are unable to address the substantial
recent interest in treatment effect estimation under stochastic treatment,
e.g., "how all units' health status changes if they adopt a 50\% dose reduction".
In other words, they lack the capability of providing fine-grained treatment
effect estimation to support sound decision-making. In our study, we advance
the causal inference research by proposing a new effective framework to
estimate the treatment effect on stochastic intervention. Particularly, we
develop a stochastic intervention effect estimator (SIE) based on nonparametric
influence function, with the theoretical guarantees of robustness and fast
convergence rates. Additionally, we construct a customised reinforcement
learning algorithm based on the random search solver which can effectively find
the optimal policy to produce the greatest expected outcomes for the
decision-making process. Finally, we conduct an empirical study to justify that
our framework can achieve significant performance in comparison with
state-of-the-art baselines.
| [
{
"created": "Fri, 28 May 2021 00:11:22 GMT",
"version": "v1"
}
] | 2021-05-31 | [
[
"Duong",
"Tri Dung",
""
],
[
"Li",
"Qian",
""
],
[
"Xu",
"Guandong",
""
]
] | Causal inference methods are widely applied in various decision-making domains such as precision medicine, optimal policy and economics. Central to causal inference is the treatment effect estimation of intervention strategies, such as changes in drug dosing and increases in financial aid. Existing methods are mostly restricted to the deterministic treatment and compare outcomes under different treatments. However, they are unable to address the substantial recent interest in treatment effect estimation under stochastic treatment, e.g., "how all units' health status changes if they adopt a 50\% dose reduction". In other words, they lack the capability of providing fine-grained treatment effect estimation to support sound decision-making. In our study, we advance the causal inference research by proposing a new effective framework to estimate the treatment effect on stochastic intervention. Particularly, we develop a stochastic intervention effect estimator (SIE) based on nonparametric influence function, with the theoretical guarantees of robustness and fast convergence rates. Additionally, we construct a customised reinforcement learning algorithm based on the random search solver which can effectively find the optimal policy to produce the greatest expected outcomes for the decision-making process. Finally, we conduct an empirical study to justify that our framework can achieve significant performance in comparison with state-of-the-art baselines.
hep-th/9811178 | Ingo Runkel | Ingo Runkel | Boundary structure constants for the A-series Virasoro minimal models | 14 pages, LaTeX2e, 6 figures, uses amsmath,amsfonts,epsfig,cite;
minor corrections, version as to appear in Nucl.Phys.B | Nucl.Phys. B549 (1999) 563-578 | 10.1016/S0550-3213(99)00125-X | KCL-MTH-98-59 | hep-th | null | We consider A-series modular invariant Virasoro minimal models on the upper
half plane. From Lewellen's sewing constraints a necessary form of the bulk and
boundary structure constants is derived. Necessary means that any solution can
be brought to the given form by rescaling of the fields. All constants are
expressed essentially in terms of fusing (F-) matrix elements and the
normalisations are chosen such that they are real and no square roots appear.
It is not shown in this paper that the given structure constants solve the
sewing constraints; however, random numerical tests show no contradiction, and
the bulk structure constants agree with Dotsenko and Fateev. In order to
facilitate numerical calculations a recursion relation for the F-matrices is
given.
| [
{
"created": "Thu, 19 Nov 1998 15:38:49 GMT",
"version": "v1"
},
{
"created": "Fri, 4 Jun 1999 14:11:42 GMT",
"version": "v2"
}
] | 2009-10-31 | [
[
"Runkel",
"Ingo",
""
]
] | We consider A-series modular invariant Virasoro minimal models on the upper half plane. From Lewellen's sewing constraints a necessary form of the bulk and boundary structure constants is derived. Necessary means that any solution can be brought to the given form by rescaling of the fields. All constants are expressed essentially in terms of fusing (F-) matrix elements and the normalisations are chosen such that they are real and no square roots appear. It is not shown in this paper that the given structure constants solve the sewing constraints, however random numerical tests show no contradiction and agreement of the bulk structure constants with Dotsenko and Fateev. In order to facilitate numerical calculations a recursion relation for the F-matrices is given. |
1806.07557 | Heyrim Cho | Heyrim Cho and Doron Levy | Modeling continuous levels of resistance to multidrug therapy in cancer | 42 pages | null | 10.1016/j.apm.2018.07.025 | null | q-bio.PE q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multidrug resistance consists of a series of genetic and epigenetic
alterations that involve multifactorial and complex processes, which are a
challenge to successful cancer treatments. Accompanied by advances in
biotechnology and high-dimensional data analysis techniques that are bringing
in new opportunities in modeling biological systems with continuous phenotypic
structured models, we study a cancer cell population model that considers a
multi-dimensional continuous resistance trait to multiple drugs to investigate
multidrug resistance. We compare our continuous resistance trait model with
classical models that assume a discrete resistance state and classify the cases
when the continuum and discrete models yield different dynamical patterns in
the emerging heterogeneity in response to drugs. We also compute the maximal
fitness resistance trait for various continuum models and study the effect of
epimutations. Finally, we demonstrate how our approach can be used to study
tumor growth regarding the turnover rate and the proliferating fraction, and
show that a continuous resistance level may result in different dynamics when
compared with the predictions of other discrete models.
| [
{
"created": "Wed, 20 Jun 2018 05:22:16 GMT",
"version": "v1"
}
] | 2022-04-19 | [
[
"Cho",
"Heyrim",
""
],
[
"Levy",
"Doron",
""
]
] | Multidrug resistance consists of a series of genetic and epigenetic alterations that involve multifactorial and complex processes, which are a challenge to successful cancer treatments. Accompanied by advances in biotechnology and high-dimensional data analysis techniques that are bringing in new opportunities in modeling biological systems with continuous phenotypic structured models, we study a cancer cell population model that considers a multi-dimensional continuous resistance trait to multiple drugs to investigate multidrug resistance. We compare our continuous resistance trait model with classical models that assume a discrete resistance state and classify the cases when the continuum and discrete models yield different dynamical patterns in the emerging heterogeneity in response to drugs. We also compute the maximal fitness resistance trait for various continuum models and study the effect of epimutations. Finally, we demonstrate how our approach can be used to study tumor growth regarding the turnover rate and the proliferating fraction, and show that a continuous resistance level may result in different dynamics when compared with the predictions of other discrete models.
hep-th/0007002 | Luigi Pilo | Mihail Mintchev, Luigi Pilo | Localization of Quantum Fields on Branes | 17 pages, Latex, two eps figures. Few misprints corrected. Final
version accepted for publication in Nuclear Physics B | Nucl.Phys. B592 (2001) 219-233 | 10.1016/S0550-3213(00)00602-7 | IFUP-TH 21/2000 and SNS-PH/00-11 | hep-th | null | A mechanism for localization of quantum fields on a $s$-brane, representing
the boundary of a s+2 dimensional bulk space, is investigated. Minkowski and
AdS bulk spaces are analyzed. Besides the background geometry, the relevant
parameters controlling the theory are the mass M and a real parameter \eta,
specifying the boundary condition on the brane. The importance of exploring the
whole range of allowed values for these parameters is emphasized. Stability in
Minkowski space requires \eta to be greater or equal to -M, whereas in the AdS
background all real \eta are permitted. Both in the flat and in AdS case, the
induced field on the brane is a non-canonical generalized free field. For a
suitable choice of boundary condition, corresponding to the presence of a
boundary state, the induced field on the brane mimics standard s+1 dimensional
physics. In a certain range of \eta, the spectral function in the AdS case
is dominated by a massive excitation, which imitates the presence of a massive
particle on the brane. We show that the quantum field induced on the brane is
stable.
| [
{
"created": "Sat, 1 Jul 2000 11:01:25 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Oct 2000 10:29:55 GMT",
"version": "v2"
}
] | 2015-06-25 | [
[
"Mintchev",
"Mihail",
""
],
[
"Pilo",
"Luigi",
""
]
] | A mechanism for localization of quantum fields on a $s$-brane, representing the boundary of a s+2 dimensional bulk space, is investigated. Minkowski and AdS bulk spaces are analyzed. Besides the background geometry, the relevant parameters controlling the theory are the mass M and a real parameter \eta, specifying the boundary condition on the brane. The importance of exploring the whole range of allowed values for these parameters is emphasized. Stability in Minkowski space requires \eta to be greater or equal to -M, whereas in the AdS background all real \eta are permitted. Both in the flat and in AdS case, the induced field on the brane is a non-canonical generalized free field. For a suitable choice of boundary condition, corresponding to the presence of a boundary state, the induced field on the brane mimics standard s+1 dimensional physics. In a certain range of \eta, the spectral function in the AdS case is dominated by a massive excitation, which imitates the presence of a massive particle on the brane. We show that the quantum field induced on the brane is stable.
1108.5133 | Everton Murilo Carvalho Abreu | Everton M. C. Abreu and Mario J. Neves | Causality in noncommutative spacetime | 26 pages. Latex-JHEP style. arXiv admin note: substantial text
overlap with arXiv:0909.0465 | null | null | null | hep-th gr-qc math-ph math.MP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we investigated the causality problem present in the recent
work about the Doplicher-Fredenhagen-Roberts-Amorim (DFRA) noncommutative
framework which analyzed the complex scalar field. To accomplish this task we
provided a brief review of the main ingredients of the problem and we
demonstrated precisely that the DFRA algebra obeys the rules of the Canonical
Commutation Relations algebra. This fact permitted us to prove the form of the
DFRA operators previously constructed in the usual way. After that, we
introduced the solution of its Klein-Gordon equation with a source term. Its
solution was accomplished through the retarded, advanced and causal Green
functions constructed in this noncommutative ten dimensional DFRA spacetime. We
believe that this solution constitutes the first step in the elaboration of a
quantum field theory using the DFRA formalism where the noncommutative
parameter is an ordinary coordinate of the system and therefore has a canonical
conjugate momentum.
| [
{
"created": "Thu, 25 Aug 2011 17:07:04 GMT",
"version": "v1"
}
] | 2011-12-25 | [
[
"Abreu",
"Everton M. C.",
""
],
[
"Neves",
"Mario J.",
""
]
] | In this paper we investigated the causality problem present in the recent work about the Doplicher-Fredenhagen-Roberts-Amorim (DFRA) noncommutative framework which analyzed the complex scalar field. To accomplish this task we provided a brief review of the main ingredients of the problem and we demonstrated precisely that the DFRA algebra obeys the rules of the Canonical Commutation Relations algebra. This fact permitted us to prove the form of the DFRA operators previously constructed in the usual way. After that, we introduced the solution of its Klein-Gordon equation with a source term. Its solution was accomplished through the retarded, advanced and causal Green functions constructed in this noncommutative ten dimensional DFRA spacetime. We believe that this solution constitutes the first step in the elaboration of a quantum field theory using the DFRA formalism where the noncommutative parameter is an ordinary coordinate of the system and therefore has a canonical conjugate momentum. |
hep-th/0201092 | Stephen Blaha | Stephen Blaha | A Quantum Computer Foundation for the Standard Model and SuperString
Theories | 78 pages, PDF | null | null | null | hep-th cs.PL quant-ph | null | We show the Standard Model and SuperString Theories can be naturally based on
a Quantum Computer foundation. The Standard Model of elementary particles can
be viewed as defining a Quantum Computer Grammar and language. A Quantum
Computer in a certain limit naturally forms a Superspace upon which
Supersymmetry rotations can be defined - a Continuum Quantum Computer. Quantum
high-level computer languages such as Quantum C and Quantum Assembly language
are also discussed. In these new linguistic representations, particles become
literally symbols or letters, and particle interactions become grammar rules.
This view is NOT the same as the often-expressed view that Mathematics is the
language of Physics. Some new developments relating to Quantum Computers and
Quantum Turing Machines are also described.
| [
{
"created": "Mon, 14 Jan 2002 21:57:12 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Blaha",
"Stephen",
""
]
] | We show the Standard Model and SuperString Theories can be naturally based on a Quantum Computer foundation. The Standard Model of elementary particles can be viewed as defining a Quantum Computer Grammar and language. A Quantum Computer in a certain limit naturally forms a Superspace upon which Supersymmetry rotations can be defined - a Continuum Quantum Computer. Quantum high-level computer languages such as Quantum C and Quantum Assembly language are also discussed. In these new linguistic representations, particles become literally symbols or letters, and particle interactions become grammar rules. This view is NOT the same as the often-expressed view that Mathematics is the language of Physics. Some new developments relating to Quantum Computers and Quantum Turing Machines are also described. |
1706.01188 | Diederik Aerts | Diederik Aerts, Jonito Aerts Argu\"elles, Lester Beltran, Suzette
Geriente, Massimiliano Sassoli de Bianchi, Sandro Sozzo and Tomas Veloz | Spin and Wind Directions II: A Bell State Quantum Model | This a the second half of a two-part article, the first half being
entitled 'Spin and Wind Directions I: Identifying Entanglement in Nature and
Cognition' and to be found at arXiv:1508.00434 | Foundations of Science, 23, pp. 337-365 (2018) | 10.1007/s10699-017-9530-2 | null | q-bio.NC quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the first half of this two-part article, we analyzed a cognitive
psychology experiment where participants were asked to select pairs of
directions that they considered to be the best example of 'Two Different Wind
Directions', and showed that the data violate the CHSH version of Bell's
inequality, with the same magnitude as in typical Bell-test experiments in physics.
In this second part, we complete our analysis by presenting a symmetrized
version of the experiment, still violating the CHSH inequality but now also
obeying the marginal law, for which we provide a full quantum modeling in
Hilbert space, using a singlet state and suitably chosen product measurements.
We also address some of the criticisms that have been recently directed at
experiments of this kind, according to which they would not highlight the
presence of genuine forms of entanglement. We explain that these criticisms are
based on a view of entanglement that is too restrictive, thus unable to capture
all possible ways physical and conceptual entities can connect and form systems
behaving as a whole. We also provide an example of a mechanical model showing
that the violations of the marginal law and Bell inequalities are generally to
be associated with different mechanisms.
| [
{
"created": "Mon, 5 Jun 2017 04:24:31 GMT",
"version": "v1"
}
] | 2019-02-12 | [
[
"Aerts",
"Diederik",
""
],
[
"Arguëlles",
"Jonito Aerts",
""
],
[
"Beltran",
"Lester",
""
],
[
"Geriente",
"Suzette",
""
],
[
"de Bianchi",
"Massimiliano Sassoli",
""
],
[
"Sozzo",
"Sandro",
""
],
[
"Veloz",
"T... | In the first half of this two-part article, we analyzed a cognitive psychology experiment where participants were asked to select pairs of directions that they considered to be the best example of 'Two Different Wind Directions', and showed that the data violate the CHSH version of Bell's inequality, with the same magnitude as in typical Bell-test experiments in physics. In this second part, we complete our analysis by presenting a symmetrized version of the experiment, still violating the CHSH inequality but now also obeying the marginal law, for which we provide a full quantum modeling in Hilbert space, using a singlet state and suitably chosen product measurements. We also address some of the criticisms that have been recently directed at experiments of this kind, according to which they would not highlight the presence of genuine forms of entanglement. We explain that these criticisms are based on a view of entanglement that is too restrictive, thus unable to capture all possible ways physical and conceptual entities can connect and form systems behaving as a whole. We also provide an example of a mechanical model showing that the violations of the marginal law and Bell inequalities are generally to be associated with different mechanisms.
1903.03698 | Vitchyr H. Pong | Vitchyr H. Pong, Murtaza Dalal, Steven Lin, Ashvin Nair, Shikhar Bahl,
Sergey Levine | Skew-Fit: State-Covering Self-Supervised Reinforcement Learning | ICML 2020. 8 pages, 8 figures; 9 pages appendix (6 additional
figures) | null | null | null | cs.LG cs.AI cs.RO stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous agents that must exhibit flexible and broad capabilities will need
to be equipped with large repertoires of skills. Defining each skill with a
manually-designed reward function limits this repertoire and imposes a manual
engineering burden. Self-supervised agents that set their own goals can
automate this process, but designing appropriate goal setting objectives can be
difficult, and often involves heuristic design decisions. In this paper, we
propose a formal exploration objective for goal-reaching policies that
maximizes state coverage. We show that this objective is equivalent to
maximizing goal reaching performance together with the entropy of the goal
distribution, where goals correspond to full state observations. To instantiate
this principle, we present an algorithm called Skew-Fit for learning a
maximum-entropy goal distribution. We prove that, under regularity conditions,
Skew-Fit converges to a uniform distribution over the set of valid states, even
when we do not know this set beforehand. Our experiments show that combining
Skew-Fit for learning goal distributions with existing goal-reaching methods
outperforms a variety of prior methods on open-sourced visual goal-reaching
tasks. Moreover, we demonstrate that Skew-Fit enables a real-world robot to
learn to open a door, entirely from scratch, from pixels, and without any
manually-designed reward function.
| [
{
"created": "Fri, 8 Mar 2019 23:32:17 GMT",
"version": "v1"
},
{
"created": "Fri, 31 May 2019 15:30:20 GMT",
"version": "v2"
},
{
"created": "Sun, 9 Feb 2020 20:24:12 GMT",
"version": "v3"
},
{
"created": "Tue, 4 Aug 2020 04:07:27 GMT",
"version": "v4"
}
] | 2020-08-05 | [
[
"Pong",
"Vitchyr H.",
""
],
[
"Dalal",
"Murtaza",
""
],
[
"Lin",
"Steven",
""
],
[
"Nair",
"Ashvin",
""
],
[
"Bahl",
"Shikhar",
""
],
[
"Levine",
"Sergey",
""
]
] | Autonomous agents that must exhibit flexible and broad capabilities will need to be equipped with large repertoires of skills. Defining each skill with a manually-designed reward function limits this repertoire and imposes a manual engineering burden. Self-supervised agents that set their own goals can automate this process, but designing appropriate goal setting objectives can be difficult, and often involves heuristic design decisions. In this paper, we propose a formal exploration objective for goal-reaching policies that maximizes state coverage. We show that this objective is equivalent to maximizing goal reaching performance together with the entropy of the goal distribution, where goals correspond to full state observations. To instantiate this principle, we present an algorithm called Skew-Fit for learning a maximum-entropy goal distribution. We prove that, under regularity conditions, Skew-Fit converges to a uniform distribution over the set of valid states, even when we do not know this set beforehand. Our experiments show that combining Skew-Fit for learning goal distributions with existing goal-reaching methods outperforms a variety of prior methods on open-sourced visual goal-reaching tasks. Moreover, we demonstrate that Skew-Fit enables a real-world robot to learn to open a door, entirely from scratch, from pixels, and without any manually-designed reward function.
1803.05760 | Boliang Lin | Boliang Lin | A Study of Car-to-Train Assignment Problem for Rail Express Cargos on
Scheduled and Unscheduled Train Service Network | 12 pages, 1 figure | null | 10.1371/journal.pone.0204598 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Freight train services in a railway network system are generally divided into
two categories: one is the unscheduled train, whose operating frequency
fluctuates with origin-destination (OD) demands; the other is the scheduled
train, which is running based on regular timetable just like the passenger
trains. The timetable will be released to the public if determined and it would
not be influenced by OD demands. Typically, the total capacity of scheduled
trains can usually satisfy the predicted demands of express cargos on average.
However, the demands are changing in practice. Therefore, how to distribute the
shipments between different stations to unscheduled and scheduled train
services has become an important research field in railway transportation. This
paper focuses on the coordinated optimization of the rail express cargos
distribution in two service networks. On the premise of fully utilizing the
capacity of scheduled service network first, we established a Car-to-Train
(CTT) assignment model to assign rail express cargos to scheduled and
unscheduled trains scientifically. The objective function is to maximize the
net income of transporting the rail express cargos. The constraints include the
capacity restriction on the service arcs, flow balance constraints, logical
relationship constraint between two groups of decision variables and the due
date constraint. The last constraint is to ensure that the total transportation
time of a shipment would not be longer than its predefined due date. Finally,
we discuss linearization techniques to simplify the model proposed in this
paper, which make it possible to obtain a globally optimal solution using
commercial software.
| [
{
"created": "Wed, 14 Mar 2018 07:32:14 GMT",
"version": "v1"
}
] | 2018-11-21 | [
[
"Lin",
"Boliang",
""
]
] | Freight train services in a railway network system are generally divided into two categories: one is the unscheduled train, whose operating frequency fluctuates with origin-destination (OD) demands; the other is the scheduled train, which is running based on regular timetable just like the passenger trains. The timetable will be released to the public if determined and it would not be influenced by OD demands. Typically, the total capacity of scheduled trains can usually satisfy the predicted demands of express cargos on average. However, the demands are changing in practice. Therefore, how to distribute the shipments between different stations to unscheduled and scheduled train services has become an important research field in railway transportation. This paper focuses on the coordinated optimization of the rail express cargos distribution in two service networks. On the premise of fully utilizing the capacity of scheduled service network first, we established a Car-to-Train (CTT) assignment model to assign rail express cargos to scheduled and unscheduled trains scientifically. The objective function is to maximize the net income of transporting the rail express cargos. The constraints include the capacity restriction on the service arcs, flow balance constraints, logical relationship constraint between two groups of decision variables and the due date constraint. The last constraint is to ensure that the total transportation time of a shipment would not be longer than its predefined due date. Finally, we discuss linearization techniques to simplify the model proposed in this paper, which make it possible to obtain a globally optimal solution using commercial software.
2005.00749 | Zheng Wang | Jie Ren, Lu Yuan, Petteri Nurmi, Xiaoming Wang, Miao Ma, Ling Gao,
Zhanyong Tang, Jie Zheng, Zheng Wang | Smart, Adaptive Energy Optimization for Mobile Web Interactions | Accepted to be published at INFOCOM 2020 | null | null | null | cs.NI cs.HC cs.PF | http://creativecommons.org/licenses/by/4.0/ | Web technology underpins many interactive mobile applications. However,
energy-efficient mobile web interactions are an outstanding challenge. Given the
increasing diversity and complexity of mobile hardware, any practical
optimization scheme must work for a wide range of users, mobile platforms and
web workloads. This paper presents CAMEL, a novel energy optimization system
for mobile web interactions. CAMEL leverages machine learning techniques to
develop a smart, adaptive scheme to judiciously trade performance for reduced
power consumption. Unlike prior work, CAMEL directly models how a given web
content affects the user expectation and uses this to guide energy
optimization. It goes further by employing transfer learning and conformal
predictions to tune a previously learned model in the end-user environment and
improve it over time. We apply CAMEL to Chromium and evaluate it on four
distinct mobile systems involving 1,000 testing webpages and 30 users. Compared
to four state-of-the-art web-event optimizers, CAMEL delivers 22% more energy
savings, but with 49% fewer violations on the quality of user experience, and
exhibits orders of magnitude less overhead when targeting a new computing
environment.
| [
{
"created": "Sat, 2 May 2020 08:51:07 GMT",
"version": "v1"
}
] | 2020-05-05 | [
[
"Ren",
"Jie",
""
],
[
"Yuan",
"Lu",
""
],
[
"Nurmi",
"Petteri",
""
],
[
"Wang",
"Xiaoming",
""
],
[
"Ma",
"Miao",
""
],
[
"Gao",
"Ling",
""
],
[
"Tang",
"Zhanyong",
""
],
[
"Zheng",
"Jie",
"... | Web technology underpins many interactive mobile applications. However, energy-efficient mobile web interactions are an outstanding challenge. Given the increasing diversity and complexity of mobile hardware, any practical optimization scheme must work for a wide range of users, mobile platforms and web workloads. This paper presents CAMEL, a novel energy optimization system for mobile web interactions. CAMEL leverages machine learning techniques to develop a smart, adaptive scheme to judiciously trade performance for reduced power consumption. Unlike prior work, CAMEL directly models how a given web content affects the user expectation and uses this to guide energy optimization. It goes further by employing transfer learning and conformal predictions to tune a previously learned model in the end-user environment and improve it over time. We apply CAMEL to Chromium and evaluate it on four distinct mobile systems involving 1,000 testing webpages and 30 users. Compared to four state-of-the-art web-event optimizers, CAMEL delivers 22% more energy savings, but with 49% fewer violations on the quality of user experience, and exhibits orders of magnitude less overhead when targeting a new computing environment.
2301.12780 | Aviv Navon | Aviv Navon, Aviv Shamsian, Idan Achituve, Ethan Fetaya, Gal Chechik,
Haggai Maron | Equivariant Architectures for Learning in Deep Weight Spaces | ICML 2023 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Designing machine learning architectures for processing neural networks in
their raw weight matrix form is a newly introduced research direction.
Unfortunately, the unique symmetry structure of deep weight spaces makes this
design very challenging. If successful, such architectures would be capable of
performing a wide range of intriguing tasks, from adapting a pre-trained
network to a new domain to editing objects represented as functions (INRs or
NeRFs). As a first step towards this goal, we present here a novel network
architecture for learning in deep weight spaces. It takes as input a
concatenation of weights and biases of a pre-trained MLP and processes it using
a composition of layers that are equivariant to the natural permutation
symmetry of the MLP's weights: Changing the order of neurons in intermediate
layers of the MLP does not affect the function it represents. We provide a full
characterization of all affine equivariant and invariant layers for these
symmetries and show how these layers can be implemented using three basic
operations: pooling, broadcasting, and fully connected layers applied to the
input in an appropriate manner. We demonstrate the effectiveness of our
architecture and its advantages over natural baselines in a variety of learning
tasks.
| [
{
"created": "Mon, 30 Jan 2023 10:50:33 GMT",
"version": "v1"
},
{
"created": "Wed, 31 May 2023 19:24:08 GMT",
"version": "v2"
}
] | 2023-06-02 | [
[
"Navon",
"Aviv",
""
],
[
"Shamsian",
"Aviv",
""
],
[
"Achituve",
"Idan",
""
],
[
"Fetaya",
"Ethan",
""
],
[
"Chechik",
"Gal",
""
],
[
"Maron",
"Haggai",
""
]
] | Designing machine learning architectures for processing neural networks in their raw weight matrix form is a newly introduced research direction. Unfortunately, the unique symmetry structure of deep weight spaces makes this design very challenging. If successful, such architectures would be capable of performing a wide range of intriguing tasks, from adapting a pre-trained network to a new domain to editing objects represented as functions (INRs or NeRFs). As a first step towards this goal, we present here a novel network architecture for learning in deep weight spaces. It takes as input a concatenation of weights and biases of a pre-trained MLP and processes it using a composition of layers that are equivariant to the natural permutation symmetry of the MLP's weights: Changing the order of neurons in intermediate layers of the MLP does not affect the function it represents. We provide a full characterization of all affine equivariant and invariant layers for these symmetries and show how these layers can be implemented using three basic operations: pooling, broadcasting, and fully connected layers applied to the input in an appropriate manner. We demonstrate the effectiveness of our architecture and its advantages over natural baselines in a variety of learning tasks. |
2306.11705 | Johannes Rosenberger | Johannes Rosenberger, Abdalla Ibrahim, Christian Deppe, Roberto
Ferrara | Deterministic Identification Over Multiple-Access Channels | ISIT 2023 version | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deterministic identification over K-input multiple-access channels with
average input cost constraints is considered. The capacity region for
deterministic identification is determined for an average-error criterion,
where arbitrarily large codes are achievable. For a maximal-error criterion,
upper and lower bounds on the capacity region are derived. The bounds coincide
if all average partial point-to-point channels are injective under the input
constraint, i.e. all inputs at one terminal are mapped to distinct output
distributions, if averaged over the inputs at all other terminals. The
achievability is proved by treating the MAC as an arbitrarily varying channel
with average state constraints. For injective average channels, the capacity
region is a hyperrectangle. The modulo-2 and modulo-3 binary adder MAC are
presented as examples of channels which are injective under suitable input
constraints. The binary multiplier MAC is presented as an example of a
non-injective channel, where the achievable identification rate region still
includes the Shannon capacity region.
| [
{
"created": "Tue, 20 Jun 2023 17:34:42 GMT",
"version": "v1"
}
] | 2023-06-21 | [
[
"Rosenberger",
"Johannes",
""
],
[
"Ibrahim",
"Abdalla",
""
],
[
"Deppe",
"Christian",
""
],
[
"Ferrara",
"Roberto",
""
]
] | Deterministic identification over K-input multiple-access channels with average input cost constraints is considered. The capacity region for deterministic identification is determined for an average-error criterion, where arbitrarily large codes are achievable. For a maximal-error criterion, upper and lower bounds on the capacity region are derived. The bounds coincide if all average partial point-to-point channels are injective under the input constraint, i.e. all inputs at one terminal are mapped to distinct output distributions, if averaged over the inputs at all other terminals. The achievability is proved by treating the MAC as an arbitrarily varying channel with average state constraints. For injective average channels, the capacity region is a hyperrectangle. The modulo-2 and modulo-3 binary adder MAC are presented as examples of channels which are injective under suitable input constraints. The binary multiplier MAC is presented as an example of a non-injective channel, where the achievable identification rate region still includes the Shannon capacity region. |
2001.11263 | Francesc Llu\'is | Francesc Llu\'is, Pablo Mart\'inez-Nuevo, Martin Bo M{\o}ller, Sven
Ewan Shepstone | Sound field reconstruction in rooms: inpainting meets super-resolution | Code: https://github.com/francesclluis/sound-field-neural-network | The Journal of the Acoustical Society of America 148, 649 (2020) | 10.1121/10.0001687 | null | cs.SD cs.LG eess.AS | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In this paper, a deep-learning-based method for sound field reconstruction is
proposed. It is shown that the magnitude of the sound pressure in the frequency
band 30-300 Hz can be reconstructed for an entire room using a very low number
of irregularly distributed, arbitrarily arranged microphones. Moreover,
the approach is agnostic to the location of the measurements in the Euclidean
space. In particular, the presented approach uses a limited number of arbitrary
discrete measurements of the magnitude of the sound field pressure in order to
extrapolate this field to a higher-resolution grid of discrete points in space
with a low computational complexity. The method is based on a U-net-like neural
network with partial convolutions trained solely on simulated data, which
itself is constructed from numerical simulations of Green's function across
thousands of common rectangular rooms. Although extensible to three dimensions
and different room shapes, the method focuses on reconstructing a
two-dimensional plane of a rectangular room from measurements of the
three-dimensional sound field. Experiments using simulated data together with
an experimental validation in a real listening room are shown. The results
suggest a performance which may exceed conventional reconstruction techniques
for a low number of microphones and computational requirements.
| [
{
"created": "Thu, 30 Jan 2020 11:31:59 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Aug 2020 16:14:24 GMT",
"version": "v2"
}
] | 2020-08-07 | [
[
"Lluís",
"Francesc",
""
],
[
"Martínez-Nuevo",
"Pablo",
""
],
[
"Møller",
"Martin Bo",
""
],
[
"Shepstone",
"Sven Ewan",
""
]
] | In this paper, a deep-learning-based method for sound field reconstruction is proposed. It is shown that the magnitude of the sound pressure in the frequency band 30-300 Hz can be reconstructed for an entire room using a very low number of irregularly distributed, arbitrarily arranged microphones. Moreover, the approach is agnostic to the location of the measurements in the Euclidean space. In particular, the presented approach uses a limited number of arbitrary discrete measurements of the magnitude of the sound field pressure in order to extrapolate this field to a higher-resolution grid of discrete points in space with a low computational complexity. The method is based on a U-net-like neural network with partial convolutions trained solely on simulated data, which itself is constructed from numerical simulations of Green's function across thousands of common rectangular rooms. Although extensible to three dimensions and different room shapes, the method focuses on reconstructing a two-dimensional plane of a rectangular room from measurements of the three-dimensional sound field. Experiments using simulated data together with an experimental validation in a real listening room are shown. The results suggest a performance which may exceed conventional reconstruction techniques for a low number of microphones and computational requirements.
hep-th/9206087 | Shin'ichi Nojiri | Shin'ichi Nojiri and Ichiro Oda | Charged Dilatonic Black Hole and Hawking Radiation in Two Dimensions | 15pp | Phys.Lett. B294 (1992) 317-324 | 10.1016/0370-2693(92)91527-G | null | hep-th | null | We consider Callan, Giddings, Harvey and Strominger's (CGHS) two dimensional
dilatonic gravity with electromagnetic interactions. This model can also be
solved classically. Among the solutions describing static black holes, there
exist extremal solutions which have zero temperatures. In the extremal
solutions, the space-time metric is not singular. We also obtain the solutions
describing charged matter (chiral fermions) collapsing into black holes.
Through the collapse, not only the future horizon but also the past horizon is
shifted. The quantum corrections, including the chiral anomaly, are also
discussed. As in the CGHS model, a curvature singularity also appears when the
matter collapses, except in the extremal case. The screening effects due to the
chiral anomaly have a tendency to cloak the singularity.
| [
{
"created": "Wed, 24 Jun 1992 03:01:32 GMT",
"version": "v1"
}
] | 2009-10-22 | [
[
"Nojiri",
"Shin'ichi",
""
],
[
"Oda",
"Ichiro",
""
]
] | We consider Callan, Giddings, Harvey and Strominger's (CGHS) two dimensional dilatonic gravity with electromagnetic interactions. This model can also be solved classically. Among the solutions describing static black holes, there exist extremal solutions which have zero temperatures. In the extremal solutions, the space-time metric is not singular. We also obtain the solutions describing charged matter (chiral fermions) collapsing into black holes. Through the collapse, not only the future horizon but also the past horizon is shifted. The quantum corrections, including the chiral anomaly, are also discussed. As in the CGHS model, a curvature singularity also appears when the matter collapses, except in the extremal case. The screening effects due to the chiral anomaly have a tendency to cloak the singularity.
hep-th/0003064 | Vladimir Kazakov | V. A. Kazakov | Solvable Matrix Models | 16 pages, a talk delivered at the MSRI Workshop ``Matrix Models and
Painlev\'e Equations'', Berkeley (USA) 1999 | null | null | LPTENS-00/09 | hep-th | null | We review some old and new methods of reduction of the number of degrees of
freedom from ~N^2 to ~N in the multi-matrix integrals.
| [
{
"created": "Wed, 8 Mar 2000 20:08:08 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Kazakov",
"V. A.",
""
]
] | We review some old and new methods of reduction of the number of degrees of freedom from ~N^2 to ~N in the multi-matrix integrals. |
1806.07848 | Ghurumuruhan Ganesan | Ghurumuruhan Ganesan | Correcting an ordered deletion-erasure | null | null | null | null | cs.IT math.CO math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we show that the single deletion correcting
Varshamov-Tenengolts code, with minor modifications, can also correct an
ordered deletion-erasure pattern where one deletion and at most one erasure
occur and the deletion always occurs before the erasure. For large code
lengths, the constructed code has the same logarithmic redundancy as optimal
codes.
| [
{
"created": "Wed, 20 Jun 2018 17:17:36 GMT",
"version": "v1"
}
] | 2018-06-21 | [
[
"Ganesan",
"Ghurumuruhan",
""
]
] | In this paper, we show that the single deletion correcting Varshamov-Tenengolts code, with minor modifications, can also correct an ordered deletion-erasure pattern where one deletion and at most one erasure occur and the deletion always occurs before the erasure. For large code lengths, the constructed code has the same logarithmic redundancy as optimal codes. |
1905.04127 | Andrei Roibu | Andrei Claudiu Roibu | Design of Artificial Intelligence Agents for Games using Deep
Reinforcement Learning | Dissertation submitted to the University of Sheffield in partial
fulfilment of the requirements for the degree of Master of Engineering. 98
pages, 21 Tables, 58 Figures | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to perform a large variety of tasks and to achieve human-level
performance in complex real-world environments, Artificial Intelligence (AI)
Agents must be able to learn from their past experiences and gain both
knowledge and an accurate representation of their environment from raw sensory
inputs. Traditionally, AI agents have suffered from difficulties in using only
sensory inputs to obtain a good representation of their environment and then
mapping this representation to an efficient control policy. Deep reinforcement
learning algorithms have provided a solution to this issue. In this study, the
performance of different conventional and novel deep reinforcement learning
algorithms was analysed. The proposed method utilises two types of algorithms,
one trained with a variant of Q-learning (DQN) and another trained with SARSA
learning (DSN) to assess the feasibility of using direct feedback alignment, a
novel biologically plausible method for back-propagating the error. These novel
agents, alongside two similar agents trained with the conventional
backpropagation algorithm, were tested by using the OpenAI Gym toolkit on
several classic control theory problems and Atari 2600 video games. The results
of this investigation open the way into new, biologically-inspired deep
reinforcement learning algorithms, and their implementation on neuromorphic
hardware.
| [
{
"created": "Fri, 10 May 2019 12:43:52 GMT",
"version": "v1"
}
] | 2019-05-13 | [
[
"Roibu",
"Andrei Claudiu",
""
]
] | In order to perform a large variety of tasks and to achieve human-level performance in complex real-world environments, Artificial Intelligence (AI) Agents must be able to learn from their past experiences and gain both knowledge and an accurate representation of their environment from raw sensory inputs. Traditionally, AI agents have suffered from difficulties in using only sensory inputs to obtain a good representation of their environment and then mapping this representation to an efficient control policy. Deep reinforcement learning algorithms have provided a solution to this issue. In this study, the performance of different conventional and novel deep reinforcement learning algorithms was analysed. The proposed method utilises two types of algorithms, one trained with a variant of Q-learning (DQN) and another trained with SARSA learning (DSN) to assess the feasibility of using direct feedback alignment, a novel biologically plausible method for back-propagating the error. These novel agents, alongside two similar agents trained with the conventional backpropagation algorithm, were tested by using the OpenAI Gym toolkit on several classic control theory problems and Atari 2600 video games. The results of this investigation open the way into new, biologically-inspired deep reinforcement learning algorithms, and their implementation on neuromorphic hardware.
1602.05971 | Nafiz Ishtiaque | Efrat Gerchkovitz, Jaume Gomis, Nafiz Ishtiaque, Avner Karasik, Zohar
Komargodski, Silviu S. Pufu | Correlation Functions of Coulomb Branch Operators | 47 pages, 6 figures. v2: typos corrected and references added | null | 10.1007/JHEP01(2017)103 | PUPT-2500 | hep-th | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the correlation functions of Coulomb branch operators in
four-dimensional N=2 Superconformal Field Theories (SCFTs) involving exactly
one anti-chiral operator. These extremal correlators are the "minimal"
non-holomorphic local observables in the theory. We show that they can be
expressed in terms of certain determinants of derivatives of the four-sphere
partition function of an appropriate deformation of the SCFT. This relation
between the extremal correlators and the deformed four-sphere partition
function is non-trivial due to the presence of conformal anomalies, which lead
to operator mixing on the sphere. Evaluating the deformed four-sphere partition
function using supersymmetric localization, we compute the extremal correlators
explicitly in many interesting examples. Additionally, the representation of
the extremal correlators mentioned above leads to a system of integrable
differential equations. We compare our exact results with previous perturbative
computations and with the four-dimensional tt^* equations. We also use our
results to study some of the asymptotic properties of the perturbative series
expansions we obtain in N=2 SQCD.
| [
{
"created": "Thu, 18 Feb 2016 21:11:48 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Oct 2016 21:44:21 GMT",
"version": "v2"
}
] | 2017-01-26 | [
[
"Gerchkovitz",
"Efrat",
""
],
[
"Gomis",
"Jaume",
""
],
[
"Ishtiaque",
"Nafiz",
""
],
[
"Karasik",
"Avner",
""
],
[
"Komargodski",
"Zohar",
""
],
[
"Pufu",
"Silviu S.",
""
]
] | We consider the correlation functions of Coulomb branch operators in four-dimensional N=2 Superconformal Field Theories (SCFTs) involving exactly one anti-chiral operator. These extremal correlators are the "minimal" non-holomorphic local observables in the theory. We show that they can be expressed in terms of certain determinants of derivatives of the four-sphere partition function of an appropriate deformation of the SCFT. This relation between the extremal correlators and the deformed four-sphere partition function is non-trivial due to the presence of conformal anomalies, which lead to operator mixing on the sphere. Evaluating the deformed four-sphere partition function using supersymmetric localization, we compute the extremal correlators explicitly in many interesting examples. Additionally, the representation of the extremal correlators mentioned above leads to a system of integrable differential equations. We compare our exact results with previous perturbative computations and with the four-dimensional tt^* equations. We also use our results to study some of the asymptotic properties of the perturbative series expansions we obtain in N=2 SQCD. |
1910.02377 | Sang Won Lee | Sang Won Lee | Liveness in Interactive Systems | null | the CSCW 2018 workshop on Hybrid Events (CSCW), 2018 | 10.5281/zenodo.1471026 | null | cs.HC | http://creativecommons.org/licenses/by/4.0/ | Creating an artifact in front of the public offers an opportunity to involve
spectators in the creation process. For example, in a live music concert,
audience members can clap, stomp and sing with the musicians to be part of the
music piece. Live creation can facilitate collaboration with the spectators.
The question I set out to answer is what it means to have liveness in
interactive systems that support large-scale hybrid events involving audience
participation. The notion of liveness is subtle in human-computer interaction.
In this paper, I revisit the notion of liveness and provide definitions of both
live and liveness from the perspective of designing interactive systems. In
addition, I discuss why liveness matters in facilitating hybrid events and
suggest future research works
| [
{
"created": "Sun, 6 Oct 2019 05:21:04 GMT",
"version": "v1"
}
] | 2019-10-08 | [
[
"Lee",
"Sang Won",
""
]
] | Creating an artifact in front of the public offers an opportunity to involve spectators in the creation process. For example, in a live music concert, audience members can clap, stomp and sing with the musicians to be part of the music piece. Live creation can facilitate collaboration with the spectators. The question I set out to answer is what it means to have liveness in interactive systems that support large-scale hybrid events involving audience participation. The notion of liveness is subtle in human-computer interaction. In this paper, I revisit the notion of liveness and provide definitions of both live and liveness from the perspective of designing interactive systems. In addition, I discuss why liveness matters in facilitating hybrid events and suggest future research directions.
2001.06773 | Wei Zhao | Qing Nie, Lingxia Qiao, Yuchi Qiu, Lei Zhang and Wei Zhao | Noise control and utility: from regulatory network to spatial patterning | null | null | null | null | q-bio.MN q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochasticity (or noise) at cellular and molecular levels has been observed
extensively as a universal feature for living systems. However, how living
systems deal with noise while performing desirable biological functions remains
a major mystery. Regulatory network configurations, such as their topology and
timescale, are shown to be critical in attenuating noise, and noise is also
found to facilitate cell fate decision. Here we review major recent findings on
noise attenuation through regulatory control, the benefit of noise via
noise-induced cellular plasticity during developmental patterning, and
summarize key principles underlying noise control.
| [
{
"created": "Sun, 19 Jan 2020 04:39:38 GMT",
"version": "v1"
}
] | 2020-01-22 | [
[
"Nie",
"Qing",
""
],
[
"Qiao",
"Lingxia",
""
],
[
"Qiu",
"Yuchi",
""
],
[
"Zhang",
"Lei",
""
],
[
"Zhao",
"Wei",
""
]
] | Stochasticity (or noise) at cellular and molecular levels has been observed extensively as a universal feature for living systems. However, how living systems deal with noise while performing desirable biological functions remains a major mystery. Regulatory network configurations, such as their topology and timescale, are shown to be critical in attenuating noise, and noise is also found to facilitate cell fate decision. Here we review major recent findings on noise attenuation through regulatory control, the benefit of noise via noise-induced cellular plasticity during developmental patterning, and summarize key principles underlying noise control. |
1802.10089 | Daolin Ma | Daolin Ma and Alberto Rodriguez | Friction Variability in Planar Pushing Data: Anisotropic Friction and
Data-collection Bias | 8 pages, 13 figures | null | 10.1109/LRA.2018.2851026 | null | cs.RO physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Friction plays a key role in manipulating objects. Most of what we do with
our hands, and most of what robots do with their grippers, is based on the
ability to control frictional forces. This paper aims to better understand the
variability and predictability of planar friction. In particular, we focus on
the analysis of a recent dataset on planar pushing by Yu et al. [1] devised to
create a data-driven footprint of planar friction.
We show in this paper how we can explain a significant fraction of the
observed unconventional phenomena, e.g., stochasticity and multi-modality, by
combining the effects of material non-homogeneity, anisotropy of friction and
biases due to data collection dynamics, hinting that the variability is
explainable but inevitable in practice.
We introduce an anisotropic friction model and conduct simulation experiments
comparing with more standard isotropic friction models. The anisotropic
friction between the object and the supporting surface results in convergence
of initial conditions during the automated data collection. Numerical results
confirm that the anisotropic friction model explains the bias in the dataset
and the apparent stochasticity in the outcome of a push. The fact that the data
collection process itself can introduce biases in the collected datasets,
resulting in deterioration of trained models, calls attention to the data
collection dynamics.
| [
{
"created": "Tue, 27 Feb 2018 16:13:17 GMT",
"version": "v1"
},
{
"created": "Thu, 17 May 2018 12:58:29 GMT",
"version": "v2"
},
{
"created": "Fri, 18 May 2018 12:56:28 GMT",
"version": "v3"
},
{
"created": "Fri, 22 Jun 2018 22:02:02 GMT",
"version": "v4"
}
] | 2018-08-07 | [
[
"Ma",
"Daolin",
""
],
[
"Rodriguez",
"Alberto",
""
]
] | Friction plays a key role in manipulating objects. Most of what we do with our hands, and most of what robots do with their grippers, is based on the ability to control frictional forces. This paper aims to better understand the variability and predictability of planar friction. In particular, we focus on the analysis of a recent dataset on planar pushing by Yu et al. [1] devised to create a data-driven footprint of planar friction. We show in this paper how we can explain a significant fraction of the observed unconventional phenomena, e.g., stochasticity and multi-modality, by combining the effects of material non-homogeneity, anisotropy of friction and biases due to data collection dynamics, hinting that the variability is explainable but inevitable in practice. We introduce an anisotropic friction model and conduct simulation experiments comparing with more standard isotropic friction models. The anisotropic friction between the object and the supporting surface results in convergence of initial conditions during the automated data collection. Numerical results confirm that the anisotropic friction model explains the bias in the dataset and the apparent stochasticity in the outcome of a push. The fact that the data collection process itself can introduce biases in the collected datasets, resulting in deterioration of trained models, calls attention to the data collection dynamics.
1406.3762 | Amir Toor | Max Jameson-Lee, Vishal Koparde, Phil Griffith, Allison F. Scalora,
Juliana K. Sampson, Haniya Khalid, Nihar U. Sheth, Michael Batalo, Myrna G.
Serrano, Catherine H. Roberts, Michael L. Hess, Gregory A. Buck, Michael C.
Neale, Masoud H. Manjili, Amir A. Toor | In Silico Derivation of HLA-Specific Alloreactivity Potential from Whole
Exome Sequencing of Stem Cell Transplant Donors and Recipients: Understanding
the Quantitative Immuno-biology of Allogeneic Transplantation | Abstract: 235, Words: 6422, Figures: 7, Tables: 3, Supplementary
figures: 2, Supplementary tables: 2 | Frontiers in Immunology 2014. 5:529 | 10.3389/fimmu.2014.00529 | null | q-bio.QM q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Donor T cell mediated graft vs. host effects may result from the aggregate
alloreactivity to minor histocompatibility antigens (mHA) presented by the HLA
in each donor-recipient pair (DRP) undergoing stem cell transplantation (SCT).
Whole exome sequencing has demonstrated extensive nucleotide sequence variation
in HLA-matched DRP. Non-synonymous single nucleotide polymorphisms (nsSNPs) in
the GVH direction (polymorphisms present in recipient and absent in donor) were
identified in 4 HLA-matched related and 5 unrelated DRP. The nucleotide
sequence flanking each SNP was obtained utilizing the ANNOVAR software package.
All possible nonameric-peptides encoded by the non-synonymous SNP were then
interrogated in-silico for their likelihood to be presented by the HLA class I
molecules in individual DRP, using the Immune-Epitope Database (IEDB) SMM
algorithm. The IEDB-SMM algorithm predicted a median 18,396 peptides/DRP which
bound HLA with an IC50 of <500nM, and 2254 peptides/DRP with an IC50 of <50nM.
Unrelated donors generally had higher numbers of peptides presented by the HLA.
A similarly large library of presented peptides was identified when the data
was interrogated using the Net MHCPan algorithm. These peptides were uniformly
distributed in the various organ systems. The bioinformatic algorithm presented
here demonstrates that there may be a high level of minor histocompatibility
antigen variation in HLA-matched individuals, constituting an HLA-specific
alloreactivity potential. These data provide a possible explanation for how
relatively minor adjustments in GVHD prophylaxis yield relatively similar
outcomes in HLA matched and mismatched SCT recipients.
| [
{
"created": "Sat, 14 Jun 2014 18:29:58 GMT",
"version": "v1"
}
] | 2014-12-16 | [
[
"Jameson-Lee",
"Max",
""
],
[
"Koparde",
"Vishal",
""
],
[
"Griffith",
"Phil",
""
],
[
"Scalora",
"Allison F.",
""
],
[
"Sampson",
"Juliana K.",
""
],
[
"Khalid",
"Haniya",
""
],
[
"Sheth",
"Nihar U.",
""
... | Donor T cell mediated graft vs. host effects may result from the aggregate alloreactivity to minor histocompatibility antigens (mHA) presented by the HLA in each donor-recipient pair (DRP) undergoing stem cell transplantation (SCT). Whole exome sequencing has demonstrated extensive nucleotide sequence variation in HLA-matched DRP. Non-synonymous single nucleotide polymorphisms (nsSNPs) in the GVH direction (polymorphisms present in recipient and absent in donor) were identified in 4 HLA-matched related and 5 unrelated DRP. The nucleotide sequence flanking each SNP was obtained utilizing the ANNOVAR software package. All possible nonameric-peptides encoded by the non-synonymous SNP were then interrogated in-silico for their likelihood to be presented by the HLA class I molecules in individual DRP, using the Immune-Epitope Database (IEDB) SMM algorithm. The IEDB-SMM algorithm predicted a median 18,396 peptides/DRP which bound HLA with an IC50 of <500nM, and 2254 peptides/DRP with an IC50 of <50nM. Unrelated donors generally had higher numbers of peptides presented by the HLA. A similarly large library of presented peptides was identified when the data was interrogated using the Net MHCPan algorithm. These peptides were uniformly distributed in the various organ systems. The bioinformatic algorithm presented here demonstrates that there may be a high level of minor histocompatibility antigen variation in HLA-matched individuals, constituting an HLA-specific alloreactivity potential. These data provide a possible explanation for how relatively minor adjustments in GVHD prophylaxis yield relatively similar outcomes in HLA matched and mismatched SCT recipients. |
2107.09706 | Chuang Xu | Carsten Wiuf and Chuang Xu | Fiber decomposition of deterministic reaction networks with applications | null | null | null | null | q-bio.MN math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deterministic reaction networks (RNs) are tools to model diverse biological
phenomena characterized by particle systems when there is an abundant number of
particles. Examples include but are not limited to biochemistry, molecular
biology, genetics, epidemiology, and social sciences. In this chapter we
propose a new type of decomposition of RNs, called fiber decomposition. Using
this decomposition, we establish lifting of mass-action RNs preserving
stationary properties, including multistationarity and absolute concentration
robustness. Such a lifting scheme is simple and explicit and imposes little
restriction on the reaction networks. We provide examples to illustrate how
this lifting can be used to construct RNs preserving certain dynamical
properties.
| [
{
"created": "Sun, 18 Jul 2021 07:08:26 GMT",
"version": "v1"
}
] | 2021-07-22 | [
[
"Wiuf",
"Carsten",
""
],
[
"Xu",
"Chuang",
""
]
] | Deterministic reaction networks (RNs) are tools to model diverse biological phenomena characterized by particle systems when there is an abundant number of particles. Examples include but are not limited to biochemistry, molecular biology, genetics, epidemiology, and social sciences. In this chapter we propose a new type of decomposition of RNs, called fiber decomposition. Using this decomposition, we establish lifting of mass-action RNs preserving stationary properties, including multistationarity and absolute concentration robustness. Such a lifting scheme is simple and explicit and imposes little restriction on the reaction networks. We provide examples to illustrate how this lifting can be used to construct RNs preserving certain dynamical properties.
2007.09420 | Alex Buchel | Alex Buchel | SUGRA/Strings like to be bald | 9 pages, 4 figures | null | 10.1016/j.physletb.2021.136111 | null | hep-th | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore the embedding of the phenomenological holographic models
describing thermal relativistic ordered conformal phases in ${\mathbb R}^{2,1}$
in SUGRA/String theory. The dual black branes in a Poincare patch of
asymptotically $AdS_4$ have ``hair'' -- a condensate of the order parameter for
the broken symmetry. In a gravitational dual the order parameter for a
spontaneous symmetry breaking is represented by a bulk scalar field with a
nontrivial potential. To construct the ordered conformal phases perturbatively
we introduce a phenomenological deformation parameter in the scalar potential.
We find that while the ordered phases exist for different values of the
deformation parameter, they disappear before the deformation is removed, in one
case once the potential is precisely as in the top-down holography. It appears
that the holographic models with the conformal ordered phases are in the String
theory swampland.
| [
{
"created": "Sat, 18 Jul 2020 12:49:49 GMT",
"version": "v1"
}
] | 2021-02-03 | [
[
"Buchel",
"Alex",
""
]
] | We explore the embedding of the phenomenological holographic models describing thermal relativistic ordered conformal phases in ${\mathbb R}^{2,1}$ in SUGRA/String theory. The dual black branes in a Poincare patch of asymptotically $AdS_4$ have ``hair'' -- a condensate of the order parameter for the broken symmetry. In a gravitational dual the order parameter for a spontaneous symmetry breaking is represented by a bulk scalar field with a nontrivial potential. To construct the ordered conformal phases perturbatively we introduce a phenomenological deformation parameter in the scalar potential. We find that while the ordered phases exist for different values of the deformation parameter, they disappear before the deformation is removed, in one case once the potential is precisely as in the top-down holography. It appears that the holographic models with the conformal ordered phases are in the String theory swampland. |
hep-th/0102158 | David Berenstein | David Berenstein, Robert G. Leigh | Observations on non-commutative field theories in coordinate space | 17 pages, Latex. Updated references | null | null | ILL-(TH)-00-11 | hep-th | null | We discuss non-commutative field theories in coordinate space. To do so we
introduce pseudo-localized operators that represent interesting position
dependent (gauge invariant) observables. The formalism may be applied to
arbitrary field theories, with or without supersymmetry.
The formalism has a number of intuitive advantages. First it makes clear the
appearance of new degrees of freedom in the infrared. Second, it allows for a
study of correlation functions of (composite) operators. Thus we calculate the
two point function in position space of the insertion of certain composite
operators. We demonstrate that, even at tree level, many of the by now familiar
properties of non-commutative field theories are manifest and have simple
interpretations. The form of correlation functions are such that certain
singularities may be interpreted in terms of dimensional reduction along the
non-commutative directions: this comes about because these are theories of
fundamental dipoles.
| [
{
"created": "Thu, 22 Feb 2001 22:32:10 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Feb 2001 18:38:08 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Berenstein",
"David",
""
],
[
"Leigh",
"Robert G.",
""
]
] | We discuss non-commutative field theories in coordinate space. To do so we introduce pseudo-localized operators that represent interesting position dependent (gauge invariant) observables. The formalism may be applied to arbitrary field theories, with or without supersymmetry. The formalism has a number of intuitive advantages. First it makes clear the appearance of new degrees of freedom in the infrared. Second, it allows for a study of correlation functions of (composite) operators. Thus we calculate the two point function in position space of the insertion of certain composite operators. We demonstrate that, even at tree level, many of the by now familiar properties of non-commutative field theories are manifest and have simple interpretations. The form of correlation functions are such that certain singularities may be interpreted in terms of dimensional reduction along the non-commutative directions: this comes about because these are theories of fundamental dipoles. |
2006.07700 | Ihsen Alouani | Amira Guesmi, Ihsen Alouani, Khaled Khasawneh, Mouna Baklouti, Tarek
Frikha, Mohamed Abid, Nael Abu-Ghazaleh | Defensive Approximation: Securing CNNs using Approximate Computing | ACM International Conference on Architectural Support for Programming
Languages and Operating Systems (ASPLOS 2021) | null | 10.1145/3445814.3446747 | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the past few years, an increasing number of machine-learning and deep
learning structures, such as Convolutional Neural Networks (CNNs), have been
applied to solving a wide range of real-life problems. However, these
architectures are vulnerable to adversarial attacks. In this paper, we propose
for the first time to use hardware-supported approximate computing to improve
the robustness of machine learning classifiers. We show that our approximate
computing implementation achieves robustness across a wide range of attack
scenarios. Specifically, for black-box and grey-box attack scenarios, we show
that successful adversarial attacks against the exact classifier have poor
transferability to the approximate implementation. Surprisingly, the robustness
advantages also apply to white-box attacks where the attacker has access to the
internal implementation of the approximate classifier. We explain some of the
possible reasons for this robustness through analysis of the internal operation
of the approximate implementation. Furthermore, our approximate computing model
maintains the same level of classification accuracy, does not require
retraining, and reduces resource utilization and energy consumption of the CNN.
We conducted extensive experiments on a set of strong adversarial attacks; we
empirically show that the proposed implementation increases the robustness of
LeNet-5 and AlexNet CNNs by up to 99% and 87%, respectively, for strong
grey-box adversarial attacks along with up to 67% saving in energy consumption
due to the simpler nature of the approximate logic. We also show that a
white-box attack requires a remarkably higher noise budget to fool the
approximate classifier, causing an average 4 dB degradation of the PSNR of
the input image relative to the images that succeed in fooling the exact
classifier.
| [
{
"created": "Sat, 13 Jun 2020 18:58:25 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Jul 2021 20:38:12 GMT",
"version": "v2"
},
{
"created": "Thu, 29 Jul 2021 08:52:10 GMT",
"version": "v3"
}
] | 2021-07-30 | [
[
"Guesmi",
"Amira",
""
],
[
"Alouani",
"Ihsen",
""
],
[
"Khasawneh",
"Khaled",
""
],
[
"Baklouti",
"Mouna",
""
],
[
"Frikha",
"Tarek",
""
],
[
"Abid",
"Mohamed",
""
],
[
"Abu-Ghazaleh",
"Nael",
""
]
] | In the past few years, an increasing number of machine-learning and deep learning structures, such as Convolutional Neural Networks (CNNs), have been applied to solving a wide range of real-life problems. However, these architectures are vulnerable to adversarial attacks. In this paper, we propose for the first time to use hardware-supported approximate computing to improve the robustness of machine learning classifiers. We show that our approximate computing implementation achieves robustness across a wide range of attack scenarios. Specifically, for black-box and grey-box attack scenarios, we show that successful adversarial attacks against the exact classifier have poor transferability to the approximate implementation. Surprisingly, the robustness advantages also apply to white-box attacks where the attacker has access to the internal implementation of the approximate classifier. We explain some of the possible reasons for this robustness through analysis of the internal operation of the approximate implementation. Furthermore, our approximate computing model maintains the same level in terms of classification accuracy, does not require retraining, and reduces resource utilization and energy consumption of the CNN. We conducted extensive experiments on a set of strong adversarial attacks; We empirically show that the proposed implementation increases the robustness of a LeNet-5 and an Alexnet CNNs by up to 99% and 87%, respectively for strong grey-box adversarial attacks along with up to 67% saving in energy consumption due to the simpler nature of the approximate logic. We also show that a white-box attack requires a remarkably higher noise budget to fool the approximate classifier, causing an average of 4db degradation of the PSNR of the input image relative to the images that succeed in fooling the exact classifier |
1308.5434 | Chunhua Geng | Chunhua Geng, Hua Sun, and Syed A. Jafar | Multilevel Topological Interference Management | To be presented at 2013 IEEE Information Theory Workshop | null | 10.1109/ITW.2013.6691291 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The robust principles of treating interference as noise (TIN) when it is
sufficiently weak, and avoiding it when it is not, form the background for this
work. Combining TIN with the topological interference management (TIM)
framework that identifies optimal interference avoidance schemes, a baseline
TIM-TIN approach is proposed which decomposes a network into TIN and TIM
components, allocates the signal power levels to each user in the TIN
component, allocates signal vector space dimensions to each user in the TIM
component, and guarantees that the product of the two is an achievable number
of signal dimensions available to each user in the original network.
| [
{
"created": "Sun, 25 Aug 2013 18:37:06 GMT",
"version": "v1"
}
] | 2016-11-17 | [
[
"Geng",
"Chunhua",
""
],
[
"Sun",
"Hua",
""
],
[
"Jafar",
"Syed A.",
""
]
] | The robust principles of treating interference as noise (TIN) when it is sufficiently weak, and avoiding it when it is not, form the background for this work. Combining TIN with the topological interference management (TIM) framework that identifies optimal interference avoidance schemes, a baseline TIM-TIN approach is proposed which decomposes a network into TIN and TIM components, allocates the signal power levels to each user in the TIN component, allocates signal vector space dimensions to each user in the TIM component, and guarantees that the product of the two is an achievable number of signal dimensions available to each user in the original network. |
1905.13331 | Rui Wang | Rui Wang, Guoyin Wang, Ricardo Henao | Discriminative Clustering for Robust Unsupervised Domain Adaptation | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unsupervised domain adaptation seeks to learn an invariant and discriminative
representation for an unlabeled target domain by leveraging the information of
a labeled source dataset. We propose to improve the discriminative ability of
the target domain representation by simultaneously learning tightly clustered
target representations while encouraging that each cluster is assigned to a
unique and different class from the source. This strategy alleviates the
effects of negative transfer when combined with adversarial domain matching
between source and target representations. Our approach is robust to
differences in the source and target label distributions and thus applicable to
both balanced and imbalanced domain adaptation tasks, and with a simple
extension, it can also be used for partial domain adaptation. Experiments on
several benchmark datasets for domain adaptation demonstrate that our approach
can achieve state-of-the-art performance in all three scenarios, namely,
balanced, imbalanced and partial domain adaptation.
| [
{
"created": "Thu, 30 May 2019 21:56:06 GMT",
"version": "v1"
}
] | 2019-06-03 | [
[
"Wang",
"Rui",
""
],
[
"Wang",
"Guoyin",
""
],
[
"Henao",
"Ricardo",
""
]
] | Unsupervised domain adaptation seeks to learn an invariant and discriminative representation for an unlabeled target domain by leveraging the information of a labeled source dataset. We propose to improve the discriminative ability of the target domain representation by simultaneously learning tightly clustered target representations while encouraging that each cluster is assigned to a unique and different class from the source. This strategy alleviates the effects of negative transfer when combined with adversarial domain matching between source and target representations. Our approach is robust to differences in the source and target label distributions and thus applicable to both balanced and imbalanced domain adaptation tasks, and with a simple extension, it can also be used for partial domain adaptation. Experiments on several benchmark datasets for domain adaptation demonstrate that our approach can achieve state-of-the-art performance in all three scenarios, namely, balanced, imbalanced and partial domain adaptation. |
2007.08002 | Lionel Roques | Lionel Roques, Olivier Bonnefon, Virgile Baudrot, Samuel Soubeyrand,
Henri Berestycki | A parsimonious model for spatial transmission and heterogeneity in the
COVID-19 propagation | null | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Raw data on the cumulative number of deaths at a country level generally
indicate a spatially variable distribution of the incidence of COVID-19
disease. An important issue is to determine whether this spatial pattern is a
consequence of environmental heterogeneities, such as the climatic conditions,
during the course of the outbreak. Another fundamental issue is to understand
the spatial spreading of COVID-19. To address these questions, we consider four
candidate epidemiological models with varying complexity in terms of initial
conditions, contact rates and non-local transmissions, and we fit them to
French mortality data with a mixed probabilistic-ODE approach. Using standard
statistical criteria, we select the model with non-local transmission
corresponding to a diffusion on the graph of counties that depends on the
geographic proximity, with time-dependent contact rate and spatially constant
parameters. This original spatially parsimonious model suggests that in a
geographically middle-sized, centralized country such as France, once the
epidemic is established, the effect of global processes such as restriction
policies, sanitary measures and social distancing overwhelms the effect of
local factors. Additionally, this modeling approach reveals the latent
epidemiological dynamics including the local level of immunity, and allows us
to evaluate the role of non-local interactions on the future spread of the
disease. In view of its theoretical and numerical simplicity and its ability to
accurately track the COVID-19 epidemic curves, the framework we develop here,
in particular the non-local model and the associated estimation procedure, is
of general interest in studying spatial dynamics of epidemics.
| [
{
"created": "Wed, 15 Jul 2020 21:28:51 GMT",
"version": "v1"
},
{
"created": "Sat, 18 Jul 2020 08:25:52 GMT",
"version": "v2"
}
] | 2020-07-21 | [
[
"Roques",
"Lionel",
""
],
[
"Bonnefon",
"Olivier",
""
],
[
"Baudrot",
"Virgile",
""
],
[
"Soubeyrand",
"Samuel",
""
],
[
"Berestycki",
"Henri",
""
]
] | Raw data on the cumulative number of deaths at a country level generally indicate a spatially variable distribution of the incidence of COVID-19 disease. An important issue is to determine whether this spatial pattern is a consequence of environmental heterogeneities, such as the climatic conditions, during the course of the outbreak. Another fundamental issue is to understand the spatial spreading of COVID-19. To address these questions, we consider four candidate epidemiological models with varying complexity in terms of initial conditions, contact rates and non-local transmissions, and we fit them to French mortality data with a mixed probabilistic-ODE approach. Using standard statistical criteria, we select the model with non-local transmission corresponding to a diffusion on the graph of counties that depends on the geographic proximity, with time-dependent contact rate and spatially constant parameters. This original spatially parsimonious model suggests that in a geographically middle size centralized country such as France, once the epidemic is established, the effect of global processes such as restriction policies, sanitary measures and social distancing overwhelms the effect of local factors. Additionally, this modeling approach reveals the latent epidemiological dynamics including the local level of immunity, and allows us to evaluate the role of non-local interactions on the future spread of the disease. In view of its theoretical and numerical simplicity and its ability to accurately track the COVID-19 epidemic curves, the framework we develop here, in particular the non-local model and the associated estimation procedure, is of general interest in studying spatial dynamics of epidemics. |
1510.02457 | Nikolaos Mavromatos | Nick E. Mavromatos | Novel Scenarios for Majorana Neutrino Mass Generation and Leptogenesis
from Kalb-Ramond Torsion | 22 pages latex, uses special macros, Plenary talk at Planck 2015,
18th International Conference From the Planck Scale to the Electroweak Scale,
25-29 May 2015,Ioannina, Greece, to be published in the Proceedings (POS) | null | null | LCTS/2015-34 | hep-th hep-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Kalb-Ramond (KR) antisymmetric tensor field arises naturally in the
gravitational multiplet of string theory. Nevertheless, the respective
low-energy field theory action, in which, for reasons of gauge invariance, the
only dependence on the KR field is through its field strength, constitutes an
interesting model \emph{per se}. In this context, the KR field strength also
acts as a totally antisymmetric torsion field, while in four space-time
dimensions is \emph{dual} to an (KR) axion-like pseudoscalar field. In this
context, we review here first the r\^ole of quantum fluctuations of the KR
axion on the generation of Majorana mass for neutrinos, via a mixing with
ordinary axions that may exist in the theory as providers of dark matter
candidates. Then we proceed to discuss the r\^ole of constant in time (thus
Lorentz violating) KR torsion backgrounds, that may exist in the early Universe
but are completely negligible today, on inducing Leptogenesis by means of
\emph{tree-level} CP violating decays of Right Handed Massive Majorana
neutrinos in the presence of such H-torsion backgrounds. Some speculations
regarding microscopic D-brane world models, where such scenarios may be
realised, are also given.
| [
{
"created": "Thu, 8 Oct 2015 19:48:04 GMT",
"version": "v1"
}
] | 2015-10-09 | [
[
"Mavromatos",
"Nick E.",
""
]
] | The Kalb-Ramond (KR) antisymmetric tensor field arises naturally in the gravitational multiplet of string theory. Nevertheless, the respective low-energy field theory action, in which, for reasons of gauge invariance, the only dependence on the KR field is through its field strength, constitutes an interesting model \emph{per se}. In this context, the KR field strength also acts as a totally antisymmetric torsion field, while in four space-time dimensions is \emph{dual} to an (KR) axion-like pseudoscalar field. In this context, we review here first the r\^ole of quantum fluctuations of the KR axion on the generation of Majorana mass for neutrinos, via a mixing with ordinary axions that may exist in the theory as providers of dark matter candidates. Then we proceed to discuss the r\^ole of constant in time (thus Lorentz violating) KR torsion backgrounds, that may exist in the early Universe but are completely negligible today, on inducing Leptogenesis by means of \emph{tree-level} CP violating decays of Right Handed Massive Majorana neutrinos in the presence of such H-torsion backgrounds. Some speculations regarding microscopic D-brane world models, where such scenarios may be realised, are also given. |
q-bio/0509030 | Antonia Kropfinger | Daphn\'e Reiss (IJM), Danielle Nouaud (IJM), St\'ephane Ronsseray
(IJM), Dominique Anxolab\'eh\`ere (IJM) | Domesticated P elements in the Drosophila montium species subgroup have
a new function related to a DNA binding property | null | Journal of Molecular Evolution vol ? (2005) in press | 10.1007/s00239-004-0324-0 | null | q-bio.GN | null | Molecular domestication of a transposable element is defined as its
functional recruitment by the host genome. To date, two independent events of
molecular domestication of the P transposable element have been described: in
the Drosophila obscura species group and in the Drosophila montium species
subgroup. These P neogenes consist of stationary, non-repeated sequences,
potentially encoding 66 kDa repressor-like proteins (RLs). Here we investigate
the function of the montium P neogenes. We provide evidence for the presence of
RLs proteins in two montium species (D. tsacasi and D. bocqueti) specifically
expressed in adult and larval brain and gonads. We tested the hypothesis that
the montium P neogenes function is related to the repression of the
transposition of distantly related mobile P elements which coexist in the genome.
Our results strongly suggest that the montium P neogenes are not recruited to
down regulate the P element transposition. Given that all the proteins encoded
by mobile or stationary P homologous sequences show a strong conservation of
the DNA Binding Domain, we tested the capacity of the RLs proteins to bind DNA
in vivo. Immunostaining of polytene chromosomes in D. melanogaster transgenic
lines strongly suggests that montium P neogenes encode proteins that bind DNA in
vivo. RLs proteins bind at multiple sites on the chromosomes. We suggest that
the property recruited in the case of the montium P neoproteins is their DNA
binding property. The possible functions of these neogenes are discussed.
| [
{
"created": "Fri, 23 Sep 2005 11:07:06 GMT",
"version": "v1"
}
] | 2016-08-16 | [
[
"Reiss",
"Daphné",
"",
"IJM"
],
[
"Nouaud",
"Danielle",
"",
"IJM"
],
[
"Ronsseray",
"Stéphane",
"",
"IJM"
],
[
"Anxolabéhère",
"Dominique",
"",
"IJM"
]
] | Molecular domestication of a transposable element is defined as its functional recruitment by the host genome. To date, two independent events of molecular domestication of the P transposable element have been described: in the Drosophila obscura species group and in the Drosophila montium species subgroup. These P neogenes consist to stationary, non repeated sequences, potentially encoding 66 kDa repressor-like proteins (RLs). Here we investigate the function of the montium P neogenes. We provide evidence for the presence of RLs proteins in two montium species (D. tsacasi and D. bocqueti) specifically expressed in adult and larval brain and gonads. We tested the hypothesis that the montium P neogenes function is related to the repression of the transposition of distant related mobile P elements which coexist in the genome. Our results strongly suggest that the montium P neogenes are not recruited to down regulate the P element transposition. Given that all the proteins encoded by mobile or stationary P homologous sequences show a strong conservation of the DNA Binding Domain, we tested the capacity of the RLs proteins to bind DNA in vivo. Immunstaining of polytene chromosomes in D. melanogaster transgenic lines strongly suggest that montium P neogenes encode proteins that bind DNA in vivo. RLs proteins show multiple binding to the chromosomes. We suggest that the property recruited in the case of the montium P neoproteins is their DNA binding property. The possible functions of these neogenes are discussed. |
2301.09420 | Ansh Mittal | Ansh Mittal, Aditya Malte | On Multi-Agent Deep Deterministic Policy Gradients and their
Explainability for SMARTS Environment | 6 pages, 5 figures | null | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Multi-Agent RL or MARL is one of the complex problems in Autonomous Driving
literature that hampers the release of fully-autonomous vehicles today. Several
simulators have been in iteration after their inception to mitigate the problem
of complex scenarios with multiple agents in Autonomous Driving. One such
simulator, SMARTS, emphasizes the importance of cooperative multi-agent
learning. For this problem, we discuss two approaches--MAPPO and MADDPG, which
are on-policy and off-policy RL approaches, respectively. We compare our results with
the state-of-the-art results for this challenge and discuss the potential areas
of improvement while discussing the explainability of these approaches in
conjunction with waypoints in the SMARTS environment.
| [
{
"created": "Fri, 20 Jan 2023 03:17:16 GMT",
"version": "v1"
}
] | 2023-01-24 | [
[
"Mittal",
"Ansh",
""
],
[
"Malte",
"Aditya",
""
]
] | Multi-Agent RL or MARL is one of the complex problems in Autonomous Driving literature that hampers the release of fully-autonomous vehicles today. Several simulators have been in iteration after their inception to mitigate the problem of complex scenarios with multiple agents in Autonomous Driving. One such simulator--SMARTS, discusses the importance of cooperative multi-agent learning. For this problem, we discuss two approaches--MAPPO and MADDPG, which are based on-policy and off-policy RL approaches. We compare our results with the state-of-the-art results for this challenge and discuss the potential areas of improvement while discussing the explainability of these approaches in conjunction with waypoints in the SMARTS environment. |
1507.01414 | Davide Campagnari | Davide R. Campagnari and Hugo Reinhardt | Dyson--Schwinger approach to Hamiltonian Quantum Chromodynamics | null | Phys. Rev. D 92, 065021 (2015) | 10.1103/PhysRevD.92.065021 | null | hep-th | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The general method for treating non-Gaussian wave functionals in the
Hamiltonian formulation of a quantum field theory, which was previously
proposed and developed for Yang--Mills theory in Coulomb gauge, is generalized
to full QCD. For this purpose the quark part of the QCD vacuum wave functional
is expressed in the basis of coherent fermion states, which are defined in terms
of Grassmann variables. Our variational ansatz for the QCD vacuum wave
functional is assumed to be given by exponentials of polynomials in the
occurring fields and, furthermore, contains an explicit coupling of the quarks
to the gluons. Exploiting Dyson--Schwinger equation techniques, we express the
various $n$-point functions, which are required for the expectation values of
observables like the Hamiltonian, in terms of the variational kernels of our
trial ansatz. Finally the equations of motion for these variational kernels are
derived by minimizing the energy density.
| [
{
"created": "Mon, 6 Jul 2015 12:25:19 GMT",
"version": "v1"
}
] | 2015-09-30 | [
[
"Campagnari",
"Davide R.",
""
],
[
"Reinhardt",
"Hugo",
""
]
] | The general method for treating non-Gaussian wave functionals in the Hamiltonian formulation of a quantum field theory, which was previously proposed and developed for Yang--Mills theory in Coulomb gauge, is generalized to full QCD. For this purpose the quark part of the QCD vacuum wave functional is expressed in the basis of coherent fermion states, which are defined in term of Grassmann variables. Our variational ansatz for the QCD vacuum wave functional is assumed to be given by exponentials of polynomials in the occurring fields and, furthermore, contains an explicit coupling of the quarks to the gluons. Exploiting Dyson--Schwinger equation techniques, we express the various $n$-point functions, which are required for the expectation values of observables like the Hamiltonian, in terms of the variational kernels of our trial ansatz. Finally the equations of motion for these variational kernels are derived by minimizing the energy density. |
2004.05513 | Marc Timme | Andreas Bossert, Moritz Kersting, Marc Timme, Malte Schr\"oder, Azza
Feki, Justin Coetzee and Jan Schl\"uter | Limited containment options of COVID-19 outbreak revealed by regional
agent-based simulations for South Africa | including 3 Figures and Supplementary Material | null | null | null | q-bio.PE physics.soc-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | COVID-19 has spread from China across Europe and the United States and has
become a global pandemic. In countries of the Global South, due to often weaker
socioeconomic options and health care systems, effective local countermeasures
remain debated. We combine large-scale socioeconomic and traffic survey data
with detailed agent-based simulations of local transportation to analyze
COVID-19 spreading in a regional model for the Nelson Mandela Bay Municipality
in South Africa under a range of countermeasure scenarios. The simulations
indicate that any realistic containment strategy, including those similar to
the one ongoing in South Africa, may yield a manifold overload of available
intensive care units (ICUs). Only immediate and the most severe
countermeasures, up to a complete lock-down that essentially inhibits all joint
human activities, can contain the epidemic effectively.
As South Africa exhibits rather favorable conditions compared to many other
countries of the Global South, our findings constitute rough conservative
estimates and may support identifying strategies towards containing COVID-19 as
well as any major future pandemics in these countries.
| [
{
"created": "Sun, 12 Apr 2020 00:47:40 GMT",
"version": "v1"
}
] | 2020-04-28 | [
[
"Bossert",
"Andreas",
""
],
[
"Kersting",
"Moritz",
""
],
[
"Timme",
"Marc",
""
],
[
"Schröder",
"Malte",
""
],
[
"Feki",
"Azza",
""
],
[
"Coetzee",
"Justin",
""
],
[
"Schlüter",
"Jan",
""
]
] | COVID-19 has spread from China across Europe and the United States and has become a global pandemic. In countries of the Global South, due to often weaker socioeconomic options and health care systems, effective local countermeasures remain debated. We combine large-scale socioeconomic and traffic survey data with detailed agent-based simulations of local transportation to analyze COVID-19 spreading in a regional model for the Nelson Mandela Bay Municipality in South Africa under a range of countermeasure scenarios. The simulations indicate that any realistic containment strategy, including those similar to the one ongoing in South Africa, may yield a manifold overload of available intensive care units (ICUs). Only immediate and the most severe countermeasures, up to a complete lock-down that essentially inhibits all joint human activities, can contain the epidemic effectively. As South Africa exhibits rather favorable conditions compared to many other countries of the Global South, our findings constitute rough conservative estimates and may support identifying strategies towards containing COVID-19 as well as any major future pandemics in these countries. |
2303.04218 | Eivind Meyer | Eivind Meyer, Lars Frederik Peiss, and Matthias Althoff | Deep Occupancy-Predictive Representations for Autonomous Driving | Accepted at ICRA 2023 | null | null | null | cs.LG cs.RO | http://creativecommons.org/licenses/by-sa/4.0/ | Manually specifying features that capture the diversity in traffic
environments is impractical. Consequently, learning-based agents cannot realize
their full potential as neural motion planners for autonomous vehicles.
Instead, this work proposes to learn which features are task-relevant. Given
its immediate relevance to motion planning, our proposed architecture encodes
the probabilistic occupancy map as a proxy for obtaining pre-trained state
representations. By leveraging a map-aware graph formulation of the
environment, our agent-centric encoder generalizes to arbitrary road networks
and traffic situations. We show that our approach significantly improves the
downstream performance of a reinforcement learning agent operating in urban
traffic environments.
| [
{
"created": "Tue, 7 Mar 2023 20:21:49 GMT",
"version": "v1"
}
] | 2023-03-09 | [
[
"Meyer",
"Eivind",
""
],
[
"Peiss",
"Lars Frederik",
""
],
[
"Althoff",
"Matthias",
""
]
] | Manually specifying features that capture the diversity in traffic environments is impractical. Consequently, learning-based agents cannot realize their full potential as neural motion planners for autonomous vehicles. Instead, this work proposes to learn which features are task-relevant. Given its immediate relevance to motion planning, our proposed architecture encodes the probabilistic occupancy map as a proxy for obtaining pre-trained state representations. By leveraging a map-aware graph formulation of the environment, our agent-centric encoder generalizes to arbitrary road networks and traffic situations. We show that our approach significantly improves the downstream performance of a reinforcement learning agent operating in urban traffic environments. |
1710.04437 | Yuichiroh Matsubayashi | Yuichiroh Matsubayashi and Kentaro Inui | Revisiting the Design Issues of Local Models for Japanese
Predicate-Argument Structure Analysis | 6 pages, 2 figures, in IJCNLP 2017 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The research trend in Japanese predicate-argument structure (PAS) analysis is
shifting from pointwise prediction models with local features to global models
designed to search for globally optimal solutions. However, the existing global
models tend to employ only relatively simple local features; therefore, the
overall performance gains are rather limited. The importance of designing a
local model is demonstrated in this study by showing that the performance of a
sophisticated local model can be considerably improved with recent feature
embedding methods and feature combination learning based on a neural network,
outperforming the state-of-the-art global models in $F_1$ on a common benchmark
dataset.
| [
{
"created": "Thu, 12 Oct 2017 10:36:41 GMT",
"version": "v1"
}
] | 2017-10-13 | [
[
"Matsubayashi",
"Yuichiroh",
""
],
[
"Inui",
"Kentaro",
""
]
] | The research trend in Japanese predicate-argument structure (PAS) analysis is shifting from pointwise prediction models with local features to global models designed to search for globally optimal solutions. However, the existing global models tend to employ only relatively simple local features; therefore, the overall performance gains are rather limited. The importance of designing a local model is demonstrated in this study by showing that the performance of a sophisticated local model can be considerably improved with recent feature embedding methods and a feature combination learning based on a neural network, outperforming the state-of-the-art global models in $F_1$ on a common benchmark dataset. |
1503.04426 | Carlo Comin | Carlo Comin, Romeo Rizzi | An Improved Pseudo-Polynomial Upper Bound for the Value Problem and
Optimal Strategy Synthesis in Mean Payoff Games | null | null | null | null | cs.DS cs.CC cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we offer an $O(|V|^2 |E|\, W)$ pseudo-polynomial time
deterministic algorithm for solving the Value Problem and Optimal Strategy
Synthesis in Mean Payoff Games. This improves by a factor $\log(|V|\, W)$ the
best previously known pseudo-polynomial time upper bound due to Brim et al. The
improvement hinges on a suitable characterization of values, and a description
of optimal positional strategies, in terms of reweighted Energy Games and Small
Energy-Progress Measures.
| [
{
"created": "Sun, 15 Mar 2015 13:48:06 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Apr 2015 09:09:21 GMT",
"version": "v2"
},
{
"created": "Wed, 15 Apr 2015 10:08:35 GMT",
"version": "v3"
},
{
"created": "Mon, 20 Apr 2015 20:57:29 GMT",
"version": "v4"
},
{
"c... | 2016-04-26 | [
[
"Comin",
"Carlo",
""
],
[
"Rizzi",
"Romeo",
""
]
] | In this work we offer an $O(|V|^2 |E|\, W)$ pseudo-polynomial time deterministic algorithm for solving the Value Problem and Optimal Strategy Synthesis in Mean Payoff Games. This improves by a factor $\log(|V|\, W)$ the best previously known pseudo-polynomial time upper bound due to Brim,~\etal The improvement hinges on a suitable characterization of values, and a description of optimal positional strategies, in terms of reweighted Energy Games and Small Energy-Progress Measures. |
hep-th/9707228 | Michael Douglas | Michael R. Douglas | D-branes and Matrix Theory in Curved Space | LaTeX with espcrc2; 13 pages, STRINGS97. References added; a
speculation refuted | Nucl.Phys.Proc.Suppl. 68 (1998) 381-393 | 10.1016/S0920-5632(98)00173-X | RU-97-66 | hep-th | null | We discuss the relation between supersymmetric gauge theory of branes and
supergravity; as it was discovered in D-brane physics, and as it appears in
Matrix theory, with emphasis on motion in curved backgrounds. We argue that
gauged sigma model Lagrangians can be used as definitions of Matrix theory in
curved space. Lecture given at Strings '97; June 20, 1997.
| [
{
"created": "Mon, 28 Jul 1997 14:14:33 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Jul 1997 11:56:17 GMT",
"version": "v2"
},
{
"created": "Wed, 6 Aug 1997 09:43:44 GMT",
"version": "v3"
},
{
"created": "Thu, 21 Aug 1997 11:24:53 GMT",
"version": "v4"
}
] | 2009-10-30 | [
[
"Douglas",
"Michael R.",
""
]
] | We discuss the relation between supersymmetric gauge theory of branes and supergravity; as it was discovered in D-brane physics, and as it appears in Matrix theory, with emphasis on motion in curved backgrounds. We argue that gauged sigma model Lagrangians can be used as definitions of Matrix theory in curved space. Lecture given at Strings '97; June 20, 1997. |
1708.07961 | Ming Ding Dr. | Ming Ding, David Lopez Perez, Amir H. Jafari, Guoqiang Mao, Zihuai Lin | Ultra-Dense Networks: A New Look at the Proportional Fair Scheduler | To appear in IEEE GLOBECOM2017 | null | null | null | cs.NI cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we theoretically study the proportional fair (PF) scheduler in
the context of ultra-dense networks (UDNs). Analytical results are obtained for
the coverage probability and the area spectral efficiency (ASE) performance of
dense small cell networks (SCNs) with the PF scheduler employed at base
stations (BSs). The key point of our analysis is that the typical user is no
longer a random user as assumed in most studies in the literature. Instead, a
user with the maximum PF metric is chosen by its serving BS as the typical
user. By comparing the previous results of the round-robin (RR) scheduler with
our new results of the PF scheduler, we quantify the loss of the multi-user
diversity of the PF scheduler with the network densification, which casts a new
look at the role of the PF scheduler in UDNs. Our conclusion is that the RR
scheduler should be used in UDNs to simplify the radio resource management
(RRM).
| [
{
"created": "Sat, 26 Aug 2017 12:04:17 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Sep 2017 06:49:13 GMT",
"version": "v2"
}
] | 2017-09-27 | [
[
"Ding",
"Ming",
""
],
[
"Perez",
"David Lopez",
""
],
[
"Jafari",
"Amir H.",
""
],
[
"Mao",
"Guoqiang",
""
],
[
"Lin",
"Zihuai",
""
]
] | In this paper, we theoretically study the proportional fair (PF) scheduler in the context of ultra-dense networks (UDNs). Analytical results are obtained for the coverage probability and the area spectral efficiency (ASE) performance of dense small cell networks (SCNs) with the PF scheduler employed at base stations (BSs). The key point of our analysis is that the typical user is no longer a random user as assumed in most studies in the literature. Instead, a user with the maximum PF metric is chosen by its serving BS as the typical user. By comparing the previous results of the round-robin (RR) scheduler with our new results of the PF scheduler, we quantify the loss of the multi-user diversity of the PF scheduler with the network densification, which casts a new look at the role of the PF scheduler in UDNs. Our conclusion is that the RR scheduler should be used in UDNs to simplify the radio resource management (RRM). |
1802.06940 | Alexander Semenov | Irina Gribanova and Alexander Semenov | Using Automatic Generation of Relaxation Constraints to Improve the
Preimage Attack on 39-step MD4 | This paper was submitted to MIPRO 2018 as a conference paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we construct preimage attack on the truncated variant of the
MD4 hash function. Specifically, we study the MD4-39 function defined by the
first 39 steps of the MD4 algorithm. We suggest a new attack on MD4-39, which
develops the ideas proposed by H. Dobbertin in 1998. Namely, the special
relaxation constraints are introduced in order to simplify the equations
corresponding to the problem of finding a preimage for an arbitrary MD4-39 hash
value. The equations supplemented with the relaxation constraints are then
reduced to the Boolean Satisfiability Problem (SAT) and solved using the
state-of-the-art SAT solvers. We show that the effectiveness of a set of
relaxation constraints can be evaluated using the black-box function of a
special kind. Thus, we suggest automatic method of relaxation constraints
generation by applying the black-box optimization to this function. The
proposed method made it possible to find new relaxation constraints that
contribute to a SAT-based preimage attack on MD4-39 which significantly
outperforms the competition.
| [
{
"created": "Tue, 20 Feb 2018 02:47:41 GMT",
"version": "v1"
}
] | 2019-10-07 | [
[
"Gribanova",
"Irina",
""
],
[
"Semenov",
"Alexander",
""
]
] | In this paper we construct preimage attack on the truncated variant of the MD4 hash function. Specifically, we study the MD4-39 function defined by the first 39 steps of the MD4 algorithm. We suggest a new attack on MD4-39, which develops the ideas proposed by H. Dobbertin in 1998. Namely, the special relaxation constraints are introduced in order to simplify the equations corresponding to the problem of finding a preimage for an arbitrary MD4-39 hash value. The equations supplemented with the relaxation constraints are then reduced to the Boolean Satisfiability Problem (SAT) and solved using the state-of-the-art SAT solvers. We show that the effectiveness of a set of relaxation constraints can be evaluated using the black-box function of a special kind. Thus, we suggest automatic method of relaxation constraints generation by applying the black-box optimization to this function. The proposed method made it possible to find new relaxation constraints that contribute to a SAT-based preimage attack on MD4-39 which significantly outperforms the competition. |
hep-th/9408117 | Niegawa | A. Ni\'egawa | Production of soft photons from the quark-gluon plasma in hot QCD-
Screening of mass singularities | 15 pages, LaTex | Mod.Phys.Lett.A10:379-389,1995 | 10.1142/S0217732395000417 | OCU-PHYS.153 | hep-th | null | It has been reported that, within the hard-thermal-loop resummation scheme,
the production rate of soft real photons from a hot quark-gluon plasma exhibits
unscreened mass singularities. We show that still higher-order resummations
screen the mass singularities and obtain the finite soft-photon production rate
to leading order at logarithmic accuracy ${\cal O} (\alpha \alpha_s \ln^2
\alpha_s)$.
| [
{
"created": "Mon, 22 Aug 1994 07:01:13 GMT",
"version": "v1"
}
] | 2016-08-14 | [
[
"Niégawa",
"A.",
""
]
] | It has been reported that, within the hard-thermal-loop resummation scheme, the production rate of soft real photons from a hot quark-gluon plasma exhibits unscreened mass singularities. We show that still higher-order resummations screen the mass singularities and obtain the finite soft-photon production rate to leading order at logarithmic accuracy ${\cal O} (\alpha \alpha_s \ln^2 \alpha_s)$. |
1312.6503 | Nicolas Gastineau | Nicolas Gastineau (Le2i, LIRIS), Hamamache Kheddouci (LIRIS), Olivier
Togni (Le2i) | On the family of $r$-regular graphs with Grundy number $r+1$ | null | null | null | null | cs.DM math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Grundy number of a graph $G$, denoted by $\Gamma(G)$, is the largest $k$
such that there exists a partition of $V(G)$, into $k$ independent sets
$V_1,\ldots, V_k$ and every vertex of $V_i$ is adjacent to at least one vertex
in $V_j$, for every $j < i$. The objects which are studied in this article are
families of $r$-regular graphs such that $\Gamma(G) = r + 1$. Using the notion
of independent module, a characterization of this family is given for $r=3$.
Moreover, we determine classes of graphs in this family, in particular the
class of $r$-regular graphs without induced $C_4$, for $r \le 4$. Furthermore,
our propositions imply results on partial Grundy number.
| [
{
"created": "Mon, 23 Dec 2013 10:05:36 GMT",
"version": "v1"
},
{
"created": "Mon, 19 May 2014 19:26:23 GMT",
"version": "v2"
}
] | 2014-05-20 | [
[
"Gastineau",
"Nicolas",
"",
"Le2i, LIRIS"
],
[
"Kheddouci",
"Hamamache",
"",
"LIRIS"
],
[
"Togni",
"Olivier",
"",
"Le2i"
]
] | The Grundy number of a graph $G$, denoted by $\Gamma(G)$, is the largest $k$ such that there exists a partition of $V(G)$, into $k$ independent sets $V_1,\ldots, V_k$ and every vertex of $V_i$ is adjacent to at least one vertex in $V_j$, for every $j < i$. The objects which are studied in this article are families of $r$-regular graphs such that $\Gamma(G) = r + 1$. Using the notion of independent module, a characterization of this family is given for $r=3$. Moreover, we determine classes of graphs in this family, in particular the class of $r$-regular graphs without induced $C_4$, for $r \le 4$. Furthermore, our propositions imply results on partial Grundy number. |
2305.10014 | Mrittika Chakraborty | Mrittika Chakraborty (1), Wreetbhas Pal (1), Sanghamitra Bandyopadhyay
(2) and Ujjwal Maulik (1) ((1) Jadavpur University, (2) Indian Statistical
Institute) | A Survey on Multi-Objective based Parameter Optimization for Deep
Learning | The paper has been accepted for publication in Computer Science
journal: http://journals.agh.edu.pl/csci | null | null | null | cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning models form one of the most powerful machine learning models
for the extraction of important features. Most of the designs of deep neural
models, i.e., the initialization of parameters, are still manually tuned.
Hence, obtaining a model with high performance is exceedingly time-consuming
and occasionally impossible. Optimizing the parameters of the deep networks,
therefore, requires improved optimization algorithms with high convergence
rates. The single objective-based optimization methods generally used are
mostly time-consuming and do not guarantee optimum performance in all cases.
Mathematical optimization problems containing multiple objective functions that
must be optimized simultaneously fall under the category of multi-objective
optimization sometimes referred to as Pareto optimization. Multi-objective
optimization problems form one of the alternatives yet useful options for
parameter optimization. However, this domain is a bit less explored. In this
survey, we focus on exploring the effectiveness of multi-objective optimization
strategies for parameter optimization in conjunction with deep neural networks.
The case studies used in this study focus on how the two methods are combined
to provide valuable insights into the generation of predictions and analysis in
multiple applications.
| [
{
"created": "Wed, 17 May 2023 07:48:54 GMT",
"version": "v1"
}
] | 2023-05-18 | [
[
"Chakraborty",
"Mrittika",
""
],
[
"Pal",
"Wreetbhas",
""
],
[
"Bandyopadhyay",
"Sanghamitra",
""
],
[
"Maulik",
"Ujjwal",
""
]
] | Deep learning models form one of the most powerful machine learning models for the extraction of important features. Most of the designs of deep neural models, i.e., the initialization of parameters, are still manually tuned. Hence, obtaining a model with high performance is exceedingly time-consuming and occasionally impossible. Optimizing the parameters of the deep networks, therefore, requires improved optimization algorithms with high convergence rates. The single objective-based optimization methods generally used are mostly time-consuming and do not guarantee optimum performance in all cases. Mathematical optimization problems containing multiple objective functions that must be optimized simultaneously fall under the category of multi-objective optimization sometimes referred to as Pareto optimization. Multi-objective optimization problems form one of the alternatives yet useful options for parameter optimization. However, this domain is a bit less explored. In this survey, we focus on exploring the effectiveness of multi-objective optimization strategies for parameter optimization in conjunction with deep neural networks. The case studies used in this study focus on how the two methods are combined to provide valuable insights into the generation of predictions and analysis in multiple applications. |
2308.03113 | Fan Liu | Fan Liu, Huilin Chen, Zhiyong Cheng, Liqiang Nie and Mohan Kankanhalli | Semantic-Guided Feature Distillation for Multimodal Recommendation | ACM Multimedia 2023 Accepted | In Proceedings of the 31st ACM International Conference on
Multimedia (MM '23), 2023 | null | null | cs.IR cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal recommendation exploits the rich multimodal information associated
with users or items to enhance the representation learning for better
performance. In these methods, end-to-end feature extractors (e.g.,
shallow/deep neural networks) are often adopted to tailor the generic
multimodal features that are extracted from raw data by pre-trained models for
recommendation. However, compact extractors, such as shallow neural networks,
may find it challenging to extract effective information from complex and
high-dimensional generic modality features. Conversely, DNN-based extractors
may encounter the data sparsity problem in recommendation. To address this
problem, we propose a novel model-agnostic approach called Semantic-guided
Feature Distillation (SGFD), which employs a teacher-student framework to
extract feature for multimodal recommendation. The teacher model first extracts
rich modality features from the generic modality feature by considering both
the semantic information of items and the complementary information of multiple
modalities. SGFD then utilizes response-based and feature-based distillation
loss to effectively transfer the knowledge encoded in the teacher model to the
student model. To evaluate the effectiveness of our SGFD, we integrate SGFD
into three backbone multimodal recommendation models. Extensive experiments on
three public real-world datasets demonstrate that SGFD-enhanced models can
achieve substantial improvement over their counterparts.
| [
{
"created": "Sun, 6 Aug 2023 13:39:23 GMT",
"version": "v1"
}
] | 2023-08-08 | [
[
"Liu",
"Fan",
""
],
[
"Chen",
"Huilin",
""
],
[
"Cheng",
"Zhiyong",
""
],
[
"Nie",
"Liqiang",
""
],
[
"Kankanhalli",
"Mohan",
""
]
] | Multimodal recommendation exploits the rich multimodal information associated with users or items to enhance the representation learning for better performance. In these methods, end-to-end feature extractors (e.g., shallow/deep neural networks) are often adopted to tailor the generic multimodal features that are extracted from raw data by pre-trained models for recommendation. However, compact extractors, such as shallow neural networks, may find it challenging to extract effective information from complex and high-dimensional generic modality features. Conversely, DNN-based extractors may encounter the data sparsity problem in recommendation. To address this problem, we propose a novel model-agnostic approach called Semantic-guided Feature Distillation (SGFD), which employs a teacher-student framework to extract feature for multimodal recommendation. The teacher model first extracts rich modality features from the generic modality feature by considering both the semantic information of items and the complementary information of multiple modalities. SGFD then utilizes response-based and feature-based distillation loss to effectively transfer the knowledge encoded in the teacher model to the student model. To evaluate the effectiveness of our SGFD, we integrate SGFD into three backbone multimodal recommendation models. Extensive experiments on three public real-world datasets demonstrate that SGFD-enhanced models can achieve substantial improvement over their counterparts. |
hep-th/0306226 | Nathan Jacob Berkovits | Nathan Berkovits and Nathan Seiberg | Superstrings in Graviphoton Background and N=1/2+3/2 Supersymmetry | Added references and a footnote | JHEP 0307:010,2003 | 10.1088/1126-6708/2003/07/010 | IFT-P.026/2003 | hep-th | null | Motivated by Ooguri and Vafa, we study superstrings in flat R^4 in a constant
self-dual graviphoton background. The supergravity equations of motion are
satisfied in this background which deforms the N=2 d=4 flat space
super-Poincare algebra to another algebra with eight supercharges. A D-brane in
this space preserves a quarter of the supercharges; i.e. N=1/2 supersymmetry is
realized linearly, and the remaining N=3/2 supersymmetry is realized
nonlinearly. The theory on the brane can be described as a theory in
noncommutative superspace in which the chiral fermionic coordinates
$\theta^\alpha$ of N=1 d=4 superspace are not Grassman variables but satisfy a
Clifford algebra.
| [
{
"created": "Tue, 24 Jun 2003 15:37:14 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Jun 2003 22:39:24 GMT",
"version": "v2"
}
] | 2009-11-10 | [
[
"Berkovits",
"Nathan",
""
],
[
"Seiberg",
"Nathan",
""
]
] | Motivated by Ooguri and Vafa, we study superstrings in flat R^4 in a constant self-dual graviphoton background. The supergravity equations of motion are satisfied in this background which deforms the N=2 d=4 flat space super-Poincare algebra to another algebra with eight supercharges. A D-brane in this space preserves a quarter of the supercharges; i.e. N=1/2 supersymmetry is realized linearly, and the remaining N=3/2 supersymmetry is realized nonlinearly. The theory on the brane can be described as a theory in noncommutative superspace in which the chiral fermionic coordinates $\theta^\alpha$ of N=1 d=4 superspace are not Grassman variables but satisfy a Clifford algebra. |
1607.00373 | Patrick Concha | P.K. Concha, M.C. Ipinza, L. Ravera, E.K. Rodr\'iguez | On the Supersymmetric Extension of Gauss-Bonnet like Gravity | v4, 13 pages, version accepted for publication in JHEP | JHEP 09 (2016) 007 | 10.1007/JHEP09(2016)007 | UAI-PHY-16/11 | hep-th | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore the supersymmetry invariance of a supergravity theory in the
presence of a non-trivial boundary. The explicit construction of a bulk
Lagrangian based on an enlarged superalgebra, known as $AdS$-Lorentz, is
presented. Using a geometric approach we show that the supersymmetric extension
of a Gauss-Bonnet like gravity is required in order to restore the
supersymmetry invariance of the theory.
| [
{
"created": "Fri, 1 Jul 2016 19:57:10 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Jul 2016 04:13:58 GMT",
"version": "v2"
},
{
"created": "Thu, 25 Aug 2016 22:08:54 GMT",
"version": "v3"
},
{
"created": "Wed, 31 Aug 2016 16:43:17 GMT",
"version": "v4"
}
] | 2016-09-08 | [
[
"Concha",
"P. K.",
""
],
[
"Ipinza",
"M. C.",
""
],
[
"Ravera",
"L.",
""
],
[
"Rodríguez",
"E. K.",
""
]
] | We explore the supersymmetry invariance of a supergravity theory in the presence of a non-trivial boundary. The explicit construction of a bulk Lagrangian based on an enlarged superalgebra, known as $AdS$-Lorentz, is presented. Using a geometric approach we show that the supersymmetric extension of a Gauss-Bonnet like gravity is required in order to restore the supersymmetry invariance of the theory. |
0808.1340 | Costas Kounnas Dr | Costas Kounnas | Massive Boson-Fermion Degeneracy and the Early Structure of the Universe | 24 pages | Fortsch.Phys.56:1143-1156,2008 | 10.1002/prop.200810570 | LPTENS 08/44 | hep-th | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The existence of a new kind of massive boson-fermion symmetry is shown
explicitly in the framework of the heterotic, type II and type II orientifold
superstring theories. The target space-time is two-dimensional. Higher
dimensional models are defined via large marginal deformations of JxJ-type. The
spectrum of the initial undeformed two dimensional vacuum consists of massless
boson degrees of freedom, while all massive boson and fermion degrees of
freedom exhibit a new Massive Spectrum Degeneracy Symmetry (MSDS). This precise
property, distinguishes the MSDS theories from the well known supersymmetric
SUSY-theories. Some proposals are stated in the framework of these theories
concerning the structure of: (i) The Early Non-singular Phase of the Universe,
(ii) The two dimensional boundary theory of AdS3 Black-Holes, (iii) Plausible
applications of the MSDS theories in particle physics, alternative to SUSY.
| [
{
"created": "Sat, 9 Aug 2008 15:28:10 GMT",
"version": "v1"
}
] | 2008-12-22 | [
[
"Kounnas",
"Costas",
""
]
] | The existence of a new kind of massive boson-fermion symmetry is shown explicitly in the framework of the heterotic, type II and type II orientifold superstring theories. The target space-time is two-dimensional. Higher dimensional models are defined via large marginal deformations of JxJ-type. The spectrum of the initial undeformed two dimensional vacuum consists of massless boson degrees of freedom, while all massive boson and fermion degrees of freedom exhibit a new Massive Spectrum Degeneracy Symmetry (MSDS). This precise property, distinguishes the MSDS theories from the well known supersymmetric SUSY-theories. Some proposals are stated in the framework of these theories concerning the structure of: (i) The Early Non-singular Phase of the Universe, (ii) The two dimensional boundary theory of AdS3 Black-Holes, (iii) Plausible applications of the MSDS theories in particle physics, alternative to SUSY. |
2005.05587 | Nils Jansen | Dennis Gross, Nils Jansen, Guillermo A. P\'erez, Stephan Raaijmakers | Robustness Verification for Classifier Ensembles | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We give a formal verification procedure that decides whether a classifier
ensemble is robust against arbitrary randomized attacks. Such attacks consist
of a set of deterministic attacks and a distribution over this set. The
robustness-checking problem consists of assessing, given a set of classifiers
and a labelled data set, whether there exists a randomized attack that induces
a certain expected loss against all classifiers. We show the NP-hardness of the
problem and provide an upper bound on the number of attacks that is sufficient
to form an optimal randomized attack. These results provide an effective way to
reason about the robustness of a classifier ensemble. We provide SMT and MILP
encodings to compute optimal randomized attacks or prove that there is no
attack inducing a certain expected loss. In the latter case, the classifier
ensemble is provably robust. Our prototype implementation verifies multiple
neural-network ensembles trained for image-classification tasks. The
experimental results using the MILP encoding are promising both in terms of
scalability and the general applicability of our verification procedure.
| [
{
"created": "Tue, 12 May 2020 07:38:43 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Jul 2020 07:43:16 GMT",
"version": "v2"
}
] | 2020-07-10 | [
[
"Gross",
"Dennis",
""
],
[
"Jansen",
"Nils",
""
],
[
"Pérez",
"Guillermo A.",
""
],
[
"Raaijmakers",
"Stephan",
""
]
] | We give a formal verification procedure that decides whether a classifier ensemble is robust against arbitrary randomized attacks. Such attacks consist of a set of deterministic attacks and a distribution over this set. The robustness-checking problem consists of assessing, given a set of classifiers and a labelled data set, whether there exists a randomized attack that induces a certain expected loss against all classifiers. We show the NP-hardness of the problem and provide an upper bound on the number of attacks that is sufficient to form an optimal randomized attack. These results provide an effective way to reason about the robustness of a classifier ensemble. We provide SMT and MILP encodings to compute optimal randomized attacks or prove that there is no attack inducing a certain expected loss. In the latter case, the classifier ensemble is provably robust. Our prototype implementation verifies multiple neural-network ensembles trained for image-classification tasks. The experimental results using the MILP encoding are promising both in terms of scalability and the general applicability of our verification procedure. |
0706.1893 | Susha Parameswaran | S.L. Parameswaran, S. Randjbar-Daemi and A. Salvio | Stability and Negative Tensions in 6D Brane Worlds | 28 pages, 2 figures | JHEP 0801:051,2008 | 10.1088/1126-6708/2008/01/051 | null | hep-th | null | We investigate the dynamical stability of warped, axially symmetric
compactifications in anomaly free 6D gauged supergravity. The solutions have
conical defects, which we source by 3-branes placed on orbifold fixed points,
and a smooth limit to the classic sphere-monopole compactification. Like for
the sphere, the extra fields that are generically required by anomaly freedom
are especially relevant for stability. With positive tension branes only, there
is a strict stability criterion (identical to the sphere case) on the charges
present under the monopole background. Thus brane world models with positive
tensions can be embedded into anomaly free theories in only a few ways.
Meanwhile, surprisingly, in the presence of a negative tension brane the
stability criteria can be relaxed. We also describe in detail the geometries
induced by negative tension codimension two branes.
| [
{
"created": "Wed, 13 Jun 2007 14:06:10 GMT",
"version": "v1"
}
] | 2009-11-18 | [
[
"Parameswaran",
"S. L.",
""
],
[
"Randjbar-Daemi",
"S.",
""
],
[
"Salvio",
"A.",
""
]
] | We investigate the dynamical stability of warped, axially symmetric compactifications in anomaly free 6D gauged supergravity. The solutions have conical defects, which we source by 3-branes placed on orbifold fixed points, and a smooth limit to the classic sphere-monopole compactification. Like for the sphere, the extra fields that are generically required by anomaly freedom are especially relevant for stability. With positive tension branes only, there is a strict stability criterion (identical to the sphere case) on the charges present under the monopole background. Thus brane world models with positive tensions can be embedded into anomaly free theories in only a few ways. Meanwhile, surprisingly, in the presence of a negative tension brane the stability criteria can be relaxed. We also describe in detail the geometries induced by negative tension codimension two branes. |
hep-th/0303060 | Charlotte Floe Kristjansen | N. Beisert, C. Kristjansen, M. Staudacher | The Dilatation Operator of Conformal N=4 Super Yang-Mills Theory | 54 pages, 5 figures, v2: references added, small textual changes, v3:
to appear in Nucl. Phys. B, v4: zeros in (D.21), signs in (F.4) corrected | Nucl.Phys.B664:131-184,2003 | 10.1016/S0550-3213(03)00406-1 | AEI 2003-028 | hep-th cond-mat.stat-mech nlin.SI | null | We argue that existing methods for the perturbative computation of anomalous
dimensions and the disentanglement of mixing in N = 4 gauge theory can be
considerably simplified, systematized and extended by focusing on the theory's
dilatation operator. The efficiency of the method is first illustrated at the
one-loop level for general non-derivative scalar states. We then go on to
derive, for pure scalar states, the two-loop structure of the dilatation
operator. This allows us to obtain a host of new results. Among these are an
infinite number of previously unknown two-loop anomalous dimensions, new
subtleties concerning 't Hooft's large N expansion due to mixing effects of
degenerate single and multiple trace states, two-loop tests of various
protected operators, as well as two-loop non-planar results for two-impurity
operators in BMN gauge theory. We also put to use the recently discovered
integrable spin chain description of the planar one-loop dilatation operator
and show that the associated Yang-Baxter equation explains the existence of a
hitherto unknown planar ``axial'' symmetry between infinitely many gauge theory
states. We present evidence that this integrability can be extended to all
loops, with intriguing consequences for gauge theory, and that it leads to a
novel integrable deformation of the XXX Heisenberg spin chain. Assuming that
the integrability structure extends to more than two loops, we determine the
planar three-loop contribution to the dilatation operator.
| [
{
"created": "Fri, 7 Mar 2003 12:57:35 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Mar 2003 19:18:15 GMT",
"version": "v2"
},
{
"created": "Tue, 29 Jul 2003 19:49:52 GMT",
"version": "v3"
},
{
"created": "Fri, 3 Sep 2004 16:05:58 GMT",
"version": "v4"
}
] | 2011-03-23 | [
[
"Beisert",
"N.",
""
],
[
"Kristjansen",
"C.",
""
],
[
"Staudacher",
"M.",
""
]
] | We argue that existing methods for the perturbative computation of anomalous dimensions and the disentanglement of mixing in N = 4 gauge theory can be considerably simplified, systematized and extended by focusing on the theory's dilatation operator. The efficiency of the method is first illustrated at the one-loop level for general non-derivative scalar states. We then go on to derive, for pure scalar states, the two-loop structure of the dilatation operator. This allows us to obtain a host of new results. Among these are an infinite number of previously unknown two-loop anomalous dimensions, new subtleties concerning 't Hooft's large N expansion due to mixing effects of degenerate single and multiple trace states, two-loop tests of various protected operators, as well as two-loop non-planar results for two-impurity operators in BMN gauge theory. We also put to use the recently discovered integrable spin chain description of the planar one-loop dilatation operator and show that the associated Yang-Baxter equation explains the existence of a hitherto unknown planar ``axial'' symmetry between infinitely many gauge theory states. We present evidence that this integrability can be extended to all loops, with intriguing consequences for gauge theory, and that it leads to a novel integrable deformation of the XXX Heisenberg spin chain. Assuming that the integrability structure extends to more than two loops, we determine the planar three-loop contribution to the dilatation operator. |
1309.0650 | Jochen Zahn | Harold Steinacker, Jochen Zahn | An Index for Intersecting Branes in Matrix Models | null | SIGMA 9 (2013), 067, 7 pages | 10.3842/SIGMA.2013.067 | UWThPh-2013-21 | hep-th math-ph math.MP | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We introduce an index indicating the occurrence of chiral fermions at the
intersection of branes in matrix models. This allows to discuss the stability
of chiral fermions under perturbations of the branes.
| [
{
"created": "Tue, 3 Sep 2013 11:50:46 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Nov 2013 05:38:15 GMT",
"version": "v2"
}
] | 2013-11-11 | [
[
"Steinacker",
"Harold",
""
],
[
"Zahn",
"Jochen",
""
]
] | We introduce an index indicating the occurrence of chiral fermions at the intersection of branes in matrix models. This allows to discuss the stability of chiral fermions under perturbations of the branes. |
2212.06681 | Telmo Menezes | Telmo Menezes, Antonin Pottier, Camille Roth | The two sides of the Environmental Kuznets Curve: a socio-semantic
analysis | null | {\OE}conomia, 13-2 (2023) 279-321 | 10.4000/oeconomia.15729 | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since the 1990s, the Environmental Kuznets Curve (EKC) hypothesis posits an
inverted U-shaped relationship between pollutants and economic development. The
hypothesis has attracted a lot of research. We provide here a review of more
than 2000 articles that have been published on the EKC. We aim at mapping the
development of this specialized research, both in term of actors and of
content, and to trace the transformation it has undergone from its beginning to
the present. To that end, we combine traditional bibliometric analysis and
semantic analysis with a novel method, that enables us to recover the type of
pollutants that are studied and the empirical claims made on EKC (whether the
hypothesis is invalidated or not). We principally exhibit the existence of a
few epistemic communities that are related to distinct time periods, topics
and, to some extent, proportion of positive results on EKC.
| [
{
"created": "Tue, 13 Dec 2022 15:58:24 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Oct 2023 16:17:05 GMT",
"version": "v2"
}
] | 2023-10-31 | [
[
"Menezes",
"Telmo",
""
],
[
"Pottier",
"Antonin",
""
],
[
"Roth",
"Camille",
""
]
] | Since the 1990s, the Environmental Kuznets Curve (EKC) hypothesis posits an inverted U-shaped relationship between pollutants and economic development. The hypothesis has attracted a lot of research. We provide here a review of more than 2000 articles that have been published on the EKC. We aim at mapping the development of this specialized research, both in terms of actors and of content, and to trace the transformation it has undergone from its beginning to the present. To that end, we combine traditional bibliometric analysis and semantic analysis with a novel method that enables us to recover the type of pollutants that are studied and the empirical claims made on EKC (whether the hypothesis is invalidated or not). We principally exhibit the existence of a few epistemic communities that are related to distinct time periods, topics and, to some extent, proportion of positive results on EKC. |
2007.11713 | Poulami Nandi | Arjun Bagchi, Poulami Nandi, Amartya Saha, and Zodinmawia | BMS Modular Diaries: Torus one-point function | 101 pages, 4 figures, 12 appendices; v2: typos and minor errors
corrected, references added, acknowledgement updated, matches journal version | JHEP11(2020)065 | 10.1007/JHEP11(2020)065 | null | hep-th | http://creativecommons.org/licenses/by/4.0/ | Two dimensional field theories invariant under the Bondi-Metzner-Sachs (BMS)
group are conjectured to be dual to asymptotically flat spacetimes in three
dimensions. In this paper, we continue our investigations of the modular
properties of these field theories. In particular, we focus on the BMS torus
one-point function. We use two different methods to arrive at expressions for
asymptotic structure constants for general states in the theory utilising
modular properties of the torus one-point function. We then concentrate on the
BMS highest weight representation, and derive a host of new results, the most
important of which is the BMS torus block. In a particular limit of large
weights, we derive the leading and sub-leading pieces of the BMS torus block,
which we then use to rederive an expression for the asymptotic structure
constants for BMS primaries. Finally, we perform a bulk computation of a probe
scalar in the background of a flatspace cosmological solution based on the
geodesic approximation to reproduce our field theoretic results.
| [
{
"created": "Wed, 22 Jul 2020 22:56:27 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Oct 2020 02:41:55 GMT",
"version": "v2"
}
] | 2020-11-19 | [
[
"Bagchi",
"Arjun",
""
],
[
"Nandi",
"Poulami",
""
],
[
"Saha",
"Amartya",
""
],
[
"Zodinmawia",
"",
""
]
] | Two dimensional field theories invariant under the Bondi-Metzner-Sachs (BMS) group are conjectured to be dual to asymptotically flat spacetimes in three dimensions. In this paper, we continue our investigations of the modular properties of these field theories. In particular, we focus on the BMS torus one-point function. We use two different methods to arrive at expressions for asymptotic structure constants for general states in the theory utilising modular properties of the torus one-point function. We then concentrate on the BMS highest weight representation, and derive a host of new results, the most important of which is the BMS torus block. In a particular limit of large weights, we derive the leading and sub-leading pieces of the BMS torus block, which we then use to rederive an expression for the asymptotic structure constants for BMS primaries. Finally, we perform a bulk computation of a probe scalar in the background of a flatspace cosmological solution based on the geodesic approximation to reproduce our field theoretic results. |
2108.02505 | Jose Jurandir Alves Esteves | Jose Jurandir Alves Esteves, Amina Boubendir, Fabrice Guillemin,
Pierre Sens | On the Robustness of Controlled Deep Reinforcement Learning for Slice
Placement | arXiv admin note: substantial text overlap with arXiv:2105.06741,
arXiv:2108.02495 | null | null | null | cs.NI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The evaluation of the impact of using Machine Learning in the management of
softwarized networks is considered in multiple research works. Beyond that, we
propose to evaluate the robustness of online learning for optimal network slice
placement. A major assumption of this study is that slice request
arrivals are non-stationary. In this context, we simulate unpredictable network
load variations and compare two Deep Reinforcement Learning (DRL) algorithms: a
pure DRL-based algorithm and a heuristically controlled DRL as a hybrid
DRL-heuristic algorithm, to assess the impact of these unpredictable changes of
traffic load on the algorithms' performance. We conduct extensive simulations of
a large-scale operator infrastructure. The evaluation results show that the
proposed hybrid DRL-heuristic approach is more robust and reliable in case of
unpredictable network load changes than pure DRL as it reduces the performance
degradation. These results follow up a series of recent studies we have
performed showing that the proposed hybrid DRL-heuristic approach is efficient
and better adapted to real network scenarios than pure DRL.
| [
{
"created": "Thu, 5 Aug 2021 10:24:33 GMT",
"version": "v1"
}
] | 2021-08-21 | [
[
"Esteves",
"Jose Jurandir Alves",
""
],
[
"Boubendir",
"Amina",
""
],
[
"Guillemin",
"Fabrice",
""
],
[
"Sens",
"Pierre",
""
]
] | The evaluation of the impact of using Machine Learning in the management of softwarized networks is considered in multiple research works. Beyond that, we propose to evaluate the robustness of online learning for optimal network slice placement. A major assumption to this study is to consider that slice request arrivals are non-stationary. In this context, we simulate unpredictable network load variations and compare two Deep Reinforcement Learning (DRL) algorithms: a pure DRL-based algorithm and a heuristically controlled DRL as a hybrid DRL-heuristic algorithm, to assess the impact of these unpredictable changes of traffic load on the algorithms performance. We conduct extensive simulations of a large-scale operator infrastructure. The evaluation results show that the proposed hybrid DRL-heuristic approach is more robust and reliable in case of unpredictable network load changes than pure DRL as it reduces the performance degradation. These results are follow-ups for a series of recent research we have performed showing that the proposed hybrid DRL-heuristic approach is efficient and more adapted to real network scenarios than pure DRL. |
2406.08754 | HengRui Xing | Bangxin Li and Hengrui Xing and Chao Huang and Jin Qian and Huangqing
Xiao and Linfeng Feng and Cong Tian | Exploiting Uncommon Text-Encoded Structures for Automated Jailbreaks in
LLMs | 12 pages, 4 figures | null | null | null | cs.CL cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) are widely used in natural language processing
but face the risk of jailbreak attacks that maliciously induce them to generate
harmful content. Existing jailbreak attacks, including character-level and
context-level attacks, mainly focus on the prompt of the plain text without
specifically exploring the significant influence of its structure. In this
paper, we focus on studying how prompt structure contributes to the jailbreak
attack. We introduce a novel structure-level attack method based on tail
structures that are rarely used during LLM training, which we refer to as
Uncommon Text-Encoded Structure (UTES). We extensively study 12 UTES templates
and 6 obfuscation methods to build an effective automated jailbreak tool named
StructuralSleight that contains three escalating attack strategies: Structural
Attack, Structural and Character/Context Obfuscation Attack, and Fully
Obfuscated Structural Attack. Extensive experiments on existing LLMs show that
StructuralSleight significantly outperforms baseline methods. In particular,
the attack success rate reaches 94.62\% on GPT-4o, which has not been addressed
by state-of-the-art techniques.
| [
{
"created": "Thu, 13 Jun 2024 02:24:08 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Jul 2024 08:23:38 GMT",
"version": "v2"
}
] | 2024-07-22 | [
[
"Li",
"Bangxin",
""
],
[
"Xing",
"Hengrui",
""
],
[
"Huang",
"Chao",
""
],
[
"Qian",
"Jin",
""
],
[
"Xiao",
"Huangqing",
""
],
[
"Feng",
"Linfeng",
""
],
[
"Tian",
"Cong",
""
]
] | Large Language Models (LLMs) are widely used in natural language processing but face the risk of jailbreak attacks that maliciously induce them to generate harmful content. Existing jailbreak attacks, including character-level and context-level attacks, mainly focus on the prompt of the plain text without specifically exploring the significant influence of its structure. In this paper, we focus on studying how prompt structure contributes to the jailbreak attack. We introduce a novel structure-level attack method based on tail structures that are rarely used during LLM training, which we refer to as Uncommon Text-Encoded Structure (UTES). We extensively study 12 UTESs templates and 6 obfuscation methods to build an effective automated jailbreak tool named StructuralSleight that contains three escalating attack strategies: Structural Attack, Structural and Character/Context Obfuscation Attack, and Fully Obfuscated Structural Attack. Extensive experiments on existing LLMs show that StructuralSleight significantly outperforms baseline methods. In particular, the attack success rate reaches 94.62\% on GPT-4o, which has not been addressed by state-of-the-art techniques. |
1111.6493 | Taiki Takahashi | Taiki Takahashi (1), Hidemi Oono (2), Takeshi Inoue (3), Shuken Boku
(3), Yuki Kako (3), Yuji Kitaichi (3), Ichiro Kusumi (3), Takuya Masui (3),
Shin Nakagawa (3), Katsuji Suzuki (3), Teruaki Tanaka (3), Tsukasa Koyama
(3), and Mark H. B. Radford (4) ((1) Direct all correspondence to Taiki
Takahashi, Unit of Cognitive and Behavioral Sciences Department of Life
Sciences, School of Arts and Sciences, The University of Tokyo, Komaba,
(taikitakahashi@gmail.com), (2) Department of Behavioral Science, Hokkaido
University, Sapporo, Japan, (3) Department of Psychiatry, Graduate School of
Medicine, Hokkaido University, Sapporo, (4) Symbiosis Group Limited, Milton,
Australia, and Department of Behavioral Science, Hokkaido University,
Sapporo, Japan) | Depressive patients are more impulsive and inconsistent in intertemporal
choice behavior for monetary gain and loss than healthy subjects- an analysis
based on Tsallis' statistics | null | Neuro Endocrinol Lett. 2008, 29(3):351-358 | null | null | q-bio.NC q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Depression has been associated with impaired neural processing of reward and
punishment. However, to date, little is known regarding the relationship
between depression and intertemporal choice for gain and loss. We compared
impulsivity and inconsistency in intertemporal choice for monetary gain and
loss (quantified with parameters in the q-exponential discount function based
on Tsallis' statistics) between depressive patients and healthy control
subjects. This examination is potentially important for advances in
neuroeconomics of intertemporal choice, because depression is associated with
reduced serotonergic activities in the brain. We observed that depressive
patients were more impulsive and time-inconsistent in intertemporal choice
action for gain and loss, in comparison to healthy controls. The usefulness of
the q-exponential discount function for assessing the impaired decision-making
by depressive patients was demonstrated. Furthermore, biophysical mechanisms
underlying the altered intertemporal choice by depressive patients are
discussed in relation to impaired serotonergic neural systems.
Keywords: Depression, Discounting, Neuroeconomics, Impulsivity,
Inconsistency, Tsallis' statistics
| [
{
"created": "Tue, 22 Nov 2011 15:38:02 GMT",
"version": "v1"
}
] | 2012-12-04 | [
[
"Takahashi",
"Taiki",
""
],
[
"Oono",
"Hidemi",
""
],
[
"Inoue",
"Takeshi",
""
],
[
"Boku",
"Shuken",
""
],
[
"Kako",
"Yuki",
""
],
[
"Kitaichi",
"Yuji",
""
],
[
"Kusumi",
"Ichiro",
""
],
[
"Masui",... | Depression has been associated with impaired neural processing of reward and punishment. However, to date, little is known regarding the relationship between depression and intertemporal choice for gain and loss. We compared impulsivity and inconsistency in intertemporal choice for monetary gain and loss (quantified with parameters in the q-exponential discount function based on Tsallis' statistics) between depressive patients and healthy control subjects. This examination is potentially important for advances in neuroeconomics of intertemporal choice, because depression is associated with reduced serotonergic activities in the brain. We observed that depressive patients were more impulsive and time-inconsistent in intertemporal choice action for gain and loss, in comparison to healthy controls. The usefulness of the q-exponential discount function for assessing the impaired decision-making by depressive patients was demonstrated. Furthermore, biophysical mechanisms underlying the altered intertemporal choice by depressive patients are discussed in relation to impaired serotonergic neural systems. Keywords: Depression, Discounting, Neuroeconomics, Impulsivity, Inconsistency, Tsallis' statistics |
2405.14866 | Hanzhang Tu | Hanzhang Tu, Ruizhi Shao, Xue Dong, Shunyuan Zheng, Hao Zhang, Lili
Chen, Meili Wang, Wenyu Li, Siyan Ma, Shengping Zhang, Boyao Zhou, Yebin Liu | Tele-Aloha: A Low-budget and High-authenticity Telepresence System Using
Sparse RGB Cameras | Paper accepted by SIGGRAPH 2024. Project page:
http://118.178.32.38/c/Tele-Aloha/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this paper, we present a low-budget and high-authenticity bidirectional
telepresence system, Tele-Aloha, targeting peer-to-peer communication
scenarios. Compared to previous systems, Tele-Aloha utilizes only four sparse
RGB cameras, one consumer-grade GPU, and one autostereoscopic screen to achieve
high-resolution (2048x2048), real-time (30 fps), low-latency (less than 150ms)
and robust distant communication. As the core of Tele-Aloha, we propose an
efficient novel view synthesis algorithm for upper-body. Firstly, we design a
cascaded disparity estimator for obtaining a robust geometry cue. Additionally,
a neural rasterizer via Gaussian Splatting is introduced to project latent
features onto target view and to decode them into a reduced resolution.
Further, given the high-quality captured data, we leverage a weighted blending
mechanism to refine the decoded image into the final resolution of 2K.
Exploiting world-leading autostereoscopic display and low-latency iris
tracking, users are able to experience a strong three-dimensional sense even
without any wearable head-mounted display device. Altogether, our telepresence
system demonstrates the sense of co-presence in real-life experiments,
inspiring the next generation of communication.
| [
{
"created": "Thu, 23 May 2024 17:59:45 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Tu",
"Hanzhang",
""
],
[
"Shao",
"Ruizhi",
""
],
[
"Dong",
"Xue",
""
],
[
"Zheng",
"Shunyuan",
""
],
[
"Zhang",
"Hao",
""
],
[
"Chen",
"Lili",
""
],
[
"Wang",
"Meili",
""
],
[
"Li",
"Wenyu",
... | In this paper, we present a low-budget and high-authenticity bidirectional telepresence system, Tele-Aloha, targeting peer-to-peer communication scenarios. Compared to previous systems, Tele-Aloha utilizes only four sparse RGB cameras, one consumer-grade GPU, and one autostereoscopic screen to achieve high-resolution (2048x2048), real-time (30 fps), low-latency (less than 150ms) and robust distant communication. As the core of Tele-Aloha, we propose an efficient novel view synthesis algorithm for upper-body. Firstly, we design a cascaded disparity estimator for obtaining a robust geometry cue. Additionally a neural rasterizer via Gaussian Splatting is introduced to project latent features onto target view and to decode them into a reduced resolution. Further, given the high-quality captured data, we leverage weighted blending mechanism to refine the decoded image into the final resolution of 2K. Exploiting world-leading autostereoscopic display and low-latency iris tracking, users are able to experience a strong three-dimensional sense even without any wearable head-mounted display device. Altogether, our telepresence system demonstrates the sense of co-presence in real-life experiments, inspiring the next generation of communication. |
2305.08346 | Marcos Kalinowski | Clauvin Almeida, Marcos Kalinowski, Anderson Uchoa, Bruno Feijo | Negative Effects of Gamification in Education Software: Systematic
Mapping and Practitioner Perceptions | null | Information and Software Technology, Volume 156, April 2023,
107142 | 10.1016/j.infsof.2022.107142 | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Context: While most research shows positive effects of gamification, the
focus on its adverse effects is considerably smaller and further understanding
is needed. Objective: To provide a comprehensive overview on research reporting
negative effects of game design elements and to provide insights into the
awareness of developers on these effects and into how they could be considered
in practice. Method: We conducted a systematic mapping study of the negative
effects of game design elements on education/learning systems. We also held a
focus group discussion with developers of a gamified software, discussing the
mapping study results with regard to their awareness and perceptions on the
reported negative effects in practice. Results: The mapping study revealed 87
papers reporting undesired effects of game design elements. We found that
badges, leaderboards, competitions, and points are the game design elements
most often reported as causing negative effects. The most cited negative
effects were lack of effect, worsened performance, motivational issues, lack of
understanding, and irrelevance. The ethical issues of gaming the system and
cheating were also often reported. As part of our results, we map the relations
between game design elements and the negative effects that they may cause. The
focus group revealed that developers were not aware of many of the possible
negative effects and that they consider this type of information useful. The
discussion revealed their agreement on some of those potential negative effects
and also some positive counterparts. Conclusions: Gamification, when properly
applied, can have positive effects on education/learning software. However,
gamified software is also prone to generate harmful effects. Revealing and
discussing potentially negative effects can help to make more informed
decisions considering their trade-off with respect to the expected benefits.
| [
{
"created": "Mon, 15 May 2023 04:51:00 GMT",
"version": "v1"
}
] | 2023-05-16 | [
[
"Almeida",
"Clauvin",
""
],
[
"Kalinowski",
"Marcos",
""
],
[
"Uchoa",
"Anderson",
""
],
[
"Feijo",
"Bruno",
""
]
] | Context: While most research shows positive effects of gamification, the focus on its adverse effects is considerably smaller and further understanding is needed. Objective: To provide a comprehensive overview on research reporting negative effects of game design elements and to provide insights into the awareness of developers on these effects and into how they could be considered in practice. Method: We conducted a systematic mapping study of the negative effects of game design elements on education/learning systems. We also held a focus group discussion with developers of a gamified software, discussing the mapping study results with regard to their awareness and perceptions on the reported negative effects in practice. Results: The mapping study revealed 87 papers reporting undesired effects of game design elements. We found that badges, leaderboards, competitions, and points are the game design elements most often reported as causing negative effects. The most cited negative effects were lack of effect, worsened performance, motivational issues, lack of understanding, and irrelevance. The ethical issues of gaming the system and cheating were also often reported. As part of our results, we map the relations between game design elements and the negative effects that they may cause. The focus group revealed that developers were not aware of many of the possible negative effects and that they consider this type of information useful. The discussion revealed their agreement on some of those potential negative effects and also some positive counterparts. Conclusions: Gamification, when properly applied, can have positive effects on education/learning software. However, gamified software is also prone to generate harmful effects. Revealing and discussing potentially negative effects can help to make more informed decisions considering their trade-off with respect to the expected benefits. |
2403.15864 | Yihang Zhao | Yihang Zhao, Neil Vetter, Kaveh Aryan | Using Large Language Models for OntoClean-based Ontology Refinement | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper explores the integration of Large Language Models (LLMs) such as
GPT-3.5 and GPT-4 into the ontology refinement process, specifically focusing
on the OntoClean methodology. OntoClean, critical for assessing the
metaphysical quality of ontologies, involves a two-step process of assigning
meta-properties to classes and verifying a set of constraints. Manually
conducting the first step proves difficult in practice, due to the need for
philosophical expertise and lack of consensus among ontologists. By employing
LLMs with two prompting strategies, the study demonstrates that high accuracy
in the labelling process can be achieved. The findings suggest the potential
for LLMs to enhance ontology refinement, proposing the development of plugin
software for ontology tools to facilitate this integration.
| [
{
"created": "Sat, 23 Mar 2024 15:09:50 GMT",
"version": "v1"
}
] | 2024-03-26 | [
[
"Zhao",
"Yihang",
""
],
[
"Vetter",
"Neil",
""
],
[
"Aryan",
"Kaveh",
""
]
] | This paper explores the integration of Large Language Models (LLMs) such as GPT-3.5 and GPT-4 into the ontology refinement process, specifically focusing on the OntoClean methodology. OntoClean, critical for assessing the metaphysical quality of ontologies, involves a two-step process of assigning meta-properties to classes and verifying a set of constraints. Manually conducting the first step proves difficult in practice, due to the need for philosophical expertise and lack of consensus among ontologists. By employing LLMs with two prompting strategies, the study demonstrates that high accuracy in the labelling process can be achieved. The findings suggest the potential for LLMs to enhance ontology refinement, proposing the development of plugin software for ontology tools to facilitate this integration. |
cs/9809024 | Anoop Sarkar | XTAG Research Group (University of Pennsylvania) | A Lexicalized Tree Adjoining Grammar for English | 310 pages, 181 Postscript figures, uses 11pt, psfig.tex | null | null | IRCS Tech Report 98-18, ftp://ftp.cis.upenn.edu/pub/ircs/tr/98-18/ | cs.CL | null | This document describes a sizable grammar of English written in the TAG
formalism and implemented for use with the XTAG system. This report and the
grammar described herein supersedes the TAG grammar described in an earlier
1995 XTAG technical report. The English grammar described in this report is
based on the TAG formalism which has been extended to include lexicalization,
and unification-based feature structures. The range of syntactic phenomena that
can be handled is large and includes auxiliaries (including inversion), copula,
raising and small clause constructions, topicalization, relative clauses,
infinitives, gerunds, passives, adjuncts, it-clefts, wh-clefts, PRO
constructions, noun-noun modifications, extraposition, determiner sequences,
genitives, negation, noun-verb contractions, sentential adjuncts and
imperatives. This technical report corresponds to the XTAG Release 8/31/98. The
XTAG grammar is continuously updated with the addition of new analyses and
modification of old ones, and an online version of this report can be found at
the XTAG web page at http://www.cis.upenn.edu/~xtag/
| [
{
"created": "Fri, 18 Sep 1998 00:33:47 GMT",
"version": "v1"
},
{
"created": "Fri, 18 Sep 1998 02:49:22 GMT",
"version": "v2"
}
] | 2012-08-27 | [
[
"XTAG Research Group",
"",
""
]
] | This document describes a sizable grammar of English written in the TAG formalism and implemented for use with the XTAG system. This report and the grammar described herein supersedes the TAG grammar described in an earlier 1995 XTAG technical report. The English grammar described in this report is based on the TAG formalism which has been extended to include lexicalization, and unification-based feature structures. The range of syntactic phenomena that can be handled is large and includes auxiliaries (including inversion), copula, raising and small clause constructions, topicalization, relative clauses, infinitives, gerunds, passives, adjuncts, it-clefts, wh-clefts, PRO constructions, noun-noun modifications, extraposition, determiner sequences, genitives, negation, noun-verb contractions, sentential adjuncts and imperatives. This technical report corresponds to the XTAG Release 8/31/98. The XTAG grammar is continuously updated with the addition of new analyses and modification of old ones, and an online version of this report can be found at the XTAG web page at http://www.cis.upenn.edu/~xtag/ |
2408.00741 | Jovan Stojkovic | Jovan Stojkovic and Chaojie Zhang and \'I\~nigo Goiri and Josep
Torrellas and Esha Choukse | DynamoLLM: Designing LLM Inference Clusters for Performance and Energy
Efficiency | null | null | null | null | cs.AI cs.AR cs.DC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The rapid evolution and widespread adoption of generative large language
models (LLMs) have made them a pivotal workload in various applications. Today,
LLM inference clusters receive a large number of queries with strict Service
Level Objectives (SLOs). To achieve the desired performance, these models
execute on power-hungry GPUs, causing the inference clusters to consume a large
amount of energy and, consequently, result in excessive carbon emissions.
Fortunately, we find that there is a great opportunity to exploit the
heterogeneity in inference compute properties and fluctuations in inference
workloads, to significantly improve energy-efficiency. However, such a diverse
and dynamic environment creates a large search-space where different system
configurations (e.g., number of instances, model parallelism, and GPU
frequency) translate into different energy-performance trade-offs. To address
these challenges, we propose DynamoLLM, the first energy-management framework
for LLM inference environments. DynamoLLM automatically and dynamically
reconfigures the inference cluster to optimize for energy and cost of LLM
serving under the service's performance SLOs. We show that at a service-level,
DynamoLLM conserves 53% energy and 38% operational carbon emissions, and
reduces 61% cost to the customer, while meeting the latency SLOs.
| [
{
"created": "Thu, 1 Aug 2024 17:40:45 GMT",
"version": "v1"
}
] | 2024-08-02 | [
[
"Stojkovic",
"Jovan",
""
],
[
"Zhang",
"Chaojie",
""
],
[
"Goiri",
"Íñigo",
""
],
[
"Torrellas",
"Josep",
""
],
[
"Choukse",
"Esha",
""
]
] | The rapid evolution and widespread adoption of generative large language models (LLMs) have made them a pivotal workload in various applications. Today, LLM inference clusters receive a large number of queries with strict Service Level Objectives (SLOs). To achieve the desired performance, these models execute on power-hungry GPUs causing the inference clusters to consume large amount of energy and, consequently, result in excessive carbon emissions. Fortunately, we find that there is a great opportunity to exploit the heterogeneity in inference compute properties and fluctuations in inference workloads, to significantly improve energy-efficiency. However, such a diverse and dynamic environment creates a large search-space where different system configurations (e.g., number of instances, model parallelism, and GPU frequency) translate into different energy-performance trade-offs. To address these challenges, we propose DynamoLLM, the first energy-management framework for LLM inference environments. DynamoLLM automatically and dynamically reconfigures the inference cluster to optimize for energy and cost of LLM serving under the service's performance SLOs. We show that at a service-level, DynamoLLM conserves 53% energy and 38% operational carbon emissions, and reduces 61% cost to the customer, while meeting the latency SLOs. |
0903.1324 | Boris Altshuler | Boris L. Altshuler | Electron neutrino mass scale in spectrum of Dirac equation with the
5-form flux term on the AdS(5)xS(5) background | 11 pages | null | 10.1088/1126-6708/2009/08/091 | null | hep-th | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dimensional reduction from 10 to 5 dimensions of the IIB supergravity Dirac
equation written down on the AdS(5)xS(5) (+ self-dual 5-form) background
provides the unambiguous values of bulk masses of Fermions in the effective 5D
Randall Sundrum theory. The use of "untwisted" and "twisted" (hep-th/0012378)
boundary conditions at the UV and IR ends of the warped space-time results in
two towers of spectrum of Dirac equation: the ordinary one which is linear in
spectral number and the "twisted" one exponentially decreasing with growth of
spectral number. Taking into account of the Fermion-5-form interaction
(hep-th/9811106) gives the electron neutrino mass scale in the "twisted"
spectrum of Dirac equation. Profiles in extra space of the eigenfunctions of
left and right "neutrinos" drastically differ which may result in the extremely
small coupling of the light right neutrino with ordinary matter, thus joining
it to the plethora of candidates for Dark Matter.
| [
{
"created": "Sat, 7 Mar 2009 05:54:52 GMT",
"version": "v1"
}
] | 2015-05-13 | [
[
"Altshuler",
"Boris L.",
""
]
] | Dimensional reduction from 10 to 5 dimensions of the IIB supergravity Dirac equation written down on the AdS(5)xS(5) (+ self-dual 5-form) background provides the unambiguous values of bulk masses of Fermions in the effective 5D Randall Sundrum theory. The use of "untwisted" and "twisted" (hep-th/0012378) boundary conditions at the UV and IR ends of the warped space-time results in two towers of spectrum of Dirac equation: the ordinary one which is linear in spectral number and the "twisted" one exponentially decreasing with growth of spectral number. Taking into account of the Fermion-5-form interaction (hep-th/9811106) gives the electron neutrino mass scale in the "twisted" spectrum of Dirac equation. Profiles in extra space of the eigenfunctions of left and right "neutrinos" drastically differ which may result in the extremely small coupling of light right neutrino with ordinary matter thus joining it to plethora of candidates for Dark Matter. |
2204.04980 | Leonhard Hennig | Yuxuan Chen and Jonas Mikkelsen and Arne Binder and Christoph Alt and
Leonhard Hennig | A Comparative Study of Pre-trained Encoders for Low-Resource Named
Entity Recognition | Accepted at Repl4NLP 2022 (ACL) | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pre-trained language models (PLM) are effective components of few-shot named
entity recognition (NER) approaches when augmented with continued pre-training
on task-specific out-of-domain data or fine-tuning on in-domain data. However,
their performance in low-resource scenarios, where such data is not available,
remains an open question. We introduce an encoder evaluation framework, and use
it to systematically compare the performance of state-of-the-art pre-trained
representations on the task of low-resource NER. We analyze a wide range of
encoders pre-trained with different strategies, model architectures,
intermediate-task fine-tuning, and contrastive learning. Our experimental
results across ten benchmark NER datasets in English and German show that
encoder performance varies significantly, suggesting that the choice of encoder
for a specific low-resource scenario needs to be carefully evaluated.
| [
{
"created": "Mon, 11 Apr 2022 09:48:26 GMT",
"version": "v1"
}
] | 2022-04-12 | [
[
"Chen",
"Yuxuan",
""
],
[
"Mikkelsen",
"Jonas",
""
],
[
"Binder",
"Arne",
""
],
[
"Alt",
"Christoph",
""
],
[
"Hennig",
"Leonhard",
""
]
] | Pre-trained language models (PLM) are effective components of few-shot named entity recognition (NER) approaches when augmented with continued pre-training on task-specific out-of-domain data or fine-tuning on in-domain data. However, their performance in low-resource scenarios, where such data is not available, remains an open question. We introduce an encoder evaluation framework, and use it to systematically compare the performance of state-of-the-art pre-trained representations on the task of low-resource NER. We analyze a wide range of encoders pre-trained with different strategies, model architectures, intermediate-task fine-tuning, and contrastive learning. Our experimental results across ten benchmark NER datasets in English and German show that encoder performance varies significantly, suggesting that the choice of encoder for a specific low-resource scenario needs to be carefully evaluated. |