id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1904.08745 | edGNN: a Simple and Powerful GNN for Directed Labeled Graphs | The ability of a graph neural network (GNN) to leverage both the graph topology and graph labels is fundamental to building discriminative node and graph embeddings. Building on previous work, we theoretically show that edGNN, our model for directed labeled graphs, is as powerful as the Weisfeiler-Lehman algorithm for graph isomorphism. Our experiments support our theoretical findings, confirming that graph neural networks can be used effectively for inference problems on directed graphs with both node and edge labels. Code available at https://github.com/guillaumejaume/edGNN. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 128,165 |
2211.15521 | G^3: Geolocation via Guidebook Grounding | We demonstrate how language can improve geolocation: the task of predicting the location where an image was taken. Here we study explicit knowledge from human-written guidebooks that describe the salient and class-discriminative visual features humans use for geolocation. We propose the task of Geolocation via Guidebook Grounding that uses a dataset of StreetView images from a diverse set of locations and an associated textual guidebook for GeoGuessr, a popular interactive geolocation game. Our approach predicts a country for each image by attending over the clues automatically extracted from the guidebook. Supervising attention with country-level pseudo labels achieves the best performance. Our approach substantially outperforms a state-of-the-art image-only geolocation method, with an improvement of over 5% in Top-1 accuracy. Our dataset and code can be found at https://github.com/g-luo/geolocation_via_guidebook_grounding. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 333,304 |
2302.13475 | Elementwise Language Representation | We propose a new technique for computational language representation called elementwise embedding, in which a material (semantic unit) is abstracted into a horizontal concatenation of lower-dimensional element (character) embeddings. While elements are always characters, materials are arbitrary levels of semantic units, so the technique generalizes to any type of tokenization. To focus only on the important letters, the $n^{th}$ spellings of each semantic unit are aligned in the $n^{th}$ attention heads, then concatenated back into their original forms, creating unique embedding representations; they are jointly projected, thereby determining their own contextual importance. Technically, this framework is achieved by passing a sequence of materials, each consisting of $v$ elements, to a transformer having $h=v$ attention heads. As a pure embedding technique, elementwise embedding replaces the $w$-dimensional embedding table of a transformer model with $256$ $c$-dimensional elements (each corresponding to one of the UTF-8 bytes) where $c=w/v$. Using this novel approach, we show that the standard transformer architecture can be reused for all levels of language representation and can process much longer sequences at the same time-complexity, without "any" architectural modification or additional overhead. BERT trained with elementwise embedding outperforms its subword equivalent (the original implementation) in multilabel patent document classification, exhibiting superior robustness to domain-specificity and data imbalance, despite using $0.005\%$ of the embedding parameters. Experiments demonstrate the generalizability of the proposed method by successfully transferring these enhancements to the differently architected transformers CANINE and ALBERT. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 347,962 |
2410.18371 | Gibberish is All You Need for Membership Inference Detection in Contrastive Language-Audio Pretraining | Audio can disclose personally identifiable information (PII), particularly when combined with related text data. Therefore, it is essential to develop tools to detect privacy leakage in Contrastive Language-Audio Pretraining (CLAP). Existing membership inference attacks (MIAs) need audio as input, risking exposure of voiceprints and requiring costly shadow models. We first propose PRMID, a membership inference detector based on the probability ranking given by CLAP, which does not require training shadow models but still requires both audio and text of the individual as input. To address these limitations, we then propose USMID, a textual unimodal speaker-level membership inference detector that queries the target model using only text data. We randomly generate textual gibberish that is clearly not in the training dataset. We then extract feature vectors from these texts using the CLAP model and train a set of anomaly detectors on them. During inference, the feature vector of each test text is input into the anomaly detectors to determine whether the speaker is in the training set (anomalous) or not (normal). If available, USMID can further enhance detection by integrating real audio of the tested speaker. Extensive experiments on various CLAP model architectures and datasets demonstrate that USMID outperforms baseline methods using only text data. | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 501,854 |
2007.06284 | Artificial Neural Networks Jamming on the Beat | This paper addresses the issue of long-scale correlations that are characteristic of symbolic music and are a challenge for modern generative algorithms. It suggests a very simple workaround for this challenge, namely, generation of a drum pattern that can further be used as a foundation for melody generation. The paper presents a large dataset of drum patterns alongside corresponding melodies. It explores two possible methods for drum pattern generation. By exploring a latent space of drum patterns, one can generate new drum patterns in a given musical style. Finally, the paper demonstrates that a simple artificial neural network can be trained to generate melodies corresponding to the drum patterns used as inputs. The resulting system can be used for end-to-end generation of symbolic music with song-like structure and higher long-scale correlations between the notes. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 186,968 |
2103.06766 | Stable Tuple Embeddings for Dynamic Databases | We study the problem of computing an embedding of the tuples of a relational database in a manner that is extensible to dynamic changes of the database. In this problem, the embedding should be stable in the sense that it should not change on the existing tuples due to the embedding of newly inserted tuples (as database applications might already rely on existing embeddings); at the same time, the embedding of all tuples, old and new, should retain high quality. This task is challenging since inter-dependencies among the embeddings of different entities are inherent in state-of-the-art embedding techniques for structured data. We study two approaches to solving the problem. The first is an adaptation of Node2Vec to dynamic databases. The second is the FoRWaRD algorithm (Foreign Key Random Walk Embeddings for Relational Databases) that draws from embedding techniques for general graphs and knowledge graphs, and inherently utilizes the schema and its key and foreign-key constraints. We evaluate the embedding algorithms using a collection of downstream tasks of column prediction over geographical and biological domains. We find that in the traditional static setting, our two embedding methods achieve comparable results that are competitive with the state of the art for the specific applications. In the dynamic setting, we find that the FoRWaRD algorithm generally outperforms and runs faster than the alternatives; moreover, it features only a mild reduction of quality even when the database consists of more than half newly inserted tuples after the initial training of the embedding. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 224,402 |
2412.03681 | Acquired TASTE: Multimodal Stance Detection with Textual and Structural Embeddings | Stance detection plays a pivotal role in enabling an extensive range of downstream applications, from discourse parsing to tracing the spread of fake news and the denial of scientific facts. While most stance classification models rely on textual representation of the utterance in question, prior work has demonstrated the importance of the conversational context in stance detection. In this work we introduce TASTE -- a multimodal architecture for stance detection that harmoniously fuses Transformer-based content embedding with unsupervised structural embedding. Through the fine-tuning of a pretrained transformer and the amalgamation with social embedding via a Gated Residual Network (GRN) layer, our model adeptly captures the complex interplay between content and conversational structure in determining stance. TASTE achieves state-of-the-art results on common benchmarks, significantly outperforming an array of strong baselines. Comparative evaluations underscore the benefits of social grounding -- emphasizing the criticality of concurrently harnessing both content and structure for enhanced stance detection. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 514,060 |
2409.13951 | Deep learning for fast segmentation and critical dimension metrology & characterization enabling AR/VR design and fabrication | Quantitative analysis of microscopy images is essential in the design and fabrication of components used in augmented reality/virtual reality (AR/VR) modules. However, segmenting regions of interest (ROIs) from these complex images and extracting critical dimensions (CDs) requires novel techniques, such as deep learning models which are key for actionable decisions on process, material and device optimization. In this study, we report on the fine-tuning of a pre-trained Segment Anything Model (SAM) using a diverse dataset of electron microscopy images. We employed methods such as low-rank adaptation (LoRA) to reduce training time and enhance the accuracy of ROI extraction. The model's ability to generalize to unseen images facilitates zero-shot learning and supports a CD extraction model that precisely extracts CDs from the segmented ROIs. We demonstrate the accurate extraction of binary images from cross-sectional images of surface relief gratings (SRGs) and Fresnel lenses in both single and multiclass modes. Furthermore, these binary images are used to identify transition points, aiding in the extraction of relevant CDs. The combined use of the fine-tuned segmentation model and the CD extraction model offers substantial advantages to various industrial applications by enhancing analytical capabilities, time to data and insights, and optimizing manufacturing processes. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 490,251 |
2302.03978 | Structural hierarchical learning for energy networks | Many sectors nowadays require accurate and coherent predictions across their organization to operate effectively. Otherwise, decision-makers would be planning using disparate views of the future, resulting in inconsistent decisions across their sectors. To secure coherency across hierarchies, recent research has put forward hierarchical learning, a coherency-informed hierarchical regressor leveraging the power of machine learning thanks to a custom loss function founded on optimal reconciliation methods. While promising potential was outlined, results were discordant: coherency information improved hierarchical forecasts in only one setting. This work proposes to tackle these obstacles by investigating custom neural network designs inspired by the topological structures of hierarchies. Results unveil that, in a data-limited setting, structural models with fewer connections perform best overall and demonstrate the value of coherency information for both accuracy and coherency forecasting performance, provided individual forecasts were generated within reasonable accuracy limits. Overall, this work expands and improves hierarchical learning methods thanks to a structurally-scaled learning mechanism extension coupled with tailored network designs, producing a resourceful, data-efficient, and information-rich learning process. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 344,543 |
2205.07182 | Fair Bayes-Optimal Classifiers Under Predictive Parity | Increasing concerns about disparate effects of AI have motivated a great deal of work on fair machine learning. Existing works mainly focus on independence- and separation-based measures (e.g., demographic parity, equality of opportunity, equalized odds), while sufficiency-based measures such as predictive parity are much less studied. This paper considers predictive parity, which requires equalizing the probability of success given a positive prediction among different protected groups. We prove that, if the overall performances of different groups vary only moderately, all fair Bayes-optimal classifiers under predictive parity are group-wise thresholding rules. Perhaps surprisingly, this may not hold if group performance levels vary widely; in this case we find that predictive parity among protected groups may lead to within-group unfairness. We then propose an algorithm we call FairBayes-DPP, aiming to ensure predictive parity when our condition is satisfied. FairBayes-DPP is an adaptive thresholding algorithm that aims to achieve predictive parity, while also seeking to maximize test accuracy. We provide supporting experiments conducted on synthetic and empirical data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 296,508 |
2110.09904 | Learning Robotic Manipulation Skills Using an Adaptive Force-Impedance Action Space | Intelligent agents must be able to think fast and slow to perform elaborate manipulation tasks. Reinforcement Learning (RL) has led to many promising results on a range of challenging decision-making tasks. However, in real-world robotics, these methods still struggle, as they require large amounts of expensive interactions and have slow feedback loops. On the other hand, fast human-like adaptive control methods can optimize complex robotic interactions, yet fail to integrate the multimodal feedback needed for unstructured tasks. In this work, we propose to factor the learning problem into a hierarchical learning and adaptation architecture to get the best of both worlds. The framework consists of two components: a slow reinforcement learning policy optimizing the task strategy given multimodal observations, and a fast, real-time adaptive control policy continuously optimizing the motion, stability, and effort of the manipulator. We combine these components through a bio-inspired action space that we call AFORCE. We demonstrate the new action space on a contact-rich manipulation task on real hardware and evaluate its performance on three simulated manipulation tasks. Our experiments show that AFORCE drastically improves sample efficiency while reducing energy consumption and improving safety. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 261,968 |
2501.19399 | Scalable-Softmax Is Superior for Attention | The maximum element of the vector output by the Softmax function approaches zero as the input vector size increases. Transformer-based language models rely on Softmax to compute attention scores, causing the attention distribution to flatten as the context size grows. This reduces the model's ability to prioritize key information effectively and potentially limits its length generalization. To address this problem, we propose Scalable-Softmax (SSMax), which replaces Softmax in scenarios where the input vector size varies. SSMax can be seamlessly integrated into existing Transformer-based architectures. Experimental results in language modeling show that models using SSMax not only achieve faster loss reduction during pretraining but also significantly improve performance in long contexts and key information retrieval. Furthermore, an analysis of attention scores reveals that SSMax enables the model to focus attention on key information even in long contexts. Additionally, although models that use SSMax from the beginning of pretraining achieve better length generalization, those that have already started pretraining can still gain some of this ability by replacing Softmax in the attention layers with SSMax, either during or after pretraining. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 529,149 |
cs/0111012 | Intelligent Anticipated Exploration of Web Sites | In this paper we describe a web search agent, called Global Search Agent (hereafter GSA for short). GSA integrates and enhances several search techniques in order to achieve significant improvements in the user-perceived quality of delivered information as compared to usual web search engines. GSA features intelligent merging of relevant documents from different search engines, anticipated selective exploration and evaluation of links from the current result set, automated derivation of refined queries based on user relevance feedback. System architecture as well as experimental accounts are also illustrated. | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 537,454 |
2112.13058 | Tri-Transformer Hawkes Process: Three Heads are better than one | Most of the real-world data we encounter are asynchronous event sequences, so the last decades have seen various point processes applied to social networks, electronic medical records, and financial transactions. At first, the Hawkes process and its variants, which can simulate the self-triggering and mutual-triggering patterns between different events in complex sequences in a clear and quantitative way, were the most popular. Later on, with the advances of neural networks, neural Hawkes processes were proposed one after another and gradually became a research hotspot. The transformer Hawkes process (THP) brought a huge performance improvement, setting off a new wave of transformer-based neural Hawkes processes. However, THP does not make full use of the information on occurrence time and event type in the asynchronous event sequence. It simply adds the encoding of event-type conversion and the positional encoding of time conversion to the source encoding. At the same time, a learner built from a single transformer suffers from an inescapable learning bias. To mitigate these problems, we propose a tri-transformer Hawkes process (Tri-THP) model, in which event and time information are added to the dot-product attention as auxiliary information to form a new multi-head attention. The effectiveness of Tri-THP is demonstrated by a series of well-designed experiments on both real-world and synthetic data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 273,129 |
2304.05144 | Phase Calibration of Distributed Antenna Arrays | Antenna arrays can be either reciprocity calibrated (R-calibrated), which facilitates reciprocity-based beamforming, or fully calibrated (F-calibrated), which additionally facilitates transmission and reception in specific physical directions. We first expose, to provide context, the fundamental principles of over-the-air R- and F-calibration of distributed arrays. We then describe a new method for calibration of two arrays that are individually F-calibrated, such that the combined array becomes jointly F-calibrated. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 357,509 |
1706.09865 | Generalising Random Forest Parameter Optimisation to Include Stability and Cost | Random forests are among the most popular classification and regression methods used in industrial applications. To be effective, the parameters of random forests must be carefully tuned. This is usually done by choosing values that minimize the prediction error on a held-out dataset. We argue that error reduction is only one of several metrics that must be considered when optimizing random forest parameters for commercial applications. We propose a novel metric that captures the stability of random forest predictions, which we argue is key for scenarios that require successive predictions. We motivate the need for multi-criteria optimization by showing that in practical applications, simply choosing the parameters that lead to the lowest error can introduce unnecessary costs and produce predictions that are not stable across independent runs. To optimize this multi-criteria trade-off, we present a new framework that efficiently finds a principled balance between these three considerations using Bayesian optimisation. The pitfalls of optimising forest parameters purely for error reduction are demonstrated using two publicly available real-world datasets. We show that our framework leads to parameter settings that are markedly different from the values discovered by error reduction metrics. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 76,215 |
2309.03773 | Extending Transductive Knowledge Graph Embedding Models for Inductive Logical Relational Inference | Many downstream inference tasks for knowledge graphs, such as relation prediction, have been handled successfully by knowledge graph embedding techniques in the transductive setting. To address the inductive setting wherein new entities are introduced into the knowledge graph at inference time, more recent work opts for models which learn implicit representations of the knowledge graph through a complex function of a network's subgraph structure, often parametrized by graph neural network architectures. These come at the cost of increased parametrization, reduced interpretability and limited generalization to other downstream inference tasks. In this work, we bridge the gap between traditional transductive knowledge graph embedding approaches and more recent inductive relation prediction models by introducing a generalized form of harmonic extension which leverages representations learned through transductive embedding methods to infer representations of new entities introduced at inference time as in the inductive setting. This harmonic extension technique provides the best such approximation, can be implemented via an efficient iterative scheme, and can be employed to answer a family of conjunctive logical queries over the knowledge graph, further expanding the capabilities of transductive embedding methods. In experiments on a number of large-scale knowledge graph embedding benchmarks, we find that this approach for extending the functionality of transductive knowledge graph embedding models to perform knowledge graph completion and answer logical queries in the inductive setting is competitive with--and in some scenarios outperforms--several state-of-the-art models derived explicitly for such inductive tasks. | false | false | false | true | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 390,503 |
2408.17175 | Codec Does Matter: Exploring the Semantic Shortcoming of Codec for Audio Language Model | Recent advancements in audio generation have been significantly propelled by the capabilities of Large Language Models (LLMs). The existing research on audio LLM has primarily focused on enhancing the architecture and scale of audio language models, as well as leveraging larger datasets, and generally, acoustic codecs, such as EnCodec, are used for audio tokenization. However, these codecs were originally designed for audio compression, which may lead to suboptimal performance in the context of audio LLM. Our research aims to address the shortcomings of current audio LLM codecs, particularly their challenges in maintaining semantic integrity in generated audio. For instance, existing methods like VALL-E, which condition acoustic token generation on text transcriptions, often suffer from content inaccuracies and elevated word error rates (WER) due to semantic misinterpretations of acoustic tokens, resulting in word skipping and errors. To overcome these issues, we propose a straightforward yet effective approach called X-Codec. X-Codec incorporates semantic features from a pre-trained semantic encoder before the Residual Vector Quantization (RVQ) stage and introduces a semantic reconstruction loss after RVQ. By enhancing the semantic ability of the codec, X-Codec significantly reduces WER in speech synthesis tasks and extends these benefits to non-speech applications, including music and sound generation. Our experiments in text-to-speech, music continuation, and text-to-sound tasks demonstrate that integrating semantic information substantially improves the overall performance of language models in audio generation. Our code and demo are available (Demo: https://x-codec-audio.github.io Code: https://github.com/zhenye234/xcodec) | false | false | true | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 484,609 |
2305.18259 | GlyphControl: Glyph Conditional Control for Visual Text Generation | Recently, there has been an increasing interest in developing diffusion-based text-to-image generative models capable of generating coherent and well-formed visual text. In this paper, we propose a novel and efficient approach called GlyphControl to address this task. Unlike existing methods that rely on character-aware text encoders like ByT5 and require retraining of text-to-image models, our approach leverages additional glyph conditional information to enhance the performance of the off-the-shelf Stable-Diffusion model in generating accurate visual text. By incorporating glyph instructions, users can customize the content, location, and size of the generated text according to their specific requirements. To facilitate further research in visual text generation, we construct a training benchmark dataset called LAION-Glyph. We evaluate the effectiveness of our approach by measuring OCR-based metrics, CLIP score, and FID of the generated visual text. Our empirical evaluations demonstrate that GlyphControl outperforms the recent DeepFloyd IF approach in terms of OCR accuracy, CLIP score, and FID, highlighting the efficacy of our method. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 368,921 |
2005.06835 | RegQCNET: Deep Quality Control for Image-to-template Brain MRI Affine Registration | Affine registration of one or several brain image(s) onto a common reference space is a necessary prerequisite for many image processing tasks, such as brain segmentation or functional analysis. Manual assessment of registration quality is a tedious and time-consuming task, especially in studies comprising a large amount of data. An automated and reliable quality control (QC) becomes mandatory. Moreover, the computation time of the QC must also be compatible with the processing of massive datasets. Therefore, automated deep neural network approaches appear to be a method of choice for automatically assessing registration quality. In the current study, a compact 3D convolutional neural network (CNN), referred to as RegQCNET, is introduced to quantitatively predict the amplitude of an affine registration mismatch between a registered image and a reference template. This quantitative estimation of registration error is expressed in metric units. Therefore, a meaningful task-specific threshold can be manually or automatically defined in order to distinguish usable and non-usable images. The robustness of the proposed RegQCNET is first analyzed on lifespan brain images undergoing various simulated spatial transformations and intensity variations between training and testing. Secondly, the potential of RegQCNET to classify images as usable or non-usable is evaluated using both manual and automatic thresholds. During our experiments, automatic thresholds are estimated using several computer-assisted classification models through cross-validation. To this end we used expert visual quality control estimated on a lifespan cohort of 3953 brains. Finally, the accuracy of RegQCNET is compared to that of usual image features. Results show that the proposed deep learning QC is robust, fast, and accurate at estimating affine registration error in a processing pipeline. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 177,125 |
1705.09847 | Lifelong Generative Modeling | Lifelong learning is the problem of learning multiple consecutive tasks in a sequential manner, where knowledge gained from previous tasks is retained and used to aid future learning over the lifetime of the learner. It is essential towards the development of intelligent machines that can adapt to their surroundings. In this work we focus on a lifelong learning approach to unsupervised generative modeling, where we continuously incorporate newly observed distributions into a learned model. We do so through a student-teacher Variational Autoencoder architecture which allows us to learn and preserve all the distributions seen so far, without the need to retain the past data nor the past models. Through the introduction of a novel cross-model regularizer, inspired by a Bayesian update rule, the student model leverages the information learned by the teacher, which acts as a probabilistic knowledge store. The regularizer reduces the effect of catastrophic interference that appears when we learn over sequences of distributions. We validate our model's performance on sequential variants of MNIST, FashionMNIST, PermutedMNIST, SVHN and Celeb-A and demonstrate that our model mitigates the effects of catastrophic interference faced by neural networks in sequential learning scenarios. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 74,280 |
1505.04098 | Asymptotically Optimal Planning by Feasible Kinodynamic Planning in State-Cost Space | This paper presents an equivalence between feasible kinodynamic planning and optimal kinodynamic planning, in that any optimal planning problem can be transformed into a series of feasible planning problems in a state-cost space whose solutions approach the optimum. This transformation gives rise to a meta-algorithm that produces an asymptotically optimal planner, given any feasible kinodynamic planner as a subroutine. The meta-algorithm is proven to be asymptotically optimal, and a formula is derived relating expected running time and solution suboptimality. It is directly applicable to a wide range of optimal planning problems because it does not resort to the use of steering functions or numerical boundary-value problem solvers. On a set of benchmark problems, it is demonstrated to perform, using the EST and RRT algorithms as subroutines, at a superior or comparable level to related planners. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 43,147 |
2109.04939 | Modeling Human Sentence Processing with Left-Corner Recurrent Neural Network Grammars | In computational linguistics, it has been shown that hierarchical structures make language models (LMs) more human-like. However, the previous literature has been agnostic about a parsing strategy of the hierarchical models. In this paper, we investigated whether hierarchical structures make LMs more human-like, and if so, which parsing strategy is most cognitively plausible. In order to address this question, we evaluated three LMs against human reading times in Japanese with head-final left-branching structures: Long Short-Term Memory (LSTM) as a sequential model and Recurrent Neural Network Grammars (RNNGs) with top-down and left-corner parsing strategies as hierarchical models. Our computational modeling demonstrated that left-corner RNNGs outperformed top-down RNNGs and LSTM, suggesting that hierarchical and left-corner architectures are more cognitively plausible than top-down or sequential architectures. In addition, the relationships between the cognitive plausibility and (i) perplexity, (ii) parsing, and (iii) beam size will also be discussed. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 254,599 |
2411.11892 | Green My LLM: Studying the key factors affecting the energy consumption of code assistants | In recent years, Large Language Models (LLMs) have significantly improved in generating high-quality code, enabling their integration into developers' Integrated Development Environments (IDEs) as code assistants. These assistants, such as GitHub Copilot, deliver real-time code suggestions and can greatly enhance developers' productivity. However, the environmental impact of these tools, in particular their energy consumption, remains a key concern. This paper investigates the energy consumption of LLM-based code assistants by simulating developer interactions with GitHub Copilot and analyzing various configuration factors. We collected a dataset of development traces from 20 developers and conducted extensive software project development simulations to measure energy usage under different scenarios. Our findings reveal that the energy consumption and performance of code assistants are influenced by various factors, such as the number of concurrent developers, model size, quantization methods, and the use of streaming. Notably, a substantial portion of generation requests made by GitHub Copilot is either canceled or rejected by developers, indicating a potential area for reducing wasted computations. Based on these findings, we share actionable insights into optimizing configurations for different use cases, demonstrating that careful adjustments can lead to significant energy savings. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 509,212 |
2201.00384 | On the effectiveness of Randomized Signatures as Reservoir for Learning Rough Dynamics | Many finance, physics, and engineering phenomena are modeled by continuous-time dynamical systems driven by highly irregular (stochastic) inputs. A powerful tool to perform time series analysis in this context is rooted in rough path theory and leverages the so-called Signature Transform. This algorithm enjoys strong theoretical guarantees but is hard to scale to high-dimensional data. In this paper, we study a recently derived random projection variant called Randomized Signature, obtained using the Johnson-Lindenstrauss Lemma. We provide an in-depth experimental evaluation of the effectiveness of the Randomized Signature approach, in an attempt to showcase the advantages of this reservoir to the community. Specifically, we find that this method is preferable to the truncated Signature approach and alternative deep learning techniques in terms of model complexity, training time, accuracy, robustness, and data hungriness. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 273,949 |
2004.05167 | Individual Fairness in Pipelines | It is well understood that a system built from individually fair components may not itself be individually fair. In this work, we investigate individual fairness under pipeline composition. Pipelines differ from ordinary sequential or repeated composition in that individuals may drop out at any stage, and classification in subsequent stages may depend on the remaining "cohort" of individuals. As an example, a company might hire a team for a new project and at a later point promote the highest performer on the team. Unlike other repeated classification settings, where the degree of unfairness degrades gracefully over multiple fair steps, the degree of unfairness in pipelines can be arbitrary, even in a pipeline with just two stages. Guided by a panoply of real-world examples, we provide a rigorous framework for evaluating different types of fairness guarantees for pipelines. We show that na\"{i}ve auditing is unable to uncover systematic unfairness and that, in order to ensure fairness, some form of dependence must exist between the design of algorithms at different stages in the pipeline. Finally, we provide constructions that permit flexibility at later stages, meaning that there is no need to lock in the entire pipeline at the time that the early stage is constructed. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 172,109 |
2210.01794 | Implicit Warping for Animation with Image Sets | We present a new implicit warping framework for image animation using sets of source images through the transfer of the motion of a driving video. A single cross-modal attention layer is used to find correspondences between the source images and the driving image, choose the most appropriate features from different source images, and warp the selected features. This is in contrast to the existing methods that use explicit flow-based warping, which is designed for animation using a single source and does not extend well to multiple sources. The pick-and-choose capability of our framework helps it achieve state-of-the-art results on multiple datasets for image animation using both single and multiple source images. The project website is available at https://deepimagination.cc/implicit_warping/ | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 321,402 |
1905.00084 | A Probabilistic Approach for Demand-Aware Ride-Sharing Optimization | Ride-sharing is a modern urban-mobility paradigm with tremendous potential in reducing congestion and pollution. Demand-aware design is a promising avenue for addressing a critical challenge in ride-sharing systems, namely joint optimization of request-vehicle assignment and routing for a fleet of vehicles. In this paper, we develop a probabilistic demand-aware framework to tackle the challenge. We focus on maximizing the expected number of passenger pickups, given the probability distributions of future demands. The key idea of our approach is to assign requests to vehicles in a probabilistic manner. It differentiates our work from existing ones and allows us to explore a richer design space to tackle the request-vehicle assignment puzzle with a performance guarantee but still keeping the final solution practically implementable. The optimization problem is non-convex, combinatorial, and NP-hard in nature. As a key contribution, we explore the problem structure and propose an elegant approximation of the objective function to develop a dual-subgradient heuristic. We characterize a condition under which the heuristic generates a $\left(1-1/e\right)$ approximation solution. Our solution is simple, scalable, and amenable to practical implementation. Results of numerical experiments based on real-world traces in Manhattan show that, as compared to a conventional demand-oblivious scheme, our demand-aware solution improves the passenger pickups by up to 46%. The results also show that joint optimization at the fleet level leads to 19% more pickups than that by separate optimizations at individual vehicles. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 129,394 |
1908.01289 | Dueling Posterior Sampling for Preference-Based Reinforcement Learning | In preference-based reinforcement learning (RL), an agent interacts with the environment while receiving preferences instead of absolute feedback. While there is increasing research activity in preference-based RL, the design of formal frameworks that admit tractable theoretical analysis remains an open challenge. Building upon ideas from preference-based bandit learning and posterior sampling in RL, we present DUELING POSTERIOR SAMPLING (DPS), which employs preference-based posterior sampling to learn both the system dynamics and the underlying utility function that governs the preference feedback. As preference feedback is provided on trajectories rather than individual state-action pairs, we develop a Bayesian approach for the credit assignment problem, translating preferences to a posterior distribution over state-action reward models. We prove an asymptotic Bayesian no-regret rate for DPS with a Bayesian linear regression credit assignment model. This is the first regret guarantee for preference-based RL to our knowledge. We also discuss possible avenues for extending the proof methodology to other credit assignment models. Finally, we evaluate the approach empirically, showing competitive performance against existing baselines. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 140,721 |
2203.03833 | Quasi-Balanced Self-Training on Noise-Aware Synthesis of Object Point Clouds for Closing Domain Gap | Semantic analyses of object point clouds are largely driven by the release of benchmark datasets, including synthetic ones whose instances are sampled from object CAD models. However, learning from synthetic data may not generalize to practical scenarios, where point clouds are typically incomplete, non-uniformly distributed, and noisy. Such a challenge of Simulation-to-Reality (Sim2Real) domain gap could be mitigated via learning algorithms of domain adaptation; however, we argue that generation of synthetic point clouds via more physically realistic rendering is a powerful alternative, as systematic non-uniform noise patterns can be captured. To this end, we propose an integrated scheme consisting of physically realistic synthesis of object point clouds, by rendering stereo images via projection of speckle patterns onto CAD models, and a novel quasi-balanced self-training designed for a more balanced data distribution via sparsity-driven selection of pseudo-labeled samples for long-tailed classes. Experiment results verify the effectiveness of our method as well as both of its modules for unsupervised domain adaptation on point cloud classification, achieving state-of-the-art performance. Source codes and the SpeckleNet synthetic dataset are available at https://github.com/Gorilla-Lab-SCUT/QS3. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 284,243 |
1205.6917 | Robust self-triggered coordination with ternary controllers | This paper regards coordination of networked systems, which is studied in the framework of hybrid dynamical systems. We design a coordination scheme which combines the use of ternary controllers with a self-triggered communication policy. The communication policy requires the agents to collect, at each sampling time, relative measurements of their neighbors' states: the collected information is then used to update the control and determine the following sampling time. We prove that the proposed scheme ensures finite-time convergence to a neighborhood of a consensus state. We then study the robustness of the proposed self-triggered coordination system with respect to skews in the agents' local clocks, to delays, and to limited precision in communication. Furthermore, we present two significant variations of our scheme. First, we design a time-varying controller which asymptotically drives the system to consensus. Second, we adapt our framework to a communication model in which an agent does not poll all its neighbors simultaneously, but single neighbors instead. This communication policy actually leads to a self-triggered "gossip" coordination system. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 16,259 |
1208.4042 | Measuring quality, reputation and trust in online communities | In the Internet era, information overload and the challenge of detecting quality content have raised the issue of how to rank both resources and users in online communities. In this paper we develop a general ranking method that can simultaneously evaluate users' reputation and objects' quality in an iterative procedure, and that exploits the trust relationships and social acquaintances of users as an additional source of information. We test our method on two real online communities, the EconoPhysics forum and the Last.fm music catalogue, and determine how different variants of the algorithm influence the resultant ranking. We show the benefits of considering trust relationships, and identify the form of the algorithm best suited to common situations. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 18,170 |
2310.13213 | MultiCoNER v2: a Large Multilingual dataset for Fine-grained and Noisy Named Entity Recognition | We present MULTICONER V2, a dataset for fine-grained Named Entity Recognition covering 33 entity classes across 12 languages, in both monolingual and multilingual settings. This dataset aims to tackle the following practical challenges in NER: (i) effective handling of fine-grained classes that include complex entities like movie titles, and (ii) performance degradation due to noise generated from typing mistakes or OCR errors. The dataset is compiled from open resources like Wikipedia and Wikidata, and is publicly available. Evaluation based on the XLM-RoBERTa baseline highlights the unique challenges posed by MULTICONER V2: (i) the fine-grained taxonomy is challenging, where the scores are low with macro-F1=0.63 (across all languages), and (ii) the corruption strategy significantly impairs performance, with entity corruption resulting in 9% lower performance relative to non-entity corruptions across all languages. This highlights the greater impact of entity noise in contrast to context noise. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 401,336 |
2405.16164 | Acquiring Better Load Estimates by Combining Anomaly and Change Point Detection in Power Grid Time-series Measurements | In this paper we present a novel methodology for automatic anomaly and switch event filtering to improve load estimation in power grid systems. By leveraging unsupervised methods with supervised optimization, our approach prioritizes interpretability while ensuring robust and generalizable performance on unseen data. Through experimentation, a combination of binary segmentation for change point detection and statistical process control for anomaly detection emerges as the most effective strategy, specifically when ensembled in a novel sequential manner. Results indicate the clear wasted potential when filtering is not applied. The automatic load estimation is also fairly accurate, with approximately 90% of estimates falling within a 10% error margin, with only a single significant failure in both the minimum and maximum load estimates across 60 measurements in the test set. Our methodology's interpretability makes it particularly suitable for critical infrastructure planning, thereby enhancing decision-making processes. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 457,291 |
2311.09646 | Reconstructing Continuous Light Field From Single Coded Image | We propose a method for reconstructing a continuous light field of a target scene from a single observed image. Our method takes the best of two worlds: joint aperture-exposure coding for compressive light-field acquisition, and a neural radiance field (NeRF) for view synthesis. Joint aperture-exposure coding implemented in a camera enables effective embedding of 3-D scene information into an observed image, but in previous works, it was used only for reconstructing discretized light-field views. NeRF-based neural rendering enables high quality view synthesis of a 3-D scene from continuous viewpoints, but when only a single image is given as the input, it struggles to achieve satisfactory quality. Our method integrates these two techniques into an efficient and end-to-end trainable pipeline. Trained on a wide variety of scenes, our method can reconstruct continuous light fields accurately and efficiently without any test time optimization. To our knowledge, this is the first work to bridge two worlds: camera design for efficiently acquiring 3-D information and neural rendering. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 408,224 |
2109.02593 | General-Purpose Question-Answering with Macaw | Despite the successes of pretrained language models, there are still few high-quality, general-purpose QA systems that are freely available. In response, we present Macaw, a versatile, generative question-answering (QA) system that we are making available to the community. Macaw is built on UnifiedQA, itself built on T5, and exhibits strong performance, zero-shot, on a wide variety of topics, including outperforming GPT-3 by over 10% (absolute) on Challenge300, a suite of 300 challenge questions, despite being an order of magnitude smaller (11 billion vs. 175 billion parameters). In addition, Macaw allows different permutations ("angles") of its inputs and outputs to be used, for example Macaw can take a question and produce an answer; or take an answer and produce a question; or take an answer and question, and produce multiple-choice options. We describe the system, and illustrate a variety of question types where it produces surprisingly good answers, well outside the training setup. We also identify question classes where it still appears to struggle, offering insights into the limitations of pretrained language models. Macaw is freely available, and we hope that it proves useful to the community. Macaw is available at https://github.com/allenai/macaw | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 253,800 |
2402.01306 | KTO: Model Alignment as Prospect Theoretic Optimization | Kahneman & Tversky's $\textit{prospect theory}$ tells us that humans perceive random variables in a biased but well-defined manner (1992); for example, humans are famously loss-averse. We show that objectives for aligning LLMs with human feedback implicitly incorporate many of these biases -- the success of these objectives (e.g., DPO) over cross-entropy minimization can partly be ascribed to them belonging to a family of loss functions that we call $\textit{human-aware losses}$ (HALOs). However, the utility functions these methods attribute to humans still differ from those in the prospect theory literature. Using a Kahneman-Tversky model of human utility, we propose a HALO that directly maximizes the utility of generations instead of maximizing the log-likelihood of preferences, as current methods do. We call this approach KTO, and it matches or exceeds the performance of preference-based methods at scales from 1B to 30B, despite only learning from a binary signal of whether an output is desirable. More broadly, our work suggests that there is no one HALO that is universally superior; the best loss depends on the inductive biases most appropriate for a given setting, an oft-overlooked consideration. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 425,963 |
1101.2378 | Extracting Features from Ratings: The Role of Factor Models | Performing effective preference-based data retrieval requires detailed and preferentially meaningful structured information about the current user as well as the items under consideration. A common problem is that representations of items often consist of mere technical attributes, which do not resemble human perception. This is particularly true for integral items such as movies or songs. It is often claimed that meaningful item features could be extracted from collaborative rating data, which is becoming available through social networking services. However, there is only anecdotal evidence supporting this claim; if it is true, the extracted information could be very valuable for preference-based data retrieval. In this paper, we propose a methodology to systematically check this common claim. We performed a preliminary investigation on a large collection of movie ratings and present initial evidence. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 8,798 |
2308.04608 | Offline coupling of segregated multi-physical simulations with consistent boundary conditions and source terms based on scattered data | This article presents the openCFS submodule scattered data reader for coupling multi-physical simulations performed in different simulation programs. Consider, for instance, a forward coupling of a surface vibration simulation (mechanical system) to an acoustic propagation simulation using time-dependent acoustic absorbing material as a noise mitigation measure. The nearest-neighbor search of the target and source points for the interpolation is performed using the FLANN or the CGAL library. In doing so, the coupled field (e.g., surface velocity) is interpolated from a source representation, consisting of field values physically stored and organized in a file directory, to a target representation, namely the quadrature points in the case of the finite element method. A test case of the functionality is presented in the "testsuite" module of the openCFS software, called "Abc2dcsvt". This scattered data reader module was successfully applied in numerous studies on flow-induced sound generation. Within this short article, the functionality and usability of this module are described. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 384,474 |
1309.0872 | Producing a Set of Models for the Iron Homeostasis Network | This paper presents a method for modeling biological systems which combines formal techniques on intervals, numerical simulations and satisfaction of Signal Temporal Logic (STL) formulas. The main modeling challenge addressed by this approach is the large uncertainty in the values of the parameters due to the experimental difficulties of getting accurate biological data. This method considers intervals for each parameter and a formal description of the expected behavior of the model. In a first step, it produces reduced intervals of possible parameter values. Then by performing a systematic search in these intervals, it defines sets of parameter values used in the next step. This procedure aims at finding a sub-space where the model robustly behaves as expected. We apply this method to the modeling of the cellular iron homeostasis network in erythroid progenitors. The produced model describes explicitly the regulation mechanism which acts at the translational level. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 26,819 |
2105.03852 | Towards Dynamic Feature Selection with Attention to Assist Banking Customers in Establishing a New Business | Establishing a new business may involve knowledge acquisition in various areas, from personal to business and marketing sources. This task is challenging as it requires examining various data islands to uncover hidden patterns and unknown correlations such as purchasing behavior, consumer buying signals, and demographic and socioeconomic attributes of different locations. This paper introduces a novel framework for extracting and identifying important features from banking and non-banking data sources to address this challenge. We present an attention-based supervised feature selection approach to select important and relevant features which contribute most to the customer's query regarding establishing a new business. We report on experiments conducted on an openly available dataset created from the Kaggle and UCI machine learning repositories. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 234,295
2411.15355 | UniGaussian: Driving Scene Reconstruction from Multiple Camera Models via Unified Gaussian Representations | Urban scene reconstruction is crucial for real-world autonomous driving simulators. Although existing methods have achieved photorealistic reconstruction, they mostly focus on pinhole cameras and neglect fisheye cameras. In fact, how to effectively simulate fisheye cameras in driving scenes remains an unsolved problem. In this work, we propose UniGaussian, a novel approach that learns a unified 3D Gaussian representation from multiple camera models for urban scene reconstruction in autonomous driving. Our contributions are two-fold. First, we propose a new differentiable rendering method that distorts 3D Gaussians using a series of affine transformations tailored to fisheye camera models. This addresses the compatibility issue of 3D Gaussian splatting with fisheye cameras, which is hindered by light ray distortion caused by lenses or mirrors. Besides, our method maintains real-time rendering while ensuring differentiability. Second, built on the differentiable rendering method, we design a new framework that learns a unified Gaussian representation from multiple camera models. By applying affine transformations to adapt different camera models and regularizing the shared Gaussians with supervision from different modalities, our framework learns a unified 3D Gaussian representation with input data from multiple sources and achieves holistic driving scene understanding. As a result, our approach models multiple sensors (pinhole and fisheye cameras) and modalities (depth, semantic, normal and LiDAR point clouds). Our experiments show that our method achieves superior rendering quality and fast rendering speed for driving scene simulation. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 510,571
2412.18808 | Provable Uncertainty Decomposition via Higher-Order Calibration | We give a principled method for decomposing the predictive uncertainty of a model into aleatoric and epistemic components with explicit semantics relating them to the real-world data distribution. While many works in the literature have proposed such decompositions, they lack the type of formal guarantees we provide. Our method is based on the new notion of higher-order calibration, which generalizes ordinary calibration to the setting of higher-order predictors that predict mixtures over label distributions at every point. We show how to measure as well as achieve higher-order calibration using access to $k$-snapshots, namely examples where each point has $k$ independent conditional labels. Under higher-order calibration, the estimated aleatoric uncertainty at a point is guaranteed to match the real-world aleatoric uncertainty averaged over all points where the prediction is made. To our knowledge, this is the first formal guarantee of this type that places no assumptions whatsoever on the real-world data distribution. Importantly, higher-order calibration is also applicable to existing higher-order predictors such as Bayesian and ensemble models and provides a natural evaluation metric for such models. We demonstrate through experiments that our method produces meaningful uncertainty decompositions for image classification. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 520,581 |
1705.10768 | Reflection Invariant and Symmetry Detection | Symmetry detection and discrimination are of fundamental importance in science, technology, and engineering. This paper introduces reflection invariants and defines the directional moment to detect symmetry for shape analysis and object recognition. It demonstrates that detection of reflection symmetry can be done in a simple way by solving a trigonometric system derived from the directional moment, and that discrimination of reflection symmetry can be achieved by application of the reflection invariants in 2D and 3D. Rotation symmetry can also be determined on this basis. The experiments in 2D and 3D, including the regular triangle, the square, and the five Platonic objects, show that all the reflection lines or planes can be deterministically found using directional moments up to order six. This result can be used to simplify the efforts of symmetry detection in research areas such as protein structure, model retrieval, inverse engineering, and machine vision. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 74,469
1209.0378 | Provenance for SPARQL queries | Determining trust of data available in the Semantic Web is fundamental for applications and users, in particular for linked open data obtained from SPARQL endpoints. There exist several proposals in the literature to annotate SPARQL query results with values from abstract models, adapting the seminal works on provenance for annotated relational databases. We provide an approach capable of providing provenance information for a large and significant fragment of SPARQL 1.1, including for the first time the major non-monotonic constructs under multiset semantics. The approach is based on the translation of SPARQL into relational queries over annotated relations with values of the most general m-semiring, and in this way also refuting a claim in the literature that the OPTIONAL construct of SPARQL cannot be captured appropriately with the known abstract models. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 18,361 |
1505.07690 | Invertible Orientation Scores of 3D Images | The enhancement and detection of elongated structures in noisy image data is relevant for many biomedical applications. To handle complex crossing structures in 2D images, 2D orientation scores were introduced, which already showed their use in a variety of applications. Here we extend this work to 3D orientation scores. First, we construct the orientation score from a given dataset, which is achieved by an invertible coherent state type of transform. For this transformation we introduce 3D versions of the 2D cake-wavelets, which are complex wavelets that can simultaneously detect oriented structures and oriented edges. For efficient implementation of the different steps in the wavelet creation we use a spherical harmonic transform. Finally, we show some first results of practical applications of 3D orientation scores. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 43,563 |
1909.12969 | Counterfactual States for Atari Agents via Generative Deep Learning | Although deep reinforcement learning agents have produced impressive results in many domains, their decision making is difficult to explain to humans. To address this problem, past work has mainly focused on explaining why an action was chosen in a given state. A different type of explanation that is useful is a counterfactual, which deals with "what if?" scenarios. In this work, we introduce the concept of a counterfactual state to help humans gain a better understanding of what would need to change (minimally) in an Atari game image for the agent to choose a different action. We introduce a novel method to create counterfactual states from a generative deep learning architecture. In addition, we evaluate the effectiveness of counterfactual states on human participants who are not machine learning experts. Our user study results suggest that our generated counterfactual states are useful in helping non-expert participants gain a better understanding of an agent's decision making process. | true | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 147,278 |
2207.10237 | SPIN: An Empirical Evaluation on Sharing Parameters of Isotropic Networks | Recent isotropic networks, such as ConvMixer and vision transformers, have found significant success across visual recognition tasks, matching or outperforming non-isotropic convolutional neural networks (CNNs). Isotropic architectures are particularly well-suited to cross-layer weight sharing, an effective neural network compression technique. In this paper, we perform an empirical evaluation on methods for sharing parameters in isotropic networks (SPIN). We present a framework to formalize major weight sharing design decisions and perform a comprehensive empirical evaluation of this design space. Guided by our experimental results, we propose a weight sharing strategy to generate a family of models with better overall efficiency, in terms of FLOPs and parameters versus accuracy, compared to traditional scaling methods alone, for example compressing ConvMixer by 1.9x while improving accuracy on ImageNet. Finally, we perform a qualitative study to further understand the behavior of weight sharing in isotropic architectures. The code is available at https://github.com/apple/ml-spin. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 309,181
2310.18458 | Do Not Harm Protected Groups in Debiasing Language Representation Models | Language Representation Models (LRMs) trained with real-world data may capture and exacerbate undesired bias and cause unfair treatment of people in various demographic groups. Several techniques have been investigated for applying interventions to LRMs to remove bias in benchmark evaluations on, for example, word embeddings. However, the negative side effects of debiasing interventions are usually not revealed in the downstream tasks. We propose xGAP-DEBIAS, a set of evaluations for assessing the fairness of debiasing. In this work, we examine four debiasing techniques on a real-world text classification task and show that reducing bias comes at the cost of degraded performance for all demographic groups, including those the debiasing techniques aim to protect. We advocate that a debiasing technique should have good downstream performance with the constraint of ensuring no harm to the protected group. | false | false | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | 403,552
2006.15987 | Robustifying Sequential Neural Processes | When tasks change over time, meta-transfer learning seeks to improve the efficiency of learning a new task via both meta-learning and transfer-learning. While standard attention has been effective in a variety of settings, we question its effectiveness in improving meta-transfer learning since the tasks being learned are dynamic and the amount of context can be substantially smaller. In this paper, using a recently proposed meta-transfer learning model, Sequential Neural Processes (SNP), we first empirically show that it suffers from a similar underfitting problem observed in the functions inferred by Neural Processes. However, we further demonstrate that, unlike in the meta-learning setting, standard attention mechanisms are not effective in the meta-transfer setting. To resolve this, we propose a new attention mechanism, Recurrent Memory Reconstruction (RMR), and demonstrate that providing an imaginary context that is recurrently updated and reconstructed with interaction is crucial in achieving effective attention for meta-transfer learning. Furthermore, incorporating RMR into SNP, we propose Attentive Sequential Neural Processes-RMR (ASNP-RMR) and demonstrate in various tasks that ASNP-RMR significantly outperforms the baselines. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 184,681
2412.02698 | Scaling BERT Models for Turkish Automatic Punctuation and Capitalization Correction | This paper investigates the effectiveness of BERT-based models for automated punctuation and capitalization correction in Turkish texts across five distinct model sizes. The models are designated as Tiny, Mini, Small, Medium, and Base. The design and capabilities of each model are tailored to address the specific challenges of the Turkish language, with a focus on optimizing performance while minimizing computational overhead. The study presents a systematic comparison of the performance metrics (precision, recall, and F1 score) of each model, offering insights into their applicability in diverse operational contexts. The results demonstrate a significant improvement in text readability and accuracy as model size increases, with the Base model achieving the highest correction precision. This research provides a comprehensive guide for selecting the appropriate model size based on specific user needs and computational resources, establishing a framework for deploying these models in real-world applications to enhance the quality of written Turkish. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 513,644
cs/0702085 | Social Behaviours Applied to P2P Systems: An efficient Algorithm for Resource Organisation | P2P systems are a great solution to the problem of distributing resources. The main issue of P2P networks is that searching and retrieving resources shared by peers is usually expensive and does not take into account similarities among peers. In this paper we present preliminary simulations of PROSA, a novel algorithm for P2P network structuring, inspired by social behaviours. Peers in PROSA self-organise in social groups of similar peers, called "semantic groups", depending on the resources they are sharing. Such a network smoothly evolves to a small-world graph, where queries for resources are efficiently and effectively routed. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | 540,163
1906.01504 | Embedded hyper-parameter tuning by Simulated Annealing | We propose a new metaheuristic training scheme that combines Stochastic Gradient Descent (SGD) and Discrete Optimization in an unconventional way. Our idea is to define a discrete neighborhood of the current SGD point containing a number of "potentially good moves" that exploit gradient information, and to search this neighborhood by using a classical metaheuristic scheme borrowed from Discrete Optimization. In the present paper we investigate the use of a simple Simulated Annealing (SA) metaheuristic that accepts/rejects a candidate new solution in the neighborhood with a probability that depends both on the new solution quality and on a parameter (the temperature) which is modified over time to lower the probability of accepting worsening moves. We use this scheme as an automatic way to perform hyper-parameter tuning, hence the title of the paper. A distinctive feature of our scheme is that hyper-parameters are modified within a single SGD execution (and not in an external loop, as customary) and evaluated on the fly on the current minibatch, i.e., their tuning is fully embedded within the SGD algorithm. The use of SA for training is not new, but previous proposals were mainly intended for non-differentiable objective functions for which SGD is not applied due to the lack of gradients. On the contrary, our SA method requires differentiability of (a proxy of) the loss function, and leverages the availability of a gradient direction to define local moves that have a large probability of improving the current solution. Computational results on image classification (CIFAR-10) are reported, showing that the proposed approach leads to an improvement of the final validation accuracy for modern Deep Neural Networks such as ResNet34 and VGG16. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 133,726
1705.10638 | A Receding Horizon Push Recovery Strategy for Balancing the iCub Humanoid Robot | Balancing and reacting to strong and unexpected pushes is a critical requirement for humanoid robots. We recently designed a capture point based approach which interfaces with a momentum-based torque controller and we implemented and validated it on the iCub humanoid robot. In this work we implement a receding horizon control scheme, also known as Model Predictive Control (MPC), to add the ability to predict the future evolution of the robot, especially the constraint switching arising from the hybrid nature of the system. We prove that the proposed MPC extension makes the step-recovery controller more robust and reliable when executing the recovery strategy. Experiments in simulation show the results of the proposed approach. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 74,442
2409.05137 | READoc: A Unified Benchmark for Realistic Document Structured Extraction | Document Structured Extraction (DSE) aims to extract structured content from raw documents. Despite the emergence of numerous DSE systems, their unified evaluation remains inadequate, significantly hindering the field's advancement. This problem is largely attributed to existing benchmark paradigms, which exhibit fragmented and localized characteristics. To address these limitations and offer a thorough evaluation of DSE systems, we introduce a novel benchmark named READoc, which defines DSE as a realistic task of converting unstructured PDFs into semantically rich Markdown. The READoc dataset is derived from 2,233 diverse and real-world documents from arXiv and GitHub. In addition, we develop a DSE Evaluation S$^3$uite comprising Standardization, Segmentation and Scoring modules, to conduct a unified evaluation of state-of-the-art DSE approaches. By evaluating a range of pipeline tools, expert visual models, and general VLMs, we identify the gap between current work and the unified, realistic DSE objective for the first time. We aspire that READoc will catalyze future research in DSE, fostering more comprehensive and practical solutions. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 486,649 |
2308.16278 | Autonomous damage assessment of structural columns using low-cost micro aerial vehicles and multi-view computer vision | Structural columns are the crucial load-carrying components of buildings and bridges. Early detection of column damage is important for the assessment of the residual performance and the prevention of system-level collapse. This research proposes an innovative end-to-end micro aerial vehicles (MAVs)-based approach to automatically scan and inspect columns. First, an MAV-based automatic image collection method is proposed. The MAV is programmed to sense the structural columns and their surrounding environment. During the navigation, the MAV first detects and approaches the structural columns. Then, it starts to collect image data at multiple viewpoints around every detected column. Second, the collected images will be used to assess the damage types and damage locations. Third, the damage state of the structural column will be determined by fusing the evaluation outcomes from multiple camera views. In this study, reinforced concrete (RC) columns are selected to demonstrate the effectiveness of the approach. Experimental results indicate that the proposed MAV-based inspection approach can effectively collect images from multiple viewing angles, and accurately assess critical RC column damages. The approach improves the level of autonomy during the inspection. In addition, the evaluation outcomes are more comprehensive than the existing 2D vision methods. The concept of the proposed inspection approach can be extended to other structural columns such as bridge piers. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 388,938
2108.08739 | Neural Predictive Control for the Optimization of Smart Grid Flexibility Schedules | Model predictive control (MPC) is a method to formulate the optimal scheduling problem for grid flexibilities in a mathematical manner. The resulting time-constrained optimization problem can be re-solved in each optimization time step using classical optimization methods such as Second Order Cone Programming (SOCP) or Interior Point Methods (IPOPT). When applying MPC in a rolling horizon scheme, the impact of uncertainty in forecasts on the optimal schedule is reduced. While MPC methods promise accurate results for time-constrained grid optimization, they are inherently limited by the calculation time needed for large and complex power system models. Learning the optimal control behaviour using function approximation offers the possibility to determine near-optimal control actions with short calculation time. A Neural Predictive Control (NPC) scheme is proposed to learn optimal control policies for linear and nonlinear power systems through imitation. It is demonstrated that this procedure can find near-optimal solutions, while reducing the calculation time by an order of magnitude. The learned controllers are validated using a benchmark smart grid. | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | 251,366
2009.13437 | A Human-in-the-Loop Approach based on Explainability to Improve NTL Detection | Implementing systems based on Machine Learning to detect fraud and other Non-Technical Losses (NTL) is challenging: the data available is biased, and the algorithms currently used are black boxes that cannot easily be trusted or understood by stakeholders. This work explains our human-in-the-loop approach to mitigate these problems in a real system that uses a supervised model to detect Non-Technical Losses (NTL) for an international utility company from Spain. This approach exploits human knowledge (e.g. from the data scientists or the company's stakeholders) and the information provided by explanatory methods to guide the system during the training process. This simple, efficient method, which can be easily implemented in other industrial projects, is tested on a real dataset and the results show that the derived prediction model is better in terms of accuracy, interpretability, robustness and flexibility. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 197,745
2009.11239 | Deep multi-stations weather forecasting: explainable recurrent convolutional neural networks | Deep learning applied to weather forecasting has started gaining popularity because of the progress achieved by data-driven models. The present paper compares two different deep learning architectures to perform weather prediction on daily data gathered from 18 cities across Europe and spanning a period of 15 years. We propose the Deep Attention Unistream Multistream (DAUM) networks that investigate different types of input representations (i.e. tensorial unistream vs. multistream) as well as the incorporation of the attention mechanism. In particular, we show that adding a self-attention block within the models increases the overall forecasting performance. Furthermore, visualization techniques such as occlusion analysis and score maximization are used to give additional insight into the most important features and cities for predicting a particular target feature of target cities. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 197,118
2405.14815 | Designing A Sustainable Marine Debris Clean-up Framework without Human Labels | Marine debris poses a significant ecological threat to birds, fish, and other animal life. Traditional methods for assessing debris accumulation involve labor-intensive and costly manual surveys. This study introduces a framework that utilizes aerial imagery captured by drones to conduct remote trash surveys. Leveraging computer vision techniques, our approach detects, classifies, and maps marine debris distributions. The framework uses Grounding DINO, a transformer-based zero-shot object detector, and CLIP, a vision-language model for zero-shot object classification, enabling the detection and classification of debris objects based on material type without the need for training labels. To mitigate over-counting due to different views of the same object, Scale-Invariant Feature Transform (SIFT) is employed for duplicate matching using local object features. Additionally, we have developed a user-friendly web application that facilitates end-to-end analysis of drone images, including object detection, classification, and visualization on a map to support cleanup efforts. Our method achieves competitive performance in detection (0.69 mean IoU) and classification (0.74 F1 score) across seven debris object classes without labeled data, comparable to state-of-the-art supervised methods. This framework has the potential to streamline automated trash sampling surveys, fostering efficient and sustainable community-led cleanup initiatives. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 456,610
1905.05987 | EasiCS: the objective and fine-grained classification method of cervical spondylosis dysfunction | Precise diagnosis is of great significance in developing precise treatment plans to restore neck function and reduce the burden posed by cervical spondylosis (CS). However, the currently available neck function assessment methods are subjective and coarse-grained. In this paper, based on the relationship among CS, cervical structure, cervical vertebra function, and surface electromyography (sEMG), we seek to develop clustering algorithms on an sEMG dataset collected in the clinical environment and implement the fine-grained division. We propose and develop the framework EasiCS, which consists of dimension reduction, the clustering algorithm EasiSOM, and the spectral clustering algorithm EasiSC. EasiCS outperforms the seven commonly used algorithms overall. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 130,878
2110.12076 | Applications of Generative Adversarial Networks in Anomaly Detection: A Systematic Literature Review | Anomaly detection has become an indispensable tool for modern society, applied in a wide range of applications, from detecting fraudulent transactions to malignant brain tumours. Over time, many anomaly detection techniques have been introduced. However, in general, they all suffer from the same problem: a lack of data that represents anomalous behaviour. As anomalous behaviour is usually costly (or dangerous) for a system, it is difficult to gather enough data that represents such behaviour. This, in turn, makes it difficult to develop and evaluate anomaly detection techniques. Recently, generative adversarial networks (GANs) have attracted a great deal of attention in anomaly detection research, due to their unique ability to generate new data. In this paper, we present a systematic literature review of the applications of GANs in anomaly detection, covering 128 papers on the subject. The goal of this review paper is to analyze and summarize: (1) which anomaly detection techniques can benefit from certain types of GANs, and how, (2) in which application domains GAN-assisted anomaly detection techniques have been applied, and (3) which datasets and performance metrics have been used to evaluate these techniques. Our study helps researchers and practitioners to find the most suitable GAN-assisted anomaly detection technique for their application. In addition, we present a research roadmap for future studies in this area. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 262,701
2410.06961 | Self-Boosting Large Language Models with Synthetic Preference Data | Through alignment with human preferences, Large Language Models (LLMs) have advanced significantly in generating honest, harmless, and helpful responses. However, collecting high-quality preference data is a resource-intensive and creativity-demanding process, especially for the continual improvement of LLMs. We introduce SynPO, a self-boosting paradigm that leverages synthetic preference data for model alignment. SynPO employs an iterative mechanism wherein a self-prompt generator creates diverse prompts, and a response improver refines model responses progressively. This approach trains LLMs to autonomously learn the generative rewards for their own outputs and eliminates the need for large-scale annotation of prompts and human preferences. After four SynPO iterations, Llama3-8B and Mistral-7B show significant enhancements in instruction-following abilities, achieving over 22.1% win rate improvements on AlpacaEval 2.0 and ArenaHard. Simultaneously, SynPO improves the general performance of LLMs on various tasks, validated by a 3.2 to 5.0 average score increase on the well-recognized Open LLM leaderboard. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 496,409 |
1811.02307 | Toward Driving Scene Understanding: A Dataset for Learning Driver Behavior and Causal Reasoning | Driving scene understanding is a key ingredient for intelligent transportation systems. To achieve systems that can operate in a complex physical and social environment, they need to understand and learn how humans drive and interact with traffic scenes. We present the Honda Research Institute Driving Dataset (HDD), a challenging dataset to enable research on learning driver behavior in real-life environments. The dataset includes 104 hours of real human driving in the San Francisco Bay Area collected using an instrumented vehicle equipped with different sensors. We provide a detailed analysis of HDD with a comparison to other driving datasets. A novel annotation methodology is introduced to enable research on driver behavior understanding from untrimmed data sequences. As the first step, baseline algorithms for driver behavior detection are trained and tested to demonstrate the feasibility of the proposed task. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 112,557
2210.01376 | Improved High-Probability Regret for Adversarial Bandits with Time-Varying Feedback Graphs | We study high-probability regret bounds for adversarial $K$-armed bandits with time-varying feedback graphs over $T$ rounds. For general strongly observable graphs, we develop an algorithm that achieves the optimal regret $\widetilde{\mathcal{O}}((\sum_{t=1}^T\alpha_t)^{1/2}+\max_{t\in[T]}\alpha_t)$ with high probability, where $\alpha_t$ is the independence number of the feedback graph at round $t$. Compared to the best existing result [Neu, 2015] which only considers graphs with self-loops for all nodes, our result not only holds more generally, but importantly also removes any $\text{poly}(K)$ dependence that can be prohibitively large for applications such as contextual bandits. Furthermore, we also develop the first algorithm that achieves the optimal high-probability regret bound for weakly observable graphs, which even improves the best expected regret bound of [Alon et al., 2015] by removing the $\mathcal{O}(\sqrt{KT})$ term with a refined analysis. Our algorithms are based on the online mirror descent framework, but importantly with an innovative combination of several techniques. Notably, while earlier works use optimistic biased loss estimators for achieving high-probability bounds, we find it important to use a pessimistic one for nodes without self-loop in a strongly observable graph. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 321,243
2011.00446 | Efficient Learning of Control Policies for Robust Quadruped Bounding using Pretrained Neural Networks | Bounding is one of the important gaits in quadrupedal locomotion for negotiating obstacles. The authors proposed an effective approach that can learn robust bounding gaits more efficiently despite the large variation in dynamic body movements. The authors first pretrained the neural network (NN) based on data from a robot operated by conventional model-based controllers, and then further optimised the pretrained NN via deep reinforcement learning (DRL). In particular, the authors designed a reward function considering contact points and phases to enforce the gait symmetry and periodicity, which improved the bounding performance. The NN-based feedback controller was learned in simulation and directly deployed on the real quadruped robot Jueying Mini successfully. A variety of environments, both indoors and outdoors, are presented with the authors' approach. The authors' approach shows efficient computing and good locomotion results by the Jueying Mini quadrupedal robot bounding over uneven terrain. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 204,222
2001.08472 | Joint Inference on Truth/Rumor and Their Sources in Social Networks | In the contemporary era of information explosion, we are often faced with a mixture of massive truth (true information) and rumor (false information) flooding over social networks. Under such circumstances, it is essential to infer whether each claim (e.g., news, messages) is a truth or a rumor, and to identify their sources, i.e., the users who initially spread those claims. While most prior art has been dedicated to the two tasks separately, this paper aims to offer joint inference on truth/rumor and their sources. Our insight is that a joint inference can enhance the mutual performance on both sides. To this end, we propose a framework named SourceCR, which alternates between two modules, i.e., credibility-reliability training for truth/rumor inference and division-querying for source detection, in an iterative manner. To elaborate, the former module performs a simultaneous estimation of claim credibility and user reliability by virtue of an Expectation Maximization algorithm, which takes the source reliability outputted from the latter module as the initial input. Meanwhile, the latter module divides the network into two different subnetworks labeled via the claim credibility, and in each subnetwork launches source detection by applying querying with a theoretical budget guarantee to the users selected via the estimated reliability from the former module. The proposed SourceCR is provably convergent and algorithmically implementable with reasonable computational complexity. We empirically validate the effectiveness of the proposed framework on both synthetic and real datasets, where the joint inference leads to up to a 35% gain in credibility accuracy and a 29% gain in source detection rate compared with the separate counterparts. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 161,298
2211.17116 | Global Convergence of Localized Policy Iteration in Networked Multi-Agent Reinforcement Learning | We study a multi-agent reinforcement learning (MARL) problem where the agents interact over a given network. The goal of the agents is to cooperatively maximize the average of their entropy-regularized long-term rewards. To overcome the curse of dimensionality and to reduce communication, we propose a Localized Policy Iteration (LPI) algorithm that provably learns a near-globally-optimal policy using only local information. In particular, we show that, despite restricting each agent's attention to only its $\kappa$-hop neighborhood, the agents are able to learn a policy with an optimality gap that decays polynomially in $\kappa$. In addition, we show the finite-sample convergence of LPI to the global optimal policy, which explicitly captures the trade-off between optimality and computational complexity in choosing $\kappa$. Numerical simulations demonstrate the effectiveness of LPI. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | true | false | false | false | 333,869
2309.09531 | Decompose Semantic Shifts for Composed Image Retrieval | Composed image retrieval is a type of image retrieval task where the user provides a reference image as a starting point and specifies a text on how to shift from the starting point to the desired target image. However, most existing methods focus on the composition learning of text and reference images and oversimplify the text as a description, neglecting the inherent structure of the text and the user's shifting intention expressed in it. As a result, these methods typically take shortcuts that disregard the visual cues of the reference images. To address this issue, we reconsider the text as instructions and propose a Semantic Shift network (SSN) that explicitly decomposes the semantic shifts into two steps: from the reference image to the visual prototype and from the visual prototype to the target image. Specifically, SSN explicitly decomposes the instructions into two components: degradation and upgradation, where the degradation is used to picture the visual prototype from the reference image, while the upgradation is used to enrich the visual prototype into the final representations to retrieve the desired target image. The experimental results show that the proposed SSN demonstrates a significant improvement of 5.42% and 1.37% on the CIRR and FashionIQ datasets, respectively, and establishes a new state-of-the-art performance. Code will be publicly available. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 392,653
2405.15182 | RFLPA: A Robust Federated Learning Framework against Poisoning Attacks with Secure Aggregation | Federated learning (FL) allows multiple devices to train a model collaboratively without sharing their data. Despite its benefits, FL is vulnerable to privacy leakage and poisoning attacks. To address the privacy concern, secure aggregation (SecAgg) is often used to obtain the aggregation of gradients on the server without inspecting individual user updates. Unfortunately, existing defense strategies against poisoning attacks rely on the analysis of local updates in plaintext, making them incompatible with SecAgg. To reconcile the conflicts, we propose a robust federated learning framework against poisoning attacks (RFLPA) based on the SecAgg protocol. Our framework computes the cosine similarity between local updates and server updates to conduct robust aggregation. Furthermore, we leverage verifiable packed Shamir secret sharing to achieve a reduced communication cost of $O(M+N)$ per user, and design a novel dot product aggregation algorithm to resolve the issue of increased information leakage. Our experimental results show that RFLPA significantly reduces communication and computation overhead by over $75\%$ compared to the state-of-the-art secret sharing method, BREA, while maintaining competitive accuracy. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 456,791
2502.05677 | Surprise Potential as a Measure of Interactivity in Driving Scenarios | Validating the safety and performance of an autonomous vehicle (AV) requires benchmarking on real-world driving logs. However, typical driving logs contain mostly uneventful scenarios with minimal interactions between road users. Identifying interactive scenarios in real-world driving logs enables the curation of datasets that amplify critical signals and provide a more accurate assessment of an AV's performance. In this paper, we present a novel metric that identifies interactive scenarios by measuring an AV's surprise potential on others. First, we identify three dimensions of the design space to describe a family of surprise potential measures. Second, we exhaustively evaluate and compare different instantiations of the surprise potential measure within this design space on the nuScenes dataset. To determine how well a surprise potential measure correctly identifies an interactive scenario, we use a reward model learned from human preferences to assess alignment with human intuition. Our proposed surprise potential, arising from this exhaustive comparative study, achieves a correlation of more than 0.82 with the human-aligned reward function, outperforming existing approaches. Lastly, we validate motion planners on curated interactive scenarios to demonstrate downstream applications. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 531,720 |
2112.07089 | Building on Huang et al. GlossBERT for Word Sense Disambiguation | We propose to take on the problem of Word Sense Disambiguation (WSD). In language, words of the same form can take different meanings depending on context. While humans easily infer the meaning or gloss of such words by their context, machines stumble on this task. As such, we intend to replicate and expand upon the results of Huang et al.'s GlossBERT, a model which they designed to disambiguate these words (Huang et al., 2019). Specifically, we propose the following augmentations: data-set tweaking (alpha hyper-parameter), ensemble methods, and replacement of BERT with BART and ALBERT. The following GitHub repository contains all code used in this report, which extends on the code made available by Huang et al. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 271,372
2303.02890 | An Analysis of Physics-Informed Neural Networks | Whilst the partial differential equations that govern the dynamics of our world have been studied in great depth for centuries, solving them for complex, high-dimensional conditions and domains still presents an incredibly large mathematical and computational challenge. Analytical methods can be cumbersome to utilise, and numerical methods can lead to errors and inaccuracies. On top of this, sometimes we lack the information or knowledge to pose the problem well enough to apply these kinds of methods. Here, we present a new approach to approximating the solution to physical systems: physics-informed neural networks. The concept of artificial neural networks is introduced, the objective function is defined, and optimisation strategies are discussed. The partial differential equation is then included as a constraint in the loss function for the optimisation problem, giving the network access to knowledge of the dynamics of the physical system it is modelling. Some intuitive examples are displayed, and more complex applications are considered to showcase the power of physics-informed neural networks, such as in seismic imaging. Solution error is analysed, and suggestions are made to improve convergence and/or solution precision. Problems and limitations are also touched upon in the conclusions, as well as some thoughts as to where physics-informed neural networks are most useful, and where they could go next. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 349,525
1711.00457 | Almost instant brain atlas segmentation for large-scale studies | Large scale studies of group differences in healthy controls and patients and screenings for early stage disease prevention programs require processing and analysis of extensive multisubject datasets. Complexity of the task increases even further when segmenting structural MRI of the brain into an atlas with more than 50 regions. Current automatic approaches are time-consuming and hardly scalable; they often involve many error-prone intermediate steps and don't utilize other available modalities. To alleviate these problems, we propose a feedforward fully convolutional neural network trained on the output produced by state-of-the-art models. The incredible speed of neural networks on available powerful GPUs makes this analysis much easier and faster (from $>10$ hours to a minute). The proposed model is more than two orders of magnitude faster than the state of the art and yet as accurate. We have evaluated the network's performance by comparing it with the state of the art in the task of differentiating region volumes of healthy controls and patients with schizophrenia on a dataset with 311 subjects. This comparison provides strong evidence that speed did not harm the accuracy. The overall quality may also be increased by utilizing multi-modal datasets (not an easy task for other models) by simply adding more modalities as input. Our model will be useful in large-scale studies as well as in clinical care solutions, where it can significantly reduce the delay between patient screening and the result. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 83,721
1810.02494 | A note on spanoid rank | We construct a spanoid $\mathcal{S}$ on $n$ elements with $\textsf{rank}(\mathcal{S}) \ge n^c \textsf{f-rank}(\mathcal{S})$ where $c = \log_5 3 - \log_5 2.5 \approx 0.113283$. This answers a question of Dvir-Gopi-Wigderson [DGW18]. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 109,599 |
2201.09205 | Deeply Explain CNN via Hierarchical Decomposition | In computer vision, some attribution methods for explaining CNNs attempt to study how the intermediate features affect the network prediction. However, they usually ignore the feature hierarchies among the intermediate features. This paper introduces a hierarchical decomposition framework to explain CNN's decision-making process in a top-down manner. Specifically, we propose a gradient-based activation propagation (gAP) module that can decompose any intermediate CNN decision to its lower layers and find the supporting features. Then we utilize the gAP module to iteratively decompose the network decision to the supporting evidence from different CNN layers. The proposed framework can generate a deep hierarchy of strongly associated supporting evidence for the network decision, which provides insight into the decision-making process. Moreover, gAP is effort-free for understanding CNN-based models without network architecture modification and extra training process. Experiments show the effectiveness of the proposed method. The code and interactive demo website will be made publicly available. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 276,600 |
2105.02095 | Two-layer neural networks with values in a Banach space | We study two-layer neural networks whose domain and range are Banach spaces with separable preduals. In addition, we assume that the image space is equipped with a partial order, i.e. it is a Riesz space. As the nonlinearity we choose the lattice operation of taking the positive part; in case of $\mathbb R^d$-valued neural networks this corresponds to the ReLU activation function. We prove inverse and direct approximation theorems with Monte-Carlo rates for a certain class of functions, extending existing results for the finite-dimensional case. In the second part of the paper, we study, from the regularisation theory viewpoint, the problem of finding optimal representations of such functions via signed measures on a latent space from a finite number of noisy observations. We discuss regularity conditions known as source conditions and obtain convergence rates in a Bregman distance for the representing measure in the regime when both the noise level goes to zero and the number of samples goes to infinity at appropriate rates. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 233,725 |
2105.13153 | Cardiac Segmentation on CT Images through Shape-Aware Contour Attentions | Cardiac segmentation of atriums, ventricles, and myocardium in computed tomography (CT) images is an important first-line task for presymptomatic cardiovascular disease diagnosis. In several recent studies, deep learning models have shown significant breakthroughs in medical image segmentation tasks. Unlike other organs such as the lungs and liver, the cardiac organ consists of multiple substructures, i.e., ventricles, atriums, aortas, arteries, veins, and myocardium. These cardiac substructures are proximate to each other and have indiscernible boundaries (i.e., homogeneous intensity values), making it difficult for the segmentation network to focus on the boundaries between the substructures. In this paper, to improve the segmentation accuracy between proximate organs, we introduce a novel model to exploit shape and boundary-aware features. We primarily propose a shape-aware attention module that exploits distance regression, which can guide the model to focus on the edges between substructures so that it outperforms the conventional contour-based attention method. In the experiments, we used the Multi-Modality Whole Heart Segmentation dataset, which has 20 CT cardiac images for training and validation, and 40 CT cardiac images for testing. The experimental results show that the proposed network produces more accurate results than state-of-the-art networks, improving the Dice similarity coefficient score by 4.97%. Our proposed shape-aware contour attention mechanism demonstrates that distance transformation and boundary features improve the actual attention map to strengthen the responses in the boundary area. Moreover, our proposed method significantly reduces the false-positive responses of the final output, resulting in accurate segmentation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 237,225
1405.5732 | Self-tuned Visual Subclass Learning with Shared Samples: An Incremental Approach | Computer vision tasks are traditionally defined and evaluated using semantic categories. However, it is known to the field that semantic classes do not necessarily correspond to a unique visual class (e.g. inside and outside of a car). Furthermore, many of the feasible learning techniques at hand cannot model a visual class which appears consistent to the human eye. These problems have motivated the use of 1) unsupervised or supervised clustering as a preprocessing step to identify the visual subclasses to be used in a mixture-of-experts learning regime; 2) the part model of Felzenszwalb et al. and other works that model mixture assignment with latent variables optimized during learning; 3) highly non-linear classifiers which are inherently capable of modelling multi-modal input spaces but are inefficient at test time. In this work, we promote an incremental view over the recognition of semantic classes with varied appearances. We propose an optimization technique which incrementally finds maximal visual subclasses in a regularized risk minimization framework. Our proposed approach unifies the clustering and classification steps in a single algorithm. The importance of this approach is its compliance with the classification, in that it does not need to know a priori the number of clusters or the representation and similarity measures used in pre-processing clustering methods. Following this approach we show both qualitatively and quantitatively significant results. We show that the visual subclasses demonstrate a long-tail distribution. Finally, we show that state-of-the-art object detection methods (e.g. DPM) are unable to use the tails of this distribution, comprising 50% of the training samples. In fact, we show that DPM performance slightly increases on average with the removal of this half of the data. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 33,298
2406.19608 | Multi-service collaboration and composition of cloud manufacturing customized production based on problem decomposition | A cloud manufacturing system is a service-oriented and knowledge-based one, which can provide solutions for large-scale customized production. Service resource allocation is the primary factor that restricts the production time and cost in cloud manufacturing customized production (CMCP). In order to improve the efficiency and reduce the cost in CMCP, we propose a new framework which considers the collaboration among services with the same functionality. A mathematical evaluation formulation for the service composition and service usage scheme is constructed with the following critical indexes: completion time, cost, and number of selected services. Subsequently, a problem decomposition based genetic algorithm is designed to obtain the optimal service compositions with service usage schemes. A smart clothing customization case is illustrated to show the effectiveness and efficiency of the method proposed in this paper. Finally, the results of simulation experiments and comparisons show that the solutions obtained by our method achieve the minimum time, a lower cost, and fewer selected services. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 468,484
1908.01161 | Distributed Adaptive Coverage Control of Differential Drive Robotic Sensors | This paper is concerned with the deployment of multiple mobile robots in order to autonomously cover a region Q. The region to be covered is described using a density function which may not be a priori known. In this paper, we pose the coverage problem as an optimization problem over some space of functions on Q. In particular, we look at an $L^2$-distance based coverage algorithm and derive adaptive control laws for the same. We also propose a modified adaptive control law incorporating consensus for better parameter convergence. We implement the algorithms on real differential drive robots with both a simulated density function as well as a density function implemented using light sources. We also compare the $L^2$-distance based method with the locational optimization method using experiments. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | true | false | false | false | 140,689
0806.2216 | An Intelligent Multi-Agent Recommender System for Human Capacity
Building | This paper presents a multi-agent approach to the problem of recommending training courses to engineering professionals. The recommendation system is built as a proof of concept and limited to the electrical and mechanical engineering disciplines. Through user modelling and data collected from a survey, collaborative filtering recommendation is implemented using intelligent agents. The agents work together to recommend meaningful training courses and update the course information. The system uses a user's profile and keywords from courses to rank courses. A course ranking accuracy of 90% is achieved, while flexibility is achieved through an agent that autonomously retrieves information from websites using data mining techniques. This manner of recommendation is scalable and adaptable. Further improvements can be made using clustering and by recording user feedback. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 1,919
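The collaborative-filtering core this abstract relies on can be sketched as user-based neighbourhood recommendation over a user-course rating matrix; the cosine-similarity choice and matrix encoding are assumptions, and the multi-agent orchestration and web mining are out of scope here.

```python
# User-based collaborative filtering: find similar users, then rank
# courses by similarity-weighted ratings, hiding courses already taken.
import numpy as np

def recommend(R, user, k=3, top_n=5):
    # R: users x courses rating matrix, 0 = unrated
    norms = np.linalg.norm(R, axis=1) + 1e-9
    sims = (R @ R[user]) / (norms * norms[user])  # cosine similarity to `user`
    sims[user] = -1                               # exclude the user themself
    neighbours = np.argsort(sims)[-k:]
    scores = sims[neighbours] @ R[neighbours]     # weighted neighbour ratings
    scores[R[user] > 0] = -np.inf                 # hide already-taken courses
    return np.argsort(scores)[-top_n:][::-1]     # top-n course indices
```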
2207.05993 | A new database of Houma Alliance Book ancient handwritten characters and
classifier fusion approach | The Houma Alliance Book is one of the national treasures held by the Shanxi Museum in China, and it has great significance for research on ancient history. To date, research on the Houma Alliance Book has remained at the stage of identifying paper documents, which is inefficient and makes the material difficult to display, study, and publicize. Digitizing the recognized ancient characters of the Houma Alliance Book can therefore effectively improve recognition efficiency and provide more reliable technical support and text data. This paper proposes a new database of Houma Alliance Book ancient handwritten characters and a multi-modal fusion method for recognizing them. The database contains 297 classes and 3,547 samples of Houma Alliance Book ancient handwritten characters, collected from the original book collection and through human imitative writing. Furthermore, a decision-level classifier fusion strategy is applied to fuse three well-known deep neural network architectures for ancient handwritten character recognition. Experiments are performed on our new database. The experimental results first provide a baseline for the new database for the research community and then demonstrate the efficiency of our proposed method. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 307,736
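Decision-level fusion of the kind described combines the per-class posteriors of several trained networks into a single decision. A minimal sketch, assuming simple (optionally weighted) averaging of softmax outputs; the paper's exact fusion rule may differ:

```python
# Decision-level classifier fusion: average the softmax outputs of
# several models and take the argmax as the fused prediction.
import numpy as np

def fuse_decisions(probs_list, weights=None):
    # probs_list: list of (n_samples, n_classes) softmax outputs, one per model
    P = np.stack(probs_list)                      # (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(P)) / len(P)
    fused = np.tensordot(weights, P, axes=1)      # weighted average of posteriors
    return fused.argmax(axis=1)                   # final class per sample
```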
2202.01949 | A Reinforcement Learning Framework for PQoS in a Teleoperated Driving
Scenario | In recent years, autonomous networks have been designed with Predictive Quality of Service (PQoS) in mind, as a means for applications operating in the industrial and/or automotive sectors to predict unanticipated Quality of Service (QoS) changes and react accordingly. In this context, Reinforcement Learning (RL) has emerged as a promising approach for performing accurate predictions and optimizing the efficiency and adaptability of wireless networks. Along these lines, in this paper we propose the design of a new entity, implemented at the RAN level, that implements PQoS functionalities with the support of an RL framework. Specifically, we focus on the design of the reward function of the learning agent, which converts QoS estimates into appropriate countermeasures when QoS requirements are not satisfied. We demonstrate via ns-3 simulations that our approach achieves the best trade-off between QoS and Quality of Experience (QoE) performance of end users in a teleoperated-driving-like scenario, compared to other baseline solutions. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 278,652
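One way such a reward function could look: reward the agent for keeping the estimated QoS above the requirement while penalizing the cost of countermeasures, so that violations and over-conservative actions are both discouraged. The thresholds and coefficients below are illustrative assumptions, not the paper's tuned design.

```python
# Hedged sketch of a PQoS reward: positive when the QoS requirement is met
# (capped bonus proportional to the margin), negative when violated, with a
# penalty term for costly countermeasures in both cases.
def pqos_reward(qos_estimate, qos_requirement, action_cost, alpha=1.0, beta=0.2):
    margin = qos_estimate - qos_requirement
    if margin >= 0:
        return alpha * min(margin, 1.0) - beta * action_cost  # satisfied
    return -alpha * abs(margin) - beta * action_cost          # violated
```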
1506.06155 | CO2 Forest: Improved Random Forest by Continuous Optimization of Oblique
Splits | We propose a novel algorithm for optimizing multivariate linear threshold functions as split functions of decision trees to create improved Random Forest classifiers. Standard tree induction methods resort to sampling and exhaustive search to find good univariate split functions. In contrast, our method computes a linear combination of the features at each node, and optimizes the parameters of the linear combination (oblique) split functions by adopting a variant of latent variable SVM formulation. We develop a convex-concave upper bound on the classification loss for a one-level decision tree, and optimize the bound by stochastic gradient descent at each internal node of the tree. Forests of up to 1000 Continuously Optimized Oblique (CO2) decision trees are created, which significantly outperform Random Forest with univariate splits and previous techniques for constructing oblique trees. Experimental results are reported on multi-class classification benchmarks and on Labeled Faces in the Wild (LFW) dataset. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 44,387 |
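The essence of an oblique split is a linear combination of features optimized at a node rather than a single-feature threshold. The sketch below trains a one-level oblique "stump" by SGD on a hinge-loss surrogate; this simplified objective stands in for the paper's convex-concave bound and latent-SVM formulation.

```python
# One-level oblique decision stump: a linear split w.x + b routes samples
# to two leaves with fixed labels, trained by hinge-loss SGD.
import numpy as np

def train_oblique_stump(X, y, epochs=50, lr=0.05, lam=1e-3):
    # y in {-1, +1}; left leaf predicts -1, right leaf predicts +1
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in np.random.permutation(len(X)):
            if y[i] * (X[i] @ w + b) < 1:         # hinge margin violated
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:
                w -= lr * lam * w                 # regularization only
    return w, b

def predict(X, w, b):
    return np.where(X @ w + b >= 0, 1, -1)
```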
2409.15398 | Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in
Red Teaming GenAI | As generative AI, particularly large language models (LLMs), become increasingly integrated into production applications, new attack surfaces and vulnerabilities emerge and put a focus on adversarial threats in natural language and multi-modal systems. Red-teaming has gained importance in proactively identifying weaknesses in these systems, while blue-teaming works to protect against such adversarial attacks. Despite growing academic interest in adversarial risks for generative AI, there is limited guidance tailored for practitioners to assess and mitigate these challenges in real-world environments. To address this, our contributions include: (1) a practical examination of red- and blue-teaming strategies for securing generative AI, (2) identification of key challenges and open questions in defense development and evaluation, and (3) the Attack Atlas, an intuitive framework that brings a practical approach to analyzing single-turn input attacks, placing it at the forefront for practitioners. This work aims to bridge the gap between academic insights and practical security measures for the protection of generative AI systems. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 490,913 |
2412.02790 | An Evolutionary Large Language Model for Hallucination Mitigation | The emergence of LLMs such as ChatGPT and Gemini has marked the modern era of artificial intelligence applications, characterized by high-impact systems that generate text, images, and videos. However, these models usually come with one critical challenge called hallucination: the confident presentation of inaccurate or fabricated information. This problem raises serious concern when these models are applied to specialized domains, including healthcare and law, where the accuracy and precision of information are absolute requirements. In this paper, we propose EvoLLMs, an innovative framework inspired by Evolutionary Computation, which automates the generation of high-quality question-answering (QA) datasets while minimizing hallucinations. EvoLLMs employs genetic algorithms, mimicking evolutionary processes such as selection, variation, and mutation, to guide LLMs in generating accurate, contextually relevant question-answer pairs. Comparative analysis shows that EvoLLMs consistently outperforms human-generated datasets on key metrics such as Depth, Relevance, and Coverage, while nearly matching human performance in mitigating hallucinations. These results highlight EvoLLMs as a robust and efficient solution for QA dataset generation, significantly reducing the time and resources required for manual curation. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 513,676
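An evolutionary loop of the kind described can be sketched as selection over scored QA pairs plus LLM-driven variation. Here `llm_generate` (returns a new question-answer pair) and `score_qa` (a quality metric such as relevance or coverage) are hypothetical stand-ins, not EvoLLMs APIs.

```python
# Evolutionary QA-generation loop: keep the highest-scoring pairs, then
# fill the population by asking the LLM to produce grounded variants.
import random

def evolve_qa(seed_pairs, llm_generate, score_qa, gens=5, keep=0.5):
    pop = list(seed_pairs)                        # each element is a (q, a) pair
    for _ in range(gens):
        pop.sort(key=score_qa, reverse=True)      # selection by fitness
        survivors = pop[: max(2, int(len(pop) * keep))]
        children = []
        while len(survivors) + len(children) < len(pop):
            q, a = random.choice(survivors)
            # variation: the LLM rephrases/extends a surviving pair
            children.append(llm_generate(f"Rephrase, grounded in context: {q}"))
        pop = survivors + children
    return pop
```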
1606.02378 | SE3-Nets: Learning Rigid Body Motion using Deep Neural Networks | We introduce SE3-Nets, deep neural networks designed to model and learn rigid body motion from raw point cloud data. Based only on sequences of depth images along with action vectors and point-wise data associations, SE3-Nets learn to segment the affected object parts and predict their motion resulting from the applied force. Rather than learning point-wise flow vectors, SE3-Nets predict SE3 transformations for different parts of the scene. Using simulated depth data of a tabletop scene and a robot manipulator, we show that the structure underlying SE3-Nets enables them to generate far more consistent predictions of object motion than traditional flow-based networks. Additional experiments with a depth camera observing a Baxter robot pushing objects on a table show that SE3-Nets also work well on real data. | false | false | false | false | true | false | true | true | false | false | false | true | false | false | false | false | false | false | 56,947
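The output stage this abstract describes, a rigid transform per predicted segment instead of per-point flow, can be rendered in a few lines. The network that predicts the masks and transforms is omitted; soft-mask blending is an assumption consistent with the per-part formulation.

```python
# Apply per-part SE3 transforms to a point cloud: each part k has a rigid
# motion (R_k, t_k), and points are blended according to soft segment masks.
import numpy as np

def apply_se3(points, masks, rotations, translations):
    # points: (N, 3); masks: (K, N) soft assignments summing to 1 per point;
    # rotations: (K, 3, 3); translations: (K, 3)
    out = np.zeros_like(points)
    for k in range(len(rotations)):
        moved = points @ rotations[k].T + translations[k]  # rigid motion of part k
        out += masks[k][:, None] * moved                   # soft mask blending
    return out
```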
1403.7657 | The Call of the Crowd: Event Participation in Location-based Social
Services | Understanding the social and behavioral forces behind event participation is not only interesting from the viewpoint of social science, but also has important applications in the design of personalized event recommender systems. This paper takes advantage of data from a widely used location-based social network, Foursquare, to analyze event patterns in three metropolitan cities. We put forward several hypotheses on the motivating factors of user participation and confirm that social aspects play a major role in determining the likelihood of a user to participate in an event. While an explicit social filtering signal accounting for whether friends are attending dominates the factors, the popularity of an event proves to also be a strong attractor. Further, we capture an implicit social signal by performing random walks in a high dimensional graph that encodes the place type preferences of friends and that proves especially suited to identify relevant niche events for users. Our findings on the extent to which the various temporal, spatial and social aspects underlie users' event preferences lead us to further hypothesize that a combination of factors better models users' event interests. We verify this through a supervised learning framework. We show that for one in three users in London and one in five users in New York and Chicago it identifies the exact event the user would attend among the pool of suggestions. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 31,912 |
2306.01485 | Robust low-rank training via approximate orthonormal constraints | With the growth of model and data sizes, a broad effort has been made to design pruning techniques that reduce the resource demand of deep learning pipelines, while retaining model performance. In order to reduce both inference and training costs, a prominent line of work uses low-rank matrix factorizations to represent the network weights. Although able to retain accuracy, we observe that low-rank methods tend to compromise model robustness against adversarial perturbations. By modeling robustness in terms of the condition number of the neural network, we argue that this loss of robustness is due to the exploding singular values of the low-rank weight matrices. Thus, we introduce a robust low-rank training algorithm that maintains the network's weights on the low-rank matrix manifold while simultaneously enforcing approximate orthonormal constraints. The resulting model reduces both training and inference costs while ensuring well-conditioning and thus better adversarial robustness, without compromising model accuracy. This is shown by extensive numerical evidence and by our main approximation theorem that shows the computed robust low-rank network well-approximates the ideal full model, provided a highly performing low-rank sub-network exists. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 370,480 |
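The regularization idea above, keeping low-rank factors close to orthonormal so singular values cannot explode, can be approximated with a soft penalty. This sketch uses the penalty ||UᵀU − I||²_F added to the factor gradients; the paper enforces the constraint on the low-rank manifold itself, so this is a simplified stand-in.

```python
# Approximate orthonormal regularization for low-rank factors W = U V^T:
# penalize deviation of U^T U (and V^T V) from the identity during training.
import numpy as np

def orth_penalty_grad(U):
    # gradient of ||U^T U - I||_F^2 with respect to U is 4 U (U^T U - I)
    G = U.T @ U - np.eye(U.shape[1])
    return 4 * U @ G

def regularized_step(U, V, grad_U, grad_V, lr=1e-2, mu=1e-1):
    # grad_U, grad_V: task-loss gradients for the two factors
    U = U - lr * (grad_U + mu * orth_penalty_grad(U))
    V = V - lr * (grad_V + mu * orth_penalty_grad(V))
    return U, V
```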
1905.13178 | Better Future through AI: Avoiding Pitfalls and Guiding AI Towards its
Full Potential | Artificial Intelligence (AI) technology is rapidly changing many areas of society. While there is tremendous potential in this transition, there are several pitfalls as well. Using the history of computing and the world-wide web as a guide, in this article we identify those pitfalls and actions that lead AI development to its full potential. If done right, AI will be instrumental in achieving the goals we set for economy, society, and the world in general. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 133,021 |
2111.01275 | Recurrent neural network models for working memory of continuous
variables: activity manifolds, connectivity patterns, and dynamic codes | Many daily activities and psychophysical experiments involve keeping multiple items in working memory. When items take continuous values (e.g., orientation, contrast, length, loudness) they must be stored in a continuous structure of appropriate dimensions. We investigate how this structure is represented in neural circuits by training recurrent networks to report two previously shown stimulus orientations. We find the activity manifold for the two orientations resembles a Clifford torus. Although a Clifford and standard torus (the surface of a donut) are topologically equivalent, they have important functional differences. A Clifford torus treats the two orientations equally and keeps them in orthogonal subspaces, as demanded by the task, whereas a standard torus does not. We find and characterize the connectivity patterns that support the Clifford torus. Moreover, in addition to attractors that store information via persistent activity, our networks also use a dynamic code where units change their tuning to prevent new sensory input from overwriting the previously stored one. We argue that such dynamic codes are generally required whenever multiple inputs enter a memory system via shared connections. Finally, we apply our framework to a human psychophysics experiment in which subjects reported two remembered orientations. By varying the training conditions of the RNNs, we test and support the hypothesis that human behavior is a product of both neural noise and reliance on the more stable and behaviorally relevant memory of the ordinal relationship between the two orientations. This suggests that suitable inductive biases in RNNs are important for uncovering how the human brain implements working memory. Together, these results offer an understanding of the neural computations underlying a class of visual decoding tasks, bridging the scales from human behavior to synaptic connectivity. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 264,509 |
2106.05027 | Scientometric engineering: Exploring citation dynamics via arXiv eprints | Scholarly communications have been rapidly integrated into digitised and networked open ecosystems, where preprint servers have played a pivotal role in accelerating the knowledge transfer processes. However, quantitative evidence is scarce regarding how this paradigm shift beyond the traditional journal publication system has affected the dynamics of collective attention on science. To address this issue, we investigate the citation data of more than 1.5 million eprints on arXiv (https://arxiv.org/) and analyse the long-term citation trend for each discipline involved. We find that the typical growth and obsolescence patterns vary across disciplines, reflecting different publication and communication practices. The results provide unique evidence on the attention dynamics shaped by the research community today, including the dramatic growth and fast obsolescence of Computer Science eprints, which has not been captured in previous studies relying on the citation data of journal papers. Subsequently, we develop a quantitatively-and-temporally normalised citation index with an approximately normal distribution, which is useful for comparing citational attention across disciplines and time periods. Further, we derive a stochastic model consistent with the observed quantitative and temporal characteristics of citation growth and obsolescence. The findings and the developed framework open a new avenue for understanding the nature of citation dynamics. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | true | 239,947 |
1908.09156 | A framework for anomaly detection using language modeling, and its
applications to finance | In the finance sector, studies focused on anomaly detection are often associated with time-series and transactional data analytics. In this paper, we lay out the opportunities for applying anomaly and deviation detection methods to text corpora and challenges associated with them. We argue that language models that use distributional semantics can play a significant role in advancing these studies in novel directions, with new applications in risk identification, predictive modeling, and trend analysis. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 142,770 |
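One concrete way to realize this proposal is to score each document by language-model perplexity and flag outliers as deviations. A simple smoothed bigram model stands in for the distributional-semantics models the paper discusses, and the percentile threshold is illustrative.

```python
# Perplexity-based text anomaly scoring with an add-one smoothed bigram LM:
# documents the model finds surprising receive high perplexity.
import math
from collections import Counter

def train_bigram(corpus_tokens, vocab_size):
    uni, bi = Counter(), Counter()
    for doc in corpus_tokens:
        uni.update(doc)
        bi.update(zip(doc, doc[1:]))
    def logp(w1, w2):                             # add-one smoothed bigram log-prob
        return math.log((bi[(w1, w2)] + 1) / (uni[w1] + vocab_size))
    return logp

def perplexity(doc, logp):
    lp = sum(logp(a, b) for a, b in zip(doc, doc[1:]))
    return math.exp(-lp / max(1, len(doc) - 1))

# e.g., flag filings whose perplexity exceeds the 95th percentile
# of perplexities measured on the training corpus
```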
1811.06477 | Multi-cell LSTM Based Neural Language Model | Language models, being at the heart of many NLP problems, are always of great interest to researchers. Neural language models come with the advantage of distributed representations and long-range contexts. With dynamics that allow information to cycle within the network, the recurrent neural network (RNN) is an ideal paradigm for neural language modeling. The Long Short-Term Memory (LSTM) architecture solves the inadequacies of the standard RNN in modeling long-range contexts. In spite of a plethora of RNN variants, the possibility of adding multiple memory cells to LSTM nodes has seldom been explored. Here we propose a multi-cell node architecture for LSTMs and study its applicability to neural language modeling. The proposed multi-cell LSTM language models outperform the state-of-the-art results on the well-known Penn Treebank (PTB) setup. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | 113,535
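A minimal rendering of the multi-cell idea: one LSTM node keeps m memory cells instead of one. The shared gating and the averaging used to combine cells below are assumptions based only on the abstract, not the paper's exact design.

```python
# Multi-cell LSTM step: m memory cells per node share forget/input/output
# gates; the hidden state is read from the mean of the updated cells.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multicell_lstm_step(x, h, C, Wg, Wc):
    # x: (dx,) input; h: (dh,) hidden; C: (m, dh) holds m memory cells
    # Wg: (3*dh, dx+dh) gate weights; Wc: (dh, dx+dh) candidate weights
    z = np.concatenate([x, h])
    f, i, o = np.split(sigmoid(Wg @ z), 3)        # shared gates, each (dh,)
    C_new = f * C + i * np.tanh(Wc @ z)           # broadcast update of all m cells
    h_new = o * np.tanh(C_new.mean(axis=0))       # combine cells by averaging
    return h_new, C_new
```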
2312.09944 | Power Minimizing MEC Offloading with QoS Constraints over RIS-Empowered
Communications | This work lies at the intersection of two cutting edge technologies envisioned to proliferate in future 6G wireless systems: Multi-access Edge Computing (MEC) and Reconfigurable Intelligent Surfaces (RISs). While the former will bring a powerful information technology environment at the wireless edge, the latter will enhance communication performance, thanks to the possibility of adapting wireless propagation as per end users' convenience, according to specific service requirements. We propose a joint optimization of radio, computing, and wireless environment reconfiguration through an RIS, with the goal of enabling low power computation offloading services with reliability guarantees. Going beyond previous works on this topic, multi-carrier frequency selective RIS elements' responses and wireless channels are considered. This opens new challenges in RIS optimization, accounting for frequency dependent RIS response profiles, which strongly affect RIS-aided wireless links and, as a consequence, MEC service performance. We formulate an optimization problem accounting for short and long-term constraints involving device transmit power allocation across multiple subcarriers and local computing resources, as well as RIS reconfiguration parameters according to a recently developed Lorentzian model. Besides a theoretical optimization framework, numerical results show the effectiveness of the proposed method in enabling low power reliable computation offloading over RIS-aided frequency selective channels. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 415,941 |
2204.01411 | Computer-Aided Extraction of Select MRI Markers of Cerebral Small Vessel
Disease: A Systematic Review | Cerebral small vessel disease (CSVD) is a major vascular contributor to cognitive impairment in ageing, including dementias. Imaging remains the most promising method for in vivo studies of CSVD. To replace the subjective and laborious visual rating approaches, emerging studies have applied state-of-the-art artificial intelligence to extract imaging biomarkers of CSVD from MRI scans. We aimed to summarise published computer-aided methods to examine three imaging biomarkers of CSVD, namely cerebral microbleeds (CMB), dilated perivascular spaces (PVS), and lacunes of presumed vascular origin. Seventy-one classical image processing, classical machine learning, and deep learning studies were identified. CMB and PVS have been better studied, compared to lacunes. While good performance metrics have been achieved in local test datasets, there have not been generalisable pipelines validated in different research or clinical cohorts. Transfer learning and weak supervision techniques have been applied to accommodate the limitations in training data. Future studies could consider pooling data from multiple sources to increase diversity, and validating the performance of the methods using both image processing metrics and associations with clinical measures. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 289,596 |
2211.06883 | Generalizing distribution of partial rewards for multi-armed bandits
with temporally-partitioned rewards | We investigate the Multi-Armed Bandit problem with Temporally-Partitioned Rewards (TP-MAB) in this paper. In the TP-MAB setting, an agent receives subsets of the reward over multiple rounds rather than the entire reward for the arm all at once. We introduce a general formulation of how an arm's cumulative reward is distributed across several rounds, called the Beta-spread property. Such a generalization is needed to handle partitioned rewards in which the maximum reward per round is not distributed uniformly across rounds. We derive a lower bound on the TP-MAB problem under the assumption that the Beta-spread property holds. Moreover, we provide an algorithm, TP-UCB-FR-G, which uses the Beta-spread property to improve the regret upper bound in some scenarios. By generalizing how the cumulative reward is distributed, this setting becomes applicable to a broader range of applications. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 330,057
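To make the setting concrete, here is a generic UCB-style index over partial rewards: the index uses the reward observed so far (possibly still incomplete for ongoing pulls) plus an exploration bonus. The exact TP-UCB-FR-G confidence term derived from the Beta-spread property is not reproduced; this is only a sketch of the arm-selection pattern.

```python
# UCB-style arm selection with temporally-partitioned rewards: means are
# computed from cumulative partial sums, and unexplored arms are forced.
import math

def select_arm(partial_sums, pulls, t, c=2.0):
    # partial_sums[a]: cumulative (possibly incomplete) reward of arm a
    # pulls[a]: number of times arm a has been pulled; t: current round
    idx = []
    for a in range(len(pulls)):
        if pulls[a] == 0:
            idx.append(float("inf"))              # force initial exploration
        else:
            mean = partial_sums[a] / pulls[a]
            idx.append(mean + math.sqrt(c * math.log(t) / pulls[a]))
    return max(range(len(idx)), key=idx.__getitem__)
```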
1203.6027 | Causal State Communication | The problem of state communication over a discrete memoryless channel with discrete memoryless state is studied when the state information is available strictly causally at the encoder. It is shown that block Markov encoding, in which the encoder communicates a description of the state sequence in the previous block by incorporating side information about the state sequence at the decoder, yields the minimum state estimation error. When the same channel is used to send additional independent information at the expense of a higher channel state estimation error, the optimal tradeoff between the rate of the independent information and the state estimation error is characterized via the capacity- distortion function. It is shown that any optimal tradeoff pair can be achieved via rate-splitting. These coding theorems are then extended optimally to the case of causal channel state information at the encoder using the Shannon strategy. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 15,141 |
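The tradeoff this abstract characterizes can be stated through the capacity-distortion function. The definition below is a generic form consistent with the description (the supremum of achievable rates under an average-distortion constraint on the decoder's state estimate); notation is illustrative and not a verbatim quote of the paper's theorem.

```latex
% Capacity-distortion function for state communication: the largest message
% rate achievable while the decoder reconstructs the state sequence within
% average distortion D under a per-letter distortion measure d.
\[
  C(D) \;=\; \sup\bigl\{\, R \;:\; (R, D)\ \text{is achievable} \,\bigr\},
  \qquad
  \limsup_{n \to \infty}\,
  \mathbb{E}\!\left[\frac{1}{n}\sum_{i=1}^{n} d\bigl(S_i, \hat{S}_i\bigr)\right]
  \;\le\; D .
\]
```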
2408.16647 | DriveGenVLM: Real-world Video Generation for Vision Language Model based
Autonomous Driving | The advancement of autonomous driving technologies necessitates increasingly sophisticated methods for understanding and predicting real-world scenarios. Vision language models (VLMs) are emerging as revolutionary tools with significant potential to influence autonomous driving. In this paper, we propose the DriveGenVLM framework to generate driving videos and use VLMs to understand them. To achieve this, we employ a video generation framework grounded in denoising diffusion probabilistic models (DDPM) aimed at predicting real-world video sequences. We then explore the adequacy of our generated videos for use in VLMs by employing a pre-trained model known as Efficient In-context Learning on Egocentric Videos (EILEV). The diffusion model is trained with the Waymo open dataset and evaluated using the Fr\'echet Video Distance (FVD) score to ensure the quality and realism of the generated videos. Corresponding narrations are provided by EILEV for these generated videos, which may be beneficial in the autonomous driving domain. These narrations can enhance traffic scene understanding, aid in navigation, and improve planning capabilities. The integration of video generation with VLMs in the DriveGenVLM framework represents a significant step forward in leveraging advanced AI models to address complex challenges in autonomous driving. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 484,405 |