Dataset preview (first ten rows). The `abstract`, `embedding_str`, and `embedding` cells are truncated in the viewer, so only the fully visible columns are reproduced here.

Schema: `date_created` (timestamp[ns]), `abstract` (string), `title` (string), `categories` (string), `arxiv_id` (string), `year` (int32), `embedding_str` (string), `embedding` (sequence), `data_map` (sequence).

| date_created | title | categories | arxiv_id | year | data_map |
|---|---|---|---|---|---|
| 2007-04-01T13:06:50 | Intelligent location of simultaneously active acoustic emission sources: Part I | cs.NE cs.AI | 0704.0047 | 2007 | [9.242573738098145, 2.837263584136963] |
| 2007-04-01T18:53:13 | Intelligent location of simultaneously active acoustic emission sources: Part II | cs.NE cs.AI | 0704.0050 | 2007 | [9.297958374023438, 2.8726723194122314] |
| 2007-04-03T02:08:48 | The World as Evolving Information | cs.IT cs.AI math.IT q-bio.PE | 0704.0304 | 2007 | [3.5662941932678223, 10.24143123626709] |
| 2007-04-05T02:57:15 | Learning from compressed observations | cs.IT cs.LG math.IT | 0704.0671 | 2007 | [-0.9903357625007629, 7.9114227294921875] |
| 2007-04-06T21:58:52 | Sensor Networks with Random Links: Topology Design for Distributed Consensus | cs.IT cs.LG math.IT | 0704.0954 | 2007 | [-2.087822675704956, 9.253089904785156] |
| 2007-04-07T13:40:49 | Architecture for Pseudo Acausal Evolvable Embedded Systems | cs.NE cs.AI | 0704.0985 | 2007 | [1.2674580812454224, 10.50879955291748] |
| 2007-04-08T10:15:54 | The on-line shortest path problem under partial monitoring | cs.LG cs.SC | 0704.1020 | 2007 | [0.9680965542793274, 11.29276180267334] |
| 2007-04-08T17:36:00 | A neural network approach to ordinal regression | cs.LG cs.AI cs.NE | 0704.1028 | 2007 | [1.0912777185440063, 6.977015972137451] |
| 2007-04-09T17:52:17 | High-dimensional variable selection | math.ST stat.ML stat.TH | 0704.1139 | 2007 | [-1.2605152130126953, 7.261412620544434] |
| 2007-04-09T22:02:29 | Novelty and Collective Attention | cs.CY cs.IR physics.soc-ph | 0704.1158 | 2007 | [7.0756731033325195, 10.080440521240234] |
# Dataset Card for "arxiv_ml"

## Dataset Description

### Dataset Summary
This is a dataset of titles and abstracts of machine learning related papers from ArXiv. The data is derived from the ArXiv dataset available on Kaggle. Papers were selected by taking every paper tagged with at least one category from the set {"cs.LG", "cs.AI", "cs.CL", "stat.ML", "cs.IR", "cs.NE", "cs.SC"}. To supplement the titles and abstracts, the creation time of each paper and its category tags are also provided. To make exploration easier, embeddings of the title and abstract have been generated with the Nomic-embed-v2-moe text embedding model, and a 2D representation produced with UMAP is also included.
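As an illustration, the selection rule described above can be sketched in plain Python. This is not the actual subsetting code used to build the dataset; the helper name `is_ml_paper` and the sample records are hypothetical, but the tag set and the space-separated format of the `categories` column match the dataset.

```python
# Category tags that define the arxiv_ml subset, as described above.
ML_CATEGORIES = {"cs.LG", "cs.AI", "cs.CL", "stat.ML", "cs.IR", "cs.NE", "cs.SC"}

def is_ml_paper(categories: str) -> bool:
    """Return True if any of the paper's space-separated category tags
    falls in the ML-related set used to build this dataset."""
    return not ML_CATEGORIES.isdisjoint(categories.split())

# Hypothetical records mirroring the format of the `categories` column.
papers = [
    {"id": "paper-a", "categories": "cs.NE cs.AI"},
    {"id": "paper-b", "categories": "cs.IT math.IT q-bio.PE"},
]
subset = [p["id"] for p in papers if is_ml_paper(p["categories"])]
print(subset)  # → ['paper-a'] — paper-b has no tag in the ML set
```

A paper is kept if *any* of its tags is in the set, so multi-category papers such as those tagged `cs.IT cs.LG math.IT` in the preview above are included via their single ML-related tag.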
### Supported Tasks
This dataset is primarily aimed at tasks such as topic modelling, corpus triage, search and information retrieval, and other NLP tasks.
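For search and information retrieval, the `embedding` column can be compared by cosine similarity. A minimal NumPy sketch, using made-up low-dimensional vectors in place of the dataset's much higher-dimensional embeddings (the function name `cosine_top_k` is ours, not part of any library):

```python
import numpy as np

def cosine_top_k(query: np.ndarray, corpus: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k corpus rows most similar to the query vector."""
    corpus_n = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    query_n = query / np.linalg.norm(query)
    scores = corpus_n @ query_n          # cosine similarity against every row
    return np.argsort(-scores)[:k]       # highest-scoring rows first

# Toy 4-dimensional "embeddings" standing in for the dataset's vectors.
corpus = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.9, 0.1, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
])
query = np.array([1.0, 0.05, 0.0, 0.0])
print(cosine_top_k(query, corpus, k=2))  # → [0 1], the two nearest rows
```

The same pattern applies unchanged to the real `embedding` vectors: embed a query with the same model, then rank rows by cosine similarity.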
### Languages

The dataset is primarily in English, although other languages may also be present.
## Dataset Creation

### Curation Rationale
The full ArXiv dataset is too large for many tasks. Subsetting to a selection of ArXiv categories related to AI and ML yields a reasonably sized dataset whose topics should mostly be familiar to those wishing to use it.
### Source Data

This data is derived from the ArXiv dataset available on Kaggle.
### Personal and Sensitive Information

This dataset contains publicly published information that was available under a CC0: Public Domain license via Kaggle. There should be no personal or sensitive information in this dataset. If you find any, please contact the maintainer and we will endeavour to remedy the issue.
## Additional Information

### Dataset Curators

Leland McInnes for the curated subset; Cornell University for the initial full dataset.
### Licensing Information

Licensed as CC0: Public Domain.