| Column | Type | Min | Max |
| --- | --- | --- | --- |
| Id | stringlengths | 1 | 6 |
| PostTypeId | stringclasses | 6 values | |
| AcceptedAnswerId | stringlengths | 2 | 6 |
| ParentId | stringlengths | 1 | 6 |
| Score | stringlengths | 1 | 3 |
| ViewCount | stringlengths | 1 | 6 |
| Body | stringlengths | 0 | 32.5k |
| Title | stringlengths | 15 | 150 |
| ContentLicense | stringclasses | 2 values | |
| FavoriteCount | stringclasses | 2 values | |
| CreationDate | stringlengths | 23 | 23 |
| LastActivityDate | stringlengths | 23 | 23 |
| LastEditDate | stringlengths | 23 | 23 |
| LastEditorUserId | stringlengths | 1 | 6 |
| OwnerUserId | stringlengths | 1 | 6 |
| Tags | list | | |
125
1
126
null
1
770
I want to learn about NoSQL and when it is better to use SQL or NoSQL. I know that this depends on the case, but I'm asking for good documentation on NoSQL, and some explanation of when it is better to use SQL or NoSQL (use cases, etc.). Also, your opinions on NoSQL databases, and any recommendations for learning abou...
How to learn NoSQL databases and how to know when SQL or NoSQL is better
CC BY-SA 3.0
null
2014-05-17T21:52:34.563
2021-03-15T21:14:08.800
2014-05-18T16:10:50.820
84
null
[ "nosql" ]
126
2
null
125
2
null
Please have a look at my answer here: [Motivations for using relational database / ORM or document database / ODM](https://stackoverflow.com/questions/13528216/motivations-for-using-relational-database-orm-or-document-database-odm/13599767#13599767) Short version: - Use NoSQL when data size and number of transactions...
null
CC BY-SA 3.0
null
2014-05-17T23:53:42.700
2016-04-11T22:08:14.620
2017-05-23T12:38:53.587
-1
26
null
128
1
296
null
62
31257
[Latent Dirichlet Allocation (LDA)](http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation) and [Hierarchical Dirichlet Process (HDP)](http://en.wikipedia.org/wiki/Hierarchical_Dirichlet_process) are both topic modeling processes. The major difference is LDA requires the specification of the number of topics, and HDP...
Latent Dirichlet Allocation vs Hierarchical Dirichlet Process
CC BY-SA 3.0
null
2014-05-18T06:10:52.543
2021-02-04T09:10:56.807
2014-05-20T13:45:59.373
84
122
[ "nlp", "topic-model", "lda" ]
129
1
166
null
10
1581
[This question](https://stackoverflow.com/questions/879432/what-is-the-difference-between-a-generative-and-discriminative-algorithm) asks about generative vs. discriminative algorithms, but can someone give an example of the difference between these forms when applied to Natural Language Processing? How are generative a...
What are generative and discriminative models? How are they used in Natural Language Processing?
CC BY-SA 3.0
null
2014-05-18T06:17:37.587
2014-05-19T11:13:48.067
2017-05-23T12:38:53.587
-1
122
[ "nlp", "language-model" ]
130
1
132
null
70
39376
From wikipedia: > dimensionality reduction or dimension reduction is the process of reducing the number of random variables under consideration, and can be divided into feature selection and feature extraction. What is the difference between feature selection and feature extraction? What is an example of dimensional...
What is dimensionality reduction? What is the difference between feature selection and extraction?
CC BY-SA 4.0
null
2014-05-18T06:26:15.673
2021-02-07T20:32:33.683
2021-02-07T20:32:33.683
29169
122
[ "feature-selection", "feature-extraction", "dimensionality-reduction" ]
131
2
null
115
9
null
Look it up on: - Google Scholar link - Citeseer link If you get a single exact title match then you have probably found the right article, and can fill in the rest of the info from there. Both give you download links and bibtex-style output. What you would likely want to do though to get perfect metadata is downl...
null
CC BY-SA 3.0
null
2014-05-18T06:54:08.560
2014-05-18T06:54:08.560
null
null
26
null
132
2
null
130
57
null
Simply put: - feature selection: you select a subset of the original feature set; while - feature extraction: you build a new set of features from the original feature set. Examples of feature extraction: extraction of contours in images, extraction of digrams from a text, extraction of phonemes from recording of...
null
CC BY-SA 3.0
null
2014-05-18T07:53:58.203
2014-05-18T07:53:58.203
null
null
172
null
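The distinction drawn in answer 132 above can be sketched in a few lines of Python (NumPy assumed; the toy matrix is invented): selection keeps a subset of the original columns, while extraction — here a one-component PCA via SVD — builds new features as combinations of the old ones.

```python
import numpy as np

# Toy data: 6 samples, 3 original features (made up for illustration).
X = np.array([[1.0, 2.0, 0.1],
              [2.0, 4.1, 0.0],
              [3.0, 6.0, 0.2],
              [4.0, 8.2, 0.1],
              [5.0, 9.9, 0.0],
              [6.0, 12.1, 0.2]])

# Feature selection: keep a subset of the original columns.
selected = X[:, [0, 1]]          # still the original features

# Feature extraction (PCA): build new features as linear
# combinations of the original ones.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
extracted = Xc @ Vt[:1].T        # one new feature per sample

print(selected.shape, extracted.shape)  # (6, 2) (6, 1)
```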
133
2
null
103
11
null
- I think a number of clustering algorithms that normally use a metric, do not actually rely on the metric properties (other than commutativity, but I think you'd have that here). For example, DBSCAN uses epsilon-neighborhoods around a point; there is nothing in there that specifically says the triangle inequality ma...
null
CC BY-SA 3.0
null
2014-05-18T09:09:47.780
2014-05-18T09:17:15.557
2014-05-18T09:17:15.557
26
26
null
134
1
246
null
11
703
In our company, we have a MongoDB database containing a lot of unstructured data, on which we need to run map-reduce algorithms to generate reports and other analyses. We have two approaches to select from for implementing the required analyses: - One approach is to extract the data from MongoDB to a Hadoop cluster an...
Can map-reduce algorithms written for MongoDB be ported to Hadoop later?
CC BY-SA 3.0
null
2014-05-18T12:03:21.650
2014-06-10T02:42:02.050
null
null
227
[ "scalability", "apache-hadoop", "map-reduce", "mongodb" ]
135
2
null
41
9
null
Considering another criterion, I think that in some cases using Python may be much superior to R for big data. I know the widespread use of R in data science educational materials and the good data analysis libraries available for it, but sometimes it just depends on the team. In my experience, for people already famili...
null
CC BY-SA 3.0
null
2014-05-18T12:30:06.853
2014-05-19T08:13:05.037
2014-05-19T08:13:05.037
227
227
null
136
5
null
null
0
null
MapReduce is a framework for processing parallelizable problems across huge datasets using a large number of computers (nodes), collectively referred to as a cluster (if all nodes are on the same local network and use similar hardware) or a grid (if the nodes are shared across geographically and administratively distri...
null
CC BY-SA 4.0
null
2014-05-18T12:55:39.657
2021-02-10T14:59:02.167
2021-02-10T14:59:02.167
111611
227
null
137
4
null
null
0
null
MapReduce is a programming model for processing large data sets with a parallel, distributed algorithm on a cluster.
null
CC BY-SA 3.0
null
2014-05-18T12:55:39.657
2014-05-20T13:53:39.727
2014-05-20T13:53:39.727
227
227
null
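The programming model described in the two tag-wiki entries above can be illustrated with a self-contained word-count sketch (function names are illustrative; a real framework such as Hadoop handles the shuffle and the distribution across nodes for you):

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit (word, 1) pairs from each input record.
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group values by key (done by the framework).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine each key's values into one result.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big cluster", "data science"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'big': 2, 'data': 2, 'cluster': 1, 'science': 1}
```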
138
1
251
null
10
223
Any small database processing task can be easily tackled by Python/Perl/... scripts that use libraries and/or even utilities from the language itself. However, when it comes to performance, people tend to reach for C/C++/low-level languages. The possibility of tailoring the code to the needs seems to be what makes the...
Why is it hard to grant efficiency while using libraries?
CC BY-SA 3.0
null
2014-05-18T14:02:51.350
2014-06-10T05:57:13.897
2014-05-29T15:01:17.840
84
84
[ "bigdata", "efficiency", "performance" ]
139
2
null
76
3
null
I'd suggest [Apache Kafka](http://kafka.apache.org) as a message store and any stream processing solution of your choice, like [Apache Camel](https://camel.apache.org) or [Twitter Storm](https://github.com/apache/incubator-storm).
null
CC BY-SA 3.0
null
2014-05-18T14:04:37.870
2014-05-18T14:04:37.870
null
null
118
null
140
2
null
107
4
null
I've read a very good [article](http://www.michael-noll.com/blog/2013/01/18/implementing-real-time-trending-topics-in-storm/) recently that suggests using [Twitter Storm](https://github.com/nathanmarz/storm) for a task that looks pretty similar to yours.
null
CC BY-SA 3.0
null
2014-05-18T14:30:10.553
2014-05-18T14:30:10.553
null
null
118
null
141
5
null
null
0
null
null
CC BY-SA 3.0
null
2014-05-18T14:36:16.350
2014-05-18T14:36:16.350
2014-05-18T14:36:16.350
-1
-1
null
142
4
null
null
0
null
Efficiency, in algorithmic processing, is usually associated with resource usage. The metrics used to evaluate the efficiency of a process commonly account for execution time, memory/disk or storage requirements, network usage, and power consumption.
null
CC BY-SA 3.0
null
2014-05-18T14:36:16.350
2014-05-20T13:51:49.240
2014-05-20T13:51:49.240
84
84
null
143
1
165
null
10
1015
As we all know, there are data indexing techniques used by well-known indexing applications, like Lucene (for Java) or Lucene.NET (for .NET), MurMurHash, B+Tree, etc. For a NoSQL / object-oriented database (which I am trying to write/play around with a little in C#), which technique do you suggest? I read about MurMurhash-2 and spe...
What is the most efficient data indexing technique
CC BY-SA 3.0
null
2014-05-18T14:37:20.477
2014-05-19T12:05:13.513
2014-05-19T12:05:13.513
229
229
[ "nosql", "efficiency", "indexing", "data-indexing-techniques", ".net" ]
144
5
null
null
0
null
[Cluster analysis](http://en.wikipedia.org/wiki/Cluster_analysis) is the task of grouping objects into subsets (called clusters) so that observations in the same cluster are similar in some sense, while observations in different clusters are dissimilar. In [machine-learning](/questions/tagged/machine-learning) and [dat...
null
CC BY-SA 3.0
null
2014-05-18T14:58:34.853
2018-01-01T18:56:53.570
2018-01-01T18:56:53.570
29575
29575
null
145
4
null
null
0
null
Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense or another) to each other than to those in other groups (clusters). It is a main task of exploratory data mining, and a common technique for statistical...
null
CC BY-SA 3.0
null
2014-05-18T14:58:34.853
2014-05-20T13:53:26.907
2014-05-20T13:53:26.907
118
118
null
146
5
null
null
0
null
Natural language processing (NLP) is a subfield of artificial intelligence that involves transforming or extracting useful information from natural language data. Methods include machine-learning and rule-based approaches. It is often regarded as the engineering arm of Computational Linguistics. NLP tasks - Text pre-p...
null
CC BY-SA 3.0
null
2014-05-18T15:01:24.080
2018-04-10T01:03:28.253
2018-04-10T01:03:28.253
21163
29575
null
147
4
null
null
0
null
Natural language processing (NLP) is a field of computer science, artificial intelligence, and linguistics concerned with the interactions between computers and human (natural) languages. As such, NLP is related to the area of human–computer interaction. Many challenges in NLP involve natural language understanding, th...
null
CC BY-SA 3.0
null
2014-05-18T15:01:24.080
2014-05-20T13:52:59.427
2014-05-20T13:52:59.427
118
118
null
148
5
null
null
0
null
In computer science, data is the most important part. To work with data in an efficient manner (storing it on media and reusing it), we use a set of techniques generally called indexing. This tag covers these techniques, their efficiency, and anything about using stored data with indexin...
null
CC BY-SA 3.0
null
2014-05-18T15:08:08.913
2014-05-20T13:53:31.717
2014-05-20T13:53:31.717
229
229
null
149
4
null
null
0
null
In computer science, data is the most important part. To work with data in an efficient manner (storing it on media and reusing it), we use a set of techniques generally called indexing. This tag covers these techniques, their efficiency, and anything about using stored data with ind...
null
CC BY-SA 3.0
null
2014-05-18T15:08:08.913
2014-05-20T13:52:19.333
2014-05-20T13:52:19.333
229
229
null
151
5
null
null
0
null
Indexing is almost the most important part of working with data, enabling efficient, proper storage and retrieval of data from media. In different programming languages, different indexing algorithms and structures can be found. As an example, in the Java language, the Apache Foundation's Lucene is very popular. In years there...
null
CC BY-SA 3.0
null
2014-05-18T15:34:16.437
2014-05-20T13:48:12.163
2014-05-20T13:48:12.163
229
229
null
152
4
null
null
0
null
Indexing is almost the most important part of working with data, enabling efficient, proper storage and retrieval of data from media.
null
CC BY-SA 3.0
null
2014-05-18T15:34:16.437
2014-05-20T13:53:17.567
2014-05-20T13:53:17.567
229
229
null
153
2
null
81
3
null
The answers presented so far are very nice, but I was also expecting an emphasis on a particular difference between parallel and distributed processing: the code executed. Considering parallel processes, the code executed is the same, regardless of the level of parallelism (instruction, data, task). You write a single ...
null
CC BY-SA 3.0
null
2014-05-18T17:38:01.383
2014-05-18T17:38:01.383
null
null
84
null
154
2
null
125
3
null
Check [Martin Fowler's personal website](http://www.martinfowler.com). He writes well and specifically answers one of your questions in his book "NoSQL Distilled".
null
CC BY-SA 4.0
null
2014-05-18T17:53:37.750
2021-03-15T21:14:08.800
2021-03-15T21:14:08.800
29169
229
null
155
1
158
null
201
32456
One of the common problems in data science is gathering data from various sources in a somehow cleaned (semi-structured) format and combining metrics from various sources for making a higher level analysis. Looking at the other people's effort, especially other questions on this site, it appears that many people in thi...
Publicly Available Datasets
CC BY-SA 3.0
null
2014-05-18T18:45:38.957
2022-07-01T05:57:50.363
2016-12-05T22:33:53.380
26596
227
[ "open-source", "dataset" ]
156
2
null
155
38
null
[Freebase](https://www.freebase.com) is a free, community-driven database that spans many interesting topics and contains about 2.5 billion facts in machine-readable format. It also has a good API for performing data queries. [Here](http://www.datapure.co/open-data-sets) is another compiled list of open data sets.
null
CC BY-SA 4.0
null
2014-05-18T19:19:44.240
2021-07-08T20:42:25.690
2021-07-08T20:42:25.690
120060
118
null
157
2
null
41
12
null
R is great for "big data"! However, you need a workflow since R is limited (with some simplification) by the amount of RAM in the operating system. The approach I take is to interact with a relational database (see the `RSQLite` package for creating and interacting with a SQLite database), run SQL-style queries to under...
null
CC BY-SA 3.0
null
2014-05-18T19:22:05.160
2014-05-18T19:22:05.160
null
null
36
null
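Answer 157 describes this workflow in R with `RSQLite`; the same pattern — stage the large table in a database and pull only query results into memory — looks like this in Python's standard-library `sqlite3` (the table and values below are made up for illustration):

```python
import sqlite3

# Stage the large table in SQLite, then pull only the slice
# you need into memory for analysis.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user_id INTEGER, amount REAL)")
con.executemany("INSERT INTO events VALUES (?, ?)",
                [(1, 10.0), (1, 5.0), (2, 7.5), (3, 2.5)])

# SQL-style query to understand/aggregate before loading into RAM.
rows = con.execute(
    "SELECT user_id, SUM(amount) FROM events "
    "GROUP BY user_id ORDER BY user_id"
).fetchall()
print(rows)  # [(1, 15.0), (2, 7.5), (3, 2.5)]
```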
158
2
null
155
111
null
There is, in fact, a very reasonable list of publicly-available datasets, supported by different enterprises/sources. Some of them are below: - Public Datasets on Amazon WebServices; - Frequent Itemset Mining Implementation Repository; - UCI Machine Learning Repository; - KDnuggets -- a big list of lots of public ...
null
CC BY-SA 3.0
null
2014-05-18T19:29:53.530
2016-11-29T04:04:46.463
2016-11-29T04:04:46.463
26596
84
null
159
1
160
null
6
558
I see a lot of courses in Data Science emerging in the last 2 years. Even big universities like Stanford and Columbia offer an MS specifically in Data Science. But as far as I can see, it looks like data science is just a mix of computer science and statistics techniques. So I always think about this. If it is just a trend ...
Is Data Science just a trend or is it a long-term concept?
CC BY-SA 3.0
null
2014-05-18T19:46:44.653
2014-05-18T21:05:28.990
null
null
199
[ "bigdata", "machine-learning", "databases", "statistics", "education" ]
160
2
null
159
12
null
The one thing that you can say for sure is: nobody can say this for sure. And it might indeed be opinion-based to some extent. The introduction of terms like "Big Data" that some people consider "hypes" or "buzzwords" doesn't make it easier to flesh out an appropriate answer here. But I'll try. In general, interdisci...
null
CC BY-SA 3.0
null
2014-05-18T21:05:28.990
2014-05-18T21:05:28.990
null
null
156
null
161
2
null
138
7
null
I don't think that everyone reaches for C/C++ when performance is an issue. The advantage to writing low-level code is using fewer CPU cycles, or sometimes, less memory. But I'd note that higher-level languages can call down to lower-level languages, and do, to get some of this value. Python and JVM languages can do th...
null
CC BY-SA 3.0
null
2014-05-18T21:25:29.040
2014-05-18T21:25:29.040
null
null
21
null
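The point in answer 161 — high-level languages get low-level speed by calling down into compiled code — is easy to check: NumPy's vectorized operations run in C, while the pure-Python loop below pays interpreter overhead per element (NumPy assumed; the array size is arbitrary):

```python
import timeit
import numpy as np

x = list(range(100_000))
xa = np.array(x, dtype=np.int64)

# Same computation, two implementations.
py_sum = sum(v * v for v in x)   # interpreted Python loop
np_sum = int((xa * xa).sum())    # loop runs in C inside NumPy

assert py_sum == np_sum          # identical result
loop_t = timeit.timeit(lambda: sum(v * v for v in x), number=5)
vec_t = timeit.timeit(lambda: (xa * xa).sum(), number=5)
print(f"pure Python: {loop_t:.3f}s  NumPy: {vec_t:.3f}s")
```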
162
2
null
155
44
null
There are many openly available data sets, one many people often overlook is [data.gov](http://www.data.gov/). As mentioned previously Freebase is great, so are all the examples posted by @Rubens
null
CC BY-SA 3.0
null
2014-05-18T22:16:19.300
2014-05-18T22:16:19.300
null
null
59
null
163
2
null
138
4
null
As we all know, in the digital world there are many ways to do the same work / get the expected results, and the responsibilities/risks that come from the code are on developers' shoulders. This is a small but, I guess, very useful example from the .NET world. Many .NET developers use the built-in BinaryReader - BinaryWriter o...
null
CC BY-SA 3.0
null
2014-05-18T23:21:07.220
2014-05-18T23:29:39.227
2014-05-18T23:29:39.227
229
229
null
164
2
null
14
11
null
What @Clayton posted seems about right to me, for those terms, and for "data mining" being one tool of the data scientist. However, I haven't really used the term "data collection," and it doesn't strike me as synonymous with "data mining." My own answer to your question: no, the terms aren't the same. Definitions may ...
null
CC BY-SA 3.0
null
2014-05-19T00:09:17.900
2014-05-19T00:09:17.900
null
null
208
null
165
2
null
143
10
null
I think you mixed up some things in your question. Lucene (I know nothing about Lucene.NET, but I suppose it is the same) is a library used to analyze, split into tokens, and store documents in order to be able to query and retrieve them later. Lucene has a pretty old but effective model; it uses inverted trees to find and...
null
CC BY-SA 3.0
null
2014-05-19T08:46:47.683
2014-05-19T08:46:47.683
null
null
108
null
166
2
null
129
8
null
Let's say you are predicting the topic of a document given its words. A generative model describes how likely each topic is, and how likely words are given the topic. This is how it says documents are actually "generated" by the world -- a topic arises according to some distribution, words arise because of the topic, y...
null
CC BY-SA 3.0
null
2014-05-19T11:13:48.067
2014-05-19T11:13:48.067
null
null
21
null
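The generative story in answer 166 can be made concrete with a tiny Naive Bayes topic classifier in pure Python (all documents and counts below are invented): it models P(topic) and P(word | topic), then scores a document via Bayes' rule. A discriminative model would instead fit P(topic | document) directly.

```python
from collections import Counter, defaultdict
from math import log

# Generative view: model P(topic) and P(word | topic), then use
# Bayes' rule to score P(topic | document).
docs = [("sports", "ball goal team"), ("sports", "team win"),
        ("politics", "vote law"), ("politics", "vote team law")]

prior = Counter(t for t, _ in docs)
word_counts = defaultdict(Counter)
for topic, text in docs:
    word_counts[topic].update(text.split())

def score(topic, text, alpha=1.0):
    # log P(topic) + sum_w log P(w | topic), with add-one smoothing.
    vocab = {w for c in word_counts.values() for w in c}
    total = sum(word_counts[topic].values())
    s = log(prior[topic] / sum(prior.values()))
    for w in text.split():
        s += log((word_counts[topic][w] + alpha) / (total + alpha * len(vocab)))
    return s

best = max(prior, key=lambda t: score(t, "team goal"))
print(best)  # sports
```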
167
5
null
null
0
null
.NET is a very popular object-oriented programming language family. This family includes members such as C# (pronounced "C sharp"), VB.NET, F# (pronounced "F sharp"), J# (pronounced "J sharp"), and more. The .NET family offers programming with little effort along with the well-known high speed of compiled languages such as C and C++ Th...
null
CC BY-SA 3.0
null
2014-05-19T12:17:45.960
2014-05-20T13:52:50.373
2014-05-20T13:52:50.373
229
229
null
168
4
null
null
0
null
.NET is a very popular object-oriented programming language family, which includes members such as C# (pronounced "C sharp"), VB.NET, F# (pronounced "F sharp"), J# (pronounced "J sharp"), and more. The .NET family offers programming with little effort along with the well-known high speed of compiled languages such as C and C++ This ...
null
CC BY-SA 3.0
null
2014-05-19T12:17:45.960
2014-05-20T13:50:32.440
2014-05-20T13:50:32.440
229
229
null
169
1
170
null
15
5505
Assume a set of loosely structured data (e.g. Web tables/Linked Open Data), composed of many data sources. There is no common schema followed by the data and each source can use synonym attributes to describe the values (e.g. "nationality" vs "bornIn"). My goal is to find some "important" attributes that somehow "defi...
How to specify important attributes?
CC BY-SA 3.0
null
2014-05-19T15:55:24.983
2021-03-11T20:12:24.030
2015-05-18T13:30:46.940
113
113
[ "machine-learning", "statistics", "feature-selection" ]
170
2
null
169
16
null
A possible solution is to calculate the [information gain](http://en.wikipedia.org/wiki/Decision_tree_learning#Information_gain) associated to each attribute: $$I_{E}(f) = - \sum\limits_{i=1}^{m} f_i \log_2 f_i$$ Initially you have the whole dataset, and compute the information gain of each item. The item with the best i...
null
CC BY-SA 4.0
null
2014-05-19T18:08:32.327
2021-03-11T20:12:24.030
2021-03-11T20:12:24.030
29169
84
null
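A pure-Python sketch of the procedure in answer 170 (the toy rows mirror the question's "nationality" example and are invented): `entropy` implements the formula above with relative frequencies, and `information_gain` subtracts the weighted entropy after splitting on an attribute.

```python
from math import log2
from collections import Counter

def entropy(labels):
    # I_E(f) = -sum_i f_i * log2(f_i), with f_i a relative frequency.
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr, target):
    # Entropy of the target minus the weighted entropy after
    # splitting the dataset on `attr`.
    total = entropy([r[target] for r in rows])
    n = len(rows)
    for value in {r[attr] for r in rows}:
        subset = [r[target] for r in rows if r[attr] == value]
        total -= len(subset) / n * entropy(subset)
    return total

data = [
    {"nationality": "US", "famous": "yes"},
    {"nationality": "US", "famous": "yes"},
    {"nationality": "FR", "famous": "no"},
    {"nationality": "FR", "famous": "yes"},
]
print(round(information_gain(data, "nationality", "famous"), 3))  # 0.311
```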
171
2
null
35
5
null
Two things you might find useful: - meta-learning to speed up the search for the right model and the optimal parameters. Meta-learning consists in applying machine learning tools to the problem of finding the right machine learning tool/parameters for the problem at hand. See for instance this paper for a practical ex...
null
CC BY-SA 3.0
null
2014-05-19T19:44:48.500
2014-05-19T19:44:48.500
null
null
172
null
172
1
null
null
27
19451
I have a modeling and scoring program that makes heavy use of the `DataFrame.isin` function of pandas, searching through lists of facebook "like" records of individual users for each of a few thousand specific pages. This is the most time-consuming part of the program, more so than the modeling or scoring pieces, simpl...
Is there a straightforward way to run pandas.DataFrame.isin in parallel?
CC BY-SA 3.0
null
2014-05-19T23:59:58.070
2020-08-02T12:40:19.397
2014-05-20T04:47:25.207
84
250
[ "performance", "python", "pandas", "parallel" ]
173
2
null
35
4
null
Guessing it's likely you've seen this [YouTube demo](http://www.youtube.com/watch?v=1GhNXHCQGsM) and the related [Google Tech Talk](http://www.youtube.com/watch?v=lmG_FjG4Dy8), which is related to these papers: - P-N Learning: Bootstrapping Binary Classifiers by Structural Constraints - Tracking-Learning-Detection ...
null
CC BY-SA 3.0
null
2014-05-20T03:56:43.147
2014-05-20T03:56:43.147
null
null
158
null
174
2
null
116
5
null
Another suggestion is to test [logistic regression](http://en.wikipedia.org/wiki/Logistic_regression). As an added bonus, the weights (coefficients) of the model will give you an idea of which sites are age-discriminant. Sklearn offers the [sklearn.linear_model.LogisticRegression](http://scikit-learn.org/stable/...
null
CC BY-SA 3.0
null
2014-05-20T09:24:30.697
2014-05-20T19:36:16.283
2014-05-20T19:36:16.283
172
172
null
175
1
null
null
3
62
I have data coming from a source system that is pipe-delimited. Pipe was selected over comma since it was believed no pipes appeared in any field, while it was known that commas do occur. After ingesting this data into Hive, however, it has been discovered that, rarely, a field does in fact contain a pipe character. Due to a c...
Can metadata be used to adapt parsing for an unescaped in-field use of the delimiter?
CC BY-SA 3.0
null
2014-05-20T22:14:02.927
2014-05-21T07:19:32.297
null
null
249
[ "metadata", "parsing" ]
176
1
190
null
12
4786
I am looking for a library/tool to visualize how a social network changes when new nodes/edges are added to it. One of the existing solutions is [SoNIA: Social Network Image Animator](http://www.stanford.edu/group/sonia/). It lets you make movies like [this one](https://www.youtube.com/watch?v=yGSNCED6mDc). SoNIA's doc...
How to animate growth of a social network?
CC BY-SA 3.0
null
2014-05-21T05:29:36.787
2014-05-22T12:14:49.727
2014-05-21T05:51:58.330
173
173
[ "social-network-analysis", "time-series", "javascript", "visualization" ]
177
5
null
null
0
null
[MongoDB](https://www.mongodb.com) is a [widely used](https://www.mongodb.com/who-uses-mongodb), [general-purpose](https://www.mongodb.com/use-cases), [document-oriented NoSQL database](https://en.wikipedia.org/wiki/Document-oriented_database) with features including high-availability replication and auto-sharding for ...
null
CC BY-SA 4.0
null
2014-05-21T07:05:44.780
2019-04-08T13:45:37.843
2019-04-08T13:45:37.843
201
201
null
178
4
null
null
0
null
MongoDB is a scalable, high-performance, open source, document-oriented NoSQL database. It supports a large number of languages and application development platforms. Questions about server administration can be asked on http://dba.stackexchange.com.
null
CC BY-SA 4.0
null
2014-05-21T07:05:44.780
2019-04-08T17:28:18.497
2019-04-08T17:28:18.497
201
201
null
179
2
null
176
6
null
My first guess is to [visualize social network in Tableau](https://www.google.com/search?q=visualize%20social%20network%20in%20tableau). And particularly: [building network graphs in Tableau](http://www.clearlyandsimply.com/clearly_and_simply/2012/12/build-network-graphs-in-tableau.html). What you need is to add time d...
null
CC BY-SA 3.0
null
2014-05-21T07:09:20.093
2014-05-21T07:18:48.453
2014-05-21T07:18:48.453
97
97
null
180
5
null
null
0
null
Neo4j is an open-source, transactional, high-performance native graph database. Neo4j stores its data as a graph: nodes are connected through relationships, both with arbitrary properties. Neo4j features a graph-centric declarative query language called [Cypher](http://neo4j.com/docs/stable/cypher-refcard). Its drivers ...
null
CC BY-SA 4.0
null
2014-05-21T07:10:45.827
2019-04-08T13:45:45.567
2019-04-08T13:45:45.567
201
201
null
181
4
null
null
0
null
Neo4j is an open-source graph database (GDB) well suited to connected data. Please mention your exact version of Neo4j when asking questions. You can use it for recommendation engines, fraud detection, graph-based search, network ops/security, and many other use cases. The database is accessed via official drivers in...
null
CC BY-SA 4.0
null
2014-05-21T07:10:45.827
2019-04-08T13:45:18.490
2019-04-08T13:45:18.490
201
201
null
182
2
null
175
4
null
So, a few of your rows will have too many columns by one or more as a result. That's easy to detect, but harder to infer where the error was -- which two columns are actually one? which delimiter is not a delimiter? In some cases, you can use the metadata, because it helps you know when an interpretation of the columns...
null
CC BY-SA 3.0
null
2014-05-21T07:19:32.297
2014-05-21T07:19:32.297
null
null
21
null
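The idea in answer 182 — use the expected column count plus knowledge of which column holds free text to repair a row — can be sketched as follows (the `merge_at_text_col` heuristic and the example row are hypothetical, and the sketch handles only one extra delimiter):

```python
def parse_row(line, n_cols, merge_at_text_col):
    # When a row has one field too many, assume the extra '|'
    # sits inside the known free-text column and re-join the
    # two pieces around it.
    fields = line.split("|")
    if len(fields) == n_cols + 1:
        i = merge_at_text_col
        fields[i:i + 2] = ["|".join(fields[i:i + 2])]
    if len(fields) != n_cols:
        raise ValueError(f"cannot repair row: {line!r}")
    return fields

# 3 expected columns: id | comment (free text) | score
print(parse_row("42|a|b|7", 3, 1))  # ['42', 'a|b', '7']
```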
183
2
null
176
12
null
### Fancy animations are cool I was very impressed when I saw [this animation](http://youtu.be/T7Ncmq6scck) of the [discourse](http://www.discourse.org) git repository. They used [Gource](https://code.google.com/p/gource/), which is specifically for git. But it may give ideas about how to represent the dynamics of gr...
null
CC BY-SA 3.0
null
2014-05-21T10:53:26.870
2014-05-21T15:39:57.380
2020-06-16T11:08:43.077
-1
262
null
184
1
185
null
9
520
The details of the Google Prediction API are on this [page](https://developers.google.com/prediction/), but I am not able to find any details about the prediction algorithms running behind the API. So far I have gathered that they let you provide your preprocessing steps in PMML format.
Google Prediction API: What training/prediction methods does the Google Prediction API employ?
CC BY-SA 3.0
null
2014-05-21T11:22:34.657
2014-06-10T04:47:13.040
null
null
200
[ "tools" ]
185
2
null
184
6
null
If you take a look at the specification of PMML, which you can find [here](http://www.dmg.org/v3-0/GeneralStructure.html), you can see in the left menu what options you have (like TreeModel, NaiveBayes, neural nets, and so on).
null
CC BY-SA 3.0
null
2014-05-21T14:14:38.797
2014-05-21T14:14:38.797
null
null
108
null
186
1
187
null
9
345
I'm learning [Support Vector Machines](http://en.wikipedia.org/wiki/Support_vector_machine), and I'm unable to understand how a class label is chosen for a data point in a binary classifier. Is it chosen by consensus with respect to the classification in each dimension of the separating hyperplane?
Using SVM as a binary classifier, is the label for a data point chosen by consensus?
CC BY-SA 3.0
null
2014-05-21T15:12:18.980
2014-05-21T15:39:54.830
2014-05-21T15:26:02.533
84
133
[ "svm", "classification", "binary" ]
187
2
null
186
9
null
The term consensus, as far as I'm concerned, is used rather for cases when you have more than one source of metric/measure/choice from which to make a decision. And, in order to choose a possible result, you perform some average evaluation/consensus over the values available. This is not the case for SVM. The algorit...
null
CC BY-SA 3.0
null
2014-05-21T15:39:54.830
2014-05-21T15:39:54.830
null
null
84
null
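The point of answer 187 is that a linear SVM produces a single decision value, not a per-dimension vote: the label is just the sign of f(x) = w · x + b. A minimal sketch (the weights below are invented, not a trained model):

```python
def svm_label(x, w, b):
    # A linear SVM assigns the label from the sign of the single
    # decision function f(x) = w . x + b -- no voting across the
    # dimensions of the hyperplane is involved.
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

w, b = [0.5, -1.0], 0.25          # hypothetical trained parameters
print(svm_label([2.0, 0.5], w, b))   # 1
print(svm_label([0.0, 1.0], w, b))   # -1
```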
188
2
null
134
5
null
You can use map reduce algorithms in Hadoop without programming them in Java. It is called streaming and works like Linux piping. If you believe that you can port your functions to read and write to terminal, it should work nicely. [Here](http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-pyth...
null
CC BY-SA 3.0
null
2014-05-21T16:13:25.590
2014-05-21T16:13:25.590
null
null
82
null
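The streaming model in answer 188 can be simulated locally: Hadoop pipes input lines to a mapper's stdin, sorts the mapper's stdout by key, and pipes that to the reducer. A word-count sketch (stdin is faked here with `io.StringIO`; in a real streaming job you would read `sys.stdin`):

```python
import io

def mapper(stream):
    # Streaming mapper: read lines, emit tab-separated key/value pairs.
    for line in stream:
        for word in line.split():
            yield f"{word}\t1"

def reducer(sorted_pairs):
    # Streaming reducer: input arrives sorted by key, so equal keys
    # are adjacent and can be summed on the fly.
    current, total = None, 0
    for pair in sorted_pairs:
        key, value = pair.split("\t")
        if key != current:
            if current is not None:
                yield f"{current}\t{total}"
            current, total = key, 0
        total += int(value)
    if current is not None:
        yield f"{current}\t{total}"

stdin = io.StringIO("big data\nbig cluster\n")
result = list(reducer(sorted(mapper(stdin))))
print(result)  # ['big\t2', 'cluster\t1', 'data\t1']
```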
189
1
null
null
4
983
I'm currently using [General Algebraic Modeling System](http://en.wikipedia.org/wiki/General_Algebraic_Modeling_System) (GAMS), and more specifically CPLEX within GAMS, to solve a very large mixed integer programming problem. This allows me to parallelize the process over 4 cores (although I have more, CPLEX utilizes a...
Open source solver for large mixed integer programming task?
CC BY-SA 3.0
null
2014-05-21T19:41:19.857
2014-06-17T16:18:02.437
2014-06-17T16:18:02.437
84
151
[ "r", "open-source", "parallel", "optimization" ]
190
2
null
176
7
null
It turned out that this task was quite easy to accomplish using [vis.js](http://visjs.org/). [This](http://visjs.org/examples/graph/20_navigation.html) was the best example code which I have found. The example of what I have built upon this is [here](http://laboratoriumdanych.pl/jak-powstaje-siec/) (scroll to the botto...
null
CC BY-SA 3.0
null
2014-05-22T12:14:49.727
2014-05-22T12:14:49.727
null
null
173
null
191
1
194
null
8
1166
Can someone explain to me how to classify data like MNIST with an MLBP neural network if I use more than one output (e.g., 8)? If I just use one output I can easily classify the data, but if I use more than one, which output should I choose?
Multi-layer back-propagation neural network for classification
CC BY-SA 3.0
null
2014-05-22T13:36:24.120
2014-06-10T08:38:27.093
null
null
273
[ "neural-network" ]
192
1
null
null
8
522
The most popular use case seems to be recommender systems of different kinds (such as recommending shopping items, users in social networks, etc.). But what are other typical data science applications that may be used in different verticals? For example: customer churn prediction with machine learning, evaluating cus...
What are the most popular data science application use cases for consumer web companies
CC BY-SA 3.0
null
2014-05-22T15:15:41.133
2014-05-23T06:03:40.577
null
null
88
[ "usecase", "consumerweb" ]
193
2
null
192
4
null
It depends, of course, on the focus of the company: commerce, service, etc. In addition to the use cases you suggested, some other use cases would be: - Funnel analysis: Analyzing the way in which consumers use a website and complete a sale may include data science techniques, especially if the company operates at a l...
null
CC BY-SA 3.0
null
2014-05-22T15:43:57.160
2014-05-22T15:43:57.160
null
null
178
null
194
2
null
191
5
null
Suppose that you need to classify something into K classes, where K > 2. In this case the setup I most often use is one-hot encoding. You will have K output columns, and in the training set you will set all values to 0, except the one which has the category index, which could have value 1. Thus, for each training data se...
null
CC BY-SA 3.0
null
2014-05-22T19:20:14.130
2014-05-22T19:20:14.130
null
null
108
null
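The setup in answer 194, in a few lines (NumPy assumed; K = 8 mirrors the question): targets are one-hot vectors, and at prediction time the arg-max over the K output activations gives the class.

```python
import numpy as np

K = 8  # number of classes

def one_hot(label, k=K):
    # Target vector: all zeros except a 1 at the class index.
    y = np.zeros(k)
    y[label] = 1.0
    return y

def predict_class(outputs):
    # At prediction time, pick the output unit with the highest
    # activation as the class label.
    return int(np.argmax(outputs))

print(one_hot(3))  # [0. 0. 0. 1. 0. 0. 0. 0.]
print(predict_class([0.1, 0.05, 0.7, 0.05, 0.1, 0.0, 0.0, 0.0]))  # 2
```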
195
2
null
192
5
null
Satisfaction is a huge one that I run into a lot, huge referring to importance/difficulty/complexity. The bottom line is that for very large services (search engines, Facebook, LinkedIn, etc.) your users are simply a collection of log lines. You have little ability to solicit feedback from them (not a hard and fas...
null
CC BY-SA 3.0
null
2014-05-22T20:48:55.297
2014-05-23T06:03:40.577
2014-05-23T06:03:40.577
92
92
null
196
1
197
null
13
7379
So we have potential for a machine learning application that fits fairly neatly into the traditional problem domain solved by classifiers, i.e., we have a set of attributes describing an item and a "bucket" that they end up in. However, rather than create models of probabilities like in Naive Bayes or similar classifie...
Algorithm for generating classification rules
CC BY-SA 3.0
null
2014-05-22T21:47:26.980
2020-08-06T11:04:09.857
2014-05-23T03:27:20.630
84
275
[ "machine-learning", "classification" ]
197
2
null
196
10
null
C4.5, made by Quinlan, is able to produce rules for prediction. Check this [Wikipedia](http://en.wikipedia.org/wiki/C4.5_algorithm) page. I know that in [Weka](http://www.cs.waikato.ac.nz/~ml/weka/) its name is J48. I have no idea about implementations in R or Python. Anyway, from this kind of decision tree you should...
null
CC BY-SA 3.0
null
2014-05-22T21:54:05.660
2014-05-22T21:59:22.117
2014-05-22T21:59:22.117
108
108
null
198
2
null
192
5
null
Also, there seems to be a very comprehensive list of data science use cases, by function and by vertical, on Kaggle - ["Data Science Use Cases"](http://www.kaggle.com/wiki/DataScienceUseCases)
null
CC BY-SA 3.0
null
2014-05-23T03:05:57.990
2014-05-23T03:05:57.990
null
null
88
null
199
1
202
null
25
33962
LDA has two hyperparameters; tuning them changes the induced topics. What do the alpha and beta hyperparameters contribute to LDA? How do the topics change if one or the other hyperparameter increases or decreases? Why are they hyperparameters and not just parameters?
What does the alpha and beta hyperparameters contribute to in Latent Dirichlet allocation?
CC BY-SA 3.0
null
2014-05-23T06:25:50.480
2022-05-20T16:20:33.417
null
null
122
[ "topic-model", "lda", "parameter" ]
200
2
null
134
4
null
You can also create a MongoDB-Hadoop [connection](http://docs.mongodb.org/ecosystem/tutorial/getting-started-with-hadoop/).
null
CC BY-SA 3.0
null
2014-05-23T08:34:38.900
2014-05-23T08:34:38.900
null
null
278
null
201
2
null
155
76
null
Update: Kaggle.com, a home of modern data science & machine learning enthusiasts :), opened [its own repository of data sets](https://www.kaggle.com/datasets). --- In addition to the listed sources, some social network data sets: - Stanford University large network dataset collection (SNAP) - A huge twitter da...
null
CC BY-SA 3.0
null
2014-05-23T09:09:44.490
2016-01-21T08:44:01.610
2017-04-13T12:44:20.183
-1
97
null
202
2
null
199
22
null
The Dirichlet distribution is a multivariate distribution. Its density has the form $\frac{1}{B(a)} \cdot \prod\limits_{i} x_i^{a_i - 1}$, where $a$ is the vector of size $K$ of the parameters, and $\sum_i x_i = 1$. Now the LDA uses some constructs like: - a docume...
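To make the density concrete, here is a small stdlib-only sketch that evaluates it, with $B(a)$ computed from gamma functions:

```python
import math

# Dirichlet density f(x; a) = (1/B(a)) * prod_i x_i^(a_i - 1),
# where B(a) = prod_i Gamma(a_i) / Gamma(sum_i a_i).
def dirichlet_pdf(x, a):
    assert abs(sum(x) - 1.0) < 1e-9  # x must lie on the simplex
    b = math.prod(math.gamma(ai) for ai in a) / math.gamma(sum(a))
    return math.prod(xi ** (ai - 1) for xi, ai in zip(x, a)) / b

# With a = (1, 1, 1) the Dirichlet is uniform on the simplex,
# and the density is Gamma(3) = 2 everywhere.
print(dirichlet_pdf([0.2, 0.3, 0.5], [1.0, 1.0, 1.0]))  # -> 2.0
```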
null
CC BY-SA 4.0
null
2014-05-23T13:47:54.603
2022-05-20T16:20:33.417
2022-05-20T16:20:33.417
-1
108
null
203
2
null
189
2
null
Never done stuff on that scale, but as no-one else has jumped in yet, have you seen these two papers that discuss non-commercial solutions? Symphony and COIN-OR seem to be the dominant suggestions. Linderoth, Jeffrey T., and Andrea Lodi. "MILP software." Wiley encyclopedia of operations research and management science ...
null
CC BY-SA 3.0
null
2014-05-23T14:28:41.563
2014-05-23T14:28:41.563
null
null
265
null
204
2
null
116
7
null
I recently did a similar project in Python (predicting opinions using FB like data), and had good results with the following basic process: - Read in the training set (n = N) by iterating over comma-delimited like records line-by-line and use a counter to identify the most popular pages - For each of the K most popul...
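Steps 1-2 of the process above can be sketched with the standard library alone (the page names and K are invented for illustration):

```python
from collections import Counter

# Find the K most popular pages in the training set, then turn
# each user's likes into a binary feature row over those pages.
train_likes = [
    ["page_a", "page_b"],
    ["page_a", "page_c"],
    ["page_b"],
]

K = 2
counts = Counter(page for likes in train_likes for page in likes)
top_pages = [page for page, _ in counts.most_common(K)]

def to_features(likes, vocab):
    liked = set(likes)
    return [1 if page in liked else 0 for page in vocab]

X = [to_features(likes, top_pages) for likes in train_likes]
print(top_pages, X)
```

The same `to_features` call is then applied to the test set, reusing the vocabulary learned from training data.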
null
CC BY-SA 4.0
null
2014-05-23T19:28:01.903
2020-08-20T18:51:34.423
2020-08-20T18:51:34.423
98307
250
null
205
1
208
null
12
1771
Working on what could often be called "medium data" projects, I've been able to parallelize my code (mostly for modeling and prediction in Python) on a single system across anywhere from 4 to 32 cores. Now I'm looking at scaling up to clusters on EC2 (probably with StarCluster/IPython, but open to other suggestions as ...
Instances vs. cores when using EC2
CC BY-SA 3.0
null
2014-05-23T19:45:54.283
2017-02-19T09:12:49.270
null
null
250
[ "parallel", "clustering", "aws" ]
206
2
null
205
6
null
All things being equal (cost, CPU performance, etc.), you could choose the smallest instance that can hold your whole dataset in memory and scale out. That way - you make sure not to induce unnecessary latencies due to network communications, and - you tend to maximize the overall available memory bandwidth for your pro...
null
CC BY-SA 3.0
null
2014-05-23T21:01:18.630
2014-05-23T21:01:18.630
null
null
172
null
207
2
null
205
11
null
A general rule of thumb is to not distribute until you have to. It's usually more efficient to have N servers of a certain capacity than 2N servers of half that capacity. More of the data access will be local, and therefore fast in memory versus slow across the network. At a certain point, scaling up one machine become...
null
CC BY-SA 3.0
null
2014-05-24T10:36:58.987
2014-05-24T10:36:58.987
null
null
21
null
208
2
null
205
11
null
When using IPython, you very nearly don't have to worry about it (at the expense of some loss of efficiency/greater communication overhead). The parallel IPython plugin in StarCluster will by default start one engine per physical core on each node (I believe this is configurable but not sure where). You just run what...
null
CC BY-SA 3.0
null
2014-05-24T11:18:26.497
2017-02-19T09:12:49.270
2017-02-19T09:12:49.270
26
26
null
209
1
212
null
10
1421
A recommendation system keeps a log of what recommendations have been made to a particular user and whether that user accepts the recommendation. It's like

```
user_id  item_id  result
1        4         1
1        7        -1
5        19        1
5        80        1
```

where 1 means the user accepted the recommendation while -1...
How should one deal with implicit data in recommendation
CC BY-SA 3.0
null
2014-05-25T13:57:52.657
2014-05-27T10:58:00.620
2014-05-26T05:12:32.653
71
71
[ "recommender-system" ]
210
2
null
199
13
null
Assuming symmetric Dirichlet distributions (for simplicity), a low alpha value places more weight on having each document composed of only a few dominant topics (whereas a high value will return many more relatively dominant topics). Similarly, a low beta value places more weight on having each topic composed of only a...
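One way to see this effect is to draw symmetric Dirichlet samples by normalizing independent Gamma draws (a standard construction; the K and alpha values below are arbitrary):

```python
import random

# A symmetric Dirichlet(alpha) sample over K components can be drawn
# by normalizing K independent Gamma(alpha, 1) draws. A low alpha
# concentrates mass on a few components; a high alpha spreads it out.
def dirichlet_sample(alpha, k, rng):
    draws = [rng.gammavariate(alpha, 1.0) for _ in range(k)]
    total = sum(draws)
    return [d / total for d in draws]

rng = random.Random(0)
sparse = dirichlet_sample(0.1, 5, rng)   # low alpha: a few dominant entries
even = dirichlet_sample(10.0, 5, rng)    # high alpha: closer to uniform
print(max(sparse), max(even))
```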
null
CC BY-SA 3.0
null
2014-05-26T04:07:32.390
2014-05-26T04:07:32.390
null
null
283
null
211
1
213
null
9
4593
I'm new to this community and hopefully my question will fit in well here. As part of my undergraduate data analytics course I have chosen to do my project on human activity recognition using smartphone data sets. As far as I'm concerned, this topic relates to machine learning and support vector machines. I'm not well fa...
Human activity recognition using smartphone data set problem
CC BY-SA 4.0
null
2014-05-27T10:41:33.220
2020-08-17T03:25:03.437
2020-08-16T21:51:47.670
98307
295
[ "bigdata", "machine-learning", "databases", "clustering", "data-mining" ]
212
2
null
209
7
null
Your system isn't just trained on items that are recommended, right? If so, you have a big feedback loop here. You want to learn from all clicks/views, I hope. You suggest that not looking at an item is a negative signal. I strongly suggest you do not treat it that way. Not interacting with something is almost always bes...
null
CC BY-SA 3.0
null
2014-05-27T10:58:00.620
2014-05-27T10:58:00.620
null
null
21
null
213
2
null
211
8
null
The data set definitions are on the page here: [Attribute Information at the bottom](http://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones#), or you can see inside the ZIP folder the file named activity_labels, which has your column headings inside it; make sure you read the README carefu...
null
CC BY-SA 3.0
null
2014-05-27T12:07:45.920
2014-05-27T12:07:45.920
null
null
59
null
214
2
null
155
37
null
The following links are available:

- Public Data Sets
- Google Public Data Sets
- Amazon Web Services
- Finding Data on the Internet
null
CC BY-SA 3.0
null
2014-05-27T16:05:02.883
2017-12-17T19:30:24.190
2017-12-17T19:30:24.190
28175
295
null
215
1
null
null
16
1055
I'm building a workflow for creating machine learning models (in my case, using Python's `pandas` and `sklearn` packages) from data pulled from a very large database (here, Vertica by way of SQL and `pyodbc`), and a critical step in that process involves imputing missing values of the predictors. This is straightforwar...
Where in the workflow should we deal with missing data?
CC BY-SA 3.0
null
2014-05-27T21:07:48.973
2014-06-11T11:32:13.433
null
null
250
[ "machine-learning", "python", "pandas", "scikit-learn" ]
216
2
null
215
15
null
My strong opinion regarding automated tasks like imputation (and here I can also include scaling, centering, feature selection, etc.) is to avoid doing such things in any way without carefully inspecting your data. Of course, after deciding what kind of imputation to apply, it can be automated (under the assumption that t...
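For instance, a quick inspect-then-impute pass might look like this (the tiny data set and the choice of mean imputation are just for illustration):

```python
# Inspect the fraction of missing values per column before deciding
# on an imputation strategy; then apply simple mean imputation.
# The data is a made-up list of rows, with None marking missing values.
data = [[1.0, None], [2.0, 4.0], [None, 6.0], [3.0, 8.0]]

n_rows, n_cols = len(data), len(data[0])
for j in range(n_cols):
    missing = sum(1 for row in data if row[j] is None)
    print(f"column {j}: {missing / n_rows:.0%} missing")

# Column means computed over the observed values only.
means = []
for j in range(n_cols):
    vals = [row[j] for row in data if row[j] is not None]
    means.append(sum(vals) / len(vals))

imputed = [[v if v is not None else means[j] for j, v in enumerate(row)]
           for row in data]
print(imputed)
```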
null
CC BY-SA 3.0
null
2014-05-28T07:08:05.393
2014-05-28T07:08:05.393
null
null
108
null
217
2
null
211
5
null
It looks like this (or a very similar data set) is used for Coursera courses. Cleaning this dataset is the task for [Getting and Cleaning Data](https://www.coursera.org/course/getdata), but it is also used as a case study for [Exploratory Data Analysis](https://class.coursera.org/exdata-002). Video from this case study is ava...
null
CC BY-SA 3.0
null
2014-05-28T09:43:54.197
2014-05-28T09:43:54.197
null
null
82
null
218
1
null
null
3
1714
So, I have a dataset with 39,949 variables and 180 rows. The dataset is successfully saved in a DataFrame, but when I try to find cov() it results in an error. Here is the code

```
import pandas as pd
cov_data = pd.DataFrame(dataset).cov()
```

Here is the error

```
File "/home/syahdeini/Desktop/FP/pca_2.py", line 44, in find_...
```
built-in cov in pandas DataFrame results ValueError array is too big
CC BY-SA 3.0
null
2014-05-29T13:08:09.060
2014-05-29T14:31:51.913
null
null
273
[ "python", "pandas" ]
219
2
null
218
2
null
Since you have 39,949 variables, the covariance matrix would have about 1.6 billion elements (39,949 * 39,949 = 1,595,922,601). That is likely why you are getting that error.
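The arithmetic is easy to verify; assuming 8-byte float64 values (pandas' default dtype), the matrix alone would need on the order of 12 GiB:

```python
# Number of elements in a 39,949 x 39,949 covariance matrix,
# and its rough memory footprint assuming 8-byte float64 values.
n_vars = 39_949
n_elements = n_vars * n_vars
bytes_needed = n_elements * 8
print(n_elements)              # -> 1595922601
print(bytes_needed / 1024**3)  # roughly 11.9 GiB
```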
null
CC BY-SA 3.0
null
2014-05-29T13:51:48.820
2014-05-29T13:51:48.820
null
null
178
null
220
2
null
218
6
null
Christopher is right about the size of the array. To be simplistic about it, if this translates to 1.6B floats, at 8 bytes per float (the 64-bit version pandas uses by default; 32-bit is half that), then you're trying to create an array of about 13 GB. Even if you have the RAM for that, I'd imagine that it's probably going to overload something el...
null
CC BY-SA 3.0
null
2014-05-29T14:13:00.317
2014-05-29T14:31:51.913
2014-05-29T14:31:51.913
250
250
null
221
2
null
196
8
null
It's actually even simpler than that, from what you describe---you're just looking for a basic classification tree algorithm (so no need for slightly more complex variants like C4.5 which are optimized for prediction accuracy). The canonical text is [this](https://rads.stackoverflow.com/amzn/click/com/0412048418). This...
null
CC BY-SA 4.0
null
2014-05-29T14:30:21.357
2020-08-06T11:03:53.690
2020-08-06T11:03:53.690
98307
250
null
222
2
null
155
22
null
[Enigma](http://enigma.io) is a repository of public available datasets. Its free plan offers public data search, with 10k API calls per month. Not all public databases are listed, but the list is enough for common cases. I used it for academic research and it saved me a lot of time. --- Another interesting source o...
null
CC BY-SA 3.0
null
2014-05-29T19:02:13.210
2014-05-30T21:05:52.147
2014-05-30T21:05:52.147
43
43
null
223
1
null
null
23
3690
Having a lot of text documents (in natural language, unstructured), what are the possible ways of annotating them with some semantic meta-data? For example, consider a short document: ``` I saw the company's manager last day. ``` To be able to extract information from it, it must be annotated with additional data to b...
How to annotate text documents with meta-data?
CC BY-SA 3.0
null
2014-05-29T20:11:16.327
2020-07-02T13:04:00.943
null
null
227
[ "nlp", "metadata", "data-cleaning", "text-mining" ]
224
1
null
null
4
342
The output of my word alignment file looks as such: ``` I wish to say with regard to the initiative of the Portuguese Presidency that we support the spirit and the political intention behind it . In bezug auf die Initiative der portugiesischen Präsidentschaft möchte ich zum Ausdruck bringen , daß wir den Geist und die ...
How to get phrase tables from word alignments?
CC BY-SA 3.0
null
2014-05-31T14:28:42.317
2018-05-06T22:11:50.250
2018-05-06T22:11:50.250
25180
122
[ "machine-translation" ]
226
2
null
155
16
null
I've found this link in Data Science Central with a list of free datasets: [Big data sets available for free](http://www.datasciencecentral.com/profiles/blogs/big-data-sets-available-for-free)
null
CC BY-SA 3.0
null
2014-05-31T19:59:15.563
2014-05-31T19:59:15.563
null
null
290
null
227
1
242
null
12
7178
Can someone kindly tell me about the trade-offs involved when choosing between Storm and MapReduce in Hadoop Cluster for data processing? Of course, aside from the obvious one, that Hadoop (processing via MapReduce in a Hadoop Cluster) is a batch processing system, and Storm is a real-time processing system. I have wor...
Tradeoffs between Storm and Hadoop (MapReduce)
CC BY-SA 3.0
null
2014-06-01T10:25:51.163
2014-06-10T23:33:28.443
2014-06-01T21:59:02.347
339
339
[ "bigdata", "efficiency", "apache-hadoop", "distributed" ]