Dataset columns:
Id: string, 1-6 chars
PostTypeId: string, 6 classes
AcceptedAnswerId: string, 2-6 chars
ParentId: string, 1-6 chars
Score: string, 1-3 chars
ViewCount: string, 1-6 chars
Body: string, 0-32.5k chars
Title: string, 15-150 chars
ContentLicense: string, 2 classes
FavoriteCount: string, 2 classes
CreationDate: string, 23 chars
LastActivityDate: string, 23 chars
LastEditDate: string, 23 chars
LastEditorUserId: string, 1-6 chars
OwnerUserId: string, 1-6 chars
Tags: list
228
1
249
null
8
75
Going through the presentation and material on Summingbird by Twitter, one of the reasons mentioned for using Storm and Hadoop clusters together in Summingbird is that processing through Storm results in cascading errors. In order to avoid this cascading and accumulation of errors, a Hadoop cluster is used...
Cascaded Error in Apache Storm
CC BY-SA 3.0
null
2014-06-01T12:51:25.040
2014-06-10T03:42:51.637
null
null
339
[ "bigdata", "apache-hadoop" ]
229
2
null
227
9
null
This is kind of like asking about the tradeoffs between a frying pan and your drawer of silverware. They are not two things you compare, really. You might use them together as part of a larger project. Hadoop itself is not one thing, but a name for a federation of services, like HDFS, Hive, HBase, MapReduce, etc. Storm ...
null
CC BY-SA 3.0
null
2014-06-01T19:48:41.693
2014-06-01T19:48:41.693
null
null
21
null
230
2
null
223
7
null
In general, you don't want to use XML tags to tag documents in this way because tags may overlap. [UIMA](http://uima.apache.org/), [GATE](http://gate.ac.uk/) and similar NLP frameworks denote the tags separate from the text. Each tag, such as `Person`, `ACME`, `John` etc. is stored as the position that the tag begins ...
null
CC BY-SA 3.0
null
2014-06-02T15:03:35.940
2014-06-02T15:03:35.940
null
null
178
null
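The position-based tagging scheme described in the answer above (UIMA/GATE style, tags stored separately from the text) can be sketched in Python; the example text, offsets, and labels here are made up for illustration:

```python
# Standoff annotation: each tag lives outside the text as a
# (start, end, label) triple instead of an inline XML element.
text = "John works at ACME."
annotations = [
    (0, 4, "Person"),        # "John"
    (14, 18, "Organization"),  # "ACME"
]

def annotated_spans(text, annotations):
    # Recover the tagged substrings; overlapping spans are allowed,
    # which nested XML tags cannot express.
    return [(text[start:end], label) for start, end, label in annotations]
```

Because the tags are just offset triples, two annotations may cover overlapping character ranges without any markup conflict.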
231
1
287
null
10
6442
I want to test the accuracy of a methodology. I ran it ~400 times, and I got a different classification for each run. I also have the ground truth, i.e., the real classification to test against. For each classification I computed a confusion matrix. Now I want to aggregate these results in order to get the overall conf...
How to get an aggregate confusion matrix from n different classifications
CC BY-SA 3.0
null
2014-06-05T09:00:27.950
2014-06-11T09:39:34.373
2014-06-05T15:21:40.640
84
133
[ "classification", "confusion-matrix", "accuracy" ]
232
2
null
231
4
null
There are a few ways to achieve your "master confusion matrix". - Sum all the confusion matrices together: Like you suggested, summing yields a single confusion matrix. The problem with this is that you cannot interpret totals. - Average the entries. This method is the same as number one, but you divide each entry b...
null
CC BY-SA 3.0
null
2014-06-06T14:55:18.867
2014-06-06T14:55:18.867
null
null
375
null
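The summing and averaging options from the answer above can be sketched with NumPy; the two matrices here are hypothetical stand-ins for per-run results:

```python
import numpy as np

# Two hypothetical confusion matrices from separate runs
# (rows = true class, columns = predicted class).
cm_runs = [
    np.array([[8, 2], [1, 9]]),
    np.array([[7, 3], [2, 8]]),
]

# Option 1: sum the matrices to get total counts over all runs.
cm_sum = np.sum(cm_runs, axis=0)

# Option 2: average the entries (divide each entry by the number of runs).
cm_avg = cm_sum / len(cm_runs)
```

Both forms preserve the row/column semantics, so derived metrics like per-class precision and recall can be computed from the aggregate the same way as from a single run.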
233
2
null
130
3
null
To complete Damien's answer, an example of dimensionality reduction in NLP is a [topic model](http://en.wikipedia.org/wiki/Topic_model), where you represent the document by a vector indicating the weights of its constituent topics.
null
CC BY-SA 3.0
null
2014-06-08T07:03:43.710
2014-06-08T07:03:43.710
null
null
381
null
234
1
null
null
5
470
As Yann LeCun [mentioned](http://www.reddit.com/r/MachineLearning/comments/25lnbt/ama_yann_lecun/chisdw1), a number of PhD programs in data science will be popping up in the next few years. [NYU](http://datascience.nyu.edu/academics/programs/) already has one, where Prof. LeCun is at right now. A statistics or CS PhD i...
Data science Ph.D. program, what do you think?
CC BY-SA 3.0
null
2014-06-09T04:43:03.497
2014-06-10T03:21:56.473
null
null
386
[ "knowledge-base" ]
235
1
237
null
3
1572
Data visualization is an important sub-field of data science, and Python programmers need toolkits for it. Is there a Python API to Tableau? Are there any Python-based data visualization toolkits?
Are there any python based data visualization toolkits?
CC BY-SA 4.0
null
2014-06-09T08:34:29.337
2019-06-08T03:11:24.957
2019-06-08T03:11:24.957
29169
122
[ "python", "visualization" ]
236
2
null
234
2
null
No-one knows, since no-one has completed one of these PhD programs yet! However, I would base my decision on the syllabus and the teachers. It all depends on what you want to do: industry or academia?
null
CC BY-SA 3.0
null
2014-06-09T18:02:00.613
2014-06-09T18:02:00.613
null
null
381
null
237
2
null
235
11
null
There is a Tableau API and you can use Python to use it, but maybe not in the sense that you think. There is a Data Extract API that you could use to import your data into Python and do your visualizations there, so I do not know if this is going to answer your question entirely. As in the first comment, you can use Mat...
null
CC BY-SA 3.0
null
2014-06-09T19:52:41.847
2014-06-09T19:52:41.847
null
null
59
null
238
2
null
234
1
null
I think this question assumes a false premise. As a student at NYU, I only know of a Masters in Data Science. You linked to a page that confirms this. It's hard to gauge the benefit of a program that doesn't exist yet.
null
CC BY-SA 3.0
null
2014-06-09T21:36:44.297
2014-06-09T21:36:44.297
null
null
395
null
241
2
null
234
3
null
It seems to me that the premise of a PhD is to expand knowledge in some little slice of the world. Since a "data scientist" is by nature somewhat of a jack-of-all-trades, it does seem a little odd to me. A masters program seems much more appropriate. What do you hope to gain from a PhD? If the rigor scares (or bores)...
null
CC BY-SA 3.0
null
2014-06-09T21:51:53.793
2014-06-09T21:51:53.793
null
null
403
null
242
2
null
227
13
null
MapReduce: A fault tolerant distributed computational framework. MapReduce allows you to operate over huge amounts of data- with a lot of work put in to prevent failure due to hardware. MapReduce is a poor choice for computing results on the fly because it is slow. (A typical MapReduce job takes on the order of minutes...
null
CC BY-SA 3.0
null
2014-06-09T21:57:30.240
2014-06-10T23:33:28.443
2014-06-10T23:33:28.443
406
406
null
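The MapReduce model described in the answer above can be sketched in plain Python as map, shuffle, and reduce phases; this is a toy in-memory word count for illustration, not actual Hadoop code:

```python
from collections import defaultdict
from itertools import chain

docs = ["storm processes streams", "hadoop processes batches"]

# Map phase: emit a (word, 1) pair for every word in every document.
mapped = chain.from_iterable(((word, 1) for word in doc.split()) for doc in docs)

# Shuffle phase: group emitted values by key.
grouped = defaultdict(list)
for key, value in mapped:
    grouped[key].append(value)

# Reduce phase: sum the counts for each word.
counts = {key: sum(values) for key, values in grouped.items()}
```

In a real cluster, the map and reduce phases run in parallel across machines and the shuffle moves data over the network, which is where the minutes-per-job latency mentioned above comes from.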
243
2
null
184
5
null
A variety of methods are available to the user. The support documentation gives walkthroughs and tips for when one or another model is most appropriate. [This page](https://developers.google.com/prediction/docs/pmml-schema) shows the following learning methods: - "AssociationModel" - "ClusteringModel" - "General...
null
CC BY-SA 3.0
null
2014-06-10T01:36:27.520
2014-06-10T01:43:47.883
2014-06-10T01:43:47.883
432
432
null
244
2
null
138
4
null
Coming from a programmer's perspective, frameworks rarely target performance as the highest priority. If your library is going to be widely leveraged, the things people are likely to value most are ease of use, flexibility, and reliability. Performance is generally valued in secondary competitive libraries. "X library ...
null
CC BY-SA 3.0
null
2014-06-10T01:40:23.263
2014-06-10T01:40:23.263
null
null
434
null
245
2
null
234
3
null
Computer Science is itself a multi-disciplinary field which has varying requirements among universities. For example, Stockholm University does not require any math above algebra for its CS programs (some courses may have higher requirements, but not often). I am not sure what you mean by a machine learning program ...
null
CC BY-SA 3.0
null
2014-06-10T01:54:40.647
2014-06-10T01:54:40.647
null
null
432
null
246
2
null
134
4
null
There will definitely be a translation task at the end if you prototype using just mongo. When you run a MapReduce task on mongodb, it has the data source and structure built in. When you eventually convert to hadoop, your data structures might not look the same. You could leverage the mongodb-hadoop connector to acc...
null
CC BY-SA 3.0
null
2014-06-10T02:42:02.050
2014-06-10T02:42:02.050
null
null
434
null
247
2
null
235
12
null
[Bokeh](http://bokeh.pydata.org/) is an excellent data visualization library for python. [NodeBox](http://www.cityinabottle.org/nodebox/) is another that comes to mind.
null
CC BY-SA 3.0
null
2014-06-10T02:50:51.153
2014-06-10T02:50:51.153
null
null
434
null
248
2
null
234
3
null
A cash cow program? No. PhD programs are never cash cows. I don't know why you couldn't be a professor with a PhD in data science. Rarely does a professor of a given course have to have a specific degree in order to teach it. As far as publishing goes, there are any number of related journals that would accept paper...
null
CC BY-SA 3.0
null
2014-06-10T03:21:56.473
2014-06-10T03:21:56.473
null
null
434
null
249
2
null
228
4
null
Twitter uses Storm for real-time processing of data. Problems can happen with real-time data. Systems might go down. Data might be inadvertently processed twice. Network connections can be lost. A lot can happen in a real-time system. They use hadoop to reliably process historical data. I don't know specifics, ...
null
CC BY-SA 3.0
null
2014-06-10T03:42:51.637
2014-06-10T03:42:51.637
null
null
434
null
250
2
null
184
3
null
Google does not publish the models they use, but they specifically do not support models from the PMML specification. If you look closely at the documentation on [this page](https://developers.google.com/prediction/docs/pmml-schema), you will notice that the model selection within the schema is greyed out indicating th...
null
CC BY-SA 3.0
null
2014-06-10T04:47:13.040
2014-06-10T04:47:13.040
null
null
434
null
251
2
null
138
4
null
Having done the rewriting game over and over myself (and still doing it), my immediate reaction was adaptability. While frameworks and libraries have a huge arsenal of (possibly intertwinable) routines for standard tasks, their framework property often (always?) disallows shortcuts. In fact, most frameworks have some ...
null
CC BY-SA 3.0
null
2014-06-10T05:57:13.897
2014-06-10T05:57:13.897
null
null
451
null
252
1
null
null
3
852
There seem to be at least 2 ways to connect to HBase from an external application, with a language other than Java (i.e. Python): - HBase Thrift API - HBase Stargate (REST API) Does anyone know which one should be used in which circumstances? I.e. what are their main differences, and pros/cons?
HBase connector - Thrift or REST
CC BY-SA 3.0
null
2014-06-10T06:19:46.510
2014-06-10T07:56:45.587
null
null
88
[ "bigdata", "apache-hadoop" ]
253
1
256
null
39
21145
An aspiring data scientist here. I don't know anything about Hadoop, but as I have been reading about Data Science and Big Data, I see a lot of talk about Hadoop. Is it absolutely necessary to learn Hadoop to be a Data Scientist?
Do I need to learn Hadoop to be a Data Scientist?
CC BY-SA 3.0
null
2014-06-10T06:20:20.817
2022-08-26T05:51:09.493
null
null
456
[ "bigdata", "apache-hadoop" ]
254
2
null
169
7
null
Actually, there is more than one question to answer here: - How to work on schemaless/loose/missing data - How to label a person (from what I understand, unsupervised) and create an identifier - How to train your system so that it can tell you which attributes you should use in order to identify the person As Rube...
null
CC BY-SA 3.0
null
2014-06-10T07:10:45.057
2014-06-10T07:10:45.057
null
null
463
null
255
2
null
253
2
null
You should learn Hadoop if you want to work as a data scientist, but maybe before starting with Hadoop you should read something about ETL or Big Data... this book could be a good starting point: [http://www.amazon.com/Big-Data-Principles-practices-scalable/dp/1617290343](http://rads.stackoverflow.com/amzn/click/16172...
null
CC BY-SA 3.0
null
2014-06-10T07:28:49.127
2014-06-10T07:28:49.127
null
null
466
null
256
2
null
253
48
null
Different people use different tools for different things. Terms like Data Science are generic for a reason. A data scientist could spend an entire career without having to learn a particular tool like hadoop. Hadoop is widely used, but it is not the only platform that is capable of managing and manipulating data, e...
null
CC BY-SA 3.0
null
2014-06-10T07:30:53.510
2014-06-10T08:21:19.197
2014-06-10T08:21:19.197
434
434
null
257
2
null
253
4
null
Yes, you should learn a platform that is capable of dissecting your problem as a data-parallel problem. Hadoop is one. For your simple needs (design patterns like counting, aggregation, filtering, etc.) you need Hadoop, and for more complex machine learning stuff like Bayesian methods or SVMs you need Mahout, which in tur...
null
CC BY-SA 3.0
null
2014-06-10T07:42:57.470
2014-06-10T07:42:57.470
null
null
11
null
258
2
null
252
2
null
Thrift is generally faster because the data being exchanged is smaller. Stargate offers a web service which is an integration method that is widely supported, which is a concern when you are working with commercial products with limited integration possibilities. In a closed environment where everything is controlled,...
null
CC BY-SA 3.0
null
2014-06-10T07:56:45.587
2014-06-10T07:56:45.587
null
null
434
null
259
2
null
155
20
null
I would like to point to [The Open Data Census](http://national.census.okfn.org/). It is an initiative of the Open Knowledge Foundation based on contributions from open data advocates and experts around the world. The value of the Open Data Census is its open, community-driven, and systematic effort to collect and update the ...
null
CC BY-SA 3.0
null
2014-06-10T08:04:20.400
2014-06-10T08:04:20.400
null
null
454
null
260
2
null
191
2
null
The algorithm that is used in this case is called a [one-vs-all classifier](https://class.coursera.org/ml-003/lecture/38) or multiclass classifier. In your case you have to take one class, e.g. the number 1, mark it as positive, and combine the remaining seven classes into one negative class. The neural network will output the pr...
null
CC BY-SA 3.0
null
2014-06-10T08:38:27.093
2014-06-10T08:38:27.093
null
null
454
null
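The one-vs-all scheme the answer describes can be sketched with scikit-learn's `OneVsRestClassifier`; the library choice is mine (the answer references a Coursera lecture, not a specific tool), and the digits dataset is an arbitrary multiclass example:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# 8x8 digit images, 10 classes. OneVsRestClassifier fits one binary
# classifier per class (that class positive, the rest negative) and
# predicts the class whose classifier scores highest.
X, y = load_digits(return_X_y=True)
clf = OneVsRestClassifier(LogisticRegression(max_iter=5000)).fit(X, y)
```

Each of the fitted binary estimators is available in `clf.estimators_`, one per class, matching the "mark one class positive, combine the rest as negative" construction above.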
261
2
null
196
3
null
You could take a look at the CN2 rule learner in [Orange 2](http://orange.biolab.si/orange2/).
null
CC BY-SA 4.0
null
2014-06-10T09:21:14.013
2020-08-06T11:04:09.857
2020-08-06T11:04:09.857
98307
480
null
262
1
293
null
40
22477
What are the advantages of HDF compared to alternative formats? What are the main data science tasks where HDF is really suitable and useful?
What are the advantages of HDF compared to alternative formats?
CC BY-SA 4.0
null
2014-06-10T09:26:06.593
2021-07-02T00:43:22.377
2020-04-14T16:54:08.463
null
97
[ "data-formats", "hierarchical-data-format" ]
263
2
null
172
8
null
Unfortunately, parallelization is not yet implemented in pandas. You can join [this github issue](http://github.com/pydata/pandas/issues/5751) if you want to participate in the development of this feature. I don't know any "magic unicorn package" for this purpose, so the best thing will be to write your own solution. But...
null
CC BY-SA 3.0
null
2014-06-10T09:41:34.697
2014-06-10T09:41:34.697
null
null
478
null
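A minimal sketch of the hand-rolled parallelization the answer suggests: split a DataFrame into chunks and process them with the standard library's `multiprocessing`. Here `process_chunk` is a hypothetical stand-in for real per-chunk work:

```python
import pandas as pd
from multiprocessing import Pool

def process_chunk(chunk):
    # Stand-in for an expensive per-chunk computation.
    return chunk["x"] * 2

df = pd.DataFrame({"x": range(100)})

# Split the frame into 4 row-wise chunks and process them in parallel,
# then reassemble the results in order.
chunks = [df.iloc[i:i + 25] for i in range(0, len(df), 25)]
with Pool(4) as pool:
    result = pd.concat(pool.map(process_chunk, chunks))
```

This pattern only pays off when the per-chunk work is heavy enough to outweigh the cost of pickling chunks across process boundaries.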
264
2
null
22
4
null
You can also give the Expectation Maximization clustering algorithm a try. It can work on categorical data and will give you a statistical likelihood of which categorical value (or values) a cluster is most likely to take on.
null
CC BY-SA 3.0
null
2014-06-10T10:48:58.457
2014-06-10T10:48:58.457
null
null
490
null
265
1
285
null
42
45677
I have a variety of NFL datasets that I think might make a good side-project, but I haven't done anything with them just yet. Coming to this site made me think of machine learning algorithms, and I am wondering how good they might be at either predicting the outcome of football games or even the next play. It seems to me t...
Can machine learning algorithms predict sports scores or plays?
CC BY-SA 3.0
null
2014-06-10T10:58:58.447
2020-08-20T18:25:42.540
2015-03-02T12:33:11.007
553
434
[ "machine-learning", "sports" ]
266
1
272
null
12
3010
Being new to machine-learning in general, I'd like to start playing around and see what the possibilities are. I'm curious as to what applications you might recommend that would offer the fastest time from installation to producing a meaningful result. Also, any recommendations for good getting-started materials on the...
What are some easy to learn machine-learning applications?
CC BY-SA 3.0
null
2014-06-10T11:05:47.273
2014-06-12T17:58:21.467
null
null
434
[ "machine-learning" ]
268
2
null
266
5
null
I think [Weka](http://www.cs.waikato.ac.nz/ml/weka/) is a good starting point. You can do a bunch of stuff like supervised learning or clustering and easily compare a large set of algorithms and methodologies. Weka's manual is actually a book on machine learning and data mining that can be used as introductory material....
null
CC BY-SA 3.0
null
2014-06-10T11:36:19.287
2014-06-10T11:36:19.287
null
null
418
null
269
2
null
265
9
null
Definitely they can. I can point you to a [nice paper](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.56.7448&rep=rep1&type=pdf). I once used it to implement a soccer league results prediction algorithm, primarily aiming at having some value against bookmakers. From the paper's abstract: > a Bayesian dynami...
null
CC BY-SA 3.0
null
2014-06-10T11:37:28.293
2014-06-25T16:03:21.447
2014-06-25T16:03:21.447
322
97
null
270
2
null
265
7
null
Machine learning and statistical techniques can improve the forecast, but nobody can predict the real result. There was a Kaggle competition a few months ago about [predicting the 2014 NCAA Tournament](https://www.kaggle.com/c/march-machine-learning-mania). You can read the Competition Forum to get a better idea on what...
null
CC BY-SA 3.0
null
2014-06-10T11:39:19.603
2014-06-10T11:39:19.603
null
null
478
null
271
2
null
265
8
null
It has been shown before that machine learning techniques can be applied to predicting sport results. A simple Google search should give you a bunch of results. However, it has also been shown (for the NFL, btw) that very complex predictive models, simple predictive models, questioning people, or crowd knowledge by utilisin...
null
CC BY-SA 3.0
null
2014-06-10T11:49:23.777
2014-06-10T11:49:23.777
null
null
418
null
272
2
null
266
13
null
I would recommend starting with a MOOC on machine learning, for example Andrew Ng's [course](https://www.coursera.org/course/ml) at Coursera. You should also take a look at the [Orange](http://orange.biolab.si/) application. It has a graphical interface, and it is probably easier to understand some ML techniques using it...
null
CC BY-SA 3.0
null
2014-06-10T11:53:07.737
2014-06-10T11:53:07.737
null
null
478
null
273
2
null
253
2
null
You can apply data science techniques to data on one machine, so the answer to the question, as the OP phrased it, is no.
null
CC BY-SA 3.0
null
2014-06-10T12:10:28.713
2014-06-10T15:04:01.177
2014-06-10T15:04:01.177
498
498
null
274
2
null
262
12
null
One benefit is wide support - C, Java, Perl, Python, and R all have HDF5 bindings. Another benefit is speed. I haven't ever seen it benchmarked, but HDF is supposed to be faster than SQL databases. I understand that it is very good when used with both large sets of scientific data and time series data - network monito...
null
CC BY-SA 3.0
null
2014-06-10T12:57:04.307
2014-06-10T12:57:04.307
null
null
434
null
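A minimal sketch of HDF5 use from Python via `h5py` (one of the language bindings the answer mentions); the file path, dataset name, and attribute here are made up for illustration:

```python
import os
import tempfile

import h5py
import numpy as np

path = os.path.join(tempfile.mkdtemp(), "example.h5")
data = np.arange(12.0).reshape(3, 4)

# Write a compressed dataset with an attached metadata attribute.
with h5py.File(path, "w") as f:
    dset = f.create_dataset("measurements", data=data, compression="gzip")
    dset.attrs["units"] = "volts"

# Read it back; slicing loads only the requested rows into memory,
# which is part of why HDF5 works well for large scientific arrays.
with h5py.File(path, "r") as f:
    loaded = f["measurements"][:2, :]
```

The same file can then be opened unchanged from C, Java, R, or any of the other bindings listed above.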
275
2
null
155
26
null
For time series data in particular, [Quandl](http://www.quandl.com/) is an excellent resource -- an easily browsable directory of (mostly) clean time series. One of their coolest features is [open-data stock prices](http://blog.quandl.com/blog/quandl-open-data/) -- i.e. financial data that can be edited wiki-style, and...
null
CC BY-SA 3.0
null
2014-06-10T13:17:48.433
2014-06-10T13:17:48.433
null
null
508
null
276
2
null
155
10
null
Not all government data is listed on data.gov - [Sunlight Foundation](http://sunlightfoundation.com/blog/2014/02/21/open-data-inventories-ready-for-human-consumption/) put together a [set of spreadsheets](https://drive.google.com/folderview?id=0B4QuErjcV2a0WXVDOURwbzh6S2s&usp=sharing) back in February describing sets o...
null
CC BY-SA 3.0
null
2014-06-10T13:38:31.207
2014-06-10T13:38:31.207
null
null
434
null
277
2
null
266
11
null
To be honest, I think that doing some projects will teach you much more than doing a full course. One reason is that doing a project is more motivating and open-ended than doing assignments. A course, if you have the time AND motivation (real motivation), is better than doing a project. The other commentators have made...
null
CC BY-SA 3.0
null
2014-06-10T14:25:41.903
2014-06-10T14:25:41.903
null
null
518
null
278
2
null
266
2
null
Assuming you're familiar with programming I would recommend looking at [scikit-learn](http://scikit-learn.org/stable/). It has especially nice help pages that can serve as mini-tutorials/a quick tour through machine learning. Pick an area you find interesting and work through the examples.
null
CC BY-SA 3.0
null
2014-06-10T14:30:33.667
2014-06-10T14:30:33.667
null
null
524
null
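A minimal end-to-end example of the quick tour through machine learning that the answer says scikit-learn enables; the dataset and model here are arbitrary choices, not the answer's:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a toy dataset, hold out a test split, fit a model, and score it.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
```

Swapping in a different estimator is a one-line change thanks to the uniform fit/predict interface, which is what makes the library's examples work well as mini-tutorials.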
279
2
null
155
19
null
There is also another resource, provided by The Guardian, the British daily, on their website. The datasets published by the Guardian Datablog are all hosted there: datasets related to Premier League football clubs' accounts, inflation and GDP details of the UK, Grammy awards data, etc. The datasets are available at - http://www....
null
CC BY-SA 3.0
null
2014-06-10T14:57:47.810
2014-06-11T16:30:06.930
2014-06-11T16:30:06.930
514
514
null
280
1
null
null
8
164
I am developing a system that is intended to capture the "context" of user activity within an application; it is a framework that web applications can use to tag user activity based on requests made to the system. It is hoped that this data can then power ML features such as context aware information retrieval. I'm ha...
Feature selection for tracking user activity within an application
CC BY-SA 3.0
null
2014-06-10T15:08:54.073
2020-06-19T08:28:09.320
null
null
531
[ "feature-selection" ]
282
2
null
265
14
null
Yes. Why not?! With so much data being recorded in each sport in each game, smart use of data could lead us to obtain important insights regarding player performance. Some examples: - Baseball: In the movie Moneyball (which is an adaptation of the Moneyball book), Brad Pitt plays a character who analyses player sta...
null
CC BY-SA 4.0
null
2014-06-10T16:25:24.223
2020-08-20T18:25:04.213
2020-08-20T18:25:04.213
98307
514
null
284
2
null
280
5
null
Well, this may not answer the question thoroughly, but since you're dealing with information retrieval, it may be of some use. [This page](http://moz.com/search-ranking-factors) maintains a set of features and associated correlations with page-ranking methods of search engines. As a disclaimer from the webpage itself: >...
null
CC BY-SA 3.0
null
2014-06-10T17:06:54.950
2014-06-10T17:06:54.950
null
null
84
null
285
2
null
265
19
null
There are a lot of good questions about football (and sports in general) that would be awesome to throw at an algorithm and see what comes out. The tricky part is to know what to throw at the algorithm. A team with a good RB could pass on 3rd-and-short just because the opponents would probably expect a run, for ins...
null
CC BY-SA 4.0
null
2014-06-10T17:15:52.953
2019-01-23T14:36:40.430
2019-01-23T14:36:40.430
553
553
null
287
2
null
231
5
null
I do not know a standard answer to this, but I thought about it some time ago and I have some ideas to share. When you have one confusion matrix, you have more or less a picture of how your classification model confuses (mis-classifies) classes. When you repeat classification tests you will end up having multiple confusi...
null
CC BY-SA 3.0
null
2014-06-10T17:32:19.120
2014-06-11T09:39:34.373
2014-06-11T09:39:34.373
108
108
null
288
2
null
280
5
null
The goal determines the features, so I would initially take as many as possible, then use cross validation to select the optimal subset. My educated guess is that a Markov model would work. If you discretize the action space (e.g., select this menu item, press that button, etc.), you can predict the next action based o...
null
CC BY-SA 3.0
null
2014-06-10T17:36:11.580
2014-06-11T01:19:18.183
2014-06-11T01:19:18.183
381
381
null
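The first-order Markov idea in the answer, predicting the next action from the current one over a discretized action space, can be sketched as simple transition counts; the action names in this log are hypothetical:

```python
from collections import Counter, defaultdict

# Hypothetical discretized log of user actions in the application.
actions = ["open_menu", "click_save", "open_menu", "click_save",
           "open_menu", "click_export", "open_menu", "click_save"]

# Count first-order transitions: how often each action follows another.
transitions = defaultdict(Counter)
for prev, nxt in zip(actions, actions[1:]):
    transitions[prev][nxt] += 1

def predict_next(action):
    # Predict the most frequent follower of the given action.
    return transitions[action].most_common(1)[0][0]
```

Normalizing each counter gives the transition probabilities of the Markov chain; higher-order context can be added by keying on tuples of recent actions instead of a single one.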
289
1
291
null
10
580
Yann LeCun mentioned in his [AMA](http://www.reddit.com/r/MachineLearning/comments/25lnbt/ama_yann_lecun/) that he considers having a PhD very important in order to get a job at a top company. I have a masters in statistics and my undergrad was in economics and applied math, but I am now looking into ML PhD programs. M...
Qualifications for PhD Programs
CC BY-SA 3.0
null
2014-06-10T17:56:34.847
2015-09-10T20:17:35.897
2015-09-10T20:17:35.897
560
560
[ "education" ]
290
2
null
266
2
null
I found the pluralsight course [Introduction to machine learning encog](http://pluralsight.com/training/courses/TableOfContents?courseName=introduction-to-machine-learning-encog&highlight=abhishek-kumar_introduction-to-machine-learning-encog-m2-applications!abhishek-kumar_introduction-to-machine-learning-encog-m3-tasks...
null
CC BY-SA 3.0
null
2014-06-10T18:22:32.610
2014-06-10T18:22:32.610
null
null
571
null
291
2
null
289
10
null
If I were you I would take a MOOC or two (e.g., [Algorithms, Part I](https://www.coursera.org/course/algs4partI), [Algorithms, Part II](https://www.coursera.org/course/algs4partII), [Functional Programming Principles in Scala](https://www.coursera.org/course/progfun)), a good book on data structures and algorithms, the...
null
CC BY-SA 3.0
null
2014-06-10T18:55:39.010
2014-06-10T18:55:39.010
null
null
381
null
292
2
null
289
7
null
Your time would probably be better spent on Kaggle than in a PhD program. When you read the stories by winners ([Kaggle blog](http://blog.kaggle.com/)) you'll see that it takes a large amount of practice and the winners are not just experts of one single method. On the other hand, being active and having a plan in a Ph...
null
CC BY-SA 3.0
null
2014-06-10T19:43:11.860
2014-06-10T19:43:11.860
null
null
587
null
293
2
null
262
37
null
Perhaps a good way to paraphrase the question is, what are the advantages compared to alternative formats? The main alternatives are, I think: a database, text files, or another packed/binary format. The database options to consider are probably a columnar store or NoSQL, or for small self-contained datasets SQLite. ...
null
CC BY-SA 3.0
null
2014-06-10T20:28:54.613
2014-06-10T20:28:54.613
null
null
26
null
294
2
null
253
9
null
As a former Hadoop engineer, I can say it is not needed, but it helps. Hadoop is just one system - the most common system, based on Java, and an ecosystem of products - which apply a particular technique, "Map/Reduce", to obtain results in a timely manner. Hadoop is not used at Google, though I assure you they use big data analytics....
null
CC BY-SA 3.0
null
2014-06-10T20:40:25.623
2014-06-10T20:40:25.623
null
null
602
null
295
2
null
289
5
null
I am glad you also found Yann LeCun's AMA page; it's very useful. Here are my opinions. Q: Should I take some intro software engineering courses at my local university to make myself a stronger candidate? A: No, you need to take more math courses. It's not the applied stuff that's hard, it's the theory stuff. I don't ...
null
CC BY-SA 3.0
null
2014-06-10T20:43:28.533
2014-06-10T20:43:28.533
null
null
386
null
296
2
null
128
44
null
HDP is an extension of LDA, designed to address the case where the number of mixture components (the number of "topics" in document-modeling terms) is not known a priori. So that's the reason why there's a difference. Using LDA for document modeling, one treats each "topic" as a distribution of words in some known voc...
null
CC BY-SA 3.0
null
2014-06-10T21:50:51.347
2014-06-10T21:50:51.347
null
null
14
null
297
2
null
130
7
null
As in @damienfrancois' answer, feature selection is about selecting a subset of features. So in NLP it would be selecting a set of specific words (the typical approach in NLP is that each word represents a feature, with value equal to the frequency of the word or some other weight based on TF-IDF or similar). Dimensionality reduct...
null
CC BY-SA 3.0
null
2014-06-10T22:26:53.623
2015-10-20T03:28:37.247
2015-10-20T03:28:37.247
381
418
null
298
2
null
289
7
null
You already have a Masters in Statistics, which is great! In general, I'd suggest to people to take as much statistics as they can, especially Bayesian Data Analysis. Depending on what you want to do with your PhD, you would benefit from foundational courses in the discipline(s) in your application area. You already ...
null
CC BY-SA 3.0
null
2014-06-10T22:29:52.873
2014-06-10T22:29:52.873
null
null
609
null
300
2
null
103
4
null
Topological Data Analysis is a method explicitly designed for the setting you describe. Rather than a global distance metric, it relies only on a local metric of proximity or neighborhood. See: [Topology and data](http://www.ams.org/bull/2009-46-02/S0273-0979-09-01249-X/S0273-0979-09-01249-X.pdf) and [Extracting insigh...
null
CC BY-SA 3.0
null
2014-06-10T23:20:16.670
2014-06-10T23:20:16.670
null
null
609
null
301
2
null
52
11
null
One reason that data cleaning is rarely fully automated is that there is so much judgment required to define what "clean" means given your particular problem, methods, and goals. It may be as simple as imputing values for any missing data, or it might be as complex as diagnosing data entry errors or data transformation...
null
CC BY-SA 3.0
null
2014-06-11T00:32:54.887
2014-06-11T00:32:54.887
null
null
609
null
302
2
null
280
3
null
I've seen a few similar systems over the years. I remember a company called ClickTrax which if I'm not mistaken got bought by Google and some of their features are now part of Google Analytics. Their purpose was marketing, but the same concept can be applied to user experience analytics. The beauty of their system wa...
null
CC BY-SA 3.0
null
2014-06-11T00:44:20.517
2014-06-11T00:44:20.517
null
null
434
null
303
2
null
224
3
null
This isn't my area of specialty and I'm not familiar with Moses, but I found this after some searching. I think you are looking for GIZA++. You'll see GIZA++ listed in the "Training" section (left menu) on the Moses home page, as the second step. GIZA++ is briefly described in tutorial fashion [here](https://stackov...
null
CC BY-SA 3.0
null
2014-06-11T01:09:06.100
2014-06-11T01:09:06.100
2017-05-23T12:38:53.587
-1
609
null
305
1
309
null
12
3082
There is plenty of hype surrounding Hadoop and its eco-system. However, in practice, where many data sets are in the terabyte range, is it not more reasonable to use [Amazon RedShift](http://aws.amazon.com/redshift/) for querying large data sets, rather than spending time and effort building a Hadoop cluster? Also, h...
Does Amazon RedShift replace Hadoop for ~1XTB data?
CC BY-SA 3.0
null
2014-06-11T04:24:04.183
2015-01-28T18:42:06.763
2014-06-11T15:02:46.890
434
534
[ "apache-hadoop", "map-reduce", "aws" ]
306
2
null
305
3
null
Personally, I don't think it's all that difficult to set up a hadoop cluster, but I know that it is sometimes painful when you are getting started. HDFS size limitations well exceed a TB (or did you mean exabyte?). If I'm not mistaken it scales to yottabytes or some other measurement that I don't even know the word fo...
null
CC BY-SA 3.0
null
2014-06-11T05:17:12.253
2014-06-11T05:17:12.253
null
null
434
null
307
1
null
null
14
3467
I have read a lot of blogs/articles on how different types of industries are using Big Data analytics. But most of these articles fail to mention: - What kind of data these companies used - What the size of the data was - What kind of tools and technologies they used to process the data - What problem they were facing a...
Big data case study or use case example
CC BY-SA 3.0
null
2014-06-11T06:07:45.767
2020-08-16T16:54:32.553
2016-08-17T10:41:41.383
3151
496
[ "data-mining", "bigdata", "usecase" ]
308
2
null
307
14
null
News outlets tend to use "Big Data" pretty loosely. Vendors usually provide case studies surrounding their specific products. There aren't a lot out there for open source implementations, but they do get mentioned. For instance, Apache isn't going to spend a lot of time building a case study on hadoop, but vendors l...
null
CC BY-SA 3.0
null
2014-06-11T06:49:04.070
2014-06-11T06:54:42.593
2014-06-11T06:54:42.593
434
434
null
309
2
null
305
12
null
tl;dr: They differ markedly in many aspects, and I don't think Redshift will replace Hadoop. - Function: You can't run anything other than SQL on Redshift. Perhaps most importantly, you can't run any type of custom functions on Redshift. In Hadoop you can, using many languages (Java, Python, Ruby... you name it). For exa...
null
CC BY-SA 3.0
null
2014-06-11T06:51:19.143
2014-06-11T09:07:33.570
2014-06-11T09:07:33.570
638
638
null
310
1
null
null
17
2865
I'm working on improving an existing supervised classifier for classifying protein sequences as belonging to a specific class (Neuropeptide hormone precursors) or not. There are about 1,150 known "positives", against a background of about 13 million protein sequences ("Unknown/poorly annotated background"), or abou...
One-Class discriminatory classification with imbalanced, heterogenous Negative background?
CC BY-SA 3.0
null
2014-06-11T10:11:59.397
2020-08-16T13:00:45.530
null
null
555
[ "machine-learning", "data-mining", "python", "classification" ]
311
2
null
310
5
null
The way I would attack the problem, in general, is to leverage statistical analysis like Principal Component Analysis or Ordinary Least Squares to help determine what attributes within these protein sequences are best suited to classify proteins as Neuropeptide hormone precursors. In order to do that, you'll have to co...
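As a much simpler stand-in for the PCA/OLS idea above, a per-feature correlation screen can hint at which attributes separate the classes. The feature names and values below are invented purely for illustration:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation between a feature column and the labels."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy data: feature_a tracks the label, feature_b is noise.
labels    = [0, 0, 0, 1, 1, 1]
feature_a = [0.1, 0.2, 0.15, 0.9, 0.85, 0.95]
feature_b = [0.5, 0.4, 0.6, 0.5, 0.6, 0.4]
print(round(pearson(feature_a, labels), 2))   # 0.99 -- strongly predictive
print(abs(pearson(feature_b, labels)) < 0.3)  # True -- weak/noisy
```

A real screen on protein data would of course use many more examples and a multivariate method rather than one correlation per feature.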
null
CC BY-SA 4.0
null
2014-06-11T11:24:19.963
2020-08-16T13:00:45.530
2020-08-16T13:00:45.530
50406
434
null
312
2
null
215
5
null
Therriault, really happy to hear you are using Vertica! Full disclosure, I am the chief data scientist there :) . The workflow you describe is exactly what I encounter quite frequently and I am a true believer in preprocessing those very large datasets in the database prior to any pyODBC and pandas work. I'd suggest cr...
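The database-first workflow described here can be sketched with the standard library's sqlite3 as a stand-in for Vertica (the table and column names are invented): do the heavy aggregation in SQL, then pull only the small result into Python.

```python
import sqlite3

# In-memory database stands in for a large analytics database like Vertica.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(1, 10.0), (1, 5.0), (2, 7.5), (2, 2.5), (3, 1.0)],
)

# Aggregation happens in the database; Python only sees the small summary.
rows = conn.execute(
    "SELECT user_id, SUM(amount) AS total FROM events "
    "GROUP BY user_id ORDER BY user_id"
).fetchall()
print(rows)  # [(1, 15.0), (2, 10.0), (3, 1.0)]
```

The same pattern applies with pyODBC against a real warehouse: push the GROUP BY down, and hand pandas a few thousand summary rows instead of millions of raw ones.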
null
CC BY-SA 3.0
null
2014-06-11T11:32:13.433
2014-06-11T11:32:13.433
null
null
655
null
313
1
null
null
29
3323
What are the books about the science and mathematics behind data science? It feels like so many "data science" books are programming tutorials and don't touch things like data generating processes and statistical inference. I can already code, what I am weak on is the math/stats/theory behind what I am doing. If I am r...
Books about the "Science" in Data Science?
CC BY-SA 3.0
null
2014-06-11T13:28:35.980
2016-02-21T04:02:38.847
2016-02-21T04:02:38.847
11097
663
[ "statistics", "reference-request" ]
314
2
null
313
14
null
If I could only recommend one to you, it would be: [The Elements of Statistical Learning](http://rads.stackoverflow.com/amzn/click/0387848576) by Hastie, Tibshirani and Friedman. It provides the math/statistics behind a lot of commonly used techniques in data science. For Bayesian techniques, [Bayesian D...
null
CC BY-SA 3.0
null
2014-06-11T13:49:08.970
2014-06-11T13:49:08.970
null
null
178
null
316
2
null
59
10
null
As Konstantin has pointed out, R performs all its computation in the system's memory, i.e. RAM. Hence, RAM capacity is a very important constraint for computation-intensive operations in R. To overcome this constraint, data is these days stored in HDFS systems, where data isn't loaded into memory and the program is run ins...
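One way to see why the in-memory constraint matters (a sketch of my own, not something the answer prescribes) is chunked aggregation, where the full data set never has to sit in memory at once:

```python
def chunked_sum(values, chunk_size=3):
    """Aggregate a stream chunk by chunk so the whole data set
    never has to fit in memory at once."""
    total = 0.0
    chunk = []
    for v in values:
        chunk.append(v)
        if len(chunk) == chunk_size:
            total += sum(chunk)
            chunk = []
    total += sum(chunk)  # leftover partial chunk
    return total

# A generator stands in for a file or HDFS stream too big for RAM.
stream = (float(i) for i in range(1, 101))
print(chunked_sum(stream))  # 5050.0
```

This is the same idea, in miniature, behind systems that move the program to the data rather than loading the data into one process's memory.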
null
CC BY-SA 3.0
null
2014-06-11T16:25:34.747
2014-06-11T16:25:34.747
null
null
514
null
317
2
null
266
2
null
If you already know RStudio, then the caret package is a good place to start. Here are some tutorials: - https://class.coursera.org/predmachlearn-002 - http://caret.r-forge.r-project.org/index.html With R and caret you can easily load and split data sets, perform feature reduction and principal component analysis, and trai...
null
CC BY-SA 3.0
null
2014-06-11T16:56:59.660
2014-06-11T16:56:59.660
null
null
680
null
318
2
null
266
2
null
If you can reproduce the 6x3 grid of graphs from the banner of the [http://scikit-learn.org/](http://scikit-learn.org/) page then you will have learnt some ML and some Python. You didn't mention a language. Python is easy enough to learn very quickly, and scikit-learn has a wide range of algorithms implemented. Then tr...
null
CC BY-SA 3.0
null
2014-06-11T17:24:53.610
2014-06-11T17:24:53.610
null
null
471
null
319
1
null
null
10
2177
I've built an artificial neural network in python using the scipy.optimize.minimize (Conjugate gradient) optimization function. I've implemented gradient checking, double checked everything etc and I'm pretty certain it's working correctly. I've run it a few times and it reaches 'Optimization terminated successfully' h...
Debugging Neural Networks
CC BY-SA 3.0
null
2014-06-11T18:22:36.267
2014-06-12T18:08:07.507
2014-06-12T16:43:59.513
381
691
[ "machine-learning", "python", "neural-network" ]
320
2
null
319
9
null
There are so many ways to go wrong with a neural net that it's going to be difficult to debug. Also, to address your intuition, each additional hidden layer makes learning much harder. With that said, here are some possibilities: - You have added weight decay. Adding more layers adds more weights, which increases your ...
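A minimal gradient check, on a toy quadratic loss rather than the poster's actual network, compares analytic gradients against central finite differences:

```python
def numerical_gradient(f, w, eps=1e-6):
    """Central finite-difference estimate of df/dw_i for each weight."""
    grad = []
    for i in range(len(w)):
        w_plus = list(w);  w_plus[i] += eps
        w_minus = list(w); w_minus[i] -= eps
        grad.append((f(w_plus) - f(w_minus)) / (2 * eps))
    return grad

# Toy loss: f(w) = sum(w_i^2), whose analytic gradient is 2*w_i.
loss = lambda w: sum(x * x for x in w)
w = [0.5, -1.5, 2.0]
analytic = [2 * x for x in w]
numeric = numerical_gradient(loss, w)
max_diff = max(abs(a - n) for a, n in zip(analytic, numeric))
print(max_diff < 1e-6)  # True: analytic and numeric gradients agree
```

For a real network the same check is run weight by weight against the backpropagated gradients; a large discrepancy points at a bug in the backward pass rather than at the optimizer.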
null
CC BY-SA 3.0
null
2014-06-11T20:34:51.873
2014-06-11T20:34:51.873
null
null
574
null
323
1
null
null
5
9516
The setup is simple: binary classification using a simple decision tree, each node of the tree has a single threshold applied on a single feature. In general, building a ROC curve requires moving a decision threshold over different values and computing the effect of that change on the true positive rate and the false p...
How can we calculate AUC for a simple decision tree?
CC BY-SA 3.0
null
2014-06-11T23:52:37.823
2021-02-18T20:33:30.027
null
null
418
[ "machine-learning" ]
324
1
null
null
-2
619
I would like to extract news about a company from online news by using the RODBC package in R. I would then like to use the extracted data for sentiment analysis. I want to accomplish this in such a way that the positive news is assigned a value of +1, the negative news is assigned a value of -1, and the neutral news i...
How can I extract news about a particular company from various websites using RODBC package in R? And perform sentiment analysis on the data?
CC BY-SA 4.0
null
2014-06-12T03:11:00.033
2019-06-11T14:43:33.343
2019-06-11T14:43:33.343
29169
714
[ "r", "text-mining", "sentiment-analysis" ]
325
2
null
324
2
null
This isn't a question with a simple answer, so all I can really do is point you in the right direction. The RODBC package isn't meant to extract data online, it's meant to pull data from a database. If you will be leveraging that package, it will be after you pull data down from the web. Jeffrey Bean put together a [s...
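Once the text is pulled down, the +1/-1/0 scoring the question asks for can be sketched with a toy word-list approach. The lexicons below are invented for illustration; a real analysis would use a proper sentiment resource:

```python
# Toy sentiment lexicons -- illustrative only, not a real sentiment resource.
POSITIVE = {"profit", "growth", "record", "beat", "strong"}
NEGATIVE = {"loss", "decline", "lawsuit", "miss", "weak"}

def sentiment(text):
    """Return +1 for net-positive text, -1 for net-negative, 0 for neutral."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)  # sign of the score

print(sentiment("Record profit and strong growth this quarter"))  # 1
print(sentiment("Company reports loss amid lawsuit"))             # -1
print(sentiment("Shares unchanged in quiet trading"))             # 0
```

The equivalent in R would typically go through a package like tm plus a published sentiment lexicon, after the scraping step rather than through RODBC.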
null
CC BY-SA 3.0
null
2014-06-12T05:17:46.453
2014-06-12T05:17:46.453
null
null
434
null
326
1
null
null
127
119318
I'm just starting to develop a [machine learning](https://en.wikipedia.org/wiki/Machine_learning) application for academic purposes. I'm currently using R and training myself in it. However, in a lot of places, I have seen people using Python. What are people using in academia and industry, and what is the recommendati...
Python vs R for machine learning
CC BY-SA 4.0
null
2014-06-12T06:04:48.243
2022-07-11T21:50:49.357
2019-06-10T15:56:58.013
29169
721
[ "machine-learning", "r", "python" ]
327
2
null
326
26
null
There is nothing like "Python is better" or "R is much better than X". The only fact I know is that in industry a lot of people stick to Python because that is what they learned at university. The Python community is really active and has a few great frameworks for ML, data mining, etc. But to be honest, ...
null
CC BY-SA 3.0
null
2014-06-12T07:05:05.653
2014-06-12T07:05:05.653
null
null
115
null
328
2
null
326
7
null
There is no "better" language. I have tried both of them and I am comfortable with Python, so I work with Python only. Though I am still learning, I haven't encountered any roadblock with Python so far. The good thing about Python is that the community is very good and you can get a lot of help on the Internet easily. O...
null
CC BY-SA 3.0
null
2014-06-12T08:30:49.757
2014-06-12T08:30:49.757
null
null
456
null
330
2
null
235
4
null
You can also check out the seaborn package for statistical charts.
null
CC BY-SA 3.0
null
2014-06-12T09:57:39.890
2014-06-12T09:57:39.890
null
null
729
null
331
2
null
326
18
null
Some additional thoughts. The programming language 'per se' is only a tool. All languages were designed to make some type of constructs more easy to build than others. And the knowledge and mastery of a programming language is more important and effective than the features of that language compared to others. As far...
null
CC BY-SA 4.0
null
2014-06-12T10:09:23.887
2018-11-14T18:58:00.230
2018-11-14T18:58:00.230
62609
108
null
332
2
null
323
3
null
In order to build the ROC curve and compute the AUC (area under the curve), you need a binary classifier that provides, at classification time, a distribution (or at least a score) rather than just the classification label. To give you an example, suppose you have a binary classification model, with classes $c1$ and $c2$. For a given...
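To make the score requirement concrete, here is a minimal pure-Python AUC using the pairwise-ranking (Mann-Whitney) view, on made-up labels and scores:

```python
def auc(labels, scores):
    """AUC as the probability that a random positive outranks a random
    negative (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 0, 1, 0]
print(auc(labels, [0.9, 0.2, 0.8, 0.3]))  # 1.0 (scores separate the classes perfectly)
print(auc(labels, [0.9, 0.8, 0.2, 0.3]))  # 0.5 (no better than chance)
```

Note that if the classifier only emits hard labels, every example collapses onto one of two scores and the "curve" degenerates to a single operating point, which is exactly the problem the question raises for a plain decision tree.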
null
CC BY-SA 4.0
null
2014-06-12T10:27:14.480
2021-02-18T20:33:30.027
2021-02-18T20:33:30.027
29169
108
null
334
1
338
null
37
19856
I've now seen two data science certification programs - the [John Hopkins one available at Coursera](https://www.coursera.org/specialization/jhudatascience/1?utm_medium=listingPage) and the [Cloudera one](http://cloudera.com/content/cloudera/en/training/certification/ccp-ds.html). I'm sure there are others out there. T...
What do you think of Data Science certifications?
CC BY-SA 3.0
null
2014-06-12T10:52:03.410
2019-07-11T07:02:42.803
2014-06-13T11:35:51.697
434
434
[ "education" ]
335
2
null
334
12
null
The certification programs you mentioned are really entry-level courses. Personally, I think these certificates only show a person's persistence, and they can only be useful to those applying for internships, not real data science jobs.
null
CC BY-SA 3.0
null
2014-06-12T11:11:35.600
2014-06-12T11:11:35.600
null
null
478
null
336
2
null
326
13
null
There isn't a silver-bullet language that can solve each and every data-related problem. The language choice depends on the context of the problem and the size of the data, and if you are working at a workplace you have to stick to what they use. Personally, I use R more often than Python due to its visualization librari...
null
CC BY-SA 3.0
null
2014-06-12T11:30:20.943
2014-06-12T11:30:20.943
null
null
733
null
337
2
null
326
15
null
I would add to what others have said so far. There is no single answer that one language is better than the other. Having said that, R has a better community for data exploration and learning, and it has extensive visualization capabilities. Python, on the other hand, has become better at data handling since the introduction of ...
null
CC BY-SA 3.0
null
2014-06-12T11:54:59.140
2014-06-12T11:54:59.140
null
null
735
null
338
2
null
334
13
null
I did the first 2 courses and I'm planning to do all the others too. If you don't know R, it's a really good program. There are assignments and quizzes every week. Many people find some of the courses very difficult. You are going to have a hard time if you don't have any programming experience (even if they say it's not requi...
null
CC BY-SA 3.0
null
2014-06-12T12:13:26.940
2014-06-12T12:13:26.940
null
null
737
null
339
2
null
326
110
null
Some really important differences to consider when choosing between R and Python: - Machine learning has 2 phases: model building and the prediction phase. Typically, model building is performed as a batch process and predictions are done in real time. The model building process is a compute-intensive process wh...
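The batch-build / realtime-predict split can be sketched with the standard library. A toy threshold "model" stands in for a real one, and pickle is just one way to hand the model from the batch side to the serving side:

```python
import pickle
import statistics

# --- Batch phase: fit a toy model (threshold at the training mean) ---
train = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
model = {"threshold": statistics.mean(train)}  # 6.5
blob = pickle.dumps(model)  # persist for the serving side

# --- Realtime phase: load once, then predict per request ---
served = pickle.loads(blob)
predict = lambda x: int(x > served["threshold"])
print(predict(2.0), predict(11.0))  # 0 1
```

In practice the batch side might be R or Python on a big machine, while the serving side only needs the small serialized model, which is why the two phases can reasonably live in different languages.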
null
CC BY-SA 3.0
null
2014-06-12T12:59:00.663
2017-01-17T14:14:47.423
2017-01-17T14:14:47.423
28021
514
null
340
2
null
235
5
null
There's plenty. If you've ever used ggplot2 in R and want to do that in [Python](https://pypi.python.org/pypi/ggplot/0.5.9). If you want to use a similar visualisation grammar (Vega) and go via [D3](https://github.com/wrobstory/vincent). Or if you want the full-on 3d: [shizzle](http://docs.enthought.com/mayavi/mayavi/)...
null
CC BY-SA 4.0
null
2014-06-12T13:38:10.877
2019-04-25T18:59:20.977
2019-04-25T18:59:20.977
29575
471
null